| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
10,694,451 | https://en.wikipedia.org/wiki/DSS%20%28NMR%20standard%29 | Sodium trimethylsilylpropanesulfonate (DSS) is the organosilicon compound with the formula (CH3)3SiCH2CH2CH2SO3−Na+. It is the sodium salt of trimethylsilylpropanesulfonic acid. A white, water-soluble solid, it is used as a chemical shift standard for proton NMR spectroscopy of aqueous solutions. The chemical shift, specifically the signal for the trimethylsilyl group, is relatively insensitive to pH.
The proton spectrum of DSS also exhibits resonances at 2.91 ppm (m), 1.75 ppm (m), and 0.63 ppm (m) at an intensity of 22% of the reference resonance at 0 ppm.
Alternatives
Sodium trimethylsilyl propionate (TSP) is a related compound used as an NMR standard. It relies on a carboxylate group, instead of the sulfonate found in DSS, to confer water solubility. Because the parent carboxylic acid is weak, the chemical shift of TSP is more sensitive to changes in pH.
4,4-Dimethyl-4-silapentane-1-ammonium trifluoroacetate (DSA) has also been proposed as an alternative, to overcome certain drawbacks of DSS.
References
Sulfonic acids
Trimethylsilyl compounds
Organic sodium salts
Nuclear magnetic resonance | DSS (NMR standard) | [
"Physics",
"Chemistry"
] | 298 | [
"Sulfonic acids",
"Nuclear magnetic resonance",
"Functional groups",
"Trimethylsilyl compounds",
"Salts",
"Organic sodium salts",
"Nuclear physics"
] |
10,694,483 | https://en.wikipedia.org/wiki/Joint%20%28building%29 | A building joint is a junction where building elements meet without applying a static load from one element to another. When one or more of these vertical or horizontal elements that meet are required by the local building code to have a fire-resistance rating, the resulting opening that makes up the joint must be firestopped in order to restore the required compartmentalisation.
Qualification requirements
Such joints are often subject to movement. Firestops must demonstrate the ability to withstand operational movement prior to fire testing. Firestops for such building joints can be qualified to UL 2079, Tests for Fire Resistance of Building Joint Systems.
The joint design must consider the anticipated operational movement of each joint. Timing is also important, as freshly poured concrete shrinks, particularly during the first few months of a new building's life, potentially changing joint sizes.
Head-of-Wall (HOW)
Where vertical fire-resistance rated wall assemblies meet the underside of the floor slab above, a movement joint results. This joint can be subject to compression as the freshly placed concrete throughout a new building cures and shrinks, and it must be firestopped in a flexible manner.
See also
Concrete
Penetration (firestop)
Sealant
Firestop
Curtain wall
Passive fire protection
Active fire protection
Mineral wool
Packing (firestopping)
Fire sprinkler
Articulation
References
External links
UL2079 Scope: Tests for Fire Resistance of Building Joint Systems
UL treatise on building joints
UL HW-S-0055: A common condition
Building engineering
Passive fire protection
Firestops | Joint (building) | [
"Engineering"
] | 308 | [
"Building engineering",
"Civil engineering",
"Architecture"
] |
10,695,013 | https://en.wikipedia.org/wiki/Geographic%20Messaging%20Service | Geographic Messaging Service, or GMS for short, is a form of messaging for cell phones. It is a message associated with a geographic region that is delivered to a subscriber when they are within that region. This form of messaging extends traditional Short Messaging Service (SMS) and Multimedia Messaging Service (MMS) by allowing subscribers to send and receive messages that are tied to geographic regions. Like SMS and MMS, GMS can be the vehicle for peer-to-peer communications, as well as for other content and marketing services. For example, a tourist organization can leave tidbits about interesting locations in New York City and have them delivered to visitors when they are near those locations.
The technology underlying GMS is called geofencing—detecting when a cellphone crosses a geographic fence. The term GMS was coined by researchers at Bell Laboratories.
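To make the geofencing idea concrete, the following is a minimal sketch (not drawn from GMS itself; the fence centre, radius and coordinates are invented for illustration) of the basic check a delivery system would perform: deciding whether a subscriber's reported position lies inside a circular geographic fence.

```python
# Hypothetical illustration of a geofence membership test using the haversine
# great-circle distance; all coordinates and the radius are made-up examples.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius of 6,371 km

def inside_fence(position, fence_centre, radius_m):
    return haversine_m(*position, *fence_centre) <= radius_m

# Example: is a subscriber within a 250 m fence around a point of interest?
poi = (40.758, -73.9855)
print(inside_fence((40.7575, -73.9860), poi, 250.0))   # True
```

A real deployment would also track fence entry and exit events over time rather than testing single positions, but the core test is the same.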
Mobile telecommunications standards | Geographic Messaging Service | [
"Technology"
] | 176 | [
"Mobile telecommunications",
"Mobile telecommunications standards"
] |
10,696,110 | https://en.wikipedia.org/wiki/Supralateral%20arc | A supralateral arc is a comparatively rare member of the halo family which, in its complete form, appears as a large, faintly rainbow-colored band arcing above the sun and appearing to encircle it, at about twice the distance of the familiar 22° halo. In reality, however, the supralateral arc does not form a circle and never reaches below the sun. When present, the supralateral arc touches the (much more common) circumzenithal arc from below. As in all colored halos, the arc has its red side directed towards the sun and its blue part away from it.
Formation
Supralateral arcs form when sunlight enters horizontally oriented, rod-shaped hexagonal ice crystals through a hexagonal base and exits through one of the prism sides. Supralateral arcs occur about once a year.
Confusion with the 46° halo
Due to its apparent circular shape and nearly identical location in the sky, the supralateral arc is often mistaken for the 46° halo, which does form a complete circle around the sun at approximately the same distance, but which is much rarer and fainter. Distinguishing between the two phenomena can be difficult, requiring the combination of several subtle indicators for proper identification.
In contrast to the static 46° halo, the shape of a supralateral arc varies with the elevation of the sun. Before the sun reaches 15°, the bases of the arc touch the lateral (oriented sidewise) sides of the 46° halo. As the sun rises from 15° to 27°, the supralateral arc almost overlaps the upper half of the 46° halo, which is why many reported observations of the latter most likely are observations of the former. As the sun goes from 27° to 32°, the apex of the arc touches the circumzenithal arc centered on zenith (as does the 46° halo when the sun is located between 15° and 27°). In addition, the supralateral arc is always located above the parhelic circle (the arc located below it is the infralateral arc), and is never perfectly circular.
Arguably the best way of distinguishing the halo from the arc is to carefully study the difference in colour and brightness. The 46° halo is six times fainter than the 22° halo and generally white, with a possible red inner edge. The supralateral arc, in contrast, shows clear blue and green bands and can even be confused with a rainbow.
Gallery
See also
Infralateral arc
Parry arc
Lowitz arc
References
External links
Atmospheric Optics - Supralateral & infralateral arcs - including HaloSim computer simulations and crystal illustrations.
Paraselene.de - Gallery of images from March 2002
Paraselene.de - Gallery of images from December 2007
Atmospheric optical phenomena | Supralateral arc | [
"Physics"
] | 592 | [
"Optical phenomena",
"Physical phenomena",
"Atmospheric optical phenomena",
"Earth phenomena"
] |
10,696,667 | https://en.wikipedia.org/wiki/W%20Cephei | W Cephei is a spectroscopic binary and variable star located in the constellation Cepheus. It is thought to be a member of the Cep OB1 stellar association at about 8,000 light years. The supergiant primary star is one of the largest known stars as well as one of the most luminous red supergiants.
Discovery
W Cephei was catalogued as BD+57°2568 in the Bonner Durchmusterung published in 1903, and HD 214369 in the Henry Draper Catalogue. It was discovered to be a variable star by T. H. E. C. Espin, in 1885. It was described in 1896 as a red star varying from magnitude 7.3 to 8.3.
In 1925, W Cep was included in a listing of Be stars. It was recognised as a cool star with spectral type Mep. It was classified as K0ep Ia from a 1949 spectrum, but was also recognised to have a small hot companion, as well as an unusual infrared excess. Ultraviolet spectra allowed absorption lines from the companion to be studied and it was given a spectral type of B0-1.
System
The W Cephei system contains a luminous red supergiant star with a non-supergiant early B companion. The star has unusual emission lines including both permitted and forbidden FeII, produced by a circumstellar envelope containing dust and ionised gas. The two components have been resolved using speckle interferometry. An orbital period of 2,090 days has been proposed.
Variability
W Cephei varies in brightness from 7th to 9th magnitude. The General Catalogue of Variable Stars lists it as a semiregular variable with a period of 370 days, but later attempts to find a period have shown only random variations. It has also been proposed that eclipses occur.
References
External links
AAVSO chart of comparison stars for W Cephei
British Astronomical Association VSS light curves
Cephei, W
Cepheus (constellation)
Spectroscopic binaries
Emission-line stars
K-type supergiants
M-type supergiants
B-type main-sequence stars
Semiregular variable stars
BD+57 2568
214369
111592 | W Cephei | [
"Astronomy"
] | 458 | [
"Constellations",
"Cepheus (constellation)"
] |
10,696,700 | https://en.wikipedia.org/wiki/Infralateral%20arc | An infralateral arc (or lower lateral tangent arc) is a rare halo, an optical phenomenon appearing similar to a rainbow under a white parhelic circle. Together with the supralateral arc they are always located outside the seldom observable 46° halo, but in contrast to supralateral arcs, infralateral arcs are always located below the parhelic circle.
The shape of an infralateral arc varies with the elevation of the Sun. From sunrise until the observed Sun reaches about 50° above Earth's horizon, two infralateral arcs are located on either side (i.e. laterally) of the 46° halo, their convex apexes lying tangential to it. As the observed Sun rises above 68°, the two arcs unite into a single concave arc tangent to the 46° halo vertically under the Sun.
Infralateral arcs form when sunlight enters horizontally oriented, rod-shaped hexagonal ice crystals through a hexagonal base and exits through one of the prism sides. Infralateral arcs occur about once a year. They are often observed together with circumscribed halos and upper tangent arcs.
See also
Circumzenithal arc
Tangent arc
Parry arc
References
External links
Atmospheric Optics - Supralateral & infralateral arcs - including HaloSim computer simulations and crystal illustrations.
Atmospheric optical phenomena
"Physics"
] | 307 | [
"Optical phenomena",
"Physical phenomena",
"Atmospheric optical phenomena",
"Earth phenomena"
] |
10,697,483 | https://en.wikipedia.org/wiki/Blast%20wall | A blast wall is a barrier designed to protect vulnerable buildings or other structures and the people inside them from the effects of a nearby explosion, whether caused by industrial accident, military action, or terrorism.
Effectiveness
Research by Cranfield University Defence Academy, building on earlier work, has shown that blast walls have the following properties:
A non-deforming upright wall will significantly reduce the peak blast overpressure and impulse in an area between 4 and 6 wall heights behind it
Similar protection occurs at greater distances behind the wall, but to a diminishing extent
Blast walls perform best if the explosion is relatively close to the front of the wall
"Canopied" walls (with a top section overhanging the front face) show some improved blast protection over plane walls
A 90-degree canopy is more effective than a 45-degree one
Walls containing sand or water work well, and cause little damage if they fail
A wall has to stay intact long enough to "interact" with the blast in order to have any effect
Types
Permanent blast walls can be made from pre-cast reinforced concrete or steel sheeting. Various types of moveable blast wall have been manufactured. These include the Bremer wall concrete barriers used in Iraq and Afghanistan by the US Armed Forces, and Concertainers, wire-mesh containers filled with sand or soil, which are used by the British Armed Forces.
See also
Blast shelter
Shockwave
Revetment (aircraft)
References
External links
Fortification (architectural elements)
Explosives engineering | Blast wall | [
"Engineering"
] | 299 | [
"Explosives engineering"
] |
10,697,967 | https://en.wikipedia.org/wiki/Triethyl%20phosphonoacetate | Triethyl phosphonoacetate is a reagent for organic synthesis used in the Horner-Wadsworth-Emmons reaction (HWE) or the Horner-Emmons modification.
Triethyl phosphonoacetate has an acidic proton, on the carbon between the phosphonate and ester groups, that can easily be abstracted even by a relatively weak base; for example, it can be added dropwise to a sodium methoxide solution to prepare the phosphonate anion. When this anion is used in an HWE reaction with a carbonyl compound, the resulting alkene is usually the E isomer and is formed with excellent stereoselectivity.
References
Phosphonate esters
Reagents for organic chemistry
Ethyl esters | Triethyl phosphonoacetate | [
"Chemistry"
] | 144 | [
"Reagents for organic chemistry"
] |
10,698,414 | https://en.wikipedia.org/wiki/Operator%20system | Given a unital C*-algebra A, a *-closed subspace S containing 1 is called an operator system. One can associate to each subspace M of a unital C*-algebra an operator system via S(M) := M + M* + C·1, the span of M, the adjoints of its elements, and the scalar multiples of the unit.
The appropriate morphisms between operator systems are completely positive maps.
By a theorem of Choi and Effros, operator systems can be characterized as *-vector spaces equipped with an Archimedean matrix order.
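For concreteness, the matrix order mentioned above can be written out as follows; this is the standard formulation from the operator-algebra literature, stated here for illustration rather than quoted from the article.

```latex
% For a concrete operator system S inside a unital C*-algebra A, the n x n
% matrices over S are ordered by the positive cone they inherit from M_n(A):
\[
  C_n \;:=\; M_n(S) \,\cap\, M_n(\mathcal{A})^{+}, \qquad n = 1, 2, \ldots
\]
% The Choi--Effros theorem asserts that an abstract *-vector space carrying an
% Archimedean matrix order of this kind, together with a matrix order unit,
% is completely order isomorphic to such a concrete operator system.
```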
See also
Operator space
References
Operator theory
Operator algebras | Operator system | [
"Mathematics"
] | 99 | [
"Mathematical analysis",
"Mathematical analysis stubs"
] |
10,698,707 | https://en.wikipedia.org/wiki/Gleaning%20%28birds%29 | Gleaning is a feeding strategy by birds and bats in which they catch invertebrate prey, mainly arthropods, by plucking them from foliage or the ground, from crevices such as rock faces and under the eaves of houses, or even, as in the case of ticks and lice, from living animals. This behavior is contrasted with hawking insects from the air or chasing after moving insects such as ants. Gleaning, in birds, does not refer to foraging for seeds or fruit.
Gleaning is a common feeding strategy for some groups of birds, including nuthatches, tits (including chickadees), wrens, woodcreepers, treecreepers, Old World flycatchers, tyrant flycatchers, babblers, Old World warblers, New World warblers, vireos and some hummingbirds and cuckoos. Many birds make use of multiple feeding strategies, depending on the availability of different sources of food and opportunities of the moment.
Techniques and adaptations
Foliage gleaning, the strategy of gleaning over the leaves and branches of trees and shrubs, can involve a variety of styles and maneuvers. Some birds, such as the common chiffchaff of Eurasia and the Wilson's warbler of North America, feed actively and appear energetic. Some will even hover in the air near a leaf or twig while gleaning from it; this behavior is called "hover-gleaning". Other birds are more methodical in their approach to gleaning, even seeming lethargic as they perch upon and deliberately pick over foliage. This behavior is characteristic of the bay-breasted warbler and many vireos. Another tactic is to hang upside-down from the tips of branches to glean the undersides of leaves. Tits such as the familiar black-capped chickadee are often observed feeding in this manner. Some birds, like the ruby-crowned kinglet and red-eyed vireo of North America use a combination of these tactics.
Gleaning birds are typically small with compact bodies and have small, sharply pointed bills. These features are even seen in gleaning birds that are not closely related. For example, in flycatchers of the family Tyrannidae, in which some member species are more adapted for hawking insects on the wing and others for gleaning, the gleaners have bills that resemble those of tits and warblers, unlike their larger-billed relatives. Also, some members of the woodpecker family, particularly piculets such as the rufous piculet of Southeast Asia, are similarly adapted for gleaning, with small, compact bodies and sharp bills, rather than the long, supportive tails and wedge-shaped bills more typical of woodpeckers. Birds such as the aforementioned piculet are specialized for gleaning the bark of trees, as are nuthatches, woodcreepers, and treecreepers. Most bark-gleaners work their way up tree trunks or along branches, though nuthatches are well known as the birds that can go the opposite direction, facing down and working their way down the trunk, as well. This requires strong legs and feet on the part of the nuthatch and piculet, while birds that face upwards tend to have stiff tail feathers to prop them up.
Birds often specialize in a particular niche, such as a particular stratum of forest or type of vegetation. In South and Southeast Asia, for example, the mountain tailorbird is often found gleaning in thickets and stands of bamboo, Abbott's babbler gleans lower-storey foliage in lowland forest, the rufous-chested flycatcher and brown fulvetta are birds of the mid-storey forest, the yellow-breasted warbler gleans in the mid- to upper-storey, and the greater green leafbird specializes in the upper-storey forest. The Javan white-eye is a bird of coastal scrub and mangroves, while the related black-capped white-eye is restricted to montane forest.
Further specialization within a habitat is associated with behaviors and morphological adaptations (physical traits of size and shape). Tiny birds are lightweight enough to hang onto the ends of twigs and pluck small prey; the goldcrest of Europe and its counterpart the golden-crowned kinglet of North America exhibit this feeding style. The related common firecrest is very similar in size and shape, but slightly bulkier, and has less of a tendency to glean along twigs and more of a habit of flying from perch to perch. Having a very small bill seems to be good for taking tiny prey from the surfaces of leaves, and small-billed birds such as the blue tit forage in broad-leafed woodlands. The long-billed gnatwren and speckled spinetail of Central and South America, and the ashy tailorbird and striped tit-babbler of South Asia, show a preference for gleaning in tangles of vines. The ash-browed spinetail of South America specializes in gleaning among epiphytes on moss-covered tree branches. Many hummingbirds take small insects from flowers while probing for nectar, and some species glean actively among bark and leaves. The Puerto Rican emerald is one such hummingbird. Found only on the island of Puerto Rico, the female subsists on insects and spiders, while the male has a typical hummingbird diet of nectar. Hummingbirds and other gleaners are also sometimes attracted to the sap wells created by sapsuckers. Sapsuckers, which are in the woodpecker family, drill small holes in living tree branches to get the sap flowing. The sap and the insects it attracts are then consumed, and rufous hummingbirds have been observed to follow the movements of sapsuckers and take advantage of this food source. Clusters of dead leaves also often harbor invertebrate prey, and the Bewick's wren and worm-eating warbler of North America have long bills well-suited for probing them, as do certain Asian babblers, such as the rusty-cheeked scimitar-babbler. In Central and South America, foliage-gleaners such as the red-faced spinetail and buff-throated foliage-gleaner are also examples of birds that glean clusters of dead leaves.
Crevice-gleaning is a niche particular to dry and rocky habitats. Adaptations for crevice-gleaning are similar to that of bark-gleaning. Just as the Bewick's wren, as mentioned in the preceding paragraph, has a long bill suited for poking around in the small places of woods and gardens, another North American wren, the canyon wren, has an even longer bill, which allows it to probe crevices in rocky cliffs. It also has skeletal adaptations to aid it in reaching deep into small spaces. These same traits are useful for gleaning the sides of buildings, as well. Another kind of rocky habitat is found along mountain streams, where birds such as the Louisiana waterthrush of North America and the forktails of Asia pick over stream-side rocks and exposed roots for aquatic insects and other moisture-loving prey.
Other foraging techniques
Foraging for invertebrate prey on the ground often involves gleaning the leaf litter of the forest floor, sometimes flicking, flipping, or scratching through dead leaves. Birds can use their bills to flick or toss dead leaves from the ground to reveal prey residing beneath. The leaftossers of Central and South America and the pittas and laughingthrushes of Asia do this. An example of a bird that employs flipping is the ovenbird, a species of North American wood-warbler. It deliberately turns over leaves on the ground to search for spiders, worms, and such underneath. In other parts of the world, similar leaf-flipping behavior has been observed in unrelated birds, such as the jungle babbler of India. Some birds, such as hummingbirds, will use their wings to create a blast of air to roll leaves over. Other birds rake a foot through the leaf litter, like a chicken, for the same purpose. This has been observed in buttonquails. Some American sparrows, such as the green-tailed towhee, perform a double-scratch by raking both legs simultaneously through the leaf-litter. They then catch prey items dislodged by the disturbance. Ground-foraging birds can be very hard for humans to observe, as they often occupy densely vegetated habitat, as in the case of the Bornean wren-babbler, which specializes in gleaning leaf litter in gullies in the forest of Southeast Asia.
A feeding technique that is somewhere between gleaning and hawking is where a bird flies from a perch and takes prey off foliage; this is called "sally-gleaning". The pygmy tyrants of South America are tiny flycatchers that feed this way. The todies of the Caribbean employ a distinct version of sally-gleaning. These small birds choose a perch within their lush forest and plantation habitats in the Greater Antilles, from which they scan the undersides of leaves above them. Upon spotting an insect or spider, they fly up in an arcing sally, pluck their prey item without stopping, and complete the arcing movement to land on a new perch.
An unusual feeding strategy is that of the oxpeckers of Africa. They perch on living animals and glean parasites from the animals' hides. On furry animals, such as buffalo, giraffe, and donkey, these birds run their bills through the fur of the animal, using a scissors-like motion to extract ticks and lice from near the skin. When they pull the insect out to the end of the fur, they catch it and eat it. (On animals with bare hides, such as rhinoceros and hippopotamus, oxpeckers pick at any open wounds the animals happen to have, consuming blood and pus, and possibly keeping the wounds free of maggots.) Historically, rhinoceros and other large wild mammals have been among the favored hosts, but as the populations of large mammals in the African savanna have declined in modern times, the population and range of both red-billed and yellow-billed oxpecker have also changed, and now the birds will use donkeys and domestic cattle as hosts.
There are other tactics. Dippers forage underwater in fast-moving streams. Common grackles have been observed to follow farmers’ plows to glean the grubs exposed in the fresh soil. Similarly, on the island of Borneo, the Bornean ground-cuckoo will follow wild pigs and sun bears as they turn up soil while foraging in the forest. Brewer's blackbirds are often seen in parking lots, where they pick off dead insects from car grilles. Some hummingbirds are known to take prey items from spiderwebs.
Behavioral implications
Gleaning, like other methods of foraging, is a highly visual activity, and as such has some implications for birds. First, to see requires light, and thus time allotted to gleaning is limited to daytime. Second, while a bird focuses on examining an area for prey items, it must necessarily divert its attention from scanning its surroundings for predators. Birds that glean in tree branches will often join together in a flock, and often with other gleaners in a mixed-species foraging flock. It has been shown that individual birds feeding in flocks are able to spend more time looking for food and less time looking for predators.
On the other hand, it is not a universal trait of gleaning birds to join with other species or even to be gregarious with their own kind. The leafbirds of Asia are foliage-gleaners, but are often found singly or in pairs. Also, where multiple species of gleaning birds forage in the same area, they may show niche segregation; for example, one species may stick to conifers while another species inhabits broadleaf trees, or they may even divide up a habitat, with smaller species feeding among higher, smaller tree branches and larger species staying on lower, larger branches.
References
Bird behavior
Bird feeding
Ornithology | Gleaning (birds) | [
"Biology"
] | 2,545 | [
"Behavior by type of animal",
"Behavior",
"Bird behavior"
] |
10,699,036 | https://en.wikipedia.org/wiki/Recycling%20codes | Recycling codes are used to identify the material from which an item is made, in order to facilitate the recycling process. The presence on an item of a recycling code, a chasing arrows logo, or a resin code is not an automatic indicator that a material is recyclable; it is an explanation of what the item is made of. Codes have been developed for batteries, biomatter/organic material, glass, metals, paper, and plastics. Various countries have adopted different codes. For example, the table below shows the polymer resin (plastic) codes. In the United States there are fewer, because ABS is placed with "others" in group 7.
A number of countries have a finer-grained system with more recycling codes. For example, China's polymer identification system has seven different classifications of plastic, five different symbols for post-consumer paths, and 140 identification codes. The lack of a code system in some countries has encouraged those who fabricate their own plastic products, such as RepRap and other prosumer 3-D printer users, to adopt a voluntary recycling code based on the more comprehensive Chinese system.
Resin identification codes and codes defined by the European Commission
Chinese codes for plastics products
The Standardization Administration of the People's Republic of China (SAC) has defined material codes for different types of plastics in the document GB 16288-2008. The numbers are consistent with RIC up to #6.
Alternative recycling labels
The following recycling label projects are designed with the consumer in mind, whereas SPI or Resin Identification Codes are designed to be recognized by waste sorting facilities. They provide an alternative that reduces confusion: people often assume that any item bearing a resin code is recyclable, but this is not necessarily true. Whether the numbered resins can actually be recycled depends on the capabilities of the facilities in a given community, so they are not all automatically recyclable.
How2Recycle is a project that started in 2008. The label provides information about the packaging material and clearly indicates whether it is recyclable, partially or totally. If it is not recyclable at all, it is shown by a diagonal line going through the recycling label.
OPRL is a not-for-profit organisation that provides simple, consistent 'recycle' & 'refill' labels for retailer & brand packaging in the UK market. The labels clearly state whether the packaging is recyclable or not, helping consumers recycle better, more often.
See also
Resin identification code
Japanese recycling symbols
Waste hierarchy
Waste management
Food safe symbol
Bag It (documentary)
References
External links
Packaging Material Codes Includes lists of material codes in Germany.
Recycling
Environmental standards
Waste management concepts
Recycling codes
Consumer symbols | Recycling codes | [
"Mathematics"
] | 549 | [
"Mathematical objects",
"Numbers",
"Number-related lists"
] |
10,699,094 | https://en.wikipedia.org/wiki/Public%20Transport%20Information%20and%20Priority%20System | The Public Transport Information and Priority System, abbreviated PTIPS, is a computer-based system used in New South Wales, Australia, that brings together information about public transport entities, such as buses. Where applicable, PTIPS can also provide transport vehicles with priority at traffic signals.
PTIPS consists of a number of hardware and software components installed on board buses which communicate wirelessly with a central set of servers. PTIPS also relies on an interface with the Sydney Coordinated Adaptive Traffic System (SCATS), which provides the priority feature, and on bus/route/timetable data provided by bus organisations and government authorities.
PTIPS provides:
Real-time tracking of bus location and status
Traffic light priority for late running buses
Bus/Timetable performance and reliability reports
Real-time Bus arrival information for bus stops
How PTIPS works
PTIPS works by combining, on the one hand, schedule and route path information for buses performing timetabled services (as opposed to, say, charter trips), and on the other hand, live location data transmitted by the buses to PTIPS.
PTIPS receives XML data files from the bus operators, which contain information relating to planned trips (for example, route paths, trips & schedules, bus stops etc.)
Each bus that PTIPS tracks is equipped with a hardware device that records its location via GPS, and transmits it to the central PTIPS servers via the cellular radio communications network. Buses transmit these messages at certain intervals (which are configurable, and which vary depending on what the bus is doing), and also when they pass certain points along their intended route. Apart from GPS location, the transmitted messages also include information about the vehicle and which trip it is doing.
With the above information, PTIPS can compare the location of a bus performing a certain trip, at a certain point in time, with where it should be, based on the planned route and timetable data.
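As a rough illustration of that comparison (hypothetical code only; the data structures and names below are invented and are not the actual PTIPS interfaces), one can interpolate the scheduled time at the bus's reported progress along the route path and subtract it from the current time:

```python
# Hypothetical sketch: estimate how late (positive) or early (negative) a bus is
# running, given timing points along its planned route and its reported progress.
from bisect import bisect_right

def schedule_deviation_s(timing_points, distance_m, now_s):
    """timing_points: list of (scheduled_time_s, cumulative_distance_m) pairs,
    ordered by distance along the route path."""
    dists = [d for _, d in timing_points]
    i = min(bisect_right(dists, distance_m), len(timing_points) - 1)
    if i == 0:
        scheduled_s = timing_points[0][0]
    else:
        (t0, d0), (t1, d1) = timing_points[i - 1], timing_points[i]
        frac = (distance_m - d0) / (d1 - d0) if d1 > d0 else 0.0
        scheduled_s = t0 + frac * (t1 - t0)      # interpolate between timing points
    return now_s - scheduled_s                   # positive => running late

# Example: the bus is 1,500 m along a route it should have covered by 09:00:00,
# and the clock reads 09:02:00 -- it is 120 seconds late.
timetable = [(8 * 3600, 0.0), (9 * 3600, 1500.0), (9 * 3600 + 600, 3000.0)]
print(schedule_deviation_s(timetable, 1500.0, 9 * 3600 + 120))   # 120.0
```

A positive deviation corresponds to a late-running bus, which is the case in which traffic signal priority becomes relevant (see the feature list above).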
Real time apps
Transport for NSW worked with several developers in late 2012 to create and release smartphone applications with access to the real-time bus data provided by PTIPS. Released in December, several iOS and Android apps went live on their respective app stores, allowing customers to track where their buses were in real time, as well as any delays or timetable changes as they occur. It was initially trialled on Sydney Buses' route 400.
In 2013, this real-time data was further expanded to provide live information from Sydney Trains, and private bus operators Hillsbus and Busways Blacktown, and was eventually rolled out across bus operators in Greater Sydney.
In 2020, Transport for NSW started working with bus operators to introduce real-time tracking to regional bus services. As of March 2022, PTIPS-assisted real-time tracking was available for the regional centres of Albury, Armidale, Bathurst, Bega, Coffs Harbour, Dubbo, Forbes, Grafton, Nowra, Parkes, Port Macquarie, Queanbeyan, Tamworth, Tweed Heads and Wagga Wagga.
References
Sample of Realtime Data Government of New South Wales
Contract ID: PTIPS Roads & Traffic Authority Retrieved on 16 April 2007.
Priority bus green lights scrapped Sydney Morning Herald Retrieved on 16 April 2007.
Bus transport in New South Wales
Intelligent transportation systems | Public Transport Information and Priority System | [
"Technology"
] | 670 | [
"Warning systems",
"Intelligent transportation systems",
"Information systems",
"Transport systems"
] |
10,699,615 | https://en.wikipedia.org/wiki/Water%20efficiency | Water efficiency is the practice of reducing water consumption by comparing the amount of water required for a particular purpose with the amount of water actually used or delivered. Water efficiency differs from water conservation in that it focuses on reducing waste, not restricting use. Solutions for water efficiency not only focus on reducing the amount of potable water used but also on reducing the use of non-potable water where appropriate (e.g. flushing toilets, watering landscapes, etc.). It also emphasizes the influence consumers can have on water efficiency by making small behavioral changes to reduce water wastage, and by choosing more water-efficient products.
Importance
According to the UN World Water Development Report, over the past 100 years global water use has increased by a factor of six, and it continues to rise steadily at an estimated one percent per year as a result of population increase, economic development and changing consumption patterns. Increasing human demand for water, coupled with the effects of climate change, means that the future of water supply is not secure. Billions of people do not have safe drinking water. In addition, changes in climate, population growth, and lifestyles mean that human activities require more water per capita. This creates competition for water among agricultural, industrial, and human consumption.
Organizations
Many countries recognize water scarcity as a growing problem. Global organizations such as the World Water Council, continue to prioritize water efficiency alongside water conservation.
The Alliance for Water Efficiency, Waterwise, California Water Efficiency Partnership (formally the California Urban Water Conservation Council), Smart Approved WaterMark in Australia, and the Partnership for Water Sustainability in British Columbia in Canada are non-governmental organizations that support water efficiency at national and regional levels.
Governmental organizations such as Environment Canada, the EPA in the USA, the Environment Agency in the UK, and DEWR in Australia have recognized the problem and created policies and strategies to raise water efficiency awareness. The EPA established the WaterSense program in 2006. It is a voluntary program to encourage water efficiency in the United States by identifying and testing products that demonstrate improvement over standard models for toilets, bathroom faucets and faucet accessories, urinals, and residential shower heads, through the use of the WaterSense label.
The government of China created a five-year plan (2010-2015) to deliver safe drinking water to about 54 percent of the population by 2015. It would cost about US$66 billion (¥410 billion) to upgrade about 57,353 miles (92,300 kilometers) of main pipes and water treatment plants. The government hopes these steps will help to better conserve water and meet demand.
The Indian state of Haryana implemented the State Rural Water Policy 2012; under this policy individual household metered connections would be provided to 50% of the rural population by 2017, to stop water wastage in villages.
Water efficient solutions
Residential
Water efficiency solutions in residences include:
Turning off the faucet while brushing teeth — saves approximately five gallons (about 19 liters) of water
Installing faucet aerators
Fixing a water valve leakage
Only running the dishwasher and washing machine with a full load
Taking a shower instead of a bath
Washing fruits and vegetables in a bowl rather than continuously running the tap water
Using leftover water for houseplants
Using a watering can or garden hose with a trigger nozzle instead of a sprinkler
Using a bucket and sponge when washing a car rather than a running hose
Washing clothes and linens in a washing machine rather than washing them by hand
Recycling greywater for toilet flushing water and garden use
Watering outdoor plants in the morning or in the evening when temperatures are cooler
Consumers can voluntarily, or with government incentives or mandates, purchase water-efficient appliances such as low-flush toilets and washing machines.
Manufacturers
Water efficiency solutions in manufacturing:
Identifying and eliminating wastage (such as leaks) and inefficient processes (such as continual spray devices on stop-start production lines). This may be the lowest-cost area for water savings, as it involves minimal capital outlay. Savings can be made through implementing procedural changes, such as cleaning plant areas with brooms rather than water.
Changing processes and plant machinery. A retrofit of key plant equipment may increase efficiency. Alternatively, upgrades to more efficient models can be factored into planned maintenance and replacement schedules.
Reusing wastewater. As well as saving on mains water, this option may improve the reliability of supply, whilst reducing trade waste charges and associated environmental risks.
Waterless products
Using waterless car wash products to wash cars, boats, motorcycles, and bicycles. This can save a substantial amount of water per wash.
Utilities
The United States Environmental Protection Agency (EPA) makes the following recommendations for communities and utilities:
Implementing a water-loss management program (e.g. locate and repair leaks).
Universal metering.
Ensuring that fire hydrants are tamperproof.
Equipment changes — setting a good example by using water-efficient equipment.
Installing faucet aerators and low-flow shower heads in municipal buildings.
Replacing worn-out plumbing fixtures, appliances, and equipment with water-saving models.
Minimizing the water used in space cooling equipment in accordance with the manufacturer's recommendations.
Shutting off cooling units when not needed.
Encouraging the use of urinals instead of toilet stalls in school (boys') and work office (men's) restrooms.
Utilities can also modify their billing software to track customers who have taken advantage of various utility-sponsored water conservation initiatives (toilet rebates, irrigation rebates, etc.) to see which initiatives provide the greatest water savings for the least cost.
Data centers
Water policies and impact assessments
Environmental policies, and the different uses of the models generated to support their enforcement, can have significant impacts on society. Hence, improving policies on environmental justice issues often requires local government decision-making, public awareness, and a substantial set of scientific tools. Furthermore, positively influencing policy decisions requires more than good intentions: it calls for analysis of risk-related information along with consideration of economic issues, ethical and moral principles, legal precedents, political realities, cultural beliefs, societal values, and bureaucratic impediments. According to "The Role of Cumulative Risk Assessment in Decisions about Environmental Justice," ensuring that the rights of people are protected regardless of their age, race, and background should not be neglected. If a policy protects the natural environment but negatively affects those within reach of its enforcement, that policy should be subject to re-evaluation. Researchers have documented racial and socioeconomic disparities in exposure to environmental hazards by describing the demographic composition of areas and their proximity to hazardous sites. Any improvements to a social policy, and the models generated from these improvements, should therefore reflect the environmental justice beliefs of policy-makers and researchers. Accordingly, researchers and advocates of social change should examine the promises and pitfalls associated with environmental justice struggles, explore the implications of proposed solutions, and recognize that the tools necessary to carry out the preceding requirements are underdeveloped.
Examples
Reef Plan (Australia)
The Reef Plan began to incorporate new ways of creating models that integrate environmental, economic, and social consequences. Pre-existing Australian water policies were often criticized for focusing on investment prioritization and economic dimensions when it came to policy impact assessment. Policy makers and researchers in Australia now suggest that "sustainability focused policy requires multi-dimensional indicators" that combine different disciplines. The Reef Plan allows policy makers to identify issues relating to Reef water quality and to implement management strategies and actions to conserve and rehabilitate areas such as riparian zones and wetlands. Under the Reef Plan, nine strategies were implemented in the Great Barrier Reef region: self-management approaches, education and extension, economic incentives, planning for natural resources management and land use, regulatory frameworks, research and information sharing, partnership, priorities and targets, and monitoring and evaluation. These improvements brought benefits such as:
A more comprehensive picture of the policy impacts. New models projected possible outcomes of different simulations of the proposed policies under various circumstances. In addition, they provided the optimal decisions to be made regarding each outcome through the use of computable general equilibrium (CGE) models, which "integrate dynamics on a catchment scale".
Helping the aggregation of both economic aspects of water and non-monetary elements of water usage.
Acknowledging the fact that farm production should depend on the global dynamics
Conserved Water Statutes (United States)
Conserved Water Statutes are state laws enacted by California, Montana, Washington, and Oregon to conserve water and allocate water resources to meet the needs of increasing demand for water in dry lands where irrigation is or was occurring. These laws help the states remove the disincentives to conserve water, and do so without damaging pre-existing water rights. Because any water left over after supplying the beneficiaries of pre-existing water rights does not belong to the appropriators, the default arrangement creates an incentive to use as much water as possible rather than to save it. This drives the cost of irrigation above its optimal level, which makes the policy inefficient. However, by enacting Conserved Water Statutes, state legislatures are able to address these disincentives. The statutes give appropriators rights over the surplus water and require them to have their water savings verified by the water resources department. Of the four states that adopted Conserved Water Statutes, Oregon is often regarded as the most successful. According to "How Expanding The Productivity of Water Rights Could Lessen Our Water Woes," the Oregon Water Resources Department (OWRD) has been a success because of the high rate of applications submitted and because the OWRD serves as a good intermediary that helps appropriators conserve water. OWRD's programs are a success not only because of their effectiveness but also because of their efforts to improve workers' working conditions. According to OWRD's website, the state policies regarding water rights are divided into Cultural Competency, Traditional Health Worker, Coordinated Care Organizations, and Race, Ethnicity and Language Data Collection.
Water pollution in Malaysia
In Malaysia, citizens have been experiencing harm from water pollutants that have accumulated in rivers over decades as a result of fast-growing urbanization and industrialization. Malaysian planners have been trying to build models that indicate how the amount of pollutants has grown over time as cities became more industrialized, and how these chemicals are distributed across various regions, using econometrics and various scientific tools. Such efforts encourage in-depth research, because pollution sources need to be analyzed numerically and evaluated economically as well as environmentally. With an abundance of evidence from models revealing the inadequacy of current policies, Malaysian decision-makers now recognize that appropriate treatment is necessary in industrialized regions to protect residents from water pollutants. As a result, the government seeks to increase public awareness and provide affordable water services to residents by the year 2020.
Benefits of impact assessments
Successful policies and assessments integrate environmental, economical, and social consequences which provide better models and potential future improvements of the policies. Understanding the importance of water policies and impact assessments is a crucial part of both water justice and environmental justice issues. Not only does it help to protect the quality of water but also the quality of living for humans who are directly affected by the environment.
In addition, successful policies go beyond water issues. Policies intended to benefit the general public touch upon subjects such as transportation and other environmental matters that may have a significant impact on the surrounding environment. Instead of relying on mere cost-benefit analysis, decisions are made so that they account for the priorities of the people.
Notable benefits of impact assessments:
A more comprehensive picture of the policy impacts: new models project possible outcomes of different simulations of the proposed policies under various circumstances, and provide the optimal decisions to be made regarding each outcome through the use of computable general equilibrium (CGE) models, which "integrate dynamics on a catchment scale".
Aggregation of both economic aspects of water and non-monetary elements of water usage.
Acknowledgement that farm production should depend on global dynamics.
Protection of the human rights of the workers and improvements in working conditions.
Provision of data that can be analyzed in terms of the economy, health impacts, and recognition of the need for appropriate treatments.
See also
Deficit irrigation
Nonresidential water use in the U.S.
Rainwater collection
Residential water use in the U.S. and Canada
Water conservation
Water resource management
Tap water
References
External links
Water Conservation for Small and Medium-Sized Utilities
Water Efficiency A Free Trade Publication
Savewater
Water conservation
Water and the environment
Water supply | Water efficiency | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 2,605 | [
"Hydrology",
"Water supply",
"Environmental engineering"
] |
10,700,701 | https://en.wikipedia.org/wiki/Manufacturing%20supermarket | A manufacturing supermarket (or market location) is, for a factory process, what a retail supermarket is for the customer. The customers draw products from the 'shelves' as needed and this can be detected by the supplier who then initiates a replenishment of that item. It was the observation that this 'way of working' could be transferred from retail to manufacturing that is one of the cornerstones of the Toyota Production System (TPS).
History
In the 1950s Toyota sent teams to the United States to learn how American industry achieved mass production. However, the Toyota delegation first got inspiration for its production system at an American supermarket (a Piggly Wiggly, to be precise). They saw the virtue in the supermarket only reordering and restocking goods once they had been bought by customers.
In a supermarket (like the TPS) customers (processes) buy what they need when they need it. Since the system is self-service the sales effort (materials management) is reduced. The shelves are refilled as products are sold (parts withdrawn) on the assumption that what has sold will sell again which makes it easy to see how much has been used and to avoid overstocking. The most important feature of a supermarket system is that stocking is triggered by actual demand. In the TPS this signal triggers the 'pull' system of production.
Implementation
Market locations are appropriate where there is a desire to communicate customer pull up the supply chain. The aim of the 'market' is to send single-unit consumption signals back up the supply chain so that a demand-levelling effect occurs. Just as in a supermarket it is possible for someone to decide to cater for a party of 300 from the supermarket, so it is possible to decide to suddenly fill ten trucks and send massively distorting signals up those same pathways. Thus the 'market location' can be used as a sort of isolator between actual demand and how supply would like demand to be, an isolator between batch demand spikes and the upstream supply process.
For example, if the market were positioned at the loading bay, then it would receive 'spikes' of demand whenever a truck comes in to be loaded. Since, in general, one knows in advance when trucks will arrive and what will need to be loaded onto them, it is possible to spread that demand spike over a chosen period before the truck actually arrives. This can be done by designating a location, say a marked floor area, to be the 'virtual' truck and moving items from the market to the 'virtual' truck smoothly over the chosen period before loading of the actual truck commences. Smoothly here means that the 'loading' of each item is spread evenly across the period (a minimal scheduling sketch follows the list of impacts below). For regular shipments this period might start the moment the last shipment in that schedule departs the loading bay. This has four key impacts:
Loading movements rise, which is the reason often given for not doing this 'virtual' truck loading;
Demand evenness (Mura) increases which allows stock reductions and exposes new issues to be resolved;
Any last minute searching for items to load is eliminated, since before the real truck needs to be loaded the 'virtual' truck will have completed its loading;
Any potential shortages that may affect the shipment can be exposed earlier by the 'stockout' in the market location. This is true because the 'virtual' truck loading sequence will be constructed to fit with the supply process tempo.
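The scheduling sketch referred to above might look like the following (purely illustrative; the slot length, item names and quantities are invented, and a real implementation would follow the plant's own pitch and kanban rules):

```python
# Minimal sketch of levelling a shipment: spread each item's total quantity
# evenly over the slots of the chosen lead period, so the market location sees
# a smooth pull instead of one demand spike when the real truck arrives.
from math import ceil

def level_loading_plan(items_to_load, period_min, slot_min=10):
    """items_to_load: dict of item -> total quantity for the shipment.
    Returns, per time slot, how many of each item to move to the 'virtual' truck."""
    slots = max(1, period_min // slot_min)
    plan, moved = [], {item: 0 for item in items_to_load}
    for s in range(1, slots + 1):
        slot = {}
        for item, total in items_to_load.items():
            target = ceil(total * s / slots)      # cumulative target after slot s
            slot[item] = target - moved[item]
            moved[item] = target
        plan.append(slot)
    return plan

# Example: 60 units of item A and 18 of item B, spread over the 120 minutes
# before loading of the real truck commences.
for slot in level_loading_plan({"A": 60, "B": 18}, period_min=120):
    print(slot)
```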
This logic can, obviously, be applied upstream of any batch process and not just deliveries to another plant. It is a workaround for the fact that the batch process hasn't been made to flow yet. It therefore has some costs but the benefits in terms of reducing the three wastes should outweigh these.
Toyota use this technique and demand it of their suppliers in order to generate focus on the supply issues it uncovers. They then demand the preparation of loads for more frequent 'virtual' trucks than will actually appear in order to raise this pressure (see Frequent deliveries).
At low stocking levels for some items the 'market location' can require Just in Sequence supply rather than Just in Time.
References
Lean manufacturing
Toyota Production System | Manufacturing supermarket | [
"Engineering"
] | 849 | [
"Lean manufacturing"
] |
10,701,087 | https://en.wikipedia.org/wiki/Trick%20roping | Floreo de reata or trick roping is a Mexican entertainment or competitive art involving the spinning of a lasso, also known as a lariat or a rope. Besides Mexico and Mexican charrería, it is also associated with Wild West shows or Western arts in the United States.
The lasso is a well-known tool of Mexican vaqueros, who developed rope spinning and throwing skills in using lassos to catch animals. Mexican vaqueros developed various tricks to show off their prowess with the lasso and demonstrations of these tricks evolved into entertainment and competitive disciplines.
Trick roping was introduced to the United States by Mexican charro Vicente Oropeza while working for Buffalo Bill's Wild West Show in the 1890s, and he was declared “Champion of the World” in 1900.
The well-established repertoire of tricks can be divided into three fundamental categories: "flat loop", "vertical loop", and "butterfly". In addition, thrown-loop tricks and tricks that involve the use of two ropes are used. Among the vertical loop tricks is the "Texas Skip", which involves the performer spinning the lasso in a wide loop in a vertical plane and jumping through the loop from one side to the other on each rotation.
Well-known trick ropers include:
Vicente Oropeza was the Mexican charro that introduced the Mexican art of trick roping to the United States. He was posthumously inducted into the National Cowboy and Western Heritage Museum Hall of Fame.
Texas Jack Omohundro was the first performer to introduce roping acts to the American stage.
Texas Rose Bascom, of Cherokee-Choctaw ancestry, was billed as the "Queen of the Trick Ropers"; she appeared in Hollywood movies, toured the world with the Bob Hope Troupe, Roy Rogers and Dale Evans, and Montie Montana, and was inducted into the National Cowgirl Hall of Fame.
Montie Montana had a 60-year career as a trick roper, and appeared in several John Wayne movies.
Actor and humorist Will Rogers, known for his roles as a cowboy, was an expert at trick roping. Rogers' rope tricks were showcased in the 1922 silent film The Ropin' Fool. He credited Mexican Charro Vicente Oropeza for inspiring him to become a trick roper, and called Oropeza the greatest trick roper ever.
Vince Bruce (b. April 4, 1955, d. September 24, 2011) was internationally acclaimed as one of the best Western acts in the world; Bruce made his Broadway debut in 1991, in the Tony Award-winning musical The Will Rogers Follies — A Life in Revue. Appearing as the trick-roping star and portraying Rogers in this tribute to the cowboy and vaudeville star, Bruce remained with the show for two and a half years at New York’s Palace Theatre. For his act, he performed a spin with two ropes, a feat first devised 60 years earlier by Will Rogers himself. On July 21, 1991, at the Empire State Building, Vince set a new world record — 4,011 — for “Texas Skips”.
Flores LaDue (1883-1951) was the only cowgirl to claim three world championships for trick and fancy roping; Flores remained undefeated in the event. Flores and her husband, Guy Weadick, also a trick roper, organized and produced the first Calgary Stampede. Flores Ladue is reputed to have been the first trick roper to perform the Texas Skip.
Horse trainer Buck Brannaman began his career in a child trick roping act with his brother.
The English troupe known as El Granadas, who performed at the 1946 Royal Variety Performance.
See also
Bullwhip
Wild West shows
Montie Montana
References
External links
The Lasso: A Rational Guide to Trick Roping by Carey Bunks, a book on trick roping that is available online under a GPL-type licence.
American frontier
Circus skills
Performing arts
Object manipulation
Rodeo-affiliated events | Trick roping | [
"Biology"
] | 801 | [
"Behavior",
"Object manipulation",
"Motor control"
] |
10,701,217 | https://en.wikipedia.org/wiki/Video%20decoder | A video decoder is an electronic circuit, often contained within a single integrated circuit chip, that converts base-band analog video signals to digital video. Video decoders commonly allow programmable control over video characteristics such as hue, contrast, and saturation. A video decoder performs the inverse function of a video encoder, which converts raw (uncompressed) digital video to analog video. Video decoders are commonly used in video capture devices and frame grabbers.
Signals
The input signal to a video decoder is analog video that conforms to a standard format. For example, a standard definition (SD) decoder accepts analog video (composite or S-Video) that conforms to SD formats such as NTSC or PAL. High definition (HD) decoders accept analog HD formats such as AHD, HD-TVI, or HD-CVI.
The output digital video may be formatted in various ways, such as 8-bit or 16-bit 4:2:2, 12-bit 4:1:1, BT.656 (SD) or BT.1120 (HD). Usually, in addition to the digital video output bus, a video decoder will also generate a clock signal and other signals such as:
Sync — indicates the beginning of a video frame
Blanking — indicates video blanking interval
Field — indicates whether the current video field is even or odd (applies to interlaced formats)
Lock — indicates the decoder has detected and is locked (synchronized) to a valid analog input video signal
Functional blocks
The main functional blocks of a video decoder typically include these:
Analog processors
Y/C (luminance/chrominance) separation
Chrominance processor
Luminance processor
Clock/timing processor
A/D converters for Y/C
Output formatter
Host communication interface
Process
Video decoding involves several processing steps. First the analog signal is digitized by an analog-to-digital converter to produce a raw, digital data stream. In the case of composite video, the luminance and chrominance are then separated; this is not necessary for S-Video sources. Next, the chrominance is demodulated to produce color difference video data. At this point, the data may be modified so as to adjust brightness, contrast, saturation and hue. Finally, the data is transformed by a color space converter to generate data in conformance with any of several color space standards, such as RGB and YCbCr. Together, these steps constitute video decoding because they "decode" an analog video format such as NTSC or PAL.
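As an illustration of the final color space conversion step (a sketch only: it uses the widely published full-range BT.601 coefficients, whereas a real decoder applies the levels and coefficients required by its target output format):

```python
# Convert 8-bit YCbCr samples to RGB using full-range BT.601 coefficients.
# This illustrates the last step of the decoding process described above.
import numpy as np

def ycbcr_to_rgb(y, cb, cr):
    y, cb, cr = (np.asarray(a, dtype=float) for a in (y, cb, cr))
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)
    return np.clip(np.stack([r, g, b], axis=-1), 0, 255).astype(np.uint8)

# Example: a mid-grey pixel with neutral chroma maps to equal R, G and B values.
print(ycbcr_to_rgb(128, 128, 128))   # -> [128 128 128]
```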
References
Electronic circuits | Video decoder | [
"Engineering"
] | 539 | [
"Electronic engineering",
"Electronic circuits"
] |
10,701,883 | https://en.wikipedia.org/wiki/Reassignment%20method | The method of reassignment is a technique for sharpening a time-frequency representation (e.g. spectrogram or the short-time Fourier transform) by mapping the data to time-frequency coordinates that are nearer to the true region of support of the analyzed signal. The method has been independently introduced by several parties under various names, including method of reassignment, remapping, time-frequency reassignment, and modified moving-window method. The method of reassignment sharpens blurry time-frequency data by relocating the data according to local estimates of instantaneous frequency and group delay. This mapping to reassigned time-frequency coordinates is very precise for signals that are separable in time and frequency with respect to the analysis window.
Introduction
Many signals of interest have a distribution of energy that varies in time and frequency. For example, any sound signal having a beginning or an end has an energy distribution that varies in time, and most sounds exhibit considerable variation in both time and frequency over their duration. Time-frequency representations are commonly used to analyze or characterize such signals. They map the one-dimensional time-domain signal into a two-dimensional function of time and frequency. A time-frequency representation describes the variation of spectral energy distribution over time, much as a musical score describes the variation of musical pitch over time.
In audio signal analysis, the spectrogram is the most commonly used time-frequency representation, probably because it is well understood, and immune to so-called "cross-terms" that sometimes make other time-frequency representations difficult to interpret. But the windowing operation required in spectrogram computation introduces an unsavory tradeoff between time resolution and frequency resolution, so spectrograms provide a time-frequency representation that is blurred in time, in frequency, or in both dimensions. The method of time-frequency reassignment is a technique for refocussing time-frequency data in a blurred representation like the spectrogram by mapping the data to time-frequency coordinates that are nearer to the true region of support of the analyzed signal.
The spectrogram as a time-frequency representation
One of the best-known time-frequency representations is the spectrogram, defined as the squared magnitude of the short-time Fourier transform. Though the short-time phase spectrum is known to contain important temporal information about the signal, this information is difficult to interpret, so typically, only the short-time magnitude spectrum is considered in short-time spectral analysis.
As a time-frequency representation, the spectrogram has relatively poor resolution. Time and frequency resolution are governed by the choice of analysis window and greater concentration in one domain is accompanied by greater smearing in the other.
A time-frequency representation having improved resolution, relative to the spectrogram, is the Wigner–Ville distribution, which may be interpreted as a short-time Fourier transform with a window function that is perfectly matched to the signal. The Wigner–Ville distribution is highly concentrated in time and frequency, but it is also highly nonlinear and non-local. Consequently, this distribution is very sensitive to noise, and generates cross-components that often mask the components of interest, making it difficult to extract useful information concerning the distribution of energy in multi-component signals.
Cohen's class of bilinear time-frequency representations is a class of "smoothed" Wigner–Ville distributions, employing a smoothing kernel that can reduce sensitivity of the distribution to noise and suppresses cross-components, at the expense of smearing the distribution in time and frequency. This smearing causes the distribution to be non-zero in regions where the true Wigner–Ville distribution shows no energy.
The spectrogram is a member of Cohen's class. It is a smoothed Wigner–Ville distribution with the smoothing kernel equal to the Wigner–Ville distribution of the analysis window. The method of reassignment smooths the Wigner–Ville distribution, but then refocuses the distribution back to the true regions of support of the signal components. The method has been shown to reduce time and frequency smearing of any member of Cohen's class.
In the case of the reassigned spectrogram, the short-time phase spectrum is used to correct the nominal time and frequency coordinates of the spectral data, and map it back nearer to the true regions of support of the analyzed signal.
The method of reassignment
Pioneering work on the method of reassignment was published by Kodera, Gendrin, and de Villedary under the name of Modified Moving Window Method. Their technique enhances the resolution in time and frequency of the classical Moving Window Method (equivalent to the spectrogram) by assigning to each data point a new time-frequency coordinate that better-reflects the distribution of energy in the analyzed signal.
In the classical moving window method, a time-domain signal, is decomposed into a set of coefficients, , based on a set of elementary signals, , defined
where is a (real-valued) lowpass kernel function, like the window function in the short-time Fourier transform. The coefficients in this decomposition are defined
where is the magnitude, and the phase, of , the Fourier transform of the signal shifted in time by and windowed by .
can be reconstructed from the moving window coefficients by
For signals having magnitude spectra, , whose time variation is slow relative to the phase variation, the maximum contribution to the reconstruction integral comes from the vicinity of the point satisfying the phase stationarity condition
or equivalently, around the point defined by
This phenomenon is known in such fields as optics as the principle of stationary phase, which states that for periodic or quasi-periodic signals, the variation of the Fourier phase spectrum not attributable to periodic oscillation is slow with respect to time in the vicinity of the frequency of oscillation, and in surrounding regions the variation is relatively rapid. Analogously, for impulsive signals, that are concentrated in time, the variation of the phase spectrum is slow with respect to frequency near the time of the impulse, and in surrounding regions the variation is relatively rapid.
In reconstruction, positive and negative contributions to the synthesized waveform cancel, due to destructive interference, in frequency regions of rapid phase variation. Only regions of slow phase variation (stationary phase) will contribute significantly to the reconstruction, and the maximum contribution (center of gravity) occurs at the point where the phase is changing most slowly with respect to time and frequency.
The time-frequency coordinates thus computed are equal to the local group delay, and local instantaneous frequency, and are computed from the phase of the short-time Fourier transform, which is normally ignored when constructing the spectrogram. These quantities are local in the sense that they represent a windowed and filtered signal that is localized in time and frequency, and are not global properties of the signal under analysis.
The modified moving window method, or method of reassignment, changes (reassigns) the point of attribution of to this point of maximum contribution , rather than to the point at which it is computed. This point is sometimes called the center of gravity of the distribution, by way of analogy to a mass distribution. This analogy is a useful reminder that the attribution of spectral energy to the center of gravity of its distribution only makes sense when there is energy to attribute, so the method of reassignment has no meaning at points where the spectrogram is zero-valued.
Efficient computation of reassigned times and frequencies
In digital signal processing, it is most common to sample the time and frequency domains. The discrete Fourier transform is used to compute samples of the Fourier transform from samples of a time domain signal. The reassignment operations proposed by Kodera et al. cannot be applied directly to the discrete short-time Fourier transform data, because partial derivatives cannot be computed directly on data that is discrete in time and frequency, and it has been suggested that this difficulty has been the primary barrier to wider use of the method of reassignment.
It is possible to approximate the partial derivatives using finite differences. For example, the phase spectrum can be evaluated at two nearby times, and the partial derivative with respect to time be approximated as the difference between the two values divided by the time difference, as in ∂φ(t, ω)/∂t ≈ [φ(t + Δt, ω) − φ(t, ω)] / Δt.
For sufficiently small values of Δt and Δω, and provided that the phase difference is appropriately "unwrapped", this finite-difference method yields good approximations to the partial derivatives of phase, because in regions of the spectrum in which the evolution of the phase is dominated by rotation due to sinusoidal oscillation of a single, nearby component, the phase is a linear function.
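As a hedged sketch of this finite-difference idea, the fragment below estimates the instantaneous frequency seen by one short-time Fourier transform channel from the phase difference of two transforms taken one sample apart; the window, frame positions, and test signal are arbitrary choices made only for illustration.

```python
import numpy as np

# Illustrative only: estimate the instantaneous frequency in one STFT channel
# from the phase difference of two analyses separated by a small time step.
fs = 8000.0                          # sample rate in Hz (arbitrary)
f0 = 1000.0                          # frequency of the test tone
t = np.arange(4096) / fs
x = np.cos(2 * np.pi * f0 * t)

win = np.hanning(256)
dt = 1.0 / fs                        # one-sample time shift

def stft_frame(signal, start):
    """Windowed FFT of one frame starting at sample index `start`."""
    seg = signal[start:start + len(win)] * win
    return np.fft.rfft(seg)

X0 = stft_frame(x, 1000)
X1 = stft_frame(x, 1001)             # same window, shifted by dt

k = np.argmax(np.abs(X0))            # channel nearest the tone
dphi = np.angle(X1[k] * np.conj(X0[k]))   # principal-value phase difference (valid while |dphi| < pi)
f_inst = dphi / (2 * np.pi * dt)          # instantaneous-frequency estimate in Hz
print(f_inst)                        # close to 1000 Hz
```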
Independently of Kodera et al., Nelson arrived at a similar method for improving the time-frequency precision of short-time spectral data from partial derivatives of the short-time phase spectrum. It is easily shown that Nelson's cross spectral surfaces compute an approximation of the derivatives that is equivalent to the finite differences method.
Auger and Flandrin showed that the method of reassignment, proposed in the context of the spectrogram by Kodera et al., could be extended to any member of Cohen's class of time-frequency representations by generalizing the reassignment operations to
where is the Wigner–Ville distribution of , and is the kernel function that defines the distribution. They further described an efficient method for computing the times and frequencies of the reassigned spectrogram accurately without explicitly computing the partial derivatives of phase.
In the case of the spectrogram, the reassignment operations can be computed by
where is the short-time Fourier transform computed using an analysis window is the short-time Fourier transform computed using a time-weighted analysis window and is the short-time Fourier transform computed using a time-derivative analysis window .
Using the auxiliary window functions and , the reassignment operations can be computed at any time-frequency coordinate
from an algebraic combination of three Fourier transforms evaluated at . Since these algorithms operate only on short-time spectral data evaluated at a single time and frequency, and do not explicitly compute any derivatives, this gives an efficient method of computing the reassigned discrete short-time Fourier transform.
One constraint in this method of computation is that the short-time Fourier transform value in the denominator must be non-zero. This is not much of a restriction, since the reassignment operation itself implies that there is some energy to reassign, and has no meaning when the distribution is zero-valued.
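The following sketch outlines how reassigned coordinates can be formed from three short-time Fourier transforms of one frame, following the window-based formulation described above. The sign conventions of the correction terms depend on how the short-time Fourier transform is defined, and the window, frame indexing, and variable names here are assumptions made for illustration only.

```python
import numpy as np

def reassigned_coordinates(x, n0, win, fs):
    """Sketch of reassignment for one frame, using three STFTs (illustrative only).

    Uses the analysis window h, the time-weighted window t*h, and an
    approximation of the derivative window dh/dt.  The signs of the
    corrections depend on the STFT convention and are not settled here.
    """
    N = len(win)
    t_rel = (np.arange(N) - N // 2) / fs            # window-centered time axis (s)
    h = win
    th = t_rel * win                                # time-weighted window
    dh = np.gradient(win) * fs                      # finite-difference estimate of dh/dt

    seg = x[n0:n0 + N]
    Xh = np.fft.rfft(seg * h)
    Xth = np.fft.rfft(seg * th)
    Xdh = np.fft.rfft(seg * dh)

    freqs = np.fft.rfftfreq(N, d=1.0 / fs)
    t_frame = (n0 + N // 2) / fs                    # nominal time of this frame (s)

    nz = np.abs(Xh) > 1e-12                         # reassignment is undefined where Xh is zero
    ratio_t = np.zeros_like(Xh)
    ratio_f = np.zeros_like(Xh)
    ratio_t[nz] = Xth[nz] / Xh[nz]
    ratio_f[nz] = Xdh[nz] / Xh[nz]

    t_hat = t_frame + np.real(ratio_t)              # reassigned times (s)
    f_hat = freqs - np.imag(ratio_f) / (2 * np.pi)  # reassigned frequencies (Hz)
    return t_hat, f_hat, np.abs(Xh) ** 2            # coordinates plus spectrogram values
```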
Separability
The short-time Fourier transform can often be used to estimate the amplitudes and phases of the individual components in a multi-component signal, such as a quasi-harmonic musical instrument tone. Moreover, the time and frequency reassignment operations can be used to sharpen the representation by attributing the spectral energy reported by the short-time Fourier transform to the point that is the local center of gravity of the complex energy distribution.
For a signal consisting of a single component, the instantaneous frequency can be estimated from the partial derivatives of phase of any short-time Fourier transform channel that passes the component. If the signal is to be decomposed into many components,
x(t) = Σₙ Aₙ(t) exp[jθₙ(t)],
and the instantaneous frequency of each component is defined as the derivative of its phase with respect to time, that is,
ωₙ(t) = dθₙ(t)/dt,
then the instantaneous frequency of each individual component can be computed from the phase of the response of a filter that passes that component, provided that no more than one component lies in the passband of the filter.
This is the property, in the frequency domain, that Nelson called separability and is required of all signals so analyzed. If this property is not met, then the desired multi-component decomposition cannot be achieved, because the parameters of individual components cannot be estimated from the short-time Fourier transform. In such cases, a different analysis window must be chosen so that the separability criterion is satisfied.
If the components of a signal are separable in frequency with respect to a particular short-time spectral analysis window, then the output of each short-time Fourier transform filter is a filtered version of, at most, a single dominant (having significant energy) component, and so the derivative, with respect to time, of the phase of the short-time Fourier transform at a given frequency is equal to the derivative, with respect to time, of the phase of the dominant component at that frequency. Therefore, if a component having instantaneous frequency ωₙ(t) is the dominant component in the vicinity of an analysis frequency ω₀, then the instantaneous frequency of that component can be computed from the phase of the short-time Fourier transform evaluated at ω₀. That is, ωₙ(t) = ∂φ(t, ω₀)/∂t.
Just as each bandpass filter in the short-time Fourier transform filterbank may pass at most a single complex exponential component, two temporal events must be sufficiently separated in time that they do not lie in the same windowed segment of the input signal. This is the property of separability in the time domain, and is equivalent to requiring that the time between two events be greater than the length of the impulse response of the short-time Fourier transform filters, that is, the span of non-zero samples in the analysis window.
In general, there is an infinite number of equally valid decompositions for a multi-component signal. The separability property must be considered in the context of the desired decomposition. For example, in the analysis of a speech signal, an analysis window that is long relative to the time between glottal pulses is sufficient to separate harmonics, but the individual glottal pulses will be smeared, because many pulses are covered by each window (that is, the individual pulses are not separable, in time, by the chosen analysis window). An analysis window that is much shorter than the time between glottal pulses may resolve the glottal pulses, because no window spans more than one pulse, but the harmonic frequencies are smeared together, because the main lobe of the analysis window spectrum is wider than the spacing between the harmonics (that is, the harmonics are not separable, in frequency, by the chosen analysis window).
Extensions
Consensus complex reassignment
Gardner and Magnasco (2006) argue that the auditory nerve may use a form of the reassignment method to process sounds. Auditory nerve fibers are known to preserve timing (phase) information better than they preserve magnitudes. The authors derive a variant of reassignment that operates on complex values (i.e. both phase and magnitude) and show that it produces sparse outputs, as auditory nerves do. By running this reassignment with windows of different bandwidths (see the discussion in the section above), a "consensus" representation that captures multiple kinds of signals is found, again like the auditory system. They argue that the algorithm is simple enough for neurons to implement.
Synchrosqueezing transform
References
Further reading
S. A. Fulop and K. Fitz, A spectrogram for the twenty-first century, Acoustics Today, vol. 2, no. 3, pp. 26–33, 2006.
S. A. Fulop and K. Fitz, Algorithms for computing the time-corrected instantaneous frequency (reassigned) spectrogram, with applications, Journal of the Acoustical Society of America, vol. 119, pp. 360 – 371, Jan 2006.
External links
TFTB — Time-Frequency ToolBox
SPEAR - Sinusoidal Partial Editing Analysis and Resynthesis
Loris - Open-source software for sound modeling and morphing
SRA - A web-based research tool for spectral and roughness analysis of sound signals (supported by a Northwest Academic Computing Consortium grant to J. Middleton, Eastern Washington University)
Time–frequency analysis
Transforms
Data compression | Reassignment method | [
"Physics",
"Mathematics"
] | 3,155 | [
"Functions and mappings",
"Spectrum (physical sciences)",
"Time–frequency analysis",
"Frequency-domain analysis",
"Mathematical objects",
"Mathematical relations",
"Transforms"
] |
10,702,027 | https://en.wikipedia.org/wiki/Harmonic%20wavelet%20transform | In the mathematics of signal processing, the harmonic wavelet transform, introduced by David Edward Newland in 1993, is a wavelet-based linear transformation of a given function into a time-frequency representation. It combines advantages of the short-time Fourier transform and the continuous wavelet transform. It can be expressed in terms of repeated Fourier transforms, and its discrete analogue can be computed efficiently using a fast Fourier transform algorithm.
Harmonic wavelets
The transform uses a family of "harmonic" wavelets indexed by two integers j (the "level" or "order") and k (the "translation"), given by w(2^j t − k), where
w(t) = [exp(i4πt) − exp(i2πt)] / (i2πt).
These functions are orthogonal, and their Fourier transforms are a square window function (constant in a certain octave band and zero elsewhere). In particular, they satisfy:
where "*" denotes complex conjugation and δ is the Kronecker delta.
As the order j increases, these wavelets become more localized in Fourier space (frequency) and in higher frequency bands, and conversely become less localized in time (t). Hence, when they are used as a basis for expanding an arbitrary function, they represent behaviors of the function on different timescales (and at different time offsets for different k).
However, it is possible to combine all of the negative orders (j < 0) together into a single family of "scaling" functions φ(t − k), where
φ(t) = [exp(i2πt) − 1] / (i2πt).
The function φ is orthogonal to itself for different k and is also orthogonal to the wavelet functions for non-negative j:
In the harmonic wavelet transform, therefore, an arbitrary real- or complex-valued function (in L2) is expanded in the basis of the harmonic wavelets (for all integers j) and their complex conjugates:
or alternatively in the basis of the wavelets for non-negative j supplemented by the scaling functions φ:
The expansion coefficients can then, in principle, be computed using the orthogonality relationships:
For a real-valued function f(t), the coefficients of the conjugate wavelets are the complex conjugates of the corresponding wavelet coefficients, and so one can cut the number of independent expansion coefficients in half.
This expansion has the property, analogous to Parseval's theorem, that:
Rather than computing the expansion coefficients directly from the orthogonality relationships, however, it is possible to do so using a sequence of Fourier transforms. This is much more efficient in the discrete analogue of this transform (discrete t), where it can exploit fast Fourier transform algorithms.
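A sketch of this FFT-based computation is given below for a real signal whose length is a power of two. The octave-band partitioning follows the usual scheme for the discrete harmonic wavelet transform, but the normalisation and the exact band-edge indexing are assumptions made for illustration rather than a fixed standard.

```python
import numpy as np

def harmonic_wavelet_transform(x):
    """Sketch of the discrete harmonic wavelet transform of a real signal.

    Transforms the whole signal with an FFT, splits the spectrum into octave
    bands [2**j, 2**(j + 1)), and inverse-transforms each band to obtain the
    complex wavelet coefficients of that level.  For a real signal the
    negative-frequency half carries the conjugate coefficients and is omitted
    here; normalisation and band edges are illustrative assumptions.
    """
    N = len(x)
    assert N > 0 and (N & (N - 1)) == 0, "length must be a power of two"
    X = np.fft.fft(x) / N
    levels = {-1: X[0:1]}                       # "scaling" (mean) term
    j = 0
    while 2 ** (j + 1) <= N // 2:
        band = X[2 ** j: 2 ** (j + 1)]          # one octave band of Fourier coefficients
        levels[j] = np.fft.ifft(band) * len(band)   # coefficients at level j, k = 0 .. 2**j - 1
        j += 1
    return levels

# A 4-cycle sine concentrates its energy in level 2 (bins 4-7).
coeffs = harmonic_wavelet_transform(np.sin(2 * np.pi * 4 * np.arange(64) / 64))
for level, c in sorted(coeffs.items()):
    print(level, np.round(np.abs(c), 3))
```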
References
Time–frequency analysis
Transforms
Wavelets | Harmonic wavelet transform | [
"Physics",
"Mathematics"
] | 493 | [
"Functions and mappings",
"Spectrum (physical sciences)",
"Time–frequency analysis",
"Frequency-domain analysis",
"Mathematical objects",
"Mathematical relations",
"Transforms"
] |
10,702,410 | https://en.wikipedia.org/wiki/Phi%20bond | In chemistry, phi bonds (φ bonds) are usually covalent chemical bonds, where six lobes of one involved atomic orbital overlap six lobes of the other involved atomic orbital. This overlap leads to the formation of a bonding molecular orbital with three nodal planes which contain the internuclear axis and go through both atoms.
The Greek letter φ in their name refers to f orbitals, since the orbital symmetry of the φ bond is the same as that of the usual (6-lobed) type of f orbital when seen down the bond axis.
One possible candidate for a molecule with phi bonding was identified in 2005: a U−U bond in the diuranium molecule, U2. However, later studies that accounted for spin–orbit interactions found that the bonding was only of fourth order. Experimental evidence for phi bonding between a thorium atom and cyclooctatetraene in thorocene has been supported by computational analysis, though this mixed-orbital bond has strong ionic character and is not a traditional phi bond.
References
Chemical bonding
Hypothetical processes | Phi bond | [
"Physics",
"Chemistry",
"Materials_science"
] | 210 | [
"nan",
"Chemical bonding",
"Condensed matter physics",
"Hypotheses in chemistry"
] |
10,702,544 | https://en.wikipedia.org/wiki/Russian%20floating%20nuclear%20power%20station | Floating nuclear power stations () are vessels designed by Rosatom, the Russian state-owned nuclear energy corporation. They are self-contained, low-capacity, floating nuclear power plants. Rosatom plans to mass-produce the stations at shipbuilding facilities and then tow them to ports near locations that require electricity.
Work on such a concept dates back to the MH-1A in the United States, built in the 1960s into the hull of a World War II Liberty ship; much later, in 2022, the United States Department of Energy funded a three-year research study of offshore floating nuclear power generation. The Rosatom project is the first floating nuclear power plant intended for mass production. The initial plan was to manufacture at least seven of the vessels by 2015. On 14 September 2019, Russia's first floating nuclear power plant, Akademik Lomonosov, arrived at its permanent location in the Chukotka region. It started operation on 19 December 2019.
History
The project for a floating nuclear power station began in 2000, when the Ministry for Atomic Energy of the Russian Federation (Rosatom) chose Severodvinsk in Arkhangelsk Oblast as the construction site, Sevmash was appointed as general contractor.
Construction of the first power station, the Akademik Lomonosov, started on 15 April 2007 at the Sevmash Submarine-Building Plant in Severodvinsk.
In August 2008 construction works were transferred to the Baltic Shipyard in Saint Petersburg, which is also responsible for the construction of future vessels.
Akademik Lomonosov was launched on 1 July 2010, at an estimated cost of 6 billion rubles (about US$232 million).
In 2015 construction of a second vessel starting in 2019 was announced by Russia's state nuclear corporation Rosatom.
On 27 July 2021 Rosatom signed an agreement with GDK Baimskaya LLC for energy delivery for Baimskaya copper mining operations. Rosatom suggests delivering up to three new floating power plants (with fourth being in reserve), all using the latest RITM-200M 55 MWe reactors, currently serving on Project 22220 icebreakers. These are to be docked at Cape Nagloynyn, Chaunskaya Bay port and connected to the Baimskaya mine by 400 km long 110 kV line through Bilibino. According to Rosatom, production of the first new reactors by Atomenergomash has already started. In August 2022, construction of the first hull started in China, planned to be delivered to Russia in 2023 for installation of reactors and equipment.
On 31 December 2021 Rosatom announced that these four new floating plants will carry a new, slightly improved version of RITM-200 cores, named RITM-200S, currently in development. TVEL has been charged with development of new fuel assemblies for its improved core. Each barge will produce 106 MWe of power.
Technical characteristics
The floating nuclear power station is a non-self propelled vessel.
It has length of , width of , height of , and draught of . The vessel has a displacement of 21,500 tonnes and a crew of 69 people.
Each vessel of this type has two modified KLT-40 naval propulsion reactors together providing up to 70 MW of electricity or 300 MW of heat, or cogeneration of electricity and heat for district heating, enough for a city with a population of 200,000 people. Because of its ability to float and be assembled in extreme weather conditions, it can provide heat and power to areas that do not have easy access to these amenities because of their geographic location. It could also be modified as a desalination plant producing 240,000 cubic meters of fresh water a day.
A smaller modification of the plant can be fitted with two ABV-6M reactors with an electrical output of around 18 MWe (megawatts electric).
The much larger VBER-300 (917 MW thermal, or 325 MWe) and the slightly larger RITM-200 (55 MWe) reactors have both been considered as potential energy sources for these floating nuclear power stations. The station also incorporates a floating power unit (FPU), hydraulic structures that provide a solid foundation, isolate the FPU, and transmit the generated power and heat ashore, and onshore facilities for receiving the generated power and transmitting it to external networks for distribution to consumers.
Objectives
The primary goal of the venture is to meet the growing energy needs of the area, to support effective exploration and development of the gold deposits and other fields served by the Chaun-Bilibino energy system of the industrial cluster, to stabilize electricity and heat tariffs for residential and industrial consumers, and to create a reliable energy base for the economic and social development of the region.
Contractors
The hull and sections of vessels are built by the Baltic Shipyard in Saint Petersburg and Wison (Nantong) Heavy Industry in China. Reactors are designed by OKBM Afrikantov and assembled by Nizhniy Novgorod Research and Development Institute Atomenergoproekt (both part of Atomenergoprom). The reactor vessels are produced by Izhorskiye Zavody.
Kaluga Turbine Plant supplies the turbo-generators.
Fueling
The floating power stations need to be refueled every three years while saving up to 200,000 metric tons of coal and 100,000 tons of fuel oil a year. The reactors are supposed to have a lifespan of 40 years. Every 12 years, the whole plant will be towed home and overhauled at the wharf where it was constructed. The manufacturer will arrange for the disposal of the nuclear waste and maintenance is provided by the infrastructure of the Russian nuclear industry. Thus, virtually no radiation traces are expected at the place where the power station produced its energy.
Safety
The safety systems of the KLT-40S are built around the reactor design itself, successive physical barriers for protection and containment, self-actuating active and passive safety systems, automatic self-diagnostic systems, reliable diagnostics of equipment and system status, and provisions for accident management. Additionally, the safety systems on board operate independently of the plant's power supply.
Environmental groups and citizens are concerned that floating plants will be more vulnerable to accidents, natural disasters specific to oceans, and terrorism than land-based stations. They point to a history of naval and nuclear accidents in Russia and the former Soviet Union, including the Chernobyl disaster of 1986.
Russia does have 50 years of experience operating a fleet of nuclear-powered icebreakers that are also used for scientific and Arctic tourism expeditions. However, earlier incidents (Lenin, 1957, and Taymyr, 2011) involving radioactive leaking from such vessels also contribute to safety concerns for FNPPs. Commercialization of floating nuclear power plants in the United States has failed due to high costs and safety concerns.
Environmental concerns around the health and safety of the project have arisen. Radioactive steam may be produced, negatively impacting people living nearby. Earthquake activity is common in the area and there are fears that a tsunami wave could damage the facility and release radioactive substances and waste. Being on the water exposes it to natural forces, according to environmental groups.
Environmental impacts
Both coastal and floating nuclear powerplants may result in similar consequences for the ocean environments. Although the surrounding seawall could provide an artificial reef that is an advantageous environment for some marine life forms, there are potential negative effects on animal and plant life near-shore (for coastal plants) or further offshore (with deep-water floating plants). Intrusion of marine organisms into power station systems during water entrainment could reduce species variety and number of individual organisms. The thermal impact of water discharge from stations may permanently change the area's marine ecosystem, with, for example, cooler-water species unable to maintain populations, and non-local, warmer-water species, colonizing the vicinity. While power plants may instigate such environmental transformations, the thermal plumes caused by the warmed-water discharge are narrow, so their effect is geographically restricted. Winter shutdown of the plant may result in fish kills from the thermal shock. However, this can be mitigated in stations with multiple units, by avoiding simultaneous shutdowns. By sequentially turning off only one unit at a time, the water temperature variation is minimized. These problems are shared by all thermal power plants.
The breakwater will constitute an artificial island of appreciable size.
Locations
Floating nuclear power stations are planned to be used mainly in the Russian Arctic. Five of these are planned to be used by Gazprom for offshore oil and gas field development and for operations on the Kola and Yamal peninsulas. Other locations include Dudinka on the Taymyr Peninsula, Vilyuchinsk on the Kamchatka Peninsula and Pevek on the Chukchi Peninsula. In 2007, Rosatom signed an agreement with the Sakha Republic to build a floating plant for its northern parts, using smaller ABV reactors.
According to Rosatom, 15 countries, including China, Indonesia, Malaysia, Algeria, Sudan, Namibia, Cape Verde, and Argentina, have shown interest in hiring such a device. It has been estimated that 75% of the world's population live within 100 miles of a port city.
See also
Atlantic Nuclear Power Plant
Nuclear marine propulsion
Offshore Power Systems
Soviet naval reactors
References
Further reading
Vladimir Kuznetsov et al. (2004), Floating Nuclear Power Plants in Russia: A Threat to the Arctic, World Oceans and Non-Proliferation. Green Cross Russia
Akademik Lomonosov Floating Nuclear Co-generation Plant, Russian Federation
Floating nuclear power stations
Nuclear power stations in Russia
Nuclear power stations using pressurized water reactors
Nuclear-powered ships
Nuclear technology
Russian inventions
Science and technology in Russia
Service vessels of Russia | Russian floating nuclear power station | [
"Physics"
] | 1,987 | [
"Nuclear technology",
"Nuclear physics"
] |
10,702,868 | https://en.wikipedia.org/wiki/Design%20quality%20indicator | The Design Quality Indicator (DQI) is a toolkit to measure, evaluate and improve the design quality of buildings.
Development of DQI was started in the United Kingdom by the Construction Industry Council (CIC) in 1999. It was initiated in response to the success of Key Performance Indicators devised for assessing construction process issues such as timely completion, financial control and safety on site by the construction industry's Movement for Innovation (M4I). The aim of the DQI systems was to ensure that the M4I's indicators of construction process were balanced by an assessment of the building as a product. The Science Policy Research Unit at the University of Sussex was commissioned to develop the indicator tool, which was launched as an online resource on 1 October 2003. In 2004 the DQI received recognition from the British Institute of Facilities Management for the role of involving users in the design process. The DQI tool was made available to users in the United States in 2006, and an online American version was launched on 20 October 2008.
Unlike its forerunner, the Housing Quality Indicator (HQI) system, devised for the UK's Department for the Environment, Transport and the Regions (DETR) by the consultancy DEGW and published on open access in February 1999, the DQI system could be used only by approved facilitators. The criteria and the method of assessment, which, though unacknowledged, is a simple form of multi-attribute utility analysis, remained inaccessible to design teams and their clients unless they employed a facilitator licensed to use it. Guidance on using the HQI system can be found on the government website. The DQI version for hospitals is also on open access on the national archive.
Conceptual framework
DQI applies a structured approach to assess design quality based on the model by the architect Vitruvius, the Roman author of the earliest surviving theoretical treatise on building in Western culture, who described design in terms of utilitas, firmitas and venustas, often translated as commodity, firmness and delight. DQI uses a modern-day interpretation of these terms as:
Functionality (utilitas) – the arrangement, quality and interrelationship of spaces and how the building is designed to be useful to all.
Build Quality (firmitas) – the engineering performance of the building, which includes structural stability and the integration, safety and robustness of the systems, finishes and fittings.
Impact (venustas) – the building's ability to create a sense of place and have a positive effect on the local community and environment.
Methodology
DQI is completed by a range of stakeholders in the briefing and design stages of a building project, or on a completed building. Stakeholders who participate include:
Architects
Building users (or potential users)
Building clients
Facilities managers (or future facilities managers)
Project managers
Quantity surveyors (Cost engineer)
Structural and building services engineers
DQI is applied in a facilitated workshop that is led by a certified DQI facilitator.
Models and related approaches
There are three models of design quality indicator:
DQI which is applicable to all building types
DQI for schools which is applicable to school buildings. This model of DQI is being used on all current school projects in the UK and forms part of the Department for Children, Schools and Families 'Minimum Design Standard' for new school buildings.
DQI for health buildings which was released in beta format in June 2012 on the DQI website.
References
Other references
Whyte, J and Gann, D (2003), Design Quality Indicators: work in progress: Building Research and Information, London: Spon Press.
Markus, T. (2003), Lessons from the Design Quality Indicator: Building Research and Information, London: Spon Press.
Thomson at al. (2003), Managing value and quality in design: Building Research and Information, London: Spon Press.
Prasad, S. (2004) 'Inclusive maps', in Designing Better Buildings: quality and value in the built environment edited by Macmillan, S. London: Spon Press
Dickson, M. (2004) 'Achieving quality in building design by intention', in Designing Better Buildings: quality and value in the built environment edited by Macmillan, S. London: Spon Press
Whyte, J Gann, D and Salter, A (2004) 'Building indicators of design quality', in Designing Better Buildings: quality and value in the built environment edited by Macmillan, S. London: Spon Press
Prasad, S. (2004), Clarifying intentions: the design quality indicator: Building Research and Information, London: Spon Press.
Cole, R. (2005), Building environmental assessment methods: redefining intentions and roles: Building Research and Information, London: Spon Press.
Kaatz, E., Root, D. and Bowen, P (2005), Broadening project participation through a modified building sustainability assessment: Building Research & Information, London: Spon Press.
Commission for Architecture and the Built Environment (2009), Case study: International Digital Laboratory, University of Warwick, Coventry
Commission for Architecture and the Built Environment (2009), Case study: Maples Respite Centre, Harlow, Essex
Commission for Architecture and the Built Environment (2009), Case study: St Nicholas Church of England Primary School, Essex
Commission for Architecture and the Built Environment (2009), Case study: British Library Centre for Conservation, London
Commission for Architecture and the Built Environment (2009), Case study: Frederick Bremer School, Waltham Forest London
Construction
Architectural design | Design quality indicator | [
"Engineering"
] | 1,122 | [
"Construction",
"Design",
"Architectural design",
"Architecture"
] |
11,671,698 | https://en.wikipedia.org/wiki/Soil%20salinity%20control | Soil salinity control refers to controlling the process and progress of soil salinity to prevent soil degradation by salination and reclamation of already salty (saline) soils. Soil reclamation is also known as soil improvement, rehabilitation, remediation, recuperation, or amelioration.
The primary man-made cause of salinization is irrigation. River water or groundwater used in irrigation contains salts, which remain in the soil after the water has evaporated.
The primary method of controlling soil salinity is to permit 10–20% of the irrigation water to leach the soil, so that it is drained and discharged through an appropriate drainage system. The salt concentration of the drainage water is normally 5 to 10 times higher than that of the irrigation water, which means that salt export will more closely match salt import and salt will not accumulate.
Problems with soil salinity
Salty (saline) soils have a high salt content. The predominant salt is normally sodium chloride (NaCl, "table salt"). Saline soils are therefore often also sodic soils, but sodic soils are not necessarily saline; they may instead be alkaline.
According to a study by UN University, about , representing 20% of the world's irrigated lands are affected, up from in the early 1990s. In the Indo-Gangetic Plain, home to over 10% of the world's population, crop yield losses for wheat, rice, sugarcane and cotton grown on salt-affected lands could be 40%, 45%, 48%, and 63%, respectively.
Salty soils are a common feature and an environmental problem in irrigated lands in arid and semi-arid regions, resulting in poor or little crop production. The causes of salty soils are often associated with high water tables, which are caused by a lack of natural subsurface drainage to the underground. Poor subsurface drainage may be caused by insufficient transport capacity of the aquifer or because water cannot exit the aquifer, for instance, if the aquifer is situated in a topographical depression.
Worldwide, the major factor in the development of saline soils is a lack of precipitation. Most naturally saline soils are found in (semi) arid regions and climates of the earth.
Primary cause
Man-made salinization is primarily caused by salt found in irrigation water. All irrigation water derived from rivers or groundwater, regardless of water purity, contains salts that remain behind in the soil after the water has evaporated.
For example, irrigation water with a low salt concentration of 0.3 g/L (equal to 0.3 kg/m3, corresponding to an electric conductivity of about 0.5 dS/m) applied at a modest annual rate of 10,000 m3/ha (almost 3 mm/day) brings 3,000 kg of salt per hectare each year. In the absence of sufficient natural drainage (as in waterlogged soils) and of a proper leaching and drainage program to remove salts, this would lead to high soil salinity and reduced crop yields in the long run.
Much of the water used in irrigation has a higher salt content than 0.3 g/L, compounded by irrigation projects using a far greater annual supply of water. Sugar cane, for example, needs about 20,000 m3/ha of water per year. As a result, irrigated areas often receive more than 3,000 kg/ha of salt per year, with some receiving as much as 10,000 kg/ha/year.
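The arithmetic behind the example above can be written out explicitly; the sketch below simply restates the figures already quoted in the text.

```python
# Salt brought in by irrigation: concentration (kg/m3) times annual water application (m3/ha).
salt_concentration = 0.3        # g/L, i.e. 0.3 kg per m3 of irrigation water
annual_irrigation = 10_000      # m3 of irrigation water per hectare per year

salt_import = salt_concentration * annual_irrigation   # kg of salt per hectare per year
print(salt_import)              # 3000.0 kg/ha/year, matching the example in the text
```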
Secondary cause
The secondary cause of salinization is waterlogging in irrigated land. Irrigation causes changes to the natural water balance of irrigated lands. Large quantities of water in irrigation projects are not consumed by plants and must go somewhere. In irrigation projects, it is impossible to achieve 100% irrigation efficiency where all the irrigation water is consumed by the plants. The maximum attainable irrigation efficiency is about 70%, but usually it is less than 60%. This means that at least 30%, and usually more than 40%, of the irrigation water is not evaporated and must go somewhere.
Most of the water lost this way is stored underground which can change the original hydrology of local aquifers considerably. Many aquifers cannot absorb and transport these quantities of water, and so the water table rises leading to waterlogging.
Waterlogging causes three problems:
The shallow water table and lack of oxygenation of the root zone reduces the yield of most crops.
It leads to an accumulation of salts brought in with the irrigation water as their removal through the aquifer is blocked.
With the upward seepage of groundwater, more salts are brought into the soil and the salination is aggravated.
Aquifer conditions in irrigated land and the groundwater flow have an important role in soil salinization.
Salt affected area
Normally, the salinization of agricultural land affects a considerable fraction, some 20% to 30%, of the land in irrigation projects. When the agriculture in such a fraction of the land is abandoned, a new salt and water balance is attained, a new equilibrium is reached, and the situation becomes stable.
In India alone, thousands of square kilometers have been severely salinized. China and Pakistan do not lag far behind (perhaps China has even more salt-affected land than India). A regional distribution of the 3,230,000 km2 of saline land worldwide has been derived from the FAO/UNESCO Soil Map of the World.
Spatial variation
Although the principles of the processes of salinization are fairly easy to understand, it is more difficult to explain why certain parts of the land suffer from the problems and other parts do not, or to predict accurately which part of the land will fall victim. The main reason for this is the variation of natural conditions in time and space, the usually uneven distribution of the irrigation water, and the seasonal or yearly changes of agricultural practices. Only in lands with undulating topography is the prediction simple: the depressional areas will degrade the most.
The preparation of salt and water balances for distinguishable sub-areas in the irrigation project, or the use of agro-hydro-salinity models, can be helpful in explaining or predicting the extent and severity of the problems.
Diagnosis
Measurement
Soil salinity is measured as the salt concentration of the soil solution in terms of g/L or electric conductivity (EC) in dS/m. The relation between these two units is about 5/3: y g/L => 5y/3 dS/m. Seawater may have a salt concentration of 30 g/L (3%) and an EC of 50 dS/m.
The standard for the determination of soil salinity is from an extract of a saturated paste of the soil, and the EC is then written as ECe. The extract is obtained by centrifugation. The salinity can more easily be measured, without centrifugation, in a 2:1 or 5:1 water:soil mixture (in terms of g water per g dry soil) than from a saturated paste. The relation between ECe and EC2:1 is about a factor of 4, hence: ECe = 4 × EC2:1.
Classification
Soils are considered saline when the ECe > 4. When 4 < ECe < 8, the soil is called slightly saline, when 8 < ECe < 16 it is called (moderately) saline, and when ECe > 16 severely saline.
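A small sketch combining the unit relation and the classification thresholds above; the helper names and the handling of readings that fall exactly on 4, 8 or 16 dS/m are arbitrary choices.

```python
def ece_from_ec2to1(ec_2to1):
    """Approximate ECe (dS/m) from an EC measured on a 2:1 water:soil mixture, using ECe ≈ 4 × EC2:1."""
    return 4.0 * ec_2to1

def salinity_class(ece):
    """Classify soil salinity from ECe in dS/m, following the thresholds in the text."""
    if ece <= 4:
        return "non-saline"
    if ece <= 8:
        return "slightly saline"
    if ece <= 16:
        return "(moderately) saline"
    return "severely saline"

# Example: a 2:1 extract reading of 2.5 dS/m corresponds to an ECe of about 10 dS/m.
print(salinity_class(ece_from_ec2to1(2.5)))   # "(moderately) saline"
```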
Crop tolerance
Sensitive crops lose their vigor already in slightly saline soils; most crops are negatively affected by (moderately) saline soils, and only salinity resistant crops thrive in severely saline soils. The University of Wyoming and the Government of Alberta report data on the salt tolerance of plants.
Principles of salinity control
Drainage is the primary method of controlling soil salinity. The system should permit a small fraction of the irrigation water (about 10 to 20 percent, the drainage or leaching fraction) to be drained and discharged out of the irrigation project.
In irrigated areas where salinity is stable, the salt concentration of the drainage water is normally 5 to 10 times higher than that of the irrigation water. Salt export matches salt import and salt will not accumulate.
When reclaiming already salinized soils, the salt concentration of the drainage water will initially be much higher than that of the irrigation water (for example 50 times higher). Salt export will greatly exceed salt import, so that with the same drainage fraction a rapid desalinization occurs. After one or two years, the soil salinity is decreased so much, that the salinity of the drainage water has come down to a normal value and a new, favorable, equilibrium is reached.
In regions with pronounced dry and wet seasons, the drainage system may be operated in the wet season only, and closed during the dry season. This practice of checked or controlled drainage saves irrigation water.
The discharge of salty drainage water may pose environmental problems to downstream areas. The environmental hazards must be considered very carefully and, if necessary mitigating measures must be taken. If possible, the drainage must be limited to wet seasons only, when the salty effluent inflicts the least harm.
Drainage systems
Land drainage for soil salinity control is usually done with a horizontal drainage system, but vertical systems are also employed.
The drainage system designed to evacuate salty water also lowers the water table. To reduce the cost of the system, the lowering must be reduced to a minimum. The highest permissible level of the water table (or the shallowest permissible depth) depends on the irrigation and agricultural practices and kind of crops.
In many cases a seasonal average water table depth of 0.6 to 0.8 m is deep enough. This means that the water table may occasionally be less than 0.6 m (say 0.2 m just after an irrigation or a rain storm). This automatically implies that, on other occasions, the water table will be deeper than 0.8 m (say 1.2 m). The fluctuation of the water table helps the soil to breathe: the expulsion of carbon dioxide (CO2) produced by the plant roots and the intake of fresh oxygen (O2) are promoted.
Establishing a not-too-deep water table offers the additional advantage that excessive field irrigation is discouraged, as the crop yield would be negatively affected by the resulting elevated water table, and irrigation water may be saved.
The statements made above on the optimum depth of the water table are very general, because in some instances the required water table may be still shallower than indicated (for example in rice paddies), while in other instances it must be considerably deeper (for example in some orchards). The establishment of the optimum depth of the water table is in the realm of agricultural drainage criteria.
Soil leaching
The vadose zone of the soil below the soil surface and the water table is subject to four main hydrological inflow and outflow factors:
Infiltration of rain and irrigation water (Irr) into the soil through the soil surface (Inf) :
Inf = Rain + Irr
Evaporation of soil water through plants and directly into the air through the soil surface (Evap)
Percolation of water from the unsaturated zone soil into the groundwater through the watertable (Perc)
Capillary rise of groundwater moving by capillary suction forces into the unsaturated zone (Cap)
In steady state (i.e. the amount of water stored in the unsaturated zone does not change in the long run) the water balance of the unsaturated zone reads: Inflow = Outflow, thus:
Inf + Cap = Evap + Perc or:
Irr + Rain + Cap = Evap + Perc
and the salt balance is
Irr.Ci + Cap.Cc = Evap.Fc.Ce + Perc.Cp + Ss
where Ci is the salt concentration of the irrigation water, Cc is the salt concentration of the capillary rise, equal to the salt concentration of the upper part of the groundwater body, Fc is the fraction of the total evaporation transpired by plants, Ce is the salt concentration of the water taken up by the plant roots, Cp is the salt concentration of the percolation water, and Ss is the increase of salt storage in the unsaturated soil. This assumes that the rainfall contains no salts. Only along the coast may this not be true. Further it is assumed that no runoff or surface drainage occurs. The amount of salt removed by plants (Evap.Fc.Ce) is usually negligibly small: Evap.Fc.Ce = 0
The salt concentration Cp can be taken as a part of the salt concentration of the soil in the unsaturated zone (Cu) giving: Cp = Le.Cu, where Le is the leaching efficiency. The leaching efficiency is often in the order of 0.7 to 0.8, but in poorly structured, heavy clay soils it may be less. In the Leziria Grande polder in the delta of the Tagus river in Portugal it was found that the leaching efficiency was only 0.15.
Assuming that one wishes to avoid the soil salinity to increase and maintain the soil salinity Cu at a desired level Cd we have:
Ss = 0, Cu = Cd and Cp = Le.Cd. Hence the salt balance can be simplified to:
Perc.Le.Cd = Irr.Ci + Cap.Cc
Setting the amount percolation water required to fulfill this salt balance equal to Lr (the leaching requirement) it is found that:
Lr = (Irr.Ci + Cap.Cc) / Le.Cd .
Substituting herein Irr = Evap + Perc − Rain − Cap and re-arranging gives :
Lr = [ (Evap−Rain).Ci + Cap(Cc−Ci) ] / (Le.Cd − Ci)
With this the irrigation and drainage requirements for salinity control can be computed too.
In irrigation projects in (semi)arid zones and climates it is important to check the leaching requirement, whereby the field irrigation efficiency (indicating the fraction of irrigation water percolating to the underground) is to be taken into account.
The desired soil salinity level Cd depends on the crop tolerance to salt. The University of Wyoming, US, and the Government of Alberta, Canada, report crop tolerance data.
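To illustrate how the terms in the last formula combine, the sketch below evaluates the leaching requirement Lr for a made-up set of seasonal values; all input numbers are hypothetical and serve only to show the arithmetic.

```python
def leaching_requirement(evap, rain, cap, ci, cc, le, cd):
    """Lr = [(Evap - Rain)*Ci + Cap*(Cc - Ci)] / (Le*Cd - Ci).

    Water terms in any consistent depth unit (e.g. mm per season), salt
    concentrations in any consistent unit (e.g. g/L), Le dimensionless.
    """
    return ((evap - rain) * ci + cap * (cc - ci)) / (le * cd - ci)

# Hypothetical seasonal values, for illustration only:
Lr = leaching_requirement(
    evap=1200.0,   # evaporation, mm
    rain=200.0,    # rainfall, mm
    cap=50.0,      # capillary rise, mm
    ci=0.5,        # salinity of the irrigation water, g/L
    cc=3.0,        # salinity of the capillary rise (upper groundwater), g/L
    le=0.8,        # leaching efficiency
    cd=2.0,        # desired soil salinity, g/L
)
print(round(Lr, 1))   # about 568.2 mm of percolation per season
```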
Strip cropping: an alternative
In irrigated lands with scarce water resources suffering from drainage (high water table) and soil salinity problems, strip cropping is sometimes practiced with strips of land where every other strip is irrigated while the strips in between are left permanently fallow.
Owing to the water application in the irrigated strips they have a higher water table which induces flow of groundwater to the unirrigated strips. This flow functions as subsurface drainage for the irrigated strips, whereby the water table is maintained at a not-too-shallow depth, leaching of the soil is possible, and the soil salinity can be controlled at an acceptably low level.
In the unirrigated (sacrificial) strips the soil is dry and the groundwater comes up by capillary rise and evaporates leaving the salts behind, so that here the soil salinizes. Nevertheless, they can have some use for livestock, sowing salinity resistant grasses or weeds. Moreover, useful salt resistant trees can be planted like Casuarina, Eucalyptus, or Atriplex, keeping in mind that the trees have deep rooting systems and the salinity of the wet subsoil is less than of the topsoil. In these ways wind erosion can be controlled. The unirrigated strips can also be used for salt harvesting.
Soil salinity models
The majority of the computer models available for water and solute transport in the soil (e.g. SWAP, DrainMod-S, UnSatChem, and Hydrus) are based on the Richards equation for the movement of water in unsaturated soil, in combination with Fick's convection–diffusion equation for the advection and dispersion of salts.
The models require the input of soil characteristics like the relations between variable unsaturated soil moisture content, water tension, water retention curve, unsaturated hydraulic conductivity, dispersity, and diffusivity. These relations vary greatly from place to place and time to time and are not easy to measure. Further, the models are complicated to calibrate under farmer's field conditions because the soil salinity here is spatially very variable. The models use short time steps and need at least a daily, if not hourly, database of hydrological phenomena. Altogether, this makes model application to a fairly large project the job of a team of specialists with ample facilities.
Simpler models, like SaltMod, based on monthly or seasonal water and soil balances and an empirical capillary rise function, are also available. They are useful for long-term salinity predictions in relation to irrigation and drainage practices.
LeachMod, which uses the SaltMod principles, helps in analyzing leaching experiments in which the soil salinity is monitored in various root zone layers; the model optimizes the value of the leaching efficiency of each layer so that the simulated soil salinity values fit the observed ones.
Spatial variations owing to variations in topography can be simulated and predicted using salinity cum groundwater models, like SahysMod.
See also
References
External links
Food and Agriculture Organization of the United Nations on soil salinity
US Salinity Laboratory at Riverside, California
Soil
Soil science
Environmental soil science
Agricultural soil science | Soil salinity control | [
"Environmental_science"
] | 3,662 | [
"Environmental soil science"
] |
11,673,172 | https://en.wikipedia.org/wiki/Pradeep%20Dubey | Pradeep Dubey (born 9 January 1951) is an Indian game theorist. He is a Professor of Economics at the State University of New York, Stony Brook, and a member of the Stony Brook Center for Game Theory. He also holds a visiting position at Cowles Foundation, Yale University. He did his schooling at the St. Columba's School, Delhi. He received his Ph.D. in applied mathematics from Cornell University and B.Sc. (with honors in physics) from the University of Delhi. His research areas of interest are game theory and mathematical economics. He has published, among others, in Econometrica, Games and Economic Behavior, Journal of Economic Theory, and Quarterly Journal of Economics. He is a Fellow of The Econometric Society, ACM Fellow and a member of the council of the Game Theory Society.
Academic positions
From 1975 until 1978, Dubey was an assistant professor in the School of Organization and Management and Cowles Foundation for Research in Economics at Yale University. In 1978, he became an associate professor at Yale, a position he held until 1984. In 1979, Dubey was a research fellow at the Institute for Advanced Studies at the Hebrew University of Jerusalem. Throughout 1982, he was a senior research fellow at the International Institute for Applied Systems Analysis in Austria. In 1984, Dubey became an economics professor at the University of Illinois Urbana-Champaign and taught there for one year. In the following year, he taught at Stony Brook University in the Department of Applied Mathematics and Statistics and the Institute for Decision Sciences. In 1986, Dubey entered his current position as the leading professor and co-director for the Center for Game Theory in Economics at the university.
Selected publications
"On the Uniqueness of the Shapley Value" (1975) International Journal of Game Theory Vol. 4, pp. 131–139.
"Trade and Prices in a Closed Economy with Exogenous Uncertainty and Different Levels of Information" (1977) (with M. Shubik) Econometrica Vol. 45, pp. 1657–1680.
"Some Properties of the Banzhaf Power Index" (1979) (with L.S. Shapley) Mathematics of Operations Research Vol. 4, pp. 99–131.
"Nash Equilibria of Market Games: Finiteness and Inefficiency" (1980) Journal of Economic Theory Vol. 22, pp. 363–376.
"Price-Quantity Strategic Market Games" (1982) Econometrica Vol. 50, pp. 111–126.
"Payoffs in Non-atomic Economies: an Axiomatic Approach" (1984) (with A. Neyman) Econometrica Vol. 52, pp. 1129–1150.
"Noncooperative General Exchange with a Continuum of Traders" (1994) (with L.S. Shapley) Journal of Mathematical Economics Vol. 23, pp. 253–293.
"Competitive Pooling: Rothschild-Stiglitz Reconsidered" (2002) (with J. Geanakoplos) Quarterly Journal of Economics Vol. 117, pp. 1529–1570.
"From Nash to Walras via Shapley-Shubik" (2003) (with J. Geanakoplos) Journal of Mathematical Economics Vol. 39, pp. 391–400.
"Learning with Perfect Information" (2004) (with O. Haimanko) Games and Economic Behavior Vol. 46, pp. 304–324.
"Default and Punishment in General Equilibrium" (2005) (with J. Geanakoplos & M. Shubik) Econometrica Vol. 73, pp. 1–37.
References
External links
Webpage from Stony Brook University
Personal Webpage
1951 births
Living people
Cornell University alumni
Game theorists
Fellows of the Econometric Society
20th-century Indian economists
21st-century American economists
Stony Brook University faculty
Delhi University alumni
St. Columba's School, Delhi alumni
American people of Indian descent
Scientists from Patna
2023 fellows of the Association for Computing Machinery | Pradeep Dubey | [
"Mathematics"
] | 858 | [
"Game theorists",
"Game theory"
] |
11,673,742 | https://en.wikipedia.org/wiki/Nicol%20Hugh%20Baird | Nicol Hugh Baird (26 August 1796 – 18 October 1849) was a Scottish surveyor who worked for his uncle Charles Baird in St Petersburg for several years, and emigrated to Canada in 1828.
Works
Baird is known in Canada for his work on canal and road construction in Upper and Lower Canada, as well as for inventing equipment that made existing locks more accessible to steamships. His skills as a surveyor and engineer were integral to projects such as the Rideau, Trent, and Welland canals. His thorough written accounts give historians a record of early Canadian engineering.
References
External links
1796 births
1849 deaths
Canadian surveyors
British canal engineers
Canadian civil engineers
People from Glasgow
Scottish civil engineers
Scottish emigrants to Canada
Scottish surveyors
19th-century Scottish engineers | Nicol Hugh Baird | [
"Engineering"
] | 152 | [
"Civil engineering",
"Civil engineering stubs"
] |
11,673,883 | https://en.wikipedia.org/wiki/N-Formylmethionine%20%28data%20page%29 |
References
Chemical data pages
Chemical data pages cleanup | N-Formylmethionine (data page) | [
"Chemistry"
] | 10 | [
"Chemical data pages",
"nan"
] |
11,674,244 | https://en.wikipedia.org/wiki/Institute%20for%20Liquid%20Atomization%20and%20Spray%20Systems | The Institute for Liquid Atomization and Spray Systems (ILASS) is an organization of researchers, industrial practitioners and students engaged in professional activities connected with the spraying of liquids and slurries. Annual technical conferences are organized by each of the ILASS organizations: ILASS-Americas, ILASS-Asia, and ILASS-Europe. ILASS-International is a board made up of representatives from the three regional ILASS institutes.
ILASS meetings draw practitioners and researchers from many areas where spray technology is utilized. These include injectors for gas turbines, rockets, and diesels, agricultural and medical sprays, industrial sprays, fire protection, paint and coating applications, liquid combustion, and many others. This breadth of spray applications across the conference and its technical community provides cross-fertilization of research methodologies and innovative efforts. Mechanical, agricultural and chemical engineers all participate in these meetings.
The ILASS (Institute for Liquid Atomization and Spray Systems) organization is an international organization dedicated to the advancement of knowledge and technology in the field of liquid atomization and spray systems.
The ILASS global organization has three branches: ILASS-Europe, ILASS-Americas, and ILASS Asia. ILASS-Europe was the first to be established, and was founded in 1982 as an initiative of the late Prof. Paul Eisenklam. The first Annual General Meeting took place at UMIST in 1983. The Institute’s objectives have remained unchanged since that time: “… to promote the science and applications of liquid atomization and spray systems by means of sponsorship of annual scientific meetings, promotion and preparation of technical papers, and promotion of membership in ILASS Europe among interested and qualified persons…”
Prof. Norman Chigier, one of the leaders of the atomization and spray systems community and of the ILASS organization, founded the journal Atomization and Sprays in 1991, which focuses on publishing peer-reviewed papers on topics central to ILASS.
ILASS organizes conferences and symposia annually in each of its branches, and every three years the three regional ILASS institutes gather in a joint conference, entitled ICLASS (the International Conference on Liquid Atomization and Spray Systems).
Three awards are granted at each conference, honouring Professor Paul Eisenklam, Professor Arthur H. Lefebvre, and Professor Y. Tanasawa.
External links
Official website
Official website of ILASS - Europe
Engineering societies | Institute for Liquid Atomization and Spray Systems | [
"Engineering"
] | 488 | [
"Engineering societies"
] |
11,674,624 | https://en.wikipedia.org/wiki/International%20Conference%20on%20Web%20Services | The International Conference on Web Services (ICWS) is an international forum for researchers and industry practitioners focused on Web services. Since 2018 there have been two ICWS events: one is sponsored by the Services Society and Springer, and the other by the IEEE Computer Society (IEEE ICWS). The IEEE ICWS event has an 'A' rating in the Conference Portal - Core and an 'A' rating in the Excellence in Research for Australia.
Areas of focus
ICWS features research papers with a wide range of topics, focusing on various aspects of IT services. Some of the topics include Web services specifications and enhancements, Web services discovery and integration, Web services security, Web services standards and formalizations, Web services modeling, Web services-oriented software engineering, Web services-oriented software testing, Web services-based applications and solutions, Web services realizations, semantics in Web services, and all aspects of Service-Oriented Architecture (SOA) infrastructure.
History
The International Conference on Web Services was founded by Dr. Liang-Jie Zhang in June 2003, Las Vegas, USA. Meanwhile, the first ICWS-Europe 2003 (ICWS-Europe'03), founded by Dr. Liang-Jie Zhang with Prof. Mario Jeckle, was held in Germany in October 2003. In 2004, ICWS-Europe was changed to the European Conference on Web Services (ECOWS), held in Erfurt, Germany. In 2012, ECOWS was formally merged into ICWS. Since then, the entire Services Computing community combined the efforts and focused on one prime international forum for web-based services: ICWS.
References
External links
International Conference on Web Services
IEEE Computer Society Technical Committee on Services Computing
Computer science conferences
Academic conferences | International Conference on Web Services | [
"Technology"
] | 349 | [
"Computer science",
"Computer science conferences"
] |
11,675,192 | https://en.wikipedia.org/wiki/ISO/IEC%2027002 | ISO/IEC 27002 is an information security standard published by the International Organization for Standardization (ISO) and by the International Electrotechnical Commission (IEC), titled Information security, cybersecurity and privacy protection — Information security controls.
The ISO/IEC 27000 family of standards is descended from a corporate security standard donated by Shell to a UK government initiative in the early 1990s. The Shell standard was developed into British Standard BS 7799 in the mid-1990s, and was adopted as ISO/IEC 17799 in 2000. The ISO/IEC standard was revised in 2005, and renumbered ISO/IEC 27002 in 2007 to align with the other ISO/IEC 27000-series standards. It was revised again in 2013 and in 2022. Later, in 2015, ISO/IEC 27017 was created from that standard to suggest additional security controls for the cloud that were not completely defined in ISO/IEC 27002.
ISO/IEC 27002 provides best practice recommendations on information security controls for use by those responsible for initiating, implementing or maintaining information security management systems (ISMS). Information security is defined within the standard in the context of the CIA triad:
the preservation of confidentiality (ensuring that information is accessible only to those authorized to have access), integrity (safeguarding the accuracy and completeness of information and processing methods) and availability (ensuring that authorized users have access to information and associated assets when required).
Outline
Outline for ISO/IEC 27002:2022
The standard starts with 4 introductory chapters:
Scope
Normative Reference
Terms, definitions, and abbreviated terms
Structure of this document
These are followed by 4 main chapters:
Organizational controls
People controls
Physical controls
Technological controls
Outline for ISO/IEC 27002:2013
The standard starts with 5 introductory chapters:
Introduction
Scope
Normative references
Terms and definitions
Structure of this standard
These are followed by 14 main chapters:
Information Security Policies
Organization of Information Security
Human Resource Security
Asset Management
Access Control
Cryptography
Physical and environmental security
Operations security - procedures and responsibilities, Protection from malware, Backup, Logging and monitoring, Control of operational software, Technical vulnerability management and Information systems audit coordination
Communication security - Network security management and Information transfer
System acquisition, development and maintenance - Security requirements of information systems, Security in development and support processes and Test data
Supplier relationships - Information security in supplier relationships and Supplier service delivery management
Information security incident management - Management of information security incidents and improvements
Information security aspects of business continuity management - Information security continuity and Redundancies
Compliance - Compliance with legal and contractual requirements and Information security reviews
Controls
Within each chapter, information security controls and their objectives are specified and outlined. The information security controls are generally regarded as best practice means of achieving those objectives. For each of the controls, implementation guidance is provided.
Specific controls are not mandated since:
Each organization is expected to undertake a structured information security risk assessment process to determine its specific requirements before selecting controls that are appropriate to its particular circumstances. The introduction section outlines a risk assessment process, although there are more specific standards covering this area, such as ISO/IEC 27005. The use of information security risk analysis to drive the selection and implementation of information security controls is an important feature of the ISO/IEC 27000-series standards: it means that the generic good practice advice in this standard gets tailored to the specific context of each user organization, rather than being applied by rote. Not all of the 39 control objectives are necessarily relevant to every organization, for instance, so entire categories of control may not be deemed necessary. The standards are also open ended in the sense that the information security controls are 'suggested', leaving the door open for users to adopt alternative controls if they wish, just so long as the key control objectives relating to the mitigation of information security risks are satisfied. This helps keep the standard relevant despite the evolving nature of information security threats, vulnerabilities and impacts, and trends in the use of certain information security controls.
It is practically impossible to list all conceivable controls in a general purpose standard. Industry-specific implementation guidelines for ISO/IEC 27001:2013 and ISO/IEC 27002 offer advice tailored to organizations in the telecomms industry (see ISO/IEC 27011) and healthcare (see ISO 27799).
Most organizations implement a wide range of information security-related controls, many of which are recommended in general terms by ISO/IEC 27002. Structuring the information security controls infrastructure in accordance with ISO/IEC 27002 may be advantageous since it:
Is associated with a well-respected international standard
Helps avoid coverage gaps and overlaps
Is likely to be recognized by those who are familiar with the ISO/IEC standard
Implementation example of ISO/IEC 27002
Here are a few examples of typical information security policies and other controls relating to three parts of ISO/IEC 27002. (Note: this is merely an illustration. The list of example controls is incomplete and not universally applicable.)
Physical and Environmental security
Physical access to premises and support infrastructure (communications, power, air conditioning etc.) must be monitored and restricted to prevent, detect and minimize the effects of unauthorized and inappropriate access, tampering, vandalism, criminal damage, theft etc.
The list of people authorized to access secure areas must be reviewed and approved periodically (at least once a year) by Administration or Physical Security Department, and cross-checked by their departmental managers.
Photography or video recording is forbidden inside Restricted Areas without prior permission from the designated authority.
Suitable video surveillance cameras must be located at all entrances and exits to the premises and other strategic points such as Restricted Areas, recorded and stored for at least one month, and monitored around the clock by trained personnel.
Access cards permitting time-limited access to general and/or specific areas may be provided to trainees, vendors, consultants, third parties and other personnel who have been identified, authenticated, and authorized to access those areas.
Other than in public areas such as the reception foyer, and private areas such as rest rooms, visitors should be escorted at all times by an employee while on the premises.
The date and time of entry and departure of visitors along with the purpose of visits must be recorded in a register maintained and controlled by Site Security or Reception.
Everyone on site (employees and visitors) must wear and display their valid, issued pass at all times, and must present their pass for inspection on request by a manager, security guard or concerned employee.
Access control systems must themselves be adequately secured against unauthorized/inappropriate access and other compromises.
Fire/evacuation drills must be conducted periodically (at least once a year).
Smoking is forbidden inside the premises other than in designated Smoking Zones.
Human Resource security
All employees must be screened prior to employment, including identity verification using a passport or similar photo ID and at least two satisfactory professional references. Additional checks are required for employees taking up trusted positions.
All employees must formally accept a binding confidentiality or non-disclosure agreement concerning personal and proprietary information provided to or generated by them in the course of employment.
Human Resources department must inform Administration, Finance and Operations when an employee is taken on, transferred, resigns, is suspended or released on long-term leave, or their employment is terminated.
Upon receiving notification from HR that an employee's status has changed, Administration must update their physical access rights and IT Security Administration must update their logical access rights accordingly.
An employee's manager must ensure that all access cards, keys, IT equipment, storage media and other valuable corporate assets are returned by the employee on or before their last day of employment.
Access control
User access to corporate IT systems, networks, applications and information must be controlled in accordance with access requirements specified by the relevant Information Asset Owners, normally according to the user's role.
Generic or test IDs must not be created or enabled on production systems unless specifically authorized by the relevant Information Asset Owners.
After a predefined number of unsuccessful logon attempts, security log entries and (where appropriate) security alerts must be generated and user accounts must be locked out as required by the relevant Information Asset Owners.
Passwords or pass phrases must be lengthy and complex, consisting of a mix of letters, numerals and special characters that would be difficult to guess (a minimal illustration of such a check appears after this list).
Passwords or pass phrases must not be written down or stored in readable format.
Authentication information such as passwords, security logs, security configurations and so forth must be adequately secured against unauthorized or inappropriate access, modification, corruption or loss.
Privileged access rights typically required to administer, configure, manage, secure and monitor IT systems must be reviewed periodically (at least twice a year) by Information Security and cross-checked by the appropriate departmental managers.
Users must either log off or password-lock their sessions before leaving them unattended.
Password-protected screensavers with an inactivity timeout of no more than 10 minutes must be enabled on all workstations/PCs.
Write access to removable media (USB drives, CD/DVD writers etc.) must be disabled on all desktops unless specifically authorized for legitimate business reasons.
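ISO/IEC 27002 itself is technology-neutral and contains no code. Purely as an illustration of how the access-control points above might be automated, the following sketch enforces a lockout counter and a length-and-complexity rule; the function names and thresholds (five attempts, twelve characters) are hypothetical choices an organization might make, not requirements of the standard.

```python
# Hypothetical illustration only: ISO/IEC 27002 states control objectives,
# not implementations. The thresholds below are example values, not mandated.
import re

MAX_FAILED_ATTEMPTS = 5      # example lockout threshold
MIN_PASSWORD_LENGTH = 12     # example minimum length

failed_attempts = {}         # username -> consecutive failed logons

def password_is_complex(password: str) -> bool:
    """Lengthy and complex: a mix of letters, numerals and special characters."""
    return (len(password) >= MIN_PASSWORD_LENGTH
            and re.search(r"[A-Za-z]", password) is not None
            and re.search(r"[0-9]", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

def record_logon_attempt(user: str, success: bool) -> bool:
    """Record the attempt; return True if the account should now be locked out."""
    if success:
        failed_attempts[user] = 0
        return False
    failed_attempts[user] = failed_attempts.get(user, 0) + 1
    if failed_attempts[user] >= MAX_FAILED_ATTEMPTS:
        print(f"SECURITY ALERT: locking account {user!r}")  # security log entry / alert
        return True
    return False
```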
History
National equivalent standards
ISO/IEC 27002 has directly equivalent national standards in several countries. Translation and local publication often results in several months' delay after the main ISO/IEC standard is revised and released, but the national standard bodies go to great lengths to ensure that the translated content accurately and completely reflects ISO/IEC 27002.
Certification
ISO/IEC 27002 is an advisory standard that is meant to be interpreted and applied to all types and sizes of organization according to the particular information security risks they face. In practice, this flexibility gives users a lot of latitude to adopt the information security controls that make sense to them, but makes it unsuitable for the relatively straightforward compliance testing implicit in most formal certification schemes.
ISO/IEC 27001:2013 (Information technology – Security techniques – Information security management systems – Requirements) is a widely recognized certifiable standard. ISO/IEC 27001 specifies a number of firm requirements for establishing, implementing, maintaining and improving an ISMS, and in Annex A there is a suite of information security controls that organizations are encouraged to adopt where appropriate within their ISMS. The controls in Annex A are derived from and aligned with ISO/IEC 27002.
Ongoing development
Both ISO/IEC 27001 and ISO/IEC 27002 are revised by ISO/IEC JTC1/SC27 every few years in order to keep them current and relevant. Revision involves, for instance, incorporating references to other issued security standards (such as ISO/IEC 27000, ISO/IEC 27004 and ISO/IEC 27005) and various good security practices that have emerged in the field since they were last published. Due to the significant 'installed base' of organizations already using ISO/IEC 27002, particularly in relation to the information security controls supporting an ISMS that complies with ISO/IEC 27001, any changes have to be justified and, wherever possible, evolutionary rather than revolutionary in nature.
See also
BS 7799, the original British Standard from which ISO/IEC 17799 and then ISO/IEC 27002 was derived
ISO/IEC 27000-series
IT baseline protection
IT risk management
List of ISO standards
Sarbanes–Oxley Act
Standard of Good Practice for Information Security published by the Information Security Forum
ISO/IEC JTC 1/SC 27 – IT Security techniques
NIST Cybersecurity Framework
Cyber Risk Quantification
References
External links
The ISO 17799 Newsletter
ISO/IEC 27002:2022
Computer security standards
Information assurance standards
27002 | ISO/IEC 27002 | [
"Technology",
"Engineering"
] | 2,353 | [
"Computer security standards",
"Computer standards",
"Information assurance standards",
"Cybersecurity engineering"
] |
11,675,243 | https://en.wikipedia.org/wiki/Retail%20design | Retail design is a creative and commercial discipline that combines several different areas of expertise together in the design and construction of retail space. Retail design is primarily a specialized practice of architecture and interior design; however, it also incorporates elements of industrial design, graphic design, ergonomics, and advertising.
Retail design is a very specialized discipline due to the heavy demands placed on retail space. Because the primary purpose of retail space is to stock and sell product to consumers, the spaces must be designed in a way that promotes an enjoyable and hassle-free shopping experience for the consumer.
For example, research shows that male and female shoppers who were accidentally touched from behind by other shoppers left a store earlier and evaluated brands more negatively than people who had not been touched. The space must be specially tailored to the kind of product being sold in that space; for example, a bookstore requires many large shelving units to accommodate small products that can be arranged categorically, while a clothing store requires more open space to fully display product.
Retail spaces, especially when they form part of a retail chain, must also be designed to draw people into the space to shop. The storefront must act as a billboard for the store, often employing large display windows that allow shoppers to see into the space and the product inside. In the case of a retail chain, the individual spaces must be unified in their design.
History
Retail design first began to grow in the middle of the 19th century, with stores such as Bon Marché and Printemps in Paris, followed by Marshall Field's in Chicago, Selfridges in London and Macy's in New York. These early stores were soon followed by an innovation: the chain store.
The first known chain department stores were established in Belgium in 1868, when Isidore, Benjamin and Modeste Dewachter incorporated Dewachter frères (Dewachter Brothers) selling ready-to-wear clothing for men and children and specialty clothing such as riding apparel and beachwear. The firm opened with four locations and, by 1904, Maison Dewachter (House of Dewachter) had stores in 20 cities and towns in Belgium and France, with multiple stores in some cities. Isidore's eldest son, Louis Dewachter, managed the chain at its peak and also became an internationally known landscape artist, painting under the pseudonym Louis Dewis.
One of the first retail chain stores in the United States was opened in the late 19th century by Frank Winfield Woolworth, and it quickly grew into a chain across the US. Other chain stores began growing in places like the UK a decade or so later, with stores like Boots. After World War II, a new type of retail building known as the shopping centre came into being. This type of building took two different paths in the US and Europe: shopping centres in the United States were built out of town to serve the suburban family, while Europe placed shopping centres in the middle of town. The first shopping centre in the Netherlands was built in the 1950s, as retail design ideas spread east.
The next evolution of retail design was the creation of the boutique in the 1960s, which emphasized retail design run by individuals. Some of the earliest examples of boutiques are the Biba boutique created by Barbara Hulanicki and the Habitat line of stores made by Terence Conran. The rise of the boutique was followed, in the next two decades, by an overall increase in consumer spending across the developed world. This rise made retail design shift to accommodate more customers and alternative focuses. Many retail stores redesigned themselves over the period to keep up with changing consumer tastes. These changes resulted, on one side, in the creation of multiple "expensive, one-off designer shops" catering to specific fashion designers and retailers.
The rise of the internet and internet retailing in the latter part of the 20th century and into the 21st century saw another change in retail design to compensate. Many different sectors not related to the internet reached out to retail design and its practices to lure online shoppers back to physical shops, where retail design can be properly utilized.
Usage
Role
A retail designer must create a thematic experience for the consumer, using spatial cues to entertain as well as entice the consumer to purchase goods and interact with the space. The success of their designs is measured not by design critics but rather by the records of the store, which compare the amount of foot traffic against overall productivity. Retail designers have an acute awareness that the store and their designs are the background to the merchandise and are only there to create the best possible environment in which to present the merchandise to the target consumer group.
Design elements
Since the evolution of retail design and its impact on productivity have become clear, a series of standardizations in techniques and design qualities has been determined. These standardizations cover the structure of the space, entrances, circulation systems, atmospheric qualities (light and sound) and materiality. By exploring these standardizations in retail design, the consumer is given a thematic experience that entices them to purchase the merchandise. It is also important to acknowledge that a retail space must combine both permanent and non-permanent features, which allow it to change as the needs of the consumer and the merchandise change (e.g. per season).
The structure of retail space creates the constraints of the overall design; often the spaces already exist, and have had many prior uses. It is at this stage that logistics must be determined: structural features like columns, stairways, ceiling height, windows and emergency exits all must be factored into the final design. In retail, one hundred percent of the space must be utilised and have a purpose. The floor plan creates the circulation, which then directly controls the direction of traffic flow based on the studied psychology of consumer movement patterns within a retail space. Circulation is important because it ensures that the consumer moves through the store from front to back, guiding them to important displays and in the end to the cashier. There are six basic store layouts and circulation plans that all provide a different experience:
Straight plan: this plan divides transitional areas from one part of the store to the other by using walls to display merchandise. It also leads the consumer to the back of the store. This design can be used for a variety of stores ranging from pharmacies to apparel.
Pathway plan: is most suitable for large stores that are single level. In this plan there is a path that is unobstructed by shop fixtures, which smoothly guides the consumer through to the back of the store. This is well suited for apparel department stores, as the clothes will be easily accessible.
Diagonal plan: uses a perimeter design which causes angular traffic flow. The cashier is in a central location and easily accessible. This plan is most suited for self-service retail.
Curved plan: aims to create an intimate environment that is inviting. In this plan there is an emphasis on the structure of the space, including the walls, corners and ceiling; this is achieved by making the structure curved and is enhanced by circular floor fixtures. Although this is a more expensive layout, it is more suited to smaller spaces like salons and boutiques.
Varied plan: in this plan attention is drawn to special focus areas, with storage areas lining the wall. This is best suited for footwear and jewellery retail stores.
Geometric plan: uses the racks and the retail floor fixtures to create a geometric floor plan and circulation movement. By lowering parts of the ceiling, certain areas can be made into defined retail spaces. This is well suited for apparel stores.
Once the overall structure and circulation of the space have been determined, the atmosphere and thematics of the space must be created through lighting, sound, materials and visual branding. Together, these design elements have the greatest impact on the consumer and thus on the level of productivity that can be achieved.
Lighting can have a dramatic impact on the space. It needs to be functional but also complement the merchandise as well as emphasize key points throughout the store. The lighting should be layered and of a variety of intensities and fixtures. Firstly, examine the natural light and what impact it has on the space. Natural light adds interest and clarity to the space; consumers also prefer to examine the quality of merchandise in natural light. If no natural light exists, a sky light can be used to introduce it to the retail space. The lighting of the ceiling and roof is the next thing to consider. This lighting should wash the structural features while creating vectors that direct the consumer to key merchandise selling areas. The next layer should emphasize the selling areas. These lights should be direct but not too bright and harsh. Poor lighting can cause eye strain and an uncomfortable experience for the consumer. To minimize the possibility of eye strain, the ratio of luminance should decrease between merchandise selling areas. The next layer will complement and bring focus onto the merchandise; this lighting should be flattering for the merchandise and consumer. The final layer is to install functional lighting such as clear exit signs.
Ambiance can then be developed within the atmosphere through sound and audio. The music played within the store should reflect what the target market would be drawn to, which in turn follows from the merchandise being marketed. In a lingerie store the music should be soft, feminine and romanticized, whereas in a technology department the music would be more upbeat and more masculine.
Materiality is another key selling tool: the choices made must not only be aesthetically pleasing and persuasive but also functional, with a minimal need for maintenance. Retail spaces are high-traffic areas and are thus exposed to a lot of wear, which means that the finishes of the materials should be durable. The warmth of a material will make the space more inviting, and a floor that is firm and somewhat buoyant will be more comfortable for the consumer to walk on, allowing them to take longer when exploring the store. By switching materials throughout the store, zones or areas can be defined; for example, making the path one material and contrasting it against another for the selling areas helps to guide the consumer through the store. Colour is also important to consider: it must not overpower or clash with the merchandise but rather create a complementary background for it. As merchandise will change seasonally, the interior colours should not be trend based but rather have timeless appeal, like neutral-based colours.
Visual branding of the store will give the consumer a memorable experience to take with them once they leave the store, ensuring that they will want to return. The key factor is consistency: exterior branding and signage should continue into the interior, and they should attract, stimulate and dramatise the store. To ensure consistency the typeface should remain the same, with only the font size varying. The interior branding should allow consumers to easily direct themselves through the store, with proper placement of sales signs that draw the consumer in and show exactly where the cashier is located. The branding should reflect what the merchandise is and what the target market would be drawn to.
Perspective
The final element of a well-executed retail space is the staging of the consumer's perspective. It is the role of retail design to have total control of the view that the consumer will have of the retail space. From the exterior of a retail store the consumer should have a clear unobstructed view into the interior.
See also
Architecture
Brand
Branded environments
Brand implementation
Customer engagement
Display case
Display window
Ergonomics
Interior design
Marketing
Merchandising
Planogram
Retail chain
Retailing
Visual merchandising
References
Further reading
Israel, L.J., 1994. "Store Planning/ Design". United States of America: John Wiley & sons, INC.
Lopez, M. J., 2003. "Retail Store Planning And Design Manual". 2nd ed. Cincinnati, Ohio: ST Publications.
Barr, V. and Broudy, C.E., 1990. "Designing to Sell". 2nd ed. New York: McGraw-Hill, INC.
Curtis, E. and Watson, H., 2007. "Fashion Retail". 2nd ed. West Sussex, England: John Wiley and Sons Ltd.
Brand management
Retail processes and techniques
Design history
Interior design | Retail design | [
"Engineering"
] | 2,453 | [
"Design history",
"Design"
] |
11,675,935 | https://en.wikipedia.org/wiki/Drug%20allergy | A drug allergy is an allergy to a drug, most commonly a medication, and is a form of adverse drug reaction. Medical attention should be sought immediately if an allergic reaction is suspected.
An allergic reaction will not occur on the first exposure to a substance. The first exposure allows the body to create antibodies and memory lymphocyte cells for the antigen. However, drugs often contain many different substances, including dyes, which could cause allergic reactions. This can cause an allergic reaction on the first administration of a drug. For example, a person who developed an allergy to a red dye will be allergic to any new drug which contains that red dye.
A drug allergy is different from an intolerance. A drug intolerance, which is often a milder, non-immune-mediated reaction, does not depend on prior exposure.
Signs and symptoms
Symptoms of drug hypersensitivity reactions can be similar to non-allergic adverse effects. Common symptoms include:
Hives
Itching
Rash
Fever
Facial swelling
Shortness of breath due to the short-term constriction of lung airways or longer-term damage to lung tissue
Anaphylaxis, a life-threatening drug reaction (produces most of these symptoms as well as low blood pressure)
Cardiac symptoms such as chest pain, shortness of breath, fatigue, chest palpitations, light headedness, and syncope due to a rare drug-induced reaction, eosinophilic myocarditis
Causes
Some classes of medications have a higher rate of drug reactions than others. These include antiepileptics, antibiotics, antiretrovirals, NSAIDs, and general and local anesthetics.
Risk factors
Risk factors for drug allergies can be attributed to the drug itself or the characteristics of the patient. Drug-specific risk factors include the dose, route of administration, duration of treatment, repetitive exposure to the drug, and concurrent illnesses. Host risk factors include age, sex, atopy, specific genetic polymorphisms, and inherent predisposition to react to multiple unrelated drugs (multiple drug allergy syndrome).
A drug allergy is more likely to develop with large doses and extended exposure.
People with immunological diseases, such as HIV and cystic fibrosis, or infection with EBV, CMV, or HHV6, are more susceptible to drug hypersensitivity reactions. These conditions lower the threshold for T-cell stimulation.
Mechanisms
There are two broad mechanisms for a drug allergy to occur: IgE or non-IgE mediated. In IgE-mediated reactions, also known as immunoglobulin E mediated reactions, drug allergens bind to IgE antibodies, which are attached to mast cells and basophils, resulting in IgE cross-linking, cell activation and release of preformed and newly formed mediators.
Most drugs do not cause reactions by themselves, but rather through the formation of haptens.
Types
Drug allergies or hypersensitivities can be broadly divided into two types: immediate reactions and delayed reactions. Immediate reactions take place within an hour of administration and are IgE mediated, while delayed reactions take place hours to weeks after administration and are T-cell mediated. The first category is mostly mediated through specific IgE, whereas the latter is specifically T-cell mediated.
Management
Management of drug allergy consists principally of avoidance or discontinuation of the causative drug. Treatment is largely supportive and symptomatic. It may consist of topical corticosteroids and oral antihistamines for cutaneous symptoms such as hives and itching. Mild cutaneous reactions can be managed with antihistamines only. However, antihistamines cannot antagonize activated histamine that has already been released from mast cells. In severe cases of drug allergy, systemic corticosteroids may be used. Corticosteroids are limited by a delayed onset of action of greater than 45 minutes, as they act via gene modulation. If anaphylaxis occurs, injectable epinephrine is to be used. If a person is allergic to a drug and no suitable alternative exists, a desensitization procedure with the drug, in which the drug is introduced slowly at very low doses such that tolerance to the drug develops, can be employed.
See also
Adverse drug reaction
Drug reaction with eosinophilia and systemic symptoms
Drug intolerance
Drug tolerance
References
External links
Allergology
Drug safety | Drug allergy | [
"Chemistry"
] | 927 | [
"Drug safety"
] |
11,676,290 | https://en.wikipedia.org/wiki/Trellis%20quantization | Trellis quantization is an algorithm that can improve data compression in DCT-based encoding methods. It is used to optimize residual DCT coefficients after motion estimation in lossy video compression encoders such as Xvid and x264. Trellis quantization reduces the size of some DCT coefficients while recovering others to take their place. This process can increase quality because the coefficient values chosen by the trellis search minimize the combined rate-distortion cost. Trellis quantization effectively finds the optimal quantization for each block to maximize the PSNR relative to the bitrate. It has varying effectiveness depending on the input data and compression method.
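As a rough illustration of the idea (not the actual x264 or Xvid implementation, which searches a full trellis of candidate levels and uses the codec's real entropy-coding cost), the sketch below picks between two candidate levels per DCT coefficient (conventional rounding and truncation toward zero), keeping whichever minimizes a combined distortion-plus-rate cost. The function name, the lambda weight and the toy bit-cost model are assumptions made for the example.

```python
# Minimal rate-distortion quantization sketch (illustrative, not x264's algorithm).
def rd_quantize(coeffs, qstep, lam=0.85):
    """Pick a quantized level for each coefficient by rate-distortion cost."""
    levels = []
    for c in coeffs:
        q = c / qstep
        candidates = {int(round(q)),  # conventional rounding
                      int(q)}         # truncation toward zero (smaller magnitude)
        best_level, best_cost = 0, float("inf")
        for level in candidates:
            recon = level * qstep                        # dequantized value
            dist = (c - recon) ** 2                      # distortion: squared error
            rate = 0 if level == 0 else abs(level).bit_length() + 1  # toy bit-cost model
            cost = dist + lam * qstep * qstep * rate     # combined RD cost
            if cost < best_cost:
                best_cost, best_level = cost, level
        levels.append(best_level)
    return levels

# Small coefficients whose bits would cost more than the distortion they remove
# are zeroed or reduced, while significant coefficients keep their rounded level:
print(rd_quantize([53.0, 12.6, -7.4, 3.1, 0.8], qstep=8.0))  # -> [7, 1, 0, 0, 0]
```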
References
VirtualDub/Xvid guide mentioning Trellis quantization
FFMPEGx option documentation
Trellis explanation and pseudocode by the x264-author
MPEG
Data compression
Video compression | Trellis quantization | [
"Technology"
] | 171 | [
"MPEG",
"Computer science stubs",
"Computer science",
"Multimedia",
"Computing stubs"
] |
11,676,992 | https://en.wikipedia.org/wiki/Japanese%20tissue | Japanese tissue is a thin, strong paper made from vegetable fibers. Japanese tissue may be made from one of three plants, the kōzo plant (Broussonetia papyrifera, paper mulberry tree), the mitsumata (Edgeworthia chrysantha) shrub and the gampi tree (Diplomorpha sikokiana). The long, strong fibers of the kōzo plant produce very strong, dimensionally stable papers, and are the most commonly used fibers in the making of Japanese paper (washi). Tissue made from kōzo, or kōzogami (楮紙), comes in varying thicknesses and colors, and is an ideal paper to use in the mending of books. The majority of mending tissues are made from kōzo fibers, though mitsumata and gampi papers also are used. Japanese tissue is also an ideal material for kites and the covering of airplane models.
Forms
The kōzo plant is used in the manufacture of the following papers:
The gampi plant is used in the manufacture of the following papers:
The mitsumata plant is used in the manufacture of the following papers:
Manufacture
Japanese tissue paper is a handmade paper. The inner bark of the kōzo plant is harvested in the fall and spring, with material from the fall harvest being considered better quality. Bundles of kōzo sticks are steamed in a cauldron, then stripped of their bark and hung in the sun to dry. At this stage in the process, it is known as kuro-kawa, or black bark.
To make paper, the black bark must be converted into white bark. The stored black bark is soaked and then scraped by hand with a knife to remove the black outer coat. It is then washed in water and again placed in the sun to dry.
White bark is boiled with lye for about an hour, then left to steam for several more hours. At this point, it is rinsed with clear water to remove the lye. Then, it is bleached in a stream: the fibers are placed in a stream bed around which a dam is built, and clean water is let in periodically to wash the fibers. Alternatively, the fibers may be bleached using a process called small bleaching (ko-arai). In this case, they are first placed on boards and beaten with rods before being placed in a cloth bag and rinsed in clear running water.
Impurities are removed after bleaching though a process known as chiri-tori. Any remaining pieces of bark, hard fibers or other impurities are picked out by hand or, in the case of very small pieces, by the use of pins. The remaining material is rolled into little balls and the balls are then beaten to crush the fibers.
After being beaten, it is common for the kōzo fibers to be mixed with neri, which is a mucilaginous material made from the roots of the tororo-aoi plant. The neri makes the fibers float uniformly on water and also helps to "...slow the speed of drainage so that a better-formed sheet of paper will result." (Narita, p. 45)
A solution of 30 percent pulp and 70 percent water is then mixed together in a vat. Neri may also be added to the vat. Nagashi-zuki, the most common technique for making sheets of paper, is then employed. The mixture is scooped on a screen and allowed to flow back and forth across the screen to interlock the fibers. This process is ideal for forming thin sheets of paper. The other technique for making paper, tame-zuki, does not use neri and forms thicker sheets of paper.
The sheet of paper is placed on a wooden board and dried overnight, then pressed the next day to remove water. After pressing, the sheets are put on a drying board and brushed to smooth them. They are dried in the sun, then removed from the drying board and trimmed.
Uses
Japanese tissue is used in the conservation of books and manuscripts. The tissue comes in varying thicknesses and colors, and is used for a variety of mending tasks, including repairing tears, mending book hinges, and reinforcing the folds of signatures or for reinforcement of an entire sheet through backing. The mender will select a piece of Japanese tissue that closely matches the color of the paper being mended, and chooses a thickness (weight) suitable to the job at hand.
Mending tears
First, Japanese tissue in a color close to that of the paper to be mended is chosen. The tear is aligned and paste may be used on any overlapping surfaces in the tear to help hold it together during the mending process.
A strip of tissue is torn away from the main sheet using a water tear. This is done by wetting the paper along the area to be torn and then pulling sideways with the fingers to separate the strip from the rest of the sheet of tissue, so that it will have feathered edges. The fibers in these feathered edges will allow the tissue to have a firmer hold on the mended paper and also to blend in with it once dried.
Paste is applied to one side of the tissue strip, from the center outward. The tissue is then placed, paste side down, on the tear, leaving a little bit of the mending tissue hanging over the edge. This bit will be trimmed off after the mend dries. A dry brush is used to smooth the tissue over the tear, again from the center outward. The mended page is placed between layers of PET film or glass board, blotting paper, and Reemay (a "spunbonded polyester" cloth) to keep the paste from sticking to the blotting paper, and then lightly weighted and left to dry.
Mending book hinges
This is another task in which Japanese tissue is often used. In some cases, the first step may be to tip in (that is, add with a thin strip of adhesive) a flyleaf to become the base for the attachment of the hinge mend, if the original flyleaf is not well attached. A small support the height of the spine should be placed to eliminate stress on the hinge.
Japanese tissue should be water torn in the same process as described above, in a width and length sufficient to cover the hinge of the book with about 3/8 inch extension over the sides. Paste should be brushed on to the tissue, from the center outward, transferred to the hinge and then brushed down with a dry brush.
A sheet of PET film is placed to prevent the hinge from sticking together and it is weighted until it dries.
Reattaching signatures
In the case where an entire signature (a folded sheet of paper forming several pages, or leaves, of a book) has come out, it may be reinserted by being sewn first onto a strip of Japanese paper, and then by pasting into the book along the newly formed hinge between the Japanese paper and original signature.
Kite making
Washi paper, along with bamboo sticks and silk, is among the most important materials for building kites. The use of this material dates back centuries in Eastern cultures.
Aeromodelling
Washi paper has been used for covering the frame and wings of airplane models since the beginning of the 19th century. It is used especially on small models for its strength and light weight. The vast majority of the washi paper used is either abaca or wood pulp. Abaca is vastly superior to wood pulp papers in overall strength. Gampi and mitsumata can be hit or miss with wet strength. Even with abaca, if a wet-strengthening agent is not added to the fiber, the paper can almost melt in water.
Notes
See also
Preservation (library and archival science)
Japanese paper
Aburatorigami
Paper mulberry
References
Ballofet, Nelly and Jenny Hille. Preservation and Conservation for Libraries and Archives. Chicago: American Library Association. 2005.
Conservation in the Library. Ed. by Susan Garretson Swartzburg. Westport, CT: Greenwood Press. 1983.
DePew, John N. with C. Lee Jones. A Library, Media and Archival Preservation Glossary. Santa Barbara, California: ABC-CLIO. 1992.
DePew, John N. A Library, Media and Archival Preservation Handbook. Santa Barbara, California: ABC-CLIO. 1991.
The E. Lingle Craig Preservation Laboratory Repair and Enclosure Treatment Manual. Images and text by Garry Harrison, web design by Jacob Nadal.
Turner, Silvie. The Book of Fine Paper. p. 82-101. New York, NY: Thames and Hudson, Inc. 1998.
External links
The E. Lingle Craig Preservation Laboratory Repair and Enclosure Treatment Manual
Japanese paper
Conservation and restoration materials | Japanese tissue | [
"Physics"
] | 1,803 | [
"Materials",
"Matter",
"Conservation and restoration materials"
] |
11,677,095 | https://en.wikipedia.org/wiki/GAB2 | GRB2-associated-binding protein 2 also known as GAB2 is a protein that in humans is encoded by the GAB2 gene.
GAB2 is a docking protein with a conserved, folded PH domain attached to the membrane and a large disordered region, which hosts interactions with signaling molecules. It is a member of the GAB/DOS family localized on the internal membrane of the cell. It mediates the interaction between receptor tyrosine kinases (RTKs) and non-RTK receptors serving as the gateway into the cell for activation of SHP2, Phosphatidylinositol 3-kinase (PI3K), Grb2, ERK, and AKT and acting as one of the first steps in these signaling pathways. GAB2 has been shown to be important in physiological functions such as growth in bone marrow and cardiac function. GAB2 has also been associated with many diseases including leukemia and Alzheimer's disease.
Discovery
GAB proteins were one of the first docking proteins identified in the mammalian signal transduction pathway. GAB2 along with many other adaptor, scaffold, and docking proteins, was discovered in the mid-1990s during the isolation and cloning of protein tyrosine kinase substrates and association partners. GAB2 was initially discovered as a binding protein and substrate of protein tyrosine phosphatase Shp2/PTPN11. Two other groups later cloned GAB2 by searching DNA database for protein with sequence homology to GAB1.
Structure
GAB2 is a large multi-site docking protein (LMD) of about 100kD that has a folded N-terminal domain attached to an extended, disordered C-terminal tail rich in short linear motifs. LMDs are docking proteins that function as platforms mediating interaction between different signaling pathways and assisting with signal integration. The N-terminal is characterized by a Pleckstrin Homology (PH) domain that is the most highly conserved region between all members of the GAB family of proteins. (GAB1, GAB2, GAB3 and GAB4) GAB2 is an Intrinsically disordered protein, meaning that beyond the folded N-terminal region, the C-terminal region extends out into the cytoplasm with little or no secondary structure. The disordered region of the protein however may not be as disordered as was initially expected, as sequencing has revealed significant similarity between the "disordered" regions of GAB orthologs in different species.
The PH domain of GAB2 recognizes phosphatidylinositol 3,4,5-triphosphate(PIP3) in the membrane and is responsible for localizing the GAB protein on the intracellular surface of the membrane and in regions where the cell contacts another cell. Some evidence also suggests that the PH domain plays a role in some signal regulation as well.
Adjacent to the PH domain is a central, proline-rich domain that contains many PXXP motifs for binding to the SH3 domains of signaling molecules such as Grb2 (from which the name "Grb2-associated binding" protein, GAB, comes). It is hypothesized that binding sites in this region may be used in indirect mechanisms pairing the GAB2 protein to receptor tyrosine kinases. It is on the C-terminal tail that the various conserved protein binding motifs and phosphorylation sites of GAB2 are found. GAB2 binds to the SH2 domains of such signaling molecules as SHP2 and PI3K. By binding to the p85 subunit of PI3K, and continuing this signaling pathway GAB provides positive feedback for the creation of PIP3, produced as a result of the PI3K pathway, which binds to GAB2 in the membrane and promotes activation of more PI3Ks. Discovery of multiple binding sites in GAB proteins has led to the N-terminal folding nucleation (NFN) hypothesis for the structure of the disordered region. This theory suggests that the disordered domain is looped back to connect to the N-terminal, structured region several times to make the protein more compact. This would assist in promoting interactions between molecules bound to GAB and resisting degradation.
Function
GAB2 mediates the interactions between receptor tyrosine kinases (RTK) or non-RTK receptors, such as G protein coupled receptors, cytokine receptors, multichain immune recognition receptors and integrins, and the molecules of the intracellular signaling pathways. By providing a platform to host a wide array of interactions from extracellular inputs to intracellular pathways, GAB proteins can act as a gatekeeper to the cell, modulating and integrating signals as they pass them along, to control the functional state within the cell.
Mutagenesis and binding assays have helped to identify which molecules and pathways are downstream of GAB2. The two main pathways of GAB proteins are SHP2 and PI3K. GAB protein binding to SHP2 molecules acts as an activator whose main effect is the activation of the ERK/MAPK pathway. There are also, however, other pathways activated by this interaction, such as c-Kit-induced Rac activation and β1-integrin signaling. PI3K activation by GAB2 promotes cell growth.
The effects of all the pathways activated by GAB proteins are not known, but it is easy to see that signal amplification can progress quickly and that these proteins can have large effects on the state of the cell. While the deletion is not lethal, GAB2-deficient knockout mice do exhibit phenotypic side-effects. These include weak allergic reactions, reduced mast cell growth in bone marrow and osteopetrosis. Knockout mice have also been used to show the importance of GAB2 in the maintenance of cardiac function. A paracrine factor, NRG1 β, utilizes GAB2 to activate the ERK and AKT pathways in the heart to produce angiopoietin 1.
Interactions
The C-terminal tail of GAB2 acts as a site for multiple phosphorylation of tyrosine kinases. It acts as a docking station for the Src homology 2(SH2) domain that is contained in the adaptor protein families Crk, Grb2, and Nck. These adaptor proteins then couple to enzymes to amplify different cellular signals. GAB2 may also bind directly to SH2-containing enzymes, such as PI3K, to produce such signals.
GAB2 has been shown to interact with:
AKT1
Through the PI3K signaling pathway, PI3K activates the serine/threonine protein kinase (AKT), which in turn through phosphorylation inactivates GSK3. This in turn causes the phosphorylation of tau and amyloid production.
CRKL
CT10 regulator of kinase (Crk) is also known as the breast cancer anti-oestrogen resistance protein. It plays a role in both fibroblast formation and breast cancer. The YXXP binding motif is required for the association of CRKL and GAB2. This leads to the activation of c-Jun N-terminal kinase(JNK) as part of the JNK signaling pathway.
Grb2
Upon stimulation by growth hormone, insulin, epidermal growth factor (EGF), etc., the GAB2 protein can be recruited from the cytoplasm to the cell membrane, where it forms a complex with Grb2 and SHC. The interaction between GAB2 and Grb2 requires a PX3RX2KP motif in order to produce a regulatory signal. The activated GAB2 can now recruit SH2 domain-containing molecules, such as SHP2 or PI3K, to activate signaling pathways.
PI3K
The p85 subunit of PI3K (or PIK3) possesses the SH2 domain required for activation by GAB2. Activation of the PI3K signaling pathway leads to increased amyloid production and microglia-mediated inflammation. The immunoglobulin receptor FcεRI requires GAB2 in order for mast cells to activate PI3K and mount an allergic response. In a study of knockout mice lacking the GAB2 gene, subjects experienced impaired allergic reactions, including passive cutaneous and systemic anaphylaxis. PI3K is found to be mutated in most breast cancer subtypes. Sufficient GAB2 expression by these cancerous subtypes proves necessary in order to sustain a cancerous phenotype.
PLCG2
The erythropoietin hormone (Epo) is responsible for the regulation and proliferation of erythrocytes. Epo is able to self phosphorylate, which causes recruitment of SH2 proteins. An activated complex of GAB2, SHC, and SHP2 is required for binding of Phospholipase C gamma 2 (PLCG2) through its SH2 domain, which activates PIP3.
PTPN11
Protein tyrosine phosphatase non-receptor 11 (PTPN11) interaction with GAB2 is part of the Ras pathway. Mutations found in PTPN11 cause disruption in the binding to GAB2, which in turn impairs correct cellular growth. Thirty-five percent of patients diagnosed with JMML show activating mutations in PTPN11.
RICS
GC-GAP is part of the Rho GTPase-activating protein family (RICS). It contains highly proline-rich motifs that allow favorable interactions with GAB2. GC-GAP is responsible for the proliferation of astroglioma cells.
SHC1
The interaction between GAB2 and Grb2 at the cell membrane recruits another adaptor protein, the Src homology domain-containing transforming protein 1 (SHC1), before being able to recruit SH2 domain-containing molecules.
Clinical Implications
Alzheimer's disease
Ten SNPs of GAB2 have been associated with late-onset Alzheimer's disease (LOAD). However, this association is found only in APOE ε4 carriers. In LOAD brains, GAB2 is overexpressed in neurons, tangle-bearing neurons, and dystrophic neurites.
GAB2 has been indicated in playing a role in the pathogenesis of Alzheimer's disease via its interaction with tau and amyloid precursor proteins. GAB2 may prevent neuronal tangle formation characteristic of LOAD by reducing phosphorylation of tau protein via the activation of the PI3K signaling pathway, which activates Akt. Akt inactivates Gsk3, which is responsible for tau phosphorylation. Mutations in GAB2 could affect Gsk3-dependent phosphorylation of tau and the formation of neurofibrillary tangles. Interactions between GAB2-Grb2 and APP are enhanced in AD brains, suggesting an involvement of this coupling in the neuropathogenesis of AD.
Cancer
GAB2 has been linked to the oncogenesis of many cancers including colon, gastric, breast, and ovarian cancer. Studies suggest that GAB2 is used to amplify the signal of many RTKs implicated in breast cancer development and progression.
GAB2 has been particularly characterized for its role in leukemia. In chronic myelogenous leukemia (CML), GAB2 interacts with the Bcr-Abl complex and is instrumental in maintaining the oncogenic properties of the complex. The Grb2/GAB2 complex is recruited to phosphorylated Y177 of the Bcr-Abl complex leading to Bcr-Abl-mediated transformation and leukemogenesis. GAB2 also plays a role in juvenile myelomonocytic leukemia (JMML). Studies have shown the protein's involvement in the disease via the Ras pathway. In addition, GAB2 appears to play an important role in PTPN11 mutations associated with JMML.
References
Further reading
External links
Scientists find new dementia gene – BBC News, 9 June 2007.
Molecular neuroscience
Alzheimer's disease
Proteins | GAB2 | [
"Chemistry"
] | 2,544 | [
"Molecular neuroscience",
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
11,677,312 | https://en.wikipedia.org/wiki/34%20equal%20temperament | In musical theory, 34 equal temperament, also referred to as 34-TET, 34-EDO or 34-ET, is the tempered tuning derived by dividing the octave into 34 equal-sized steps (equal frequency ratios). Each step represents a frequency ratio of 2^(1/34), or 35.29 cents.
History and use
Unlike divisions of the octave into 19, 31 or 53 steps, which can be considered as being derived from ancient Greek intervals (the greater and lesser diesis and the syntonic comma), division into 34 steps did not arise 'naturally' out of older music theory, although Cyriakus Schneegass proposed a meantone system with 34 divisions based in effect on half a chromatic semitone (the difference between a major third and a minor third, 25:24 or 70.67 cents). Wider interest in the tuning was not seen until modern times, when the computer made possible a systematic search of all possible equal temperaments. While Barbour discusses it, the first recognition of its potential importance appears to be in an article published in 1979 by the Dutch theorist Dirk de Klerk. The luthier Larry Hanson had an electric guitar refretted from 12 to 34 and persuaded American guitarist Neil Haverstick to take it up.
As compared with 31-et, 34-et reduces the combined mistuning from the theoretically ideal just thirds, fifths and sixths from 11.9 to 7.9 cents. Its fifths and sixths are markedly better, and its thirds only slightly further from the theoretical ideal of the 5:4 ratio. Viewed in light of Western diatonic theory, the three extra steps (of 34-et compared to 31-et) in effect widen the intervals between C and D, F and G, and A and B, thus making a distinction between major tones, ratio 9:8, and minor tones, ratio 10:9. This can be regarded either as a resource or as a problem, making modulation in the contemporary Western sense more complex. As the number of divisions of the octave is even, the exact halving of the octave (600 cents) appears, as in 12-et. Unlike 31-et, 34-et does not give an approximation to the harmonic seventh, ratio 7:4.
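The arithmetic behind these comparisons is easy to check. The following short Python sketch (an illustration added to this text, not part of the original article; the set of just ratios compared is an arbitrary choice) computes the size of a 34-et step in cents and the error of the nearest 34-et approximation to a few just intervals:

```python
import math

EDO = 34
STEP_CENTS = 1200 / EDO  # one step of 34-et, about 35.29 cents

def cents(ratio: float) -> float:
    """Size of a frequency ratio in cents."""
    return 1200 * math.log2(ratio)

# A few just ratios discussed above: fifth, thirds/tones, harmonic seventh.
just_intervals = {"3:2": 3 / 2, "5:4": 5 / 4, "9:8": 9 / 8, "10:9": 10 / 9, "7:4": 7 / 4}

for name, ratio in just_intervals.items():
    target = cents(ratio)
    steps = round(target / STEP_CENTS)
    error = steps * STEP_CENTS - target
    print(f"{name}: {target:7.2f} cents ~ {steps} steps = {steps * STEP_CENTS:7.2f} cents, "
          f"error {error:+6.2f}")
```

Running it shows, for example, that the fifth (3:2) lands about 4 cents sharp at 20 steps, while the nearest approximation to 7:4 is more than 15 cents away, consistent with the remarks above.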
Interval size
The following table outlines some of the intervals of this tuning system and their match to various ratios in the harmonic series.
Scale diagram
The following are 15 of the 34 notes in the scale:
The remaining notes can easily be added.
References
J. Murray Barbour, Tuning and Temperament, Michigan State College Press, 1951.
External links
Dirk de Klerk. "Equal Temperament", Acta Musicologica, Vol. 51, Fasc. 1 (Jan. - Jun., 1979), pp. 140-150.
Stickman: Neil Haverstick - Neil Haverstick is a composer and guitarist who uses microtonal tunings, especially 19, 31 and 34 tone equal temperament.
Equal temperaments
Microtonality | 34 equal temperament | [
"Physics"
] | 611 | [
"Physical quantities",
"Musical symmetry",
"Logarithmic scales of measurement",
"Equal temperaments",
"Symmetry"
] |
11,677,633 | https://en.wikipedia.org/wiki/Withy | A withy or withe (also willow and osier) is a strong flexible willow stem, typically used in thatching, basketmaking, gardening and for constructing woven wattle hurdles. The term is also used to refer to any type of flexible rod of natural wood used in rural crafts such as hazel or ash created through coppicing or pollarding.
Several species and hybrid cultivars of willows (often known as osiers) are grown for withy production; typical species include Salix acutifolia, Salix daphnoides, Salix × mollissima, Salix purpurea, Salix triandra, and Salix viminalis.
Places such as Wythenshawe and Withy Grove (both in Manchester) take their names from the willow woods and groves that grew there in earlier times. The Somerset Levels remain the only area in the UK growing basket willow commercially.
Use in water navigation
Withies were used to mark minor tidal channels in UK harbours and estuaries. In many places they remain in use and are often marked on navigation charts. At high tide the tops of a line of withies stuck in the mud on one or both sides of a channel will show above water to indicate where the deeper water lies. Note the images of international navigation-chart symbols for withies (port and starboard).
See also
"The Bitter Withy", a folk song
Coppicing
Fascine
Widmore, London, a suburb named for the withy
Willow Man, a sculpture in England
References
External links
Willows in the farming landscape: a forgotten eco-cultural icon (2022)
Salix
Building materials
Natural materials | Withy | [
"Physics",
"Engineering"
] | 342 | [
"Natural materials",
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
11,677,929 | https://en.wikipedia.org/wiki/IcedTea | IcedTea is a build and integration project for OpenJDK launched by Red Hat in June 2007. IcedTea also includes some addon libraries: IcedTea-Web is a free software implementation of Java Web Start and the Java web browser applet plugin. IcedTea-Sound is a collection of plugins for the Java sound subsystem, including the PulseAudio provider which used to be included with IcedTea. The Free Software Foundation recommends that all Java programmers use IcedTea as their development environment.
Historically, the initial goal of the IcedTea project was to make the OpenJDK software, which Sun Microsystems released as free software in 2007, usable without requiring any proprietary software, and hence make it possible to add OpenJDK to Fedora and other Linux distributions that insist on free software. This goal was met, and a version of IcedTea based on OpenJDK was packaged with Fedora 8 in November 2007. April 2008 saw the first release of a new variant, IcedTea6, which is based on Sun's build drops of OpenJDK6, a fork of the OpenJDK with the goal of being compatible with the existing JDK6. This was released in Ubuntu and Fedora in May 2008. The IcedTea package in these distributions has been renamed to OpenJDK using the OpenJDK trademark notice. In June 2008, the Fedora build passed Sun's rigorous TCK testing on x86 and x86-64. IcedTea 2, the first version based on OpenJDK 7, was released in October 2011. IcedTea 3, the first version based on OpenJDK 8, was released in April 2016. Support for IcedTea 1 was dropped in January 2017.
History
This project was created following Sun's release under open source licenses of its HotSpot Virtual Machine and Java compiler in November 2006, and most of the source code of the class library in May 2007. However, parts of the class library, such as font rendering, colour management and sound support, were only provided as proprietary binary plugins. This was because the source code for these plugins was copyrighted to third parties, rather than Sun Microsystems. The released parts were published under the terms of the GNU General Public License, a free software license.
Due to these missing components, it was not possible to build OpenJDK only with free software components. Sun aimed to negotiate with the license holders to allow this code to be released under a free software license, or failing that, to replace these proprietary elements with alternative implementations. With the plugins replaced, the class library would then be completely free. Sun has continued to use the proprietary code in their certified binary releases.
Following the announcement, the IcedTea project was started and was formally announced on June 7, 2007, with a build repository provided by the GNU Classpath team. The team could not call their software product "OpenJDK" because this is a trademark which was owned by Sun Microsystems. They instead decided to use the temporary name "IcedTea".
On November 5, 2007, Red Hat signed both the Sun Contributor Agreement and the OpenJDK Community Technology Compatibility Kit (TCK) License. The press release suggested that this would benefit the IcedTea project. Simon Phipps suggested the possibility of IcedTea being hosted on openjdk.java.net, and Mark Reinhold noted that signing the copyright assignment could allow Red Hat to contribute parts of IcedTea to Sun for inclusion in the mainstream JDK.
Since then, a number of patches from IcedTea have made their way into OpenJDK.
In June 2008, it was announced that IcedTea6 (as the packaged version of OpenJDK on Fedora 9) has passed the (TCK) tests and can claim to be a fully compatible Java 6 implementation. The project continues to track OpenJDK 6, OpenJDK 7 and OpenJDK 8 development in separate repositories, and contribute patches back upstream where possible; the current state of each IcedTea patch is maintained on the IcedTea wiki.
Aims
The IcedTea project started with two aims:
to make it possible for the GNU Compiler for Java to compile the OpenJDK code. OpenJDK presented a bootstrapping question of itself being written in Java. Hence, developers needed an already-working Java compiler and runtime in order to build OpenJDK. Originally, only the existing proprietary Sun JDK met that requirement. Free distributions like Fedora can't depend on proprietary tools in order to build packages, so the IcedTea project had to make it possible to compile the code using free software. When this was done, the resulting IcedTea version of OpenJDK could be used to compile itself, thus escaping the need to use non-Free software for future compiling.
to provide free equivalents of the binary plugins that existed in OpenJDK because Sun was unable to release all the source code. As of March 2008, this is no longer necessary for IcedTea6, as the OpenJDK 6 build drops can be built with no binary plugins. With the release of b10, which replaces the proprietary sound support with that from the Gervill project, a full implementation of Java 1.6 can be built without binary plugins. The only remaining binary plug is for SNMP support, which is an optional provider for the JMX architecture and not part of the specification. As of b53 in April 2009, the same is true for OpenJDK 7. Outside the core of OpenJDK, binary plugins are still required for utilizing Java Web Start applets that run using the browser plugin (distinct from the core plugins discussed earlier); as of 2013, the only source code available that accomplishes this goal is the IcedTea-Web project.
Other benefits
IcedTea also provides a more familiar build system by providing a wrapper around the OpenJDK makefiles using the GNU Autotools. This removes the need to remember numerous environment variables for configuring the build. (The current IcedTea builds set roughly forty such variables for the underlying OpenJDK build.) It has also provided a place for early work on features which will eventually appear in the main OpenJDK builds such as Gervill and for work on ports to other platforms.
IcedTea-Web
IcedTea-web provides a free-software Java Web browser plugin. It was the first to work in 64-bit browsers under 64-bit Linux, a feature Sun's proprietary JRE later addressed. This makes it suitable to enable support for Java applets in 64-bit Mozilla Firefox, among others. IcedTea-web also provides a free Java Web Start (Java Network Launching Protocol (JNLP)) implementation. Sun had promised to release their plugin and Web Start implementation as part of OpenJDK. Despite pressure from the community, Sun Microsystems did not succeed in doing so before the company was acquired by Oracle. Development on the IcedTea-web plugin continues, with the latest version of the next-generation plugin supporting Google's Chromium in addition to Firefox.
Since 2011, development takes place in the separate IcedTea-Web project. As of April 2013, Oracle has kept the codebase of the Java plugin fully proprietary, in contrast to the remainder of OpenJDK. As of December, 2017, IcedTea-Web 1.7.1 adds support for jdk9.
As of October 2018, Oracle has announced that public Java Web Start support will end with Java SE 11. In March the icedtea-web source code was donated to the AdoptOpenJDK project. Based on this the sources and issue management of IcedTea-Web were migrated to GitHub.
One goal of the migration is to provide an integration for the Java 8 releases of AdoptOpenJDK, and to provide JDK-vendor-independent installers for IcedTea-Web. The integration project is a cooperation between the AdoptOpenJDK community, Red Hat, and Karakun AG. The project for the installers is named OpenWebStart.
Progress and availability
From June 2007, IcedTea was able to build itself and pass a significant portion of Mauve, the GNU Classpath test suite. In May 2008, support was added to IcedTea for running the Sun jtreg regression tests.
IcedTea has become popular among package maintainers for the following Linux distributions.
Currently (as of April 2012):
IcedTea is the default JVM in Ark Linux and Arch Linux.
It can be built and run under Debian. Packages entered unstable on 12 July 2008. As of May 2022, packages icedtea-netx and icedtea-netx-common are available in official Debian repositories for at least Debian 9 through 12.
IcedTea[7] was available in Fedora 8 and IcedTea6 appeared in Fedora 9 through to 17 as java-1.6.0-openjdk. A java-1.7.0-openjdk package using the IcedTea 2.x OpenJDK forest, but not its build system, first appeared in Fedora 16.
Binary and source packages for IcedTea 3.x are available in Gentoo's official repository. A source package for IcedTea 2.x continues to be maintained in the Java overlay repository. Installing a Java application by default pulls in IcedTea instead of oracle-jdk because it can be installed without extra work from the user, as users have to manually agree to Oracle's EULA to download the oracle-jdk.
IcedTea is available in Ubuntu 7.10 (Gutsy Gibbon), from the "universe" repository, and IcedTea6 in 8.04 (Hardy Heron). Starting with Ubuntu 11.04 only IcedTea is available.
Architecture
On its release in May 2007, OpenJDK contained approximately 4% encumbered code, which was only packaged as binary plugins. These were required to build and use the JDK. OpenJDK 6 was released with only 1% encumbered code, and the encumbered sound support has also since been replaced. IcedTea6 is based on this release. IcedTea still provides its own web browser plugin and Web Start support, as Sun's implementation remains proprietary.
IcedTea 1.x and 2.x can compile OpenJDK using GNU Classpath-based solutions such as GCJ and optionally bootstraps itself using the HotSpot Java Virtual Machine and the javac Java compiler it just built. For now, building IcedTea 3.x requires using IcedTea 2.x or 3.x, or an OpenJDK 7 or 8 build from another source.
Platform support
Cross-architecture ports of HotSpot (OpenJDK's Virtual Machine) are difficult, because the code contains much assembly language, in addition to the C++ core. The IcedTea project has developed a generic port of the HotSpot interpreter called zero-assembler Hotspot (or zero), with almost no assembly code. This port is intended to allow the interpreter part of HotSpot to be very easily adapted to any Linux processor architecture. The code of zero-assembler Hotspot was used for all the non-x86 ports of HotSpot (PPC, IA-64, S390 and ARM) from version 1.6 of IcedTea7.
The IcedTea project has also developed a platform-independent just-in-time compiler called Shark for HotSpot, using LLVM, to complement Zero. This was included in upstream OpenJDK in August 2010. A JIT for ARM32 was first included in 1.6.0 and 2.1.1. A native port to AArch64 from Red Hat appeared in 2.4.6 and a native PPC64 port from SAP/IBM will be included in 2.5.0. The PPC/AIX port is included upstream in OpenJDK from version 8u20, and the AArch64 port will be included from version 9.
See also
Free Java implementations
Apache Harmony
Gnash (software)
FlashPaper
References
External links
IcedTea announcement
Classpath mailing list announcement
Thomas Fitzsimmons (Red Hat developer) blog entry announcing IcedTea
Guide to porting IcedTea
OpenJDK and IcedTea, A view from the Fedora side
Zero and Shark: a Zero-Assembly Port of OpenJDK
Java platform
Java virtual machine | IcedTea | [
"Technology"
] | 2,628 | [
"Computing platforms",
"Java platform"
] |
11,678,343 | https://en.wikipedia.org/wiki/UMTS%20frequency%20bands | The UMTS frequency bands are radio frequencies used by third generation (3G) wireless Universal Mobile Telecommunications System networks. They were allocated by delegates to the World Administrative Radio Conference (WARC-92) held in Málaga-Torremolinos, Spain between 3 February 1992 and 3 March 1992. Resolution 212 (Rev.WRC-97), adopted at the World Radiocommunication Conference held in Geneva, Switzerland in 1997, endorsed the bands specifically for the International Mobile Telecommunications-2000 (IMT-2000) specification by referring to S5.388, which states "The bands 1,885-2,025 MHz and 2,110-2,200 MHz are intended for use, on a worldwide basis, by administrations wishing to implement International Mobile Telecommunications 2000 (IMT-2000). Such use does not preclude the use of these bands by other services to which they are allocated. The bands should be made available for IMT-2000 in accordance with Resolution 212 (Rev. WRC-97)." To accommodate the reality that these initially defined bands were already in use in various regions of the world, the initial allocation has been amended multiple times to include other radio frequency bands.
UMTS-FDD frequency bands and channel bandwidths
From Tables 5.0 "UTRA FDD frequency bands" of the latest published version of the 3GPP TS 25.101, the following table lists the specified frequency bands of UMTS (FDD):
Deployments by region (UMTS-FDD)
The following table shows the standardized UMTS bands and their regional use. The main UMTS bands are in bold print.
Networks on UMTS-bands 1 and 8 are suitable for global roaming in ITU Regions 1, 2 (some countries) and 3.
Networks on UMTS bands 2 and 4 are suitable for roaming in ITU Region 2 (Americas) only.
Networks on UMTS band 5 are suitable for roaming in ITU Regions 2 and 3 (single countries).
UMTS-TDD frequency bands and channel bandwidths
UMTS-TDD technology is standardized for usage in the following bands:
See also
3GPP
List of UMTS networks
Cellular frequencies
GSM frequency bands
LTE frequency bands
5G NR frequency bands
CDMA frequency bands
Mobile network code
Roaming
United States 2008 wireless spectrum auction
White spaces (radio)
References
External links
3GPP Specifications for group: R4 - Frequencies info for UMTS (TS 25.101/102/104/105)
Bandplans
Mobile telecommunications
UMTS | UMTS frequency bands | [
"Technology"
] | 517 | [
"Mobile telecommunications"
] |
11,678,446 | https://en.wikipedia.org/wiki/List%20of%20representations%20of%20e | The mathematical constant e can be represented in a variety of ways as a real number. Since e is an irrational number (see proof that e is irrational), it cannot be represented as the quotient of two integers, but it can be represented as a continued fraction. Using calculus, e may also be represented as an infinite series, infinite product, or other types of limit of a sequence.
As a continued fraction
Euler proved that the number e is represented as the infinite simple continued fraction:

e = [2; 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, \ldots, 1, 1, 2n, \ldots]
Here are some infinite generalized continued fraction expansions of e. The second is generated from the first by a simple equivalence transformation.
This last non-simple continued fraction , equivalent to , has a quicker convergence rate compared to Euler's continued fraction formula and is a special case of a general formula for the exponential function:
As an infinite series
The number e can be expressed as the sum of the following infinite series:

e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}

for any real number x.
In the special case where x = 1 or −1, we have:

e = \sum_{k=0}^{\infty} \frac{1}{k!}, and

e^{-1} = \sum_{k=0}^{\infty} \frac{(-1)^k}{k!}
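As a quick numerical illustration (added here; it is not part of the original article, and the cut-offs chosen are arbitrary), the following Python sketch sums the first few terms of the factorial series for e and compares the partial sums with math.e:

```python
import math

def e_from_series(terms: int) -> float:
    """Partial sum of e = sum_{k >= 0} 1/k!."""
    total, factorial = 0.0, 1
    for k in range(terms):
        if k > 0:
            factorial *= k      # factorial now holds k!
        total += 1.0 / factorial
    return total

for n in (2, 4, 8, 12):
    approx = e_from_series(n)
    print(f"{n:2d} terms: {approx:.12f}  (error {abs(approx - math.e):.2e})")
```

The error shrinks roughly like the first omitted term 1/n!, which is why a dozen terms already give about nine correct digits.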
Other series include the following:
where B_n is the nth Bell number.
Consideration of how to put upper bounds on e leads to this descending series:
which gives at least one correct (or rounded up) digit per term. That is, if 1 ≤ n, then
More generally, if x is not in {2, 3, 4, 5, ...}, then
As a recursive function
The series representation of e, given as e = \sum_{k=0}^{\infty} \frac{1}{k!}, can also be expressed using a form of recursion. When \frac{1}{n} is iteratively factored from the original series, the result is the nested series e = 1 + \frac{1}{1}\left(1 + \frac{1}{2}\left(1 + \frac{1}{3}\left(1 + \cdots\right)\right)\right), which equates to the original sum. This nested expression is evaluated by a recursion of the form f(n) = 1 + \frac{1}{n} f(n + 1), with e = f(1).
As an infinite product
The number e is also given by several infinite product forms including Pippenger's product
and Guillera's product
where the nth factor is the nth root of the product
as well as the infinite product
More generally, if 1 < B < e^2 (which includes B = 2, 3, 4, 5, 6, or 7), then
Also
As the limit of a sequence
The number e is equal to the limit of several infinite sequences:
and
(both by Stirling's formula).
The symmetric limit,
may be obtained by manipulation of the basic limit definition of e.
The next two definitions are direct corollaries of the prime number theorem
where p_n is the nth prime, p_n# is the primorial of the nth prime, and π(x) is the prime-counting function.
Also:
In the special case that x = 1, the result is the famous statement:

e = \lim_{n\to\infty} \left(1 + \frac{1}{n}\right)^{n}
The ratio of the factorial n!, which counts all permutations of an ordered set S with cardinality n, and the subfactorial (a.k.a. the derangement function) !n, which counts the number of permutations where no element appears in its original position, tends to e as n grows.
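A small Python sketch (added here for illustration; it is not from the original article) makes this concrete by counting derangements with the standard recurrence !n = (n − 1)(!(n − 1) + !(n − 2)) and printing the ratio n!/!n:

```python
import math

def subfactorial(n: int) -> int:
    """Number of derangements of n elements, via !n = (n-1)(!(n-1) + !(n-2))."""
    if n == 0:
        return 1
    if n == 1:
        return 0
    prev2, prev1 = 1, 0  # !0 and !1
    for k in range(2, n + 1):
        prev2, prev1 = prev1, (k - 1) * (prev1 + prev2)
    return prev1

for n in (5, 10, 15, 20):
    ratio = math.factorial(n) / subfactorial(n)
    print(f"n = {n:2d}: n!/!n = {ratio:.12f}")
print(f"e          = {math.e:.12f}")
```

Already at n = 10 the ratio agrees with e to about six decimal places.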
As a binomial series
Consider the sequence: a_n = \left(1 + \frac{1}{n}\right)^{n}
By the binomial theorem: a_n = \sum_{k=0}^{n} \binom{n}{k} \frac{1}{n^{k}}
which converges to e as n increases. The term n(n-1)\cdots(n-k+1) is the kth falling factorial power of n, which behaves like n^k when n is large. For fixed k and as n \to \infty:
As a ratio of ratios
A unique representation of e can be found within the structure of Pascal's Triangle, as discovered by Harlan Brothers. Pascal's Triangle is composed of binomial coefficients, which are traditionally summed to derive polynomial expansions. However, Brothers identified a product-based relationship between these coefficients that links to e. Specifically, the ratio of the products of binomial coefficients in adjacent rows of Pascal's Triangle tends to e as the row number increases:
The details of this relationship and its proof are outlined in the discussion on the properties of the rows of Pascal's Triangle.
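For illustration (added here; the notation s_n is my own choice rather than the article's), write s_n for the product of the entries in row n of Pascal's Triangle. A direct calculation gives s_{n+1} · s_{n−1} / s_n² = ((n + 1)/n)^n, which tends to e. The following Python sketch checks this numerically:

```python
from math import comb, e

def row_product(n: int) -> int:
    """Product of the binomial coefficients in row n of Pascal's Triangle."""
    prod = 1
    for k in range(n + 1):
        prod *= comb(n, k)
    return prod

for n in (5, 10, 20, 40):
    ratio = row_product(n + 1) * row_product(n - 1) / row_product(n) ** 2
    print(f"n = {n:2d}: s(n+1) * s(n-1) / s(n)^2 = {ratio:.10f}")
print(f"e              = {e:.10f}")
```

Convergence is slow (the same rate as (1 + 1/n)^n), but the trend toward e is already visible for modest n.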
In trigonometry
Trigonometrically, e can be written in terms of the sum of two hyperbolic functions, e^x = \sinh x + \cosh x, at x = 1.
See also
List of formulae involving π
Notes
Exponentials
Logarithms
E (mathematical constant) | List of representations of e | [
"Mathematics"
] | 797 | [
"E (mathematical constant)",
"Logarithms",
"Exponentials"
] |
11,679,320 | https://en.wikipedia.org/wiki/Millennium%20Seed%20Bank%20Partnership | The Millennium Seed Bank Partnership (MSBP or MSB), formerly known as the Millennium Seed Bank Project, is the largest ex situ plant conservation programme in the world coordinated by the Royal Botanic Gardens, Kew. After being awarded a Millennium Commission grant in 1995, the project commenced in 1996, and is now housed in the Wellcome Trust Millennium Building situated in the grounds of Wakehurst Place, West Sussex. Its purpose is to provide an "insurance policy" against the extinction of plants in the wild by storing seeds for future use. The storage facilities consist of large underground frozen vaults preserving the world's largest wild-plant seedbank or collection of seeds from wild species. The project had been started by Dr Peter Thompson and run by Paul Smith after the departure of Roger Smith. Roger Smith was awarded the OBE in 2000 in the Queen's New Year Honours for services to the Project.
Project
In collaboration with other biodiversity projects around the world, expeditions are sent to collect seeds from dryland plants. Where possible, collections are kept in the country of origin with duplicates being sent to the Millennium Seed Bank Project for storage. Major partnerships exist on all the continents, enabling the countries involved to meet international objectives such as the Global Strategy for Plant Conservation and the Millennium Development Goals of the United Nations Environment Programme.
The seed bank at Kew has gone through many iterations. The Kew Seed Bank facility, set up by Peter Thompson in 1980, preceded the MSBP and was headed by Roger Smith from 1980 to 2005. From 2005, Paul Smith took over as head of the MSBP. The Wellcome Trust Millennium Seed Bank building was designed by the firm Stanton Williams and opened by Prince Charles in 2000. The laboratories and offices are in two wings flanking a wide space open to visitors which houses an exhibition. From here visitors can also view the cleaning and preparation of seeds through windows of the work areas and see the entrance to the underground vaults where the seeds are stored at . In 2001, the international programme of the MSBP was launched.
In April 2007, it banked its billionth seed, the Oxytenanthera abyssinica, a type of African bamboo.
In October 2009, it reached its goal of banking 10% of the world's wild plant species by adding Musa itinerans, a wild banana, to its seed vault. Even though estimates for the number of seed-bearing plant species have since increased, the 34,088 wild plant species and 1,980,405,036 seeds in storage as of June 2015 represented over 13% of the world's wild plant species.
Aims
The main aims of the project are to:
Collect the seeds from 75,000 species of plants by 2020, representing 25% of known flora. This is the second phase of this goal; the original partnership goal of banking 10% of known flora by 2010 was achieved in October 2009.
Collect seeds from all of the UK's native flora.
Further research into conservation and preservation of seeds and plants.
Act as a focal point for research in this area and encourage public interest and support.
International partnerships
There are over 100 partnerships worldwide, including Australia, Mexico, Chile, Kenya, China, United States, Jordan, Mali, Malawi, Madagascar, Burkina Faso, Botswana, Tanzania, Saudi Arabia, Lebanon and South Africa. Australia is particularly significant as its flora constitutes 15% of the world's total of species, with 22% of them identified as under threat of extinction.
Preservation of seeds
Seed collections arrive at the MSBP in varying states, sometimes attached to fruits, sometimes clean. The collections usually also include a voucher specimen that can be used to identify the plant. The collections are immediately moved to a dry room until processing can be conducted where the seeds are cleaned of debris and other plant material, X-rayed, counted, and banked at . Seeds are banked in hermetically sealed glass containers along with silica gel packets impregnated with indicator compounds that change colour if moisture seeps into the collection. Seeds are tested for viability with a germination test shortly after banking and then at regular 10 year intervals. If seed collections are low, re-harvesting from the wild is always the preferred option.
Seed distribution
When seeds are required for research purposes, they can be requested from the MSBP's seedlist. If it has the legal permission to do so, the MSB can then provide up to 60 seeds for free, to bona fide, non-commercial organisations for the purposes of research, restoration, and reintroduction. All seeds provided to institutions are on a non-profit mutual benefit basis. The MSB also operates the UK Native Seed Hub which aims to improve the resilience of the UK's ecological networks by providing high-quality UK native seeds to conservation and restoration groups.
See also
Svalbard Global Seed Vault
Australian Grains Genebank
References
External links
The Millennium Seed Bank homepage
The MSB seed collection
Photos of the buildings
Convention on Biological Diversity
TED talk: Jonathan Drori – Millennium Seed Bank – TED talk on the seed bank
Biodiversity
Conservation projects
Environmental ethics
Gene banks
International environmental organizations
Plant reproduction
Community seed banks
Royal Botanic Gardens, Kew
Rare breed conservation
Tourist attractions in West Sussex
Millennium Development Goals
Ardingly | Millennium Seed Bank Partnership | [
"Biology",
"Environmental_science"
] | 1,068 | [
"Behavior",
"Plant reproduction",
"Plants",
"Reproduction",
"Environmental ethics",
"Biodiversity"
] |
11,679,783 | https://en.wikipedia.org/wiki/Victoria%20Lines | The Victoria Lines, originally known as the North West Front, are a line of fortifications that spans 12 kilometres along the width of Malta, dividing the north of the island from the more heavily populated south.
Location
The Victoria Lines run along a natural geographical barrier known as the Great Fault, from Madliena in the east, through the limits of the town of Mosta in the centre of the island, to Binġemma and the limits of Rabat, on the west coast. The complex network of linear fortifications known collectively as the Victoria Lines, which cut across the width of the island north of the old capital of Mdina, was a unique monument of military architecture.
Background
When built by the British military in the late 19th century, the line was designed to present a physical barrier to invading forces landing in the north of Malta, intent on attacking the harbour installations, so vital for the maintenance of the British fleet, their source of power in the Mediterranean. Although never tested in battle, this system of defences, spanning some 12 km of land and combining different types of fortifications—forts, batteries, entrenchments, stop-walls, infantry lines, searchlight emplacements and howitzer positions—constituted a unique ensemble of varied military elements all brought together to enforce the strategy adopted by the British for the defence of Malta in the latter half of the 19th century, a singular solution which exploited the defensive advantages of geography and technology as no other work of fortifications does in the Maltese islands.
The Victoria Lines owe their origin to a combination of international events and the military realities of the time. The opening of the Suez Canal in 1869 highlighted the importance of the Maltese islands.
Beginnings
By 1872, the coastal works had progressed considerably, but the question of landward defences remained unsettled. Although the girdle of forts proposed by Colonel Jervois in 1866 would have considerably enhanced the defence of the harbour area, other factors had cropped up that rendered the scheme particularly difficult to implement, particularly the creation of suburbs. Another proposal, put forward by Col. Mann RE, was to take up a position well forward of the original.
The chosen position was the ridge of commanding ground north of the old City of Mdina, cutting transversely across the width of the island at a distance varying from 4 to 7 miles from Valletta. There, it was believed, a few detached forts could cut off all the westerly portion of the island containing good bays and facilities for landing. At the same time, the proposed line of forts retained the resources of the greater part of the country and the water on the side of the defenders; whereas the ground required for the building of the fortifications could be had far more cheaply than that in the vicinity of Valletta. Col. Mann estimated that the entire cost of the land and works of the new project would amount to £200,000, much less than would have been required to implement Jervois' scheme of detached forts.
This new defensive strategy was one which sought to seal off all the area around the Grand Harbour within an extended box-like perimeter, with the detached forts on the line of the Great Fault forming the north-west boundary, the cliffs to the south forming a natural, inaccessible barrier; while the north and east sides were to be defended by a line of coastal forts and batteries. In a way, the use of the Great Fault for defensive purposes was not an altogether original idea, for it had already been put forward by the Order of Saint John in the early decades of the 18th century, when they realized that they did not have the necessary manpower to defend the whole island. The Order had built a few infantry entrenchments at strategic places along the general line of the fault, namely, the Falca Lines and San Pawl tat-Tarġa, Naxxar. In fact, the use of parts of the natural escarpment for defensive purposes can be traced back even further, as illustrated by Nadur Tower at Bingemma (17th century), the Torri Falca (16th century) and the remains of a Bronze Age fortified citadel which possibly occupied the site of Fort Mosta.
Building
In 1873, the Defence Committee approved Adye’s defensive strategy and recommended the improvement of the already strong position between the Bingemma Hills and the heights above St. George’s Bay. Work on what was originally to be called the North-West Front began in 1875 with construction of a string of isolated forts and batteries, designed to stiffen the escarpment. Three forts were to be built along the position, at Bingemma, Madliena and Mosta, (designed to cover the western and eastern extremities and the centre of the front, respectively). The first to be built was Fort Bingemma. By 1878, work had still not commenced on the other two and the entrenched position at Dwerja; all of these were to be completed within the £200,000 budget. General Simmons recommended that the old Knights’ entrenchments located along the line of the escarpment at Tarġa and Naxxar were to be restored and incorporated into the defences. He also recommended that good communication roads should be formed in the rear of the lines and that those that already existed be improved. The fortifications of Mdina, the Island’s old capital, were to be considered as falling within the defensive system.
The forts on the defensive line were designed with a dual land/coastal defence role in mind, particularly the ones at the extremities but, due to the topography in the northern part of the island, there were areas of dead ground along the coast and inland approaches which could not be properly covered by the guns in the main forts. As a result, it was decided that new works should be built between Forts Mosta and Bingemma and emplacements for guns placed in them. It was also considered advisable to have new emplacements for guns built to the left of Fort Madalena and in the area between it and Fort Pembroke. The latter fort was built on the eastern littoral, below and to the rear of Fort Madalena, in order to control the gap caused by the accessible shoreline leading towards Valletta. Gun batteries were eventually proposed at Tarġa, Għargħur and San Giovanni. Plans for these works were drawn up but only the one at San Giovanni was actually built and armed, while the two at Għargħur were never constructed and that at Tarġa, although actually built, was never armed.
Limitations
By 1888, the line of the cliffs formed by the great geological fault and the works which had been constructed along its length from Fort Bingemma on the left to Fort Madalena on the right constituted, in the words of Nicholson and Goodenough, "a military position of great strength". The main defects inherent in the defensive position were the extremities where the high ground descended towards the shore, leaving wide gaps through which enemy forces could by-pass the whole position. Particularly weak in this respect was the western extremity. There, a considerable interval existed between Fort Bingemma and the sea. Military manoeuvres held in the area revealed that it was possible for troops to land in Fomm ir-Riħ Bay and gain the rear of the fortified line undetected from the existing works. To counter this threat, recommendations were made for the construction of two epaulements for a movable armament of quick-firing or field guns, the construction of blockhouses, the improvement of the wall which closed the head of the deep valley to the south of Fort Bingemma and the strengthening of the line of cliffs by scarping in places. It was also suggested that the existing farmhouses in the area be made defensible.
There were even suggestions for the reconstruction and re-utilization of the old Hospitaller lines at ta' Falca and Naxxar, but only the latter was put to use, mainly because these commanded the approaches to the village of Naxxar, described as a position of great importance, in the event of a landing in St. Paul's Bay.
A serious shortcoming of the North West Front defences was the lack of barrack accommodation for the troops who were required to man and defend the works. The lines extended six miles and the accommodation provided in the forts was rather scanty. Consequently, it was considered necessary to build new barracks capable of accommodating a regiment (PRO MPH 234) and later a full battalion of infantry, and a new site was chosen to the rear of the Dwerja Lines, at Mtarfa. Although initially designed as a series of detached strong-points, the fortifications along the North West Front were eventually linked together by a continuous infantry line and the whole complex, by then nearing completion, was christened the Victoria Lines in order to commemorate the Diamond Jubilee of Queen Victoria in 1897. The long stretches of infantry lines linking the various strong-points—consisting in most places of a simple masonry parapet—were completed on 6 November 1899.
Other changes
The line of the intervening stretches followed the configuration of the crest of the ridge, along the contours of the escarpment. The nature of the wall varied greatly along its length but basically consisted of a sandwich-type construction with an outer and inner revetment, bonded at regular intervals and filled in with terreplein. The average height of the parapet was about five feet (1.5 metres). The walls were frequently topped by loopholes, of which only a very few sections have survived. In places, the debris from scarping was dumped in front of the wall to help create a glacis and ditch. In places, the rocky ground immediately behind the parapet was carved out to provide a walkway or patrol path along the length of the line. A number of valleys interrupted the line of the natural fault and, at such places, the continuation of the defensive perimeter was only permitted through the construction of shallow, defensible masonry bridges, as can still be seen today at Wied il-Faħam near Fort Madalena, Wied Anglu and Bingemma Gap. Other bridges, now demolished, existed at Mosta Ravine and Wied Filip.
During the last phase of their development, the Victoria Lines were strengthened by a number of batteries and additional fortifications. An infantry redoubt was built at the western extremity of the front at Fomm ir-Riħ and equipped with emplacements for Maxim machine guns. In 1897 a High Angle Battery was built well to the rear of the defensive lines at Għargħur and another seven howitzer batteries, each consisting of four emplacements for field guns protected by earthen traverses, were built close to the rear of the defensive line. Searchlight emplacements were built at il-Kunċizzjoni and Wied il-Faħam.
Aftermath
Military training exercises staged in May 1900 revealed that the Victoria Lines were of dubious defensive value. With the exception of the coastal forts, by 1907 they were abandoned altogether. During World War Two, a joint German-Italian invasion seemed likely, so the lines were rehabilitated and new guard posts built along them as a second line of defence behind the coastal defences. Again the lines were untested. Fort Mosta is still in use as an ammunition depot, while Fort Madalena is still used by the Communications Information Systems Company of the AFM.
In 1998 the Government of Malta submitted the Victoria Lines to UNESCO for consideration as a World Heritage Site.
Large parts of the fortification walls have collapsed, although some parts in the countryside remain intact and in general the Victoria Lines have fallen into obscurity. The Maltese Tourism Authority is proposing that by the end of 2019 two trails along the Lines will become Malta’s inaugural national walkway.
References
Notes
Further reading
British fortifications in Malta
Ruins in Malta
Rabat, Malta
Mġarr
Mosta
Naxxar
Għargħur
Fortification lines
World Heritage Tentative List
Buildings and structures completed in 1899
Limestone buildings in Malta
Military installations closed in 1907
19th-century fortifications | Victoria Lines | [
"Engineering"
] | 2,483 | [
"Fortification lines"
] |
11,680,057 | https://en.wikipedia.org/wiki/Radiotrophic%20fungus | Radiotrophic fungi are fungi that can perform the hypothetical biological process called radiosynthesis, which means using ionizing radiation as an energy source to drive metabolism. It has been claimed that radiotrophic fungi have been found in extreme environments such as in the Chernobyl Nuclear Power Plant.
Most radiotrophic fungi use melanin in some capacity to survive. The process of using radiation and melanin for energy has been termed radiosynthesis, and is thought to be analogous to anaerobic respiration. However, it is not known if multi-step processes such as photosynthesis or chemosynthesis are used in radiosynthesis or even if radiosynthesis exists in living organisms.
Discovery
Many fungi have been isolated from the area around the destroyed Chernobyl Nuclear Power Plant, some of which have been observed directing the growth of their hyphae toward radioactive graphite from the disaster, a phenomenon called “radiotropism”. Research has ruled out the presence of carbon as the resource attracting the fungal colonies, and in fact concluded that some fungi will preferentially grow in the direction of a source of beta and gamma ionizing radiation, though it was not able to identify the biological mechanism behind this effect. Other melanin-rich fungi have also been discovered in the cooling water of some working nuclear reactors. The light-absorbing compound in the fungal cell membranes had the effect of turning the water black. While there are many cases of extremophiles (organisms that can live in severe conditions such as those of the radioactive power plant), a hypothetical radiotrophic fungus would grow because of the radiation, rather than in spite of it.
Further research conducted at the Albert Einstein College of Medicine showed that three melanin-containing fungi—Cladosporium sphaerospermum, Wangiella dermatitidis, and Cryptococcus neoformans—increased in biomass and accumulated acetate faster in an environment in which the radiation level was 500 times higher than in the normal environment. C. sphaerospermum in particular was chosen due to this species being found in the reactor at Chernobyl. Exposure of C. neoformans cells to these radiation levels rapidly (within 20–40 minutes of exposure) altered the chemical properties of its melanin, and increased melanin-mediated rates of electron transfer (measured as reduction of ferricyanide by NADH) three- to four-fold compared with unexposed cells. However, each culture was performed with at least limited nutrients provided to each fungus. The increase in biomass and other effects could be caused either by the cells directly deriving energy from ionizing radiation, or by the radiation allowing the cells to utilize traditional nutrients either more efficiently or more rapidly.
Outside of the fungal studies, similar effects on melanin electron-transport capability were observed by the authors after exposure to non-ionizing radiation. The authors did not conclude whether light or heat radiation would have a similar effect on living fungal cells.
Role of melanin
Melanins are a family of dark-colored, naturally occurring pigments with radiation-shielding properties. These pigments can absorb electromagnetic radiation due to their dark color and high molecular weights; this quality suggests that melanin could help protect radiotrophic fungi from ionizing radiation. It has been suggested that melanin's radiation-shielding properties are due to its ability to trap free radicals formed during radiolysis of water. Melanin production is also advantageous to the fungus in that it can aid survival in many extreme environments. Examples of these environments include the Chernobyl Nuclear Power Plant, the International Space Station, and the Transantarctic Mountains.
Comparisons with non-melanized fungi
Melanization may come at some metabolic cost to the fungal cells. In the absence of radiation, some non-melanized fungi (that had been mutated in the melanin pathway) grew faster than their melanized counterparts. Limited uptake of nutrients due to the melanin molecules in the fungal cell wall or toxic intermediates formed in melanin biosynthesis have been suggested to contribute to this phenomenon. It is consistent with the observation that despite being capable of producing melanin, many fungi do not synthesize melanin constitutively (i.e., all the time), but often only in response to external stimuli or at different stages of their development. The exact biochemical processes in the suggested melanin-based synthesis of organic compounds or other metabolites for fungal growth, including the chemical intermediates (such as native electron donor and acceptor molecules) in the fungal cell and the location and chemical products of this process, are unknown.
Use in human spaceflight
It is hypothesized that radiotrophic fungi could potentially be used as a shield to protect against radiation, particularly for astronauts in space or in other atmospheres. An experiment was conducted aboard the International Space Station from December 2018 through January 2019 to test whether radiotrophic fungi could provide protection from ionizing radiation in space, as part of research efforts preceding a possible trip to Mars. This experiment used a radiotrophic strain of the fungus Cladosporium sphaerospermum. The growth of this fungus and its ability to deflect the effects of ionizing radiation were studied for 30 days aboard the International Space Station. This experimental trial yielded very promising results.
The amount of radiation deflected was found to correlate directly with the amount of fungus. There was no difference in the reduction of ionizing radiation between the experimental and control groups within the first 24-hour period; however, once the fungus had reached adequate maturation, and with a 180° protection radius, amounts of ionizing radiation were significantly reduced compared to the control group. With a 1.7 mm thick shield of melanized radiotrophic Cladosporium sphaerospermum, measurements of radiation near the end of the experimental trial were found to be 2.42% lower, demonstrating a radiation-deflecting capability five times that of the control group. Under circumstances in which the fungi fully encompassed an entity, radiation levels would be reduced by an estimated 4.34±0.7%. Estimates indicate that a layer approximately 21 cm thick could significantly deflect the annual amount of radiation received on Mars’ surface. Limitations to the use of a radiotrophic-fungus-based shield include increased mass on missions. However, as a viable substitute to reduce overall mass on potential Mars missions, a mixture with equal mole concentration of Martian soil, melanin, and a layer of fungi roughly 9 cm thick could be used.
See also
Nylon-eating bacteria
Plastivore
References
External links
Einstein College of Medicine on radiotrophic fungi
The blacker the better… especially in Chernobyl at Earthling Nature.
Fungi by adaptation
Evolution by taxon
Radiation effects
Radiobiology
Gamma rays
Extremophiles | Radiotrophic fungus | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering",
"Biology",
"Environmental_science"
] | 1,452 | [
"Physical phenomena",
"Fungi",
"Spectrum (physical sciences)",
"Fungi by adaptation",
"Radiobiology",
"Electromagnetic spectrum",
"Materials science",
"Organisms by adaptation",
"Extremophiles",
"Radiation",
"Gamma rays",
"Condensed matter physics",
"Bacteria",
"Radiation effects",
"Envi... |
11,680,645 | https://en.wikipedia.org/wiki/Interval%20order | In mathematics, especially order theory,
the interval order for a collection of intervals on the real line
is the partial order corresponding to their left-to-right precedence relation—one interval, I1, being considered less than another, I2, if I1 is completely to the left of I2.
More formally, a countable poset P = (X, ≤) is an interval order if and only if there exists a bijection from X to a set of real intervals, so x_i ↦ (ℓ_i, r_i), such that for any x_i, x_j ∈ X we have x_i < x_j in P exactly when r_i < ℓ_j.
Such posets may be equivalently characterized as those with no induced subposet isomorphic to the pair of two-element chains, in other words as the (2 + 2)-free posets. Fully written out, this means that for any two pairs of elements a < b and c < d, one must have a < d or c < b.
The subclass of interval orders obtained by restricting the intervals to those of unit length, so they all have the form (ℓ_i, ℓ_i + 1), is precisely the semiorders.
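To make the definition and the (2 + 2)-free characterization concrete, here is a small Python sketch (an illustration added to this text; the interval data are chosen arbitrarily). It builds the precedence relation from a list of intervals and then checks the condition quoted above, that whenever a < b and c < d one also has a < d or c < b:

```python
from itertools import product

def precedes(i, j):
    """Interval i precedes interval j when i lies entirely to the left of j."""
    return i[1] < j[0]

def is_two_plus_two_free(order_pairs):
    """For every two related pairs a<b and c<d, require a<d or c<b."""
    less = set(order_pairs)
    for (a, b), (c, d) in product(order_pairs, repeat=2):
        if (a, d) not in less and (c, b) not in less:
            return False
    return True

intervals = [(0, 2), (1, 3), (2.5, 4), (5, 6)]   # arbitrary example data
n = len(intervals)
pairs = [(i, j) for i in range(n) for j in range(n) if precedes(intervals[i], intervals[j])]
print("order relation:", pairs)
print("(2+2)-free:", is_two_plus_two_free(pairs))
```

Any order built from intervals in this way passes the check, in line with the characterization; conversely, a four-element order consisting of two disjoint two-element chains would fail it.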
The complement of the comparability graph of an interval order (X, ≤) is the interval graph of the corresponding set of intervals.
Interval orders should not be confused with the interval-containment orders, which are the inclusion orders on intervals on the real line (equivalently, the orders of dimension ≤ 2).
Interval orders' practical applications include modelling evolution of species and archeological histories of pottery styles.
Interval orders and dimension
An important parameter of partial orders is order dimension: the dimension of a partial order is the least number of linear orders whose intersection is the order itself. For interval orders, dimension can be arbitrarily large. And while the problem of determining the dimension of general partial orders is known to be NP-hard, determining the dimension of an interval order remains a problem of unknown computational complexity.
A related parameter is interval dimension, which is defined analogously, but in terms of interval orders instead of linear orders. Thus, the interval dimension of a partially ordered set P = (X, ≤) is the least integer k for which there exist interval orders ≼_1, …, ≼_k on X with x ≤ y exactly when x ≼_1 y, …, and x ≼_k y.
The interval dimension of an order is never greater than its order dimension.
Combinatorics
In addition to being isomorphic to (2 + 2)-free posets, unlabeled interval orders on n elements are also in bijection with a subset of fixed-point-free involutions on ordered sets with cardinality 2n. These are the
involutions with no so-called left- or right-neighbor nestings where, for any involution
on , a left nesting is
an such that and a right nesting is an such that
.
Such involutions, according to semi-length, have ordinary generating function

F(t) = \sum_{n \ge 0} \prod_{i=1}^{n} \left(1 - (1 - t)^{i}\right)

The coefficient of t^n in the expansion of F(t) gives the number of unlabeled interval orders of size n. The sequence of these numbers begins
1, 2, 5, 15, 53, 217, 1014, 5335, 31240, 201608, 1422074, 10886503, 89903100, 796713190, 7541889195, 75955177642, …
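The generating function above can be expanded mechanically. The following Python sketch (an added illustration; the truncated-polynomial representation is my own choice) computes its coefficients and reproduces the first terms of the sequence:

```python
def interval_order_counts(max_n: int) -> list[int]:
    """Coefficients of sum_{n>=0} prod_{i=1..n} (1 - (1 - t)^i), up to degree max_n."""
    def mul(p, q):
        # Multiply two coefficient lists, truncating at degree max_n.
        r = [0] * (max_n + 1)
        for i, a in enumerate(p):
            if a:
                for j, b in enumerate(q):
                    if i + j <= max_n:
                        r[i + j] += a * b
        return r

    total = [0] * (max_n + 1)
    one_minus_t_power = [1]   # (1 - t)^0
    partial = [1]             # empty product, the n = 0 term
    for n in range(max_n + 1):
        padded = partial + [0] * (max_n + 1 - len(partial))
        total = [a + b for a, b in zip(total, padded)]
        one_minus_t_power = mul(one_minus_t_power, [1, -1])   # now (1 - t)^(n+1)
        factor = [-c for c in one_minus_t_power]
        factor[0] += 1                                        # 1 - (1 - t)^(n+1)
        partial = mul(partial, factor)
    return total

print(interval_order_counts(10))
# expected to begin [1, 1, 2, 5, 15, 53, 217, 1014, 5335, 31240, 201608]
```

Because the n-th product contributes nothing below degree n, truncating the outer sum at max_n loses no low-order coefficients.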
Notes
References
Further reading
Order theory
Combinatorics | Interval order | [
"Mathematics"
] | 623 | [
"Discrete mathematics",
"Order theory",
"Combinatorics"
] |
11,680,739 | https://en.wikipedia.org/wiki/Paecilomyces%20fulvus | Paecilomyces fulvus is a plant pathogen that causes Byssochlamys rot on strawberries.
References
Fungal strawberry diseases
Fungi described in 1971
fulvus
Fungus species | Paecilomyces fulvus | [
"Biology"
] | 39 | [
"Fungi",
"Fungus species"
] |
11,680,848 | https://en.wikipedia.org/wiki/Kelvin%20bridge | A Kelvin bridge, also called a Kelvin double bridge and in some countries a Thomson bridge, is a measuring instrument used to measure unknown electrical resistances below 1 ohm. It is specifically designed to measure resistors that are constructed as four-terminal resistors. Historically, Kelvin bridges were used to measure shunt resistors for ammeters and sub-one-ohm reference resistors in metrology laboratories. In the scientific community, the Kelvin bridge paired with a null detector was used to achieve the highest precision.
Background
Resistors above about 1 ohm in value can be measured using a variety of techniques, such as an ohmmeter or by using a Wheatstone bridge. In such resistors, the resistance of the connecting wires or terminals is negligible compared to the resistance value. For resistors of less than an ohm, the resistance of the connecting wires or terminals becomes significant, and conventional measurement techniques will include them in the result.
To overcome the problems of these undesirable resistances (known as 'parasitic resistance'), very low value resistors and particularly precision resistors and high current ammeter shunts are constructed as four terminal resistors. These resistances have a pair of current terminals and a pair of potential or voltage terminals. In use, a current is passed between the current terminals, but the volt drop across the resistor is measured at the potential terminals. The volt drop measured will be entirely due to the resistor itself as the parasitic resistance of the leads carrying the current to and from the resistor are not included in the potential circuit. To measure such resistances requires a bridge circuit designed to work with four terminal resistances. That bridge is the Kelvin bridge.
Principle of operation
The operation of the Kelvin bridge is very similar to the Wheatstone bridge, but uses two additional resistors. Resistors R1 and R2 are connected to the outside potential terminals of the four terminal known or standard resistor Rs and the unknown resistor Rx (identified as P1 and P′1 in the diagram). The resistors Rs, Rx, R1 and R2 are essentially a Wheatstone bridge. In this arrangement, the parasitic resistance of the upper part of Rs and the lower part of Rx is outside of the potential measuring part of the bridge and therefore are not included in the measurement. However, the link between Rs and Rx (Rpar) is included in the potential measurement part of the circuit and therefore can affect the accuracy of the result. To overcome this, a second pair of resistors R′1 and R′2 form a second pair of arms of the bridge (hence 'double bridge') and are connected to the inner potential terminals of Rs and Rx (identified as P2 and P′2 in the diagram). The detector D is connected between the junction of R1 and R2 and the junction of R′1 and R′2.
The balance equation of this bridge is given by the equation

R_x = \frac{R_2}{R_1} R_s + \frac{R'_2 R_{par}}{R'_1 + R'_2 + R_{par}} \left( \frac{R_2}{R_1} - \frac{R'_2}{R'_1} \right)

In a practical bridge circuit, the ratio of R′1 to R′2 is arranged to be the same as the ratio of R1 to R2 (and in most designs, R1 = R′1 and R2 = R′2). As a result, the last term of the above equation becomes zero and the balance equation becomes

\frac{R_x}{R_s} = \frac{R_2}{R_1}

Rearranging to make Rx the subject

R_x = \frac{R_2 \cdot R_s}{R_1}
The parasitic resistance Rpar has been eliminated from the balance equation and its presence does not affect the measurement result. This equation is the same as for the functionally equivalent Wheatstone bridge.
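To see the effect of the ratio match numerically, here is a short Python sketch (an illustration added to this text; the component values are made up, and the formula is simply the balance equation written out above):

```python
def kelvin_rx(rs, r1, r2, r1p, r2p, rpar):
    """Measured unknown resistance at balance, including the parasitic-link error term."""
    error_term = (r2p * rpar) / (r1p + r2p + rpar) * (r2 / r1 - r2p / r1p)
    return (r2 / r1) * rs + error_term

rs = 0.01     # standard resistor, ohms (example value)
rpar = 0.002  # parasitic link between Rs and Rx, ohms (example value)

# Matched ratio arms: the error term vanishes and Rx = (R2/R1) * Rs exactly.
print(kelvin_rx(rs, r1=100.0, r2=250.0, r1p=100.0, r2p=250.0, rpar=rpar))

# A 1% mismatch in one inner arm leaves only a small residual error.
print(kelvin_rx(rs, r1=100.0, r2=250.0, r1p=100.0, r2p=252.5, rpar=rpar))
```

Because the residual term is scaled by the small link resistance Rpar, a modest ratio mismatch contributes only a small error, which is the point made in the Accuracy section below.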
In practical use the magnitude of the supply B, can be arranged to provide current through Rs and Rx at or close to the rated operating currents of the smaller rated resistor. This contributes to smaller errors in measurement. This current does not flow through the measuring bridge itself. This bridge can also be used to measure resistors of the more conventional two terminal design. The bridge potential connections are merely connected as close to the resistor terminals as possible. Any measurement will then exclude all circuit resistance not within the two potential connections.
Accuracy
The accuracy of measurements made using this bridge is dependent on a number of factors. The accuracy of the standard resistor (Rs) is of prime importance. Also of importance is how close the ratio of R1 to R2 is to the ratio of R′1 to R′2. As shown above, if the ratio is exactly the same, the error caused by the parasitic resistance (Rpar) is eliminated. In a practical bridge, the aim is to make this ratio as close as possible, but it is not possible to make it exactly the same. If the difference in ratio is small enough, then the last term of the balance equation above becomes small enough that it is negligible. Measurement accuracy is also increased by setting the current flowing through Rs and Rx to be as large as the rating of those resistors allows. This gives the greatest potential difference between the innermost potential connections (R2 and R′2) to those resistors and consequently sufficient voltage for the change in R′1 and R′2 to have its greatest effect.
Commercial Kelvin bridges initially used galvanometers, later replaced by micro-ammeters, and their sensitivity was the limiting factor on precision as the voltage difference approached zero. Further improvement in precision was achieved using null detectors with nanovolt sensitivity.
Some commercial bridges reach accuracies of better than 2% for resistance ranges from 1 microohm to 25 ohms; one such type is illustrated above. Modern digital meters achieve accuracies of better than 0.25%.
Laboratory bridges are usually constructed with high accuracy variable resistors in the two potential arms of the bridge and achieve accuracies suitable for calibrating standard resistors. In such an application, the 'standard' resistor (Rs) will in reality be a sub-standard type (that is, a resistor having an accuracy some 10 times better than the required accuracy of the standard resistor being calibrated). For such use, the error introduced by the mis-match of the ratio in the two potential arms would mean that the presence of the parasitic resistance Rpar could have a significant impact on the very high accuracy required. To minimise this problem, the current connections to the standard resistor (Rx), the sub-standard resistor (Rs), and the connection between them (Rpar) are designed to have as low a resistance as possible, and the connections, both in the resistors and in the bridge, resemble bus bars rather than wire.
Some ohmmeters include Kelvin bridges in order to obtain large measurement ranges. Instruments for measuring sub-ohm values are often referred to as low-resistance ohmmeters, milli-ohmmeters, micro-ohmmeters, etc.
References
Further reading
External links
DC Metering Circuits chapter from Lessons In Electric Circuits Vol 1 DC and Lessons In Electric Circuits series.
Discussion of 4 terminal measurement
Bridge circuits
Electrical engineering
Measuring instruments
British inventions
Impedance measurements
William Thomson, 1st Baron Kelvin | Kelvin bridge | [
"Physics",
"Technology",
"Engineering"
] | 1,424 | [
"Physical quantities",
"Measuring instruments",
"Electrical engineering",
"Impedance measurements",
"Electrical resistance and conductance"
] |
11,681,950 | https://en.wikipedia.org/wiki/79%20Ceti%20b | 79 Ceti b (also known as HD 16141 b) is an extrasolar planet orbiting 79 Ceti every 75 days. Discovered along with HD 46375 b on March 29, 2000, it was the joint first known extrasolar planet to have a minimum mass less than that of Saturn.
See also
94 Ceti b
References
External links
SolStation: 79 Ceti
Extrasolar Planets Encyclopaedia: HD 16141
Cetus
Exoplanets discovered in 2000
Giant planets
Exoplanets detected by radial velocity | 79 Ceti b | [
"Astronomy"
] | 109 | [
"Cetus",
"Constellations"
] |
11,682,101 | https://en.wikipedia.org/wiki/Space%20Chimps | Space Chimps is a 2008 animated comic science fiction film directed by Kirk DeMicco (in his debut), who wrote the screenplay with Rob Moreland. It features the voices of Andy Samberg, Cheryl Hines, Jeff Daniels, Patrick Warburton, Kristin Chenoweth, Kenan Thompson, Zack Shada, Carlos Alazraqui, Omid Abtahi, Patrick Breen, Jane Lynch, Kath Soucie, and Stanley Tucci.
The film follows three chimpanzees who go into space to an alien planet. 20th Century Fox theatrically released the film on July 18, 2008, and received mostly negative reviews by critics. The film grossed $64.8 million on a $37 million budget. It received an Artios Award nomination for Outstanding Achievement in Casting – Animation Feature. A video game based on the film was also released in July 2008.
A direct-to-video sequel, titled Space Chimps 2: Zartog Strikes Back, was released on May 28, 2010, to cinemas in the United Kingdom by Entertainment Film Distributors and was released on DVD on October 5, 2010, in the United States by 20th Century Fox Home Entertainment.
Plot
In outer space, an uncrewed NASA space probe searching for intelligent life, Infinity, is dragged into an intergalactic wormhole and crash-lands on the other side of the galaxy. It lands on the Earth-like alien planet named Malgor, populated by colorful alien beings. Zartog, an inhabitant, accidentally discovers how to take manual control of the onboard machinery and uses it to enslave the population. Faced with the possible loss of both Infinity and their budget, the scientists hire multiple chimpanzees as astronauts to regain contact with the probe and retrieve it: technical genius Comet, lieutenant Luna and commander Titan. For media attention, the Senator adds to the team Ham III, grandson of Ham, the first chimpanzee in space, who performs as a cannonball act at a circus in the company of Houston, a chimpanzee and friend of Ham III's grandfather. Ham III is uninterested in the mission, but despite his best attempts to escape, he is launched into space with Titan and Luna.
Ham, Luna and Titan enter the wormhole, where the latter two pass out from the pressure, leaving Ham with the task of getting the ship out and landing it. The ship and Titan are taken by Zartog's henchmen, and Titan, unaware of Zartog's agenda, teaches him about the probe's features. Ham and Luna journey to Zartog's palace. Ham reveals that he believes Space Chimps is a joke, which makes Luna angry at him. They receive guidance from inhabitant Kilowatt. They go into a valley of the aliens' food where they meet some globhoppers, and then they go into the cave of the Flesh-Devouring Beast. Kilowatt volunteers to distract the beast so Luna and Ham can escape, and is devoured in the process. They then enter the Dark Cloud of Id, which they eventually fall out of. Once at the palace, they rescue Titan and plan to leave. However, Ham, Luna and Titan alter their course of action after noticing Zartog torturing the inhabitants who are being frozen in a pool of freznar, feeling they owe it to Kilowatt to stop Zartog. They abandon the ship, which returns home on autopilot.
Zartog attacks the chimpanzees with the probe and threatens to freeze them all, but Titan tricks him into activating the probe's ejection mechanism, which launches Zartog into a pool of freznar and freezes him. Kilowatt, who has survived, frees the chimps. The chimps re-establish contact with Houston and Comet to discuss their prospects on leaving. By Ham's suggestion and with help from Malgor's inhabitants, they manage to engineer a ship from the probe's constituents, launching it through a volcanic eruption while using Zartog as their nose cone.
Before they re-enter the wormhole, Titan hands the controls over to Ham. Though Ham becomes skeptical once more, he is reassured by a vision of his grandfather and steers the ship out of the wormhole. Comet advises decreasing the ship's entry angle, and the ship starts spinning out, though Luna recovers in time to aid Ham. The repurposed mechanical arms soon fail, and the Zartog nosecone detaches in the atmosphere, damaging one of the ship's fins while Comet and Houston commandeer a HEMTT to prepare for the ship's arrival. Since Ham needs to fly and Titan is still out, Luna climbs out to repair the fin. She succeeds, and Ham regains control as the ship passes a media conference, but she loses her grip and is presumably killed. Ham nearly crashes the ship and one of the arms breaks off, but he manages to successfully land it on the HEMTT. He leaves the ship and finds Luna merely knocked out. Moments later, Houston, Comet and Titan catch up with them. Attracted by the commotion, the scientists, Senator and media discover the ship and the chimps. Under pressure from the press, the Senator decides to instead dramatically increase the space program's funding. Subsequently, the scientists celebrate their return.
Zartog is later revealed to have landed in front of a suburban residence. Alive but helpless, he can only watch as a Dachshund urinates on him.
Cast
Andy Samberg as Ham III, Ham I's grandson and a circus chimpanzee who loves cannon acts and crashing.
Cheryl Hines as Luna, Titan's lieutenant who is fearless and intelligent and Ham's love interest.
Patrick Warburton as Titan, the flamboyant commander of the expedition. He has a great love of chimpanzee puns.
Jeff Daniels as Zartog, an alien tyrant who enslaves the planet Malgor.
Kristin Chenoweth as Kilowalawhizasahooza (Kilowatt for short), a young alien who befriends Ham and Luna.
Kenan Thompson as The Ringmaster, the owner of a circus where Ham III works.
Zack Shada as Comet, a technical genius chimp.
Carlos Alazraqui as Houston, a friend of Ham's grandfather; and Piddles the Clown.
Omid Abtahi as Dr. Jagu
Patrick Breen as Dr. Bob
Jane Lynch as Dr. Poole
Kath Soucie as Dr. Smothers
Stanley Tucci as The Senator
Wally Wingert as Splork, Infinity Probe, and Pappy Ham
Tom Kenny as Newsreel
Jason Harris as Guard
Production
In 2002, Kirk DeMicco conceived a film premise about anthropomorphic chimpanzees on a spaceship while viewing The Right Stuff (1983), a fictional depiction of the Mercury Seven program. It included the line, "Does a monkey know he's sitting on top of a rocket that might explode?" which made him wonder what would happen if the monkey knew. Shortly after the lightbulb moment, he saw the famous space chimpanzee Ham on the cover of a 1961 issue of Life magazine; the chimpanzee's smug expression gave him the idea of a self-centered protagonist going on a dangerous space mission. Taking the Life magazine issue with him, DeMicco pitched his ideas to John H. Williams, comparing the plot to that of Tommy Boy (1995). Williams was instantly hooked and began working with him from there. They later decided on "a great sci-fi adventure" for children that also mocked science fiction media in the same way the Shrek films, which Williams also produced, parodied fairy tales. DeMicco wanted the planet to have the vibe of the Mos Eisley cantina of the Star Wars series.
The project and its title, Space Chimps, were first publicized in a Variety article on June 7, 2004, announcing it was next in Vanguard's production line after Valiant (2005). The film was produced in two years by Williams' Vanguard Animation studio with a team of around 170, a $37 million budget, and DeMicco as director. For the film, a new pipeline was created, as well as a studio constructed in Vancouver. Chris Bacon was chosen as composer, who was recommended to DeMicco by James Newton Howard. The limited budget meant creative choices had to be made for the music to sound interesting; according to DeMicco, beds were occasionally used alongside the orchestra, and the Blue Man Group played PVC pipes.
Release
On April 11, 2006, 20th Century Fox signed a deal with Vanguard minority owner IDT Entertainment to distribute four films, the second in line being Space Chimps.
Space Chimps was originally set to be released on May 2, 2008, but on December 19, 2007, the movie's release date was changed to July 18, 2008. This was mainly because of the 2007–08 Writers Guild of America strike.
Reception
Critical response
Rotten Tomatoes reported that 33% of professional critics gave positive reviews based on 92 reviews. The consensus states: "Space Chimps cheap animation and overabundance of monkey puns feels especially dated in a post WALL-E world." On Metacritic, the film holds a 36/100 based on 18 critics, indicating "generally unfavorable reviews". Audiences polled by CinemaScore gave the film an average grade of "B+" on an A+ to F scale.
Roger Ebert gave a positive review of three stars and said in his review that "Space Chimps is delightful from beginning to end." Neil Genzlinger of The New York Times said that Space Chimps was "hilarious". Lael Loewenstein of Variety called it "fairly fatuous but enjoyably slim family entertainment".
Box office
The film has grossed $30.1 million in the United States, and $34.7 million in other countries, totalling $64.8 million worldwide. The film was released in the United Kingdom on August 1, 2008, and opened on #7, grossing £563,543.
On its opening weekend, Space Chimps was number seven with a gross of $7.1 million in 2,511 theatres, with a $2,860 average; it was a poor opening for the film, debuting on (at the time) the highest-grossing box office weekend ever in the United States.
Awards
Home media
20th Century Home Entertainment released Space Chimps on DVD and Blu-ray on November 25, 2008.
Video game
A video game based on the film was released in July 2008, published by Brash Entertainment and developed by Redtribe, Wicked Witch Software and WayForward Technologies.
Sequel
Space Chimps 2: Zartog Strikes Back was released on May 28, 2010, to cinemas in the United Kingdom by Entertainment Film Distributors and was released on DVD on October 5, 2010, in the United States by 20th Century Fox Home Entertainment. It was universally panned by critics and grossed just over $4 million during its theatrical run.
See also
Space Chimps 2: Zartog Strikes Back
Space Chimps (video game)
References
External links
2008 films
2008 computer-animated films
2000s adventure comedy films
2000s science fiction comedy films
2008 American animated films
Animated films about apes
Animated films about talking animals
American adventure comedy films
American children's animated space adventure films
American children's animated comic science fiction films
British adventure comedy films
British science fiction comedy films
British children's films
Canadian adventure comedy films
Canadian animated science fiction films
Canadian children's animated films
Canadian science fiction comedy films
2000s children's adventure films
Films produced by Barry Sonnenfeld
Films produced by John H. Williams
Animated films set on fictional planets
Animals in space
20th Century Fox animated films
20th Century Fox films
20th Century Studios franchises
Vanguard Animation
2000s children's films
2000s children's animated films
2008 directorial debut films
2008 comedy films
2000s English-language films
2000s Canadian animated films
2000s British films
The Weinstein Company animated films
Canadian animated comedy films
English-language science fiction comedy films
English-language adventure comedy films | Space Chimps | [
"Chemistry",
"Biology"
] | 2,485 | [
"Animal testing",
"Space-flown life",
"Animals in space"
] |
11,682,141 | https://en.wikipedia.org/wiki/Tetrachloroethylene%20%28data%20page%29 | This page provides supplementary chemical data on tetrachloroethylene.
Material Safety Data Sheet
The handling of this chemical may incur notable safety precautions. It is highly recommended that you seek the Material Safety Datasheet (MSDS) for this chemical from a reliable source such as SIRI, and follow its directions. MSDS is available from Fisher Scientific.
Structure and properties
Thermodynamic properties
Vapor pressure of liquid
Table data obtained from CRC Handbook of Chemistry and Physics 47th ed. Note that "(s)" annotation indicates equilibrium temperature of vapor pressure of solid. Otherwise indication is equilibrium temperature of vapor of liquid.
Distillation data
See also
Trichloroethylene (data page)
Spectral data
References
Chemical data pages
Chemical data pages cleanup | Tetrachloroethylene (data page) | [
"Chemistry"
] | 157 | [
"Chemical data pages",
"nan"
] |
5,407,968 | https://en.wikipedia.org/wiki/Rubroboletus%20satanas | Rubroboletus satanas, commonly known as Satan's bolete or the Devil's bolete, is a basidiomycete fungus of the bolete family (Boletaceae) and one of its most infamous members. It was known as Boletus satanas before its transfer to the new genus Rubroboletus in 2014, based on molecular phylogenetic data. Found in broad-leaved and mixed woodland in the warmer regions of Europe, it is classified as a poisonous mushroom, known to cause violent gastroenteritis. However, reports of poisoning are rare, due to the striking coloration and unpleasant odor of the fruiting bodies, which discourage experimentation.
These squat, brightly coloured fruiting bodies are often massive and imposing, with a beige-coloured velvet-textured cap up to across, yellow to orange-red pores and a bulbous red stem. The flesh turns blue when cut or bruised, and fruit bodies often emit an unpleasant rotten odor. It is arguably the largest bolete found in Europe.
Taxonomy and phylogeny
Originally known as Boletus satanas, Satan's bolete was described by German mycologist Harald Othmar Lenz in 1831. Lenz was aware of several reports of adverse reactions from people who had consumed this fungus and apparently felt himself ill from its "emanations" while describing it, hence giving it its sinister epithet. The Greek word (satanas, or Satan) is derived from Hebrew śāṭān (שטן). The American mycologist Harry D. Thiers concluded that material from North America matches the species description; however, genetic testing has since confirmed that western North American collections represent Rubroboletus eastwoodiae, a different species.
Genetic analysis published in 2013 revealed that B. satanas and several other red-pored boletes are part of the "dupainii" clade (named after B. dupainii), and are distantly nested from the core group of Boletus (including B. edulis and relatives) within the Boletineae. This indicated that B. satanas and its relatives belonged to a distinct genus. The species was hence transferred to the new genus Rubroboletus in 2014, along with several allied red-pored, blue-staining bolete species. Genetic testing on several species of the genus revealed that R. satanas is most closely related to R. pulchrotinctus, a morphologically similar but much rarer species occurring in the Mediterranean regions.
Common names
Both Rubroboletus satanas and Suillellus luridus are known as ayimantari ('bear mushroom') in eastern Turkey.
Description
The compact cap can reach an impressive , extraordinarily , very rarely in diameter. At first it is hemispherical with an inrolled margin, but becomes convex at maturity as the fruit body expands, while in older specimens the margin might be slightly undulating. When young, the pileus is greyish white to silvery-white or buff, but older specimens tend to develop olivaceous, ochraceous or brownish tinges. The surface of the cap is finely tomentose, becoming smooth at maturity and is often slightly viscid in wet weather. The cuticle is tightly attached to the flesh and does not peel.
The free to slightly adnate tubes are up to long, pale yellow or greenish yellow and bluing when cut. The pores (tube mouths) are rounded, yellow to orange at first, but soon turning red from the point of their attachment to the stem outwards, eventually becoming entirely purplish red or carmine-red at full maturity and instantly bluing when touched or bruised. The stipe is , extraordinarily , very rarely long, distinctly bulbous (, extraordinarily , very rarely ), and often wider than its length, becoming more ventricose as the fungus expands but remaining bulbous at the base. Its colour is golden-yellow to orange at the apex, becoming increasingly pinkish-red to reddish-orange further down and deep carmine-red to purple-red towards the base. It is decorated in a fine, yellowish to reddish hexagonal net, sometimes confined to the upper half of the stipe. The flesh is thick, spongy and whitish, but may be yellow to straw-coloured in immature specimens and is sometimes reddish at the stem base. It slowly turns a faded blue colour when cut, bluing more intensely around the apex and above the tubes. The smell is weak and pleasantly musky in young fruit bodies, but becomes increasingly putrid in older specimens, reminiscent of carrion. Young specimens have a reportedly pleasant, nutty taste. The spore print is olivaceous green.
The spores are fusiform (spindle-shaped) when viewed under a microscope and measure 10–16 × 4.5–7.5 μm. The cap cuticle is composed of interwoven septate hyphae, which are often finely incrusted.
Similar species
Satan's bolete can be confused with a number of other species:
Rubroboletus rhodoxanthus is found predominantly on acidic soil, develops pinkish tinges of the cap, has a more or less cylindrical or clavate stipe with a very dense, well-developed net and lemon-yellow flesh that distinctly stains blue only in the cap when longitudinally sliced.
Rubroboletus legaliae is also acidophilous, has pinkish tinges on the cap, flesh that stains more extensively blue when cut and narrower spores, measuring 9–15 × 4–6 μm.
Rubroboletus pulchrotinctus has a variable cap colour often featuring a pinkish band at the margin; has a dull-coloured stipe without deep red tinges, pores that remain yellow or orange even in mature fruit bodies, and somewhat narrower spores, measuring 12–15 × 4.5–6 μm.
Rubroboletus rubrosanguineus is associated with spruce (Picea) or fir (Abies), has pinkish tinges on the cap and smaller spores, measuring 10–14.5 × 4–6 μm.
Caloboletus calopus is usually associated with coniferous trees, has pores that remain persistently yellow even in overripe fruit bodies, has a more slender, cylindrical or clavate stipe and narrower spores, measuring 11–16 × 4–5.5 μm.
Distribution and habitat
Rubroboletus satanas is widely distributed throughout the temperate zone, but is rare in most of its reported localities. In Europe, it mostly occurs in the southern regions and is rare or absent in northern countries. It fruits in the summer and early autumn in warm, broad-leaved and mixed forests, forming ectomycorrhizal associations with oak (Quercus) and sweet chestnut (Castanea), with a preference for calcareous (chalky) soils. Other frequently reported hosts are hornbeam (Carpinus), beech (Fagus) and lime and linden trees (Tilia).
In the United Kingdom, this striking bolete is found only in the south of England. It is rare in Scandinavia, occurring primarily on a few islands in the Baltic Sea where conditions are favourable, with highly calcareous soil. In the eastern Mediterranean region, it has been reported from the Bar'am Forest in the Upper Galilee region of northern Israel, as well as the island of Cyprus, where it is found in association with the narrow-endemic golden oak (Quercus alnifolia). It has further been documented in the Black Sea and eastern Anatolia regions of Turkey, as well as Crimea and Ukraine, with its distribution possibly extending as far south as Iran.
In the past, R. satanas had been reported from the United States, however, these sightings are instead of the closely related species Rubroboletus eastwoodiae.
Toxicity
This mushroom is moderately poisonous, especially if eaten raw. The symptoms, which are predominantly gastrointestinal in nature, include nausea, abdominal pain, and violent vomiting with bloody diarrhea that can last up to six hours.
The toxic enzyme bolesatine has been isolated from fruiting bodies of R. satanas and is implicated in the poisonings. Bolesatine is a protein synthesis inhibitor and, when given to mice, causes massive thrombosis. At lower concentrations, bolesatine is a mitogen, inducing cell division in human T lymphocytes. Muscarine has also been isolated from this fungus, but the quantities are believed to be far too small to cause toxic effects in humans. More recent studies have associated the poisoning caused by R. satanas with hyperprocalcitonemia, and classified it as a distinct syndrome among fungal poisonings.
Controversially, English mycologist John Ramsbottom reported in 1953 that R. satanas is consumed in certain parts of Italy and the former Czechoslovakia. In those regions, the fungus is reportedly eaten following prolonged boiling that may neutralise the toxins, though this has never been proven scientifically. Similar reports exist from the San Francisco Bay Area of the United States, but probably involve a different fungus misidentified as R. satanas. Ramsbottom speculated that there may be a regional variation in its toxicity, and conceded that the fungus may not be as poisonous as widely reported. Nevertheless, R. satanas is rarely sampled casually, not least because of the foul smell, which, in addition to the bright red colour and blue staining, makes this fungus unappealing for human consumption.
References
Poisonous fungi
satanas
Fungi of Europe
Fungi described in 1831
Fungus species | Rubroboletus satanas | [
"Biology",
"Environmental_science"
] | 2,006 | [
"Poisonous fungi",
"Fungi",
"Toxicology",
"Fungus species"
] |
5,407,984 | https://en.wikipedia.org/wiki/Ivan%20Moscovich | Ivan Moscovich (14 June 1926 – 21 April 2023) was a Yugoslav-Hungarian inventor, designer and commercial developer of puzzles, games, toys, and educational aids. He wrote many books and was internationally recognized in the toy industry as an innovative inventor.
Biography
Early life
Ivan Moscovich was born to Jewish Hungarian parents on 14 June 1926 in Novi Sad in the Yugoslav province of Vojvodina and had a sheltered, middle class childhood. His father, a professional painter, escaped into Yugoslavia after World War I and opened a photographic studio there which he named Photo Ivan after his son.
World War II and Nazi concentration camp prisoner
In 1941, Yugoslavia surrendered unconditionally to the Axis powers during World War II. Hungary occupied Vojvodina. In January 1942, Moscovich's 44 year old father was a victim of the Novi Sad massacre. In 1943, Hungary started secret armistice negotiations with the Allied Powers which was discovered by Germany resulting in the German occupation of Vojvodina. Soon after, at the age of 17, Moscovich was taken to the concentration camp at Auschwitz with his grandfather, grandmother, and mother. His grandparents were immediately taken to the crematoria and were killed. While his mother remained imprisoned at Auschwitz, Moscovich was sent to Wustegiersdorf, one of the surrounding work camps, and put to work laying rail lines.
In January 1945, Auschwitz was evacuated and Moscovich along with 60,000 prisoners marched west to Bergen-Belsen. After only a few days, Moscovich volunteered for a selection of 500 volunteer prisoner workers.
These volunteers were sent to clear the railway station in Hildesheim by dislodging the wagons to free the rails so they could be fixed and used for German transports. While there, several groups found food supplies including sugar, butter, and eggs. On 22 March Hildesheim was bombed, killing or wounding both prisoners and German guards. The volunteers were made to move the bodies for easier identification.
They were then marched to the Hannover-Ahlem prison camp. The prisoners worked in an asbestos mine converting it into an ammunition depot safe from aerial attacks. Hannover-Ahlem was evacuated on 6 April 1945 and Moscovich marched towards Bergen-Belsen again. Moscovich described the last days in Bergen-Belsen as “the ultimate in human misery, suffering, degradation, death and humiliation”. He hid himself among a pile of dead bodies to avoid the Germans.
British soldiers liberated Bergen-Belsen on 15 April 1945. Moscovich, who had endured four concentration camps and two forced work camps, was sent to Sweden for recuperation, and while there he was reunited with his mother who had been liberated from Mauthausen by American troops.
Life after World War II (1945–1952)
Moscovich got his first job in Yugoslavia when a friend in Tito’s Ministry of Transport offered him a position repairing Yugoslavia’s railway system which had been damaged during the war. The job required a large, untested German machine using high electrical wattage to weld rail lines. By 1947, Moscovich reported directly to the deputy minister.
Moscovich was given control over a squad of 50 German prisoners of war including some high-ranking German officers, some regular soldiers, some Wehrmacht, and some SS. Although he considered taking his revenge, Moscovich elected to increase their rations in order to increase their productivity. However, he never told them he was a camp survivor. After six months, Tito released the workers.
During the time he worked the position, he received a medal from Tito himself.
Later years
After finishing his university studies in mechanical engineering at the University of Belgrade, Moscovich emigrated to Israel, where he initially worked as a research scientist involved in the design of teaching materials, educational aids, and educational games.
Moscovich died on 21 April 2023, at the age of 96.
Artist
Moscovich's kinetic art and other art creations have been shown in major art exhibitions at the Institute of Contemporary Arts, the International Design Centrum in Berlin, and the Museo de Arte Moderno in Mexico City. He patented his harmonograph drawing device in 1967, and had a show of harmonographic art at the National Museum of Mathematics (MoMath) in New York City on 21 October 2021. In 2019, the Museum of Tolerance featured a retrospective of his work.
Publications
References
1926 births
2023 deaths
Auschwitz concentration camp prisoners
Bergen-Belsen concentration camp survivors
Recreational mathematicians
Toy designers
Yugoslav people of Hungarian descent
Yugoslav Jews
People from Novi Sad | Ivan Moscovich | [
"Mathematics"
] | 933 | [
"Recreational mathematics",
"Recreational mathematicians"
] |
5,408,291 | https://en.wikipedia.org/wiki/Pragmatic%20General%20Multicast | Pragmatic General Multicast (PGM) is a reliable multicast computer network transport protocol. PGM provides a reliable sequence of packets to multiple recipients simultaneously, making it suitable for applications like multi-receiver file-transfer.
Multicast is a network addressing method for the delivery of information to a group of destinations simultaneously using the most efficient strategy to deliver the messages over each link of the network only once, creating copies only when the links to the multiple destinations split (typically network switches and routers). However, like the User Datagram Protocol, multicast does not guarantee the delivery of a message stream. Messages may be dropped, delivered multiple times, or delivered out of order. A reliable multicast protocol, like PGM, adds the ability for receivers to detect lost and/or out-of-order messages and take corrective action (similar in principle to TCP), resulting in a gap-free, in-order message stream.
While TCP uses ACKs to acknowledge groups of packets sent (something that would be uneconomical over multicast), PGM uses the concept of negative acknowledgements (NAKs). A NAK is sent by unicast back towards the source via a defined network-layer hop-by-hop procedure whenever a receiver detects the loss of data with a specific sequence number. As PGM is heavily reliant on NAKs for integrity, when a NAK is sent, a NAK confirmation (NCF) is sent via multicast at every hop on the way back. Repair data (RDATA) is then sent back either from the source or from a Designated Local Repairer (DLR) at some point closer to the destination.
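The loss-detection and repair cycle can be sketched in a few lines of Python. This is only an illustration of the NAK/RDATA idea described above, not the RFC 3208 packet formats or state machine; the class, method names and sequence numbers are invented for the example.

    class PgmReceiverSketch:
        """Toy model of a PGM receiver: detect gaps, request repairs, deliver in order."""

        def __init__(self):
            self.next_expected = 0        # next sequence number to deliver
            self.buffer = {}              # out-of-order data held back, keyed by sequence number
            self.outstanding_naks = set()

        def on_odata(self, seq, payload):
            """Original data (ODATA) arriving from the multicast group."""
            self.buffer[seq] = payload
            if seq > self.next_expected:
                # Gap detected: ask the upstream hop (unicast) to repair the missing range.
                missing = set(range(self.next_expected, seq)) - self.buffer.keys()
                self.outstanding_naks |= missing
                return sorted(missing)    # sequence numbers to NAK
            return []

        def on_rdata(self, seq, payload):
            """Repair data (RDATA) answering an earlier NAK."""
            self.buffer[seq] = payload
            self.outstanding_naks.discard(seq)

        def deliver(self):
            """Hand contiguous, in-order data to the application."""
            out = []
            while self.next_expected in self.buffer:
                out.append(self.buffer.pop(self.next_expected))
                self.next_expected += 1
            return out

    rx = PgmReceiverSketch()
    rx.on_odata(0, "a")
    naks = rx.on_odata(2, "c")            # packet 1 was lost -> NAK [1]
    rx.on_rdata(1, "b")                   # repair arrives from the source or a DLR
    print(naks, rx.deliver())             # [1] ['a', 'b', 'c']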
PGM is an IETF experimental protocol. It is not yet a standard, but has been implemented in some networking devices and operating systems, including Windows XP and later versions of Microsoft Windows, as well as in third-party libraries for Linux, Windows and Solaris.
External links
https://tools.ietf.org/html/rfc3208
https://github.com/steve-o/openpgm/
https://web.archive.org/web/20110111200232/http://www.cisco.com/en/US/docs/ios/12_0t/12_0t5/feature/guide/pgmscale.html
Communications protocols | Pragmatic General Multicast | [
"Technology"
] | 492 | [
"Computing stubs",
"Computer standards",
"Communications protocols",
"Computer network stubs"
] |
5,408,457 | https://en.wikipedia.org/wiki/Quantum%20critical%20point | A quantum critical point is a point in the phase diagram of a material where a continuous phase transition takes place at absolute zero. A quantum critical point is typically achieved by a continuous suppression of a nonzero temperature phase transition to zero temperature by the application of a pressure, field, or through doping. Conventional phase transitions occur at nonzero temperature when the growth of random thermal fluctuations leads to a change in the physical state of a system. Condensed matter physics research over the past few decades has revealed a new class of phase transitions called quantum phase transitions which take place at absolute zero. In the absence of the thermal fluctuations which trigger conventional phase transitions, quantum phase transitions are driven by the zero point quantum fluctuations associated with Heisenberg's uncertainty principle.
Overview
Within the class of phase transitions, there are two main categories: at a first-order phase transition, the properties shift discontinuously, as in the melting of a solid, whereas at a second-order phase transition, the state of the system changes in a continuous fashion. Second-order phase transitions are marked by the growth of fluctuations on ever-longer length-scales. These fluctuations are called "critical fluctuations". At the critical point where a second-order transition occurs, the critical fluctuations are scale invariant and extend over the entire system. At a nonzero temperature phase transition, the fluctuations that develop at a critical point are governed by classical physics, because the characteristic energy of quantum fluctuations is always smaller than the characteristic Boltzmann thermal energy kBT.
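A rough back-of-the-envelope comparison of the two energy scales makes the point; the fluctuation frequency below is an arbitrary illustrative value, not a figure for any particular material.

    # Compare the quantum energy of a fluctuation mode, h_bar * omega, with the
    # thermal energy k_B * T. Quantum effects dominate only below the crossover
    # temperature T* = h_bar * omega / k_B; near a classical critical point the
    # fluctuations slow down (omega -> 0), so thermal physics always wins.
    h_bar = 1.0546e-34   # J s
    k_B = 1.3807e-23     # J/K

    omega = 1.0e12       # rad/s, an assumed fluctuation frequency
    T_star = h_bar * omega / k_B
    print(f"crossover temperature ~ {T_star:.1f} K")   # about 7.6 K for this omega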
At a quantum critical point, the critical fluctuations are quantum mechanical in nature, exhibiting scale invariance in both space and in time. Unlike classical critical points, where the critical fluctuations are limited to a narrow region around the phase transition, the influence of a quantum critical point is felt over a wide range of temperatures above the quantum critical point, so the effect of quantum criticality is felt without ever reaching absolute zero. Quantum criticality was first observed in ferroelectrics, in which the ferroelectric transition temperature is suppressed to zero.
A wide variety of metallic ferromagnets and antiferromagnets have been observed to develop quantum critical behavior when their magnetic transition temperature is driven to zero through the application of pressure, chemical doping or magnetic fields. In these cases, the properties of the metal are radically transformed by the critical fluctuations, departing qualitatively from the standard Fermi liquid behavior, to form a metallic state sometimes called a non-Fermi liquid or a "strange metal". There is particular interest in these unusual metallic states, which are believed to exhibit a marked preponderance towards the development of superconductivity. Quantum critical fluctuations have also been shown to drive the formation of exotic magnetic phases in the vicinity of quantum critical points.
Quantum critical endpoints
Quantum critical points arise when a susceptibility diverges at zero temperature. There are a number of materials (such as CeNi2Ge2) where this occurs serendipitously. More frequently a material has to be tuned to a quantum critical point. Most commonly this is done by taking a system with a second-order phase transition which occurs at nonzero temperature and tuning it—for example by applying pressure or magnetic field or changing its chemical composition. CePd2Si2 is such an example, where the antiferromagnetic transition which occurs at about 10K under ambient pressure can be tuned to zero temperature by applying a pressure of 28,000 atmospheres. Less commonly a first-order transition can be made quantum critical. First-order transitions do not normally show critical fluctuations as the material moves discontinuously from one phase into another. However, if the first order phase transition does not involve a change of symmetry then the phase diagram can contain a critical endpoint where the first-order phase transition terminates. Such an endpoint has a divergent susceptibility. The transition between the liquid and gas phases is an example of a first-order transition without a change of symmetry and the critical endpoint is characterized by critical fluctuations known as critical opalescence.
A quantum critical endpoint arises when a nonzero temperature critical point is tuned to zero temperature. One of the best studied examples occurs in the layered ruthenate metal Sr3Ru2O7 in a magnetic field. This material shows metamagnetism with a low-temperature first-order metamagnetic transition where the magnetization jumps when a magnetic field is applied parallel to the layers. The first-order jump terminates in a critical endpoint at about 1 kelvin. By switching the direction of the magnetic field so that it points almost perpendicular to the layers, the critical endpoint is tuned to zero temperature at a field of about 8 teslas. The resulting quantum critical fluctuations dominate the physical properties of this material at nonzero temperatures and away from the critical field. The resistivity shows a non-Fermi liquid response, the effective mass of the electron grows, and the magnetothermal expansion of the material is modified, all in response to the quantum critical fluctuations.
Notes
References
Quantum phases
Condensed matter physics | Quantum critical point | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,035 | [
"Quantum phases",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Matter"
] |
5,409,095 | https://en.wikipedia.org/wiki/Dead-beat%20control | In discrete-time control theory, the dead-beat control problem consists of finding what input signal must be applied to a system in order to bring the output to the steady state in the smallest number of time steps.
For an Nth-order linear system it can be shown that this minimum number of steps will be at most N (depending on the initial condition), provided that the system is null controllable (that is, it can be brought to the zero state by some input). The solution is to apply feedback such that all poles of the closed-loop transfer function are at the origin of the z-plane. This approach is straightforward for linear systems. However, when it comes to nonlinear systems, dead-beat control remains an open research problem.
Usage
The sole design parameter in dead-beat control is the sampling period. Since the error goes to zero within N sampling periods, the settling time is at most Nh, where h is the sampling period.
Also, the magnitude of the control signal increases significantly as the sampling period decreases. Thus, careful selection of the sampling period is crucial when employing this control method.
Finally, since the controller is based upon cancelling plant poles and zeros, these must be known precisely, otherwise the controller will not be deadbeat.
Transfer function of dead-beat controller
Consider that a plant has the transfer function

G_p(z) = B(z) / A(z)

where A(z) and B(z) are the denominator and numerator polynomials of the plant in the z-domain. The transfer function of the corresponding dead-beat controller is

G_c(z) = A(z) / (B(z) · (z^d − 1))

where d is the minimum necessary system delay for the controller to be realizable. For example, systems with two poles (and no zeros) must have at minimum a two-step delay from controller to output, so d = 2.

The closed-loop transfer function is

G_cl(z) = G_c(z) G_p(z) / (1 + G_c(z) G_p(z)) = 1 / z^d

and has all poles at the origin.
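The same pole-placement idea can be checked numerically. The sketch below uses a state-space formulation rather than the transfer-function form above: all closed-loop poles are placed at the origin with state feedback computed by Ackermann's formula, so the state of the assumed second-order plant (a discretized double integrator, chosen arbitrarily for the example) reaches zero in at most two steps.

    import numpy as np

    # Assumed example plant: double integrator discretized with a sampling period of 1.
    A = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    B = np.array([[0.5],
                  [1.0]])

    n = A.shape[0]
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])  # [B, A@B]
    e_n = np.zeros(n); e_n[-1] = 1.0
    last_row = np.linalg.solve(ctrb.T, e_n)       # last row of the inverse controllability matrix
    # Ackermann's formula with desired characteristic polynomial z^n (all poles at the origin):
    K = (last_row @ np.linalg.matrix_power(A, n)).reshape(1, n)

    x = np.array([[1.0], [-1.0]])                 # arbitrary initial state
    for k in range(4):
        print("step", k, "state", x.ravel())
        u = -K @ x                                # dead-beat state feedback
        x = A @ x + B @ u                         # A - B@K is nilpotent, so x = 0 within n steps

Running this prints a state of [0, 0] from step 2 onwards, matching the at-most-N-steps property for this N = 2 plant.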
Notes
References
Kailath, Thomas: Linear Systems, Prentice Hall, 1980,
Warwick, Kevin: Adaptive dead beat control of stochastic systems, International Journal of Control, 44(3), 651-663, 1986.
Control theory | Dead-beat control | [
"Mathematics"
] | 389 | [
"Mathematical analysis",
"Mathematical analysis stubs",
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
5,409,292 | https://en.wikipedia.org/wiki/Isaac%20Lee%20Patterson%20Bridge | The Isaac Lee Patterson Bridge, also known as the Rogue River Bridge and the Isaac Lee Patterson Memorial Bridge, is a concrete arch bridge that spans the Rogue River in Curry County, Oregon. The bridge was constructed by the Mercer Fraser Company of Eureka, California. The bridge carries U.S. Route 101 across the river, near the point where the river empties into the Pacific Ocean, and connects the towns of Gold Beach and Wedderburn. A bridge with strong Art Deco influences, the Isaac Lee Patterson Bridge is a prominent example of the designs of the Oregon bridge designer and highway engineer Conde McCullough. It was designated a National Historic Civil Engineering Landmark by the American Society of Civil Engineers in 1982. It is part of a series of notable bridges designed by McCullough for the Oregon Coast Highway in the 1930s. It was placed on the National Register of Historic Places in 2005.
History
The Oregon State Highway Department awarded the $568,181.00 ($ in dollars) construction contract to the Mercer, Fraser Company of Eureka, California, on January 16, 1930. Work began on the bridge at Gold Beach in April 1930. In order to avoid problems with concrete shrinkage that had plagued concrete arch bridges in the past, McCullough used the Freyssinet method of pre-tensioning the arches during construction with hydraulic jacks, employing sixteen 250-ton jacks from Freyssinet's firm, enough to work with two arch panels at a time. McCullough's design was the first usage of this technique in the United States. The remote location of the building site presented a significant challenge, with reinforcing steel shipped southward from Port Orford and a concrete plant built on the north bank of the river. Pilings for the piers were obtained locally. The bridge was planned to open in January 1932, but the ferry Rogue was damaged in December 1931 flooding and the bridge opened early, on December 24, 1931. It was dedicated on May 28, 1932, and named after Isaac Lee Patterson, the governor of Oregon from 1927 to 1929. The Mercer-Fraser Company presented the new bridge to the State on January 21, 1932, and the bridge was officially accepted as complete on January 27, 1932, at a final cost of $592,725.56.
Description
The bridge is long and consists of seven deck arch spans and nine deck girder sections. The roadbed is wide, and the structure is wide overall. Piers 1 and 8, at the ends, rest on solid rock. The intermediate piers rest on driven timber pilings. Piers 2, 4, 5, and 7 rest on 180 vertical piles, while piers 3 and 6, required to resist lateral thrust, have 260 piles driven at an angle.
The detailing of the bridge incorporates Art Deco motifs, with prominent pylons at the ends with stepped Moderne elements and stylized Palladian windows crowned by sunbursts. The railings use a simplified, rectilinear Tuscan order with arches on short ribbed columns.
The bridge has required extensive preventive maintenance to mitigate deterioration due to the location's salt air. A $20 million rehabilitation ran from 2001 to 2004. A previous project in 1976 mitigated scouring problems at pier 2.
Construction of the bridge required the excavation of of earth and consumed of piling, of concrete, of reinforcing steel, and of structural steel.
Designation
The Isaac Lee Patterson Bridge was placed on the National Register of Historic Places on August 5, 2005.
Further reading
See also
List of bridges documented by the Historic American Engineering Record in Oregon
List of bridges on U.S. Route 101 in Oregon
List of bridges on the National Register of Historic Places in Oregon
References
External links
Road bridges on the National Register of Historic Places in Oregon
Bridges completed in 1932
Open-spandrel deck arch bridges in the United States
U.S. Route 101
Historic Civil Engineering Landmarks
Concrete bridges in Oregon
Art Deco architecture in Oregon
National Register of Historic Places in Curry County, Oregon
Transportation buildings and structures in Curry County, Oregon
Historic American Engineering Record in Oregon
Bridges by Conde McCullough
Bridges of the United States Numbered Highway System
Gold Beach, Oregon | Isaac Lee Patterson Bridge | [
"Engineering"
] | 844 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
5,409,651 | https://en.wikipedia.org/wiki/Pecking%20order%20theory | In corporate finance, the pecking order theory (or pecking order model) postulates that
"firms prefer to finance their investments internally, using retained earnings, before turning to external sources of financing such as debt or equity" - i.e. there is a “pecking order” when it comes to financing decisions.
The theory was first suggested by Gordon Donaldson in 1961
and was modified by Stewart C. Myers and Nicolas Majluf in 1984.
Theory
The theory assumes asymmetric information, and that the firm's financing decision constitutes a signal to the market. Under the theory, managers know more about their company's prospects, risks and value than outside investors; see efficient market hypothesis.
This asymmetry affects the choice between internal and external financing and between the issue of debt or equity:
companies prioritize their sources of financing, first preferring internal financing, and then debt, with equity financing seen as a "last resort".
Here, the issue of debt signals the board's confidence that an investment is profitable and that the current stock price is undervalued, which militates against issuing shares at these levels.
The issue of equity, on the other hand, would signal some lack of confidence, or at least that the share is over-valued. An issue of equity may then lead to a drop in share price.
(This does not however apply to high-tech industries where the issue of equity is preferable, due to the high cost of debt issue as assets are intangible.)
Other, more practical considerations include the fact that issue costs are lowest for internal funds, low for debt, and highest for equity.
Further, issuing shares means 'bringing external ownership' into the company, leading to stock dilution.
The pecking order theory may explain the inverse relationship between profitability and debt ratios;
and, in that dividends are a use of capital, the theory also links to the firm's dividend policy.
In general, internally generated cash flow may exceed required capital expenditures, and at other times will fall short.
Thus when profitable, since firms prefer internal financing, the firm will pay off debt, leading to a reduction in the ratio.
When profit or cashflow falls short, rather than relying on external financing, the firm first draws down its cash balance or sells its marketable securities.
Coupled with this is the fact that the larger the dividend paid, the less funds are available for reinvestment, and the more the company will have to rely on external financing to fund its investments. Thus the dividend payout ratio may also "adapt" to the firm's investment opportunities and current cash levels.
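As a rough numerical illustration of the ordering described above (the figures and the function are invented for the example, not drawn from any study), the financing deficit left after internal funds can be filled first with debt and only then with new equity:

    def pecking_order_financing(cash_flow, capex, dividends, cash_balance, debt_capacity):
        """Split a financing need across the pecking order: internal funds, then debt, then equity."""
        deficit = capex + dividends - cash_flow          # funds still needed after operations
        from_internal = min(max(deficit, 0.0), cash_balance)
        remaining = max(deficit - from_internal, 0.0)
        from_debt = min(remaining, debt_capacity)
        from_equity = remaining - from_debt              # equity only as a last resort
        return {"internal": from_internal, "debt": from_debt, "equity": from_equity}

    # Example: cash flow 80, investment 100, dividends 10 -> deficit of 30,
    # met with 20 of cash on hand and 10 of new debt; no shares are issued.
    print(pecking_order_financing(cash_flow=80, capex=100, dividends=10,
                                  cash_balance=20, debt_capacity=50))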
Evidence
Tests of the pecking order theory have not been able to show that it is of first-order importance in determining a firm's capital structure. However, several authors have found that there are instances where it is a good approximation of reality. Zeidan, Galil and Shapir (2018) document that owners of private firms in Brazil follow the pecking order theory, and also Myers and Shyam-Sunder (1999) find that some features of the data are better explained by the pecking order than by the trade-off theory. Frank and Goyal show, among other things, that pecking order theory fails where it should hold, namely for small firms where information asymmetry is presumably an important problem.
See also
Capital structure substitution theory
Cost of capital
Market timing hypothesis
Trade-off theory of capital structure
References
Corporate finance
Asymmetric information
Debt
Finance theories | Pecking order theory | [
"Physics"
] | 719 | [
"Asymmetric information",
"Symmetry",
"Asymmetry"
] |
5,409,892 | https://en.wikipedia.org/wiki/Woodboring%20beetle | The term woodboring beetle encompasses many species and families of beetles whose larval or adult forms eat and destroy wood (i.e., are xylophagous). In the woodworking industry, larval stages of some are sometimes referred to as woodworms. The three most species-rich families of woodboring beetles are longhorn beetles, bark beetles and weevils, and metallic flat-headed borers. Woodboring is thought to be the ancestral ecology of beetles, and bores made by beetles in fossil wood extend back to the earliest fossil record of beetles in the Early Permian (Asselian), around 295-300 million years ago.
Ecology
Woodboring beetles most often attack dying or dead trees. In forest settings, they are important in the turnover of trees by culling weak trees, thus allowing new growth to occur. They are also important as primary decomposers of trees within forest systems, allowing for the recycling of nutrients locked away in the relatively decay-resilient woody material of trees. To develop and reach maturity, woodboring beetles need nutrients provided by fungi from outside of the inhabited wood. These nutrients are not only assimilated into the beetles' bodies but also are concentrated in their frass, contributing to soil nutrient cycles. Though the vast majority of woodboring beetles are ecologically important and economically benign, some species can become economic pests by attacking relatively healthy trees (e.g. Asian longhorn beetle, emerald ash borer) or by infesting downed trees in lumber yards. Species such as the Asian longhorn beetle and the emerald ash borer are examples of invasive species that threaten natural forest ecosystems.
Invasion and control
Woodboring beetles are commonly detected a few years after new construction. The lumber supply may have contained wood infected with beetle eggs or larvae, and since beetle life cycles can be one or more years, several years may pass before the presence of beetles becomes noticeable. In many cases, the beetles will be of a type that only attacks living wood, and thus incapable of "infesting" any other pieces of wood, or doing any further damage.
Genuine infestations are far more likely in areas with high humidity, such as poorly ventilated crawl spaces. Housing with central heating/air-conditioning tends to cut the humidity of wood in the living areas to less than half of natural humidity, thus strongly reducing the likelihood of an infestation. Some species will infest furniture.
Some beetles invade wood used in construction and furniture making; others limit their activity to forests or roots of living trees. The following lists some of those beetles that are house pests.
Ambrosia beetle
Common furniture beetle
Deathwatch beetle
Flat-headed wood-borer
Powderpost beetle (Ptinidae, Bostrichidae)
Old-house borer
See also
Bark beetles and weevils
Carpenter ants
Longhorn beetles
Metallic flat-headed borers
Termites
Wood ants
References
External links
Building defects
Insect ecology | Woodboring beetle | [
"Materials_science"
] | 608 | [
"Mechanical failure",
"Building defects"
] |
5,410,719 | https://en.wikipedia.org/wiki/Wood%20wool | Wood wool, known primarily as excelsior in North America, is a product made of wood slivers cut from logs. It is mainly used in packaging, for cooling pads in home evaporative cooling systems known as swamp coolers, for erosion control mats, and as a raw material for the production of other products such as bonded wood wool boards. In the past it was used as stuffing, or padding, in upholstery, or to fill stuffed toys. It is also sometimes used by taxidermists to construct the armatures of taxidermy mounts.
History
A different product was once known as "wood wool", as well as "pine needle-wool", or "pine wood-wool". According to E. Littell, it was produced in Breslau, Silesia (today Wrocław, Poland) by von Pannewich, who mentioned that in 1842 five hundred counterpanes made of it were purchased for a hospital in Vienna. The process was chemical and made use of the leaves (needles) of Scots Pine.
In England, yet another product known as wood wool was produced by the chemical breakdown of wood strips by means of sulphurous acid, for use in such applications as absorbent material in surgical dressings. Another application of this product was use in sanitary towels, as shown in advertisements from 1885 to 1892 in Britain for "wood wool diapers" or "sanitary wood wool sheets". European "wood wool" was known in America in the late nineteenth century as being distinctly different from excelsior.
Fifteen US patents related to "slivering machines" for producing the small wood shreds "known as excelsior" were listed by 1876.
The earliest, a machine for "Manufacturing wood to be used as a substitute for curled hair in stuffing beds" was patented in the US in 1842; however, the product had no specific name when the process was first patented.
The 1868 patent, "Improved capillary material for filling gas and air carburettors", was for a new use for "fibres torn from the wood by suitable machinery" to be "sold and used as filling for mattresses, its commercial name being 'excelsior'." This is the earliest description of the material by this name cited by the Oxford English Dictionary, though the term "excelsior mattress" had appeared in print as early as 1856.
In 1906, the now-common use of wood wool in the cooling pads of evaporative coolers appeared in a patent that stated, "I have found that excelsior makes a very cheap and good material for this purpose."
In the beginning of the 20th century wood wool was used as a raw material for producing wood wool panels in Europe, especially in Austria. By 1930, wood wool cement boards were being widely produced.
In the 21st century, wood wool appears in numerous patents for erosion control and sediment control methods and devices; for example, the 2006 "Sediment control device and system". A few late-twentieth-century patents on these uses refer to "excelsior/wood wool".
Terminology
In the United States the term wood wool is reserved for finer grades. The US Forest Service stated in 1948 and 1961 that, "In this country the product has no other general name, but in most other countries all grades of excelsior are known as wood wool. In the United States the name wood wool is reserved for only a small proportion of the output consisting of certain special grades of extra thin and narrow stock."
The US Standard Industrial Classification Index SIC is 2429 for the product "Wood wool (excelsior)". The same term is used by the United States for the external trade number under which wood wool is monitored: HTS Number: 4405.00.00 Description: Wood wool (excelsior); wood flour.
The number 4405.00 is applied to wood wool by the World Customs Organization in the Harmonized Commodity Description and Coding System (HS).
Grades and classifications
The 1973 US Federal Government procurement specification PPP-E-911, cancelled in 1991, categorized "wood excelsior" products according to the following table of terms and dimensions:
Properties
Wood wool fibers can be compressed and when the pressure is removed they resume their initial volume. This is a useful property for minimizing their volume when shipping. Due to its high volume and large surface area, wood wool can be used for applications where water or moisture retention is necessary. The width of wood wool fibers varies from , while their length is usually around 500 mm (depending on the production process).
In the UK there are specifications for dimensions, pH, moisture content and freedom from dust and small pieces, set by British Standard BS 2548 for wood wool for general packaging purposes. This standard was originally issued in 1954 and subsequently re-issued in 1986.
When these fibers are bonded with cement or magnesite, bonded wood wool boards are produced. Slabs of bonded wood wool are considered environmentally friendly construction and insulation materials because they do not contain organic binders.
Production
Wood wool is cut from "bolts" (round, halved, quartered, or otherwise split logs) of poplar (for example aspen), pine, spruce or eucalyptus. For evaporative cooler pads, the dominant source is the aspen.
Wood wool can be produced in either horizontal or vertical shredding machines.
A possible further processing option is washing, which removes dust. Wood-wool processing may involve drying to reduce moisture in compliance with local requirements, as in the UK.
Applications
Wood wool has many applications; examples include:
Restoration of antique buggy seats, and furniture.
A packaging material for cushioning, such as Fibrenap (Cushy Pads). In the early 20th century, wood wool was packed around paleontological specimens to minimize damage during shipment.
A stuffing for plush toys or for real animals in taxidermy. It was traditionally used in stuffing Teddy bears and still is for the muzzles of some collectible bears.
Cooling pads in home evaporative cooler systems known as "swamp coolers".
Bedding for animals and their cages. Wood wool serves to cushion the animals while providing some warmth and absorbing waste. For example, it is found in dairies, in hutches and in cardboard boxes when shipping day-old poultry within the United States.
When dyed green, the material can be used as an artificial grass in Easter egg baskets. This was popular before the prevalence of plastics.
Mats and blankets for erosion control.
A material used in the production of cement-bonded wood wool boards.
When banded into a bale form, it is used as an archery backstop, comparable to how a straw bale would be used for the same purpose. If protected from the elements, a wood wool archery backstop can last for many years. If sections of it wear down because of repeated targeting, the bale can be soaked liberally since it then expands and holds water, just like a dry sponge.
Garden mulches and as a growing medium for hydroponic gardens.
References
Biodegradable materials
Fibers
Wood products | Wood wool | [
"Physics",
"Chemistry"
] | 1,449 | [
"Biodegradation",
"Biodegradable materials",
"Materials",
"Matter"
] |
5,410,977 | https://en.wikipedia.org/wiki/Phencyclidine%20%28data%20page%29 |
References
Chemical data pages
Chemical data pages cleanup | Phencyclidine (data page) | [
"Chemistry"
] | 10 | [
"Chemical data pages",
"nan"
] |
5,411,159 | https://en.wikipedia.org/wiki/Low-affinity%20nerve%20growth%20factor%20receptor | The p75 neurotrophin receptor (p75NTR) was first identified in 1973 as the low-affinity nerve growth factor receptor (LNGFR), before the discovery that p75NTR binds other neurotrophins as well as it binds nerve growth factor. p75NTR is a neurotrophic factor receptor. Neurotrophic factor receptors bind neurotrophins, including Nerve growth factor, Neurotrophin-3, Brain-derived neurotrophic factor, and Neurotrophin-4. All neurotrophins bind to p75NTR. This also includes the immature pro-neurotrophin forms. Neurotrophic factor receptors, including p75NTR, are responsible for ensuring a proper density-to-target ratio of developing neurons, refining broader maps in development into precise connections. p75NTR is involved in pathways that promote both neuronal survival and neuronal death.
Receptor family
p75NTR is a member of the tumor necrosis factor receptor superfamily. p75NTR/LNGFR was the first member of this large family of receptors to be characterized, that now contains about 25 receptors, including tumor necrosis factor 1 (TNFR1) and TNFR2, Fas, RANK, and CD40.
All members of the TNFR superfamily contain structurally related cysteine-rich modules in their ECDs. p75NTR is an unusual member of this family due to its propensity to dimerize rather than trimerize, because of its ability to act as a tyrosine kinase co-receptor, and because the neurotrophins are structurally unrelated to the ligands, which typically bind TNFR family members. Indeed, with the exception of p75NTR, essentially all members of the TNFR family preferentially bind structurally related trimeric Type II transmembrane ligands, members of the TNF ligand superfamily.
Structure
p75NTR is a type I transmembrane protein, with a molecular weight of 75 kDa, determined by glycosylation through both N- and O-linkages in the extracellular domain.
It consists of an extracellular domain, a transmembrane domain and an intracellular domain. The extracellular domain consists of a stalk domain connecting the transmembrane domain and four cysteine-rich repeat domains, CRD1, CRD2, CRD3, and CRD4, which are negatively charged, a property that facilitates neurotrophin binding. The intracellular part is a globular domain, known as a death domain, which consists of two sets of perpendicular helices arranged in sets of three; it is connected to the transmembrane domain through a flexible linker region. Notably, in contrast to the type I death domain found in other TNFR proteins, the type II intracellular death domain of p75NTR does not self-associate. This was an early indication that p75NTR does not signal death through the same mechanism as the TNFR death domains, although the ability of the p75NTR death domain to activate other second messengers is conserved.
The p75ECD-binding interface to NT-3 can be divided into three main contact sites (two in the case of NGF) that are stabilized by hydrophobic interactions, salt bridges, and hydrogen bonds. The junction regions between CRD1 and CRD2 form site 1, which contains five hydrogen bonds and one salt bridge. Site 2 is formed by equal contributions from CRD3 and CRD4 and involves two salt bridges and two hydrogen bonds. Site 3, in CRD4, includes only one salt bridge.
Function
Interactions with neurotrophins
Neurotrophins that interact with p75NTR include NGF, NT-3, BDNF, and NT-4/5. Neurotrophins activating p75NTR may initiate apoptosis (for example, via c-Jun N-terminal kinase signaling and subsequent activation of p53, Bax-like proteins and caspases). This effect can be counteracted by anti-apoptotic signaling by TrkA.
Neurotrophin binding to p75NTR, in addition to apoptotic signaling, can also promote neuronal survival (for example, via NF-kB activation). There are multiple targets of Akt that could play a role in mediating p75NTR-dependent survival, but one of the more intriguing possibilities is that Akt-induced phosphorylation of IκB kinase 1 (IKK1) plays a role in the induction of NF-kB.
Interactions with proneurotrophins
Proforms of NGF and BDNF (proNGF and proBDNF) are precursors to NGF and BDNF. proNGF and proBDNF interact with p75NTR and cause p75NTR-mediated apoptosis without activating TrkA-mediated survival mechanisms. Cleavage of proforms into mature Neurotrophins allows the mature NGF and BDNF to activate TrkA-mediated survival mechanisms.
Sensory development
Recent research has suggested a number of roles for the LNGFR, including in development of the eyes and sensory neurons, and in repair of muscle and nerve damage in adults. Two distinct subpopulations of Olfactory ensheathing glia have been identified with high or low cell surface expression of low-affinity nerve growth factor receptor (p75).
Interactions with other receptors
Sortilin
Sortilin is required for many apoptosis-promoting p75NTR reactions, functioning as a co-receptor for the binding of neurotrophins such as BDNF. Pro-neurotrophins (such as proBDNF) bind especially well to p75NTR when sortilin is present.
Crosstalk with Trk receptors
When p75NTR initiates apoptosis, NGF binding to Tropomyosin receptor kinase A (TrkA) can negate p75NTR apoptotic effects. p75NTR c-Jun kinase pathway activation (which causes apoptosis) is suppressed when NGF binds to TrkA. p75NTR activation of NF-kB, which promotes survival, is unaffected by NGF binding to TrkA.
Nogo-66 receptor (NgR1)
p75NTR functions in a complex with the Nogo-66 receptor (NgR1) to mediate RhoA-dependent inhibition of growth of regenerating axons exposed to inhibitory proteins of CNS myelin, such as Nogo, MAG or OMgp. Without p75NTR, OMgp can activate RhoA and inhibit CNS axon regeneration. Coexpression of p75NTR and OMgp suppresses RhoA activation. A complex of NgR1, p75NTR and LINGO1 can activate RhoA.
p75NTR-mediated signaling pathways
NF-kB activation
NF-kB is a transcription factor that can be activated by p75NTR. Nerve growth factor (NGF) is a neurotrophin that promotes neuronal growth, and, in the absence of NGF, neurons die. Neuronal death in the absence of NGF can be prevented by NF-kB activation. In the resting state, the inhibitor protein IκB binds NF-kB and holds it inactive; phosphorylation of IκB by IκB kinase marks IκB for degradation, and the released NF-kB continues to the nucleus to initiate pro-survival transcription. NF-kB also promotes neuronal survival in conjunction with NGF.
NF-kB activity is activated by p75NTR, and is not activated via Trk receptors. NF-kB activity does not affect brain-derived neurotrophic factor's promotion of neuronal survival.
RhoGDI and RhoA
p75NTR serves as a regulator for actin assembly. Ras homolog family member A (RhoA) causes the actin cytoskeleton to become rigid which limits growth cone mobility and inhibits neuronal elongation in the developing nervous system. p75NTR without a ligand bound activates RhoA and limits actin assembly, but neurotrophin binding to p75NTR can inactivate RhoA and promote actin assembly. p75NTR associates with the Rho GDP dissociation inhibitor (RhoGDI), and RhoGDI associates with RhoA. Interactions with Nogo can strengthen the association between p75NTR and RhoGDI. Neurotrophin binding to p75NTR inhibits the association of RhoGDI and p75NTR, thereby suppressing RhoA release and promoting growth cone elongation (inhibiting RhoA actin suppression).
JNK signaling pathway
Neurotrophin binding to p75NTR activates the c-Jun N-terminal kinases (JNK) signaling pathway causing apoptosis of developing neurons. JNK, through a series of intermediates, activates p53 and p53 activates Bax which initiates apoptosis. TrkA can prevent p75NTR-mediated JNK pathway apoptosis.
JNK-Bim-EL signaling pathway
JNK can directly phosphorylate Bim-EL, a splicing isoform of Bcl-2 interacting mediator of cell death (Bim), which activates Bim-EL apoptotic activity. JNK activation is required for apoptosis but c-jun, a protein in the JNK signaling pathway, is not always required.
Caspase-dependent signaling
LNGFR also activates a caspase-dependent signaling pathway that promotes developmental axon pruning, and axon degeneration in neurodegenerative disease.
In the apoptosis pathway, members of the TNF receptor superfamily assemble a death-inducing signaling complex (DISC) in which TRADD or FADD bind directly to the receptor's death domain, thereby allowing aggregation and activation of Caspase 8 and subsequent activation of the Caspase cascade. However, Caspase 8 induction does not appear to be involved in p75NTR-mediated apoptosis, but Caspase 9 is activated during p75NTR-mediated killing.
Role in disease
Huntington's disease
Huntington's disease is characterized by cognitive impairments. There is increased expression of p75NTR in the hippocampus of Huntington's disease patients (including mouse models and humans). Overexpression of p75NTR in mice causes cognitive impairments similar to those of Huntington's disease. p75NTR is linked to reduced numbers of dendritic spines in the hippocampus, likely through p75NTR interactions with the transforming protein RhoA. Modulating p75NTR function could be a future direction in treating Huntington's disease.
Amyotrophic lateral sclerosis
Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease characterized by progressive muscular paralysis reflecting degeneration of motor neurons in the primary motor cortex, corticospinal tracts, brainstem and spinal cord. In one study using the superoxide dismutase 1 (SOD1) mutant mouse, an ALS model which develops severe neurodegeneration, the expression of p75NTR correlated with the extent of degeneration, and p75NTR knockdown delayed disease progression.
Alzheimer's disease
Alzheimer's disease (AD) is the most common cause of dementia in the elderly. AD is a neurodegenerative disease characterized by the loss of cognitive functioning - thinking, remembering and reasoning - and behavioral abilities to such an extent that it interferes with a person's daily life and activities. The neuropathological hallmarks of AD include amyloid plaques and neurofibrillary tangles, which lead to neuronal death. Studies in animal models of AD have shown that p75NTR contributes to amyloid β-induced neuronal damage. In humans with AD, increases in p75NTR expression relative to TrkA have been suggested to be responsible for the loss of cholinergic neurons. Increases in proNGF in AD indicate that the neurotrophin environment is favorable for p75NTR/sortilin signaling and support the theory that age-related neural damage is facilitated by a shift toward proNGF-mediated signaling. A recent study found that activation of Ngfr signaling in astroglia of an Alzheimer's disease mouse model enhanced neurogenesis and reduced two hallmarks of Alzheimer's disease. This study also found that NGFR signaling in humans is age-related and correlates with proliferative potential of neural progenitors.
Role in cancer stem cells
p75NTR has been implicated as a marker for cancer stem cells in melanoma and other cancers. Melanoma cells transplanted into an immunodeficient mouse model were shown to require expression of CD271 in order to grow a melanoma. Gene knockdown of CD271 has also been shown to abolish neural crest stem cell properties of melanoma cells and decrease genomic stability, leading to reduced migration, tumorigenicity and proliferation and to the induction of apoptosis. Furthermore, increased levels of CD271 were observed in brain metastatic melanoma cells, whereas resistance to the BRAF inhibitor vemurafenib supposedly selects for highly malignant brain- and lung-metastasizing melanoma cells. Recently, expression of p75NTR (NGFR) was associated with progressive intracranial disease in melanoma patients.
Interactions
Low-affinity nerve growth factor receptor has been shown to interact with:
FSCN1,
MAGEH1,
NDN,
NGFRAP1
NGF,
PRKACB,
TRAF2, and
TRAF4.
Nogo-66 receptor
c-Jun N-terminal kinases
RhoA
Rho GDP dissociation inhibitor (RhoGDI)
NF-kB
Neurotrophin-3
Brain-derived neurotrophic factor
Neurotrophin-4
References
Further reading
External links
Clusters of differentiation
TNF receptor family
Neurochemistry | Low-affinity nerve growth factor receptor | [
"Chemistry",
"Biology"
] | 2,983 | [
"Biochemistry",
"Neurochemistry"
] |
5,411,217 | https://en.wikipedia.org/wiki/Hylotrupes | Hylotrupes is a monotypic genus of woodboring beetles in the family Cerambycidae, the longhorn beetles. The sole species, Hylotrupes bajulus, is known by several common names, including house longhorn beetle, old house borer, and European house borer. In South Africa it also is known as the Italian beetle because of infested packing cases that had come from Italy. Hylotrupes is the only genus in the tribe Hylotrupini.
Distribution
This species originated in Europe and, having been spread in timber and wood products, now has a practically cosmopolitan distribution, including Southern Africa, Asia, the Americas, Australia, and much of Europe and the Mediterranean.
Description
Hylotrupes bajulus can reach a body length of about , while mature larvae can reach . These beetles are brown to black, appearing grey because of a fine grey furriness on most of the upper surface. On the pronotum, two conspicuously hairless tubercles are characteristic of the species. On the elytra there are usually two whitish pubescent spots. Females do not have a true ovipositor, only a somewhat elongated telson. The species can be considered polymorphic, showing extreme variability both in size and in appearance. In small specimens the pubescent spots on the elytra disappear almost completely, and the legs and antennae turn to a reddish color.
Biology
Adults are most active in the summer (June–September). Only the larvae feed on the wood, with a preference for dead wood of pines (Pinus), fir, spruce (Picea abies), Araucaria and Pseudotsuga species. Ecologically it can be quite important as a scavenger of dead pine trees, pine fence posts, and similar objects, hastening their decay and collapse. The life cycle from egg to beetle typically takes two to ten years, depending on the type of wood, its age and quality, its moisture content, and also depending on environmental conditions such as temperature. Larvae usually pupate just beneath the wood surface and eclose in mid to late summer. Once the exoskeleton of the newly emerged adult beetle has hardened sufficiently, the adults cut oval exit holes 6–10 mm (¼ to 3/8 in) in diameter, typically leaving coarse, powdery frass in the vicinity of the hole.
Hylotrupes bajulus preferentially attacks freshly produced sapwood of softwood timber. Contrary to the name "old-house borer", the species is more often found in new houses, possibly because the beetles are attracted to the higher resin content of wood harvested within the previous ten years. If old wood is attacked, the damage is usually greater: as the nutrient content of wood decreases with age, the larvae have to consume larger amounts of wood.
In Australia the infestation of home construction is mainly caused by the use of wood already infested with the eggs or larvae of the beetles if the wood is not properly kiln-dried in production.
Gallery
References
Woodboring beetles
Callidiini
Building defects
Household pest insects
Monotypic Cerambycidae genera
Insects in culture | Hylotrupes | [
"Materials_science"
] | 650 | [
"Mechanical failure",
"Building defects"
] |
5,411,259 | https://en.wikipedia.org/wiki/Antarafacial%20and%20suprafacial | In organic chemistry, antarafacial (Woodward-Hoffmann symbol a) and suprafacial (s) are two topological concepts describing the relationship between two simultaneous bond-making and/or bond-breaking processes in or around a reaction center. The reaction center can be a p- or spn-orbital (Woodward-Hoffmann symbol ω), a conjugated system (π) or even a sigma bond (σ).
The relationship is antarafacial when opposite faces of the π system or isolated orbital are involved in the process (think anti). For a σ bond, it corresponds to involvement of one "interior" lobe and one "exterior" lobe of the bond.
The relationship is suprafacial when the same face of the π system or isolated orbital are involved in the process (think syn). For a σ bond, it corresponds to involvement of two "interior" lobes or two "exterior" lobes of the bond.
The components of all pericyclic reactions, including sigmatropic reactions, cycloadditions, and electrocyclizations, can be classified as either suprafacial or antarafacial, and this determines the stereochemistry. In particular, antarafacial topology corresponds to inversion of configuration for the carbon atom of a [1,n]-sigmatropic rearrangement, and conrotation for electrocyclic ring closure, while suprafacial corresponds to retention and disrotation.
An example is the [1,3]-hydride shift, in which the interacting frontier orbitals are those of the allyl free radical and the hydrogen 1s orbital. The suprafacial shift is symmetry-forbidden because orbitals with opposite algebraic signs overlap. The symmetry-allowed antarafacial shift would require a strained transition state and is also unlikely. In contrast, a symmetry-allowed and suprafacial [1,5]-hydride shift is a common event.
References
Stereochemistry | Antarafacial and suprafacial | [
"Physics",
"Chemistry"
] | 416 | [
"Stereochemistry",
"Space",
"Stereochemistry stubs",
"nan",
"Spacetime"
] |
5,411,305 | https://en.wikipedia.org/wiki/Tropomyosin%20receptor%20kinase%20C | Tropomyosin receptor kinase C (TrkC), also known as NT-3 growth factor receptor, neurotrophic tyrosine kinase receptor type 3, or TrkC tyrosine kinase is a protein that in humans is encoded by the NTRK3 gene.
TrkC is the high affinity catalytic receptor for the neurotrophin NT-3 (neurotrophin-3). As such, TrkC mediates the multiple effects of this neurotrophic factor, which includes neuronal differentiation and survival.
The TrkC receptor is part of the large family of receptor tyrosine kinases. A "tyrosine kinase" is an enzyme which is capable of adding a phosphate group to certain tyrosines on target proteins, or "substrates". A receptor tyrosine kinase is a "tyrosine kinase" which is located at the cellular membrane, and is activated by binding of a ligand via its extracellular domain. Other examples of tyrosine kinase receptors include the insulin receptor, the IGF-1 receptor, the MuSK protein receptor, the vascular endothelial growth factor (VEGF) receptor, etc. The "substrate" proteins which are phosphorylated by TrkC include PI3 kinase.
Function
TrkC is the high affinity catalytic receptor for neurotrophin-3 (also known as NTF3 or NT-3). Similar to other NTRK receptors and receptor tyrosine kinases in general, ligand binding induces receptor dimerization followed by trans-autophosphorylation on conserved tyrosines in the intracellular (cytoplasmic) domain of the receptor. These conserved tyrosines serve as docking sites for adaptor proteins that trigger downstream signaling cascades. Signaling through PLCG1, PI3K and RAS, downstream of activated NTRK3, regulates cell survival, proliferation and motility.
Moreover, TrkC has been identified as a novel synaptogenic adhesion molecule responsible for excitatory synapse development.
The TrkC locus encodes at least eight isoforms, including forms without the kinase domain or with kinase insertions adjacent to the major autophosphorylation site. These forms arise by alternative splicing events and are expressed in different tissues and cell types. NT-3 activation of the catalytic TrkC isoform promotes both proliferation of neural crest cells and neuronal differentiation. On the other hand, the binding of NT-3 to the non-catalytic TrkC isoform induces neuronal differentiation, but not neuronal proliferation.
Family members
Tropomyosin receptor kinases, also known as neurotrophic tyrosine kinase receptors (Trk), play an essential role in the biology of neurons by mediating neurotrophin-activated signaling. Three transmembrane receptors, TrkA, TrkB and TrkC (encoded by the genes NTRK1, NTRK2 and NTRK3, respectively), make up the Trk receptor family. This family of receptors is activated by neurotrophins, including NGF (for Nerve Growth Factor), BDNF (for Brain Derived Neurotrophic Factor), NT-4 (for Neurotrophin-4) and NT-3 (for Neurotrophin-3).
While TrkA mediated the effects of NGF, TrkB is bound and activated by BDNF, NT-4 and NT-3. Further, TrkC binds and is activated by NT-3. TrkB binds BDNF and NT-4 more strongly than it binds NT-3. TrkC binds NT-3 more strongly than TrkB does.
There is one other NT-3 receptor family besides the Trks (TrkC & TrkB), called the "LNGFR" (for "low affinity nerve growth factor receptor"). As opposed to TrkC, the LNGFR plays a somewhat less clear role in NT-3 biology. Some researchers have shown the LNGFR binds and serves as a "sink" for neurotrophins. Cells which express both the LNGFR and the Trk receptors might therefore have a greater activity - since they have a higher "microconcentration" of the neurotrophin. It has also been shown, however, that the LNGFR may signal a cell to die via apoptosis - so therefore cells expressing the LNGFR in the absence of Trk receptors may die rather than live in the presence of a neurotrophin.
It has been demonstrated that NTRK3 is a dependence receptor, meaning that it can induce proliferation when bound to its ligand NT-3, whereas the absence of NT-3 results in the induction of apoptosis by NTRK3.
Role in disease
Over the years, many studies have shown that the lack or deregulation of TrkC, or of the TrkC:NT-3 complex, can be associated with various diseases.
One study demonstrated that mice defective for either NT-3 or TrkC display severe sensory defects. These mice have normal nociception, but they are defective in proprioception, the sensory activity responsible for localizing the limbs in space.
The reduction of TrkC expression has been observed in neurodegenerative diseases, including Alzheimer's (AD), Parkinson's (PD), and Huntington's diseases (HD).
The role of NT-3 was also studied therapeutically in models of amyotrophic lateral sclerosis (ALS), which feature loss of the spinal cord motor neurons that express TrkC.
Moreover, it has been shown that TrkC plays a role in cancer. The expression and function of Trk subtypes are dependent on the tumor type. For example, in neuroblastoma, TrkC expression correlates with a good prognosis, but in breast, prostate and pancreatic cancers, the expression of the same TrkC subtype is associated with cancer progression and metastasis.
Role in cancer
Although originally identified as an oncogenic fusion in 1982, only recently has there been a renewed interest in the Trk family as it relates to its role in human cancers, because of the identification of NTRK1 (TrkA), NTRK2 (TrkB) and NTRK3 (TrkC) gene fusions and other oncogenic alterations in a number of tumor types. A number of Trk inhibitors were (as of 2015) in clinical trials and have shown early promise in shrinking human tumors. The neurotrophin receptor family, including NTRK3, has been shown to induce a variety of pleiotropic responses in malignant cells, including enhanced tumor cell invasiveness and chemotaxis. Increased NTRK3 expression has been demonstrated in neuroblastoma, in medulloblastoma, and in neuroectodermal brain tumors.
NTRK3 methylation
The promoter region of NTRK3 contains a dense CpG island located relatively close to the transcription start site (TSS). Using HumanMethylation450 arrays, quantitative methylation-specific PCR (qMSP), and MethyLight assays, it has been shown that NTRK3 is methylated in all CRC cell lines and in none of the normal epithelium samples. In light of its preferential methylation in CRCs, and because of its role as a neurotrophin receptor, it has been suggested to have a functional role in colorectal cancer formation. It has also been suggested that the methylation status of the NTRK3 promoter is capable of discriminating CRC tumor samples from normal adjacent tumor-free tissue. Hence it can be considered a biomarker for molecular detection of CRC, especially in combination with other markers like SEPT9. NTRK3 has also been indicated as one of the genes in a panel of nine CpG methylation probes located in the promoter or exon 1 region of eight genes (including DDIT3, FES, FLT3, SEPT5, SEPT9, SOX1, SOX17, and NTRK3) for prognostic prediction in ESCC (esophageal squamous cell carcinoma) patients.
TrkC (NTRK3 gene) inhibitors in development
Entrectinib (formerly RXDX-101) is an investigational drug developed by Ignyta, Inc., which has potential antitumor activity. It is an oral pan-TRK, ALK and ROS1 inhibitor that has demonstrated antitumor activity in murine and human tumor cell lines and in patient-derived xenograft tumor models. In vitro, entrectinib inhibits the Trk family members TrkA, TrkB and TrkC at low nanomolar concentrations. It is highly bound to plasma proteins (99.5%), and can readily diffuse across the blood-brain barrier (BBB).
Entrectinib has been approved by the FDA on August 15, 2019 for the treatment of adult and pediatric patients 12 years of age and older with solid tumors that have a neurotrophic tyrosine kinase receptor gene fusion.
Interactions
TrkC has been shown to interact with:
SH2B2
SQSTM1
KIDINS220
PTPRS
MAPK8IP3/JIP3
Neurotrophin-3
TβRII
DOK5
BMPRII
PLCG1
Ligands
Small-molecule peptidomimetics based on a β-turn of NT-3, designed with the rationale of targeting the extracellular domain of the TrkC receptor, have been shown to be TrkC agonists. Later studies have shown that peptidomimetics with an organic backbone and a pharmacophore based on the β-turn structure of NT-3 can also function as TrkC antagonists.
References
Further reading
Developmental neuroscience
Programmed cell death
Tyrosine kinase receptors | Tropomyosin receptor kinase C | [
"Chemistry",
"Biology"
] | 2,093 | [
"Senescence",
"Programmed cell death",
"Tyrosine kinase receptors",
"Signal transduction"
] |
5,411,318 | https://en.wikipedia.org/wiki/Sendust | Sendust is a magnetic metal powder that was invented by Hakaru Masumoto at Tohoku Imperial University in Sendai, Japan circa 1936 as an alternative to permalloy in inductor applications for telephone networks. Sendust composition is typically 85% iron, 9% silicon and 6% aluminium. The powder is sintered into cores to manufacture inductors. Sendust cores have high magnetic permeability (up to 140 000), low loss, low coercivity (5 A/m), good temperature stability and saturation flux density up to .
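As a hedged illustration of how the quoted permeability figures enter inductor design, the following Python snippet estimates the inductance of an ideal toroidal core from L = μ0·μr·N²·Ae/le. All numerical values (relative permeability, turns, core dimensions) are assumptions chosen only for the example and are not specifications of any actual Sendust core.

import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability in H/m

def toroid_inductance(mu_r, turns, area_m2, path_m):
    """Inductance of an ideal toroid with effective cross-section area_m2
    and effective magnetic path length path_m."""
    return MU0 * mu_r * turns**2 * area_m2 / path_m

# assumed example values: mu_r = 125, 50 turns, 50 mm^2 area, 60 mm path
L = toroid_inductance(mu_r=125, turns=50, area_m2=50e-6, path_m=60e-3)
print(f"{L * 1e3:.2f} mH")  # about 0.33 mH with these assumed values

Higher-permeability material raises the inductance proportionally, which is why permeability is one of the headline figures quoted for such powder cores.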
Due to its chemical composition and crystallographic structure Sendust exhibits simultaneously zero magnetostriction and zero magnetocrystalline anisotropy constant K1.
Sendust is harder than permalloy, and is thus useful in abrasive wear applications such as magnetic recording heads.
See also
Alperm
External links
Comparison of molybdenum permalloy with sendust as energy storage inductors (PDF file)
Sendust properties
Magnetic alloys
Ferrous alloys
Ferromagnetic materials | Sendust | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 225 | [
"Ferrous alloys",
"Alloy stubs",
"Ferromagnetic materials",
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic alloys",
"Materials",
"Alloys",
"Matter"
] |
5,411,423 | https://en.wikipedia.org/wiki/Social%20organism | Social organism is a sociological concept, or model, wherein a society or social structure is regarded as a "living organism". Individuals interacting through the various entities comprising a society, such as law, family, crime, etc., are considered as they interact with other entities of the society to meet its needs. Every entity of a society, or social organism, has a function in helping maintain the organism's stability and cohesiveness.
History
The model, or concept, of society-as-organism is traced by Walter M. Simon from Plato ('the organic theory of society'), and by George R. MacLay from Aristotle (384–322 BCE) through 19th-century and later thinkers, including the French philosopher and founder of sociology, Auguste Comte, the Scottish essayist, historian and philosopher Thomas Carlyle, the English philosopher and polymath Herbert Spencer, and the French sociologist Émile Durkheim.
According to Durkheim, the more specialized the function of an organism or society, the greater its development, and vice versa. The three core activities of a society are culture, politics, and economics. Societal health depends on the harmonious interworking of these three activities.
This concept was further developed beginning in 1904, over the next two decades, by the Austrian philosopher and social reformer Rudolf Steiner in his lectures, essays, and books on the Threefold Social Order. The "health" of a social organism can be thought of as a function of the interaction of culture, politics and rights, and economics, which in theory can be studied, modeled, and analyzed.
During his work on social order, Steiner developed his "Fundamental Social Law" of economic systems:
David Sloan Wilson, in his 2002 book, Darwin's Cathedral, applies his multilevel selection theory to social groups and proposes to think of society as an organism. Human groups thus function as single units rather than mere collections of individuals. He claims that organisms and that .
See also
Body politic
Global brain
Noosphere
The Organic Theory of Societies
Superorganism
References
Bibliography
George R. MacLay, The Social Organism: A Short History of the Idea that a Human Society May Be Regarded as a Gigantic Living Creature, North River Press, 1990, .
Henry Rawie, The Social Organism and its Natural Laws, Williams & Wilkins Co., 1990, .
Rudolf Steiner, The Renewal of the Social Organism, Steiner Books, 1985, .
Oliver Luckett, Michel J Casey, The Social Organism: A Radical Understanding of Social Media to Transform Your Business and Life, Hachette Books, 2016, .
External links
Conceivia.com – Creating a new system of society.
Social Psychology and the Social Organism
Superorganisms
Sociological theories | Social organism | [
"Biology"
] | 551 | [
"Superorganisms",
"Symbiosis"
] |
5,411,444 | https://en.wikipedia.org/wiki/Cluster%20genealogy | Cluster genealogy is a research technique employed by genealogists to learn more about an ancestor by examining records left by their cluster. A cluster consists of extended family, friends, neighbors, and other associates such as business partners. Investigating the lives of people connected to an ancestor offers a deeper and more precise insight into the ancestor’s own life at the time and place.
Background
Genealogical research begins with a question of identity, relationship, event, or situation. To answer the question, a genealogist gathers and analyzes data from source documents and formulates an answer to the question based on the resulting evidence.
The basic method of research is to gather data from records left by the target ancestor and his or her immediate family. There are several situations, however, where a genealogist wants or needs to use alternate research methods. One such method is cluster genealogy, in which the records left by members of the ancestor's cluster are examined for evidence with which to resolve the question at hand.
Purpose
Cluster genealogy is most often used for the following reasons.
To break through a "brick wall". In genealogy, a brick wall is a question for which a genealogist has not been able to formulate a satisfactory answer based on the evidence thus far collected. Using cluster genealogy, additional evidence is sought in data gathered from the records left by persons in the ancestor's cluster. For example, if the question is one of place of birth, researching the origins of the ancestor’s neighbors can be helpful. Unrelated family groups often migrated together or followed earlier migrations of neighbors or family members.
To build a genealogical proof. When constructing a genealogical proof, it is not sufficient to simply accumulate an assortment of evidence that supports a conclusion. To meet the Genealogical Proof Standard, a genealogist must "conduct reasonably exhaustive research involving all information that is or may be pertinent to the identity, relationship, event, or situation in question." (Emphasis added.) It follows that a reasonably exhaustive research will often include a search of records created by persons in the target ancestor's cluster.
To develop context for an ancestor's life. The facts of an ancestor's life are often meaningful only in the context of his cluster. For example, the fact that an ancestor was a Catholic is interesting; the fact that the ancestor and his family were the only Catholics in their community is intriguing.
See also
One-place study
References
Further reading
Lenzen, Connie. "Proving a Maternal Line: The Case of Frances B. Whitney". Originally published in the National Genealogical Society Quarterly, 82, no. 1 (March 1994): 17–31. A case study illustrating the use of the cluster genealogy technique.
Tony Proctor. "FAN Principles Unfolded". Parallax View blog (November 2016). A study of the relationship between 'cluster genealogy', the 'FAN Club' (Friends, Associates, and Neighbours), and general 'cluster analysis'.
Genealogy | Cluster genealogy | [
"Biology"
] | 601 | [
"Phylogenetics",
"Genealogy"
] |
5,411,659 | https://en.wikipedia.org/wiki/Marine%20ecosystem | Marine ecosystems are the largest of Earth's aquatic ecosystems and exist in waters that have a high salt content. These systems contrast with freshwater ecosystems, which have a lower salt content. Marine waters cover more than 70% of the surface of the Earth and account for more than 97% of Earth's water supply and 90% of habitable space on Earth. Seawater has an average salinity of 35 parts per thousand of water. Actual salinity varies among different marine ecosystems. Marine ecosystems can be divided into many zones depending upon water depth and shoreline features. The oceanic zone is the vast open part of the ocean where animals such as whales, sharks, and tuna live. The benthic zone consists of substrates below water where many invertebrates live. The intertidal zone is the area between high and low tides. Other near-shore (neritic) zones can include mudflats, seagrass meadows, mangroves, rocky intertidal systems, salt marshes, coral reefs, lagoons. In the deep water, hydrothermal vents may occur where chemosynthetic sulfur bacteria form the base of the food web. Marine ecosystems are characterized by the biological community of organisms that they are associated with and their physical environment. Classes of organisms found in marine ecosystems include brown algae, dinoflagellates, corals, cephalopods, echinoderms, and sharks.
Marine ecosystems are important sources of ecosystem services and food and jobs for significant portions of the global population. Human uses of marine ecosystems and pollution in marine ecosystems are significant threats to the stability of these ecosystems. Environmental problems concerning marine ecosystems include unsustainable exploitation of marine resources (for example overfishing of certain species), marine pollution, climate change, and building on coastal areas. Moreover, because much of the carbon dioxide causing global warming and much of the heat captured by global warming are absorbed by the ocean, ocean chemistry is changing through processes like ocean acidification, which in turn threatens marine ecosystems.
Because of the opportunities in marine ecosystems for humans and the threats created by humans, the international community has prioritized "Life below water" as Sustainable Development Goal 14. The goal is to "Conserve and sustainably use the oceans, seas and marine resources for sustainable development".
Types or locations
Marine coastal ecosystems
Coral reefs
Coral reefs are one of the most well-known marine ecosystems in the world, with the largest being the Great Barrier Reef. These reefs are composed of large coral colonies of a variety of species living together. The corals form multiple symbiotic relationships with the organisms around them.
Mangroves
Mangroves are trees or shrubs that grow in low-oxygen soil near coastlines in tropical or subtropical latitudes. They are an extremely productive and complex ecosystem that connects the land and sea. Mangroves consist of species that are not necessarily related to each other and are often grouped for the characteristics they share rather than genetic similarity. Because of their proximity to the coast, they have all developed adaptations such as salt excretion and root aeration to live in salty, oxygen-depleted water. Mangroves can often be recognized by their dense tangle of roots that act to protect the coast by reducing erosion from storm surges, currents, waves, and tides. The mangrove ecosystem is also an important source of food for many species, as well as excellent at sequestering carbon dioxide from the atmosphere, with global mangrove carbon storage estimated at 34 million metric tons per year.
Seagrass meadows
Seagrasses form dense underwater meadows which are among the most productive ecosystems in the world. They provide habitats and food for a diversity of marine life comparable to coral reefs. This includes invertebrates like shrimp and crabs, cod and flatfish, marine mammals and birds. They provide refuges for endangered species such as seahorses, turtles, and dugongs. They function as nursery habitats for shrimps, scallops and many commercial fish species. Seagrass meadows provide coastal storm protection by the way their leaves absorb energy from waves as they hit the coast. They keep coastal waters healthy by absorbing bacteria and nutrients, and slow the speed of climate change by sequestering carbon dioxide into the sediment of the ocean floor.
Seagrasses evolved from marine algae which colonized land and became land plants, and then returned to the ocean about 100 million years ago. However, today seagrass meadows are being damaged by human activities such as pollution from land runoff, fishing boats that drag dredges or trawls across the meadows uprooting the grass, and overfishing which unbalances the ecosystem. Seagrass meadows are currently being destroyed at a rate of about two football fields every hour.
Kelp forests
Kelp forests occur worldwide throughout temperate and polar coastal oceans. In 2007, kelp forests were also discovered in tropical waters near Ecuador.
Physically formed by brown macroalgae, kelp forests provide a unique habitat for marine organisms and are a source for understanding many ecological processes. Over the last century, they have been the focus of extensive research, particularly in trophic ecology, and continue to provoke important ideas that are relevant beyond this unique ecosystem. For example, kelp forests can influence coastal oceanographic patterns and provide many ecosystem services.
However, the influence of humans has often contributed to kelp forest degradation. Of particular concern are the effects of overfishing nearshore ecosystems, which can release herbivores from their normal population regulation and result in the overgrazing of kelp and other algae. This can rapidly result in transitions to barren landscapes where relatively few species persist. Already due to the combined effects of overfishing and climate change, kelp forests have all but disappeared in many especially vulnerable places, such as Tasmania's east coast and the coast of Northern California. The implementation of marine protected areas is one management strategy useful for addressing such issues, since it may limit the impacts of fishing and buffer the ecosystem from additive effects of other environmental stressors.
Estuaries
Estuaries occur where there is a noticeable change in salinity between saltwater and freshwater sources. This is typically found where rivers meet the ocean or sea. The wildlife found within estuaries is unique, as the water in these areas is brackish - a mix of freshwater flowing to the ocean and salty seawater. Other types of estuaries also exist and have similar characteristics to traditional brackish estuaries. The Great Lakes are a prime example: there, river water mixes with lake water and creates freshwater estuaries. Estuaries are extremely productive ecosystems that many humans and animal species rely on for various activities. This can be seen in the fact that, of the 32 largest cities in the world, 22 are located on estuaries, as they provide many environmental and economic benefits such as crucial habitat for many species and serving as economic hubs for many coastal communities. Estuaries also provide essential ecosystem services such as water filtration, habitat protection, erosion control, gas regulation and nutrient cycling, and they even provide education, recreation and tourism opportunities to people.
Lagoons
Lagoons are areas that are separated from larger bodies of water by natural barriers such as coral reefs or sandbars. There are two types of lagoons: coastal and oceanic/atoll lagoons. A coastal lagoon is, as defined above, simply a body of water that is separated from the ocean by a barrier. An atoll lagoon is a circular coral reef or several coral islands that surround a lagoon. Atoll lagoons are often much deeper than coastal lagoons. Most lagoons are very shallow, meaning that they are greatly affected by changes in precipitation, evaporation and wind. This means that salinity and temperature vary widely in lagoons and that they can have water that ranges from fresh to hypersaline. Lagoons can be found on coasts all over the world, on every continent except Antarctica, and are an extremely diverse habitat, home to a wide array of species including birds, fish, crabs, plankton and more. Lagoons are also important to the economy, as they provide a wide array of ecosystem services in addition to being the home of so many different species. Some of these services include fisheries, nutrient cycling, flood protection, water filtration, and even human tradition.
Salt marsh
Salt marshes are a transition from the ocean to the land, where fresh and saltwater mix. The soil in these marshes is often made up of mud and a layer of organic material called peat. Peat is characterized as waterlogged, root-filled decomposing plant matter that often causes low oxygen levels (hypoxia). These hypoxic conditions cause the growth of the bacteria that also give salt marshes the sulfurous smell they are often known for. Salt marshes exist around the world and are needed for healthy ecosystems and a healthy economy. They are extremely productive ecosystems: they provide essential services for more than 75 percent of fishery species and protect shorelines from erosion and flooding. Salt marshes can be generally divided into the high marsh, low marsh, and the upland border. The low marsh is closer to the ocean and is flooded at nearly every tide except low tide. The high marsh is located between the low marsh and the upland border, and is usually flooded only when higher than usual tides are present. The upland border is the freshwater edge of the marsh and is usually located at elevations slightly higher than the high marsh. This region is usually flooded only under extreme weather conditions and experiences much less waterlogging and salt stress than other areas of the marsh.
Intertidal zones
Intertidal zones are the areas that are visible and exposed to air during low tide and covered by saltwater during high tide. There are four physical divisions of the intertidal zone, each with its distinct characteristics and wildlife: the spray zone, the high intertidal zone, the middle intertidal zone, and the low intertidal zone. The spray zone is a damp area that is usually reached only by ocean spray and is submerged only during very high tides or storms. The high intertidal zone is submerged at high tide but remains dry for long periods between high tides. Due to the large variance of conditions possible in this region, it is inhabited by resilient wildlife that can withstand these changes, such as barnacles, marine snails, mussels and hermit crabs. Tides flow over the middle intertidal zone two times a day, and this zone has a larger variety of wildlife. The low intertidal zone is submerged nearly all the time except during the lowest tides, and life is more abundant here due to the protection that the water gives.
Ocean surface
Organisms that live freely at the surface, termed neuston, include keystone organisms like the golden seaweed Sargassum that makes up the Sargasso Sea, floating barnacles, marine snails, nudibranchs, and cnidarians. Many ecologically and economically important fish species live as or rely upon neuston. Species at the surface are not distributed uniformly; the ocean's surface harbours unique neustonic communities and ecoregions found at only certain latitudes and only in specific ocean basins. But the surface is also on the front line of climate change and pollution. Life on the ocean's surface connects worlds. From shallow waters to the deep sea, the open ocean to rivers and lakes, numerous terrestrial and marine species depend on the surface ecosystem and the organisms found there.
The ocean's surface acts like a skin between the atmosphere above and the water below, and harbours an ecosystem unique to this environment. This sun-drenched habitat can be defined as roughly one metre in depth, as nearly half of UV-B is attenuated within this first meter. Organisms here must contend with wave action and unique chemical and physical properties. The surface is utilised by a wide range of species, from various fish and cetaceans, to species that ride on ocean debris (termed rafters). Most prominently, the surface is home to a unique community of free-living organisms, termed neuston (from the Greek word, υεω, which means both to swim and to float. Floating organisms are also sometimes referred to as pleuston, though neuston is more commonly used). Despite the diversity and importance of the ocean's surface in connecting disparate habitats, and the risks it faces, not a lot is known about neustonic life.
A stream of airborne microorganisms circles the planet above weather systems but below commercial air lanes. Some peripatetic microorganisms are swept up from terrestrial dust storms, but most originate from marine microorganisms in sea spray. In 2018, scientists reported that hundreds of millions of viruses and tens of millions of bacteria are deposited daily on every square meter around the planet.
Deep sea and sea floor
The deep sea contains up to 95% of the space occupied by living organisms. Combined with the sea floor (or benthic zone), these two areas have yet to be fully explored and have their organisms documented.
Large marine ecosystems
In 1984, National Oceanic and Atmospheric Administration (NOAA) of the United States developed the concept of large marine ecosystems (sometimes abbreviated to LMEs), to identify areas of the oceans for environmental conservation purposes and to enable collaborative ecosystem-based management in transnational areas, in a way consistent with the 1982 UN Convention on the Law of the Sea. This name refers to relatively large regions on the order of or greater, characterized by their distinct bathymetry, hydrography, productivity, and trophically dependent populations. Such LMEs encompass coastal areas from river basins and estuaries to the seaward boundaries of continental shelves and the outer margins of the major ocean current systems.
Altogether, there are 66 LMEs, which contribute an estimated $3 trillion annually. This includes being responsible for 90% of global annual marine fishery biomass. LME-based conservation is based on recognition that the world's coastal ocean waters are degraded by unsustainable fishing practices, habitat degradation, eutrophication, toxic pollution, aerosol contamination, and emerging diseases, and that positive actions to mitigate these threats require coordinated actions by governments and civil society to recover depleted fish populations, restore degraded habitats and reduce coastal pollution. Five modules are considered when assessing LMEs: productivity, fish and fisheries, pollution and ecosystem health, socioeconomics, and governance. Periodically assessing the state of each module within a marine LME is encouraged to ensure maintained health of the ecosystem and future benefit to managing governments. The Global Environment Facility (GEF) aids in managing LMEs off the coasts of Africa and Asia by creating resource management agreements between environmental, fisheries, energy and tourism ministers of bordering countries. This means participating countries share knowledge and resources pertaining to local LMEs to promote longevity and recovery of fisheries and other industries dependent upon LMEs.
Large marine ecosystems include:
East Bering Sea
Gulf of Alaska
California Current
Gulf of California
Gulf of Mexico
Southeast U.S. Continental Shelf
Northeast U.S. Continental Shelf
Scotian Shelf
Newfoundland-Labrador Shelf
Insular Pacific-Hawaiian
Pacific Central-American Coastal
Caribbean Sea
Humboldt Current
Patagonian Shelf
South Brazil Shelf
East Brazil Shelf
North Brazil Shelf
West Greenland Shelf
East Greenland Shelf
Barents Sea
Norwegian Shelf
North Sea
Baltic Sea
Celtic-Biscay Shelf
Central Arctic
Iberian Coastal
Mediterranean Sea
Canary Current
Guinea Current
Benguela Current
Agulhas Current
Somali Coastal Current
Arabian Sea
Red Sea
Bay of Bengal
Gulf of Thailand
South China Sea
Sulu-Celebes Sea
Indonesian Sea
North Australian Shelf
Northeast Australian Shelf/Great Barrier Reef
East-Central Australian Shelf
Southeast Australian Shelf
Southwest Australian Shelf
West-Central Australian Shelf
Northwest Australian Shelf
New Zealand Shelf
East China Sea
Yellow Sea
Kuroshio Current
Sea of Japan
Oyashio Current
Sea of Okhotsk
West Bering Sea
Chukchi Sea
Beaufort Sea
East Siberian Sea
Laptev Sea
Kara Sea
Iceland Shelf
Faroe Plateau
Antarctica
Black Sea
Hudson Bay
Arctic Ocean
Greenland Sea
Role in ecosystem services
In addition to providing many benefits to the natural world, marine ecosystems also provide social, economic, and biological ecosystem services to humans. Pelagic marine systems regulate the global climate, contribute to the water cycle, maintain biodiversity, provide food and energy resources, and create opportunities for recreation and tourism. Economically, marine systems support billions of dollars worth of capture fisheries, aquaculture, offshore oil and gas, and trade and shipping.
Ecosystem services fall into multiple categories, including supporting services, provisioning services, regulating services, and cultural services.
The productivity of a marine ecosystem can be measured in several ways. Measurements pertaining to zooplankton biodiversity and species composition, zooplankton biomass, water-column structure, photosynthetically active radiation, transparency, chlorophyll-a, nitrate, and primary production are used to assess changes in LME productivity and potential fisheries yield. Sensors attached to the bottom of ships or deployed on floats can measure these metrics and be used to quantitatively describe changes in productivity alongside physical changes in the water column such as temperature and salinity. This data can be used in conjunction with satellite measurements of chlorophyll and sea surface temperatures to validate measurements and observe trends on greater spatial and temporal scales.
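As an illustration of one such biodiversity measurement, the short Python example below computes the Shannon diversity index, H' = -Σ p_i ln p_i, from hypothetical zooplankton counts. The species names and counts are invented for the example, and LME assessments draw on a range of indicators, of which this is only one common choice.

import math

def shannon_index(counts):
    """Shannon diversity index from a mapping of species -> individual counts."""
    total = sum(counts.values())
    h = 0.0
    for n in counts.values():
        if n > 0:
            p = n / total
            h -= p * math.log(p)
    return h

sample = {"Calanus finmarchicus": 120, "Oithona similis": 80, "Acartia tonsa": 40}
print(round(shannon_index(sample), 3))  # about 1.011 for this made-up sample

Tracking such an index over repeated surveys, together with biomass and chlorophyll measurements, is one way changes in productivity and community composition can be expressed quantitatively.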
Bottom-trawl surveys and pelagic-species acoustic surveys are used to assess changes in fish biodiversity and abundance in LMEs. Fish populations can be surveyed for stock identification, length, stomach content, age-growth relationships, fecundity, coastal pollution and associated pathological conditions, as well as multispecies trophic relationships. Fish trawls can also collect sediment and inform us about ocean-bottom conditions such as anoxia.
Threats
Human exploitation and development
Coastal marine ecosystems experience growing population pressures with nearly 40% of people in the world living within 100 km of the coast. Humans often aggregate near coastal habitats to take advantage of ecosystem services. For example, coastal capture fisheries from mangroves and coral reef habitats are estimated to be worth a minimum of $34 billion per year. Yet, many of these habitats are either marginally protected or not protected. Mangrove area has declined worldwide by more than one-third since 1950, and 60% of the world's coral reefs are now immediately or directly threatened. Human development, aquaculture, and industrialization often lead to the destruction, replacement, or degradation of coastal habitats.
Moving offshore, pelagic marine systems are directly threatened by overfishing. Global fisheries landings peaked in the late 1980s but are now declining, despite increasing fishing effort. Fish biomass and the average trophic level of fisheries landings are decreasing, leading to declines in marine biodiversity. In particular, local extinctions have led to declines in large, long-lived, slow-growing species and those that have narrow geographic ranges. Biodiversity declines can lead to associated declines in ecosystem services. A long-term study reports a decline of 74–92% in catch per unit effort of sharks along the Australian coastline from the 1960s to the 2010s. Such biodiversity losses impact not just the species themselves, but humans as well, and can contribute to climate change across the globe. The National Oceanic and Atmospheric Administration (NOAA) states that managing and protecting marine ecosystems is crucial in attempting to conserve biodiversity in the face of Earth’s rapidly changing climate.
Pollution
Invasive species
Global aquarium trade
Ballast water transport
Aquaculture
Climate change
Warming temperatures (see ocean heat content, sea surface temperature, and marine heat wave)
Increased frequency/intensity of storms
Ocean acidification
Sea level rise
Society and culture
Global goals
By integrating socioeconomic metrics with ecosystem management solutions, scientific findings can be utilized to benefit both the environment and the economy of local regions. Management efforts must be practical and cost-effective. In 2000, the Department of Natural Resource Economics at the University of Rhode Island created a method for measuring and understanding the human dimensions of LMEs and for taking into consideration both the socioeconomic and environmental costs and benefits of managing Large Marine Ecosystems.
International attention to address the threats of coasts has been captured in Sustainable Development Goal 14 "Life Below Water" which sets goals for international policy focused on preserving coastal ecosystems and supporting more sustainable economic practices for coastal communities. Furthermore, the United Nations has declared 2021-2030 the UN Decade on Ecosystem Restoration, but restoration of coastal ecosystems has received insufficient attention.
See also
Aquatic toxicology
Blue carbon
Fishing down the food web
Marine biology
Marine habitats
Marine life
Marine biomass
Marine trophic cascades
Tropical marine climate
References
External links
U.S. Environmental Protection Agency—EPA: Marine Ecosystems
Smithsonian Institution: Ocean Portal
Marine Ecosystems Research Programme (UK)
Aquatic ecology
Ecosystems
Biological oceanography
Fisheries science
Systems ecology
Water
Oceanographical terminology | Marine ecosystem | [
"Biology",
"Environmental_science"
] | 4,160 | [
"Hydrology",
"Symbiosis",
"Systems ecology",
"Marine biology",
"Ecosystems",
"Water",
"Aquatic ecology",
"Environmental social science"
] |
5,411,727 | https://en.wikipedia.org/wiki/Multiple%20description%20coding | Multiple description coding (MDC) in computing is a coding technique that fragments a single media stream into n substreams (n ≥ 2) referred to as descriptions. The packets of each description are routed over multiple, (partially) disjoint paths. In order to decode the media stream, any description can be used, however, the quality improves with the number of descriptions received in parallel. The idea of MDC is to provide error resilience to media streams. Since an arbitrary subset of descriptions can be used to decode the original stream, network congestion or packet loss — which are common in best-effort networks such as the Internet — will not interrupt the stream but only cause a (temporary) loss of quality. The quality of a stream can be expected to be roughly proportional to data rate sustained by the receiver.
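As a rough illustration of the idea, the following Python sketch implements a toy two-description scheme based on odd/even sample splitting: either description alone yields a degraded but decodable signal, while both together restore it exactly. The function names, the repetition-based concealment and the example data are illustrative assumptions, not part of any standardized MDC codec.

def encode_two_descriptions(samples):
    """Split a sample stream into two descriptions (even- and odd-indexed samples)."""
    return samples[0::2], samples[1::2]

def decode(d0=None, d1=None):
    """Reconstruct from whichever descriptions arrived."""
    if d0 is not None and d1 is not None:
        out = []
        for a, b in zip(d0, d1):
            out.extend([a, b])
        out.extend(d0[len(d1):] or d1[len(d0):])  # handle odd-length input
        return out
    received = d0 if d0 is not None else d1
    if received is None:
        raise ValueError("no description received")
    # Only one description arrived: conceal the missing samples by repeating
    # the neighbouring received sample; quality drops but the stream plays on.
    out = []
    for s in received:
        out.extend([s, s])
    return out

signal = [3, 5, 4, 8, 6, 7]
d0, d1 = encode_two_descriptions(signal)
print(decode(d0, d1))  # both received: [3, 5, 4, 8, 6, 7]
print(decode(d0=d0))   # one lost: [3, 3, 4, 4, 6, 6]

Real MDC codecs use more sophisticated splitting (for example, transform coefficients or multiple-description quantizers), but the decoding property illustrated here, that any nonempty subset of descriptions is usable, is the defining feature.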
MDC is a form of data partitioning, thus comparable to layered coding as it is used in MPEG-2 and MPEG-4. Yet, in contrast to MDC, layered coding mechanisms generate a base layer and n enhancement layers. The base layer is necessary for the media stream to be decoded; enhancement layers are applied to improve stream quality. However, the first enhancement layer depends on the base layer, and each enhancement layer n + 1 depends on its subordinate layer n, and thus can only be applied if layer n was already applied. Hence, media streams using the layered approach are interrupted whenever the base layer is missing and, as a consequence, the data of the respective enhancement layers is rendered useless. The same applies to missing enhancement layers. In general, this implies that in lossy networks the quality of a media stream is not proportional to the amount of correctly received data.
Besides increased fault tolerance, MDC allows for rate-adaptive streaming: Content providers send all descriptions of a stream without paying attention to the download limitations of clients. Receivers that cannot sustain the data rate only subscribe to a subset of these streams, thus freeing the content provider from sending additional streams at lower data rates.
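A minimal Python sketch (not taken from any particular codec) contrasting the two schemes described above; the four-substream setup and the simple quality measure are illustrative assumptions, chosen only to show how decodability depends on which substreams arrive.

```python
# Illustrative sketch (not a real codec): how reconstruction quality depends
# on which substreams arrive under MDC versus layered coding.

def mdc_quality(received_descriptions, total_descriptions):
    """Any non-empty subset of descriptions is decodable; quality grows
    roughly in proportion to how many descriptions were received."""
    if not received_descriptions:
        return 0.0                      # nothing received, nothing decodable
    return len(received_descriptions) / total_descriptions

def layered_quality(received_layers, total_layers):
    """Layer 0 is the base layer; enhancement layer n+1 is useless unless
    all layers 0..n were received, so quality counts the unbroken prefix."""
    usable = 0
    for layer in range(total_layers):
        if layer in received_layers:
            usable += 1
        else:
            break                       # a missing layer invalidates the rest
    return usable / total_layers

# Both schemes lose the same two substreams (indices 0 and 2 of 4):
print(mdc_quality({1, 3}, 4))       # 0.5 -> degraded but still decodable
print(layered_quality({1, 3}, 4))   # 0.0 -> base layer missing, stream stops
```

Under the same loss pattern, the MDC receiver keeps a usable (if degraded) stream, while the layered receiver gets nothing, which is the behaviour the preceding paragraphs describe.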
The vast majority of state-of-the-art codecs use single description (SD) video coding. This approach does not partition any data at all. Despite the aforementioned advantages of MDC, SD codecs are still predominant. The reasons are probably the comparatively high complexity of codec development, the loss of some compression efficiency, as well as the transmission overhead it causes.
Though MDC has its practical roots in media communication, it is widely researched in the area of information theory.
A related technology is layered coding, which also produces multiple compressed streams, but with a hierarchy between these streams.
References
V. K. Goyal, "Multiple Description Coding: Compression Meets the Network," IEEE Signal Processing Magazine, vol. 18, no. 5, pp. 74–94, Sept. 2001.
R. Puri and K. Ramchandran, “Multiple description source coding through forward error correction codes,” IEEE Proceedings Asilomar Conference on Signals, Systems, and Computers, Asilomar, CA, October 1999.
A. Farzamnia, S. K. Syed-Yusof, N. Fisal, and S. A. Abu-Bakar, "Investigation of Error Concealment Using Different Transform Codings and Multiple Description Codings," Journal of Electrical Engineering, vol. 63, pp. 171–179, 2012.
Ilan Sadeh, "The rate distortion region for coding in stationary systems", Journal of Applied Mathematics and Computer Science, vol. 6, No. 1, 123-136, 1996.
Coding theory | Multiple description coding | [
"Mathematics"
] | 724 | [
"Discrete mathematics",
"Coding theory"
] |
5,411,814 | https://en.wikipedia.org/wiki/Arran%20whitebeams | The Arran whitebeams are species of whitebeam endemic to the island of Arran, Ayrshire, Scotland.
Status
These trees, sometimes called the Scottish or Arran whitebeam (Sorbus arranensis), the bastard mountain ash or cut-leaved whitebeam (Sorbus pseudofennica) and the Catacol whitebeam (Sorbus pseudomeinichii) are, if rarity is measured by numbers alone, amongst the most endangered tree species in the world. They are protected in Glen Diomhan off Glen Catacol, which was formerly part of a National Nature Reserve; although this designation was removed in 2011 the area continues to form part of a designated Site of Special Scientific Interest (SSSI). Only 283 Arran whitebeam and 236 cut-leaved whitebeam were recorded as mature trees in 1980, and it is thought that grazing pressures and insect damage are preventing regeneration of the woodland.
They are typically trees of the mountain slopes, close to the tree line. However, they will grow at lower altitudes, and they are being grown within the Brodick Country Park. Also, North Ayrshire Council Parks and Recreation staff are growing specimens for conservation purposes. A few specialist garden centres and tree nurseries are able to supply them as grafts, and Ardrossan Academy in North Ayrshire has a grafted specimen for its use within the Scottish Higher Biology course in which it features as an example of evolution and survival of the fittest.
Distribution
Sorbus arranensis: Abhainn Bheag (Uisge solus), Glen Diomhan (and tributary), Glen Catacol, Allt nan Calman, Allt Dubh, Gleann Easan Biorach and Glen Iorsa (Allt-nan-Champ).
Sorbus pseudofennica: Abhainn Bheag (Uisge solus), Glen Diomhan and Allt nan Calma.
Sorbus pseudomeinichii: Glen Catacol.
History
The oldest preserved specimen is from the bastard mountain ash, S. pseudofennica, collected in 1797 from North Arran and another of the same species is in the British Museum dated 1838, when it was known as Pyrus pinnatifida (the pear group). S. pseudofennica was authoritatively recognised as a separate species by Clapham, Tutin and Warburg in 1952. Landsborough in 1875 noted the two kinds growing in Glen Diomhan and called them French rowan or whitebeams.
The Scottish mountain ash, S. arranensis, evoked most collecting interest in 1870–1890 and 1920–1940, although older herbarium specimens exist.
Evolution
The trees developed in a highly complex fashion, which involved the common whitebeam (Sorbus aria) giving rise to the tetraploid rock whitebeam (Sorbus rupicola), which is still found on Holy Isle. This species is able to survive at higher altitudes and therefore occupies a less competitive niche, with fewer tree species able to tolerate the harsher conditions. The rock whitebeam interbred with the rowan / mountain ash (Sorbus aucuparia) to produce the hybrid, a fertile separate species, the Scottish whitebeam (Sorbus arranensis), which grows well in this zone of reduced competitive growth at higher altitudes. The bastard mountain ash (Sorbus pseudofennica) arose from a further cross between S. arranensis and the mountain ash (S. aucuparia).
The Sorbus group are apomictic, producing viable seed without the need for pollination and fertilisation. Each time this hybrid cross occurs a new clone is effectively produced.
Smart showed by using physical characteristics that the species were separate and not a result of random variation. Some overlap does however occur and this suggests that some hybridising may occur between the two species.
A number of other Sorbus species have been produced in this way, such as the Devon whitebeam, the Bristol whitebeam, the Cheddar whitebeam, Irish whitebeam, Lancaster whitebeam, etc. All are rare and require careful protection and expert habitat management if they are to survive in the wild.
In Scandinavia, particularly Norway, similar species have evolved following similar evolutionary pressures, but quite independently of the Arran whitebeams.
Islands are well known as sites of endemic species. The Lundy cabbage (Coincya wrightii) is another British example, only growing on Lundy Island off the North Devon coast.
Characteristics
The mountain ash has a leaf made up of a number of leaflets, whilst the whitebeam leaf is entire and doesn't even have lobes. The result of crossing the two is that the hybrids begin to merge or mix characteristics, so that S. arranensis has lobes but no leaflets, while S. pseudofennica, having an extra cross with the "leafleted" mountain ash, has a variable number of true leaflets and lobes. These characteristics are not always definitive and sometimes the actual species cannot be ascertained with certainty, possibly due to hybridisation between the species in question.
Some differences in the flower and seed characteristics are also noted.
Unlike other endemic British species, they do not seem to grow on base-rich soils.
Future prospects
Although actual numbers haven't dropped since the first quantitative survey was carried out in 1897, this may be a false impression, since with more searching more have been found, which does not necessarily suggest a stable population. Various attempts at introducing saplings grown from native seed have had widely differing degrees of success.
Grazing by sheep has probably reduced the population from being widespread and numerous to what it is now, confined to steep slopes, cracks in rocks, and restricted to the mountainous northern end of the island.
The trees are not well known to the islanders and two fine specimens were even cut down in the 1980s by a professional gardener working at a site near Brodick Castle. The Ranger's Service have taken steps to increase the distribution of the trees, planting both species in the park. However, a great deal more could be done to make visitors and islanders aware of these unique species possessed by Arran.
Catacol whitebeam - a new species
In 2007 it was announced that two specimens of the newly named Catacol whitebeam (Sorbus pseudomeinichii) had been discovered by researchers on Arran. The tree is again a cross between the native rowan and whitebeam, the discovery being made following work by Scottish Natural Heritage (SNH), Dougarie Estate and Royal Botanic Garden Edinburgh. Research into the genetics of whitebeam trees had shown that the population was much more diverse than previously thought and that the Arran whitebeams seem to be gradually evolving towards a new type of tree which will in all likelihood look very similar to a rowan.
A team from the Royal Botanic Gardens collected seeds and cuttings to ensure the long-term survival of the trees and steps were taken to protect the two known specimens.
As of 2016, only one of the two specimens could be found.
Further reading
Gibson, Rob, "The Battle to Save the Arran Whitebeam, in Meikle, Mandy (ed.), Reforesting Scotland'' 31, Spring 2004, pp. 35 & 36,
See also
Eglinton Country Park Arran whitebeams on the mainland.
References
External links
Arran whitebeams at Ardrossan Academy
Sorbus
Endemic flora of Scotland
Isle of Arran
Speciation | Arran whitebeams | [
"Biology"
] | 1,534 | [
"Evolutionary processes",
"Speciation"
] |
5,411,925 | https://en.wikipedia.org/wiki/Agrin | Agrin is a large proteoglycan whose best-characterised role is in the development of the neuromuscular junction during embryogenesis. Agrin is named based on its involvement in the aggregation of acetylcholine receptors during synaptogenesis. In humans, this protein is encoded by the AGRN gene.
This protein has nine domains homologous to protease inhibitors. It may also have functions in other tissues and during other stages of development. It is a major proteoglycan component in the glomerular basement membrane and may play a role in the renal filtration and cell-matrix interactions.
Agrin functions by activating the MuSK protein (Muscle-Specific Kinase), a receptor tyrosine kinase required for the formation and maintenance of the neuromuscular junction. Because agrin is required to activate MuSK, agrin itself is also required for neuromuscular junction formation.
Discovery
Agrin was first identified by the U.J. McMahan laboratory, Stanford University.
Mechanism of action
During development in humans, the growing end of motor neuron axons secrete a protein called agrin. When secreted, agrin binds to several receptors on the surface of skeletal muscle. The receptor which appears to be required for the formation of the neuromuscular junction (NMJ) is called the MuSK receptor (Muscle specific kinase). MuSK is a receptor tyrosine kinase - meaning that it induces cellular signaling by causing the addition of phosphate molecules to particular tyrosines on itself and on proteins that bind the cytoplasmic domain of the receptor.
In addition to MuSK, agrin binds several other proteins on the surface of muscle, including dystroglycan and laminin. These additional binding steps appear to be required to stabilize the NMJ.
The requirement for Agrin and MuSK in the formation of the NMJ was demonstrated primarily by knockout mouse studies. In mice that are deficient for either protein, the neuromuscular junction does not form. Many other proteins also comprise the NMJ, and are required to maintain its integrity. For example,
MuSK also binds a protein called "dishevelled" (Dvl), which is in the Wnt signalling pathway. Dvl is additionally required for MuSK-mediated clustering of AChRs, since inhibition of Dvl blocks clustering.
Signaling
The nerve secretes agrin, resulting in phosphorylation of the MuSK receptor.
It seems that the MuSK receptor recruits casein kinase 2, which is required for clustering.
A protein called rapsyn is then recruited to the primary MuSK scaffold, to induce the additional clustering of acetylcholine receptors (AChR). This is thought of as the secondary scaffold. A protein called Dok-7 has shown to be additionally required for the formation of the secondary scaffold; it is apparently recruited after MuSK phosphorylation and before acetylcholine receptors are clustered.
Structure
There are three potential heparan sulfate (HS) attachment sites within the primary structure of agrin, but it is thought that only two of these actually carry HS chains when the protein is expressed.
In fact, one study concluded that at least two attachment sites are necessary by inducing synthetic agents. Since agrin fragments induce acetylcholine receptor aggregation as well as phosphorylation of the MuSK receptor, researchers spliced them and found that the variant did not trigger phosphorylation. It has also been shown that the G3 domain of agrin is very plastic, meaning it can discriminate between binding partners for a better fit.
Heparan sulfate glycosaminoglycans covalently linked to the agrin protein have been shown to play a role in the clustering of AChR. Interference in the correct formation of heparan sulfate through the addition of chlorate to skeletal muscle cell culture results in a decrease in the frequency of spontaneous acetylcholine receptor (AChR) clustering. It may be that rather than solely binding directly to the agrin protein core a number of components of the secondary scaffold may also interact with its heparan sulfate side-chains.
A role in the retention of anionic macromolecules within the vasculature has also been suggested for agrin-linked HS at the glomerular or alveolar basement membrane.
Functions
Agrin may play an important role in the basement membrane of the microvasculature as well as in synaptic plasticity. Also, agrin may be involved in blood–brain barrier (BBB) formation and/or function and it influences Aβ homeostasis.
Research
Agrin is investigated in relation with osteoarthritis. In addition, by its ability to activate the Hippo signaling pathway, agrin is emerging as a key proteoglycan in the tumor microenvironment.
Clinical significance
AGRN gene mutation leads to congenital myasthenic syndromes and myasthenia gravis.
A recent genome-wide association study (GWAS) has found that genetic variations in AGRN are associated with late-onset sporadic Alzheimer’s disease (LOAD). These genetic variations alter β-amyloid homeostasis contributing to its accumulation and plaque formation.
References
Further reading
External links
Developmental neuroscience
Molecular neuroscience
Extracellular matrix proteins
Proteoglycans | Agrin | [
"Chemistry"
] | 1,133 | [
"Molecular neuroscience",
"Molecular biology"
] |
5,412,385 | https://en.wikipedia.org/wiki/Tom%20Liston | Tom Liston is the founder and owner of the Johnsburg,_Illinois-based network security consulting firm, Bad Wolf Security.
He is the author of the first network tarpit, the open source LaBrea. He was a finalist for eWeek and PC Magazine’s "Innovations In Infrastructure" (i3) award in 2002 for LaBrea. He is one of the handlers at the SANS Institute’s Internet Storm Center, where he deals with developing security issues and authors a series of articles under the title “Follow the Bouncing Malware.”
Liston is also, with Ed Skoudis, co-author of the second edition of the network security book Counter Hack Reloaded: A Step-by-Step Guide to Computer Attacks and Effective Defenses.
Works
Books
References
Living people
Year of birth missing (living people) | Tom Liston | [
"Technology"
] | 169 | [
"Computing stubs",
"Computer specialist stubs"
] |
5,414,219 | https://en.wikipedia.org/wiki/Subaudible%20tone | A subaudible tone is a tone that is used to trigger an automated event at a radio station. A subaudible tone is audible; however, it is usually at a low level that is not noticeable to the average listener at normal volumes. It is a form of in-band signaling.
Overview
In the case of satellite feeds, these tones are included in the main audible portion of the audio; on tape, they are often filtered out. Normally, subaudible tones are at one of the following frequencies: 25, 35, 50, or 75 hertz (Hz), or combinations of those frequencies. Until computerized radio automation became inexpensive and common, 25 and 35 Hz tones were used either in the audio stream or, in the case of tape cartridges used in radio broadcasting (better known as "carts"), on a special track on the tape to indicate to a radio station's automation system that it was time to trigger another event.
With the advent of computers and digital satellite, these tones are relegated to triggering commercial announcements and legal IDs on a dwindling number of radio networks, as tones in the audio have been supplanted by external data channels sent independent of audio on digital satellite feeds for radio. These trigger relay closure terminals on the satellite receiver itself (Starguide being a prominent system).
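As an illustration of how an automation system might detect these cue tones in an audio stream, below is a hedged Python sketch using the Goertzel algorithm. The sample rate, block length and detection threshold are assumptions chosen for illustration, not values from any broadcast standard or particular receiver.

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Single-bin DFT power at target_hz (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)   # nearest DFT bin for the target
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def detect_cue(samples, sample_rate=8000, threshold=1e6):
    """Return the cue frequency (25/35/50/75 Hz) with the strongest energy,
    or None if nothing exceeds the (assumed, illustrative) threshold."""
    powers = {f: goertzel_power(samples, sample_rate, f) for f in (25, 35, 50, 75)}
    best = max(powers, key=powers.get)
    return best if powers[best] > threshold else None
```

A block of roughly one second of audio (8,000 samples at the assumed rate) would be needed to resolve tones this low; shorter blocks blur the 25 and 35 Hz bins together.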
Use for filmstrips
Subaudible tones have also been used by later filmstrip projectors to advance to the next frame in a filmstrip presentation. Previously, the phonographic record or audio cassette accompanying a filmstrip to provide its soundtrack would have an audible tone to signal the person operating the projector to advance the film to the next frame. But automatic filmstrip projectors were introduced in the 1970s (that had an integrated phonograph or cassette player) that would read a subaudible tone of 50 Hz recorded on the soundtrack to automatically trigger the projector to advance to the next frame.
Most of the cassettes accompanying filmstrips from the 1970s and 80s would have one side of the media with audible tones for use with manual projectors, and the other side with the same program audio, but with 50 Hz subaudible tones instead for automatic projectors. Some filmstrip releases would have both audible & subaudible tones combined, making the filmstrip and its companion cassette or record compatible with any filmstrip projector.
External links
Examples of subaudible tone units
Radio technology
Broadcast engineering | Subaudible tone | [
"Technology",
"Engineering"
] | 488 | [
"Information and communications technology",
"Broadcast engineering",
"Telecommunications engineering",
"Radio technology",
"Electronic engineering"
] |
5,414,549 | https://en.wikipedia.org/wiki/Tetramethylenedisulfotetramine | Tetramethylenedisulfotetramine (TETS) is an organic compound used as a rodenticide (rat poison). It is an odorless, tasteless white powder that is slightly soluble in water, DMSO and acetone, and insoluble in methanol and ethanol. It is a sulfamide derivative. It can be synthesized by reacting sulfamide with formaldehyde solution in acidified water. When crystallized from acetone, it forms cubic crystals with a melting point of 255–260 °C.
Toxicity and mechanism
TETS is a neurotoxin and convulsant, causing lethal convulsions. Its effect is similar to but stronger than picrotoxin, a GABA-A receptor antagonist widely used in research. As one of the most hazardous pesticides, it is 100 times more toxic than potassium cyanide. TETS binds to neuronal GABA gated chloride channels, often causing status epilepticus. No antidote is known. The lethal dose for humans is 7–10 mg. Poisoning is diagnosed by GC-MS and the treatment is mainly supportive, with large IV doses of a benzodiazepine (e.g clonazepam) and pyridoxine to control symptoms. TETS is sequestered in tissues of poisoned birds and can thus pose severe risk of secondary poisoning.
History
Previous research has documented the effectiveness of tetramethylenedisulfotetramine against mice. The dangers of this chemical were first suspected in 1949. The U.S. Forest Service, looking to protect tree seeds for reforestation, noted its lethal effect on rodent populations. Rather than simply repelling wandering scavengers, the chemical proved toxic to the local rodent population for up to 4 years. Continued experiments conducted by the U.S. Forest Service found no direct effect of TETS on the gastro-intestinal or renal systems of spinal dogs. In this same study, no effects were seen within the peripheral or skeletal nerve system, limiting symptoms of toxicity to the brain stem. Curtis and Johnson were the first to hypothesize that TETS acts antagonistically on GABA. An in-vitro study using superior cervical ganglion neurons of rats found TETS to antagonize the depolarizing actions of GABA, while having no influence on the cholinomimetic agent carbachol. This evidence suggests that TETS may act as a non-competitive inhibitor of GABA. Further research using crustacean models indicated a dose-dependent, non-competitive response to TETS that is reversible.
Research
In vitro and rapid screening tools
Recent studies have indicated the usefulness of pH sensitivity in identifying chloride ion influx resulting from GABA-A receptor excitation. Other potential screening tools include spontaneous calcium ion oscillations seen in hippocampal cell cultures from newborn mice. This phenomenon can be measured with a calcium ion-sensitive fluorescent dye. Further analyses showed that these calcium ion oscillations are sensitive to MK-801 (an NMDA open channel blocker), suggesting that NMDA receptor-operated channels are involved in TMDT-induced spontaneous activity. When considering GABAA receptor activity, diazepam and pregnanolone reversed TMDT activity when applied to cell cultures individually and in combination. MK-801 and ketamine show more antagonistic effects on TMDT than diazepam within cerebral cortical cell cultures of embryonic rats.
In vivo mouse models
Low dosages of ketamine and MK-801, administered separately, were associated with increased clonic seizures but had no effect on tonic-clonic seizures in mice exposed to TETS. Further analysis of the same sample of mice found that dual administration of diazepam and MK-801 had a synergistic protective effect against tonic-clonic seizures and 24-hour lethality, whereas clonic seizures remained poorly controlled. Sequential administration of diazepam and MK-801 for control of clonic seizures in TETS-exposed mice may indicate the benefits of benzodiazepine–NMDA receptor antagonist regimens for treating TETS-exposed patients.
Worldwide restriction
Its use has been banned worldwide since 1984, but due to continuing demand and its ease of production, it remained readily, although illegally, available in China until being formally banned there in 2002. The best known Chinese rodenticide, containing about 6–20% TETS, is Dushuqiang, "very strong rat poison". It has been used for mass poisonings in China: in April 2004, 74 people were poisoned after eating scallion-flavored pancakes tainted by the vendor's competitor; and in September 2002, 400 people were poisoned and 38 died from contaminated food. In 2002, there was one documented case of accidental poisoning in the US.
See also
GABAA receptor negative allosteric modulator
GABAA receptor § Ligands
Strychnine
Picrotoxin
References
Rodenticides
GABAA receptor negative allosteric modulators
Nitrogen heterocycles
Convulsants
Sulfur heterocycles
Mass poisoning
Neurotoxins
Adamantane-like molecules
Sulfamides
Chloride channel blockers
Sulfur–nitrogen compounds | Tetramethylenedisulfotetramine | [
"Chemistry",
"Biology"
] | 1,093 | [
"Neurochemistry",
"Neurotoxins",
"Rodenticides",
"Biocides"
] |
5,414,818 | https://en.wikipedia.org/wiki/C6H5N3 | {{DISPLAYTITLE:C6H5N3}}
The molecular formula C6H5N3 (molar mass: 119.12 g/mol) may refer to:
Benzotriazole (BTA)
Phenyl azide
Pyrazolopyrimidine | C6H5N3 | [
"Chemistry"
] | 64 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
5,416,072 | https://en.wikipedia.org/wiki/GRANK | GRANK, or Global Rank is a ranking of the rarity of a species, and is a useful tool in determining conservation needs.
Global Ranks are derived from a consensus of various conservation data centres, natural heritage programmes, scientific experts and NatureServe.
They are based on the total number of known, extant populations worldwide, and to what degree they are threatened by destruction. Criteria also include securely protected populations, size of populations, and the ability of the species to persist.
G1 — Critically Imperiled At very high risk of extinction or collapse due to very restricted range, very few populations or occurrences, very steep declines, very severe threats, or other factors.
G2 — Imperiled At high risk of extinction or collapse due to restricted range, few populations or occurrences, steep declines, severe threats, or other factors.
G3 — Vulnerable At moderate risk of extinction or collapse due to a fairly restricted range, relatively few populations or occurrences, recent and widespread declines, threats, or other factors.
G4 — Apparently Secure At fairly low risk of extinction or collapse due to an extensive range and/or many populations or occurrences, but with possible cause for some concern as a result of local recent declines, threats, or other factors.
G5 — Secure At very low risk or extinction or collapse due to a very extensive range, abundant populations or occurrences, and little to no concern from declines or threats.
GH — Possibly Extinct (species) or Possibly Collapsed (ecosystems/communities) Known from only historical occurrences but still some hope of rediscovery. Examples of evidence include (1) that a species has not been documented in approximately 20–40 years despite some searching and/or some evidence of significant habitat loss or degradation; (2) that a species or ecosystem has been searched for unsuccessfully, but not thoroughly enough to presume that it is extinct or collapsed throughout its range.
GU — Unrankable Currently unrankable due to lack of information or due to substantially conflicting information about status or trends. NOTE: Whenever possible (when the range of uncertainty is three consecutive ranks or less), a range rank (e.g., G2G3) should be used to delineate the limits (range) of uncertainty.
GX — Presumed Extinct (species) or Presumed Collapsed (ecosystems/communities) Not located despite intensive searches and virtually no likelihood of rediscovery (species) or Collapsed throughout its range, due to loss of key dominant and characteristic taxa and/or elimination of the sites and ecological processes on which the type depends (ecosystems/communities).
? Denotes inexact numeric rank (i.e. G4?).
T Denotes that the rank applies to a subspecies or variety.
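The rank codes above form a small controlled vocabulary. The following hypothetical Python sketch shows one way a conservation database might expand them, including range ranks such as G2G3 and the T qualifier; the dictionary and function names are illustrative only and are not taken from NatureServe's actual schema.

```python
# Hypothetical encoding of Global Rank codes; names are illustrative only.
GLOBAL_RANKS = {
    "G1": "Critically Imperiled",
    "G2": "Imperiled",
    "G3": "Vulnerable",
    "G4": "Apparently Secure",
    "G5": "Secure",
    "GH": "Possibly Extinct / Possibly Collapsed",
    "GU": "Unrankable",
    "GX": "Presumed Extinct / Presumed Collapsed",
}

def describe_rank(code):
    """Expand a rank string such as 'G1', 'G4?', 'G2G3' or 'G5T2'."""
    inexact = code.endswith("?")          # '?' marks an inexact numeric rank
    code = code.rstrip("?")
    base, _, infra = code.partition("T")  # 'T...' scopes the rank to a subspecies/variety
    parts = [base[i:i + 2] for i in range(0, len(base), 2)]  # 'G2G3' -> ['G2', 'G3']
    label = " to ".join(GLOBAL_RANKS.get(p, p) for p in parts)
    if infra:
        label += f" (subspecies/variety rank T{infra})"
    if inexact:
        label += " (inexact)"
    return label

print(describe_rank("G2G3"))  # Imperiled to Vulnerable
print(describe_rank("G4?"))   # Apparently Secure (inexact)
```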
External links
Ontario Ministry of Natural Resources / Ministère des richesses naturelles de l'Ontario
Natural Heritage Information Centre / Centre d'information des heritages naturelles
NatureServe
NatureServe Explorer Global Conservation Status Ranks
Conservation biology | GRANK | [
"Biology"
] | 599 | [
"Conservation biology"
] |
5,416,131 | https://en.wikipedia.org/wiki/NRANK | NRANK, or National Rank, is a ranking of the rarity of a species within a nation. Each nation can assign their own NRANK based on information from conservation data centres, natural heritage programmes, and expert scientists. Other sources of species endangerment levels come from a data bases from the IUCN. This list was first compiled in 1963 to highlight endangered species in each region as a way to allow conservation, called the Red List of Threatened Species.
References
Taxonomy (biology) | NRANK | [
"Biology"
] | 100 | [
"Taxonomy (biology)"
] |
5,416,311 | https://en.wikipedia.org/wiki/List%20of%20built-in%20macOS%20apps | This is a list of built-in apps and system components developed by Apple Inc. for macOS that come bundled by default or are installed through a system update. Many of the default programs found on macOS have counterparts on Apple's other operating systems, most often on iOS and iPadOS.
Apple has also included versions of iWork, iMovie, and GarageBand for free with new device activations since 2013. However, these programs are maintained independently from the operating system itself. Similarly, Xcode is offered for free on the Mac App Store and receives updates independently of the operating system despite being tightly integrated.
Applications
App Store
The Mac App Store is macOS's digital distribution platform for macOS apps, created and maintained by Apple Inc. Based on the iOS version, the platform was announced on October 20, 2010, at Apple's "Back to the Mac" event. It first launched on January 6, 2011, as part of the free Mac OS X 10.6.6 update for all current Snow Leopard users; Apple had begun accepting app submissions from registered developers on November 3, 2010, in preparation for the launch. Within 24 hours of release, Apple announced that there had been over one million downloads.
Automator
Automator is an app used to create workflows for automating repetitive tasks into batches for quicker alteration via point-and-click (or drag and drop). This saves time and effort over human intervention to manually change each file separately. Automator enables the repetition of tasks across a wide variety of programs, including Finder, Safari, Calendar, Contacts and others. It can also work with third-party applications such as Microsoft Office, Adobe Photoshop or Pixelmator.
The icon features a robot holding a pipe, a reference to pipelines, a computer science term for connected data workflows. Automator was first released with Mac OS X Tiger (10.4).
Books
Books, previously known as iBooks, is an eBook reading application first released with OS X Mavericks. It allows users to read and purchase digital books, as well as listen to audiobooks. Reading goals can be set which encourage users to read for an amount of time each day.
Calculator
Calculator is a basic calculator application made by Apple Inc. and bundled with macOS. It has three modes: basic, scientific, and programmer. Basic includes a number pad, buttons for adding, subtracting, multiplying, and dividing, as well as memory keys. Scientific mode supports exponents and trigonometric functions, and programmer mode gives the user access to more options related to computer programming.
The Calculator program has a long history going back to the very beginning of the Macintosh platform, where a simple four-function calculator program was a standard desk accessory from the earliest system versions. Though no higher math capability was included, third-party developers provided upgrades, and Apple released the Graphing Calculator application with the first PowerPC release (7.1.2) of the Mac OS, and it was a standard component through Mac OS 9. Apple currently ships a different application called Grapher.
Calculator has Reverse Polish notation support, and can also speak the buttons pressed and result returned.
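To illustrate what Reverse Polish notation entry means in practice, here is a generic stack-based RPN evaluator in Python; it is a minimal sketch of the notation itself, not Apple's implementation.

```python
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def eval_rpn(expression):
    """Evaluate a Reverse Polish notation expression, e.g. '3 4 + 2 *'."""
    stack = []
    for tok in expression.split():
        if tok in OPS:
            b, a = stack.pop(), stack.pop()   # an operator consumes the top two operands
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))          # operands are pushed as they are entered
    return stack.pop()

print(eval_rpn("3 4 + 2 *"))   # (3 + 4) * 2 = 14.0
```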
Calendar
Calendar, previously known as iCal before OS X Mountain Lion, is a personal calendar app made by Apple Inc., originally released as a free download for Mac OS X v10.2 on September 10, 2002, before being bundled with the operating system as iCal 1.5 with the release of Mac OS X v10.3. It tracks events and appointments added by the user and includes various holidays depending on the location the device is set to as well as birthdays from contacts. Users are also able to subscribe to other calendars from friends or third-parties.
iCal was the first calendar application for Mac OS X to offer support for multiple calendars and the ability to intermittently publish/subscribe to calendars on WebDAV servers. Calendar also offers online cloud backup of calendars using Apple's iCloud service, or it can synchronize with other calendar services, including Google Calendar and Microsoft Exchange Server.
Chess
Chess is a 3D chess game for macOS, developed by Apple Inc. as a fork of GNOME Chess (formerly "glChess"). Its history dates back to OpenStep and Mac OS X 10.2. It supports chess variants such as crazyhouse and suicide chess. Apple redistributes the source code under its own Apple Sample Code License, after a special permission has been granted from the original authors of GNOME Chess (which is licensed under GPL3). Apple also ships the game with the Sjeng chess engine (GPL).
Clock
Clock is a timekeeping app first made available in macOS Ventura. It allows users to view the current time in locations around the world, set alarms and timers, and use their Mac as a stopwatch. Alarms and timers play a chime once completed, which the user can choose from their ringtone library.
Contacts
Contacts, previously known as Address Book before OS X Mountain Lion, is a computerized address book. Contacts can be synchronized over iCloud and other online address book services and allows for the storage of names, phone numbers, email addresses, home addresses, job titles, birthdays, and social media usernames.
Dictionary
Dictionary is an application introduced with OS X 10.4 that provides definitions and synonyms from various sources, serving as a built-in dictionary and thesaurus. The program also includes definitions for a list of Apple-related terms as well as access to Wikipedia articles. Dictionary supports several languages and currently provides American-English definitions from the New Oxford American Dictionary and Oxford American Writer's Thesaurus.
FaceTime
FaceTime is a videotelephony app introduced in Mac OS X 10.6.6, replacing the video and audio calling functionality of iChat on Mac. Users can also make standard phone calls through the FaceTime app if a connected iPhone under the same Apple ID is nearby. In 2018, alongside the release of macOS Mojave, Apple added group video and audio calling to FaceTime, supporting up to 32 participants.
With the release of macOS Monterey, Apple introduced a feature called SharePlay, which allows users to simultaneously watch videos, listen to music together, or share their display.
Find My
Find My is an app and service that enables users to track the locations of iOS, iPadOS, macOS, watchOS, AirPods and AirTags via iCloud. First introduced in macOS Catalina, it replaces Find My Mac and Find My Friends from previous versions. Missing devices can be made to play a sound at maximum volume, flagged as lost and locked with a passcode, or remotely erased. Users are also able to share their GPS locations with friends and family who own Apple devices of their own and can set notifications for when a person arrives or leaves a destination.
Font Book
Font Book is a font manager first released with Mac OS X Panther in 2003. It allows users to browse and view all fonts installed on the device, as well as install new fonts from .otf and .ttf files. A font can be selected to see its alphabets, complete repertoire of characters, and how it sets a sample text of the user's choice.
Freeform
Freeform is a virtual brainstorming app first made available alongside macOS 13.1. It allows users to create canvases called "boards", which can display a range of inputs including text notes, photos, documents, and web links. The app offers real-time collaboration between users, with support for FaceTime and iCloud syncing.
Home
Home is a smart-home management app released with macOS 10.14 Mojave, that serves as the front-end for Apple's HomeKit software framework. It lets users configure, communicate with, and control their HomeKit enabled smart appliances from a single application. Appliances can be divided into separate rooms and access to home controls can be shared with others.
Image Capture
Image Capture is an application that enables users to upload pictures from digital cameras or scanners which are either connected directly to their computer or network. It provides no organizational tools like Photos but is useful for collating pictures from a variety of sources with no need for drivers.
Mail
Mail is an email client first originating in NeXTstep, before being carried over to Mac OS X. It is preconfigured to work with popular email providers, such as Yahoo! Mail, AOL Mail, Gmail, Outlook and iCloud (formerly MobileMe) and supports Exchange. Mail includes the ability to read and write emails, file emails into folders, search for emails, automatically append signatures to outgoing emails, filter out junk mail, and automatically unsubscribe from newsletters.
Maps
Maps is a web mapping app and service introduced to macOS with OS X Mavericks. It provides directions and estimated times of arrival for automobile, pedestrian, cycling and public transportation navigation. Apple Maps features a Flyover mode that enables a user to explore certain densely populated urban centers and other places of interest in a 3D landscape composed of models of buildings and structures, as well as Look Around, which allows the user to view 360° street-level imagery.
Messages
Messages is an instant messaging app introduced with OS X, replacing the messaging component of iChat in prior versions while providing support for the iMessage protocol from iOS. A number of upgrades have been introduced to the iMessage platform over time, including message effects, editing and deleting messages within a fifteen minute window, and a dedicated iMessage App Store which allows users to download sticker packs that can be sent in conversations.
Music
Music is a media player first introduced in macOS Catalina, replacing the music-playing capabilities of iTunes. It can play music files stored locally on devices and allows users to curate their song library into playlists. Songs can be purchased directly from the iTunes Store or streamed through Apple Music if the user has an active subscription. Internet radio stations can also be found within the app, with both local and international broadcasters available. Music supports lossless and spatial audio, and is capable of video playback, used primarily for music videos, artist interviews, and live performances.
News
News is a news aggregator first introduced in selected regions with the release of macOS Mojave 10.14. Users can read news articles with it, based on publishers, websites and topics they select, such as technology or politics. On March 25, 2019, Apple News+ was made available within the News app, which is a subscription service allowing access to content from a number of magazines and newspapers.
Notes
Notes is a notetaking app first introduced with OS X Mountain Lion. It functions as a service for making text notes and sketches, which can be synchronised between devices using Apple's iCloud service. Notes features support for advanced text formatting options, several styles of lists, rich web and map link previews, support for more file type attachments, a corresponding dedicated attachment browser, and a system share extension point for saving web links and images.
Passwords
Passwords is an app for managing passwords, introduced in macOS Sequoia. It replaces Keychain Access.
Photo Booth
Photo Booth is a camera application first introduced on devices running Mac OS X Tiger with a built-in iSight camera, allowing users to take pictures and videos. Photo Booth displays a preview showing the camera's view in real time, while thumbnails of saved photos and videos are displayed along the bottom of this window, obscuring the bottom of the video preview. These can be shown or played by clicking on the thumbnails. Users can also apply a variety of effects to a photo, which act similarly to social media filters.
Photos
Photos is a photo management and editing application first introduced with OS X Yosemite 10.10.3, replacing both iPhoto and Aperture. Photos is based on the rebuilt version of the in-built app released for iOS 8. The photos library is organized chronologically on a timeline, determined by the metadata attached to the photo. Photos can also be sorted manually into albums, searched by location or by tagged persons. Photos can be synced and backed up through the iCloud Photo Library and shared albums. Photos contains a number of simple editing tools which allow users to crop, rotate, and adjust their photos, with a limited number of editing tools available for videos.
Podcasts
Podcasts is a media player used for playing and subscribing to podcasts, first introduced in macOS Catalina to replace the podcasting capabilities of iTunes. Podcasts can be discovered and followed or subscribed to in the 'Browse' and 'Search' tabs, with the 'Listen Now' tab showing new episodes of followed podcasts as they are made available. Podcast channels allow users to follow or subscribe to creators rather than individual shows.
Preview
Preview is an image and PDF viewer application, first originating in NeXTstep, before being carried over to Mac OS X. It is capable of viewing, printing, and editing a number of digital image formats, as well as Portable Document Format (PDF) files. It employs the Quartz graphics layer, and the ImageIO and Core Image frameworks.
QuickTime Player
QuickTime Player is an application that can play compatible video and sound files. It offers limited editing features, including trimming video clips and exporting to one of four video resolutions or an audio-only format. QuickTime Player can also record video and audio from the device's camera and microphone, or record a user's display for screen recording.
Reminders
Reminders is a task-managing app introduced in OS X Mountain Lion and later rebuilt from the ground up in macOS Catalina. The app allows users to create their own lists of reminders and set notifications for themselves. New reminders can be placed into lists or set as subtasks and can include several details, including a priority tag, a note about the reminder, and an image or URL attachment. Additionally, alarms can be set for reminders, sending a notification to users at a certain time and date, when a geofence around an area is crossed, or when a message starts being typed to a set contact.
Safari
Safari is a graphical web browser based on the WebKit engine, included with macOS since version 10.3 "Panther", where it replaces Internet Explorer for Mac OS X. Websites can be bookmarked, added to a reading list, or saved to the home screen and are synced between devices through iCloud. In 2010, Safari 5 introduced a reader mode, extensions, and developer tools. Safari 11, released in 2017, added Intelligent Tracking Prevention, which uses artificial intelligence to block web tracking. Safari 13 added support for Apple Pay, and authentication with FIDO2 security keys. Its interface was redesigned in Safari 15, including a new landing page.
Shortcuts
Shortcuts, formerly Workflow, is a visual scripting app that allows users to create macros for executing specific tasks on their device. These task sequences can be created by the user and shared online through iCloud. A number of curated shortcuts can also be downloaded from the integrated gallery.
Stickies
Stickies is a desktop note program first included in System 7.5, later being re-written in Cocoa during the transition to Mac OS X in 2001. It allows a user to put post-it note-like windows on the screen to write short reminders, notes and other clippings. The ability to collapse note windows, which is present in all versions of Stickies, is a holdover from System 7.5's WindowShade feature. The window button layout, which is unusual for a modern macOS application, is retained from Mac OS 8.
Stocks
Stocks is a stock market tracking app first introduced with macOS Mojave. It allows users to check the Yahoo! Finance data for any company valued on the stock exchange, including the current value of a company and their increase or decrease percentage. A graph shows the trends of each company over time, with a green graph showing positive growth and a red graph showing a decline. Business News is provided when a stock is not selected, which shows Apple News articles about companies a user is following.
System Settings
System Settings, formerly System Preferences, is an application included with macOS. It allows users to access information about their device and modify various system settings and options, such as the desktop wallpaper, screen saver, notifications, Wi-Fi and Bluetooth, display and brightness, keyboard and trackpad, accessibility features, and more. With the release of macOS Catalina, a Screen Time feature was introduced which is intended to help users focus and combat screen addiction. Furthermore, macOS Monterey introduced Focus modes, which expand on Apple's previous Do Not Disturb feature to filter notifications during scenarios such as sleeping or working.
TextEdit
TextEdit is an open-source word processor and text editor, first featured in NeXT's NeXTSTEP and OPENSTEP. TextEdit has support for formatted text, justification, and even the inclusion of graphics and other multimedia elements, as well as the ability to read and write to different character encodings, including Unicode (UTF-8 and UTF-16). It automatically adjusts letter spacing in addition to word spacing while justifying text. TextEdit does not support multiple columns of text.
TV
TV, also known as Apple TV, is a media player first introduced in macOS Catalina, replacing the video-playing capabilities of iTunes. The app can be used for viewing television shows and films purchased or rented through the iTunes Store, which can be accessed from within the app. It also houses original content from the Apple TV+ streaming service, and can even directly stream content from some third-party services through the a la carte video on demand "Apple TV Channels" service. The TV app can be used to index and access content from other linked video on demand services, allowing programs watched in other apps to appear in a user's Up Next feed, even if they are not subscribed through the Channels service. The TV app is also capable of broadcasting live sports and events, such as through the MLS Season Pass.
Voice Memos
Voice Memos is a voice recording app, first introduced in macOS Mojave, designed for saving short snippets of audio for later playback. Saved voice memos can be shared as a .m4a file or can be edited, which allows parts of a recording to be replaced, background noise to be removed, or the length of a recording to be trimmed. Other playback options include the ability to change playback speed, skip silent parts of a memo, or enhance a recording. Audio files can also be organised into different folders.
Weather
Weather was introduced to the Mac in macOS Ventura.
Utilities
Activity Monitor
Activity Monitor is a system monitor for the macOS operating system, which also incorporates task manager functionality. Activity Monitor appeared in Mac OS X v10.3, when it subsumed the functionality of the programs Process Viewer (a task manager) and CPU Monitor found in the previous version of OS X. In OS X 10.9, Activity Monitor was significantly revamped and gained a fifth tab for "energy" (in addition to CPU, memory, disk, and network).
AirPort Utility
AirPort Utility is a program that allows users to configure an AirPort wireless network and manage services associated with and devices connected to AirPort Routers. It comes pre-installed on macOS, and is available to download for Microsoft Windows and iOS. AirPort Utility is unique in that it offers network configuration in a native application as opposed to a web application. It provides a graphical overview of AirPort devices attached to a network, and provides tools to manage each one individually. It allows users to configure their network preferences, assign Back to My Mac accounts to the network, and configure USB attached Printers and hard drives. The current versions are 6.3.6 for recent versions of macOS, 5.6.1 for Microsoft Windows and older versions of Mac OS X, and 1.3.4 for iOS.
On January 30, 2013, Apple released AirPort Utility 6.0 for macOS featuring a redesign of the user interface focused on increasing usability for novice users. Reception was mixed with some media outlets reporting IT professionals and network administrators being frustrated over some removed features. It was reported that most end users, however, wouldn't notice the feature omissions. Users requiring the removed features can still access the previous version of AirPort Utility using a workaround.
Audio MIDI Setup
Audio MIDI Setup is a utility program that comes with the macOS operating system for adjusting the computer's audio input and output configuration settings and managing MIDI devices.
It was first introduced in Mac OS X 10.5 Leopard as a simplified way to configure MIDI devices. Users need to be aware that prior to this release, MIDI devices did not require this step, and mention of it might be omitted by third-party MIDI device manufacturers.
Bluetooth File Exchange
Bluetooth File Exchange is a utility that comes with the macOS operating system, used to exchange files to or from a Bluetooth-enabled device. For example, it could be used to send an image to a cellphone, or to receive an image or other documents from a PDA.
Boot Camp Assistant
Boot Camp Assistant assists users with installing Windows on their Intel Mac using Boot Camp. It does not support Macs with Apple silicon processors, as Microsoft does not have a commercial version of Windows 10 that runs on ARM-based processors.
ColorSync Utility
ColorSync Utility is a macOS application used for management of color profiles and filters used in Apple's PDF workflows, or applying filters to PDF documents. The interface is composed of two parts, the document browser and the utility window. The document browser allows the user to zoom in and out of an image or apply a Filter to it. The utility window has several options, including Profile First Aid, Profiles, Devices, Filters and Calculator.
Console
Console is a log viewer developed by Apple Inc. and included with macOS. It allows users to search through all of the system's logged messages, and can alert the user when certain types of messages are logged. The Console is generally used for troubleshooting when there is a problem with the computer. macOS itself, as well as any applications that are used, send a constant stream of messages to the system in the form of log files. The console allows users to read the system logs, help find certain ones, monitor them, and filter their contents.
Clicking on "Show Log List" in the toolbar will bring up the Log List. The Log List opens a sidebar which shows all of the different logs that the system maintains. This list helps in viewing the many different logs maintained in various parts of the system by bringing them all together to one place. By clicking on a particular log category, all of the logs will be shown.
The System Log Queries contains all of the logs that have to do with the entire system. This includes system logs as well as individual application logs.
Selecting All Messages gives a continuously updated, live look at the computer's activities. This includes all activities from both the system and any running applications. Logs in this section of the Console are all formatted uniformly. They all include a timestamp, the name of the process or application, and the actual message of the log. When the message displayed includes a paperclip icon next to it, it means that it is a shortened version of a longer report, and clicking the icon will show the complete report.
In addition to viewing all messages, users can also create custom queries with any criteria that they like. These custom queries will filter the messages and will also be shown in the All Messages section. In order to make a new query, choose "New System Log Query" from the File menu.
Digital Color Meter
Digital Color Meter is a utility for measuring and displaying the color values of pixels displayed on the screen of a Macintosh computer.
The utility presents a "window" onto the screen which includes a cursor that by default is 1 × 1 pixel in size. The color displayed in that pixel is shown as a color value, which may be represented as a decimal or hexadecimal RGB triplet, a CIE 1931, CIE 1976 or CIELAB triplet, or a Tristimulus triplet. The displayed color can be copied either as a solid color or as the color value that represents it, for use in other applications (for instance, an RGB triplet may be used in a color specification on a World Wide Web page).
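As a small worked example of the kind of value the utility copies, the following Python sketch formats an 8-bit RGB triplet as the hexadecimal color notation commonly used on web pages; the example triplet is arbitrary.

```python
def rgb_to_hex(r, g, b):
    """Format an 8-bit RGB triplet as a CSS-style hexadecimal color string."""
    return f"#{r:02X}{g:02X}{b:02X}"

print(rgb_to_hex(255, 149, 0))   # "#FF9500" -- an arbitrary example triplet
```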
Disk Utility
Disk Utility is a system utility for performing disk and disk volume-related tasks. It can create, convert, backup, compress, and encrypt logical volume images from a wide range of formats, mount or unmount disk volumes, verify a disk's integrity and repair it if damaged, and erase, format, partition, or clone disks.
Grapher
Grapher is a graphing calculator program first introduced in Mac OS X Tiger that is able to create 2D and 3D graphs from simple and complex equations. Users edit the appearance of graphs by changing line colors, adding patterns to rendered surfaces, adding comments, and changing the fonts and styles used to display them. Grapher is able to create animations of graphs by changing constants or rotating them in space.
Keychain Access
Keychain is the encrypted password management system in macOS, first introduced with Mac OS 8.6. A keychain can contain several types of data, including passwords, private keys, certificates, and secure notes.
Migration Assistant
Migration Assistant is a utility by Apple Inc. that transfers data, user accounts, computer settings and apps from one Macintosh computer to another computer, or from a full drive backup. As of OS X Lion and later, it can also migrate contacts, calendars, and email accounts and other files from Microsoft Windows. Migration Assistant can be used during initial setup of a new computer or run manually on a system that has already been set up. It may be used multiple times to copy only applications, user account(s), or settings. Its primary purpose is to duplicate the contents and configuration of an existing computer user account(s) on a new one.
The Migration Assistant does not transfer the operating system of the old computer to the new one. Similarly, applications and utilities bundled by Apple with the operating system (e.g. Safari) are not transferred, based on the assumption that the newer machine has the same or newer version already installed. However, settings for these applications (e.g. bookmarks) are transferred.
Print Centre
Print Centre is a utility that allows a user to view all current and pending jobs on any connected printers or fax machines. The program opens automatically when a job is sent from the device to a printer, and allows pending jobs to be paused or canceled. Furthermore, it is capable of displaying information about a connected printer, including approximate ink supply levels, and can open Image Capture if the printer or fax has a scanner attached.
Screen Sharing
Screen Sharing is a utility that may be used to control remote computers and access their files. To connect, one may enter a VNC address or an Apple ID and authenticate as a local user on the remote computer, or, if the computers are linked via the same Apple ID, have the connection initialised automatically. It supports features such as a shared clipboard between the two computers and remote file transfer.
The feature must be enabled in the Sharing preference pane in System Settings.
Screenshot
Screenshot is an application introduced with macOS Mojave, replacing Grab, which functioned similarly. The app allows for screen recording and taking screenshots, either of a single window, a selected portion of the screen, or the entire screen. Screenshot is initialized whenever the user presses the keyboard shortcuts Shift-Command-3, Shift-Command-4, Shift-Command-5, or Shift-Command-6.
Script Editor
Script Editor, formerly AppleScript Editor, is a code editor for the AppleScript and JavaScript for Automation scripting languages, included in classic Mac OS and macOS.
System Information
System Information, formerly System Profiler, is a software utility derived from field service diagnostics produced by Apple's Service Diagnostic Engineering team, at that time located in Apple satellite buildings in Campbell, California. It had been bundled with the classic Mac OS since Mac OS 7.6 under the name Apple System Profiler. In Mac OS X 10.0, the first release of macOS, it was renamed System Profiler; with the release of Mac OS X 10.7 "Lion" it was again renamed to System Information. Lion also added the ability to look up support information for the user's hardware model. In OS X Mountain Lion and later versions of macOS, users can also access System Information by holding down the Option key, which replaces "About This Mac" with "System Information" in the Apple menu.
It compiles technical information on all of the hardware, devices, drivers, applications, system settings, system software programs, and kernel extensions installed on the host computer, and can export this information as plain text, RTF, or plist XML. This information is used to diagnose problems and can be particularly useful when investigating a hardware fault; the user can also send it directly to Apple if desired. It supports scripting automation through AppleScript and has limited support in Automator.
System Information can also be accessed by using the "system_profiler" command through macOS's Terminal application.
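As a minimal sketch of the command-line route described above, the following Python example (an illustrative addition, not part of macOS) shells out to system_profiler to retrieve the hardware overview; the choice of the SPHardwareDataType data type and the use of Python's subprocess module are assumptions made for the example.

```python
import subprocess

def hardware_overview() -> str:
    """Return the hardware section of the System Information report.

    Runs the system_profiler command mentioned above; SPHardwareDataType
    is the data type for the hardware overview (other data types can be
    listed with `system_profiler -listDataTypes`).
    """
    result = subprocess.run(
        ["system_profiler", "SPHardwareDataType"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(hardware_overview())
```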
Terminal
Terminal is a terminal emulator program, first originating in NeXTSTEP and OPENSTEP, before being carried over into Mac OS X. It provides text-based access to the operating system, in contrast to the mostly graphical nature of the user experience of macOS, by providing a command-line interface to the operating system when used in conjunction with a Unix shell, such as zsh (the default shell since macOS Catalina). The user can choose other shells available with macOS, such as the KornShell, tcsh, and bash.
VoiceOver Utility
VoiceOver Utility is a screen reader application which allows the user to listen to spoken directions from the computer, providing accessibility for blind and low-vision users. VoiceOver also includes support for many Braille displays. In addition, VoiceOver includes features for those that cannot use the mouse, such as keyboard-based navigation.
Features
Control Center
Control Center provides access to system controls, such as Wi-Fi, Bluetooth, and Sound, in a unified interface accessible from the menu bar. Some of these controls can be added to the menu bar by dragging them from Control Center. Additional components can be added in System Settings. Available controls include:
Wi-Fi
Bluetooth
AirDrop
Focus
Stage Manager
Keyboard Brightness (available on Mac notebooks)
Screen Mirroring
Display
Sound
Now Playing
Accessibility Shortcuts
Battery (available on Mac notebooks)
Fast User Switching
Dock
The Dock is the main method of launching and switching between applications on macOS. It can hold any number of items and resizes them dynamically to fit, while using magnification to better view smaller items. By default, it appears on the bottom edge of the screen, but it can instead be placed on the left or right edge of the screen if the user wishes.
Finder
Finder is the default file manager and graphical interface shell of macOS. It is responsible for the launching of other applications, and for the overall user management of files, disks, and network volumes. The Finder uses a view of the file system that is rendered using a desktop metaphor; that is, the files and folders are represented as appropriate icons. There is a "favorites" sidebar of commonly used and important folders on the left of the Finder window. Finder can also display previews of a range of files, such as images, applications and PDF files. The Quick Look feature allows users to quickly examine documents and images in more detail from the Finder by pressing the space bar, without opening them in a separate application.
Following the deprecation of iTunes, Finder is also now responsible for the backup and transfer of files to iPhone and iPad devices.
Launchpad
Launchpad is an application launcher that was first introduced in OS X Lion. It displays all applications installed on the user's machine in a grid of icons, which can be put into folders. Launchpad provides an alternative way to start applications in macOS, in addition to other options such as the Dock, Finder, and Spotlight search. Launchpad can be used to uninstall apps that came from the Mac App Store.
Mission Control
Mission Control is a window management system and application introduced with the release of Mac OS X 10.7 Lion, combining the features of the previous Dashboard, Exposé, and Spaces programs. It allows a user to view and organise all open application windows at once, including the ability to move windows between different connected monitors and virtual desktops.
Notification Center
Notification Center displays notifications from apps and websites. Users access Notification Center by clicking the clock in the menu bar on macOS Big Sur or the Notification Center icon in earlier versions of macOS. Notification Center can be customized in System Settings.
Siri
Siri is a digital assistant introduced in macOS Sierra that allows the user to interact with it to ask questions, make recommendations, and perform actions either on the device or by delegating requests to a set of Internet services. With continued use, it adapts to users' individual language usages, searches, and preferences, returning individualized results.
Spotlight
Spotlight is macOS's selection-based search system, used for indexing documents, pictures, music, applications, and System Settings within the computer. In addition, specific words in documents and in web pages in a web browser's history or bookmarks can be searched. It also allows the user to narrow down searches with creation dates, modification dates, sizes, types and other attributes.
Time Machine
Time Machine is a backup mechanism first introduced in Mac OS X 10.5 Leopard. It creates incremental backups of files that can be restored at a later date, and allows the user to restore the whole system or specific files. The software is designed to work with both local storage devices and network-attached disks, and is most commonly used with external disk drives connected using either USB or Thunderbolt.
System components
Archive Utility
Archive Utility (BOMArchiveHelper until Mac OS X 10.5) is the default archive file handler in macOS. It is usually invoked automatically when opening a file in one of its supported formats. It can be used to create compressed ZIP archives by choosing "Create Archive of 'file'" (Leopard: "Compress") in the Finder's File or contextual menu. It is located at /System/Library/CoreServices/Applications/Archive Utility.app in OS X 10.10 and later, /System/Library/CoreServices/Archive Utility.app in 10.5 through 10.9, and /System/Library/CoreServices/BOMArchiveHelper.app in 10.4. Prior to Archive Utility's inclusion in Mac OS X v10.3, beginning with Mac OS 7.6, Apple bundled the freeware StuffIt Expander with the operating system.
Invoking Archive Utility manually shows a minimal GUI letting the user change Archive Utility preferences or choose files to compress or uncompress.
BOM is an abbreviation of Bill of Materials. Bill of Materials files or .bom files are used by the macOS Installer program to document where files in an installer bundle are installed, what their file permissions should be, and other file metadata. Thus, a Bill of Materials is read by the Installer, and Archive Utility helps it by extracting the files specified in the BOM.
Crash Reporter
Crash Reporter is the standard crash reporter in macOS. Crash Reporter can send the crash logs to Apple Inc. for their engineers to review.
Crash Reporter has four modes of operation:
Basic — The default mode. Only application crashes are reported, and the dialog does not contain any debugging information.
Developer — In addition to application crashes, crashes are also displayed for background and system processes.
Server — The default for macOS Server systems. No crash reports are shown to the user (though they are still logged).
None — Disables the dialog prompt. Crash reports are neither displayed nor logged.
The developer tool CrashReporterPrefs can be used to change modes, as can the terminal command defaults write com.apple.CrashReporter DialogType [basic|developer|server].
In basic mode, if Crash Reporter notices an application has crashed twice in succession, it will offer to rename the application's preference file and try again (corrupted preference files being a common cause of crashes).
When reporting a crash, the top text field of the window has the crash log, while the bottom field is for user comments. Users may also copy and paste the log into their e-mail client to send to a third-party application developer for the developer to use.
DiskImageMounter
DiskImageMounter is the utility that handles mounting disk volume images in Mac OS X, starting with version 10.3. DiskImageMounter works either by launching a daemon to handle the disk image or by contacting a running daemon and having it mount the image.
Like BOMArchiveHelper, DiskImageMounter has no GUI when double-clicked; doing so does nothing. The only GUI the program ever displays is a window with a progress bar and mount options (cancel or skip verification) or an error report if it could not mount the image. It is found in /System/Library/CoreServices/DiskImageMounter.app.
Starting with version 10.7, Apple "removed double-click support for images using legacy metadata." DiskImageMounter can no longer open the .img (NDIF only), .smi (self-mounting), .dc42 (Disk Copy 4.2), and .dart (DART) disk image formats that were previously supported in version 10.6 and earlier.
DiskImageMounter supports a variety of disk image file types:
Apple Disk Image (.dmg, com.apple.disk-image)
UDIF disk images (.udif, com.apple.disk-image-udif); UDIF segment (.devs, .dmgpart, com.apple.disk-image-udif-segment)
self mounting image (.smi, com.apple.disk-image-smi)
DVD/CD-R master image (.toast, .dvdr, .cdr, com.apple.disk-image-cdr, com.roxio.disk-image-toast)
disk image segment (dmgpart)
raw disk image (OSTypes: devr, hdrv, DDim, com.apple.disk-image-raw)
PC drive container (OSTypes: OPCD, com.apple.disk-image-pc)
ISO image (.iso, public.iso-image)
sparse disk image (.sparseimage, com.apple.disk-image-sparse, .sparsebundle)
As of macOS 11.0, support for the following formats has been removed:
Disk Copy 4.2 disk image (.dc42, .diskcopy42, com.apple.disk-image-dc42)
DART disk image (.dart, com.apple.disk-image-dart)
NDIF disk image (.ndif, .img, com.apple.disk-image-ndif); NDIF disk image segment (.imgpart, com.apple.disk-image-ndif-segment)
Directory Utility
Directory Utility is a utility included with the macOS (previously Mac OS X) operating system to configure connections to directory services. Prior to Mac OS X 10.5, this tool was named Directory Access. Apple's LDAP implementation is called Apple Open Directory.
DVD Player
DVD Player, formerly Apple DVD Player, is the default DVD player in macOS. It supports all the standard DVD features such as multiple audio, video & subtitle tracks as well as Dolby Digital 5.1 passthrough, DVD access URLs and closed captions. In some instances, users can choose which VOB file to open. DVD Player is also fully compatible with DVDs authored by DVD Studio Pro and iDVD, including HD DVDs by DVD Studio Pro. As of macOS Mojave, it has been updated to 64-bit, sports a new icon and has better Touch Bar support.
DVD Player complies with most copyright laws, and will thus enforce most restrictive measures of DVD technology, such as region-restrictive encodings and user-inhibited operations ("disabled actions"). It does this even when using an all-region DVD drive. It will even force Apple's Screenshot program to cease functioning through the Finder interface until the DVD Player application is quit, effectively preventing the user from taking screen captures of visual DVD content.
The software does not contain a DTS decoder, so DTS tracks cannot be played through the Mac's built-in speakers or analog output. However, DTS tracks can be output to devices that have their own decoder, so playback is supported through outputs such as S/PDIF, DisplayPort and HDMI. It has never supported the ability to play Blu-ray discs.
Feedback Assistant
The Feedback Assistant is made available to customers in the Apple Software Customer Seeding, AppleSeed for IT or Apple Beta Software programs and allows a user to manually send feedback, reports, or requests to Apple.
HelpViewer
Help Viewer is a WebKit-based HTML viewer for macOS aimed at displaying help files and other documentation. The default file extension for its help books is ".help", and help index files are generated with Help Indexer. macOS applications typically use Help Viewer to display their help content, rather than a custom system.
Help Viewer's implementation in Mac OS X 10.5 (Leopard) found its way to Rob Griffiths' list of Leopard criticisms, because Apple changed the software from a standalone application with a standard window interface to one with a floating window that always appears in front of all other application windows, obscuring the interface for which one is seeking help.
Although one can close or minimize the Help Viewer window, it is difficult to consult the Help Viewer while simultaneously working with the application, short of changing the size of windows so both fit on the screen. The Help Viewer window also does not work with the Exposé window management feature (Mission Control in OS X 10.7 or later). There is a workaround using the defaults command accessible in the Terminal.
Installer
Installer extracts and installs files out of .pkg packages, allowing developers to create uniform software installers.
Installer launches when a package or metapackage file is opened. The installation process itself can vary substantially, as Installer allows developers to customize the information the user is presented with. For example, it can be made to display a custom welcome message, software license and readme. Installer also handles authentication, checks that packages are valid before installing them, and allows developers to run custom scripts at several points during the installation process.
Installer packages have the file extension .pkg. Prior to Mac OS X Leopard, installer packages were implemented as Mac OS X packages. These packages were a collection of files that resided in folders with a .pkg file extension. In Mac OS X Leopard the software packaging method was changed to use the XAR (eXtensible ARchiver) file format; the directory tree containing the files is packaged as an xar archive file with a .pkg extension. Instead of distributing multiple files for a package, this allowed all of the software files to be contained in a single file for easier distribution with the benefit of package signing.
loginwindow
The loginwindow process displays the macOS login window at system startup if auto-login is not set, verifies login attempts, and launches login applications. It also implements the Force Quit window, restarts macOS user interface components (the Dock and Finder) if they crash, and handles the logout, restart, and shutdown routines.
Users are assigned their own loginwindow when they log in; if a loginwindow process belonging to a specific user is force quit, they will be logged out.
Software Update
Software Update is a section in System Settings for Mac Software Updates, as well as updates to core Mac apps, starting in macOS Mojave (10.14); it also has an item in the Apple menu. From OS X Mountain Lion (10.8) to macOS High Sierra (10.13), the Mac App Store was used for Software Updates; prior to that, Software Update was a separate utility, which could be launched from the Apple menu or from the Software Update pane in System Settings.
Other
Other system components include:
About This Mac, which shows information about the Mac it is running on, such as the hardware, serial number, and macOS version.
Captive Network Assistant, a daemon used to access captive portals when connected to public Wi-Fi networks.
Certificate Assistant, a utility for creating and verifying digital certificates.
ControlStrip, a daemon that controls the Touch Bar.
CoreLocationAgent, a daemon responsible for displaying authorization prompts to allow apps and widgets to access location services.
Expansion Slot Utility, a program that allows manual allocation of PCIe card bandwidth. It is only available on certain Mac Pro models.
FolderActionsDispatcher, a daemon responsible for monitoring changes to the filesystem to run Folder Action scripts.
Install Command Line Developer Tools, a utility that allows developers to easily install Xcode's command line developer tools if Xcode is not installed. It can be executed by running xcode-select --install in the terminal.
iOS App Installer, an app that downloads .ipa files for iPadOS applications so that they can be run on Apple silicon-based Macs.
Keychain Circle Notification, a daemon involved in iCloud Keychain syncing.
ManagedClient manages various functions pertaining to managed preferences and configuration profiles.
Setup Assistant is the application that starts on first boot of a fresh copy of macOS or a new Mac. It configures computer accounts, Apple ID, iCloud, and Accessibility settings. It is also run after major macOS system upgrades.
OBEXAgent, a server that handles Bluetooth OBEX (Object Exchange) connections, such as file transfers.
ODSAgent, a server that handles remote disk access.
OSDUIHelper, a daemon that displays on-screen graphics when certain settings, such as volume or display brightness, are adjusted.
PIPAgent, which manages the picture-in-picture feature available in macOS Sierra and later.
Photo Library Migration Utility, which can migrate iPhoto and Aperture libraries to Photos.
PowerChime, present on some MacBook models, plays a chime when the notebook is plugged in to power.
ReportPanic, an app that displays a window when the system reboots from a kernel panic; it allows the user to send a report to Apple.
screencaptureui, a daemon responsible for drawing the user interface shown when taking a screenshot.
ScreenSaverEngine, the process that handles screen saver access. When invoked, it will display the screensaver.
SystemUIServer, a daemon that manages status items in the menu bar.
ThermalTrap, a daemon which notifies users when the system temperature exceeds a usable limit.
Ticket Viewer, an app that displays Kerberos tickets.
UnmountAssistantAgent, which displays a dialog if there is a process preventing ejection of a disk and offers to forcibly eject the disk if the process cannot be quit.
Wireless Diagnostics, an app that launches when Wi-Fi connectivity problems are detected.
Discontinued
Classic
The Classic Environment, usually referred to as Classic, is a hardware and software abstraction layer in PowerPC versions of Mac OS X that allows most legacy applications compatible with Mac OS 9 to run on Mac OS X. The name "Classic" is also sometimes used by software vendors to refer to the application programming interface available to "classic" applications, to differentiate between programming for Mac OS X and the classic version of the Mac OS.
The Classic Environment is supported on PowerPC-based Macintosh computers running versions of Mac OS X up to 10.4 "Tiger", but not with 10.5 "Leopard" or Macintoshes utilizing any other architecture than PowerPC.
The Classic Environment is a descendant of Rhapsody's "Blue Box" virtualization layer, which served as a proof of concept. (Previously, Apple A/UX also offered a virtualized Mac OS environment on top of a UNIX operating system.) It uses a Mac OS 9 System Folder, and a New World ROM file to bridge the differences between the older PowerPC Macintosh platforms and the XNU kernel environment. The Classic Environment was created as a key element of Apple's strategy to replace the classic Mac OS (versions 9 and below) with Mac OS X as the standard operating system (OS) used by Macintosh computers by eliminating the need to use the older OS directly.
The Classic Environment can be loaded at login (for faster activation when needed later), on command, or whenever a Mac OS application that requires it is launched (to reduce the use of system resources when not needed). It requires a full version of Mac OS 9 to be installed on the system, and loads an instance of that OS in a sandbox environment, replacing some low-level system calls with equivalent calls to Mac OS X via updated system files and the Classic Support system enabler. This sandbox is used to launch all "classic" Mac OS applications—there is only one instance of the Classic process running for a given user, and only one user per machine may be running Classic at a time.
If the user chooses to launch the Classic Environment only when needed, launching a "classic" application first launches the Classic Environment, which can be configured to appear in a window resembling the display of a computer booting into Mac OS 9. When the Classic Environment has finished loading, the application launches. When a "classic" application is in the foreground, the menu bar at the top of the screen changes to look like the older Mac OS system menu. Dialog boxes and other user-interface elements retain their traditional appearance.
The Classic Environment provides a way to run "Classic" applications on Apple's G5 systems as well as on most G4 based computers sold after January 2003. These machines cannot boot Mac OS 9 or earlier without the bridging capabilities of the Classic Environment or other software (see SheepShaver).
The Classic Environment's compatibility is usually sufficient for many applications, provided the application using it does not require direct access to hardware or engage in full-screen drawing. However, it is not a complete clone of Mac OS 9. The Finder included with Mac OS X v10.2 and later does not support the "Reveal Object" Apple events used by some Mac OS 9 applications, causing the "Reveal In Finder" functionality for those applications to be lost. Early releases of Mac OS X would often fail to draw window frames of Classic applications correctly, and after the Classic Environment's windowing was made double buffered in Mac OS X Panther, some older applications and games sometimes failed to update the screen properly, such as the original Macintosh port of Doom. However, the Classic Environment "resurrected" some older applications that had previously been unusable on the Macintosh Quadra and Power Macintosh series; this is because Mac OS X replaced Mac OS 9's virtual memory system with a more standard and less fragile implementation.
The Classic Environment's performance is also generally acceptable, with a few exceptions. Most of an application is run directly as PowerPC code (which would not be possible on Intel-based Macs). Motorola 68k code is handled by the same Motorola 68LC040 emulator that Mac OS 9 uses. Some application functions are actually faster in the Classic Environment than under Mac OS 9 on equivalent hardware, due to performance improvements in the newer operating system's device drivers. These applications are largely those that use heavy disk processing, and were often quickly ported to Mac OS X by their developers. On the other hand, applications that rely on heavy processing and which did not share resources under Mac OS 9's co-operative multitasking model will be interrupted by other (non-Classic) processes under Mac OS X's preemptive multitasking. The greater processing power of most systems that run Mac OS X (compared to systems intended to run Mac OS 8 or 9) helps to mitigate the performance degradation of the Classic Environment's virtualization.
Dashboard
Dashboard was an application for Apple Inc.'s macOS operating systems, used as a secondary desktop for hosting mini-applications known as widgets. These were intended to be simple applications that launched quickly. Dashboard applications supplied with macOS included a stock ticker, weather report, calculator and notepad; users could create or download their own. Before Mac OS X 10.7 Lion, when Dashboard was activated, the user's desktop was dimmed and widgets appeared in the foreground. Like application windows, they could be moved around, rearranged, deleted, and duplicated (so that more than one instance of the same widget was open at the same time, possibly with different settings). New widgets could be opened via an icon bar on the bottom layer, which loaded a list of available widgets similar to the iOS home screen or the macOS Launchpad.
Dashboard was first introduced in Mac OS X 10.4 Tiger. It could be activated as an application from the Dock, Launchpad or Spotlight, or accessed with a dedicated Dashboard key. Alternatively, the user could choose to have Dashboard open when the cursor was moved into a preassigned hot corner, or via a keyboard shortcut. Starting with Mac OS X 10.7 Lion, Dashboard could be configured as a space, accessed by swiping four fingers to the right from the desktops on either side of it. In OS X 10.10 Yosemite, Dashboard is disabled by default, as Notification Center became the primary method of displaying widgets.
Dashboard was removed in macOS Catalina.
Grab
Grab was a built-in utility for taking screenshots. It supported capturing a marquee selection, a whole window, or the whole screen, as well as timed screenshots. The program originated from NeXTSTEP, and was replaced by the Screenshot utility in macOS Mojave. Grab saved screenshots in the TIFF format. It was also possible to save screenshots in PDF format (earlier versions of macOS) or PNG format (later versions).
iDVD
iDVD is a discontinued application that could be used to create DVDs.
Internet Connect
The Internet Connect program in Mac OS X allowed the user to activate dial-up connections to the Internet via an ISP or VPN, and provided a simple way to connect to an AirPort network. Through Mac OS X 10.4, Internet Connect offered simpler, more general tools than the more detailed Network pane in System Preferences, which allows the user to configure and control system-wide network settings. As of Mac OS X 10.5, Internet Connect's functions were incorporated into the Network pane of System Preferences, and the application is no longer included.
Use of Internet Connect is generally not necessary if the Macintosh is connected to the Internet through an Ethernet connection to DSL or cable internet service, except to manage connections to any subordinate Bluetooth equipment.
iSync
iSync was a tool made to sync iCal and Address Book data to a SyncML-enabled mobile phone, via Bluetooth or by using a USB connection. It was released on January 2, 2003, with technology licensed from fusionOne. Support for many (pre-October 2007) devices was built-in, with newer devices being supported via manufacturer and third-party iSync Plugins. Support for Palm OS organizers and compatible smartphones was removed with the release of iSync 3.1 and Mac OS X 10.6 Snow Leopard. BlackBerry OS, Palm OS, and Windows Mobile (Pocket PC) devices could not be used with iSync, but were supported by third-party applications. Before the release of Mac OS X 10.4, iSync also synchronized a user's Safari bookmarks with the then usable .Mac subscription service provided by Apple.
iTunes
iTunes is a media player, media library, Internet radio broadcaster, mobile device management utility, and the client app for iTunes Store. It is used to purchase, play, download, and organize digital multimedia, on personal computers running the macOS and Windows operating systems. iTunes is developed by Apple Inc. It was announced on January 9, 2001.
Because iTunes was criticized for having a bloated user experience, Apple decided to split iTunes into separate apps as of macOS Catalina: Apple Music, Apple Podcasts, and Apple TV. Finder would take over the device management aspect that iTunes previously served. This change would not affect Windows or older macOS versions.
Network Utility
Network Utility was an application, included in macOS up to macOS Catalina, that provided a variety of tools for computer network information gathering and analysis. Starting with macOS Big Sur, the application's functionality was removed and it was replaced with a message stating that it has been deprecated. Starting with macOS Ventura, the application was removed from the OS.
Network Utility showed information about each of the computer's network connections, including the MAC address of the interface, the IP address assigned to it, its speed and status, a count of data packets sent and received, and a count of transmission errors and collisions. It also provided a GUI to the netstat, ping, traceroute, whois, finger, and stroke UNIX programs.
ODBC Administrator
ODBC Administrator was a 32-bit utility in the Mac OS X operating system for administering ODBC, which enables interaction with ODBC-compliant data sources. Features included connection pooling, trace log creation, and ODBC driver management, among other administration features.
Although Apple started including the underlying iODBC libraries in Mac OS X Jaguar, and continued to do so through at least macOS Big Sur, Apple only included their ODBC Administrator through Mac OS X Leopard, and temporarily made it available as a separate download (since removed) for Snow Leopard.
Alternatives to Apple's 32-bit ODBC Administrator include the free and open source 32-bit and 64-bit iODBC Administrator included with the iODBC SDK, which is available for all extant versions of Mac OS X (10.0.x through 11.2.x).
Printer Setup Utility
The Printer Setup Utility was an application that allowed the user to configure printers physically connected to the computer or connected via a network. The utility provided more specific tools than the more user-friendly printers pane in System Preferences. In Mac OS X 10.5 Leopard, the Printer Setup Utility was removed and its features were placed in the Print & Fax System Preferences pane. Viewing individual printers' queues was moved to a Printer Proxy application.
Remote Install Mac OS X
Remote Install Mac OS X was a remote installer for use with MacBook Air laptops over the network. It could run on a Mac or a Windows PC with an optical drive. A client MacBook Air (lacking an optical drive) could then wirelessly connect to the other Mac or PC to perform system software installs.
Remote Install Mac OS X was released as part of Mac OS X 10.5.2 on February 12, 2008. Support for the Mac mini was added in March 2009, allowing the DVD drive to be replaced with a second hard drive.
With the launch of Mac OS X Lion, Apple omitted Remote Install. A workaround is to enable Target Disk Mode.
Sherlock
Sherlock was a file and web search utility included with the classic Mac OS (from Mac OS 8.5) and early versions of Mac OS X; it queried Internet services through plug-in "channels" and was superseded by Spotlight before being dropped from the operating system.
Software Update
In Mac OS 9 and early versions of Mac OS X, Software Update was a standalone tool. The program was part of the CoreServices in OS X. It could automatically inform users of new updates (with new features and bug and security fixes) to the operating system, applications, device drivers, and firmware. All updates required the user to enter their administrative password and some required a system restart. It could be set to check for updates daily, weekly, monthly, or not at all; in addition, it could download and store the associated .pkg file (the same type used by Installer) to be installed at a later date, and it maintained a history of installed updates. Starting with Mac OS X 10.5 Leopard, updates that required a reboot logged out the user prior to installation and automatically restarted the computer when complete. In earlier versions of OS X, the updates were installed, but critical files were not replaced until the next system startup.
Beginning with OS X 10.8, Software Update became part of the App Store application. Beginning with macOS Mojave (10.14), it became a part of System Preferences.
X11
In Mac OS X Tiger, X11 was an optional install included on the install DVD. Mac OS X Leopard, Snow Leopard and Lion installed X11 by default, but from OS X Mountain Lion (10.8), Apple dropped dedicated support for X11, with users directed to the open source XQuartz project (to which it contributes) instead.
Development tools
Server technology
Core components
AppleScript
Aqua
Audio Units
Bonjour
Boot Camp
Carbon
Cocoa
Core Animation
Core Audio
Core Data
Core Image
Core Video
Darwin
Mission Control
Keychain
OpenGL
plist
Quartz
QuickTime
Rosetta
Smart folder
Spaces
WebKit
XNU
Notes
References
components
macOS components | List of built-in macOS apps | [
"Technology"
] | 12,832 | [
"Computing-related lists",
"Apple Inc. lists"
] |
5,416,952 | https://en.wikipedia.org/wiki/Associazione%20Friulana%20di%20Astronomia%20e%20Meteorologia | The Associazione Friulana di Astronomia e Meteorologia (AFAM, eng. Friulian Association of Astronomy and Meteorology) is a non-profit cultural association whose goal is the promotion of astronomy and meteorology to the public and the development of scientific research activities, often in collaboration with professional scientists.
Established in 1969, AFAM now has its own operational facilities in Remanzacco (Friuli, Italy).
AFAM is member of the Unione Astrofili Italiani (the Italian union of amateur astronomers).
The Association has its own library, a conference room, and a permanent astronomical observatory with optical instruments for visual observation and CCD sensors for research.
Members
Luca Donato, president
Giovanni Sostero
See also
List of astronomical societies
References
External links
Official site of the Associazione Friulana di Astronomia e Meteorologia
Astronomy organizations
1969 establishments in Italy
Scientific organizations established in 1969
Astronomy in Italy | Associazione Friulana di Astronomia e Meteorologia | [
"Astronomy"
] | 196 | [
"Astronomy stubs",
"Astronomy organizations",
"Astronomy organization stubs"
] |
2,943,429 | https://en.wikipedia.org/wiki/Samsung%20D600 | The Samsung SGH-D600 (and its successor, the D600i) is a GSM mobile phone released in the first quarter of 2005 made by Samsung Electronics.
The SGH-D600i is a later version of the SGH-D600, released in 2007 to address issues concerning microSD card support.
Features and specifications
The Samsung SGH-D600 is the successor to the Samsung SGH-D500, and differs from it in having a slightly revised design, a higher-resolution 2-megapixel camera located outside the sliding area rather than inside, TV output, and support for microSD external flash memory cards. It also includes a Picsel Viewer for Microsoft Office documents and is available in two colors, grey and red. The SGH-D600's camera has a maximum resolution of 1600x1200 pixels, and the phone offers Bluetooth connectivity and a 240x320 pixel screen.
The battery is claimed to provide a stand-by time of up to 300 hours and a talk time of up to 7 hours.
Reception and criticism
The Register praised its looks and small size. Trusted Reviews awarded it 9/10, calling it "a great looking phone with a screen that puts other handsets to shame". CNet gave a positive review scoring 3.5/5 suggesting it was "good both for professionals and those looking for fun features".
The phone's poor visual TV out quality has received criticism.
Variants
SGH-D600E: A variant of D600 with EDGE connectivity with tri-band instead of quad-band.
SGH-D606: Exclusively sold on Rogers in Canada.
SGH-D608: Chinese Anycall variant sold for the Chinese market. It was also sold on China Mobile in China.
SGH-D600i: A variant of D600 with fixed microSD/TransFlash problems.
Related phones
Samsung SCH-U620: CDMA phone with identical design.
Samsung SGH-C300: Lower end slider phone with similar design and Yamaha MA-2 sound chip.
Samsung SGH-D500: The predecessor to the SGH-D600.
References
External links
Datasheet
D600
Slider phones | Samsung D600 | [
"Technology"
] | 464 | [
"Mobile technology stubs",
"Mobile phone stubs"
] |
2,943,460 | https://en.wikipedia.org/wiki/Email%20hub | The term Mail Hub is used to denote an MTA (message transfer agent) or system of MTAs used to route email but not act as a mail server (having no end-user email store) since there is no MUA (mail user agent) access. Examples could include dedicated anti-SPAM appliances, anti-virus engines running on dedicated hardware, email gateways and so forth.
DNS Based Mail Hub
A first example of a mail hub consisting of a network of MTAs is that of a typical small-to-medium-size Internet service provider (ISP), or a FOSS corporate mail system. This approach is well suited to developing-nation ISPs, NGOs, and any other deployment that needs high availability on a low budget, largely because it avoids expensive network-level switches and hardware.
A simple DNS MX record-based mail hub cluster provides parallelism together with front-end failover and load balancing.
The servers would all be Linux x86 machines with low-cost SATA or PATA hard disk storage. The front-end servers would most likely run Postfix with SpamAssassin and ClamAV. Such a RAIS server cluster overcomes the problem of the Perl-based SpamAssassin being too CPU- and memory-hungry for any single low-cost server. The solution presented here is based entirely on GPL FOSS software, but alternative configurations using other free or non-free software are possible.
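To make the MX-based routing concrete, the following Python sketch uses the third-party dnspython library to read a domain's MX records and order them by preference; the domain name is a placeholder and the library choice is an assumption for illustration only, since in practice the routing decisions are made by the sending MTAs themselves.

```python
# Sketch only: requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def mx_targets(domain: str) -> list[tuple[int, str]]:
    """Return (preference, mail server) pairs for a domain, lowest preference first.

    Sending MTAs deliver to the lowest-preference record that answers and fall
    back to higher values, while records sharing a preference are used in
    rotation; this is what gives a DNS-based mail hub its front-end failover
    and load balancing.
    """
    answers = dns.resolver.resolve(domain, "MX")
    records = [(rdata.preference, str(rdata.exchange).rstrip(".")) for rdata in answers]
    return sorted(records)

if __name__ == "__main__":
    for preference, host in mx_targets("example.com"):  # placeholder domain
        print(preference, host)
```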
References
Mail Clustering, ISOC, 2005.
Email | Email hub | [
"Technology"
] | 315 | [
"Computing stubs"
] |
2,943,479 | https://en.wikipedia.org/wiki/Trenbolone | Trenbolone is an androgen and anabolic steroid (AAS) of the nandrolone group which itself was never marketed. Trenbolone ester prodrugs, including trenbolone acetate (brand names Finajet, Finaplix, others) and trenbolone hexahydrobenzylcarbonate (brand names Parabolan, Hexabolan), are or have been marketed for veterinary and clinical use. Trenbolone acetate is used in veterinary medicine in livestock to increase muscle growth and appetite, while trenbolone hexahydrobenzylcarbonate was formerly used clinically in humans but is now no longer marketed. In addition, although it is not approved for clinical or veterinary use, trenbolone enanthate is sometimes sold on the black market under the nickname Trenabol.
Uses
Veterinary
Trenbolone, as trenbolone acetate, improves muscle mass, feed efficiency, and mineral absorption in cattle.
Side effects
Sometimes human users may experience an event called "tren cough" shortly after or during an injection, where the user experiences a violent and extreme coughing fit, which can last for minutes and in some cases even longer.
"Tren cough", despite its name, is not exclusive to trenbolone. It can occur when injecting any oil-steroid solutions, if the solution accidentally is injected intravenously. When the oil-steroid solution gets into the bloodstream, the steroid oil solution travels into the lungs, therefore causing a coughing fit. There exist several theories on why this phenomenon happens.
It is possible that the androgenic effect of steroids activates a variety of lipid-like active compounds called prostaglandins, many of which are inflammatory and vasoconstrictive. These compounds are signalled through two pathways: cyclooxygenase (COX, also known as prostaglandin-endoperoxide synthase) and lipoxygenases (LOX, e.g. EC 1.13.11.34 and EC 1.13.11.33). The bradykinin peptide is also well known to promote the cough reaction associated with ACE inhibitor medications prescribed for hypertension.
Pharmacology
Pharmacodynamics
Trenbolone has both anabolic and androgenic effects. Once metabolized, trenbolone esters have the effect of increasing ammonium ion uptake by muscles, leading to an increase in the rate of protein synthesis. It may also have the secondary effects of stimulating appetite and decreasing the rate of catabolism, as all anabolic steroids are believed to; however, catabolism likely increases significantly once the steroid is no longer taken. At least one study in rats has shown trenbolone to induce androgen receptor (AR)-mediated gene expression at least as potently as dihydrotestosterone (DHT). This evidence suggests that trenbolone can increase male secondary sex characteristics without needing to be converted to a more potent androgen in the body.
Studies on metabolism are mixed, with some studies showing that it is metabolized by aromatase or 5α-reductase into estrogenic compounds, or into 5α-reduced androgenic compounds, respectively.
The precise potency of trenbolone is not known, although it is often claimed, without firm evidence, to be five times as high as that of testosterone. This figure is based on a book by William Llewellyn but has not been definitively proven. Trenbolone was never approved for human use, so only limited data on the subject exist. The relevant research has usually been done in rats, which makes the 500/100 potency figure unreliable, since rats respond differently to androgens and are less sensitive to them. While some literature reports a 5-fold higher potency, two other scientific reviews report a 3-fold higher potency, which leaves the actual relative potency unclear. Trenbolone also binds with high affinity to the progesterone receptor, and binds to the glucocorticoid receptor as well.
Pharmacokinetics
To prolong its elimination half-life, trenbolone is administered as a prodrug as an ester conjugate such as trenbolone acetate, trenbolone enanthate, or trenbolone hexahydrobenzylcarbonate. Plasma lipases then cleave the ester group in the bloodstream leaving free trenbolone.
Trenbolone and 17-epitrenbolone are both excreted in urine as conjugates that can be hydrolyzed with beta-glucuronidase. This implies that trenbolone leaves the body as beta-glucuronides or sulfates.
Chemistry
Trenbolone, also known as 19-nor-δ9,11-testosterone or as estra-4,9,11-trien-17β-ol-3-one, is a synthetic estrane steroid and a derivative of nandrolone (19-nortestosterone). It is specifically nandrolone with two additional double bonds in the steroid nucleus. Trenbolone esters, which have an ester at the C17β position, include trenbolone acetate, trenbolone enanthate, trenbolone hexahydrobenzylcarbonate, and trenbolone undecanoate.
History
Trenbolone was first synthesized in 1963.
Society and culture
Generic names
Trenbolone is the generic name of the drug. It has also been referred to as trienolone, trienbolone, or, colloquially, tren.
Legal status
Some bodybuilders and athletes use trenbolone hexahydrobenzylcarbonate and other esters (acetate, enanthate) for their muscle-building and otherwise performance-enhancing effects. Such use is illegal in the United States and several European and Asian countries. The DEA classifies trenbolone and its esters as Schedule III controlled substances under the Controlled Substances Act. Trenbolone is classified as a Schedule 4 drug in Canada and a class C drug with no penalty for personal use or possession in the United Kingdom. Use or possession of steroids without a prescription is a crime in Australia.
Doping in sports
There are known cases of doping in sports with trenbolone esters by professional athletes.
See also
Metribolone
References
Further reading
Anabolic–androgenic steroids
Estranes
Ketones
Progestogens
Secondary alcohols | Trenbolone | [
"Chemistry"
] | 1,419 | [
"Ketones",
"Functional groups"
] |
2,943,640 | https://en.wikipedia.org/wiki/Antibiotic%20sensitivity%20testing | Antibiotic sensitivity testing or antibiotic susceptibility testing is the measurement of the susceptibility of bacteria to antibiotics. It is used because bacteria may have resistance to some antibiotics. Sensitivity testing results can allow a clinician to change the choice of antibiotics from empiric therapy, which is when an antibiotic is selected based on clinical suspicion about the site of an infection and common causative bacteria, to directed therapy, in which the choice of antibiotic is based on knowledge of the organism and its sensitivities.
Sensitivity testing usually occurs in a medical laboratory, and uses culture methods that expose bacteria to antibiotics, or genetic methods that test to see if bacteria have genes that confer resistance. Culture methods often involve measuring the diameter of areas without bacterial growth, called zones of inhibition, around paper discs containing antibiotics on agar culture dishes that have been evenly inoculated with bacteria. The minimum inhibitory concentration, which is the lowest concentration of the antibiotic that stops the growth of bacteria, can be estimated from the size of the zone of inhibition.
Antibiotic susceptibility testing has been needed since the discovery of the beta-lactam antibiotic penicillin. Initial methods were phenotypic, and involved culture or dilution. The Etest, an antibiotic impregnated strip, has been available since the 1980s, and genetic methods such as polymerase chain reaction (PCR) testing have been available since the early 2000s. Research is ongoing into improving current methods by making them faster or more accurate, as well as developing new methods for testing, such as microfluidics.
Uses
In clinical medicine, antibiotics are most frequently prescribed on the basis of a person's symptoms and medical guidelines. This method of antibiotic selection is called empiric therapy, and it is based on knowledge about what bacteria cause an infection, and to what antibiotics bacteria may be sensitive or resistant. For example, a simple urinary tract infection might be treated with trimethoprim/sulfamethoxazole. This is because Escherichia coli is the most likely causative bacterium, and may be sensitive to that combination antibiotic. However, bacteria can be resistant to several classes of antibiotics. This resistance might be because a type of bacteria has intrinsic resistance to some antibiotics, because of resistance following past exposure to antibiotics, or because resistance may be transmitted from other sources such as plasmids. Antibiotic sensitivity testing provides information about which antibiotics are more likely to be successful and should therefore be used to treat the infection.
Antibiotic sensitivity testing is also conducted at a population level in some countries as a form of screening. This is to assess the background rates of resistance to antibiotics (for example with methicillin-resistant Staphylococcus aureus), and may influence guidelines and public health measures.
Methods
Once a bacterium has been identified following microbiological culture, antibiotics are selected for susceptibility testing. Susceptibility testing methods are based on exposing bacteria to antibiotics and observing the effect on the growth of the bacteria (phenotypic testing), or identifying specific genetic markers (genetic testing). Methods used may be qualitative, meaning that a result indicates resistance is or is not present; or quantitative, using a minimum inhibitory concentration (MIC) to describe the concentration of antibiotic to which a bacterium is sensitive.
There are many factors that can affect the results of antibiotic sensitivity testing, including failure of the instrument, temperature, moisture, and potency of the antimicrobial agent. Quality control (QC) testing helps to ensure the accuracy of test results. Organizations such as the American Type Culture Collection and National Collection of Type Cultures provide strains of bacteria with known resistance phenotypes that can be used for quality control.
Phenotypic methods
Testing based on exposing bacteria to antibiotics uses agar plates or dilution in agar or broth. The selection of antibiotics will depend on the organism grown, and the antibiotics that are available locally. To ensure that the results are accurate, the concentration of bacteria that is added to the agar or broth (the inoculum) must be standardized. This is accomplished by comparing the turbidity of bacteria suspended in saline or broth to McFarland standards—solutions whose turbidity is equivalent to that of a suspension containing a given concentration of bacteria. Once an appropriate concentration (most commonly an 0.5 McFarland standard) has been reached, which can be determined by visual inspection or by photometry, the inoculum is added to the growth medium.
Manual
The disc diffusion method involves selecting a strain of bacteria, placing it on an agar plate, and observing bacterial growth near antibiotic-impregnated discs. This is also called the Kirby-Bauer method, although modified methods are also used. In some cases, urine samples or positive blood culture samples are applied directly to the test medium, bypassing the preliminary step of isolating the organism. If the antibiotic inhibits microbial growth, a clear ring, or zone of inhibition, is seen around the disc. The bacteria are classified as sensitive, intermediate, or resistant to an antibiotic by comparing the diameter of the zone of inhibition to defined thresholds which correlate with MICs.
Mueller–Hinton agar is frequently used in the disc diffusion test. The Clinical and Laboratory Standards Institute (CLSI) and European Committee on Antimicrobial Susceptibility Testing (EUCAST) provide standards for the type and depth of agar, temperature of incubation, and method of analysing results. Disc diffusion is considered the cheapest and most simple of the methods used to test for susceptibility, and is easily adapted to testing newly available antibiotics or formulations. Some slow-growing and fastidious bacteria cannot be accurately tested by this method, while others, such as Streptococcus species and Haemophilus influenzae, can be tested but require specialized growth media and incubation conditions.
Gradient methods, such as the Etest, use a plastic strip impregnated with a gradient of antibiotic concentrations that is placed on the inoculated growth medium, which is examined after a period of incubation. The minimum inhibitory concentration can be read from the point where the teardrop-shaped zone of inhibition intersects the concentration markings on the strip. Multiple strips for different antibiotics may be used. This type of test is considered a diffusion test.
In agar and broth dilution methods, bacteria are placed in multiple small tubes with different concentrations of antibiotics. Whether a bacterium is sensitive or not is determined by visual inspection or automatic optical methods, after a period of incubation. Broth dilution is considered the gold standard for phenotypic testing. The lowest concentration of antibiotics that inhibits growth is considered the MIC.
Automated
Automated systems exist that replicate manual processes, for example, by using imaging and software analysis to report the zone of inhibition in diffusion testing, or dispensing samples and determining results in dilutional testing. Automated instruments, such as the VITEK 2, BD Phoenix, and Microscan systems, are the most common methodology for AST. The specifications of each instrument vary, but the basic principle involves the introduction of a bacterial suspension into pre-formulated panels of antibiotics. The panels are incubated and the inhibition of bacterial growth by the antibiotic is automatically measured using methodologies such as turbidimetry, spectrophotometry or fluorescence detection. An expert system correlates the MICs with susceptibility results, and the results are automatically transmitted into the laboratory information system for validation and reporting. While such automated testing is less labour-intensive and more standardized than manual testing, its accuracy can be comparatively poor for certain organisms and antibiotics, so the disc diffusion test remains useful as a backup method.
Genetic methods
Genetic testing, such as via polymerase chain reaction (PCR), DNA microarray, and loop-mediated isothermal amplification, may be used to detect whether bacteria possess genes which confer antibiotic resistance. An example is the use of PCR to detect the mecA gene for beta-lactam-resistant Staphylococcus aureus. Other examples include assays for the vancomycin resistance genes vanA and vanB in Enterococcus species, and antibiotic resistance in Pseudomonas aeruginosa, Klebsiella pneumoniae and Escherichia coli. These tests have the benefit of being direct and rapid compared with phenotypic methods, and have a high likelihood of detecting a resistance gene when one is present. However, the resistance genes detected do not always match the resistance profile seen with phenotypic methods. The tests are also expensive and require specifically trained personnel.
Polymerase chain reaction is a method of identifying genes related to antibiotic susceptibility. In the PCR process, a bacterium's DNA is denatured and the two strands of the double helix separate. Primers specific to a sought-after gene are added to a solution containing the DNA, and a DNA polymerase is added alongside a mixture containing molecules that will be needed (for example, nucleotides and ions). If the relevant gene is present, every time this process runs, the quantity of the target gene will be doubled. After this process, the presence of the genes is demonstrated through a variety of methods including electrophoresis, Southern blotting, and other DNA sequencing analysis methods.
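As a rough worked example (assuming perfect doubling and no reagent limits), 30 amplification cycles would turn a single copy of the target sequence into 2^30, or roughly one billion, copies.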
DNA microarrays and chips use the binding of complementary DNA to a target gene or nucleic acid sequence. The benefit of this is that multiple genes can be assessed simultaneously.
In a study published in September 2024, magnetic nanoparticles studded with a beta-2-glycoprotein I peptide imitating a plasma protein were used to selectively retrieve microbial pathogens from blood culture specimens within hours. Magnets are used to fish out the peptide-bacterium complexes, which are then subjected to genetic testing.
MALDI-TOF
Matrix-assisted laser desorption ionisation-time of flight mass spectrometry (MALDI-TOF MS) is another method of susceptibility testing. This is a form of time-of-flight mass spectrometry, in which the molecules of a bacterium are subjected to matrix-assisted laser desorption. The ionised particles are then accelerated, and spectral peaks recorded, producing an expression profile which is capable of differentiating specific bacterial strains after being compared to known profiles. In the context of antibiotic susceptibility testing, this includes strains such as beta-lactamase-producing E. coli. MALDI-TOF is rapid and automated. There are limitations to testing in this format, however; results may not match those of phenotypic testing, and acquisition and maintenance are expensive.
Reporting
Bacteria are marked as sensitive, resistant, or having intermediate resistance to an antibiotic based on the minimum inhibitory concentration (MIC), which is the lowest concentration of the antibiotic that stops the growth of bacteria. The MIC is compared to standard threshold values (called "breakpoints") for a given bacterium and antibiotic. Breakpoints for the same organism and antibiotic may differ based on the site of infection: for example, the CLSI generally defines Streptococcus pneumoniae as sensitive to intravenous penicillin if MICs are ≤0.06 μg/ml, intermediate if MICs are 0.12 to 1 μg/ml, and resistant if MICs are ≥2 μg/ml, but for cases of meningitis, the breakpoints are considerably lower. Sometimes, whether an antibiotic is marked as resistant is also based on bacterial characteristics that are associated with known methods of resistance such as the potential for beta-lactamase production. Specific patterns of drug resistance or multidrug resistance may be noted, such as the presence of an extended-spectrum beta lactamase. Such information may be useful to the clinician, who can change the empiric treatment to a tailored treatment that is directed only at the causative bacterium. The results of antimicrobial susceptibility tests performed during a given time period can be compiled, usually in the form of a table, to form an antibiogram. Antibiograms help the clinician to select the best empiric antimicrobial therapy based on the local resistance patterns until the laboratory test results are available.
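To make the breakpoint comparison concrete, the following Python sketch classifies a measured MIC against the non-meningitis intravenous penicillin breakpoints for Streptococcus pneumoniae quoted above; the function and its defaults are illustrative assumptions, and real laboratory systems apply organism-, drug-, and site-specific breakpoint tables maintained by CLSI or EUCAST.

```python
def classify_mic(mic_ug_per_ml: float,
                 susceptible_max: float = 0.06,
                 resistant_min: float = 2.0) -> str:
    """Classify a minimum inhibitory concentration against a pair of breakpoints.

    The defaults are the CLSI breakpoints quoted above for intravenous
    penicillin against Streptococcus pneumoniae outside of meningitis:
    <= 0.06 ug/ml susceptible, 0.12-1 ug/ml intermediate, >= 2 ug/ml resistant.
    """
    if mic_ug_per_ml <= susceptible_max:
        return "susceptible"
    if mic_ug_per_ml >= resistant_min:
        return "resistant"
    return "intermediate"

# An isolate with an MIC of 0.5 ug/ml would be reported as intermediate.
assert classify_mic(0.5) == "intermediate"
```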
Clinical practice
Ideal antibiotic therapy is based on determining the causal agent and its antibiotic sensitivity. Empiric treatment is often started before laboratory microbiological reports are available. This might be for common or relatively minor infections based on clinical guidelines (such as community-acquired pneumonia), or for serious infections, such as sepsis or bacterial meningitis, in which delayed treatment carries substantial risks. The effectiveness of individual antibiotics varies with the anatomical site of the infection, the ability of the antibiotic to reach the site of infection, and the ability of the bacteria to resist or inactivate the antibiotic.
Specimens for antibiotic sensitivity testing are ideally collected before treatment is started. A sample may be taken from the site of a suspected infection; such as a blood culture sample when bacteria are suspected to be present in the bloodstream (bacteraemia), a sputum sample in the case of a pneumonia, or a urine sample in the case of a urinary tract infection. Sometimes multiple samples may be taken if the source of an infection is not clear. These samples are transferred to the microbiology laboratory where they are added to culture media, in or on which the bacteria grow until they are present in sufficient quantities for identification and sensitivity testing to be carried out.
When antibiotic sensitivity testing is completed, it will report the organisms present in the sample and which antibiotics they are susceptible to. Although antibiotic sensitivity testing is done in a laboratory (in vitro), the information it provides is often clinically relevant to how the antibiotics will act in a person (in vivo). Sometimes, a decision must be made as to whether particular bacteria are the cause of an infection or simply commensal bacteria or contaminants, such as Staphylococcus epidermidis and other opportunistic organisms. Other considerations may influence the choice of antibiotics, including the need to penetrate through to an infected site (such as an abscess), or the suspicion that one or more causes of an infection were not detected in a sample.
History
Since the discovery of the beta-lactam antibiotic penicillin, the rates of antimicrobial resistance have increased. Over time, methods for testing the sensitivity of bacteria to antibiotics have developed and changed.
In the 1920s, Alexander Fleming developed the first method of susceptibility testing. The "gutter method" he devised was a diffusion method, in which an antibiotic diffused through a gutter made of agar. In the 1940s, multiple investigators, including Pope, Foster and Woodruff, and Vincent and Vincent, used paper discs instead. All of these methods tested susceptibility only to penicillin. The results were difficult to interpret and unreliable, as they were inaccurate and not standardised between laboratories.
Dilution has been used as a method to grow and identify bacteria since the 1870s, and as a method of testing the susceptibility of bacteria to antibiotics since 1929, also by Alexander Fleming. The means of determining susceptibility changed over time, from assessing the turbidity of the solution, to measuring its pH (in 1942), to using optical instruments. Larger tube-based "macrodilution" testing has been superseded by smaller "microdilution" kits.
In 1966, the World Health Organisation confirmed the Kirby–Bauer method as the standard method for susceptibility testing; it is simple, cost-effective and can test multiple antibiotics.
The Etest was developed in 1980 by Bolmström and Eriksson, and MALDI-TOF was developed in the 2000s. An array of automated systems has been developed since the 1980s. PCR was the first genetic test available, and was first published as a method of detecting antibiotic susceptibility in 2001.
Further research
Point-of-care testing is being developed to shorten the time needed for testing, and to help practitioners avoid prescribing unnecessary antibiotics in the style of precision medicine. Traditional techniques typically take between 12 and 48 hours, although they can take up to five days. In contrast, rapid testing using molecular diagnostics is defined as "being feasible within an 8-h(our) working shift". Progress has been slow for a range of reasons, including cost and regulation.
Additional research is focused on the shortcomings of current testing methods. In addition to the time phenotypic methods take to produce a result, they are laborious, hard to transport, difficult to use in resource-limited settings, and carry a risk of cross-contamination.
As of 2017, point-of-care resistance diagnostics were available for methicillin-resistant Staphylococcus aureus (MRSA), rifampin-resistant Mycobacterium tuberculosis (TB), and vancomycin-resistant enterococci (VRE) through GeneXpert by molecular diagnostics company Cepheid.
Quantitative PCR, with a view to determining the percentage of detected bacteria that possess a resistance gene, is being explored. Whole genome sequencing of isolated bacteria is also being explored, and is likely to become more available as costs decrease and speed increases over time.
Additional methods explored include microfluidics, which uses small volumes of fluid together with a variety of detection methods, such as optical, electrochemical, and magnetic. Such assays require little sample fluid, and are rapid and portable.
The use of fluorescent dyes has also been explored. These involve labelled proteins targeted at biomarkers: nucleic acid sequences present within cells that are found when the bacterium is resistant to an antibiotic. An isolate of bacteria is fixed in position and then dissolved, and the isolate is exposed to a fluorescent dye, which is luminescent when viewed.
Improvements to existing platforms are also being explored, including improvements in imaging systems that are able to more rapidly identify the MIC in phenotypic samples; or the use of bioluminescent enzymes that reveal bacterial growth to make changes more easily visible.
Bibliography
References
External links
Antibiotics
Microbiology techniques
Infectious diseases
Antimicrobial resistance | Antibiotic sensitivity testing | [
"Chemistry",
"Biology"
] | 3,806 | [
"Antibiotics",
"Biotechnology products",
"Microbiology techniques",
"Biocides"
] |
2,943,922 | https://en.wikipedia.org/wiki/4104 | 4104 (four thousand one hundred [and] four) is the natural number following 4103 and preceding 4105. It is the second positive integer which can be expressed as the sum of two positive cubes in two different ways. The first such number, 1729, is called the "Hardy–Ramanujan number".
4104 is the sum of 4096 + 8 (that is, 16³ + 2³), and also the sum of 3375 + 729 (that is, 15³ + 9³).
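These identities can be checked directly; the short script below (illustrative only) enumerates sums of two positive cubes and recovers 1729 and 4104 as the first two integers expressible in two different ways:

```python
# Find the first integers expressible as a sum of two positive cubes
# in at least two different ways (1729 and 4104 are the first two).
from collections import defaultdict

LIMIT = 5000
ways = defaultdict(set)
a = 1
while a ** 3 < LIMIT:
    b = a
    while a ** 3 + b ** 3 <= LIMIT:
        ways[a ** 3 + b ** 3].add((a, b))
        b += 1
    a += 1

taxicab_like = sorted(n for n, pairs in ways.items() if len(pairs) >= 2)
print(taxicab_like[:2])    # [1729, 4104]
print(sorted(ways[4104]))  # [(2, 16), (9, 15)]
```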
See also
Taxicab number
1729
External links
MathWorld: Hardy–Ramanujan Number
Integers | 4104 | [
"Mathematics"
] | 126 | [
"Elementary mathematics",
"Integers",
"Mathematical objects",
"Numbers"
] |
2,944,387 | https://en.wikipedia.org/wiki/Eurocodes | The Eurocodes are the ten European standards (EN; harmonised technical rules) specifying how structural design should be conducted within the European Union (EU). These were developed by the European Committee for Standardization upon the request of the European Commission.
The purpose of the Eurocodes is to provide:
a means to prove compliance with the requirements for mechanical strength and stability and safety in case of fire established by European Union law.
a basis for construction and engineering contract specifications.
a framework for creating harmonized technical specifications for building products (CE mark).
Since March 2010, the Eurocodes have been mandatory for the specification of European public works and are intended to become the de facto standard for the private sector. The Eurocodes therefore replace the existing national building codes published by national standards bodies (e.g. BS 5950), although many countries had a period of co-existence. Additionally, each country is expected to issue a National Annex to the Eurocodes, which must be referenced for a particular country (e.g. the UK National Annex). At present, take-up of the Eurocodes is slow on private-sector projects, and existing national codes are still widely used by engineers.
The motto of the Eurocodes is "Building the future". The second generation of the Eurocodes (2G Eurocodes) is being prepared.
History
In 1975, the Commission of the European Community (now the European Commission) decided on an action programme in the field of construction, based on Article 95 of the Treaty. The objective of the programme was to eliminate technical obstacles to trade and to harmonise technical specifications. Within this action programme, the Commission took the initiative to establish a set of harmonised technical rules for the design of construction works which, in a first stage, would serve as an alternative to the national rules in force in the member states of the European Union (EU) and, ultimately, would replace them. For fifteen years, the Commission, with the help of a steering committee of representatives of the member states, oversaw the development of the Eurocodes programme, which led to the first generation of European codes in the 1980s.
In 1989, the Commission and the member states of the EU and the European Free Trade Association (EFTA) decided, on the basis of an agreement between the Commission and CEN, to transfer the preparation and the publication of the Eurocodes to the European Committee for Standardization (CEN) through a series of mandates, in order to provide them with the future status of European Standard (EN). This de facto links the Eurocodes with the provisions of all the Council's Directives and/or Commission's Decisions dealing with European standards (e.g. Regulation (EU) No. 305/2011 on the marketing of construction products and Directive 2014/24/EU on government procurement in the European Union).
List
The Eurocodes are published as separate European Standards, each having a number of parts. By 2002, ten sections had been developed and published:
Eurocode 0: Basis of structural design (EN 1990)
Eurocode 1: Actions on structures (EN 1991)
Part 1-1: Densities, self-weight, imposed loads for buildings (EN 1991-1-1)
Part 1-2: Actions on structures exposed to fire (EN 1991-1-2)
Part 1-3: General actions - Snow loads (EN 1991-1-3)
Part 1-4: General actions - Wind actions (EN 1991-1-4)
Part 1-5: General actions - Thermal actions (EN 1991-1-5)
Part 1-6: General actions - Actions during execution (EN 1991-1-6)
Part 1-7: General actions - Accidental Actions (EN 1991-1-7)
Part 2: Traffic loads on bridges (EN 1991-2)
Part 3: Actions induced by cranes and machinery (EN 1991-3)
Part 4: Silos and tanks (EN 1991-4)
Eurocode 2: Design of concrete structures (EN 1992)
Part 1-1: General rules, and rules for buildings (EN 1992-1-1)
Part 1-2: Structural fire design (EN 1992-1-2)
Part 1-3: Precast Concrete Elements and Structures (EN 1992-1-3)
Part 1-4: Lightweight aggregate concrete with closed structure (EN 1992-1-4)
Part 1-5: Structures with unbonded and external prestressing tendons (EN 1992-1-5)
Part 1-6: Plain concrete structures (EN 1992-1-6)
Part 2: Reinforced and prestressed concrete bridges (EN 1992-2)
Part 3: Liquid retaining and containing structures (EN 1992-3)
Part 4: Design of fastenings for use in concrete (EN 1992-4)
Eurocode 3: Design of steel structures (EN 1993)
Part 1-1: General rules and rules for buildings (EN 1993-1-1)
Part 1-2: General rules - Structural fire design (EN 1993-1-2)
Part 1-3: General rules - Supplementary rules for cold-formed members and sheeting (EN 1993-1-3)
Part 1-4: General rules - Supplementary rules for stainless steels (EN 1993-1-4)
Part 1-5: Plated structural elements (EN 1993-1-5)
Part 1-6: Strength and Stability of Shell Structures (EN 1993-1-6)
Part 1-7: General Rules - Supplementary rules for planar plated structural elements with out of plane loading (EN 1993-1-7)
Part 1-8: Design of joints (EN 1993-1-8)
Part 1-9: Fatigue (EN 1993-1-9)
Part 1-10: Material Toughness and through-thickness properties (EN 1993-1-10)
Part 1-11: Design of Structures with tension components (EN 1993-1-11)
Part 1-12: High Strength steels (EN 1993-1-12)
Part 2: Steel Bridges (EN 1993-2)
Part 3-1: Towers, masts and chimneys (EN 1993-3-1)
Part 3-2: Towers, masts and chimneys - Chimneys (EN 1993-3-2)
Part 4-1: Silos (EN 1993-4-1)
Part 4-2: Tanks (EN 1993-4-2)
Part 4-3: Pipelines (EN 1993-4-3)
Part 5: Piling (EN 1993-5)
Part 6: Crane supporting structures (EN 1993-6)
Eurocode 4: Design of composite steel and concrete structures (EN 1994)
Part 1-1: General rules and rules for buildings (EN 1994-1-1)
Part 1-2: Structural fire design (EN 1994-1-2)
Part 2: General rules and rules for bridges (EN 1994-2)
Eurocode 5: Design of timber structures (EN 1995)
Part 1-1: General – Common rules and rules for buildings (EN 1995-1-1)
Part 1-2: General – Structural fire design (EN 1995-1-2)
Part 2: Bridges (EN 1995-2)
Eurocode 6: Design of masonry structures (EN 1996)
Part 1-1: General – Rules for reinforced and unreinforced masonry structures (EN 1996-1-1)
Part 1-2: General rules – Structural fire design (EN 1996-1-2)
Part 2: Design, selection of materials and execution of masonry (EN 1996-2)
Part 3: Simplified calculation methods for unreinforced masonry structures (EN 1996-3)
Eurocode 7: Geotechnical design (EN 1997)
Part 1: General rules (EN 1997-1)
Part 2: Ground investigation and testing (EN 1997-2)
Part 3: Design assisted by field testing (EN 1997-3)
Eurocode 8: Design of structures for earthquake resistance (EN 1998)
Part 1: General rules, seismic actions and rules for buildings (EN 1998-1)
Part 2: Bridges (EN 1998-2)
Part 3: Assessment and retrofitting of buildings (EN 1998-3)
Part 4: Silos, tanks and pipelines (EN 1998-4)
Part 5: Foundations, retaining structures and geotechnical aspects (EN 1998-5)
Part 6: Towers, masts and chimneys (EN 1998-6)
Eurocode 9: Design of aluminium structures (EN 1999)
Part 1-1: General structural rules (EN 1999-1-1)
Part 1-2: Structural fire design (EN 1999-1-2)
Part 1-3: Structures susceptible to fatigue (EN 1999-1-3)
Part 1-4: Cold-formed structural sheeting (EN 1999-1-4)
Part 1-5: Shell structures (EN 1999-1-5)
Each of the codes (except EN 1990) is divided into a number of Parts covering specific aspects of the subject. In total there are 58 EN Eurocode parts distributed in the ten Eurocodes (EN 1990 – 1999).
All of the EN Eurocodes relating to materials have a Part 1-1 which covers the design of buildings and other civil engineering structures and a Part 1-2 for fire design. The codes for concrete, steel, composite steel and concrete, and timber structures and earthquake resistance have a Part 2 covering design of bridges. These Parts 2 should be used in combination with the appropriate general Parts (Parts 1).
See also
Geotechnical Engineering
Limit state design (Load and Resistance Factor Design)
List of EN standards
Structural Engineering
Structural robustness
Previous national standards
BS 5950: British Standard on steel design, replaced by Eurocode 3 in March, 2010.
BS 8110: British Standard on concrete design, replaced by Eurocode 2 in March, 2010.
BS 6399: British Standard on loading for buildings, replaced by Eurocode 1 in March, 2010.
References
External links
Eurocodes: Building the Future - European Commission
Eurocodes available in PDF and HTML format, without national annexes
'National Annexes & Eurocodes', European standards institutes and links to download national annexes.
Building codes
Civil engineering
EN standards | Eurocodes | [
"Engineering"
] | 2,086 | [
"Construction",
"Civil engineering",
"Building codes",
"Building engineering"
] |
2,944,872 | https://en.wikipedia.org/wiki/Lipogenesis | In biochemistry, lipogenesis is the conversion of fatty acids and glycerol into fats, or a metabolic process through which acetyl-CoA is converted to triglyceride for storage in fat. Lipogenesis encompasses both fatty acid and triglyceride synthesis, with the latter being the process by which fatty acids are esterified to glycerol before being packaged into very-low-density lipoprotein (VLDL). Fatty acids are produced in the cytoplasm of cells by repeatedly adding two-carbon units to acetyl-CoA. Triacylglycerol synthesis, on the other hand, occurs in the endoplasmic reticulum membrane of cells by bonding three fatty acid molecules to a glycerol molecule. Both processes take place mainly in liver and adipose tissue. Nevertheless, it also occurs to some extent in other tissues such as the gut and kidney. A review on lipogenesis in the brain was published in 2008 by Lopez and Vidal-Puig. After being packaged into VLDL in the liver, the resulting lipoprotein is then secreted directly into the blood for delivery to peripheral tissues.
Fatty acid synthesis
Fatty acid synthesis starts with acetyl-CoA and builds up by the addition of two-carbon units. Fatty acid synthesis occurs in the cytoplasm of cells while oxidative degradation occurs in the mitochondria. Many of the enzymes for the fatty acid synthesis are organized into a multienzyme complex called fatty acid synthase. The major sites of fatty acid synthesis are adipose tissue and the liver.
Triglyceride synthesis
Triglycerides are synthesized by esterification of fatty acids to glycerol. Fatty acid esterification takes place in the endoplasmic reticulum of cells by metabolic pathways in which acyl groups in fatty acyl-CoAs are transferred to the hydroxyl groups of glycerol-3-phosphate and diacylglycerol. Three fatty acid chains are bonded to each glycerol molecule. Each of the three -OH groups of the glycerol reacts with the carboxyl end of a fatty acid chain (-COOH). Water is eliminated and the remaining carbon atoms are linked by an -O- bond through dehydration synthesis.
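The overall dehydration reaction described above can be summarised by the standard textbook equation (included here for illustration; R denotes a fatty-acid hydrocarbon chain):

$\mathrm{C_3H_5(OH)_3 + 3\,RCOOH \longrightarrow C_3H_5(OOCR)_3 + 3\,H_2O}$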
Both the adipose tissue and the liver can synthesize triglycerides. Those produced by the liver are secreted from it in the form of very-low-density lipoproteins (VLDL). VLDL particles are secreted directly into blood, where they function to deliver the endogenously derived lipids to peripheral tissues.
Hormonal regulation
Insulin is a peptide hormone that is critical for managing the body's metabolism. Insulin is released by the pancreas when blood sugar levels rise, and it has many effects that broadly promote the absorption and storage of sugars, including lipogenesis.
Insulin stimulates lipogenesis primarily by activating two enzymatic pathways. Pyruvate dehydrogenase (PDH) converts pyruvate into acetyl-CoA. Acetyl-CoA carboxylase (ACC) converts the acetyl-CoA produced by PDH into malonyl-CoA. Malonyl-CoA provides the two-carbon building blocks that are used to create larger fatty acids.
Insulin stimulation of lipogenesis also occurs through the promotion of glucose uptake by adipose tissue. The increase in the uptake of glucose can occur through the use of glucose transporters directed to the plasma membrane or through the activation of lipogenic and glycolytic enzymes via covalent modification. The hormone has also been found to have long-term effects on lipogenic gene expression. It is hypothesized that this effect occurs through the transcription factor SREBP-1, where the association of insulin and SREBP-1 leads to the gene expression of glucokinase. The interaction of glucose and lipogenic gene expression is assumed to be managed by the increasing concentration of an unknown glucose metabolite through the activity of glucokinase.
Another hormone that may affect lipogenesis through the SREBP-1 pathway is leptin. It is involved in the process by limiting fat storage through inhibition of glucose intake and interfering with other adipose metabolic pathways. The inhibition of lipogenesis occurs through the down regulation of fatty acid and triglyceride gene expression. Through the promotion of fatty acid oxidation and lipogenesis inhibition, leptin was found to control the release of stored glucose from adipose tissues.
Other hormones that prevent the stimulation of lipogenesis in adipose cells are the growth hormones (GH). Growth hormones result in loss of fat but stimulate muscle gain. One proposed mechanism is that growth hormone affects insulin signaling, thereby decreasing insulin sensitivity and in turn down-regulating fatty acid synthase expression. Another proposed mechanism suggests that growth hormone may act through the phosphorylation of STAT5A and STAT5B, transcription factors that are part of the Signal Transducer and Activator of Transcription (STAT) family.
There is also evidence suggesting that acylation stimulating protein (ASP) promotes the accumulation of triglycerides in adipose cells. This accumulation occurs through an increase in triglyceride synthesis.
PDH dephosphorylation
Insulin stimulates the activity of pyruvate dehydrogenase phosphatase. The phosphatase removes the phosphate from pyruvate dehydrogenase, activating it and allowing the conversion of pyruvate to acetyl-CoA. This mechanism increases the rate of catalysis of the enzyme and so raises the levels of acetyl-CoA. Increased levels of acetyl-CoA increase the flux not only through the fat synthesis pathway but also through the citric acid cycle.
Acetyl-CoA carboxylase
Insulin affects ACC in a similar way to PDH. It leads to ACC's dephosphorylation via activation of the PP2A phosphatase, whose activity results in the activation of the enzyme. Glucagon has an antagonistic effect, increasing phosphorylation and deactivation, thereby inhibiting ACC and slowing fat synthesis.
Affecting ACC affects the rate of acetyl-CoA conversion to malonyl-CoA. Increased malonyl-CoA level pushes the equilibrium over to increase production of fatty acids through biosynthesis. Long chain fatty acids are negative allosteric regulators of ACC and so when the cell has sufficient long chain fatty acids, they will eventually inhibit ACC activity and stop fatty acid synthesis.
AMP and ATP concentrations of the cell act as a measure of the ATP needs of a cell. When ATP is depleted, there is a rise in 5'AMP. This rise activates AMP-activated protein kinase, which phosphorylates ACC and thereby inhibits fat synthesis. This is a useful way to ensure that glucose is not diverted down a storage pathway in times when energy levels are low.
ACC is also activated by citrate. When there is abundant acetyl-CoA in the cell cytoplasm for fat synthesis, it proceeds at an appropriate rate.
Transcriptional regulation
SREBPs have been found to play a role with the nutritional or hormonal effects on the lipogenic gene expression.
Overexpression of SREBP-1a or SREBP-1c in mouse liver cells results in the build-up of hepatic triglycerides and higher expression levels of lipogenic genes.
Lipogenic gene expression in the liver in response to glucose and insulin is moderated by SREBP-1. The effect of glucose and insulin on the transcription factor can occur through various pathways; there is evidence suggesting that insulin promotes SREBP-1 mRNA expression in adipocytes and hepatocytes. It has also been suggested that the hormone increases transcriptional activation by SREBP-1 through MAP-kinase-dependent phosphorylation, regardless of changes in mRNA levels. Along with insulin, glucose has also been shown to promote SREBP-1 activity and mRNA expression.
References
Lipid metabolism | Lipogenesis | [
"Chemistry"
] | 1,703 | [
"Lipid biochemistry",
"Lipid metabolism",
"Metabolism"
] |
2,945,005 | https://en.wikipedia.org/wiki/Network-to-network%20interface | In telecommunications, a network-to-network interface (NNI) is an interface that specifies signaling and management functions between two networks. An NNI circuit can be used for interconnection of signalling (e.g., SS7), Internet Protocol (IP) (e.g., MPLS) or ATM networks.
In networks based on MPLS or GMPLS, NNI is used for the interconnection of core provider routers (class 4 or higher).
In the case of GMPLS, the type of interconnection can vary across back-to-back, eBGP or mixed NNI connection scenarios, depending on the type of VRF exchange used for interconnection. In the case of back-to-back interconnection, a VRF is needed to create VLANs and subsequently sub-interfaces (VLAN headers and DLCI headers for Ethernet and Frame Relay network packets) on each interface used for the NNI circuit. In the case of eBGP NNI interconnection, IP routers are configured to dynamically exchange VRF records without VLAN creation.
NNI also can be used for interconnection of two VoIP nodes. In cases of mixed or full-mesh scenarios, other NNI types are possible.
NNI interconnection is encapsulation independent, but Ethernet and Frame Relay are commonly used.
See also
User–network interface
Asynchronous Transfer Mode
References
Network management | Network-to-network interface | [
"Technology",
"Engineering"
] | 308 | [
"Computing stubs",
"Computer networks engineering",
"Network management",
"Computer network stubs"
] |
2,945,076 | https://en.wikipedia.org/wiki/Phase-shift%20mask | Phase-shift masks are photomasks that take advantage of the interference generated by phase differences to improve image resolution in photolithography. There exist alternating and attenuated phase shift masks. A phase-shift mask relies on the fact that light passing through a transparent media will undergo a phase change as a function of its optical thickness.
Types and effects
A conventional photomask is a transparent plate with the same thickness everywhere, parts of which are covered with non-transmitting material in order to create a pattern on the semiconductor wafer when illuminated.
In alternating phase-shift masks, certain transmitting regions are made thinner or thicker. That induces a phase-shift in the light traveling through those regions of the mask (see the illustration). When the thickness is suitably chosen, the interference of the phase-shifted light with the light coming from unmodified regions of the mask has the effect of improving the contrast on some parts of the wafer, which may ultimately increase the resolution on the wafer. The ideal case is a phase shift of 180 degrees, which results in all the incident light being scattered. However, even for smaller phase shifts, the amount of scattering is not negligible. It can be shown that only for phase shifts of 37 degrees or less will a phase edge scatter 10% or less of the incident light.
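The figures above are consistent with a simple scalar estimate in which the fraction of incident light scattered by a phase edge varies as sin²(φ/2), where φ is the phase shift. This functional form is an assumption introduced here for illustration rather than a formula given in the text, but it reproduces both the 180-degree case (all light scattered) and the roughly 37-degree/10% case:

```python
# Illustrative scalar estimate (assumed model, not from the source): the fraction
# of incident light scattered by a phase edge is taken as sin^2(phase/2).
# A 180-degree edge gives 1.0 (all light scattered); about 37 degrees gives ~0.10.
import math

def scattered_fraction(phase_deg):
    return math.sin(math.radians(phase_deg) / 2) ** 2

for phase in (180, 90, 37):
    print(phase, round(scattered_fraction(phase), 3))
# -> 180 1.0 / 90 0.5 / 37 0.101
```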
Attenuated phase-shift masks employ a different approach. Certain light-blocking parts of the mask are modified to allow a small amount of light to be transmitted through (typically just a few percent). That light is not strong enough to create a pattern on the wafer, but it can interfere with the light coming from the transparent parts of the mask, with the goal again of improving the contrast on the wafer.
Attenuated phase-shift masks are already extensively used, due to their simpler construction and operation, particularly in combination with optimized illumination for memory patterns. On the other hand, alternating phase-shift masks are more difficult to manufacture and this has slowed their adoption, but their use is becoming more widespread. For example, the alternating phase-shift mask technique is being used by Intel to print gates for their 65 nm and subsequent node transistors.
While alternating phase-shift masks are a stronger form of resolution enhancement than attenuated phase-shift masks, their use has more complex consequences. For example, a 180 degree phase edge or boundary will generally print. This printed edge is usually an unwanted feature and is usually removed by a second exposure.
Application
A benefit of using phase-shift masks in lithography is the reduced sensitivity to variations of feature sizes on the mask itself. This is most commonly used in alternating phase-shift masks, where the linewidth becomes less and less sensitive to the chrome width on the mask, as the chrome width decreases. In fact, even with no chrome the phase edge can still print, as noted above. Some cases of attenuated phase-shifting masks also demonstrate the same benefit (see figure). Attenuated phase-shift masks also improve the image log-slope without requiring a very high exposure dose with a widened dark feature. A higher transmission enhances the effect.
As phase-shift masks are applied to printing smaller and smaller features, it becomes more and more important to model them accurately using rigorous simulation software, such as Panoramic Technology or Sigma-C. It becomes especially important as the mask topography starts to play an important role in scattering the light, and the light itself starts to propagate at larger angles. The performance of phase-shift masks can also be previewed with the use of aerial image microscopes. Defect inspection remains a critical aspect of phase-shift mask technology, as the set of printable mask defects has expanded to include those with phase effects in addition to conventional transmission effects.
Attenuated phase shift masks have been in use in production since the 90 nm node.
References
Further reading
Lithography (microfabrication) | Phase-shift mask | [
"Materials_science"
] | 806 | [
"Nanotechnology",
"Microtechnology",
"Lithography (microfabrication)"
] |
2,945,139 | https://en.wikipedia.org/wiki/Internet%20Authentication%20Service | Internet Authentication Service (IAS) is a component of Windows Server operating systems that provides centralized user authentication, authorization and accounting.
Overview
While Routing and Remote Access Service (RRAS) security is sufficient for small networks, larger companies often need a dedicated infrastructure for authentication. RADIUS is a standard for dedicated authentication servers.
Windows 2000 Server and Windows Server 2003 include the Internet Authentication Service (IAS), an implementation of a RADIUS server. IAS supports authentication for Windows-based clients, as well as for third-party clients that adhere to the RADIUS standard. IAS stores its authentication information in Active Directory, and can be managed with Remote Access Policies. IAS first appeared for Windows NT 4.0 in the Windows NT 4.0 Option Pack and in Microsoft Commercial Internet System (MCIS) 2.0 and 2.5.
While IAS requires the use of an additional server component, it provides a number of advantages over the standard methods of RRAS authentication. These advantages include centralized authentication for users, auditing and accounting features, scalability, and seamless integration with the existing features of RRAS.
In Windows Server 2008, Network Policy Server (NPS) replaces the Internet Authentication Service (IAS). NPS performs all of the functions of IAS in Windows Server 2003 for VPN and 802.1X-based wireless and wired connections and performs health evaluation and the granting of either unlimited or limited access for Network Access Protection clients.
Logging
By default, IAS logs to local files (%systemroot%\LogFiles\IAS\*) though it can be configured to log to SQL as well (or in place of).
When logging to SQL, IAS appears to wrap the data into XML, then calls the stored procedure report_event, passing the XML data as text... the stored procedure can then unwrap the XML and save data as desired by the user.
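As an illustration of what such a logging call might look like from client code, the sketch below passes an XML fragment as text to a stored procedure named report_event (the procedure name comes from the text above; the connection string, the shape of the XML, and the single-parameter call are assumptions for illustration only):

```python
# Illustrative sketch only: pass an XML document as plain text to a SQL Server
# stored procedure named report_event (mentioned above). The DSN, the XML shape,
# and the parameter usage are assumptions, not documented IAS behaviour.
import pyodbc

xml_event = (
    "<Event>"
    "<Timestamp>2007-01-01T12:00:00</Timestamp>"
    "<User-Name>example-user</User-Name>"
    "<Packet-Type>Access-Accept</Packet-Type>"
    "</Event>"
)

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=IASLogging;Trusted_Connection=yes;"
)
cursor = conn.cursor()
cursor.execute("{CALL report_event (?)}", xml_event)  # XML passed as text
conn.commit()
conn.close()
```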
History
The initial version of Internet Authentication Service was included with the Windows NT 4.0 Option Pack.
Windows 2000 Server's implementation added support for more intelligent resolution of user names that are part of a Windows Server domain, support for UTF-8 logging, and improved security. It also added support for EAP authentication for IEEE 802.1X networks. Later on, it added PEAP (with Service Pack 4).
Windows Server 2003's implementation introduces support for logging to a Microsoft SQL Server database, cross-forest authentication (for Active Directory user accounts in other Forests that the IAS server's Forest has a cross-forest trust relationship with, not to be confused with Domain trust which has been a feature in IAS since NT4), support for IEEE 802.1X port-based authentication, and other features.
All versions of IAS support multi-domain setups; only Windows Server 2003 supports cross-forest authentication. While the NT4 version included a RADIUS proxy, Windows 2000 did not have such a feature. Windows Server 2003 reintroduced the feature and can intelligently proxy requests, load-balance, and tolerate faults from faulty or unreachable back-end servers.
References
External links
Deploying Internet Authentication Service (IAS) in Windows 2003
Internet Authentication Service in the Microsoft Windows 2000 Resource Kit
Article describing how to log IAS (RADIUS) + DHCP to SQL
IAS Log parsing utility. Allows to visualize ias log files
Microsoft Windows security technology
Windows Server
Microsoft server technology
Computer access control
Internet security | Internet Authentication Service | [
"Engineering"
] | 697 | [
"Cybersecurity engineering",
"Computer access control"
] |
2,945,164 | https://en.wikipedia.org/wiki/Ecofascism | Ecofascism (sometimes spelled eco-fascism) is a term used to describe individuals and groups which combine environmentalism with fascism.
Philosopher André Gorz characterized eco-fascism as hypothetical forms of totalitarianism based on an ecological orientation of politics. Similar definitions have been used by others in older academic literature in accusations of ecofascism of "environmental fascism". However, since the 2010s, a number of individuals and groups have emerged that either self-identify as "ecofascist" or have been labelled as "ecofascist" by academic or journalistic sources. These individuals and groups synthesise radical far-right politics with environmentalism, and will typically argue that overpopulation is the primary threat to the environment and that the only solution is a complete halt to immigration or, at their most extreme, genocide against non-White groups and ethnicities. Many far-right political parties have added green politics to their platforms. Through the 2010s ecofascism has seen increasing support.
Definition
In 2005, environmental historian Michael E. Zimmerman defined "ecofascism" as "a totalitarian government that requires individuals to sacrifice their interests to the well-being of the 'land', understood as the splendid web of life, or the organic whole of nature, including peoples and their states". This was supported by philosopher Patrick Hassan’s work analysing historical accusations of ecofascism in academic literature. Zimmerman argued that while no ecofascist government has existed so far, "important aspects of it can be found in German National Socialism, one of whose central slogans was "Blood and Soil". Other political agendas, pursued instead of environmental protection and the prevention of climate change, are nationalist approaches to climate such as national economic environmentalism, the securitization of climate change, and ecobordering.
Ecofascists often believe there is a symbiotic relationship between a nation-group and its homeland. They often blame the global south for ecological problems; their proposed solutions often entail extreme population control measures based on racial categorisations, and advocate the accelerated collapse of current society, to be replaced by fascist societies. This latter belief is often accompanied by vocal support for terrorist actions.
Vice has defined ecofascism as an ideology "which blames the demise of the environment on overpopulation, immigration, and over-industrialization, problems that followers think could be partly remedied through the mass murder of refugees in Western countries." Environmentalist author Naomi Klein has suggested that ecofascists' primary objectives are to close borders to immigrants and, on the more extreme end, to embrace the idea of climate change as a divinely-ordained signal to begin a mass purge of sections of the human race. Ecofascism is "environmentalism through genocide", opined Klein. Political researcher Alex Amend defined ecofascist belief as "The devaluing of human life—particularly of populations seen as inferior—in order to protect the environment viewed as essential to White identity."
Terrorism researcher Kristy Campion defined ecofascism as "a reactionary and revolutionary ideology that champions the regeneration of an imagined community through a return to a romanticised, ethnopluralist vision of the natural order."
The European Commission describes ecofascism as the "weaponization of climate change by far right populist political parties and white supremacist groups". Tactics of this weaponization include the use of language and equating actors in population and migration discourses to components of the climate crisis. As said in a policy brief for The International Center for Counter-Terrorism, this "linguistic violence" entails that "the invasion of non-native species that threaten the environment becomes synonymous with the invasion of immigrants, the protection of the environment with the protection of borders, trash with people, and environmental cleansing with ethnic cleansing."
Helen Cawood and Xany Jansen Van Vuuren have criticised previous attempts to define ecofascism for focusing too heavily on environmental and ecological conservationism in historical fascist movements, and for producing definitions that are too broad and encompass many ontologically different ideologies. In their criticism they summarise the current definition of ecofascism as used in the academic literature as "a movement that uses environmental and ecological conservationist talking points to push an ideology of ethnic or racial separatism". This is supported by Blair Taylor's statement that ecofascism refers to "groups and ideologies that offer authoritarian, hierarchical, and racist analyses and solutions to environmental problems". Similarly, extremism researchers Brian Hughes, Dave Jones, and Amarnath Amarasingam argue that ecofascism is less a coherent ideology and more a cultural expression of mystical, anti-humanist romanticism. This is further supported by Maria Darwish's research into the Nordic Resistance Movement (NRM), which found that while there is concern for environmental issues, they are "a concern for Neo-Nazis only in so far as it supports and popularizes the backstage mission of the NRM", that is, the implementation of a fascist regime, and by Jacob Blumenfeld's statement that "ecofascism names a specific far-right ideology that rationalizes white supremacist violence by invoking imminent ecological collapse and scarce natural resources".
Borrowing from the "watermelon" analogy of eco-socialism, Berggruen Institute scholar Nils Gilman has coined the term "avocado politics" for eco-fascism, being "green on the outside but brown(shirt) at the core".
In his book "", the political scientist Carlos Taibo characterises the phenomenon as a response to crises brought about by climate change. The ecofascist solution is to "[P]reserve increasingly scarce resources for a select minority. And to marginalize – in the mildest version – and exterminate – in the harshest – what are seen as surplus populations, on a planet that has visibly exceeded its limits." Crucially, Taibo argues that far from being circumscribed to the margins of right-wing extremism, which traditionally has mostly been associated with Climate change denial, ecofascist notions are likely to be pursued by "political forces we usually label as liberal and social-democratic", emerging within major centers of power in the west and among elites in the developing world. From this perspective, the antecedents of ecofascism, extending beyond ecological currents in fascist movements of the past, would be ideologies typical of Western colonialism, returning in modernised forms.
Ideological origins
Madison Grant
Sometimes dubbed the "founding father" of ecofascism, Madison Grant was a pioneer of conservationism in America in the late 19th and early 20th century. Grant is credited as a founder of modern wildlife management. Grant built the Bronx River Parkway, was a co-founder of the American Bison Society, and helped create Glacier National Park, Olympic National Park, Everglades National Park and Denali National Park. As president of the New York Zoological Society, he founded the Bronx Zoo in 1899.
In addition to his conservationist work, Grant was a racist. In 1906, Grant supported the placement of Ota Benga, a member of the Mbuti people who was kidnapped, removed from his home in the Congo, and put on display in the Bronx Zoo as an exhibit in the Monkey House. In 1916, Grant wrote The Passing of the Great Race, a work of pseudoscientific literature which claimed to give an account of the anthropological history of Europe. The book divides Europeans into three races; Alpines, Mediterraneans and Nordics, and it also claims that the first two races are inferior to the superior Nordic race, which is the only race which is fit to rule the earth. Adolf Hitler would later describe Grant's book as "his bible" and Grant's "Nordic theory" became the bedrock of Nazi racial theories. Additionally, Grant was a eugenicist: He cofounded and was the director of the American Eugenics Society and he also advocated the culling of the unfit from the human population. Grant concocted a 100-year plan to perfect the human race, a plan in which one ethnic group after another would be killed off until racial purity would be obtained. Grant campaigned for the passage of the Emergency Quota Act of 1921 and he also campaigned for the passage of the Immigration Act of 1924, which drastically reduced the number of immigrants from eastern Europe and Asia who were allowed to enter the United States.
In the modern era, Grant's ideas have been cited by advocates of far-right politics such as Richard Spencer and Anders Breivik.
Nazism
The authors Janet Biehl and Peter Staudenmaier suggest that the synthesis of fascism and environmentalism began with Nazism, stating that 19th and 20th century Germany was an early center of ecofascist thought, finding its antecedents in many prominent natural scientists and environmentalists, including Ernst Moritz Arndt, Wilhelm Heinrich Riehl, and Ernst Haeckel. The works and ideas of such individuals were later established as policies in the Nazi regime. This is supported by other researchers who identify the Völkisch movement as an ideological originator of later ecofascism. In Biehl and Staudenmaier's book Ecofascism: Lessons from the German Experience, they note the Nazi Party's interest in ecology, and suggest their interest was "linked with traditional agrarian romanticism and hostility to urban civilization". Zimmerman likewise points to the works of the conservationist and Nazi Walther Schoenichen as having pertinence to later ecofascism and similarities to developments in deep ecological understanding. During the Nazi rise to power, there was strong support for the Nazis among German environmentalists and conservationists. Richard Walther Darré, a leading Nazi ideologist and Reich Minister of Food and Agriculture who invented the term "Blood and Soil", developed a concept of the nation having a mystic connection with their homeland, and as such, the nation was duty-bound to take care of the land. This was supported by other Nazi theorists such as Alfred Rosenberg, who wrote of how society's move from agricultural systems to industrialised systems broke their connection to nature and contributed to the death of the . Similar sentiments are found in speeches from Fascist Italy’s Minister of Agriculture Giuseppe Tassinari. Because of this, modern ecofascists cite the Nazi Party as an origin point of ecofascism. Beyond Darré, Rudolf Hess and Fritz Todt are viewed as representatives of environmentalism within the Nazi party. Roger Griffin has also pointed to the glorification of wildlife in Nazi art and ruralism in the novels of the fascist sympathizers Knut Hamsun and Henry Williamson as examples.
After the outlawing of the neo-nazi Socialist Reich Party, one of its members August Haußleiter moved towards organising within the environmental and anti-nuclear movements, going on to become a founding member of the German Green Party. When green activists later uncovered his past activities in the neo-nazi movement, Haußleiter was forced to step down as the party's chairman, although he continued to hold a central role in the party newspaper. As efforts to expel nationalist elements within the party continued, a conservative faction split off and founded the Ecological Democratic Party, which became noted for persistent holocaust denial, rejection of social justice and opposition to immigration.
Savitri Devi
The French-born Greek fascist Savitri Devi (born Maximiani Julia Portas) was a prominent proponent of Esoteric Nazism and deep ecology. A fanatical supporter of Hitler and the Nazi Party from the 1930s onwards, she also supported animal rights activism and was a vegetarian from a young age. In her works, she espoused ecologist views, such as the Impeachment of Man (1959), in which she espoused her views on animal rights and nature. In accordance with her ecologist views, human beings do not stand above the animals; instead, humans are a part of the ecosystem and as a result, they should respect all forms of life, including animals and the whole of nature. Because of her dual devotion to Nazism and deep ecology, she is considered an influential figure in ecofascist circles.
Malthusianism
Malthusian ideas of overpopulation have been adopted by ecofascists, using Malthusian rationale in anti-immigration arguments and seeking to resolve the perceived global issue by enforcing population control measures on the global south and racial minorities in white majority countries. Such Malthusian ideas are often paired with Social Darwinist and eugenicist views.
Ted Kaczynski, the Unabomber
Ted Kaczynski, better known as "The Unabomber", is cited as a figure who was highly influential in the development of ecofascist thought, and features prominently in contemporary ecofascist propaganda. Between 1978 and 1995 Kaczynski instigated a terrorist bombing campaign aimed at inciting a revolution against modern industrial society, in the name of returning humanity to a primitive state he suggested offered humanity more freedom while protecting the environment. In 1995 Kaczynski offered to end his bombing campaign if The Washington Post or The New York Times would publish his 35,000-word Unabomber manifesto. Both newspapers agreed to those terms. The manifesto railed not only against modern industrial society but also against "modern leftists", whom Kaczynski defined as "mainly socialists, collectivists, 'politically correct' types, feminists, gay and disability activists, animal rights activists and the like".
Because of Kaczynski's intelligence and his ability to write in a high-level academic tone, his manifesto was given serious consideration upon its release and became highly influential, even amongst those who strongly disagreed with his use of violence. Kaczynski's staunchly radical pro-green, anti-left work was quickly absorbed into ecofascist thought.
Kaczynski also criticized right-wing activists who complained about the erosion of traditional social mores because they supported technological and economic progress, a view which he opposed. He stated that technology erodes traditional social mores that conservatives and right wingers want to protect, and he referred to conservatives as fools.
Although Kaczynski and his manifesto have been embraced by ecofascists, he rejected "fascism", including specifically "the 'ecofascists'", describing 'ecofascism' itself as 'an aberrant branch of leftism'.
In his manifesto, Kaczynski wrote that he considered fascism a "kook ideology" and he also wrote that he considered Nazism "evil". Kaczynski never tried to align himself with the far-right at any point before or after his arrest.
In 2017, Netflix released a dramatisation of Kaczynski's life, titled Manhunt: Unabomber. Once again, the popularity of the show thrust Kaczynski and his manifesto into the public's mind and it also raised the profile of ecofascism.
Garrett Hardin, Pentti Linkola, and "Lifeboat Ethics"
Two figures influential in ecofascism are Garrett Hardin and Pentti Linkola, both of whom were proponents of what they referred to as "Lifeboat Ethics". Hardin was a professor of Human Ecology at the University of California who has often been described as a white nationalist. His work focused on the ethics of overpopulation and population control, and suggested methods such as "birth control, abortion, and sterilization". Beyond these medical suggestions, he also stood against immigration and for the end of foreign aid.
Linkola was a Finnish ecologist and radical Malthusian, accused of being an active ecofascist, who advocated ending democracy and replacing it with dictatorships that would use totalitarian and even genocidal tactics to end climate change. Both men used versions of a lifeboat analogy to illustrate their viewpoint.
Renaud Camus
Renaud Camus' conspiracy theory, the Great Replacement, has been influential on ecofascism, being referenced explicitly in multiple manifestos and had its ideas relayed in others. In the conspiracy theory, the "native" white populations of western countries are being replaced by non-white populations as a directed political effort.
Association with violence
Ecofascist violence has occurred since the 21st century, with academics and researchers warning that as ecological crises worsen and remain unaddressed, support for ecofascism and violence in the name of ecofascism will increase.
In December 2020, the Swedish Defence Research Agency released a report on ecofascism. The paper argued that ecofascism is intimately tied to the ideology of accelerationism, and that ecofascists nearly exclusively choose terror tactics over the political approach. Further, the SDRA argues that not all ecofascist mass shooters have been recognized as such: Pekka-Eric Auvinen, who shot eight people in Finland in 2007 before killing himself, adhered to the ideology according to his manifesto titled "The Natural Selector's Manifesto". He advocated "total war against humanity" due to the threat humanity posed to other species. He wrote that death and killing are not a tragedy, as they constantly happen in nature between all species. Auvinen also wrote that modern society hinders "natural justice" and that all inferior "subhumans" should be killed and only the elite of humanity be spared. In one of his YouTube videos Auvinen paid tribute to the prominent deep ecologist Pentti Linkola.
2010s
James Jay Lee, the eco-terrorist who took several hostages at the Discovery Communications headquarters on 1 September 2010, was described as an ecofascist by Mark Potok of the Southern Poverty Law Center.
Anders Breivik committed the 2011 Norway attacks on 22 July 2011, in which he killed eight people by detonating a van bomb at Regjeringskvartalet in Oslo, and then killed 69 participants of a Workers' Youth League (AUF) summer camp in a mass shooting on the island of Utøya. While dismissive of climate change, Breivik was concerned in his manifesto with the carrying capacity of the planet, taking inspiration from Kaczynski and Grant’s The Passing of the Great Race. Breivik’s solution to this perceived problem was to cap the global population at 2.5 billion people, with the reduction in the global population being forced upon the global south. Through his actions he sought to inspire other terrorist attacks, and he was an inspiration for later ecofascist terrorists.
William H. Stoetzer, a member of the Atomwaffen Division, an organisation responsible for at least eight murders, was active in the Earth Liberation Front as late as 2008 and joined Atomwaffen in 2016.
Brenton Tarrant, the Australian-born perpetrator of the Christchurch mosque shootings in New Zealand described himself as an ecofascist, ethno-nationalist, and racist in his manifesto The Great Replacement, named after a far-right conspiracy theory originating in France. In the manifesto Tarrant specifically mentions Breivik as an ideological and operational influence. Researchers point to Tarrant's terrorist attack as the moment when discussion of ecofascism moved from academic and specialist circles into the mainstream. Jordan Weissmann, writing for Slate, describes the perpetrator's version of ecofascism as "an established, if somewhat obscure, brand of neo-Nazi" and quotes Sarah Manavis of New Statesman as saying, "[Eco-fascists] believe that living in the original regions a race is meant to have originated in and shunning multiculturalism is the only way to save the planet they prioritise above all else". Similarly, Luke Darby clarifies it as: "eco-fascism is not the fringe hippie movement usually associated with ecoterrorism. It's a belief that the only way to deal with climate change is through eugenics and the brutal suppression of migrants."
Patrick Crusius, the perpetrator of the 2019 El Paso shooting wrote a similar manifesto, professing support for Tarrant. Posted to the online message board 8chan, it blames immigration to the United States for environmental destruction, saying that American lifestyles were "destroying the environment", invoking an ecological burden to be borne by future generations, and concluding that the solution was to "decrease the number of people in America using resources". Crusius outlined how he took inspiration from Tarrant and Breivik in his manifesto. Crusius and Tarrant also inspired Philip Manshaus who attacked a mosque in Norway in 2019.
2020s
The Swedish self-identified ecofascist Green Brigade is an eco-terrorist group, linked to The Base, that is responsible for multiple mass murder plots. The Green Brigade has been responsible for arson attacks against targets deemed to be enemies of nature, such as an attack on a mink farm that caused multi-million-dollar damage. Two members were arrested by Swedish police for allegedly planning to assassinate judges and carry out bombings.
In June 2021, the Telegram-based Terrorgram collective published an online guide with incitements for attacks on infrastructure and violence against minorities, police, public figures, journalists, and other perceived enemies. In December 2021, they published a second document containing ideological sections on accelerationism, white supremacy, and ecofascism.
During 2021, several neo-Nazi groups and individuals who espoused ecofascist rhetoric were arrested and charged by French authorities for planning terrorist attacks. These include the group , and two "accelerationists" in Occitania.
In an interview with a blog a leader of the eco-extremist group Individualists Tending to the Wild (ITS) claimed to have taken organisational influence from the fascist accelerationist terrorist group Order of Nine Angles. The Foundation for Defense of Democracies and European Union Counter-Terrorism Coordinator characterized ITS as ecofascist.
Payton S. Gendron, the instigator of the 2022 Buffalo shooting, also wrote a manifesto self-describing as "an ethno-nationalist eco-fascist national socialist" within it and also professing support for far-right shooters from Tarrant and Dylann Roof to Breivik and Robert Bowers. Later in 2022, the Terrorgram collective released another publication, with analysts believing it would likely inspire further "Buffalo shootings".
In Finland on 15 March 2024, the anniversary of Christchurch mosque shooting, a Finnish army non-commissioned officer was arrested for allegedly planning a mass shooting in a university in Vaasa that day. As her motivation she said the world needed "a mass culling" to put an end to "selfish individualism", "human degeneration", global warming and conspicuous consumption. The Finnish police described her as ecofascist and that she had read books by Nietzsche, Linkola and Kaczynski. Additionally she had praised Pekka-Eric Auvinen in internet conversations and had visited Jokela school where he perpetrated the mass shooting.
On 12 August 2024 at least five people were wounded in a mass stabbing attack in Eskisehir, Turkey. The perpetrator had called for "Total Human Death" and voiced support for Ted Kaczynski and Accelerationism on the Internet.
Criticism
The deep ecologic activist and "left biocentrism" advocate David Orton stated in 2000 that the term is pejorative in nature and it has "social ecology roots, against the deep ecology movement and its supporters plus, more generally, the environmental movement. Thus, 'ecofascist' and 'ecofascism', are used not to enlighten but to smear." Orton argued that "it is a strange term/concept to really have any conceptual validity" as there has not "yet been a country that has had an "eco-fascist" government or, to my knowledge, a political organization which has declared itself publicly as organized on an ecofascist basis."
Accusations of ecofascism have often been made but are usually strenuously denied. Left wing critiques view ecofascism as an assault on human rights, as in social ecologist Murray Bookchin's use of the term.
Deep ecology
Deep ecology is an environmental philosophy that promotes the inherent worth of all living beings regardless of their instrumental utility to human needs. It has long been linked to fascist ideologies, both by critics and fascist proponents. In certain texts, the Norwegian philosopher Arne Næss, a leading voice of the "deep ecology" movement, opposes environmentalism and humanism, even proclaiming, in imitation of a famous phrase of the Marquis de Sade, ("Ecologists, another effort to become anti-humanists!"). Luc Ferry, in his anti-environmentalist book published in 1992, particularly incriminated deep ecology as being an anti-humanist ideology bordering on Nazism. Modern ecofascism has been described as a deep ecological philosophy combined with antihumanism and an accelerationist stance.
Bookchin's critique of deep ecology
Murray Bookchin criticizes the political position of deep ecologists such as David Foreman:
Sakai on "natural purity"
Such observations among the left are not exclusive to Bookchin. In his review of Anna Bramwell's biography of Richard Walther Darré, political writer J. Sakai and author of Settlers: The Mythology of the White Proletariat, observes the fascist ideological undertones of natural purity. Prior to the Russian Revolution, the tsarist intelligentsia was divided on the one hand between liberal "utilitarian naturalists", who were "taken with the idea of creating a paradise on earth through scientific mastery of nature" and influenced by nihilism as well as Russian zoologists such as Anatoli Petrovich Bogdanov; and, on the other, "cultural-aesthetic" conservationists such as Ivan Parfenevich Borodin, who were influenced in turn by German Romantic and idealist concepts such as and .
Narrowness of the label
Political scientist Balša Lubarda has criticised the use of the term "ecofascism" as not sufficiently covering and describing the wider network of ideologies and systems that feed into ecofascist action, suggesting the term "far-right ecologism" (FRE) instead. Lubarda is supported by researcher Bernhard Forchtner who emphasises ecofascism's existence as a fringe ideology that has had little impact on the wider far-right's interaction with environmentalism.
Disavowment
As ecofascism has become more prevalent various environmental groups and organisations have publicly disavowed the ideology and those who subscribe to it.
Far-right green movements
In recent years ecofascist groups have proliferated globally, in line with the spread of ecofascist rhetoric.
Australia
Australia has seen an increasing prominence of ecofascism among its far-right groups in recent years.
Austria
The DGÖ had been founded in 1982 by the former NDP official Alfred Bayer to use the popularity of the green movement at the time for the purposes of the NDP. The party managed to win a number of municipal seats in the mid-1980s, but in 1988 the Constitutional Court banned the party on grounds of neo-Nazism, alongside a parallel ban on the NDP.
Finland
The neo-fascist Blue-and-Black Movement includes ecofascist policy goals, stating that they aim to protect the nature and biodiversity of Finland, and to live in harmony with nature, ending ritual slaughter, fur-farming and animal testing.
France
Nouvelle Droite movement
The European Nouvelle Droite movement, developed by Alain de Benoist and other individuals involved with the GRECE think tank, has also combined various left-wing ideas, including green politics, with right-wing ideas such as European ethnonationalism. Various other far-right figures have taken the lead from de Benoist, providing an appeal to nature in their politics, including Guillaume Faye, Renaud Camus, and Hervé Juvin.
Génération identitaire
In 2020, following articles by a self-described ecofascist, Clément Martin, a spokesperson for Génération Identitaire, advocated for ethnically homogeneous zones to be violently defended in order to protect the environment.
National Rally
Marine Le Pen, president of the far-right National Rally (Rassemblement National, or RN) group in the French National Assembly, has shown an ecofascist approach to the issue of climate change and has incorporated environmental issues into her platform, although her climate policies often reflect a nationalist and protectionist stance. Le Pen has stated that concern for the climate is inherently nationalist, and that immigrants "do not care about the environment". Jordan Bardella, president of the National Rally, embraces similar beliefs and has stated "Borders are the environment’s greatest ally; it is through them we will save the planet."
The solutions for climate change proposed by Le Pen also align with right-wing conservative economics. She has dismissed liberal free-trade economics, arguing that it "kills the planet" and creates "suffering for animals". Rather than supporting mass production for international commerce, she designed a localist project of "economic patriotism" to promote French products.
Climate change was not in the RN's party platform until around 2019, when the issue began to be capitalized on electorally by leftist and centre parties alike. In response to this rising awareness of environmental issues, Le Pen designed an energy plan focused on fossil fuels, opposing wind and solar energy and emphasizing the expansion of nuclear power, with a party policy that 70% of France's electricity was to come from nuclear energy by 2050. Additionally, Le Pen supports maintaining oil heating systems and reducing taxes on fossil fuels, which contradicts climate experts' recommendations and could increase France's dependence on fossil fuels.
Germany
Staudenmaier points out that an ecofascist strand has been present in the German far-right throughout the post-war period, albeit as a minor, peripheral one; others have pointed to a long history of right-wing individuals and groups within the German environmental and green movements.
Die Heimat
Die Heimat (The Homeland), previously known as the National Democratic Party of Germany (NPD), a German Nationalist far-right party, has long sought to utilise the green movement. This is one of many strategies the party has used to try to gain supporters.
The German far-right has published a magazine that masquerades as a garden and nature publication but intertwines gardening tips with extremist political ideology. This is known as a "camouflage publication", through which the NPD has spread its mission and ideology via a discreet source and made its way into homes it would otherwise not reach. Right-wing environmentalists are settling in the northern regions of rural Germany and forming nationalistic and authoritarian communities which produce honey, fresh produce, baked goods, and other farm goods for profit. Their ideology is centred on "blood and soil" ruralism, in which they raise produce and animals humanely for profit and sustenance. Through this operation, and the backing of many others, it is reported that the NPD is trying to wrest the green movement, which has been dominated by the left since the 1980s, back from the left through these avenues.
It is difficult to know whether, when buying local produce or farm-fresh eggs at a farmer's stand, one is supporting a right-wing agenda. Various efforts are being made to halt or slow the infiltration of right-wing ecologists into the community of organic farmers, such as brochures about their communities and common practices. However, as the organic cultivation organisation Biopark demonstrates with its vetting process, it is difficult to exclude people from such communities on the basis of their ideologies: Biopark specifies that it vets on cultivation practices, not opinions or doctrines, especially when these are not explicitly stated.
AfD
Prominent Alternative for Germany (AfD) politician Björn Höcke has stated his desire to "reclaim" natural conservation from the left. Höcke believes that nature conservation is not correctly pursued under climate-justice politics, and is quoted as stating that the AfD has "to take the issue of nature conservation back from the Greens". However, Höcke recognizes that a socially conservative position that strongly values environmental protection is not the majority position within the AfD. Regardless, Höcke sees the work of a far-right ecological magazine as laying theoretical groundwork for the AfD to later draw from.
Collegium Humanum
Other groups
The term is also used to a limited extent within the .
The neo-Artamans have been identified as ecofascists in their attempts to revive the agrarian and völkisch traditions of the Artaman League in communes that they have built up since the 1990s.
Hungary
Following the fall of Communism in Hungary at the end of the 1980s, one of the new political parties that emerged in the country was the Green Party of Hungary. Initially having a moderate centre-right green outlook, after 1993 the party adopted a radical anti-liberal, anti-communist, anti-Semitic and pro-fascist stance, paired with the creation of a paramilitary wing. This ideological swing resulted in many members breaking off from the party to form new green parties, first with Green Alternative in 1993 and secondly with Hungarian Social Green Party in 1995. Each green party remained on the political fringe of Hungarian politics and petered out over time. It was not until the formation of LMP – Hungary's Green Party in the 2010s that green politics in Hungary consolidated around a single green party.
The far-right Hungarian political party Our Homeland Movement has adopted some elements of environmentalism, and commonly refers to itself as the only true green party; for example, the party has called on Hungarians to show patriotism by supporting the removal of pollution from the Tisza River while simultaneously placing the blame for the pollution on Romania and Ukraine. Similarly, elements of the far-right Sixty-Four Counties Youth Movement subscribe to the "Eco-Nationalist" label, with one member stating "no real nationalist is a climate denialist".
India
Narendra Modi's leadership of India with the Bharatiya Janata Party seeks to install a complete system of Hindutva, with repression of racial and religious minorities and caste discrimination. Since 2018 Modi has been increasingly viewed as an environmental champion and used rhetoric about protecting the environment to greenwash his image and the image of his party.
International
Greenline Front is an international network of ecofascists which originated in Eastern Europe, with chapters in a variety of countries such as Argentina, Belarus, Chile, Germany, Italy, Poland, Russia, Serbia, Spain and Switzerland.
Serbia
The Leviathan Movement claims to promote ecology and protect animals from cruelty by, among other things, saving them from abusers. Leviathan has been reported as an ideologically neo-fascist and neo-nazi group. They used to share an office with the Serbian Right, a far-right political party, and Leviathan's leader, Pavle Bihali, is seen in pictures on his social media accounts posing with neo-Nazis.
Sweden
The Nordic Resistance Movement, a pan-Nordic neo-Nazi movement in the Nordic countries and a political party in Sweden, has been continually described as ecofascist, and has declared itself the "new green party" of the Nordics.
Switzerland
In Switzerland, the initiators of the Ecopop initiative were accused of eco-fascism by FDFA State Secretary Rossier at a Christian Democratic People's Party of Switzerland event on 11 January 2013. However, after they threatened to sue, Rossier apologized for the allegation.
United Kingdom
There is also a historic tradition linking the far-right and environmentalism in the UK. Throughout its history, the far-right British National Party has flirted on and off with environmentalism. During the 1970s the party's first leader, John Bean, expressed support for the emerging environmentalist movement in the pages of the party's newspaper and identified the primary cause of pollution as overpopulation, arguing that immigration into Britain must therefore be halted. During the 2000s the BNP sought to position itself as the only "true" green party in the United Kingdom, dedicating a significant portion of its manifestos to green issues. During an appearance on BBC One's Question Time in October 2009, then-leader Nick Griffin proclaimed himself and the BNP to be genuine environmentalists.
The Guardian criticised Griffin's claims that he and the BNP were truly environmentalists at heart, suggesting they were merely a smokescreen for anti-immigrant rhetoric, and pointed to previous statements by Griffin in which he suggested that climate change was a hoax. These suspicions seemed to be confirmed when, in December 2009, the BNP released a 40-page document denying that global warming is a "man-made" phenomenon. The party reiterated this stance in 2011, as well as claiming that wind farms were causing the deaths of "thousands of Scottish pensioners from hypothermia".
John Bean, a far-right activist and politician, the first leader of the BNP and latterly a leader within the National Front, wrote regularly in the National Front’s magazine about the problems of pollution and environmental degradation, tying them to ideas of overpopulation and immigration.
In 2024, Searchlight reported that the fascist groups Patriotic Alternative and the Homeland Party had also started to claim that the countryside was being destroyed by immigration.
In Scotland, former UKIP candidate and activist Alistair McConnachie, who has questioned the Holocaust, founded the Independent Green Voice in 2003, and multiple ex-BNP members and activists have stood as candidates for the party.
United States
During the 1990s a highly militant environmentalist subculture called Hardline emerged from the straight edge hardcore punk music scene and established itself in a number of cities across the US. Adherents of the Hardline lifestyle combined the straight edge prohibition on alcohol, drugs, and tobacco with militant veganism and advocacy for animal rights. Hardline touted a biocentric worldview that claimed to value all life, and therefore opposed abortion, contraceptives, and sex for any purpose other than procreation. Along the same lines, Hardline opposed homosexuality as "unnatural" and "deviant". Hardline groups were highly militant; in 1999 Salt Lake City classified Hardliners as a criminal gang and suggested they were behind dozens of assaults in the metro area. That same year CBS News reported that Hardliners were behind the firebombing of fast food outlets and clothing stores selling leather items, and attributed 30 attacks to Hardliners. The Hardline subculture dissolved after the 1990s.
White supremacist John Tanton and the network of organisations he created, dubbed the Tanton network, have been described as ecofascist. Tanton and his organisations spent decades linking immigration to environmental concerns.
Political researchers Blair Taylor and Eszter Szenes have identified multiple threads in alt-right discourse and ideology that align with far-right ecologism and ecofascism.
The Green Party of the United States has also long been the target of various far-right figures, such as anti-Semitic conspiracy theorists, who have tried to shift the party drastically to the far-right.
In 1994, so-called "Takings" bills were introduced in the U.S. Congress to financially compensate wetlands owners who were unable to develop their land for profit due to environmental protection policies. These bills were met with resistance by "anthropocentric market liberals", who oppose any sort of market regulation or intervention of the state into private ownership. Hence, these "takings" bills were deemed ecofascist and proponents of the bills were "disparaged" and viewed as "'nature-loving' romantics for having reactionary tendencies that may be consistent with fascism". The journal Social Theory and Practice uses this instance to exemplify how growing public frustration with complex federal environmental regulations leads to rapidly polarizing opinions on environmental regulations in the United States: one is either a citizen who supports people, private property, and the U.S. Constitution, or a radical environmentalist who supports nature, communal ownership, and ecofascism.
Pejorative
Detractors on the political right tend to use the term "ecofascism" as a hyperbolic general pejorative against all environmental activists, including more mainstream groups such as Greenpeace, prominent activists such as Greta Thunberg, and government agencies tasked with protecting environmental resources. Such detractors include Rush Limbaugh and other conservative and wise use movement commentators. The term as a pejorative has been used in multiple countries.
See also
Adolf Hitler and vegetarianism
Animal welfare in Nazi Germany
ATWA
Conspirituality
Definitions of fascism
Ecoauthoritarianism
Ecocapitalism
Eco-nationalism
Eco-socialism
Eco-terrorism
Environmental movement
Environmental racism
Green Imperialism
Hardline (subculture)
Neo-Luddism
Pastel QAnon
Radical environmentalism
Red-green-brown alliance
Notes
References
Bibliography
Further reading
External links
Fascism
Green politics
Political pejoratives
Far-right politics
Deep ecology
Environmental movements
Syncretic political movements
Environmentalism
Eco-terrorism
Political ecology
Totalitarian ideologies | Ecofascism | [
"Biology",
"Environmental_science"
] | 8,460 | [
"Environmental ethics",
"Political ecology",
"Deep ecology",
"Biophilia hypothesis",
"Biological hypotheses",
"Environmental social science"
] |
2,945,172 | https://en.wikipedia.org/wiki/Age%20adjustment | In epidemiology and demography, age adjustment, also called age standardization, is a technique used to allow statistical populations to be compared when the age profiles of the populations are quite different.
Example
For example, in 2004/5, two Australian health surveys investigated rates of long-term circulatory system health problems (e.g. heart disease) in the general Australian population, and specifically in the Indigenous Australian population. In each age category over age 24, Indigenous Australians had markedly higher rates of circulatory disease than the general population: 5% vs 2% in age group 25–34, 12% vs 4% in age group 35–44, 22% vs 14% in age group 45–54, and 42% vs 33% in age group 55+.
However, overall, these surveys estimated that 12% of all Indigenous Australians had long-term circulatory problems, compared to 18% of the overall Australian population. This apparent reversal occurs because the Indigenous population has a much younger age profile than the general population, so the crude Indigenous figure is dominated by young age groups in which circulatory problems are rare; age adjustment removes the effect of this difference in age structure.
Standard populations
In order to adjust for age, a standard population must be selected. Some agencies which produce health statistics also publish standard populations for age adjustment. Standard populations have been developed for specific countries and regions. World standard populations have also been developed to compare data from different countries, including the Segi World Standard and the World Health Organization (WHO) standard. These agencies must balance between setting weights which may be used over a long period of time, which maximizes comparability of published statistics, and revising weights to be close to the current age distribution. When comparing data from a specific country or region, using a standard population from that country or region means that the age-adjusted rates are similar to the true population rates. On the other hand, standardizing data using a widely used standard such as the WHO standard population allows for easier comparison with published statistics.
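As a minimal sketch of direct age standardization, the Python snippet below weights the age-specific rates quoted in the example above by a single shared standard population; the standard-population weights are invented for illustration and are not taken from any published standard.

```python
# Minimal sketch of direct age standardization.
# Age-specific rates come from the example above; the standard weights are hypothetical.

standard_population = {      # hypothetical standard weights (persons per age band)
    "25-34": 30_000,
    "35-44": 25_000,
    "45-54": 25_000,
    "55+":   20_000,
}

def age_adjusted_rate(age_specific_rates, standard=standard_population):
    """Directly standardized rate: weighted average of age-specific rates."""
    total_weight = sum(standard.values())
    return sum(age_specific_rates[band] * weight
               for band, weight in standard.items()) / total_weight

indigenous = {"25-34": 0.05, "35-44": 0.12, "45-54": 0.22, "55+": 0.42}  # higher in every band
general    = {"25-34": 0.02, "35-44": 0.04, "45-54": 0.14, "55+": 0.33}

print(age_adjusted_rate(indigenous))  # ~0.18 with these hypothetical weights
print(age_adjusted_rate(general))     # ~0.12 with these hypothetical weights
```

With a common set of weights the adjusted rates follow the age-specific pattern (higher for the first population), reversing the misleading crude comparison.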
See also
References
Further reading
Epidemiology
Design of experiments
Demography
Ageism | Age adjustment | [
"Environmental_science"
] | 384 | [
"Epidemiology",
"Demography",
"Environmental social science"
] |
2,945,180 | https://en.wikipedia.org/wiki/Superplasticizer | Superplasticizers (SPs), also known as high range water reducers, are additives used for making high-strength concrete or to place self-compacting concrete. Plasticizers are chemical compounds enabling the production of concrete with approximately 15% less water content. Superplasticizers allow reduction in water content by 30% or more. These additives are employed at the level of a few weight percent. Plasticizers and superplasticizers also retard the setting and hardening of concrete.
According to their dispersing function and mode of action, two classes of superplasticizers are distinguished:
Ionic interactions (electrostatic repulsion): lignosulfonates (the first generation of water reducers) and sulfonated synthetic polymers (naphthalene, or melamine, formaldehyde condensates) (second generation), and;
Steric effects: polycarboxylate-ether (PCE) synthetic polymers bearing lateral chains (third generation).
Superplasticizers are used when well-dispersed cement particle suspensions are required to improve the flow characteristics (rheology) of concrete. Their addition makes it possible to decrease the water-to-cement ratio of concrete or mortar without negatively affecting the workability of the mixture, and enables the production of self-consolidating concrete and high-performance concrete. The water–cement ratio is the main factor determining concrete strength and durability. Superplasticizers greatly improve the fluidity and rheology of fresh concrete. Concrete strength increases as the water-to-cement ratio decreases, because avoiding water added in excess merely to maintain the workability of fresh concrete results in a lower porosity of the hardened concrete, and thus a better resistance to compression.
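As a rough numerical illustration of the effect on the water–cement ratio, the sketch below assumes a hypothetical mix and the ~30% water reduction mentioned above; the quantities are invented for the example and are not taken from any mix-design standard.

```python
# Hypothetical mix: effect of a ~30% water reduction on the water-cement (w/c) ratio
# at constant cement content. All quantities are illustrative only.
cement = 360.0        # kg per cubic metre of concrete (example value)
water_plain = 180.0   # kg/m^3 without superplasticizer (example value)
water_sp = water_plain * (1 - 0.30)   # ~30% less water with a superplasticizer

print(water_plain / cement)  # 0.50 -> more porous, weaker hardened concrete
print(water_sp / cement)     # 0.35 -> denser, stronger concrete at comparable workability
```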
The addition of SP in the truck during transit is a fairly modern development within the industry. Admixtures added in transit through automated slump-management systems make it possible to maintain the slump of fresh concrete until discharge without reducing concrete quality.
Working mechanism
Traditional plasticizers are lignosulfonates, used as their sodium salts. Superplasticizers are synthetic polymers. Compounds used as superplasticizers include (1) sulfonated naphthalene formaldehyde condensate, sulfonated melamine formaldehyde condensate, and acetone formaldehyde condensate, and (2) polycarboxylate ethers. Cross-linked melamine- or naphthalene-sulfonates, referred to as PMS (polymelamine sulfonate) and PNS (polynaphthalene sulfonate) respectively, are illustrative. They are prepared by cross-linking the sulfonated monomers using formaldehyde or by sulfonating the corresponding cross-linked polymer.
The polymers used as plasticizers exhibit surfactant properties. They are often ionomers bearing negatively charged groups (sulfonates, carboxylates, or phosphonates). They function as dispersants to minimize particle segregation in fresh concrete (separation of the cement slurry and water from the coarse and fine aggregates, such as gravel and sand respectively). The negatively charged polymer backbone adsorbs onto the positively charged colloidal particles of unreacted cement, especially onto the tricalcium aluminate (C3A) mineral phase of cement.
Melamine sulfonate (PMS) and naphthalene sulfonate (PNS) act mainly through electrostatic interactions with cement particles, favoring their electrostatic repulsion, while polycarboxylate-ether (PCE) superplasticizers sorb onto and coat large agglomerates of cement particles and, thanks to their lateral chains, sterically favor the dispersion of large cement agglomerates into smaller ones.
However, as their working mechanisms are not fully understood, cement-superplasticizer incompatibilities can be observed in certain cases.
Common superplasticizer types
Sodium Lignosulfonate
Sulfonated Naphthalene Formaldehyde
Polycarboxylate Superplasticizer
Polycarboxylate superplasticizer (PCE), the third generation of high-range superplasticizers, followed the development of ordinary plasticizers and earlier superplasticizers. It significantly reduces water content while improving the workability, strength, and durability of concrete, and it is now widely used as a concrete admixture.
See also
Particle aggregation (inverse process of)
Peptization
Plasticizer
Polycarboxylates
Rheology
Surfactant
Suspension (chemistry)
References
Further reading
External links
Cement
Concrete
Chemistry
Colloidal chemistry
Heterogeneous chemical mixtures
Concrete admixtures | Superplasticizer | [
"Chemistry",
"Engineering"
] | 999 | [
"Structural engineering",
"Colloidal chemistry",
"Surface science",
"Colloids",
"Chemical mixtures",
"Concrete",
"Heterogeneous chemical mixtures"
] |
2,945,235 | https://en.wikipedia.org/wiki/Infectivity | In epidemiology, infectivity is the ability of a pathogen to establish an infection. More specifically, infectivity is the extent to which the pathogen can enter, survive, and multiply in a host. It is measured by the ratio of the number of people who become infected to the total number exposed to the pathogen.
In plants, infectivity has been shown to correlate positively with virulence. This means that as a pathogen's ability to infect a greater number of hosts increases, so does the level of harm it brings to the host.
A pathogen's infectivity is different from its transmissibility, which refers to a pathogen's capacity to pass from one organism to another.
See also
Basic reproduction number (basic reproductive rate, basic reproductive ratio, R0, or r nought)
References
Epidemiology | Infectivity | [
"Environmental_science"
] | 178 | [
"Epidemiology",
"Environmental social science"
] |
2,945,299 | https://en.wikipedia.org/wiki/Lefschetz%20zeta%20function | In mathematics, the Lefschetz zeta-function is a tool used in topological periodic and fixed point theory, and dynamical systems. Given a continuous map , the zeta-function is defined as the formal series
where is the Lefschetz number of the -th iterate of . This zeta-function is of note in topological periodic point theory because it is a single invariant containing information about all iterates of .
Examples
The identity map on $X$ has Lefschetz zeta function
$$\zeta_{\operatorname{id}_X}(t) = \frac{1}{(1-t)^{\chi(X)}},$$
where $\chi(X)$ is the Euler characteristic of $X$, i.e., the Lefschetz number of the identity map.
For a less trivial example, let $X$ be the unit circle, and let $f\colon X \to X$ be reflection in the x-axis, that is, $f(e^{i\theta}) = e^{-i\theta}$. Then $f$ has Lefschetz number 2, while $f^2$ is the identity map, which has Lefschetz number 0. Likewise, all odd iterates have Lefschetz number 2, while all even iterates have Lefschetz number 0. Therefore, the zeta function of $f$ is
$$\zeta_f(t) = \exp\left(\sum_{n\ \mathrm{odd}} \frac{2\,t^n}{n}\right) = \frac{1+t}{1-t}.$$
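For readers who want to check the reflection example numerically, here is a small sketch using sympy. It assumes the series definition reconstructed above and truncates at a finite order, so it is an illustrative check rather than a proof; the helper name is ad hoc.

```python
# Sketch: build the Lefschetz zeta function as a truncated power series
# exp(sum_{n>=1} L(f^n) t^n / n) from a given sequence of Lefschetz numbers.
import sympy as sp

t = sp.symbols('t')
N = 8  # truncation order

def zeta_series(lefschetz_numbers, order=N):
    """Truncated power series of the Lefschetz zeta function."""
    s = sum(L * t**n / n for n, L in enumerate(lefschetz_numbers, start=1))
    return sp.series(sp.exp(s), t, 0, order).removeO()

# Reflection of the circle: L(f^n) = 2 for odd n, 0 for even n (as stated above).
L_reflection = [2 if n % 2 == 1 else 0 for n in range(1, N + 1)]
print(sp.expand(zeta_series(L_reflection)))
# Should agree, up to the truncation order, with the series of (1 + t)/(1 - t):
print(sp.series((1 + t) / (1 - t), t, 0, N).removeO())
```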
Formula
If $f$ is a continuous map on a compact manifold $X$ of dimension $n$ (or more generally any compact polyhedron), the zeta function is given by the formula
$$\zeta_f(t) = \prod_{i=0}^{n} \det\!\left(1 - t\, f_*\big|_{H_i(X,\mathbb{Q})}\right)^{(-1)^{i+1}}.$$
Thus it is a rational function. The polynomials occurring in the numerator and denominator are essentially the characteristic polynomials of the map induced by f on the various homology spaces.
Connections
This generating function is essentially an algebraic form of the Artin–Mazur zeta function, which gives geometric information about the fixed and periodic points of f.
See also
Lefschetz fixed-point theorem
Artin–Mazur zeta function
Ruelle zeta function
References
Zeta and L-functions
Dynamical systems
Fixed points (mathematics) | Lefschetz zeta function | [
"Physics",
"Mathematics"
] | 349 | [
"Mathematical analysis",
"Fixed points (mathematics)",
"Topology",
"Mechanics",
"Dynamical systems"
] |
2,945,357 | https://en.wikipedia.org/wiki/Amateur%20rocketry | Amateur rocketry, sometimes known as experimental rocketry or amateur experimental rocketry, is a hobby in which participants experiment with fuels and make their own rocket motors, launching a wide variety of types and sizes of rockets. Amateur rocketeers have been responsible for significant research into hybrid rocket motors, and have built and flown a variety of solid, liquid, and hybrid propellant motors.
History
Amateur rocketry was an especially popular hobby in the late 1950s and early 1960s following the launch of Sputnik, as described in Homer Hickam's 1998 memoir Rocket Boys.
One of the first organizations set up in the US to engage in amateur rocketry was the Pacific Rocket Society established in California in the early 1950s. The group did their research on rockets from a launch site deep in the Mojave Desert.
In the summer of 1956, 17-year-old Jimmy Blackmon of Charlotte, North Carolina, built a 6-foot rocket in his basement. The rocket was designed to be powered by combined liquid nitrogen, gasoline, and liquid oxygen. On learning that Blackmon wanted to launch his rocket from a nearby farm, the Civil Aeronautics Administration notified the U.S. Army. Blackmon's rocket was examined at Redstone Arsenal and eventually grounded on the basis that some of the material he had used was too weak to control the flow and mixing of the fuel.
Interest in the rocketry hobby was spurred to a great extent by the publication of a Scientific American article in June 1957 that described the design, propellant formulations, and launching techniques utilized by typical amateur rocketry groups of the time (including the Reaction Research Society of California). The subsequent publication, in 1960, of a book entitled Rocket Manual for Amateurs by Bertrand R. Brinley provided even more detailed information regarding the hobby, and further contributed to its burgeoning popularity.
At this time, amateur rockets nearly always employed either black powder, zinc-sulfur (also called "micrograin"), or rocket candy (often referred to as "caramel candy") propellant mixtures. However, such amateur rockets can be dangerous because noncommercial rocket motors may fail more often than commercial rocket motors if not correctly engineered. An appalling accident rate led individuals such as G. Harry Stine and Vernon Estes to make model rocketry a safe and widespread hobby by developing and publishing the National Association of Rocketry Model Rocket Safety Code, and by commercially producing safe, professionally designed and manufactured model rocket motors. Model rocketry by definition then became a separate and distinct activity from amateur rocketry.
As knowledge of modern advances in composite and liquid propellants became more available to the public, it became possible to develop amateur motors with greater safety. Hobbyists were no longer dependent on dangerous packed-powder mixtures that could be delicate and unpredictable in handling and performance.
The Reaction Research Society conducts complex amateur rocket projects, utilizing solid, liquid, and hybrid propellant technologies. The Tripoli Rocketry Association sanctions some amateur activities, which they call "research rocketry," provided certain safety guidelines are followed, and provided the motors are of relatively standard design.
Projects such as Sugar Shot to Space attempt to launch rockets using "rocket candy" as a propellant.
Records
An amateur spaceshot refers to a rocket launch by non-commercial entities that successfully reached or exceeded the Kármán line, the internationally recognized boundary of space.
Notable events
On May 17, 2004, Civilian Space eXploration Team (CSXT) successfully launched the GoFast rocket which achieved the first officially verified flight of an amateur high-power rocket into space, achieving an altitude of 116 km (72 mi).
Prior to that, the Reaction Research Society on November 23, 1996, launched a solid-fuel rocket, designed by longtime member George Garboden, to an altitude of 80 km (50 mi) from the Black Rock Desert in Nevada.
For Series 9, Episode 4 of the BBC's Top Gear, a group of amateur rocketeers were given four and a half months to convert a Reliant Robin into a space shuttle with the assistance of an engineering firm. The shuttle used 6 x 40,960 N·s O hybrid motors for a maximum thrust of 8 metric tonnes, making it the most powerful non-governmental rocket launch in Europe. Unfortunately, the explosive bolts holding the Robin to the external tank failed to separate, causing it to crash into a nearby hill.
On 22 March 2007, Embry-Riddle Aeronautical University, Daytona Beach launched the two-staged Icarus rocket from NASA Wallops Flight Facility in Virginia. Icarus was designed and built by students from the Embry-Riddle Future Space Explorers and Developers Society. This vehicle set the world record for highest altitude launch by a student team with an apogee of 37.8 miles (200,000 feet), with a maximum velocity of Mach 4.04. It also became the first two-stage student-built sounding rocket to launch from a NASA facility.
On June 3, 2011, Copenhagen Suborbitals launched the HEAT 1X Tycho Brahe rocket with a capsule containing a test dummy. The flight had the wrong trajectory and had to be aborted in-flight (potentially the first in-flight termination of an amateur rocket based on telemetry data and radio command).
On June 23, 2013, Copenhagen Suborbitals launched the SAPPHIRE-1 rocket with active guidance. This rocket reached an altitude of 8.2 km with a horizontal error/drift of 180 m at apogee with respect to the launch platform. This launch was also a potential first in amateur rocketry as the first guided rocket launched by amateurs.
On October 16, 2015, Delft Aerospace Rocket Engineering (DARE) launched the Stratos II+ rocket from El Arenosillo, in Spain, to an altitude of 21.457 km with a successful water landing and capsule recovery. This broke the original amateur European altitude record of 12.3 km set by DARE in 2009 with the launch of Stratos I. This record stood as the European altitude record among all student rocketry programs.
On November 8, 2016, Hybrid Engine Development (HyEnD), a student team from the University of Stuttgart, Germany, launched the HEROS 3 (Hybrid Experimental ROcket Stuttgart) from Esrange Space Center in Northern Sweden to an altitude of above 30 km. By this, the European altitude record for student programs and the World record for hybrid propulsion student rockets was taken by HyEnD.
On April 21, 2019, the USC Rocket Propulsion Laboratory (USCRPL) launched Traveler IV, an eight-inch diameter vehicle from Spaceport America. All of the subsystems were reported as successful, and the vehicle was fully recovered. On May 22, 2019, a whitepaper was published calculating apogee altitude of 339,800 ft ± 16,500, giving a 90% confidence that it passed the Kármán line. This makes it the highest-performing student-designed and student-manufactured rocket in the world, and the first to reach the internationally accepted definition of space. However, even though all subsystems were reported as performing nominally throughout the flight, the rocket experienced a loss of GPS data from approximately 13 seconds to 278 seconds of flight, therefore missing apogee.
On 3 August 2019, Cape Rocketry launched JR101 in the Karoo, South Africa. An altitude of 10.3 km was reached, making it the highest verified altitude achieved in Africa by an amateur group. This was an especially notable achievement as the propellant was based on Ammonium Nitrate, as opposed to the more common ammonium perchlorate. All major components used were manufactured in South Africa, including electronics and propellant.
On February 22, 2020 Mike Hughes, known as "Mad Mike", died after the parachute in his homemade rocket deployed prematurely and detached during liftoff.
On March 8, 2021, a student group of the South African University of KwaZulu-Natal beat the previous African amateur hybrid rocket altitude record with their Phoenix-1B Mk IIr vehicle by reaching 18 km height after successfully launching it at the Denel Overberg Test Range in the Western Cape.
See also
Amateur rocket motor classification
Civilian Space eXploration Team
Copenhagen Suborbitals
Delft Aerospace Rocket Engineering
Elon Musk
Friends of Amateur Rocketry
High-power rocket
Model rocket
Reaction Research Society
Rocketry Organization of California
Rocket candy
Rocket Festival
Space Frontier Foundation
Thermalite
Robert Truax
References
External links
Build Your Own Radio Beacon Rocket Tracker
Reaction Research Society (RRS)
Rocketry Online
European Model Rocketry
Richard Nakka's Experimental Rocketry Web Site
Steve Jurvetson's TED Talk on Amateur Rocketry
I Build Rockets Website
Argentinean Amateur Rocketry Association
Altus Metrum Free hardware and software for rocketry
Apogee Rockets
National Association of Rocketry
UK Rocketry Association
British Model Flying Association
The Rocket Range - UK Space News and Model Rocketry
Fins Over Gwent Rocketry Club
Rocketry | Amateur rocketry | [
"Engineering"
] | 1,813 | [
"Rocketry",
"Aerospace engineering"
] |
2,945,390 | https://en.wikipedia.org/wiki/Scalar%20boson | A scalar boson is a boson whose spin equals zero. A boson is a particle whose wave function is symmetric under particle exchange and therefore follows Bose–Einstein statistics. The spin–statistics theorem implies that all bosons have an integer-valued spin. Scalar bosons are the subset of bosons with zero-valued spin.
The name scalar boson arises from quantum field theory, which demands that fields of spin-zero particles transform like a scalar under Lorentz transformation (i.e. are Lorentz invariant).
A pseudoscalar boson is a scalar boson that has odd parity, whereas "regular" scalar bosons have even parity.
Examples
Scalar
The only fundamental scalar boson in the Standard Model of particle physics is the Higgs boson, the existence of which was confirmed on 14 March 2013 at the Large Hadron Collider by CMS and ATLAS. As a result of this confirmation, the 2013 Nobel Prize in Physics was awarded to Peter Higgs and François Englert.
Various known composite particles are scalar bosons, e.g. the alpha particle and scalar mesons.
The φ4-theory or quartic interaction is a popular "toy model" quantum field theory that uses scalar bosonic fields, used in many introductory quantum textbooks to introduce basic concepts in field theory.
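As a point of reference (not part of the original article), the Lagrangian density of the quartic interaction for a single real scalar field is usually written as follows; the overall signs depend on the metric-signature convention, and the 1/4! normalization of the coupling is a common but not universal choice.

```latex
\mathcal{L} \;=\; \tfrac{1}{2}\,\partial_\mu \varphi\,\partial^\mu \varphi \;-\; \tfrac{1}{2}\, m^2 \varphi^2 \;-\; \frac{\lambda}{4!}\,\varphi^4
```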
Pseudoscalar
There are no fundamental pseudoscalars in the Standard Model, but there are pseudoscalar mesons, like the pion.
See also
Scalar field theory
Klein–Gordon equation
Vector boson
Higgs boson
References
Bosons
Quantum field theory | Scalar boson | [
"Physics"
] | 333 | [
"Quantum field theory",
"Matter",
"Quantum mechanics",
"Bosons",
"Subatomic particles"
] |
2,945,563 | https://en.wikipedia.org/wiki/La%20Grande-4%20Airport | La Grande-4 Airport is an airfield exclusively serving the La Grande-4 hydro-electric generating station in northern Quebec, Canada.
See also
La Grande Rivière Airport
La Grande-3 Airport
La Grande-4/Lac de la Falaise Water Aerodrome
James Bay Project
References
External links
James Bay Project
Registered aerodromes in Nord-du-Québec | La Grande-4 Airport | [
"Engineering"
] | 70 | [
"James Bay Project",
"Macro-engineering"
] |
2,945,565 | https://en.wikipedia.org/wiki/La%20Grande-3%20Airport | La Grande-3 Airport is an airfield exclusively serving the La Grande-3 hydro-electric generating station in northern Quebec, Canada.
See also
La Grande Rivière Airport
La Grande-4 Airport
James Bay Project
References
External links
James Bay Project
Registered aerodromes in Nord-du-Québec | La Grande-3 Airport | [
"Engineering"
] | 58 | [
"James Bay Project",
"Macro-engineering"
] |
2,945,778 | https://en.wikipedia.org/wiki/Hemicontinuity | In mathematics, upper hemicontinuity and lower hemicontinuity are extensions of the notions of upper and lower semicontinuity of single-valued functions to set-valued functions.
A set-valued function that is both upper and lower hemicontinuous is said to be continuous in an analogy to the property of the same name for single-valued functions.
To explain both notions, consider a sequence a of points in a domain, and a sequence b of points in the range. We say that b corresponds to a if each point in b is contained in the image of the corresponding point in a.
Upper hemicontinuity requires that, for any convergent sequence a in a domain, and for any convergent sequence b that corresponds to a, the image of the limit of a contains the limit of b.
Lower hemicontinuity requires that, for any convergent sequence a in a domain, and for any point x in the image of the limit of a, there exists a sequence b that corresponds to a subsequence of a, that converges to x.
Examples
The image on the right shows a function that is not lower hemicontinuous at x. To see this, let a be a sequence that converges to x from the left. The image of x is a vertical line that contains some point (x,y). But every sequence b that corresponds to a is contained in the bottom horizontal line, so it cannot converge to y. In contrast, the function is upper hemicontinuous everywhere. For example, considering any sequence a that converges to x from the left or from the right, and any corresponding sequence b, the limit of b is contained in the vertical line that is the image of the limit of a.
The image on the left shows a function that is not upper hemicontinuous at x. To see this, let a be a sequence that converges to x from the right. The image of a contains vertical lines, so there exists a corresponding sequence b in which all elements are bounded away from f(x). The image of the limit of a contains a single point f(x), so it does not contain the limit of b. In contrast, that function is lower hemicontinuous everywhere. For example, for any sequence a that converges to x, from the left or from the right, f(x) contains a single point, and there exists a corresponding sequence b that converges to f(x).
Definitions
Upper hemicontinuity
A set-valued function $\Gamma\colon A \to B$ is said to be upper hemicontinuous at a point $a_0 \in A$ if, for every open $V \subseteq B$ with $\Gamma(a_0) \subseteq V$, there exists a neighbourhood $U$ of $a_0$ such that for all $x \in U$, $\Gamma(x)$ is a subset of $V$.
Lower hemicontinuity
A set-valued function $\Gamma\colon A \to B$ is said to be lower hemicontinuous at the point $a_0 \in A$
if for every open set $V$ intersecting $\Gamma(a_0)$ there exists a neighbourhood $U$ of $a_0$ such that $\Gamma(x)$ intersects $V$ for all $x \in U$. (Here "$\Gamma(x)$ intersects $V$" means nonempty intersection, $\Gamma(x) \cap V \neq \varnothing$.)
Continuity
If a set-valued function is both upper hemicontinuous and lower hemicontinuous, it is said to be continuous.
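As an illustrative numerical sketch (not a proof, and not tied to the figures referenced above), the following Python snippet samples the set-valued map Γ(x) = {0} for x < 0 and Γ(x) = [0, 1] for x ≥ 0 and checks the two definitions at x0 = 0; the map and all names are chosen purely for this example.

```python
# Numerical illustration (not a proof) of hemicontinuity for the set-valued map
# G(x) = {0} if x < 0 and G(x) = [0, 1] if x >= 0, tested at x0 = 0.
import numpy as np

def G(x):
    """Finite sample of the image set G(x)."""
    return np.array([0.0]) if x < 0 else np.linspace(0.0, 1.0, 101)

x0 = 0.0
left_sequence = [-1.0 / n for n in range(1, 200)]   # converges to x0 from below

# Lower hemicontinuity at x0 would require every y in G(x0) to be approachable
# by points of G(x_n). Take y = 1, which lies in G(0) = [0, 1]:
y = 1.0
print(min(np.min(np.abs(G(x) - y)) for x in left_sequence))  # 1.0 -> y is never approached: not lower hemicontinuous

# Upper hemicontinuity asks the sets G(x_n) to end up inside neighbourhoods of G(x0):
def dist_to_G0(x):
    return np.max(np.min(np.abs(G(x)[:, None] - G(x0)[None, :]), axis=1))

print(max(dist_to_G0(x) for x in left_sequence))             # 0.0 -> consistent with upper hemicontinuity at 0
```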
Properties
Upper hemicontinuity
Sequential characterization
As an example, look at the image at the right, and consider sequence a in the domain that converges to x (either from the left or from the right). Then, any sequence b that satisfies the requirements converges to some point in f(x).
Closed graph theorem
The graph of a set-valued function $\Gamma\colon A \to B$ is the set defined by
$$\operatorname{Gr}(\Gamma) = \{(a,b) \in A \times B : b \in \Gamma(a)\}.$$
The domain of $\Gamma$ is the set of all $a \in A$ such that $\Gamma(a)$ is not empty.
Lower hemicontinuity
Sequential characterization
Open graph theorem
A set-valued function $\Gamma\colon A \to B$ is said to have open lower sections if the set
$$\Gamma^{-1}(b) = \{a \in A : b \in \Gamma(a)\}$$
is open in $A$ for every $b \in B$. If the values $\Gamma(a)$ are all open sets in $B$, then $\Gamma$ is said to have open upper sections.
If $\Gamma$ has an open graph, then $\Gamma$ has open upper and lower sections, and if $\Gamma$ has open lower sections then it is lower hemicontinuous.
Operations Preserving Hemicontinuity
Set-theoretic, algebraic and topological operations on set-valued functions (like union, composition, sum, convex hull, closure) usually preserve the type of continuity. But this should be taken with appropriate care since, for example, there exists a pair of lower hemicontinuous set-valued functions whose intersection is not lower hemicontinuous.
This can be fixed upon strengthening continuity properties: if one of those lower hemicontinuous multifunctions has open graph then their intersection is again lower hemicontinuous.
Function Selections
Crucial to set-valued analysis (in view of applications) are the investigation of single-valued selections and approximations to set-valued functions.
Typically lower hemicontinuous set-valued functions admit single-valued selections (Michael selection theorem, Bressan–Colombo directionally continuous selection theorem, Fryszkowski decomposable map selection).
Likewise, upper hemicontinuous maps admit approximations (e.g. Ancel–Granas–Górniewicz–Kryszewski theorem).
Other concepts of continuity
Upper and lower hemicontinuity can be viewed as ordinary continuity of $\Gamma$, regarded as a map into a hyperspace of subsets of $B$ endowed with a suitable topology:
(For the notion of hyperspace compare also power set and function space).
Using lower and upper Hausdorff uniformity we can also define the so-called upper and lower semicontinuous maps in the sense of Hausdorff (also known as metrically lower / upper semicontinuous maps).
See also
Selection theorem - a theorem about constructing a single-valued function from a set-valued function.
Notes
References
Theory of continuous functions
Mathematical analysis
Variational analysis | Hemicontinuity | [
"Mathematics"
] | 1,125 | [
"Theory of continuous functions",
"Mathematical analysis",
"Topology"
] |
2,945,880 | https://en.wikipedia.org/wiki/Hobble%20%28device%29 | A hobble (also, and perhaps earlier, hopple), or spancel, is a device which prevents or limits the locomotion of an animal, by tethering one or more legs. Although hobbles are most commonly used on horses, they are also sometimes used on other animals. On dogs, they are used especially during force-fetch training to limit the movement of a dog's front paws when training it to stay still. They are made from leather, rope, or synthetic materials such as nylon or neoprene. There are various designs for breeding, casting (causing a horse or other large animal to lie down with its legs underneath it), and mounting horses.
Types
Western horse hobbles
"Western"-style horse hobbles are tied around the pasterns or cannon bones of the horse's front legs. They comprise three basic types:
The vaquero or braided hobble, which is often of a quite fancy plaiting and lighter than other varieties, and is therefore only suitable for short term use.
The figure eight hobble or Queensland Utility Strap, a common style of hobble that stockmen wear as a belt and can use as a neck strap, lunch-time hobble, or tie for a “micky”. This hobble is made with three pieces of leather and two rings, plus a buckle fastening.
The twist hobble, made of soft leather or rope, with a twist between the horse's legs.
The above patterns are unsuitable for training, as they can tighten around a leg and cause injury.
Western hobbles are normally used to secure a horse when no tie device, tree, or other object is available for that purpose; e.g., when a rider travelling across open land has to dismount for various reasons. Hobbles also allow a horse to graze and move short distances slowly, yet prevent the horse from running off too far. This is handy at night if the rider has to get some sleep; using a hobble ensures that, in the morning, they can find their horse not too far away.
Hobble training a horse is a form of sacking out and desensitizing a horse to accept restraints on its legs. This helps a horse accept pressure on its legs in case it ever becomes entangled in barbed wire or fencing. A hobble-trained horse is less likely to pull, struggle, and cut its legs in a panic, since it has been taught to give to pressure in its legs.
Other hobbles
Breeding or service hobbles usually fasten around a mare's hocks, pass between her front legs to a neck strap. They are used to protect a stallion from kicks.
Casting hobbles are the same as the above, but with another rope or strap attached to the other hind foot. When these straps or ropes are pulled up together, the horse will fall.
Cattle hobbles are a strong strap with a metal keeper in the middle and a buckle at the end. They are used on the hind legs for a short period when capturing feral cattle.
Drovers’ or grazing hobbles have a buckle on a wide double redhide or chrome leather strap and a swivel and 5 ring chain connecting them. They are placed around the pasterns.
Hind leg pull up strap passes from a neck strap and around a hind pastern to draw up a hind foot for shoeing or treatment.
Hopples (sometimes called hobbles) are a piece of equipment used by Standardbred pacers to help the horse maintain its pacing gait.
Humble or one leg hobble is a strap placed around the front pastern, and then the leg is lifted and the strap is wrapped around the upper leg and then buckled, leaving the horse with three legs to stand on.
Mounting hobbles are knee hobbles that are made with a quick release, on a lead that passes to the rider. They are used to mount fractious horses and when mounted the rider can retrieve them.
Picket hobble is a single hobble that is placed on a front pastern and then attached to a tether chain.
Sideline hobbles may be made in the same manner as above, but with a longer chain to hobble a front and a back leg. Rope may also replace the chain. They, too, are placed around the pasterns. This pattern may be useful on a persistent jumper or a horse that has mastered the art of travelling in front leg hobbles.
Three or four leg hobbles are made in a similar pattern to the above and hobble three or four legs. Used for securing legs for operations, etc.
History
Hobbles date at least as far back as Ancient Egypt. Two Egyptian hieroglyphs are believed to depict hobbles.
A hobble is illustrated on a silver vase excavated from a 4th century B.C. tomb at Chertomlyk in modern day Ukraine.
The Persians were also known for their custom of hobbling. In Anabasis, Xenophon claims "a Persian army is good for nothing at night. Their horses are haltered, and, as a rule, hobbled as well to prevent their escaping as they might if loose."
See also
Hobble skirt
Legcuffs
References
— A detailed discussion of the various types of Western hobbles
Animal equipment
Horse tack
Physical restraint | Hobble (device) | [
"Biology"
] | 1,102 | [
"Animal equipment",
"Animals"
] |
2,945,894 | https://en.wikipedia.org/wiki/Social%20invisibility | Social invisibility refers to a group of people in the society who have been separated or systematically ignored by the majority of the public. As a result, those who are marginalized feel neglected or being invisible in the society. It can include disadvantaged, elderly homes, child orphanages, homeless people or anyone who experiences a sense of being ignored or separated from society as a whole.
Psychological consequences
The subjective experience of being unseen by others in a social environment is social invisibility. A sense of disconnectedness from the surrounding world is often experienced by invisible people. This disconnectedness can lead to absorbed coping and breakdowns, based on the asymmetrical relationship between someone made invisible and others.
Among African-American men, invisibility can often take the form of a psychological process that both deals with the stress of racialized invisibility, and the choices made in becoming visible within a social framework that predetermines these choices. In order to become visible and gain acceptance, an African-American man has to avoid adopting behavior that made him invisible in the first place, which intensifies the stress already brought on through racism.
See also
LGBT erasure
Racial color blindness
Social exclusion or marginalization
Social vulnerability
Apartheid
References
Social networks
Invisibility
Social rejection
Passing (sociology)
Censorship | Social invisibility | [
"Physics"
] | 261 | [
"Optical phenomena",
"Physical phenomena",
"Invisibility"
] |
2,946,006 | https://en.wikipedia.org/wiki/Dermal%20bone | A dermal bone or investing bone or membrane bone is a bony structure derived from intramembranous ossification forming components of the vertebrate skeleton, including much of the skull, jaws, gill covers, shoulder girdle, fin rays (lepidotrichia), and the shells of turtles and armadillos. In contrast to endochondral bone, dermal bone does not form from cartilage that then calcifies, and it is often ornamented. Dermal bone is formed within the dermis and grows by accretion only – the outer portion of the bone is deposited by osteoblasts.
The function of some dermal bones is conserved throughout vertebrates, although there is variation in shape and in the number of bones in the skull roof and postcranial structures. In bony fish, dermal bone is found in the fin rays and scales. A special example of dermal bone is the clavicle. Some dermal bone functions concern biomechanical aspects, such as protection against predators. Dermal bones are also argued to have ecophysiological roles, such as heat transfer between the body and the surrounding environment when basking (seen in crocodilians), as well as buffering of bone respiratory acidosis during prolonged apnea (seen in both crocodilians and turtles). These ecophysiological functions rely on a network of blood vessels within and directly above the dermal bones.
References
Vertebrate anatomy
Dermal and subcutaneous growths
Armour (zoology) | Dermal bone | [
"Biology"
] | 331 | [
"Biological defense mechanisms",
"Armour (zoology)"
] |
2,946,110 | https://en.wikipedia.org/wiki/List%20of%20international%20earthquake%20acceleration%20coefficients | List of international earthquake acceleration coefficients. A list of earthquake coefficients used in structural design for earthquake engineering around the world. For example, a coefficient of 0.09 indicates that a building is designed that 0.09457 of its weight can be applied horizontally during an earthquake.
Australia
From Australian Standard 1170.4. Coefficients are based on a 10% chance of exceedance in 50 years.
Adelaide - 0.10
Brisbane - 0.06
Hobart - 0.05
Melbourne - 0.08
Perth - 0.09
Sydney - 0.08
Note: Meckering, Western Australia, has the largest coefficient in Australia of 0.22.
Greece
From ΕAΚ 2003 building code
Zone 1 = 0.16g (Thrace and most of Northern Greece, Parts of Athens and Parts of Thessaloniki)
Zone 2 = 0.24g (Parts of Athens and Parts of Thessaloniki)
Zone 3 = 0.36g (Zakynthos Island, Cephalonia Island)
Other
Canada uses Spectral acceleration
References | List of international earthquake acceleration coefficients | [
"Engineering"
] | 206 | [
"Earthquake engineering",
"Earthquake and seismic risk mitigation",
"Civil engineering",
"Structural engineering"
] |