| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
1,064,603 | https://en.wikipedia.org/wiki/Ancient%20woodland | In the United Kingdom, ancient woodland is that which has existed continuously since 1600 in England, Wales and Northern Ireland (or 1750 in Scotland). The practice of planting woodland was uncommon before those dates, so a wood present in 1600 is likely to have developed naturally.
In most ancient woods, the trees and shrubs have been felled periodically as part of the management cycle. Providing that the area has remained as woodland, the stand is still considered ancient. Since it may have been cut over many times in the past, ancient woodland does not necessarily contain trees that are particularly old.
For many animal and plant species, ancient woodland sites provide the sole habitat. Furthermore, for many others, the conditions prevailing on these sites are much more suitable than those on other sites. Ancient woodland in the UK, like rainforest in the tropics, serves as a refuge for rare and endangered species. Consequently, ancient woodlands are frequently described as an irreplaceable resource, or 'critical natural capital'. The analogous term used in the United States, Canada and Australia (for woodlands that do contain very old trees) is "old-growth forest".
Ancient woodland is formally defined on maps by Natural England and equivalent bodies. Mapping of ancient woodland has been undertaken in different ways and at different times, resulting in a variable quality and availability of data across regions, although there are some efforts to standardise and update it.
Protection
A variety of indirect legal protections exist for many ancient woodlands, but it is not automatically the case that any given ancient woodland is protected. Some examples of ancient woodland are nationally or locally designated, for example as Sites of Special Scientific Interest. Others lack such designations.
Ancient woodlands also require special consideration when they are affected by planning applications. The National Planning Policy Framework, published in 2012, represents the British government's policy document pertaining to planning decisions affecting ancient woodlands. The irreplaceable nature of ancient woodlands is elucidated in paragraph 118 of the NPPF, which states: ‘Planning permission should be refused for development resulting in the loss or deterioration of irreplaceable habitats, including ancient woodland and the loss of aged or veteran trees found outside ancient woodland, unless the need for, and benefits of, the development in that location clearly outweigh the loss.’
Characteristics
The concept of ancient woodland, characterised by high plant diversity and managed through traditional practices, was developed by the ecologist Oliver Rackham in his 1980 book Ancient Woodland: Its History, Vegetation and Uses in England, which he wrote following his earlier research on Hayley Wood in Cambridgeshire.
The definition of ancient woodland includes two sub-types: Ancient semi-natural woodland (ASNW) and Planted ancient woodland site (PAWS).
Ancient semi-natural woodland (ASNW) is composed of native tree species that have not obviously been planted. Many of these woods also exhibit features characteristic of ancient woodland, including the presence of wildlife and structures of archaeological interest.
Planted Ancient Woodland Sites (PAWS) are defined as ancient woodland sites where the native species have been partially or wholly replaced with non-locally native species (usually, but not exclusively, conifers). These woodlands typically exhibit a plantation structure, characterised by even-aged crops of one or two species planted for commercial purposes. Many of these ancient woodlands were converted into conifer plantations as a consequence of felling operations conducted during wartime. While PAWS sites may not possess the same high ecological value as ASNW, they often contain remnants of semi-natural species where shading has been less intense, so that gradual restoration of a more semi-natural structure through progressive thinning is often possible. Since the ecological and historical values of ancient woodland were recognised, PAWS restoration has been a priority amongst many woodland owners and governmental and non-governmental agencies. Various grant schemes have also supported this endeavour. Some restored PAWS sites are now practically indistinguishable from ASNW. There is no formal method for reclassifying restored PAWS as ASNW, although some woodland managers now use the acronym RPAWS (Restored Planted Ancient Woodland) for a restored site.
Species which are particularly characteristic of ancient woodland sites, such as bluebells, ramsons, wood anemone, yellow archangel and primrose, are called ancient woodland indicator species, a type of ecological indicator. The term is more frequently applied to desiccation-sensitive plant species, particularly lichens and bryophytes, than to animals. This is due to the slower rate at which they colonise planted woodlands, which makes them more reliable indicators of ancient woodland sites. Sequences of pollen analysis can also serve as indicators of forest continuity.
Lists of ancient woodland indicator species among vascular plants were developed by the Nature Conservancy Council (now Natural England) for each region of England, with each list containing the hundred most reliable indicators for that region. The methodology entailed the study of plants from known woodland sites, with an analysis of their occurrence patterns to determine which species were most indicative of sites from before 1600. In England this resulted in the first national Ancient Woodland Inventory, produced in the 1980s.
Although ancient woodland indicator species have been recorded in post-1600 woodlands and also in non-woodland sites such as hedgerows, it is uncommon for a site that is not ancient woodland to host a double-figure indicator species total. More recent methodologies also supplement these field observations and ecological measurements with historical data from maps and local records, which were not fully assessed in the original Nature Conservancy Council surveys.
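The counting methodology described above lends itself to a short illustration. The sketch below is illustrative only: the species set is a small placeholder rather than a real regional list of one hundred indicators, and the double-figure threshold follows the rule of thumb mentioned above, not a published cut-off.

```python
# Minimal sketch of indicator-species scoring for a surveyed site.
# The species list and threshold here are illustrative placeholders,
# not the Nature Conservancy Council's actual regional lists.

REGIONAL_INDICATORS = {
    "bluebell", "ramsons", "wood anemone", "yellow archangel", "primrose",
    # ...a real regional list would contain about one hundred species
}

def indicator_count(surveyed_species):
    """Count how many recorded species appear on the regional indicator list."""
    return len(REGIONAL_INDICATORS & set(surveyed_species))

def likely_ancient(surveyed_species, threshold=10):
    """A double-figure indicator total is rarely reached by non-ancient sites."""
    return indicator_count(surveyed_species) >= threshold

site = ["bluebell", "ramsons", "wood anemone", "bramble", "nettle"]
print(indicator_count(site))   # 3
print(likely_ancient(site))    # False -- too few indicators on their own
```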
History
Ancient woods were valuable properties for their landowners, serving as a source of wood fuel, timber (estovers and loppage) and forage for pigs (pannage). In southern England, hazel was particularly important for coppicing, whereby the branches were used for wattle and daub in buildings, for example. Such old coppice stumps are easily recognised for their current overgrown state, given the waning prevalence of the practice. In such overgrown coppice stools, large boles emerge from a common stump. The term 'forest' originally encompassed more than simply woodland. It also referred to areas such as parkland, open heathland, upland fells, and any other territory situated between or outside of manorial freehold. These forests were the exclusive hunting preserve of the monarch or granted to nobility.
The ancient woods that were situated within forests were frequently designated as Royal Parks. These were afforded special protection against poachers and other interlopers, and subject to tolls and fines where trackways passed through them or when firewood was permitted to be collected or other licenses granted. The forest law was rigorously enforced by a hierarchy of foresters, parkers and woodwards. In English land law, it was illegal to assart any part of a royal forest. This constituted the gravest form of trespass that could be committed in a forest, being considered more egregious than mere waste. While waste involved the felling of trees, which could be replanted, assarting entailed the complete uprooting of trees within the woodland of the afforested area.
Boundary marking
Ancient woods were well-defined, often being surrounded by a bank and ditch, which allowed them to be more easily recognised. The bank may also support a living fence of hawthorn or blackthorn to prevent livestock or deer from entering the area. Since they are attracted by young shoots on coppice stools as a food source, they must be excluded if the coppice is to regenerate. Such indicators can still be observed in many ancient woodlands, and large forests are often subdivided into woods and coppices with banks and ditches as was the case in the past. The hedges at the margins are often overgrown and may have spread laterally due to the neglect of many years.
Many ancient woods are listed in the Domesday Book of 1086, as well as in the earlier Anglo-Saxon Chronicle. This is indicative of their significant value to early communities as a source of fuel and, moreover, as a source of food for farm animals. The boundaries are frequently described in terms of features such as large trees, streams or tracks, and even standing stones for example.
Ancient woodland inventories
Ancient woodland sites over 2 hectares in size are recorded in Ancient Woodland Inventories, compiled in the 1980s and 1990s by the Nature Conservancy Council in England, Wales and Scotland, and maintained by its successor organisations in those countries. There was no inventory in Northern Ireland until the Woodland Trust completed one in 2006.
Destruction
Britain's ancient woodland cover has diminished considerably over time. Since the 1930s almost half of the ancient broadleaved woodland in England and Wales has been planted with conifers or cleared for agricultural use. The remaining ancient semi-natural woodland in Britain now represents less than 20% of the total wooded area. More than eight out of ten ancient woodland sites in England and Wales are small in area; only 617 sites are of substantial size, and a mere forty-six of these are very large.
Management
Most ancient woodland in the UK has been managed in some way by humans for hundreds (in some cases probably thousands) of years. Two traditional techniques are coppicing (the practice of harvesting wood by cutting trees back to ground level) and pollarding (harvesting wood at approximately human head height to prevent new shoots being eaten by grazing species such as deer). Both techniques encourage new growth while allowing the sustainable production of timber and other woodland products. During the 20th century, the use of such traditional management techniques declined, concomitant with an increase in large-scale mechanised forestry. Consequently, coppicing is now seldom practised, and overgrown coppice stools, with their numerous trunks of similar size, are a common feature in many ancient woods. These shifts in management practices have resulted in alterations to ancient woodland habitats and a loss of ancient woodland to forestry.
Examples
Bedgebury Forest, Kent
Bernwood Forest, Buckinghamshire and Oxfordshire
Bradfield Woods, Suffolk
Bradley Woods, Wiltshire
Burnham Beeches, Buckinghamshire
Cannock Chase, Staffordshire
Cherry Tree Wood, London
Claybury Woods, London
Coldfall Wood, London
Dolmelynllyn Estate, Gwynedd
Dyscarr Woods, Nottinghamshire
Edford Woods and Meadows, Somerset
Epping Forest, Essex
Forest of Dean, West Gloucestershire
Foxley Wood, Norfolk
Grass Wood, Wharfedale, Yorkshire
Hatfield Forest, Essex
Hazleborough Wood, Northamptonshire, part of Whittlewood Forest
Highgate Wood, London
Hollington Wood, Buckinghamshire
Holt Heath, Dorset
King's Wood, Heath and Reach, Bedfordshire
Lower Woods, Gloucestershire
New Forest, Hampshire
Parkhurst Forest, Isle of Wight
Puzzlewood, in the Forest of Dean
Queen's Wood, London
Ryton Woods, Warwickshire
Salcey Forest, Northamptonshire
Savernake Forest, Wiltshire
Sherwood Forest, Nottinghamshire
Snakes Wood, Suffolk
Titnore Wood, West Sussex
Vincients Wood, Wiltshire
Wentwood, Monmouthshire
Whinfell Forest, Cumbria
Whittlewood Forest, Northamptonshire
Windsor Great Park, Berkshire
Wistman's Wood, Devon
Wormshill, Kent: Barrows Wood, Trundle Wood and High Wood
Wyre Forest, bordering Shropshire and Worcestershire
Yardley Chase, Northamptonshire
See also
References
External links
Ancient Tree Guides by the Woodland Trust (archived 5 November 2011)
History of forestry
Forests and woodlands of the United Kingdom
Forests and woodlands of England
Old-growth forests
Forest history
Types of formally designated forests | Ancient woodland | Biology | 2,328 |
1,358,751 | https://en.wikipedia.org/wiki/Samuel%20Haldeman | Samuel Stehman Haldeman (August 12, 1812 – September 10, 1880) was an American naturalist and philologist.
During a long and varied career he studied, published, and lectured on geology, conchology, entomology and philology. He once confided, "I never pursue one branch of science more than ten years, but lay it aside and go into new fields."
Early life and education
Haldeman was born in Locust Grove, Pennsylvania on August 12, 1812, the oldest of seven children of Henry Haldeman and Frances Stehman Haldeman. Locust Grove was the family estate on the Susquehanna River, twenty miles below Harrisburg. His father was a prosperous businessman and his mother was an accomplished musician who died when Haldeman was twelve years old. In 1826, he was sent to Harrisburg to attend school at the Classical Academy, run by John M. Keagy. After two years in the academy, he enrolled at Dickinson College where his interest in natural history was encouraged by his professor, Henry Darwin Rogers, who would later become a distinguished geologist. Two years after entering Dickinson, the college was forced to close temporarily and Haldeman left without earning a diploma.
Career
After leaving school, Haldeman took over management of his father's new sawmill and became a silent partner with two of his brothers who started an iron manufacturing business in the area. He eventually became an authority on smelting iron. However, he was always drawn to science and often neglected the family businesses in pursuit of these interests. He later said, "I developed a taste for rainy weather and impassable roads; then I could remain undisturbed in the perusal of my books." In 1833–1834, he attended lectures in the medical department at the University of Pennsylvania in Philadelphia in order to better prepare himself for the study of natural history.
In 1835, Haldeman wrote an article for the Lancaster Journal refuting the Great Moon Hoax, a sensational story claiming that life had been observed on the moon. That same year his former professor, Henry D. Rogers, was appointed state geologist of New Jersey and in 1836 he sent for Haldeman to assist him. A year later, on the reorganization of the Pennsylvania geological survey, Haldeman transferred there, and was actively engaged on the survey until 1842, preparing five annual reports, and personally surveying Dauphin and Lancaster counties.
During the 1840s, Haldeman's interests were focused on the natural history of invertebrates, especially the taxonomy of beetles and freshwater mollusks. In 1840 he began the publication of his Monograph of the Freshwater Univalve Mollusca of the United States, issued in nine parts with the final volume not appearing until 1866. The monograph was well received by the scientific community in America and Europe. In an addendum he described Scolithus linearis, a trace fossil of some burrowing organism, the most ancient organic remains known at the time. In 1844 he wrote a paper, "Enumeration of the Recent Freshwater Mollusca Which are Common to North America and Europe", where he laid out in detail the case for Lamarckian evolution and transmutation of species. In 1861, Charles Darwin wrote in a preface to his On the Origin of Species an acknowledgment of Haldeman's ideas in support of evolution.
He was elected as a member to the American Philosophical Society in 1844.
In 1842, he was instrumental in the establishment of the Entomological Society of Pennsylvania, the first scientific society formed to study insects in America. Haldeman's participation in the society put him in regular contact with other leading American entomologists including Frederick E. Melsheimer and John G. Morris. His first entomological paper was the "Catalogue of the Carabideous Coleoptera of South Eastern Pennsylvania," published in 1842. Over the next 15 years he published many papers on the systematics of beetles and other insects, describing many new species. His ear was remarkably sensitive, and he discovered a new organ of sound in lepidopterous insects, which was described by him in Benjamin Silliman's American Journal of Science in 1848. In 1852 he wrote a description of the insects collected by the Stansbury survey of the Great Salt Lake.
In addition to his work on entomology, Haldeman accepted various college appointments to teach natural history. In 1842, he was a professor of zoology at the Franklin Institute in Philadelphia. From 1851 until 1855 he was professor of natural history at the University of Pennsylvania. He then accepted a similar professorship in Delaware College. Meanwhile, he also lectured on geology and chemistry at the State Agricultural College of Pennsylvania. He visited Texas in 1851 to investigate the presidency of an institution there, but declined the position. On his return trip from Texas, he was offered the position of president of Masonic College in Selma, Alabama, which he accepted and held from January to October 1852.
In the 1850s, Haldeman's focus turned to the study of language. He carried out extensive research among Amerindian dialects, and also in Pennsylvania Dutch, besides investigations in the English, Chinese, and other languages. Haldeman was an earnest advocate of spelling reform. He was a member of many scientific societies, was the founder and president of the American Philological Association, and one of the early members of the National Academy of Sciences. In 1858, Haldeman was awarded the Trevelyan Prize, given by the Phonetic Society of Great Britain, for his article entitled "Analytic Orthography". He made numerous visits to Europe for purposes of research, and while studying the human voice in Rome he determined the vocal repertoire of 40–50 varieties of human speech. In 1869, he returned to the University of Pennsylvania as a professor of comparative philology and remained there until his death in 1880.
In 1835, Haldeman married Mary A. Hough and the couple had two sons and two daughters. After the wedding, they moved to a new home at the base of Chickies Rock. He had designed the house and laid out the extensive gardens with native specimens of trees and shrubs. Raised a Protestant, Haldeman converted to Catholicism in the 1840s after undertaking a systematic study of different religions. He died suddenly of a heart attack on September 10, 1880 at his home in Chickies, Pennsylvania.
Works
He was the author of some 150 publications including important works on entomology, conchology, and philology.
A monograph of the Limniades and other freshwater univalve shells of North America. Philadelphia, J. Dobson. (1840)
A monograph of the freshwater univalve mollusca of the United States, including notices of species in other parts of North America (1842)
Zoological Contributions, Parts 1,2,3 (1842–1844)
"Enumeration of the Recent Freshwater Mollusk Which are Common to North America and Europe, with Observations on Species and their Distribution" (1844)
"Monographie du genre leptoxis" (in Chenu's Illustrations conchologiques, Paris, 1847)
"On some Points in Linguistic Ethnology" (in Proceedings of the American Academy, Boston, 1849)
"Zoölogy of the Invertebrate Animals" (in the Iconographic Encyclopædia, New York, 1850)
Elements of Latin Pronunciation. (1851)
"On the Relations of the English and Chinese Languages" (in Proceedings of the American Association for the Advancement of Science, 1856)
Analytic Orthography (1860). In 1858, this essay gained Haldeman the Trevelyan Prize in England over 18 European competitors.
Tours of a Chess Knight (1864)
Pennsylvania Dutch, a Dialect of South German with an Infusion of English (1872)
Outlines of Etymology (1877)
Word-Building (1881)
Notes
Attribution
References
Geiser, S. W. (1945) "Notes on Some Workers in Texas Entomology 1839-1880", Volume 49, Number 4, Southwestern Historical Quarterly Online
External links
Finding aid to the Samuel Stehman Haldeman papers at the University of Pennsylvania Libraries
Samuel Stehman Haldeman Archive
Scientific American, "Samuel Stehman Haldeman", October 2, 1880, p. 218
1812 births
1880 deaths
American entomologists
American naturalists
American philologists
Converts to Roman Catholicism
Catholics from Pennsylvania
Conchologists
Dickinson College alumni
Lamarckism
Members of the United States National Academy of Sciences
People from Lancaster County, Pennsylvania
Proto-evolutionary biologists
University of Pennsylvania faculty | Samuel Haldeman | Biology | 1,729 |
78,653,449 | https://en.wikipedia.org/wiki/Carballeira%20de%20San%20Xusto | The Carballeira de San Xusto is an oak forest located in the parish of San Xurxo de Sacos (Cerdedo-Cotobade), on a hill above the Lérez River, near the Ria de Pontevedra, in Galicia, Spain.
Characteristics
It is a traditional gathering place and a reference for legends and old tales; it contains centuries-old oak trees as well as chestnut and American oak trees. Once extensive in area, it has now greatly diminished.
Between 1990 and 1996, there was a dispute over the ownership of the carballeira between the parishioners of San Xurxo de Sacos and the Archdiocese of Santiago de Compostela. On June 6, 1990, the parish priest Manuel Lorenzo registered in Ponte Caldelas the ownership of this space and the mountain of Lixó on behalf of the Church. After various acts of protest and years of confrontation, the Provincial Court and the Supreme Court upheld the resolution issued by the court of first instance, which sided with the parishioners. The ownership of the carballeira belongs to the Community of Montes of the parish.
During the wildfire wave of 2006 in Galicia, a strong fire descended from Cerdedo to Pontevedra along the banks of the Lérez River, passing near the carballeira on August 5. The villagers celebrating the pilgrimage had to leave in haste, either to escape the fire or to help extinguish it. Ultimately, the carballeira was saved. In gratitude, a devotee of Saints Xusto and Pastor brought popular singers to the festivities in subsequent years, such as Manolo Escobar, Bertín Osborne, or David Bustamante, attracting thousands of people to the carballeira.
Architectural heritage
It is presided over by a cruceiro and a chapel, where the pilgrimage of Saints Xusto and Pastor is celebrated on August 5 and 6. In the past, the sword dance was performed in the carballeira, and the local youths hunted wild goats by descending the cliffs of the valley. The community and pilgrims participated in the saint's procession through the parish, which included images of various saints as well as the banner of the Agricultural Society of San Xurxo de Sacos, founded in 1903. Some pilgrims came with offerings to be bid upon, including coffins.
The small chapel preserves the covered presbytery, from the 15th or 16th century, with a ribbed vault. The nave dates from the 18th century. Above the main door there is a stone bearing a carved inscription in Spanish (pictured).
Above a door on the southern side wall there is another stone bearing a carved inscription in Spanish (pictured).
In June 2007, a statue commemorating the community's struggle was inaugurated, conceived by the sculptor Alfonso Vilar and completed by students of the Escola de Canteiros de Poio. The sculpture also bears a carved inscription.
Gallery
References
Forests | Carballeira de San Xusto | Biology | 603 |
26,855,260 | https://en.wikipedia.org/wiki/Journal%20of%20Behavioral%20Medicine | The Journal of Behavioral Medicine is an interdisciplinary medical journal published by Springer, addressing the interactions of the behavioral sciences with other fields of medicine.
Bimonthly journals
Academic journals established in 1978
Springer Science+Business Media academic journals
English-language journals
Behavioral medicine journals | Journal of Behavioral Medicine | Biology | 52 |
24,398,699 | https://en.wikipedia.org/wiki/C15H23N3O4S | The molecular formula C15H23N3O4S (molar mass: 341.42 g/mol, exact mass: 341.1409 u) may refer to:
Ciclacillin
Sulpiride
Levosulpiride
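As a quick check of the masses quoted above, both values follow from summing standard atomic weights (for the molar mass) or principal-isotope masses (for the exact mass). The short sketch below reproduces them to within rounding; the constants are standard IUPAC values, rounded.

```python
# Recompute the molar mass and exact (monoisotopic) mass of C15H23N3O4S.
composition = {"C": 15, "H": 23, "N": 3, "O": 4, "S": 1}

atomic_weight = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06}
isotope_mass  = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915, "S": 31.972071}

molar = sum(n * atomic_weight[el] for el, n in composition.items())
exact = sum(n * isotope_mass[el] for el, n in composition.items())

# Small differences from the quoted 341.42 g/mol reflect rounding of atomic weights.
print(f"molar mass: {molar:.2f} g/mol")   # ~341.43 g/mol
print(f"exact mass: {exact:.4f} u")       # ~341.1409 u
```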
Molecular formulas | C15H23N3O4S | Physics,Chemistry | 55 |
44,529,226 | https://en.wikipedia.org/wiki/Nitrogen%20fixation%20package | A nitrogen fixation package is a piece of research equipment for studying nitrogen fixation in plants. One product of this kind, the Q-Box NF1LP made by Qubit Systems, operates by measuring the hydrogen (H2) given off in the nitrogen-fixing chemical reaction enabled by nitrogenase enzymes.
Principle of operation
Fixed nitrogen (ammonia) is produced by bacteria, which live in an endosymbiotic relationship with the legume host. In this relationship, the plant shares its carbohydrates with the bacteria so that the bacteria can thrive, and the plant benefits from the surplus fixed nitrogen made available. The fixation reaction also releases hydrogen, which is what the unit measures to infer the nitrogen fixed. Measurement of H2 evolution as a means of determining nitrogenase activity is an alternative technique to the acetylene reduction assay, and allows real-time monitoring of changes in nitrogenase activity.
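The basis for inferring fixation from hydrogen is the canonical nitrogenase stoichiometry, under which at least one molecule of H2 is evolved for every molecule of N2 reduced:

    N2 + 8 H+ + 8 e− + 16 ATP → 2 NH3 + H2 + 16 ADP + 16 Pi

In symbioses lacking an uptake hydrogenase (the "H2-producing legume symbioses" mentioned below), this evolved H2 escapes the nodule and can be measured directly, so the H2 evolution rate places a lower bound on the electron flux through nitrogenase.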
Product description
Q-Box NF1LP is an experimental package using an open-flow gas exchange system for measurement of nitrogen fixation in H2-producing legume symbioses. A flow-through H2 sensor (Q-S121) measures the production rate of H2 from N2-fixing tissues, allowing in vivo measurement of nitrogenase activity in real time. Measurements of nitrogenase activity on up to three plants are possible, i.e. a four-channel system including a reference sample.
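The arithmetic behind such an open-flow measurement is simple: the H2 evolution rate follows from the flow rate through the cuvette and the concentration difference between the sample and reference channels. The sketch below illustrates the general calculation only; the function, its parameter names and the example numbers are illustrative assumptions, not taken from Qubit's software.

```python
# Generic open-flow gas-exchange calculation for H2 evolution
# (a sketch; not Qubit Systems' actual software).

def h2_evolution_rate(flow_lpm, c_sample_ppm, c_reference_ppm, root_mass_g):
    """
    Rate of H2 evolution in micromoles per minute per gram of nodulated root.
    flow_lpm: air flow through the cuvette (litres per minute)
    c_*_ppm: H2 concentration leaving the sample/reference cuvettes (ppm, i.e. uL/L)
    """
    delta_ppm = c_sample_ppm - c_reference_ppm   # H2 added by the tissue
    # ppm * L/min = uL H2 per minute; 1 mol of ideal gas occupies ~24.47 L at 25 C, 1 atm
    umol_per_min = delta_ppm * flow_lpm / 24.47
    return umol_per_min / root_mass_g

print(h2_evolution_rate(0.5, 12.0, 0.4, 1.2))  # ~0.198 umol H2 / min / g
```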
Operation
Nitrogen fixation packages must be used in a laboratory-type environment. This can be a temporary laboratory set up in the field, as long as conditions are stable and uncontaminated. The product must be supplied with many potted samples of the plants and of the neighbouring soil, taken from separate areas of the farm or field under study. The tests rely on the availability of the Herbaspirillum bacteria in the soil. This bacterium is found at the roots of most legumes, which is where nitrogen fixation takes place. To test soil properly, it must be free of added nitrogen fertilizers, which have harmful effects on the Herbaspirillum bacteria needed for fixation.
Applications
Different aspects of nitrogen fixation can be examined with these products, such as effects of temperature on the fixation process, the regulation of the process by oxygen, and the inhibition of nitrogen fixation by an over-abundance of fertilizers.
References
External links
Microbiology equipment
Nitrogen cycle | Nitrogen fixation package | Chemistry,Biology | 495 |
35,880,640 | https://en.wikipedia.org/wiki/Luminous%20at%20Darling%20Quarter | Luminous Interactive is a digital art platform created by Lendlease in collaboration with Ramus Illumination. It officially opened on 18 May 2012 in the Darling Quarter Precinct, Sydney central business district (CBD).
Luminous at Darling Quarter is a permanent platform solely for illuminated digital 'art' – both animated and static. The 'canvas' extends over four levels of two campus-style buildings, covering 557 windows in total, and presents a digital façade spanning a distance of 150 meters. The artistic and design direction was set by international light artist Bruce Ramus, with Ramus studio responsible for design management and production of the work. Fixtures were manufactured by Australian architectural lighting specialists Klik Systems using advanced LED systems.
Interactive
The online component of Luminous is similar to Project Blinkenlights, a Berlin digital light installation, and allows use by the public.
Luminous concept
The Commonwealth Bank occupying the buildings is joint partner in the project along with the Sydney Harbour Foreshore Authority and Lendlease. The consortium selected Bruce Ramus as the first artist, Artistic Director and Lighting Designer for Luminous.
Technology
At Darling Quarter, each window forms a 'pixel' in the canvas, and the windows together form an animated picture when viewed from afar. The system uses RGBW LEDs (red, green, blue and white light-emitting diodes) for colour. It also integrates music, using graphic synchronisation to visualise sound-based designs. Darling Quarter uses automated timber louvres to reflect light (running along the windowsills, angled upwards with a 10-degree spreader lens). The lighting technology is powered using solar panels.
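Since each window acts as one pixel, driving the façade amounts to downsampling a frame to the window grid and splitting each pixel's colour across the four LED channels. The snippet below shows a common RGBW approximation in which the white channel carries the component shared by red, green and blue; it is a generic sketch of the idea, not the installation's actual control software, and the function name and example values are hypothetical.

```python
# Sketch: map an RGB pixel to RGBW drive levels for one window "pixel".
# Common approximation: the white LED carries the component shared by R, G and B.
def rgb_to_rgbw(r, g, b):
    w = min(r, g, b)              # common (grey/white) component
    return r - w, g - w, b - w, w

# e.g. a warm amber: most light from red and green, a little from the white channel
print(rgb_to_rgbw(255, 180, 40))  # (215, 140, 0, 40)
```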
Darling Quarter
Lendlease designed and constructed Darling Quarter for owner APPF Commercial and first proposed the idea of a permanent space for illuminated art. The Darling Quarter precinct includes a community green, children's playground, and a large number of world-food restaurants, cafes and bars, and reflects Lendlease's enthusiasm for iconic new spaces for future generations.
References
External links
Designboom
Artdaily
WSJ Life & Style
Treehugger Living/Culture
Guardian on Vivid Sydney
Go Australia
Klik Systems
Ramus Illumination
iion Ltd
Lend Lease
Darling Quarter Official Website
Arts in Australia
Digital technology
Architectural lighting design
Buildings and structures in Sydney | Luminous at Darling Quarter | Technology | 449 |
35,152,779 | https://en.wikipedia.org/wiki/Personal%20RF%20safety%20monitor | Electromagnetic field monitors measure the exposure to electromagnetic radiation in certain ranges of the
electromagnetic spectrum. This article concentrates on monitors used in the telecommunication industry, which measure exposure to radio spectrum radiation. Other monitors, like extremely low frequency monitors which measure exposure to radiation from electric power lines, also exist. The major difference between a "Monitor" and a "Dosimeter" is that a Dosimeter can measure the absorbed dose of ionizing radiation, which does not exist for RF Monitors. Monitors are also separated by "RF Monitors" that simply measure fields and "RF Personal Monitors" that are designed to function while mounted on the human body.
Introduction
Electromagnetic field monitors, as used in the cellular phone industry, are referred to as "personal RF safety monitors", personal protection monitors (PPM) or RF exposimeters. They form part of the personal protective equipment worn by a person working in areas exposed to radio spectrum radiation. A personal RF safety monitor is typically worn on the torso or handheld, and is required by the occupational safety and health policies of many telecommunication companies.
Most established RF safety monitors are designed to measure RF exposure as a percentage of the two most common international RF safety guidelines: the International Commission on Non-Ionizing Radiation Protection (ICNIRP) guidelines and those of the U.S. Federal Communications Commission (FCC). The ICNIRP guidelines are also endorsed by the WHO. RF personal safety monitors were originally designed for RF engineers working in environments where they could be exposed to high levels of RF energy or be working close to an RF source, for example at the top of a telecommunication tower, or on the rooftop of a building where transmitting antennas are present. Most international RF safety programs include the training and use of RF personal safety monitors, and IEEE C95.7 specifies what constitutes an RF personal monitor.
In some cases the RF safety monitor comes in a version or mode for the general public. These meters can then be used to determine areas where the public might be exposed to high levels of RF energy or used to indicate the RF level in areas where the general public has access.
Specification
The specifications of an RF monitor determine the work environments where it is applicable. Wideband RF monitors can be used at a broader variety of base station sites than, for example, a narrowband cellular RF monitor, which is designed only for use in mobile telephone and data networks. IEEE Std C95.3 states that "In the region between 1-100 GHz, resistive thermoelectric dipoles are used as sensors with a background of lossy material to reduce the effect of scattering from the body. Electrically short dipoles with diode detectors as sensors may cover a portion of this range". The results of monitors which do not incorporate "lossy material" to reduce the effects of scattering are questionable when used on the body.
The type of response is a basic feature of any RF personal monitor and can be expressed in two basic parameters:
Directivity: Some of them have an isotropic response, which means that they are able to measure RF fields from any space direction. Others, like radial field monitors, have a partial space coverage, and have to be worn in a specific way in order to provide a correct reading.
Frequency response:
Flat response: units have a flat response across the whole frequency range covered, i.e. the response does not change with frequency.
Shaped response: units contain frequency-dependent sensors that automatically weight the detected RF fields in accordance with frequency-dependent RF exposure limits.
It is common for RF personal monitors to provide results as a percentage (%) of the frequency-dependent limit values of a specific standard (sometimes called reference levels or MPE, maximum permissible exposure). Care is needed when interpreting exposure during an alarm condition based on a % result: shaped-response RF personal monitors will provide a result as a % of the standard independently of frequency, while flat-response monitors will provide a result as a % of a particular value that is not frequency-dependent, so it is important to know which value that % refers to.
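For multi-frequency fields, the percentage reported by a shaped-response monitor corresponds to the summation rule used by the ICNIRP guidelines and the FCC rules for simultaneous exposure, in which each spectral component is weighted by the reference level at its own frequency:

    Exposure (%) = 100 × Σ_i ( E_i / E_L(f_i) )²

where E_i is the measured field strength at frequency f_i and E_L(f_i) is the corresponding reference level; a result of 100% means the combined exposure just reaches the limit. A flat-response monitor instead compares the total field against a single fixed value, which is why its % reading must be interpreted with the dominant frequency in mind.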
Some RF personal monitors have different versions, shaped to each standard, so they will be more accurate, but can be used only for that standard. Others have a single version, so will be less accurate, but can be used for different standards.
Usually, the alarm of most RF personal monitors is triggered by instant values; however, standard limits are specified as time-averaged values. Some RF monitors can trigger alarms based on average values, which is a better indication of the real exposure situation (as an example, an instant value can be at 200% while the average remains below 100%).
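As a concrete illustration of the difference between instantaneous and time-averaged alarms, the sketch below applies a rolling average to one-second readings expressed as % of the limit, using the 6-minute averaging time specified by the ICNIRP guidelines over much of the RF range. It is an illustrative sketch, not the firmware of any monitor listed here; class and variable names are hypothetical.

```python
from collections import deque

# Sketch: instantaneous vs time-averaged alarm logic for an RF personal monitor.
# Readings are % of the applicable limit, sampled once per second.
# ICNIRP specifies a 6-minute averaging time (360 samples at 1 Hz).

class AveragingAlarm:
    def __init__(self, window_s=360, threshold_pct=100.0):
        self.window = deque(maxlen=window_s)   # rolling 6-minute buffer
        self.threshold = threshold_pct

    def update(self, reading_pct):
        self.window.append(reading_pct)
        avg = sum(self.window) / len(self.window)
        return {
            "instant_alarm": reading_pct >= self.threshold,
            "averaged_alarm": avg >= self.threshold,
            "average_pct": avg,
        }

alarm = AveragingAlarm()
for _ in range(300):
    alarm.update(0.0)             # five minutes of negligible exposure
status = alarm.update(200.0)      # a brief excursion above the limit
print(status["instant_alarm"])    # True
print(status["averaged_alarm"])   # False: the 6-min average is still ~0.7%
```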
As they are typically small, portable units, they are usually equipped with only a few LEDs for a rough field-level indication (50%, 100%, etc.). Nevertheless, some of them have a datalogger that allows the user to download the measurements, check the exact values, and keep a history record of exposures. Wavecontrol's WaveMon offers an optional GPS and altimeter to add position information to the data records.
Other specifications that may be relevant, depending on the application are battery characteristics (lifetime, ways to change or recharge), dimensions, weight, and operating temperature.
Operating instructions
Each specific personal RF safety monitor has its own operating instructions, and most monitors have several operating modes. For instance, the Narda Radman has a mode in which it can be body-worn by the operator, but it also has a probe mode in which the operator can scan areas to establish accurate exclusion zones. The FieldSENSE, on the other hand, has a monitor and a measure mode. The measure mode is similar to the Radman's probe mode, but the monitor mode is used by mounting the FieldSENSE onto an inactive antenna; it is then safe to work on the antenna until the FieldSENSE raises an alarm to warn RF technicians that the antenna is live and that any work on the antenna should cease until deactivation is confirmed. Wavecontrol's WaveMon and Narda's RadMan 2 can be body-worn, or used off the body as a probe or as a monitor. Most RF monitors, such as the FieldSENSE, EME Guard, WaveMon and RadMan 2, also have a data-logging function that can record the RF exposure of a worker over time. The RadMan 2XT's RF detection mode, with its tone search feature, can locate leaks in waveguides and verify that an antenna is turned off.
List of personal RF monitors
EME Guard Plus
EME Guard XS
EME Guard XS 40 GHz
EME SPY Evolution
Narda Radman XT
Narda RadMan 2LT
Narda RadMan 2XT
Nardalert S3
FieldSENSE60
FieldSENSE 2.0
Public FieldSENSE
SafeOne Pro SI-1100XT
WaveMon RF-8
WaveMon RF-60
Gallery
References
Dosimeters
Radio spectrum
Radio waves
Radio waves
Electromagnetic radiation | Personal RF safety monitor | Physics,Technology,Engineering | 1,451 |
77,983,529 | https://en.wikipedia.org/wiki/Actinide%20contraction | The actinide contraction is the greater-than-expected decrease in atomic radii and ionic radii of the elements in the actinide series, from left to right.
Description
It is more pronounced than the lanthanide contraction because the 5f electrons are less effective at shielding than 4f electrons. It is caused by the poor shielding of the nuclear charge by the 5f electrons, along with the expected periodic trend of increasing electronegativity and nuclear charge on moving from left to right. About 40–50% of the actinide contraction has been attributed to relativistic effects.
A decrease in atomic radii can be observed across the 5f elements from atomic number 89, actinium, to 102, nobelium. This results in smaller than otherwise expected atomic radii and ionic radii for the subsequent d-block elements starting with 103, lawrencium. This effect causes the radii of transition metals of group 5 and 6 to become unusually similar, as the expected increase in radius going down a period is nearly cancelled out by the f-block insertion, and has many other far ranging consequences in post-actinide elements.
The decrease in ionic radii (M3+) is much more uniform than the decrease in atomic radii.
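A rough Slater-type estimate makes the shielding argument above semi-quantitative. Writing the effective nuclear charge as

    Z_eff = Z − S,

each step across the series raises Z by 1, while the added 5f electron contributes only about 0.35 to the screening constant S felt by the other 5f electrons, so

    ΔZ_eff ≈ 1 − 0.35 = 0.65 per element.

The outer electrons therefore feel a steadily increasing pull and the radii contract. This is only an order-of-magnitude sketch: Slater's rules do not distinguish 4f from 5f shielding and ignore the relativistic contribution noted above.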
References
Atomic radius
Chemical bonding
Periodic table | Actinide contraction | Physics,Chemistry,Materials_science | 273 |
33,705,242 | https://en.wikipedia.org/wiki/Rhamnolipid | Rhamnolipids are a class of glycolipid produced by Pseudomonas aeruginosa, amongst other organisms, frequently cited as bacterial surfactants. They have a glycosyl head group, in this case a rhamnose moiety, and a 3-(hydroxyalkanoyloxy)alkanoic acid (HAA) fatty acid tail, such as 3-hydroxydecanoic acid.
Specifically there are two main classes of rhamnolipids: mono-rhamnolipids and di-rhamnolipids, which consist of one or two rhamnose groups respectively. Rhamnolipids are also heterogeneous in the length and degree of branching of the HAA moiety, which varies with the growth media used and the environmental conditions.
Rhamnolipids biosynthesis
The first genes discovered in a mutagenesis screen for mutants unable to produce rhamnolipids were rhlA and rhlB. They are arranged in an operon, adjacent to rhlRI, a master regulator of quorum sensing in Pseudomonas aeruginosa. The proteins encoded by rhlA and rhlB, RhlA and RhlB respectively, are expected to form a complex because of the operonic nature of the genes which encode them and because both proteins are necessary for production of rhamnolipids. Furthermore, it was supposed that the role of RhlA was to stabilise RhlB in the cell membrane, and thus the RhlAB complex was labelled the enzyme rhamnosyltransferase 1 and is frequently cited as such, although there is no biochemical evidence for this and RhlA has been shown to be monomeric in solution. RhlA was subsequently shown to be involved in the production of the precursors to rhamnolipids, HAAs. RhlB adds a rhamnose group to the HAA precursor to form mono-rhamnolipid. Therefore, the products of the rhlAB operon, RhlA and RhlB, catalyse the formation of HAAs and mono-rhamnolipids respectively.
RhlA is an α, β hydrolase (analysis by Fugue structural prediction programme). This fold is a common structural motif in fatty acid synthetic proteins and RhlA shows homology to transacylases. It has been shown using enzyme assays that the substrate for RhlA is hydroxyacyl-ACP rather than hydroxyacyl-CoA suggesting that it catalyses the formation of HAAs directly from the type II fatty acid synthase pathway (FASII). Furthermore, RhlA preferentially interacts with hydroxyacyl-ACP with an acyl chain length of ten carbon residues. The hydroxyacyl-ACP substrate of RhlA is the product of FabG, a protein which encodes the NADPH-dependent β-keto-acyl-ACP reductase required for fatty acid synthesis. It is a member of the FASII cycle along with FabI and FabA, which synthesise the precursors utilised by FabG.
Another gene necessary for synthesis of di-rhamnolipids, rhlC, has also been identified. RhlC catalyses the addition of the second rhamnose moiety to mono-rhamnolipids forming di-rhamnolipids, hence is often labelled rhamnosyltransferase 2. Like rhlA and rhlB, rhlC is thought to be an ancestral gene controlled by the same quorum sensing system as rhlA and rhlB. The rhamnose moiety for mono- and di-rhamnolipids is derived from AlgC activity and the RmlABCD pathway, encoded on the rmlBCAD operon. AlgC produces sugar precursors directly for alginate and lipopolysaccharide (LPS) as well as rhamnolipids. In rhamnose synthesis, AlgC produces glucose-1-phosphate (G1P) which is converted to dTDP-D-glucose by RmlA followed by conversion to dTDP-6-deoxy-D-4-hexulose and then dTDP-6-deoxy-L-lyxo-4-hexulose by RmlB and RmlC respectively. Finally, dTDP-6-deoxy-L-lyxo-4-hexulose is converted to dTDP-L-rhamnose by RmlD. The rhamnose can then be used in the synthesis of rhamnolipids by RhlB and RhlC.
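For reference, the dTDP-L-rhamnose branch described above can be summarised as a linear scheme (catalysing enzyme over each arrow), drawn directly from the steps listed in the text:

    glucose-1-phosphate (from AlgC) --(RmlA)--> dTDP-D-glucose --(RmlB)--> dTDP-6-deoxy-D-4-hexulose --(RmlC)--> dTDP-6-deoxy-L-lyxo-4-hexulose --(RmlD)--> dTDP-L-rhamnose

with the activated sugar then consumed by the two rhamnosyltransferase steps:

    HAA --(RhlB)--> mono-rhamnolipid --(RhlC)--> di-rhamnolipid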
The complete pathway of biosynthesis of rhamnolipids has not been confirmed. In summary, mono- and di- rhamnolipids are produced by sequential rhamnosyltransferase reactions catalysed by RhlB and RhlC respectively. The substrate for RhlB is the fatty acid moiety of the detergent, produced by RhlA.
The role of rhamnolipids for the producing cell
The reason that Pseudomonas aeruginosa produces rhamnolipids is the subject of much speculation. They have been shown to have several properties, and investigations in a rhlA mutant that does not make HAAs nor rhamnolipids have attributed many functions to rhamnolipids which may in fact be due to HAAs. These functions fall broadly into five categories, described below.
Uptake of hydrophobic substrates
As mentioned previously, Pseudomonas aeruginosa has the ability to metabolise a variety of substrates including n-alkanes, hexadecane and oils. Uptake of these hydrophobic substrates is speculated to rely on the production of rhamnolipids. It is thought that rhamnolipids either cause the Pseudomonas aeruginosa cell surface to become hydrophobic, promoting an interaction between the substrate and the cell, or secreted rhamnolipids emulsify the substrate and allow it to be taken up by the Pseudomonas aeruginosa cell. There is evidence that rhamnolipids are highly adsorbent to the Pseudomonas aeruginosa cell surface, causing it to become hydrophobic. It has also been shown that production of rhamnolipids promotes uptake of hexadecane by overcoming the inhibitory effect of the hydrophilic interactions caused by LPS. Production of rhamnolipids is observed on hydrophobic substrates but equally high yields are achievable on other carbon sources such as sugars. Furthermore, although mono-rhamnolipids have been shown to interact with the Pseudomonas aeruginosa cell membrane and cause it to become hydrophobic, di-rhamnolipids do not interact well with the cell membrane because the polar head group is too large to penetrate the LPS layer. Therefore, although Rhamnolipids may play a part in interaction of Pseudomonas aeruginosa with hydrophobic carbon sources, they are likely to have additional functions.
Antimicrobial properties
Rhamnolipids have long been reported to have antimicrobial properties. They have been shown to have activity against a range of bacteria including Serratia marcescens, Klebsiella pneumoniae, Staphylococcus aureus and Bacillus subtilis with minimum inhibitory concentrations (MICs) ranging from 0.5 μg/mL to 32 μg/mL. Activity against several fungi such as Fusarium solani and Penicillium funiculosum have also been observed with MICs of 75 μg/mL and 16 μg/mL respectively. Rhamnolipids have been suggested as antimicrobials able to remove Bordetella bronchiseptica biofilms. The mode of killing has been shown to result from intercalation of rhamnolipids into the cell membrane causing pores to form which result in cell lysis, at least in the case of Bacillus subtilis. The anti-microbial action of rhamnolipids may provide a fitness advantage for Pseudomonas aeruginosa by excluding other microorganisms from the colonised niche. Furthermore, rhamnolipids have been shown to have anti-viral and zoosporicidal activities. The antimicrobial properties of rhamnolipids may confer a fitness advantage for Pseudomonas aeruginosa in niche colonisation as Pseudomonas aeruginosa is a soil bacterium, as well as competing with other bacteria in the cystic fibrosis lung.
Virulence
As mentioned previously, Pseudomonas aeruginosa produces a host of virulence factors in concert, under the control of the quorum sensing system. Many studies show that inhibiting quorum sensing down-regulates the pathogenicity of Pseudomonas aeruginosa. However, it has been shown that rhamnolipids specifically are a key virulence determinant in Pseudomonas aeruginosa. A variety of virulence factors were analysed in Pseudomonas aeruginosa strains isolated from pneumonia patients. Rhamnolipids were found to be the only virulence factor that was associated with the deterioration of the patients to ventilator-associated pneumonia. Several other reports also support the role of rhamnolipids in lung infections. The effect of rhamnolipids in Pseudomonas aeruginosa virulence has been further noted in corneal infections (Alarcon et al., 2009; Zhu et al., 2004). It has been shown that rhamnolipids are able to integrate into the epithelial cell membrane and disrupt tight-junctions. This study used reconstituted epithelial membranes and purified rhamnolipids to demonstrate this mechanism. In addition to inhibition and killing of epithelial cells, rhamnolipids are able to kill polymorphonuclear (PMN) leukocytes and macrophages and inhibit phagocytosis. In summary, rhamnolipids have been shown unequivocally to be a potent virulence factor in the human host, however, they are also produced outside of the host, for example in a soil environment.
Rhamnolipids contribute to the establishment and maintenance of infection in cystic fibrosis patients in a number of ways, they disrupt the bronchial epithelium by disrupting the cell membranes, which promotes paracellular invasion of Pseudomonas aeruginosa and causes ciliostasis, further preventing the clearing of mucus. They also solubilise lung surfactant, allowing phospholipase C access to cell membranes and are necessary for correct biofilm formation.
Biofilm mode of growth
There are three main phases of biofilm development, and rhamnolipids are implicated in each phase. Rhamnolipids are reported to promote motility, thereby inhibiting attachment by preventing cells from adhering tightly to the substratum. During biofilm development, rhamnolipids are reported to create and maintain fluid channels for water and oxygen flow around the base of the biofilm. Furthermore, they are important for forming structure in biofilms; a rhlA mutant forms a flat biofilm. Biofilm dispersal is dependent on rhamnolipids, although other factors such as degradation of the matrix and activation of motility are also likely to be necessary. It has been shown using fluorescence microscopy that the rhlAB operon is induced in the centre of the mushroom cap, followed by dispersal of cells from the polysaccharide matrix from the centre of these caps, causing a cavity to form. A mutation in rhlA prevents the formation of mushroom caps altogether.
Motility
Motility is a key virulence determinant in Pseudomonas aeruginosa. Pseudomonas aeruginosa has three distinct methods of moving across or through a medium. Rhamnolipids are particularly important in swarming motility where they are postulated to lower the surface tension of the surface through their surfactant properties, allowing the bacterial cell to swarm. New evidence suggests that rhamnolipids are necessary to allow Pseudomonas aeruginosa cells to overcome attachment mediated by type IV pili. There is some discrepancy between the role of HAAs and RHLs in swarming motility. Some studies use a rhlA mutation to assess the effect on motility, which prevents the formation of HAAs and rhamnolipids. Studies that use a rhlB mutant show that Pseudomonas aeruginosa can swarm in the absence of rhamnolipids, but HAAs are absolutely necessary for swarming. Rhamnolipids have been proposed to be important in regulating swarm tendril formation.
Rhamnolipids and HAAs are also implicated in twitching motility, similarly the surfactant is thought to lower the surface tension allowing cells to move across the substratum. However, the role of rhamnolipids in twitching motility may be nutritionally conditional.
Commercial potential of rhamnolipids
Surfactants are in demand for a wide range of industrial applications as they increase solubility, foaming capacity and lower surface tensions. In particular, rhamnolipids have been used broadly in the cosmetic industry for products such as moisturisers, condom lubricant and shampoo. Rhamnolipids are efficacious in bioremediation of organic and heavy metal polluted sites. They also facilitate degradation of waste hydrocarbons such as crude oil and vegetable oil by Pseudomonas aeruginosa. The rhamnolipid surfactant itself is valuable in the cosmetic industry, and rhamnolipids are a source of rhamnose, which is an expensive sugar in itself.
Other bio-based surfactants include sophorolipids and mannose-erythritol lipids.
References
Carbohydrate chemistry
Glycolipids | Rhamnolipid | Chemistry | 2,976 |
78,856,281 | https://en.wikipedia.org/wiki/4-Mercaptobenzoic%20acid | 4-Mercaptobenzoic acid (p-mercaptobenzoic acid, ''p''-MBA) is an organosulfur compound with the formula para-. It is used as a ligand in thiolate-protected gold cluster compounds, such as .
See also
Gold cluster
References
Benzoic acids
Thiols | 4-Mercaptobenzoic acid | Chemistry | 73 |
34,564,320 | https://en.wikipedia.org/wiki/George%20McMurtry%20%28engineer%29 | George Cannon McMurtry (14 November 1867 – 29 September 1918) was a New Zealand scientist, smelting engineer, mining manager and consultant, orchardist. He was born in Camberwell, Surrey, England on 14 November 1867.
References
1867 births
1918 deaths
New Zealand chemical engineers
Mining engineers
New Zealand orchardists
New Zealand metallurgists
People from Camberwell
English emigrants to New Zealand | George McMurtry (engineer) | Engineering | 83 |
1,980,416 | https://en.wikipedia.org/wiki/History%20of%20information%20technology%20auditing | Information technology auditing (IT auditing) began as electronic data process (EDP) auditing and developed largely as a result of the rise in technology in accounting systems, the need for IT control, and the impact of computers on the ability to perform attestation services. The last few years have been an exciting time in the world of IT auditing as a result of the accounting scandals and increased regulation. IT auditing has had a relatively short yet rich history when compared to auditing as a whole and remains an ever-changing field.
The introduction of computer technology into accounting systems changed the way data was stored, retrieved and controlled. It is believed that the first use of a computerized accounting system was at General Electric in 1954. During the time period of 1954 to the mid-1960s, the auditing profession was still auditing around the computer. At this time only mainframe computers were used and few people had the skills and abilities to program computers. This began to change in the mid-1960s with the introduction of new, smaller and less expensive machines. This increased the use of computers in businesses and with it came the need for auditors to become familiar with EDP concepts in business. Along with the increase in computer use, came the rise of different types of accounting systems. The industry soon realized that they needed to develop their own software and the first of the generalized audit software (GAS) was developed. In 1968, the American Institute of Certified Public Accountants (AICPA) had the Big Eight (now the Big Four) accounting firms participate in the development of EDP auditing. The result of this was the release of Auditing & EDP. The book included how to document EDP audits and examples of how to process internal control reviews.
Around this time EDP auditors formed the Electronic Data Processing Auditors Association (EDPAA). The goal of the association was to produce guidelines, procedures and standards for EDP audits. In 1977, the first edition of Control Objectives was published. This publication is now known as Control Objectives for Information and related Technology (COBIT). COBIT is the set of generally accepted IT control objectives for IT auditors. In 1994, EDPAA changed its name to Information Systems Audit and Control Association (ISACA). The period from the late 1960s through today has seen rapid changes in technology from the microcomputer and networking to the internet and with these changes came some major events that change IT auditing forever.
The formation and rise in popularity of the Internet and e-commerce have had significant influences on the growth of IT audit. The Internet influences the lives of most of the world and is a place of increased business, entertainment and crime. IT auditing helps organizations and individuals on the Internet find security while helping commerce and communications to flourish.
Major events
There are five major events in U.S. history which have had significant impact on the growth of IT auditing. These are the Equity Funding scandal, the development of the Internet and e-commerce, the 1998 IT failure at AT&T Corporation, the Enron and Arthur Andersen LLP scandal, and the September 11, 2001 Attacks.
These events have not only heightened the need for more reliable, accurate, and secure systems but have brought a much needed focus to the importance of the accounting profession. Accountants certify the accuracy of public company financial statements and add confidence to financial markets. The heightened focus on the industry has brought improved control and higher standards for all working in accounting, especially those involved in IT auditing.
Equity Funding Corporation of America
The first known case of misuse of information technology occurred at the Equity Funding Corporation of America. Beginning in 1964 and continuing until 1973, managers of the company booked false insurance policies to show greater profits, thus boosting the price of the company's capital stock. Had it not been for a whistleblower, the fraud might never have been caught. After the fraud was discovered, it took the auditing firm Touche Ross two years to confirm that the insurance policies were not real. This was one of the first cases where auditors had to audit through the computer rather than around the computer.
AT&T
In 1998 AT&T suffered an IT failure that impacted worldwide commerce and communication. A major switch failed due to software and procedural errors and left many credit card users unable to access funds for upwards this brought to the forefront our reliance in IT services and reminds us of the need for assurance in our computer systems.
Enron and Arthur Andersen
The Enron and Arthur Andersen LLP scandal led to the demise of a foremost accounting firm, an investor loss of more than $60 billion, and the largest bankruptcy in U.S. history. Although Arthur Andersen were found guilty of obstruction of justice for their role in the collapse of the energy giant in the US District Court for the Southern District of Texas (and affirmed by the Fifth Circuit in 2004), the conviction was overturned by the U.S. Supreme Court in Arthur Andersen LLP v. United States. This scandal had a significant impact on the Sarbanes-Oxley Act and was a major self-regulation violation.
See also
Government Accountability Office
Information technology audit
References
Senft, Sandra; Manson, Danial P. PhD; Gonzales, Carol; Gallegos, Frederick (2004). Information Technology Control and Audit (2nd Ed.). Auerbach Publications.
External links
Spiraling Upward-History of Internal Auditing and the Institute of Internal Auditors
Systems Auditability and Control-A History
History of the Privacy Act of 1974
Computer Fraud and Abuse Act
Electronic Privacy Information Center-Computer Security Act of 1987
Federal Trade Commission-Privacy Act of 1974
AICPA-Summary of Sarbanes Oxley Act of 2002
Financial Privacy: The Gramm Leach Bliley Act
Reference Library: Regulation
California Financial Information Privacy Act
Financial Accounting Standards Board
Information technology auditing
Information technology audit | History of information technology auditing | Technology | 1,220 |
50,630,768 | https://en.wikipedia.org/wiki/Ogle%20app | Ogle is a free smartphone based social media application. It is available for iOS and Android. Ogle acts like a school wide forum that lets users and users' classmates share and interact. Users can share photos, videos, questions, even thoughts and watch submissions grow in popularity as other users vote and comment on them.
App Features
Campus Feed: Interact by watching and posting videos or pictures to your campus story.
Photos and Videos: share what you want with many different timing options.
Interact: Chat with friends and groups, or share a moment for all to see.
Real-name system: choose to register an account with username and profile picture.
Custom Stickers: Create stickers to add creativity and zest to your pictures.
Flash Interaction: All private chat and group chat history will be deleted after 24 hours on Ogle Chat.
Controversies
Users can post anything on Ogle using text, photos, and videos. As a result of some Ogle users' sense of anonymity, posts have targeted specific schools and students with abusive and hurtful content. The Ogle app's user anonymity makes it difficult for school officials to quickly investigate issues that occur within the app.
On March 28, 2016, three people were arrested after violent threats were made against an Anaheim high school. 18-year-old Miguel Meza was arrested Sunday afternoon during a traffic stop, along with his passenger, 23-year-old Johnny Aguilar. Police said both men had loaded handguns. Aguilar was also accused of violating his probation. "It is concerning the fact that they did have firearms, but we don't have a crystal ball. We can't determine if they possessed those firearms to engage in some kind of school violence or if they had it for another reason," Sgt. Daron Wyatt with the Anaheim Police Department said. Officials said Meza and Aguilar have known gang ties and detectives began investigating Meza after threats were made against the school on Ogle.
On February 29, 2016, Santa Cruz County sheriff's deputies arrested a 16-year-old Aptos High School student on Friday, accused of making an online threat of gun violence at Aptos High and Monte Vista Christian. "He basically told detectives that it was all a joke. It's not a joke. You have multiple resources being spent to investigate these cases," said Santa Cruz County Sheriff's Sgt. Roy Morales. The schools remained open throughout the week, with a huge police presence on campus.
In an anonymous emailed statement to the Daily Pilot on Thursday, the "Ogle team" said: "We are aware of the concern, and cyberbullying is absolutely NOT our intention for the app. Our goal for this app is to create a free and safe community space for students, for a better communication. We are currently working around the clock to improve the app. As a matter of fact, we are also in contact with local police departments, anti-bullying organizations and local high schools to try to help the students."
In response to these incidents, Ogle said that it takes the safety of its users seriously and does not condone any type of behavior that is illegal or in violation of its content policies. The company also said it has instituted a content moderation team to increase review, identify and remove inappropriate content, and take action against "those who violate our community guidelines."
See also
Whisper
Nearby
Skout
Yik Yak
References
External links
App in iTunes Store
App in Google Play Store
Social media
Application software | Ogle app | Technology | 720 |
22,934 | https://en.wikipedia.org/wiki/Probability | Probability is the branch of mathematics and statistics concerning events and numerical descriptions of how likely they are to occur. The probability of an event is a number between 0 and 1; the larger the probability, the more likely an event is to occur. This number is often expressed as a percentage (%), ranging from 0% to 100%. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written as 0.5 or 50%).
These concepts have been given an axiomatic mathematical formalization in probability theory, which is used widely in areas of study such as statistics, mathematics, science, finance, gambling, artificial intelligence, machine learning, computer science, game theory, and philosophy to, for example, draw inferences about the expected frequency of events. Probability theory is also used to describe the underlying mechanics and regularities of complex systems.
Etymology
The word probability derives from the Latin probabilitas, which can also mean "probity", a measure of the authority of a witness in a legal case in Europe, and often correlated with the witness's nobility. In a sense, this differs much from the modern meaning of probability, which in contrast is a measure of the weight of empirical evidence, and is arrived at from inductive reasoning and statistical inference.
Interpretations
When dealing with random experiments – i.e., experiments that are random and well-defined – in a purely theoretical setting (like tossing a coin), probabilities can be numerically described by the number of desired outcomes, divided by the total number of all outcomes. This is referred to as theoretical probability (in contrast to empirical probability, dealing with probabilities in the context of real experiments). For example, tossing a coin twice will yield "head-head", "head-tail", "tail-head", and "tail-tail" outcomes. The probability of getting an outcome of "head-head" is 1 out of 4 outcomes, or, in numerical terms, 1/4, 0.25 or 25%. However, when it comes to practical application, there are two major competing categories of probability interpretations, whose adherents hold different views about the fundamental nature of probability:
Objectivists assign numbers to describe some objective or physical state of affairs. The most popular version of objective probability is frequentist probability, which claims that the probability of a random event denotes the relative frequency of occurrence of an experiment's outcome when the experiment is repeated indefinitely. This interpretation considers probability to be the relative frequency "in the long run" of outcomes. A modification of this is propensity probability, which interprets probability as the tendency of some experiment to yield a certain outcome, even if it is performed only once.
Subjectivists assign numbers per subjective probability, that is, as a degree of belief. The degree of belief has been interpreted as "the price at which you would buy or sell a bet that pays 1 unit of utility if E, 0 if not E", although that interpretation is not universally agreed upon. The most popular version of subjective probability is Bayesian probability, which includes expert knowledge as well as experimental data to produce probabilities. The expert knowledge is represented by some (subjective) prior probability distribution. These data are incorporated in a likelihood function. The product of the prior and the likelihood, when normalized, results in a posterior probability distribution that incorporates all the information known to date. By Aumann's agreement theorem, Bayesian agents whose prior beliefs are similar will end up with similar posterior beliefs. However, sufficiently different priors can lead to different conclusions, regardless of how much information the agents share.
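To make the contrast concrete, the following minimal Python sketch (all names and numbers are illustrative, not drawn from any source cited here) estimates a coin's chance of heads in the frequentist way, as a long-run relative frequency, and then in the Bayesian way, normalizing prior times likelihood over a grid of candidate biases:

import random

random.seed(0)

# Frequentist reading: probability as the relative frequency of heads
# over a large number of repeated tosses of a fair coin.
n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
print(heads / n)  # close to the theoretical value 0.5

# Bayesian reading: degree of belief about the coin's unknown bias p.
# Uniform prior over a grid of candidate biases, likelihood of observing
# 7 heads in 10 tosses, posterior = normalized prior * likelihood.
grid = [i / 100 for i in range(101)]
prior = [1 / len(grid)] * len(grid)
likelihood = [p ** 7 * (1 - p) ** 3 for p in grid]
unnormalized = [pr * lk for pr, lk in zip(prior, likelihood)]
total = sum(unnormalized)
posterior = [u / total for u in unnormalized]
best = max(range(len(grid)), key=lambda i: posterior[i])
print(grid[best])  # posterior mode, 0.7 for these data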
History
The scientific study of probability is a modern development of mathematics. Gambling shows that there has been an interest in quantifying the ideas of probability throughout history, but exact mathematical descriptions arose much later. There are reasons for the slow development of the mathematics of probability. Whereas games of chance provided the impetus for the mathematical study of probability, fundamental issues are still obscured by superstitions.
According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latin probabilis) meant approvable, and was applied in that sense, univocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances." However, in legal contexts especially, 'probable' could also apply to propositions for which there was good evidence.
The sixteenth-century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes).
Aside from the elementary work by Cardano, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject. Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a branch of mathematics. See Ian Hacking's The Emergence of Probability and James Franklin's The Science of Conjecture for histories of the early development of the very concept of mathematical probability.
The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation. The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that certain assignable limits define the range of all errors. Simpson also discusses continuous errors and describes a probability curve.
The first two laws of error that were proposed both originated with Pierre-Simon Laplace. The first law was published in 1774, and stated that the frequency of an error could be expressed as an exponential function of the numerical magnitude of the error, disregarding sign. The second law of error was proposed in 1778 by Laplace, and stated that the frequency of the error is an exponential function of the square of the error. The second law of error is called the normal distribution or the Gauss law. "It is difficult historically to attribute that law to Gauss, who in spite of his well-known precocity had probably not made this discovery before he was two years old."
Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors.
Adrien-Marie Legendre (1805) developed the method of least squares, and introduced it in his Nouvelles méthodes pour la détermination des orbites des comètes (New Methods for Determining the Orbits of Comets). In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error,

$$\phi(x) = c e^{-h^2 x^2},$$

where $h$ is a constant depending on precision of observation, and $c$ is a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same as John Herschel's (1850). Gauss gave the first proof that seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W.F. Donkin (1844, 1856), and Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula for r, the probable error of a single observation, is well known.
In the nineteenth century, authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, Didion and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory.
In 1906, Andrey Markov introduced the notion of Markov chains, which played an important role in stochastic processes theory and its applications. The modern theory of probability based on measure theory was developed by Andrey Kolmogorov in 1933.
On the geometric side, contributors to The Educational Times included Miller, Crofton, McColl, Wolstenholme, Watson, and Artemas Martin. See integral geometry for more information.
Theory
Like other theories, the theory of probability is a representation of its concepts in formal terms, that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are interpreted or translated back into the problem domain.
There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation (see also probability space), sets are interpreted as events and probability as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (i.e., not further analyzed), and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details.
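For reference, the laws in question can be written out; in Kolmogorov's formulation they amount to three axioms on a probability measure P defined over a σ-algebra of events, stated here in LaTeX with Ω the sample space:

\begin{align}
  &\text{(1)}\quad P(E) \ge 0 \quad \text{for every event } E,\\
  &\text{(2)}\quad P(\Omega) = 1,\\
  &\text{(3)}\quad P\!\left(\bigcup_{i=1}^{\infty} E_i\right) = \sum_{i=1}^{\infty} P(E_i) \quad \text{for pairwise disjoint events } E_1, E_2, \ldots
\end{align}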
There are other methods for quantifying uncertainty, such as the Dempster–Shafer theory or possibility theory, but those are essentially different and not compatible with the usually-understood laws of probability.
Applications
Probability theory is applied in everyday life in risk assessment and modeling. The insurance industry and markets use actuarial science to determine pricing and make trading decisions. Governments apply probabilistic methods in environmental regulation, entitlement analysis, and financial regulation.
An example of the use of probability theory in equity trading is the effect of the perceived probability of any widespread Middle East conflict on oil prices, which have ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more likely can send that commodity's prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are neither assessed independently nor necessarily rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict.
In addition to financial assessment, probability can be used to analyze trends in biology (e.g., disease spread) as well as ecology (e.g., biological Punnett squares). As with finance, risk assessment can be used as a statistical tool to calculate the likelihood of undesirable events occurring, and can assist with implementing protocols to avoid encountering such circumstances. Probability is used to design games of chance so that casinos can make a guaranteed profit, yet provide payouts to players that are frequent enough to encourage continued play.
Another significant application of probability theory in everyday life is reliability. Many consumer products, such as automobiles and consumer electronics, use reliability theory in product design to reduce the probability of failure. Failure probability may influence a manufacturer's decisions on a product's warranty.
The cache language model and other statistical language models that are used in natural language processing are also examples of applications of probability theory.
Mathematical treatment
Consider an experiment that can produce a number of results. The collection of all possible results is called the sample space of the experiment, sometimes denoted as $\Omega$. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling a die can produce six possible results. One collection of possible results gives an odd number on the die. Thus, the subset {1,3,5} is an element of the power set of the sample space of dice rolls. These collections are called "events". In this case, {1,3,5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, the event is said to have occurred.
A probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) is assigned a value of one. To qualify as a probability, the assignment of values must satisfy the requirement that for any collection of mutually exclusive events (events with no common results, such as the events {1,6}, {3}, and {2,4}), the probability that at least one of the events will occur is given by the sum of the probabilities of all the individual events.
The probability of an event A is written as $P(A)$, $p(A)$, or $\Pr(A)$. This mathematical definition of probability can extend to infinite sample spaces, and even uncountable sample spaces, using the concept of a measure.
The opposite or complement of an event A is the event [not A] (that is, the event of A not occurring), often denoted as $A'$, $A^c$, or $\neg A$; its probability is given by $P(\text{not } A) = 1 - P(A)$. As an example, the chance of not rolling a six on a six-sided die is $1 - \tfrac{1}{6} = \tfrac{5}{6}$. For a more comprehensive treatment, see Complementary event.
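These definitions can be checked by direct enumeration; a minimal Python sketch of the die example (helper names are illustrative):

from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}

def prob(event):
    # Classical probability on a fair die: favourable outcomes over total outcomes.
    return Fraction(len(event & sample_space), len(sample_space))

odd = {1, 3, 5}               # the event "the die falls on some odd number"
print(prob(odd))              # 1/2
print(prob(sample_space))     # the event made of all results has probability 1
print(1 - prob({6}))          # complement rule: not rolling a six is 5/6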
If two events A and B occur on a single performance of an experiment, this is called the intersection or joint probability of A and B, denoted as $P(A \cap B)$.
Independent events
If two events, A and B, are independent, then the joint probability is

$$P(A \text{ and } B) = P(A \cap B) = P(A)P(B).$$
For example, if two coins are flipped, then the chance of both being heads is $\tfrac{1}{2} \times \tfrac{1}{2} = \tfrac{1}{4}$.
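This, too, can be verified by enumeration; a minimal Python sketch (helper names are illustrative):

from fractions import Fraction
from itertools import product

# All four equally likely outcomes of flipping two fair coins.
outcomes = list(product("HT", repeat=2))

def prob(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

p_both = prob(lambda o: o == ("H", "H"))
p_first = prob(lambda o: o[0] == "H")
p_second = prob(lambda o: o[1] == "H")

# Independence: the joint probability equals the product of the marginals.
assert p_both == p_first * p_second == Fraction(1, 4)
print(p_both)  # 1/4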
Mutually exclusive events
If either event A or event B can occur but never both simultaneously, then they are called mutually exclusive events.
If two events are mutually exclusive, then the probability of both occurring is denoted as $P(A \cap B)$, and

$$P(A \text{ and } B) = P(A \cap B) = 0.$$

If two events are mutually exclusive, then the probability of either occurring is denoted as $P(A \cup B)$, and

$$P(A \text{ or } B) = P(A \cup B) = P(A) + P(B) - P(A \cap B) = P(A) + P(B) - 0 = P(A) + P(B).$$
For example, the chance of rolling a 1 or 2 on a six-sided die is $P(1 \text{ or } 2) = P(1) + P(2) = \tfrac{1}{6} + \tfrac{1}{6} = \tfrac{1}{3}$.
Not (necessarily) mutually exclusive events
If the events are not (necessarily) mutually exclusive then

$$P(A \text{ or } B) = P(A) + P(B) - P(A \text{ and } B).$$

Rewritten,

$$P(A \cup B) = P(A) + P(B) - P(A \cap B).$$
For example, when drawing a card from a deck of cards, the chance of getting a heart or a face card (J, Q, K) (or both) is $\tfrac{13}{52} + \tfrac{12}{52} - \tfrac{3}{52} = \tfrac{11}{26}$, since among the 52 cards of a deck, 13 are hearts, 12 are face cards, and 3 are both: here the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards", but should only be counted once.
This can be expanded further for multiple not (necessarily) mutually exclusive events. For three events, this proceeds as follows:

$$P(A \cup B \cup C) = P(A) + P(B) + P(C) - P(A \cap B) - P(A \cap C) - P(B \cap C) + P(A \cap B \cap C).$$

It can be seen, then, that this pattern can be repeated for any number of events.
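The card example above can be verified by brute-force enumeration; a minimal Python sketch (identifiers are illustrative):

from fractions import Fraction
from itertools import product

ranks = [str(n) for n in range(2, 11)] + ["J", "Q", "K", "A"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = list(product(ranks, suits))  # 52 cards

hearts = {card for card in deck if card[1] == "hearts"}        # 13 cards
faces = {card for card in deck if card[0] in ("J", "Q", "K")}  # 12 cards

def prob(event):
    return Fraction(len(event), len(deck))

# Inclusion-exclusion: the 3 cards that are both hearts and face cards
# are counted only once in the union.
lhs = prob(hearts | faces)
rhs = prob(hearts) + prob(faces) - prob(hearts & faces)
assert lhs == rhs == Fraction(11, 26)
print(lhs)  # 11/26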
Conditional probability
Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written $P(A \mid B)$, and is read "the probability of A, given B". It is defined by

$$P(A \mid B) = \frac{P(A \cap B)}{P(B)}.$$
If $P(B) = 0$ then $P(A \mid B)$ is formally undefined by this expression. In this case $A$ and $B$ are independent, since $P(A \cap B) = P(A)P(B) = 0$. However, it is possible to define a conditional probability for some zero-probability events, for example by using a σ-algebra of such events (such as those arising from a continuous random variable).
For example, in a bag of 2 red balls and 2 blue balls (4 balls in total), the probability of taking a red ball is $\tfrac{1}{2}$; however, when taking a second ball, the probability of it being either a red ball or a blue ball depends on the ball previously taken. For example, if a red ball was taken, then the probability of picking a red ball again would be $\tfrac{1}{3}$, since only 1 red and 2 blue balls would have been remaining. And if a blue ball was taken previously, the probability of taking a red ball will be $\tfrac{2}{3}$.
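The urn example can likewise be enumerated exhaustively; a short Python sketch of the definition of conditional probability (names are illustrative):

from fractions import Fraction
from itertools import permutations

# All equally likely ordered draws of two balls, without replacement,
# from an urn of 2 red (R) and 2 blue (B) balls.
urn = ["R1", "R2", "B1", "B2"]
draws = list(permutations(urn, 2))  # 12 ordered pairs

def prob(event):
    return Fraction(sum(1 for d in draws if event(d)), len(draws))

p_first_red = prob(lambda d: d[0][0] == "R")
p_both_red = prob(lambda d: d[0][0] == "R" and d[1][0] == "R")

# Definition of conditional probability: P(A | B) = P(A and B) / P(B).
print(p_first_red)               # 1/2
print(p_both_red / p_first_red)  # 1/3, matching the text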
Inverse probability
In probability theory and applications, Bayes' rule relates the odds of event $A_1$ to event $A_2$, before (prior to) and after (posterior to) conditioning on another event $B$. The odds on $A_1$ to event $A_2$ is simply the ratio of the probabilities of the two events. When arbitrarily many events $A$ are of interest, not just two, the rule can be rephrased as posterior is proportional to prior times likelihood, $P(A \mid B) \propto P(A)\,P(B \mid A)$, where the proportionality symbol means that the left hand side is proportional to (i.e., equals a constant times) the right hand side as $A$ varies, for fixed or given $B$ (Lee, 2012; Bertsch McGrayne, 2012). In this form it goes back to Laplace (1774) and to Cournot (1843); see Fienberg (2005).
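As a tiny numerical illustration of "posterior is proportional to prior times likelihood", consider the following Python sketch; the two-urn setup is hypothetical, invented purely for the example:

from fractions import Fraction

# Hypothetical setup: urn 1 holds 3 red and 1 blue ball, urn 2 holds
# 1 red and 3 blue; an urn is chosen at random and a red ball is drawn.
prior = {"urn1": Fraction(1, 2), "urn2": Fraction(1, 2)}
likelihood_red = {"urn1": Fraction(3, 4), "urn2": Fraction(1, 4)}

# Posterior proportional to prior times likelihood, then normalized.
unnormalized = {h: prior[h] * likelihood_red[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: u / total for h, u in unnormalized.items()}
print(posterior)  # {'urn1': Fraction(3, 4), 'urn2': Fraction(1, 4)}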
Summary of probabilities

The rules above can be summarized as follows:

Event A: probability $P(A) \in [0,1]$
Not A: $P(A^c) = 1 - P(A)$
A or B: $P(A \cup B) = P(A) + P(B) - P(A \cap B)$; equals $P(A) + P(B)$ if A and B are mutually exclusive
A and B: $P(A \cap B) = P(A \mid B)P(B) = P(B \mid A)P(A)$; equals $P(A)P(B)$ if A and B are independent
A given B: $P(A \mid B) = P(A \cap B)/P(B) = P(B \mid A)P(A)/P(B)$
Relation to randomness and probability in quantum mechanics
In a deterministic universe, based on Newtonian concepts, there would be no probability if all conditions were known (Laplace's demon) (but there are situations in which sensitivity to initial conditions exceeds our ability to measure them, i.e. know them). In the case of a roulette wheel, if the force of the hand and the period of that force are known, the number on which the ball will stop would be a certainty (though as a practical matter, this would likely be true only of a roulette wheel that had not been exactly levelled – as Thomas A. Bass' Newtonian Casino revealed). This also assumes knowledge of inertia and friction of the wheel, weight, smoothness, and roundness of the ball, variations in hand speed during the turning, and so forth. A probabilistic description can thus be more useful than Newtonian mechanics for analyzing the pattern of outcomes of repeated rolls of a roulette wheel. Physicists face the same situation in the kinetic theory of gases, where the system, while deterministic in principle, is so complex (with the number of molecules typically the order of magnitude of the Avogadro constant, $6.02 \times 10^{23}$) that only a statistical description of its properties is feasible.
Probability theory is required to describe quantum phenomena. A revolutionary discovery of early 20th century physics was the random character of all physical processes that occur at sub-atomic scales and are governed by the laws of quantum mechanics. The objective wave function evolves deterministically but, according to the Copenhagen interpretation, it deals with probabilities of observing, the outcome being explained by a wave function collapse when an observation is made. However, the loss of determinism for the sake of instrumentalism did not meet with universal approval. Albert Einstein famously remarked in a letter to Max Born: "I am convinced that God does not play dice". Like Einstein, Erwin Schrödinger, who discovered the wave function, believed quantum mechanics is a statistical approximation of an underlying deterministic reality. In some modern interpretations of the statistical mechanics of measurement, quantum decoherence is invoked to account for the appearance of subjectively probabilistic experimental outcomes.
See also
Contingency
Equiprobability
Fuzzy logic
Heuristic (psychology)
Notes
References
Bibliography
Kallenberg, O. (2005) Probabilistic Symmetries and Invariance Principles. Springer-Verlag, New York. 510 pp.
Kallenberg, O. (2002) Foundations of Modern Probability, 2nd ed. Springer Series in Statistics. 650 pp.
Olofsson, Peter (2005) Probability, Statistics, and Stochastic Processes, Wiley-Interscience. 504 pp.
External links
Virtual Laboratories in Probability and Statistics (Univ. of Ala.-Huntsville)
Probability and Statistics EBook
Edwin Thompson Jaynes. Probability Theory: The Logic of Science. Preprint: Washington University, (1996). – HTML index with links to PostScript files and PDF (first three chapters)
People from the History of Probability and Statistics (Univ. of Southampton)
Probability and Statistics on the Earliest Uses Pages (Univ. of Southampton)
Earliest Uses of Symbols in Probability and Statistics on Earliest Uses of Various Mathematical Symbols
A tutorial on probability and Bayes' theorem devised for first-year Oxford University students
La Monte Young, pdf file of An Anthology of Chance Operations (1963) at UbuWeb
Introduction to Probability – eBook , by Charles Grinstead, Laurie Snell Source (GNU Free Documentation License)
Bruno de Finetti, Probabilità e induzione, Bologna, CLUEB, 1993. (digital version)
Richard Feynman's Lecture on probability. | Probability | Physics,Mathematics | 4,263 |
8,531,319 | https://en.wikipedia.org/wiki/Favard%20constant | In mathematics, the Favard constant, also called the Akhiezer–Krein–Favard constant, of order r is defined as
This constant is named after the French mathematician Jean Favard, and after the Soviet mathematicians Naum Akhiezer and Mark Krein.
Particular values

The first few values follow directly from the defining series: $K_0 = 1$, $K_1 = \frac{\pi}{2}$, $K_2 = \frac{\pi^2}{8}$, and $K_3 = \frac{\pi^3}{24}$.
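These values can be checked numerically from the defining series; a minimal Python sketch (the truncation length is an arbitrary choice, and convergence is slow for small r):

from math import pi

def favard(r, terms=100_000):
    # Partial sum of K_r = (4/pi) * sum_{k>=0} ((-1)^k / (2k+1))^(r+1).
    return 4 / pi * sum(((-1) ** k / (2 * k + 1)) ** (r + 1) for k in range(terms))

print(favard(1), pi / 2)        # K_1, approximately 1.5708
print(favard(2), pi ** 2 / 8)   # K_2, approximately 1.2337
print(favard(3), pi ** 3 / 24)  # K_3, approximately 1.2919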
Uses
This constant is used in solutions of several extremal problems, for example
Favard's constant is the sharp constant in Jackson's inequality for trigonometric polynomials
the sharp constants in the Landau–Kolmogorov inequality are expressed via Favard's constants
Norms of periodic perfect splines.
References
Mathematical constants | Favard constant | Mathematics | 129 |
27,524,533 | https://en.wikipedia.org/wiki/Jeremy%20O%27Brien | Jeremy O'Brien (born 1975, Australia) is a physicist who researches in quantum optics, optical quantum metrology and quantum information science. He co-founded and is CEO of the quantum computing firm PsiQuantum. Formerly, he was Professorial Research Fellow in Physics and Electrical Engineering at the University of Bristol, and director of its Centre for Quantum Photonics.
His work in optical quantum computing has included the demonstration of the first optical quantum controlled NOT gate.
Honours and awards
2009 European Quantum Information Young Investigator Award
2010 Adolph Lomb Medal of the Optical Society of America
2010 IUPAP Prize in Atomic Molecular and Optical Physics
2010 Moseley medal and prize of the Institute of Physics
2010 Daiwa Adrian Prize
2011 Elected to the Global Young Academy
Selected publications
References
External links
Jeremy O'Brien: Home page. Department of Physics, University of Bristol.
Professor Jeremy O'Brien. Department of Electrical & Electronic Engineering, University of Bristol.
Jeremy O'Brien: Recent publications. Quantum Computation & Information Group, University of Bristol.
Academics of the University of Bristol
Living people
1975 births
Australian physicists
Quantum physicists
Quantum information scientists
Fellows of the American Physical Society | Jeremy O'Brien | Physics | 232 |
5,917,239 | https://en.wikipedia.org/wiki/Schoenus | Schoenus (; , schoinos, "rush rope"; , "river-measure") was an ancient Egyptian, Greek and Roman unit of length and area based on the knotted cords first used in Egyptian surveying.
Length
The Greeks, who adopted it from the Egyptians, generally considered the schoinos equal to 40 stades, but neither the schoinos nor the stadion had an absolute value, and there were several regional variants of each. Strabo noted that it also varied with terrain, and that when he "ascended the hills, the measures of these schoeni were not everywhere uniform, so that the same number sometimes designated a greater, sometimes a less actual extent of road, a variation which dates from the earliest time and exists in our days." Herodotus (2.6 and 2.149) says that the schoenus is 60 stadia. This agrees with the distance implied by the Triacontaschoenus stretching south of the First Cataract in Roman-era Nubia. Pliny the Elder (5.11) says that it is 30 stadia. Strabo (17.1.24) gives, according to the place, between 30 and 120 stadia. Isidore of Charax's schoenus, used in his Parthian Stations, has been given values between 4.7 and 5.5 kilometers, but the precise value remains controversial given the known errors in some of his distances.
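As rough arithmetic only, the reported figures can be converted to modern units under an assumed stadion length; in the Python sketch below, the 185-metre stadion is just a conventional assumption, since the stadion itself varied:

STADION_M = 185  # assumed metres per stadion; regional values differed

def schoenus_km(stadia):
    # Length of a schoenus, in kilometres, given its value in stadia.
    return stadia * STADION_M / 1000

for source, stadia in [("Herodotus", 60), ("Pliny", 30), ("common Greek value", 40)]:
    print(source, schoenus_km(stadia), "km")
# Herodotus 11.1 km, Pliny 5.55 km, common Greek value 7.4 km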
The Byzantine schoinion or "little schoenus" (σχοινίον, skhoinion) was equal to 33⅓ stades.
Area
The Romans also used the schoenus as a unit of area, equivalent to the actus quadratus or half-jugerum, formed by a square with sides of 120 Roman feet. The Heraclean Tables admonished that each schoenus should be planted with 4 olive trees and some grape vines, upon penalty of fines.
See also
Egyptian, Greek, and Roman units
Rope and knot, related units
Knotted cord, the surveying tool initially responsible for the schoenus
References
Obsolete units of measurement
Units of length
Ancient Greek units of measurement | Schoenus | Mathematics | 439 |
596,600 | https://en.wikipedia.org/wiki/Relatively%20compact%20subspace | In mathematics, a relatively compact subspace (or relatively compact subset, or precompact subset) of a topological space is a subset whose closure is compact.
Properties
Every subset of a compact topological space is relatively compact (since a closed subset of a compact space is compact). And in an arbitrary topological space every subset of a relatively compact set is relatively compact.
Every compact subset of a Hausdorff space is relatively compact. In a non-Hausdorff space, such as the particular point topology on an infinite set, the closure of a compact subset is not necessarily compact; said differently, a compact subset of a non-Hausdorff space is not necessarily relatively compact.
Every compact subset of a (possibly non-Hausdorff) topological vector space is complete and relatively compact.
In the case of a metric topology, or more generally when sequences may be used to test for compactness, the criterion for relative compactness becomes that any sequence in the subset has a subsequence convergent in the ambient space.
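A standard example, stated in LaTeX, illustrates both the definition and this sequential criterion:

\[
(0,1) \subset \mathbb{R} \text{ is relatively compact: } \overline{(0,1)} = [0,1] \text{ is compact, and every sequence in } (0,1) \text{ is bounded, hence has a subsequence convergent in } \mathbb{R}.
\]
\[
\mathbb{N} \subset \mathbb{R} \text{ is not relatively compact: } \overline{\mathbb{N}} = \mathbb{N} \text{ is unbounded, and the sequence } x_n = n \text{ has no convergent subsequence.}
\]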
Some major theorems characterize relatively compact subsets, in particular in function spaces. An example is the Arzelà–Ascoli theorem. Other cases of interest relate to uniform integrability, and the concept of normal family in complex analysis. Mahler's compactness theorem in the geometry of numbers characterizes relatively compact subsets in certain non-compact homogeneous spaces (specifically spaces of lattices).
Counterexample
As a counterexample take any finite neighbourhood of the particular point of an infinite particular point space. The neighbourhood itself is compact but is not relatively compact because its closure is the whole non-compact space.
Almost periodic functions
The definition of an almost periodic function at a conceptual level has to do with the translates of the function being a relatively compact set. This needs to be made precise in terms of the topology used, in a particular theory.
See also
Compactly embedded
Totally bounded space
References
Page 12 of V. Khatskevich, D. Shoikhet, Differentiable Operators and Nonlinear Equations, Birkhäuser Verlag AG, Basel, 1993, 270 pp. at Google Books
Properties of topological spaces
Compactness (mathematics) | Relatively compact subspace | Mathematics | 439 |
5,264,264 | https://en.wikipedia.org/wiki/Erwin%20McManus | Erwin Raphael McManus (born August 28, 1958) is an author, filmmaker, and fashion designer. He is the lead pastor of Mosaic, a megachurch based in Los Angeles. Erwin is a speaker on issues related to postmodernism and postmodern Christianity, and also writes and lectures on culture, identity, change, and other topics.
Personal life
Born Irving Rafael Mesa-Cardona, in San Salvador, El Salvador, McManus was raised by his grandparents for the first years of his life. McManus, along with his brother Alex, immigrated to the United States when he was young, joining his mother. He grew up on the east coast – primarily in Miami, Florida, but also in Queens, NY and Raleigh, NC.
The name McManus comes from his mother's second marriage. McManus was not a legal name, but his stepfather's alias and later was legalized by McManus in adulthood.
McManus originally studied philosophy at Elon University, before transferring to and graduating, with a B.A., in psychology from the University of North Carolina at Chapel Hill. He later obtained his M.Div. from Southwestern Baptist Theological Seminary in Fort Worth, Texas. McManus also received a Doctorate in Humane Letters, an honorary degree, from Southeastern University in Lakeland, Florida.
He and his wife Kim have three children.
Christian minister
While working on his master's degree in Dallas/Fort Worth, McManus began two congregations while serving among the urban poor.
He became the Metropolitan Consultant and City Strategist for the city of Dallas and served in that role as a Futurist and Urbanologist from 1990 to 1993.
He moved with his family to Los Angeles in 1993 and became senior pastor of The Church on Brady in East Los Angeles. He later changed the name to Mosaic. Since the 1990s, McManus has been associated with the Leadership Network. In addition, he partners with Bethel Seminary as a lecturer.
As lead pastor of Mosaic, McManus speaks in multiple Sunday gatherings in Hollywood, California. Mosaic has opened multiple campuses, including Hollywood, Venice, Pasadena, Orange County, and Seattle.
McManus has used psychological personality metrics in his church, such as the Myers-Briggs type indicator and Gallup's Strengths Finder. He is currently an advisor with the Awaken Group, a transformation design company.
Motivational speaker
McManus is a motivational speaker and has been paid to speak by Fortune 100 and Fortune 500 companies and by organizations such as Nike, HYPEBEAST, Sony, Fox News, Apple, Liberty University, and TED. He speaks on topics such as reinventing organizations, culture, community, equality, humanity, leadership, general motivation, futuristic ideas, inspiration, and spirituality.
He is paid between $20,000 and $30,000 per public speaking engagement.
Controversies
Stance on LGBTQ and human equality
On May 28, 2019, McManus said in HYPEBEAST that Mosaic, the church where he is senior pastor and CEO, is inclusive of the gay (LGBTQ) community. "I don't have data on this, but I'm going to guess that we probably have more people who identify themselves in the gay community at Mosaic," he adds. "And so our posture has always been we're for everybody." On June 25, 2019, the writer of the article, Emily Jensen, updated it after several verified ex-Mosaic staff members claimed that McManus's stated position on the LGBTQ community is false, and that he and the church itself are anti-LGBTQ. They also claimed that McManus's leadership training spoke less of women, that the church treated the homeless poorly, and that LGBTQ members are not allowed to hold church leadership positions.
McManus responded by saying, "To be clear anyone and everyone is welcome at Mosaic. Mosaic is one of the most diverse communities that exists. There are currently many people of varied race, color, ethnicity, economic status, sexual orientation, and religious beliefs who attend Mosaic."
Author
McManus has written fourteen books:
An Unstoppable Force: Daring to Become the Church God Had in Mind () (June 1, 2001)
Uprising: A Revolution of the Soul () (September 4, 2003)
The Church in Emerging Culture: Five Perspectives () (October 1, 2003) (this book is not by McManus but by Michael Horton)
The Barbarian Way: Unleash the Untamed Faith Within () (February 10, 2005)
Chasing Daylight: Seize the Power of Every Moment () (January 10, 2006)
Stand Against the Wind: Fuel for the Revolution of Your Soul () (February 14, 2006)
Seizing Your Divine Moment () (June 30, 2006)
Soul Cravings () (November 14, 2006)
Wide Awake () (July 1, 2008)
The Artisan Soul: Crafting Your Life Into A Work Of Art () (February 25, 2014)
The Last Arrow: Save Nothing for the Next Life () (September 5, 2017)
The Way of the Warrior: An Ancient Path to Inner Peace () (February 26, 2019)
The Genius of Jesus: The Man Who Changed Everything () (September 14, 2021)
Mind Shift: It doesn't take a genius to think like one () (October 3, 2023)
Filmmaker
Since 2004, McManus has worked as a filmmaker in multiple roles. McManus started and owned companies in both the fashion industry and film industry.
In 2013 McManus Studios and McManus shut down after investors pulled funding from the failing business. McManus took out loans to pay employees and finish contracted projects.
In 2020, McManus launched a TV Show called McManus on Trinity Broadcasting Network's Hillsong Channel with his son Aaron McManus. The show includes a group of millennials discussing the "hottest, hardest topics" and current events.
Selected filmography
References
External links
McManus Fashion Gallery
1958 births
Clergy from Los Angeles
Baptist writers
Futurologists
Living people
Promise Keepers
Emerging church movement
Salvadoran emigrants to the United States
20th-century Baptist ministers from the United States
21st-century Baptist ministers from the United States
American evangelists
Writers from Los Angeles
Urban planning | Erwin McManus | Engineering | 1,281 |
21,631,514 | https://en.wikipedia.org/wiki/Late%20protein | A late protein is a viral protein that is formed after replication of the virus. One example is VP4 from simian virus 40 (SV40).
In Human papillomaviruses
In Human papillomavirus (HPV), two late proteins are involved in capsid formation: a major (L1) and a minor (L2) protein, in the approximate proportion 95:5. L1 forms a pentameric assembly unit of the viral shell in a manner that closely resembles VP1 from polyomaviruses. Intermolecular disulphide bonding holds the L1 capsid proteins together. L1 capsid proteins can bind via their nuclear localisation signal (NLS) to karyopherins Kapbeta(2) and Kapbeta(3) and inhibit the Kapbeta(2) and Kapbeta(3) nuclear import pathways during the productive phase of the viral life cycle. Surface loops on L1 pentamers contain sites of sequence variation between HPV types. L2 minor capsid proteins enter the nucleus twice during infection: in the initial phase after virion disassembly, and in the productive phase when they assemble into replicated virions along with L1 major capsid proteins. L2 proteins contain two nuclear localisation signals (NLSs), one at the N-terminal (nNLS) and the other at the C-terminal (cNLS). L2 uses its NLSs to interact with a network of karyopherins in order to enter the nucleus via several import pathways. L2 from HPV types 11 and 16 was shown to interact with karyopherins Kapbeta(2) and Kapbeta(3). L2 capsid proteins can also interact with viral dsDNA, facilitating its release from the endocytic compartment after viral uncoating.
See also
Early protein
References
Protein families
Protein domains
Proteins
Viral protein class | Late protein | Chemistry,Biology | 411 |
51,226,280 | https://en.wikipedia.org/wiki/List%20of%20Lithocarpus%20species | Plants of the World Online recognises about 330 accepted taxa (of species and infraspecific names) in the plant genus Lithocarpus of the beech family Fagaceae. Individual species are described in detail on www.asianfagaceae.com.
A
Lithocarpus acuminatus
Lithocarpus aggregatus
Lithocarpus ailaoensis
Lithocarpus amherstianus
Lithocarpus amoenus
Lithocarpus amygdalifolius
Lithocarpus andersonii
Lithocarpus annamensis
Lithocarpus annamitorus
Lithocarpus apoensis
Lithocarpus apricus
Lithocarpus arcaulus
Lithocarpus areca
Lithocarpus aspericupulus
Lithocarpus atjehensis
Lithocarpus attenuatus
Lithocarpus auriculatus
B
Lithocarpus bacgiangensis
Lithocarpus balansae
Lithocarpus bancanus
Lithocarpus bassacensis
Lithocarpus beccarianus
Lithocarpus bennettii
Lithocarpus bentramensis
Lithocarpus bicoloratus
Lithocarpus blaoensis
Lithocarpus blumeanus
Lithocarpus bolovenensis
Lithocarpus bonnetii
Lithocarpus brachystachyus
Lithocarpus braianensis
Lithocarpus brassii
Lithocarpus brevicaudatus
Lithocarpus brochidodromus
Lithocarpus bullatus
Lithocarpus burkillii
C
Lithocarpus calolepis
Lithocarpus calophyllus
Lithocarpus cambodiensis
Lithocarpus campylolepis
Lithocarpus cantleyanus
Lithocarpus carolinae
Lithocarpus castellarnauianus
Lithocarpus caudatifolius
Lithocarpus caudatilimbus
Lithocarpus celebicus
Lithocarpus cerifer
Lithocarpus chevalieri
Lithocarpus chienchuanensis
Lithocarpus chifui
Lithocarpus chittagongus
Lithocarpus chiungchungensis
Lithocarpus chrysocomus
Lithocarpus cinereus
Lithocarpus clathratus
Lithocarpus cleistocarpus
Lithocarpus clementianus
Lithocarpus coalitus
Lithocarpus coinhensis
Lithocarpus concentricus
Lithocarpus confertus
Lithocarpus confinis
Lithocarpus confragosus
Lithocarpus conocarpus
Lithocarpus coopertus
Lithocarpus corneri
Lithocarpus corneus
var. hainanensis
var. zonatus
Lithocarpus cottonii
Lithocarpus craibianus
Lithocarpus crassifolius
Lithocarpus crassinervius
Lithocarpus cryptocarpus
Lithocarpus cucullatus
Lithocarpus curtisii
Lithocarpus cyclophorus
Lithocarpus cyrtocarpus
D
Lithocarpus dalatensis
Lithocarpus damiaoshanicus
Lithocarpus daphnoideus
Lithocarpus dasystachyus
Lithocarpus dealbatus
subsp. leucostachyus
Lithocarpus debaryanus
Lithocarpus dinhensis
Lithocarpus dodonaeifolius
Lithocarpus dolichostachys
Lithocarpus ducampii
E
Lithocarpus echinifer
Lithocarpus echinocarpus
Lithocarpus echinophorus
Lithocarpus echinops
Lithocarpus echinotholus
Lithocarpus echinulatus
Lithocarpus edulis
Lithocarpus eichleri
Lithocarpus elaeagnifolius
Lithocarpus elegans
Lithocarpus elephantum
Lithocarpus elizabethiae
Lithocarpus elmerrillii
Lithocarpus encleisacarpus
Lithocarpus eriobotryoides
Lithocarpus erythrocarpus
Lithocarpus eucalyptifolius
Lithocarpus ewyckii
F
Lithocarpus falconeri
Lithocarpus fangii
Lithocarpus farinulentus
Lithocarpus fenestratus
Lithocarpus fenzelianus
Lithocarpus ferrugineus
Lithocarpus floccosus
Lithocarpus fohaiensis
Lithocarpus fordianus
Lithocarpus formosanus
G
Lithocarpus gaoligongensis
Lithocarpus garrettianus
Lithocarpus gigantophyllus
Lithocarpus glaber
Lithocarpus glaucus
Lithocarpus glutinosus
Lithocarpus gougerotae
Lithocarpus gracilis
Lithocarpus guinieri
Lithocarpus gymnocarpus
H
Lithocarpus haipinii
Lithocarpus hallieri
Lithocarpus hancei
Lithocarpus handelianus
Lithocarpus harlandii
Lithocarpus harmandii
Lithocarpus hatusimae
Lithocarpus havilandii
Lithocarpus hendersonianus
Lithocarpus henryi
Lithocarpus himalaicus
Lithocarpus honbaensis
Lithocarpus howii
Lithocarpus hypoglaucus
Lithocarpus hystrix
I
Lithocarpus imperialis
Lithocarpus indutus
Lithocarpus irwinii
Lithocarpus iteaphyllus
Lithocarpus ithyphyllus
J
Lithocarpus jacksonianus
Lithocarpus jacobsii
Lithocarpus javensis
Lithocarpus jenkinsii
Lithocarpus jordanae
K
Lithocarpus kalkmanii
Lithocarpus kamengii
Lithocarpus kawakamii
Lithocarpus kemmaratensis
Lithocarpus keningauensis
Lithocarpus kingianus
Lithocarpus kochummenii
Lithocarpus konishii
Lithocarpus kontumensis
Lithocarpus korthalsii
Lithocarpus kostermansii
Lithocarpus kozlovii
Lithocarpus kunstleri
L
Lithocarpus laetus
Lithocarpus lampadarius
Lithocarpus laoticus
Lithocarpus laouanensis
Lithocarpus lappaceus
Lithocarpus lauterbachii
Lithocarpus leiocarpus
Lithocarpus leiophyllus
Lithocarpus leiostachyus
Lithocarpus lemeeanus
Lithocarpus lepidocarpus
Lithocarpus leptogyne
Lithocarpus leucodermis
Lithocarpus levis
Lithocarpus licentii
Lithocarpus lindleyanus
Lithocarpus listeri
Lithocarpus lithocarpaeus
Lithocarpus litseifolius
Lithocarpus longanoides
Lithocarpus longipedicellatus
Lithocarpus longzhouicus
Lithocarpus loratifolius
Lithocarpus lucidus
Lithocarpus luteus
Lithocarpus luzoniensis
Lithocarpus lycoperdon
M
Lithocarpus macilentus
Lithocarpus macphailii
Lithocarpus magneinii
Lithocarpus magnificus
Lithocarpus maingayi
Lithocarpus mairei
Lithocarpus mariae
Lithocarpus megacarpus
Lithocarpus megalophyllus
Lithocarpus megastachyus
Lithocarpus meijeri
Lithocarpus mekongensis
Lithocarpus melanochromus
Lithocarpus melataiensis
Lithocarpus menadoensis
Lithocarpus mianningensis
Lithocarpus microbalanus
Lithocarpus microlepis
Lithocarpus milroyi
Lithocarpus mindanaensis
Lithocarpus moluccus
Lithocarpus monticolus
Lithocarpus muluensis
N
Lithocarpus naiadarum
Lithocarpus nantoensis
Lithocarpus nebularum
Lithocarpus neorobinsonii
Lithocarpus nhatrangensis
Lithocarpus nieuwenhuisii
Lithocarpus nitidinux
Lithocarpus nodosus
O
Lithocarpus oblanceolatus
Lithocarpus oblancifolius
Lithocarpus obovalifolius
Lithocarpus obovatilimbus
Lithocarpus obscurus
Lithocarpus ochrocarpus
Lithocarpus oleifolius
Lithocarpus ollus
Lithocarpus ombrophilus
Lithocarpus oogyne
Lithocarpus orbicarpus
Lithocarpus orbicularis
Lithocarpus ovalis
P
Lithocarpus pachycarpus
Lithocarpus pachylepis
Lithocarpus pachyphyllus
var. fruticosus
Lithocarpus paihengii
Lithocarpus pakhaensis
Lithocarpus pallidus
Lithocarpus palungensis
Lithocarpus paniculatus
Lithocarpus papillifer
Lithocarpus parvulus
Lithocarpus pattaniensis
Lithocarpus paviei
Lithocarpus perakensis
Lithocarpus petelotii
Lithocarpus phansipanensis
Lithocarpus philippinensis
Lithocarpus pierrei
Lithocarpus platycarpus
Lithocarpus platyphyllus
Lithocarpus polystachyus
Lithocarpus porcatus
Lithocarpus proboscideus
Lithocarpus propinquus
Lithocarpus psammophilus
Lithocarpus pseudokunstleri
Lithocarpus pseudomagneinii
Lithocarpus pseudomoluccus
Lithocarpus pseudoreinwardtii
Lithocarpus pseudosundaicus
Lithocarpus pseudovestitus
Lithocarpus pseudoxizangensis
Lithocarpus pulcher
Lithocarpus pulongtauensis
Lithocarpus pusillus
Lithocarpus pycnostachys
Q
Lithocarpus qinzhouicus
Lithocarpus quangnamensis
Lithocarpus quercifolius
R
Lithocarpus rassa
Lithocarpus recurvatus
Lithocarpus reinwardtii
Lithocarpus revolutus
Lithocarpus rhabdostachyus
Lithocarpus rigidus
Lithocarpus robinsonii
Lithocarpus rosthornii
Lithocarpus rotundatus
Lithocarpus rouletii
Lithocarpus rufescens
Lithocarpus rufovillosus
Lithocarpus rufus
Lithocarpus ruminatus
S
Lithocarpus sandakanensis
Lithocarpus scortechinii
Lithocarpus scyphiger
Lithocarpus sericobalanos
Lithocarpus shinsuiensis
Lithocarpus shunningensis
Lithocarpus siamensis
Lithocarpus silvicolarum
Lithocarpus skanianus
Lithocarpus sogerensis
Lithocarpus solerianus
Lithocarpus songkoensis
Lithocarpus sootepensis
Lithocarpus sphaerocarpus
Lithocarpus stenopus
Lithocarpus stonei
Lithocarpus submonticolus
Lithocarpus suffruticosus
Lithocarpus sulitii
Lithocarpus sundaicus
Lithocarpus syncarpus
T
Lithocarpus tabularis
Lithocarpus taitoensis
Lithocarpus talangensis
Lithocarpus tapanuliensis
Lithocarpus tawaiensis
Lithocarpus tenuilimbus
Lithocarpus tenuinervis
Lithocarpus tephrocarpus
Lithocarpus thomsonii
Lithocarpus toumorangensis
Lithocarpus touranensis
Lithocarpus trachycarpus
Lithocarpus triqueter
Lithocarpus truncatus
Lithocarpus tubulosus
Lithocarpus turbinatus
U
Lithocarpus uraianus
Lithocarpus urceolaris
Lithocarpus uvariifolius
var. ellipticus
V
Lithocarpus variolosus
Lithocarpus vestitus
Lithocarpus vidalianus
Lithocarpus vidalii
Lithocarpus vinhensis
Lithocarpus vinkii
W
Lithocarpus wallichianus
Lithocarpus woodii
Lithocarpus wrayi
X
Lithocarpus xizangensis
Lithocarpus xylocarpus
Y
Lithocarpus yangchunensis
Lithocarpus yersinii
Lithocarpus yongfuensis
References
Lithocarpus
Fagaceae
Taxonomy (biology) | List of Lithocarpus species | Biology | 2,678 |
6,440,859 | https://en.wikipedia.org/wiki/Protection%20Profile | A Protection Profile (PP) is a document used as part of the certification process according to ISO/IEC 15408 and the Common Criteria (CC). As the generic form of a Security Target (ST), it is typically created by a user or user community and provides an implementation independent specification of information assurance security requirements. A PP is a combination of threats, security objectives, assumptions, security functional requirements (SFRs), security assurance requirements (SARs) and rationales.
A PP specifies generic security evaluation criteria to substantiate vendors' claims of a given family of information system products. Among others, it typically specifies the Evaluation Assurance Level (EAL), a number 1 through 7, indicating the depth and rigor of the security evaluation, usually in the form of supporting documentation and testing, that a product meets the security requirements specified in the PP.
The National Institute of Standards and Technology (NIST) and the National Security Agency (NSA) have agreed to cooperate on the development of validated U.S. government PPs.
Purpose
A PP states a security problem rigorously for a given collection of systems or products, known as the Target of Evaluation (TOE), and specifies security requirements to address that problem without dictating how these requirements will be implemented. A PP may inherit requirements from one or more other PPs.
In order to get a product evaluated and certified according to the CC, the product vendor has to define a Security Target (ST) which may comply with one or more PPs.
In this way a PP may serve as a template for the product's ST.
Problem areas
Although the EAL is easiest for laymen to compare, its simplicity is deceptive because this number is rather meaningless without an understanding of the security implications of the PP(s) and ST used for the evaluation. Technically, comparing evaluated products requires assessing both the EAL and the functional requirements. Unfortunately, interpreting the security implications of the PP for the intended application requires very strong IT security expertise. Evaluating a product is one thing, but deciding if some product's CC evaluation is adequate for a particular application is quite another. It is not obvious which trusted agency possesses the depth of IT security expertise needed to evaluate the applicability of Common Criteria evaluated products to particular systems.
The problem of applying evaluations is not new. This problem was addressed decades ago by a massive research project that defined software features that could protect information, evaluated their strength, and mapped security features needed for specific operating environment risks. The results were documented in the Rainbow Series. Rather than separating the EAL and functional requirements, the Orange Book followed a less advanced approach defining functional protection capabilities and appropriate assurance requirements as a single category. Seven such categories were defined in this way. Further, the Yellow Book defined a matrix of security environments and assessed the risk of each. It then established precisely what security environment was valid for each of the Orange Book categories. This approach produced an unambiguous layman's cookbook for how to determine whether a product was usable in a particular application. Loss of this application technology seems to have been an unintended consequence of the superseding of the Orange Book by the Common Criteria.
Security devices with PPs
Validated U.S. government PP
Anti-Virus (Sunset Date: 2011.06.01)
Key Recovery
Certification Authorities
Tokens
DBMS
Firewalls
Operating System
IDS
Validated non-U.S. government PP
Smart Cards
Remote electronic voting systems
Trusted execution environment
External links
International Protection Profiles
NIAP Protection Profiles
Computer Security Act of 1987
References
Computer security procedures | Protection Profile | Engineering | 724 |
892,612 | https://en.wikipedia.org/wiki/Blast%20injury | A blast injury is a complex type of physical trauma resulting from direct or indirect exposure to an explosion. Blast injuries occur with the detonation of high-order explosives as well as the deflagration of low order explosives. These injuries are compounded when the explosion occurs in a confined space.
Classification
Blast injuries are divided into four classes: primary, secondary, tertiary, and quaternary.
Primary injuries
Primary injuries are caused by blast overpressure waves, or shock waves. Total body disruption is the most severe and invariably fatal primary injury. Primary injuries are especially likely when a person is close to an exploding munition, such as a land mine. The ears are most often affected by the overpressure, followed by the lungs and the hollow organs of the gastrointestinal tract. Gastrointestinal injuries may present after a delay of hours or even days. Injury from blast overpressure is a pressure and time dependent function. By increasing the pressure or its duration, the severity of injury will also increase.
Extensive damage can also be inflicted upon the auditory system. The tympanic membrane (also known as the eardrum) may be perforated by the intensity of the pressure waves. Furthermore, the hair cells, the sound receptors found within the cochlea, can be permanently damaged and can result in a hearing loss of a mild to profound degree. Additionally, the intensity of the pressure changes from the blast can cause injury to the blood vessels and neural pathways within the auditory system. Therefore, affected individuals can have auditory processing deficits while having normal hearing thresholds. The combination of these effects can lead to hearing loss, tinnitus, headache, vertigo (dizziness), and difficulty processing sound.
In general, primary blast injuries are characterized by the absence of external injuries; thus internal injuries are frequently unrecognized and their severity underestimated. According to the latest experimental results, the extent and types of primary blast-induced injuries depend not only on the peak of the overpressure, but also other parameters such as number of overpressure peaks, time-lag between overpressure peaks, characteristics of the shear fronts between overpressure peaks, frequency resonance, and electromagnetic pulse, among others. There is general agreement that spalling, implosion, inertia, and pressure differentials are the main mechanisms involved in the pathogenesis of primary blast injuries. Thus, the majority of prior research focused on the mechanisms of blast injuries within gas-containing organs and organ systems such as the lungs, while primary blast-induced traumatic brain injury has remained underestimated. Blast lung refers to severe pulmonary contusion, bleeding or swelling with damage to alveoli and blood vessels, or a combination of these. It is the most common cause of death among people who initially survive an explosion.
Secondary injuries
Secondary injuries are ballistic trauma caused by impacts of flying shrapnels and other objects propelled by the explosion. These injuries may affect any part of the body and sometimes result in penetrating trauma with visible bleeding. At times the propelled object may become embedded in the body, obstructing the loss of blood to the outside. However, there may be extensive blood loss within the body cavities. Secondary blast wounds may be lethal and therefore many anti-personnel explosive devices are designed to generate fast-flying fragments.
Most casualties are caused by secondary injuries as shrapnels generally affect a larger area than the primary blast area, because debris can easily be propelled for hundreds or even thousands of meters. Some explosives, such as nail bombs, are deliberately designed to increase the likelihood of secondary injuries. In other instances, the target provides the raw material for the fragments thrown into surrounding, e.g., shattered glass from a blasted-out window or the glass facade of a building.
Tertiary injuries
Displacement of air by the explosion creates a blast wind that can throw victims against solid objects. Injuries resulting from this type of traumatic impact are referred to as tertiary blast injuries. Tertiary injuries may present as some combination of blunt and penetrating trauma, including bone fractures and coup contre-coup injuries. Children are at particularly high risk of tertiary injury due to their relatively smaller body weight.
Quaternary injuries
Quaternary injuries, or other miscellaneous named injuries, are all other injuries not included in the first three classes. These include flash burns, crush injuries, and respiratory injuries.
Traumatic amputations quickly result in death, unless there are available skilled medical personnel or others with adequate training nearby who are able to quickly respond, with the ability for rapid ground or air medical evacuation to an appropriate facility in time, and with tourniquets (for compression of bleeding sites) and other needed equipment (standard, or improvised; sterile, or not) also available, to treat the injuries. Because of this, injuries of this type are generally rare, though not unheard of, in survivors. Whether survivable or not, they are often accompanied by significant other injuries. The rate of eye injury may depend on the type of blast. Psychiatric injury, some of which may be caused by neurological damage incurred during the blast, is the most common quaternary injury, and post-traumatic stress disorder may affect people who are otherwise completely uninjured.
Mechanism
Blast injuries can result from various types of incidents ranging from industrial accidents to deliberate attacks. High-order explosives produce a supersonic overpressure shock wave, while low-order explosives deflagrate and do not produce an overpressure wave. A blast wave generated by an explosion starts with a single pulse of increased air pressure, lasting a few milliseconds. The negative pressure (suction) of the blast wave follows immediately after the positive wave. The duration of the blast wave depends on the type of explosive material and the distance from the point of detonation. The blast wave progresses from the source of explosion as a sphere of compressed and rapidly expanding gases, which displaces an equal volume of air at a very high velocity. The velocity of the blast wave in air may be extremely high, depending on the type and amount of the explosive used. An individual in the path of an explosion will be subjected not only to excess barometric pressure, but also to pressure from the high-velocity wind traveling directly behind the shock front of the blast wave. The magnitude of damage due to the blast wave is dependent on the peak of the initial positive pressure wave, the duration of the overpressure, the medium in which it explodes, the distance from the incident blast wave, and the degree of focusing due to a confined area or walls. For example, explosions near or within hard solid surfaces become amplified two to nine times due to shock wave reflection. As a result, individuals between the blast and a building generally suffer two to three times the degree of injury compared to those in open spaces.
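A common textbook idealization of the pressure history just described, not stated in the text above, is the Friedlander waveform, in which the overpressure decays through the positive phase and then dips below ambient:

```latex
% Friedlander waveform: overpressure p(t) measured from arrival of the
% shock front, with peak overpressure P_s and positive-phase duration t_d.
\[
p(t) = P_s \left(1 - \frac{t}{t_d}\right) e^{-t/t_d}, \qquad t \ge 0
\]
% p(t) > 0 for t < t_d (the positive pulse) and p(t) < 0 for t > t_d
% (the suction phase mentioned above).
```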
Neurotrauma
Blast injuries can cause hidden sensory and brain damage, with potential neurological and neurosensory consequences. Blast-induced neurotrauma is a complex clinical syndrome caused by the combination of all blast effects, i.e., primary, secondary, tertiary and quaternary blast mechanisms. Blast injuries usually manifest in the form of polytrauma, i.e. injury involving multiple organs or organ systems. Bleeding from injured organs such as the lungs or bowel causes a lack of oxygen in all vital organs, including the brain. Damage to the lungs reduces the surface for oxygen uptake from the air, reducing the amount of oxygen delivered to the brain. Tissue destruction initiates the synthesis and release of hormones or mediators into the blood which, when delivered to the brain, change its function. Irritation of the nerve endings in injured peripheral tissue or organs also contributes significantly to blast-induced neurotrauma.
Individuals exposed to blast frequently manifest loss of memory of events before and after explosion, confusion, headache, impaired sense of reality, and reduced decision-making ability. Patients with brain injuries acquired in explosions often develop sudden, unexpected brain swelling and cerebral vasospasm despite continuous monitoring. However, the first symptoms of blast-induced neurotrauma (BINT) may occur months or even years after the initial event, and are therefore categorized as secondary brain injuries. The broad variety of symptoms includes weight loss, hormone imbalance, chronic fatigue, headache, and problems in memory, speech and balance. These changes are often debilitating, interfering with daily activities. Because BINT in blast victims is underestimated, valuable time is often lost for preventive therapy and/or timely rehabilitation.
Blast wave PTSD research
In addition to known post-traumatic stress disorder (PTSD) risk factors experienced by both civilians and military personnel in combat areas, in early 2018, 60 Minutes reported that neuropathology specialist Dr. Daniel "Dan" Perl had conducted research on brain tissue exposed to traumatic brain injury (TBI), discovering a causal relationship between IED blast waves and PTSD. Perl was recruited to the faculty of the Uniformed Services University of the Health Sciences as a professor of pathology and to establish the Center for Neuroscience and Regenerative Medicine mandated by Congress in 2008.
Casualty estimates and triage
Explosions in confined spaces, or those that cause structural collapse, usually produce more deaths and injuries. Confined spaces include mines, buildings and large vehicles. For a rough estimate of the total casualties from an event, double the number that present in the first hour. Less injured patients often arrive first, as they take themselves to the nearest hospital. The most severely injured arrive later, via emergency services ("upside-down" triage). If there is a structural collapse, there will be more serious injuries, arriving more slowly.
See also
Battlefield medicine
Blast-related ocular trauma
Suicide attack
Total body disruption
References
General
External links
Blast injury information from the CDC
Blast injury primer for clinicians
Injuries
Medical emergencies
Explosions | Blast injury | Chemistry | 1,995 |
15,966,621 | https://en.wikipedia.org/wiki/Z-SAN | Z-SAN is a proprietary type of storage area network licensed by Zetera Corporation. Z-SAN hardware is bundled with a modified version of SAN-FS, a shared-disk file system driver and management software product, the SAN File System (SFS) made by DataPlow. The shared-disk file system allows multiple computers to access the same volume at block level. Zetera calls its version of the file system Z-SAN.
The Z-SAN software license is purchased as part of the hardware package and is similar to ATA over Ethernet (AoE) as sold by Coraid and LeftHand Networks. Zetera does not sell products directly, but instead licenses its technology to various other companies such as Netgear and Bell Microproducts. Like AoE, Z-SAN is intended to be a low-cost alternative to Fibre Channel and iSCSI. It does this by eliminating the need for host adapter and TCP offload engine hardware, and by using standard Ethernet switches instead of the more expensive Fibre Channel switches.
While AoE is mostly supported on Linux, Z-SAN is supported on Microsoft Windows platforms. A Z-SAN can aggregate many more disks than a standard RAID array. The Zetera website claims that MIT has a Z-SAN array totaling 1.4 petabytes of storage. The disk arrays can be both striped and mirrored.
In 2005, the software was licensed to Netgear to be used in the Netgear SC101 product.
References
External links
http://www.computerpoweruser.com/articles/archive/c0604/28c04/28c04.pdf Article about Z-SAN
http://www.zetera.com/
Storage area networks | Z-SAN | Technology | 357 |
24,019,665 | https://en.wikipedia.org/wiki/C16H13O7 | {{DISPLAYTITLE:C16H13O7}}
The molecular formula C16H13O7 (or C16H13O7+, molar mass: 317.27 g/mol, exact mass: 317.066127317) or C16H13ClO7 (exact mass: 352.03498) may refer to:
Petunidin, an anthocyanidin
Pulchellidin, an anthocyanidin | C16H13O7 | Chemistry | 102 |
66,511,145 | https://en.wikipedia.org/wiki/Makita%20AWS | Makita Auto-Start Wireless System (AWS, 2017‒), Festool Autostart (2018‒) and Bosch Wireless Auto-Start (2024‒) are Bluetooth-based systems for remotely starting industrial vacuum cleaners from power tools. Several power tools, cordless battery packs, and industrial vacuum cleaners ship with wireless connectivity, mostly using Bluetooth Low Energy to communicate, but the systems remained incompatible between different brands.
Initial support in 2007 was for data logging of monitored torque values; this was followed in 2015 by the addition of Bluetooth Low Energy beacons in power tools for asset tracking, battery status monitoring, and configuration via a mobile app.
From 2017 onwards, various power tools have added support for remotely starting or stopping a dust collector (vacuum cleaner) via Bluetooth, although only between tools from the same manufacturer or group of companies.
Vacuum control
Brand names for the remote starting of a vacuum cleaner include Makita Auto-Start Wireless System (AWS), and DeWalt Wireless Tool Control; plus designs from Festool, and HiKoki. Metabo and Flex use the Cordless Control proprietary standard originally developed by Starmix.
Overview
Although power tools may advertise some form of wireless or Bluetooth support, this may only be a limited subset of features, such as only for configuration or asset tracking, or only for remote control of dust extraction.
As of 2021, the Bluetooth systems were incompatible between manufacturers, with each brand requiring its own application to be downloaded and no support for cross-vendor applications.
Implementations
Black & Decker
In 2016 Black & Decker announced the introduction of Bluetooth support in battery packs. These Black Smartech batteries feature a Bluetooth connection that allows remotely displaying the charge level, locating, or locking a battery pack.
Bosch
Bosch Professional series power tools ending with "C" (for Connectivity) in the model name feature a removable circular opening into which a Bluetooth module can be fitted. Two add-on Bluetooth modules were available:
module (supporting Bluetooth 4.1)
upgraded module (supporting Bluetooth 4.2), including Tool-to-Tool communication.
In 2015 Bosch had released an attachable tracking beacon called TrackTag.
Dewalt
DeWalt have shipped two separate and incompatible wireless systems: one for status, tracking and configuration; and one for remote vacuum activation.
Tool Connect
The first system called Tool Connect was released in 2015 and uses Bluetooth Low Energy for asset tracking, configuration of tools and monitoring of battery state.
Bluetooth design and application development was undertaken by Laird Connectivity.
DeWalt produced two battery designs with Tool Connect Bluetooth transmitters built in. This allows reading battery status, temperature, firmware/hardware version, and controlling locking/disabling when out of range.
DeWalt additionally produced three models of drills (DCD792, DCD797, DCD997) and two impact drivers (DCF888, DCF906) with a Bluetooth module built-in allowing user configuration of maximum RPM (rotation speed) and LED illumination.
For tracking, Dewalt added a retro-fit add-on pass-through "shoe" that connected between the battery and tool (DCE040), plus screw-on external tags (DCE041) with a plan from 2022 to move to a built-in slot for adding a Bluetooth tracking module (DCE042).
Wireless Tool Connect
The second DeWalt system is branded as Wireless Tool Control and uses a transmitter inside tools, or a wrist-mounted remote control, in combination with a built-in receiver on some vacuum cleaners. Only one tool or remote can be paired to a vacuum cleaner at a time; previous pairings are overridden.
The range of compatible tools included a 60-volt SDS drill, which could be remotely linked to a vacuum cleaner.
The DeWalt WTC operates at ~433.92 MHz in the LPD433 band—the same ISM radio band also used for remote keyless systems and remote door locks.
Festool
In April 2018 Festool released a system that uses a Bluetooth-based control panel that is retro-fitted into vacuum cleaners. The vacuum dust extractor is then activated by special batteries packs which can receive the tool run status from compatible tools and forward this over Bluetooth. The Festool system can also be activated by a small remote transmitter that is affixed to the end of the vacuum hose, allowing operation with all brands of power tools.
Hikoki
As of 2019, Hikoki had announced the RP3608DB vacuum cleaner, and a circular saw that could communicate via Bluetooth.
Hilti
Hilti produce a series of construction vacuum cleaners with optional remote control via Bluetooth.
Makita
Torque Tracer
By 2010 Makita was producing a series of Bluetooth-equipped assembly tools marketed as Torque Tracer capable of transmitting the measured torque and angle applied to each fastening.
Makita had applied for the Torque Tracer trademark in 2007.
Auto-Start Wireless System
Makita Auto-Start Wireless System (AWS) is a wireless communication method used between power tools and dust collection devices/vacuum cleaners, released by Makita in 2017.
The system uses Bluetooth Low Energy (BLE). Tool and vacuum devices must first be paired, but can also later be unpaired.
Initially, a miter saw, a plunge saw, a rotary hammer drill, and an angle grinder were available that could send triggers via AWS. By 2019, a drywall sander was available.
Beginning in 2019, Makita started producing a universal adaptor that could be connected to the automatic pass-through AC mains electricity port of existing vacuum cleaners.
Implementation
The same WUT01 Bluetooth module is used in all tools. When inserted into a compatible vacuum cleaner, or into the WUT02 "Universal" resistive-load adapter, the module remains silent, in listening mode.
When the WUT01 Bluetooth module is inserted into a compatible tool, a Bluetooth Low Energy advertising message is broadcast approximately ten times per second; this carries the status of the inserted tool in a manufacturer-specific packet.
During pairing, the tool-side module advertises a Generic Attribute Profile (GATT) endpoint used to exchange MAC addresses.
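As an illustration of the mechanism described above: a BLE advertising payload is a sequence of length-prefixed AD structures, and manufacturer-specific data travels under AD type 0xFF. The Python sketch below parses that generic framing only; Makita's company ID and status-byte layout are not public, so the example bytes are hypothetical placeholders.

```python
# Illustrative sketch (Python, standard library only). BLE advertising
# payloads are sequences of AD structures: [length][AD type][data...],
# where length counts the type byte plus the data bytes. Manufacturer-
# specific data uses AD type 0xFF and begins with a 16-bit company ID.
# The company ID and status bytes below are hypothetical placeholders.

def parse_ad_structures(payload: bytes):
    """Split a raw advertising payload into (ad_type, data) pairs."""
    structures = []
    i = 0
    while i < len(payload):
        length = payload[i]
        if length == 0:          # a zero length terminates the payload
            break
        ad_type = payload[i + 1]
        data = payload[i + 2 : i + 1 + length]
        structures.append((ad_type, data))
        i += 1 + length
    return structures

# Hypothetical frame: a flags structure, then manufacturer-specific data.
frame = bytes([
    0x02, 0x01, 0x06,                   # flags: LE general discoverable
    0x05, 0xFF, 0x34, 0x12, 0x01, 0x01  # 0xFF + assumed company ID + status
])

for ad_type, data in parse_ad_structures(frame):
    if ad_type == 0xFF:                 # manufacturer-specific data
        company_id = int.from_bytes(data[:2], "little")
        status = data[2:]
        print(f"company 0x{company_id:04X}, status bytes: {status.hex()}")
```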
Pairing
Multiple tools can be paired to a single vacuum cleaner; if any of the paired tools are used, the vacuum cleaner will start-up, even if the vacuum hose is not connected to that tool. The same replaceable communications module is used for both ends of the connection.
Pairing between a tool and dust extraction is achieved by pressing a button on both devices for . Unpairing a specific pair of devices is achieved by pressing a button on both devices for . Resetting an individual AWS transmitter/receiver by removing all previous pairings is achieved by pressing the same button for , twice.
Milwaukee
By the end of 2015 Milwaukee Electric Tool were supplying tools and battery packs with One-key support, allowing monitoring of battery status and custom tool settings configuration.
By 2016, three tools were being offered with Bluetooth tracking built-in, for approximately €20 extra per tool.
Ridgid
During 2018 Ridgid started selling Octane battery packs with Bluetooth transmitters built-in. The battery packs can transmit the battery charge percentage status and temperature. Battery capacities are (AC840088), (R8400806) and (R8400809).
References
Bosch (company)
Bluetooth
Particulate control | Makita AWS | Technology | 1,548 |
5,264,737 | https://en.wikipedia.org/wiki/Decay%20scheme | The decay scheme of a radioactive substance is a graphical presentation of all the transitions occurring in a decay, and of their relationships. Examples are shown below.
It is useful to think of the decay scheme as placed in a coordinate system, where the vertical axis is energy, increasing from bottom to top, and the horizontal axis is the proton number, increasing from left to right. The arrows indicate the emitted particles. For the gamma rays (vertical arrows), the gamma energies are given; for the beta decay (oblique arrow), the maximum beta energy.
Examples
These relations can be quite complicated; a simple case is shown here: the decay scheme of the radioactive cobalt isotope cobalt-60. 60Co decays by emitting an electron (beta decay) with a half-life of 5.272 years into an excited state of 60Ni, which then decays very fast to the ground state of 60Ni, via two gamma decays.
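Written out as a chain of equations (the 1.17 MeV and 1.33 MeV values are the well-known gamma energies of this cascade, supplied here for illustration):

```latex
% 60Co beta decay feeding the 60Ni gamma cascade.
\[
^{60}_{27}\mathrm{Co}
  \xrightarrow{\beta^-,\; t_{1/2}\,=\,5.272\,\mathrm{yr}}
\,^{60}_{28}\mathrm{Ni}^{**}
  \xrightarrow{\gamma,\;1.17\,\mathrm{MeV}}
\,^{60}_{28}\mathrm{Ni}^{*}
  \xrightarrow{\gamma,\;1.33\,\mathrm{MeV}}
\,^{60}_{28}\mathrm{Ni}
\]
```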
All known decay schemes can be found in the Table of Isotopes.
Nickel is to the right of cobalt, since its proton number (28) is higher by one than that of cobalt (27). In beta decay, the proton number increases by one. For a positron decay and also for an alpha decay (see below), the oblique arrow would go from right to left since in these cases, the proton number decreases.
Since energy is conserved and since the particles emitted carry away energy, arrows can only go downward (vertically or at an angle) in a decay scheme.
A somewhat more complicated scheme is shown here: the decay of the nuclide 198Au which can be produced by irradiating natural gold in a nuclear reactor. 198Au decays via beta decay to one of two excited states or to the ground state of the mercury isotope 198Hg. In the figure, mercury is to the right of gold, since the atomic number of gold is 79, that of mercury is 80. The excited states decay after very short times (2.5 and 23 ps, resp.; 1 picosecond is a millionth of a millionth of a second) to the ground state.
While excited nuclear states are usually very short lived, decaying almost immediately after a beta decay (see above), the excited state of the technetium isotope shown here to the right is comparatively long lived. It is therefore called "metastable" (hence the "m" in 99mTc). It decays to the ground state via gamma decay with a half-life of 6 hours.
Here, to the left, we now have an alpha decay. It is the decay of the element polonium, discovered by Marie Curie, with mass number 210. The isotope 210Po is the penultimate member of the uranium–radium decay series; it decays into a stable lead isotope with a half-life of 138 days. In almost all cases, the decay proceeds via the emission of an alpha particle of 5.305 MeV. Only in about one case in 100,000 does an alpha particle of lower energy appear; in this case, the decay leads to an excited level of 206Pb, which then decays to the ground state via gamma radiation.
Selection rules
Alpha, beta, and gamma rays can only be emitted if the conservation laws (energy, angular momentum, parity) are obeyed. This leads to so-called selection rules.
Applications for gamma decay can be found in Multipolarity of gamma radiation. To discuss such a rule in a particular case, it is necessary to know angular momentum and parity for every state. The figure shows the 60Co decay scheme again, with spins and parities given for every state.
References
Radioactivity
Nuclear physics | Decay scheme | Physics,Chemistry | 757 |
54,558,969 | https://en.wikipedia.org/wiki/Seventh%20power | In arithmetic and algebra, the seventh power of a number n is the result of multiplying seven instances of n together. So:
n⁷ = n × n × n × n × n × n × n.
Seventh powers are also formed by multiplying a number by its sixth power, the square of a number by its fifth power, or the cube of a number by its fourth power.
The sequence of seventh powers of integers is:
0, 1, 128, 2187, 16384, 78125, 279936, 823543, 2097152, 4782969, 10000000, 19487171, 35831808, 62748517, 105413504, 170859375, 268435456, 410338673, 612220032, 893871739, 1280000000, 1801088541, 2494357888, 3404825447, 4586471424, 6103515625, 8031810176, ...
In the archaic notation of Robert Recorde, the seventh power of a number was called the "second sursolid".
Properties
Leonard Eugene Dickson studied generalizations of Waring's problem for seventh powers, showing that every non-negative integer can be represented as a sum of at most 258 non-negative seventh powers (1⁷ is 1, and 2⁷ is 128). All but finitely many positive integers can be expressed more simply as the sum of at most 46 seventh powers. If powers of negative integers are allowed, only 12 powers are required.
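To make the Waring-type statement concrete, here is a minimal Python sketch (not Dickson's method) that computes the fewest seventh powers of positive integers summing to a given n, by dynamic programming:

```python
# Minimal sketch (Python): fewest seventh powers of positive integers
# summing to n. Illustrates the statement above; feasible only for small n.
def min_seventh_powers(n: int) -> int:
    powers = [k ** 7 for k in range(1, n + 1) if k ** 7 <= n]
    best = [0] + [n + 1] * n      # best[m] = fewest seventh powers summing to m
    for m in range(1, n + 1):
        for p in powers:          # powers are in increasing order
            if p > m:
                break
            if best[m - p] + 1 < best[m]:
                best[m] = best[m - p] + 1
    return best[n]

print(min_seventh_powers(129))    # 2, since 129 = 2**7 + 1**7
print(min_seventh_powers(127))    # 127, forced to use 1**7 repeatedly
```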
The smallest number that can be represented in two different ways as a sum of four positive seventh powers is 2056364173794800.
The smallest seventh power that can be represented as a sum of eight distinct seventh powers is:
The two known examples of a seventh power expressible as the sum of seven seventh powers are
(M. Dodrill, 1999);
and
(Maurice Blondot, 11/14/2000);
any example with fewer terms in the sum would be a counterexample to Euler's sum of powers conjecture, which is currently only known to be false for the powers 4 and 5.
See also
Eighth power
Sixth power
Fifth power (algebra)
Fourth power
Cube (algebra)
Square (algebra)
References
Integers
Number theory
Elementary arithmetic
Integer sequences
Unary operations
Figurate numbers | Seventh power | Mathematics | 487 |
24,306,227 | https://en.wikipedia.org/wiki/Galactooligosaccharide | Galactooligosaccharides (GOS), also known as oligogalactosyllactose, oligogalactose, oligolactose or transgalactooligosaccharides (TOS), belong to the group of prebiotics. Prebiotics are defined as non-digestible food ingredients that beneficially affect the host by stimulating the growth and/or activity of beneficial bacteria in the colon. GOS occurs in commercially available products such as food for both infants and adults.
Chemistry
The composition of the galactooligosaccharide fraction varies in chain length and type of linkage between the monomer units. Galactooligosaccharides are produced through the enzymatic conversion of lactose, a component of bovine milk.
A range of factors come into play when determining the yield, style, and type of GOS produced. These factors include:
enzyme source
enzyme dosage
feeding stock (lactose) concentration
origins of the lactose
process involved (e.g. free or immobilized enzyme)
reaction conditions impacting the processing situation
medium composition
GOS generally comprise a chain of galactose units that arise through consecutive transgalactosylation reactions, with a terminal glucose unit. However, where a terminal galactose unit is indicated, hydrolysis of GOS formed at an earlier stage in the process has occurred. The degree of polymerization of GOS can vary quite markedly, ranging from 2 to 8 monomeric units, depending mainly on the type of the enzyme used and the conversion degree of lactose.
Digestion research
Because of the configuration of their glycosidic bonds, galactooligosaccharides largely resist hydrolysis by salivary and intestinal digestive enzymes. Galactooligosaccharides are classified as prebiotics: non-digestible food ingredients that benefit the host by serving as a substrate for, and stimulating the growth and activity of, bacteria in the colon.
The increased activity of colonic bacteria results in various effects, both directly by the bacteria themselves or indirectly by producing short-chain fatty acids as byproducts via fermentation. Examples of effects are stimulation of immune functions, absorption of essential nutrients, and synthesis of certain vitamins.
Stimulating bacteria
Galactooligosaccharides are a substrate for bacteria such as Bifidobacteria and lactobacilli. Studies with infants and adults have shown that foods or drinks enriched with galactooligosaccharides result in a significant increase in Bifidobacteria. Similar sugars occur naturally in human milk, where they are known as human milk oligosaccharides. Examples include lacto-N-tetraose, lacto-N-neotetraose, and lacto-N-fucopentaose.
Immune response
Human gut microbiota play a key role in the intestinal immune system. Galactooligosaccharides (GOS) support natural defenses of the human body via the gut microflora, indirectly by increasing the number of bacteria in the gut and inhibiting the binding or survival of Escherichia coli, Salmonella typhimurium and Clostridia. GOS can positively influence the immune system indirectly through the production of antimicrobial substances, reducing the proliferation of pathogenic bacteria.
Constipation
Constipation is a potential problem, particularly among infants, elderly and pregnant women. In infants, formula feeding may be associated with constipation and hard stools. Galactooligosaccharides may improve stool frequency and relieve symptoms related to constipation.
See also
Xylooligosaccharide (XOS)
References
Oligosaccharides
Prebiotics (nutrition) | Galactooligosaccharide | Chemistry | 781 |
40,604,563 | https://en.wikipedia.org/wiki/Rosiridin | Rosiridin is a chemical compound that has been isolated from Rhodiola sachalinensis. Rosiridin can inhibit monoamine oxidases A and B, possibly meaning that the compound could help in the treatment of depression and senile dementia.
References
Crassulaceae
Terpenoid glycosides | Rosiridin | Chemistry | 68 |
764,833 | https://en.wikipedia.org/wiki/Poiseuille | The poiseuille (symbol Pl) has been proposed as a derived SI unit of dynamic viscosity, named after the French physicist Jean Léonard Marie Poiseuille (1797–1869).
In practice the unit has never been widely accepted and most international standards bodies do not include the poiseuille in their list of units. The third edition of the IUPAC Green Book, for example, lists Pa⋅s (pascal-second) as the SI-unit for dynamic viscosity, and does not mention the poiseuille.
The equivalent CGS unit, the poise, symbol P, is most widely used when reporting viscosity measurements.
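The relations between the poiseuille and the other units of dynamic viscosity mentioned follow directly from the definitions:

```latex
% Relations between the poiseuille, the SI pascal-second and the CGS poise:
\[
1\ \mathrm{Pl} = 1\ \mathrm{Pa{\cdot}s} = 1\ \mathrm{kg\,m^{-1}\,s^{-1}}
  = 10\ \mathrm{P} = 10^{3}\ \mathrm{cP}
\]
```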
For example, liquid water has a dynamic viscosity of roughly 0.001 Pl (= 0.001 Pa⋅s = 0.01 P = 1 cP) at 20 °C and atmospheric pressure.
References
Bibliography
François Cardarelli (2004). Encyclopaedia of Scientific Units, Weights and Measures. Springer-Verlag London Ltd.
SI units
Units of dynamic viscosity | Poiseuille | Mathematics | 196 |
73,789,728 | https://en.wikipedia.org/wiki/Blow%20Up%20%28Australian%20TV%20series%29 | Blow Up is an Australian reality television show based on a Dutch format, in which ten artists compete to create the best balloon artworks for a $100,000 prize. It is hosted by Stephen Curry and Becky Lucas and judged by professional balloon artist Chris Adamo. The series premiered on 15 May 2023 on the Seven Network.
The programme is produced by Endemol Shine Australia and was first announced in August 2022. It commenced filming in the same month in Melbourne, and was officially confirmed at Seven's 2023 upfronts in October 2022.
After the first two episodes drew disappointing ratings, the series was moved to 7flix from its third episode.
Contestants
Reception
Viewership
Although highly advertised for weeks, the series debuted to 288,000 viewers, coming third in its timeslot behind MasterChef Australia and The Summit, and ranking 19th for the night. The second episode fared no better, with only 224,000 viewers, losing more than 40,000 from its debut, coming fifth in its timeslot, and ranking below the top 20 programs of the night. After the series was moved to 7flix, the third episode drew 30,000 viewers. The final episode drew just 16,000 viewers.
Critical
The show has been unfavourably compared to the similar television show Lego Masters (also produced by Endemol Shine and airing on the rival Nine Network), which had concluded its fifth season just a week before the premiere of Blow Up. Hamish Blake, the host of Lego Masters, also poked fun at the premise of the show, stating in an episode of Lego Masters that "Balloons are good for a part of one episode of a show. No, I don’t think there’s a series in them."
References
External links
Blow Up at 7plus
Seven Network original programming
2023 Australian television series debuts
2023 Australian television series endings
2020s Australian reality television series
Australian television series based on Dutch television series
Television series by Banijay
Balloons | Blow Up (Australian TV series) | Chemistry | 405 |
3,003,614 | https://en.wikipedia.org/wiki/Bis%282-ethylhexyl%29%20phthalate | Bis(2-ethylhexyl) phthalate (di-2-ethylhexyl phthalate, diethylhexyl phthalate, diisooctyl phthalate, DEHP; incorrectly — dioctyl phthalate, DIOP) is an organic compound with the formula C6H4(CO2C8H17)2. DEHP is the most common member of the class of phthalates, which are used as plasticizers. It is the diester of phthalic acid and the branched-chain 2-ethylhexanol. This colorless viscous liquid is soluble in oil, but not in water.
Production
Di(2-ethylhexyl) phthalate is produced commercially by the reaction of excess 2-ethylhexanol with phthalic anhydride in the presence of an acid catalyst such as sulfuric acid or para-toluenesulfonic acid. It was first produced in commercial quantities in Japan circa 1933 and in the United States in 1939.
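As a net equation, the production process described above can be sketched as follows (the acid catalyst is shown generically as H+; this is a standard representation rather than one taken from the source):

```latex
% Net esterification: phthalic anhydride + 2 equivalents of 2-ethylhexanol
% give the diester (DEHP) and one equivalent of water (acid-catalysed).
\[
\mathrm{C_6H_4(CO)_2O} \;+\; 2\,\mathrm{C_8H_{17}OH}
  \;\xrightarrow{\;\mathrm{H^+}\;}\;
\mathrm{C_6H_4(CO_2C_8H_{17})_2} \;+\; \mathrm{H_2O}
\]
```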
DEHP has two stereocenters, located at the carbon atoms carrying the ethyl groups. As a result, it has three distinct stereoisomers, consisting of an (R,R) form, an (S,S) form (diastereomers), and a meso (R, S) form. As most 2-ethylhexanol is produced as a racemic mixture, commercially-produced DEHP is therefore racemic as well, and consists of a 1:1:2 statistical mixture of stereoisomers.
Use
Due to its suitable properties and low cost, DEHP is widely used as a plasticizer in the manufacturing of articles made of PVC. Plastics may contain 1% to 40% of DEHP. It is also used as a hydraulic fluid and as a dielectric fluid in capacitors. DEHP also finds use as a solvent in glowsticks.
Approximately three million tonnes are produced and used annually worldwide.
Manufacturers of flexible PVC articles can choose among several alternative plasticizers offering similar technical properties as DEHP. These alternatives include other phthalates such as diisononyl phthalate (DINP), di-2-propyl heptyl phthalate (DPHP), diisodecyl phthalate (DIDP), and non-phthalates such as 1,2-cyclohexane dicarboxylic acid diisononyl ester (DINCH), dioctyl terephthalate (DOTP), and citrate esters.
Environmental exposure
DEHP is a component of many household items, including tablecloths, floor tiles, shower curtains, garden hoses, rainwear, dolls, toys, shoes, medical tubing, furniture upholstery, and swimming pool liners. DEHP is an indoor air pollutant in homes and schools. Common exposures come from the use of DEHP as a fragrance carrier in cosmetics, personal care products, laundry detergents, colognes, scented candles, and air fresheners.
The most common exposure to DEHP comes through food, with an average consumption of 0.25 milligrams per day. It can also leach into a liquid that comes in contact with the plastic; it extracts faster into nonpolar solvents (e.g. oils and fats in foods packed in PVC). Fatty foods that are packaged in plastics that contain DEHP are more likely to have higher concentrations, such as milk products, fish or seafood, and oils. The US FDA therefore permits use of DEHP-containing packaging only for foods that primarily contain water.
DEHP can leach into drinking water from discharges from rubber and chemical factories; the US EPA limit for DEHP in drinking water is 6 ppb. It is also commonly found in bottled water, but unlike tap water, the EPA does not regulate levels in bottled water. DEHP levels in some European milk samples were found to be 2,000 times higher than the EPA safe drinking water limit (12,000 ppb). Levels of DEHP in some European cheeses and creams were even higher, up to 200,000 ppb, in 1994. Additionally, workers in factories that utilize DEHP in production experience greater exposure. The U.S. agency OSHA's limit for occupational exposure is 5 mg/m3 of air.
Use in medical devices
DEHP is the most common phthalate plasticizer in medical devices such as intravenous tubing and bags, IV catheters, nasogastric tubes, dialysis bags and tubing, blood bags and transfusion tubing, and air tubes. DEHP makes these plastics softer and more flexible and was first introduced in the 1940s in blood bags. For this reason, concern has been expressed about leachates of DEHP transported into the patient, especially for those requiring extensive infusions or those who are at the highest risk of developmental abnormalities, e.g. newborns in intensive care nursery settings, hemophiliacs, kidney dialysis patients, neonates, premature babies, and lactating and pregnant women. According to the European Commission Scientific Committee on Health and Environmental Risks (SCHER), exposure to DEHP may exceed the tolerable daily intake in some specific population groups, namely people exposed through medical procedures such as kidney dialysis. The American Academy of Pediatrics has advocated not to use medical devices that can leach DEHP into patients and, instead, to resort to DEHP-free alternatives. In July 2002, the U.S. FDA issued a Public Health Notification on DEHP, stating in part, "We recommend considering such alternatives when these high-risk procedures are to be performed on male neonates, pregnant women who are carrying male fetuses, and peripubertal males"; it directed readers to a database of non-DEHP alternatives. The CBC documentary The Disappearing Male raised concerns about DEHP's effects on male fetal sexual development, miscarriage, and dramatically lower sperm counts in men. A 2010 review article in the Journal of Transfusion Medicine showed a consensus that the benefits of lifesaving treatments with these devices far outweigh the risks of DEHP leaching out of them. More research is needed to develop alternatives to DEHP that are equally soft and flexible, as required for most medical procedures; in the meantime, if a procedure requires one of these devices and the patient is at high risk of harm from DEHP, a DEHP-free alternative should be considered where medically safe.
Metabolism
DEHP hydrolyzes to mono-ethylhexyl phthalate (MEHP) and subsequently to phthalate salts. The released alcohol is susceptible to oxidation to the aldehyde and carboxylic acid.
Effects on living organisms
Toxicity
The acute toxicity of DEHP is low in animal models: 30 g/kg in rats (oral) and 24 g/kg in rabbits (dermal). Concerns instead focus on its potential as an endocrine disruptor.
Endocrine disruption
DEHP, along with other phthalates, is believed to cause endocrine disruption in males, through its action as an androgen antagonist, and may have lasting effects on reproductive function, for both childhood and adult exposures. Prenatal phthalate exposure has been shown to be associated with lower levels of reproductive function in adolescent males. In another study, airborne concentrations of DEHP at a PVC pellet plant were significantly associated with a reduction in sperm motility and chromatin DNA integrity. Additionally, the authors noted the daily intake estimates for DEHP were comparable to the general population, indicating a "high percentage of men are exposed to levels of DEHP that may affect sperm motility and chromatin DNA integrity". The claims have received support by a study using dogs as a "sentinel species to approximate human exposure to a selection of chemical mixtures present in the environment". The authors analyzed the concentration of DEHP and other common chemicals such as PCBs in testes from dogs from five different world regions. The results showed that regional differences in concentration of the chemicals are reflected in dog testes and that pathologies such as tubule atrophy and germ cells were more prevalent in testes of dogs from regions with higher concentrations.
Development
Numerous studies of DEHP have shown changes in sexual function and development in mice and rats. DEHP exposure during pregnancy has been shown to disrupt placental growth and development in mice, resulting in higher rates of low birthweight, premature birth, and fetal loss. In a separate study, exposure of neonatal mice to DEHP through lactation caused hypertrophy of the adrenal glands and higher levels of anxiety during puberty. In another study, pubertal administration of higher-dose DEHP delayed puberty in rats, reduced testosterone production, and inhibited androgen-dependent development; low doses showed no effect.
Obesity
When DEHP is ingested intestinal lipases convert it to MEHP, which then is absorbed. MEHP is suspected to have an obesogenic effect. Rodent studies and human studies have shown DEHP to be a possible disruptor of thyroid function, which plays a key role in energy balance and metabolism. Exposure to DEHP has been associated with lower plasma thyroxine levels and decreased uptake of iodine in thyroid follicular cells. Previous studies have shown that slight changes in thyroxine levels can have dramatic effects on resting energy expenditure, similar to that of patients with hypothyroidism, which has been shown to cause increased weight gain in those study populations.
Cardiotoxicity
Even at relatively low doses of DEHP, cardiovascular reactivity was significantly affected in mice. A clinically relevant dose and duration of exposure to DEHP has been shown to have a significant impact on the behavior of cardiac cells in culture. This includes an uncoupling effect that leads to irregular rhythms in vitro. Untreated cells had fast conduction velocity, along with homogenous activation wave fronts and synchronized beating. Cells treated with DEHP exhibited fractured wave fronts with slow propagation speeds. This is observed in conjunction with a significant decrease in the amount of expression and instability of gap junctional connexin proteins, specifically connexin-43, in cardiomyocytes treated with DEHP.
The decrease in expression and instability of connexin-43 may be due to the down regulation of tubulin and kinesin genes, and the alteration of microtubule structure, caused by DEHP; all of which are responsible for the transport of protein products. Also, DEHP caused down regulation of several growth factors, such as angiotensinogen, transforming growth factor-beta, vascular endothelial growth factor C and A, and endothelial-1. The DEHP-induced down regulation of these growth factors may also contribute to the reduced expression and instability of connexin-43.
DEHP has also been shown, in vitro using cardiac muscle cells, to cause activation of PPAR-alpha gene, which is a key regulator in lipid metabolism and peroxisome proliferation; both of which can be involved in atherosclerosis and hyperlipidemia, which are precursors of cardiovascular disease.
Once metabolized into MEHP, the molecule has been shown to lengthen action potential duration and slow epicardial conduction velocity in Langendorff perfused rodent hearts.
Other health effects
Studies in mice have shown other adverse health effects due to DEHP exposure. Ingestion of 0.01% DEHP caused damage to the blood-testis barrier as well as induction of experimental autoimmune orchitis. There is also a correlation between DEHP plasma levels in women and endometriosis.
DEHP is also a possible cancer causing agent in humans, although human studies remain inconclusive, due to the exposure of multiple elements and limited research. In vitro and rodent studies indicate that DEHP is involved in many molecular events, including increased cell proliferation, decreased apoptosis, oxidative damage, and selective clonal expansion of the initiated cells; all of which take place in multiple sites of the human body.
Government and industry response
Taiwan
In October 2009, Consumers' Foundation, Taiwan (CFCT) published test results that found 5 of the 12 sampled shoes contained over 0.1% phthalate plasticizer content, including DEHP, which exceeds the government's Toy Safety Standard (CNS 4797). CFCT recommended that users first wear socks to avoid direct skin contact.
In May 2011, the illegal use of the plasticizer DEHP in clouding agents for food and beverages was reported in Taiwan. An inspection of products initially discovered the presence of plasticizers. As more products were tested, inspectors found more manufacturers using DEHP and DINP. The Department of Health confirmed that contaminated food and beverages had been exported to other countries and regions, revealing the widespread prevalence of toxic plasticizers.
European Union
Concerns about chemicals ingested by children when chewing plastic toys prompted the European Commission to order a temporary ban on phthalates in 1999, a decision based on an opinion by the Commission's Scientific Committee on Toxicity, Ecotoxicity and the Environment (CSTEE). A proposal to make the ban permanent was tabled. Until 2004, the EU banned the use of DEHP along with several other phthalates (DBP, BBP, DINP, DIDP and DNOP) in toys for young children. In 2005, the Council and the Parliament reached a compromise to ban three types of phthalates (DINP, DIDP, and DNOP) "in toys and childcare articles which can be placed in the mouth by children". More products than initially planned will thus be affected by the directive. In 2008, six substances were considered to be of very high concern (SVHCs) and added to the Candidate List, including musk xylene, MDA, HBCDD, DEHP, BBP, and DBP. In 2011, those six substances were listed for authorization in Annex XIV of REACH by Regulation (EU) No 143/2011. According to the regulation, phthalates including DEHP, BBP and DBP will be banned from February 2015.
In 2012, Danish Environment Minister Ida Auken announced the ban of DEHP, DBP, DIBP and BBP, pushing Denmark ahead of the European Union, which had already started a process of phasing out phthalates. However, the ban was postponed by two years, taking effect in 2015 rather than in December 2013 as initially planned. The reason is that the four phthalates are far more common than expected and that producers could not phase them out as fast as the Ministry of Environment requested.
In 2012, France became the first country in the EU to ban the use of DEHP in pediatrics, neonatal, and maternity wards in hospitals.
DEHP has now been classified as a Category 1B reprotoxin, and is now on the Annex XIV of the European Union's REACH legislation. DEHP has been phased out in Europe under REACH and can only be used in specific cases if an authorization has been granted. Authorizations are granted by the European Commission, after obtaining the opinion of the Committee for Risk Assessment (RAC) and the Committee for Socio-economic Analysis (SEAC) of the European Chemicals Agency (ECHA).
California
DEHP is classified as a "chemical known to the State of California to cause cancer and birth defects or other reproductive harm" (in this case, both) under the terms of Proposition 65.
References
Further reading
External links
FDA Public Health Notification: PVC devices containing the plasticizer DEHP (archived page)
ATSDR ToxFAQs
CDC - NIOSH Pocket Guide to Chemical Hazards
National Pollutant Inventory - DEHP fact sheet
Healthcare without Harm - PVC and DEHP accessed 25 March 2014
Healthcare without Harm: "Weight of the Evidence on DEHP: Exposures are a Cause for Concern, Especially During Medical Care"; 6p-fact sheet, 16 March 2009 accessed 25 March 2014
Spectrum Laboratories Fact Sheet (archived page)
ChemSub Online : Bis(2-ethylhexyl) phthalate -DEHP
Safety Assessment of Di(2-ethylhexyl)phthalate (DEHP) Released from PVC Medical Devices - Center for Devices and Radiological Health U.S. Food and Drug Administration (archived page)
Ester solvents
IARC Group 2B carcinogens
Phthalate esters
Endocrine disruptors
Plasticizers
2-Ethylhexyl esters | Bis(2-ethylhexyl) phthalate | Chemistry | 3,469 |
31,149,453 | https://en.wikipedia.org/wiki/Silicon%20Border | Silicon Border Holding Company, LLC is a commercial development site designed to produce semiconductors for consumers in North America. The site is located in Mexicali, Baja California, along the southwestern border of the United States of America and Mexico. The site began manufacturing semiconductors between 2004 and 2005 with the intention of competing in the global market. Silicon Border provides Mexico with an infrastructure that enables high-tech companies anywhere in the world to relocate manufacturing operations to the country and exploit its competitive advantages such as geographical location, human capital, research, legal and tax benefits, intellectual property, international treaties, and logistics. This allows research, process development, design, fabrication, and testing able to compete with Asian operations and costs. The infrastructure build-out, financed by ING Clarion, consists of a potable water plant and distribution, fiber optic telephone and data cable, power substations, and waste treatment facilities. Silicon Border not only provides manufacturing space to companies creating "green" products, but does so in an environmentally conscious manner.
California's Governor Arnold Schwarzenegger promoted cooperation with the project and has encouraged economic partnerships with Silicon Border in his radio addresses. In 2006, the California governor created the "California/Baja Silicon Border Work Group," run by deputy secretary of the California Business, Transportation, and Housing Agency, Yolanda Benson. State officials promised to hasten the roadways needed to link up with those being built for Silicon Border in Mexico.
The area is supplied with water from the Colorado River and a major electrical sub-station supplied by three separate power plants. Infrastructure improvements associated with the proposed project include a new highway (under construction) and an additional border crossing. Silicon Border estimates that in ten years following the onset of development, the Silicon Border Science Park could generate 100,000 jobs both within Mexico and the U.S.
See also
Silicon Valley
Silicon Wadi
References
Mexicali
Imperial County, California
Information technology places | Silicon Border | Technology | 389 |
75,565,886 | https://en.wikipedia.org/wiki/Ethiopian%20Electric%20Power%20Headquarters | Ethiopian Electric Power Headquarters is a 62-storey office building under construction in the Kirkos district of Addis Ababa, the capital and largest city of Ethiopia. The building is located right on Mexico Square, and once completed, it is expected to become the tallest building both in East Africa and sub-Saharan Africa more broadly, as well as the second tallest in Africa.
Location
The building is located on a plot of land off the southern end of the Mexico Square roundabout. The site was specifically chosen so as to be close to the newly developed central business district that sits North-West of the plot.
Construction & funding
The construction is expected to cost nearly ETB25 billion (approx. US$445 million), meaning that, if built, the project would surpass the Commercial Bank of Ethiopia Headquarters as the most expensive building project in Ethiopia (the CBE HQ cost roughly ETB5.3 billion (US$303 million), although not adjusted for inflation).
Preliminary topographic surveys have already been conducted and excavation work for soil testing is almost complete, with all 28 designated wells having been dug. According to project manager Behailu Tadele, the building's design will meet the gold standard of the Leadership in Energy and Environmental Design (LEED) system as governed by the U.S. Green Building Council.
Design
The building is 327.5 metres tall with 62 stories. Sitting on a 20,792 square meter plot of land, the building will have 197,800 square meters of floor area.
Upon completion, it will be the tallest building in Ethiopia and East Africa, as well as the second tallest building in Africa, only being surpassed by the Iconic Tower in Egypt's New Administrative Capital.
See also
List of tallest buildings in Africa
References
Buildings and structures under construction | Ethiopian Electric Power Headquarters | Engineering | 359 |
38,761,100 | https://en.wikipedia.org/wiki/Tertiary%20%28chemistry%29 | Tertiary is a term used in organic chemistry to classify various types of compounds (e.g. alcohols, alkyl halides, amines) or reactive intermediates (e.g. alkyl radicals, carbocations).
See also
Primary (chemistry)
Secondary (chemistry)
Quaternary (chemistry)
References
Chemical nomenclature | Tertiary (chemistry) | Chemistry | 72 |
1,013,768 | https://en.wikipedia.org/wiki/LAPACK | LAPACK ("Linear Algebra Package") is a standard software library for numerical linear algebra. It provides routines for solving systems of linear equations and linear least squares, eigenvalue problems, and singular value decomposition. It also includes routines to implement the associated matrix factorizations such as LU, QR, Cholesky and Schur decomposition. LAPACK was originally written in FORTRAN 77, but moved to Fortran 90 in version 3.2 (2008). The routines handle both real and complex matrices in both single and double precision. LAPACK relies on an underlying BLAS implementation to provide efficient and portable computational building blocks for its routines.
LAPACK was designed as the successor to the linear equations and linear least-squares routines of LINPACK and the eigenvalue routines of EISPACK. LINPACK, written in the 1970s and 1980s, was designed to run on the then-modern vector computers with shared memory. LAPACK, in contrast, was designed to effectively exploit the caches on modern cache-based architectures and the instruction-level parallelism of modern superscalar processors, and thus can run orders of magnitude faster than LINPACK on such machines, given a well-tuned BLAS implementation. LAPACK has also been extended to run on distributed memory systems in later packages such as ScaLAPACK and PLAPACK.
Netlib LAPACK is licensed under a three-clause BSD style license, a permissive free software license with few restrictions.
Naming scheme
Subroutines in LAPACK have a naming convention which makes the identifiers very compact. This was necessary as the first Fortran standards only supported identifiers up to six characters long, so the names had to be shortened to fit into this limit.
A LAPACK subroutine name is in the form pmmaaa, where:
p is a one-letter code denoting the type of numerical constants used. S, D stand for real floating-point arithmetic respectively in single and double precision, while C and Z stand for complex arithmetic with respectively single and double precision. The newer version, LAPACK95, uses generic subroutines in order to overcome the need to explicitly specify the data type.
mm is a two-letter code denoting the kind of matrix expected by the algorithm. The codes for the different kind of matrices are reported below; the actual data are stored in a different format depending on the specific kind; e.g., when the code DI is given, the subroutine expects a vector of length n containing the elements on the diagonal, while when the code GE is given, the subroutine expects an array containing the entries of the matrix.
aaa is a one- to three-letter code describing the actual algorithm implemented in the subroutine, e.g. SV denotes a subroutine to solve linear system, while R denotes a rank-1 update.
For example, the subroutine to solve a linear system with a general (non-structured) matrix using real double-precision arithmetic is called DGESV.
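As a sketch of how such a routine is typically reached from a high-level language, the following Python snippet calls DGESV through SciPy's LAPACK bindings; the (lu, piv, x, info) return convention is SciPy's f2py-generated wrapper interface, assumed here rather than taken from the text:

```python
# Sketch (Python + SciPy). scipy.linalg.lapack exposes thin wrappers over
# the Fortran routines; dgesv factors a and solves a @ x = b in one call.
import numpy as np
from scipy.linalg import lapack

# D = double precision, GE = general matrix, SV = solve a linear system.
a = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

lu, piv, x, info = lapack.dgesv(a, b)
print(x)       # solution of a @ x = b -> [2., 3.]
print(info)    # 0 signals success, following the Fortran convention
```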
Use with other programming languages and libraries
Many programming environments today support the use of libraries with C binding (LAPACKE, a standardised C interface, has been part of LAPACK since version 3.4.0), allowing LAPACK routines to be used directly so long as a few restrictions are observed. Additionally, many other software libraries and tools for scientific and numerical computing are built on top of LAPACK, such as R, MATLAB, and SciPy.
Several alternative language bindings are also available:
Armadillo for C++
IT++ for C++
LAPACK++ for C++
Lacaml for OCaml
SciPy for Python
Gonum for Go
PDL::LinearAlgebra for Perl Data Language
Math::Lapack for Perl
NLapack for .NET
CControl for C in embedded systems
lapack for Rust
Implementations
As with BLAS, LAPACK is sometimes forked or rewritten to provide better performance on specific systems. Some of the implementations are:
Accelerate Apple's framework for macOS and iOS, which includes tuned versions of BLAS and LAPACK.
Netlib LAPACK The official LAPACK.
Netlib ScaLAPACK Scalable (multicore) LAPACK, built on top of PBLAS.
Intel MKL Intel's Math routines for their x86 CPUs.
OpenBLAS Open-source reimplementation of BLAS and LAPACK.
Gonum LAPACK A partial native Go implementation.
Since LAPACK typically calls underlying BLAS routines to perform the bulk of its computations, simply linking to a better-tuned BLAS implementation can be enough to significantly improve performance. As a result, LAPACK is not reimplemented as often as BLAS is.
Similar projects
These projects provide a similar functionality to LAPACK, but with a main interface differing from that of LAPACK:
Libflame A dense linear algebra library. Has a LAPACK-compatible wrapper. Can be used with any BLAS, although BLIS is the preferred implementation.
Eigen A header library for linear algebra. Has a BLAS and a partial LAPACK implementation for compatibility.
MAGMA Matrix Algebra on GPU and Multicore Architectures (MAGMA) project develops a dense linear algebra library similar to LAPACK but for heterogeneous and hybrid architectures including multicore systems accelerated with GPGPUs.
PLASMA The Parallel Linear Algebra for Scalable Multi-core Architectures (PLASMA) project is a modern replacement of LAPACK for multi-core architectures. PLASMA is a software framework for development of asynchronous operations and features out of order scheduling with a runtime scheduler called QUARK that may be used for any code that expresses its dependencies with a directed acyclic graph.
See also
List of numerical libraries
Math Kernel Library (MKL)
NAG Numerical Library
SLATEC, a FORTRAN 77 library of mathematical and statistical routines
QUADPACK, a FORTRAN 77 library for numerical integration
References
Fortran libraries
Free software programmed in Fortran
Numerical linear algebra
Numerical software
Software using the BSD license | LAPACK | Mathematics | 1,248 |
326,566 | https://en.wikipedia.org/wiki/Concertina%20wire | Concertina wire or Dannert wire is a type of barbed wire or razor wire that is formed in large coils which can be expanded like a concertina. In conjunction with plain barbed wire (and/or razor wire/tape) and steel pickets, it is most often used to form military-style wire obstacles. It is also used in non-military settings, such as when used in prison barriers, detention camps, riot control, or at international borders.
During World War I, soldiers manufactured concertina wire themselves, using ordinary barbed wire. Today, it is factory made.
Origins
In World War I, barbed wire obstacles were made by stretching lengths of barbed wire between stakes of wood or iron. At its simplest, such a barrier would resemble a fence as might be used for agricultural purposes. The double apron fence comprised a line of pickets with wires running diagonally down to points on the ground either side of the fence. Horizontal wires were attached to these diagonals.
More elaborate and formidable obstructions could be formed with multiple lines of stakes connected with wire running from side-to-side, back-to-front, and diagonally in many directions. Effective as these obstacles were, their construction took considerable time.
Barbed wire obstacles were vulnerable to being pushed about by artillery shells; in World War I, this frequently resulted in a mass of randomly entangled wires that could be even more daunting than a carefully constructed obstacle. Learning this lesson, World War I soldiers would deploy barbed wire in so-called concertinas that were relatively loose. Barbed wire concertinas could be prepared in the trenches and then deployed in no-man's-land relatively quickly under cover of darkness.
Concertina wire packs flat for ease of transport and can then be deployed as an obstacle much more quickly than ordinary barbed wire, since the flattened coil of wire can easily be stretched out, forming an instant obstacle that will at least slow enemy passage. Several such coils with a few stakes to secure them in place are just as effective as an ordinary barbed wire fence, which must be built by driving stakes and running multiple wires between them.
A platoon of soldiers can deploy a single concertina fence at a rate of about a kilometre (0.6 miles) per hour. Such an obstacle is not very effective by itself (although it will still hinder an enemy advance under the guns of the defenders), and concertinas are normally built up into more elaborate patterns as time permits.
Today, concertina wire is factory made and is available in forms that can be deployed very rapidly from the back of a vehicle or trailer.
Dannert wire
Oil-tempered barbed wire was developed during World War I; it was much harder to cut than ordinary barbed wire. During the 1930s, the German Horst Dannert developed concertina wire of this high-grade steel wire. The result was entirely self-supporting; it did not require any vertical posts. An individual Dannert wire concertina could be compressed into a compact coil that could be carried by one man and then stretched out along its axis to form a long barrier; each coil could be held in place with just three staples hammered into the ground.
Dannert wire was imported into Britain from Germany before World War II. During the invasion crisis of 1940–1941, the demand for Dannert wire was so great that some was produced with low manganese steel wire which was easier to cut. This material was known as "Yellow Dannert" after the identifying yellow paint on the concertina handles. To compensate for the reduced effectiveness of Yellow Dannert, an extra supply of pickets were issued in lieu of screw pickets.
Triple concertina wire
A barrier known as a triple concertina wire fence consists of two parallel concertinas joined by twists of wire and topped by a third concertina similarly attached. The result is an extremely effective barrier with many of the desirable properties of a random entanglement. A triple concertina fence could be deployed very quickly: it is possible for a party of five men to deploy a stretch of triple concertina fence in just 15 minutes. Optionally, a triple concertina fence could be strengthened with uprights, but this increases the construction time significantly.
"Constantine" wire
Concertina wire is sometimes mistakenly called "constantine" wire. "Constantine" probably arose as a corruption or misunderstanding of "concertina", aided by confusion with the Roman Emperor Constantine. This, in turn, has led some people to distinguish between concertina wire and constantine wire by applying the term constantine wire to what is commonly known as razor wire. In contrast to the helical construction of concertina wire, razor wire consists of a single wire with teeth that project periodically along its length.
See also
Slinky
References
Citations
Works cited
Further reading
External links
Engineering barrages
Fortification (obstacles)
Area denial weapons
Wire | Concertina wire | Engineering | 974 |
37,211,063 | https://en.wikipedia.org/wiki/Theta%20Indi | Theta Indi (θ Ind) is a binary star in the constellation Indus. Its apparent magnitude is 4.40, and it is approximately 98.8 light-years away based on parallax. The smaller companion, B, has a spectral type of G0V (a yellow main-sequence star) and an apparent magnitude of 7.18 at a separation of 6.71″. Recent observations suggest that the primary is itself a binary, with components Aa and Ab separated by 0.0617″ and an estimated orbital period of about 1.3 years.
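The quoted distance follows from the standard parallax relation d(parsec) = 1 / p(arcsec). A minimal sketch in Python (the parallax value below is back-computed from the quoted 98.8 light-year figure for illustration; it is an assumption, not a measurement cited here):

# Distance from trigonometric parallax: d [pc] = 1 / p [arcsec].
LY_PER_PARSEC = 3.2616                  # light-years per parsec

parallax_arcsec = 0.0330                # ~33.0 milliarcseconds (assumed)
distance_pc = 1.0 / parallax_arcsec     # ~30.3 pc
distance_ly = distance_pc * LY_PER_PARSEC
print(round(distance_ly, 1))            # -> 98.8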
References
Indus (constellation)
A-type main-sequence stars
Triple star systems
Indi, Theta
Durchmusterung objects
202730
105319
8140 | Theta Indi | Astronomy | 141 |
30,594,071 | https://en.wikipedia.org/wiki/Alternative%20Splicing%20and%20Transcript%20Diversity%20database | The Alternative Splicing and Transcript Diversity database (ASTD) was a database of transcript variants maintained by the European Bioinformatics Institute from 2008 to 2012. It contained transcription initiation, polyadenylation and splicing variant data.
See also
Alternative Splicing Annotation Project
AspicDB
RNA splicing
References
External links
https://web.archive.org/web/20111227225355/http://www.ebi.ac.uk/asd/
Genetics databases
Gene expression
RNA splicing
Science and technology in Cambridgeshire
South Cambridgeshire District | Alternative Splicing and Transcript Diversity database | Chemistry,Biology | 123 |
74,330,503 | https://en.wikipedia.org/wiki/Jennifer%20Waters | Jennifer Waters is an American scientist who is a Lecturer on Cell Biology, the Director of the Core for Imaging Technology & Education (CITE; formerly the NIC), and the Director of the Cell Biology Microscopy Facility at Harvard Medical School. She is an imaging expert and educator whose efforts to educate life scientists about microscopy and to systematize the education of microscopists in microscopy facilities serve as a blueprint for similar efforts worldwide.
Education
Waters studied Biology at University at Albany, SUNY and graduated with a B.Sc. in 1992. In 1998, she earned her Ph.D. in Biology. During her Ph.D., she used quantitative fluorescence live cell imaging to study the mechanisms and regulation of mitosis in vertebrate tissue culture cells. After completing her thesis, supervised by Edward D. Salmon, she moved to Wake Forest University, where she taught light microscopy courses in their graduate program.
Career
In 2001, she began her position as Director of the Nikon Imaging Center and Director of the Cell Biology Microscopy Facility at Harvard Medical School. In 2024, the NIC@HMS contract ended and the core was renamed the Core for Imaging Technology & Education. Waters and her staff advise and train users in a wide range of light microscopy techniques. She also teaches light microscopy courses for graduate students at Harvard Medical School.
Over the years, Waters recognized the need for systematic training of technical imaging experts and implemented such training in the form of a new, well-structured postdoctoral fellowship that other facilities have begun to adopt, improving technical microscopy expertise worldwide.
Waters has also been involved in several microscopy courses outside of Harvard over the years, including the Analytical and Quantitative Light Microscopy course at the Marine Biological Laboratory in Woods Hole, MA.
Since 2011, Waters has organized an annual two-week course on Quantitative Imaging at Cold Spring Harbor Laboratory in Laurel Hollow, New York. Waters and her team created this course with a dense and comprehensive curriculum. It has become one of the top microscopy courses in the world.
In 2019, Waters was named Chan Zuckerberg Initiative Imaging Scientist. As part of this recognition, Waters has intensified her microscopy outreach activities, including the YouTube channel Microcourses and the searchable database Microlist.
Waters is on the editorial board of BioTechniques, has authored multiple educational articles and reviews on quantitative microscopy, and edited the book “Quantitative Imaging in Cell Biology” with Torsten Wittmann (UCSF).
Awards and honors
2019–2024, Chan Zuckerberg Initiative Imaging Scientist Award
2021–2022, Chan Zuckerberg Initiative napari Plugin Foundation Award
References
External links
The Microscopists interviews Jennifer Waters
Microcourses
Living people
Women in optics
Year of birth missing (living people)
Microscopists
University of North Carolina at Chapel Hill alumni
University at Albany, SUNY alumni | Jennifer Waters | Chemistry | 573 |
36,233,960 | https://en.wikipedia.org/wiki/Feed%20ramp | A feed ramp is a basic feature of many breech loading cartridge firearm designs. It is a tightly machined and polished piece of metal which guides a cartridge from the top of the magazine into the firing chamber of the barrel. The feed ramp may be part of the magazine (AR-7), part of the receiver or frame (Mauser C96), part of the barrel (H&K USP) or part of the barrel nut/locking lugs (AR-15). Some firearms, like the FN Five-seven, have a beveled chamber instead of a feed ramp.
The feed ramp is a critical part of semi-automatic firearms and automatic firearms. When the weapon is fired and the spent case is ejected, the feed ramp functions to direct a fresh cartridge from the magazine into firing position; that is, the fresh cartridge slides along the feed ramp into battery. The need for the cartridge to slide both forwards and upwards along the feed ramp and into the barrel is the primary design consideration that makes the ogive the preferred shape for all modern automatic pistol rounds (a hollow point bullet is a truncated ogive), as there are many other shapes that are stable in ballistic flight.
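As an aside on the geometry behind this preference, a tangent ogive (the standard ballistics parametrization; the formula below is a textbook convention, not something stated in this article) of base radius R and length L is an arc of a circle whose radius ρ satisfies

\rho = \frac{R^2 + L^2}{2R}

which yields a nose that curves smoothly and continuously from the tip to the full diameter, letting the round slide forwards and upwards along the ramp without presenting a snag point.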
A rough surface or the presence of debris on the feed ramp can knock cartridges off-center, inducing a jam. Polishing a feed ramp is among the tasks commonly performed by gunsmiths.
See also
List of firearm terminology
References
External links
Firearm components | Feed ramp | Technology | 297 |
63,411,811 | https://en.wikipedia.org/wiki/Spendor | Spendor is a British loudspeaker manufacturing company founded in 1969 by audio engineer Spencer Hughes (1924–1983) and his wife Dorothy. It is located in East Sussex. The name is derived from the couple's first names, Spencer and Dorothy.
Research in the 1960s
Spencer Hughes worked in an investigation team of the BBC research department in the 1960s. Although the BBC was moving into television, its licence budget meant that the main transmission output was still radio, so the imperative behind a BBC-licensed loudspeaker was that, within the physical confines of a small bookshelf speaker, it should reproduce an audio signal with acoustic fidelity to the original radio presenter's voice across the entire spoken vocal range. The result was the historically important BBC LS3/5a loudspeaker. Under the optional licence, any manufacturer could produce the LS3/5a design, but only if it could build a speaker that matched the high-fidelity standard the BBC had worked to achieve. Consumers who had the means to buy a licensed amplification and loudspeaker system were thus guaranteed the same sonic signature that BBC sound engineers heard during recording and replay. The main alternative, the unwieldy but sonically superior Quad Electrostatic Loudspeaker (ESL), was simply too large and expensive for most audiophiles.
One offshoot of the research was a driver membrane made from a polystyrene-based plastic ("Bextrene") for mid-range drivers and woofers.
History
Start
In the early days Dorothy contributed her coil-winding expertise; later she took over general management.
The first product was the BC1, which Spencer designed while still working for the BBC. Several other designs followed, including the BC2, BC3, SA1 and SP1. Spendor also made the BBC LS3/5a under licence from the BBC.
The BC1 is bigger than the LS3/5a, also uses a Bextrene membrane, and was likewise used in many radio stations. As a consequence, many UK speaker designs were influenced by its improvements in sound quality: reduced colouration, greater consistency, and better stereo imaging. The BC1 was built, with minor modifications, until 1994.
The present Head of Engineering, Terry Miles, started as Spencer Hughes’ assistant in 1975.
Derek Hughes' time at Spendor
Derek Hughes, son of Spencer and Dorothy, worked at Spendor; in a 1980 letter, Spencer described him as assisting with research and development and the general running of the factory. After Spencer's untimely death in 1983, Derek worked with his mother as Technical Director, producing the original versions of what is now the Classic Series, most notably the SP1/2, SP2 and S100. He also carried out, among other projects, the 1998 redesign of the 3/5, and he still works on loudspeakers as a freelance consultant designer.
Since 2000
In 2000 the company was acquired by Philip Swift, a speaker designer and co-founder of Audiolab who personally knew Spencer Hughes; it was later sold to Ajay Shirke. Spendor develops and manufactures all components in the UK.
Around this time Spendor enlarged its product range with floorstanding loudspeakers, first with models in the S line; floorstanders can currently be found in the A line and the higher-end D line.
Products 2020
A line: compact and floorstanding speakers
Classic: had no floorstanding speakers until 2018
D line: compact and floorstanding speakers
See also
Studio monitor
References
External links
Homepage of the company Spendor
Audio engineering
BBC
Audio equipment manufacturers of the United Kingdom
Loudspeaker manufacturers | Spendor | Engineering | 801 |
13,276,958 | https://en.wikipedia.org/wiki/Initial%20value%20formulation%20%28general%20relativity%29 | The initial value formulation of general relativity is a reformulation of Albert Einstein's theory of general relativity that describes a universe evolving over time.
Each solution of the Einstein field equations encompasses the whole history of a universe – it is not just some snapshot of how things are, but a whole spacetime: a statement encompassing the state of matter and geometry everywhere and at every moment in that particular universe. By this token, Einstein's theory appears to be different from most other physical theories, which specify evolution equations for physical systems; if the system is in a given state at some given moment, the laws of physics allow you to extrapolate its past or future. For Einstein's equations, there appear to be subtle differences compared with other fields: they are self-interacting (that is, non-linear even in the absence of other fields); they are diffeomorphism invariant, so to obtain a unique solution, a fixed background metric and gauge conditions need to be introduced; finally, the metric determines the spacetime structure, and thus the domain of dependence for any set of initial data, so the region on which a specific solution will be defined is not, a priori, defined.
There is, however, a way to re-formulate Einstein's equations that overcomes these problems. First of all, there are ways of rewriting spacetime as the evolution of "space" in time; an earlier version of this is due to Paul Dirac, while a simpler way is known after its inventors Richard Arnowitt, Stanley Deser and Charles Misner as the ADM formalism. In these formulations, also known as "3+1" approaches, spacetime is split into a three-dimensional hypersurface with interior metric γ and an embedding into spacetime with exterior curvature K; these two quantities are the dynamical variables in a Hamiltonian formulation tracing the hypersurface's evolution over time. With such a split, it is possible to state the initial value formulation of general relativity. It involves initial data which cannot be specified arbitrarily but needs to satisfy specific constraint equations, and which is defined on some suitably smooth three-manifold Σ; just as for other differential equations, it is then possible to prove existence and uniqueness theorems, namely that there exists a unique spacetime which is a solution of the Einstein equations, which is globally hyperbolic, for which Σ is a Cauchy surface (i.e. all past events influence what happens on Σ, and all future events are influenced by what happens on it), and which has the specified internal metric and extrinsic curvature; all spacetimes that satisfy these conditions are related by isometries.
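The constraint equations just mentioned can be written down explicitly. As a minimal sketch in common ADM notation for the vacuum case (γ_ij is the induced three-metric on Σ, K_ij the extrinsic curvature with trace K, {}^{(3)}R the scalar curvature of γ, and D_j its covariant derivative; this textbook notation is an assumption, not fixed by the article):

{}^{(3)}R + K^2 - K_{ij} K^{ij} = 0 \qquad \text{(Hamiltonian constraint)}

D_j \left( K^{ij} - \gamma^{ij} K \right) = 0 \qquad \text{(momentum constraint)}

Initial data (γ_ij, K_ij) on Σ must satisfy these equations before the evolution equations can be integrated, which is what it means for the data to not be freely specifiable.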
The initial value formulation with its 3+1 split is the basis of numerical relativity: attempts to simulate the evolution of relativistic spacetimes (notably merging black holes or gravitational collapse) using computers. However, there are significant differences from the simulation of other physical evolution equations which make numerical relativity especially challenging, notably the fact that the dynamical objects that are evolving include space and time itself (so there is no fixed background against which to evaluate, for instance, perturbations representing gravitational waves) and the occurrence of singularities (which, when they are allowed to occur within the simulated portion of spacetime, lead to arbitrarily large numbers that would have to be represented in the computer model).
See also
ADM formalism
Notes
References
Kalvakota, Vaibhav R. (July 1, 2021). "A brief account of the Cauchy problem in General Relativity".
General relativity | Initial value formulation (general relativity) | Physics | 737 |
20,726,862 | https://en.wikipedia.org/wiki/Digital%20citizen | The term digital citizen is used with different meanings. According to the definition provided by Karen Mossberger, one of the authors of Digital Citizenship: The Internet, Society, and Participation, digital citizens are "those who use the internet regularly and effectively." In this sense, a digital citizen is a person using information technology (IT) in order to engage in society, politics, and government.
Digital Citizenship refers to the responsible use of technology and the internet. It involves following ethical norms and practices when engaging online, ensuring that individuals contribute positively to the digital world.
Key principles of digital citizenship include:
Digital Access: Ensuring equitable access to technology for all.
Digital Etiquette: Practicing respectful and responsible behavior in online interactions.
Digital Communication: Using digital tools to communicate effectively.
Digital Literacy: Understanding how to use and evaluate digital information critically.
Digital Law: Following legal standards related to online activities.
Digital Rights and Responsibilities: Acknowledging online rights (privacy, freedom of expression) and responsibilities (respect, accountability).
Digital Health and Wellness: Managing screen time and maintaining mental and physical health in a digital world.
Digital Security: Protecting personal information and online safety.
Overall, digital citizenship is about navigating the online world safely, ethically, and responsibly, while fostering a positive and inclusive digital environment.
More recent elaborations of the concept define digital citizenship as the self-enactment of people's role in society through the use of digital technologies, stressing the empowering and democratizing characteristics of the citizenship idea. These theories aim to take into account the ever-increasing datafication of contemporary societies (symbolically linked to the Snowden leaks), which has radically called into question the meaning of "being (digital) citizens in a datafied society". This condition is also referred to as the "algorithmic society", characterised by the pervasive datafication of social life and the presence of surveillance practices (see surveillance and surveillance capitalism), the use of artificial intelligence, and Big Data.
Datafication presents crucial challenges for the very notion of citizenship, such that data collection can no longer be seen as an issue of privacy alone: "We cannot simply assume that being a citizen online already means something (whether it is the ability to participate or the ability to stay safe) and then look for those whose conduct conforms to this meaning." Instead, the idea of digital citizenship should reflect the fact that we are no longer mere "users" of technologies, since they shape our agency both as individuals and as citizens.
Digital citizenship is the responsible and respectful use of technology to engage online, find reliable sources, and protect and promote human rights. It teaches skills to communicate, collaborate, and act positively on any online platform. It also teaches empathy, privacy protection, and security measures to prevent data breaches and identity theft.
Digital citizenship in the "algorithmic society"
In the context of the algorithmic society, the question of digital citizenship "becomes one of the extents to which subjects are able to challenge, avoid or mediate their data double in this datafied society”.
These reflections put the emphasis on the idea of the digital space (or cyberspace) as a political space where the respect of fundamental rights of the individual shall be granted (with reference both to the traditional ones as well as to new specific rights of the internet [see “digital constitutionalism”]) and where the agency and the identity of the individuals as citizens is at stake. This idea of digital citizenship is thought to be not only active but also performative, in the sense that “in societies that are increasingly mediated through digital technologies, digital acts become important means through which citizens create, enact and perform their role in society.”
In particular, for Isin and Ruppert this points towards an active meaning of (digital) citizenship based on the idea that we constitute ourselves as digital citizen by claiming rights on the internet, either by saying or by doing something.
Types of digital participation
People who characterize themselves as digital citizens often use IT extensively—creating blogs, using social networks, and participating in online journalism. Although digital citizenship begins when any child, teen, or adult signs up for an email address, posts pictures online, uses e-commerce to buy merchandise online, and/or participates in any electronic function that is B2B or B2C, the process of becoming a digital citizen goes beyond simple internet activity. According to Thomas Humphrey Marshall, a British sociologist known for his work on social citizenship, a primary framework of citizenship comprises three different traditions: liberalism, republicanism, and ascriptive hierarchy. Within this framework, the digital citizen needs to exist in order to promote equal economic opportunities and increase political participation. In this way, digital technology helps to lower the barriers to entry for participation as a citizen within a society.
They also have a comprehensive understanding of digital citizenship, which is the appropriate and responsible behavior when using technology. Since digital citizenship evaluates the quality of an individual's response to membership in a digital community, it often requires the participation of all community members, both visible and those who are less visible. A large part in being a responsible digital citizen encompasses digital literacy, etiquette, online safety, and an acknowledgement of private versus public information. The development of digital citizen participation can be divided into two main stages.
The first stage is through information dissemination, which includes subcategories of its own:
static information dissemination, characterized largely by citizens who use read-only websites, drawing on data from credible sources in order to form judgments or establish facts. Many of the websites where credible information may be found are provided by the government.
dynamic information dissemination, which is more interactive and involves citizens as well as public servants. Both questions and answers can be communicated, and citizens have the opportunity to engage in question-and-answer dialogues through two-way communication platforms.
The second stage of digital citizen participation is citizen deliberation, which evaluates what type of participation and role that they play when attempting to ignite some sort of policy change.
static citizen participants can play a role by engaging in online polls as well as by sending up complaints and recommendations, mainly to the government, which can make changes to policy decisions.
dynamic citizen participants can deliberate with others on their thoughts and recommendations in town hall meetings or on various media sites.
One of the primary advantages of participating in online debates through digital citizenship is that it incorporates social inclusion. In a report on civic engagement, citizen-powered democracy can be initiated through information shared on the web, direct communication from the state toward the public, and social media tactics from both private and public companies. In fact, it was found that the community-based nature of social media platforms allows individuals to feel more socially included and informed about political issues that peers have also been found to engage with, otherwise known as a "second-order effect." Understanding strategic marketing on social media would further explain social media customers' participation. Two types of opportunities arise as a result: the first is the ability to lower barriers, which can make exchanges much easier. In addition, people have the chance to participate in transformative disruption, giving those with historically lower political engagement a much easier and more convenient way to mobilize.
Nonetheless, several challenges face the presence of digital technologies in political participation. Both current and potential challenges can create significant risks for democratic processes. Not only is digital technology still seen as relatively ambiguous, it has also been seen to offer "less inclusivity in democratic life." Demographic groups differ considerably in their use of technology, and thus one group could be more represented than another as a result of digital participation. Another primary challenge is the idea of a "filter bubble" effect. Alongside a tremendous spread of false information, internet users could reinforce existing prejudices and contribute to polarizing disagreements in the public sphere. This can lead to misinformed voting and decisions based on exposure rather than on knowledge. A communication technology researcher, Van Dijk, stated, "Computerized information campaigns and mass public information systems have to be designed and supported in such a way that they help to narrow the gap between the 'information rich' and 'information poor' otherwise the spontaneous development of ICT will widen it." Access to digital technology and the knowledge behind it must be equitable in order for a fair system to be put in place.
Alongside a lack of evidenced support for technology that can be proven to be safe for citizens, the OECD has identified five struggles for the online engagement of citizens:
Scale: To what extent can a society allow every individual's voice to be heard, but also not be lost in the mass debate? This can be extremely challenging for the government, which may not effectively know how to listen and respond to each individual contribution.
Capacity: How can digital technology offer citizens more information on public policy-making? The opportunity for citizens to debate with one another is lacking for active citizenship.
Coherence: The government is yet to design a more holistic view of the policy-making cycle and the use of design technology to better prepare information from citizens in each stage of the policy-making cycle.
Evaluation: There is a greater need now than ever before to figure out whether or not the online engagement can help meet the citizen as well as the government's objectives.
Commitment: Is the government committed to analyze and use citizen's public input, and how can this process be validated more regularly?
Developed states and developing countries
Highly developed states possess the capacity to link their respective governments with digital sites. Such sites function in ways such as publicizing recent legislation and current and future policy objectives, lending agency to political candidates, and/or allowing citizens to voice themselves politically. Likewise, the emergence of these sites has been linked to increased voting advocacy. Lack of access to technology can be a serious obstacle to becoming a digital citizen, since many elementary procedures such as tax report filing, birth registration, and use of websites to support candidates in political campaigns (e-democracy) have become available solely via the internet. Furthermore, many cultural and commercial entities only publicize information on web pages. Non-digital citizens will not be able to retrieve this information, and this may lead to social isolation or economic stagnation.
The gap between digital citizens and non-digital citizens is often referred as the digital divide. In developing countries, digital citizens are fewer. They consist of the people who use technology to overcome local obstacles including development issues, corruption, and even military conflict. Examples of such citizens include users of Ushahidi during the 2007 disputed Kenyan election and protesters in the Arab Spring movements who used media to document repression of protests. Currently, the digital divide is a subject of academic debate as access to the internet has increased in these developing countries, but the place in which it is accessed (work, home, public library, etc.) has a significant effect on how much access will be used, if even in a manner related to the citizenry. Recent scholarship has correlated the desire to be technologically proficient with greater belief in computer access equity, and thus, digital citizenship (Shelley, et al.).
On the other side of the divide, one example of a highly developed digital technology program in a wealthy state is the e-Residency of Estonia. This form of digital residency allows both citizens and non-citizens of the state to pursue business opportunities in a digital business environment. The application is simple; residents can fill out a form with their passport and photograph alongside the reason for applying. Following a successful application, the "e-residency" will allow them to register a company, sign documents, make online banking declarations, and file medical prescriptions online, though they will be tracked through financial footprints. The project plans to cover over 10 million e-residents by 2025 and there were over 54,000 participants from over 162 countries that have expressed an interest, contributing millions of dollars to the country's economy and assisting in access to any public service online. Other benefits include hassle-free administration, lower business costs, access to the European Union market, and a broad range of e-services. Though the program is designed for entrepreneurs, Estonia hopes to value transparency and resourcefulness as a cause for other companies to implement similar policies domestically. In 2021, Estonia's neighbor Lithuania launched a similar e-Residency program.
Nonetheless, Estonia's e-Residency system has been subject to criticism. Many have pointed out that tax treaties within participants' own countries will play a major role in preventing this idea from spreading to more countries. Another risk is political: governments must sustain "funding and legislative priorities across different coalitions of power." Most importantly, the threat of cyberattacks may disrupt the seemingly optimal idea of a platform for eIDs, as Estonia suffered a massive cyberattack by Russian hacktivists in 2007. Today, the protection of digital services and databases is essential to national security, and many countries are still hesitant to take the next step toward a new system that would change the scale of politics for all citizens.
Other forms of digital divide
Within developed countries, the digital divide, beyond economic differences, is attributed to educational levels. A study conducted by the United States National Telecommunications and Information Administration determined that the gaps in computer usage and internet access widened by 7.8% and 25%, respectively, between the most and least educated, and it has been observed that those with college degrees or higher are 10 times more likely to have internet access at work than those with only a high school education.
A digital divide often extends along specific racial lines as well. The difference in computer usage grew by 39.2% between White and Black households and by 42.6% between White and Hispanic households only three years ago. Race can also affect the number of computers at school, and as expected, gaps between racial groups narrow at higher income levels while widening among households at lower economic levels. Racial disparities have been shown to exist irrespective of income. In a cultural study of reasons for the divide other than income, computers were seen in the Hispanic community as a luxury, not a need, and participants collectively stated that computer activities isolated individuals and took valuable time away from family activities. In the African-American community, it was observed that people have historically had negative encounters with technological innovations, while among Asian-Americans education was emphasized, and thus a larger number of people embraced the rise in technological advances.
An educational divide also takes place as a result of differences in the use of daily technology. In a report analyzed by the ACT Center for Equity in Learning, "85% of respondents reported having access to anywhere from two to five devices at home. The remaining one percent of respondents reported having access to no devices at home." For the 14% of respondents with one device at home, many of them reported the need to share these devices with other household members, facing challenges that are often overlooked. The data all suggest that wealthier families have access to more devices. In addition, out of the respondents that only used one device at home, 24% of them lived in rural areas, and over half reported that this one device was a smartphone; this could make completing schoolwork assignments more difficult. The ACT recommended that underserved students need access to more devices and higher-quality networks, and educators should do their best to ensure that students can find as many electronic materials through their phones to not place a burden on family plans.
Engagement of youth
A recent survey revealed that teenagers and young adults spend more time on the internet than watching TV. This has raised a number of concerns about how internet use could impact cognitive abilities. According to a study by Wartella et al., teens are concerned about how digital technologies may affect their health. Digital youth can generally be viewed as the test market for the next generation's digital content and services. Sites such as Myspace and Facebook have come to the fore as sites where youth participate and engage with others on the internet. However, due to the declining popularity of MySpace in particular, more young people are turning to websites such as Snapchat, Instagram, and YouTube. It was reported that teenagers spend up to nine hours a day online, with the vast majority of that time spent on social media websites from mobile devices, contributing to the ease of access and availability to young people. Vast amounts of money are spent annually to research this demographic by hiring psychologists, sociologists, and anthropologists to discover habits, values, and fields of interest.
Particularly in the United States, "Social media use has become so pervasive in the lives of American teens that having a presence on a social network is almost synonymous with being online; 95% of all teens ages 12-17 are now online and 80% of those online teens are users of social media sites". However, movements such as these appear to benefit strictly those wishing to promote their business to youth. The critical time when young people develop their civic identities is between the ages of 15 and 22. During this time they develop three attributes (civic literacy, civic skills, and civic attachment) that constitute civic engagement, later reflected in the political actions of their adult lives.
For youth to fully participate and realize their presence on the internet, a quality level of reading comprehension is required. "The average government web site, for example, requires an eleventh-grade level of reading comprehension, even though about half of the U.S. population reads at an eighth-grade level or lower". So despite the internet being a place irrespective of certain factors such as race, religion, and class, education plays a large part in a person's capacity to present themselves online in a formal manner conducive towards their citizenry. Concurrently, education also affects people's motivation to participate online.
Students should be encouraged to use technology with responsibility and ethical digital citizenship promoted. Education on harmful viruses and other malware must be emphasized to protect resources. A student can be a successful digital citizen with the help of educators, parents, and school counselors.
These 5 competencies will assist and support teachers in teaching about digital citizenship:
Inclusive
I am open to hearing and respectfully recognizing multiple viewpoints and I engage with others online with respect and empathy.
Informed
I evaluate the accuracy, perspective, and validity of digital media and social posts.
Engaged
I use technology and digital channels for civic engagement, to solve problems and be a force for good in both physical and virtual communities.
Balanced
I make informed decisions about how to prioritize my time and activities online and off.
Alert
I am aware of my online actions, and know how to be safe and create safe spaces for others online.
Limits on the use of data
International OECD guidelines state that "personal data should be relevant to the purposes for which they are to be used, and to the extent necessary for those purposes should be accurate, complete, and kept up to date". Article 8, subject to certain exceptions, prevents the publication online of data revealing race, ethnicity, religion, political stance, health, and sex life. In the United States, this is enforced only in general terms, by the Federal Trade Commission (FTC). For example, the FTC brought an action against Microsoft for failing to properly protect customers' personal information. In addition, many have described the United States as being in a cyberwar with Russia, and several Americans have credited Russia with the country's decline in transparency and in trust in the government. With foreign users posting anonymous information through social media in order to gather a following, it is difficult to determine whom to target and what affiliation or motive may lie behind a particular action aimed at swaying public opinion.
The FTC does play a significant role in protecting the digital citizen. However, individuals' public records are increasingly useful to the government and highly sought after. Such material can help the government detect a variety of crimes, such as fraud, drug distribution rings, and terrorist cells, and it makes it easier to profile a suspected criminal and keep an eye on them. Although there are a variety of ways to gather information on an individual, such as through credit card history and employment history, the internet is becoming the most desirable information gatherer thanks to its façade of security and the amount of information that can be stored on it. Anonymity has proven to be very rare online, as ISPs can keep track of an individual's activity.
Three principles of digital citizenship
Digital citizenship is a term used to define the appropriate and responsible use of technology among users. Three principles were developed by Mike Ribble to teach digital users how to responsibly use technology to become a digital citizen: respect, educate, and protect. Each principle contains three of the nine elements of digital citizenship.
Respect: the elements of etiquette, access, and law are used to respect other digital users.
Educate: the elements of literacy, communication, and commerce are used to learn about the appropriate use of the digital world.
Protect: the elements of rights and responsibilities, security, and health and wellness are used to remain safe in the digital and non-digital world.
Within these three core principles, there are nine elements to also be considered in regards to digital citizenship:
Digital access: This is perhaps one of the most fundamental blocks to being a digital citizen. However, due to socioeconomic status, location, and other disabilities, some individuals may not have digital access. Recently, schools have been becoming more connected with the internet, often offering computers, and other forms of access. This can be offered through kiosks, community centers, and open labs. This most often is associated with the digital divide and factors associated with such. Digital access is available in many remote countries via cyber cafés and small coffee shops.
Digital commerce: This is the ability of users to recognize that much of the economy is conducted online. It also deals with understanding the dangers and benefits of online buying, using credit cards online, and so forth. Alongside the advantages and legal activities, there are also dangerous activities such as illegal downloads, gambling, drug deals, pornography, plagiarism, and so forth.
Digital communication: This element deals with understanding the variety of online communication mediums such as email, instant messaging, Facebook Messenger, and so forth. There is a standard of etiquette associated with each medium.
Digital literacy: This deals with the understanding of how to use various digital devices. For example, how to properly search for something on a search engine versus an online database, or how to use various online logs. Oftentimes many educational institutions will help form an individual's digital literacy.
Digital etiquette: As discussed in the third element, digital communication, this is the expectation that various mediums require a variety of etiquette. Certain mediums demand more appropriate behavior and language than others.
Digital law: This is where enforcement occurs for illegal downloads, plagiarizing, hacking, creating viruses, sending spam, identity theft, cyberbullying, etc.
Digital rights and responsibilities: This is the set of rights that digital citizens have, such as privacy and free speech.
Digital health: Digital citizens must be aware of the physical stress placed on their bodies by internet usage. They must be aware to not become overly dependent on the internet causing problems such as eye strain, headaches, and stress.
Digital security: This simply means that citizens must take measures to be safe by practicing using secure passwords, virus protection, backing up data, and so forth.
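As a small illustration of the security element above (a minimal sketch; the function name and parameter choices are illustrative, not drawn from any cited curriculum), a cryptographically strong password can be generated with Python's standard-library secrets module:

import secrets
import string

def generate_password(length: int = 16) -> str:
    # Draw each character from letters, digits, and punctuation using a
    # cryptographically secure random source (unlike the random module).
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return ''.join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())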
Digital citizenship in education
According to Mike Ribble, an author who has worked on the topic of digital citizenship for more than a decade, digital access is the first element that is prevalent in today's educational curriculum. He cited a widening gap between the impoverished and the wealthy, as 41% of African Americans and Hispanics use computers in the home when compared to 77% of white students. Other crucial digital elements include commerce, communication, literacy, and etiquette. He also emphasized that educators must understand that technology is important for all students, not only those who already have access to it, in order to decrease the digital divide that currently exists.
Furthermore, in research from Common Sense Media, approximately six out of ten American K-12 teachers used some type of digital citizenship curriculum, and seven out of ten taught some sort of competency skill utilizing digital citizenship. Many of the areas these teachers focused on included hate speech, cyberbullying, and digital drama. A problem that still exists with digital technology is that over 35% of students were observed to lack the proper skills to critically evaluate information online, and these issues increased as grade levels rose. Online videos such as those found on YouTube and Netflix were used by approximately 60% of K-12 teachers in classrooms, while educational tools such as Microsoft Office and Google G Suite were used by around half of the teachers. Social media was used the least, at around 13%, in comparison with other digital methods of education. When analyzing social class differences between schools, it was found that teachers in Title I schools were more likely to use digital citizenship curricula than teachers in more affluent schools.
In the past two years, there has been a major shift to move students from digital citizenship to digital leadership in order to make a greater impact on online interactions. Though digital citizens take a responsible approach to acting ethically, digital leadership is a more proactive approach, encompassing the "use of internet and social media to improve the lives, well-being, and circumstances of others" as part of one's daily life. In February 2018, after the Valentine's Day shooting in Parkland, Florida, students became dynamic digital citizens, using social media and other web platforms to engage proactively on the issue and push back against cyberbullies and misinformation. Students from Marjory Stoneman Douglas High School specifically rallied against gun violence, engaging in live tweeting, texting, videoing, and recording the attack as it happened, using digital tools on site not only to witness what was happening at the time but to allow the world to witness it as well. This allowed the nation to see and react, and as a result, students built a web page and logo for their new movement. They gave interviews to major media outlets and at rallies and protests, and they coordinated a nationwide march online for March 24, confronting elected officials at meetings and town halls. The idea of this shift in youth is to express empathy beyond one's self, moving to seeing this self in the digital company of others.
Nonetheless, several critics state that just as empathy can be spread to a vast number of individuals, hatred can be spread as well. Though the United Nations and groups have been establishing fronts against hate speech, there is no legal definition of hate speech used internationally, and more research needs to be done on its impact.
Along with educational trends, there are overlapping goals of digital citizenship education. Altogether, these facets contribute to one another in the development of a healthy and effective education for digital technology and communication.
Digital footprint: An acknowledgment that information posted and received online can be tracked, customized, and marketed for users to click and follow. Individuals' digital footprints can lead to both beneficial and negative outcomes, and the ability to manage one's digital footprint can be seen as a sub-part of digital literacy. Digital footprints do not simply consist of active participation in content production and the sharing of ideas on different media sites; they can also be generated by other internet users (both active and passive forms of digital participation). Examples of digital footprints include liking, favoriting, following, or commenting on online content; other data can be found by searching through history, purchases, and searches.
Digital literacy: Almost 20 years ago, Gilster (1997) defined digital literacy as "the ability to understand and use information in multiple formats from a wide range of sources when it is presented via computers." Digital literacy includes the locating and consumption of content online, the creation of content, and the way that this content is communicated amongst a group of people.
Information literacy: The American Library Association defines information literacy as the overall ability for an individual to target information that is valuable, being able to find it, evaluate it, and use it. This can be through information creation, research, scholarly conversations, or simply plugging in keywords into a search engine.
Copyright, intellectual property respect, attribution: By knowing who published sources and whether or not content creation is credible, users can be better educated as to what and what not to believe when engaging in digital participation.
Health and wellness: A healthy community allows for an interactive conversation to take place between educated citizens who are knowledgeable about their environment.
Empowering student voice, agency, advocacy: Utilizing nonprofits as well as government-affiliated organizations in order to empower students to speak up for policy changes that need to be made. Currently, more than 10 different mobile applications aim to allow students the opportunity to speak up and advocate for rights online.
Safety, security and privacy: Addressing freedoms extended to everyone in a digital world and the balance between the right to privacy and the safety hazards that go along with it. This area of digital citizenship includes the assistance of students to understand when they are provided the right opportunities, including the proper access to the internet and products that are sold online. It is on the part of educators to assist students in understanding that it is crucial to protect others online.
Character education and ethics: Knowing that ethically speaking, everyone will come with different viewpoints online and it is crucial to remain balanced and moral in online behavior.
Parenting: Emphasizing the efforts of educators, many want to continue teaching rules and policies addressing issues related to the online world. Cyberbullying, sexting, and other negative issues that are brought up are handled by School Resource Officers and school counselors.
Parents posting about their kids online: Digital footprints have lasting impacts on a reputation that can affect relationships, employment, and opportunities. Negatively, a post can hurt a child's college admissions or ruin a relationship; positively, a post could help their future or their relationships. Parents tend to post their child's pictures, achievements, ultrasounds, or even flyers. However, "technology coupled with parents’ behavior is increasingly putting children at risk for identity theft, humiliation, various privacy violations, future discrimination, and causing concern about developmental issues related to autonomy and consent." Though such posts are often innocent and positive, parents can still be the biggest invaders of a child's privacy.
Digital Citizenship Curricula
There are free and open curricula developed by different organizations for teaching Digital Citizenship skills in schools:
Be Internet Awesome: Developed by Google in collaboration with The Net Safety Collaborative, and the Internet Keep Safe Coalition.
Digital Citizenship Curriculum: Developed by Common Sense Media, licensed under Creative Commons License CC-BY-NC-ND.
Open Curriculum for Teaching Digital Citizenship & Internet Maturity: Developed by iMature EdTech, licensed under Creative Commons License CC-BY-NC-ND.
See also
Civic technology
Digital integrity
Digital self-determination
E-government
Open government
Service design
Netizen
Digital native
Digital Literacy
References
Baron, Jessica. "Posting about Your Kids Online Could Damage Their Futures." Forbes, 24 Mar. 2022, https://www.forbes.com/sites/jessicabaron/2018/12/16/parents-who-post-about-their-kids-online-could-be-damaging-their-futures/?sh=34d59a4427b7.
Hollebeek, Linda. "Exploring Customer Brand Engagement: Definition and Themes." Journal of Strategic Marketing, vol. 19, no. 7, Dec. 2011, pp. 555–73. https://doi.org/10.1080/0965254X.2011.599493.
External links
Take the Safe Online Surfing Internet Challenge (FBI)
What Is Digital Citizenship & How Do You Teach It?
Digital technology
Internet governance
Information revolution
Information society
Information Age
Digital divide
Internet ethics
Digital media
Youth-led media | Digital citizen | Technology | 6,575 |
304,942 | https://en.wikipedia.org/wiki/Heart%20rate | Heart rate is the frequency of the heartbeat measured by the number of contractions of the heart per minute (beats per minute, or bpm). The heart rate varies according to the body's physical needs, including the need to absorb oxygen and excrete carbon dioxide. It is also modulated by numerous factors, including (but not limited to) genetics, physical fitness, stress or psychological status, diet, drugs, hormonal status, environment, and disease/illness, as well as the interaction between these factors. It is usually equal or close to the pulse rate measured at any peripheral point.
The American Heart Association states the normal resting adult human heart rate is 60–100 bpm. A highly trained athlete may have a resting heart rate as low as 37–38 bpm. Tachycardia is a high heart rate, defined as above 100 bpm at rest. Bradycardia is a low heart rate, defined as below 60 bpm at rest. When a human sleeps, a heartbeat with rates around 40–50 bpm is common and considered normal. When the heart is not beating in a regular pattern, this is referred to as an arrhythmia. Abnormalities of heart rate sometimes indicate disease.
Physiology
While heart rhythm is regulated entirely by the sinoatrial node under normal conditions, heart rate is regulated by sympathetic and parasympathetic input to the sinoatrial node. The accelerans nerve provides sympathetic input to the heart by releasing norepinephrine onto the cells of the sinoatrial node (SA node), and the vagus nerve provides parasympathetic input to the heart by releasing acetylcholine onto sinoatrial node cells. Therefore, stimulation of the accelerans nerve increases heart rate, while stimulation of the vagus nerve decreases it.
As water and blood are incompressible fluids, one of the physiological ways to deliver more blood to an organ is to increase heart rate. Normal resting heart rates range from 60 to 100 bpm. Bradycardia is defined as a resting heart rate below 60 bpm. However, heart rates from 50 to 60 bpm are common among healthy people and do not necessarily require special attention. Tachycardia is defined as a resting heart rate above 100 bpm, though persistent rest rates between 80 and 100 bpm, mainly if they are present during sleep, may be signs of hyperthyroidism or anemia (see below).
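The thresholds above translate directly into a simple classification. A minimal sketch in Python (illustrative only; real clinical assessment considers far more than a single number):

def classify_resting_heart_rate(bpm: float) -> str:
    # Thresholds follow the definitions given above.
    if bpm > 100:
        return "tachycardia"
    if bpm < 60:
        # Note: 50-60 bpm is common among healthy people.
        return "bradycardia"
    return "normal (60-100 bpm)"

print(classify_resting_heart_rate(55))  # -> bradycardia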
Central nervous system stimulants such as substituted amphetamines increase heart rate.
Central nervous system depressants or sedatives decrease the heart rate, apart from some atypical agents, such as ketamine, which can cause, among many other effects, stimulant-like effects such as tachycardia.
There are many ways in which the heart rate speeds up or slows down. Most involve stimulant-like endorphins and hormones being released in the brain, some of which are induced by the ingestion and processing of drugs such as cocaine or atropine.
This section discusses target heart rates for healthy persons, which would be inappropriately high for most persons with coronary artery disease.
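As a hedged sketch of how target heart rates are commonly estimated (the Karvonen heart-rate-reserve method with the rule-of-thumb HRmax = 220 minus age; both are widely used approximations and are assumptions here, not formulas taken from this article):

def target_heart_rate(age: int, resting_bpm: float, intensity: float) -> float:
    # Karvonen method: target = resting + intensity * (HRmax - resting),
    # with HRmax approximated by the common "220 minus age" rule of thumb.
    hr_max = 220 - age
    return resting_bpm + intensity * (hr_max - resting_bpm)

# Example: a 40-year-old with a resting rate of 65 bpm training at 70% intensity
print(target_heart_rate(40, 65, 0.70))  # -> 145.5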
Influences from the central nervous system
Cardiovascular centres
The heart rate is rhythmically generated by the sinoatrial node. It is also influenced by central factors through sympathetic and parasympathetic nerves. Nervous influence over the heart rate is centralized within the two paired cardiovascular centres of the medulla oblongata. The cardioaccelerator regions stimulate activity via sympathetic stimulation of the cardioaccelerator nerves, and the cardioinhibitory centers decrease heart activity via parasympathetic stimulation as one component of the vagus nerve. During rest, both centers provide slight stimulation to the heart, contributing to autonomic tone. This is a similar concept to tone in skeletal muscles. Normally, vagal stimulation predominates as, left unregulated, the SA node would initiate a sinus rhythm of approximately 100 bpm.
Both sympathetic and parasympathetic stimuli flow through the paired cardiac plexus near the base of the heart. The cardioaccelerator center also sends additional fibers, forming the cardiac nerves via sympathetic ganglia (the cervical ganglia plus superior thoracic ganglia T1–T4) to both the SA and AV nodes, plus additional fibers to the atria and ventricles. The ventricles are more richly innervated by sympathetic fibers than parasympathetic fibers. Sympathetic stimulation causes the release of the neurotransmitter norepinephrine (also known as noradrenaline) at the neuromuscular junction of the cardiac nerves. Norepinephrine opens chemical- or ligand-gated sodium and calcium ion channels, allowing an influx of positively charged ions; this shortens the repolarization period, speeding the rate of depolarization and contraction and resulting in an increased heart rate.
Norepinephrine binds to the beta-1 receptor. Some medications for high blood pressure (beta blockers) block these receptors and so reduce the heart rate.
Parasympathetic stimulation originates from the cardioinhibitory region of the brain with impulses traveling via the vagus nerve (cranial nerve X). The vagus nerve sends branches to both the SA and AV nodes, and to portions of both the atria and ventricles. Parasympathetic stimulation releases the neurotransmitter acetylcholine (ACh) at the neuromuscular junction. ACh slows HR by opening chemical- or ligand-gated potassium ion channels to slow the rate of spontaneous depolarization, which extends repolarization and increases the time before the next spontaneous depolarization occurs. Without any nervous stimulation, the SA node would establish a sinus rhythm of approximately 100 bpm. Since resting rates are considerably less than this, it becomes evident that parasympathetic stimulation normally slows HR. This is similar to an individual driving a car with one foot on the brake pedal. To speed up, one need merely remove one's foot from the brake and let the engine increase speed. In the case of the heart, decreasing parasympathetic stimulation decreases the release of ACh, which allows HR to increase up to approximately 100 bpm. Any increases beyond this rate would require sympathetic stimulation.
Input to the cardiovascular centres
The cardiovascular centres receive input from a series of visceral receptors, with impulses traveling through visceral sensory fibers within the vagus and sympathetic nerves via the cardiac plexus. Among these receptors are various proprioceptors, baroreceptors, and chemoreceptors, plus stimuli from the limbic system, which normally enable the precise regulation of heart function via cardiac reflexes. Increased physical activity results in increased rates of firing by various proprioceptors located in muscles, joint capsules, and tendons. The cardiovascular centres monitor these increased rates of firing, suppressing parasympathetic stimulation or increasing sympathetic stimulation as needed in order to increase blood flow.
Similarly, baroreceptors are stretch receptors located in the aortic sinus, the carotid sinuses, the venae cavae, and other locations, including pulmonary vessels and the right side of the heart itself. Rates of firing from the baroreceptors represent blood pressure, level of physical activity, and the relative distribution of blood. The cardiac centers monitor baroreceptor firing to maintain cardiac homeostasis, a mechanism called the baroreceptor reflex. With increased pressure and stretch, the rate of baroreceptor firing increases, and the cardiac centers decrease sympathetic stimulation and increase parasympathetic stimulation. As pressure and stretch decrease, the rate of baroreceptor firing decreases, and the cardiac centers increase sympathetic stimulation and decrease parasympathetic stimulation.
There is a similar reflex, called the atrial reflex or Bainbridge reflex, associated with varying rates of blood flow to the atria. Increased venous return stretches the walls of the atria where specialized baroreceptors are located. However, as the atrial baroreceptors increase their rate of firing and as they stretch due to the increased blood pressure, the cardiac center responds by increasing sympathetic stimulation and inhibiting parasympathetic stimulation to increase HR. The opposite is also true.
Increased metabolic byproducts associated with increased activity, such as carbon dioxide, hydrogen ions, and lactic acid, plus falling oxygen levels, are detected by a suite of chemoreceptors innervated by the glossopharyngeal and vagus nerves. These chemoreceptors provide feedback to the cardiovascular centers about the need for increased or decreased blood flow, based on the relative levels of these substances.
The limbic system can also significantly impact HR related to emotional state. During periods of stress, it is not unusual to identify higher than normal HRs, often accompanied by a surge in the stress hormone cortisol. Individuals experiencing extreme anxiety may manifest panic attacks with symptoms that resemble those of heart attacks. These events are typically transient and treatable. Meditation techniques have been developed to ease anxiety and have been shown to lower HR effectively. Doing simple deep and slow breathing exercises with one's eyes closed can also significantly reduce this anxiety and HR.
Factors influencing heart rate
Using a combination of autorhythmicity and innervation, the cardiovascular center is able to provide relatively precise control over the heart rate, but other factors can affect it. These include hormones, notably epinephrine, norepinephrine, and thyroid hormones; levels of various ions, including calcium, potassium, and sodium; body temperature; hypoxia; and pH balance.
Epinephrine and norepinephrine
The catecholamines, epinephrine and norepinephrine, secreted by the adrenal medulla form one component of the extended fight-or-flight mechanism. The other component is sympathetic stimulation. Epinephrine and norepinephrine have similar effects: binding to the beta-1 adrenergic receptors, and opening sodium and calcium ion chemical- or ligand-gated channels. The rate of depolarization is increased by this additional influx of positively charged ions, so the threshold is reached more quickly and the period of repolarization is shortened. However, massive releases of these hormones coupled with sympathetic stimulation may actually lead to arrhythmias. There is no parasympathetic stimulation to the adrenal medulla.
Thyroid hormones
In general, increased levels of the thyroid hormones thyroxine (T4) and triiodothyronine (T3) increase the heart rate; excessive levels can trigger tachycardia. The impact of thyroid hormones is typically of a much longer duration than that of the catecholamines. Triiodothyronine, the physiologically active form, has been shown to directly enter cardiomyocytes and alter activity at the level of the genome. It also impacts the beta-adrenergic response in a manner similar to epinephrine and norepinephrine.
Calcium
Calcium ion levels have a great impact on heart rate and myocardial contractility: increased calcium levels cause an increase in both. Abnormally high levels of calcium ions are termed hypercalcemia, and excessive levels can induce cardiac arrest. Drugs known as calcium channel blockers slow HR by binding to these channels and blocking or slowing the inward movement of calcium ions.
Caffeine and nicotine
Caffeine and nicotine are both stimulants of the nervous system and of the cardiac centres causing an increased heart rate. Caffeine works by increasing the rates of depolarization at the SA node, whereas nicotine stimulates the activity of the sympathetic neurons that deliver impulses to the heart.
Effects of stress
Both surprise and stress induce a physiological response that elevates heart rate substantially. In a study of eight male and female student actors aged 18 to 25, their reaction to an unforeseen occurrence (the stressor) during a performance was observed in terms of heart rate. In the data collected, there was a noticeable trend between the location of the actors (onstage versus offstage) and their elevation in heart rate in response to stress: the actors offstage reacted to the stressor immediately, demonstrated by an elevation in heart rate in the minute the unexpected event occurred, whereas the actors onstage at the time of the stressor reacted over the following five-minute period (demonstrated by their increasingly elevated heart rate). This trend regarding stress and heart rate is supported by previous studies; a negative emotion or stimulus has a prolonged effect on heart rate in individuals who are directly impacted.
Among the actors present onstage, a reduced startle response has been associated with a passive defense, and a diminished initial heart rate response has been predicted to indicate a greater tendency to dissociation. Current evidence suggests that heart rate variability can be used as an accurate measure of psychological stress and may serve as an objective measurement of it.
Factors decreasing heart rate
The heart rate can be slowed by altered sodium and potassium levels, hypoxia, acidosis, alkalosis, and hypothermia. The relationship between electrolytes and HR is complex, but maintaining electrolyte balance is critical to the normal wave of depolarization. Of the two ions, potassium has the greater clinical significance. Initially, both hyponatremia (low sodium levels) and hypernatremia (high sodium levels) may lead to tachycardia. Severe hypernatremia may lead to fibrillation, which may cause cardiac output to cease. Severe hyponatremia leads to both bradycardia and other arrhythmias. Hypokalemia (low potassium levels) also leads to arrhythmias, whereas hyperkalemia (high potassium levels) causes the heart to become weak and flaccid, and ultimately to fail.
Heart muscle relies exclusively on aerobic metabolism for energy. Severe myocardial infarction (commonly called a heart attack) can lead to a decreasing heart rate, since metabolic reactions fueling heart contraction are restricted.
Acidosis is a condition in which excess hydrogen ions are present, and the patient's blood expresses a low pH value. Alkalosis is a condition in which there are too few hydrogen ions, and the patient's blood has an elevated pH. Normal blood pH falls in the range of 7.35–7.45, so a number lower than this range represents acidosis and a higher number represents alkalosis. Enzymes, being the regulators or catalysts of virtually all biochemical reactions, are sensitive to pH and will change shape slightly with values outside their normal range. These variations in pH and accompanying slight physical changes to the active site on the enzyme decrease the rate of formation of the enzyme-substrate complex, subsequently decreasing the rate of many enzymatic reactions, which can have complex effects on HR. Severe changes in pH will lead to denaturation of the enzyme.
The last variable is body temperature. Elevated body temperature is called hyperthermia, and suppressed body temperature is called hypothermia. Slight hyperthermia increases HR and the strength of contraction. Hypothermia slows the rate and strength of heart contractions. This distinct slowing of the heart is one component of the larger diving reflex that diverts blood to essential organs while submerged. If sufficiently chilled, the heart will stop beating, a technique that may be employed during open heart surgery. In this case, the patient's blood is normally diverted to an artificial heart-lung machine to maintain the body's blood supply and gas exchange until the surgery is complete and sinus rhythm can be restored. Excessive hyperthermia and hypothermia will both result in death, as enzymes denature and body systems cease normal function, beginning with the central nervous system.
Physiological control over heart rate
A study shows that bottlenose dolphins can learn – apparently via instrumental conditioning – to rapidly and selectively slow their heart rate while diving in order to conserve oxygen, depending on external signals. In humans, regulating heart rate by methods such as listening to music, meditation, or a vagal maneuver takes longer and lowers the rate to a much smaller extent.
In different circumstances
Heart rate is not a stable value; it increases or decreases in response to the body's needs so as to maintain an equilibrium between the requirement for and delivery of oxygen and nutrients. The normal SA node firing rate is affected by autonomic nervous system activity: sympathetic stimulation increases and parasympathetic stimulation decreases the firing rate.
Resting heart rate
Normal pulse rates at rest, in beats per minute (BPM):
The basal or resting heart rate (HRrest) is defined as the heart rate when a person is awake, in a neutrally temperate environment, and has not been subject to any recent exertion or stimulation, such as stress or surprise. The normal resting heart rate is based on the at-rest firing rate of the heart's sinoatrial node, where the faster pacemaker cells driving the self-generated rhythmic firing and responsible for the heart's autorhythmicity are located.
In one study, 98% of cardiologists surveyed suggested that 50 to 90 beats per minute is a more appropriate desirable target range than 60 to 100. The available evidence indicates that the normal range for resting heart rate is 50–90 beats per minute (bpm). In a study of over 35,000 American men and women over age 40 during the 1999–2008 period, the average was 71 bpm for men and 73 bpm for women.
Resting heart rate is correlated with mortality. In the Copenhagen City Heart Study, a heart rate of 65 bpm rather than 80 bpm was associated with 4.6 years longer life expectancy in men and 3.6 years in women. Other studies have shown that all-cause mortality increases by a hazard ratio of 1.22 when heart rate exceeds 90 beats per minute. ECGs of 46,129 individuals with low risk for cardiovascular disease revealed that 96% had resting heart rates ranging from 48 to 98 beats per minute. The mortality rate of patients with myocardial infarction increased from 15% to 41% if their admission heart rate was greater than 90 beats per minute. For endurance athletes at the elite level, it is not unusual to have a resting heart rate between 33 and 50 bpm.
Maximum heart rate
The maximum heart rate (HRmax) is the highest number of beats per minute an individual's heart can reach at the point of exhaustion during exercise stress without severe problems; it is strongly related to age.
In general it is loosely estimated as 220 minus one's age.
It generally decreases with age. Since HRmax varies by individual, the most accurate way of measuring any single person's HRmax is via a cardiac stress test. In this test, a person is subjected to controlled physiologic stress (generally by treadmill or bicycle ergometer) while being monitored by an electrocardiogram (ECG). The intensity of exercise is periodically increased until certain changes in heart function are detected on the ECG monitor, at which point the subject is directed to stop. Typical duration of the test ranges from ten to twenty minutes. Adults who are beginning a new exercise regimen are often advised to perform this test only in the presence of medical staff due to risks associated with high heart rates.
The theoretical maximum heart rate of a human is 300 bpm; however, there have been multiple cases where this theoretical upper limit has been exceeded. The fastest human ventricular conduction rate recorded to this day is a conducted tachyarrhythmia with ventricular rate of 600 beats per minute, which is comparable to the heart rate of a mouse.
For general purposes, a number of formulas are used to estimate HRmax. However, these predictive formulas have been criticized as inaccurate because they only produce generalized population-averages and may deviate significantly from the actual value. (See § Limitations.)
Haskell & Fox (1970)
Notwithstanding later research, the most widely cited formula for HRmax is still:
HRmax = 220 − age
Although attributed to various sources, it is widely thought to have been devised in 1970 by Dr. William Haskell and Dr. Samuel Fox. They did not develop this formula from original research, but rather by plotting data from approximately 11 references consisting of published research or unpublished scientific compilations. It gained widespread use through being used by Polar Electro in its heart rate monitors, which Dr. Haskell has "laughed about", as the formula "was never supposed to be an absolute guide to rule people's training."
While this formula is commonly used (and easy to remember and calculate), research has consistently found that it is subject to bias, particularly in older adults. Compared to the age-specific average HRmax, the Haskell and Fox formula overestimates HRmax in young adults, agrees with it at age 40, and underestimates HRmax in older adults. For example, in one study, the average HRmax at age 76 was about 10 bpm higher than the Haskell and Fox equation. Consequently, the formula cannot be recommended for use in exercise physiology and related fields.
Other formulas
HRmax is strongly correlated to age, and most formulas are solely based on this. Studies have been mixed on the effect of gender, with some finding that gender is statistically significant, although small when considering overall equation error, while others find a negligible effect. The inclusion of physical activity status, maximal oxygen uptake, smoking, body mass index, body weight, or resting heart rate did not significantly improve accuracy. Nonlinear models are slightly more accurate predictors of average age-specific HRmax, particularly above 60 years of age, but are harder to apply and provide statistically negligible improvement over linear models. The Wingate formula is the most recent, had the largest data set, and performed best on a fresh data set when compared with other formulas, although it had only a small amount of data for ages 60 and older, so those estimates should be viewed with caution. In addition, most formulas are developed for adults and are not applicable to children and adolescents.
Limitations
Maximum heart rates vary significantly between individuals. Age explains only about half of HRmax variance. For a given age, the standard deviation of HRmax from the age-specific population mean is about 12 bpm, and a 95% interval for the prediction error is about 24 bpm. For example, Dr. Fritz Hagerman observed that the maximum heart rates of men in their 20s on Olympic rowing teams vary from 160 to 220. Such a variation would equate to an age range of −16 to 68 using the Wingate formula. The formulas are quite accurate at predicting the average heart rate of a group of similarly-aged individuals, but relatively poor for a given individual.
Robergs and Landwehr argue that for estimating VO2 max, prediction errors in HRmax need to be less than ±3 bpm. No current formula meets this accuracy. For prescribing exercise training heart rate ranges, the errors in the more accurate formulas may be acceptable, but it remains likely that, for a significant fraction of the population, current equations used to estimate HRmax are not accurate enough. Froelicher and Myers describe maximum heart rate formulas as "largely useless". Measurement via a maximal test, which can be accurate to within ±2 bpm, is preferable whenever possible.
Heart rate reserve
Heart rate reserve (HRreserve) is the difference between a person's measured or predicted maximum heart rate and resting heart rate. Some methods of measurement of exercise intensity measure percentage of heart rate reserve. Additionally, as a person increases their cardiovascular fitness, their HRrest will drop, and the heart rate reserve will increase. Percentage of HRreserve is statistically indistinguishable from percentage of VO2 reserve.
HRreserve = HRmax − HRrest
This is often used to gauge exercise intensity (first used in 1957 by Karvonen).
Karvonen's study findings have been questioned, due to the following:
The study did not use VO2 data to develop the equation.
Only six subjects were used.
Karvonen incorrectly reported that percentages of HRreserve and VO2 max correspond to each other; newer evidence shows that HRreserve corresponds much better to VO2 reserve, as described above.
Target heart rate
For healthy people, the Target Heart Rate (THR) or Training Heart Rate Range (THRR) is a desired range of heart rate reached during aerobic exercise which enables one's heart and lungs to receive the most benefit from a workout. This theoretical range varies based mostly on age; however, a person's physical condition, sex, and previous training also are used in the calculation.
By percent, Fox–Haskell-based
The THR can be calculated as a range of 65–85% intensity, with intensity defined simply as percentage of HRmax. However, it is crucial to derive an accurate HRmax to ensure these calculations are meaningful.
Example for someone with a HRmax of 180 (age 40, estimating HRmax as 220 − age):
65% Intensity: (220 − 40) × 0.65 = 117 bpm
85% Intensity: (220 − 40) × 0.85 = 153 bpm
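These two lines of arithmetic generalize to a small helper. The following is a minimal Python sketch using the Haskell and Fox estimate described above; the function name and defaults are illustrative choices, not a standard API:

```python
# Target heart rate range as a simple percentage of estimated HRmax,
# using the Haskell & Fox estimate HRmax = 220 - age.
def target_range_pct(age, low=0.65, high=0.85):
    hr_max = 220 - age
    return round(hr_max * low), round(hr_max * high)

print(target_range_pct(40))   # (117, 153), matching the worked example
```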
Karvonen method
The Karvonen method factors in resting heart rate (HRrest) to calculate target heart rate (THR), using a range of 50–85% intensity:
THR = ((HRmax − HRrest) × % intensity) + HRrest
Equivalently,
THR = (HRreserve × % intensity) + HRrest
Example for someone with a HRmax of 180 and a HRrest of 70 (and therefore a HRreserve of 110):
50% Intensity: ((180 − 70) × 0.50) + 70 = 125 bpm
85% Intensity: ((180 − 70) × 0.85) + 70 = 163 bpm
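The same arithmetic as a minimal Python sketch (fractions are truncated toward zero, matching the worked example above; the function name is an illustrative choice):

```python
# Karvonen target heart rate: intensity is applied to the heart rate
# reserve (HRmax - HRrest) and added back onto the resting rate.
def karvonen_thr(hr_max, hr_rest, intensity):
    return int((hr_max - hr_rest) * intensity + hr_rest)

print(karvonen_thr(180, 70, 0.50))   # 125
print(karvonen_thr(180, 70, 0.85))   # 163
```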
Zoladz method
An alternative to the Karvonen method is the Zoladz method, which is used to test an athlete's capabilities at specific heart rates. These are not intended to be used as exercise zones, although they are often used as such. The Zoladz test zones are derived by subtracting values from HRmax:
THR = HRmax − Adjuster ± 5 bpm
Zone 1 Adjuster = 50 bpm
Zone 2 Adjuster = 40 bpm
Zone 3 Adjuster = 30 bpm
Zone 4 Adjuster = 20 bpm
Zone 5 Adjuster = 10 bpm
Example for someone with a HRmax of 180:
Zone 1 (easy exercise): 180 − 50 ± 5 → 125–135 bpm
Zone 4 (tough exercise): 180 − 20 ± 5 → 155–165 bpm
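A minimal Python sketch of the zone arithmetic (the dictionary encoding of the adjusters is an illustrative choice):

```python
# Zoladz zones: each zone is HRmax minus a fixed adjuster, give or
# take 5 bpm.
ADJUSTERS = {1: 50, 2: 40, 3: 30, 4: 20, 5: 10}

def zoladz_zone(hr_max, zone):
    centre = hr_max - ADJUSTERS[zone]
    return centre - 5, centre + 5

print(zoladz_zone(180, 1))   # (125, 135)
print(zoladz_zone(180, 4))   # (155, 165)
```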
Heart rate recovery
Heart rate recovery (HRR) is the reduction in heart rate from its value at peak exercise to its value measured after a cool-down period of fixed duration. A greater reduction in heart rate after exercise during the reference period is associated with a higher level of cardiac fitness.
Heart rates assessed during a treadmill stress test that do not drop by more than 12 bpm one minute after stopping exercise (if a cool-down period follows the exercise) or by more than 18 bpm one minute after stopping exercise (if there is no cool-down period and the subject is placed supine as soon as possible) are associated with an increased risk of death. People with an abnormal HRR, defined as a decrease of 42 beats per minute or less at two minutes post-exercise, had a mortality rate 2.5 times greater than patients with a normal recovery. Another study reported a four-fold increase in mortality in subjects with an abnormal HRR defined as a ≤12 bpm reduction one minute after the cessation of exercise. A study reported that a HRR of ≤22 bpm after two minutes "best identified high-risk patients". They also found that while HRR had significant prognostic value it had no diagnostic value.
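As an illustration only (not a clinical tool), the one-minute thresholds quoted above can be expressed as a small Python check; the function and parameter names are assumptions for the sketch:

```python
# Flag an abnormal one-minute heart rate recovery using the thresholds
# quoted above: 12 bpm with a cool-down period, 18 bpm when supine
# with no cool-down.
def abnormal_hrr(peak_hr, hr_after_1min, cooldown=True):
    drop = peak_hr - hr_after_1min
    threshold = 12 if cooldown else 18
    return drop <= threshold

print(abnormal_hrr(170, 160))   # True: only 10 bpm recovered in a minute
```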
Development
The human heart beats more than 2.8 billion times in an average lifetime.
The heartbeat of a human embryo begins at approximately 21 days after conception, or five weeks after the last normal menstrual period (LMP), which is the date normally used to date pregnancy in the medical community. The electrical depolarizations that trigger cardiac myocytes to contract arise spontaneously within the myocyte itself. The heartbeat is initiated in the pacemaker regions and spreads to the rest of the heart through a conduction pathway. Pacemaker cells develop in the primitive atrium and the sinus venosus to form the sinoatrial node and the atrioventricular node respectively. Conductive cells develop into the bundle of His and carry the depolarization into the lower heart.
The human heart begins beating at a rate near the mother's, about 75–80 beats per minute (bpm). The embryonic heart rate then accelerates linearly for the first month of beating, peaking at 165–185 bpm during the early 7th week (early 9th week after the LMP). This acceleration is approximately 3.3 bpm per day, or about 10 bpm every three days, an increase of 100 bpm in the first month.
After peaking at about 9.2 weeks after the LMP, it decelerates to about 150 bpm (±25 bpm) during the 15th week after the LMP. After the 15th week the deceleration slows, reaching an average rate of about 145 bpm (±25 bpm) at term. The regression formula which describes this acceleration before the embryo reaches 25 mm in crown-rump length or 9.2 LMP weeks is: Age in days = EHR(0.3) + 6, where EHR is the embryonic heart rate in beats per minute.
Clinical significance
Manual measurement
Heart rate is measured by finding the pulse of the heart. This pulse rate can be found at any point on the body where an artery's pulsation is transmitted to the surface by pressing on it with the index and middle fingers; often the artery is compressed against an underlying structure such as bone. The thumb should not be used for measuring another person's heart rate, as its own strong pulse may interfere with the correct perception of the target pulse.
The radial artery is the easiest to use to check the heart rate. However, in emergency situations the most reliable arteries for measuring heart rate are the carotid arteries. This is important mainly in patients with atrial fibrillation, in whom heartbeats are irregular and stroke volume varies substantially from one beat to another. In beats following a shorter diastolic interval, the left ventricle does not fill properly, stroke volume is lower, and the pulse wave is not strong enough to be detected by palpation at a distal artery like the radial artery. It can be detected, however, by Doppler ultrasound.
Possible points for measuring the heart rate are:
The ventral aspect of the wrist on the side of the thumb (radial artery).
The ulnar artery.
The inside of the elbow, or under the biceps muscle (brachial artery).
The groin (femoral artery).
Behind the medial malleolus on the feet (posterior tibial artery).
Middle of dorsum of the foot (dorsalis pedis).
Behind the knee (popliteal artery).
Over the abdomen (abdominal aorta).
The chest (apex of the heart), which can be felt with one's hand or fingers. It is also possible to auscultate the heart using a stethoscope.
In the neck, lateral of the larynx (carotid artery).
The temple (superficial temporal artery).
The lateral edge of the mandible (facial artery).
The side of the head near the ear (posterior auricular artery).
Electronic measurement
A more precise method of determining heart rate involves the use of an electrocardiograph, or ECG (also abbreviated EKG). An ECG generates a pattern based on electrical activity of the heart, which closely follows heart function. Continuous ECG monitoring is routinely done in many clinical settings, especially in critical care medicine. On the ECG, instantaneous heart rate is calculated using the R wave-to-R wave (RR) interval and multiplying/dividing in order to derive heart rate in heartbeats/min. Multiple methods exist:
HR = 1000 · 60/(RR interval in milliseconds)
HR = 60/(RR interval in seconds)
HR = 300/number of "large" squares between successive R waves.
HR = 1,500/number of "small" squares between successive R waves.
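These conversions are straightforward to express in code. A minimal Python sketch follows, assuming standard ECG paper at 25 mm/s, where one large square spans 0.2 s and one small square 0.04 s; the function names are illustrative:

```python
# Equivalent RR-interval conversions for instantaneous heart rate.
def hr_from_rr_ms(rr_ms):      return 60_000 / rr_ms   # RR in milliseconds
def hr_from_rr_s(rr_s):        return 60 / rr_s        # RR in seconds
def hr_from_large_squares(n):  return 300 / n          # 0.2 s per square
def hr_from_small_squares(n):  return 1500 / n         # 0.04 s per square

print(hr_from_rr_ms(800))          # 75.0 bpm
print(hr_from_large_squares(4))    # 75.0 bpm (4 x 0.2 s = 0.8 s)
```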
Heart rate monitors allow measurements to be taken continuously and can be used during exercise when manual measurement would be difficult or impossible (such as when the hands are being used). Various commercial heart rate monitors are also available. Some monitors, used during sport, consist of a chest strap with electrodes. The signal is transmitted to a wrist receiver for display.
Alternative methods of measurement include seismocardiography.
Optical measurements
Pulse oximetry of the finger and laser Doppler imaging of the eye fundus are often used in clinical settings. These techniques can assess the heart rate by measuring the time interval between pulses.
Tachycardia
Tachycardia is a resting heart rate of more than 100 beats per minute. This threshold can vary, as smaller people and children have faster heart rates than average adults.
Physiological conditions where tachycardia occurs:
Pregnancy
Emotional conditions such as anxiety or stress.
Exercise
Pathological conditions where tachycardia occurs:
Sepsis
Fever
Anemia
Hypoxia
Hyperthyroidism
Hypersecretion of catecholamines
Cardiomyopathy
Valvular heart diseases
Acute Radiation Syndrome
Dehydration
Metabolic myopathies (At rest, tachycardia is commonly seen in fatty acid oxidation disorders. An inappropriate rapid heart rate response to exercise is seen in muscle glycogenoses and mitochondrial myopathies, where the tachycardia is faster than would be expected during exercise).
Bradycardia
Bradycardia was defined as a heart rate less than 60 beats per minute when textbooks asserted that the normal range for heart rates was 60–100 bpm. The normal range has since been revised in textbooks to 50–90 bpm for a human at total rest. Setting a lower threshold for bradycardia prevents misclassification of fit individuals as having a pathologic heart rate. The normal heart rate number can vary as children and adolescents tend to have faster heart rates than average adults. Bradycardia may be associated with medical conditions such as hypothyroidism, heart disease, or inflammatory disease. At rest, although tachycardia is more commonly seen in fatty acid oxidation disorders, more rarely acute bradycardia can occur.
Trained athletes tend to have slow resting heart rates, and resting bradycardia in athletes should not be considered abnormal if the individual has no symptoms associated with it. For example, Miguel Indurain, a Spanish cyclist and five time Tour de France winner, had a resting heart rate of 28 beats per minute, one of the lowest ever recorded in a healthy human. Daniel Green achieved the world record for the slowest heartbeat in a healthy human with a heart rate of just 26 bpm in 2014.
Arrhythmia
Arrhythmias are abnormalities of the heart rate and rhythm (sometimes felt as palpitations). They can be divided into two broad categories: fast and slow heart rates. Some cause few or minimal symptoms. Others produce more serious symptoms of lightheadedness, dizziness and fainting.
Hypertension
Elevated heart rate is a powerful predictor of morbidity and mortality in patients with hypertension. Atherosclerosis and dysautonomia are major contributors to the pathogenesis.
Correlation with cardiovascular mortality risk
A number of investigations indicate that faster resting heart rate has emerged as a new risk factor for mortality in homeothermic mammals, particularly cardiovascular mortality in human beings. High heart rate is associated with endothelial dysfunction and increased atheromatous plaque formation leading to atherosclerosis. Faster heart rate may accompany increased production of inflammatory molecules and increased production of reactive oxygen species in the cardiovascular system, in addition to increased mechanical stress on the heart. There is a correlation between increased resting rate and cardiovascular risk. This is not seen as "using up an allotment of heartbeats" but rather as an increased risk to the system from the increased rate.
An Australian-led international study of patients with cardiovascular disease has shown that heart rate is a key indicator for the risk of heart attack. The study, published in The Lancet (September 2008), studied 11,000 people across 33 countries who were being treated for heart problems. Those patients whose heart rate was above 70 beats per minute had a significantly higher incidence of heart attacks, hospital admissions, and need for surgery. A heart rate above 70 bpm was associated with about a 46 percent increase in hospitalizations for non-fatal or fatal heart attack.
Other studies have shown that a high resting heart rate is associated with an increase in cardiovascular and all-cause mortality in the general population and in patients with chronic diseases. A faster resting heart rate is associated with shorter life expectancy and is considered a strong risk factor for heart disease and heart failure, independent of level of physical fitness. Specifically, a resting heart rate above 65 beats per minute has been shown to have a strong independent effect on premature mortality; every 10 beats per minute increase in resting heart rate has been shown to be associated with a 10–20% increase in risk of death. In one study, men with no evidence of heart disease and a resting heart rate of more than 90 beats per minute had a five times higher risk of sudden cardiac death. Similarly, another study found that men with resting heart rates of over 90 beats per minute had an almost two-fold increase in risk for cardiovascular disease mortality; in women it was associated with a three-fold increase. In patients having heart rates of 70 beats/minute or above, each additional beat/minute was associated with increased rate of cardiovascular death and heart failure hospitalization.
Given these data, heart rate should be considered in the assessment of cardiovascular risk, even in apparently healthy individuals. Heart rate has many advantages as a clinical parameter: It is inexpensive and quick to measure and is easily understandable. Although the accepted limits of heart rate are between 60 and 100 beats per minute, this was based for convenience on the scale of the squares on electrocardiogram paper; a better definition of normal sinus heart rate may be between 50 and 90 beats per minute.
Standard textbooks of physiology and medicine mention that heart rate (HR) is readily calculated from the ECG as follows: HR = 1000*60/RR interval in milliseconds, HR = 60/RR interval in seconds, or HR = 300/number of large squares between successive R waves. In each case, the authors are actually referring to instantaneous HR, which is the number of times the heart would beat if successive RR intervals were constant.
Lifestyle and pharmacological regimens may be beneficial to those with high resting heart rates. Exercise is one possible measure to take when an individual's heart rate is higher than 80 beats per minute. Diet has also been found to be beneficial in lowering resting heart rate: In studies of resting heart rate and risk of death and cardiac complications in patients with type 2 diabetes, legumes were found to lower resting heart rate. This is thought to occur because in addition to the direct beneficial effects of legumes, they also displace animal proteins in the diet, which are higher in saturated fat and cholesterol. Another nutrient is omega-3 long chain polyunsaturated fatty acids (omega-3 fatty acid or LC-PUFA). In a meta-analysis with a total of 51 randomized controlled trials (RCTs) involving 3,000 participants, the supplement mildly but significantly reduced heart rate (−2.23 bpm; 95% CI: −3.07, −1.40 bpm). When docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA) were compared, a modest heart rate reduction was observed in trials that supplemented with DHA (−2.47 bpm; 95% CI: −3.47, −1.46 bpm), but not in those that received EPA.
A very slow heart rate (bradycardia) may be associated with heart block. It may also arise from autonomic nervous system impairment.
See also
Heart rate monitor
Cardiac cycle
Electrocardiography
Sinus rhythm
Second wind (heart rate is measured during 12 Minute Walk Test)
Bainbridge reflex
Notes
References
Bibliography
External links
Online Heart Beats Per Minute Calculator – tap along with your heart rate
An application (open-source) for contactless real time heart rate measurements by means of an ordinary web cam
Cardiovascular physiology
Medical signs
Mathematics in medicine
Temporal rates | Heart rate | Physics,Mathematics | 8,476 |
60,110,175 | https://en.wikipedia.org/wiki/2%2C3-Dichlorobutadiene | 2,3-Dichlorobutadiene is a chlorinated derivative of butadiene. This colorless liquid is prone to polymerization, more so than 2-chlorobutadiene. It is used to produce specialized neoprene rubbers.
It can be prepared by the copper-catalyzed isomerization of dichlorobutynes. Alternatively, it can be obtained by dehydrochlorination of 2,3,4-trichloro-1-butene:
CH2=C(Cl)CH(Cl)CH2Cl + NaOH → CH2=C(Cl)C(Cl)=CH2 + NaCl + H2O
2,3-Dichlorobutadiene is a precursor to 2,3-diarylbutadienes.
References
Conjugated dienes
Monomers
Organochlorides | 2,3-Dichlorobutadiene | Chemistry,Materials_science | 181 |
3,850,023 | https://en.wikipedia.org/wiki/Boyce%E2%80%93Codd%20normal%20form | Boyce–Codd normal form (BCNF or 3.5NF) is a normal form used in database normalization. It is a slightly stricter version of the third normal form (3NF). A schema in BCNF is free of all redundancy that can be detected from functional dependencies.
History
Edgar F. Codd released his original article "A Relational Model of Data for Large Shared Data Banks" in June 1970. This was the first time the notion of a relational database was published. All work after this, including the Boyce–Codd normal form method, was based on this relational model.
The Boyce–Codd normal form was first described by Ian Heath in 1971, and has also been called Heath normal form by Chris Date.
BCNF was formally developed in 1974 by Raymond F. Boyce and Edgar F. Codd to address certain types of anomalies not dealt with by 3NF as originally defined.
As mentioned, Chris Date has pointed out that a definition of what we now know as BCNF appeared in a paper by Ian Heath in 1971. Date writes:
Since that definition predated Boyce and Codd's own definition by some three years, it seems to me that BCNF ought by rights to be called Heath normal form. But it isn't.
Definition
If a relational schema is in BCNF, then all redundancy based on functional dependency has been removed, although other types of redundancy may still exist. A relational schema R is in Boyce–Codd normal form if and only if for every one of its functional dependencies X → Y, at least one of the following conditions hold:
X → Y is a trivial functional dependency (Y ⊆ X),
X is a superkey for schema R.
If a relational schema is in BCNF, then it is automatically also in 3NF because BCNF is a stricter form of 3NF. While all BCNF relations satisfy the conditions for 3NF, not all 3NF relations satisfy the stricter requirements of BCNF, which eliminates all redundancy caused by functional dependencies.
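To make the definition concrete, here is a minimal Python sketch of a BCNF check. The relation encoding (attribute sets and (lhs, rhs) pairs of frozensets) and the helper names are illustrative assumptions, not a standard API; it tests each dependency X → Y by computing the attribute closure X+ and asking whether X is a superkey:

```python
# Minimal BCNF check over a relation heading and functional dependencies.
def closure(attrs, fds):
    """Attribute closure X+ under a set of functional dependencies."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def is_bcnf(relation, fds):
    for lhs, rhs in fds:
        if rhs <= lhs:                      # trivial dependency: Y subset of X
            continue
        if closure(lhs, fds) >= relation:   # X is a superkey
            continue
        return False                        # X -> Y violates BCNF
    return True

# The tennis-booking example below: Rate type -> Court violates BCNF
# because {Rate type}+ does not cover the whole heading.
R = {"Court", "Start time", "End time", "Rate type"}
fds = [
    (frozenset({"Court", "Start time"}), frozenset({"End time", "Rate type"})),
    (frozenset({"Rate type"}), frozenset({"Court"})),
]
print(is_bcnf(R, fds))   # False
```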
Relation with 3NF tables
Only in rare cases does a 3NF table not meet the requirements of BCNF. A 3NF table that does not have multiple overlapping candidate keys is guaranteed to be in BCNF. Depending on what its functional dependencies are, a 3NF table with two or more overlapping candidate keys may or may not be in BCNF.
An example of a 3NF table that does not meet BCNF is:
Each row in the table represents a court booking at a tennis club. That club has one hard court (Court 1) and one grass court (Court 2)
A booking is defined by its Court and the period for which the Court is reserved
Additionally, each booking has a Rate Type associated with it. There are four distinct rate types:
SAVER, for Court 1 bookings made by members
STANDARD, for Court 1 bookings made by non-members
PREMIUM-A, for Court 2 bookings made by members
PREMIUM-B, for Court 2 bookings made by non-members
The table's superkeys are:
S1 = {Court, Start time}
S2 = {Court, End time}
S3 = {Rate type, Start time}
S4 = {Rate type, End time}
S5 = {Court, Start time, End time}
S6 = {Rate type, Start time, End time}
S7 = {Court, Rate type, Start time}
S8 = {Court, Rate type, End time}
ST = {Court, Rate type, Start time, End time}, the trivial superkey
Note that even though Start time and End time happen to have no duplicate values in the table above, on other days two different bookings on Court 1 and Court 2 could start at the same time or end at the same time. This is why {Start time} and {End time} alone cannot be considered the table's superkeys.
The table's candidate keys are:
S1 = {Court, Start time}
S2 = {Court, End time}
S3 = {Rate type, Start time}
S4 = {Rate type, End time}
Only S1, S2, S3 and S4 are candidate keys (that is, minimal superkeys for that relation) because e.g. S1 ⊂ S5, so S5 cannot be a candidate key.
Recall that 2NF prohibits partial functional dependencies of non-prime attributes (i.e., attributes that do not occur in any candidate key) on candidate keys, and that 3NF prohibits transitive functional dependencies of non-prime attributes on candidate keys.
In the Today's court bookings table, there are no non-prime attributes: that is, all attributes belong to some candidate key. Therefore, the table adheres to both 2NF and 3NF.
The table does not adhere to BCNF. This is because of the dependency Rate type → Court in which the determining attribute is Rate type, on which Court depends. Note that (1) {Rate type} is not a superkey and (2) {Court} is not a subset of {Rate type}.
Dependency Rate type → Court is respected, since a Rate type should only ever apply to a single Court.
The design can be amended so that it meets BCNF:
The candidate keys for the Rate types table are {Rate type} and {Court, Member flag}; the candidate keys for the Today's bookings table are {Court, Start time} and {Court, End time}. Both tables are in BCNF. When {Rate type} is a key in the Rate types table, having one Rate type associated with two different Courts is impossible, so by using {Rate type} as a key in the Rate types table, the anomaly affecting the original table has been eliminated.
Achievability of BCNF
In some cases, a non-BCNF table cannot be decomposed into tables that satisfy BCNF and preserve the dependencies that held in the original table. Beeri and Bernstein showed in 1979 that, for example, a set of functional dependencies {AB → C, C → B} cannot be represented by a BCNF schema.
Consider the following non-BCNF table whose functional dependencies follow the {AB → C, C → B} pattern:
For each Person / Shop type combination, the table tells us which shop of this type is geographically nearest to the person's home. We assume for simplicity that a single shop cannot be of more than one type.
The candidate keys of the table are:
{Person, Shop type},
{Person, Nearest shop}.
Because all three attributes are prime attributes (i.e. belong to candidate keys), the table is in 3NF. The table is not in BCNF, however, as the Shop type attribute is functionally dependent on a non-superkey: Nearest shop.
The violation of BCNF means that the table is subject to anomalies. For example, Eagle Eye might have its Shop type changed to "Optometrist" on its "Fuller" record while retaining the Shop type "Optician" on its "Davidson" record. This would imply contradictory answers to the question: "What is Eagle Eye's Shop Type?" Holding each shop's Shop type only once would seem preferable, as doing so would prevent such anomalies from occurring:
In this revised design, the "Shop near person" table has a candidate key of {Person, Shop}, and the "Shop" table has a candidate key of {Shop}. Unfortunately, although this design adheres to BCNF, it is unacceptable on different grounds: it allows us to record multiple shops of the same type against the same person. In other words, its candidate keys do not guarantee that the functional dependency {Person, Shop type} → {Shop} will be respected.
A design that eliminates all of these anomalies (but does not conform to BCNF) is possible. This design introduces a new normal form, known as Elementary Key Normal Form. This design consists of the original "Nearest shops" table supplemented by the "Shop" table described above. The table structure generated by Bernstein's schema generation algorithm is actually EKNF, although that enhancement to 3NF had not been recognized at the time the algorithm was designed:
If a referential integrity constraint is defined to the effect that {Shop type, Nearest shop} from the first table must refer to a {Shop type, Shop} from the second table, then the data anomalies described previously are prevented.
Intractability
It is NP-complete, given a database schema in third normal form, to determine whether it violates Boyce–Codd normal form.
Decomposition into BCNF
If a relation R is not in BCNF due to a functional dependency X→Y, then R can be decomposed into BCNF by replacing that relation with two sub-relations:
One with the attributes X+ (the closure of X under the functional dependencies),
and another with the attributes (R − X+) ∪ X. Note that R represents all the attributes in the original relation.
Check whether both sub-relations are in BCNF and repeat the process recursively with any sub-relation which is not in BCNF.
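A minimal Python sketch of this recursive decomposition is given below. The naive projection of dependencies onto sub-relations via subset closures is exponential and chosen for clarity, not efficiency; function names and the example encoding are illustrative:

```python
from itertools import chain, combinations

def closure(attrs, fds):
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return frozenset(result)

def project_fds(attrs, fds):
    """Project the dependencies onto a sub-relation via subset closures."""
    subsets = chain.from_iterable(combinations(sorted(attrs), r)
                                  for r in range(1, len(attrs) + 1))
    projected = []
    for lhs in map(frozenset, subsets):
        rhs = (closure(lhs, fds) & attrs) - lhs
        if rhs:
            projected.append((lhs, rhs))
    return projected

def bcnf_decompose(relation, fds):
    relation = frozenset(relation)
    for lhs, rhs in fds:
        if rhs <= lhs or closure(lhs, fds) >= relation:
            continue  # trivial, or lhs is a superkey: no violation here
        x_plus = closure(lhs, fds) & relation
        r1, r2 = x_plus, (relation - x_plus) | lhs   # X+ and (R - X+) u X
        return (bcnf_decompose(r1, project_fds(r1, fds)) +
                bcnf_decompose(r2, project_fds(r2, fds)))
    return [relation]

# The bookings example splits on Rate type -> Court, yielding
# {Rate type, Court} and {Rate type, Start time, End time}.
R = {"Court", "Start time", "End time", "Rate type"}
fds = [
    (frozenset({"Court", "Start time"}), frozenset({"End time", "Rate type"})),
    (frozenset({"Rate type"}), frozenset({"Court"})),
]
for r in bcnf_decompose(R, fds):
    print(sorted(r))
```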
References
Bibliography
Date, C. J. (1999). An Introduction to Database Systems (8th ed.). Addison-Wesley Longman.
External links
Rules Of Data Normalization
Advanced Normalization by ITS, University of Texas.
BCNF
NP-complete problems
77,655,555 | https://en.wikipedia.org/wiki/Paleobiota%20of%20the%20Green%20River%20Formation | The Green River Formation is a geological formation located in the Intermountain West of the United States, in the states of Colorado, Utah, and Wyoming. It comprises sediments deposited during the Early Eocene in a series of large freshwater lakes: Lake Gosiute, Lake Uinta, and Fossil Lake (the last containing Fossil Butte National Monument). It preserves a high diversity of freshwater fish, birds, reptiles, and mammals, with some sections of the formation (including Fossil Lake and the Parachute Creek member of Lake Uinta) qualifying as Konservat-Lagerstätten due to their extremely well-preserved fossils.
Cartilaginous fish
Bony fish
Primarily based on Grande (2001), with changes where necessary:
Acipenseriformes
Lepisosteiformes
Amiiformes
Hiodontiformes
Osteoglossiformes
Ellimmichthyiformes
Clupeiformes
Gonorynchiformes
Cypriniformes
Siluriformes
Salmoniformes
Percopsiformes
Carangiformes
Acanthuriformes
Incertae sedis
Amphibians
Frogs
Salamanders
Reptiles
Squamates
Crocodilians
Turtles
Birds
Lithornithiformes
Anseriformes
Galliformes
Coliiformes
Leptosomiformes
Coraciiformes
Piciformes
Strisores
Musophagiformes
Mirandornithes
Suliformes
Pelecaniformes
Charadriiformes
Gruiformes
Eufalconimorphae
Neoaves incertae sedis
Two other genera, "Eoeurypyga" (a stem-sunbittern) and "Wyomingcypselus" (an early apodiform) are mentioned only in a 2002 dissertation, and are presently nomina nuda.
Avian ichnofossils
Mammals
Partially based on Grande (1984). Aside from the few well-preserved mammals found in Fossil Lake, a majority of Green River mammals are based on isolated bones and teeth:
Metatheria
Cimolesta
Chiroptera
Eulipotyphla
Pan-Carnivora
Paraxonia
Pan-Perissodactyla
Apatotheria
Rodentia
Primatomorpha
Arthropoda
Arachnida
Acariformes
Araneae
Parasitiformes
Scorpiones
Crustacea
Branchiopoda
Decapods
Ostracoda
Myriapoda
Insecta
Coleoptera
Adephaga
Elateriformia
Polyphaga - Bostrichiformia
Polyphaga - Cucujiformia
Polyphaga - Scarabaeiformia
Polyphaga - Staphyliniformia
Dictyoptera
Diptera
Hemiptera
Hymenoptera
Lepidoptera
Mantodea
Megaloptera
Neuroptera
Raphidioptera
Odonata
Primarily based on Bechly et al. (2020):
Orthoptera
Psocodea
Strepsiptera
Thysanoptera
Trichoptera
Mollusks
Based on Grande (1984):
Bivalvia
Gastropoda
Fungi
Unicellular microbiota
Cyanobacteria
Algae
Charophytes
Euglenozoans
Amoebozoans
Cercozoans
Plants
Liverworts
Lycophytes
Ferns
Cycadalean palynomorphs
Conifers
Conifer palynomorphs
Gnetalean palynomorphs
Magnoliids
Magnoliid palynomorphs
Monocots
Monocot palynomorphs
Ceratophyllales
Eudicots
Basal Eudicots
Superasterids
Superrosids
Eudicot palynomorphs
Angiosperms of uncertain affiliation
Plants of uncertain affiliation
References
Green River Formation
Green River
Eocene United States
Fossils of the United States | Paleobiota of the Green River Formation | Biology | 768 |
14,359 | https://en.wikipedia.org/wiki/Huygens%E2%80%93Fresnel%20principle | The Huygens–Fresnel principle (named after Dutch physicist Christiaan Huygens and French physicist Augustin-Jean Fresnel) states that every point on a wavefront is itself the source of spherical wavelets, and the secondary wavelets emanating from different points mutually interfere. The sum of these spherical wavelets forms a new wavefront. As such, the Huygens-Fresnel principle is a method of analysis applied to problems of luminous wave propagation both in the far-field limit and in near-field diffraction as well as reflection.
History
In 1678, Huygens proposed that every point reached by a luminous disturbance becomes a source of a spherical wave. The sum of these secondary waves determines the form of the wave at any subsequent time; the overall procedure is referred to as Huygens' construction. He assumed that the secondary waves travelled only in the "forward" direction, and it is not explained in the theory why this is the case. He was able to provide a qualitative explanation of linear and spherical wave propagation, and to derive the laws of reflection and refraction using this principle, but could not explain the deviations from rectilinear propagation that occur when light encounters edges, apertures and screens, commonly known as diffraction effects.
In 1818, Fresnel showed that Huygens's principle, together with his own principle of interference, could explain both the rectilinear propagation of light and also diffraction effects. To obtain agreement with experimental results, he had to include additional arbitrary assumptions about the phase and amplitude of the secondary waves, and also an obliquity factor. These assumptions have no obvious physical foundation, but led to predictions that agreed with many experimental observations, including the Poisson spot.
Poisson was a member of the French Academy, which reviewed Fresnel's work. He used Fresnel's theory to predict that a bright spot ought to appear in the center of the shadow of a small disc, and deduced from this that the theory was incorrect. However, Arago, another member of the committee, performed the experiment and showed that the prediction was correct. This success was important evidence in favor of the wave theory of light over the then predominant corpuscular theory.
In 1882, Gustav Kirchhoff analyzed Fresnel's theory in a rigorous mathematical formulation, as an approximate form of an integral theorem. However, very few rigorous solutions to diffraction problems are known, and most problems in optics are adequately treated using the Huygens–Fresnel principle.
In 1939, Edward Copson extended Huygens' original principle to consider the polarization of light, which requires a vector potential, in contrast to the scalar potential of a simple ocean wave or sound wave.
In antenna theory and engineering, the reformulation of the Huygens–Fresnel principle for radiating current sources is known as surface equivalence principle.
Issues in Huygens–Fresnel theory continue to be of interest. In 1991, David A. B. Miller suggested that treating the source as a dipole (not the monopole assumed by Huygens) will cancel waves propagating in the reverse direction, making Huygens' construction quantitatively correct. In 2021, Forrest L. Anderson showed that treating the wavelets as Dirac delta functions, then summing and differentiating the summation, is sufficient to cancel reverse-propagating waves.
Examples
Refraction
The apparent change in direction of a light ray as it enters a sheet of glass at an angle can be understood by the Huygens construction. Each point on the surface of the glass gives a secondary wavelet. These wavelets propagate at a slower velocity in the glass, making less forward progress than their counterparts in air. When the wavelets are summed, the resulting wavefront propagates at an angle to the direction of the wavefront in air.
In an inhomogeneous medium with a variable index of refraction, different parts of the wavefront propagate at different speeds. Consequently the wavefront bends around in the direction of higher index.
Diffraction
Huygens' principle as a microscopic model
The Huygens–Fresnel principle provides a reasonable basis for understanding and predicting the classical wave propagation of light. However, there are limitations to the principle, namely the same approximations made in deriving Kirchhoff's diffraction formula and the near-field approximations due to Fresnel. These can be summarized in the fact that the wavelength of light must be much smaller than the dimensions of any optical components encountered.
Kirchhoff's diffraction formula provides a rigorous mathematical foundation for diffraction, based on the wave equation. The arbitrary assumptions made by Fresnel to arrive at the Huygens–Fresnel equation emerge automatically from the mathematics in this derivation.
A simple example of the operation of the principle can be seen when an open doorway connects two rooms and a sound is produced in a remote corner of one of them. A person in the other room will hear the sound as if it originated at the doorway. As far as the second room is concerned, the vibrating air in the doorway is the source of the sound.
Mathematical expression of the principle
Consider the case of a point source located at a point P0, vibrating at a frequency f. The disturbance may be described by a complex variable U0 known as the complex amplitude. It produces a spherical wave with wavelength λ and wavenumber k = 2π/λ. Within a constant of proportionality, the complex amplitude of the primary wave at the point Q located at a distance r0 from P0 is:
U(r0) = U0 e^(ikr0) / r0
Note that magnitude decreases in inverse proportion to the distance traveled, and the phase changes as k times the distance traveled.
Using Huygens's theory and the principle of superposition of waves, the complex amplitude at a further point P is found by summing the contribution from each point on the sphere of radius r0. In order to get agreement with experimental results, Fresnel found that the individual contributions from the secondary waves on the sphere had to be multiplied by a constant, −i/λ, and by an additional inclination factor, K(χ). The first assumption means that the secondary waves oscillate a quarter of a cycle out of phase with respect to the primary wave and that the magnitude of the secondary waves is in a ratio of 1:λ to the primary wave. He also assumed that K(χ) had a maximum value when χ = 0, and was equal to zero when χ = π/2, where χ is the angle between the normal of the primary wavefront and the normal of the secondary wavefront. The complex amplitude at P, due to the contribution of secondary waves, is then given by:
U(P) = −(i/λ) U(r0) ∫∫_S (e^(iks)/s) K(χ) dS
where S describes the surface of the sphere, and s is the distance between Q and P.
Fresnel used a zone construction method to find approximate values of K for the different zones, which enabled him to make predictions that were in agreement with experimental results. The integral theorem of Kirchhoff includes the basic idea of Huygens–Fresnel principle. Kirchhoff showed that in many cases, the theorem can be approximated to a simpler form that is equivalent to the formation of Fresnel's formulation.
For an aperture illumination consisting of a single expanding spherical wave, if the radius of the curvature of the wave is sufficiently large, Kirchhoff gave the following expression for K(χ):
K(χ) = (1/2)(1 + cos χ)
K has a maximum value at χ = 0 as in the Huygens–Fresnel principle; however, K is not equal to zero at χ = π/2, but at χ = π.
The above derivation of K(χ) assumed that the diffracting aperture is illuminated by a single spherical wave with a sufficiently large radius of curvature. However, the principle holds for more general illuminations. An arbitrary illumination can be decomposed into a collection of point sources, and the linearity of the wave equation can be invoked to apply the principle to each point source individually. K(χ) can then be generally expressed as:
K(χ) = cos χ
In this case, K satisfies the conditions stated above (maximum value at χ = 0 and zero at χ = π/2).
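Because the integral is simply a superposition of spherical wavelets, it can be evaluated numerically by brute force. The following Python sketch (the wavelength, slit width, and screen geometry are illustrative assumptions, not values from the text) discretizes a single-slit aperture into point sources, applies the −i/λ prefactor and a cos χ obliquity factor, and recovers the familiar single-slit diffraction pattern:

```python
import numpy as np

# Huygens-Fresnel sum for a single slit, sampled at points on a screen.
lam = 500e-9                 # wavelength: 500 nm (green light)
k = 2 * np.pi / lam          # wavenumber
slit_width = 50e-6           # 50 micrometre slit
screen_dist = 1.0            # 1 m from aperture to screen

sources = np.linspace(-slit_width / 2, slit_width / 2, 2000)  # wavelet origins
screen = np.linspace(-0.05, 0.05, 1000)                       # screen positions

intensity = np.empty_like(screen)
for i, x in enumerate(screen):
    s = np.sqrt(screen_dist**2 + (x - sources) ** 2)  # distance Q -> P
    obliquity = screen_dist / s                       # cos(chi)
    U = np.sum((-1j / lam) * np.exp(1j * k * s) / s * obliquity)
    intensity[i] = np.abs(U) ** 2

intensity /= intensity.max()
# The first minimum of the computed pattern falls near
# x = lam * screen_dist / slit_width, the standard single-slit result.
print(f"first minimum near {lam * screen_dist / slit_width * 1e3:.1f} mm")
```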
Generalized Huygens' principle
Many books and references - e.g. (Greiner, 2002) and (Enders, 2009) - refer to the Generalized Huygens' Principle using the definition in (Feynman, 1948).
Feynman defines the generalized principle in the following way:
This clarifies the fact that in this context the generalized principle reflects the linearity of quantum mechanics and the fact that the quantum mechanics equations are first order in time. Finally, only in this case does the superposition principle fully apply, i.e. the wave function at a point P can be expanded as a superposition of waves on a border surface enclosing P. Wave functions can be interpreted in the usual quantum mechanical sense as probability densities where the formalism of Green's functions and propagators applies. What is noteworthy is that this generalized principle is applicable to "matter waves" and not to light waves any more. The phase factor is now clarified as given by the action, and there is no more confusion as to why the phases of the wavelets are different from that of the original wave and modified by the additional Fresnel parameters.
As per Greiner, the generalized principle can be expressed for the wave function ψ in the form:
ψ(x′, t′) = i ∫ G(x′, t′; x, t) ψ(x, t) d³x
where G is the usual Green's function that propagates the wave function ψ in time. This description resembles and generalizes Fresnel's initial formula from the classical model.
Feynman's path integral and the modern photon wave function
Huygens' theory served as a fundamental explanation of the wave nature of light interference and was further developed by Fresnel and Young, but it did not fully resolve all observations, such as the low-intensity double-slit experiment first performed by G. I. Taylor in 1909. A resolution did not come until the early and mid-1900s with the development of quantum theory, particularly the early discussions at the 1927 Brussels Solvay Conference, where Louis de Broglie proposed his hypothesis that the photon is guided by a wave function.
The wave function presents a much different explanation of the observed light and dark bands in a double-slit experiment. In this conception, the photon follows a path which is a probabilistic choice among the many possible paths in the electromagnetic field. These probable paths form the pattern: in dark areas, no photons are landing, and in bright areas, many photons are landing. The set of possible photon paths is consistent with Richard Feynman's path integral theory: the paths are determined by the surroundings (the photon's originating point, the slit, and the screen) and by tracking and summing phases. The wave function is a solution to this geometry. The wave function approach was further supported by additional double-slit experiments in Italy and Japan in the 1970s and 1980s with electrons.
Quantum field theory
Huygens' principle can be seen as a consequence of the homogeneity of space—space is uniform in all locations. Any disturbance created in a sufficiently small region of homogeneous space (or in a homogeneous medium) propagates from that region in all geodesic directions. The waves produced by this disturbance, in turn, create disturbances in other regions, and so on. The superposition of all the waves results in the observed pattern of wave propagation.
Homogeneity of space is fundamental to quantum field theory (QFT) where the wave function of any object propagates along all available unobstructed paths. When integrated along all possible paths, with a phase factor proportional to the action, the interference of the wave-functions correctly predicts observable phenomena. Every point on the wavefront acts as the source of secondary wavelets that spread out in the light cone with the same speed as the wave. The new wavefront is found by constructing the surface tangent to the secondary wavelets.
In other spatial dimensions
In 1900, Jacques Hadamard observed that Huygens' principle is broken when the number of spatial dimensions is even. From this, he developed a set of conjectures that remain an active topic of research. In particular, it has been discovered that Huygens' principle holds on a large class of homogeneous spaces derived from the Coxeter group (so, for example, the Weyl groups of simple Lie algebras).
The traditional statement of Huygens' principle for the D'Alembertian gives rise to the KdV hierarchy; analogously, the Dirac operator gives rise to the AKNS hierarchy.
See also
Fraunhofer diffraction
Kirchhoff's diffraction formula
Green's function
Green's theorem
Green's identities
Near-field diffraction pattern
Double-slit experiment
Knife-edge effect
Fermat's principle
Fourier optics
Surface equivalence principle
Wave field synthesis
Kirchhoff integral theorem
References
Further reading
Stratton, Julius Adams: Electromagnetic Theory, McGraw-Hill, 1941. (Reissued by Wiley – IEEE Press.)
B.B. Baker and E.T. Copson, The Mathematical Theory of Huygens' Principle, Oxford, 1939, 1950; AMS Chelsea, 1987.
Wave mechanics
Diffraction
Christiaan Huygens | Huygens–Fresnel principle | Physics,Chemistry,Materials_science | 2,738 |
16,782,315 | https://en.wikipedia.org/wiki/HD%20183263%20b | HD 183263 b is an extrasolar planet orbiting the star HD 183263. The planet has a minimum mass of 3.6 times that of Jupiter and takes 625 days to orbit the star. It was discovered on January 25, 2005, using multiple Doppler measurements of five nearby FGK main-sequence stars and subgiants obtained over the preceding 4–6 years at the Keck Observatory on Mauna Kea, Hawaii. These stars, namely HD 183263, HD 117207, HD 188015, HD 45350, and HD 99492, all exhibit coherent variations in their Doppler shifts consistent with a planet in Keplerian motion, and the results were published in a paper by Geoffrey Marcy et al. Photometric observations were acquired for four of the five host stars with an automatic telescope at Fairborn Observatory. The lack of brightness variations in phase with the radial velocities supports planetary-reflex motion as the cause of the velocity variations. An additional planet in the system was discovered later.
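To see why such a planet is detectable by the Doppler method, the sketch below estimates the star's radial-velocity semi-amplitude for a circular orbit from the quoted minimum mass and period. The stellar mass of 1.2 solar masses is an assumed illustrative value, not a figure from the discovery paper:

```python
import math

G = 6.674e-11          # gravitational constant (SI units)
M_SUN = 1.989e30       # kg
M_JUP = 1.898e27       # kg

def rv_semi_amplitude(m_p_sini_mjup, period_days, m_star_msun):
    """Radial-velocity semi-amplitude K (m/s) for a circular orbit,
    assuming the planet's mass is small compared to the star's."""
    p = period_days * 86400.0
    m_p = m_p_sini_mjup * M_JUP
    m_star = m_star_msun * M_SUN
    return (2 * math.pi * G / p) ** (1 / 3) * m_p / m_star ** (2 / 3)

# HD 183263 b: m sin i = 3.6 M_Jup, P = 625 d; stellar mass assumed
print(round(rv_semi_amplitude(3.6, 625, 1.2), 1), "m/s")  # roughly 76 m/s
```

A reflex velocity of tens of metres per second is comfortably within the precision of the Keck radial-velocity programme.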
See also
HD 183263 c
References
External links
Aquila (constellation)
Exoplanets discovered in 2005
Giant planets
Exoplanets detected by radial velocity | HD 183263 b | Astronomy | 248 |
12,274,025 | https://en.wikipedia.org/wiki/Available%20expression | In the field of compiler optimizations, available expressions is an analysis algorithm that determines, for each point in the program, the set of expressions that need not be recomputed. Those expressions are said to be available at such a point. To be available at a program point, the operands of the expression must not be modified on any path from the occurrence of that expression to the program point.
The analysis is an example of a forward data flow analysis problem. A set of available expressions is maintained. Each statement is analysed to see whether it changes the operands of one or more available expressions. This yields sets of available expressions at the end of each basic block, known as the outset in data flow analysis terms. An expression is available at the start of a basic block if it is available at the end of each of the basic block's predecessors. This gives a set of equations in terms of available sets, which can be solved by an iterative algorithm.
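A minimal sketch of that iterative algorithm follows. The data structures are illustrative (a production compiler would derive gen/kill sets from its intermediate representation): each block's out-set is gen ∪ (in − kill), a block's in-set is the intersection of its predecessors' out-sets, and the loop runs to a fixed point:

```python
def available_expressions(blocks, preds, gen, kill, all_exprs):
    """Iterative forward data-flow analysis for available expressions.

    blocks:    list of basic-block names, entry block first
    preds:     dict mapping block -> list of predecessor blocks
    gen/kill:  dicts mapping block -> set of expressions generated/killed
    all_exprs: the universe of expressions in the program
    """
    entry = blocks[0]
    # Initialise optimistically: everything available except at entry.
    out_set = {b: set(all_exprs) for b in blocks}
    out_set[entry] = set(gen[entry])

    changed = True
    while changed:
        changed = False
        for b in blocks[1:]:
            # Available only if it reaches the block along every path,
            # hence the intersection over all predecessors.
            in_set = set(all_exprs)
            for p in preds[b]:
                in_set &= out_set[p]
            new_out = gen[b] | (in_set - kill[b])
            if new_out != out_set[b]:
                out_set[b] = new_out
                changed = True
    return out_set

# Tiny example: B1 computes a+b; B2 can reuse it; B3 kills it.
blocks = ["B1", "B2", "B3"]
preds = {"B1": [], "B2": ["B1"], "B3": ["B2"]}
gen = {"B1": {"a+b"}, "B2": set(), "B3": set()}
kill = {"B1": set(), "B2": set(), "B3": {"a+b"}}  # B3 assigns to a or b
print(available_expressions(blocks, preds, gen, kill, {"a+b"}))
# {'B1': {'a+b'}, 'B2': {'a+b'}, 'B3': set()}
```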
Available expression analysis is used to do global common subexpression elimination (CSE). If an expression is available at a point, there is no need to re-evaluate it.
References
Aho, Sethi & Ullman: Compilers – Principles, Techniques, and Tools Addison-Wesley Publishing Company 1986
Compiler optimizations
Data-flow analysis | Available expression | Technology | 266 |
61,098,866 | https://en.wikipedia.org/wiki/Circular%20measure | A circular measure was used in comparing circular cross-sections, e.g., of wires. A circular unit of area is the area of the circle whose diameter is one linear unit.
For example, 1 circular mil is equivalent to 0.7854 square mil in area, and 1 circular millimeter = 1550 circular mils = 0.7854 square millimeter. Here, a circle of diameter d linear units has an area of d² circular units, equal to (π/4)d² ≈ 0.7854 d² square units.
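The conversions can be expressed in a couple of helper functions (a small illustrative sketch; the function names are made up):

```python
import math

SQ_PER_CIRC = math.pi / 4   # one circular unit = pi/4 square units

def circular_area(diameter):
    """Area, in circular units, of a circle with the given diameter."""
    return diameter ** 2

def circular_to_square(circular_units):
    return circular_units * SQ_PER_CIRC

print(circular_to_square(1))   # 0.7853981... square mils per circular mil
print(circular_area(39.37))    # 1 mm = 39.37 mil -> ~1550 circular mils
```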
References
Units of measurement | Circular measure | Mathematics | 84 |
21,844,112 | https://en.wikipedia.org/wiki/Impression%20formation | Impression formation in social psychology refers to the processes by which different pieces of knowledge about another are combined into a global or summary impression. Social psychologist Solomon Asch is credited with the seminal research on impression formation and conducted research on how individuals integrate information about personality traits. Two major models have been proposed to explain how this process of integration takes place. The configural model suggests that people form cohesive impressions by integrating traits into a unified whole, adjusting individual traits to fit an overall context rather than evaluating each trait independently. According to this model, some traits are more schematic and serve as central traits to shape the overall impression. As an individual seeks to form a coherent and meaningful impression of another individual, previous impressions significantly influence the interpretation of subsequent information. In contrast, the algebraic model takes a more additive approach, forming impressions by separately evaluating each trait and then combining these evaluations into an overall summary. A related area to impression formation is the study of person perception, making causal attributions, and then adjusting those inferences based on the information available.
Methods
Impression formation has traditionally been studied using three methods pioneered by Asch: free response, free association, and a check-list form. In addition, a fourth method, based on a Likert scale with anchors such as “very favorable” and “very unfavorable”, has also been used in recent research. A combination of some or all of these techniques is often employed to produce the most accurate assessment of impression formation. Beyond accuracy, the thin slices experiment examined the correlation between first impressions based on brief behavior exposures and more sustained judgments.
Free response
Free response is an experimental method frequently used in impression formation research. The participant (or perceiver) is presented with a stimulus (usually a short vignette or a list of personality descriptors such as assured, talkative, cold, etc.) and then instructed to briefly sketch his or her impressions of the type of person described. This is a useful technique for gathering detailed and concrete evidence on the nature of the impression formed. However, the difficulty of accurately coding responses often necessitates the use of additional quantitative measures.
Free association
Free association is another commonly used experimental method in which the perceiver creates a list of personality adjectives that immediately come to mind when asked to think about the type of person described by a particular set of descriptor adjectives.
Check-list
A check-list consisting of assorted personality descriptors is often used to supplement free response or free association data and to compare group trends. After presenting character-qualities of an imagined individual, perceivers are instructed to select the character adjectives from a preset list that best describe the resulting impression. While this produces an easily quantifiable assessment of an impression, it forces participants' answers into a limited, and often extreme, response set. However, when used in conjunction with the above-mentioned techniques, check-list data provides useful information about the character of impressions.
Likert-type rating scales
With Likert scales, perceivers are responding to a presentation of discrete personality characteristics. Common presentation methods include lists of adjectives, photos or videos depicting a scene, or written scenarios. For example, a participant might be asked to answer the question "Would an honest (trait) person ever search for the owner of a lost package (behavior)?" by answering on a 5-point scale ranging from 1 "very unlikely" to 5 "very likely."
Thin slices experiments
In the thin slices experiment, participants are asked to watch brief video clips depicting the target’s behaviors, each lasting a few seconds. They need to then rate the target on various dimensions and provide an overall rating based on the impression from the clips.
Specific results
Primacy-recency effect
Asch stressed the important influence of an individual's initial impressions of a person's personality traits on the interpretation of all subsequent impressions. Asch argued that these early impressions often shaped or colored an individual's perception of other trait-related details. A considerable body of research exists supporting this hypothesis. For example, when individuals were asked to rate their impression of another person after being presented a list of words progressing from either low favorability to high favorability (L - H) or from high favorability to low favorability (H - L), strong primacy effects were found. In other words, impressions formed from initial descriptor adjectives persisted over time and influenced global impressions. In general, primacy can have three main effects: initial trait-information can be integrated into an individual's global impression of a person in a process of assimilation effects, it can lead to a durable impression against which other information is compared in a process of anchoring, and it can cause people to actively change their perception of others in a process of correction.
Valence
Information inconsistent with a person's global impression of another individual is especially prominent in memory. The process of assimilation can lead to causal attributions of personality as this inconsistent information is integrated into the whole. This effect is especially influential when the behavior is perceived as negative. Consistent with negativity bias, negative behaviors are seen as more indicative of an individual's behavior in situations involving moral issues. Extreme negative behavior is also considered more predictive of personality traits than less extreme behavior. However, this reasoning can be flawed, as it can trigger a halo effect, where the influence of a single trait is overestimated, overshadowing other factors.
Central Traits
The emotionality of certain personality traits can influence how subsequent traits are interpreted and, ultimately, the type of impression formed. For example, when participants are presented with the same list of personality traits, the impression they form can vary notably depending on whether a “warm”, as opposed to a “cold”, trait is included. People are more likely to perceive an intelligent and warm individual as wise, whereas one described as intelligent and cold tends to be seen as calculating. These traits, which have a disproportionate influence on overall impressions, are referred to as central traits.
History
Classic experiments
In a classic experiment, Solomon Asch's principal theoretical concern revolved around understanding the mechanisms influencing a person's overall impression of others, principally trait centrality and trait valence of various personality characteristics. His research illustrated the influential roles of the primacy effect, valence, and causal attribution on the part of the individual. Based on the findings of ten experiments studying the effect of various personality adjectives on the resulting quality and character of impressions, several key principles of impression formation have been identified:
Individuals have a natural inclination to make global dispositional inferences about the nature of another person's personality.
Individuals expect observed behaviors to reflect stable personality traits.
Individuals attempt to fit information about different traits and behaviors into a meaningful and coherent whole.
Individuals attempt to explain and rationalize inconsistencies when the available information does not fit with the global perception.
Theoretical development
In psychology Fritz Heider's writings on balance theory emphasized that liking or disliking a person depends on how the person is positively or negatively linked to other liked or disliked entities. Heider's later essay on social cognition, along with the development of "psycho-logic" by Robert P. Abelson and Milton J. Rosenberg, embedded evaluative processes in verbal descriptions of actions, with the verb of a descriptive sentence establishing the kind of linkage existing between the actor and object of the sentence. Harry Gollob expanded these insights with his subject-verb-object approach to social cognition, and he showed that evaluations of sentence subjects could be calculated with high precision from out-of-context evaluations of the subject, verb, and object, with part of the evaluative outcome coming from multiplicative interactions among the input evaluations. In a later work, Gollob and Betty Rossman extended the framework to predicting an actor's power and influence. Reid Hastie wrote that "Gollob's extension of the balance model to inferences concerning subject-verb-object sentences is the most important methodological and theoretical development of Heider's principle since its original statement."
Gollob's regression equations for predicting impressions of sentence subjects consisted of weighted summations of out-of-context ratings of the subject, verb, and object, and of multiplicative interactions of the ratings. The equations essentially supported the cognitive algebra approach of Norman H. Anderson's Information integration theory. Anderson, however, initiated a heated technical exchange between himself and Gollob, in which Anderson argued that Gollob's use of the general linear model led to indeterminate theory because it could not completely account for any particular case in the set of cases used to estimate the models. The recondite exchange typified a continuing debate between proponents of contextualism who argue that impressions result from situationally specific influences (e.g., from semantics and nonverbal communication as well as affective factors), and modelers who follow the pragmatic maxim, seeking approximations revealing core mental processes. Another issue in using least-squares estimations is the compounding of measurement error problems with multiplicative variables.
In sociology David R. Heise relabeled Gollob's framework from subject-verb-object to actor-behavior-object in order to allow for impression formation from perceived events as well as from verbal stimuli, and showed that actions produce impressions of behaviors and objects as well as of actors on all three dimensions of Charles E. Osgood's semantic differential—Evaluation, Potency, and Activity. Heise used equations describing impression-formation processes as the empirical basis for his cybernetic theory of action, Affect control theory.
Erving Goffman's book The Presentation of Self in Everyday Life and his essay "On Face-work" in the book Interaction Ritual focused on how individuals engage in impression management. Using the notion of face in the way identity is used now, Goffman proposed that individuals maintain face expressively. "By entering a situation in which he is given a face to maintain, a person takes on the responsibility of standing guard over the flow of events as they pass before him. He must ensure that a particular expressive order is sustained – an order that regulates the flow of events, large or small, so that anything that appears to be expressed by them will be consistent with his face." In other words, individuals control events so as to create desired impressions of themselves. Goffman emphasized that individuals in a group operate as a team with everyone committed to helping others maintain their identities.
Impression-formation processes in the US
Ratings of 515 action descriptions by American respondents yielded estimations of a statistical model consisting of nine impression-formation equations, predicting outcome Evaluation, Potency, and Activity of actor, behavior, and object from pre-event ratings of the evaluation, potency, and activity of actor, behavior, and object. The results were reported as maximum-likelihood estimations.
Stability was a factor in every equation, with some pre-action feeling toward an action element transferred to post-action feeling about the same element. Evaluation, Potency, and Activity of behaviors suffused to actors so impressions of actors were determined in part by the behaviors they performed. In general objects of action lost Potency.
Interactions among variables included consistency effects, such as receiving Evaluative credit for performing a bad behavior toward a bad object person, and congruency effects, such as receiving evaluative credit for nice behaviors toward weak objects or bad behaviors toward powerful objects. Third-order interactions included a balance effect in which actors received a boost in evaluation if two or none of the elements in the action were negative, otherwise a decrement. Across all nine prediction equations, more than half of the 64 possible predictors (first-order variables plus second- and third-order interactions) contributed to outcomes.
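The general form of such equations can be sketched in code. The snippet below is a toy illustration only: the coefficients are invented for demonstration and are not the maximum-likelihood estimates from the study described above.

```python
def actor_evaluation(ae, be, oe, coef=None):
    """Toy impression-formation equation for post-event actor Evaluation.

    ae, be, oe: pre-event Evaluation ratings of actor, behavior, object
    (e.g., on a -4..+4 semantic-differential scale). The coefficients
    are hypothetical, chosen only to illustrate the stability,
    suffusion, and consistency effects described in the text.
    """
    c = coef or {
        "const": -0.3,       # baseline shift
        "stability": 0.4,    # pre-event actor feeling carries over
        "suffusion": 0.4,    # behavior goodness rubs off on the actor
        "object": 0.1,
        "consistency": 0.1,  # credit for behavior matching the object
    }
    return (c["const"]
            + c["stability"] * ae
            + c["suffusion"] * be
            + c["object"] * oe
            + c["consistency"] * be * oe)

# A good actor (2.0) performing a bad behavior (-2.0) toward a good object (2.0)
print(actor_evaluation(2.0, -2.0, 2.0))  # -0.5: well below the pre-event 2.0
```

The multiplicative be * oe term is what lets the model penalize bad behavior toward good objects more than the additive terms alone would.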
Studies of event descriptions that explicitly specified behavior settings found that impression-formation processes are largely the same when settings are salient, but the setting becomes an additional contributor to impression formation regarding actor, behavior, and object; and the action changes the impression of the setting.
Actor and object are the same person in self-directed actions such as "the lawyer praised himself" or various kinds of self-harm. Impression-formation research indicates that self-directed actions reduce the positivity of actors on the Evaluation, Potency, and Activity dimensions. Self-directed actions therefore are not an optimal way to confirm the good, potent, lively identities that people normally want to maintain. Rather self-directed actions are a likely mode of expression for individuals who want to manifest their low self-esteem and self-efficacy.
Early work on impression formation used action sentences like, "The kind man praises communists," and "Bill helped the corrupt senator," assuming that modifier-noun combinations amalgamate into a functional unit. A later study found that a modifier-noun combination does form an overall impression that works in action descriptions like a noun alone. The action sentences in that study combined identities with status characteristics, traits, moods, and emotions. Another study in 1989 focused specifically on emotion descriptors combined with identities (e.g., an angry child) and again found that emotion terms amalgamate with identities, and equations describing this kind of amalgamation are of the same form as equations describing trait-identity amalgamation.
Cross-cultural studies
Studies of various kinds of impression formation have been conducted in Canada, Japan, and Germany. Core processes are similar cross-culturally. For example, in every culture that has been studied, Evaluation of an actor was determined by, among other things, a stability effect, a suffusion from the behavior Evaluation, and an interaction that rewarded an actor for performing a behavior whose Evaluation was consistent with the Evaluation of the object person.
On the other hand, each culture weighted the core effects distinctively. For example, the impact of behavior-object Evaluation consistency was much smaller in Germany than in the United States, Canada, or Japan, suggesting that moral judgments of actors have a somewhat different basis in Germany than in the other cultures. Additionally, impression-formation processes involved some unique interactions in each culture. For example, attribute-identity amalgamations in Germany involved some Potency and Activity interactions that did not appear in other cultures.
The 2010 book Surveying Cultures reviewed cross-cultural research on impression-formation processes, and provided guidelines for conducting impression-formation studies in cultures where the processes are unexplored currently.
Recent studies
Impression formation is based on the characteristics of both the perceivers and the targets, but research had not quantified the extent to which each group contributes to an impression. Studies were therefore conducted to determine how much impressions originate from ‘our mind’ versus the ‘target face’. Results demonstrated that perceiver characteristics contribute more than target appearance. Impressions can be made from facial appearance alone, with assessments on attributes such as nice, strong, and smart based on variations of the targets’ faces. The results show that subtle facial traits have meaningful consequences on impressions, even for children as young as 3 years old. Studies have also examined impression formation in social situations rather than situations involving threat. This research reveals that social goals can drive the formation of impressions and that there is flexibility in the possible impressions formed of target faces.
Recent studies on impression formation have highlighted the involvement of several brain regions in processing social and emotional information when forming impression. The medial prefrontal cortex (mPFC) has been shown to play a key role in evaluating others' traits and intentions, particularly in the context of social judgments. The superior temporal sulcus (STS) is crucial for interpreting social cues, such as facial expressions and body language. Additionally, research has found that the hippocampus helps retrieve past experiences, which can influence the formation of new impressions. Together, these brain regions interact to integrate social information, guiding our judgments and perceptions of others.
See also
First impression (psychology)
Notes
References
Interpersonal relationships | Impression formation | Biology | 3,256 |
51,729,450 | https://en.wikipedia.org/wiki/Snap%20Inc. | Snap Inc. is an American technology company, founded on September 16, 2011, by Evan Spiegel, Bobby Murphy, and Reggie Brown based in Santa Monica, California. The company developed and maintains technological products and services, namely Snapchat, Spectacles, and Bitmoji. The company was named Snapchat Inc. at its inception, but it was rebranded Snap Inc. on September 24, 2016, in order to include the Spectacles product under the company name. Snap is co-owned by Tencent (which holds a 45.43% stake) and NBCUniversal, a subsidiary of Comcast (whose stake is undisclosed).
History
The company was founded on September 16, 2011, by Evan Spiegel and Bobby Murphy upon the relaunch of the photo sharing app Picaboo as Snapchat. On December 31, 2013, the application was hacked and 4.6 million usernames and phone numbers were leaked to the Internet.
By January 2014, the company had refused offers of acquisition, including overtures from Mark Zuckerberg, with Spiegel commenting that "trading that for some short-term gain isn’t very interesting."
In May 2014, the company acquired the software company AddLive to provide needed technology to create a new video chat feature. In that same month, it settled U.S. Federal Trade Commission (FTC) charges over its having misled users regarding its collection of their address book data and transmission of their locations (without notice or consent), and regarding its claim that user messages disappeared after their expiration (rather than remaining accessible, as they had). In December, the company acquired Vergence Labs for $15 million in cash and stock, who were the developers of Epiphany Eyewear, and the mobile app Scan for $50 million, which was revealed during the 2014 Sony Pictures hack.
In May 2015, the company moved from its original headquarters to a 47,000 sq ft (4,366 m²) office complex near Venice Beach and signed a 10-year lease. They were one of the first prominent online platforms to establish themselves there, alongside others such as Whisper and Tinder, giving Venice the new title of "Silicon Beach." In February 2017, two weeks before the company's IPO, The New York Times published a feature about Snap's role in turning the area into a technology hub, noting that Snap, with a total of 1,900 employees, had "already changed the face of Venice."
In September 2015, Snapchat acquired Looksery to develop Lenses for its mobile app, a feature based on Looksery's facial recognition software.
In March, July, and August 2016, the company acquired Bitstrips for $100 million; Obvious Engineering, the developers of Seene, for an undisclosed amount; and Vurb for $100 million. Vurb formerly developed the eponymous mobile search engine. The Vurb card-based engine removed the need to switch through multiple other applications on the device to perform a task.
In September 2016, the company officially named itself Snap Inc., and unveiled smartglasses known as Spectacles. In November 2016, the company filed documents for an initial public offering (IPO) with an estimated market value of $25–35 billion. In December 2016, the company opened research and development in Shenzhen and acquired advertising and technology company Flite and Israel-based augmented reality startup Cimagine Media for $30–40 million. A partnership formed in December 2016 with Turner Broadcasting System would allow integration of Turner properties on Snapchat, while cooperating with Snap Inc. to develop original content.
In January 2017, the company announced that it had established an international headquarters in Soho, London. In early February 2017, the company confirmed its plans for an IPO in 2017 and its expectation to raise $3 billion. In early March 2017, the company went public under the trading symbol SNAP, and reached almost $30 billion in market capitalization on the first day of trading.
In late May 2017, the company acquired the location sharing app Zenly in a cash and stock deal. The Zenly app remained functional, but its concepts were incorporated into a Snapchat feature added in June 2017.
In August 2017, Business Insider reported that Google discussed an offer to buy the company for $30 billion in early 2016.
In October 2017, the company announced that it had formed a joint venture with NBCUniversal to produce content for Snap's platforms, and that it had signed Duplass Brothers Productions as its first partner.
In November 2017, Tencent acquired a 12% non-voting minority equity stake of the company in the open market.
On October 26, 2018, at TwitchCon, Snap launched a new desktop application for macOS and Windows known as Snap Camera. It allows users to utilize Snapchat filters via PC webcams in video chat and live streaming services such as Skype, Twitch, YouTube, and Zoom. Snap also announced additional integration with Twitch.
In August 2022, The Verge reported that Snap would be laying off 20% of its 6,400-person workforce. The layoffs primarily impacted the company's hardware division and the developer products including the separately run Zenly.
On February 5, 2024, Snap Inc. announced it would lay off 10% of its global workforce, approximately 500 employees, partly to enhance in-person collaboration, marking another significant reduction following a 20% staff cut in 2022.
On September 12, 2024, Snap appointed Yahoo CEO Jim Lanzone to its board of directors.
Products
The company develops and maintains the image messaging and multimedia mobile app Snapchat, as well as Snapchat's augmented reality (AR) ad lenses, a function launched in September 2015. The AR ad lens is one of Snapchat's distinctive features, allowing advertisers to attract their target audience with various innovative lenses. Users can also apply these lenses to change their faces, appearance, and even surroundings based on their preferences, and share the resulting images or videos within their social networks. In addition, the company develops and manufactures a wearable camera called Spectacles, a pair of smartglasses that connect to the user's Snapchat account and record videos in a circular video format adjustable to any orientation. On February 20, 2017, Snap Spectacles became available for purchase online. The company sold only 220,000 pairs of Snap Spectacles V1. The company developed and launched Spectacles V2 in April 2018 in the U.S., Canada, U.K. and France, and in 13 more European countries in May 2018. On April 28, 2022, the company announced a mini drone called Pixy. Later that year in August, it was reported that future development of Pixy would be discontinued, while the current iteration of the drone would continue to be sold.
Funding and shares
The Snapchat app raised $485,000 in its seed round and an undisclosed amount of bridge funding from Lightspeed Venture Partners. By February 2013, Snapchat confirmed a $13.5 million Series A funding round led by Benchmark, which valued the company at between $60 million and $70 million. In June 2013, Snapchat raised $60 million in a Series B funding round led by venture capital firm Institutional Venture Partners. The firm also appointed a new high-profile board member, Michael Lynton of Sony's American division. By mid-July 2013, a media report valued the company at $860 million. On November 14, 2013, The Wall Street Journal reported that Facebook offered to acquire Snapchat for $3 billion, but Spiegel declined the cash offer. Tech writer Om Malik then claimed on November 15, 2013, that Google had offered $4 billion, but Spiegel again declined. On December 11, 2013, Snapchat confirmed $50 million in Series C funding from Coatue Management. Beyond 2014, the company had achieved a $10–$20 billion valuation, depending on the source, raising $100 million in Series D funding led by KPCB and $485 million in a Series E round led by Alibaba Group. Investors included General Catalyst, Kingdom Holding Company, SV Angel, Yahoo!, and Tencent. According to reports in May 2016, the company's estimated worth was said to be approaching $22.7 billion in the event of a new Series F round of investment of $1.8 billion from Spark Capital, General Atlantic, Sequoia Capital, T. Rowe Price, Meritech Capital Partners, Dragoneer Investment Group and others, led by Fidelity Investments. Later, the company raised an additional $200 million in the following Series FP round. Further reports in 2016 suggested that funding was almost at $3 billion and that Snapchat was targeting yearly revenues of a billion dollars.
Google reportedly offered Snap $30 billion in 2016 for acquisition, which Snap turned down.
2017 initial public offering
In January 2017, The Wall Street Journal reported that "people familiar with the matter" stated that Snap Inc. would share 2.5% of the money raised in an upcoming initial public offering (IPO) with the banks managing the IPO. It also reported that after the predicted March 2017 IPO, the two Snap co-founders would hold over "70% of the voting power" in the company and own around 45% of the total stock. On January 29, 2017, it was reported that the Snap Inc. IPO would likely take place on the New York Stock Exchange. As both the NYSE and Nasdaq had been "aggressively courting the listing for more than a year," the Wall Street Journal called it "a big competitive victory for the Big Board." Snap's IPO was estimated to value the company at between $20 billion and $25 billion, the largest IPO on a US exchange since Alibaba debuted in 2014 at a value of $168 billion. Beyond the two founders, the two biggest shareholders for the planned early 2017 Snap IPO were Benchmark and Lightspeed Venture Partners, both prior investors and venture-capital firms from Silicon Valley. They held a combined stake of about 20%. On March 1, 2017, it was reported that Snap Inc. "values itself at nearly $24B with its IPO pricing." Snap Inc.'s stock started trading on March 2, 2017, under the symbol SNAP, on the New York Stock Exchange.
When Snap reported earnings for the first time in May 2017, it reported a $2.2 billion quarterly loss, and the stock fell more than 20%, erasing most of the gains since the IPO. Its market capitalization reached $100 billion for the first time on February 22, 2021.
In February 2024, the stock fell 30% after an earnings report with disappointing profit guidance. Market capitalization as of March 3, 2024, was $18 billion, below the 2017 IPO level.
Controversy
Reggie Brown lawsuit
In February 2013, Reggie Brown sued Evan Spiegel and Bobby Murphy. Early investors were also eventually named in the lawsuit. Brown said that he had once been the chief marketing officer for the initial selfie app used to launch Snapchat, offering evidence of contacts with publications such as Cosmopolitan. He also claimed that he had come up with the original concept, which he had ultimately called Picaboo, and that he had created the mascot logo for the product while working with Spiegel to promote and market the idea. Brown said that he had also named the newly formed company, originally titled "Toyopa Group, LLC." Brown's lawyers offered documentation of a collaboration with Spiegel and Murphy, which included the filing of an original patent by the three Stanford classmates, but Snapchat described the lawsuit as meritless and called Brown's tactics a "shakedown". During April's depositions, Brown testified that he had believed he was an equal partner, and that he had agreed to share costs and profits. Spiegel instead described Brown as an unpaid intern who had been given an opportunity to earn valuable experience, and although Murphy claimed that he had not fully understood what Brown's role was supposed to have been, he too characterized Brown's involvement as having been that of an internship. Months later, Spiegel dismissed the lawsuit as an example of opportunists who seek out rapidly successful companies in an attempt "to also profit from the hard work of others."
On September 9, 2014, the company announced that it had settled the lawsuit for an initially undisclosed amount. The settlement amount was revealed on February 2, 2017, in Snap's SEC public filing to be $157.5 million. As part of the settlement, the company credited Brown with the conceptual idea for Snapchat.
The press release published by Snapchat's communication department quoted Spiegel: "We are pleased that we have been able to resolve this matter in a manner that is satisfactory to Mr. Brown and the Company. We acknowledge Reggie's contribution to the creation of Snapchat and appreciate his work in getting the application off the ground."
FTC settlement
The Federal Trade Commission alleged that the company had exaggerated to the public the degree to which mobile app images and photos could actually be made to disappear. Following a settlement in 2014, Snapchat was not fined, but the app service agreed to have its claims and policies monitored by an independent party for a period of 20 years.
2018 redesign
The redesign of the Snapchat app in early 2018 made changes with which many users were not happy. Around 1.2 million people petitioned Snap Inc. to roll back the redesign. Snap Inc.'s reply made no concessions, other than noting, "We completely understand the new Snapchat has felt uncomfortable for many."
Due to the redesign and other market factors in 2018, such as the growth of Instagram Stories and WhatsApp Status, Daily Active Users (DAU) of the app only rose 2% from Q4 2017. Snap, Inc. stock fell more than 15% in after-hours trading following the earnings report release. Growth of daily active users slowed in Q1 2018, and the growth rate for Q2 2018 was "planned to decelerate rapidly from Q1 levels." Snap, Inc. has commented on the redesign, saying, "We have also started to realise some of the positive benefits [of the redesign], including increased new user retention for older users." Some publishers feel the turn towards the older demographic spells the end for the app.
Data storage
In a December 2020 announcement, Google Cloud confirmed the development of the memorandum of understanding (MoU) it had signed with Saudi Aramco. The update stated the possibility of exploring options to establish cloud services in Saudi Arabia, where it confirmed storing Snapchat data. The decision was contested by Access Now, a non-profit organization, and CIPPIC, a Canadian public interest technology law clinic. The firms objected to Google's decision to choose Saudi Arabia as its new Google Cloud region despite an alarming record of human rights abuses and longstanding surveillance accusations. The firms claimed that placing the personal information of millions of Snapchat users there would put it under the jurisdiction of the government of Saudi Arabia, jeopardizing the security of the data.
California civil rights lawsuit
In June 2024, Snap paid $15 million to settle a lawsuit brought by the California Civil Rights Department alleging that the company discriminated against female employees with respect to pay and promotions, failed to prevent workplace sexual harassment, and retaliated against women who complained.
References
External links
2011 establishments in California
2017 initial public offerings
Companies based in Los Angeles
Companies listed on the New York Stock Exchange
Holding companies established in 2016
Mass media companies established in 2011
Internet properties established in 2011
Mass media companies of the United States
Social media companies of the United States
Technology companies based in Greater Los Angeles
Technology companies established in 2011
Venice, Los Angeles | Snap Inc. | Technology | 3,249 |
58,176,588 | https://en.wikipedia.org/wiki/Joint%20Global%20Ocean%20Flux%20Study | The Joint Global Ocean Flux Study (JGOFS) was an international research programme on the fluxes of carbon between the atmosphere and ocean, and within the ocean interior. Initiated by the Scientific Committee on Oceanic Research (SCOR), the programme ran from 1987 through to 2003, and became one of the early core projects of the International Geosphere-Biosphere Programme (IGBP).
The overarching goal of JGOFS was to advance the understanding of, as well as improve the measurement of, the biogeochemical processes underlying the exchange of carbon across the air—sea interface and within the ocean. The programme aimed to study these processes from regional to global spatial scales, and from seasonal to interannual temporal scales, and to establish their sensitivity to external drivers such as climate change.
Early in the programme in 1988, two long-term time-series projects were established in the Atlantic and Pacific basins. These — Bermuda Atlantic Time-series Study (BATS) and Hawaii Ocean Time-series (HOT) — continue to make observations of ocean hydrography, chemistry and biology to the present-day. In 1989, JGOFS undertook the multinational North Atlantic Bloom Experiment (NABE) to investigate and characterise the annual spring bloom of phytoplankton, a key feature in the carbon cycle of the open ocean.
An important aspect of JGOFS lay in its objective to develop an increased network of observations, made using routine procedures, and curated such that they were easily available to researchers. JGOFS also oversaw the development of models of the marine system based on understanding gained from its observational programme.
See also
Biological pump
Geochemical Ocean Sections Study (GEOSECS)
Global Ocean Data Analysis Project (GLODAP)
Global Ocean Ecosystem Dynamics (GLOBEC)
Solubility pump
World Ocean Atlas (WOA)
World Ocean Circulation Experiment (WOCE)
References
External links
International Web Site of the Joint Global Ocean Flux Study , Woods Hole Oceanographic Institution
Joint Global Ocean Flux Study CD-ROM National Oceanic and Atmospheric Administration
Biological oceanography
Carbon
Chemical oceanography
Oceanography
Physical oceanography | Joint Global Ocean Flux Study | Physics,Chemistry,Environmental_science | 428 |
38,525,554 | https://en.wikipedia.org/wiki/5052%20aluminium%20alloy | 5052 is an aluminium–magnesium alloy, primarily alloyed with magnesium and chromium. 5052 is not a heat-treatable aluminium alloy, but it can be hardened through cold working.
Chemical properties
The alloy composition of 5052 is:
Magnesium - 2.2%-2.8% by weight
Chromium - 0.15%-0.35% maximum
Copper - 0.1% maximum
Iron - 0.4% maximum
Manganese - 0.1% maximum
Silicon - 0.25% maximum
Zinc - 0.1% maximum
Remainder Aluminium
A similar alloy, A5652, exists, differing only in its impurity limits.
Mechanical properties
Uses
Typical applications include marine, aircraft, architecture, general sheet metal work, heat exchangers, fuel lines and tanks, flooring panels, streetlights, appliances, rivets and wire.
The exceptional corrosion resistance of 5052 alloy against seawater and salt spray makes it a primary candidate for failure-sensitive large marine structures, such as the tanks of liquefied natural gas tankers.
Weldability
Weldability – Gas: Good
Weldability – Arc: Very Good
Weldability – Resistance: Very Good
Brazability: Acceptable
Solderability: Not recommended
References
Aluminium alloy table
5052
Aluminium–magnesium alloys | 5052 aluminium alloy | Chemistry | 256 |
2,194,910 | https://en.wikipedia.org/wiki/4-Aminopyridine | 4-Aminopyridine (4-AP) is an organic compound with the chemical formula C5H6N2. It is one of the three isomeric aminopyridines. It is used as a research tool in characterizing subtypes of the potassium channel. It has also been used as a drug, to manage some of the symptoms of multiple sclerosis, and is indicated for symptomatic improvement of walking in adults with several variations of the disease. After Phase III clinical trials, the U.S. Food and Drug Administration (FDA) approved the compound on January 22, 2010. Fampridine is also marketed as Ampyra (pronounced "am-PEER-ah," according to the maker's website) in the United States by Acorda Therapeutics and as Fampyra in the European Union, Canada, and Australia. In Canada, the medication has been approved for use by Health Canada since February 10, 2012.
Applications
In the laboratory, 4-AP is a useful pharmacological tool in studying various potassium conductances in physiology and biophysics. It is a relatively selective blocker of members of Kv1 (Shaker, KCNA) family of voltage-activated K+ channels. However, 4-AP has been shown to potentiate voltage-gated Ca2+ channel currents independent of effects on voltage-activated K+ channels.
Convulsant activity
4-Aminopyridine is a potent convulsant and is used to generate seizures in animal models for the evaluation of antiseizure agents.
Vertebrate pesticide
4-Aminopyridine is also used under the trade name Avitrol as 0.5% or 1% in bird control bait. It causes convulsions and, infrequently, death, depending on dosage. The manufacturer says the proper dose should cause epileptic-like convulsions which cause the poisoned birds to emit distress calls resulting in the flock leaving the site; if the dose was sub-lethal, the birds will recover after 4 or more hours without long-term ill effect. The amount of bait should be limited so that relatively few birds are poisoned, causing the remainder of the flock to be frightened away with a minimum of mortality. A lethal dose will usually cause death within an hour. The use of 4-aminopyridine in bird control has been criticized by the Humane Society of the United States.
Medical use
Fampridine has been used clinically in Lambert–Eaton myasthenic syndrome and multiple sclerosis. It acts by blocking voltage-gated potassium channels, prolonging action potentials and thereby increasing neurotransmitter release at the neuromuscular junction.
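The electrophysiological mechanism can be sketched with the classic Hodgkin–Huxley equations. The toy simulation below uses standard squid-axon textbook parameters; it is a qualitative illustration only, not a model of demyelinated human axons or of 4-AP's actual binding kinetics. It shows that reducing the potassium conductance, as a Kv blocker does, prolongs the action potential:

```python
import math

# Classic Hodgkin-Huxley squid-axon parameters (textbook values).
C_M = 1.0                              # membrane capacitance, uF/cm^2
G_NA, G_K, G_L = 120.0, 36.0, 0.3      # peak conductances, mS/cm^2
E_NA, E_K, E_L = 50.0, -77.0, -54.4    # reversal potentials, mV

def vtrap(x, y):
    """x / (exp(x/y) - 1), handling the removable singularity at x = 0."""
    if abs(x / y) < 1e-6:
        return y * (1.0 - x / (2.0 * y))
    return x / (math.exp(x / y) - 1.0)

def a_n(v): return 0.01 * vtrap(-(v + 55.0), 10.0)
def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)
def a_m(v): return 0.1 * vtrap(-(v + 40.0), 10.0)
def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))

def spike_width(k_block, dt=0.01, t_stop=10.0):
    """Milliseconds spent above 0 mV after a brief stimulus;
    k_block is the fraction of K+ channels blocked (0 = none)."""
    gk = G_K * (1.0 - k_block)
    v = -65.0
    n = a_n(v) / (a_n(v) + b_n(v))  # gates start at resting steady state
    m = a_m(v) / (a_m(v) + b_m(v))
    h = a_h(v) / (a_h(v) + b_h(v))
    above = 0.0
    for i in range(int(t_stop / dt)):
        i_stim = 20.0 if i * dt < 1.0 else 0.0  # 1 ms current pulse
        i_ion = (G_NA * m**3 * h * (v - E_NA)
                 + gk * n**4 * (v - E_K)
                 + G_L * (v - E_L))
        v += dt * (i_stim - i_ion) / C_M         # forward Euler step
        n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
        m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
        if v > 0.0:
            above += dt
    return above

print("no block:     ", round(spike_width(0.0), 2), "ms above 0 mV")
print("50% K+ block: ", round(spike_width(0.5), 2), "ms above 0 mV")
# The partially blocked axon repolarizes more slowly: a wider spike.
```

A broader action potential admits more calcium at the nerve terminal, which is the proposed route to increased neurotransmitter release.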
The drug has been shown to reverse saxitoxin and tetrodotoxin toxicity in tissue and animal experiments.
In calcium entry blocker overdose in humans, 4-aminopyridine can increase the cytosolic Ca2+ concentration very efficiently, independent of the calcium channels.
Multiple sclerosis
Fampridine has been shown to improve visual function and motor skills and relieve fatigue in patients with multiple sclerosis (MS). However, the effect of the drug is strongly established for walking capacity only. Common side effects include dizziness, nervousness and nausea, and the incidence of adverse effects was shown to be less than 5% in all studies.
4-AP works as a potassium channel blocker. Strong potassium currents decrease action potential duration and amplitude, which increases the probability of conduction failure − a well documented characteristic of demyelinated axons. Potassium channel blockade has the effect of increasing axonal action potential propagation and improving the probability of synaptic vesicle release. A study has shown that 4-AP is a potent calcium channel activator and can improve synaptic and neuromuscular function by directly acting on the calcium channel beta subunit.
MS patients treated with 4-AP exhibited a response rate of 29.5% to 80%. A long-term study (32 months) indicated that 80-90% of patients who initially responded to 4-AP exhibited long-term benefits. Although improving symptoms, 4-AP does not inhibit progression of MS. Another study, conducted in Brazil, showed that treatment based on fampridine was considered efficient in 70% of the patients.
Spinal cord injury
Spinal cord injury patients have also seen improvement with 4-AP therapy. These improvements include sensory, motor and pulmonary function, with a decrease in spasticity and pain.
Tetrodotoxin poisoning
Clinical studies have shown that 4-AP is capable of reversing the effects of tetrodotoxin poisoning in animals, however, its effectiveness as an antidote in humans has not yet been determined.
Overdose
Case reports have shown that overdoses with 4-AP can lead to paresthesias, seizures, and atrial fibrillation.
Contraindications
4-aminopyridine is excreted by the kidneys. 4-AP should not be given to people with significant kidney disease (e.g., acute kidney injury or advanced chronic kidney disease) due to the higher risk of seizures with increased circulating levels of 4-AP.
Branding
The drug was originally intended, by Acorda Therapeutics, to have the brand name Amaya, however the name was changed to Ampyra to avoid potential confusion with other marketed pharmaceuticals.
Four of Acorda's patents pertaining to Ampyra were invalidated in 2017 by the United States District Court for the District of Delaware and a fifth patent expired in 2018. Since then, generic alternatives have been developed for the U.S. market.
The drug is marketed by Biogen Idec in Canada as Fampyra and as Dalstep in India by Sun Pharma.
Research
Parkinson's disease
Dalfampridine completed Phase II clinical trials for Parkinson's disease in July 2014.
See also
4-Dimethylaminopyridine, a popular laboratory reagent, is prepared directly from pyridine rather than by methylating this compound.
Pyridine
4-Pyridylnicotinamide, useful as a ligand in coordination chemistry, is prepared by the reaction of this compound with nicotinoyl chloride.
References
Potassium channel blockers
Orphan drugs
Avicides
4-Aminopyridines
4-Pyridyl compounds | 4-Aminopyridine | Chemistry,Biology | 1,304 |
187,398 | https://en.wikipedia.org/wiki/Cisco%20IOS | The Internetworking Operating System (IOS) is a family of proprietary network operating systems used on several router and network switch models manufactured by Cisco Systems. The system is a package of routing, switching, internetworking, and telecommunications functions integrated into a multitasking operating system. Although the IOS code base includes a cooperative multitasking kernel, most IOS features have been ported to other kernels, such as Linux and QNX, for use in Cisco products.
Not all Cisco networking products run IOS. Exceptions include some Cisco Catalyst switches, which run IOS XE, and Cisco ASR routers, which run either IOS XE or IOS XR; both are Linux-based operating systems. For data center environments, Cisco Nexus switches (Ethernet) and Cisco MDS switches (Fibre Channel) both run Cisco NX-OS, also a Linux-based operating system.
History
The IOS network operating system was created from code written by William Yeager at Stanford University, which was developed in the 1980s for routers with 256 kB of memory and low CPU processing power. Through modular extensions, IOS has been adapted to increasing hardware capabilities and new networking protocols. When IOS was developed, Cisco Systems' main product line was routers. The company acquired a number of young companies that focused on network switches, such as Kalpana, the inventor of the first Ethernet switch, and as a result Cisco switches did not initially run IOS. Prior to IOS, the Cisco Catalyst series ran CatOS.
Command-line interface
The IOS command-line interface (CLI) provides a fixed set of multiple-word commands. The set available is determined by the "mode" and the privilege level of the current user. "Global configuration mode" provides commands to change the system's configuration, and "interface configuration mode" provides commands to change the configuration of a specific interface. All commands are assigned a privilege level, from 0 to 15, and can only be accessed by users with the necessary privilege. Through the CLI, the commands available to each privilege level can be defined.
Most builds of IOS include a Tcl interpreter. Using the embedded event manager feature, the interpreter can be scripted to react to events within the networking environment, such as interface failure or periodic timers.
Available command modes include:
User EXEC Mode
Privileged EXEC Mode
Global Configuration Mode
ROM Monitor Mode
Setup Mode
And more than 100 configuration modes and submodes.
Architecture
Cisco IOS has a monolithic architecture, owing to the limited hardware resources of routers and switches in the 1980s. This means that all processes have direct hardware access to conserve CPU processing time. There is no memory protection between processes and IOS has a run to completion scheduler, which means that the kernel does not pre-empt a running process. Instead the process must make a kernel call before other processes get a chance to run. IOS considers each process a single thread and assigns it a priority value, so that high priority processes are executed on the CPU before queued low priority processes, but high priority processes cannot interrupt running low priority processes.
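This run-to-completion behaviour can be sketched in a few lines of toy code (an illustrative model, not Cisco source): the highest-priority queued process is dispatched next, but the dispatcher never interrupts a running process; it regains control only when the process returns.

```python
import heapq

class RunToCompletionScheduler:
    """Toy cooperative scheduler: the highest-priority queued process
    runs next, but a running process is never pre-empted."""

    def __init__(self):
        self._queue = []  # min-heap of (negated priority, seq, process)
        self._seq = 0     # tie-breaker keeps FIFO order within a priority

    def enqueue(self, priority, process):
        heapq.heappush(self._queue, (-priority, self._seq, process))
        self._seq += 1

    def run(self):
        while self._queue:
            _, _, process = heapq.heappop(self._queue)
            # The process keeps the CPU until it returns (the analogue
            # of making a kernel call); nothing can interrupt it.
            process()

def make_proc(name):
    def proc():
        print("running", name)
    return proc

sched = RunToCompletionScheduler()
sched.enqueue(1, make_proc("low-priority housekeeping"))
sched.enqueue(10, make_proc("high-priority routing update"))
sched.run()  # high priority runs first; low runs after it completes
```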
The Cisco IOS monolithic kernel does not implement memory protection for the data of different processes. The entire physical memory is mapped into one virtual address space. The Cisco IOS kernel does not perform any memory paging or swapping. Therefore the addressable memory is limited to the physical memory of the network device on which the operating system is installed. IOS does however support aliasing of duplicated virtual memory contents to the same physical memory. This architecture was implemented by Cisco in order to ensure system performance and minimize the operational overheads of the operating system.
The disadvantage of the IOS architecture is that it increases the complexity of the operating system, data corruption is possible as one process can write over the data of another, and one process can destabilize the entire operating system or even cause a software-forced crash. In the event of an IOS crash, the operating system automatically reboots and reloads the saved configuration.
Routing
In all versions of Cisco IOS, packet routing and forwarding (switching) are distinct functions. Routing and other protocols run as Cisco IOS processes and contribute to the Routing Information Base (RIB). This is processed to generate the final IP forwarding table (FIB, Forwarding Information Base), which is used by the forwarding function of the router. On router platforms with software-only forwarding (e.g., Cisco 7200), most traffic handling, including access control list filtering and forwarding, is done at interrupt level using Cisco Express Forwarding (CEF) or dCEF (Distributed CEF). This means IOS does not have to do a process context switch to forward a packet. Routing functions such as OSPF or BGP run at the process level. In routers with hardware-based forwarding, such as the Cisco 12000 series, IOS computes the FIB in software and loads it into the forwarding hardware (such as an ASIC or network processor), which performs the actual packet forwarding function.
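The forwarding side of this split can be illustrated with a toy longest-prefix-match lookup over a FIB. This is a linear scan for clarity only; real FIB lookups use tries or TCAM hardware, and the interface names below are made up:

```python
import ipaddress

# Toy FIB: (prefix, egress interface), most specific prefix wins.
fib = [
    (ipaddress.ip_network("0.0.0.0/0"), "GigabitEthernet0/0"),   # default
    (ipaddress.ip_network("10.0.0.0/8"), "GigabitEthernet0/1"),
    (ipaddress.ip_network("10.1.2.0/24"), "GigabitEthernet0/2"),
]

def lookup(dst):
    """Return the egress interface of the most specific matching prefix."""
    addr = ipaddress.ip_address(dst)
    best = None
    for net, iface in fib:
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, iface)
    return best[1] if best else None

print(lookup("10.1.2.99"))  # GigabitEthernet0/2 (the /24 wins over the /8)
print(lookup("10.9.9.9"))   # GigabitEthernet0/1
print(lookup("8.8.8.8"))    # GigabitEthernet0/0 (default route)
```

In software-forwarded platforms this lookup happens at interrupt level; in hardware-forwarded platforms the same table is pushed down into an ASIC or network processor.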
Interface descriptor block
An Interface Descriptor Block, or simply IDB, is a portion of memory or Cisco IOS internal data structure that contains information such as the IP address, interface state, and packet statistics for networking data. Cisco's IOS software maintains one IDB for each hardware interface in a particular Cisco switch or router and one IDB for each subinterface. The number of IDBs present in a system varies with the Cisco hardware platform type.
Physical and logical interfaces on the switch are referenced with either expanded or abbreviated port description names. This, combined with slot, module, and interface numbering, creates a unique reference to that interface.
Packages and feature sets
IOS is shipped as a unique file that has been compiled for specific Cisco network devices. Each IOS image therefore includes a feature set, which determines the command-line interface (CLI) commands and features that are available on different Cisco devices. Upgrading to another feature set therefore entails installing a new IOS image on the networking device and reloading the IOS operating system. Information about the IOS version and feature set running on a Cisco device can be obtained with the show version command.
Most Cisco products that run IOS also have one or more "feature sets" or "packages", typically eight packages for Cisco routers and five packages for Cisco network switches. For example, Cisco IOS releases meant for use on Catalyst switches are available as "standard" versions (providing only basic IP routing), "enhanced" versions, which provide full IPv4 routing support, and "advanced IP services" versions, which provide the enhanced features as well as IPv6 support.
Beginning with the 1900, 2900 and 3900 series of ISR Routers, Cisco revised the licensing model of IOS. To simplify the process of enlarging the feature-set and reduce the need for network operating system reloads, Cisco introduced universal IOS images, that include all features available for a device and customers may unlock certain features by purchasing an additional software license. The exact feature set required for a particular function can be determined using the Cisco Feature Navigator. Routers come with IP Base installed, and additional feature pack licenses can be installed as bolt-on additions to expand the feature set of the device. The available feature packs are:
Data adds features like BFD, IP SLAs, IPX, L2TPv3, Mobile IP, MPLS, SCTP.
Security adds features like VPN, Firewall, IP SLAs, NAC.
Unified Comms adds features like CallManager Express, Gatekeeper, H.323, IP SLAs, MGCP, SIP, VoIP, CUBE(SBC).
IOS images cannot be updated with software bug fixes; to patch a vulnerability in IOS, a binary file containing the entire operating system must be loaded.
Versioning
Cisco IOS is versioned using three numbers and some letters, in the general form a.b(c.d)e (a parsing sketch follows the list below), where:
a is the major version number.
b is the minor version number.
c is the release number, which begins at one and increments as new releases in the same a.b train are released. "Train" is Cisco-speak for "a vehicle for delivering Cisco software to a specific set of platforms and features."
d (omitted from general releases) is the interim build number.
e (zero, one or two letters) is the software release train identifier, such as none (which designates the mainline, see below), T (for Technology), E (for Enterprise), S (for Service provider), XA as a special functionality train, XB as a different special functionality train, etc.
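The naming convention above is regular enough to parse mechanically. The following sketch approximates that convention with a regular expression; it is an illustration, not an official Cisco grammar:

```python
import re

# Parses strings of the form a.b(c.d)e, e.g. "12.4(15)T1" or "12.1(8)E14".
VERSION_RE = re.compile(
    r"^(?P<major>\d+)\.(?P<minor>\d+)"
    r"\((?P<release>\d+)(?:\.(?P<interim>\d+))?\)"
    r"(?P<train>[A-Z]*)(?P<rebuild>\d*)$"
)

def parse_ios_version(s):
    m = VERSION_RE.match(s)
    if not m:
        raise ValueError(f"not an IOS version string: {s!r}")
    return m.groupdict()

print(parse_ios_version("12.4(15)T1"))
# {'major': '12', 'minor': '4', 'release': '15', 'interim': None,
#  'train': 'T', 'rebuild': '1'}
print(parse_ios_version("12.1(8)E14"))  # rebuild 14 of 12.1(8)E
```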
Rebuilds – Often a rebuild is compiled to fix a single specific problem or vulnerability for a given IOS version. For example, 12.1(8)E14 is a Rebuild, the 14 denoting the 14th rebuild of 12.1(8)E. Rebuilds are produced to either quickly repair a defect, or to satisfy customers who do not want to upgrade to a later major revision because they may be running critical infrastructure on their devices, and hence prefer to minimize change and risk.
Interim releases – Are usually produced on a weekly basis, and form a roll-up of current development effort. The Cisco advisory web site may list more than one possible interim to fix an associated issue (the reason for this is unknown to the general public).
Maintenance releases – Rigorously tested releases that are made available and include enhancements and bug fixes. Cisco recommend upgrading to Maintenance releases where possible, over Interim and Rebuild releases.
Trains
Cisco says, "A train is a vehicle for delivering Cisco software to a specific set of platforms and features."
Until 12.4
Before Cisco IOS release 15, releases were split into several trains, each containing a different set of features. Trains more or less map onto distinct markets or groups of customers that Cisco targeted.
The mainline train is intended to be the most stable release the company can offer, and its feature set never expands during its lifetime. Updates are released only to address bugs in the product. The previous technology train becomes the source for the current mainline train — for example, the 12.1T train becomes the basis for the 12.2 mainline. Therefore, to determine the features available in a particular mainline release, look at the previous T train release.
The T – Technology train, gets new features and bug fixes throughout its life, and is therefore potentially less stable than the mainline. (In releases prior to Cisco IOS Release 12.0, the P train served as the Technology train.) Cisco doesn't recommend usage of T train in production environments unless there is urgency to implement a certain T train's new IOS feature.
The S – Service Provider train, runs only on the company's core router products and is heavily customized for Service Provider customers.
The E – Enterprise train, is customized for implementation in enterprise environments.
The B – Broadband train, supports internet-based broadband features.
The X* (XA, XB, etc.) – Special Release train, contains one-off releases designed to fix a certain bug or provide a new feature. These are eventually merged with one of the above trains.
There were other trains from time to time, designed for specific needs — for example, the 12.0AA train contained new code required for Cisco's AS5800 product.
Since 15.0
Starting with Cisco IOS release 15, there is just a single train, the M/T train. This train includes both extended maintenance releases and standard maintenance releases. The M releases are extended maintenance releases, and Cisco will provide bug fixes for 44 months. The T releases are standard maintenance releases, and Cisco will only provide bug fixes for 18 months.
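To illustrate the naming: a release such as 15.0(1)M2 is an extended maintenance release (the M identifying the train, the trailing 2 a rebuild), while one such as 15.1(2)T is a standard maintenance release; the numbers otherwise follow the a.b(c)e scheme described under Versioning above.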
Security and vulnerabilities
Because IOS needs to know the cleartext password for certain uses (e.g., CHAP authentication), passwords entered into the CLI are by default only weakly encrypted, as 'Type 7' ciphertext, such as "Router(config)#username jdoe password 7 0832585B1910010713181F". This is designed to prevent "shoulder-surfing" attacks when viewing router configurations and is not secure: such passwords are easily decrypted using software called "getpass", available since 1995, or "ios7crypt", a modern variant. The router itself can also be made to decode a Type 7 password, by entering it as the key in a "key chain" and then issuing a "show key chain" command, as sketched below; the example above decrypts to "stupidpass". However, these tools will not decrypt 'Type 5' passwords or passwords set with the enable secret command, which uses salted MD5 hashes.
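The router-assisted decoding described above can be sketched as the following session (illustrative only: the chain name DECODE and the key number are arbitrary, and output formatting varies by release):

Router(config)# key chain DECODE
Router(config-keychain)# key 1
Router(config-keychain-key)# key-string 7 0832585B1910010713181F
Router(config-keychain-key)# end
Router# show key chain
Key-chain DECODE:
    key 1 -- text "stupidpass"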
Cisco recommends that all Cisco IOS devices implement the authentication, authorization, and accounting (AAA) security model. AAA can use local, RADIUS, and TACACS+ databases. However, a local account is usually still required for emergency situations.
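As an illustration of the AAA model, a minimal configuration using a TACACS+ server with a local fallback account might look like the following sketch (the server name TAC-1, the address, the shared key and the username are placeholders, and syntax differs between IOS releases):

Router(config)# aaa new-model
Router(config)# tacacs server TAC-1
Router(config-server-tacacs)# address ipv4 192.0.2.10
Router(config-server-tacacs)# key SHARED-SECRET
Router(config-server-tacacs)# exit
Router(config)# aaa authentication login default group tacacs+ local
! a local account retained for emergency access if the TACACS+ server is unreachable
Router(config)# username emergency secret S0mePassw0rd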
At the Black Hat Briefings conference in July 2005, Michael Lynn, working for Internet Security Systems at the time, presented information about a vulnerability in IOS. Cisco had already issued a patch, but asked that the flaw not be disclosed. Cisco filed a lawsuit, but settled after an injunction was issued to prevent further disclosures.
As classic IOS is phased out on devices in favor of IOS XE, the newer system has adopted many improvements, including updated security defaults; in some use cases secrets can now be stored as one-way hashes.
IOS XR train
For Cisco products that required very high availability, such as the Cisco CRS-1, the limitations of a monolithic kernel were not acceptable. In addition, competing router operating systems that emerged 10–20 years after IOS, such as Juniper's Junos OS, were designed without these limitations. Cisco's response was to develop a completely new operating system offering modularity, memory protection between processes, lightweight threads, pre-emptive scheduling, the ability to independently restart failed processes, and massive scale for use in service provider networks. The IOS XR development train initially used the QNX real-time microkernel operating system, and a large part of the IOS source code was rewritten to take advantage of the features offered by the kernel. In 2005, Cisco introduced the Cisco IOS XR network operating system on the 12000 series of network routers, extending the microkernel architecture from the CRS-1 routers to Cisco's widely deployed core routers. As of release 6.x of Cisco IOS XR, QNX was dropped in favor of Linux. A related early effort, inspired by this work on modularity, modified monolithic IOS into a "modular IOS" that extended the microkernel architecture into the IOS environment while still providing the software upgrade capabilities. That idea was tested only on the Catalyst 6500, got limited exposure, and was quickly discontinued because its resource requirements were too high and it significantly impaired platform operation.
See also
Cisco IOS XE
Cisco IOS XR
Cisco NX-OS
Junos OS
Supervisor Engine (Cisco)
Network operating system
Packet Tracer
References
External links
Cisco Content Hub
Cisco Feature Navigator
Cisco Security Advisories
IOS
Embedded operating systems
Internet Protocol based network software
Network operating systems
Routers (computing) | Cisco IOS | Engineering | 3,124 |
1,761,921 | https://en.wikipedia.org/wiki/Polar%20high | In meteorology, the polar highs are areas of high atmospheric pressure, sometimes similar to anticyclones, around the North and South Poles. The south polar high (the Antarctic high) is the stronger of the two, because land gains and loses heat more effectively than sea and the northern polar region contains far less land. The cold temperatures in the polar regions cause air to descend, creating the high pressure (a process called subsidence), just as the warm temperatures around the equator cause air to rise instead and create the low-pressure Intertropical Convergence Zone. Rising air also occurs along bands of low pressure situated just below the polar highs, around the 50th parallel of latitude. These extratropical convergence zones are occupied by the polar fronts, where air masses of polar origin meet and clash with those of tropical or subtropical origin in a stationary front. This convergence of rising air completes the vertical cycle around the polar cell in each latitudinal hemisphere's polar region. Closely related to this concept is the polar vortex, a rotating low-pressure circle of cold air around the poles.
Surface temperatures under the polar highs are among the coldest on Earth, with no month having an average temperature above freezing. Regions under the polar highs also experience very low levels of precipitation, which leads them to be known as "polar deserts".
Air flows outwards from the poles to create the polar easterlies in the Arctic and Antarctic areas.
See also
Polar vortex
Polar low
References
Regional climate effects
Meteorological phenomena | Polar high | Physics | 295 |
1,336,470 | https://en.wikipedia.org/wiki/Clinker%20%28boat%20building%29 | Clinker-built, also known as lapstrake-built, is a method of boat building in which the edges of longitudinal (lengthwise-running) hull planks overlap each other. Where necessary in larger craft, shorter hull planks can be joined end to end, creating a longer hull plank (strake).
The technique originated in Nordic shipbuilding, and was employed by the Anglo-Saxons, Frisians, and Scandinavians. It was also used in cogs, the other major ship construction type found in Northern Europe in the latter part of the medieval period. Carvel construction—where longitudinal hull planks abut edge to edge (instead of lapping)—supplanted clinker construction in large vessels as the demand for capacity surpassed the limits of clinker construction, such as in larger medieval transport ships (hulks).
UNESCO named the Nordic clinker boat tradition to its List of Intangible Cultural Heritage on December 14, 2021, in the first approval of a joint Nordic application.
Description
Clinker construction is a boat and ship-building method in which the hull planks overlap and are joined by nails that are driven through the overlap. These fastenings typically pass through a metal rove, over which the protruding end of the nail is deformed in a process comparable to riveting the planks together. This gives a distinctive appearance to the outside of the hull, as the overlaps are obvious in the stepped nature of the hull surface.
Clinker construction is a shell-first technique (in contrast to the frame-based nature of carvel). The construction sequence begins with the joining of the keel, stem and sternpost (or transom) and setting these in place in the build area. Thereafter, the shape of the hull is determined by the shaping and fitting of the hull planking that forms the waterproof exterior of the hull. Any reinforcing frames, floors or beams are added after the joining of the hull planks. This may involve completely finishing the exterior planking first, or just some planking may be fitted with, for instance, floors being added whilst that part of the hull is accessible before planking is continued.
Medieval clinker construction used iron nails and roves, the latter often being a distinctive diamond shape. There are less common regional instances of planks being joined with treenails or by sewing, but iron fastening predominated. More modern boats generally use copper nails with an annular rove of the same material.
Historically, particularly in the traditional Nordic practice, clinker construction most commonly used cleft, or radially split, oak planks. This gives a stronger piece of timber than sawn material: not only is the grain continuous along the length of the piece, but the medullary rays are aligned in the same plane as the timber surface, so maximising the strength available. However, this timber conversion method does limit the maximum width of plank to slightly more than one third of the diameter of the tree from which it is split: the narrowest part (including any pith) and the sapwood are cut off. The slightly uneven surface found on cleft timber is the reason why caulking is laid in the overlap between the hull planks during construction, often using animal hair.
Examples
Early examples of clinker-built boats include the longships of the Viking raiders and traders, and the trading cogs of the Hanseatic League. Modern examples of clinker-built boats that are directly descended from those of the early medieval period are seen in the traditional round-bottomed Thames skiffs, the larger (originally) cargo-carrying Norfolk wherries of England, and working craft like the yawls that were once common around the coasts of Britain and Ireland.
History
The term clinker derives from a common Germanic word for clinch or clench, a word meaning “to fasten together”.
Historical context: other systems
In the first few centuries AD, several boat and ship-building systems existed in Europe. In the Mediterranean, flush-planked hulls were produced by edge-to-edge joining of the hull planking with mortise and tenon joints. This was a shell-first technique, which started with a keel, stem and stern-post, to which planking was added. The hull was then reinforced by the addition of frames. The shape of the individual planks generates the shape of the hull. In the Roman-occupied parts of Northern Europe, the Romano-Celtic tradition involved flush-planking that was not joined with mortise and tenon joints but was connected by framing elements. (This may be a building tradition that continued with the bottom planking of the medieval cog and then into the Dutch bottom-based building methods of the 17th century.) The Romano-Celtic method of construction is also a shell-first technique, in that the hull shape is dictated by the shaping of the planks, not by the underlying framing of the finished hull.
Origins of clinker
There are precursors of clinker construction. The archaeological remains of a river boat dated to the first two centuries AD (described as Romano-Celtic), found in Pommeroeul in Belgium, had a single upper strake that overlapped the underlying plank, though it is not clear how it was fastened. Earlier finds have bevelled lap joints or other similar arrangements that do not have the full lap of clinker. These include the Dover boat and Ferriby 1 (both dating to the middle of the second millennium BC) and the Hjortspring boat. In these cases, the planks are stitched or sewn together. The Hjortspring boat is built shell-first, so suggesting some continuity with the Nordic tradition of clinker construction.
The earliest example of ship and boat building using overlapped planking joined with metal fastenings is an extended logboat from Björke in Sweden. The Nydam boat, dating to the early fourth century AD, is an almost complete example of a boat built with clinker construction. It has overlapping planks joined with iron nails driven through the lap. The nails are clenched over roves on the inside of the planking. The boat was built shell-first.
Into the medieval
Though clinker construction is closely associated with Nordic countries, the same technique was used at an early stage in other parts of Northern Europe. The Saxon burial ship at Sutton Hoo in eastern England is an early (7th-century) example of this sort of ship occurring in the broader Northern European area. Other sites from the 7th century AD include Kvalsund, Norway, Gretstedbro in Jutland and Snape in eastern England. One difference from the Nydam boat is that individual planks in the later period are shorter and narrower. This suggests that large oak trees for ship-building had become a lot less common by the 7th century, so timber of smaller dimensions had to be used.
The 8th, 9th and 10th centuries saw the use of Viking longships for raiding and settlement. Archaeological remains of these clinker-built ships include the Oseberg ship and the Gokstad ship. These show some development from earlier vessels, including a partial keelson which acted as the mast step. As well as these warship types, cargo vessels were built which were less extreme, with greater beam and more emphasis on propulsion by sail, together with extra cross-beams to strengthen the hull for greater weight carrying.
The cog is part of another ship-building tradition in Northern Europe that existed at the same time that the purely Nordic-tradition clinker vessels were being built. Though the classic cog construction uses flush planking for the bottom, the sides are constructed in a clinker method, with the difference that the nails that passed through overlapping planks were simply bent over and driven back into the plank, rather than using roves.
Clinker-built vessels were constructed as far South as the Basque country; the Newport Medieval Ship is an example of a clinker-built vessel that was built in the Basque region. By the 14th century, clinker-built ships and the cog represented the major construction methods in Northern Europe.
Introduction of carvel to Northern Europe
Carvel construction was developed in the Mediterranean around the end of the Classical antiquity period. By the end of the 13th century AD, Mediterranean ships were being built on a skeleton basis, with hull planks being fixed to the frames and not to each other. At the same time, Northern European cogs were voyaging into the Mediterranean. The two maritime technological traditions had differences beyond the hull construction methods. Mediterranean ships were carvel-built, lateen rigged (using more than one mast on larger vessels) and still used side rudders. The visiting cogs had a single square-rigged mast, a stern-post mounted pintle-and-gudgeon rudder and clinker sides. As part of the process of merging these two sets of traditions, carvel-built ships started to arrive in Northern waters. They were soon followed by shipwrights with the skills to build in carvel construction, with the first being built in this region in the late 1430s. The change is still not well understood. The frames of carvel could be made stronger to support the weight of the guns that ships were starting to carry and allowed gun-ports to be cut in the hull. Carvel construction may have solved the shortage of large cleft oak planks from which to make larger clinker vessels. Despite the large-scale move over to carvel construction for large vessels, clinker construction remained prominent throughout Northern Europe.
The Nordic clinker boat tradition was inscribed to the UNESCO List of the Intangible Cultural Heritage on December 14, 2021, as the first joint Nordic application to the list.
Construction
Planking
In the clinker- (lapstrake-) method of boat building, the edges of longitudinal hull planks overlap each other. Where necessary in larger craft, shorter planks can be joined end to end, creating a longer strake or hull plank.
How this is done is as follows. In building such a boat (a simple boat, for example), workers assemble and securely set up the keel, stem, hog, deadwoods, sternpost and perhaps transom. In normal practice, this will be the same way up as they will be in use. From the hog, the garboard, bottom, bilge, topside and sheer strakes are planked up, held together along their 'lands' – the areas of overlap between neighbouring strakes – by copper rivets. At the stem and, in a double-ended boat, the sternpost, geralds are formed. That is, in each case, the land of the lower strake is tapered to a feather edge at the end of the strake where it meets the stem or stern-post. This allows the end of the strake to be screwed to the apron with the outside of the planking mutually flush at that point and flush with the stem. This means that the boat's passage through the water will not tend to lift the ends of the planking away from the stem. Before the next plank is laid up, the face of the land on the lower strake is bevelled to suit the angle at which the next strake will lie in relation to it. This varies all along the land. Gripes are used to hold the new strake in position on the preceding one before the fastening is done.
Timbering or framing out
How these steps are done is as follows. Once the shell of planking is assembled, transverse battens of oak, ash, or elm, called timbers, are steam-bent to fit the internal, concave side. As the timbers are bent in, they are fastened to the shell (e.g., via copper-riveting), through the lands of the planking. Alternatively, as on many clinker-built craft, e.g. in Scandinavia and in Thames skiffs and larger working craft like the coble, sawn frames are used, assembled from floors and top timbers, joggled to fit the lands. (Sometimes the timbers in larger archaic craft were also joggled before being steamed in.)
With the timbers all fitted, longitudinal members are bent in. The ones that run on the underside of the thwarts are called risings. They are fastened through the timbers. Bilge keels are often added to the outside of the land on which the boat would lie on a hard surface to stiffen it and protect it from wear. A stringer is usually fitted round the inside of each bilge to strengthen it. In a small boat, this is usually arranged to serve also as a means of retaining the bottom boards. These are removable assemblies, shaped to lie over the bottom timbers and be walked upon. They spread the stresses from the crew's weight across the bottom structure.
Longitudinals
Inboard of the sheer strake the heavier gunwale is similarly bent in along the line of the sheer. This part of the work is finished by fitting the breast hook and quarter knees. Swivel or crutch chocks are fitted as appropriate to the gunwale, the thwarts fitted down onto the rising and held in position by knees up to the gunwale and perhaps down onto the stringer. The structure of gunwale, rising, thwart and thwart knees greatly stiffens and strengthens the shell and turns it into a boat. There are several ways of fixing the rubbing strake, but, in a clinker boat, it is applied to the outside of the sheer strake.
Fittings and finishing
Fittings such as swivels or crutch plate, painter ring, stretchers, keel and stem band are fitted. In a sailing dinghy, there would be more fittings, such as fairleads, horse, shroud plates, mast step, toe straps and so on.
At stages along the way, painters will have been called in to prime the timber, particularly immediately before the timbering is done. The boatbuilder will clean up the inside of the planking and the painter will prime it and probably more, partly because it is easier that way and partly so as to put some preservative on the planking behind the timbers. Similarly, it is best to have the varnishing done after the fittings are fitted but before they are shipped. Thus, the keel band will be shaped and drilled and the screw holes drilled in the wood of keel and stem then the band will be put aside while the varnishing is done.
Fastenings
In general
The fittings of a clinker boat, as described above (keel, stem band, etc.) are fixed with screws. The planks of the boat may be fastened together in several ways:
With copper or iron rivets—consisting of a square nail and a dish-shaped washer called a rove. The land is pierced, the nail knocked through from the outside, the rove punched on while the head is held up by a dolly (a small portable anvil, usually of cylindrical shape). The nail is cut off just proud of the rove and the cut end clenched over the rove while the dolly is used to hold the nail in place. In planking up clinker work, one man can hold both dolly and clenching hammer. Although this is common where sawn frames are to be used, boats intended for steamed timbers are usually nailed but not clenched until the timbering out is complete. As timbering is a two-handed job it is more efficient to leave the clenching until help is at hand, and then the helper dollies up, whilst the builder sits inside the hull and clenches up.
With iron nails—with the pointed nail ends protruding on the inside of the boat, bent over and back into the wood in the form of a hook. This technique, called clinching, used to be found in Scandinavian-built boats, but even iron nails on the lands were usually properly clenched over roves. Nails fastening timbers were sometimes turned over, particularly where removable bottom boards were to rest on the timbers. However, it was possible to tread the bottom boards onto the clenched nails and, where marks were left, gouge out recesses to accommodate the clenched nails.
With screws—which may be used for fixing the ends of the strakes to the apron and transom. In later times, they also fixed knees to the gunwale and thwarts, but, traditionally, this last would be done with a clench bolt or a large copper nail, clenched.
With adhesive, notably epoxy—where, traditionally, lands were not glued nor was anything used to bed them. The garboard was bedded onto the hog and keel and the ends of the strakes onto the stem and apron with a mixture of white lead and grease. During the world wars of the 20th century, new techniques and materials were developed by the aircraft industry. By the mid-1950s, these were well infiltrated into the boatbuilding trade. New boats in classes of racing dinghy with clinker hulls were built as glued clinker boats. The basic construction was the same as before, but they were built with decks of ply planking, and the lands were glued with no fastenings, except that the ends and garboards were still screwed to the apron and hog. Since plywood would not split, no timbers were used. Except for a light gunwale and wide rubbing strake, the longitudinals were omitted too, and a short thwart rising and knees were glued to the planking. Their decks made such boats sufficiently stiff. So that the liquid glue could be laid onto the land before the next plank was assembled onto it, the boats were built upside down.
Of the centre-line
Where suitable metal was not available, it was possible to use treenails (pronounced trennels), fasteners like clench bolts but made of wood; instead of being clenched, they had a hardwood wedge knocked into each end to spread it, after which, the surplus was then sawn off. In the last few years of wooden boat construction, glue and screws took over, but until the 1950s, the keel, hog, stem, apron, deadwoods, sternpost, and perhaps transom would be fastened together by bolts set in white lead and grease. There are three kinds of bolt used:
The screw bolt (i.e. threaded bolt), with its nut and washer, is by far the most common.
The pin bolt or cotter bolt, instead of a thread, has a tapered hole forged through the end away from the head, into which a tapered pin or cotter is knocked. The taper is in effect a straight thread. In conjunction with a washer, this draws the bolt tight, as a nut does on a screw bolt.
The clench bolt has some of the features of a rivet but is usually much longer than the normal rivet; in a wooden ship, perhaps a metre or more. For a shipwright's use, it is of copper. A head is formed by upsetting one end using a swage. It is then knocked through a hole bored through the work to be fastened, and through a washer. The head is held up with a dolly and the other end is upset over the washer in the same way as the head.
Until well into the nineteenth century, this is what held the great ships of the world together, though some such bolts may have been of iron. Until the late 1950s, the centre-line assembly of British Admiralty twenty-five foot motor cutters were fastened in this way.
Clinker and carvel compared
The Vikings used the clinker form of construction to build their longships from split wood planks. Clinker is the most common English term for this construction in both British and American English, though in American English the method is sometimes also known as lapstrake; lapboard was used especially before the 20th century to side buildings, where the right angles of the structure lend themselves to quick assembly.
The smoother surface of a carvel boat gives the impression at first sight that it is hydrodynamically more efficient, since the lands of the planking are not there to disturb the streamline. This impression of greater relative efficiency is an illusion because, for a given hull strength, the clinker boat is lighter.
Additionally, the clinker building method as used by the Vikings created a vessel which could twist and flex along its length, from bow to stern. This gave it an advantage in North Atlantic rollers so long as the vessel was small in overall displacement. Because the method is light, increasing the beam did not commensurately increase the vessel's survivability under the torsional forces of rolling waves, and greater beam widths may have made the resulting vessels more vulnerable.
There is an upper limit to the size of clinker-built vessels, which could be and was exceeded by several orders of magnitude in later large sailing vessels incorporating carvel-built construction. Clinker building requires relatively wide planking stock compared to carvel, as carvel can employ stealers to reduce plank widths amidships, where their girth is greatest, while clinker planks, needing sufficient lap to accept their clench fastenings, must be wider in proportion to their thickness. In all other areas of construction, including framing, deck, etc., clinker is as capable as carvel. Clinker construction remains to this day a valuable method of construction for small wooden vessels.
See also
Classic Boat (magazine)
Dragon Harald Fairhair (ship)
Gableboat
Montagu whaler
Longship
Naglfar
Oselvar
Rivet
Yoal
Notes
References
Further reading
Shipbuilding
Intangible Cultural Heritage of Humanity | Clinker (boat building) | Engineering | 4,536 |
2,156,081 | https://en.wikipedia.org/wiki/One%20Laptop%20per%20Child | One Laptop per Child (OLPC) was a non-profit initiative that operated from 2005 to 2014 with the goal of transforming education for children around the world by creating and distributing educational devices for the developing world, and by creating software and content for those devices.
When the program launched, the typical retail price for a laptop was considerably in excess of $1,000 (US), so achieving this objective required bringing a low-cost machine to production. This became the OLPC XO Laptop, a low-cost and low-power laptop computer designed by Yves Béhar with Continuum, now EPAM Continuum. The project was originally funded by member organizations such as AMD, eBay, Google, Marvell Technology Group, News Corporation, and Nortel. Chi Mei Corporation, Red Hat, and Quanta provided in-kind support. After disappointing sales, the hardware design part of the organization shut down in 2014.
The OLPC project was praised for pioneering low-cost, low-power laptops and inspiring later variants such as Eee PCs and Chromebooks; for assuring consensus at ministerial level in many countries that computer literacy is a mainstream part of education; for creating interfaces that worked without literacy in any language, and particularly without literacy in English.
It was criticized for a US-centric focus that ignored bigger problems, for high total costs, for a lack of attention to maintainability and training, and for its limited success. The OLPC project is critically reviewed in a 2019 MIT Press book titled The Charisma Machine: The Life, Death, and Legacy of One Laptop per Child.
OLPC, Inc, a descendant of the original organization, continues to operate, but the design and creation of laptops is no longer part of its mission.
History
The OLPC program has its roots in the pedagogy of Seymour Papert, an approach known as constructionism, which espoused providing computers for children at early ages to enable full digital literacy. Papert, along with Nicholas Negroponte, was at the MIT Media Lab from its inception. Papert compared the old practice of putting computers in a computer lab to books chained to the walls in old libraries. Negroponte likened shared computers to shared pencils. However, this pattern seemed to be inevitable, given the then-high prices of computers (over $1,500 apiece for a typical laptop or small desktop by 2004).
In 2005, Negroponte spoke at the World Economic Forum, in Davos. In this talk he urged industry to solve the problem, to enable a $100 laptop, which would enable constructionist learning, would revolutionize education, and would bring the world's knowledge to all children. He brought a mock-up and was described as prowling the halls and corridors of Davos to whip up support. Despite the reported skepticism of Bill Gates and others, Negroponte left Davos with committed interest from AMD and News Corp, and with strong indications of support from many other firms. From the outset, it was clear that Negroponte thought that the key to reducing the cost of the laptop was to reduce the cost of the display. Thus, when, upon return from Davos, he met Mary Lou Jepsen, the display pioneer who was in early 2005 joining the MIT Media Lab faculty, the discussions turned quickly to display innovation to enable a low-cost laptop. Convinced that the project was now possible, Negroponte led the creation of the first corporation for this: the Hundred Dollar Laptop Corp.
At the 2006 Wikimania, Jimmy Wales announced that the One Laptop Per Child project would include Wikipedia as the first element in its content repository. Wales explained, "I think it is in my rational self interest to care about what happens to kids in Africa," and elaborated on the point in his fundraising appeal.
At the 2006 World Economic Forum in Davos, Switzerland, the United Nations Development Program (UNDP) announced it would back the laptop. UNDP released a statement saying they would work with OLPC to deliver "technology and resources to targeted schools in the least developed countries".
Starting in 2007, the Association managed development and logistics, and the Foundation managed fundraising such as the Give One Get One campaign ("G1G1").
Intel was a member of the association for a brief period in 2007. Shortly after OLPC's founder, Nicholas Negroponte, accused Intel of trying to destroy the non-profit, Intel joined the board with a mutual non-disparagement agreement between them and OLPC. Intel resigned its membership on January 3, 2008, citing disagreements with requests from Negroponte for Intel to stop dumping their Classmate PCs.
In 2008, Negroponte showed some doubt about the exclusive use of open-source software for the project, and made suggestions supporting a move towards adding Windows XP, which Microsoft was in the process of porting over to the XO hardware. Microsoft's Windows XP, however, was not seen by some as a sustainable operating system. Microsoft announced that they would sell them Windows XP for $3 per XO. It would be offered as an option on XO-1 laptops and possibly be able to dual boot alongside Linux. In response, Walter Bender, who was the former President of Software and Content for the OLPC project, left OLPC and founded Sugar Labs to continue development of the open source Sugar software which had been developed within OLPC. No significant deployments elected to purchase Windows licenses.
Charles Kane became the new President and Chief Operating Officer of the OLPC Association on May 2, 2008. In late 2008, the NYC Department of Education purchased some XO computers for use by New York schoolchildren.
Advertisements for OLPC began streaming on the video streaming website Hulu and others in 2008. One such ad has John Lennon advertising for OLPC, with an unknown voice actor redubbing over Lennon's voice.
In 2008, OLPC lost significant funding. Their annual budget was slashed from $12 million to $5 million which resulted in a restructuring on January 7, 2009. Development of the Sugar operating environment was moved entirely into the community, the Latin America support organization was spun out and staff reductions, including Jim Gettys, affected approximately 50% of the paid employees. The remaining 32 staff members also saw salary reductions. Despite the downsizing, OLPC continued development of the XO-1.5 laptops.
In 2010, OLPC moved its headquarters to Miami. The Miami office oversaw sales and support for the XO-1.5 laptop and its successors, including the XO Laptop version 4.0 and the OLPC Laptop. Funding from Marvell, finalized in May 2010, revitalized the foundation and enabled the 1Q 2012 completion of the ARM-based XO-1.75 laptops and initial prototypes of the XO-3 tablets. OLPC took orders for mass production of the XO 4.0, and shipped over 3 million XO Laptops to children around the world.
Criticism
At the World Summit on the Information Society held by the United Nations in Tunisia from November 16–18, 2005, several African representatives, most notably Marthe Dansokho (a missionary of the United Methodist Church), voiced criticism of the motives of the OLPC project and claimed that it reflected misplaced priorities, stating that African women would not have enough time to research new crops to grow. She added that clean water and schools were more important. Mohammed Diop specifically criticized the project as an attempt to exploit the governments of poor nations by making them pay for hundreds of millions of machines and for the further investment in internet infrastructure they would require. Others have similarly criticized laptop deployments in very low income countries, regarding them as cost-ineffective when compared to far simpler measures such as deworming and other expenditures on basic child health.
Lee Felsenstein, a computer engineer who played a central role in the development of the personal computer, criticized the centralized, top-down design and distribution of the OLPC.
In September 2009, Alanna Shaikh offered a eulogy for the project at UN Dispatch, stating "It's time to call a spade a spade. OLPC was a failure."
Cost
The project originally aimed for a price of 100 US dollars. In May 2006, Negroponte told the Red Hat's annual user summit: "It is a floating price. We are a nonprofit organization. We have a target of $100 by 2008, but probably it will be $135, maybe $140." A BBC news article in April 2010 indicated the price still remained above $200.
In April 2011, the price remained above $209. In 2013, more than 10% of the world population lived on less than US$2 per day. The latter income segment would have to spend more than a quarter of its annual income to purchase a single laptop, while the global average of Information and communications technology (ICT) spending is 3% of income. Empirical studies show that the borderline between ICT as a necessity good and ICT as a luxury good is roughly around the "magical number" of US$10 per person per month, or US$120 per year.
John Wood, founder of Room to Read (a non-profit which builds schools and libraries), emphasizes affordability and scalability over high-tech solutions. While in favor of the One Laptop per Child initiative for providing education to children in the developing world at a cheaper rate, he has pointed out that a $2,000 library can serve 400 children, costing just $5 a child to bring access to a wide range of books in the local languages (such as Khmer or Nepali) and English; also, a $10,000 school can serve 400–500 children ($20–25 a child). According to Wood, these are more appropriate solutions for education in the dense forests of Vietnam or rural Cambodia.
The Scandinavian aid organization FAIR proposed setting up computer labs with recycled second-hand computers as a cheaper initial investment. Negroponte argued against this proposition, citing the expensive running cost of conventional laptops. Computer Aid International doubted the OLPC sales strategy would succeed, citing the "untested" nature of its technology. CAI refurbishes computers and printers and sells them to developing countries for £42 apiece (compared with £50 apiece for the OLPC laptops).
Teacher training and ongoing support
The OLPC project has been criticized for allegedly adopting a "one-shot" deployment approach with little or no technical support or teacher training, and for neglecting pilot programs and formal assessment of outcomes in favor of quick deployment. Some authors attribute this unconventional approach to the promoters' alleged focus on constructivist education and digital utopianism. Mark Warschauer, a professor at the University of California, Irvine, and Morgan Ames, at the time of writing a PhD candidate at Stanford University, pointed out that the laptop by itself does not completely meet the needs of students in underprivileged countries. The "children's machines", as they have been called, have been deployed in several countries, for example Uruguay, Peru and, in the US, Alabama, but after a relatively short time their usage declined considerably, sometimes because of hardware problems or breakage (in some cases affecting 27–59% of machines within the first two years) and sometimes because users lacked the knowledge to take full advantage of the machine.
However, another factor has more recently been acknowledged: a lack of a direct relation to the pedagogy needed in the local context to be truly effective. Uruguay reports that only 21.5% of teachers use the laptop in the classroom on a daily basis, and 25% report using it less than once a week. In Alabama, 80.3% of students say they never or seldom use the computer for class work, and in Peru, teachers report that in the first few months 68.9% of students use the laptop three times per week, but after two months only 40% report such usage. Students of low socio-economic status tend not to be able to use the laptop effectively for educational purposes on their own, but with scaffolding and mentoring from teachers the machine can become more useful. According to one of the returning OLPC executives, Walter Bender, the approach needs to be more holistic, combining technology with a prolonged community effort, teacher training and local educational efforts and insights.
The organization has been accused of simply giving underprivileged children laptops and "walking away". Some critics claim this "drive-by" implementation model was the official strategy of the project. While the organisation has learning teams dedicated to support and working with teachers, Negroponte has said in response to this criticism that "You actually can" give children a connected laptop and walk away, noting experiences with self-guided learning.
Other explanations of failure included a high minimum order, low reliability and maintainability, unsuitability to local conditions and culture, and encouragement of children to learn new ways of thinking instead of remaining loyal to old ways.
Technology
The XO, previously known as the "$100 Laptop" or "Children's Machine", is an inexpensive laptop computer designed to be distributed to children in developing countries around the world, to provide them with access to knowledge, and opportunities to "explore, experiment and express themselves" (constructionist learning). The laptop was designed by Yves Béhar with Design Continuum, and manufactured by the Taiwanese computer company Quanta Computer.
The rugged, low-power computers use flash memory instead of a hard drive, run a Fedora-based operating system and use the SugarLabs Sugar user interface. Mobile ad hoc networking based on the 802.11s wireless mesh network protocol allows students to collaborate on activities and to share Internet access from one connection. The wireless networking has much greater range than typical consumer laptops. The XO-1 was designed for lower cost and much longer life than typical laptops.
In 2009, OLPC announced an updated XO (dubbed XO-1.5) to take advantage of the latest component technologies. The XO-1.5 includes a new VIA C7-M processor and a new chipset providing a 3D graphics engine and an HD video decoder. It has 1 GB of RAM and built-in storage of 4 GB, with an option for 8 GB. The XO-1.5 uses the same display, and a network wireless interface with half the power dissipation.
Early prototype versions of the hardware were available in June 2009, and they were available for software development and testing available for free through a developer's program.
An XO-1.75 model was developed that used a Marvell ARM processor, targeting a price below $150 and date in 2011.
The XO-2 two-sheet design concept was canceled in favor of the one-sheet XO-3.
An XO-3 concept resembled a tablet computer and was planned to have the inner workings of the XO-1.75. Its price goal was below $100, with a target date of 2012.
As of May 2010, OLPC was working with Marvell on other unspecified future tablet designs. In October 2010, both OLPC and Marvell signed an agreement granting OLPC $5.6 million to fund development of its XO-3 next generation tablet computer. The tablet was to use an ARM chip from Marvell.
At CES 2012, OLPC showcased the XO-3 model, which featured a touchscreen and a modified form of SugarLabs "Sugar". In early December 2012, however, it was announced that the XO-3 would not be seeing actual production, and focus had shifted to the XO-4.
The XO-4 was launched at International CES 2013 in Las Vegas. The XO Laptop version 4 is available in two models: XO 4 and XO 4 Touch, with the latter providing multi-touch input on the display. The XO Laptop version 4 uses an ARM processor to provide high performance with low power consumption, while keeping the industrial design of the traditional XO Laptop.
Software
The laptops include an anti-theft system which can, optionally, require each laptop to periodically make contact with a server to renew its cryptographic lease token. If the cryptographic lease expires before the server is contacted, the laptop will be locked until a new token is provided. The contact may be to a country-specific server over a network or to a local, school-level server that has been manually loaded with cryptographic "lease" tokens that enable a laptop to run for days or even months between contacts. Cryptographic lease tokens can be supplied on a USB flash drive for non-networked schools. The mass production laptops are also tivoized, disallowing installation of additional software or replacement of the operating system. Users interested in development need to obtain the unlocking key separately (most developer laptops for Western users already come unlocked). It is claimed that locking prevents unintentional bricking and is part of the anti-theft system.
In 2006, the OLPC project was heavily criticised over Red Hat's non-disclosure agreement (NDA) with Marvell concerning the wireless device in OLPC, especially in light of the OLPC project being positioned as an open-source friendly initiative. An open letter demanding documentation was signed by Theo de Raadt (a recipient of the 2004 Award for the Advancement of Free Software), and the initiative for open documentation was supported by Richard Stallman, the president of the Free Software Foundation. De Raadt later clarified that his issue was with OLPC shipping proprietary firmware files that third-party operating systems like OpenBSD are not allowed to redistribute independently (even in binary form), and with receiving no documentation for writing the necessary drivers for the operating system. De Raadt pointed out that the OpenBSD project requires no firmware source code and no low-level documentation to work on firmware, only the binary distribution rights and the documentation needed to interface with the binary firmware that runs outside of the main CPU, a comparatively simple request that is generally honoured by many other wireless device vendors such as Ralink. Stallman fully agreed with de Raadt's request to open up the documentation, though Stallman is known to hold an even stronger and more idealistic position, requiring that even firmware that runs outside the main CPU be provided in source code form, something de Raadt does not require. De Raadt later had to point out that this more idealistic and less realistic position had been misattributed to OpenBSD's more practical approach in order to make it look unreasonable, and he stated on record that OpenBSD's position is much easier to satisfy; the matter nonetheless remained unresolved.
OLPC's dedication to "Free and open source" was questioned with their May 15, 2008, announcement that large-scale purchasers would be offered the choice to add an extra cost, special version of the proprietary Windows XP OS developed by Microsoft alongside the regular, free and open Linux-based operating system with the SugarLabs "Sugar OS" GUI. Microsoft developed a modified version of Windows XP and announced in May 2008 that Windows XP would be available for an additional cost of 10 dollars per laptop. James Utzschneider, from Microsoft, said that initially only one operating system could be chosen. OLPC, however, said that future OLPC work would enable XO-1 laptops to dual boot either the free and open Linux/Sugar OS or the proprietary Microsoft Windows XP. Negroponte further said that "OLPC will sell Linux-only and dual-boot, and will not sell Windows-only [XO-1 laptops]". OLPC released the first test firmware enabling XO-1 dual-boot on July 3, 2008. This option did not prove popular. As of 2011, a few pilots had received a few thousand total dual-boot machines, and the new ARM-based machines do not support Windows XP. No significant deployment purchased Windows licenses. Negroponte stated that the dispute had "become a distraction" for the project, and that its end goal was enabling children to learn, while constructionism and the open source ethos was more of a means to that end. Charles Kane concurred, stating that anything which detracted from the ultimate goal of widespread distribution and use was counterproductive.
Bugs
Jeff Patzer, who interned for One Laptop Per Child in Peru, said that teachers there are told to handle problems in one of two ways: if the problem is a software issue, they are to flash the computer, and if it is a hardware problem, they are to report it. He said that this blackboxing approach caused users to feel disconnected with and confused by the laptop, and often resulted in the laptops eventually going unused. Several defects in OLPC XO-1 hardware have emerged in the field, and laptop repair is often neglected by students or their families (who are responsible for maintenance) due to the relatively high cost of some components (such as displays).
On the software side, the Bitfrost security system has been known to deactivate improperly, rendering the laptop unusable until it is unlocked by support technicians with the proper keys (this is a time-consuming process, and the problem often affects large numbers of laptops at the same time). The Sugar interface has been difficult for teachers to learn, and the mesh networking feature in the OLPC XO-1 was buggy and went mostly unused in the field.
The OLPC XO-1 hardware lacks connectivity to external monitors or projectors, and teachers are not provided with software for remote assessment. As a result, students are unable to present their work to the whole class, and teachers must also assess students' work from the individual laptops. Teachers often find it difficult to use the keyboard and screen, which were designed with student use in mind.
Environmental impact
In 2005 and prior to the final design of the XO-1 hardware, OLPC received criticism because of concerns over the environmental and health impacts of hazardous materials found in most computers. The OLPC asserted that it aimed to use as many environmentally friendly materials as it could; that the laptop and all OLPC-supplied accessories would be fully compliant with the EU's Restriction of Hazardous Substances Directive (RoHS); and that the laptop would use an order of magnitude less power than the typical consumer netbooks available as of 2007 thus minimizing the environmental burden of power generation.
The XO-1 delivered (starting in 2007) uses environmentally friendly materials, complies with the EU's RoHS and uses between 0.25 and 6.5 watts in operation. According to the Green Electronics Council's Electronic Product Environmental Assessment Tool, whose sole purpose is assessing and measuring the impact laptops have on the environment, the XO is not only non-toxic and fully recyclable, but also lasts longer, costs less, and is more energy efficient. The XO-1 is the first laptop to have been awarded an EPEAT Gold level rating.
Anonymity
Other discussions question whether OLPC laptops should be designed to promote anonymity or to facilitate government tracking of stolen laptops. A June 2008 New Scientist article critiqued Bitfrost's P_THEFT security option, which allows each laptop to be configured to transmit an individualized, non-repudiable digital signature to a central server at most once each day to remain functioning.
Distribution
The laptops are sold to governments, to be distributed through the ministries of education with the goal of distributing "one laptop per child". The laptops are given to students, similar to school uniforms and ultimately remain the property of the child. The operating system and software is localized to the languages of the participating countries.
OLPC later worked directly with program sponsors from the public and private sectors to implement its educational program in entire schools and communities. As a non-profit organization, OLPC required a source of funding for its program so that the laptops could be given to students at no cost to the child or their family.
Early distributions
Approximately 500 developer boards (Alpha-1) were distributed in mid-2006; 875 working prototypes (Beta 1) were delivered in late 2006; 2400 "Beta 2" machines were distributed at the end of February 2007; full-scale production started November 6, 2007. Around one million units were manufactured in 2008.
Give 1 Get 1 program
OLPC initially stated that no consumer version of the XO laptop was planned. The project, however, later established the laptopgiving.org website to accept direct donations and ran a "Give 1 Get 1" (G1G1) offer starting on November 12, 2007. The offer was initially scheduled to run for only two weeks, but was extended until December 31, 2007, to meet demand. With a donation of $399 (plus US$25 shipping cost) to the OLPC "Give 1 Get 1" program, donors received an XO-1 laptop of their own and OLPC sent another on their behalf to a child in a developing country. Shipments of "Get 1" laptops sent to donors were restricted to addresses within the United States, its territories, and Canada.
Some 83,500 people participated in the program. Delivery of all of the G1G1 laptops was completed by April 19, 2008. Delays were blamed on order fulfillment and shipment issues both within OLPC and with the outside contractors hired to manage those aspects of the G1G1 program.
Between November 17 and December 31, 2008, a second G1G1 program was run through Amazon.com and Amazon.co.uk. This partnership was chosen specifically to solve the distribution issues of the G1G1 2007 program. The price to consumers was the same as in 2007, at US$399.
The program aimed to be available worldwide. Laptops could be delivered in the US, in Canada and in more than 30 European countries, as well as in some Central and South American countries (Colombia, Haiti, Peru, Uruguay, Paraguay), African countries (Ethiopia, Ghana, Nigeria, Madagascar, Rwanda) and Asian countries (Afghanistan, Georgia, Kazakhstan, Mongolia, Nepal). Despite this, the program sold only about 12,500 laptops and generated a mere $2.5 million, a 93 percent decline from the year before.
Laptop shipments
OLPC reported that more than 3 million laptops had been shipped.
Regional responses
Uruguay
In October 2007, Uruguay placed an order for 100,000 laptops, making Uruguay the first country to purchase a full order of laptops. The first real, non-pilot deployment of the OLPC technology happened in Uruguay in December 2007. Since then, 200,000 more laptops have been ordered to cover all public school children between 6 and 12 years old.
President Tabaré Vázquez of Uruguay presented the final laptop at a school in Montevideo on October 13, 2009. Over the preceding two years, 362,000 pupils and 18,000 teachers had been involved, at a cost to the state of $260 (£159) per child, including maintenance costs, equipment repairs, training for the teachers and internet connection. The annual cost of maintaining the programme, including an information portal for pupils and teachers, was put at US$21 (£13) per child.
The country reportedly became the first in the world where every primary school child received a free laptop on October 13, 2009 as part of the Plan Ceibal (Education Connect).
Even though roughly 35% of all OLPC computers went to Uruguay, a 2013 study of the Ceibal plan by the Economics Institute (University of the Republic, Uruguay) concluded that use of the laptops did not improve literacy and that the laptops were used mostly recreationally, with only 4.1% of the laptops being used "all" or "most" days in 2012. The main conclusion was that the results showed no impact of the OLPC program on test scores in reading and math. Still, more recent studies give an opposite view of the project's results, regarding it as a success, as in the 2020 publication by the Broadband Commission for Sustainable Development.
Artsakh
On January 26, 2012, prime minister Ara Harutyunyan and entrepreneur Eduardo Eurnekian signed a memorandum of understanding launching an OLPC program in Artsakh. The program is geared towards elementary schools throughout Artsakh. Eurnekian hopes to narrow the educational gap by giving the war-affected region an opportunity to engage in a more solid education. The New York-based nonprofit Armenian General Benevolent Union is helping to undertake the responsibility by providing on-the-ground support. The government of Artsakh is enthusiastic and is working with OLPC to bring the program to fruition.
Nigeria
Lagos Analysis Corp. (Lancor), a Nigerian-owned company with operations in Lagos and the US, sued OLPC at the end of 2007 for $20 million, claiming that the computer's keyboard design was stolen from a Lancor patented device. OLPC responded by claiming that it had not sold any multi-lingual keyboards in the design claimed by Lancor, and that Lancor had misrepresented and concealed material facts before the court. In January 2008, the Nigerian Federal Court rejected OLPC's motion to dismiss Lancor's lawsuit and extended its injunction against OLPC distributing its XO laptops in Nigeria. OLPC appealed the court's decision; the appeal is still pending in the Nigerian Federal Court of Appeals. In March 2008, OLPC filed a lawsuit in Massachusetts to stop Lancor from suing it in the United States. In October 2008, MIT News magazine erroneously reported that the Middlesex Superior Court had granted OLPC's motions to dismiss all of Lancor's claims against OLPC, Nicholas Negroponte, and Quanta. On October 22, 2010, OLPC voluntarily moved the Massachusetts court to dismiss its own lawsuit against Lancor.
In 2007, XO laptops in Nigeria were reported to contain pornographic material stored by children participating in the OLPC program. In response, OLPC Nigeria announced it would start equipping the machines with filters.
India
India's Ministry of Human Resource Development, in June 2006, rejected the initiative, saying "it would be impossible to justify an expenditure of this scale on a debatable scheme when public funds continue to be in inadequate supply for well-established needs listed in different policy documents". The Ministry later announced plans to produce laptops at $10 each for schoolchildren. Two designs submitted to the Ministry in May 2007, by a final-year engineering student of Vellore Institute of Technology and a researcher from the Indian Institute of Science, Bangalore, reportedly describe a laptop that could be produced for "$47 per laptop" even at small volumes. The Ministry announced in July 2008 that the cost of its proposed "$10 laptop" would in fact be $100 by the time the laptop became available. In 2010, a related $35 Sakshat Tablet was unveiled in India, released the next year as the "Aakash". In 2011, the Aakash was sold for approximately $44 by an Indian company, DataWind. DataWind planned to launch similar projects in Brazil, Egypt, Panama, Thailand and Turkey.
OLPC later expressed support for the initiative.
In 2009, a number of states announced plans to order OLPCs. However, as of 2010, only the state of Manipur had deployed 1000 laptops.
See also
Child computer
Computer literacy
Digital divide
Digital textbook
Dynabook
Educational technology in sub-Saharan Africa
Simputer
Universal access to education
Web (2013 film)
World Computer Exchange
References
Further reading
External links
501(c)(4) nonprofit organizations
Appropriate technology organizations
Articles containing video clips
Digital divide
Information and communication technologies for development
MIT Media Lab
Organizations based in Cambridge, Massachusetts
Defunct computer companies based in Massachusetts
Defunct software companies of the United States
2005 establishments in Massachusetts
Organizations established in 2005
Defunct computer companies of the United States
Defunct computer hardware companies
2014 disestablishments in Massachusetts
Organizations disestablished in 2014 | One Laptop per Child | Technology | 6,584 |
77,956,445 | https://en.wikipedia.org/wiki/Thuchomyces | Thuchomyces (sometimes mistakenly called “Thucomyces”) is a genus of Archean fossils from the Witwatersrand of South Africa, and is the earliest macroscopic land life known. The generic name derives from thucholite, the carbonaceous material which Thuchomyces is preserved in, and the Ancient Greek word "myces", meaning "fungus". The specific name, lichenoides, derives from its similarity to some modern lichens.
Description
Thuchomyces resembles modern columnar biomats, as well as certain lichens; however, lichens are far more recent, having appeared at most 300 million years ago, so Thuchomyces is almost certainly not a lichen, and may not be a eukaryote at all. Some fossils have a round structure at their tip, interpreted as a diaspore, and these structures can also be observed in the rock surrounding the fossils. The internal structure of Thuchomyces consists of a network of hyphae, made of intensely branching cells possibly connected via anastomoses. The outer layer of the organism consists of highly agglutinated hyphae enclosing a layer of loose tissue, along with a "central cord" observed in immature specimens which disappears with age. Thuchomyces columns are roughly 200–500 micrometers across, and reach a height of roughly 1 mm. Thuchomyces shares many similarities with the Paleoproterozoic Diskagma, having a similar size and shape, and both form dense palisades on paleosols. However, Thuchomyces lacks the spines of Diskagma, has complex vertical partitions, and has rounded terminations instead of Diskagma's cup-like tips.
Paleoecology
Taking into account ventifacts, the concentration of carbon-13 in the rock, and other geological features, the sediments Thuchomyces is known from are interpreted as a wind-blasted desert environment crossed by ephemeral streams, which was occasionally flooded. In addition, another organism named Witwateromyces conidiophorus, a possible actinomycete bacterium, was found associated with Thuchomyces, possibly as a decomposer.
References
Archean life
Fossil taxa described in 1977
Prehistoric life genera
Incertae sedis | Thuchomyces | Biology | 473 |
59,403,236 | https://en.wikipedia.org/wiki/Morteza%20Gharib | Morteza (Mory) Gharib (born December 9, 1952) is the Hans W. Liepmann Professor of Aeronautics and Bio-Inspired Engineering at Caltech.
Gharib was elected a member of the National Academy of Engineering in 2015 for contributions to fluid flow diagnostics and imagery, and engineering of bioinspired devices and phenomena.
Research
Professor Gharib's research interests cover a range of topics in conventional fluid dynamics and aeronautics. These include vortex dynamics, active and passive flow control, nano/micro fluid dynamics, bio-inspired wind and hydro energy harvesting, as well as advanced flow imaging diagnostics.
In addition, Professor Gharib is heavily involved in the bio-mechanics and medical engineering fields. His research activities in these fields can be categorized into two main areas: the fluid dynamics of physiological machines (such as the human circulatory system and aquatic breathing/ propulsion), and the development of medical devices (such as heart valves, cardiovascular health monitoring devices, and drug delivery systems).
Awards and honors
Professor Gharib is the recipient of the 2016 G. I. Taylor Medal from the Society of Engineering Science, and he received the American Physical Society's Fluid Dynamics Prize in 2015. He is a member of the American Academy of Arts and Sciences and the National Academy of Engineering, a charter fellow of the National Academy of Inventors, and a fellow of the American Association for the Advancement of Science, the American Physical Society, the American Society of Mechanical Engineers and the International Academy of Medical and Biological Engineering.
References
American people of Iranian descent
American mechanical engineers
Fluid dynamicists
Iranian expatriate academics
Living people
California Institute of Technology alumni
University of Tehran alumni
1952 births
Fellows of the American Physical Society | Morteza Gharib | Chemistry | 355 |
48,428,998 | https://en.wikipedia.org/wiki/Tricholoma%20busuense | Tricholoma busuense is an agaric fungus of the genus Tricholoma. Found in Papua New Guinea, it was described as new to science in 1994 by English mycologist E.J.H. Corner.
See also
List of Tricholoma species
References
busuense
Fungi described in 1994
Fungi of New Guinea
Taxa named by E. J. H. Corner
Fungus species | Tricholoma busuense | Biology | 84 |
64,653,862 | https://en.wikipedia.org/wiki/Cyril%20Hazard | Cyril Hazard is a British astronomer. He is known for revolutionising quasar observation with John Bolton in 1962. His work allowed other astronomers to determine redshifts from the emission lines of other radio sources.
Early work
Cyril Hazard was born on 18 March 1928 at No. 6 Flosh Cottages, Cleator, Cumberland, and grew up in Cleator Moor, Cumberland. He received his doctorate from the University of Manchester, studying under Sir Bernard Lovell and Robert Hanbury Brown, and worked first at Jodrell Bank.
In 1950, radio emission from the Andromeda Galaxy was detected by Robert Hanbury Brown and Hazard at the Jodrell Bank Observatory.
The discovery of quasars
Two radio sources were involved: 3C 48 and 3C 273.
Measurements taken by Cyril Hazard and John Bolton during one of the occultations using the Parkes Radio Telescope allowed Maarten Schmidt to optically identify the object and obtain an optical spectrum using the 200-inch Hale Telescope on Palomar Mountain. This spectrum revealed the same strange emission lines. Schmidt realized that these were actually spectral lines of hydrogen redshifted by 15.8 percent. This discovery showed that 3C 273 was receding at a rate of roughly 47,000 km/s.
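As a check on the quoted figures, the linear convention v = cz that was standard at the time reproduces the recession velocity:

```latex
% Recession velocity from redshift using the linear convention v = cz;
% a relativistic formula would give a somewhat smaller value at z = 0.158.
\[
v \;=\; c\,z \;=\; \left(2.998\times10^{5}\ \mathrm{km\,s^{-1}}\right) \times 0.158
\;\approx\; 4.7\times10^{4}\ \mathrm{km\,s^{-1}} ,
\]
% consistent with the quoted figure of roughly 47,000 km/s.
```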
The technique
As the source is occulted by the Moon (i.e., passes behind it), Fresnel-style diffraction patterns are produced, which can be detected by very large radio telescopes and used to calculate the exact location of the source.
Legacy
The minor planet 9305 Hazard, discovered on 7 October 1986 by Edward "Ted" Bowell, was named after him.
References
Bibliography
Hazard, C.; Mackey, M. B.; and Shimmins, A. J. "Investigation of the Radio Source 3C 273 by the Method of Lunar Occultations." Nature 197, 1037, 1963.
1928 births
Living people
20th-century British astronomers
21st-century British astronomers
20th-century English astronomers | Cyril Hazard | Astronomy | 399 |
34,793,528 | https://en.wikipedia.org/wiki/Telecommunications%20billing | Telecommunications billing is the group of processes by which communications service providers collect consumption data, calculate charges, produce bills for customers, process payments and manage debt collection.
A telecommunications billing system is enterprise application software designed to support the telecommunications billing processes.
Telecommunications billing is a significant component of any commercial communications service provider regardless of specialization: telephone companies, mobile wireless operators, VoIP companies, mobile virtual network operators, internet service providers, transit traffic companies, cable television and satellite TV companies could not operate without billing, because it realizes the economic value of their business.
Telecommunications billing functions
Billing functions can be grouped into three areas: operations, information management and financial management. In the broad sense, when billing and revenue management (BRM) is considered as a single process bundle, revenue assurance, profitability management and fraud management can be singled out as special functional areas.
Operations
The operations area includes capturing usage records (depending on the industry these can be call detail records, charging data records or network traffic measurement data; in some cases usage data is prepared by a telecommunications mediation system), rating consumption (determining the factors significant for further calculation, for example the total time of calls in each tariff zone, the count of short messages, or the traffic summary in gigabytes), applying prices, tariffs, discounts and taxes, compiling charges for each customer account, rendering bills, managing bill delivery, applying adjustments, and maintaining customer accounts.
The implementation of operations functions can vary significantly depending on the communications type and payment model. In particular, for prepaid customers billing must run continuously (in near real time, also known as hot billing), and when the account balance falls below a threshold amount, the system can automatically limit the service. In the postpaid service model there is no vital requirement to decrease the customer's account balance in real time; in this case charging is scheduled to run infrequently, usually once per month.
Information management
The information management area unites functions responsible for maintaining customer information, product and service data, and pricing models, including their possible combinations, as well as billing configuration data such as billing cycle schedules, event triggers, bill delivery channels, audit settings and data archiving parameters. Customer information is often integrated with a customer relationship management system; collaboration with the customer can be a function of the information management area of the billing system or can be allocated entirely to the CRM.
Financial management
The financial management area covers payment tracking and processing, matching payments to consumed services, managing credit and debt collection, and calculating company taxes.
Convergent billing
Communications service providers that operate multiple services in multiple modes often integrate all charges into one bill and unify customer management in one system. The term convergent billing system refers to such a solution: one that maintains a single customer account and produces a single bill for all services (for example, public switched telephone network, cable TV and cable internet services for one customer) regardless of the payment method (prepaid or postpaid).
Telecommunications billing systems market
The global market for packaged telecommunications billing systems was estimated at $6 billion in 2007 and forecast to grow to $7.2 billion by 2012. Market shares by application as of 2007 were as follows:
27.2% — mobile postpaid;
16.4% — business billing for fixed networks;
13.3% — prepaid billing based on intelligent network for mobile;
10.9% — consumer billing for fixed networks;
9.7% — cable and satellite billing;
8.8% — convergent billing;
8.2% — mediation billing;
4.6% — interconnect billing.
As of 2010, market shares of billing systems by vendor were as follows:
27% — Amdocs, an Israeli company;
8% — Huawei, a Chinese company;
6% — Oracle Corporation, a US company;
6% — Convergys, a US company;
5% — Ericsson, a Swedish company (including LHS Telekommunikation, acquired in 2007);
5% — Intec Telecom Systems, a UK company.
References
Sources
Business software
Telecommunications systems | Telecommunications billing | Technology | 838 |
70,458,075 | https://en.wikipedia.org/wiki/Hexenuronic%20acid | Hexenuronic acid (HexA) is an organic compound with the formula C13H20O10. It is an unsaturated sugar produced during the kraft process in the creation of wood pulp.
Kraft process
During the kraft process, which turns wood into wood pulp for papermaking, wood chips are treated with sodium hydroxide and sodium sulfide. Sodium hydroxide promotes the demethylation (elimination of methanol) of the 4-O-methylglucuronic acid side groups of 4-O-methyl-D-glucuronoxylan, part of the polysaccharide xylan, converting them into hexenuronic acid groups.
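Schematically, the transformation described above can be written as follows (a sketch of the overall alkali-induced elimination, with methanol as the leaving fragment):

```latex
% Alkali-induced elimination converting a 4-O-methylglucuronic acid
% side group of xylan into a hexenuronic acid group, releasing methanol.
\[
\text{4-}O\text{-methyl-D-glucuronic acid group (on xylan)}
\;\xrightarrow{\;\mathrm{NaOH},\ \Delta\;}\;
\text{hexenuronic acid group} \;+\; \mathrm{CH_3OH}
\]
```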
Hexenuronic acid contributes roughly 3–7 units to a pulp's kappa number, a measure of the bleachability of wood pulp. It readily reacts with common wood pulp bleaching agents like ozone, peracetic acid, and chlorine dioxide. Consequently, research has focused on ways to break down hexenuronic acid prior to bleaching in order to decrease hazardous waste products and costs.
The main method of destroying hexenuronic acid is to treat the wood pulp, after kraft processing, with strong acids at high temperatures. HexA is hydrolyzed and broken down into products such as 2-furoic acid and 5-carboxy-2-furaldehyde. This process has led to a 50% reduction in bleaching costs of the wood pulp in some cases.
In microbes
Polysaccharide lyases (PLs) are a type of enzyme found in numerous microorganisms, as well as in bacteriophages, that break down plant polysaccharides. PLs catalyze the β-elimination of uronic acid-containing polysaccharides, producing HexA.
References
Papermaking
Oxygen heterocycles
Carboxylic acids
Methoxy compounds
Sugar acids
Dihydropyrans
Tetrahydropyrans
Triols | Hexenuronic acid | Chemistry | 393 |
16,868,590 | https://en.wikipedia.org/wiki/Cumberland%20Pontoons | Cumberland pontoons were folding pontoon bridges developed during the American Civil War to facilitate the movement of Union forces across the rivers of the Mid-South as the Federal forces advanced southward through Tennessee and Georgia.
Early pontoon bridges during the Civil War were heavy and awkward, and required special long-geared pontoon carriers to transport them to the site of the planned river crossing. There were two main types—the French-designed wooden bateau (known in the army as a "Cincinnati pontoon") and the Russian pontoon, a canvas boat. Both types were twenty-two feet in length and took considerable time to set up, requiring several men to lift into position and pin the individual sections together.
Early in 1864, the commander of the Army of the Cumberland, Major General George H. Thomas, was seeking a light-weight, easy-to-haul and erect pontoon bridge to move his troops across unfordable rivers and streams. Knowing the limitations of the two systems used by the armies in the Western Theater, he had folding pontoons developed. A similar idea had been instigated by his predecessor William S. Rosecrans earlier in the war, but had not been adopted. Captain William E. Merrill, Thomas's chief engineer, improved on Rosecrans's prototype, making it lighter and stronger. He replaced the pins that held individual sections together with hinges so that the side frame sections folded together instead of separating. The new design yielded a portable boat that was lightweight, small enough to carry on a standard supply wagon, and easier to construct in the field. It was also strong enough to support horse-drawn artillery and fully loaded wagons.
These boats soon became popularly known as Cumberland pontoons. Merrill had the first ones constructed in the army's engineer workshops in Nashville, Tennessee, under the supervision of Lieutenant James R. Willet. Soon, a train of fifty new boats was transported to the field armies. William T. Sherman used the new bridges extensively during the first two months of the Atlanta Campaign, first laying them across the Etowah River. He later used them during the March to the Sea and the 1865 Carolinas Campaign.
Notes
References
Civil War military equipment of the United States
Georgia (U.S. state) in the American Civil War
Pontoon bridges in the United States
Military bridging equipment | Cumberland Pontoons | Engineering | 522 |
11,484,716 | https://en.wikipedia.org/wiki/Soil%20vapor%20extraction | Soil vapor extraction (SVE) is a physical treatment process for in situ remediation of volatile contaminants in vadose zone (unsaturated) soils (EPA, 2012). SVE (also referred to as in situ soil venting or vacuum extraction) is based on mass transfer of contaminant from the solid (sorbed) and liquid (aqueous or non-aqueous) phases into the gas phase, with subsequent collection of the gas-phase contamination at extraction wells. Extracted contaminant mass in the gas phase (and any condensed liquid phase) is treated in aboveground systems. In essence, SVE is the vadose zone equivalent of the pump-and-treat technology for groundwater remediation. SVE is particularly amenable to contaminants with higher Henry's law constants, including various chlorinated solvents and hydrocarbons. SVE is a well-demonstrated, mature remediation technology and has been identified by the U.S. Environmental Protection Agency (EPA) as a presumptive remedy.
SVE Configuration
The soil vapor extraction remediation technology uses vacuum blowers and extraction wells to induce gas flow through the subsurface, collecting contaminated soil vapor, which is subsequently treated aboveground. SVE systems can rely on gas inflow through natural routes or specific wells may be installed for gas inflow (forced or natural). The vacuum extraction of soil gas induces gas flow across a site, increasing the mass transfer driving force from aqueous (soil moisture), non-aqueous (pure phase), and solid (soil) phase into the gas phase. Air flow across a site is thus a key aspect, but soil moisture and subsurface heterogeneity (i.e., a mixture of low and high permeability materials) can result in less gas flow across some zones. In some situations, such as enhancement of monitored natural attenuation, a passive SVE system that relies on barometric pumping may be employed.
SVE has several advantages as a vadose zone remediation technology. The system can be implemented with standard wells and off-the-shelf equipment (blowers, instrumentation, vapor treatment, etc.). SVE can also be implemented with a minimum of site disturbance, primarily involving well installation and minimal aboveground equipment. Depending on the nature of the contamination and the subsurface geology, SVE has the potential to treat large soil volumes at reasonable costs.
The soil gas (vapor) that is extracted by the SVE system generally requires treatment prior to discharge back into the environment. The aboveground treatment is primarily for a gas stream, although condensation of liquid must be managed (and in some cases may specifically be desired). A variety of treatment techniques are available for aboveground treatment and include thermal destruction (e.g., direct flame thermal oxidation, catalytic oxidizers), adsorption (e.g., granular activated carbon, zeolites, polymers), biofiltration, non-thermal plasma destruction, photolytic/photocatalytic destruction, membrane separation, gas absorption, and vapor condensation. The most commonly applied aboveground treatment technologies are thermal oxidation and granular activated carbon adsorption. The selection of a particular aboveground treatment technology depends on the contaminant, concentrations in the offgas, throughput, and economic considerations.
SVE Effectiveness
The effectiveness of SVE, that is, the rate and degree of mass removal, depends on a number of factors that influence the transfer of contaminant mass into the gas phase. The effectiveness of SVE is a function of the contaminant properties (e.g., Henry’s Law constant, vapor pressure, boiling point, adsorption coefficient), temperature in the subsurface, vadose zone soil properties (e.g., soil grain size, soil moisture content, soil permeability, soil carbon content), subsurface heterogeneity, and the air flow driving force (applied pressure gradient). As an example, a residual quantity of a highly volatile contaminant (such as trichloroethene) in a homogeneous sand with high permeability and low carbon content (i.e., low/negligible adsorption) will be readily treated with SVE. In contrast, a heterogeneous vadose zone with one or more clay layers containing residual naphthalene would require a longer treatment time and/or SVE enhancements. SVE effectiveness issues include tailing and rebound, which result from contaminated zones with lower air flow (i.e., low permeability zones or zones of high moisture content) and/or lower volatility (or higher adsorption). Recent work at U.S. Department of Energy sites has investigated layering and low permeability zones in the subsurface and how they affect SVE operations.
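As a rough illustration of how the Henry's law constant enters such an assessment, the sketch below estimates an equilibrium-limited mass removal rate; the numerical values (a TCE-like dimensionless Henry's constant, soil-moisture concentration and extraction flow) are assumptions for illustration only, and field systems are typically rate-limited below this bound:

```python
# Back-of-envelope, equilibrium-limited estimate of SVE mass removal.
# All input values are illustrative assumptions, not site data.

H_CC = 0.4        # dimensionless Henry's law constant (TCE-like, ~20 C)
C_WATER = 0.010   # contaminant concentration in soil moisture, g/L
Q = 50.0          # soil-gas extraction rate, L/s

c_gas = H_CC * C_WATER       # equilibrium vapor concentration, g/L
removal_g_per_s = Q * c_gas  # mass extracted per second, g/s
removal_kg_per_day = removal_g_per_s * 86_400 / 1_000

print(f"vapor concentration: {c_gas * 1000:.1f} mg/L")   # 4.0 mg/L
print(f"removal rate: {removal_kg_per_day:.1f} kg/day")  # ~17 kg/day
```

Tailing and rebound appear in this picture as the effective soil-moisture concentration (or the interfacial mass transfer feeding it) declining over time in the zones that the airflow actually sweeps.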
Enhancement of SVE
Enhancements for improving the effectiveness of SVE can include directional drilling, pneumatic and hydraulic fracturing, and thermal enhancement (e.g., hot air or steam injection). Directional drilling and fracturing enhancements are generally intended to improve the gas flow through the subsurface, especially in lower permeability zones. Thermal enhancements such as hot air or steam injection increase the subsurface soil temperature, thereby improving the volatility of the contamination. In addition, injection of hot (dry) air can remove soil moisture and thus improve the gas permeability of the soil. Additional thermal technologies (such as electrical resistance heating, six-phase soil heating, radio-frequency heating, or thermal conduction heating) can be applied to the subsurface to heat the soil and volatilize/desorb contaminants, but these are generally viewed as separate technologies (versus a SVE enhancement) that may use vacuum extraction (or other methods) for collecting soil gas.
Design, Optimization, Performance Assessment, and Closure
On selection as a remedy, implementation of SVE involves the following elements: system design, operation, optimization, performance assessment, and closure. Several guidance documents provide information on these implementation aspects. EPA and U.S. Army Corps of Engineers (USACE) guidance documents establish an overall framework for design, operation, optimization, and closure of a SVE system. The Air Force Center for Engineering and the Environment (AFCEE) guidance presents actions and considerations for SVE system optimization, but has limited information related to approaches for SVE closure and meeting remediation goals. Guidance from the Pacific Northwest National Laboratory (PNNL) supplements these documents by discussing specific actions and decisions related to SVE optimization, transition, and/or closure.
Design and operation of a SVE system is relatively straightforward, with the major uncertainties having to do with subsurface geology/formation characteristics and the location of contamination. As time goes on, it is typical for a SVE system to exhibit a diminishing rate of contaminant extraction due to mass transfer limitations or removal of contaminant mass. Performance assessment is a key aspect to provide input for decisions about whether the system should be optimized, terminated, or transitioned to another technology to replace or augment SVE. Assessment of rebound and mass flux provide approaches to evaluate system performance and obtain information on which to base decisions.
Related Technologies
Several technologies are related to soil vapor extraction. As noted above, various soil-heating remediation technologies (e.g., electrical resistive heating, in situ vitrification) require a soil gas collection component, which may take the form of SVE and/or a surface barrier (i.e., hood). Bioventing is a related technology, the goal of which is to introduce additional oxygen (or possibly other reactive gases) into the subsurface to stimulate biological degradation of the contamination. In situ air sparging is a remediation technology for treating contamination in groundwater. Air is injected and "sparged" through the groundwater and then collected via soil vapor extraction wells.
See also
In-situ thermal desorption
In situ soil heating
Bioventing
In situ air sparging
Environmental remediation
Vapor–liquid separator
Volatile organic compounds
Modified active gas sampling
Electro Thermal Dynamic Stripping Process
References
EPA. 1996. "User’s Guide to the VOCs in Soils Presumptive Remedy." EPA/540/F-96/008, U.S. Environmental Protection Agency, Office of Solid Waste and Emergency Response, Washington, D.C.
EPA. 1997. Analysis of Selected Enhancements for Soil Vapor Extraction. EPA/542/R-97/007, U.S. Environmental Protection Agency, Office of Solid Waste and Emergency Response, Washington, D.C.
EPA. 2012. "A Citizen’s Guide to Soil Vapor Extraction and Air Sparging." EPA/542/F-12/018, U.S. Environmental Protection Agency, Office of Solid Waste and Emergency Response, Washington, D.C.
External links
U.S. EPA CLU-IN Soil Vapor Extraction Overview
How To Evaluate Alternative Cleanup Technologies For Underground Storage Tank Sites
Hyperventilate Users Manual: A Software Guidance System Created for Vapor Extraction Applications Environmental Protection Agency
USACE Soil Vapor Extraction and Bioventing (EM 1110-1-4001)
Soil Vapor Extraction System Guidance / Soil Vapor Extraction Endstate Tool (SVEET)
Federal Remediation Technologies Roundtable (FRTR) Screening Matrix Section 4.8, Soil Vapor Extraction
Center for Public Environmental Oversight (CPEO) Tech Tree: Soil Vapor Extraction (SVE)
Center for Public Environmental Oversight (CPEO) Tech Tree: Soil Vapor Extraction Enhancements
New Approach to Assess Volatile Contamination in Vadose Zone Provides Path Forward for Site Closure
Waste treatment technology
Pollution control technologies
Soil contamination | Soil vapor extraction | Chemistry,Engineering,Environmental_science | 2,036 |
31,277,386 | https://en.wikipedia.org/wiki/John%20R.%20Isbell | John Rolfe Isbell (October 27, 1930 – August 6, 2005) was an American mathematician. For many years he was a professor of mathematics at the University at Buffalo (SUNY).
Biography
Isbell was born in Portland, Oregon, the son of an army officer from Isbell, a town in Franklin County, Alabama. He attended several undergraduate institutions, including the University of Chicago, where professor Saunders Mac Lane was a source of inspiration. He began his graduate studies in mathematics at Chicago, briefly studied at Oklahoma A&M University and the University of Kansas, and eventually completed a Ph.D. in game theory at Princeton University in 1954 under the supervision of Albert W. Tucker. After graduation, Isbell was drafted into the U.S. Army, and stationed at the Aberdeen Proving Ground. In the late 1950s he worked at the Institute for Advanced Study in Princeton, New Jersey, from which he then moved to the University of Washington and Case Western Reserve University. He joined the University at Buffalo in 1969, and remained there until his retirement in 2002.
Research
Isbell published over 140 papers under his own name, and several others under pseudonyms. Isbell published the first paper by John Rainwater, a fictitious mathematician who had been invented by graduate students at the University of Washington in 1952. After Isbell's paper, other mathematicians have published papers using the name "Rainwater" and have acknowledged "Rainwater's assistance" in articles. Isbell published other articles using two additional pseudonyms, M. G. Stanley and H. C. Enos, publishing two under each.
Many of his works involved topology and category theory:
He was "the leading contributor to the theory of uniform spaces".
Isbell duality is a form of duality arising when a mathematical object can be interpreted as a member of two different categories; a standard example is the Stone duality between sober spaces and complete Heyting algebras with sufficiently many points.
Isbell was the first to study the category of metric spaces defined by metric spaces and the metric maps between them, and did early work on injective metric spaces and the tight span construction.
In abstract algebra, Isbell found a rigorous formulation for the Pierce–Birkhoff conjecture on piecewise-polynomial functions. He also made important contributions to the theory of median algebras.
In geometric graph theory, Isbell was the first to prove the bound χ ≤ 7 on the Hadwiger–Nelson problem, the question of how many colors are needed to color the points of the plane in such a way that no two points at unit distance from each other have the same color.
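A sketch of how a bound of seven colors can be obtained, using the hexagonal-tiling argument commonly associated with this result (a reconstruction, not necessarily Isbell's original presentation):

```latex
% Tile the plane with regular hexagons of side s (boundaries assigned
% consistently) and color them periodically with 7 colors; in the standard
% pattern the centers of any two same-colored hexagons are at least
% \sqrt{21}\,s apart, so the hexagons themselves are at least
% (\sqrt{21}-2)\,s apart. A valid coloring needs
\[
2s < 1 \quad\text{(no unit distance within a hexagon)}
\qquad\text{and}\qquad
(\sqrt{21}-2)\,s > 1 \quad\text{(same-colored hexagons more than 1 apart)} ,
\]
% which both hold for any s with 1/(\sqrt{21}-2) \approx 0.387 < s < 0.5,
% e.g. s = 0.45; hence \chi \le 7.
```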
See also
Isbell conjugacy
Isbell's zigzag theorem
References
1930 births
2005 deaths
20th-century American mathematicians
21st-century American mathematicians
Category theorists
American game theorists
American topologists
University of Chicago alumni
Princeton University alumni
University of Washington faculty
Case Western Reserve University faculty
University at Buffalo faculty
American operations researchers
Lattice theorists
Mathematicians from Oregon | John R. Isbell | Mathematics | 603 |
43,569,494 | https://en.wikipedia.org/wiki/Maurandya%20scandens | Maurandya scandens, also known as trailing snapdragon and snapdragon vine, is a climbing herbaceous perennial native to Mexico, with snapdragon-like flowers and untoothed leaves. It is grown as an ornamental plant in many parts of the world, and has commonly escaped from cultivation to become naturalized. Other names for this plant include creeping snapdragon, vining snapdragon, creeping gloxinia and chickabiddy.
Description
The perennial plant grows 2–3 meters tall or long, and the shoot axes often form adventitious roots. The alternate leaves are lanceolate to arrow-shaped, entire or lobed to coarsely toothed, with pointed tips (the lobes and teeth often fine-pointed), and sit on petioles 8 to 42 millimeters long. The glabrous leaf blades are 11 to 62 millimeters long and 4 to 45 millimeters wide.
It has been confused with Lophospermum scandens, which has longer flowers and larger, toothed leaves. It resembles Maurandya barclayana, which has blue-violet flowers and hairy rather than hairless sepals. It is semi-deciduous in the colder areas.
Flowers and reproduction
The hermaphrodite, tubular flowers appear solitary in the leaf axils, and come in many different colours including rose pink, violet, indigo blue and white, with a double perianth. The five-parted flowers feature a wide throat and are borne on long, glabrous pedicels 30 to 85 millimeters long.
The small, ovate-lanceolate calyx lobes are 10 to 15 millimeters long. They are 2 to 4 millimeters wide at the base and are glabrous to sparsely covered with glandular hairs. The corolla, slightly hairy on the outside, is two-lipped, with short, rounded to notched, spreading lobes.
The four short, didynamous stamens are included within the corolla. The superior, two-chambered ovary is usually glabrous, and the glabrous, enclosed, relatively short style is 13 to 16 millimeters long. The plant flowers profusely between spring and summer, and irregularly in the cool months.
The asymmetrical, irregularly ovoid, many-seeded, cartilaginous seed capsules are 10 to 12 millimeters long and are divided into slightly unequal chambers.
Range
The original distribution area comprises rocky slopes, canyons and disturbed areas in tropical and subtropical forests of southern Mexico at 1200 to 2200 meters above sea level. The species prefers a moderately humid (mesic) biotope. It seems to have spread from the north along the calcareous Sierra Madre and south into the volcanic belt. Owing to human introduction, occurrences are now found worldwide.
Cultivation
Cultivars include 'Joan Lorraine' with velvety purple flowers, 'Snow White' with white flowers and 'Mystic Rose' with fuchsia flowers.
References
Plantaginaceae
Taxa named by Antonio José Cavanilles
Flora of Mexico
Ornamental plants
Vines
Plants described in 1806
Plants that can bloom all year round | Maurandya scandens | Biology | 617 |
3,616,959 | https://en.wikipedia.org/wiki/Gonioreflectometer | A gonioreflectometer is a device for measuring a bidirectional reflectance distribution function (BRDF).
The device consists of a light source illuminating the material to be measured and a sensor that captures light reflected from that material. The light source should be able to illuminate, and the sensor to capture data from, anywhere on the hemisphere around the target. The two angles that position the light source and the two that position the sensor on this hemisphere are the four dimensions of the BRDF. The 'gonio' part of the word refers to the device's ability to measure at different angles.
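To make the four-dimensional sampling concrete, here is a minimal simulated sketch in Python; the Lambertian target standing in for real sensor readings, and the angular resolutions, are illustrative assumptions rather than properties of any actual instrument:

```python
import numpy as np

# A gonioreflectometer tabulates the BRDF by stepping the light source over
# incident directions (theta_i, phi_i) and the sensor over outgoing
# directions (theta_o, phi_o). Here a perfectly diffuse (Lambertian) target
# stands in for the physical readings, so every sample equals albedo / pi.

ALBEDO = 0.5

def simulated_brdf_reading(theta_i, phi_i, theta_o, phi_o):
    """Reflected radiance per unit incident irradiance for a Lambertian target."""
    return ALBEDO / np.pi  # independent of all four angles

thetas = np.linspace(0.0, np.pi / 2, 6, endpoint=False)  # zenith angles
phis = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)    # azimuth angles

brdf_table = {
    (ti, pi_, to, po): simulated_brdf_reading(ti, pi_, to, po)
    for ti in thetas for pi_ in phis for to in thetas for po in phis
}

print(len(brdf_table), "samples")  # 6 * 8 * 6 * 8 = 2304
print("all equal albedo/pi:", np.allclose(list(brdf_table.values()), ALBEDO / np.pi))
```

A real instrument would replace the simulated reading with measured radiance divided by measured incident irradiance at each of the four angles.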
Several similar devices have been built and used to capture data for similar functions. Most of these devices use a camera instead of the light intensity-measuring sensor to capture a two-dimensional sample of the target. Examples include:
a spatial gonioreflectometer for capturing the SBRDF (McAllister, 2002).
a camera gantry for capturing the light field (Levoy and Hanrahan, 1996).
an unnamed device for capturing the bidirectional texture function (Dana et al., 1999).
References
Dana, Kristin et al. 1999. Reflectance and Texture of Real-World Surfaces. in ACM Transactions on Graphics. Volume 18, Issue 1 (January, 1999). New York, NY, USA: ACM Press. Pages 1-34.
Foo, Sing Choong. 1997. A Gonioreflectometer for measuring the bidirectional reflectance of materials for use in illumination computations. Masters thesis. Cornell University. Ithaca, New York, USA.
Levoy, Marc & Hanrahan, Pat. 1996. Light field rendering. In Proceedings of the 23rd annual conference on Computer graphics and interactive techniques.
McAllister, David. 2002. A Generalized Surface Appearance Representation for Computer Graphics. PhD dissertation. University of North Carolina at Chapel Hill, Department of Computer Science. Chapel Hill, USA. 118p.
Computer graphics
Photometry
Measuring instruments | Gonioreflectometer | Technology,Engineering | 406 |
255,432 | https://en.wikipedia.org/wiki/Honeyguide | Honeyguides (family Indicatoridae) are a family of birds in the order Piciformes. They are also known as indicator birds, or honey birds, although the latter term is also used more narrowly to refer to species of the genus Prodotiscus. They have an Old World tropical distribution, with the greatest number of species in Africa and two in Asia. These birds are best known for their interaction with humans. Honeyguides are noted and named for one or two species that will deliberately lead humans (but, contrary to popular claims, most likely not honey badgers) directly to bee colonies, so that they can feast on the grubs and beeswax that are left behind.
Taxonomy
The Indicatoridae were noted for their barbet-like structure and brood-parasitic behavior, and were considered morphologically unique among the non-passerines in having nine primaries. The phylogenetic relationship between the honeyguides and the eight other families that make up the order Piciformes is shown in the cladogram below. The number of species in each family is taken from the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC).
Description
Most honeyguides are dull-colored, though some have bright yellow coloring in the plumage. All have light outer tail feathers, which are white in all the African species. The smallest species by body mass appears to be the green-backed honeyguide, at an average of , and by length appears to be the Cassin's honeyguide, at an average of , while the largest species by weight is the lyre-tailed honeyguide, at , and by length, is the greater honeyguide, at .
They are among the few birds that feed regularly on wax—beeswax in most species, and presumably the waxy secretions of scale insects in the genus Prodotiscus and to a lesser extent in Melignomon and the smaller species of Indicator. They also feed on waxworms which are the larvae of the waxmoth Galleria mellonella, on bee colonies, and on flying and crawling insects, spiders, and occasional fruits. Many species join mixed-species feeding flocks.
Behavior
Guiding
Honeyguides are named for a remarkable habit seen in one or two species: guiding humans to bee colonies. Once the hive is open and the honey is taken, the bird feeds on larvae and wax. This behavior has been studied in the greater honeyguide; some authorities (following Friedmann, 1955) state that it also occurs in the scaly-throated honeyguide, while others disagree. Wild honeyguides understand various types of human calls that attract them to engage in the foraging mutualism. In northern Tanzania, honeyguides partner with Hadza hunter-gatherers, and this assistance has been shown to increase honey-hunters' rates of finding bee colonies by 560% and to lead men to significantly higher-yielding nests than those found without honeyguides. Contrary to most depictions of the human-honeyguide relationship, the Hadza did not actively repay honeyguides, but instead hid, buried, and burned honeycomb, with the intent of keeping the bird hungry and thus more likely to guide again. Some experts believe that honeyguide co-evolution with humans goes back to the stone-tool-making human ancestor Homo erectus, about 1.9 million years ago. Despite popular belief, no evidence indicates that honeyguides guide the honey badger; though videos of this exist, there have been accusations that they were staged.
Although most members of the family are not known to recruit "followers" in their quest for wax, they are also referred to as "honeyguides" by linguistic extrapolation.
Breeding
The breeding behavior of eight species in Indicator and Prodotiscus is known. They are all brood parasites that lay one egg in a nest of another species, laying eggs in series of about five during a period of 5–7 days. Most favor hole-nesting species, often the related barbets and woodpeckers, but Prodotiscus parasitizes cup-nesters such as white-eyes and warblers. Honeyguide nestlings have been known to physically eject their hosts' chicks from the nests and they have needle-sharp hooks on their beaks with which they puncture the hosts' eggs or kill the nestlings.
African honeyguide birds are known to lay their eggs in underground nests of other bee-eating bird species. The honeyguide chicks kill the hatchlings of the host using their needle-sharp beaks just after hatching, much as cuckoo hatchlings do. The honeyguide mother ensures her chick hatches first by internally incubating the egg for an extra day before laying it, so that it has a head start in development compared to the hosts' offspring.
See also
List of honeyguides
References
External links
Human and Birds Cooperate to Share Beehive Bounty on Scientific American
Brood parasites
Honeyguides
Symbiosis | Honeyguide | Biology | 1,023 |
2,386,081 | https://en.wikipedia.org/wiki/Dentine%20bonding%20agents | Bonding agents (spelled dentin bonding agents in American English), also known as "bonderizers", are resin materials used to make a dental composite filling material adhere to both dentin and enamel.
Bonding agents are often methacrylates with some volatile carrier and solvent like acetone. They may also contain diluent monomers. For proper bonding of resin composite restorations, dentin should be conditioned with polyacrylic acids to remove the smear layer, created during mechanical treatment with a dental bur, and to expose some of the collagen network, or organic matrix, of dentin. The adhesive resin should create the so-called hybrid layer (consisting of the collagen network exposed by etching and embedded in adhesive resin). This layer is the interface between dentin and adhesive resin, and the final quality of the dental restoration depends greatly on its properties.
Modern dental bonding systems come as a “three-step system”, where the etchant, primer, and adhesive are applied sequentially; as a “two-step system”, where the etchant and the primer are combined for simultaneous application; and as a “one-step system”, where all the components are premixed and applied in a single application (the so-called sixth generation of bonding agents).
Chemical processes involved in bonding to dentine
Removal of the smear layer and etching of dentine
Priming of the dentine surface
Bonding of the primed dentine surface to the restorative material
Removal of the smear layer and dentine etching
A dentine conditioning agent is used initially to remove the smear layer resulting from the preparation of a cavity and to alter the dentine surface by partially demineralising the intertubular dentine. This partially demineralised dentine acts as a hollow scaffolding which can be perfused with the primer. Over-etching (as well as over-drying) of the dentine can lead to collapse of the collagen network, making infiltration of the primer more challenging. However, sclerosed dentine requires a longer time of exposure to the dentine conditioner compared to healthy dentine. Some dentine conditioners contain a chemical called glutaraldehyde, which reinforces the collagen matrix, preventing its collapse.
Some common dentine conditioners include:
phosphoric acid
nitric acid
maleic acid
citric acid
ethylene diamine tetra-acetic acid (EDTA)
Priming of the dentine surface
Bonding of the primed dentine surface to the restorative material
Dentin bonding
How does dentinal bonding occur?
Dentin bonding refers to the process of bonding a resin to conditioned dentin, in which the mineral component is replaced with resin monomers to form a biocomposite comprising dentin collagen and cured resin. The adhesive-dentin interface forms a tight and permanent bond between dentin and composite resins.
It can be accomplished by either etch-and-rinse (total etch) or self-etch adhesives. In etch-and-rinse, acid dissolves the minerals to a certain depth, leaving the highly porous dentinal collagen network suspended in water. The collagen network is then infiltrated with resin monomers. When these monomers polymerize, activated by light-curing, the result is a polymer-collagen biocomposite commonly known as the hybrid layer.
The mechanism of action is explained below:
a) Application of acid to dentin results in partial or total removal of the smear layer and demineralization of the dentin.
b) The acid demineralizes the intertubular and peritubular dentin and opens the dentinal tubules while exposing the collagen fibres, hence increasing the microporosity of the intertubular dentin.
c) Dentin is demineralized to a depth of up to approximately 7.5 μm, depending on the type of acid used, the time of application and the concentration.
d) The primer system is designed to increase the critical surface tension of dentin, which is decreased by acid etching.
e) Bonding occurs when primer and bonding resin are applied to the etched dentin:
They penetrate the intertubular dentin, forming the hybrid layer.
They also penetrate and polymerize in the open dentinal tubules, forming resin tags.
The moist bonding technique has been shown repeatedly to enhance bond strengths of etch-and-rinse adhesives, because water preserves the porosity of the collagen network for monomer interdiffusion.
Hybrid layer
The hybrid layer's presence was identified by Nakabayashi and co-workers; it consists of demineralized intertubular dentin infiltrated with polymerized adhesive resin.
The hybrid layer is hydrophobic, acid-resistant and tough. The quality of the hybrid layer formed determines the strength of the resin-dentin interface: the thicker and more uniform the hybrid layer, the better the bond strength.
Smear layer and its role in bonding
The smear layer is a layer of debris on the surface of the substrate, comprising residual inorganic and organic components. This layer is produced whenever tooth structure is prepared with a bur.
The smear layer fills the orifices of the dentinal tubules, forming smear plugs. These smear plugs decrease dentin permeability by 90%, and the smear plug alone can prevent adhesive resin penetration into the dentinal tubules. The thickness of the smear layer ranges from 0.5 to 2 μm, and that of the smear plug from 1 to 10 μm.
The smear layer impedes optimal bonding, which is why it must be removed prior to bonding with etch-and-rinse (total etch) adhesives. Its removal leads to a thicker hybrid layer and longer, denser resin tags, which result in better bond strength.
Carious versus sound dentin for dentinal bonding
Some caries excavation methods leave caries-affected dentin behind to serve as the bonding substrate, mostly in indirect pulp capping.
It is reported that immediate bond strengths to caries-affected dentin are 20-50% lower than to sound dentin, and even lower with caries-infected dentin. How does caries progression cause this? First, it reduces mineral content, increases porosity and changes the structure and distribution of dentinal collagen. These changes can cause a significant reduction in the mechanical properties of dentin, e.g. hardness, stiffness, tensile strength, modulus of elasticity, and shrinkage during drying, which makes the dentin in and under the hybrid layer more prone to cohesive failures under occlusal forces.
The lower mineral content of caries-affected dentin allows phosphoric acid or acidic monomers to demineralize the matrix more deeply than in normal dentin, which results in even more residual water in the exposed collagen matrix.
How to improve resin-dentin bonding?
Moist dentine
One of the important factors determining dentinal bonding is collagen. When dentin is etched, the smear layer and minerals from the dentinal structure are removed, exposing the collagen fibres. The areas from which the minerals are removed are filled with water, which functions as a plasticizer for collagen and keeps it in an expanded, soft state. This means that the spaces for resin-dentin bonding are preserved. However, these collagen fibres can collapse in dry conditions, and if the organic layer of the matrix is denatured, this will prevent the resin from bonding with the dentin and forming a hybrid layer.
Because of this, moist or wet dentin is required to achieve successful dentin bonding. This is due to the presence of water-miscible organic solvents like ethanol or acetone in the primers. The acetone follows the water and hence improves the penetration of the monomers into the dentin for better micromechanical bonding. Water also prevents the collagen fibres from collapsing, allowing better penetration and bonding between resin and dentin.
To keep the dentin moist, it is advisable not to dry the dentin with compressed air after rinsing away the etchant. Instead, high-volume evacuation suction can be used to remove excess water, and the remaining water on the dentin can be blotted with gauze or cotton. The dentin surface should appear glistening.
If the dentin surface is too wet, water will dilute the resin primer and compete for sites in the collagen network, preventing hybrid layer formation.
If the dentin surface is too dry, the collagen fibres and demineralized dentin can collapse, leading to low bond strength.
Agitation of hydrophilic primer or adhesive during application
In addition to adequate dentinal moisture, agitation of the primer during application of two-step etch-and-rinse adhesives may be critical for optimal penetration into the demineralized collagen fibres. It may also aid the evaporation of residual water in the adhesive and hybrid layers, thus preventing nanoleakage.
In a clinical trial comparing the performance of Prime & Bond NT applied with no rubbing action, slight rubbing action and vigorous rubbing action in the restoration of NCCLs, 92.5% of restorations in the vigorous rubbing group were retained after 24 months of clinical service. In the other two groups, the retention rates were slightly lower, at 82.5%.
See also
Dental cement
References
External links
Dentine bonding agents: an overview
Dental materials
Adhesives
Polymer chemistry | Dentine bonding agents | Physics,Chemistry,Materials_science,Engineering | 2,027 |
213,521 | https://en.wikipedia.org/wiki/Lick%20Observatory | The Lick Observatory is an astronomical observatory owned and operated by the University of California. It is on the summit of Mount Hamilton, in the Diablo Range just east of San Jose, California, United States. The observatory is managed by the University of California Observatories, with headquarters on the University of California, Santa Cruz campus, where its scientific staff moved in the mid-1960s. It is named after James Lick.
The first new moon of Jupiter to be identified since the time of Galileo, Amalthea, the planet's fifth moon, was discovered at this observatory in 1892.
Early history
Lick Observatory is the world's first permanently occupied mountain-top observatory.
The observatory, in a Classical Revival style structure, was constructed between 1876 and 1887, from a bequest from James Lick of $700,000.
Lick, originally a carpenter and piano maker, had arrived from Peru in San Francisco, California, in late 1847; after accruing significant wealth he began making various donations in 1873. In his last deed he chose the site atop Mount Hamilton, and was buried there in 1887 under the future site of the telescope, with a brass tablet bearing the inscription, "Here lies the body of James Lick".
Lick additionally negotiated that Santa Clara County construct a "first-class road" to the summit, completed in 1876. Lick chose John Wright, of San Francisco's Wright & Sanders firm of architects, to design both the Observatory and the Astronomer's House. All of the construction materials had to be brought to the site by horse and mule-drawn wagons, which could not negotiate a steep grade. To keep the grade below 6.5%, the road had to take a very winding and sinuous path, which the modern-day road (California State Route 130) still follows. The road from Smith Creek to the summit makes 367 complete turns, in a distance of seven miles. The road is closed when there is snow.
The first telescope installed at the observatory was a refractor made by Alvan Clark. Astronomer E. E. Barnard used the telescope to make "exquisite photographs of comets and nebulae", according to D. J. Warner of Warner & Swasey Company.
In 1880, a lens was commissioned from Alvan Clark & Sons for $51,000. Manufacturing of the lens took until 1885, and it was delivered to the observatory on December 29, 1886. Warner & Swasey designed and built the telescope mounting. The telescope built with this lens became the world's largest refracting telescope from the time it saw first light on January 3, 1888, until the completion of Yerkes Observatory in 1897.
Under the University of California
In May 1888, the observatory was turned over to the Regents of the University of California, and it became the first permanently occupied mountain-top observatory in the world. Edward Singleton Holden was the first director. The location provided excellent viewing performance because of the lack of ambient light and pollution; additionally, the night air at the top of Mt. Hamilton is extremely calm. Often a layer of low coastal clouds invades the valley below, especially on nights from late spring to mid-summer, a phenomenon known in California as the June Gloom. On nights when the observatory remains above that layer, light pollution can be greatly reduced.
E. E. Barnard used the telescope in 1892 to discover a fifth moon of Jupiter, Amalthea. This was the first addition to Jupiter's known moons since Galileo observed the planet through his parchment tube and spectacle lens. The telescope provided spectra for W. W. Campbell's work on the radial velocities of stars.
In 1905 (Jan. 5 and Feb. 27), Charles Dillon Perrine discovered the sixth and seventh moons of Jupiter (Elara and Himalia) on photographs taken with the 36-inch Crossley reflecting telescope which he had recently rebuilt.
In 1928, C. Donald Shane studied carbon stars and was able to divide them into spectral classes R0–R9 and N0–N7 (on this scale N7 is the reddest and R0 the bluest). This was an expansion of the work of Annie Jump Cannon of Harvard, which had divided carbon stars into R and N types. The N stars have more cyanogen and the R stars have more carbon.
On May 21, 1939, during a nighttime fog that engulfed the summit, a U.S. Army Air Corps Northrop A-17 two-seater attack plane crashed into the main building. Because a scientific meeting was being held elsewhere, the only staff member present was Nicholas Mayall. Nothing caught fire and the two individuals in the building were unharmed.
The pilot of the plane, Lt. Richard F. Lorenz, and passenger Private W. E. Scott were killed instantly. The telephone line was broken by the crash, so no help could be called for at first. Eventually help arrived together with numerous reporters and photographers, who kept arriving almost all night long. Evidence of their numbers could be seen the next day by the litter of flash bulbs carpeting the parking lot.
The press widely covered the accident and many reports emphasized the luck in not losing a large cabinet of spectrograms which was knocked over by the crash coming through an astronomer's office window. There was no damage to the telescope dome.
In 1950, the California state legislature appropriated funds for a reflector telescope, which was completed in 1959. The observatory additionally has a Cassegrain reflector dedicated to photoelectric measurements of star brightness, and received a pair of astrographs from the Carnegie Corporation.
Time-signal service
In 1886, Lick Observatory began supplying Railroad Standard Time to the Southern Pacific Railroad, and to other businesses, over telegraph lines. The signal was generated by a clock manufactured by E. Howard & Co. specifically for the Observatory, and which included an electric apparatus for transmitting the time signal over telegraph lines. While most of the nation's railroads received their time signal from the U.S. Naval Observatory time signal via Western Union's telegraph lines, the Lick Observatory time signal was used by railroads from the West Coast all the way to Colorado.
21st century
With the growth of San Jose, and the rest of Silicon Valley, light pollution became a problem for the observatory. In the 1970s, a site in the Santa Lucia Mountains at Junípero Serra Peak, southeast of Monterey, was evaluated for possible relocation of many of the telescopes. However, funding for the move was not available, and in 1980 San Jose began a program to reduce the effects of lighting, most notably replacing all streetlamps with low pressure sodium lamps. The result is that the Mount Hamilton site remains a viable location for a major working observatory.
The International Astronomical Union named Asteroid 6216 San Jose to honor the city's efforts toward reducing light pollution.
In 2006, there were 23 families in residence, plus typically between two and ten visiting astronomers from the University of California campuses, who stay in dormitories while working at the observatory. The little town of Mount Hamilton atop the mountain has its own police and a post office, and until 2005 had a one-room K-8 school.
In 2008, there were 38 people residing on the mountain; the chef position and the commons dinner were discontinued. By 2013, with continuing budget and staff cuts, only about nineteen residents remained, and observers commonly work from remote observing stations rather than make the drive, partly because the business office raised the cost of staying in the dorms. The swimming pool has been closed.
In 2013, one of Lick Observatory's key funding sources was scheduled for elimination in 2018, which many worried would result in the closing of the entire observatory.
In November 2014, the University of California announced its intention to continue support of Lick Observatory.
Telescopes at Lick Observatory are used by researchers from many campuses of the University of California system. Current topics of research carried out at Lick include exoplanets, supernovae, active galactic nuclei, planetary science, and development of new adaptive optics technologies.
In 2015, Google donated $1 million to the observatory over two years.
In August 2020, the observatory was in danger of being destroyed by the rapidly growing SCU Lightning Complex fires. Firefighters were on standby at Lick Observatory to defend the buildings if necessary. As of the evening of August 19, 2020, the fire was on observatory property and moving quickly. While the residences on Mt. Hamilton sustained some damage during the following night, the telescopes and domes survived.
Significant discoveries
The following discoveries and observations were made at Lick Observatory:
Measurement of the size of the major moons of Jupiter by A. A. Michelson in 1891
Several moons of Jupiter
Amalthea
Elara
Himalia
Sinope
Near-Earth asteroid (29075) 1950 DA
Several extrasolar planets
Quintuple planet system
55 Cancri
Triple planet system
Upsilon Andromedae (with Whipple Observatory)
Double planet systems
HD 38529 (with Keck Observatory)
HD 12661 (with Keck)
Gliese 876 (with Keck)
47 Ursae Majoris
The first detection of emission lines in the spectrum of an active galaxy
The jet emerging from the active nucleus in Messier 87
The hidden active galactic nucleus in NGC 1068, detected using spectropolarimetry
In addition to observations of natural phenomena, Lick was also the location of the first laser range-finding observation of the Apollo 11 reflector, although this was only for confirmation purposes and no ongoing range-finding work was performed.
Equipment
Below is a list of the nine telescopes operating at the observatory:
The C. Donald Shane telescope reflector (Shane Dome, Tycho Brahe Peak). Its instrumentation includes:
The Hamilton spectrometer
The Kast double spectrograph
The ShaneAO adaptive optics system with laser guide star
The Automated Planet Finder reflector. First light was originally scheduled for 2006. The telescope finally came into regular use in 2013.
The Anna L. Nickel reflector (North (small) Dome, Main Building)
The Great Lick refractor (South Dome, Main Building, Observatory Peak)
The Crossley reflector (Crossley Dome, Ptolemy Peak)
The Katzman Automatic Imaging Telescope (KAIT) reflector (24-inch Dome, Kepler Peak)
The Coudé Auxiliary Telescope (Inside of Shane Dome, South wall, Tycho Brahe Peak)
The Tauchmann reflector (Tauchmann Dome atop the water tank, Huygens Peak)
The Carnegie twin refractor (Double Astrograph Dome, Tycho Brahe Peak)
Below is a list of equipment that formerly operated at the observatory:
CCD Comet Camera with Nikon camera lens ("The Outhouse", southwest of the Shane Dome, Tycho Brahe Peak)
See also
Charles Dillon Perrine
Harland Epps
List of astronomical observatories
List of largest optical refracting telescopes
References
Citations
Sources
Vasilevskis, S., and Osterbrock, D. E. (1989). "Charles Donald Shane". Biographical Memoirs, Volume 58, pp. 489–512. National Academy of Sciences, Washington, DC.
Further reading
External links
Lick Observatory
Lick Observatory Archive at UC Santa Cruz
Lick Observatory Records Digital Archive, from the UC Santa Cruz Library's Digital Collections
Photographs (1884) from the Paris Observatory Digital library
The University of California Observatories
Astronomical observatories in California
Buildings and structures in Santa Clara County, California
Diablo Range
University of California, Santa Cruz buildings and structures
Astronomy institutes and departments
Research institutes in the San Francisco Bay Area
History of Santa Clara County, California
University of California
Tourist attractions in Santa Clara County, California
Neoclassical architecture in California
1887 establishments in California
University and college astronomical observatories | Lick Observatory | Astronomy | 2,383 |
32,903,787 | https://en.wikipedia.org/wiki/Eden%20Landing%20Ecological%20Reserve | Eden Landing Ecological Reserve is a nature reserve in Hayward and Union City, California, on the eastern shore of San Francisco Bay. The reserve is managed by the California Department of Fish and Game and includes of former industrial salt ponds now used as a low salinity waterbird habitat.
Background
The reserve lies between the Hayward Regional Shoreline and Alameda Creek Regional Trail to the north and the Don Edwards National Wildlife Refuge and Coyote Hills Regional Park to the south. It is just south of the San Mateo–Hayward Bridge, across which lies the Hayward Shoreline Interpretive Center. Some waterfowl hunting is periodically permitted. The remains of the Oliver Salt Company are located in the reserve.
The reserve is part of the South Bay Salt Pond Restoration Project, the largest salt pond restoration project on the west coast of the United States. To date, over 1,000 acres of marsh have been restored, many of the former salt ponds have been enhanced for wildlife, and new trails and a kayak launch were opened to the public in April 2016. The Bay Area environmental organization Save the Bay is also working on the site to plant native vegetation along the edges of the salt marshes.
Further reading
See also
List of California Department of Fish and Game protected areas
Eden Landing, local former community from which its name is derived
References
External links
Nature reserves in California
Bird sanctuaries of the United States
Parks in Hayward, California
Union City, California
Protected areas of Alameda County, California
Protected areas of the San Francisco Bay Area
California State Reserves
California Department of Fish and Wildlife areas
Marshes of California
Ecological restoration
History of salt
Environment of the San Francisco Bay Area
San Francisco Bay
San Francisco Bay Trail | Eden Landing Ecological Reserve | Chemistry,Engineering | 336 |
49,779,115 | https://en.wikipedia.org/wiki/Prior-free%20mechanism | A prior-free mechanism (PFM) is a mechanism in which the designer does not have any information on the agents' valuations, not even that they are random variables from some unknown probability distribution.
A typical application is a seller who wants to sell some items to potential buyers. The seller wants to price the items in a way that will maximize his profit. The optimal prices depend on the amount that each buyer is willing to pay for each item. The seller does not know these amounts, and cannot even assume that the amounts are drawn from a probability distribution. The seller's goal is to design an auction that will produce a reasonable profit even in worst-case scenarios.
PFMs should be contrasted with two other mechanism types:
Bayesian-optimal mechanisms (BOM) assume that the agents' valuations are drawn from a known probability distribution. The mechanism is tailored to the parameters of this distribution (e.g., its median or mean value).
Prior-independent mechanisms (PIM) assume that the agents' valuations are drawn from an unknown probability distribution. They sample from this distribution in order to estimate the distribution parameters.
From the point of view of the designer, BOM is the easiest setting, then PIM, then PFM. The approximation guarantees of BOM and PIM hold in expectation, while those of PFM hold in the worst case.
What can we do without a prior? A naive approach is to use statistics: ask the potential buyers what their valuations are and use their replies to calculate an empirical distribution function. Then, apply the methods of Bayesian-optimal mechanism design to the empirical distribution function.
The problem with this naive approach is that the buyers may behave strategically. Since the buyers' answers affect the prices that they are going to pay, they may be incentivized to report false valuations in order to push the price down. The challenge in prior-free mechanism design is therefore to design truthful mechanisms, in which the agents cannot affect the prices they pay and so have no incentive to report untruthfully.
Several approaches for designing truthful prior-free mechanisms are described below.
Deterministic empirical distribution
For every agent i, let F_i be the empirical distribution function calculated from the valuations of all agents other than i. Use the Bayesian-optimal mechanism with F_i to calculate the price and allocation for agent i.
The bid of agent i affects only the prices paid by the other agents and not his own price; therefore, the mechanism is truthful.
This "empirical Myerson mechanism" works in some cases but not in others.
Here is a case in which it works quite well. Suppose we are in a digital goods auction. We ask the buyers for their valuation of the good, and get the following replies:
51 buyers bid "$1"
50 buyers bid "$3".
For each of the buyers in group 1, the empirical distribution is 50 $1-buyers and 50 $3-buyers, so the empirical distribution function is "0.5 chance of $1 and 0.5 chance of $3". For each of the buyers in group 2, the empirical distribution is 51 $1-buyers and 49 $3-buyers, so the empirical distribution function is "0.51 chance of $1 and 0.49 chance of $3". The Bayesian-optimal price in both cases is $3. So in this case, the price given to all buyers will be $3. Only the 50 buyers in group 2 agree to that price, so our profit is $150. This is an optimal profit (a price of $1, for example, would give us a profit of only $101).
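A minimal sketch of this mechanism for the digital-goods setting appears below; the helper names are illustrative, and ties in the price search are broken arbitrarily.

```python
# A sketch, not a canonical implementation: the empirical-Myerson mechanism
# for a digital-goods auction. Each agent is offered the monopoly price
# computed from the other agents' bids only, which keeps the mechanism truthful.

def monopoly_price(bids):
    """Return the price p (searched over the bids) maximizing p * #(bids >= p)."""
    return max(bids, key=lambda p: p * sum(1 for b in bids if b >= p))

def empirical_myerson(bids):
    """Return a (price, sold) pair per agent; agent i's price ignores bid i."""
    outcome = []
    for i, bid in enumerate(bids):
        others = bids[:i] + bids[i + 1:]
        price = monopoly_price(others)
        outcome.append((price, bid >= price))
    return outcome

# The worked example above: 51 bids of $1 and 50 bids of $3.
bids = [1] * 51 + [3] * 50
profit = sum(price for price, sold in empirical_myerson(bids) if sold)
print(profit)  # 150: everyone is offered $3 and the 50 high bidders buy
```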
In general, the empirical-Myerson mechanism works if the following are true:
There are no feasibility constraints (no issues of incompatibility between allocations to different agents), like in a digital goods auction;
The valuations of all agents are drawn independently from the same unknown distribution;
The number of agents is large.
Then, the profit of the empirical Myerson mechanism approaches the optimum.
If some of these conditions are not true, then the empirical-Myerson mechanism might have poor performance. Here is an example. Suppose that:
10 buyers bid "$10";
91 buyers bid "$1".
For each buyer in group 1, the empirical distribution function is "0.09 chance of $10 and 0.91 chance of $1", so the Bayesian-optimal price is $1. For each buyer in group 2, the empirical distribution function is "0.1 chance of $10 and 0.9 chance of $1", so the Bayesian-optimal price is $10. The buyers in group 1 pay $1 and the buyers in group 2 do not want to pay $10, so we end up with a profit of $10. In contrast, a price of $1 for everyone would have given us a profit of $101. Our profit is less than 10% of the optimum. This example can be made arbitrarily bad.
Moreover, this example can be generalized to prove that:
There is no symmetric deterministic truthful auction that attains a constant fraction of the optimal profit in all cases in which the agents' valuations are drawn from a bounded range.
Random sampling
In a typical random-sampling mechanism, the potential buyers are divided randomly into two sub-markets. Each buyer goes to each sub-market with probability 1/2, independently of the others. In each sub-market we compute an empirical distribution function and use it to calculate the prices for the other sub-market. An agent's bid affects only the prices in the other market and not in his own market, so the mechanism is truthful. In many scenarios it provides a good approximation of the optimal profit, even in worst-case scenarios; see Random-sampling mechanism for references.
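A minimal sketch of such a random-sampling auction for digital goods follows; a single fixed price per sub-market is assumed, and the helper is the same illustrative one as in the earlier sketch.

```python
# A sketch of a random-sampling auction for digital goods: each bidder is
# assigned to one of two sub-markets by a fair coin, and the optimal fixed
# price of each sub-market is offered to the bidders of the other one.
import random

def monopoly_price(bids):
    # Best fixed price among the given bids (illustrative helper).
    return max(bids, key=lambda p: p * sum(1 for b in bids if b >= p))

def random_sampling_auction(bids, seed=0):
    rng = random.Random(seed)
    left, right = [], []
    for bid in bids:
        (left if rng.random() < 0.5 else right).append(bid)
    if not left or not right:             # degenerate split: no trade
        return 0
    price_left = monopoly_price(right)    # left's price comes from right,
    price_right = monopoly_price(left)    # and vice versa: no bid sets its own price
    return (price_left * sum(1 for b in left if b >= price_left)
            + price_right * sum(1 for b in right if b >= price_right))

print(random_sampling_auction([1] * 51 + [3] * 50))
```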
Consensus estimates
A consensus-estimate is a function that, with high probability, cannot be influenced by a single agent. For example, if we calculate the maximum profit that we can extract from a given set of buyers, then any buyer can influence the profit by reporting untruthfully. But if we round the maximum profit to the nearest $1000 below it, and the bids are bounded by e.g. $10, then, with high probability, a single bid will not affect the outcome at all. This guarantees that the mechanism is truthful. The consensus-estimate function should be selected carefully in order to guarantee a good profit approximation; see Consensus estimate for references.
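The rounding step can be sketched as below; a fixed grid is used for simplicity, whereas the published constructions randomize the grid so that the "no single bid moves the rounded value" property holds with high probability rather than only for favorable inputs.

```python
# A sketch of rounding a bid-dependent quantity down to a coarse grid so
# that small (single-bid) perturbations usually leave the result unchanged.
def consensus_round(value, grid=1000):
    """Round value down to the nearest multiple of grid."""
    return grid * int(value // grid)

# If every bid is at most $10, the maximum extractable profit moves by at
# most $10 when one bid changes, so the rounded value rarely moves at all.
print(consensus_round(12_765))  # 12000
```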
References
Mechanism design
Management cybernetics | Prior-free mechanism | Mathematics | 1,343 |
68,448,599 | https://en.wikipedia.org/wiki/Fungus%20pocket | Fungus pockets are any of various convergently evolved inoculum-retention and -cultivation organs in a wide range of insect taxa. They are generally divided into mycangia (or "mycetangia") and infrabuccal pockets.
Fungus pockets are found in ambrosia beetles, bark beetles, termites and attine ants.
References
Insect morphology
Mycology
Symbiosis | Fungus pocket | Biology | 82 |
23,815,973 | https://en.wikipedia.org/wiki/Music%20Encoding%20Initiative | The Music Encoding Initiative (MEI) is an open-source effort to create a system for representation of musical documents in a machine-readable structure. MEI closely mirrors work done by text scholars in the Text Encoding Initiative (TEI) and while the two encoding initiatives are not formally related, they share many common characteristics and development practices. The term "MEI", like "TEI", describes the governing organization and the markup language. The MEI community solicits input and development directions from specialists in various music research communities, including technologists, librarians, historians, and theorists in a common effort to discuss and define best practices for representing a broad range of musical documents and structures. The results of these discussions are then formalized into the MEI schema, a core set of rules for recording physical and intellectual characteristics of music notation documents. This schema is expressed in an XML schema Language, with RelaxNG being the preferred format. The MEI schema is developed using the One-Document-Does-it-all (ODD) format, a literate programming XML format developed by the Text Encoding Initiative.
MEI is often used for music metadata catalogs, critical editing (particularly of early music), and OMR-based data collection and interchange.
MEI uses a permissive software license, the Educational Community License, Version 2.0 (related to the Apache License 2.0).
Verovio is a portable, lightweight library for rendering Music Encoding Initiative (MEI) files by transformation into Scalable Vector Graphics format, released under the LGPLv3 license.
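As a small illustration of what a machine-readable encoding enables, the sketch below extracts pitch names from an MEI file using Python's standard library; the namespace URI and the pname/oct attribute names follow common MEI practice but should be treated as assumptions to verify against the schema version in use.

```python
# A sketch of reading pitches from an MEI file with the Python standard library.
import xml.etree.ElementTree as ET

MEI_NS = {"mei": "http://www.music-encoding.org/ns/mei"}  # assumed namespace URI

def note_names(path):
    root = ET.parse(path).getroot()
    # MEI commonly encodes pitch as a pitch-name attribute plus an octave attribute.
    return [note.get("pname", "?") + note.get("oct", "")
            for note in root.iterfind(".//mei:note", MEI_NS)]

# Example: note_names("song.mei") might return ["c4", "e4", "g4"].
```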
See also
Music Markup Language
Notation Interchange File Format (NIFF)
MusicXML
References
External links
Website of the Music Encoding Initiative
MEI Tutorials
Verovio, an MEI rendering library
MerMEId, a Metadata editor
MPM - Music Performance Markup, a format for performance modelling on the basis of MEI data
Music notation file formats
Musical markup languages
Musical notation
Musicology
Digital humanities
XML-based standards
Markup languages
Metadata standards
Data modeling languages | Music Encoding Initiative | Technology | 411 |
51,235,898 | https://en.wikipedia.org/wiki/Fractional%20synthetic%20rate | A fractional synthetic rate (FSR) is the rate at which a precursor compound is incorporated into a product per unit of product mass. The metric has been used to estimate the rate at which proteins, lipids, and lipoproteins are synthesized within humans and other animals. The formula used to calculate the FSR from a stable isotope tracer experiment is:
References
Metrics
Biochemistry
Biosynthesis | Fractional synthetic rate | Chemistry,Mathematics,Biology | 85 |
78,378,854 | https://en.wikipedia.org/wiki/NGC%202550A | NGC2550A is a spiral galaxy in the constellation of Camelopardalis. Its velocity with respect to the cosmic microwave background is 3670 ± 10km/s, which corresponds to a Hubble distance of . The discovery of this galaxy is credited to Philip C. Keenan, in his paper Studies of Extra-Galactic Nebulae. Part I: Determination of Magnitudes, published in The Astrophysical Journal in 1935.
According to A.M. Garcia, NGC 2550A is a member of the NGC 2523 galaxy group (also known as LGG 154). This group contains five galaxies, including NGC 2441, NGC 2523, UGC 4041, and UGC 4199.
Supernovae
Two supernovae have been observed in NGC 2550A:
SN 2008P (type II, mag. 17.5) was discovered by Alessandro Dimai on 23 January 2008.
SN 2024ws (type II, mag. 17.8) was discovered by Kōichi Itagaki on 12 January 2024.
See also
List of NGC objects (2001–3000)
References
External links
2550A
023781
+12-08-043
04397
08230+7354
Camelopardalis
Astronomical objects discovered in 1935
spiral galaxies | NGC 2550A | Astronomy | 263 |
78,790,007 | https://en.wikipedia.org/wiki/Great%20Comet%20of%20390 | The Great Comet of 390 AD, also known as C/390 Q1 by its modern designation, was a comet that appeared very bright in the night sky. It was recorded prominently in ancient Chinese and Korean texts, particularly the Chén Shū.
Discovery and observations
No surviving contemporary records of the comet are known. The earliest mention of the comet is found in the Chén Shū text, compiled by the Chinese astronomer Li Chunfeng in 635 AD. The exact date of discovery is not known, but according to the Chén Shū the comet appeared as a "sparkling star" between the stars Castor and Pollux; based on Ichiro Hasegawa's 1979 orbital reconstruction, this was most likely in the early morning of 22 August 390. Korean observers may have seen the comet at the same time as the Chinese, but their records do not give a specific date.
According to Chinese sources, the comet then moved towards the constellations Lynx and Ursa Major on 28 August. It reached its peak magnitude of –1.0 on 8 September 390, now sporting a tail about 70–100 degrees in length. It was last seen by Chinese astronomers on 17 September 390.
References
External links
Non-periodic comets
Great comets | Great Comet of 390 | Astronomy | 258 |
26,980,274 | https://en.wikipedia.org/wiki/Stephen%20J.%20Lippard | Stephen James Lippard (born October 12, 1940) is the Arthur Amos Noyes Emeritus Professor of Chemistry at the Massachusetts Institute of Technology. He is considered one of the founders of bioinorganic chemistry, studying the interactions of nonliving substances such as metals with biological systems.
He is also considered a founder of metalloneurochemistry, the study of metal ions and their effects in the brain and nervous system. He has done pioneering work in understanding protein structure and synthesis, the enzymatic functions of methane monooxygenase (MMO), and the mechanisms of cisplatin anticancer drugs. His work has applications for the treatment of cancer, for bioremediation of the environment, and for the development of synthetic methanol-based fuels.
Education
Lippard was born in Pittsburgh, Pennsylvania, where he graduated from Taylor Allderdice High School in 1958. He earned his bachelor's degree from Haverford College in 1962. Originally interested in attending medical school, a talk on medicinal chemistry by visiting chemist Francis P.J. Dwyer inspired Lippard to focus on inorganic chemistry for his Ph.D. Lippard worked with F. Albert Cotton at MIT on rhenium oxo complexes and clusters. He completed the thesis Chemistry of the bromorhenates, receiving his Ph.D. from MIT in 1965.
Career
Lippard joined the faculty of Columbia University in 1966 as an assistant professor. He was promoted to associate professor with tenure in 1969 and full Professor in 1972.
In 1983, Lippard returned to MIT as a professor of chemistry. He has held the Arthur Amos Noyes Professorship of Chemistry at MIT since 1989.
He and his wife Judy were housemasters at MIT's MacGregor House from 1991 to 1995.
Lippard served as the head of the MIT chemistry department from 1995 to 2005. He is recognized for his scientific work and for his work with students, having mentored more than 100 PhDs. His students are active in a wide range of areas, in part because "He delivers a strong message that you need to go to the frontier of science and pick interesting problems." Forty percent of his graduate students have been women, who he gives "high-risk, high-reward projects".
Lippard has co-authored over 900 scholarly and professional articles, and co-authored the textbook Principles of Bioinorganic Chemistry (1994) with Jeremy Berg. He edited the book series Progress in Inorganic Chemistry from Volume 11 to 40. He was an Associate Editor of the journal Inorganic Chemistry from 1983 to 1989, and an Associate Editor of the Journal of the American Chemical Society from 1989 to 2013, as well as serving on the editorial boards of numerous other journals.
Research
Lippard's research activities are at the interface of biology and inorganic chemistry. Lippard focuses on understanding the physical and structural properties of metal complexes, their synthesis and reactions, and the involvement of metal ions in biological systems. The formation and breaking of molecular bonds underlie many biochemical transformations. Purely inorganic substances such as iron are often required in essential organic reactions, e.g. oxygen binding in the hemoglobin family. Lippard attempts to better understand the role of metal complexes in the physiology and pathology of existing biological systems, and to identify possible applications of metal ions in medical treatment.
He has made major contributions in a number of areas, including the development of platinum-based anticancer drugs such as the cisplatin family.
Another area of interest is the structure and function of methane monooxygenases and other enzymes that consume hydrocarbon greenhouse gases.
In metalloneurochemistry, he studies the molecular activity of metal ions in the brain and develops optical and MRI sensors for binding, tracking, and measuring metal ions as they interact with neurotransmitters and other biological signaling agents.
Cisplatin
Cisplatin is one of the most frequently used chemotherapy medications for many forms of cancer. It was discovered in the 1960s by Barnett Rosenberg, but its mechanism of action was not understood.
Early work in Lippard's lab on the interaction of metal complexes with nucleic acids led to the discovery of the first metallo-intercalators and eventually to the understanding of the mechanisms of cisplatin.
Lippard and his students examined sequences of DNA and RNA and incorporated sulfur atoms into the sugar-phosphate backbone, where they selectively bound mercury or platinum complexes to specific positions. Karen Jennette's discovery that sterically encumbered platinum complexes were more successful in binding to sulfur atoms in tRNA than mercury salts led researchers to propose that the platinum complexes intercalated between the double-stranded RNA's base pairs. It was the first experimental demonstration to show a metal complex binding to DNA by intercalation: platinum terpyridine complexes inserted between the DNA base pairs and unwound the double helix. Using fiber X-ray diffraction, Peter Bond and others were able to display the intercalated platinum complex and to confirm predictions that the spacing of intercalators in DNA base pairs would follow the neighbor exclusion rule.
This established the groundwork for subsequent work on intercalative binding. Jacqueline Barton and others have used electron microscopy to show that the covalent binding of platinum complexes changes the supercoiling of the DNA, "bending and unwinding" the double helix.
Further experiments have explored the mechanisms through which platinum drugs bind their biological targets and have led to insights into their anticancer activity. Important results include the identification of an intrastrand d(pGpG) cross-link as the major adduct on platinated single-stranded DNA, identification of the major adduct on double-stranded DNA, and the binding of high-mobility-group proteins to platinated DNA cross-links. Using X-ray crystallography and other techniques, Lippard and his coworkers have examined the mechanisms involved in binding cisplatin to DNA fragments, to better understand how cisplatin invades tumor cells and interferes with their activity. The interaction of cisplatin and DNA results in the formation of DNA-DNA interstrand and intrastrand crosslinks, which block DNA replication and transcription.
In addition to the intrastrand cross-links created by cisplatin, monofunctional metal complexes, which bind DNA at a single site, may also suggest possible cancer treatments.
A related line of research in Lippard's laboratory involves platinum blues. Jacqueline Barton was the first person to synthesize and structurally characterize a crystalline platinum blue, pyridone blue. Since then, extensive research has been done on the structure, properties, and reactions of such complexes.
Methane monooxygenases
Members of the Lippard laboratory studying macromolecular crystallography have explored the structure, mechanisms and activity of bacterial multicomponent monooxygenases.
Methane monooxygenases are enzymes that occur in bacteria called methanotrophs.
The primary function of this enzyme is the hydroxylation of methane to methanol as the first step in methane metabolism.
Amy Rosenzweig determined the protein X-ray structure of the soluble form of methane monooxygenase (MMO) as Lippard's graduate student. Lippard has used X-ray diffraction and a variety of other methods to study such compounds, greatly expanding understanding of their structure and function. MMO is vital to Earth's carbon cycle, and knowledge of its structure may help in developing clean technologies for methanol-based fuels. Methane monooxygenases may also be useful for bioremediation.
Iron complexes
Lippard and his students have also studied the synthesis of diiron complexes such as diiron hydroxylase to better understand the activities of metal atoms in biological molecules. They have developed model compounds for carboxylate-bridged diiron metalloenzymes which can be compared with corresponding biological forms. They have synthesized analogues of the diiron carboxylate cores of MMO and related carboxylate-bridged diiron proteins such as the dioxygen transporter hemerythrin. In 2010, Lippard received the Ronald Breslow Award for his work on nonheme iron proteins.
A notable result was the synthesis of a "molecular ferric wheel" by Kingsley Taft, the first wheel structure to be observed in self-assembled polymetallic chemistry.
A nearly perfect circle containing ten ferric ions, the structure spontaneously assembled in methanolic solutions of diiron(III) oxo complexes, which were being studied to better understand polyiron oxo protein cores like those of hemerythrin. Although no particular use is known for the ferric wheel, it and subsequent ring-shaped homometallic molecular clusters are of interest as a subclass of molecular magnets. Another novel complex was a "ferric triple-decker", containing three parallel triangular iron units and a triple bridge of six citrate ligands.
Metalloneurochemistry
Lippard is considered a founder of metalloneurochemistry, the study of metal ions at the molecular level as they affect the brain and the nervous system. Working at the interface of inorganic chemistry and neuroscience, he has devised fluorescent imaging agents for studying mobile zinc and nitric oxide and their effects on neurotransmission and other forms of biological signaling.
Companies
In 2011 Lippard founded Blend Therapeutics with Omid Cameron Farokhzad and Robert Langer, in Watertown, Massachusetts.
Blend focused on developing anti-cancer medicines for treatment of solid tumor cancers, with the goal of targeting cancerous tissue and leaving healthy cells alone. Its proprietary drug candidates included BTP-114, a cisplatin prodrug, and BTP-277, a targeting ligand designed to bond selectively to tumor cells. As of 2016, Blend split off into two separate companies: Tarveda and Placon, to follow these two approaches.
Placon Therapeutics is developing platinum-based cancer therapies. These include BTP-114, the first clinical candidate to use an albumin-conjugating, platinum-prodrug platform, based on Lippard's work. BTP-114 has been cleared for Phase 1 cancer-treatment clinical trials by the Food and Drug Administration (FDA).
Tarveda Therapeutics is developing BTP-277 (renamed PEN-221) and other Pentarins, a proprietary class of therapeutics which use peptide ligands to carry a target drug to tumor cells. Pentarins are nanoparticle drugs, similar to antibody-drug conjugates but smaller, that have been described as "mini-smart bombs". They are believed to be capable of penetrating dense tumor-based cancers.
Honors and awards
Lippard has been elected to the National Academy of Sciences, the Institute of Medicine (now the National Academy of Medicine), the American Academy of Arts and Sciences, and the American Philosophical Society. He is an honorary member of the Royal Irish Academy (2002), the Italian Chemical Society (1996), and the German National Academy of Sciences (Leopoldina) (2004), and is an external scientific member of the Max Planck Institute (1996) in Germany.
He has received honorary Doctorate of Science degrees from Haverford College, Texas A&M University, and the University of South Carolina, and an honorary Doctorate degree from Hebrew University of Jerusalem.
Lippard has received many awards throughout his career, most notably the 2004 National Medal of Science, the 2014 Priestley Medal of the American Chemical Society, its highest award, and the 2014 James R. Killian lectureship at MIT, given to one faculty member of the Institute per year. He is also the recipient of the Linus Pauling Medal, Theodore W. Richards Medal, and the William H. Nichols Medal. For his work in bioinorganic and biomimetic chemistry, Lippard received the Ronald Breslow Award and the Alfred Bader Award from the American Chemical Society (ACS). For research in inorganic and organometallic chemistry, as well as his role as an educator, he was honored with ACS awards for Inorganic Chemistry and for Distinguished Service in Inorganic Chemistry. In 2015, Lippard won the Benjamin Franklin Medal in Chemistry bestowed by The Franklin Institute. In 2016, he received the F. A. Cotton Medal for excellence in chemical research and the Welch Award in Chemistry from the Robert A. Welch Foundation. In 2017, he was chosen to receive the American Institute of Chemists Gold Medal.
Personal life
Stephen Lippard married Judith Ann Drezner in 1964. They have two sons, Josh and Alex, a daughter-in-law Sandra, and twin granddaughters, Lucy and Annie. Judy Lippard died on September 9, 2013. Stephen moved to Washington, DC, in 2017, where he remains active in science, writing, consulting, and grandfathering, while expanding his harpsichord playing and cooking skills.
References
External links
Lippard Lab
Stephen J. Lippard, MIT Chemistry Department profile
1940 births
21st-century American chemists
Bioinorganic chemists
Massachusetts Institute of Technology School of Science alumni
Haverford College alumni
Massachusetts Institute of Technology School of Science faculty
Columbia University faculty
Scientists from Pittsburgh
National Medal of Science laureates
Living people
Members of the United States National Academy of Sciences
Members of the American Philosophical Society
Fellows of the American Academy of Arts and Sciences
Members of the German National Academy of Sciences Leopoldina
Fellows of the American Association for the Advancement of Science
Taylor Allderdice High School alumni
Members of the Royal Irish Academy
Members of the National Academy of Medicine
Benjamin Franklin Medal (Franklin Institute) laureates | Stephen J. Lippard | Chemistry | 2,794 |
10,869 | https://en.wikipedia.org/wiki/Frequentist%20probability | Frequentist probability or frequentism is an interpretation of probability; it defines an event's probability as the limit of its relative frequency in infinitely many trials (the long-run probability).
Probabilities can be found (in principle) by a repeatable objective process (and are thus ideally devoid of opinion). The continued use of frequentist methods in scientific inference, however, has been called into question.
The development of the frequentist account was motivated by the problems and paradoxes of the previously dominant viewpoint, the classical interpretation. In the classical interpretation, probability was defined in terms of the principle of indifference, based on the natural symmetry of a problem, so, for example, the probabilities of dice games arise from the natural symmetric 6-sidedness of the cube. This classical interpretation stumbled at any statistical problem that has no natural symmetry for reasoning.
Definition
In the frequentist interpretation, probabilities are discussed only when dealing with well-defined random experiments. The set of all possible outcomes of a random experiment is called the sample space of the experiment. An event is defined as a particular subset of the sample space to be considered. For any given event, only one of two possibilities may hold: It occurs or it does not. The relative frequency of occurrence of an event, observed in a number of repetitions of the experiment, is a measure of the probability of that event. This is the core conception of probability in the frequentist interpretation.
A claim of the frequentist approach is that, as the number of trials increases, the change in the relative frequency will diminish. Hence, one can view a probability as the limiting value of the corresponding relative frequencies.
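In symbols, this is P(E) = lim n_E/n as n → ∞, where n_E counts the occurrences of event E among n trials. The simulation below (a fair coin with success probability 0.5 assumed) illustrates the relative frequency settling as n grows.

```python
# A simulation sketch of the limiting relative frequency of heads in
# repeated fair-coin trials.
import random

rng = random.Random(1)
heads = 0
for n in range(1, 100_001):
    heads += rng.random() < 0.5
    if n in (10, 100, 1_000, 10_000, 100_000):
        print(f"n={n:>6}  relative frequency = {heads / n:.4f}")
```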
Scope
The frequentist interpretation is a philosophical approach to the definition and use of probabilities; it is one of several such approaches. It does not claim to capture all connotations of the concept 'probable' in colloquial speech of natural languages.
As an interpretation, it is not in conflict with the mathematical axiomatization of probability theory; rather, it provides guidance for how to apply mathematical probability theory to real-world situations. It offers distinct guidance in the construction and design of practical experiments, especially when contrasted with the Bayesian interpretation. Whether this guidance is useful, or is apt to misinterpretation, has been a source of controversy, particularly when the frequency interpretation of probability is mistakenly assumed to be the only possible basis for frequentist inference. For example, a list of misinterpretations of the meaning of p-values accompanies the article on p-values; controversies are detailed in the article on statistical hypothesis testing. The Jeffreys–Lindley paradox shows how different interpretations, applied to the same data set, can lead to different conclusions about the 'statistical significance' of a result.
As Feller notes:
History
The frequentist view may have been foreshadowed by Aristotle, who wrote in Rhetoric that "the probable is that which for the most part happens".
Poisson (1837) clearly distinguished between objective and subjective probabilities.
Soon thereafter a flurry of nearly simultaneous publications by Mill, Ellis (1843 and 1854), Cournot (1843), and Fries introduced the frequentist view; Venn (1866, 1876, 1888) provided a thorough exposition two decades later. These were further supported by the publications of Boole and Bertrand. By the end of the 19th century the frequentist interpretation was well established and perhaps dominant in the sciences. The following generation established the tools of classical inferential statistics (significance testing, hypothesis testing, and confidence intervals), all based on frequentist probability.
Alternatively, Bernoulli understood the concept of frequentist probability and published a critical proof (the weak law of large numbers) posthumously (Bernoulli, 1713). He is also credited with some appreciation for subjective probability (prior to and without Bayes' theorem). Gauss and Laplace used frequentist (and other) probability in derivations of the least squares method a century later, a generation before Poisson.
Laplace considered the probabilities of testimonies, tables of mortality, judgments of tribunals, etc. which are unlikely candidates for classical probability. In this view, Poisson's contribution was his sharp criticism of the alternative "inverse" (subjective, Bayesian) probability interpretation. Any criticism by Gauss or Laplace was muted and implicit. (However, note that their later derivations of least squares did not use inverse probability.)
Major contributors to "classical" statistics in the early 20th century included Fisher, Neyman, and Pearson. Fisher contributed to most of statistics and made significance testing the core of experimental science, although he was critical of the frequentist concept of "repeated sampling from the same population"; Neyman formulated confidence intervals and contributed heavily to sampling theory; Neyman and Pearson paired in the creation of hypothesis testing. All valued objectivity, so the best interpretation of probability available to them was frequentist.
All were suspicious of "inverse probability" (the available alternative) with prior probabilities chosen by using the principle of indifference. Fisher said, "... the theory of inverse probability is founded upon an error, [referring to Bayes theorem] and must be wholly rejected."
While Neyman was a pure frequentist, Fisher's views of probability were unique: both Fisher and Neyman had nuanced views of probability. Von Mises offered a combination of mathematical and philosophical support for frequentism in the era.
Etymology
According to the Oxford English Dictionary, the term frequentist was first used by M.G. Kendall in 1949, to contrast with Bayesians, whom he called non-frequentists.
Kendall observed
3. ... we may broadly distinguish two main attitudes. One takes probability as 'a degree of rational belief', or some similar idea...the second defines probability in terms of frequencies of occurrence of events, or by relative proportions in 'populations' or 'collectives';
...
12. It might be thought that the differences between the frequentists and the non-frequentists (if I may call them such) are largely due to the differences of the domains which they purport to cover.
...
I assert that this is not so ... The essential distinction between the frequentists and the non-frequentists is, I think, that the former, in an effort to avoid anything savouring of matters of opinion, seek to define probability in terms of the objective properties of a population, real or hypothetical, whereas the latter do not. [emphasis in original]
"The Frequency Theory of Probability" was used a generation earlier as a chapter title in Keynes (1921).
The historical sequence:
Probability concepts were introduced and much of the mathematics of probability derived (prior to the 20th century)
classical statistical inference methods were developed
the mathematical foundations of probability were solidified and current terminology was introduced (all in the 20th century).
The primary historical sources in probability and statistics did not use the current terminology of classical, subjective (Bayesian), and frequentist probability.
Alternative views
Probability theory is a branch of mathematics. While its roots reach centuries into the past, it reached maturity with the axioms of Andrey Kolmogorov in 1933. The theory focuses on the valid operations on probability values rather than on the initial assignment of values; the mathematics is largely independent of any interpretation of probability.
Applications and interpretations of probability are considered by philosophy, the sciences, and statistics, all of which are interested in the extraction of knowledge from observations (inductive reasoning). There are a variety of competing interpretations, all of which have problems. The frequentist interpretation does resolve difficulties with the classical interpretation, such as any problem where the natural symmetry of outcomes is not known. It does not address other issues, such as the Dutch book.
Classical probability assigns probabilities based on physical idealized symmetry (dice, coins, cards). The classical definition is at risk of circularity: Probabilities are defined by assuming equality of probabilities. In the absence of symmetry the utility of the definition is limited.
Subjective (Bayesian) probability (a family of competing interpretations) considers degrees of belief: All practical "subjective" probability interpretations are so constrained to rationality as to avoid most subjectivity. Real subjectivity is repellent to some definitions of science which strive for results independent of the observer and analyst. Other applications of Bayesianism in science (e.g. logical Bayesianism) embrace the inherent subjectivity of many scientific studies and objects and use Bayesian reasoning to place boundaries and context on the influence of subjectivities on all analysis. The historical roots of this concept extended to such non-numeric applications as legal evidence.
Propensity probability views probability as a causative phenomenon rather than a purely descriptive or subjective one.
Footnotes
Citations
References
Probability interpretations | Frequentist probability | Mathematics | 1,794 |
58,630,568 | https://en.wikipedia.org/wiki/Knee-chest%20position | The knee-chest position or genupectoral position is a position used in a number of medical situations including gynecological examination and surgery, lumbar spine surgery, repair of vesico-vaginal fistula (VVF) by Sims's saucerisation procedure, labor and delivery for which it is recommended in those with a cord prolapse until delivery can occur, and administering enemas.
References
Surgery
Gynaecology
Human positions | Knee-chest position | Biology | 98 |
78,668,488 | https://en.wikipedia.org/wiki/NGC%204632 | NGC4632 is a spiral galaxy in the constellation of Virgo. Its velocity with respect to the cosmic microwave background for is , which corresponds to a Hubble distance of . However, 15 non-redshift measurements give a much closer distance of . It was discovered by German-British astronomer William Herschel on 22 February 1784.
Polar ring galaxy
It was discovered in 2023 that the galaxies NGC 4632 and NGC 6156 are each surrounded by a ring of cold hydrogen gas orbiting at 90 degrees to the stellar disk. These were the first polar ring galaxies discovered through radio observations, made as part of the WALLABY astronomical survey.
NGC 4666 Group
According to A. M. Garcia, NGC 4632 is a member of the NGC 4666 galaxy group (also known as LGG 299). This group has 3 members, including NGC 4666 and NGC 4668.
Supernova
One supernova has been observed in NGC 4632:
SN 1946B (type II, mag. 15.7) was discovered by Edwin Hubble in May 1946.
See also
List of NGC objects (4001–5000)
References
External links
4632
042689
07870
+00-32-038
12399+0011
Virgo (constellation)
17840222
Discoveries by William Herschel
Spiral galaxies | NGC 4632 | Astronomy | 275 |
69,436,351 | https://en.wikipedia.org/wiki/Gliese%20367%20b | Gliese 367 b, formally named Tahay, is a sub-Earth exoplanet orbiting the red dwarf star Gliese 367 (GJ 367), from Earth in the constellation of Vela. The exoplanet takes just 7.7 hours to orbit its star, one of the shortest orbits of any planet.
, Gliese 367 b is the smallest known exoplanet within 10 parsecs of the Solar System, and the second-least massive after Proxima Centauri d.
Nomenclature
In August 2022, this planet and its host star were included among 20 systems to be named by the third NameExoWorlds project. The approved names, proposed by a team from Chile, were announced in June 2023. Gliese 367 b is named Tahay and its host star is named Añañuca, after names for the endemic Chilean wildflowers Calydorea xiphioides and Phycella cyrtanthoides. Calydorea xiphioides only blooms for between 7 and 8 hours each year, alluding to the planet's short orbital period of 7.7 hours.
Properties
Due to its close orbit, the exoplanet is bombarded with over 500 times the radiation that Earth receives from the Sun. Dayside temperatures on GJ 367b are around .
Gliese 367 b is presumably tidally locked, and any atmosphere, if one ever existed, would have boiled away due to the planet's extreme temperatures. Observations from JWST provide evidence that the planet indeed lacks an atmosphere and that its albedo is low. The absence of day-night heat recirculation suggests significant volatile loss, shaping the planet's current atmospheric and surface properties. GJ 367b's exceptional density has prompted competing hypotheses about its origin, from mantle evaporation to Mercury-like collisions, and offers insight into the formation and atmospheric evolution of small rocky planets orbiting M dwarfs.
The core of GJ 367b is likely composed of iron and nickel, similar to Mercury's core, and is extremely dense, making up about 91% of the planet's mass; the entire planet has a total density of , about twice that of Earth. The planet may have been stripped of its outer silicate layers, like Mercury and other iron planets, by collisions or by evaporation under the extreme stellar radiation.
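As a rough check of the density claim, bulk density follows from mass and radius alone; the mass and radius in the sketch below are assumed literature-style values for GJ 367b rather than figures from this article.

```python
# Bulk density = mass / volume of a sphere, compared against Earth's 5.5 g/cm^3.
import math

M_EARTH, R_EARTH = 5.972e24, 6.371e6      # kg, m
m = 0.63 * M_EARTH                        # assumed planet mass
r = 0.70 * R_EARTH                        # assumed planet radius
rho = m / (4.0 / 3.0 * math.pi * r**3)    # kg/m^3
print(rho / 1000.0, "g/cm^3")             # ~10, roughly twice Earth's density
```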
References
Exoplanets in the Gliese Catalog
Vela (constellation)
Exoplanets discovered in 2021
Transiting exoplanets
Exoplanets detected by radial velocity
Exoplanets discovered by TESS
Exoplanets with proper names
Sub-Earth exoplanets | Gliese 367 b | Astronomy | 580 |
44,981,423 | https://en.wikipedia.org/wiki/Central%20Anatolia%20region%20%28statistical%29 | The Central Anatolia Region (Turkish: Orta Anadolu Bölgesi) (TR7) is a statistical region in Turkey.
Subregions and provinces
Kırıkkale Subregion (TR71)
Kırıkkale Province (TR711)
Aksaray Province (TR712)
Niğde Province (TR713)
Nevşehir Province (TR714)
Kırşehir Province (TR715)
Kayseri Subregion (TR72)
Kayseri Province (TR721)
Sivas Province (TR722)
Yozgat Province (TR723)
Age groups
Internal immigration
State register location of Central Anatolia residents
Marital status of 15+ population by gender
Education status of 15+ population by gender
See also
NUTS of Turkey
References
External links
TURKSTAT
Sources
ESPON Database
Statistical regions of Turkey | Central Anatolia region (statistical) | Mathematics | 178 |
33,794,361 | https://en.wikipedia.org/wiki/Antonio%20Nieto | Antonio Nieto (born September 1967) is an Earth Systems and Mining engineer.
Career
Nieto is director of the FLSmidth Mining and Minerals Technology and Research Center, located in Salt Lake City, USA. Previously, he served as professor and JCI Chair in Minerals Resources and Reserves at the School of Mining Engineering at the University of the Witwatersrand, Johannesburg, and as associate professor at Penn State, teaching courses on mining engineering, earth systems, engineering economics, computational modeling, sampling methods, and geostatistics. Before his academic and research career, Nieto worked as a mining engineer in underground and surface mining.
Nieto graduated as a mining engineer from the Guanajuato School of Mines in 1990. He holds a master's degree (1997) in geostatistics from the École des Mines de Paris (ParisTech), and an MS (1995) and PhD (2002) in earth systems engineering from the Colorado School of Mines. In 2017 he was named a member of Mexico's National Academy of Engineering.
Nieto specializes in ore resource and reserve estimation and mining operations optimization. He is often interviewed by news networks and weekly news magazines such as Newsweek on strategic minerals, mine safety, and technology topics.
Nieto has published more than 80 technical papers.
See also
School of Mines
References
External links
https://web.archive.org/web/20151222081411/http://www.antonionieto.com/
http://www.antonionieto.com/publications.html
Mining engineers
1967 births
Living people
Pennsylvania State University faculty | Antonio Nieto | Engineering | 338 |
60,030,692 | https://en.wikipedia.org/wiki/Packaging%20waste | Packaging waste, the part of the waste that consists of packaging and packaging material, is a major part of the total global waste, and the major part of the packaging waste consists of single-use plastic food packaging, a hallmark of throwaway culture. Notable examples for which the need for regulation was recognized early, are "containers of liquids for human consumption", i.e. plastic bottles and the like. In Europe, the Germans top the list of packaging waste producers with more than 220 kilos of packaging per capita.
Background
The United States Environmental Protection Agency (EPA) defines containers and packaging as products that are assumed to be discarded the same year the products they contain are purchased. Packaging products make up the majority of this solid waste, with an estimated 77.9 million tons generated in 2015 (29.7 percent of total generation). Packaging comes in many shapes and forms, ranging from Amazon boxes to soda cans, and is used to store, transport, contain, and protect goods. Packaging materials include glass, aluminum, steel, paper, cardboard, plastic, wood, and other miscellaneous materials. Packaging waste is a dominant contributor to waste worldwide, responsible for about half of global waste.
The recycling rate in 2015 for containers and packaging was 53 percent. In the same year, 7.2 million tons of containers and packaging were combusted with energy recovery (21.4 percent of total combustion with energy recovery), and landfills received 29.4 million tons (21.4 percent of total landfilling).
Packaging waste pollution negatively affects life across the planet; marine and land animals can suffocate on, or become entangled in, discarded packaging. The problem is most acute in low-income countries that lack efficient waste management systems, and these countries are among the main sources of global ocean pollution. "Litter louts", individuals who lack the motivation to recycle and instead discard their waste anywhere, are also major contributors, especially in high-income nations where recycling facilities are available. The largest accumulation of solid waste, much of it packaging, is the Great Pacific Garbage Patch, which stretches from the West Coast of North America to Japan. Packaging waste that eventually reaches the ocean often travels via lakes, streams, and sewage.
Possible solutions for reducing packaging waste range from minimisation of packaging material to a zero waste strategy (package-free products); the main obstacle is a lack of motivation to make a change. Effective ways to reduce packaging pollution include banning single-use plastics, raising social awareness through education, promoting eco-friendly alternatives, applying public pressure, organizing voluntary clean-ups, and adopting reusable or biodegradable bags.
Overpackaging
The Institute of Packaging Professionals defines overpackaging as "a condition where the methods and materials used to package an item exceed the requirements for adequate containment, protection, transport, and sale." Overpackaging is an opportunity for source reduction, reducing waste by proper package design and practice.
A classic example of a wasteful package design is a breakfast cereal box. This is typically a folding carton enclosing a plastic bag of cereal. Cartons are typically tall and wide but very thin. This has an inefficient material-to-volume ratio; it is wasteful. Structural packaging engineers are aware of the opportunity to save packaging costs, materials, and waste but marketers find benefit in a "billboard" style package for advertising and graphics. An optimized folding box would use much less paperboard for the same volume of cereal, but with reduced surface area for graphics. The use of a plastic bag without an enclosing box would use less material per unit of cereal.
Slackfill packaging is that which is intentionally under-filled, resulting in non-functional headspace. Packagers doing this not only risk charges of deceptive packaging but are using excessive packaging: packaging waste.
With fragile items such as consumer electronics, engineers try to match the fragility of the product with the expected stresses of distribution handling. Package cushioning is used to help ensure safe delivery of the product. With overpackaging, excessive cushion and a larger corrugated box are used: wasteful packaging. Conversely, underpackaging would be the use of insufficient cushioning. Excessive product waste caused by underpackaging may be worse for the environment than the waste of the package.
Sometimes packaging is designed to protect its product for controlled distribution to a retail store. With online shopping or E-commerce, however, items packed for retail sale may be shipped individually by Fulfillment houses by package delivery or small parcel carriers. Retail packages are frequently packed into a larger corrugated box for shipment. Often these secondary boxes are much larger than needed, thus use void-fill to immobilize the contents. The rapid growth of e-commerce has increased packaging challenges, as items often require additional protective materials and oversized boxes to ensure safe transportation, which further exacerbates packaging waste. This can have the appearance of gross overpackaging but is sometimes necessary. If the product packager designed all packaging to meet the requirements of individual shipment, then the portion delivered to a retail store would have excessive packaging. Sometimes two levels of packaging are needed for separate distribution, resulting in production inefficiencies.
Types of packaging wastes
Glass containers
Bottles and jars for drinks and for storing foods or juices are examples of glass containers. The EPA estimated that 9.1 million tons of glass containers were generated in 2015, or 3.5 percent of municipal solid waste (MSW). About 70 percent of glass consumption is used for containers and packaging. At least 13.2 percent of glass containers and packaging were combusted with energy recovery, and about 53 percent went to landfill.
Aluminum containers and packaging
Aluminum container and packaging waste usually comes from beverage cans, but foil is another contributor. About 25 percent of aluminum consumption is used for packaging. Using Aluminum Association data, it has been calculated that at least 1.8 million tons of aluminum packaging were generated in 2015, or 0.8 percent of MSW. Of this, about 670,000 tons of aluminum containers and packaging were recycled; the recycling rate for aluminum cans was about 54.9 percent. About 50.6 percent ended up in landfill.
Steel containers and packaging
Steel containers and packaging mostly take the form of cans, along with items such as steel barrels. Only about 5 percent of total world steel consumption is used for packaging, making steel the least-wasted and most-recycled packaging material. Steel packaging totaled 2.2 million tons, or 0.9 percent of MSW generated, in 2015. According to the Steel Recycling Institute, an estimated 1.6 million tons (73 percent) of steel packaging were recycled, while about 5.4 percent was combusted with energy recovery and 21.6 percent was landfilled.
Paper and paperboard containers and packaging
Corrugated boxes made up the largest share of paper and paperboard packaging generated within MSW in 2015, at 31.3 million tons (11.3 percent of the total). They were also the most recycled, with 28.9 million tons (92.3 percent) recycled in 2015.
Combustion accounted for 0.5 million tons, and landfills received 1.9 million tons. Besides corrugated boxes, cartons, bags, sacks, wrapping papers, and other boxes such as those used for shoes or cosmetics are examples of paper and paperboard containers and packaging. In total, 39.9 million tons of paper and paperboard containers and packaging, or 15.1 percent of MSW, were generated in 2015; about 78.2 percent was recycled, 4.3 percent was combusted with energy recovery, and 17.6 percent was landfilled.
Wood packaging
Wood packaging is anything made out of wood used for packaging purposes (e.g., wood crates, wood chips, boards, and planks). Wood packaging is still widely used today for transporting goods. According to EPA data drawn from the Virginia Polytechnic Institute and the United States Department of Agriculture's Forest Service Southern Research Station, 9.8 million tons (3.7 percent of total MSW) of wood packaging were produced in 2015. In the same year, 2.7 million tons were recycled. An estimated 14.3 percent of the wood containers and packaging waste generated was combusted with energy recovery, while 58.6 percent went to landfill.
Plastic containers and packaging
Plastic containers and packaging can be found in plastic bottles, supermarket bags, milk and water jugs, and more. The EPA used data from the American Chemistry Council to estimate that 14.7 million tons (5.5 percent of MSW generation) of plastic containers and packaging were created in 2015. The overall amount recycled was about 2.2 million tons (14.6 percent). In addition, 16.8 percent was combusted with energy recovery and 68.6 percent went straight into landfill. Most plastics are made from polyethylene terephthalate (PET), high-density polyethylene (HDPE), low-density polyethylene (LDPE), polyvinyl chloride (PVC), polystyrene (PS), polypropylene (PP), and other resins. The recycling rate for PET bottles and jars was 29.9 percent (890,000 tons), and for HDPE water and milk jugs 30.3 percent (230,000 tons).
Role of packaging waste in pollution
Litter
Litter consists largely of packaging waste. Besides disfiguring the landscape, it poses a health hazard to various life forms. Packaging materials such as glass and plastic bottles are the main constituents of litter. Litter also has a large impact on the marine environment, where animals become caught in or accidentally consume plastic packaging.
Air pollution
The production of packaging material is a major source of air pollution worldwide. Some emissions come from accidental fires or activities that involve incineration of packaging waste, which releases vinyl chloride, CFCs, and hexane. More directly, emissions can originate in landfill sites, which can release carbon dioxide and methane. Most production-related emissions come from steel and glass packaging manufacturing.
Water pollution
Packaging waste can come from land-based or marine sources. The largest single accumulation of such water pollution is the Great Pacific Garbage Patch, which stretches from the west coast of North America to Japan. Rivers that catch packaging materials eventually carry them to the oceans. Globally, about 80 percent of packaging waste in the ocean comes from land-based sources and 20 percent from marine sources. The 20 percent attributed to marine sources is dominated by inputs from the rivers of China — listed from smallest to largest contributor: the Hanjiang, Zhujiang, Dong, Huangpu, Xi, and Yangtze rivers — with the remaining contributions coming from rivers of Africa and Southeast Asia.
Impacts on marine species and wildlife species
Most marine species and wildlife species suffer from the following:
Entanglement: At least 344 species have been recorded entangled in packaging waste, particularly plastics. Most of the victims are marine species such as whales, seabirds, turtles, and fish.
Ingestion: 233 marine species have been recorded to have consumed plastic packaging waste, whether unintentionally, intentionally, or indirectly. Again, the victims include whales, fish, mammals, seabirds, and turtles. Eating plastic packaging waste can greatly reduce stomach capacity, leading to poor appetite and a false sense of satiation. Worse, the size of the ingested material scales with the size of the organism: plankton and small fish consume microplastics, while larger animals can ingest items as large as cigarette boxes. Plastic can also obstruct or perforate the gut, cause ulcerative lesions, or cause gastric rupture. This can ultimately lead to death.
Interaction: Contact between animals and packaging waste includes collisions, obstructions, abrasions, or use of the waste as substrate.
Impacts on human health
Bisphenol A (BPA), styrene and benzene can be found in certain packaging waste. BPA can affect the hearts of women, permanently damage the DNA of mice, and appears to be entering the human body from a variety of unknown sources. Studies in the Journal of the American Medical Association show that higher bisphenol A levels were significantly associated with heart disease, diabetes, and abnormally high levels of certain liver enzymes. Toxins such as these are found within our food chains: when fish or plankton consume microplastics, the plastics can enter the human food chain. Microplastics have also been found in common table salt and in both tap and bottled water. Microplastics are dangerous because the toxins they carry can affect the human body's nervous, respiratory, and reproductive systems.
Actions to reduce packaging wastes
Waste management system improvements
Segregation of waste at sources: plastics, organic, metals, paper, etc.
Effective collection of the segregated waste, transport and safe storage
Cost-effective recycling of materials (including plastics)
Less landfilling and dumping in the environment
Promotion of eco-friendly alternatives
Governments working with industries could support the development and promotion of sustainable alternatives in order to phase out single-use plastics progressively. If governments were to introduce economic incentives, supporting projects which upscale or recycle single-use items and stimulating the creation of micro-enterprises, they could contribute to the uptake of eco-friendly alternatives to single-use plastics.
Social awareness and education
Social awareness and education also help reduce packaging waste. Media campaigns give individuals and groups quick access to information, letting the public know what is happening in the world and how they can contribute to fixing packaging waste problems. Schools are also effective venues for spreading factual knowledge about the consequences of increasing packaging waste and for providing ways for individuals to help keep the planet clean. Public awareness strategies can include various activities designed to persuade and educate, focusing on the reuse and recycling of resources and encouraging responsible use and minimization of waste generation and litter.
Voluntary actions to reduce packaging waste
Reuse bags
Bring reusable bags to supermarkets
Repair broken objects instead of throwing them away
Exchange packaging materials on BoxGiver
Recycle
Clean up in coastal areas
Do community service to clean up packaging waste from parks and streets
See also
References
Packaging
Waste | Packaging waste | Physics | 3,063 |
2,212,867 | https://en.wikipedia.org/wiki/Detailed%20balance | The principle of detailed balance can be used in kinetic systems which are decomposed into elementary processes (collisions, or steps, or elementary reactions). It states that at equilibrium, each elementary process is in equilibrium with its reverse process.
History
The principle of detailed balance was explicitly introduced for collisions by Ludwig Boltzmann. In 1872, he proved his H-theorem using this principle. The arguments in favor of this property are founded upon microscopic reversibility.
Five years before Boltzmann, James Clerk Maxwell used the principle of detailed balance for gas kinetics with the reference to the principle of sufficient reason. He compared the idea of detailed balance with other types of balancing (like cyclic balance) and found that "Now it is impossible to assign a reason" why detailed balance should be rejected (pg. 64).
In 1901, Rudolf Wegscheider introduced the principle of detailed balance for chemical kinetics. In particular, he demonstrated that the irreversible cycles $A_1 \to A_2 \to \cdots \to A_n \to A_1$ are impossible and found explicitly the relations between kinetic constants that follow from the principle of detailed balance. In 1931, Lars Onsager used these relations in his works, for which he was awarded the 1968 Nobel Prize in Chemistry.
Albert Einstein in 1916 used the principle of detailed balance as a background for his quantum theory of emission and absorption of radiation.
The principle of detailed balance has been used in Markov chain Monte Carlo methods since their invention in 1953. In particular, in the Metropolis–Hastings algorithm and in its important particular case, Gibbs sampling, it is used as a simple and reliable condition to provide the desirable equilibrium state.
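To make the connection concrete, the following is a minimal Python sketch (illustrative, not from the cited sources; function names and the Gaussian example are our own choices) of the Metropolis algorithm with a symmetric proposal. The acceptance rule is chosen exactly so that the resulting chain satisfies detailed balance with respect to the target distribution.

```python
import math
import random

def metropolis(log_pi, proposal, x0, n_steps):
    """Metropolis sampler with a symmetric proposal q(y|x) = q(x|y).

    Accepting a move with probability min(1, pi(y)/pi(x)) enforces
    detailed balance: pi(x) P(x -> y) = pi(y) P(y -> x) for all x, y.
    """
    x, samples = x0, []
    for _ in range(n_steps):
        y = proposal(x)
        # Compare log pi(y) - log pi(x) against log of a uniform draw.
        if math.log(random.random()) < log_pi(y) - log_pi(x):
            x = y  # accept; otherwise keep the current state
        samples.append(x)
    return samples

# Illustration: sample a standard normal with a random-walk proposal.
out = metropolis(lambda x: -0.5 * x * x,
                 lambda x: x + random.gauss(0.0, 0.5),
                 x0=0.0, n_steps=10_000)
```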
Now, the principle of detailed balance is a standard part of university courses in statistical mechanics, physical chemistry, and chemical and physical kinetics.
Microscopic background
The microscopic "reversing of time" turns at the kinetic level into the "reversing of arrows": the elementary processes transform into their reverse processes. For example, the reaction
transforms into
and conversely. (Here, are symbols of components or states, are coefficients). The equilibrium ensemble should be invariant with respect to this transformation because of microreversibility and the uniqueness of thermodynamic equilibrium. This leads us immediately to the concept of detailed balance: each process is equilibrated by its reverse process.
This reasoning is based on three assumptions:
The microscopic dynamics do not change under time reversal;
Equilibrium is invariant under time reversal;
The macroscopic elementary processes are microscopically distinguishable. That is, they represent disjoint sets of microscopic events.
Any of these assumptions may be violated. For example, Boltzmann's collision can be represented as $\mathrm{A}_v + \mathrm{A}_w \to \mathrm{A}_{v'} + \mathrm{A}_{w'}$, where $\mathrm{A}_v$ is a particle with velocity v. Under time reversal, $\mathrm{A}_v$ transforms into $\mathrm{A}_{-v}$. Therefore, the collision is transformed into the reverse collision by the PT transformation, where P is the space inversion and T is the time reversal. Detailed balance for Boltzmann's equation requires PT-invariance of the collisions' dynamics, not just T-invariance. Indeed, after the time reversal the collision $\mathrm{A}_v + \mathrm{A}_w \to \mathrm{A}_{v'} + \mathrm{A}_{w'}$ transforms into $\mathrm{A}_{-v'} + \mathrm{A}_{-w'} \to \mathrm{A}_{-v} + \mathrm{A}_{-w}$. For detailed balance we need transformation into $\mathrm{A}_{v'} + \mathrm{A}_{w'} \to \mathrm{A}_v + \mathrm{A}_w$.
For this purpose, we need to apply additionally the space reversal P. Therefore, for detailed balance in Boltzmann's equation not T-invariance but PT-invariance is needed.
Equilibrium may be not T- or PT-invariant even if the laws of motion are invariant. This non-invariance may be caused by the spontaneous symmetry breaking. There exist nonreciprocal media (for example, some bi-isotropic materials) without T and PT invariance.
If different macroscopic processes are sampled from the same elementary microscopic events then macroscopic detailed balance may be violated even when microscopic detailed balance holds.
Now, after almost 150 years of development, the scope of validity and the violations of detailed balance in kinetics seem to be clear.
Detailed balance
Reversibility
A Markov process is called a reversible Markov process or reversible Markov chain if there exists a positive stationary distribution π that satisfies the detailed balance equations
$$\pi_i P_{ij} = \pi_j P_{ji},$$
where $P_{ij}$ is the Markov transition probability from state i to state j, i.e. $P_{ij} = \Pr(X_{t+1} = j \mid X_t = i)$, and $\pi_i$ and $\pi_j$ are the equilibrium probabilities of being in states i and j, respectively. When $\Pr(X_t = i) = \pi_i$ for all i, this is equivalent to the joint probability matrix $\Pr(X_t = i, X_{t+1} = j)$ being symmetric in i and j; or symmetric in t and t + 1.
The definition carries over straightforwardly to continuous variables, where π becomes a probability density, and $P(s', s)$ a transition-kernel probability density from state s′ to state s:
$$\pi(s')\,P(s', s) = \pi(s)\,P(s, s').$$
The detailed balance condition is stronger than that required merely for a stationary distribution, because there are Markov processes with stationary distributions that do not have detailed balance.
Transition matrices that are symmetric ($P_{ij} = P_{ji}$, or $P(s', s) = P(s, s')$ in the continuous case) always have detailed balance. In these cases, a uniform distribution over the states is an equilibrium distribution.
Kolmogorov's criterion
Reversibility is equivalent to Kolmogorov's criterion: the product of transition rates over any closed loop of states is the same in both directions.
For example, it implies that, for all a, b and c,
$$P_{ab}\,P_{bc}\,P_{ca} = P_{ac}\,P_{cb}\,P_{ba}.$$
For example, if we have a Markov chain with three states such that only the transitions $A \to B \to C \to A$ have positive probability, then the chain violates Kolmogorov's criterion.
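The criterion is easy to check numerically. The sketch below (helper names are our own, not from the source) computes a stationary distribution of a transition matrix and tests the detailed balance equations, using the three-state one-way cycle above as a counterexample.

```python
import numpy as np

def stationary_distribution(P):
    """Left eigenvector of P for eigenvalue 1, normalised to sum to 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

def has_detailed_balance(P, tol=1e-10):
    """Test pi_i P_ij == pi_j P_ji for the stationary distribution pi."""
    pi = stationary_distribution(P)
    flow = pi[:, None] * P            # flow[i, j] = pi_i P_ij
    return np.allclose(flow, flow.T, atol=tol)

# One-way cycle A -> B -> C -> A: the two loop products differ (1 vs 0),
# so Kolmogorov's criterion fails and the chain is not reversible.
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
a, b, c = 0, 1, 2
print(P[a, b] * P[b, c] * P[c, a], P[a, c] * P[c, b] * P[b, a])  # 1.0 0.0
print(has_detailed_balance(P))                                   # False
```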
Closest reversible Markov chain
For continuous systems with detailed balance, it may be possible to continuously transform the coordinates until the equilibrium distribution is uniform, with a transition kernel which then is symmetric. In the case of discrete states, it may be possible to achieve something similar by breaking the Markov states into appropriately-sized degenerate sub-states.
For a Markov transition matrix and a stationary distribution, the detailed balance equations may not be valid. However, it can be shown that a unique Markov transition matrix exists which is closest according to the stationary distribution and a given norm. The closest matrix can be computed by solving a quadratic-convex optimization problem.
Detailed balance and entropy increase
For many systems of physical and chemical kinetics, detailed balance provides sufficient conditions for the strict increase of entropy in isolated systems. For example, the famous Boltzmann H-theorem states that, according to the Boltzmann equation, the principle of detailed balance implies positivity of entropy production. The Boltzmann formula (1872) for entropy production in rarefied gas kinetics with detailed balance served as a prototype of many similar formulas for dissipation in mass action kinetics and generalized mass action kinetics with detailed balance.
Nevertheless, the principle of detailed balance is not necessary for entropy growth. For example, in the linear irreversible cycle $A_1 \to A_2 \to A_3 \to A_1$, entropy production is positive but the principle of detailed balance does not hold.
Thus, the principle of detailed balance is a sufficient but not necessary condition for entropy increase in Boltzmann kinetics. These relations between the principle of detailed balance and the second law of thermodynamics were clarified in 1887 when Hendrik Lorentz objected to the Boltzmann H-theorem for polyatomic gases. Lorentz stated that the principle of detailed balance is not applicable to collisions of polyatomic molecules.
Boltzmann immediately invented a new, more general condition sufficient for entropy growth. Boltzmann's condition holds for all Markov processes, irrespective of time-reversibility. Later, entropy increase was proved for all Markov processes by a direct method. These theorems may be considered as simplifications of the Boltzmann result. Later, this condition was referred to as the "cyclic balance" condition (because it holds for irreversible cycles), the "semi-detailed balance", or the "complex balance". In 1981, Carlo Cercignani and Maria Lampis proved that the Lorentz arguments were wrong and that the principle of detailed balance is valid for polyatomic molecules. Nevertheless, the extended semi-detailed balance conditions invented by Boltzmann in this discussion remain a remarkable generalization of detailed balance.
Wegscheider's conditions for the generalized mass action law
In chemical kinetics, the elementary reactions are represented by the stoichiometric equations
$$\sum_i \alpha_{ri} \mathrm{A}_i \to \sum_j \beta_{rj} \mathrm{A}_j \qquad (r = 1, \ldots, m),$$
where $\mathrm{A}_i$ are the components and $\alpha_{ri}, \beta_{rj} \geq 0$ are the stoichiometric coefficients. Here, the reverse reactions with positive constants are included in the list separately. We need this separation of direct and reverse reactions to apply later the general formalism to systems with some irreversible reactions. The system of stoichiometric equations of elementary reactions is the reaction mechanism.
The stoichiometric matrix is $\Gamma = (\gamma_{ri})$, $\gamma_{ri} = \beta_{ri} - \alpha_{ri}$ (gain minus loss). This matrix need not be square. The stoichiometric vector $\gamma_r$ is the rth row of $\Gamma$, with coordinates $\gamma_{ri} = \beta_{ri} - \alpha_{ri}$.
According to the generalized mass action law, the reaction rate for an elementary reaction is
$$w_r = k_r \prod_i a_i^{\alpha_{ri}},$$
where $a_i \geq 0$ is the activity (the "effective concentration") of $\mathrm{A}_i$.
The reaction mechanism includes reactions with the reaction rate constants $k_r > 0$. For each r the following notations are used: $k_r^+ = k_r$; $w_r^+ = w_r$; $k_r^-$ is the reaction rate constant for the reverse reaction if it is in the reaction mechanism and 0 if it is not; $w_r^-$ is the reaction rate for the reverse reaction if it is in the reaction mechanism and 0 if it is not. For a reversible reaction, $K_r = k_r^+ / k_r^-$ is the equilibrium constant.
The principle of detailed balance for the generalized mass action law is: For given values $k_r$ there exists a positive equilibrium $a_i^{\mathrm{eq}} > 0$ that satisfies detailed balance, that is, $w_r^+ = w_r^-$. This means that the system of linear detailed balance equations
$$\sum_i \gamma_{ri} x_i = \ln k_r^+ - \ln k_r^-$$
is solvable ($x_i = \ln a_i^{\mathrm{eq}}$). The following classical result gives the necessary and sufficient conditions for the existence of a positive equilibrium with detailed balance (see, for example, the textbook).
Two conditions are sufficient and necessary for solvability of the system of detailed balance equations:
If $k_r^+ > 0$ then $k_r^- > 0$ and, conversely, if $k_r^- > 0$ then $k_r^+ > 0$ (reversibility);
For any solution $\boldsymbol{\lambda} = (\lambda_r)$ of the system
$$\boldsymbol{\lambda}\Gamma = 0 \quad \left(\text{i.e. } \sum_r \lambda_r \gamma_{ri} = 0 \text{ for all } i\right),$$
the Wegscheider's identity holds:
$$\prod_r (k_r^+)^{\lambda_r} = \prod_r (k_r^-)^{\lambda_r}.$$
Remark. It is sufficient to use in the Wegscheider conditions a basis of solutions of the system $\boldsymbol{\lambda}\Gamma = 0$.
In particular, for any cycle in the monomolecular (linear) reactions the product of the reaction rate constants in the clockwise direction is equal to the product of the reaction rate constants in the counterclockwise direction. The same condition is valid for the reversible Markov processes (it is equivalent to the "no net flow" condition).
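For the monomolecular case this condition reduces to comparing two loop products, as in the following illustrative Python sketch (the function name and tolerance handling are our own choices, not part of the source):

```python
import math

def wegscheider_cycle_holds(k_forward, k_backward, rel_tol=1e-9):
    """For a monomolecular cycle A1 <-> A2 <-> ... <-> An <-> A1,
    detailed balance requires the clockwise product of rate constants
    to equal the counterclockwise product."""
    cw = math.prod(k_forward)    # product of clockwise constants
    ccw = math.prod(k_backward)  # product of counterclockwise constants
    return math.isclose(cw, ccw, rel_tol=rel_tol)

print(wegscheider_cycle_holds([2.0, 3.0, 1.0], [1.0, 2.0, 3.0]))  # True: 6 == 6
print(wegscheider_cycle_holds([2.0, 3.0, 1.0], [1.0, 1.0, 1.0]))  # False: 6 != 1
```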
A simple nonlinear example gives us a linear cycle supplemented by one nonlinear step:
A1 <=> A2
A2 <=> A3
A3 <=> A1
A1 + A2 <=> 2A3
There are two nontrivial independent Wegscheider's identities for this system:
$$k_1^+ k_2^+ k_3^+ = k_1^- k_2^- k_3^-$$
and
$$\frac{k_3^+ k_4^+}{k_2^+} = \frac{k_3^- k_4^-}{k_2^-}.$$
They correspond to the following linear relations between the stoichiometric vectors:
$$\gamma_1 + \gamma_2 + \gamma_3 = 0$$
and
$$\gamma_3 + \gamma_4 - \gamma_2 = 0.$$
The computational aspect of the Wegscheider conditions was studied by D. Colquhoun with co-authors.
The Wegscheider conditions demonstrate that whereas the principle of detailed balance states a local property of equilibrium, it implies the relations between the kinetic constants that are valid for all states far from equilibrium. This is possible because a kinetic law is known and relations between the rates of the elementary processes at equilibrium can be transformed into relations between kinetic constants which are used globally. For the Wegscheider conditions this kinetic law is the law of mass action (or the generalized law of mass action).
Dissipation in systems with detailed balance
To describe the dynamics of systems that obey the generalized mass action law, one has to represent the activities as functions of the concentrations $c_j$ and temperature. For this purpose, use the representation of the activity through the chemical potential:
$$a_i = \exp\left(\frac{\mu_i - \mu_i^{\ominus}}{RT}\right),$$
where $\mu_i$ is the chemical potential of the species under the conditions of interest, $\mu_i^{\ominus}$ is the chemical potential of that species in the chosen standard state, R is the gas constant and T is the thermodynamic temperature.
The chemical potential can be represented as a function of c and T, where c is the vector of concentrations with components $c_j$. For ideal systems, $\mu_i = RT \ln c_i + \mu_i^{\ominus}$ and $a_i = c_i$: the activity is the concentration and the generalized mass action law is the usual law of mass action.
Consider a system in isothermal (T = const), isochoric (volume V = const) conditions. For these conditions, the Helmholtz free energy $F(T, V, N)$ measures the "useful" work obtainable from a system. It is a function of the temperature T, the volume V and the amounts of chemical components $N_j$ (usually measured in moles); N is the vector with components $N_j$. For ideal systems,
$$F = \sum_j N_j \left( RT \left( \ln \frac{N_j}{V} - 1 \right) + \mu_j^{\ominus}(T) \right).$$
The chemical potential is a partial derivative: $\mu_j = \partial F(T, V, N) / \partial N_j$.
The chemical kinetic equations are
$$\frac{dN_i}{dt} = V \sum_r \gamma_{ri} (w_r^+ - w_r^-).$$
If the principle of detailed balance is valid, then for any value of T there exists a positive point of detailed balance $c^{\mathrm{eq}}$:
$$w_r^+(c^{\mathrm{eq}}, T) = w_r^-(c^{\mathrm{eq}}, T) = w_r^{\mathrm{eq}}.$$
Elementary algebra gives
$$w_r^+ = w_r^{\mathrm{eq}} \exp\left( \sum_i \frac{\alpha_{ri} (\mu_i - \mu_i^{\mathrm{eq}})}{RT} \right), \qquad w_r^- = w_r^{\mathrm{eq}} \exp\left( \sum_i \frac{\beta_{ri} (\mu_i - \mu_i^{\mathrm{eq}})}{RT} \right),$$
where $\mu_i^{\mathrm{eq}} = \mu_i(c^{\mathrm{eq}}, T)$.
For the dissipation we obtain from these formulas:
$$\frac{dF}{dt} = \sum_i \frac{\partial F}{\partial N_i} \frac{dN_i}{dt} = \sum_i \mu_i \frac{dN_i}{dt} = -VRT \sum_r (\ln w_r^+ - \ln w_r^-)(w_r^+ - w_r^-) \leq 0.$$
The inequality holds because ln is a monotone function and, hence, the expressions $\ln w_r^+ - \ln w_r^-$ and $w_r^+ - w_r^-$ always have the same sign.
Similar inequalities are valid for other classical conditions for the closed systems and the corresponding characteristic functions: for isothermal isobaric conditions the Gibbs free energy decreases, for the isochoric systems with the constant internal energy (isolated systems) the entropy increases as well as for isobaric systems with the constant enthalpy.
Onsager reciprocal relations and detailed balance
Let the principle of detailed balance be valid. Then, for small deviations from equilibrium, the kinetic response of the system can be approximated as linearly related to its deviation from chemical equilibrium, giving the reaction rates for the generalized mass action law as:
$$w_r^+ - w_r^- \approx w_r^{\mathrm{eq}} \sum_i \frac{(\alpha_{ri} - \beta_{ri})(\mu_i - \mu_i^{\mathrm{eq}})}{RT} = -w_r^{\mathrm{eq}} \sum_i \frac{\gamma_{ri}(\mu_i - \mu_i^{\mathrm{eq}})}{RT}.$$
Therefore, again in the linear response regime near equilibrium, the kinetic equations are:
$$\frac{dN_i}{dt} = -\frac{V}{RT} \sum_j \left( \sum_r w_r^{\mathrm{eq}} \gamma_{ri} \gamma_{rj} \right) (\mu_j - \mu_j^{\mathrm{eq}}).$$
This is exactly the Onsager form: following the original work of Onsager, we should introduce the thermodynamic forces $X_j$ and the matrix of coefficients $L_{ij}$ in the form
$$X_j = \frac{\mu_j - \mu_j^{\mathrm{eq}}}{T}, \qquad L_{ij} = -\frac{V}{R} \sum_r w_r^{\mathrm{eq}} \gamma_{ri} \gamma_{rj}, \qquad \text{so that } \frac{dN_i}{dt} = \sum_j L_{ij} X_j.$$
The coefficient matrix is symmetric:
$$L_{ij} = L_{ji}.$$
These symmetry relations, $L_{ij} = L_{ji}$, are exactly the Onsager reciprocal relations. The coefficient matrix $L$ is non-positive. It is negative on the linear span of the stoichiometric vectors $\gamma_r$.
So, the Onsager relations follow from the principle of detailed balance in the linear approximation near equilibrium.
Semi-detailed balance
To formulate the principle of semi-detailed balance, it is convenient to count the direct and inverse elementary reactions separately. In this case, the kinetic equations have the form:
$$\frac{dN_i}{dt} = V \sum_r \gamma_{ri} w_r.$$
Let us use the notations $\alpha_r = (\alpha_{ri})$, $\beta_r = (\beta_{ri})$ for the input and the output vectors of the stoichiometric coefficients of the rth elementary reaction. Let $Y$ be the set of all these vectors $\alpha_r, \beta_r$.
For each $\nu \in Y$, let us define two sets of numbers:
$$R_\nu^+ = \{ r : \alpha_r = \nu \}, \qquad R_\nu^- = \{ r : \beta_r = \nu \}.$$
$r \in R_\nu^+$ if and only if $\nu$ is the vector of the input stoichiometric coefficients $\alpha_r$ for the rth elementary reaction; $r \in R_\nu^-$ if and only if $\nu$ is the vector of the output stoichiometric coefficients $\beta_r$ for the rth elementary reaction.
The principle of semi-detailed balance means that in equilibrium the semi-detailed balance condition holds: for every $\nu \in Y$,
$$\sum_{r \in R_\nu^-} w_r = \sum_{r \in R_\nu^+} w_r.$$
The semi-detailed balance condition is sufficient for stationarity: it implies that
$$\frac{dN}{dt} = V \sum_r \gamma_r w_r = 0.$$
For the Markov kinetics the semi-detailed balance condition is just the elementary balance equation and holds for any steady state. For the nonlinear mass action law it is, in general, a sufficient but not necessary condition for stationarity.
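As an illustration, the semi-detailed balance condition can be checked mechanically at a given state: group the reactions by their input and output complexes and compare total rates. The sketch below (an illustrative implementation, with names of our own choosing) verifies it for the irreversible cycle A1 -> A2 -> A3 -> A1 with equal rates, a case where detailed balance fails but semi-detailed (cyclic) balance holds.

```python
from collections import defaultdict

def semi_detailed_balance_holds(reactions, rates, tol=1e-12):
    """Check the semi-detailed balance condition at a given state.

    `reactions` is a list of (alpha, beta) pairs, where alpha and beta
    are tuples of input and output stoichiometric coefficients;
    `rates` holds the corresponding reaction rates w_r. For every
    complex nu, the total rate consuming nu must equal the total rate
    producing nu.
    """
    inflow = defaultdict(float)   # sum of w_r over r with beta_r == nu
    outflow = defaultdict(float)  # sum of w_r over r with alpha_r == nu
    for (alpha, beta), w in zip(reactions, rates):
        outflow[alpha] += w
        inflow[beta] += w
    complexes = set(inflow) | set(outflow)
    return all(abs(inflow[nu] - outflow[nu]) <= tol for nu in complexes)

# Irreversible cycle A1 -> A2 -> A3 -> A1 with equal rates: every complex
# is produced and consumed at the same rate, so the condition holds.
reactions = [((1, 0, 0), (0, 1, 0)),
             ((0, 1, 0), (0, 0, 1)),
             ((0, 0, 1), (1, 0, 0))]
print(semi_detailed_balance_holds(reactions, [1.0, 1.0, 1.0]))  # True
```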
The semi-detailed balance condition is weaker than the detailed balance one: if the principle of detailed balance holds then the condition of semi-detailed balance also holds.
For systems that obey the generalized mass action law the semi-detailed balance condition is sufficient for the dissipation inequality (for the Helmholtz free energy under isothermal isochoric conditions and for the dissipation inequalities under other classical conditions for the corresponding thermodynamic potentials).
Boltzmann introduced the semi-detailed balance condition for collisions in 1887 and proved that it guarantees the positivity of the entropy production. For chemical kinetics, this condition (as the complex balance condition) was introduced by Horn and Jackson in 1972.
The microscopic backgrounds for the semi-detailed balance were found in the Markov microkinetics of the intermediate compounds that are present in small amounts and whose concentrations are in quasiequilibrium with the main components. Under these microscopic assumptions, the semi-detailed balance condition is just the balance equation for the Markov microkinetics according to the Michaelis–Menten–Stueckelberg theorem.
Dissipation in systems with semi-detailed balance
Let us represent the generalized mass action law in the equivalent form: the rate of the elementary process
$$\sum_i \alpha_{ri} \mathrm{A}_i \to \sum_i \beta_{ri} \mathrm{A}_i$$
is
$$w_r = \varphi_r \exp\left( \sum_i \frac{\alpha_{ri} \mu_i}{RT} \right),$$
where $\mu_i = \partial F(T, V, N) / \partial N_i$ is the chemical potential and $F$ is the Helmholtz free energy. The exponential term is called the Boltzmann factor and the multiplier $\varphi_r \geq 0$ is the kinetic factor.
Let us count the direct and reverse reactions in the kinetic equation separately:
$$\frac{dN_i}{dt} = V \sum_r \gamma_{ri} w_r.$$
An auxiliary function $\theta(\lambda)$ of one variable $\lambda \in [0, 1]$ is convenient for the representation of dissipation for the mass action law:
$$\theta(\lambda) = \sum_r \varphi_r \exp\left( \sum_i \frac{(\lambda \alpha_{ri} + (1 - \lambda)\beta_{ri}) \mu_i}{RT} \right).$$
This function may be considered as the sum of the reaction rates for deformed input stoichiometric coefficients $\tilde{\alpha}_r(\lambda) = \lambda \alpha_r + (1 - \lambda)\beta_r$. For $\lambda = 1$ it is just the sum of the reaction rates. The function $\theta(\lambda)$ is convex because $\theta''(\lambda) \geq 0$.
Direct calculation gives that, according to the kinetic equations,
$$\frac{dF}{dt} = -VRT \left. \frac{d\theta(\lambda)}{d\lambda} \right|_{\lambda = 1}.$$
This is the general dissipation formula for the generalized mass action law.
Convexity of $\theta(\lambda)$ gives the sufficient and necessary condition for the proper dissipation inequality:
$$\frac{dF}{dt} < 0 \text{ if and only if } \theta(\lambda) < \theta(1) \text{ for some } \lambda < 1.$$
The semi-detailed balance condition can be transformed into the identity $\theta(0) = \theta(1)$. Therefore, for systems with semi-detailed balance, $dF/dt \leq 0$.
Cone theorem and local equivalence of detailed and complex balance
For any reaction mechanism and a given positive equilibrium, a cone of possible velocities for the systems with detailed balance is defined for any non-equilibrium state N:
$$\mathbf{Q}_{\mathrm{DB}}(N) = \mathrm{cone}\{ \gamma_r \,\mathrm{sign}(w_r^+(N) - w_r^-(N)) : r = 1, \ldots, m \},$$
where cone stands for the conical hull and the piecewise-constant functions $\mathrm{sign}(w_r^+(N) - w_r^-(N))$ do not depend on the (positive) values of the equilibrium reaction rates and are defined by thermodynamic quantities under the assumption of detailed balance.
The cone theorem states that for the given reaction mechanism and given positive equilibrium, the velocity (dN/dt) at a state N for a system with complex balance belongs to the cone $\mathbf{Q}_{\mathrm{DB}}(N)$. That is, there exists a system with detailed balance, with the same reaction mechanism and the same positive equilibrium, that gives the same velocity at state N. According to the cone theorem, for a given state N, the set of velocities of the semi-detailed balance systems coincides with the set of velocities of the detailed balance systems if their reaction mechanisms and equilibria coincide. This means local equivalence of detailed and complex balance.
Detailed balance for systems with irreversible reactions
Detailed balance states that in equilibrium each elementary process is equilibrated by its reverse process and requires reversibility of all elementary processes. For many real physico-chemical complex systems (e.g. homogeneous combustion, heterogeneous catalytic oxidation, most enzyme reactions etc.), detailed mechanisms include both reversible and irreversible reactions. If one represents irreversible reactions as limits of reversible steps, then it becomes obvious that not all reaction mechanisms with irreversible reactions can be obtained as limits of systems of reversible reactions with detailed balance. For example, the irreversible cycle A1 -> A2 -> A3 -> A1 cannot be obtained as such a limit, but the reaction mechanism A1 -> A2 -> A3 <- A1 can.
Gorban–Yablonsky theorem. A system of reactions with some irreversible reactions is a limit of systems with detailed balance when some constants tend to zero if and only if (i) the reversible part of this system satisfies the principle of detailed balance and (ii) the convex hull of the stoichiometric vectors of the irreversible reactions has empty intersection with the linear span of the stoichiometric vectors of the reversible reactions. Physically, the last condition means that the irreversible reactions cannot be included in oriented cyclic pathways.
See also
T-symmetry
Microscopic reversibility
Master equation
Balance equation
Gibbs sampling
Metropolis–Hastings algorithm
Atomic spectral line (deduction of the Einstein coefficients)
Random walks on graphs
References
Non-equilibrium thermodynamics
Statistical mechanics
Markov models
Chemical kinetics | Detailed balance | Physics,Chemistry,Mathematics | 4,096 |
40,769 | https://en.wikipedia.org/wiki/Balancing%20network | In a hybrid set, hybrid coil, or resistance hybrid, balancing network is a circuit used to match, i.e., to balance, the impedance of a uniform transmission line, (e.g., a twisted metallic pair, coaxial cable, etc.) over a selected range of frequencies. A balancing network is required to ensure isolation between the two ports of the four-wire side of the hybrid.
A balancing network can also be a device used between a balanced device or line and an unbalanced device or line for the purpose of transforming from balanced to unbalanced or from unbalanced to balanced.
Sources: Federal Standard 1037C and MIL-STD-188
References
See also
balun
Analog circuits | Balancing network | Engineering | 153 |
53,163,575 | https://en.wikipedia.org/wiki/Avinash%20Kumar%20Agarwal | Avinash Kumar Agarwal (born 22 August 1972) is director of Indian Institute of Technology, Jodhpur. He is an Indian mechanical engineer, tribologist and a professor at the Department of Mechanical Engineering of the Indian Institute of Technology, Kanpur. He is known for his studies on internal combustion engines, Emissions, alternate fuels and CNG engines and is an elected fellow of the American Society of Mechanical Engineering (2013), Society of Automotive Engineers, US (2012), National Academy of Science, Allahabad (2018), Royal Society of Chemistry, UK (2018), International Society for Energy, Environment and Sustainability (2016), and Indian National Academy of Engineering (2015). The Council of Scientific and Industrial Research, the apex agency of the Government of India for scientific research, awarded him the Shanti Swarup Bhatnagar Prize for Science and Technology, one of the highest Indian science awards for his contributions to Engineering Sciences in 2016. Agarwal has been bestowed upon Prestigious J C Bose Fellowship of Science and Engineering Research Board. Government of India (August 2019). Agarwal is among the top ten highly cited researchers (HCR) of 2018 from India, as per Clarivate Analytics, an arm of Web of Science.
Biography
Avinash Kumar Agarwal, born on 22 August 1972 at Karauli, in the Indian state of Rajasthan, earned his graduate degree in mechanical engineering (BE) from Malaviya Regional Engineering College (MREC) Jaipur (present-day Malaviya National Institute of Technology, Jaipur) of the University of Rajasthan in 1994 and did his master's degree at the Centre for Energy Studies of the Indian Institute of Technology, Delhi, from where he obtained an MTech in energy studies in 1996. Immediately after this, he pursued his PhD at the Centre for Energy Studies at IIT Delhi under the guidance of L. M. Das, successfully defending his thesis, Performance evaluation and tribological studies on a biodiesel-fuelled compression ignition engine, in 1999. Thereafter he moved to the US for postdoctoral work, which he completed at the Engine Research Center of the University of Wisconsin–Madison between 1999 and 2001. On his return to India in March 2001, he joined the Indian Institute of Technology, Kanpur as an assistant professor. He was promoted to associate professor in 2007 and has been serving the institute since 2012 as a professor at the Department of Mechanical Engineering. He had seven short stints abroad as a visiting professor during this period: the first at the Wolfson School of Mechanical and Manufacturing Engineering of Loughborough University in 2002; the second and third at the Photonics Institute of the Technical University of Vienna in 2004 and 2013; the fourth, fifth and sixth at Hanyang University, South Korea in 2013, 2014 and 2015; and the last at the Korea Advanced Institute of Science and Technology (KAIST) in 2016. On 19 April 2024, he was appointed as the director of IIT Jodhpur.
Agarwal is married to Dr. Rashmi A. Agarwal and the couple has two children, Aditya (b. 2003) and Rithwik (b. 2006). The family lives in Kanpur in Uttar Pradesh.
Legacy
Agarwal's research has covered the fields of engine combustion, alternate fuels, emission and particulate control, optical diagnostics, methanol engine development, fuel spray optimization and tribology, and his work has assisted in the development of low-cost diesel oxidation catalysts and homogeneous charge compression ignition engines. His studies of laser ignition of methane–air and hydrogen–air mixtures and of biodiesel based on Indian feedstocks have widened the understanding of the subjects; he carried out a project on biodiesels during 2010–13 for the Department of Science and Technology of India. He has documented his research in over 280 articles; Google Scholar and ResearchGate, online repositories of scientific articles, have listed many of them. Besides, he has edited forty books, most of which are published by Springer, including Combustion for Power Generation and Transportation and Novel Combustion Concepts for Sustainable Energy Development, and has contributed forty-two chapters to many books. He is also a co-editor of a five-volume reference text, Handbook of Combustion, published by Wiley-VCH in 2010.
Agarwal is the Associate Principal Editor of the journal Fuel, Editor-in-Chief of the Journal of Energy and Environmental Sustainability (JEES), and Associate Editor of two other journals, the ASME Journal of Energy Resources Technology and the Journal of the Institution of Engineers (India): Series C. He is a member of the editorial boards of several prestigious journals, such as the International Journal of Engine Research (published by SAE International and IMechE, London, UK), Proceedings of the Institution of Mechanical Engineers, Part D: Journal of Automobile Engineering, and Recent Patents on Mechanical Engineering of Bentham Science. He is a former associate editor of the International Journal of Vehicle Systems Modelling and Testing and of the International Journal of Oil, Gas and Coal Technology (IJOGCT), both published by Inderscience Publishers, and guest-edited a special issue of the Journal of Automobile Engineering on alternative fuels in 2007. He has been a member of the Methanol Task Force of the Department of Science and Technology since 2017, and is a former member of the Technology Systems Group of the Department of Science and Technology and of the Experts Group on Biofuels and Retrofitting of Engines of the Government of India. He is a member of the board of associates of the Internal Combustion Engines Division of the American Society of Mechanical Engineers and is associated with SAE International, sitting on many of their review committees.
He was the session organizer for 2005, 2006, 2007, 2008 and 2009 editions of SAE World Congress and chaired the 2004, 2005 and 2006 sessions on alternative fuel and internal combustion engines.
Awards and honors
Agarwal received the Young Scientist Award of the Department of Science and Technology in 2002, followed by the Career Award for Young Teachers of the All India Council of Technical Education (AICTE) in 2004. The Young Engineer Award of the Indian National Academy of Engineering reached him in 2005 and the Young Scientists Medal of the Indian National Science Academy came his way in 2007. He received the Alkyl Amine Young Scientist Award of the Institute of Chemical Technology the same year and, a year later, SAE International selected him for the 2008 Ralph R. Teetor Educational Award. He received the C. V. Raman Young Teachers Award of the IES Group in 2011 and the NASI-Reliance Industries Platinum Jubilee Award of the National Academy of Sciences, India in 2012. The Indian National Academy of Engineering honored him again in 2012 with the Silver Jubilee Young Engineer Award, which was followed by the Rajib Goyal Prize in Physical Sciences (2015) from Kurukshetra University; the Council of Scientific and Industrial Research then awarded him the Shanti Swarup Bhatnagar Prize, one of the highest Indian science awards, in 2016. Afterwards, he was conferred the Er. M P Baya National Award 2017 in Mechanical Engineering by the Institution of Engineers, Udaipur, and the Clarivate Analytics India Research Excellence – Citation Award 2017, the sixth edition of the prize for high citations and high-impact work from India, given by Clarivate Analytics.
Agarwal, who held the BOYSCAST Fellowship of the Department of Science and Technology in 2002 and the Devendra Shukla Research Fellowship of IIT Kanpur in 2009, was elected as a fellow by the Indian National Academy of Engineering in 2015. He is also a fellow of the American Society of Mechanical Engineers, SAE International, the Royal Society of Chemistry, the Indian National Academy of Engineering, the National Academy of Sciences, and the International Society for Energy, Environment and Sustainability. He was listed in several editions of Marquis Who's Who in Science and Engineering, Who's Who (Emerging Leaders) and Who's Who in the World. Agarwal was Poonam and Prabhu Goyal Chair Professor at IIT Kanpur from 2012 to 2016. He is currently SBI Endowed Chair Professor at the same institution (2018–2021).
Selected bibliography
Books
Environmental Contaminants, 431 pages, Published by Springer, Singapore (2018), (Eds.) Tarun Gupta, Avinash K Agarwal, Rashmi A Agarwal, Nitin K Labhsetwar. DOI: 10.1007/978-981-10-7332-8.
Air Pollution and Control, 260 pages, Published by Springer, Singapore (2018), (Eds.) Nikhil Sharma, Avinash K Agarwal, Peter Eastwood, Tarun Gupta, Akhilendra P Singh. DOI: 10.1007/978-981-10-7185-0.
Coal and Biomass Gasification, 521 pages, Published by Springer, Singapore (2018), (Eds.) Santanu De, Avinash K Agarwal, V S Moholkar, Thallada Bhaskar. DOI: 10.1007/978-981-10-7335-9.
Droplets and Sprays, 430 pages, Published by Springer, Singapore (2018), (Eds.) Saptarshi Basu, Avinash K Agarwal, Achintya Mukhopadhyay, Chetan Patel. DOI: 10.1007/978-981-10-7449-3.
Advances in Internal Combustion Engine Research, 345 pages, Published by Springer, Singapore (2018), (Eds.) Dhananjay K Srivastava, Avinash K Agarwal, Amitava Datta, Rakesh K Maurya. DOI: 10.1007/978-981-10-7575-9.
Modeling and Simulations of Turbulent Combustion, 661 pages, Published by Springer, Singapore (2018), (Eds.) Santanu De, Avinash K Agarwal, Swetoprovo Chaudhuri, Swarnendu Sen. DOI: 10.1007/978-981-10-7410-3.
Prospects of Alternative Transportation Fuels, 405 pages, Published by Springer, Singapore (2018), (Eds.) Akhilendra P Singh, Avinash K Agarwal, Rashmi A Agarwal, Atul Dhar, Mritunjay Kumar Shukla. DOI: 10.1007/978-981-10-7518-6.
Environmental, Chemical and Medical Sensors, 409 pages, Published by Springer, Singapore (2018), (Eds.) Shantanu Bhattacharya, Avinash K Agarwal, Nripen Chanda, Ashok Pandey, Ashis Kumar Sen. DOI: 10.1007/978-981-10-7751-7.
Applications of Solar Energy, 364 pages, Published by Springer, Singapore (2018), (Eds.) Himanshu Tyagi, Avinash K Agarwal, Prodyut R Chakraborty, Satvasheel Powar. DOI: 10.1007/978-981-10-7206-2.
Bioremediation: Applications for Environmental Protection and Management, 411 pages, Published by Springer, Singapore (2018), (Eds.) Sunita J Varjani, Avinash K Agarwal, Edgard Ghansounou, Baskar Gurunathan. DOI: 10.1007/978-981-10-7485-1.
Applications Paradigms of Droplet and Spray Transport: Paradigms and Applications, 379 pages, Published by Springer, Singapore (2018), (Eds.) Saptarshi Basu, Avinash K Agarwal, Achintya Mukhopadhyay, Chetan Patel. DOI: 10.1007/978-981-10-7233-8.
Combustion for Power Generation and Transportation: Technology, Challenges and Prospects, 451 pages, Published by Springer, Singapore (2017), (Eds.) Avinash Kumar Agarwal, Santanu De, Ashok Pandey, Akhilendra Pratap Singh. DOI: 10.1007/978-981-10-3785-6.
Locomotives and Rail Road Transportation: Technology, Challenges and Prospects, 247 pages, Published by Springer, Singapore (2017), (Eds.) Avinash Kumar Agarwal, Atul Dhar, Anirudh Gautam, Ashok Pandey. DOI: 10.1007/978-981-10-3788-7.
Biofuels: Technology, Challenges and Prospects, 245 pages, Published by Springer, Singapore (2017), (Eds.) Avinash Kumar Agarwal, Rashmi Avinash Agarwal, Tarun Gupta, Bhola Ram Gurjar. DOI: 10.1007/978-981-10-3791-7.
Technology Vision 2015: Technology Roadmap Transportation, 237 pages, (Eds.) Avinash Kumar Agarwal, S S Thipse, Akhilendra P Singh, Gautam Goswami, Mukti Prasad, Published by TIFAC, New Delhi, December 2016.
Energy, Combustion and Propulsion: New Perspectives, 609 pages, Published by Athena Academic, London, UK (2016), (Eds.) Avinash K Agarwal, Suresh K. Aggarwal, Ashwani K. Gupta, Abhijit Kushari, Ashok Pandey.
Novel Combustion Concepts for Sustainable Energy Development, 562 pages, Published by Springer, Singapore (2014), (Eds.) Avinash K. Agarwal, Ashok Pandey, Ashwani K. Gupta, Suresh K. Aggarwal, Abhijit Kushari. DOI: 10.1007/978-81-322-2211-8-18.
Handbook of Combustion, 5 Volumes, 3168 pages, Hardcover, April 2010, Published by Wiley-VCH, (Eds.) Maximilian Lackner, Franz Winter, Avinash K. Agarwal.
CI Engine Performance for Use with Alternative Fuels, 2009 (SP-2237), 185 pages, Published by SAE International, US, 2009, (Eds.) Amiyo K Basu, Avinash Kumar Agarwal, Paul Richards, G. J. Thompson, Scott A Miers, Sundar Rajan Krishnan.
Combustion Science and Technology: Recent Trends, 300 pages, Published by Narosa Publishing House, New Delhi, 2009, (Eds.) A. K. Agarwal, A. Kushari, S. K. Aggarwal, A. K. Runchal.
CI Engine Performance for Use with Alternative Fuels (SP-2176), Published by SAE International, US, 2008, (Eds.) Avinash K. Agarwal, G. J. Thompson, Scott A. Miers, Sundar R. Krishnan.
Alternative Fuels and CI Engine Performance (SP-2067), 160 pages, Published by SAE International, US, 2007, (Eds.) Avinash K. Agarwal, G. J. Thompson.
New Diesel Engines and Components and CI Engine Performance for Use with Alternative Fuels (SP-2014), 171 pages, Published by SAE International, US, 2006, (Eds.) A. Jain, J. E. Mossberg, Avinash K. Agarwal, G. J. Thompson.
CI Engine Performance for Use with Alternative Fuels, and New Diesel Engines and Components (SP-1978), 196 pages, Published by SAE International, US, 2005, (Eds.) J. E. Mossberg, A. Jain, G. J. Thompson, Avinash K. Agarwal.
Articles
Avinash K Agarwal*, Bushra Ateeq, Tarun Gupta, Akhilendra P. Singh, Swaroop K Pandey, Nikhil Sharma, Rashmi A Agarwal, Neeraj K. Gupta, Hemant Sharma, Ayush Jain, Pravesh C Shukla, "Toxicity and mutagenicity of exhaust from compressed natural gas: Could this be a clean solution for megacities with mixed-traffic conditions?”, Environmental Pollution. 239, 499–511, 2018.doi: 10.1016/j.envpol.2018.04.028
See also
Diesel fuel
Vegetable oil
References
Further reading
Recipients of the Shanti Swarup Bhatnagar Award in Engineering Science
1972 births
Indian technology writers
People from Rajasthan
Indian mechanical engineers
Tribologists
University of Rajasthan alumni
IIT Delhi alumni
Academic staff of IIT Delhi
Academic staff of IIT Kanpur
Academics of Loughborough University
Living people
Fellows of the Indian National Academy of Engineering
University of Wisconsin–Madison fellows | Avinash Kumar Agarwal | Materials_science | 3,545 |
9,783,078 | https://en.wikipedia.org/wiki/Proteins%40home | proteins@home was a volunteer computing project that used the BOINC architecture. The project was run by the Department of Biology at . The project began on December 28, 2006 and ended in June 2008.
Purpose
proteins@home was a large-scale non-profit protein structure prediction project utilizing volunteer computing to perform intensive computations in a small amount of time. From their website:
The amino acid sequence of a protein determines its three-dimensional structure, or 'fold'. Conversely, the three-dimensional structure is compatible with a large, but limited set of amino acid sequences. Enumerating the allowed sequences for a given fold is known as the 'inverse protein folding problem'. We are working to solve this problem for a large number of known protein folds (a representative subset: about 1500 folds). The most expensive step is to build a database of energy functions that describe all these structures. For each structure, we consider all possible sequences of amino acids. Surprisingly, this is computationally tractable, because our energy functions are sums over pairs of interactions. Once this is done, we can explore the space of amino acid sequences in a fast and efficient way, and retain the most favorable sequences. This large-scale mapping of protein sequence space will have applications for predicting protein structure and function, for understanding protein evolution, and for designing new proteins. By joining the project, you will help to build the database of energy functions and advance an important area of science with potential biomedical applications.
See also
List of volunteer computing projects
References
External links
proteins@home archive
Science in society
Free science software
Protein structure
Volunteer computing projects | Proteins@home | Chemistry,Technology | 325 |
7,146,636 | https://en.wikipedia.org/wiki/Hexabromocyclododecane | Hexabromocyclododecane (HBCD or HBCDD) is a brominated flame retardant. It consists of twelve carbon, eighteen hydrogen, and six bromine atoms tied to the ring. Its primary application is in extruded (XPS) and expanded (EPS) polystyrene foam used as thermal insulation in construction. Other uses are upholstered furniture, automobile interior textiles, car cushions and insulation blocks in trucks, packaging material, video cassette recorder housing, and electric and electronic equipment. According to UNEP, "HBCD is produced in China, Europe, Japan, and the USA. The last known current annual production is approximately 28,000 tonnes per year. The main share of the market volume is used in Europe and China" (figures from 2009 to 2010). Due to its persistence, toxicity, and ecotoxicity, the Stockholm Convention on Persistent Organic Pollutants decided in May 2013 to list hexabromocyclododecane in Annex A to the convention with specific exemptions for production and use in expanded polystyrene and extruded polystyrene in buildings. Because HBCD has 16 possible stereo-isomers with different biological activities, the substance poses a difficult problem for manufacture and regulation.
Toxicity
HBCD's toxicity and its harm to the environment are current sources of concern. HBCD can be found in environmental samples such as birds, mammals, fish, and other aquatic organisms as well as soil and sediment.
On this basis, on 28 October 2008, the European Chemicals Agency decided to include HBCD in the SVHC list, Substances of Very High Concern, within the Registration, Evaluation, Authorisation and Restriction of Chemicals framework. On 18 February 2011, HBCD was listed in Annex XIV of REACH and hence is subject to Authorisation. HBCD can be used until the so-called “sunset date” (21 August 2015). After that date, only authorized applications will be allowed in the EU.
HBCD has been found widely present in biological samples from remote areas and supporting pieces of evidence for its classification as Persistent, Bioaccumulative, and Toxic (PBT) and undergoes long-range environmental transportation.
In July 2012, an EU-harmonized classification and labeling for HBCD entered into force. HBCD has been classified as a category 2 for reproductive toxicity. Since August 2010 hexabromocyclododecanes are included in the EPA's List of Chemicals of Concern.
In May 2013 the Stockholm Convention on Persistent Organic Pollutants (POPs) decided to include HBCD in the convention's Annex A for elimination, with specific exemptions for expanded and extruded polystyrene in buildings needed to give countries time to phase-in safer substitutes. HBCD is listed for elimination, but with a specific exemption for expanded polystyrene (EPS) and extruded polystyrene (XPS) in buildings. Countries may choose to use this exemption for up to five years after the request for exemption is submitted. Japan was the first country to implement a ban on the import and production of HBCD effective in May 2014.
Because HBCD has 16 possible stereo-isomers with different biological activities, the substance poses a difficult problem for manufacture and regulation.
The HBCD commercial mixture is composed of three main diastereomers denoted as alpha (α-HBCD), beta (β-HBCD), and gamma (γ-HBCD), with traces of others. A series of four published in vivo mouse studies was conducted jointly by several federal and academic institutions to characterize the toxicokinetic profiles of individual HBCD stereoisomers. The predominant diastereomer in the HBCD mixture, γ-HBCD, undergoes rapid hepatic metabolism, fecal and urinary elimination, and biological conversion to other diastereomers, with a short biological half-life of 1–4 days. After oral exposure to the γ-HBCD diastereomer, β-HBCD was detected in the liver and brain, and α-HBCD and β-HBCD were detected in the fat and feces, with multiple novel metabolites identified: monohydroxy-pentabromocyclododecane, monohydroxy-pentabromocyclododecene, dihydroxy-pentabromocyclododecene, and dihydroxy-pentabromocyclododecadiene. In contrast, α-HBCD is more biologically persistent, resistant to metabolism, and bioaccumulates in lipid-rich tissues, as shown in a 10-day repeated-exposure study, with a longer biological half-life of up to 21 days; only α-HBCD was detected in the liver, brain, fat and feces, with no stereoisomerization to γ-HBCD or β-HBCD, and only low trace levels of four different hydroxylated metabolites were identified. Developing mice had higher HBCD tissue levels than adult mice after exposure to either α-HBCD or γ-HBCD, indicating the potential for increased susceptibility of the developing young to HBCD effects. The reported toxicokinetic differences of individual HBCD diastereoisomers have important implications for the extrapolation of toxicological studies of the commercial HBCD mixture to the assessment of human risk.
Environmental Concerns
Due to its persistence, toxicity, and ecotoxicity, the Stockholm Convention on Persistent Organic Pollutants decided in May 2013 to list hexabromocyclododecane in Annex A to the convention with specific exemptions for production and use in expanded polystyrene and extruded polystyrene in buildings. Countries may choose to use this exemption for up to five years after the request for exemption is submitted.
There is a large and still increasing stock of HBCD in the anthroposphere, mainly in EPS and XPS insulation boards.
A long-term environmental monitoring program run by the Fraunhofer Institute for Molecular Biology and Applied Ecology demonstrates a general trend that HBCD concentrations are decreasing over time. HBCD emissions into the environment are controlled under the voluntary industry emission management program: the Voluntary Emissions Control Action Programme (VECAP). The VECAP annual report demonstrates a continuous decrease of potential emissions of HBCD to the environment.
References
External links
MPI Milebrome B-972, FR 50 & GC SAM: The low-cost alternatives to Hexabromocyclododecane (HBCD) in EPS and XPS applications , Stockholm Convention on Persistent Organic Pollutants 2012
An Overview of Alternatives to Tetrabromobisphenol A (TBBPA) and Hexabromocyclododecane (HBCD) , University of Massachusetts Lowell, March 2006
ECHA: MEMBER STATE COMMITTEE SUPPORT DOCUMENT FOR IDENTIFICATION OF HEXABROMOCYCLODODECANE AND ALL MAJOR DIASTEREOISOMERS IDENTIFIED AS A SUBSTANCE OF VERY HIGH CONCERN, 8 October 2008
Factsheet BSEF
BSEF – the bromine industry website’s page on HBCD
Flame retardants
Organobromides
PBT substances
Persistent organic pollutants under the Stockholm Convention | Hexabromocyclododecane | Chemistry | 1,524 |
880,483 | https://en.wikipedia.org/wiki/Self%20number | In number theory, a self number or Devlali number in a given number base is a natural number that cannot be written as the sum of any other natural number and the individual digits of . 20 is a self number (in base 10), because no such combination can be found (all give a result less than 20; all other give a result greater than 20). 21 is not, because it can be written as 15 + 1 + 5 using n = 15. These numbers were first described in 1949 by the Indian mathematician D. R. Kaprekar.
Definition and properties
Let $n$ be a natural number. We define the $b$-self function $F_b : \mathbb{N} \to \mathbb{N}$ for base $b > 1$ to be the following:
$$F_b(n) = n + \sum_{i=0}^{k-1} d_i,$$
where $k = \lfloor \log_b n \rfloor + 1$ is the number of digits in the number in base $b$, and
$$d_i = \frac{n \bmod b^{i+1} - n \bmod b^i}{b^i}$$
is the value of each digit of the number. A natural number $n$ is a $b$-self number if the preimage of $n$ for $F_b$ is the empty set.
In general, for even bases, all odd numbers below the base number are self numbers, since any generator of such an odd number would also have to be a one-digit number, and a one-digit number added to its single digit always gives an even result. For odd bases, all odd numbers are self numbers.
The set of self numbers in a given base $b$ is infinite and has a positive asymptotic density: when $b$ is odd, this density is 1/2.
Self numbers in specific bases
For base 2 self numbers, see OEIS sequence A010061 (written in base 10).
The first few base 10 self numbers are:
1, 3, 5, 7, 9, 20, 31, 42, 53, 64, 75, 86, 97, 108, 110, 121, 132, 143, 154, 165, 176, 187, 198, 209, 211, 222, 233, 244, 255, 266, 277, 288, 299, 310, 312, 323, 334, 345, 356, 367, 378, 389, 400, 411, 413, 424, 435, 446, 457, 468, 479, 490, ...
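The sequence is straightforward to reproduce by sieving out every number that is generated by some smaller number; the following short Python sketch (function names are our own, for illustration) does this for any base.

```python
def digit_sum(n, base=10):
    """Sum of the digits of n written in the given base."""
    s = 0
    while n:
        n, r = divmod(n, base)
        s += r
    return s

def self_numbers(limit, base=10):
    """All base-`base` self numbers below `limit`.

    A number m is generated by n if m == n + digit_sum(n); self numbers
    are exactly the numbers that no n generates.
    """
    generated = {n + digit_sum(n, base) for n in range(1, limit)}
    return [m for m in range(1, limit) if m not in generated]

print(self_numbers(100))
# [1, 3, 5, 7, 9, 20, 31, 42, 53, 64, 75, 86, 97]
```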
Self primes
A self prime is a self number that is prime.
The first few self primes in base 10 are
3, 5, 7, 31, 53, 97, 211, 233, 277, 367, 389, 457, 479, 547, 569, 613, 659, 727, 839, 883, 929, 1021, 1087, 1109, 1223, 1289, 1447, 1559, 1627, 1693, 1783, 1873, ...
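Continuing the sketch above (reusing the illustrative self_numbers function), self primes are obtained by filtering the self-number sequence with a primality test; the trial-division helper here is our own simple choice.

```python
def is_prime(n):
    """Trial-division primality test, sufficient for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print([m for m in self_numbers(1000) if is_prime(m)])
# [3, 5, 7, 31, 53, 97, 211, 233, 277, 367, 389, 457, 479, 547, 569,
#  613, 659, 727, 839, 883, 929]
```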
References
Kaprekar, D. R. The Mathematics of New Self-Numbers Devaiali (1963): 19 - 20.
Arithmetic dynamics
Base-dependent integer sequences
Inverse functions | Self number | Mathematics | 566 |
16,854,527 | https://en.wikipedia.org/wiki/Separoid | In mathematics, a separoid is a binary relation between disjoint sets which is stable as an ideal in the canonical order induced by inclusion. Many mathematical objects which appear to be quite different, find a common generalisation in the framework of separoids; e.g., graphs, configurations of convex sets, oriented matroids, and polytopes. Any countable category is an induced subcategory of separoids when they are endowed with homomorphisms (viz., mappings that preserve the so-called minimal Radon partitions).
In this general framework, some results and invariants of different categories turn out to be special cases of the same aspect; e.g., the pseudoachromatic number from graph theory and the Tverberg theorem from combinatorial convexity are simply two faces of the same aspect, namely, complete colouring of separoids.
The axioms
A separoid is a set $S$ endowed with a binary relation $\dagger \subseteq 2^S \times 2^S$ on its power set, which satisfies the following simple properties for $A, B \subseteq S$:
$$A \dagger B \Leftrightarrow B \dagger A,$$
$$A \dagger B \Rightarrow A \cap B = \emptyset,$$
$$A \dagger B \text{ and } A' \subseteq A \Rightarrow A' \dagger B.$$
A related pair is called a separation and we often say that A is separated from B. It is enough to know the maximal separations to reconstruct the separoid.
A mapping $f : S \to T$ is a morphism of separoids if the preimages of separations are separations; that is, for $A, B \subseteq T$:
$$A \dagger B \Rightarrow f^{-1}(A) \dagger f^{-1}(B).$$
Examples
Examples of separoids can be found in almost every branch of mathematics. Here we list just a few.
1. Given a graph G=(V,E), we can define a separoid on its vertices by saying that two (disjoint) subsets of V, say A and B, are separated if there are no edges going from one to the other; i.e., A † B if and only if no edge of E joins a vertex of A to a vertex of B (a computational sketch of this example follows the list).
2. Given an oriented matroid M = (E,T), given in terms of its topes T, we can define a separoid on E by saying that two subsets are separated if they are contained in opposite signs of a tope. In other words, the topes of an oriented matroid are the maximal separations of a separoid. This example includes, of course, all directed graphs.
3. Given a family of objects in a Euclidean space, we can define a separoid in it by saying that two subsets are separated if there exists a hyperplane that separates them; i.e., leaving them in the two opposite sides of it.
4. Given a topological space, we can define a separoid saying that two subsets are separated if there exist two disjoint open sets which contains them (one for each of them).
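As referenced in example 1, the graph-induced separation relation is easy to compute; the sketch below (with names of our own choosing, for illustration) tests whether two disjoint vertex sets are separated.

```python
def separated_in_graph(edges, A, B):
    """Example 1: in the separoid induced by a graph, two disjoint
    vertex sets A and B are separated iff no edge joins them."""
    A, B = set(A), set(B)
    if A & B:
        raise ValueError("A and B must be disjoint")
    # An edge violates separation if it touches both A and B.
    return all(not ({u, v} & A and {u, v} & B) for u, v in edges)

# Path graph 1 - 2 - 3 - 4: {1} is separated from {3, 4} but not from {2}.
edges = [(1, 2), (2, 3), (3, 4)]
print(separated_in_graph(edges, {1}, {3, 4}))  # True
print(separated_in_graph(edges, {1}, {2}))     # False
```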
The basic lemma
Every separoid can be represented with a family of convex sets in some Euclidean space and their separations by hyperplanes.
Binary relations | Separoid | Mathematics | 582 |
219,807 | https://en.wikipedia.org/wiki/Keel | The keel is the bottom-most longitudinal structural element of a watercraft. On some sailboats, it may have a hydrodynamic and counterbalancing purpose as well. The laying of the keel is often the initial step in constructing a ship. In the British and American shipbuilding traditions, this event marks the beginning date of a ship's construction.
Etymology
The word "keel" comes from Old English , Old Norse , = "ship" or "keel". It has the distinction of being regarded by some scholars as the first word in the English language recorded in writing, having been recorded by Gildas in his 6th century Latin work De Excidio et Conquestu Britanniae, under the spelling cyulae (he was referring to the three ships that the Saxons first arrived in).
Carina is the Latin word for "keel" and is the origin of the term careen (to clean a keel and the hull in general, often by rolling the ship on its side). An example of this use is Careening Cove, a suburb of Sydney, Australia, where careening was carried out in the early colonial days.
History
Origins
The use of a keel in sailing vessels dates back to antiquity.
The wreck of an ancient Greek merchant ship known as the Kyrenia ship establishes the origin of the keel at least as far back as 315 BC.
The Uluburun shipwreck (c. 1325 BC) had a rudimentary keel, but it may have been more of a center plank than a keel.
Construction styles
Frame first
In carvel-built hulls, construction began with the laying of the keel, followed by the stern and stem. Frames were then erected at key points along the keel. Later, the keelson was attached to the keel, either bolted or fastened with treenails.
Plank first
A plank first building system that is still in use today is clinker construction, using overlapping planks which are shaped to produce the hull form. Older systems include the bottom-based method used for the planking on either side of the keel of a cog (and also in Dutch shipbuilding up to and including the 17th century). This involves flush-fitted planks that have been cut to provide the shape of the hull. Still older is the mortice and tenon edge-to-edge joining of hull planks in the Mediterranean during the classical period. In this system, much of the strength of the hull is derived from the planking, with the frames providing some extra strength. In all these systems, the joining of the keel, stem and sternpost are the starting point of construction.
Structural keels
A structural keel is the bottom-most structural member around which the hull of a ship is built. The keel runs along the centerline of the ship, from the bow to the stern. The keel is often the first part of a ship's hull to be constructed, and laying the keel, or placing the keel in the cradle where the ship will be built, may mark the start time of its construction. Large, modern ships are now often built in a series of pre-fabricated, complete hull sections rather than being built around a single keel, so the shipbuilding process commences with the cutting of the first sheet of steel.
The most common type of keel is the "flat plate keel", which is fitted in most ocean-going ships and other vessels. A form of keel found on smaller vessels is the "bar keel", which may be fitted in trawlers, tugs, and smaller ferries. Its massive scantlings make the bar keel suitable where grounding is possible, but it increases draft without adding cargo capacity. If a double bottom is fitted, the keel is almost inevitably of the flat plate type; bar keels are often associated with open floors, where a plate keel may also be fitted.
Hydrodynamic keels
Hydrodynamic keels have the primary purpose of interacting with the water and are typical of certain sailboats. Fixed hydrodynamic keels have the structural strength to support the boat's weight.
Sailboat keels
In sailboats, keels serve two purposes: 1) as an underwater foil to minimize the lateral motion of the vessel under sail (leeway) and 2) as a counterweight to the lateral force of the wind on the sail(s) that causes rolling to the side (heeling). As an underwater foil, a keel uses the forward motion of the boat to generate lift to counteract the leeward force of the wind. As a counterweight, a keel increasingly offsets the heeling moment with increasing angle of heel. Related foils include movable centreplates, which, being metal, also serve as counterweights, and centreboards and daggerboards, which are lighter and serve no counterweight purpose.
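To make the counterweight effect concrete, consider an idealized model (our own simplification, not from the source): treat the keel's ballast as a point mass m hanging a distance d below the hull's axis of roll. When the boat heels by an angle θ, the ballast is displaced horizontally by d sin θ, producing a righting moment

$$M_{\text{right}} = m g \, d \sin\theta,$$

which grows as the heel angle increases (up to 90°), matching the behaviour described above: the more the boat heels, the harder the ballast works to bring it upright.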
Moveable sailboat keels may pivot (a centreboard, centreplate or swing keel), retract upwards (lifting/retracting keel or daggerboard), or swing sideways in the water (canting keels) to move the ballasting effect to one side and allow the boat to sail in a more upright position.
See also
Coin ceremony
Kelson
False keel
Daggerboard
Leeboard
Bilgeboard
Bruce foil
Keelhauling – an archaic maritime punishment
Keel block
Under keel clearance
Bibliography
Rousmaniere, John, The Annapolis Book of Seamanship, Simon & Schuster, 1999
Chapman Book of Piloting (various contributors), Hearst Corporation, 1999
Herreshoff, Halsey (consulting editor), The Sailor's Handbook, Little Brown and Company
Seidman, David, The Complete Sailor, International Marine, 1995
Jobson, Gary, Sailing Fundamentals, Simon & Schuster, 1987
Nautical terminology
Naval architecture
Sailing rigs and rigging
Sailing vessel components
Shipbuilding
Sailboat components | Keel | Engineering | 1,207 |