id (int64) | url (string) | text (string) | source (string, nullable) | categories (list) | token_count (int64) | subcategories (list) |
|---|---|---|---|---|---|---|
624,625 | https://en.wikipedia.org/wiki/On%20the%20Cruelty%20of%20Really%20Teaching%20Computer%20Science | "On the Cruelty of Really Teaching Computing Science" is a 1988 scholarly article by E. W. Dijkstra which argues that computer programming should be understood as a branch of mathematics, and that the formal provability of a program is a major criterion for correctness.
Despite the title, most of the article is on Dijkstra’s attempt to put computer science into a wider perspective within science, teaching being addressed as a corollary at the end.
Specifically, Dijkstra made a “proposal for an introductory programming course for freshmen” that consisted of Hoare logic as an uninterpreted formal system.
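In Hoare logic, programs are reasoned about through triples {P} S {Q}: if precondition P holds before statement S executes, postcondition Q holds afterwards. As a minimal illustration (our own example, not one taken from Dijkstra's article), the assignment axiom and one instance of it read:

```latex
\{P[E/x]\}\; x := E \;\{P\}
\qquad\text{for example}\qquad
\{x + 1 = 42\}\; x := x + 1 \;\{x = 42\}
```

Treating such triples as an uninterpreted formal system, as Dijkstra proposed, means students derive them purely by the proof rules, without appealing to the execution of the program on a machine.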
Debate over feasibility
Since the term "software engineering" was coined, formal verification has almost always been considered too resource-intensive to be feasible. In complex applications, the difficulty of correctly specifying what the program should do in the first place is also a common source of error. Other methods of software testing are generally employed to try to eliminate bugs and many other factors are considered in the measurement of software quality.
Until the end of his life, Dijkstra maintained that the central challenges of computing hadn't been met to his satisfaction, due to an insufficient emphasis on program correctness (though not obviating other requirements, such as maintainability and efficiency).
Pedagogical legacy
Computer science as taught today does not follow Dijkstra's advice. The curricula generally emphasize techniques for managing complexity and preparing for future changes, following Dijkstra's earlier writings. These include abstraction, programming by contract, and design patterns. Programming techniques to avoid bugs and conventional software testing methods are taught as basic requirements, and students are exposed to certain mathematical tools, but formal verification methods are not included in the curriculum except perhaps as an advanced topic. So in some ways, Dijkstra's ideas have been adhered to; however, the ideas he felt most strongly about have not been.
Newly formed curricula in software engineering have adopted Dijkstra's recommendations. The focus of these programs is the formal specification of software requirements and design in order to facilitate the formal validation of system correctness. In Canada, they are often accredited engineering degrees with similar core competencies in physics-based engineering.
References
1988 documents
Computer science papers
Computer science education
Works by Edsger Dijkstra | On the Cruelty of Really Teaching Computer Science | [
"Technology"
] | 471 | [
"Computer science education",
"Computer science"
] |
624,666 | https://en.wikipedia.org/wiki/List%20of%20partition%20topics | Generally, a partition is a division of a whole into non-overlapping parts. Among the kinds of partitions considered in mathematics are
partition of a set or an ordered partition of a set,
partition of a graph,
partition of an integer,
partition of an interval,
partition of unity,
partition of a matrix; see block matrix, and
partition of the sum of squares in statistics problems, especially in the analysis of variance,
quotition and partition, two ways of viewing the operation of division of integers.
Integer partitions
Composition (combinatorics)
Ewens's sampling formula
Ferrers graph
Glaisher's theorem
Landau's function
Partition function (number theory)
Pentagonal number theorem
Plane partition
Quotition and partition
Rank of a partition
Crank of a partition
Solid partition
Young tableau
Young's lattice
Set partitions
Bell number
Bell polynomials
Dobinski's formula
Cumulant
Data clustering
Equivalence relation
Exact cover
Knuth's Algorithm X
Dancing Links
Exponential formula
Faà di Bruno's formula
Feshbach–Fano partitioning
Foliation
Frequency partition
Graph partition
Kernel of a function
Lamination (topology)
Matroid partitioning
Multipartition
Multiplicative partition
Noncrossing partition
Ordered partition of a set
Partition calculus
Partition function (quantum field theory)
Partition function (statistical mechanics)
Derivation of the partition function
Partition of an interval
Partition of a set
Ordered partition
Partition refinement
Disjoint-set data structure
Partition problem
3-partition problem
Partition topology
Quotition and partition
Recursive partitioning
Stirling number
Stirling transform
Stratification (mathematics)
Tverberg partition
Twelvefold way
In probability and stochastic processes
Chinese restaurant process
Dobinski's formula
Ewens's sampling formula
Law of total cumulance
Partition
Partition topics | List of partition topics | [
"Mathematics"
] | 359 | [
"Enumerative combinatorics",
"Combinatorics"
] |
624,670 | https://en.wikipedia.org/wiki/Power%20nap | A power nap or cat nap is a short sleep that terminates before deep sleep (slow-wave sleep; SWS). A power nap is intended to quickly revitalize the sleeper.
A power nap combined with consuming caffeine is called a stimulant nap, coffee nap, caffeine nap, or nappuccino.
Characteristics
A power nap, also known as a Stage 2 nap, is a short slumber of 20 minutes or less which terminates before the occurrence of deep slow-wave sleep, intended to quickly revitalize the napper. The expression "power nap" was coined by Cornell University social psychologist James Maas.
The 20-minute nap increases alertness and motor skills. Various durations may be recommended for power naps, which are short compared to regular sleep. The short duration prevents nappers from sleeping so long that they enter the slow wave portion of the normal sleep cycle without being able to complete the cycle. Entering deep, slow-wave sleep and failing to complete the normal sleep cycle can result in a phenomenon known as sleep inertia, where one feels groggy, disoriented, and even sleepier than before beginning the nap. In order to attain optimal post-nap performance, a Stage 2 nap must be limited to the beginning of a sleep cycle, specifically sleep stages N1 and N2, typically 18–25 minutes.
Experimental confirmation of the benefits of this brief nap comes from a Flinders University study in Australia in which 5, 10, 20, or 30-minute periods of sleep were given. The greatest immediate improvement in measures of alertness and cognitive performance came after the 10 minutes of sleep. The 20 and 30-minute periods of sleep showed evidence of sleep inertia immediately after the naps and improvements in alertness more than 30 minutes later, but not to a greater level than after the 10 minutes of sleep. Power naps are effective even when schedules allow a full night's sleep.
Research
Potential benefits
Power naps are intended to restore alertness, performance, and learning ability. A nap may also reverse the hormonal impact of a night of poor sleep or reverse the damage of sleep deprivation. A University of Düsseldorf study found superior memory recall once a person had reached 6 minutes of sleep, suggesting that the onset of sleep may initiate active memory processes of consolidation which—once triggered—remain effective even if sleep is terminated.
According to clinical studies among men and women, power nappers of any frequency or duration had a significantly lower mortality ratio due to heart disease than those not napping. Specifically, those occasionally napping had a 12% lower coronary mortality, whereas those systematically napping had a 37% lower coronary mortality.
A Flinders University study of individuals restricted to only five hours of sleep per night found a 10-minute nap was overall the most recuperative nap duration of various nap lengths they examined (lengths of 0 min, 5 min, 10 min, 20 min, and 30 minutes): the 5-minute nap produced few benefits in comparison with the no-nap control; the 10-minute nap produced immediate improvements in all outcome measures (including sleep onset latency, subjective sleepiness, fatigue, vigor, and cognitive performance), with some of these benefits maintained for as long as 155 minutes; the 20-minute nap was associated with improvements emerging 35 minutes after napping and lasting up to 125 minutes after napping; and the 30-minute nap produced a period of impaired alertness and performance immediately after napping, indicative of sleep inertia, followed by improvements lasting up to 155 minutes after the nap.
The NASA Ames Fatigue Countermeasures Group studied the effects of sleep loss and jet lag, and conducts training to counter these effects. A major fatigue countermeasures recommendation consists of a 40-minute nap ("NASA nap"), which was empirically shown to improve flight crew performance and alertness with a 22% statistical risk of entering SWS.
For several years, scientists have been investigating the benefits of napping, both the power nap and much longer sleep durations as long as 1–2 hours. Performance across a wide range of cognitive processes has been tested. Studies demonstrate that naps are as good as a night of sleep for some types of memory tasks.
A NASA study led by David F. Dinges, professor at the University of Pennsylvania School of Medicine, found that naps can improve certain memory functions. In that NASA study, volunteers spent several days living on one of 18 different sleep schedules, all in a laboratory setting. To measure the effectiveness of the naps, tests probing memory, alertness, response time, and other cognitive skills were used.
Power napping enablers and sleep timers allow properly timed power napping.
One study showed that a midday snooze reverses information overload. Reporting in Nature Neuroscience, Sara Mednick, PhD, Stickgold and colleagues also demonstrated that "burnout" irritation, frustration and poorer performance on a mental task can set in as a day of training wears on. This study also proved that, in some cases, napping could even boost performance to an individual's top levels. The NIMH team wrote: "The bottom line is: we should stop feeling guilty about taking that 'power nap' at work."
The Centers for Disease Control and Prevention studied the effects of socioeconomic status on short sleep durations. In this 2007-2008 CDCP study, 4,850 National Health and Nutrition Examination Survey (NHANES) participants provided self-reported sleep durations. It was suggested through this study that individuals with minority status and a lower ranking in socioeconomic position are more inclined to have shorter self-reported sleep durations.
Potential risks and detriments
Longer and more frequent daytime naps appeared to be associated with a higher risk of Alzheimer's dementia in a study that tracked 1401 older people over 14 years. Links have also been proposed between these types of naps and cardiovascular disease, though the evidence is largely inconclusive. A series of studies by the medical journal Sleep demonstrated that people who nap for an hour or more a day had 1.82 times the rate of cardiovascular disease than people who didn't nap.
Stimulant nap
A stimulant nap is a brief period of sleep of around 15 minutes, preceded by consuming a caffeinated drink or another stimulant.
It may combat daytime drowsiness more effectively than napping or drinking coffee alone. A stimulant nap is more effective than regular naps in improving post-nap alertness and cognitive functioning. In a driving simulator and a series of studies, Horne and Reyner investigated the effects of cold air, radio, a break with no nap, a nap, caffeine pill vs. placebo and a short nap preceded by caffeine on mildly sleep-deprived subjects. A nap with caffeine was by far the most effective in reducing driving accidents and subjective sleepiness as it helps the body get rid of the sleep-inducing chemical compound adenosine. Caffeine in coffee takes up to half an hour to have an alerting effect, hence "a short (<15min) nap will not be compromised if it is taken immediately after the coffee." One account suggested that it was like a "double shot of energy" from the stimulating boost from caffeine plus better alertness from napping. This procedure has been studied on sleep-deprived humans given the task of driving a motor vehicle afterwards, although it has not been studied on elderly populations.
Nap rooms and tech aided naps
Some companies have nap rooms to allow employees to take power naps. This may take the form of a nap room with a recliner, or chairs specially designed for power napping installed in a designated area. Companies with nap rooms say that employees are happier and become more productive at work.
Similar nap rooms and stations also exist in higher education institutions. Many colleges and universities provide napping furniture such as cots and beanbags in libraries for students to take naps after long periods of study. At least one university has a nap room set up in a gym. Some medical schools also set up nap rooms at teaching hospitals. The nap rooms may include sleeping pods or cots, white noise machines, and antimicrobial pillows.
In Barcelona, there is a café called Nappuccino that implements custom-built napping pods inside the café.
A more portable aid is a nap timer app. Apps have various features, including aided sounds, nap history and pattern tracking, and daily reminders, that make it easier to take naps.
See also
References
Further reading
Maas, James. Power Sleep: The Revolutionary Program That Prepares Your Mind for Peak Performance. William Morrow Paperbacks, 1st edition, 19 December 1998.
External links
Boston Globe article on power naps
University of Miami : "Sleep, Napping and the Brain -The Power of Napping", YouTube
Sleep | Power nap | [
"Biology"
] | 1,831 | [
"Behavior",
"Sleep"
] |
624,681 | https://en.wikipedia.org/wiki/Lyudmila%20Zhuravleva | Lyudmila Vasilyevna Zhuravleva (born 22 May 1946) is a Soviet, Russian and Ukrainian astronomer, who worked at the Crimean Astrophysical Observatory in Nauchnij, where she discovered 213 minor planets. She also serves as president of the Crimean branch of the "Prince Clarissimus Aleksandr Danilovich Menshikov Foundation" (which was founded in May 1995 in Berezovo, and is not the same as the "Menshikov Foundation" children's charity founded by Anthea Eno, the wife of Brian Eno). She has discovered a number of asteroids, including the Trojan asteroid 4086 Podalirius and asteroid 2374 Vladvysotskij. Zhuravleva is ranked 43 in the Minor Planet Center's list of those who have discovered minor planets. She is credited with having discovered 200, and co-discovered an additional 13 between 1972 and 1992. In the rating of minor planet discoveries, she is listed in 57th place out of 1,429 astronomers. The main-belt asteroid 26087 Zhuravleva, discovered by her colleague Lyudmila Karachkina at Nauchnij, was named in her honour.
List of discovered minor planets
References
Discoverers of asteroids
Living people
1946 births
Soviet astronomers
Ukrainian astronomers
Women astronomers | Lyudmila Zhuravleva | [
"Astronomy"
] | 275 | [
"Women astronomers",
"Astronomers"
] |
624,705 | https://en.wikipedia.org/wiki/Pole%20star | A pole star is a visible star that is approximately aligned with the axis of rotation of an astronomical body; that is, a star whose apparent position is close to one of the celestial poles. On Earth, a pole star would lie directly overhead when viewed from the North or the South Pole.
Currently, Earth's pole stars are Polaris (Alpha Ursae Minoris), a bright magnitude 2 star aligned approximately with its northern axis that serves as a pre-eminent star in celestial navigation, and a much dimmer magnitude 5.5 star on its southern axis, Polaris Australis (Sigma Octantis).
From around 1700 BC until just after 300 AD, Kochab (Beta Ursae Minoris) and Pherkad (Gamma Ursae Minoris) were twin northern pole stars, though neither was as close to the pole as Polaris is now.
History
In classical antiquity, Beta Ursae Minoris (Kochab) was closer to the celestial north pole than Alpha Ursae Minoris. While there was no naked-eye star close to the pole, the midpoint between Alpha and Beta Ursae Minoris was reasonably close to the pole, and it appears that the entire constellation of Ursa Minor, in antiquity known as Cynosura (Greek Κυνόσουρα "dog's tail"), was used as indicating the northern direction for the purposes of navigation by the Phoenicians. The ancient name of Ursa Minor, anglicized as cynosure, has since itself become a term for "guiding principle" after the constellation's use in navigation.
Alpha Ursae Minoris (Polaris) was described as ἀειφανής (transliterated as aeiphanes) meaning "always above the horizon", "ever-shining" by Stobaeus in the 5th century, when it was still removed from the celestial pole by about 8°. It was known as scip-steorra ("ship-star") in 10th-century Anglo-Saxon England, reflecting its use in navigation. In the Vishnu Purana, it is personified under the name Dhruva ("immovable, fixed").
The name stella polaris was coined in the Renaissance, even though at that time it was well recognized that it was several degrees away from the celestial pole; Gemma Frisius in the year 1547 determined this distance as 3°8'.
An explicit identification of Mary as stella maris with the North Star (Polaris) becomes evident in the title Cynosura seu Mariana Stella Polaris (i.e. "Cynosure, or the Marian Polar Star"), a collection of Marian poetry published by Nicolaus Lucensis (Niccolo Barsotti de Lucca) in 1655.
Precession of the equinoxes
In 2022 Polaris' mean declination was 89.35 degrees North (at epoch J2000 it was 89.26 degrees N). So it appears due north in the sky to a precision better than one degree, and the angle it makes with respect to the true horizon (after correcting for refraction and other factors) is within a degree of the latitude of the observer.
The celestial pole will be nearest Polaris in 2100.
Due to the precession of the equinoxes (as well as the stars' proper motions), the role of North Star has passed from one star to another in the remote past, and will pass in the remote future. In 3000 BC, the faint star Thuban in the constellation Draco was the North Star, aligning within 0.1° distance from the celestial pole, the closest of any of the visible pole stars. However, at magnitude 3.67 (fourth magnitude) it is only one-fifth as bright as Polaris, and today it is invisible in light-polluted urban skies.
During the 1st millennium BC, Beta Ursae Minoris (Kochab) was the bright star closest to the celestial pole, but it was never close enough to be taken as marking the pole, and the Greek navigator Pytheas in ca. 320 BC described the celestial pole as devoid of stars. In the Roman era, the celestial pole was about equally distant between Polaris and Kochab.
The precession of the equinoxes takes about 25,770 years to complete a cycle. Polaris' mean position (taking account of precession and proper motion) will reach a maximum declination of +89°32'23", which translates to 1657" (or 0.4603°) from the celestial north pole, in February 2102. Its maximum apparent declination (taking account of nutation and aberration) will be +89°32'50.62", which is 1629" (or 0.4526°) from the celestial north pole, on 24 March 2100.
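The conversion from a declination to an angular distance from the celestial pole is simple arithmetic; as a worked check of the mean-position figure quoted above:

```latex
90^{\circ} - 89^{\circ}32'23'' = 0^{\circ}27'37'' = (27 \times 60 + 37)'' = 1657'' \approx 0.4603^{\circ}
```

The apparent-declination figure converts the same way, giving the quoted 1629" (about 0.4526°).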
Precession will next point the north celestial pole at stars in the northern constellation Cepheus. The pole will drift to space equidistant between Polaris and Gamma Cephei ("Errai") by 3000 AD, with Errai reaching its closest alignment with the northern celestial pole around 4200 AD. Iota Cephei and Beta Cephei will stand on either side of the northern celestial pole some time around 5200 AD, before moving to closer alignment with the brighter star Alpha Cephei ("Alderamin") around 7500 AD.
Precession will then point the north celestial pole at stars in the northern constellation Cygnus. Like Beta Ursae Minoris during the 1st millennium BC, the bright star closest to the celestial pole in the 10th millennium AD, first-magnitude Deneb, will be a distant 7° from the pole, never close enough to be taken as marking the pole, while third-magnitude Delta Cygni will be a more helpful pole star, at a distance of 3° from celestial north, around 11,250 AD. Precession will then point the north celestial pole nearer the constellation Lyra, where the second brightest star in the northern celestial hemisphere, Vega, will be a pole star around 14,500 AD, though at a distance of 5° from celestial north.
Precession will eventually point the north celestial pole nearer the stars in the constellation Hercules, pointing towards Tau Herculis around 18,400 AD. The celestial pole will then return to the stars in constellation Draco (Thuban, mentioned above) before returning to the current constellation, Ursa Minor. When Polaris becomes the North Star again around 27,800 AD, due to its proper motion it then will be farther away from the pole than it is now, while in 23,600 BC it was closer to the pole.
Over the course of Earth's 26,000-year axial precession cycle, a series of bright naked eye stars (an apparent magnitude up to +6; a full moon is −12.9) in the northern hemisphere will hold the transitory title of North Star. While other stars might line up with the north celestial pole during the 26,000 year cycle, they do not necessarily meet the naked eye limit needed to serve as a useful indicator of north to an Earth-based observer, resulting in periods of time during the cycle when there is no clearly defined North Star. There will also be periods during the cycle when bright stars give only an approximate guide to "north", as they may be more than 5° of angular distance removed from direct alignment with the north celestial pole.
The 26,000 year cycle of North Stars, starting with the current star, with stars that will be "near-north" indicators when no North Star exists during the cycle, including each star's average brightness and closest alignment to the north celestial pole during the cycle:
Southern pole star (South Star)
Currently, there is no South Pole Star like Polaris, the so-called North Star. Sigma Octantis is the closest near naked-eye star to the south celestial pole, but at apparent magnitude 5.47 it is barely visible on a clear night, making it less useful for casual navigational or astronomy alignment purposes.
It is a yellow giant 294 light years from Earth. Its angular separation from the pole is about 1°. The Southern Cross constellation functions as an approximate southern pole constellation, by pointing to where a southern pole star would be.
At the equator, it is possible to see both Polaris and the Southern Cross. The celestial south pole is moving toward the Southern Cross, which has pointed to the south pole for the last 2000 years or so. As a consequence, the constellation is no longer visible from subtropical northern latitudes, as it was in the time of the ancient Greeks.
Around 200 BC, the star Beta Hydri was the nearest bright star to the celestial south pole. Around 2800 BC, Achernar was only 8 degrees from the south pole.
In the next 7500 years, the south celestial pole will pass close to the stars Gamma Chamaeleontis (4200 AD), I Carinae, Omega Carinae (5800 AD), Upsilon Carinae, Iota Carinae (Aspidiske, 8100 AD) and Delta Velorum (Alsephina, 9200 AD). From the eightieth to the ninetieth centuries, the south celestial pole will travel through the False Cross. Around 14,000 AD Canopus will have a declination of –82°, meaning it will rise and set daily for latitudes between 8°S and 8°N, and will not rise to viewers north of this latter 8th parallel north.
Precession and proper motion mean that Sirius will be a future southern pole star: at 88.4° S declination in the year 66,270 AD; and 87.7° S declination in the year 93,830 AD.
Other planets
Pole stars of other planets are defined analogously: they are stars (brighter than 6th magnitude, i.e., visible to the naked eye under ideal conditions) that most closely coincide with the projection of the planet's axis of rotation onto the celestial sphere. Different planets have different pole stars because their axes are oriented differently. (See Poles of astronomical bodies.)
In religion and mythology
In the medieval period, Polaris was also known as stella maris ("star of the sea", from its use for navigation at sea), as in e.g. Bartholomaeus Anglicus (d. 1272), in the translation of John Trevisa (1397):
Polaris was associated with Marian veneration from an early time, Our Lady, Star of the Sea being a title of the Blessed Virgin. This tradition goes back to a misreading of Saint Jerome's translation of Eusebius' Onomasticon, De nominibus hebraicis (written ca. 390). Jerome gave stilla maris "drop of the sea" as a (false) Hebrew etymology of the name Maria. This stilla maris was later misread as stella maris; the misreading is also found in the manuscript tradition of Isidore's Etymologiae (7th century); it probably arises in the Carolingian era; a late 9th-century manuscript of Jerome's text still has stilla, not stella, but Paschasius Radbertus, also writing in the 9th century, makes an explicit reference to the "Star of the Sea" metaphor, saying that Mary is the "Star of the Sea" to be followed on the way to Christ, "lest we capsize amid the storm-tossed waves of the sea."
In Mandaean cosmology, the Pole Star is considered to be auspicious and is associated with the World of Light ("heaven"). Mandaeans face north when praying, and temples are also oriented towards the north. On the contrary, the south is associated with the World of Darkness.
See also
Astronomy on Mars § Celestial poles and ecliptic
Celestial equator
Direction determination
Empirical evidence for the spherical shape of Earth § Observation of certain, fixed stars from different locations
Guide star
Lists of stars
Worship of heavenly bodies
References
External links
Star trails around Polaris
Articles containing video clips
Navigation
Star types | Pole star | [
"Astronomy"
] | 2,537 | [
"Star types",
"Astronomical classification systems"
] |
624,708 | https://en.wikipedia.org/wiki/Persistence%20of%20a%20number | In mathematics, the persistence of a number is the number of times one must apply a given operation to an integer before reaching a fixed point at which the operation no longer alters the number.
Usually, this involves additive or multiplicative persistence of a non-negative integer, which is how often one has to replace the number by the sum or product of its digits until one reaches a single digit. Because the numbers are broken down into their digits, the additive or multiplicative persistence depends on the radix. In the remainder of this article, base ten is assumed.
The single-digit final state reached in the process of calculating an integer's additive persistence is its digital root. Put another way, a number's additive persistence counts how many times we must sum its digits to arrive at its digital root.
Examples
The additive persistence of 2718 is 2: first we find that 2 + 7 + 1 + 8 = 18, and then that 1 + 8 = 9. The multiplicative persistence of 39 is 3, because it takes three steps to reduce 39 to a single digit: 39 → 27 → 14 → 4. Also, 39 is the smallest number of multiplicative persistence 3.
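A short Python sketch (the helper names are our own) makes the two definitions concrete for base ten:

```python
from math import prod

def digit_sum(n: int) -> int:
    return sum(int(d) for d in str(n))

def digit_product(n: int) -> int:
    return prod(int(d) for d in str(n))

def persistence(n: int, step) -> int:
    """Count how many times `step` must be applied before n becomes a single digit."""
    count = 0
    while n >= 10:
        n = step(n)
        count += 1
    return count

# The examples above:
assert persistence(2718, digit_sum) == 2      # 2718 -> 18 -> 9
assert persistence(39, digit_product) == 3    # 39 -> 27 -> 14 -> 4
```

Replacing `str(n)` with a digit expansion in another base would give the radix-dependent variants mentioned earlier.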
Smallest numbers of a given multiplicative persistence
In base 10, there is thought to be no number with a multiplicative persistence greater than 11; this is known to be true for numbers up to 2.67 × 10^30000. The smallest numbers with persistence 0, 1, 2, ... are:
0, 10, 25, 39, 77, 679, 6788, 68889, 2677889, 26888999, 3778888999, 277777788888899.
The search for these numbers can be sped up by using additional properties of the decimal digits of these record-breaking numbers. These digits must be in increasing order (with the exception of the second number, 10), and – except for the first two digits – all digits must be 7, 8, or 9. There are also additional restrictions on the first two digits.
Based on these restrictions, the number of candidates for n-digit numbers with record-breaking persistence is only proportional to the square of n, a tiny fraction of all possible n-digit numbers. However, any number that is missing from the sequence above would have multiplicative persistence > 11; such numbers are believed not to exist, and would need to have over 30,000 digits if they do exist.
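As an illustrative sketch of such a restricted search (not the actual record-hunting code, and with an assumed cut-off of 15 digits), one can enumerate only non-decreasing digit strings whose digits beyond the first two are 7, 8 or 9; the digit 0 is omitted because it collapses the digit product to 0 in one step:

```python
from itertools import combinations_with_replacement
from math import prod

def mult_persistence(n: int) -> int:
    """Number of digit-product steps needed to reach a single digit."""
    steps = 0
    while n >= 10:
        n = prod(int(d) for d in str(n))
        steps += 1
    return steps

def search(max_digits: int) -> None:
    """Print each new record persistence found among the restricted candidates."""
    best = -1
    for length in range(1, max_digits + 1):
        head_len = min(2, length)
        for head in combinations_with_replacement("123456789", head_len):
            for tail in combinations_with_replacement("789", length - head_len):
                digits = "".join(head) + "".join(tail)
                if list(digits) != sorted(digits):
                    continue  # keep the whole string in non-decreasing order
                p = mult_persistence(int(digits))
                if p > best:
                    best = p
                    print(p, digits)

search(15)  # ends with persistence 11 at 277777788888899
```

Scanning all 15-digit numbers directly would require on the order of 10^15 checks; the restricted scan above visits only a few tens of thousands of candidates.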
Properties of additive persistence
The additive persistence of a number is smaller than or equal to the number itself, with equality only when the number is zero.
For base b and natural numbers k and n, the numbers n and n·b^k have the same additive persistence.
Smallest numbers of a given additive persistence
The additive persistence of a number, however, can become arbitrarily large (proof: for a given number n, the persistence of the number consisting of n repetitions of the digit 1 is 1 higher than that of n). The smallest numbers of additive persistence 0, 1, 2, ... are:
0, 10, 19, 199, 19999999999999999999999, ...
The next number in the sequence (the smallest number of additive persistence 5) is 2 × 10^(2×(10^22 − 1)/9) − 1 (that is, 1 followed by 2222222222222222222222 9's). For any fixed base, the sum of the digits of a number is at most proportional to its logarithm; therefore, the additive persistence is at most proportional to the iterated logarithm, and the smallest number of a given additive persistence grows tetrationally.
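As a worked check, a digit string consisting of a 1 followed by N nines has value 2·10^N − 1 and digit sum 1 + 9N; with N = 2×(10^22 − 1)/9, i.e. N written as twenty-two 2's:

```latex
1 + 9N \;=\; 1 + 2\,(10^{22} - 1) \;=\; 2\cdot 10^{22} - 1 \;=\; 1\underbrace{99\ldots9}_{22\ \text{nines}}
```

which is exactly the smallest number of additive persistence 4 listed above, so the displayed number has additive persistence 5.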
Functions with limited persistence
Some functions only allow persistence up to a certain degree.
For example, the function which takes the minimal digit only allows for persistence 0 or 1, as you either start with or step to a single-digit number.
References
Literature
Number theory | Persistence of a number | [
"Mathematics"
] | 844 | [
"Discrete mathematics",
"Number theory"
] |
624,714 | https://en.wikipedia.org/wiki/Thomson%20scattering | Thomson scattering is the elastic scattering of electromagnetic radiation by a free charged particle, as described by classical electromagnetism. It is the low-energy limit of Compton scattering: the particle's kinetic energy and photon frequency do not change as a result of the scattering. This limit is valid as long as the photon energy is much smaller than the mass energy of the particle, or equivalently, if the wavelength of the light is much greater than the Compton wavelength of the particle (e.g., for electrons, longer wavelengths than hard x-rays).
Description of the phenomenon
Thomson scattering is a model for the effect of electromagnetic fields on electrons when the field energy is much less than the rest mass energy of the electron. In the model the electric field of the incident wave accelerates the charged particle, causing it, in turn, to emit radiation at the same frequency as the incident wave, and thus the wave is scattered. Thomson scattering is an important phenomenon in plasma physics and was first explained by the physicist J. J. Thomson. As long as the motion of the particle is non-relativistic (i.e. its speed is much less than the speed of light), the main cause of the acceleration of the particle will be the electric field component of the incident wave. In a first approximation, the influence of the magnetic field can be neglected. The particle will move in the direction of the oscillating electric field, resulting in electromagnetic dipole radiation. The moving particle radiates most strongly in a direction perpendicular to its acceleration and that radiation will be polarized along the direction of its motion. Therefore, depending on where an observer is located, the light scattered from a small volume element may appear to be more or less polarized.
The electric fields of the incoming and observed wave (i.e. the outgoing wave) can be divided up into those components lying in the plane of observation (formed by the incoming and observed waves) and those components perpendicular to that plane. Those components lying in the plane are referred to as "radial" and those perpendicular to the plane are "tangential". (It is difficult to make these terms seem natural, but it is standard terminology.)
The diagram on the right depicts the plane of observation. It shows the radial component of the incident electric field, which causes the charged particles at the scattering point to exhibit a radial component of acceleration (i.e., a component tangent to the plane of observation). It can be shown that the amplitude of the observed wave will be proportional to the cosine of χ, the angle between the incident and observed waves. The intensity, which is the square of the amplitude, will then be diminished by a factor of cos2(χ). It can be seen that the tangential components (perpendicular to the plane of the diagram) will not be affected in this way.
The scattering is best described by an emission coefficient ε, defined so that ε dt dV dΩ dλ is the energy scattered by a volume element dV in time dt into solid angle dΩ between wavelengths λ and λ+dλ. From the point of view of an observer, there are two emission coefficients, εr corresponding to radially polarized light and εt corresponding to tangentially polarized light. For unpolarized incident light, both are proportional to the density of charged particles at the scattering point, to the incident flux (i.e. energy/time/area/wavelength) and to the Thomson cross section for the charged particle, defined below; εr additionally carries a factor of cos²(χ), where χ is the angle between the incident and scattered photons (see figure above). The total energy radiated by a volume element in time dt between wavelengths λ and λ+dλ is found by integrating the sum of the emission coefficients over all directions (solid angle), which yields the product of the particle density, the incident flux and the Thomson cross section.
The Thomson differential cross section, related to the sum of the emissivity coefficients, is expressed in SI units in terms of q, the charge per particle, m, the mass of the particle, and ε₀, the permittivity of free space. (To obtain an expression in cgs units, drop the factor of 4πε₀.) Integrating over the solid angle gives the total Thomson cross section, likewise in SI units.
The important feature is that the cross section is independent of light frequency. The cross section is proportional, by a simple numerical factor, to the square of the classical radius of a point particle of mass m and charge q. Alternatively, it can be expressed in terms of the Compton wavelength and the fine-structure constant. For an electron, the Thomson cross-section takes a fixed numerical value, given below.
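For reference, a sketch of the standard SI-unit expressions for the quantities discussed above, for a particle of charge q and mass m (here n is the charged-particle density, ⟨I⟩ the incident flux, χ the scattering angle and ε₀ the vacuum permittivity; the emission-coefficient prefactor shown is the one that makes the solid-angle integral of εr + εt equal n σt⟨I⟩, consistent with the definitions given earlier):

```latex
% Emission coefficients for unpolarized incident light
\varepsilon_r = \frac{3\sigma_t}{16\pi}\, n\,\langle I\rangle \cos^{2}\chi ,
\qquad
\varepsilon_t = \frac{3\sigma_t}{16\pi}\, n\,\langle I\rangle

% Differential and total Thomson cross sections, and the classical particle radius
\frac{d\sigma_t}{d\Omega}
  = \left(\frac{q^{2}}{4\pi\varepsilon_0 m c^{2}}\right)^{\!2}\frac{1+\cos^{2}\chi}{2},
\qquad
\sigma_t = \frac{8\pi}{3}\left(\frac{q^{2}}{4\pi\varepsilon_0 m c^{2}}\right)^{\!2}
         = \frac{8\pi}{3}\, r_q^{2},
\qquad
r_q = \frac{q^{2}}{4\pi\varepsilon_0 m c^{2}}

% For an electron, r_e = \alpha\,\hbar/(m_e c) (the fine-structure constant times the
% reduced Compton wavelength), giving numerically
\sigma_t \approx 6.65 \times 10^{-29}\ \mathrm{m}^{2} \approx 0.665\ \text{barn}
```

Note that none of these expressions depends on the light frequency, which is the feature emphasized above.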
Examples of Thomson scattering
The cosmic microwave background contains a small linearly-polarized component attributed to Thomson scattering. That polarized component mapping out the so-called E-modes was first detected by DASI in 2002.
The solar K-corona is the result of the Thomson scattering of solar radiation from solar coronal electrons. The ESA and NASA SOHO mission and the NASA STEREO mission generate three-dimensional images of the electron density around the Sun by measuring this K-corona from three separate satellites.
In tokamaks, corona of ICF targets and other experimental fusion devices, the electron temperatures and densities in the plasma can be measured with high accuracy by detecting the effect of Thomson scattering of a high-intensity laser beam. An upgraded Thomson scattering system in the Wendelstein 7-X stellarator uses Nd:YAG lasers to emit multiple pulses in quick succession. The intervals within each burst can range from 2 ms to 33.3 ms, permitting up to twelve consecutive measurements. Synchronization with plasma events is made possible by a newly added trigger system that facilitates real-time analysis of transient plasma events.
In the Sunyaev–Zeldovich effect, where the photon energy is much less than the electron rest mass, the inverse-Compton scattering can be approximated as Thomson scattering in the rest frame of the electron.
Models for X-ray crystallography are based on Thomson scattering.
See also
Compton scattering
Kapitsa–Dirac effect
Klein–Nishina formula
References
Further reading
External links
Thomson scattering notes
Thomson scattering: principle and measurements
Atomic physics
Scattering
Plasma diagnostics | Thomson scattering | [
"Physics",
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 1,265 | [
"Nuclear physics",
"Plasma physics",
"Quantum mechanics",
"Measuring instruments",
"Plasma diagnostics",
"Scattering",
"Condensed matter physics",
"Atomic physics",
"Particle physics",
"Atomic",
" molecular",
" and optical physics"
] |
624,739 | https://en.wikipedia.org/wiki/Catenation | In chemistry, catenation is the bonding of atoms of the same element into a series, called a chain. A chain or a ring may be open if its ends are not bonded to each other (an open-chain compound), or closed if they are bonded in a ring (a cyclic compound). The words to catenate and catenation reflect the Latin root catena, "chain".
Carbon
Catenation occurs most readily with carbon, which forms covalent bonds with other carbon atoms to form long chains and structures. This is the reason for the presence of the vast number of organic compounds in nature. Carbon is best known for its properties of catenation, with organic chemistry essentially being the study of catenated carbon structures (also known as catenae). Carbon chains in biochemistry combine any of various other elements, such as hydrogen, oxygen, and biometals, onto the backbone of carbon.
However, carbon is by no means the only element capable of forming such catenae, and several other main-group elements are capable of forming an expansive range of catenae, including hydrogen, boron, silicon, phosphorus, sulfur and halogens.
The ability of an element to catenate is primarily based on the bond energy of the element to itself, which decreases with more diffuse orbitals (those with higher azimuthal quantum number) overlapping to form the bond. Hence, carbon, with the least diffuse valence shell p orbital is capable of forming longer p-p sigma bonded chains of atoms than heavier elements which bond via higher valence shell orbitals. Catenation ability is also influenced by a range of steric and electronic factors, including the electronegativity of the element in question, the molecular orbital n and the ability to form different kinds of covalent bonds. For carbon, the sigma overlap between adjacent atoms is sufficiently strong that perfectly stable chains can be formed. With other elements this was once thought to be extremely difficult in spite of plenty of evidence to the contrary.
Hydrogen
Theories of the structure of water involve three-dimensional networks of tetrahedra and chains and rings, linked via hydrogen bonding.
A polycatenated network, with rings formed from metal-templated hemispheres linked by hydrogen bonds, was reported in 2008.
In organic chemistry, hydrogen bonding is known to facilitate the formation of chain structures. For example, 4-tricyclanol C10H16O shows catenated hydrogen bonding between the hydroxyl groups, leading to the formation of helical chains; crystalline isophthalic acid C8H6O4 is built up from molecules connected by hydrogen bonds, forming infinite chains.
In unusual conditions, a 1-dimensional series of hydrogen molecules confined within a single wall carbon nanotube is expected to become metallic at a relatively low pressure of 163.5 GPa. This is about 40% of the ~400 GPa thought to be required to metallize ordinary hydrogen, a pressure which is difficult to access experimentally.
Silicon
Silicon can form sigma bonds to other silicon atoms (and disilane is the parent of this class of compounds). However, it is difficult to prepare and isolate SinH2n+2 (analogous to the saturated alkane hydrocarbons) with n greater than about 8, as their thermal stability decreases with increases in the number of silicon atoms. Silanes higher in molecular weight than disilane decompose to polymeric polysilicon hydride and hydrogen. But with a suitable pair of organic substituents in place of hydrogen on each silicon it is possible to prepare polysilanes (sometimes, erroneously called polysilenes) that are analogues of alkanes. These long chain compounds have surprising electronic properties - high electrical conductivity, for example - arising from sigma delocalization of the electrons in the chain.
Even silicon–silicon pi bonds are possible. However, these bonds are less stable than the carbon analogues. Disilane and longer silanes are quite reactive compared to alkanes. Disilene and disilynes are quite rare, unlike alkenes and alkynes. Examples of disilynes, long thought to be too unstable to be isolated, were reported in 2004.
Boron
In the dodecaborate(12) anion, twelve boron atoms covalently link to each other to form an icosahedral structure. Various other similar motifs are also well studied, such as boranes, carboranes and metal dicarbollides.
Nitrogen
Nitrogen, unlike its neighbor carbon, is much less likely to form chains that are stable at room temperature. But, there do exist nitrogen chains; for example, in solid nitrogen, triazane, azide anion and triazoles. Longer series with eight or more nitrogen atoms, such as 1,1'-Azobis-1,2,3-triazole, have been synthesized. These compounds have potential use as a convenient way to store large amount of energy.
Phosphorus
Phosphorus chains (with organic substituents) have been prepared, although these tend to be quite fragile. Small rings or clusters are more common.
Sulfur
The versatile chemistry of elemental sulfur is largely due to catenation. In the native state, sulfur exists as S8 molecules. On heating these rings open and link together giving rise to increasingly long chains, as evidenced by the progressive increase in viscosity as the chains lengthen. Also, sulfur polycations, sulfur polyanions (polysulfides) and lower sulfur oxides are all known. Furthermore, selenium and tellurium show variants of these structural motifs.
Semimetallic elements
In recent years, a variety of double and triple bonds between semi-metallic elements have been reported, including silicon, germanium, arsenic and bismuth. The ability of certain main group elements to catenate is currently the subject of research into inorganic polymers.
Halogens
Except for fluorine, which can only form unstable polyfluorides at low temperature, all other stable halogens (Cl, Br, I) can form several isopolyhalogen anions that are stable at room temperature, the most prominent example being triiodide. In all these anions, the halogen atoms of the same element bond to each other.
See also
Backbone chain
Chain-growth polymerization
Macromolecule
Aromaticity
Polyhalogen ions
Polysulfides
Superatom
Inorganic polymer
Self-assembly
References
Bibliography
Organic chemistry
Inorganic chemistry | Catenation | [
"Chemistry"
] | 1,337 | [
"nan"
] |
624,771 | https://en.wikipedia.org/wiki/Leslie%20Peltier | Leslie Copus Peltier (January 2, 1900 – May 10, 1980) was an American amateur astronomer and discoverer of several comets and novae, including Nova Herculis 1963. He was once described as "the world's greatest non-professional astronomer" by Harlow Shapley.
Biography
Leslie Copus Peltier was born in Delphos, Ohio. Delphos is located in northwestern Ohio in both Van Wert and Allen County. His homeplace was located on South Bredeick Street, and his home is still standing today. The home was known as Brookhaven. Peltier married Dorothy Nihiser in November 1933. An amateur astronomer, he was a prolific discoverer of comets and also a persistent observer of variable stars and member of the AAVSO. He was co-discoverer of 12 comets, 10 of which carry his name, and over a span of more than 60 years made more than 132,000 variable star observations.
He wrote the autobiographical Starlight Nights, which evokes the magic of stargazing in simpler days, on a farm and without light pollution.
Main-belt asteroid 3850 Peltier is named in his honor, as is the Leslie C. Peltier Award of the Astronomical League.
Publications
Peltier authored the following books:
Starlight Nights: The Adventures of a Star-Gazer (1965); also published in Japanese as Hoshi No Kuru Yoru (1985)
Guideposts to the Stars: Exploring the Skies Throughout the Year (1972); also published in Dutch as Spectrum Sterrengids (1976) and as Prisma Sterrengids (1979)
The Place on Jennings Creek (1977)
Leslie Peltier's Guide to the Stars (1986)
The Binocular Stargazer: A Beginner's Guide to Exploring the Sky (1995)
References
External links
Leslie C. Peltier at the AAVSO
Obituary in JAVSO 9 (1980) 32
1900 births
1980 deaths
American astronomers
Amateur astronomers
Discoverers of comets
People from Delphos, Ohio | Leslie Peltier | [
"Astronomy"
] | 429 | [
"Astronomers",
"Amateur astronomers"
] |
624,917 | https://en.wikipedia.org/wiki/Straight-twelve%20engine | A straight-12 engine or inline-12 engine is a twelve-cylinder piston engine with all twelve cylinders mounted in a straight line along the crankcase.
Land use
Due to their very long length, straight-twelve engines are rarely used in automobiles. The first known example is an engine in the 1920 French Corona car; however, it is not known if any cars were sold. Packard also experimented with an automobile powered by an inline 12 in 1929.
The straight-12 has also been used for large military trucks.
Marine use
Some Russian firms built straight-12s for use in ships in the 1960s and 1970s.
MAN Diesel & Turbo 12K98ME and 12S90ME-C and the Wärtsilä-Sulzer RTA96-C are examples of contemporary marine engines in L-12-cylinder configuration. These are popular for propulsion in container ships.
References
Piston engine configurations
12-cylinder engines
12 | Straight-twelve engine | [
"Engineering"
] | 189 | [
"Mechanical engineering stubs",
"Mechanical engineering"
] |
624,918 | https://en.wikipedia.org/wiki/Learning%20economy | A learning economy is a society that values skills like assets, where learning and employment information is readily exchanged from institution to institution, and controlled by the learner and worker.
History of knowledge economies
Modern economies can be characterised as learning economies in which knowledge is the crucial resource and learning is the most important process. Different kinds of learning and economically relevant types of knowledge can likewise be identified. It is argued that pure market economies, if such existed, would have severe problems in terms of learning and innovation. The 'learning economy' is a mixed economy in a fundamental sense.
In the public debate, knowledge is increasingly presented as the crucial factor in the development of both society and the economy. In a growing number of publications from the European Commission and Organisation for Economic Co-operation and Development it is emphasised that citizens of European Union member countries currently operate in ‘a knowledge-based economy’. For several reasons many prefer the term ‘the learning economy’ in characterising the current phase of socio-economic development.
Active studies
Academic work on blockchain-based learning economies first appeared in April 2018. In 2018 at the annual United Nations General Assembly, a decentralized Learning Economy blockchain protocol was proposed. It asserts that if "education was the new gold standard," a market economy could be built around it to catalyze and incentivize 21st century education. Blockchain has many applications in education, including verifying the integrity of skills, returning the control of identity to the students, and defining research provenance. In April 2019, research was published at Harvard Kennedy School making a case for a new form of Economy Corporation (E-Corp) to govern this decentralized Learning Economy.
On January 30, 2020, the state of Colorado's Department of Higher Education and the Learning Economy Foundation partnered to provide a three-year empirical study of a statewide decentralized education and workplace ecosystem as a test case for other states and nations. The C-Lab provides a unified space for Web3 pilots and workgroups from across the state of Colorado. Its first goals are learner record interoperability, allowing learning institutions to exchange student records seamlessly, and learner wallets that empower the students to save their credentials on their phones and share them directly with employers and learning institutions. To further this effort, the Colorado Governor's Office of Information Technology, CDHE, Learning Economy Foundation, and ETH Denver launched the "Advance Colorado” partnership program powered by the State of Colorado to further blockchain technology initiatives. The goal of the consortium is to lay the foundation for a decentralized and open Internet of Education.
Early criticism
Much of the initial theorizing about the advent of a fundamentally new era in which economic activity is increasingly 'abstract', i.e., disconnected from land, labour, and physical capital (machines and industrial infrastructure) and also capital in terms of funds, was associated with the 'business management' literature of the 'new economy' NASDAQ bubble, which collapsed in 2001 (but slowly recovered, albeit in a leaner format, throughout the 2000s). This literature was initially known more for its hyperbole and faddishness than for its academic/empirical integrity. More recently [2011], however, empirical research from cross-disciplinary fields such as innovation studies is altering that perception.
Since 2017, the case for decentralized blockchain learning economies has grown stronger along with its criticisms. Many distributed applications have tested economic models that incentivize students and teachers, but without institutional and government support there is little chance of many national learning economies adopting at scale any time soon. Another major concern is new and unknown challenges with student privacy and distributed ledgers, as well as skepticism about the merits of cryptocurrency.
See also
Knowledge economy
References
Business intelligence terms
Economics catchphrases
Information Age
Economy by field | Learning economy | [
"Technology"
] | 781 | [
"Information Age",
"Computing and society"
] |
625,059 | https://en.wikipedia.org/wiki/Modular%20building | A modular building is a prefabricated building that consists of repeated sections called modules. Modularity involves constructing sections away from the building site, then delivering them to the intended site. Installation of the prefabricated sections is completed on site. Prefabricated sections are sometimes placed using a crane. The modules can be placed side-by-side, end-to-end, or stacked, allowing for a variety of configurations and styles. After placement, the modules are joined together using inter-module connections, also known as inter-connections. The inter-connections tie the individual modules together to form the overall building structure.
Uses
Modular buildings may be used for long-term, temporary or permanent facilities, such as construction camps, schools and classrooms, civilian and military housing, and industrial facilities. Modular buildings are used in remote and rural areas where conventional construction may not be reasonable or possible, for example, the Halley VI accommodation pods used for a BAS Antarctic expedition. Other uses have included churches, health care facilities, sales and retail offices, fast food restaurants and cruise ship construction. They can also be used in areas that have weather concerns, such as hurricanes. Modular buildings are often used to provide temporary facilities, including toilets and ablutions at events. The portability of the buildings makes them popular with hire companies and clients alike. The use of modular buildings enables events to be held at locations where existing facilities are unavailable, or unable to support the number of event attendees.
Construction process
Construction is offsite, using lean manufacturing techniques to prefabricate single or multi-story buildings in deliverable module sections. Often, modules are based around standard 20 foot containers, using the same dimensions, structures, building and stacking/placing techniques, but with smooth (instead of corrugated) walls, glossy white paint, and provisions for windows, power, potable water, sewage lines, telecommunications and air conditioning. Permanent Modular Construction (PMC) buildings are manufactured in a controlled setting and can be constructed of wood, steel, or concrete. Modular components are typically constructed indoors on assembly lines. Modules' construction may take as little as ten days but more often one to three months. PMC modules can be integrated into site built projects or stand alone and can be delivered with MEP, fixtures and interior finishes.
The buildings are 60% to 90% completed offsite in a factory-controlled environment, and transported and assembled at the final building site. This can comprise the entire building or be components or subassemblies of larger structures. In many cases, modular contractors work with traditional general contractors to exploit the resources and advantages of each type of construction. Completed modules are transported to the building site and assembled by a crane. Placement of the modules may take from several hours to several days. Off-site construction running in parallel to site preparation, providing a shorter time to project completion, is one of the common selling points of modular construction.
Permanent modular buildings are built to meet or exceed the same building codes and standards as site-built structures and the same architect-specified materials used in conventionally constructed buildings are used in modular construction projects. PMC can have as many stories as building codes allow. Unlike relocatable buildings, PMC structures are intended to remain in one location for the duration of their useful life.
Manufacturing considerations
The entire process of modular construction places significance on the design stage. This is where practices such as Design for Manufacture and Assembly (DfMA) are used to ensure that assembly tolerances are controlled throughout manufacture and assembly on site. It is vital that there is enough allowance in the design to allow the assembly to take up any "slack" or misalignment of components. The use of advanced CAD systems, 3D printing and manufacturing control systems are important for modular construction to be successful. This is quite unlike on-site construction where the tradesman can often make the part to suit any particular installation.
Upfront production investment
The development of factory facilities for modular homes requires significant upfront investment. To help address housing shortages in the 2010s, the United Kingdom Government (via Homes England) invested in modular housing initiatives. Several UK companies (for example, Ilke Homes, L&G Modular Homes, House by Urban Splash, Modulous, TopHat and Lighthouse) were established to develop modular homes as an alternative to traditionally-built residences, but failed as they could not book revenues quickly enough to cover the costs of establishing manufacturing facilities.
Ilke Homes opened a factory in Knaresborough, Yorkshire in 2018; Homes England invested £30m in November 2019 and a further £30m in September 2021. Despite a further fund-raising round, raising £100m in December 2022, Ilke Homes went into administration on 30 June 2023, with most of the company's 1,150 staff made redundant, and debts of £320m, including £68m owed to Homes England.
In 2015 Legal & General launched a modular homes operation, L&G Modular Homes, opening a 550,000 sq ft factory in Sherburn-in-Elmet, near Selby in Yorkshire. The company incurred large losses as it invested in its factory before earning any revenues; by 2019, it had lost over £100m. Sales revenues from a Selby project, plus schemes in Kent and West Sussex, started to flow in 2022, by which time the business's total losses had grown to £174m. Production was halted in May 2023, with L&G blaming local planning delays and the COVID-19 pandemic for its failure to grow its sales pipeline. The enterprise incurred total losses over seven years of £295m.
Market acceptance
Some home buyers and some lending institutions resist consideration of modular homes as equivalent in value to site-built homes. While the homes themselves may be of equivalent quality, entrenched zoning regulations and psychological marketplace factors may create hurdles for buyers or builders of modular homes and should be considered as part of the decision-making process when exploring this type of home as a living and/or investment option. In the UK and Australia, modular homes have become accepted in some regional areas; however, they are not commonly built in major cities. Modular homes are becoming increasingly common in Japanese urban areas, due to improvements in design and quality, speed and compactness of onsite assembly, as well as due to lowering costs and ease of repair after earthquakes. Recent innovations allow modular buildings to be indistinguishable from site-built structures. Surveys have shown that individuals can rarely tell the difference between a modular home and a site-built home.
Modular homes vs. mobile homes
Differences include the building codes that govern the construction, types of material used and how they are appraised by banks for lending purposes. Modular homes are built to either local or state building codes as opposed to manufactured homes, which are also built in a factory but are governed by a federal building code. The codes that govern the construction of modular homes are exactly the same codes that govern the construction of site-constructed homes. In the United States, all modular homes are constructed according to the International Building Code (IBC), IRC, BOCA or the code that has been adopted by the local jurisdiction. In some states, such as California, mobile homes must still be registered yearly, like vehicles or standard trailers, with the Department of Motor Vehicles or other state agency. This is true even if the owners remove the axles and place it on a permanent foundation.
Recognizing a mobile or manufactured home
A mobile home should have a small metal tag on the outside of each section. If a tag cannot be located, details about the home can be found in the electrical panel box. This tag should also reveal a manufacturing date. Modular homes do not have metal tags on the outside but will have a dataplate installed inside the home, usually under the kitchen sink or in a closet. The dataplate will provide information such as the manufacturer, third party inspection agency, appliance information, and manufacture date.
Materials
The materials used in modular buildings are of the same quality and durability as those used in traditional construction, preserving characteristics such as acoustic insulation and energy efficiency, as well as allowing for attractive and innovative designs thanks to their versatility. Most commonly used are steel, wood and concrete.
Steel: Because it is easily moldable, it allows for innovation in design and aesthetics.
Wood: Wood is an essential part of most modular buildings. Thanks to its lightness, it facilitates the work of assembling and moving the prefabricated modules.
Concrete: Concrete offers a solid structure that is ideal for the structural reinforcement of permanent modular buildings. It is increasingly being used as a base material in this type of building, thanks to its various characteristics such as fire resistance, energy savings, greater acoustic insulation, and durability.
Wood-frame floors, walls and roof are often utilized. Some modular homes include brick or stone exteriors, granite counters and steeply pitched roofs. Modulars can be designed to sit on a perimeter foundation or basement. In contrast, mobile homes are constructed with a steel chassis that is integral to the integrity of the floor system. Modular buildings can be custom built to a client's specifications. Current designs include multi-story units, multi-family units and entire apartment complexes. The negative stereotype commonly associated with mobile homes has prompted some manufacturers to start using the term "off-site construction."
New modular offerings include other construction methods such as cross-laminated timber frames.
Financing
Mobile homes often require special lenders. Modular homes, on the other hand, are financed as site-built homes with a construction loan.
Standards and zoning considerations
Typically, modular dwellings are built to local, state or council code, resulting in dwellings from a given manufacturing facility having differing construction standards depending on the final destination of the modules. The most important zones that manufacturers have to take into consideration are local wind, heat, and snow load zones. For example, homes built for final assembly in a hurricane-prone, earthquake or flooding area may include additional bracing to meet local building codes. Steel and/or wood framing are common options for building a modular home.
Some US courts have ruled that zoning restrictions applicable to mobile homes do not apply to modular homes since modular homes are designed to have a permanent foundation. Additionally, in the US, valuation differences between modular homes and site-built homes are often negligible in real estate appraisal practice; modular homes can, in some market areas, (depending on local appraisal practices per Uniform Standards of Professional Appraisal Practice) be evaluated the same way as site-built dwellings of similar quality. In Australia, manufactured home parks are governed by additional legislation that does not apply to permanent modular homes. Possible developments in equivalence between modular and site-built housing types for the purposes of real estate appraisals, financing and zoning may increase the sales of modular homes over time.
CLASP (Consortium of Local Authorities Special Programme)
The Consortium of Local Authorities Special Programme (abbreviated and more commonly referred to as CLASP) was formed in England in 1957 to combine the resources of local authorities with the purpose of developing a prefabricated school building programme. Initially developed by Charles Herbert Aslin, the county architect for Hertfordshire, the system was used as a model for several other counties, most notably Nottinghamshire and Derbyshire. CLASP's popularity in these coal mining areas was in part because the system permitted fairly straightforward replacement of subsidence-damaged sections of building.
Building strength
Modular homes are designed to be stronger than traditional homes by, for example, replacing nails with screws, adding glue to joints, and using 8–10% more lumber than conventional housing. This is to help the modules maintain their structural integrity as they are transported on trucks to the construction site. However, there are few studies on the response of modular buildings to transport and handling stresses. It is therefore presently difficult to predict transport induced damage.
When FEMA studied the destruction wrought by Hurricane Andrew in Dade County Florida, they concluded that modular and masonry homes fared best compared to other construction.
CE marking
The CE mark is a construction standard that guarantees to the user the mechanical resistance and strength of the structure. It is a label granted by authorities empowered by the European Community, certifying end-to-end process control and traceability.
All manufacturing operations are being monitored and recorded:
Suppliers have to be known and certified,
Raw materials and goods being sourced are to be recorded by batch used,
Elementary products are recorded and their quality is monitored,
Assembly quality is managed and assessed on a step by step basis,
When a modular unit is finished, a whole set of tests is performed; if quality standards are met, a unique number and EC stamp are attached to the unit.
This ID and all the details are recorded in a database. At any time, the producer must be able to provide all the information from each step of the production of a single unit. The EC certification guarantees standards of durability and resistance against wind and earthquakes.
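As a rough illustration of the record-keeping such traceability implies, the sketch below models a per-unit production record in Python. The UnitRecord and TraceabilityLog names and all field names are hypothetical, invented for this example; they do not follow any official CE documentation format.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical traceability record for one modular unit; field names are
# illustrative only and do not follow any official CE documentation format.
@dataclass
class UnitRecord:
    unit_id: str                          # unique number stamped on the finished unit
    supplier_batches: Dict[str, str]      # raw material -> certified supplier batch
    assembly_checks: List[str]            # step-by-step quality assessments
    final_tests_passed: bool = False      # set once the finished-unit test set is met

class TraceabilityLog:
    """Minimal in-memory database of unit records (illustrative only)."""
    def __init__(self) -> None:
        self._records: Dict[str, UnitRecord] = {}

    def register(self, record: UnitRecord) -> None:
        self._records[record.unit_id] = record

    def lookup(self, unit_id: str) -> UnitRecord:
        # The producer must be able to recall every production step for a unit.
        return self._records[unit_id]

log = TraceabilityLog()
log.register(UnitRecord(
    unit_id="EC-2024-0001",
    supplier_batches={"steel frame": "batch-S17", "insulation": "batch-I03"},
    assembly_checks=["frame welds inspected", "envelope sealed", "services tested"],
    final_tests_passed=True,
))
print(log.lookup("EC-2024-0001").final_tests_passed)
```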
Open modular building
The term modularity can be perceived in different ways. It can even be extended to building P2P (peer-to-peer) applications, where P2P technology is tailored with the aid of a modular paradigm. Here, well-understood components with clean interfaces can be combined to implement arbitrarily complex functions, in the hope of further proliferating self-organising P2P technology. Open modular buildings are an excellent example of this. Modular building can also be open source and green.
Bauwens, Kostakis and Pazaitis elaborate on this kind of modularity. They link modularity to the construction of houses.
This commons-based activity is geared towards modularity. The construction of modular buildings enables a community to share designs and tools related to all the different parts of house construction, a socially oriented endeavour that deals with the external architecture of buildings and the internal dynamics of open-source commons. People are thus provided with the tools to reconfigure the public sphere in the area where they live, especially in urban environments. There is a robust socializing element that is reminiscent of pre-industrial vernacular architecture and community-based building.
Some organisations already provide modular housing. Such organisations are relevant as they allow for the online sharing of construction plans and tools. These plans can then be assembled, through either digital fabrication such as 3D printing or the sourcing of low-cost materials from local communities. Because these low-cost materials (for example, plywood) are easy to work with, they can help open buildings reach areas or communities that lack the know-how or resources of conventional architectural or construction firms. This allows for a fundamentally more standardised way of constructing houses and buildings. The overarching idea remains key: to allow easy access to user-friendly layouts which anyone can use to build in a more sustainable and affordable way.
Modularity in this sense is building a house from different standardised parts, like solving a jigsaw puzzle.
3D printing can be used to build the house.
The main standard is OpenStructures and its derivative Autarkytecture.
Research and development
Modular construction is the subject of continued research and development worldwide as the technology is applied to taller and taller buildings. Research and development is carried out by modular building companies and also research institutes such as the Modular Building Institute and the Steel Construction Institute.
See also
Affordable housing
Alternative housing
Commercial modular construction
Construction 3D printing
Container home
Kit house
MAN steel house
Manufactured housing
Modern methods of construction
Modular design
Portable building
Prefabrication
Open-source architecture
Open source hardware
OpenStructures
Prefabricated home
Relocatable buildings
Recreational vehicles
Shipping container architecture
Stick-built home
Tiny house movement
Toter
References
34 - "Volumetric modular construction trend gaining groun d". https://www.aa.com.tr/en/corporate-news/volumetric-modular-construction-trend-gaining-ground/2357158 06.09.2021
Building engineering
Prefabricated buildings
Building
Sustainable building
Buildings and structures by type
Building
Buildings and structures | Modular building | [
"Engineering"
] | 3,286 | [
"Sustainable building",
"Buildings and structures by type",
"Building",
"Building engineering",
"Construction",
"Civil engineering",
"Buildings and structures",
"Prefabricated buildings",
"Architecture"
] |
625,114 | https://en.wikipedia.org/wiki/Forge%20welding | Forge welding (FOW), also called fire welding, is a solid-state welding process that joins two pieces of metal by heating them to a high temperature and then hammering them together. It may also consist of heating and forcing the metals together with presses or other means, creating enough pressure to cause plastic deformation at the weld surfaces. The process, although challenging, has been a method of joining metals used since ancient times and is a staple of traditional blacksmithing. Forge welding is versatile, being able to join a host of similar and dissimilar metals. With the invention of electrical welding and gas welding methods during the Industrial Revolution, manual forge-welding has been largely replaced, although automated forge-welding is a common manufacturing process.
Introduction
Forge welding is a process of joining metals by heating them beyond a certain threshold and forcing them together with enough pressure to cause deformation of the weld surfaces, creating a metallic bond between the atoms of the metals. The pressure required varies, depending on the temperature, strength, and hardness of the alloy. Forge welding is the oldest welding technique, and has been used since ancient times.
Welding processes can generally be grouped into two categories: fusion and diffusion welding. Fusion welding involves localized melting of the metals at the weld interfaces, and is common in electric or gas welding techniques. This requires temperatures much higher than the melting point of the metal in order to cause localized melting before the heat can thermally conduct away from the weld, and often a filler metal is used to keep the weld from segregating due to the high surface tension. Diffusion welding consists of joining the metals without melting them, welding the surfaces together while in the solid state.
In diffusion welding, the heat source is often lower than the melting point of the metal, allowing more even heat-distribution thus reducing thermal stresses at the weld. In this method a filler metal is typically not used, but the weld occurs directly between the metals at the weld interface. This includes methods such as cold welding, explosion welding, and forge welding. Unlike other diffusion methods, in forge welding the metals are heated to a high temperature before forcing them together, usually resulting in greater plasticity at the weld surfaces. This generally makes forge welding more versatile than cold-diffusion techniques, which are usually performed on soft metals like copper or aluminum.
In forge welding, the entire welding areas are heated evenly. Forge welding can be used for a much wider range of harder metals and alloys, like steel and titanium.
History
The history of joining metals goes back to the Bronze Age, where bronzes of different hardness were often joined by casting-in. This method consisted of placing a solid part into a molten metal contained in a mold and allowing it to solidify without actually melting both metals, such as the blade of a sword into a handle or the tang of an arrowhead into the tip. Brazing and soldering were also common during the Bronze Age.
The act of welding (joining two solid parts through diffusion) began with iron. The first welding process was forge welding, which started when humans learned to smelt iron from iron ore; most likely in Anatolia (Turkey) around 1800 BC. Ancient people could not create temperatures high enough to melt iron fully, so the bloomery process that was used for smelting iron produced a lump (bloom) of iron grains sintered together with small amounts of slag and other impurities, referred to as sponge iron because of its porosity.
After smelting, the sponge iron needed to be heated above the welding temperature and hammered, or "wrought." This squeezed out air pockets and melted slag, bringing the iron grains into close contact to form a solid block (billet).
Archeologists have found many items made of wrought iron, dating from before 1000 BC, that show evidence of forge welding. Because iron was typically made in small amounts, any large object, such as the Delhi Pillar, needed to be forge welded out of smaller billets.
Forge welding grew from a trial-and-error method, becoming more refined over the centuries. Due to the poor quality of ancient metals, it was commonly employed in making composite steels, by joining high-carbon steels, which resist deformation but break easily, with low-carbon steels, which resist fracture but bend too easily, creating an object with greater toughness and strength than could be produced with a single alloy. This method of pattern welding first appeared around 700 BC, and was primarily used for making weapons such as swords; the most widely known examples are Damascene, Japanese and Merovingian blades (Hans Berns, The History of Hardening, Härterei Gerster AG, 2013, pp. 48–49). This process was also common in the manufacture of tools, from wrought-iron plows with steel edges to iron chisels with steel cutting surfaces.
Materials
Many metals can be forge welded, with the most common being both high and low-carbon steels. Iron and even some hypoeutectic cast-irons can be forge welded. Some aluminum alloys can also be forge welded. Metals such as copper, bronze and brass do not forge weld readily. Although it is possible to forge weld copper-based alloys, it is often with great difficulty due to copper's tendency to absorb oxygen during the heating. Copper and its alloys are usually better joined with cold welding, explosion welding, or other pressure-welding techniques. With iron or steel, the presence of even small amounts of copper severely reduces the alloy's ability to forge weld.
Titanium alloys are commonly forge welded. Because of titanium's tendency to absorb oxygen when molten, the solid-state, diffusion bond of a forge weld is often stronger than a fusion weld in which the metal is liquefied.
Forge welding between similar materials is caused by solid-state diffusion. This results in a weld that consists of only the welded materials without any fillers or bridging materials. Forge welding between dissimilar materials is caused by the formation of a lower melting temperature eutectic between the materials. Due to this the weld is often stronger than the individual metals.
Processes
The most well-known and oldest forge-welding process is the manual-hammering method. Manual hammering is done by heating the metal to the proper temperature, coating with flux, overlapping the weld surfaces, and then striking the joint repeatedly with a hand-held hammer. The joint is often formed to allow space for the flux to flow out, by beveling or rounding the surfaces slightly, and hammered in a successively outward fashion to squeeze the flux out. The hammer blows are typically not as hard as those used for shaping, preventing the flux from being blasted out of the joint at the first blow.
When mechanical hammers were developed, forge welding could be accomplished by heating the metal, and then placing it between the mechanized hammer and the anvil. Originally powered by waterwheels, modern mechanical-hammers can also be operated by compressed air, electricity, steam, gas engines, and many other ways. Another method is forge welding with a die, whereby the pieces of metal are heated and then forced into a die which both provides the pressure for the weld and keeps the joint at the finished shape. Roll welding is another forge welding process, where the heated metals are overlapped and passed through rollers at high pressures to create the weld.
Modern forge-welding is often automated, using computers, machines, and sophisticated hydraulic presses to produce a variety of products from a number of various alloys. For example, steel pipe is often forge-welded during the manufacturing process. Flat stock is heated and fed through specially-shaped rollers that both form the steel into a tube and simultaneously provide the pressure to weld the edges into a continuous seam.
Diffusion bonding is a common method for forge welding titanium alloys in the aerospace industry. In this process the metal is heated while in a press or die. Beyond a specific critical-temperature, which varies depending on the alloy, the impurities burn out and the surfaces are forced together.
Other methods include flash welding and percussion welding. These are resistance forge-welding techniques where the press or die is electrified, passing high current through the alloy to create the heat for the weld. Shielded active-gas forge-welding is a process of forge welding in an oxygen-reactive environment, to burn out oxides, using hydrogen gas and induction heating.
Temperature
Iron, different steels, and even cast iron can be welded to each other, provided that their carbon contents are close enough that the welding ranges overlap. Pure iron can be welded when nearly white hot, whereas steel with a carbon content of 2.0% can already be welded at an orangish-yellow heat. Common steel, between 0.2 and 0.8% carbon, is typically welded at a bright yellow heat.
A primary requirement for forge welding is that both weld surfaces need to be heated to the same temperature and welded before they cool too much. When steel reaches the proper temperature, it begins to weld very readily, so a thin rod or nail heated to the same temperature will tend to stick at first contact, requiring it to be bent or twisted loose.
Care must be taken to avoid overheating the metal to the point that it gives off sparks from rapid oxidation (burning), or else the weld will be poor and brittle.
Decarburization
When steel is heated to an austenitizing temperature, the carbon begins to diffuse through the iron. The higher the temperature, the greater the rate of diffusion. At such high temperatures, carbon readily combines with oxygen to form carbon dioxide, so the carbon can easily diffuse out of the steel and into the surrounding air. By the end of a blacksmithing job, the steel will have a lower carbon content than it had prior to heating. Therefore, most blacksmithing operations are done as quickly as possible to reduce decarburization, preventing the steel from becoming too soft.
To produce the right amount of hardness in the finished product, the smith generally begins with steel that has a carbon content that is higher than desired. In ancient times, forging often began with steel that had a carbon content much too high for normal use. Most ancient forge-welding began with hypereutectoid steel, containing a carbon content sometimes well above 1.0%. Hypereutectoid steels are typically too brittle to be useful in a finished product, but by the end of forging the steel typically had a high carbon-content ranging from 0.8% (eutectoid tool-steel) to 0.5% (hypoeutectoid spring-steel).
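The claim that carbon diffuses faster at higher temperature can be illustrated with an Arrhenius-type diffusion coefficient, D = D0·exp(−Q/RT). The sketch below is a minimal illustration assuming commonly quoted approximate values for carbon diffusion in austenite (D0 ≈ 2.3×10⁻⁵ m²/s, Q ≈ 148 kJ/mol); these figures are assumptions chosen for the example, not values taken from this article.

```python
import math

# Arrhenius form of the diffusion coefficient: D = D0 * exp(-Q / (R * T)).
# D0 and Q are commonly quoted approximate values for carbon in austenite;
# they are assumptions for illustration, not figures from this article.
R = 8.314          # gas constant, J/(mol*K)
D0 = 2.3e-5        # pre-exponential factor, m^2/s (assumed)
Q = 148_000        # activation energy, J/mol (assumed)

def diffusion_coefficient(temp_c: float) -> float:
    """Return the carbon diffusion coefficient (m^2/s) at a temperature in deg C."""
    temp_k = temp_c + 273.15
    return D0 * math.exp(-Q / (R * temp_k))

# Diffusion speeds up sharply between a dull-red heat and a welding heat.
for t in (800, 1000, 1200):
    print(f"{t} degC -> D = {diffusion_coefficient(t):.2e} m^2/s")
```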
Applications
Forge welding has been used throughout its history for making most any items out of steel and iron. It has been used in everything from the manufacture of tools, farming implements, and cookware to the manufacture of fences, gates, and prison cells. In the early Industrial Revolution, it was commonly used in the manufacture of boilers and pressure vessels, until the introduction of fusion-welding. It was commonly used through the Middle Ages for producing armor and weapons.
One of the most famous applications of forge welding involves the production of pattern-welded blades. During this process a smith repeatedly draws out a billet of steel, folds it back and welds it upon itself. Another application was the manufacture of shotgun barrels. Metal wire was spooled onto a mandrel, and then forged into a barrel that was thin, uniform, and strong. In some cases the forge-welded objects are acid-etched to expose the underlying pattern of metal, which is unique to each item and provides aesthetic appeal.
Despite its diversity, forge welding had many limitations. A primary limitation was the size of objects that could be forge welded. Larger objects required a bigger heat source, and size reduced the ability to manually weld it together before it cooled too much. Welding large items like steel plate or girders was typically not possible, or at least highly impractical, until the invention of fusion welding, requiring them to be riveted instead. In some cases, fusion welding produced a much stronger weld, such as in the construction of boilers.
Flux
Forge welding requires the weld surfaces to be extremely clean or the metal will not join properly, if at all. Oxides tend to form on the surface while impurities like phosphorus and sulfur tend to migrate to the surface. Often a flux is used to keep the welding surfaces from oxidizing, which would produce a poor quality weld, and to extract other impurities from the metal. The flux mixes with the oxides that form and lowers the melting temperature and the viscosity of the oxides. This enables the oxides to flow out of the joint when the two pieces are beaten together. A simple flux can be made from borax, sometimes with the addition of powdered iron-filings.
The oldest flux used for forge welding was fine silica sand. The iron or steel would be heated in a reducing environment within the coals of the forge. Devoid of oxygen, the metal forms a layer of iron-oxide called wüstite on its surface. When the metal is hot enough, but below the welding temperature, the smith sprinkles some sand onto the metal. The silicon in the sand reacts with the wustite to form fayalite, which melts just below the welding temperature. This produced a very effective flux which helped to make a strong weld.
Early examples of flux used different combinations and various amounts of iron filings, borax, sal ammoniac, balsam of copaiba, cyanide of potash, and soda phosphate. The 1920 edition of Scientific American book of facts and formulae indicates a frequently offered trade secret as using copperas, saltpeter, common salt, black oxide of manganese, prussiate of potash, and "nice welding sand" (silicate).
See also
Pattern welding
Friction welding
Friction stud welding
References
Welding | Forge welding | [
"Engineering"
] | 2,883 | [
"Welding",
"Mechanical engineering"
] |
625,226 | https://en.wikipedia.org/wiki/Reversible%20reaction | A reversible reaction is a reaction in which the conversion of reactants to products and the conversion of products to reactants occur simultaneously.
aA + bB ⇌ cC + dD
A and B can react to form C and D or, in the reverse reaction, C and D can react to form A and B. This is distinct from a reversible process in thermodynamics.
Weak acids and bases undergo reversible reactions. For example, carbonic acid:
H2CO3 (l) + H2O(l) ⇌ HCO3−(aq) + H3O+(aq).
The concentrations of reactants and products in an equilibrium mixture are determined by the analytical concentrations of the reagents (A and B or C and D) and the equilibrium constant, K. The magnitude of the equilibrium constant depends on the Gibbs free energy change for the reaction. So, when the free energy change is large (more than about 30 kJ mol−1), the equilibrium constant is large (log K > 3) and the concentrations of the reactants at equilibrium are very small. Such a reaction is sometimes considered to be an irreversible reaction, although small amounts of the reactants are still expected to be present in the reacting system. A truly irreversible chemical reaction is usually achieved when one of the products exits the reacting system, for example, as does carbon dioxide (volatile) in the reaction
CaCO3 + 2HCl → CaCl2 + H2O + CO2↑
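The relation behind the rule of thumb above is ΔG° = −RT·ln K. A minimal numerical check, assuming standard conditions at 298 K, shows that a free energy change of about −30 kJ mol⁻¹ indeed corresponds to log K well above 3:

```python
import math

R = 8.314      # gas constant, J/(mol*K)
T = 298.15     # assumed standard temperature, K

def log10_K(delta_g_kj_per_mol: float) -> float:
    """log10 of the equilibrium constant from delta-G (kJ/mol), via dG = -RT ln K."""
    delta_g = delta_g_kj_per_mol * 1000.0
    ln_K = -delta_g / (R * T)
    return ln_K / math.log(10)

# A free energy change of about -30 kJ/mol gives log K of roughly 5,
# comfortably above 3, so reactant concentrations at equilibrium are tiny.
print(log10_K(-30.0))   # ~5.3
```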
History
The concept of a reversible reaction was introduced by Claude Louis Berthollet in 1803, after he had observed the formation of sodium carbonate crystals at the edge of a salt lake (one of the natron lakes in Egypt, in limestone):
2NaCl + CaCO3 → Na2CO3 + CaCl2
He recognized this as the reverse of the familiar reaction
Na2CO3 + CaCl2→ 2NaCl + CaCO3
Until then, chemical reactions were thought to always proceed in one direction. Berthollet reasoned that the excess of salt in the lake helped push the "reverse" reaction towards the formation of sodium carbonate.
In 1864, Peter Waage and Cato Maximilian Guldberg formulated their law of mass action which quantified Berthollet's observation. Between 1884 and 1888, Le Chatelier and Braun formulated Le Chatelier's principle, which extended the same idea to a more general statement on the effects of factors other than concentration on the position of the equilibrium.
Reaction kinetics
For the reversible reaction A⇌B, the forward step A→B has a rate constant $k_1$ and the backwards step B→A has a rate constant $k_{-1}$. The concentration of A obeys the following differential equation:

$$\frac{d[A]}{dt} = -k_1 [A] + k_{-1} [B] \qquad (1)$$

If we consider that the concentration of product B at any time is equal to the concentration of reactants at time zero minus the concentration of reactants at time $t$, we can set up the following equation:

$$[B]_t = [A]_0 - [A]_t \qquad (2)$$

Combining (1) and (2), we can write

$$\frac{d[A]}{dt} = -k_1 [A] + k_{-1}\left([A]_0 - [A]\right).$$

Separation of variables is possible, and using an initial value $[A]_{t=0} = [A]_0$, we obtain:

$$\int_{[A]_0}^{[A]} \frac{d[A]}{k_{-1}[A]_0 - (k_1 + k_{-1})[A]} = \int_0^t dt$$

and after some algebra we arrive at the final kinetic expression:

$$[A] = \frac{[A]_0}{k_1 + k_{-1}} \left(k_{-1} + k_1\, e^{-(k_1 + k_{-1}) t}\right).$$

The concentrations of A and B at infinite time behave as follows:

$$[A]_\infty = \frac{k_{-1}}{k_1 + k_{-1}}[A]_0, \qquad [B]_\infty = \frac{k_1}{k_1 + k_{-1}}[A]_0$$

Thus, the formula can be linearized in order to determine $k_1 + k_{-1}$:

$$\ln\left([A] - [A]_\infty\right) = \ln\left([A]_0 - [A]_\infty\right) - (k_1 + k_{-1})\, t$$

To find the individual constants $k_1$ and $k_{-1}$, the equilibrium constant is also required:

$$K = \frac{k_1}{k_{-1}} = \frac{[B]_\infty}{[A]_\infty}$$
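A short numerical sketch of these expressions follows, evaluating the closed-form solution and the equilibrium limits. The rate constants k₁ = 2 s⁻¹ and k₋₁ = 0.5 s⁻¹ are arbitrary example values chosen purely for illustration.

```python
import numpy as np

k1, km1 = 2.0, 0.5        # example forward and backward rate constants (1/s), assumed
A0 = 1.0                  # initial concentration of A; B starts at zero
t = np.linspace(0, 5, 200)

# Closed-form solution: [A] = A0/(k1+km1) * (km1 + k1*exp(-(k1+km1)*t))
A = A0 / (k1 + km1) * (km1 + k1 * np.exp(-(k1 + km1) * t))
B = A0 - A

# Equilibrium limits and the equilibrium constant K = k1/km1 = [B]inf/[A]inf
A_inf = km1 / (k1 + km1) * A0
B_inf = k1 / (k1 + km1) * A0
print(A_inf, B_inf, B_inf / A_inf, k1 / km1)   # the last two agree: K = 4.0
```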
See also
Dynamic equilibrium
Chemical equilibrium
Irreversibility
Microscopic reversibility
Static equilibrium
References
Equilibrium chemistry
Physical chemistry | Reversible reaction | [
"Physics",
"Chemistry"
] | 725 | [
"Equilibrium chemistry",
"Physical chemistry",
"Applied and interdisciplinary physics",
"nan"
] |
625,232 | https://en.wikipedia.org/wiki/Skeletal%20formula | The skeletal formula, line-angle formula, bond-line formula or shorthand formula of an organic compound is a type of molecular structural formula that serves as a shorthand representation of a molecule's bonding and some details of its molecular geometry. A skeletal formula shows the skeletal structure or skeleton of a molecule, which is composed of the skeletal atoms that make up the molecule. It is represented in two dimensions, as on a piece of paper. It employs certain conventions to represent carbon and hydrogen atoms, which are the most common in organic chemistry.
An early form of this representation was first developed by organic chemist August Kekulé, while the modern form is closely related to and influenced by the Lewis structure of molecules and their valence electrons. Hence they are sometimes termed Kekulé structures or Lewis–Kekulé structures. Skeletal formulae have become ubiquitous in organic chemistry, partly because they are relatively quick and simple to draw, and also because the curved arrow notation used for discussions of reaction mechanisms and electron delocalization can be readily superimposed.
Several other ways of depicting chemical structures are also commonly used in organic chemistry (though less frequently than skeletal formulae). For example, conformational structures look similar to skeletal formulae and are used to depict the approximate positions of atoms in 3D space, as a perspective drawing. Other types of representation, such as Newman projection, Haworth projection or Fischer projection, also look somewhat similar to skeletal formulae. However, there are slight differences in the conventions used, and the reader needs to be aware of them in order to understand the structural details encoded in the depiction. While skeletal and conformational structures are also used in organometallic and inorganic chemistry, the conventions employed also differ somewhat.
The skeleton
Terminology
The skeletal structure of an organic compound is the series of atoms bonded together that form the essential structure of the compound. The skeleton can consist of chains, branches and/or rings of bonded atoms. Skeletal atoms other than carbon or hydrogen are called heteroatoms.
The skeleton has hydrogen and/or various substituents bonded to its atoms. Hydrogen is the most common non-carbon atom that is bonded to carbon and, for simplicity, is not explicitly drawn. In addition, carbon atoms are not generally labelled as such directly (i.e. with "C"), whereas heteroatoms are always explicitly noted as such ("N" for nitrogen, "O" for oxygen, etc.)
Heteroatoms and other groups of atoms that give rise to relatively high rates of chemical reactivity, or introduce specific and interesting characteristics in the spectra of compounds are called functional groups, as they give the molecule a function. Heteroatoms and functional groups are collectively called "substituents", as they are considered to be a substitute for the hydrogen atom that would be present in the parent hydrocarbon of the organic compound.
Basic structure
As in Lewis structures, covalent bonds are indicated by line segments, with a doubled or tripled line segment indicating double or triple bonding, respectively. Likewise, skeletal formulae indicate formal charges associated with each atom (although lone pairs are usually optional, see below). In fact, skeletal formulae can be thought of as abbreviated Lewis structures that observe the following simplifications:
Carbon atoms are represented by the vertices (intersections or termini) of line segments. For clarity, methyl groups are often explicitly written out as Me or CH3, while (hetero)cumulene carbons are frequently represented by a heavy center dot.
Hydrogen atoms attached to carbon are implied. An unlabeled vertex is understood to represent a carbon attached to the number of hydrogens required to satisfy the octet rule, while a vertex labeled with a formal charge and/or nonbonding electron(s) is understood to have the number of hydrogen atoms required to give the carbon atom these indicated properties. Optionally, acetylenic and formyl hydrogens can be shown explicitly for the sake of clarity.
Hydrogen atoms attached to a heteroatom are shown explicitly. The heteroatom and hydrogen atoms attached thereto are usually shown as a single group (e.g., OH, NH2) without explicitly showing the hydrogen–heteroatom bond. Heteroatoms with simple alkyl or aryl substituents, like methoxy (OMe) or dimethylamino (NMe2), are sometimes shown in the same way, by analogy.
Lone pairs on carbene carbons must be indicated explicitly while lone pairs in other cases are optional and are shown only for emphasis. In contrast, formal charges and unpaired electrons on main-group elements are always explicitly shown.
In the standard depiction of a molecule, the canonical form (resonance structure) with the greatest contribution is drawn. However, the skeletal formula is understood to represent the "real molecule" that is, the weighted average of all contributing canonical forms. Thus, in cases where two or more canonical forms contribute with equal weight (e.g., in benzene, or a carboxylate anion) and one of the canonical forms is selected arbitrarily, the skeletal formula is understood to depict the true structure, containing equivalent bonds of fractional order, even though the delocalized bonds are depicted as nonequivalent single and double bonds.
Contemporary graphical conventions
Since skeletal structures were introduced in the latter half of the 19th century, their appearance has undergone considerable evolution. The graphical conventions in use today date to the 1980s. Thanks to the adoption of the ChemDraw software package as a de facto industry standard (by American Chemical Society, Royal Society of Chemistry, and Gesellschaft Deutscher Chemiker publications, for instance), these conventions have been nearly universal in the chemical literature since the late 1990s. A few minor conventional variations, especially with respect to the use of stereobonds, continue to exist as a result of differing US, UK and European practice, or as a matter of personal preference. As another minor variation between authors, formal charges can be shown with the plus or minus sign in a circle (⊕, ⊖) or without the circle. The set of conventions that are followed by most authors is given below, along with illustrative examples.
Implicit carbon and hydrogen atoms
For example, the skeletal formula of hexane (top) is shown below. The carbon atom labeled C1 appears to have only one bond, so there must also be three hydrogens bonded to it, in order to make its total number of bonds four. The carbon atom labelled C3 has two bonds to other carbons and is therefore bonded to two hydrogen atoms as well. A Lewis structure (middle) and ball-and-stick model (bottom) of the actual molecular structure of hexane, as determined by X-ray crystallography, are shown for comparison.
It does not matter which end of the chain one starts numbering from, as long as consistency is maintained when drawing diagrams. The condensed formula or the IUPAC name will confirm the orientation. Some molecules will become familiar regardless of the orientation.
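The hydrogen-counting convention for a neutral, tetravalent carbon vertex can be stated as a one-line rule: the number of implied hydrogens is four minus the sum of the bond orders drawn at that vertex. The toy sketch below encodes just this rule; it deliberately ignores charges, radicals and heteroatoms, which the conventions above handle explicitly.

```python
def implicit_hydrogens(bond_orders):
    """Hidden H count at a neutral carbon vertex: 4 minus the sum of drawn bond orders.

    This toy rule covers only uncharged carbon with no unpaired electrons;
    charged or radical centres are labelled explicitly in skeletal formulae.
    """
    return 4 - sum(bond_orders)

# Hexane examples from the text: C1 has one single bond, C3 has two.
print(implicit_hydrogens([1]))      # terminal carbon C1 -> 3 hydrogens (CH3)
print(implicit_hydrogens([1, 1]))   # chain carbon C3 -> 2 hydrogens (CH2)
print(implicit_hydrogens([2, 1]))   # a vinylic carbon (=CH-) -> 1 hydrogen
```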
Explicit heteroatoms and hydrogen atoms
All atoms that are not carbon or hydrogen are signified by their chemical symbol, for instance Cl for chlorine, O for oxygen, Na for sodium, and so forth. In the context of organic chemistry, these atoms are commonly known as heteroatoms (the prefix hetero- comes from Greek ἕτερος héteros, meaning "other").
Any hydrogen atoms bonded to heteroatoms are drawn explicitly. In ethanol, C2H5OH, for instance, the hydrogen atom bonded to oxygen is denoted by the symbol H, whereas the hydrogen atoms which are bonded to carbon atoms are not shown directly.
Lines representing heteroatom-hydrogen bonds are usually omitted for clarity and compactness, so a functional group like the hydroxyl group is most often written −OH instead of −O−H. These bonds are sometimes drawn out in full in order to accentuate their presence when they participate in reaction mechanisms.
Shown below for comparison are a skeletal formula (top), its Lewis structure (middle) and its ball-and-stick model (bottom) of the actual 3D structure of the ethanol molecule in the gas phase, as determined by microwave spectroscopy.
Pseudoelement symbols
There are also symbols that appear to be chemical element symbols, but represent certain very common substituents or indicate an unspecified member of a group of elements. These are called pseudoelement symbols or organic elements and are treated like univalent "elements" in skeletal formulae. A list of common pseudoelement symbols:
General symbols
X for any (pseudo)halogen atom (in the related MLXZ notation, X represents a one-electron donor ligand)
L or Ln for a ligand or ligands (in the related MLXZ notation, L represents a two-electron donor ligand)
M or Met for any metal atom ([M] is used to indicate a ligated metal, MLn, when the identities of the ligands are unknown or irrelevant)
E or El for any electrophile (in some contexts, E is also used to indicate any p-block element)
Nu for any nucleophile
Z for conjugating electron-withdrawing groups (in the related MLXZ notation, Z represents a zero-electron donor ligand; in unrelated usage, Z is also an abbreviation for the carboxybenzyl group.)
D for deuterium (2H)
T for tritium (3H)
Alkyl groups
R for any alkyl group or even any organyl group (Alk can be used to unambiguously indicate an alkyl group)
Me for the methyl group
Et for the ethyl group
Pr, n-Pr, or nPr for the (normal) propyl group (Pr is also the symbol for the element praseodymium. However, since the propyl group is monovalent, while praseodymium is nearly always trivalent, ambiguity rarely, if ever, arises in practice.)
i-Pr or iPr for the isopropyl group
All for the allyl group (uncommon)
Bu, n-Bu or nBu for the (normal) butyl group
i-Bu or iBu (i often italicized) for the isobutyl group
s-Bu or sBu for the secondary butyl group
t-Bu or tBu for the tertiary butyl group
Pn for the pentyl group (or Am for the synonymous amyl group, although Am is also the symbol for americium.)
Np or Neo for the neopentyl group (Warning: Organometallic chemists often use Np for the related neophyl group, PhMe2C–. Np is also the symbol for the element neptunium.)
Cy or Chx for the cyclohexyl group
Ad for the 1-adamantyl group
Tr or Trt for the trityl group
Aromatic and unsaturated substituents
Ar for any aromatic substituent (Ar is also the symbol for the element argon. However, argon is inert under all usual conditions encountered in organic chemistry, so the use of Ar to represent an aryl substituent never causes confusion.)
Het for any heteroaromatic substituent
Bn or Bzl for the benzyl group (not to be confused with Bz for benzoyl group; However, old literature may use Bz for benzyl group.)
Dipp for the 2,6-diisopropylphenyl group
Mes for the mesityl group
Ph, Φ, or φ for the phenyl group (the use of phi for phenyl has been in decline)
Tol for the tolyl group, usually the para isomer
Is or Tipp for the 2,4,6-triisopropylphenyl group (the former symbol is derived from the synonym isityl)
An for the anisyl group, usually the para isomer (An is also the symbol for a generic actinoid element. However, since the anisyl group is monovalent, while the actinides are usually divalent, trivalent, or even higher valency, ambiguity rarely, if ever, arises in practice.)
Cp for the cyclopentadienyl group (Cp was the symbol for cassiopeium, a former name for lutetium)
Cp* for the pentamethylcyclopentadienyl group
Vi for the vinyl group (uncommon)
Functional groups
Ac for the acetyl group (Ac is also the symbol for the element actinium. However, actinium is almost never encountered in organic chemistry, so the use of Ac to represent the acetyl group never causes confusion);
Bz for the benzoyl group; OBz is the benzoate group
Piv for the pivalyl (t-butylcarbonyl) group; OPiv is the pivalate group
Bt for the 1-benzotriazolyl group
Im for the 1-imidazolyl group
NPhth for the phthalimide-1-yl group
Sulfonyl/sulfonate groups
Sulfonate esters are often leaving groups in nucleophilic substitution reactions. See the articles on sulfonyl and sulfonate groups for further information.
Bs for the brosyl (p-bromobenzenesulfonyl) group; OBs is the brosylate group
Ms for the mesyl (methanesulfonyl) group; OMs is the mesylate group
Ns for the nosyl (p-nitrobenzenesulfonyl) group (Ns was the chemical symbol for nielsbohrium, but that was renamed bohrium, Bh); ONs is the nosylate group
Tf for the triflyl (trifluoromethanesulfonyl) group; OTf is the triflate group
Nf for the nonaflyl (nonafluorobutanesulfonyl) group; ONf is the nonaflate group
Ts for tosyl (p-toluenesulfonyl) group (Ts is also the symbol for the element tennessine. However, tennessine is too unstable to ever be encountered in organic chemistry, so the use of Ts to represent tosyl never causes confusion); OTs is the tosylate group
Protecting groups
A protecting group or protective group is introduced into a molecule by chemical modification of a functional group to obtain chemoselectivity in a subsequent chemical reaction, facilitating multistep organic synthesis.
Boc for the t-butoxycarbonyl group
Cbz or Z for the carboxybenzyl group
Fmoc for the fluorenylmethoxycarbonyl group
Alloc for the allyloxycarbonyl group
Troc for the trichloroethoxycarbonyl group
TMS, TBDMS, TES, TBDPS, TIPS, ... for various silyl ether groups
PMB for the 4-methoxybenzyl group
MOM for the methoxymethyl group
THP for the 2-tetrahydropyranyl group
Multiple bonds
Two atoms can be bonded by sharing more than one pair of electrons. The common bonds to carbon are single, double and triple bonds. Single bonds are most common and are represented by a single, solid line between two atoms in a skeletal formula. Double bonds are denoted by two parallel lines, and triple bonds are shown by three parallel lines.
In more advanced theories of bonding, non-integer values of bond order exist. In these cases, a combination of solid and dashed lines indicate the integer and non-integer parts of the bond order, respectively.
Benzene rings
In recent years, benzene is generally depicted as a hexagon with alternating single and double bonds, much like the structure Kekulé originally proposed in 1872. As mentioned above, the alternating single and double bonds of "1,3,5-cyclohexatriene" are understood to be a drawing of one of the two equivalent canonical forms of benzene (the one explicitly shown and the one with the opposite pattern of formal single and double bonds), in which all carbon–carbon bonds are of equivalent length and have a bond order of exactly 1.5. For aryl rings in general, the two analogous canonical forms are almost always the primary contributors to the structure, but they are nonequivalent, so one structure may make a slightly greater contribution than the other, and bond orders may differ somewhat from 1.5.
An alternate representation that emphasizes this delocalization uses a circle, drawn inside the hexagon of single bonds, to represent the delocalized pi orbital. This style, based on one proposed by Johannes Thiele, used to be very common in introductory organic chemistry textbooks and is still frequently used in informal settings. However, because this depiction does not keep track of electron pairs and is unable to show the precise movement of electrons, it has largely been superseded by the Kekuléan depiction in pedagogical and formal academic contexts.
Stereochemistry
Stereochemistry is conveniently denoted in skeletal formulae:
The relevant chemical bonds can be depicted in several ways:
Solid lines represent bonds in the plane of the paper or screen.
Solid wedges represent bonds that point out of the plane of the paper or screen, towards the observer.
Hashed wedges or dashed lines (thick or thin) represent bonds that point into the plane of the paper or screen, away from the observer.
Wavy lines represent either unknown stereochemistry or a mixture of the two possible stereoisomers at that point.
An obsolescent depiction of hydrogen stereochemistry that used to be common in steroid chemistry is the use of a filled circle centered on a vertex (sometimes called H-dot/H-dash/H-circle, respectively) for an upward pointing hydrogen atom and two hash marks next to vertex or a hollow circle for a downward pointing hydrogen atom.
An early use of this notation can be traced back to Richard Kuhn who in 1932 used solid thick lines and dotted lines in a publication. The modern solid and hashed wedges were introduced in the 1940s by Giulio Natta to represent the structure of high polymers, and extensively popularised in the 1959 textbook Organic Chemistry by Donald J. Cram and George S. Hammond.
Skeletal formulae can depict cis and trans isomers of alkenes. Wavy single bonds are the standard way to represent unknown or unspecified stereochemistry or a mixture of isomers (as with tetrahedral stereocenters). A crossed double-bond has been used sometimes; it is no longer considered an acceptable style for general use but may still be required by computer software.
Hydrogen bonds
Hydrogen bonds are generally denoted by dotted or dashed lines. In other contexts, dashed lines may also represent partially formed or broken bonds in a transition state.
Notes
References
External links
Drawing organic molecules from chemguide.co.uk
Organic chemistry
Chemical formulas
Chemical structures | Skeletal formula | [
"Chemistry"
] | 3,938 | [
"Chemical formulas",
"Chemical structures",
"nan"
] |
625,251 | https://en.wikipedia.org/wiki/Endodermis | The endodermis is the innermost layer of cortex in land plants. It is a cylinder of compact living cells, the radial walls of which are impregnated with hydrophobic substances (Casparian strip) to restrict apoplastic flow of water to the inside. The endodermis is the boundary between the cortex and the stele.
In many seedless plants, such as ferns, the endodermis is a distinct layer of cells immediately outside the vascular cylinder (stele) in roots and shoots. In most seed plants, especially woody types, the endodermis is present in roots but not in stems.
The endodermis helps regulate the movement of water, ions and hormones into and out of the vascular system. It may also store starch, be involved in perception of gravity and protect the plant against toxins moving into the vascular system.
Structure
The endodermis is developmentally the innermost portion of the cortex. It may consist of a single layer of barrel-shaped cells without any intercellular spaces, or sometimes of several cell layers. The cells of the endodermis typically have their primary cell walls thickened with suberin on four sides (the radial and transverse walls); suberin is a water-impermeable waxy substance which in young endodermal cells is deposited in distinctive bands called Casparian strips. These strips vary in width but are typically smaller than the cell wall on which they are deposited. If the endodermis is likened to a brick cylinder (e.g. a smokestack), with the bricks representing individual cells, the Casparian strips are analogous to the mortar between the bricks. In older endodermal cells, suberin may be more extensively deposited on all cell wall surfaces and the cells can become lignified, forming a complete waterproof layer.
Some plants have a large number of amyloplasts (starch containing organelles) in their endodermal cells, in which case the endodermis may be called a starch sheath.
Endodermis is often made visible with stains like phloroglucinol due to the phenolic and lipid nature of the Casparian strips or by the abundance of amyloplasts.
Function
The endodermis prevents water, and any solutes dissolved in the water, from passing through this layer via the apoplast pathway. Water can only pass through the endodermis by crossing the membrane of endodermal cells twice (once to enter and a second time to exit). Water moving into or out of the xylem, which is part of the apoplast, can thereby be regulated since it must enter the symplast in the endodermis. This allows the plant to control to some degree the movement of water and to selectively uptake or prevent the passage of ions or other molecules.
The endodermis does not allow gas bubbles to enter the xylem and helps prevent embolisms from occurring in the water column.
Passage cells are endodermal cells of older roots which have retained thin walls and Casparian strips rather than becoming suberized and waterproof like the other cells around them, to continue to allow some symplastic flow to the inside. Experimental evidence suggests that passage cells function to allow transfer of solutes such as calcium and magnesium into the stele, in order to eventually reach the transpiration system. For the most part, however, old roots seal themselves off at the endodermis, and only serve as a passageway for water and minerals taken up by younger roots "downstream".
Endodermal cells may contain starch granules in the form of amyloplasts. These may serve as food storage, and have been shown to be involved in gravitropism in some plants.
See also
Suberin
Exodermis
Epidermis
References
Gifford, Ernest M. & Foster, Adriance S. (1988). Morphology and Evolution of Vascular Plants (3rd ed.). New York: W. H. Freeman and Company.
Plant morphology | Endodermis | [
"Biology"
] | 827 | [
"Plant morphology",
"Plants"
] |
625,318 | https://en.wikipedia.org/wiki/Photorefractive%20effect | The photorefractive effect is a nonlinear optical effect seen in certain crystals and other materials that respond to light by altering their refractive index.
The effect can be used to store temporary, erasable holograms and is useful for holographic data storage.
It can also be used to create a phase-conjugate mirror or an optical spatial soliton.
Mechanism
The photorefractive effect occurs in several stages:
A photorefractive material is illuminated by coherent beams of light. (In holography, these would be the signal and reference beams). Interference between the beams results in a pattern of dark and light fringes throughout the crystal.
In regions where a bright fringe is present, electrons can absorb the light and be photoexcited from an impurity level into the conduction band of the material, leaving an electron hole (a net positive charge). Impurity levels have an energy intermediate between the energies of the valence band and conduction band of the material.
Once in the conduction band, the electrons are free to move and diffuse throughout the crystal. Since the electrons are being excited preferentially in the bright fringes, the net electron diffusion current is towards the dark-fringe regions of the material.
While in the conduction band, the electrons may with some probability recombine with the holes and return to the impurity levels. The rate at which this recombination takes place determines how far the electrons diffuse, and thus the overall strength of the photorefractive effect in that material. Once back in the impurity level, the electrons are trapped and can no longer move unless re-excited back into the conduction band (by light).
With the net redistribution of electrons into the dark regions of the material, leaving holes in the bright areas, the resulting charge distribution causes an electric field, known as a space charge field, to be set up in the crystal. Since the electrons and holes are trapped and immobile, the space charge field persists even when the illuminating beams are removed.
The internal space charge field, via the electro–optic effect, causes the refractive index of the crystal to change in the regions where the field is strongest. This causes a spatially varying refractive index grating to occur throughout the crystal. The pattern of the grating that is formed follows the light interference pattern originally imposed on the crystal.
The refractive index grating can now diffract light shone into the crystal, with the resulting diffraction pattern recreating the original pattern of light stored in the crystal.
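A minimal numerical sketch of steps 1, 5 and 6 above, assuming a one-dimensional sinusoidal fringe pattern, a purely diffusion-driven space-charge field of amplitude m·(k_BT/e)·K shifted a quarter period from the fringes, and the linear electro-optic relation Δn = −½·n³·r·E. All material parameters are illustrative assumptions, not measured values for any particular crystal.

```python
import numpy as np

# Illustrative material/beam parameters (assumed, not measured values)
n0 = 2.2              # background refractive index
r_eff = 30e-12        # effective electro-optic coefficient, m/V
fringe_spacing = 1e-6 # fringe spacing Lambda, m
m = 0.8               # fringe modulation depth
kT_over_e = 0.0259    # thermal voltage at room temperature, V

K = 2 * np.pi / fringe_spacing          # grating wave vector, 1/m
x = np.linspace(0, 2 * fringe_spacing, 400)

# Step 1: sinusoidal interference pattern of the two writing beams
intensity = 1.0 + m * np.cos(K * x)

# Step 5: diffusion-dominated space-charge field, shifted 90 deg from the fringes
E_sc = m * kT_over_e * K * np.sin(K * x)      # V/m

# Step 6: refractive-index grating via the linear electro-optic effect
delta_n = -0.5 * n0**3 * r_eff * E_sc

print(f"peak space-charge field: {E_sc.max():.3e} V/m")
print(f"peak index modulation:   {abs(delta_n).max():.3e}")
```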
Application
The photorefractive effect can be used for dynamic holography, and, in particular, for cleaning of coherent beams.
For example, in the case of a hologram, illuminating the grating with just the reference beam causes the reconstruction of the original signal beam. When two coherent laser beams (usually obtained by splitting a laser beam by the use of a beamsplitter into two, and then suitably redirecting by mirrors) cross inside a photorefractive crystal, the resultant refractive index grating diffracts the laser beams. As a result, one beam gains energy and becomes more intense at the expense of light intensity reduction of the other. This phenomenon is an example of two-wave mixing. In this configuration, Bragg diffraction condition is automatically satisfied.
The pattern stored inside the crystal persists until the pattern is erased; this can be done by flooding the crystal with uniform illumination which will excite the electrons back into the conduction band and allow them to be distributed more uniformly.
Photorefractive materials include barium titanate (BaTiO3), lithium niobate (LiNbO3), vanadium doped zinc telluride (ZnTe:V), organic photorefractive materials, certain photopolymers, and some multiple quantum well structures.
References
Optical materials
Nonlinear optics
Holography | Photorefractive effect | [
"Physics"
] | 802 | [
"Materials",
"Optical materials",
"Matter"
] |
625,338 | https://en.wikipedia.org/wiki/3494%20Purple%20Mountain | 3494 Purple Mountain, provisional designation , is a bright Vestian asteroid and a formerly lost minor planet from the inner regions of the asteroid belt, approximately in diameter. First observed in 1962, it was officially discovered on 7 December 1980, by Chinese astronomers at the Purple Mountain Observatory in Nanking, China, and later named in honor of the discovering observatory. The V-type asteroid has a rotation period of 5.9 hours.
Orbit and classification
Purple Mountain is a core member of the Vesta family (), a giant asteroid family of typically bright V-type asteroids. Vestian asteroids have a composition akin to cumulate eucrites (HED meteorites) and are thought to have originated deep within 4 Vesta's crust, possibly from the Rheasilvia crater, a large impact crater on its southern hemisphere near the South pole, formed as a result of a subcatastrophic collision. Vesta is the main belt's second-largest and second-most-massive body after . Based on osculating Keplerian orbital elements, the asteroid has also been classified as a member of the Flora family (), a giant asteroid family and the largest family of stony asteroids in the main-belt.
Purple Mountain orbits the Sun in the inner asteroid belt at a distance of 2.0–2.7 AU once every 3 years and 7 months (1,315 days; semi-major axis of 2.35 AU). Its orbit has an eccentricity of 0.13 and an inclination of 6° with respect to the ecliptic. The body's observation arc begins with a precovery taken at Palomar Observatory in December 1951, or 29 years prior to its official discovery observation.
Lost asteroid
Purple Mountain has been a lost minor planet. In November 1962, Purple Mountain was observed as at Goethe Link Observatory. A total of three additional observations were taken at Crimea–Nauchnij in 1969 and 1972, when it was designated as and , respectively, but was subsequently lost with no follow-up observations until its official discovery at Nanking in 1980.
Physical characteristics
Based on the Moving Object Catalog (MOC) of the Sloan Digital Sky Survey, Purple Mountain is a common, stony S-type asteroid, with a sequential best-type taxonomy of SV. The Collaborative Asteroid Lightcurve Link (CALL) also assumes it to be a stony S-type.
In the SMASS-I classification by Xu, the asteroid is a V-type. This agrees with its measured high albedo (see below) often seen among the core members of the Vesta family. In 2013, a spectroscopic analysis showed it to have a composition very similar to the cumulate eucrite meteorites, which also suggests that the basaltic asteroid has originated from the crust of 4 Vesta.
Rotation period
In June 2015, a rotational lightcurve of Purple Mountain was obtained from photometric observations by astronomers at Texas A&M University, using the SARA-telescopes of the Southeastern Association for Research and Astronomy consortium. The 0.9-meter SARA-North telescope is located at Kitt Peak National Observatory, Arizona, while the 0.6-meter SARA-South telescope is hosted at the Cerro Tololo Inter-American Observatory in Chile. Lightcurve analysis gave a rotation period of 5.857 hours with a brightness variation of 0.32 magnitude (). One month later, in July 2015, another period of 2.928 hours and an amplitude of 0.40 magnitude was measured at MIT's George R. Wallace Jr. Observatory (). The results are in good agreement, apart from the fact that the latter is an alternative, monomodal solution with half the period of the former. CALL adopts the longer, bimodal period solution as the better result in its Lightcurve Data Base, due to the lightcurve's distinct amplitude and the small phase angle of the first observation.
Diameter and albedo
According to the survey carried out by the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Purple Mountain measures 6.507 kilometers in diameter and its surface has an albedo of 0.347, while CALL assumes an albedo of 0.24 – derived from the body's classification into the Flora family – and consequently calculates a larger diameter of 7.82 kilometers based on an absolute magnitude of 12.7.
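The quoted diameters, albedos and absolute magnitude are mutually consistent with the standard photometric conversion D(km) ≈ 1329/√p_V × 10^(−H/5). The short Python sketch below is purely illustrative (it is not part of any cited survey pipeline) and reproduces the two figures given above.

import math

# Standard conversion between absolute magnitude H, geometric albedo p_V and
# diameter D in kilometers: D = 1329 / sqrt(p_V) * 10**(-H / 5).
def diameter_km(H, albedo):
    return 1329 / math.sqrt(albedo) * 10 ** (-H / 5)

print(round(diameter_km(12.7, 0.24), 2))   # 7.82 km, using CALL's assumed albedo
print(round(diameter_km(12.7, 0.347), 2))  # about 6.51 km, close to NEOWISE's 6.507 km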
Naming
This minor planet was named in honor of the Purple Mountain Observatory (PMO), an astronomical observatory located in Nanking (Nanjing), China. Built in 1934, the observatory is known for its astrometric observations and for its numerous discoveries of small Solar System bodies. It has played an important role in developing modern Chinese astronomy. The official naming citation was published by the Minor Planet Center on 29 November 1993 ().
References
External links
Asteroid Lightcurve Database (LCDB), query form (info )
Dictionary of Minor Planet Names, Google books
Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center
003494
003494
Named minor planets
003494
19801207
Recovered astronomical objects | 3494 Purple Mountain | [
"Astronomy"
] | 1,052 | [
"Recovered astronomical objects",
"Astronomical objects"
] |
625,341 | https://en.wikipedia.org/wiki/Nanopore | A nanopore is a pore of nanometer size. It may, for example, be created by a pore-forming protein or as a hole in synthetic materials such as silicon or graphene.
When a nanopore is present in an electrically insulating membrane, it can be used as a single-molecule detector. It can be a biological protein channel in a high electrical resistance lipid bilayer, a pore in a solid-state membrane or a hybrid of these – a protein channel set in a synthetic membrane. The detection principle is based on monitoring the ionic current passing through the nanopore as a voltage is applied across the membrane. When the nanopore is of molecular dimensions, passage of molecules (e.g., DNA) causes interruptions of the "open" current level, leading to a "translocation event" signal. The passage of RNA or single-stranded DNA molecules through the membrane-embedded alpha-hemolysin channel (1.5 nm diameter), for example, causes a ~90% blockage of the current (measured in 1 M KCl solution).
It may be considered a Coulter counter for much smaller particles.
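For a rough sense of the currents involved, the open-pore current can be estimated by treating the pore as a cylinder of electrolyte in series with an access resistance. The Python sketch below is an order-of-magnitude illustration only; the pore diameter, membrane thickness, bias voltage and electrolyte conductivity are assumed values for a hypothetical solid-state pore, not figures taken from this article.

import math

# Rough open-pore conductance model: cylindrical channel plus access resistance,
# G = sigma / (4L / (pi * d**2) + 1 / d), then I = G * V.
sigma = 10.5   # S/m, approximate conductivity of 1 M KCl (assumed)
d = 10e-9      # m, hypothetical pore diameter
L = 20e-9      # m, hypothetical membrane thickness
V = 0.1        # V, applied bias

G = sigma / (4 * L / (math.pi * d ** 2) + 1 / d)
print(f"open-pore current ~ {G * V * 1e9:.1f} nA")  # on the order of a few nanoamps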
Types
Organic
Nanopores may be formed by pore-forming proteins, typically a hollow core passing through a mushroom-shaped protein molecule. Examples of pore-forming proteins are alpha hemolysin, aerolysin, and MspA porin. In typical laboratory nanopore experiments, a single protein nanopore is inserted into a lipid bilayer membrane and single-channel electrophysiology measurements are taken. Newer pore-forming proteins have been extracted from bacteriophages for study into their use as nanopores. These pores are generally selected due to their diameter being above 2 nm, the diameter of double-stranded DNA.
Larger nanopores can be up to 20 nm in diameter. These pores allow small molecules such as oxygen, glucose and insulin to pass, while preventing large immune-system molecules such as immunoglobulins from passing. For example, when rat pancreatic cells are microencapsulated, they receive nutrients and release insulin through nanopores while remaining totally isolated from their neighboring environment, i.e. from foreign cells. This knowledge can help to replace nonfunctional islets of Langerhans cells in the pancreas (responsible for producing insulin) with harvested piglet cells. These cells can be implanted underneath the human skin without the need for immunosuppressants, which put diabetic patients at risk of infection.
Inorganic
Solid-state nanopores are generally made in silicon compound membranes, one of the most common being silicon nitride. The second widely used type of solid-state nanopore is the glass nanopore, fabricated by laser-assisted pulling of a glass capillary. Solid-state nanopores can be manufactured with several techniques, including ion-beam sculpting, dielectric breakdown, electron-beam exposure using a TEM, and ion-track etching.
More recently, the use of graphene as a material for solid-state nanopore sensing has been explored. Another example of solid-state nanopores is a box-shaped graphene (BSG) nanostructure. The BSG nanostructure is a multilayer system of parallel hollow nanochannels located along the surface and having a quadrangular cross-section. The thickness of the channel walls is approximately equal to 1 nm. The typical width of the channel facets is about 25 nm.
Size-tunable elastomeric nanopores have been fabricated, allowing accurate measurement of nanoparticles as they occlude the flow of ionic current. This measurement methodology can be used to measure a wide range of particle types. In contrast to the limitations of solid-state pores, they allow for the optimization of the resistance pulse magnitude relative to the background current by matching the pore-size closely to the particle-size. As detection occurs on a particle by particle basis, the true average and polydispersity distribution can be determined. Using this principle, the world's only commercial tunable nanopore-based particle detection system has been developed by Izon Science Ltd. The box-shaped graphene (BSG) nanostructure can be used as a basis for building devices with changeable pore sizes.
Nanopore based sequencing
The observation that a passing strand of DNA containing different bases corresponds with shifts in current values has led to the development of nanopore sequencing. Nanopore sequencing can occur with bacterial nanopores, as mentioned in the section above, as well as with the nanopore sequencing devices created by Oxford Nanopore Technologies.
Monomer identification
From a fundamental standpoint, nucleotides from DNA or RNA are identified based on shifts in current as the strand enters the pore. In the approach that Oxford Nanopore Technologies uses for nanopore DNA sequencing, a labeled DNA sample is loaded onto the flow cell containing the nanopore. The DNA fragment is guided to the nanopore, where the helix begins to unwind. As the unwound strand moves through the nanopore, it produces a change in the current value, which is measured thousands of times per second. Nanopore analysis software can take this changing current value for each base detected and obtain the resulting DNA sequence. Similarly, with biological nanopores, a constant voltage is applied to the system and the resulting current is monitored. As DNA, RNA or peptides enter the pore, shifts in the current are observed that are characteristic of the monomer being identified.
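The decoding step can be pictured with a deliberately naive sketch: segment the current trace into discrete levels and translate each level into a base with a lookup table. All current values, thresholds and the one-level-per-base mapping below are invented for illustration; real basecallers use statistical or neural-network models over k-mers rather than a simple table.

# Toy illustration of the decoding idea only; not how production basecallers work.
LEVELS = {55: 'A', 50: 'C', 45: 'G', 40: 'T'}   # hypothetical mean currents (pA)

def call_bases(trace, tolerance=2.0):
    bases = []
    for segment_mean in trace:                   # one mean current per event
        for level, base in LEVELS.items():
            if abs(segment_mean - level) <= tolerance:
                bases.append(base)
                break
    return ''.join(bases)

print(call_bases([54.8, 40.3, 44.9, 50.2]))      # prints "ATGC"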
Ion current rectification (ICR) is an important phenomenon for nanopores. Ion current rectification can also be used as a drug sensor and be employed to investigate the charge status of a polymer membrane.
Applications to nanopore sequencing
Apart from rapid DNA sequencing, other applications include separation of single-stranded and double-stranded DNA in solution and the determination of polymer length. At this stage, nanopores are making contributions to the understanding of polymer biophysics, to single-molecule analysis of DNA-protein interactions, and to peptide sequencing. Bacterial nanopores such as hemolysin can be applied to RNA and DNA sequencing and, most recently, to protein sequencing. For example, in a study in which peptides with the same glycine-proline-proline repeat were synthesized and then put through nanopore analysis, an accurate sequence was obtained. Nanopore analysis can also be used to identify differences in the stereochemistry of peptides based on intermolecular ionic interactions. Some conformational changes of a protein can also be observed from the translocation curve, which contributes further data to understanding the peptide fully in its environment. Another bacteria-derived nanopore, the aerolysin nanopore, has shown a similar ability to distinguish residues within a peptide, has been able to identify toxins present even in proclaimed "very pure" protein samples, and has demonstrated stability over varying pH values. A limitation of bacterial nanopores is that, while peptides as short as six residues were accurately detected, larger and more negatively charged peptides produced more background signal that is not representative of the molecule.
Alternate applications
Since the discovery of track-etching technology in the late 1960s, filter membranes with the needed diameters have found potential applications in various fields, including food safety, environmental pollution, biology, medicine, fuel cells, and chemistry. These track-etched membranes are typically made in polymer membranes through a track-etching procedure, during which the polymer membrane is first irradiated by a heavy-ion beam to form tracks, and cylindrical or asymmetric pores are then created along the tracks by wet etching.
Characterization and measurement of these materials are as important as the fabrication of filter membranes with the proper diameters. To date, several methods have been developed, which can be classified into the following categories according to the physical mechanisms they exploit: imaging methods such as scanning electron microscopy (SEM), transmission electron microscopy (TEM) and atomic force microscopy (AFM); fluid transport such as bubble point and gas transport; fluid adsorption such as nitrogen adsorption/desorption (BET), mercury porosimetry, liquid-vapor equilibrium (BJH), gas-liquid equilibrium (permoporometry) and liquid-solid equilibrium (thermoporometry); electronic conductance; ultrasonic spectroscopy; and molecular transport.
More recently, the use of a light-transmission technique as a method for nanopore size measurement has been proposed.
See also
Coulomb blockade
Hemolysin
Nanofluidics
Nanometre
Nanopore sequencing
Nanoporous materials
Pore-forming toxin
References
Further reading
External links
Computer simulations of nanopore devices
Conical Nanopore Sensors
Biomimetic Channels and Ionic Devices
Nanotechnology | Nanopore | [
"Materials_science",
"Engineering"
] | 1,829 | [
"Nanotechnology",
"Materials science"
] |
625,398 | https://en.wikipedia.org/wiki/Ingrid%20van%20Houten-Groeneveld | Ingrid van Houten-Groeneveld (; ; 21 October 1921 – 30 March 2015) was a Dutch astronomer.
Background
In a jointly credited trio with Tom Gehrels and her husband Cornelis Johannes van Houten, she was the discoverer of many thousands of asteroids (credited by the Minor Planet Center with the discovery of 4,641 numbered minor planets). In the Palomar–Leiden survey, Gehrels took the images using the 48-inch Schmidt telescope at Palomar Observatory and shipped the photographic plates to the van Houtens at Leiden Observatory, who analyzed them for new asteroids. The trio are jointly credited with several thousand asteroid discoveries. Van Houten-Groeneveld died on 30 March 2015, at the age of 93, in Oegstgeest, Netherlands.
The Themistian main-belt asteroid 1674 Groeneveld – discovered by Karl Reinmuth at Heidelberg and independently by Finnish astronomer Yrjö Väisälä in 1938 – was named in her honor ().
Publications
Notes
References
External links
I. van Houten-Groeneveld home page
Symposium 85e verjaardag of 4 November 2006 (in Dutch)
2005 Annual report of the Leiden Observatory, see page 6.
1921 births
2015 deaths
21st-century Dutch astronomers
20th-century Dutch women scientists
Discoverers of asteroids
20th-century Dutch astronomers
Dutch women scientists
Academic staff of Leiden University
Scientists from Berlin
Women astronomers | Ingrid van Houten-Groeneveld | [
"Astronomy"
] | 301 | [
"Women astronomers",
"Astronomers"
] |
625,420 | https://en.wikipedia.org/wiki/Paper%20football | Paper football (also called finger football, flick football, tabletop football, thump football, or freaky football) refers to a table-top game, loosely based on American football, in which a sheet of paper folded into a small triangle is slid back and forth across a table top by two opponents. This game is widely practiced for entertainment, mostly by students in primary, middle school (junior high), and high school age in the United States. Though its origin is in dispute, it was widely played at churches in Madison, Wisconsin in the early 1970s. The youth group at Grace Baptist Church held weekly events and competitions including monthly championships.
Gameplay
The game uses a piece of paper folded into a triangle, called the "ball". The starting player begins by kicking off the ball. To perform a kickoff, the ball is placed on the table, suspended by one of the player's hands with the index finger on the upper tip of the ball, then the player flicks the ball with the other hand's thumb and index finger. If the ball ends up flying off the table or hanging on the edge of the table, the kickoff is redone. If the ball lands on the table without reaching the edge of the receiving player's side, players take turns pushing it with a steady fast motion towards the opponent's side.
The player scores points by getting the ball to come to rest hanging over the edge of the opponent's side, called a touchdown. Every time a touchdown is scored, the scoring player has a chance to make a field goal, in which that player flicks the ball, as in the kickoff, through the opponent's goal post, formed by the opponent placing both wrists parallel to the table on the edge, with the tips of both thumbs touching each other and both index fingers pointing straight upward. If the field goal is successful, the kicking player scores one point. The player who conceded points starts the next kickoff.
The game ends based on the agreed-upon rules, be it time limit (the player with the most points when the predetermined amount of time has elapsed wins) or score limit (the first player to reach the predetermined score threshold wins).
See also
Penny football
Button football
Tabletop football
Blow football
References
External links
Children's games
Individual sports
football
Variations of American football
football | Paper football | [
"Mathematics"
] | 470 | [
"Recreational mathematics",
"Paper folding"
] |
625,507 | https://en.wikipedia.org/wiki/Windows%20Calculator | Windows Calculator is a software calculator developed by Microsoft and included in Windows. In its Windows 10 incarnation it has four modes: standard, scientific, programmer, and a graphing mode. The standard mode includes a number pad and buttons for performing arithmetic operations. The scientific mode takes this a step further and adds exponents and trigonometric function, and programmer mode allows the user to perform operations related to computer programming. In 2020, a graphing mode was added to the Calculator, allowing users to graph equations on a coordinate plane.
The Windows Calculator is one of a few applications that have been bundled in all versions of Windows, starting with Windows 1.0. Since then, the calculator has been upgraded with various capabilities.
In addition, the calculator has also been included with Windows Phone and Xbox One. The Microsoft Store page proclaims HoloLens support as of February 2024, but the Calculator app is not installed on HoloLens by default.
History
A simple arithmetic calculator was first included with Windows 1.0.
In Windows 3.0, a scientific mode was added, which included exponents and roots, logarithms, factorial-based functions, trigonometry (supporting radian, degree and gradian angle units), base conversions (2, 8, 10, 16), logic operations, and statistical functions such as single-variable statistics and linear regression.
Windows 9x and Windows NT 4.0
Until Windows 95, it used IEEE 754-1985 double-precision floating point, and the highest number representable by the calculator was 2^1024, which is slightly above 10^308 (≈1.80 × 10^308).
In Windows 98 and later, it uses an arbitrary-precision arithmetic library, replacing the standard IEEE floating point library. It offers bignum precision for basic operations (addition, subtraction, multiplication, division) and 32 digits of precision for advanced operations (square root, transcendental functions). The largest value that can be represented on the Windows Calculator is currently and the smallest is . (Also ! calculates the gamma function which is defined over all real numbers, only excluding the negative integers).
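For comparison only (this uses Python's decimal module, not Microsoft's arithmetic library), 32 significant digits of precision for a square root looks like this:

from decimal import Decimal, getcontext

# Work with 32 significant digits, the precision the article describes for
# Calculator's advanced operations (illustrative; not Microsoft's own code).
getcontext().prec = 32

root = Decimal(2).sqrt()
print(root)         # 1.4142135623730950488016887242097
print(root * root)  # close to, but not exactly, 2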
Windows 2000, XP and Vista
In Windows 2000, digit grouping is added. Degree and base settings are added to menu bar.
The calculators of Windows XP and Vista were able to calculate using numbers beyond 10^10000, but calculating with such numbers (e.g. 10^2^2^2^2^2^2^2...) increasingly slows down the calculator and makes it unresponsive until the calculation has been completed.
These are the last versions of Windows Calculator, where calculating with binary/decimal/hexadecimal/octal numbers is included into scientific mode. In Windows 7, they were moved to programmer mode, which is a new separate mode that co-exists with scientific mode.
Windows 7
In Windows 7, separate programmer, statistics, unit conversion, date calculation, and worksheets modes were added. Tooltips were removed. Furthermore, Calculator's interface was revamped for the first time since its introduction. The base conversion functions were moved to the programmer mode and statistics functions were moved to the statistics mode. Switching between modes does not preserve the current number, clearing it to 0.
The highest number is now limited to 10^10000 again.
In every mode except programmer mode, one can see the history of calculations. The app was redesigned to accommodate multi-touch. Standard mode behaves as a simple checkbook calculator; entering the sequence 6 * 4 + 12 / 4 - 4 * 5 gives the answer 25. In scientific mode, order of operations is followed while doing calculations (multiplication and division are done before addition and subtraction), which means 6 * 4 + 12 / 4 - 4 * 5 = 7.
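The difference between the two evaluation orders can be reproduced in a few lines of Python (an illustration only, not Calculator's own code):

def standard_mode(tokens):
    # Evaluate strictly left to right, ignoring operator precedence,
    # mimicking the checkbook-style behavior described above.
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    result = float(tokens[0])
    for op, value in zip(tokens[1::2], tokens[2::2]):
        result = ops[op](result, float(value))
    return result

expr = "6 * 4 + 12 / 4 - 4 * 5"
print(standard_mode(expr.split()))  # 25.0 (left to right)
print(eval(expr))                   # 7.0 (operator precedence, as in scientific mode)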
In programmer mode, inputting a number in decimal has a lower and upper limit, depending on the data type, and must always be an integer. Data type of number in decimal mode is signed n-bit integer when converting from number in hexadecimal, octal, or binary mode.
On the right side of the main Calculator, one can add a panel with date calculation, unit conversion and worksheets. Worksheets allow one to calculate a result of a chosen field based on the values of other fields. Pre-defined templates include calculating a car's fuel economy (mpg and L/100 km), a vehicle lease, and a mortgage. In pre-beta versions of Windows 7, Calculator also provided a Wages template.
Windows 8.1
While the traditional Calculator is still included with Windows 8.1, a Metro-style Calculator is also present, featuring a full-screen interface as well as normal, scientific, and conversion modes.
Windows 10
The Calculator in non-LTSC editions of Windows 10 is a Universal Windows Platform app. In contrast, Windows 10 LTSC (which does not include universal Windows apps) includes the traditional calculator, which is now named . Both calculators provide the features of the traditional calculator included with Windows 7 and Windows 8.x, such as unit conversions for volume, length, weight, temperature, energy, area, speed, time, power, data, pressure and angle, and the history list, which the user can clear.
Both the universal Windows app and LTSC's register themselves with the system as handlers of a '' pseudo-protocol. This registration is similar to that performed by any other well-behaved application when it registers itself as a handler for a filetype (e.g. ) or protocol (e.g. ).
All Windows 10 editions (both LTSC and non-LTSC) continue to have a , which however is just a stub that launches (via ShellExecute) the handler that is associated with the '' pseudo-protocol. As with any other protocol or filetype, when there are multiple handlers to choose from, users are free to choose which handler they prefer either via the classic control panel ('Default programs' settings) or the immersive UI settings ('Default Apps' settings) or from the command prompt via .
In the Windows 10 Fall Creators Update, a currency converter mode was added to Calculator.
On 6 March 2019, Microsoft released the source code for Calculator on GitHub under the MIT License.
Windows 11
In Windows 11, the Calculator app's user interface was modified to match the design of Windows 11 and a new settings page is present for users to toggle between the themes of the app without changing the operating system's theme. In 2021, Microsoft announced it would migrate the codebase of the Calculator app to C# in order to welcome more developers to contribute to the app.
Features
By default, Calculator runs in standard mode, which resembles a four-function calculator. More advanced functions are available in scientific mode, including logarithms, numerical base conversions, some logical operators, operator precedence, radian, degree and gradian support, as well as simple single-variable statistical functions. It does not provide support for user-defined functions, complex numbers, storage variables for intermediate results (other than the classic accumulator memory of pocket calculators), automated polar-Cartesian coordinate conversion, or two-variable statistics.
Calculator supports keyboard shortcuts; all Calculator features have an associated keyboard shortcut.
Calculator in programmer mode cannot accept or display a number larger than a signed QWORD (16 hexadecimal digits/64 bits). The largest number it can handle is therefore 0x7FFFFFFFFFFFFFFF (decimal 9,223,372,036,854,775,807). Any calculations in programmer mode which exceed this limit will overflow, even if those calculations would succeed in other modes. In particular, scientific notation is not available in this mode.
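The signed 64-bit boundary is easy to verify with ordinary integer arithmetic (plain Python integers are unbounded, so this only demonstrates the boundary values, not the overflow behavior itself):

QWORD_MAX = 0x7FFFFFFFFFFFFFFF
QWORD_MIN = -0x8000000000000000

print(QWORD_MAX)               # 9223372036854775807
print(QWORD_MAX == 2**63 - 1)  # True
print(QWORD_MIN)               # -9223372036854775808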
Issues
In Windows 7, 8, and some versions of Windows 10, transcendental function operations, such as the square root operator (sqrt(4) − 2 = −8.1648465955514287168521180122928e−39), would sometimes be calculated incorrectly due to catastrophic cancellation. In newer versions, this does not happen with integers, but it still happens when decimal numbers are entered.
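Catastrophic cancellation is not specific to Calculator; the Python fragment below is a generic illustration of the effect (it does not reproduce the Calculator bug itself). Subtracting two nearly equal values leaves a result dominated by earlier rounding error:

import math

x = 1e16
a = math.sqrt(x + 1)           # x + 1 already rounds back to x in IEEE-754 doubles
b = math.sqrt(x)
print(a - b)                   # typically prints 0.0, although the true difference is nonzero
print(1 / (2 * math.sqrt(x)))  # ~5e-09, the mathematically expected difference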
Older versions of the universal Calculator in non-LTSC editions of Windows 10 do not use a regional format (set in the Region Control Panel) for number formatting when it differs from the app's display language, for example when the app's language is English (United States) but Windows's regional format is set to a different format.
Calculator Plus
Calculator Plus is a separate application for Windows XP and Windows Server 2003 users that adds a 'Conversion' mode over the Windows XP version of the Calculator. The 'Conversion' mode supports unit conversion and currency conversion. Currency exchange rates can be updated using the built-in update feature, which downloads exchange rates from the European Central Bank.
See also
Formula calculator
List of formerly proprietary software
Microsoft Math Solver
Power Calculator
References
External links
Windows Calculator on Microsoft Store
Source code on GitHub
Microsoft Calculator Plus
1985 software
Formerly proprietary software
Free and open-source software
Mathematical software
Microsoft free software
Software calculators
Software using the MIT license
Universal Windows Platform apps
Windows components
Xbox One software
Windows Phone software | Windows Calculator | [
"Mathematics"
] | 1,997 | [
"Software calculators",
"Mathematical software"
] |
625,653 | https://en.wikipedia.org/wiki/M4%20%28computer%20language%29 | m4 is a general-purpose macro processor included in most Unix-like operating systems, and is a component of the POSIX standard.
The language was designed by Brian Kernighan and Dennis Ritchie for the original versions of UNIX. It is an extension of an earlier macro processor, m3, written by Ritchie for an unknown AP-3 minicomputer.
The macro preprocessor operates as a text-replacement tool. It is employed to re-use text templates, typically in computer programming applications, but also in text editing and text-processing applications. Most users require m4 as a dependency of GNU autoconf.
History
Macro processors became popular when programmers commonly used assembly language. In those early days of programming, programmers noted that much of their programs consisted of repeated text, and they invented simple means for reusing this text. Programmers soon discovered the advantages not only of reusing entire blocks of text, but also of substituting different values for similar parameters. This defined the usage range of macro processors at the time.
In the 1960s, an early general-purpose macro processor, M6, was in use at AT&T Bell Laboratories, which was developed by Douglas McIlroy, Robert Morris and Andrew Hall.
Kernighan and Ritchie developed m4 in 1977, basing it on the ideas of Christopher Strachey. The distinguishing features of this style of macro preprocessing included:
free-form syntax (not line-based like a typical macro preprocessor designed for assembly-language processing)
the high degree of re-expansion (a macro's arguments get expanded twice: once during scanning and once at interpretation time)
The implementation of Rational Fortran used m4 as its macro engine from the beginning, and most Unix variants ship with it.
Many applications continue to use m4 as part of the GNU Project's autoconf. It also appears in the configuration process of sendmail (a widespread mail transfer agent) and for generating footprints in the gEDA toolsuite. The SELinux Reference Policy relies heavily on the m4 macro processor.
m4 has many uses in code generation, but (as with any macro processor) problems can be hard to debug.
Features
m4 offers these facilities:
a free-form syntax, rather than line-based syntax
a high degree of macro expansion (arguments get expanded during scan and again during interpretation)
text replacement
parameter substitution
file inclusion
string manipulation
conditional evaluation
arithmetic expressions
system interface
programmer diagnostics
programming language independent
human language independent
provides programming language capabilities
Unlike most earlier macro processors, m4 does not target any particular computer or human language; historically, however, its development originated for supporting the Ratfor dialect of Fortran. Unlike some other macro processors, m4 is Turing-complete as well as a practical programming language.
Unquoted identifiers which match defined macros are replaced with their definitions. Placing identifiers in quotes suppresses expansion until possibly later, such as when a quoted string is expanded as part of macro replacement. Unlike most languages, strings in m4 are quoted using the backtick (`) as the starting delimiter and the apostrophe (') as the ending delimiter. Having separate starting and ending delimiters allows arbitrary nesting of quotation marks in strings, giving a fine degree of control over how and when macro expansion takes place in different parts of a string.
Example
The following fragment gives a simple example that could form part of a library for generating HTML code. It defines a commented macro to number sections automatically:
divert(-1)
m4 has multiple output queues that can be manipulated with the
`divert' macro. Valid queues range from 0 to 10, inclusive, with
the default queue being 0. As an extension, GNU m4 supports more
diversions, limited only by integer type size.
Calling the `divert' macro with an invalid queue causes text to be
discarded until another call. Note that even while output is being
discarded, quotes around `divert' and other macros are needed to
prevent expansion.
# Macros aren't expanded within comments, meaning that keywords such
# as divert and other built-ins may be used without consequence.
# HTML utility macro:
define(`H2_COUNT', 0)
# The H2_COUNT macro is redefined every time the H2 macro is used:
define(`H2',
`define(`H2_COUNT', incr(H2_COUNT))<h2>H2_COUNT. $1</h2>')
divert(1)dnl
dnl
dnl The dnl macro causes m4 to discard the rest of the line, thus
dnl preventing unwanted blank lines from appearing in the output.
dnl
H2(First Section)
H2(Second Section)
H2(Conclusion)
dnl
divert(0)dnl
dnl
<HTML>
undivert(1)dnl One of the queues is being pushed to output.
</HTML>
Processing this code with m4 generates the following text:
<HTML>
<h2>1. First Section</h2>
<h2>2. Second Section</h2>
<h2>3. Conclusion</h2>
</HTML>
Implementations
FreeBSD, NetBSD, and OpenBSD provide independent implementations of the m4 language. Furthermore, the Heirloom Project Development Tools includes a free version of the m4 language, derived from OpenSolaris.
M4 has been included in the Inferno operating system. This implementation is more closely related to the original m4 developed by Kernighan and Ritchie in Version 7 Unix than its more sophisticated relatives in UNIX System V and POSIX.
GNU m4 is an implementation of m4 for the GNU Project. It is designed to avoid many kinds of arbitrary limits found in traditional m4 implementations, such as maximum line lengths, maximum size of a macro and number of macros. Removing such arbitrary limits is one of the stated goals of the GNU Project.
The GNU Autoconf package makes extensive use of the features of GNU m4.
GNU m4 is currently maintained by Gary V. Vaughan and Eric Blake. GNU m4 is free software, released under the terms of the GNU General Public License.
See also
C preprocessor
Macro (computer science)
Make
Template processor
Web template system
References
External links
GNU m4 website
GNU m4 manual
m4 tutorial
Macro Magic: m4, Part One and Part Two
Macro programming languages
Unix programming tools
Unix SUS2008 utilities
Inferno (operating system) commands | M4 (computer language) | [
"Technology"
] | 1,356 | [
"Computing commands",
"Inferno (operating system) commands"
] |
625,728 | https://en.wikipedia.org/wiki/JavaOS | JavaOS is a discontinued operating system based on a Java virtual machine. It was originally developed by Sun Microsystems. Unlike Windows, macOS, Unix, or Unix-like systems which are primarily written in the C programming language, JavaOS is primarily written in Java. It is now considered a legacy system.
History
The Java programming language was introduced by Sun in May 1995. Jim Mitchell and Peter Madany at JavaSoft designed a new operating system, codenamed Kona, written completely in Java. In March 1996, Tom Saulpaugh joined the now seven-person Kona team to design an input/output (I/O) architecture, having come from Apple as Macintosh system software engineer since June 1985 and co-architect of Copland.
JavaOS was first announced in a Byte article. In 1996, JavaSoft's official product announcement described the compact OS designed to run "in anything from net computers to pagers". In early 1997, JavaSoft transferred JavaOS to SunSoft. In late 1997, Bob Rodriguez led the team to collaborate with IBM who then marketed the platform, accelerated development, and made significant key architectural contributions to the next release of JavaOS, eventually renamed JavaOS for Business. IBM indicated its focus was more on network computer thin clients, specifically to replace traditional IBM 3270 "green screen" and Unix X terminals, and to implement single application clients.
Chorus, a distributed real-time operating system, was used for its microkernel technology. This began with Chorus Systèmes SA, a French company, licensing JavaOS from Sun and replacing the earlier JavaOS hardware abstraction layer with the Chorus microkernel, thereby creating the Chorus/Jazz product, which was intended to allow Java applications to run in a distributed, real-time embedded system environment. Then in September 1997, it was announced that Sun Microsystems was acquiring Chorus Systèmes SA.
In 1999, Sun and IBM announced the discontinuation of the JavaOS product. As early as 2003, Sun materials referred to JavaOS as a "legacy technology", recommending migration to Java ME, leaving the choice of specific OS and Java environment to the implementer.
Design
JavaOS is based on a hardware architecture native microkernel, running on platforms including ARM, PowerPC, SPARC, StrongARM, and IA-32 (x86). The Java virtual machine runs on the microkernel. All device drivers are written in Java and executed by the virtual machine. A graphics and windowing system implementing the Abstract Window Toolkit (AWT) application programming interface (API) is also written in Java.
JavaOS was designed to run on embedded systems and has applications in devices such as set-top boxes, computer networking infrastructure, and automated teller machines (ATMs). It comes with the JavaStation.
Licensing
JavaSoft granted licenses to more than 25 manufacturers, including Oracle, Acer, Xerox, Toshiba, and Nokia. IBM and Sun announced the cooperation for JavaOS for Business at the end of March 1998.
See also
Android (operating system)
Java Desktop System
JX (operating system)
Inferno (operating system)
SavaJe
Vino (operating system)
References
ARM operating systems
Embedded operating systems
Java platform
Microkernels
Microkernel-based operating systems
Object-oriented operating systems
Sun Microsystems software
X86 operating systems | JavaOS | [
"Technology"
] | 682 | [
"Computing platforms",
"Java platform"
] |
625,897 | https://en.wikipedia.org/wiki/Haboob | A haboob () is a type of intense dust storm carried by the wind of a weather front. Haboobs occur regularly in dry land area regions throughout the world.
Formation and characteristics
During thunderstorm formation, winds move in a direction opposite to the storm's travel, and they move from all directions into the thunderstorm. When the storm collapses and begins to release precipitation, wind directions reverse, gusting outward from the storm and generally gusting the strongest in the direction of the storm's travel.
When this downdraft of cold air, or downburst, reaches the ground, it sweeps up dry, loose silt and clay (referred to collectively as dust) from the desert, forming a wall of airborne sediment that precedes the storm cloud. This dust wall can span up to 100 km (62 mi) in width and extend several kilometers in elevation. At their peak intensity, haboob winds can reach speeds of 35–100 km/h (22–62 mph) and may approach suddenly with minimal warning. Rain often fails to reach ground level as it evaporates in the hot, dry air, a phenomenon known as virga. The evaporation process further chills the rushing air and propels it forward. In some instances, persistent rain may carry a significant amount of dust, leading to what are termed mud storms in severe cases.
Safety
Eye and respiratory system protection is advisable for anyone who must be outside during a haboob. Moving to shelter is highly advised during a strong event.
While operating a vehicle, drivers are advised to pull over to the side of the road and turn off their lights to avoid confusing other drivers in conditions of poor visibility.
Occurrence
Middle East
Haboobs have been observed in the Sahara, Sahel (typically Sudan, where they were named and described), as well as across the Arabian Peninsula, throughout Kuwait, and in the most arid regions of Iraq. Haboob winds in the Arabian Peninsula, Iraq, and Kuwait are frequently created by the collapse of a thunderstorm.
North Africa
African haboobs result from the northward summer shift of the Intertropical Convergence Zone into North Africa, bringing moisture from the Gulf of Guinea.
Australia
Haboobs in Australia may be frequently associated with cold fronts. The deserts of Central Australia, especially near Alice Springs, are particularly prone to haboobs, with sand and debris reaching several kilometers into the sky and leaving up to of sand in the haboob's path.
North America
As with haboobs in the Middle East, haboob occurrences in North America are often created by the collapse of a thunderstorm. This is a local or mesoscale event, and at times of extreme drought they can originate in agricultural regions. Some of the most famous dust storms of the Dust Bowl and similar conditions later were in fact synoptic scale events typically generated by a strong cold frontal passage, with storms on 11 November 1911, 9–11 May 1934, 14 April 1935, and 19 February 1954 having been particularly vivid examples.
The arid and semiarid regions of North America—in fact, any dry region—may experience haboobs. In North America, the most common terms for these events are either dust storm or sandstorm. In the U.S., they frequently occur in the deserts of Arizona, including around the cities of Yuma and Phoenix; in New Mexico, including Albuquerque; eastern California; and Texas. Per the Washington State Department of Ecology, they also occur in the Columbia Basin of Eastern Washington, and can impact cities such as Walla Walla and Spokane. In Washington, improved farming practices have led to a decline in large dust storms and haboobs since the 1990s, with the largest likelihood of formation between late March through April, corresponding to the beginning of field tilling in Eastern Washington. In Mexico, they occur in the northern part of the country in the Sonoran and Chihuahuan Desert. Most recently, a haboob impacted the cities of Guaymas, San Carlos, and Empalme, Sonora on 20 July 2023.
Mars
Global dust storms on Mars have been compared to haboobs on Earth.
Titan
Dust storms of Titan observed in 2009 and 2010 have been compared to haboobs. However, the convective storm clouds are composed of liquid methane droplets, and the dust is likely composed of organic tholins.
See also
Bora (wind)
Dry thunderstorm
Dust devil
Intertropical Convergence Zone
Khamsin
Mistral (wind)
Outflow boundary
Simoom
Sirocco
References
External links
Haboob Photos @ HikeArizona.COM
Haboobs, Arizona Department of Transportation.
The Bibliography of Aeolian Research
Haboob on Winds of the World
Time-lapse video of the 5 July 2011 Arizona Haboob
Dust storms
Storm
Weather hazards | Haboob | [
"Physics"
] | 995 | [
"Weather",
"Physical phenomena",
"Weather hazards"
] |
626,072 | https://en.wikipedia.org/wiki/Dead%20zone%20%28ecology%29 | Dead zones are hypoxic (low-oxygen) areas in the world's oceans and large lakes. Hypoxia occurs when dissolved oxygen (DO) concentration falls to or below 2 ml of O2/liter. When a body of water experiences hypoxic conditions, aquatic flora and fauna begin to change behavior in order to reach sections of water with higher oxygen levels. Once DO declines below 0.5 ml O2/liter in a body of water, mass mortality occurs. With such a low concentration of DO, these bodies of water fail to support the aquatic life living there. Historically, many of these sites were naturally occurring. However, in the 1970s, oceanographers began noting increased instances and expanses of dead zones. These occur near inhabited coastlines, where aquatic life is most concentrated.
Coastal regions, such as the Baltic Sea, the northern Gulf of Mexico, and the Chesapeake Bay, as well as large enclosed water bodies like Lake Erie, have been affected by deoxygenation due to eutrophication. Excess nutrients are input into these systems by rivers, ultimately from urban and agricultural runoff and exacerbated by deforestation. These nutrients lead to high productivity that produces organic material that sinks to the bottom and is respired. The respiration of that organic material uses up the oxygen and causes hypoxia or anoxia.
The UN Environment Programme reported 146 dead zones in 2004 in the world's oceans where marine life could not be supported due to depleted oxygen levels. Some of these were as small as a square kilometer (0.4 mi2), but the largest dead zone covered 70,000 square kilometers (27,000 mi2). A 2008 study counted 405 dead zones worldwide.
Causes
Aquatic and marine dead zones can be caused by an increase in nutrients (particularly nitrogen and phosphorus) in the water, known as eutrophication. These nutrients are the fundamental building blocks of single-celled, plant-like organisms that live in the water column, and whose growth is limited in part by the availability of these materials. With more available nutrients, single-celled aquatic organisms (such as algae and cyanobacteria) have the resources necessary to exceed their previous growth limit and begin to multiply at an exponential rate. Exponential growth leads to rapid increases in the density of certain types of these phytoplankton, a phenomenon known as an algal bloom.
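To see why a bloom can develop so quickly, consider an idealized calculation (an illustration of exponential growth, not a measured rate for any particular species): a phytoplankton population that doubles once per day grows by a factor of 2^10, roughly a thousandfold, in ten days, so even a sparse initial population can come to dominate surface waters within a few weeks.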
Limnologist David Schindler, whose research at the Experimental Lakes Area led to the banning of harmful phosphates in detergents, warned about algal blooms and dead zones, "The fish-killing blooms that devastated the Great Lakes in the 1960s and 1970s haven't gone away; they've moved west into an arid world in which people, industry, and agriculture are increasingly taxing the quality of what little freshwater there is to be had here....This isn't just a prairie problem. Global expansion of dead zones caused by algal blooms is rising rapidly."
The major groups of algae are cyanobacteria, green algae, dinoflagellates, coccolithophores and diatom algae. An increase in the input of nitrogen and phosphorus generally causes cyanobacteria to bloom. Other algae are consumed and thus do not accumulate to the same extent as cyanobacteria. Cyanobacteria are not good food for zooplankton and fish and hence accumulate in water, die, and then decompose. The bacterial degradation of their biomass consumes the oxygen in the water, thereby creating the state of hypoxia.
Dead zones can be caused by natural and by anthropogenic factors. Natural causes include coastal upwelling, changes in wind, and water circulation patterns. Other environmental factors that determine the occurrence or intensity of a dead zone include long water residence times, high temperatures, and high levels of sunlight penetration through the water column.
Additionally, natural oceanographic phenomena can cause deoxygenation of parts of the water column. For example, enclosed bodies of water, such as fjords or the Black Sea, have shallow sills at their entrances, causing water to be trapped there for a long time. The eastern tropical Pacific Ocean and northern Indian Ocean have lowered oxygen concentrations which are thought to be in regions where there is minimal circulation to replace the oxygen that is consumed. These areas are also known as oxygen minimum zones (OMZ). In many cases, OMZs are permanent or semi-permanent areas.
Remains of organisms found within sediment layers near the mouth of the Mississippi River indicate four hypoxic events before the advent of synthetic fertilizer. In these sediment layers, anoxia-tolerant species are the most prevalent remains found. The periods indicated by the sediment record correspond to historic records of high river flow recorded by instruments at Vicksburg, Mississippi.
Changes in ocean circulation triggered by ongoing climate change could also add or magnify other causes of oxygen reductions in the ocean.
Anthropogenic causes include use of chemical fertilizers and their subsequent presence in water runoff and groundwater, direct sewage discharge into rivers and lakes, and nutrient discharge into groundwater from large, accumulated quantities of animal waste. Use of chemical fertilizers is considered the major human-related cause of dead zones around the world. However, runoff from sewage, urban land use, and fertilizers can also contribute to eutrophication.
In August 2017, a report suggested that the US meat industry and agroeconomic system are predominantly responsible for the largest-ever dead zone in the Gulf of Mexico. Soil runoff and leached nitrate, exacerbated by agricultural land management and tillage practices as well as manure and synthetic fertilizer usage, contaminated water from the Heartland to the Gulf of Mexico. A large portion of the plant matter by-products from crops grown in this region are used as major feed components in the production of meat animals for agribusiness companies, like Tyson and Smithfield Foods. Over 86% of the livestock feed is inedible for humans.
Notable dead zones in the United States include the northern Gulf of Mexico region, surrounding the outfall of the Mississippi River, the coastal regions of the Pacific Northwest, and the Elizabeth River in Virginia Beach, all of which have been shown to be recurring events over the last several years. Around the world, dead zones have developed in continental seas, such as the Baltic Sea, Kattegat, Black Sea, Gulf of Mexico, and East China Sea, all of which are major fishery areas.
Types
Dead zones can be classified by type, and are identified by the length of their occurrence:
Permanent dead zones are deep-water occurrences in which oxygen concentrations rarely exceed 2 milligrams per liter.
Temporary dead zones are short lived dead zones lasting hours or days.
Seasonal dead zones are annually occurring, typically in warm months of summer and autumn.
Diel cycling hypoxia is a specific form of seasonal dead zone that only becomes hypoxic during the night.
The type of dead zone can, in some ways, be categorized by the time required for the water to return to full health. This time frame depends on the intensity of eutrophication and level of oxygen depletion. A water body that sinks to anoxic conditions and experiences extreme reduction in community diversity will have to travel a much longer path to return to full health. A water body that only experiences mild hypoxia and maintains community diversity and maturity will require a much shorter path length to return to full health.
Effects
The most notable effects of eutrophication are vegetal blooms (which are sometimes toxic), loss of biodiversity, and anoxia, which can lead to the massive death of aquatic organisms.
Due to the hypoxic conditions present in dead zones, marine life within these areas tends to be scarce. Most fish and motile organisms tend to emigrate out of the zone as oxygen concentrations fall, and benthic populations may experience severe losses when oxygen concentrations are below 0.5 mg l−1 O2. In severe anoxic conditions, microbial communities may also shift dramatically, with anaerobic organisms increasing in abundance as aerobic microbes decline and metabolism shifts toward oxidants such as nitrate, sulfate, or iron. Sulfur reduction is a particular concern, as hydrogen sulfide is toxic and further stresses most organisms within the zone, exacerbating mortality risks.
Even when oxygen remains above outright lethal, anoxic levels, low oxygen can have severe effects on the survivability of organisms inside the area. Studies conducted along the Gulf Coast of North America have shown that hypoxic conditions lead to reduced reproductive rates and growth rates in a variety of organisms, including fish and benthic invertebrates. Organisms able to leave the area typically do so when oxygen concentrations decrease to less than 2 mg l−1. At these oxygen concentrations and below, organisms that survive inside the oxygen-deficient environment and are unable to escape the area will often exhibit progressively worsening stress behavior and die. Surviving organisms tolerant of hypoxic conditions often exhibit physiological adaptations appropriate for persisting within hypoxic environments. Examples of such adaptations include increased efficiency of oxygen intake and use, lowering the required oxygen intake through reduced growth rates or dormancy, and increased usage of anaerobic metabolic pathways.
Community composition in benthic communities is dramatically disrupted by periodic oxygen depletion events, such as those of seasonal dead zones and occurring as a result of Diel cycles. The longterm effects of such hypoxic conditions result in a shift in communities, most commonly manifest as a decrease in species diversity through mass mortality events. Reestablishment of benthic communities depend upon composition of adjacent communities for larval recruitment. This results in a shift towards faster establishing colonizers with shorter and more opportunistic life strategies, potentially disrupting historic benthic compositions.
Fisheries
The influence of dead zones on fisheries and other marine commercial activities varies by the length of occurrence and location. Dead zones are often accompanied by a decrease in biodiversity and collapse in benthic populations, lowering the diversity of yield in commercial fishing operations, but in cases of eutrophication-related dead zone formations, the increase in nutrient availability can lead to temporary rises in select yields among pelagic populations, such as anchovies. However, studies estimate that the increased production in the surrounding areas do not offset the net decrease in productivity resulting from the dead zone. For instance, an estimated 17,000 MT of carbon in the form of prey for fisheries has been lost as a result of dead zones in the Gulf of Mexico. Additionally, many stressors in fisheries are worsened by hypoxic conditions. Indirect factors such as increased success by invasive species and increased pandemic intensity in stressed species such as oysters both lead to losses in revenue and ecological stability in affected regions.
Coral reefs
There has been a severe increase in mass mortality events associated with low oxygen on coral reefs, with the majority occurring in the last two decades. Rising water temperature leads to an increase in oxygen demand and to greater ocean deoxygenation, which together cause these large coral reef dead zones. For many coral reefs, the response to hypoxia depends strongly on the magnitude and duration of the deoxygenation. The symptoms can range from reduced photosynthesis and calcification to bleaching. Hypoxia can also have indirect effects, such as an increased abundance of algae and the spread of coral diseases in the ecosystem. While coral is unable to tolerate such low levels of oxygen, algae are quite tolerant, so in interaction zones between algae and coral, increased hypoxia causes more coral death and a greater spread of algae. The increase in mass coral reef die-offs is reinforced by the spread of coral diseases, which spread easily under hypoxic conditions and high concentrations of sulfide. Because of this loop of hypoxia and coral reef mortality, the fish and other marine life that inhabit coral reefs change their behavior in response to the hypoxia. Some fish swim upwards to find more oxygenated water, and some enter a phase of metabolic and ventilatory depression. Invertebrates migrate out of their homes to the surface of the substratum or move to the tips of arborescent coral colonies.
Around six million people, the majority of whom live in developing countries, depend on coral reef fisheries. These mass die-offs due to extreme hypoxic events can have severe impacts on reef fish populations. Coral reef ecosystems offer a variety of essential ecosystem services, including shoreline protection, nitrogen fixation and waste assimilation, as well as tourism opportunities. The continued decline of oxygen in oceans on coral reefs is concerning because it takes many years (decades) for corals to recover and regrow.
Jellyfish blooms
Despite most other life forms being killed by the lack of oxygen, jellyfish can thrive and are sometimes present in dead zones in vast numbers. Jellyfish blooms produce large quantities of mucus, leading to major changes in food webs in the ocean since few organisms feed on them. The organic carbon in mucus is metabolized by bacteria which return it to the atmosphere in the form of carbon dioxide in what has been termed a "jelly carbon shunt". The potential worsening of jellyfish blooms as a result of human activities has driven new research into the influence of dead zones on jelly populations. The primary concern is the potential for dead zones to serve as breeding grounds for jelly populations as a result of the hypoxic conditions driving away competition for resources and common predators of jellyfish. The increased population of jellyfish could have high commercial costs with loss of fisheries, destruction and contamination of trawling nets and fishing vessels, and lowered tourism revenue in coastal systems.
Seagrass beds
Globally, seagrass has been declining rapidly. It is estimated that 21% of the 71 known seagrass species have decreasing population trends and 11% of those species have been designated as threatened on the IUCN Red List. Hypoxia driven by eutrophication and ocean deoxygenation is one of the main underlying factors in these die-offs. Eutrophication causes enhanced nutrient enrichment, which can initially increase seagrass productivity, but with continual nutrient enrichment in seagrass meadows, it can cause excessive growth of microalgae, epiphytes and phytoplankton, resulting in hypoxic conditions.
Seagrass is both a source and a sink for oxygen in the surrounding water column and sediments. At night, the internal oxygen pressure of seagrass is linearly related to the oxygen concentration in the water column, so low water-column oxygen concentrations often result in hypoxic seagrass tissues, which can eventually kill off the seagrass. Normally, seagrass must supply oxygen to its below-ground tissue either through photosynthesis or by diffusing oxygen from the water column through leaves to rhizomes and roots. When this oxygen balance is disturbed, hypoxic seagrass tissues often result. Seagrass exposed to a hypoxic water column shows increased respiration, reduced rates of photosynthesis, smaller leaves, and a reduced number of leaves per shoot. This causes an insufficient supply of oxygen to the below-ground tissues for aerobic respiration, so the seagrass must rely on less-efficient anaerobic respiration. Seagrass die-offs create a positive feedback loop in which the mortality events cause more death, as higher oxygen demands are created when dead plant material decomposes.
Hypoxia increases the invasion of sulfides into seagrass, which negatively affects its photosynthesis, metabolism and growth. Generally, seagrass is able to combat the sulfides by supplying enough oxygen to the roots. However, deoxygenation leaves the seagrass unable to supply this oxygen, thus killing it off.
Deoxygenation reduces the diversity of organisms inhabiting seagrass beds by eliminating species that cannot tolerate the low oxygen conditions. Indirectly, the loss and degradation of seagrass threatens numerous species that rely on seagrass for either shelter or food. The loss of seagrass also affects the physical characteristics and resilience of seagrass ecosystems. Seagrass beds provide nursery grounds and habitat to many harvested commercial, recreational, and subsistence fish and shellfish. In many tropical regions, local people are dependent on seagrass-associated fisheries as a source of food and income.
Seagrass also provides many ecosystem services, including water purification, coastal protection, erosion control, sequestration and the delivery of trophic subsidies to adjacent marine and terrestrial habitats. With continued deoxygenation, the effects of hypoxia are compounded by climate change, which will further increase the decline in seagrass populations.
Mangrove forests
Compared to seagrass beds and coral reefs, hypoxia is more common on a regular basis in mangrove ecosystems, though ocean deoxygenation is compounding the negative effects of anthropogenic nutrient inputs and land-use modification.
Like seagrass, mangrove trees transport oxygen to their roots, reduce sulfide concentrations, and alter microbial communities. Dissolved oxygen is more readily consumed in the interior of the mangrove forest. Anthropogenic inputs may push the limits of survival in many mangrove microhabitats. For example, shrimp ponds constructed in mangrove forests are considered the greatest anthropogenic threat to mangrove ecosystems. These shrimp ponds reduce estuary circulation and water quality, which leads to the promotion of diel-cycling hypoxia. When the quality of the water degrades, the shrimp ponds are quickly abandoned, leaving massive amounts of wastewater. This is a major source of water pollution that promotes ocean deoxygenation in the adjacent habitats.
Due to these frequent hypoxic conditions, the water does not provide habitat to fish. When exposed to extreme hypoxia, ecosystem function can completely collapse. Extreme deoxygenation will affect the local fish populations, which are an essential food source. The environmental costs of shrimp farms in the mangrove forests grossly outweigh their economic benefits. Cessation of shrimp production and restoration of these areas would reduce eutrophication and anthropogenic hypoxia.
Locations
In the 1970s, marine dead zones were first noted in settled areas where intensive economic use stimulated scientific scrutiny: in the U.S. East Coast's Chesapeake Bay, in Scandinavia's strait called the Kattegat, which is the mouth of the Baltic Sea and in other important Baltic Sea fishing grounds, in the Black Sea, and in the northern Adriatic.
Other marine dead zones have appeared in coastal waters of South America, China, Japan, and New Zealand. A 2008 study counted 405 dead zones worldwide.
Baltic Sea
Researchers from the Baltic Nest Institute reported in PNAS that the dead zones in the Baltic Sea have grown from approximately 5,000 km2 to more than 60,000 km2 in recent years.
Some of the causes behind the elevated increase of dead zones can be attributed to the use of fertilizers, large animal farms, the burning of fossil fuels, and effluents from municipal wastewater treatment plants.
With its massive size, the Baltic Sea is best analyzed in sub-areas rather than as a whole. In a paper published in 2004, researchers specifically divided the Baltic Sea into 9 sub-areas, each having its own specific characteristics. The 9 sub-areas are discerned as follows: Gulf of Bothnia, Archipelago region, Gulf of Finland, Gulf of Riga, Gulf of Gdansk, Swedish East-coast, Central Baltic, Belt Sea region, and Kattegat. Each sub-area has responded differently to nutrient additions and eutrophication; however, there are a few general patterns and measures for the Baltic Sea as a whole. As the researchers Rönnberg and Bonsdorff state,
"Irrespective of the area-specific effects of the increased loads of nutrients to the Baltic Sea, the sources are more or less similar in the whole region. The extent and the severity of the discharges may differ, however. As is seen in e.g. HELCOM (1996) and Rönnberg (2001), the major sources in the input of nutrients are derived from agriculture, industry, municipal sewage and transports. Nitrogen emissions in form of atmospheric depositions are also important, as well as local point sources, such as aquaculture and leakage from forestry."
In general, each area of the Baltic Sea is experiencing similar anthropogenic effects. As Rönnberg and Bonsdorff state, "Eutrophication is a serious problem in the Baltic Sea area." However, when it comes to implementation of water revival programs, each area likely will need to be handled on a local level.
Virginia
Chesapeake Bay
According to National Geographic, the Chesapeake Bay was one of the first hypoxic zones to be identified, in the 1970s. The Chesapeake Bay experiences seasonal hypoxia due to high nitrogen levels. These nitrogen levels are caused by urbanization and agriculture: factories on one side of the bay release nitrogen into the atmosphere, while poultry farming on the opposite side produces large amounts of manure that runs off into the bay.
From 1985 to 2019, the caretakers of the Chesapeake Bay made efforts to reduce the annual hypoxic volumes. Significant improvement in 2016–2017 gave assurance that the efforts were succeeding; however, recent data have shown that further efforts are needed to counter the effects of global warming.
Elizabeth River, Virginia
The Elizabeth River estuary is used for commercial and military purposes and is one of the most commonly used ports on the East Coast of the USA. From 2015 to 2019, 11 different conditions were measured in various areas of the Elizabeth River. Throughout the river, there were consistently high levels of nitrogen and phosphorus, along with high levels of other contaminants contributing to the poor quality of life for bottom feeders along the river. The main cause of the pollution of the Elizabeth River has been military and industrial activity through the 1990s. In 1993, the Elizabeth River Project was started in an attempt to restore the river. Adopting the Fundulus heteroclitus (mummichog), one of the fish species largely impacted by the pollution, the group was able to gain traction, carry out multiple projects and remove thousands of tons of contaminated sediment. In 2006, Maersk-APM, a major shipping company, wanted to build a new port on the Elizabeth River. As part of the environmental mitigation, it worked with the Elizabeth River Project to create the Money Point Project, an effort to restore Money Point, which had been deemed biologically depleted due to a black, tar-like substance called creosote lying at the bottom. Maersk-APM gave $5 million to help get the project up and running. By 2012, the project had restored over 7 acres of tidal marsh and 3 acres of oyster reef and created a new shoreline. In 2019, the Money Point Project received the "Best Restored Shore" award from the American Shore and Beach Preservation Association.
Lake Erie
A seasonal dead zone exists in the central part of Lake Erie from east of Point Pelee to Long Point and stretches to shores in Canada and the United States. Between July and October the dead zone can grow to around 10,000 square kilometers. Lake Erie has an excess of phosphorus due to agricultural runoff, which quickens the growth of algae and in turn contributes to hypoxic conditions. The superabundance of phosphorus in the lake has been linked to nonpoint source pollution such as urban and agricultural runoff as well as point source pollution that includes sewage and wastewater treatment plants. The zone was first noticed in the 1960s amid the peak of eutrophication occurring in the lake. After public concern increased, Canada and the US launched efforts in the 1970s to reduce runoff pollution into the lake as a means to reverse the dead zone's growth. Scientists stated in 2018 that phosphorus runoff would have to decrease by a further 40% to avoid the emergence of dead zones in the area. The commercial and recreational fishing industries have been significantly impacted by the hypoxic zone. In 2021, the low-oxygenated waters caused a mass-kill event of freshwater drum (also known as sheepshead fish). Water from the lake, which is also used for human drinking, has been said to acquire a pervasive odor and discoloration when the dead zone is active in the late summer months.
Lower St. Lawrence Estuary
A dead zone exists in the Lower St. Lawrence River area from east of the Saguenay River to east of Baie-Comeau; it is greatest at depth and has been noticed since the 1930s. The main concern for Canadian scientists is the impact on fish found in the area.
Oregon
There is a hypoxic zone covering the coasts of Oregon and Washington that reached peak size in 2006 at an area of over 1,158 square miles. Strong surface winds between April and September cause frequent upwelling that results in an increase of algae blooms, rendering the hypoxia a seasonal occurrence. The upwelling has contributed to lower temperatures within the zone. The dead zone has resulted in sea organisms such as crabs and fish relocating and an interference with commercial fishing. Organisms that cannot relocate have been found to suffocate, leaving them unable to be used by fishermen. In 2009, one scientist described "thousands and thousands" of suffocated crabs, worms, and sea stars along the seafloor of the hypoxic zone. In 2021, $1.9 million was put into monitoring and continuing to study the hypoxic conditions in the area where the dead zone occurs.
Gulf of Mexico 'dead zone'
The area of temporary hypoxic bottom water that occurs most summers off the coast of Louisiana in the Gulf of Mexico is the largest recurring hypoxic zone in the United States. It occurs only during the summer months of the year due to summer warming, regional circulation, wind mixing and high freshwater discharge. The Mississippi River, which drains 41% of the continental United States, dumps high-nutrient runoff such as nitrates and phosphorus into the Gulf of Mexico. According to a 2009 fact sheet created by NOAA, "seventy percent of nutrient loads that cause hypoxia are a result of this vast drainage basin", which includes the heart of U.S. agribusiness, the Midwest. The discharge of treated sewage from urban areas (pop. c. 12 million in 2009) combined with agricultural runoff delivers c. 1.7 million tons of phosphorus and nitrogen into the Gulf of Mexico every year.
Nitrogen is indeed needed to increase crop yields, but plants are inefficient at taking it up, and often more fertilizer is used than plants actually need. Therefore, only a percentage of applied nitrogen ends up in the crops; in some areas that figure is less than 20%. Even though Iowa occupies less than 5% of the Mississippi River drainage basin, average annual nitrate discharge from surface water in Iowa is about 204,000 to 222,000 metric tonnes, or 25% of all the nitrate which the Mississippi River delivers to the Gulf of Mexico. Export from the Raccoon River Watershed is among the highest in the United States, with annual yields of 26.1 kg/ha/year, which ranked as the highest loss of nitrate out of 42 Mississippi subwatersheds evaluated for a Gulf of Mexico hypoxia report.
In 2012, Iowa introduced the Iowa Nutrient Reduction Strategy, which "is a science and technology-based framework to assess and reduce nutrients to Iowa waters and the Gulf of Mexico. It is designed to direct efforts to reduce nutrients in surface water from both point and nonpoint sources in a scientific, reasonable and cost effective manner." The strategy continues to evolve, using voluntary methods to reduce Iowa's negative contributions through outreach, research, and implementation of nutrient-holding practices. To help reduce agricultural runoff into the Mississippi Basin, Minnesota passed MN Statute 103F.48 in 2015, also known as the "Buffer Law", which was designed to implement mandatory riparian buffers between farmland and public waterways across the State of Minnesota. The Minnesota Board of Water and Soil Resources (BWSR) issued a January 2019 report stating that compliance with the Buffer Law had reached 99%.
Size
The area of hypoxic bottom water that occurs for several weeks each summer in the Gulf of Mexico has been mapped most years from 1985 through 2024. The size varies annually from a record high in 2017 when it encompassed more than 22,730 square kilometers (8,776 square miles) to a record low in 1988 of 39 square kilometers (15 square miles).
The 2015 dead zone measured 16,760 square kilometers (6,474 square miles).
Nancy Rabalais of the Louisiana Universities Marine Consortium in Cocodrie, Louisiana predicted that the dead zone or hypoxic zone in 2012 would cover an area of 17,353 square kilometers (6,700 square miles), which is larger than Connecticut; however, when the measurements were completed, the area of hypoxic bottom water in 2012 totaled only 7,480 square kilometers. The models using the nitrogen flux from the Mississippi River to predict the "dead zone" areas have been criticized for being systematically high from 2006 to 2014, having predicted record areas in 2007, 2008, 2009, 2011, and 2013 that were never realized.
In late summer 1988 the dead zone disappeared as the great drought caused the flow of the Mississippi to fall to its lowest level since 1933. During times of heavy flooding in the Mississippi River Basin, as in 1993, the dead zone "dramatically increased in size, approximately larger than the previous year".
Economic impact
Some assert that the dead zone threatens lucrative commercial and recreational fisheries in the Gulf of Mexico. "In 2009, the dockside value of commercial fisheries in the Gulf was $629 million. Nearly three million recreational fishers further contributed about $10 billion to the Gulf economy, taking 22 million fishing trips." Scientists are not in universal agreement that nutrient loading has a negative impact on fisheries. Grimes makes a case that nutrient loading enhances the fisheries in the Gulf of Mexico. Courtney et al. hypothesize that nutrient loading may have contributed to the increases in red snapper in the northern and western Gulf of Mexico.
In 2017, Tulane University offered a $1 million challenge grant for growing crops with less fertilizer.
History
Shrimp trawlers first reported a 'dead zone' in the Gulf of Mexico in 1950, but it was not until 1970 when the size of the hypoxic zone had increased that scientists began to investigate.
After 1950, the conversion of forests and wetlands for agricultural and urban developments accelerated. "Missouri River Basin has had hundreds of thousands of acres of forests and wetlands (66,000,000 acres) replaced with agriculture activity [. . .] In the Lower Mississippi one-third of the valley's forests were converted to agriculture between 1950 and 1976."
In July 2007, a dead zone was discovered off the coast of Texas where the Brazos River empties into the Gulf.
Korea
Jinhae Bay
Jinhae Bay is the first of Korea's two major dead zones. Hypoxia was first reported in Jinhae Bay in September 1974. In 2011, a joint study was done to observe and record the causes and effects of Korea's hypoxic zones and what can be done about them. It found that Jinhae Bay exhibits a seasonal dead zone from early June to late September. This dead zone is caused by "domestic and land use waste and thermal stratification". Jinhae Bay experiences hypoxia largely at the bottom of the bay. The ratio of phosphorus to nitrogen is imbalanced at the bottom but balanced at the top, except from early June to late September, when the bay experiences eutrophication as a whole. The effects of Jinhae Bay's hypoxia are seen in the marine system surrounding Korea, with a loss of biological diversity, particularly of calcareous shelled organisms.
Shihwa Bay
Shihwa Bay is a coastal reservoir created in 1994 to supply surrounding agricultural lands with water and to act as a run-off lake for nearby industrial plants. The bay was made without much environmental consideration, and by 1999 water quality had dropped significantly. This drop in water quality is attributed to the bay not having enough circulation or new water flow to accommodate the domestic and industrial waste being dumped into it. In response, the Korean government set up a pollution management system within the bay, along with a gate system that allows the bay's water to mix with the sea. Shihwa Bay also experiences an imbalance of phosphorus to nitrogen, as well as large inputs of ammonium.
Energy Independence and Security Act of 2007
The Energy Independence and Security Act of 2007 calls for increased production of renewable fuels by 2022, including corn-based ethanol at roughly triple current production, which would require a similar increase in corn production. The plan poses a new problem: the increase in demand for corn production results in a proportional increase in nitrogen runoff. Although nitrogen, which makes up 78% of the Earth's atmosphere, is an inert gas, it has more reactive forms, two of which (nitrate and ammonia) are used to make fertilizer.
According to a professor of crop physiology at the University of Illinois at Urbana-Champaign, corn requires more nitrogen-based fertilizer because it produces a higher grain yield per unit area than other crops and, unlike other crops, corn is completely dependent on the nitrogen available in soil. The results, reported March 18, 2008, in Proceedings of the National Academy of Sciences, showed that scaling up corn production to meet the goal would increase nitrogen loading in the dead zone by 10–18%. This would boost nitrogen levels to twice the level recommended by the Mississippi Basin/Gulf of Mexico Water Nutrient Task Force (Mississippi River Watershed Conservation Programs), a coalition of federal, state, and tribal agencies that has monitored the dead zone since 1997. The task force says a 30% reduction of nitrogen runoff is needed if the dead zone is to shrink.
Reversal
The recovery of benthic communities primarily depends upon the length and severity of hypoxic conditions inside the zone. Less severe conditions and temporary depletion of oxygen allow rapid recovery of benthic communities in the area due to reestablishment by benthic larvae from adjacent areas, with longer conditions of hypoxia and more severe oxygen depletion leading to longer reestablishment periods. Recovery also depends upon stratification levels within the area, so heavily stratified areas in warmer waters are less likely to recover from anoxic or hypoxic conditions in addition to being more susceptible to eutrophication driven hypoxia. The difference in recovery ability and susceptibility to hypoxia in stratified marine environments is expected to complicate recovery efforts of dead zones in the future as ocean warming continues.
Small-scale hypoxic systems with rich surrounding communities are the most likely to recover after the nutrient influxes leading to eutrophication stop. However, depending on the extent of damage and the characteristics of the zone, large-scale hypoxic conditions could also potentially recover after a period of a decade. For example, the Black Sea dead zone, previously the largest in the world, largely disappeared between 1991 and 2001 after fertilizers became too costly to use following the collapse of the Soviet Union and the demise of centrally planned economies in Eastern and Central Europe. Fishing has again become a major economic activity in the region.
While the Black Sea "cleanup" was largely unintentional and involved a drop in hard-to-control fertilizer usage, the U.N. has advocated other cleanups by reducing large industrial emissions. From 1985 to 2000, the North Sea dead zone had nitrogen reduced by 37% when policy efforts by countries on the Rhine River reduced sewage and industrial emissions of nitrogen into the water. Other cleanups have taken place along the Hudson River and San Francisco Bay.
See also
Notes
References
Minnesota Board of Water and Soil Resources (BWSR, 2018), Alternative Practices Introduction | MN Board of Water, Soil Resources
Minnesota 'Buffer Law' statute: MN Statute 103F.48
BWSR Update, January 2019:
Rönnberg, C., & Bonsdorff, E. (2004). Baltic Sea eutrophication: area-specific ecological consequences. Hydrobiologia, 514(1–3), 227–241. https://doi.org/10.1023/B:HYDR.0000019238.84989.7f
Le Moal, Morgane, Gascuel-Odoux, Chantal, Ménesguen, Alain, Souchon, Yves, Étrillard, Levain, Alix, ... Pinay, Gilles (2019). Eutrophication: A new wine in an old bottle? Elsevier, Science of the Total Environment 651:1–11.
Further reading
Suzie Greenhalgh and Amanda Sauer (WRI), "Awakening the 'Dead Zone': An investment for agriculture, water quality, and climate change" 2003
Reyes Tirado (July 2008) Dead Zones: How Agricultural Fertilizers are Killing our Rivers, Lakes and Oceans. Greenpeace publications. See also:
MSNBC report on dead zones, March 29, 2004
Joel Achenbach, "A 'Dead Zone' in The Gulf of Mexico: Scientists Say Area That Cannot Support Some Marine Life Is Near Record Size", The Washington Post, July 31, 2008
Joel Achenbach, "'Dead Zones' Appear In Waters Worldwide: New Study Estimates More Than 400", The Washington Post, August 15, 2008
External links
Louisiana Universities Marine Consortium
NASA on dead zones (Satellite pictures)
Gulf of Mexico Dead Zone – multimedia
, an online nutrient trading tool developed by the World Resources Institute, designed to address issues of eutrophication. See also the PA NutrientNet website designed for Pennsylvania's nutrient trading program.
Aquatic ecology
Chemical oceanography
Ecotoxicology
Environmental issues with water
Fishing industry
Ocean pollution
Oceanographical terminology
Water pollution
Algal blooms | Dead zone (ecology) | [
"Chemistry",
"Biology",
"Environmental_science"
] | 7,883 | [
"Ocean pollution",
"Algae",
"Water treatment",
"Water pollution",
"Chemical oceanography",
"Water quality indicators",
"Ecosystems",
"Aquatic ecology",
"Algal blooms"
] |
626,077 | https://en.wikipedia.org/wiki/Lites | Lites is a discontinued Unix-like operating system, based on 4.4BSD and the Mach microkernel. Specifically, Lites is a multi-threaded server and emulation library that provided unix functions to a Mach-based system. At the time of its release, Lites provided binary compatibility with 4.4BSD, NetBSD, FreeBSD, 386BSD, UX (4.3BSD), and Linux.
Lites was originally written by Johannes Helander at Helsinki University of Technology, and was further developed by the Flux Research Group at the University of Utah.
See also
HPBSD
References
External links
, Utah Lites
Berkeley Software Distribution
Mach (kernel)
Microkernel-based operating systems
Microkernels
X86 operating systems | Lites | [
"Technology"
] | 161 | [
"Operating system stubs",
"Computing stubs"
] |
626,196 | https://en.wikipedia.org/wiki/Electromagnetic%20propulsion | Electromagnetic propulsion (EMP) is the principle of accelerating an object by the utilization of a flowing electrical current and magnetic fields. The electrical current is used to either create an opposing magnetic field, or to charge a field, which can then be repelled. When a current flows through a conductor in a magnetic field, an electromagnetic force known as a Lorentz force, pushes the conductor in a direction perpendicular to the conductor and the magnetic field. This repulsing force is what causes propulsion in a system designed to take advantage of the phenomenon. The term electromagnetic propulsion (EMP) can be described by its individual components: electromagneticusing electricity to create a magnetic field, and propulsionthe process of propelling something. When a fluid (liquid or gas) is employed as the moving conductor, the propulsion may be termed magnetohydrodynamic drive. One key difference between EMP and propulsion achieved by electric motors is that the electrical energy used for EMP is not used to produce rotational energy for motion; though both use magnetic fields and a flowing electrical current.
The science of electromagnetic propulsion does not have its origins with any one individual and has applications in many different fields. The thought of using magnets for propulsion has been dreamed of since at least 1897, when John Munro published his fictional story "A Trip to Venus", and continues to this day. Practical applications of electromagnetic propulsion can be seen in maglev trains and military railguns. Other applications that are not yet widely used or are still in development include ion thrusters for low-orbiting satellites and magnetohydrodynamic drives for ships and submarines.
History
One of the first recorded discoveries regarding electromagnetic propulsion was in 1889 when Professor Elihu Thomson made public his work with electromagnetic waves and alternating currents. A few years later Emile Bachelet proposed the idea of a metal carriage levitated in air above the rails in a modern railway, which he showcased in the early 1890s. In the 1960s Eric Roberts Laithwaite developed the linear induction motor, which built upon these principles and introduced the first practical application of electromagnetic propulsion. In 1966 James R. Powell and Gordon Danby patented the superconducting maglev transportation system, and after this engineers around the world raced to create the first high-speed rail. From 1984 to 1995 the first commercial automated maglev system ran in Birmingham. It was a low speed Maglev shuttle that ran from the Birmingham International Airport to the Birmingham International Railway System.
In the USSR, at the beginning of the 1960s, Prof. V. F. Minin at the Institute of Hydrodynamics in Novosibirsk laid the experimental foundations for electromagnetically accelerating bodies to hypersonic velocities.
Uses
Trains
Electromagnetic propulsion is utilized in transportation systems to minimize friction and maximize speed over long distances. This has mainly been implemented in high-speed rail systems that use a linear induction motor to power trains by magnetic currents. It has also been utilized in theme parks to create high-speed roller coasters and water rides.
Maglev
In a maglev train, the primary coil assembly lies below the reaction plate. A 1–10 cm (0.4–3.9 inch) air gap between the two eliminates friction, allowing for speeds of up to 500 km/h (310 mph). An alternating electric current supplied to the coils creates a change in the polarity of the magnetic field. This pulls the train forward from the front and thrusts it forward from the back.
A typical maglev train costs three cents per passenger mile, or seven cents per ton mile (not including construction costs). This compares to 15 cents per passenger mile for travel by plane and 30 cents per ton mile for travel by intercity trucks. Maglev tracks have high longevity due to minimal friction and an even distribution of weight. Most last for at least 50 years and require little maintenance during this time. Maglev trains are promoted for their energy efficiency since they run on electricity, which can be produced by coal, nuclear, hydro, fusion, wind or solar power without requiring oil. On average, maglev trains travel at 483 km/h (300 mph) and use 0.4 megajoules per passenger mile. Using a 20 mi/gallon car carrying 1.8 people as a comparison, travel by car is typically at 97 km/h (60 mph) and uses 4 megajoules per passenger mile. Carbon dioxide emissions depend on the method of electricity production and fuel use. Many renewable electricity production methods generate little or no carbon dioxide during operation (although carbon dioxide may be released during manufacture of the components, e.g. the steel used in wind turbines). The running of the train is significantly quieter than other trains, trucks or airplanes.
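The quoted energy figures can be cross-checked with simple arithmetic. The sketch below is only a rough plausibility check: the gasoline energy content of about 120 MJ per US gallon is an assumed round number, not a value given in the text.

```python
# Rough plausibility check of the energy figures quoted above: a 20 mpg car with
# 1.8 occupants versus the quoted 0.4 MJ per passenger-mile for maglev.
# The gasoline energy content (~120 MJ per US gallon) is an assumed round number.

GASOLINE_MJ_PER_GALLON = 120.0  # assumed approximate energy content of gasoline

def car_energy_per_passenger_mile(mpg: float, passengers: float) -> float:
    energy_per_vehicle_mile = GASOLINE_MJ_PER_GALLON / mpg  # MJ per vehicle-mile
    return energy_per_vehicle_mile / passengers

car_mj = car_energy_per_passenger_mile(mpg=20.0, passengers=1.8)
maglev_mj = 0.4  # MJ per passenger-mile, as quoted in the text

print(f"Car:    {car_mj:.1f} MJ per passenger-mile")   # ~3.3, same order as the quoted 4
print(f"Maglev: {maglev_mj:.1f} MJ per passenger-mile")
print(f"Ratio:  roughly {car_mj / maglev_mj:.0f}:1")
```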
Assembly: Linear Induction Motor
A linear induction motor consists of two parts: the primary coil assembly and the reaction plate. The primary coil assembly consists of phase windings surrounded by steel laminations, and includes a thermal sensor within a thermal epoxy. The reaction plate consists of a 3.2 mm (0.125 inch) thick aluminum or copper plate bonded to a 6.4 mm (0.25 inch) thick cold-rolled steel sheet. There is an air gap between these two parts that gives an electromagnetic propulsion system its frictionless property. Operation of a linear induction motor begins with an alternating current supplied to the coil windings within the primary coil assembly. This creates a traveling magnetic field that induces a current in the reaction plate, which then creates its own magnetic field. The interaction between the magnetic fields of the primary coil assembly and the reaction plate generates force and direct linear motion.
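The speed of the traveling magnetic field in a linear induction motor follows from the supply frequency and the pole pitch of the windings, with the reaction plate moving somewhat slower (the difference being the slip). The sketch below is a generic illustration with assumed values; it does not describe the specifications of any particular motor mentioned here.

```python
# Generic linear-induction-motor relations: speed of the travelling magnetic field
# (v_s = 2 * f * tau, with f the supply frequency and tau the pole pitch) and the
# slip of the reaction plate. All numeric values are assumed examples only.

def synchronous_speed(supply_freq_hz: float, pole_pitch_m: float) -> float:
    """Speed of the travelling magnetic field in m/s."""
    return 2.0 * supply_freq_hz * pole_pitch_m

def slip(sync_speed_ms: float, plate_speed_ms: float) -> float:
    """Fractional difference between the field speed and the plate speed."""
    return (sync_speed_ms - plate_speed_ms) / sync_speed_ms

v_s = synchronous_speed(supply_freq_hz=50.0, pole_pitch_m=0.25)   # 25.0 m/s
s = slip(sync_speed_ms=v_s, plate_speed_ms=22.0)                  # 0.12
print(f"Travelling-field speed: {v_s:.1f} m/s, slip: {s:.0%}")
```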
Spacecraft
There are multiple applications for EMP technologies in the field of aerospace. Many of these applications are conceptual as of now; however, several range from near-term to next-century prospects. One such application is the use of EMP for fine adjustment of orbiting satellites. One such system is based on the direct interaction of the vehicle's own electromagnetic field with the magnetic field of the Earth. The thrust force may be thought of as an electrodynamic force of interaction of the electric current inside its conductors with the applied natural field of the Earth. To attain a greater force of interaction, the magnetic field must be propagated further from the flight craft. The advantages of such systems are the very precise and instantaneous control over the thrust force. In addition, the expected electrical efficiencies are far greater than those of current chemical rockets, which attain propulsion through the intermediate use of heat; this results in low efficiencies and large amounts of gaseous pollutants. The electrical energy in the coil of the EMP system is translated to potential and kinetic energy through direct energy conversion. This results in the system having the same high efficiencies as other electrical machines while excluding the ejection of any substance into the environment.
The current thrust-to-mass ratios of these systems are relatively low. Nevertheless, since they do not require reaction mass, the vehicle mass remains constant. Also, the thrust can be continuous with relatively low electricity consumption. The biggest limitation is the electrical conductance of the materials needed to produce the necessary currents in the propulsion system.
Ships and Submarines
EMP and its applications for seagoing ships and submarines have been investigated since at least 1958, when Warren Rice filed a patent describing the technology. The technology described by Rice considered charging the hull of the vessel itself. The design was later refined by allowing the water to flow through thrusters, as described in a later patent by James Meng. The arrangement consists of a water channel open at both ends extending longitudinally through or attached to the ship, a means for producing a magnetic field throughout the water channel, electrodes at each side of the channel, and a source of power to send direct current through the channel at right angles to the magnetic flux, in accordance with the Lorentz force.
Elevators
Cable-free elevators using EMP, capable of moving both vertically and horizontally, have been developed by German engineering firm Thyssen Krupp for use in high rise, high density buildings.
See also
Coilgun
Magnetohydrodynamics
Railgun
References
Propulsion
Electromagnetic components
Space elevator | Electromagnetic propulsion | [
"Astronomy",
"Technology"
] | 1,640 | [
"Exploratory engineering",
"Astronomical hypotheses",
"Space elevator"
] |
626,455 | https://en.wikipedia.org/wiki/Alternative%20fuel | Alternative fuels, also known as non-conventional and advanced fuels, are fuels derived from sources other than petroleum. Alternative fuels include gaseous fossil fuels like propane, natural gas, methane, and ammonia; biofuels like biodiesel, bioalcohol, and refuse-derived fuel; and other renewable fuels like hydrogen and electricity.
These fuels are intended to substitute for more carbon intensive energy sources like gasoline and diesel in transportation and can help to contribute to decarbonization and reductions in pollution. Alternative fuel is also shown to reduce non-carbon emissions such as the release of nitric oxide and nitrogen dioxide, as well as sulfur dioxide and other harmful gases in the exhaust. This is especially important in industries such as mining, where toxic gases can accumulate more easily.
Official definitions
Definition in the European Union
In the European Union, alternative fuel is defined by Directive 2014/94/EU of the European Parliament and of the Council of 22 October 2014 on the deployment of alternative fuels infrastructure.
Definition in the US
In the US, the EPA defines alternative fuel as
Definition in Canada
In Canada, since 1996, Alternative Fuels Regulations SOR/96-453 Alternative Fuels Act defined alternative fuel:
China
In China, alternative fuel vehicles should comply with technical guidelines for the local production of alternative-fuel vehicles: they should have a shelf life of more than , and a complete charge should take less than seven hours. Up to 80% of a charge must be available after less than 30 minutes of charging. In addition, pure-electric vehicles must consume electric energy of less than 0.16 kWh/km.
Biofuel
Biofuels are also considered a renewable source. Although renewable energy is used mostly to generate electricity, it is often assumed that some form of renewable energy or a percentage is used to create alternative fuels.
Research is ongoing into finding more suitable biofuel crops and improving the oil yields of these crops. Using the current yields, vast amounts of land and fresh water would be needed to produce enough oil to completely replace fossil fuel usage.
Biomass
Biomass in the energy production industry is living and recently dead biological material which can be used as fuel or for industrial production. It has become popular among coal power stations, which switch from coal to biomass in order to convert to renewable energy generation without wasting existing generating plant and infrastructure. Biomass most often refers to plants or plant-based materials that are not used for food or feed, specifically called lignocellulosic biomass. As an energy source, biomass can either be used directly via combustion to produce heat, or indirectly after converting it to various forms of biofuel.
Algae fuel
Algae-based biofuels have been promoted in the media as a potential panacea to crude oil-based transportation problems. Algae could yield more than 2,000 gallons of fuel per acre per year of production. Algae-based fuels are being successfully tested by the U.S. Navy. Algae-based plastics show potential to reduce waste, and the cost per pound of algae plastic is expected to be cheaper than traditional plastic prices.
Biodiesel
Biodiesel is made from animal fats or vegetable oils, renewable resources that come from plants such as jatropha, soybean, sunflower, corn, olive, peanut, palm, coconut, safflower, canola, sesame and cottonseed. Once these fats or oils are filtered from their hydrocarbons and then combined with an alcohol such as methanol, biodiesel is produced by the resulting chemical reaction. These raw materials can either be mixed with pure diesel in various proportions or used alone. Whatever the mixture, biodiesel releases fewer pollutants (carbon monoxide, particulates and hydrocarbons) than conventional diesel, because biodiesel burns both more cleanly and more efficiently. Even with regular diesel's reduced quantity of sulfur from the introduction of ULSD (ultra-low sulfur diesel), biodiesel outperforms those levels because it is sulfur-free.
Alcohol fuels
Methanol and ethanol fuel are primary sources of energy; they are convenient fuels for storing and transporting energy. These alcohols can be used in internal combustion engines as alternative fuels. Butanol has another advantage: it is the only alcohol-based motor fuel that can be transported readily by existing petroleum-product pipeline networks, instead of only by tanker trucks and railroad cars.
Ammonia
Ammonia (NH3) can be used as a fuel. Benefits of ammonia for ships include reducing greenhouse gas emissions. The conversion of ammonia into nitrogen gas and hydrogen gas is being researched as a possible route for its use in fuel cells and combustion engines.
Ammonia is the simplest molecule that carries hydrogen in a liquid form. It is carbon-free and can be produced using renewable energy. Ammonia can become a transitional fuel soon because of its relative easiness of storage and distribution.
Emulsion fuel
Emulsified fuels include multiple components that are mixed to a water-in-oil emulsion, which are created to improve the fuels combustive properties. Diesel can also be emulsified with water to be used as a fuel. It helps in improving engine efficiency and reducing exhaust emissions.
Carbon-neutral and negative fuels
Carbon-neutral fuel is synthetic fuel, such as methane, gasoline, diesel fuel or jet fuel, produced from renewable or nuclear energy used to hydrogenate waste carbon dioxide recycled from power plant flue exhaust gas or derived from carbonic acid in seawater. Such fuels are potentially carbon neutral because they do not result in a net increase in atmospheric greenhouse gases. To the extent that carbon-neutral fuels displace fossil fuels, or if they are produced from waste carbon or seawater carbonic acid, and their combustion is subject to carbon capture at the flue or exhaust pipe, they result in negative carbon dioxide emissions and net carbon dioxide removal from the atmosphere, and thus constitute a form of greenhouse gas remediation. Such carbon-neutral and carbon-negative fuels can be produced by the electrolysis of water to make hydrogen used in the Sabatier reaction to produce methane, which may then be stored to be burned later in power plants as synthetic natural gas, transported by pipeline, truck, or tanker ship, or be used in gas-to-liquids processes such as the Fischer–Tropsch process to make traditional transportation or heating fuels.
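The two steps named above, electrolysis and the Sabatier reaction, can be written out explicitly. The equations below are a sketch of the overall chemistry only, not a description of the process conditions of any particular plant mentioned here.

```latex
% Overall chemistry sketch: water electrolysis followed by the Sabatier reaction
\begin{align*}
  2\,\mathrm{H_2O} &\longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2}
    && \text{(electrolysis, powered by renewable or nuclear electricity)}\\
  \mathrm{CO_2} + 4\,\mathrm{H_2} &\longrightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}
    && \text{(Sabatier methanation)}
\end{align*}
```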
Carbon-neutral fuels have been proposed for distributed storage of renewable energy, minimizing problems of wind and solar intermittency, and enabling transmission of wind, water, and solar power through existing natural gas pipelines. Such renewable fuels could alleviate the costs and dependency issues of imported fossil fuels without requiring either electrification of the vehicle fleet or conversion to hydrogen or other fuels, enabling continued compatible and affordable vehicles. Germany has built a 250-kilowatt synthetic methane plant which it is scaling up to 10 megawatts. Audi has constructed a carbon-neutral liquefied natural gas (LNG) plant in Werlte, Germany. The plant is intended to produce transportation fuel to offset LNG used in its A3 Sportback g-tron automobiles, and can keep 2,800 metric tons of CO2 out of the environment per year at its initial capacity. Other commercial developments are taking place in Columbia, South Carolina, Camarillo, California, and Darlington, England.
The least expensive source of carbon for recycling into fuel is flue-gas emissions from fossil-fuel combustion, where it can be extracted for about US $7.50 per ton. Automobile exhaust gas capture has also been proposed to be economical but would require extensive design changes or retrofitting. Since carbonic acid in seawater is in chemical equilibrium with atmospheric carbon dioxide, extraction of carbon from seawater has been studied. Researchers have estimated that carbon extraction from seawater would cost about $50 per ton. Carbon capture from ambient air is more costly, at between $600 and $1000 per ton and is considered impractical for fuel synthesis or carbon sequestration.
Nighttime wind power is considered the most economical form of electrical power with which to synthesize fuel, because the load curve for electricity peaks sharply during the warmest hours of the day, but wind tends to blow slightly more at night than during the day. Therefore, the price of nighttime wind power is often much less expensive than any alternative. Off-peak wind power prices in high wind penetration areas of the U.S. averaged 1.64 cents per kilowatt-hour in 2009, but only 0.71 cents/kWh during the least expensive six hours of the day. Typically, wholesale electricity costs 2 to 5 cents/kWh during the day. Commercial fuel synthesis companies suggest they can produce fuel for less than petroleum fuels when oil costs more than $55 per barrel. The U.S. Navy estimates that shipboard production of jet fuel from nuclear power would cost about $6 per gallon. While that was about twice the petroleum fuel cost in 2010, it is expected to be much less than the market price in less than five years if recent trends continue. Moreover, since the delivery of fuel to a carrier battle group costs about $8 per gallon, shipboard production is already much less expensive. However, U.S. civilian nuclear power is considerably more expensive than wind power. The Navy's estimate that 100 megawatts can produce 41,000 gallons of fuel per day indicates that terrestrial production from wind power would cost less than $1 per gallon.
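The final cost estimate in the paragraph above can be reproduced with straightforward arithmetic. The sketch below simply combines figures already quoted in the text (a 100 MW plant, 41,000 gallons per day, and off-peak wind electricity at 0.71–1.64 cents/kWh); it is a plausibility check, not an independent cost model, and ignores capital and other non-electricity costs.

```python
# Plausibility check using only figures quoted in the surrounding paragraph:
# a 100 MW plant making 41,000 gallons of fuel per day, with off-peak wind
# electricity at 0.71-1.64 cents/kWh. Capital and other costs are ignored.

plant_power_mw = 100.0
gallons_per_day = 41_000.0
kwh_per_day = plant_power_mw * 1_000.0 * 24.0  # 2,400,000 kWh per day

for label, cents_per_kwh in [("cheapest six hours", 0.71), ("off-peak average", 1.64)]:
    dollars_per_day = kwh_per_day * cents_per_kwh / 100.0
    per_gallon = dollars_per_day / gallons_per_day
    print(f"{label}: ${per_gallon:.2f} per gallon (electricity cost only)")
# Both cases come out under $1 per gallon, consistent with the estimate in the text.
```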
Hydrogen and formic acid
Hydrogen is an emissionless fuel. The byproduct of hydrogen burning is water, although some mono-nitrogen oxides NOx are produced when hydrogen is burned with air.
Another fuel is formic acid. The fuel is used by converting it first to hydrogen and using that in a fuel cell. Formic acid is much easier to store than hydrogen.
Hydrogen/compressed natural gas mixture
HCNG (or H2CNG) is a mixture of compressed natural gas and 4–9 percent hydrogen by energy.
Hydrogen could also be used as hydroxy gas for better combustion characteristics of compression-ignition engines. Hydroxy gas is obtained through electrolysis of water.
Compressed air
The air engine is an emission-free piston engine using compressed air as fuel.
Propane autogas
Propane is a cleaner-burning, high-performance fuel derived from multiple sources. It is known by many names including propane, LPG (liquefied petroleum gas), LPA (liquid propane autogas), Autogas and others. Propane is a hydrocarbon fuel and is a member of the natural gas family.
Propane as an automotive fuel shares many of the physical attributes of gasoline while reducing tailpipe emissions and well to wheel emissions overall. Propane is the number one alternative fuel in the world and offers an abundance of supply, liquid storage at low pressure, an excellent safety record and large cost savings when compared to traditional fuels.
Propane delivers an octane rating between 104 and 112 depending on the composition of the butane/propane ratios of the mixture. Propane autogas in a liquid injection format captures the phase change from liquid to gas state within the cylinder of the combustion engine producing an "intercooler" effect, reducing the cylinder temperature and increasing air density. The resultant effect allows more advance on the ignition cycle and a more efficient engine combustion.
Propane lacks additives, detergents or other chemical enhancements further reducing the exhaust output from the tailpipe. The cleaner combustion also has fewer particulate emissions, lower NOx due to the complete combustion of the gas within the cylinder, higher exhaust temperatures increasing the efficiency of the catalyst and deposits less acid and carbon inside the engine which extends the useful life of the lubricating oil.
Propane autogas is generated at the well alongside other natural gas and oil products. It is also a by-product of the refining processes which further increase the supply of Propane to the market.
Propane is stored and transported in a liquid state at roughly of pressure. Fueling vehicles are similar to gasoline in the speed of delivery with modern fueling equipment. Propane filling stations only require a pump to transfer vehicle fuel and do not require expensive and slow compression systems when compared to compressed natural gas which is usually kept at over .
In a vehicle format, propane autogas can be retrofitted to almost any engine and provide fuel cost savings and lowered emissions while being more efficient as an overall system due to the large, pre-existing propane fueling infrastructure that does not require compressors and the resultant waste of other alternative fuels in well to wheel lifecycles.
Compressed natural gas
Compressed natural gas (CNG) and liquefied natural gas (LNG) are two cleaner combustible alternatives to conventional liquid automobile fuels.
Compressed natural gas fuel types
CNG vehicles can use both renewable CNG and non-renewable CNG.
Conventional CNG is a fossil fuel. New technologies such as horizontal drilling and hydraulic fracturing to economically access unconventional gas resources, appear to have increased the supply of natural gas in a fundamental way.
Renewable natural gas or biogas is a methane-based gas with similar properties to natural gas that can be used as transportation fuel. Present sources of biogas are mainly landfills, sewage, and animal/agri-waste. Based on the process type, biogas can be divided into the following: biogas produced by anaerobic digestion, landfill gas collected from landfills, treated to remove trace contaminants, and synthetic natural gas (SNG).
Practicality
CNG powers more than 5 million vehicles worldwide, and just over 150,000 of these are in the U.S. American usage is growing at a dramatic rate.
Environmental analysis
Because natural gas emits fewer smog-forming pollutants than other fossil fuels when combusted, cleaner air has been measured in urban localities switching to natural gas vehicles. Tailpipe emissions can be reduced by 15–25% compared to gasoline and diesel. The greatest reductions occur in the medium- and heavy-duty, light-duty and refuse truck segments.
Emission reductions of up to 88% are possible by using biogas.
Natural gas and hydrogen are both lighter than air and can be mixed together.
Nuclear power and radiothermal generators
Nuclear reactors
Nuclear power is any nuclear technology designed to extract usable energy from atomic nuclei via controlled nuclear reactions. Currently, the only controlled method uses nuclear fission in a fissile fuel (with a small fraction of the power coming from subsequent radioactive decay). Use of nuclear fusion for controlled power generation is not yet practical, but is an active area of research.
Nuclear power generally requires a nuclear reactor to heat a working fluid such as water, which is then used to create steam pressure, which is converted into mechanical work for the purpose of generating electricity or propulsion in water. Today, more than 15% of the world's electricity comes from nuclear power, and over 150 nuclear-powered naval vessels have been built.
In theory, electricity from nuclear reactors could also be used for propulsion in space, but this has yet to be demonstrated in a space flight. Some smaller reactors, such as the TOPAZ nuclear reactor, are built to minimize moving parts and use methods that convert nuclear energy to electricity more directly, making them useful for space missions, but this electricity has historically been used for other purposes. Power from nuclear fission has been used in a number of spacecraft, all of them uncrewed. The Soviets up to 1988 orbited 33 nuclear reactors in RORSAT military radar satellites, where electric power generated was used to power a radar unit that located ships on the Earth's oceans. The U.S. also orbited one experimental nuclear reactor in 1965, in the SNAP-10A mission.
Thorium fuelled nuclear reactors
Thorium-based nuclear power reactors have also become an area of active research in recent years. It is being backed by many scientists and researchers, and Professor James Hansen, the former Director at NASA Goddard Institute for Space Studies has reportedly said, "After studying climate change for over four decades, it's clear to me that the world is heading for a climate catastrophe unless we develop adequate energy sources to replace fossil fuels. Safer, cleaner and cheaper nuclear power can replace coal and is desperately needed as an essential part of the solution". Thorium is 3–4 times more abundant within nature than uranium, and its ore, monazite, is commonly found in sands along bodies of water. Thorium has also gained interest because it could be easier to obtain than uranium. While uranium mines are enclosed underground and thus very dangerous for the miners, thorium is taken from open pits. Monazite is present in countries such as Australia, the United States and India, in quantities large enough to power the earth for thousands of years. As an alternative to uranium-fuelled nuclear reactors, thorium has been proven to add to proliferation, produces radioactive waste for deep geological repositories like technetium-99 (half-life over 200,000 years), and has a longer fuel cycle.
For a list of experimental and presently-operating thorium-fueled reactors, see .
Radiothermal generators
In addition, radioisotopes have been used as alternative fuels, both on land and in space. Their use on land is declining due to the danger of theft of the isotope and of environmental damage if the unit is opened. The decay of radioisotopes generates both heat and electricity in many space probes, particularly probes to outer planets where sunlight is weak and low temperatures are a problem. Radiothermal generators (RTGs), which use radioisotopes as fuels, do not sustain a nuclear chain reaction, but rather generate electricity from the decay of a radioisotope.
See also
– An alternative fuel festival in New York
– A possible future alternative to LNG for transporting natural gas
fuels
– A potential lead-free alternative to 100LL aviation gasoline.
References
External links
Alternative Fuels Data Center (U.S. DOE)
Alternative Fuels Information Centre (Victorian Government)
Alternative Fuel Vehicle Training National Alternative Fuels Training Consortium, West Virginia University
Clean Cities Program U.S. DOE program encouraging alternative fuel use
International Air Transport Association alternative aviation fuels
Alternative Fuel Stations Locator for United States
ScienceDaily – Alternative Fuel News
Student's Guide to Alternative Fuel (California Energy Commission)
Sustainable Green Fleets, an EU-sponsored dissemination project for alternative fuels for fleets
Pop. Mechanics: Crunching the numbers on alternative fuels
Alternative Fuels portal on WiserEarth
Alternative Clean Transportation Expo
Hydrogen Internal Combustion Engine Vehicles
Student's Guide to Alternative Fuels
Green Revolution – The Future of Electric Cars
Fuels
Fuel
Sustainable technologies | Alternative fuel | [
"Chemistry"
] | 3,755 | [
"Fuels",
"Chemical energy sources"
] |
626,501 | https://en.wikipedia.org/wiki/Meycauayan | Meycauayan , officially the City of Meycauayan (), is a component city in the province of Bulacan, Philippines. According to the 2020 census, it has a population of 225,673 people.
Etymology
The place got its name from the Tagalog words may kawayan, literally translated to English as "there is bamboo". It was formerly known as Mecabayan, a Kapampangan name.
History
During the Spanish colonization of the country, the town of Meycauayan was established as a settlement by a group of Spanish priests belonging to the Franciscan Order. In 1578, its early inhabitants came into contact with Christianity. In that same year, Father Juan de Placencia and Diego Oropesa built the first church structure, which was believed to be made of nipa and bamboo. Common to all Spanish settlements in that period was the adoption of a patron saint for the newly opened town. Meycauayan has St. Francis of Assisi as the Patron Saint. It was only in 1668, however, that a concrete church structure was erected.
Meycauayan was then one of the largest towns in the province of Bulacan. The towns, which fell under its political jurisdiction, were San Jose del Monte, Bocaue, Valenzuela (formerly Polo), Obando, Marilao, Santa Maria and Pandi. It was also regarded as the unofficial capital of the province, being the hub of activities brought about by the establishment of the market center and the presence of the Spanish military detachment. During the revolution, which was set off by the execution of Dr. Jose Rizal in 1896, Meycauayan contributed its share in the fight against the Spanish conquistadores. Among her sons who figured prominently in the revolution were: Andres Pacheco, Ciriaco Contreras, Guillermo Contreras, Guillermo Bonque, Tiburcio Zuela, and Liberato Exaltacion. There were many others who had joined the revolution and had displayed their exceptional heroism until 1898, when the country gained its independence from Spain.
Between 1901 and 1913, Marilao became part of Meycauayan.
In 1949, a big fire razed the market center and several business establishments in the town, causing setbacks to the development of the municipality. It took several years to recover from the destruction and property losses. However, in the 1960s and early part of 1970s, new hope for the development was ushered in. Reconstruction and rehabilitation of infrastructure facilities were made possible through the assistance of the provincial and national governments. A more sound economic base was established and crop production more than doubled.
Cityhood
Meycauayan twice attempted for cityhood. The first was filed by district representative Angelito Sarmiento, seeking the conversion of the then-municipality of Meycauayan into a component city, which was signed by President Gloria Macapagal Arroyo on March 5, 2001 as Republic Act No. 9021.
The plebiscite for the ratification, along with that of Cauayan, Isabela (by virtue of RA No. 9017 dated February 28), was scheduled by the Commission on Elections on March 30. The bid however failed, and Meycauayan remained a municipality. (Meanwhile, affirmative votes won in the separate plebiscite in Cauayan.)
For the second time, in another attempt for conversion, district representative Reylina Nicolas authored House Bill 4397 (dated July 24, 2006), which was later signed into law by President Arroyo as RA No. 9356 on October 2, 2006.
A plebiscite was held on December 10, where the cityhood was eventually ratified and the proclamation was made in the evening. It was noted that compared to the first plebiscite, the second showed that only more than a hundred voters were added to those in favor of the conversion, as well as a sharp decline in the number of those who were against.
With the ratification, Meycauayan became Bulacan's third component city, following San Jose del Monte in 2000, and Malolos, whose loss in its cityhood bid in 1999 was reversed following a recount.
Contemporary
Today, the city of Meycauayan has transformed into a major economic and industrial hub in the Province of Bulacan and the rest of Region III.
Geography
The City of Meycauayan is generally surrounded by plain land and gentle rolling hills. The name Meycauayan comes from the Filipino phrase may kawayan, which means "with bamboo". Comfortably above sea level, the terrain is an interweaving of greenery and a concrete road network. The slope of the land dips towards a west to north-westerly direction. Rivers, natural lakes and drainage waterways envelop and criss-cross the area.
The city is located north of Manila and south of Malolos City, the provincial capital city. It is bounded by the town of Marilao to the north, the two Metro Manila cities of Valenzuela to the south and Caloocan (North) to the east, and the town of Obando to the west.
Barangays
Meycauayan is administratively subdivided into 26 urban barangays. Each barangay consists of puroks and some have sitios.
Climate
Demographics
In the 2020 census, the population of Meycauayan was 225,673 people, with a density of .
Government
Local government
The Sangguniang Panlungsod is the legislature of the government of Meycauayan. As defined by the Local Government Code of 1991, the legislatures have legislative and quasi-judicial powers and functions. The members of the Sangguniang Panlungsod, often referred to as councilors are either elected or ex-officio and includes a city's vice mayor who serves as the presiding officer.
Past officials
Economy
The City of Meycauayan is the economic, industrial, commercial, financial and educational center of southern Bulacan. The city is known for its jewelry and leather industries. For years, Meycauayan has been the hub of jewelry production in the Philippines and in Asia. It is known for its low-priced jewelry. The locality also produces leather goods. Shoes, bags and every kind of leather product have traditionally been manufactured here. A number of leather tanneries still operate in Meycauayan, which over the years have converted the city into a hub for leather goods.
In 2016, the total net income for Meycauayan is worth Php 6.875 billion, making it the richest in the province of Bulacan and 18th-highest-income city in the Philippines.
Industrial compounds and parks
The City of Meycauayan is also home to many industrial parks and compounds.
Meycauayan Industrial Subd. I, II, III & IV
Meridian Industrial Compound
Muralla Industrial Park
First Valenzuela Industrial Compound
Sterling Industrial Park Phase I, II, III & IV
Education
Meycauayan City has had its own schools division since January 2013. The City Schools Division of Meycauayan has two districts, Meycauayan West District and Meycauayan East District.
There are 24 public elementary schools and 4 public high schools as well as 11 private schools in the city. There are also tertiary schools in Meycauayan. Polytechnic College of the City of Meycauayan is under the funding and management of the City Government, currently located at Pag-asa Street, Barangay Malhacan. Meycauayan College is a private educational institution in Barangay Calvario and Malhacan. It was established in 1925 as Meycauayan Institute. Other than tertiary education, it also offers primary and secondary education.
Religion
Saint Francis of Assisi Parish Church, commonly known as the Meycauayan Church, is a Roman Catholic church located in Meycauayan, Philippines. It is one of the oldest parishes in Bulacan which even predates the Malolos Cathedral established in 1580 and the Barasoain Church established in 1859. It is also the province's largest parish with an estimated population of about 80,000 parishioners. The church is the seat of the vicariate of St. Francis of Assisi in the Diocese of Malolos.
Feasts
Liputan Barrio Fiesta
This festival takes place every 2nd Sunday of May in Barangay Liputan. After a nine-day novena, the fiesta culminates with a colorful fluvial procession in honor of the "Mahal na Señor", an image of the Crucified Christ venerated on the island of Liputan. The image, along with those of the Virgin and St. Joseph, are placed on a pagoda, a makeshift bamboo bier constructed on boats and decorated with buntings. The images are then taken to the old church in the town center of Meycauayan for a mass.
Feast of St. Francis of Assisi
It is a celebration held in the oldest church in Meycauayan, the St. Francis of Assisi Parish Church in Barangay Poblacion, which commemorates the foundation of the city in 1578 by the Franciscans. It has come to be known as the "Kawayanan Festival" and includes an animal parade, street dancing, and other related cultural activities. It is held every fourth of October.
Religious organizations/denominations
Like other cities and municipalities in the Philippines, Meycauayan has no official religion, as the 1987 Constitution mandates that there shall be no state religion and provides for the separation of church and state.
Church buildings and related structures from various sects and denominations are scattered across Meycauayan. Most belong to Christian denominations and orders such as the Catholic Church, Born-again groups, the Baptists, and the Iglesia ni Cristo, while the city also has mosques and Islamic centers.
There are Light Church Meycauayan and Jesus Is Lord Church Meycauayan Chapter as well.
Issues and controversies
Mayoralty dispute (1995–2008)
The succession of the city's administration was put into question by a series of legal cases between two then-Mayors. Florentino Blanco, town mayor from 1987 to 1992, ran in 1995 but was disqualified by the Supreme Court for vote buying on July 21, 1997. Blanco was replaced by Vice Mayor Eduardo Nolasco in an acting capacity, serving out the remainder of his term.
Blanco ran again in 1998 but lost to Eduardo Alarilla; Blanco attempted to file an election protest against Alarilla, but the COMELEC dismissed the case. He attempted to run again in 2004 but later withdrew his candidacy. In 2007, he ran once more but lost to Eduardo Alarilla's wife, Joan Alarilla (Mr. Alarilla had by then reached the three-term limit imposed by law). Then-Mayor Alarilla attempted to disqualify Blanco; the COMELEC ruled in her favor, but the Supreme Court reversed the decision, ruling that Blanco was still eligible to run for public office.
Heirs of Anacleto Nieto vs. Meycauayan, Bulacan
On December 13, 2007, the Supreme Court of the Philippines ordered Meycauayan, Bulacan to surrender peaceful possession of a 3,882-square-meter lot at Poblacion, Meycauayan (TCT No. T-24.055 (M)) to the Heirs of Anacleto Nieto and to vacate it; the city had used the lot and even constructed an extension of the public market on it. Meycauayan was also ordered to pay the reasonable value of the property and P1,716,000.00 as reasonable compensation for the use of the property from 1966 until the filing of the complaint on December 28, 1994.
Pollution
In 2007, Meycauayan and the neighboring town of Marilao in Bulacan province shared a slot on the list of the world's 30 most polluted places in the developing world drawn up by the private New York-based institute Pure Earth. In its 2007 report, "The World’s Worst Polluted Places", Pure Earth said: "Industrial waste is haphazardly dumped into the Meycauayan, Marilao and Obando River system, a source of drinking and agricultural water supplies for the 250,000 people living in and around" the Meycauayan-Marilao area. Meycauayan also shares a border with Caloocan.
Gallery
Notable people
Lydia de Vega – track and field athlete, Asian Games medalist
Rans Rifol – actress, former member of MNL48
Roel Cortez – singer, songwriter
Chelsea Manalo – beauty pageant titleholder, Miss Universe Philippines 2024
Rey Valera – singer, songwriter
References
External links
Meycauayan Bulacan
Charter of the City of Meycauayan (RA 9356)
Philippine Standard Geographic Code
Philippine Census Information
Cities in Bulacan
Populated places established in 1578
1578 establishments in the Philippines
Contaminated farmland
Component cities in the Philippines | Meycauayan | [
"Chemistry",
"Environmental_science"
] | 2,722 | [
"Contaminated farmland",
"Water pollution"
] |
626,506 | https://en.wikipedia.org/wiki/Morris%20Canal | The Morris Canal (1829–1924) was a common carrier anthracite coal canal across northern New Jersey that connected the two industrial canals in Easton, Pennsylvania across the Delaware River from its western terminus at Phillipsburg, New Jersey to New York Harbor and New York City through its eastern terminals in Newark and on the Hudson River in Jersey City. The canal was sometimes called the Morris and Essex Canal, in error, due to confusion with the nearby and unrelated Morris and Essex Railroad.
With a total elevation change of more than , the canal was considered an ingenious technological marvel for its use of water-driven inclined planes, the first in the United States, to cross the northern New Jersey hills.
It was built primarily to move coal to industrializing eastern cities that had stripped their environs of wood. Completed to Newark in 1831, the canal was extended eastward to Jersey City between 1834 and 1836. In 1839, hot blast technology was married to blast furnaces fired entirely using anthracite, allowing the continuous high-volume production of plentiful anthracite pig iron.
The Morris Canal eased the transportation of anthracite from Pennsylvania's Lehigh Valley to northern New Jersey's growing iron industry and other developing industries adopting steam power in New Jersey and the New York City area. It also carried minerals and iron ore westward to blast furnaces in western New Jersey and Allentown and Bethlehem in the Lehigh Valley until the development of Great Lakes iron ore caused the trade to decline.
The Morris Canal remained in heavy use through the 1860s. But railroads had begun to eclipse canals in the United States, and in 1871, it was leased to the Lehigh Valley Railroad.
Like many enterprises that depended on anthracite, the canal's revenues dried up with the rise of oil fuels and truck transport. It was taken over by the state of New Jersey in 1922, and formally abandoned in 1924.
While the canal was largely dismantled in the following five years, portions of it and its accompanying feeders and ponds have been preserved. A statewide greenway for cyclists and pedestrians is planned, beginning in Phillipsburg, traversing Warren, Sussex, Morris, Passaic, Essex, and Hudson Counties and including the old route through Jersey City. The canal was added to the National Register of Historic Places on October 1, 1974, for its significance in engineering, industry, and transportation. The boundary was increased in 2016 to include the Lake Hopatcong station in Landing.
Description
On the canal's western end, at Phillipsburg, a cable ferry allowed Morris Canal boats to cross the Delaware River westward to Easton, Pennsylvania, and travel up the Lehigh Canal to Mauch Chunk, in the anthracite coal regions, to receive their cargoes from the mines. From Phillipsburg, the Morris Canal ran eastward through the valley of the Musconetcong River, which it roughly paralleled upstream to its source at Lake Hopatcong, New Jersey's largest lake. From the lake, the canal descended through the valley of the Rockaway River to Boonton, eventually around the northern end of Paterson's Garret Mountain, and south to its 1831 terminus at Newark on the Passaic River. From there it continued eastward across Kearny Point and through Jersey City to the Hudson River. The extension through Jersey City was at sea level and was supplied with water from the lower Hackensack River.
With its two navigable feeders, the canal was long. Its ascent eastward from Phillipsburg to its feeder from Lake Hopatcong was , and the descent from there to tidewater was . Surmounting the height difference was considered a major engineering feat of its day, accomplished through 23 locks and 23 inclined planes — essentially, short railways that carried canal boats in open cars uphill and downhill using water-powered winches. Inclined planes required less time and water than locks, although they were more expensive to build and maintain.
History
The idea for constructing the canal is credited to Morristown businessman George P. MacCulloch, who reportedly conceived the idea while visiting Lake Hopatcong. In 1822, MacCulloch brought together a group of interested citizens at Morristown to discuss the idea.
The Palladium of Liberty, a Morristown newspaper of the day, reported on August 29, 1822: "...Membership of a committee which studied the practicality of a canal from Pennsylvania to Newark, New Jersey, consisted of two prominent citizens from each county (NJ) concerned: Hunterdon County, Nathaniel Saxton, Henry Dusenberry; Sussex County, Morris Robinson, Gamaliel Bartlett; Morris County, Lewis Condict, Mahlon Dickerson; Essex County, Gerald Rutgers, Charles Kinsey; Bergen County, John Rutherford, William Colefax ...".
On November 15, 1822, the New Jersey Legislature passed an act appointing three commissioners, one of whom was MacCulloch, to explore the feasibility of the project, determine the canal's possible route, and estimate its costs. MacCulloch initially greatly underestimated the height difference between the Passaic and Lake Hopatcong, pegging it at only .
On December 31, 1824, the New Jersey Legislature chartered the Morris Canal and Banking Company, a private corporation charged with the construction of the canal. The corporation issued 20,000 shares of stock at $100 a share, providing $2 million of capital, divided evenly between funds for building the canal and funds for banking privileges. The charter provided that New Jersey could take over the canal at the end of 99 years. In the event that the state did not take over the canal, the charter would remain in effect for 50 years more, after which the canal would become the property of the state without cost.
Construction
In 1823, the canal company hired Ephraim Beach, who was originally an assistant engineer on the Erie Canal, as its chief engineer, to survey the routes for the Morris Canal.
Construction started in 1824 in Newark, with a channel wide and deep. The canal started from Upper Newark Bay, followed the Passaic River and crossed it at Little Falls, then went on to Boonton, Dover, then the southern tip of Lake Hopatcong, whereupon it went to Phillipsburg.
On October 15, 1825, ground was broken at the summit level at the "Great Pond" (i.e. lake Hopatcong). By 1828, 82 of the 97 eastern sections and 43 of the 74 western sections were finished. By 1829, some sections were completed and opened for traffic, and in 1830, the section from Newark to Rockaway was opened.
Because the locks could only handle boats of , through traffic from the Lehigh Canal was impossible, and coal had to be reloaded at Easton.
Design and building of the inclined planes
The vertical movement on the Morris Canal was , in comparison with less than on the Erie Canal, and would have required a lock every , which would have made the costs prohibitive.
James Renwick, a professor at Columbia University, devised the idea of using inclined planes to raise the boats in , instead of using about 300 lift locks, since a lift lock of that time typically lifted about . In the end, Renwick used only 23 inclined planes and 23 locks. Lock dimensions were originally wide and long.
Renwick's original design seems to have called for double tracks on all inclined planes, with the descending caisson holding more water; thus, the system theoretically would not have needed external power. Nevertheless, the inclined planes were built with overshot water wheels to supply power.
The early planes were done by different contractors and differed greatly. In 1829, the canal company hired David Bates Douglass from West Point, who became the chief engineer of the planes. He supervised the construction of the remaining planes to be built and also altered the already built planes.
The inclined planes had a set of tracks, gauge about , running from the lower level up the incline, over the crest of the hill at the top, and down into the next level. Tracks were submerged at both ends. A large cradle, holding the boat, ran on the tracks. Iron overshot waterwheels originally powered the planes.
The Scotch (reaction) turbines, which later replaced the overshot water wheels, were feet in diameter and made of cast iron. They could pull boats up an 11% grade. The longest plane was the double-tracked Plane 9 West, which was long and lifted boats up (i.e. 6% grade) in 12 minutes. The total weight of the boat, cargo, and cradle was about .
The Scotch turbines produced (for example, on Plane 2 West) 235 horsepower using a head of water and had a discharge rate of per minute. Some turbines were also reported to develop 704 horsepower. The winding drum was in diameter and had a spiral groove of pitch. The rope was fastened on both ends to the drum, and there was a clutch that allowed the direction of the wheel to be reversed. The plane had two lines of steel rails, with a gauge of from center of rail to center. Rails were wide at top and high, and weighed . The cradle had a brake, in case the load went downhill too fast. Descent was also checked by the plane-man by putting about half power through the turbine. The water was fed into the turbines from below, thus relieving friction on the bearings and balancing them.
A comparison of Plane 2 West (Stanhope), which had a lift, with a flight of 12 locks yields the following: the plane took 5 minutes 30 seconds and consumed of water lifting a loaded boat. Locks (meaning of water per lock) would consume for 12 locks (about 23 times more water) and would take 96 minutes.
The Elbląg Canal, one of Seven Wonders of Poland, used the Morris Canal's technology as inspiration for its inclined planes; for that reason, the inclined planes on that canal strongly resemble those on the Morris Canal.
The Newark Eagle reported in 1830:
The machinery was set in motion under the direction of Major Douglass, the enterprising Engineer. The boat, with two hundred persons on board, rose majestically out of the water; in one minute it was upon the summit, which it passed apparently with all the ease that a ship would cross a wave of the sea. As the forward wheels of the car commenced their descent, the boat seemed gently to bow to the spectators and the town below, then glided quickly down the wooden way. In six minutes and thirty seconds it descended from the summit and re-entered the canal, thus passing a plane one thousand and forty feet long, with a descent of seventy feet, in six and one half minutes.
An English visitor, Fanny Trollope, in her 1832 book Domestic Manners of the Americans, wrote of the canal:
We spent a delightful day in New Jersey, in visiting, with a most agreeable party, the inclined planes, which are used instead of locks on the Morris canal.
This is a very interesting work; it is one among a thousand which prove the people of America to be the most enterprising in the world. I was informed that this important canal, which connects the waters of the Hudson and the Delaware, is a hundred miles long, and in this distance overcomes a variation of level amounting to sixteen hundred feet. Of this, fourteen hundred are achieved by inclined planes. The planes average about sixty feet of perpendicular lift each, and are to support about forty tons. The time consumed in passing them is twelve minutes for one hundred feet of perpendicular rise. The expense is less than a third of what locks would be for surmounting the same rise. If we set about any more canals, this may be worth attending to.
This Morris canal is certainly an extraordinary work; it not only varies its level sixteen hundred feet, but at one point runs along the side of a mountain at thirty feet above the tops of the highest buildings in the town of Paterson, below; at another it crosses the falls of the Passaic in a stone aqueduct sixty feet above the water in the river. This noble work, in a great degree, owes its existence to the patriotic and scientific energy of Mr. Cadwallader Colden.
Orange Street Inclined Plane
In 1902, after a fatal crash between a Delaware and Lackawanna railroad train and a streetcar, the railroad grade was lowered (to the level it occupies today) and the Morris Canal had to build an electrically driven inclined plane to bring boats up and over the railroad and Orange Street, and then back down into the canal, with a pipe to carry the water across the break.
Aqueducts
Several aqueducts were built for the canal: the Little Falls Aqueduct over the Passaic River in Paterson, New Jersey, and the Pompton River Aqueduct, as well as aqueducts over the Second and Third rivers.
The longest level was , from Bloomfield to Lincoln Park; the second-longest, from Port Murray to Saxon Falls.
The aqueduct over the Pohatcong Creek at the base of Inclined Plane 7 West is now used as a road bridge on Plane Hill Road in Bowerstown.
Opening of the canal
On November 1, 1830, before the whole canal was finished, the eastern side of the canal between Dover and Newark was tested with several boats loaded with iron ore and iron. These went through the planes without incident. On May 20, 1832, the canal was officially opened. The first boat to pass entirely through the canal was the Walk-on-Water, followed by two coal-laden boats that went from Phillipsburg all the way to Newark. This initial section (not counting the Jersey City portion, which was built later) ran and cost $2,104,413.
Operating years
Soon after opening, it became apparent that the canal had to be widened and extended across Bayonne to New York Bay. Early boats, called "flickers," could only carry 18 tons. By 1840 the company had finished enlarging the locks, canal, and planes, and finished building the extension to Jersey City. Boats were divided in two and hitched with a pin, as were the trucks (cradles) on the inclined planes. The enlarged locks' dimensions were now wide and long.
The original company failed in 1841 amid banking scandals, and the canal was leased to private bidders for three years. The canal company was reorganized in 1844, with a capitalization of $1 million. The canal bed was inspected, and improvements were made. First, the places where seepage occurred were lined with clay, and two feeders were dug, to Lake Hopatcong and to Pompton. The inclined planes were rebuilt with wire cabling. Banking privileges were removed in 1849, leaving the company as a canal-operating business only.
By 1860, the canal had been progressively enlarged to accommodate boats of . Traffic reached a peak in 1866, when the canal carried of freight (equivalent to nearly 13,000 boatloads). Between 1848 and 1860, the original overshot water wheels that powered the inclined planes were replaced with more powerful water turbines. The locks and inclines planes were also renumbered since some had been combined or eliminated.
Boats of were now long, wide, and drew of water.
Cargo
The Morris Canal carried coal, malleable pig iron, and iron ore. It also carried grain, wood, cider, vinegar, beer, whiskey, bricks, hay, hides, sugar, lumber, manure, lime, and ice. Although the Morris Canal was mainly a freight canal rather than a passenger canal, some boats did offer "Cool summer rides accompanied by a shipment of ice". Additional cargo included scrap metal, zinc, sand, clay, and farm products.
Iron ore from the Ogden Mine was brought to Nolan's Point (Lake Hopatcong). In 1880 the canal transported 1,700 boatloads (about ) of iron ore. Thereafter, competition from taconite ore in the Lake Superior region brought a decline in New Jersey bog iron.
Working hours
The plane tender of Plane 11 East reported that during the operating days, boats would tie up at night up to a mile above the plane, down another mile to Lock 15 East (Lock 14 East in the old 1836 numbering), and then about a mile below Lock 14 East. They would start putting boats through starting around 4 a.m. (dawn), and go all day long until 10 p.m., often handling boatmen who had gone through that morning, unloaded in Newark, and were returning.
Decline
The canal's profitability was undermined by railroads, which could deliver in five hours cargo that took four days by boat.
In 1871, the canal was leased by the Lehigh Valley Railroad, which sought the valuable terminal properties at Phillipsburg and Jersey City. The railroad never realized a profit from the operation of the canal.
By the early 20th century, commercial traffic on the canal had become negligible. Two committees, in 1903 and 1912, recommended abandoning the canal. The 1912 survey wrote, "...from Jersey City to Paterson, [the canal] was little more than an open sewer ... but its value beyond Paterson was of great importance, no longer as a freight transit line but as a parkway." Many of the existing photographs of the working canal were shot as part of these surveys, as well as by other people who wanted photographs of the canal before its demise.
In 1918, the canal company filed a lawsuit to block the construction of the Wanaque Reservoir in Passaic County, asserting that the reservoir would divert water needed for the Pompton feeder. The company won the suit in 1922, but the victory was Pyrrhic; the canal was now viewed as an impediment to the development of the area's water supply. On March 1, 1923, the state of New Jersey took possession of the canal; it shut it down the following year. Over the next five years, the state largely dismantled the canal: the water was drained out, banks were cut, and canal works were destroyed, including the needless dynamiting of the Little Falls aqueduct.
The Newark City Subway, now Newark Light Rail, was built along its route.
Canal today
The Morris Canal Historic District was added to the New Jersey Register of Historic Places in 1973 and to the National Register of Historic Places in 1974. The canal was listed as a National Historic Civil Engineering Landmark in 1980.
Portions of the canal are preserved. Waterloo Village, a restored canal town in Sussex County, has the remains of an inclined plane, a guard lock, a watered section of the canal, a canal store, and other period buildings. The Canal Society of New Jersey maintains a museum in the village.
Other remnants and artifacts of the canal can be seen along its former course. On the South Kearny, New Jersey, peninsula, where the canal ran just south of and parallel to the Lincoln Highway, now U.S. Route 1/9 Truck, the cross-highway bridges for Central Avenue and the rail spur immediately to its east were built to span the highway and the canal.
The inlet where the canal connected to the Hudson River is now the north edge of Liberty State Park, and the right-of-way of the Hudson-Bergen Light Rail follows the canal for part of its length.
Morris Canal Greenway
The North Jersey Transportation Planning Authority is developing the Morris Canal Greenway, a group of passive recreation parks and preserves along parts of the former canal route.
Parks along the greenway include:
Peckman Preserve
Pompton Aquatic Park
Wayne Township Riverside Park
Walking path in Lincoln Park
Gallery
Historic images
See also
Pequannoc Spillway
Pompton dam
Waterloo Village
Cornelius Clarkson Vermeule II
Henry Barnard Kümmel
Delaware Canal - A canal feeding urban Philadelphia connecting with the Morris and Lehigh Canals at their respective Easton terminals.
Delaware and Raritan Canal – A later New Jersey canal carrying mostly coal from the Delaware River to New York and northeastern New Jersey, and iron ore from New Jersey up the Lehigh.
Chesapeake and Delaware Canal – A canal crossing the Delmarva Peninsula in the states of Delaware and Maryland, connecting the Chesapeake Bay with the Delaware Bay.
Delaware and Hudson Canal - Another early built coal canal as the American canal age began; contemporary with the Lehigh and the Schuylkill navigations.
Lehigh Canal – A sister canal in the Lehigh Valley that fed coal traffic to the Delaware Canal via a connection in Easton, Pennsylvania.
Schuylkill Canal - Navigation joining Reading, PA and Philadelphia.
Paterson Great Falls
Lake Hopatcong
General references
Notes
References
Further reading
External links
Canal Society of New Jersey, includes canal map
http://planning.morriscountynj.gov/survey/canal/ a partial listing of Canal employees in Morris County, New Jersey
Walking The Morris Canal
Photo Documentary of the Morris Canal
The Morris Canal in Bloomfield, NJ
The Morris Canal in Roxbury Township, NJ
1831 establishments in New Jersey
1924 disestablishments in New Jersey
Canals in New Jersey
Historic American Engineering Record in New Jersey
Historic Civil Engineering Landmarks
Transportation buildings and structures in Hudson County, New Jersey
Transportation buildings and structures in Morris County, New Jersey
Transportation buildings and structures in Sussex County, New Jersey
Transportation buildings and structures in Warren County, New Jersey
Economic history of New Jersey
Canals on the National Register of Historic Places in New Jersey
National Register of Historic Places in Hudson County, New Jersey
National Register of Historic Places in Morris County, New Jersey
National Register of Historic Places in Sussex County, New Jersey
National Register of Historic Places in Warren County, New Jersey
Canals opened in 1836
Transportation buildings and structures in Passaic County, New Jersey
Historic districts on the National Register of Historic Places in New Jersey
New Jersey Register of Historic Places
Braille trail sites | Morris Canal | [
"Engineering"
] | 4,429 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
626,514 | https://en.wikipedia.org/wiki/Presentation | A presentation conveys information from a speaker to an audience. Presentations are typically demonstrations, introduction, lecture, or speech meant to inform, persuade, inspire, motivate, build goodwill, or present a new idea/product. Presentations usually require preparation, organization, event planning, writing, use of visual aids, dealing with stress, and answering questions. "The key elements of a presentation consists of presenter, audience, message, reaction and method to deliver speech for organizational success in an effective manner." Presentations are widely used in tertiary work settings such as accountants giving a detailed report of a company's financials or an entrepreneur pitching their venture idea to investors. The term can also be used for a formal or ritualized introduction or offering, as with the presentation of a debutante. Presentations in certain formats are also known as keynote address. Interactive presentations, in which the audience is involved, are also represented more and more frequently. Instead of a monologue, this creates a dialogue between the speaker and the audience. The advantages of an interactive presentation is for example, that it attracts more attention from the audience and that the interaction creates a sense of community.
Visual elements
A presentation program is commonly used to generate the presentation content, some of which also allow presentations to be developed collaboratively, e.g. using the Internet by geographically disparate collaborators. Presentation viewers can be used to combine content from different sources into one presentation. Some of the popular presentation products used across the globe are offered by Apple, Google and Microsoft.
Microsoft PowerPoint and Google Slides are effective tools for developing slides; both allow groups to work together online, updating each account as the presentation is edited. Content such as text, images, links, and effects are added into each of the presentation programs to deliver useful, consolidated information to a group. Visual elements add to the effectiveness of a presentation and help emphasize the key points being made through the use of type, color, images/videos, graphs, layout, and transitions.
Legibility
One common means to help one convey information and the audience stay on track is through the incorporation of text in a legible font size and type. According to the article "Prepare and Deliver an Effective Presentation", effective presentations typically use serif fonts (e.g. Times New Roman, Garamond, Baskerville, etc.) for the smaller text and sans serif fonts (e.g. Helvetica, Futura, Arial, etc.) for headings and larger text. The typefaces are used along with type size to improve readability for the audience. A combination of these typefaces can also be used to create emphasis. The majority of the fonts within a presentation are kept simple to aid in readability. Font styles, like bold, italic, and underline, are used to highlight important points.
It is possible to emphasize text and still maintain its readability by using contrasting colors. For example, black words on a white background emphasize the text being displayed while still maintaining its readability. Text that contrasts with the background of a slide also enhances visibility. Readability and visibility enhance a presentation experience, which contributes to its effectiveness. Certain colors are also associated with specific emotions, and the proper application of these colors adds to the effectiveness of a presentation by creating an immersive experience for an audience.
Images/videos
Large images relevant to the presentation attract an audience’s attention which in turn can clarify the topics within the presentation. Using pictures sparingly helps support other presentation elements (e.g. text). Short videos are used to help the presenter reinforce their message to the audience. With the additional reinforcement that images and videos offer, the effectiveness of a presentation may be improved.
Assessing presentations
There is no comprehensive list of criteria common among research studies or educational institutions for a typical rubric used to assess presentations. Nevertheless, De Grez et al., in consultation with experienced higher education teachers, developed a rubric composed of nine evaluative criteria, of which five dealt with one’s manner of delivery (interaction with audience, enthusiasm, eye contact, vocal delivery, and body language), three were content related (structure, quality of introduction, and conclusion), and one evaluated general professionalism.
See also
Audience response
Audiovisual education
Showcase Presentations
Slide show
Wireless clicker
References
External links
Definition of presentations
Thefreedictionary.com
Dictionary.com
Merriam-webster.com
Content (types, audience, visual)
Daria Price Bowman. (1998). Presentations. Madison WI: F+W Publications Inc.
Public speaking | Presentation | [
"Technology"
] | 944 | [
"Multimedia",
"Presentation"
] |
626,631 | https://en.wikipedia.org/wiki/Extensible%20Application%20Markup%20Language | Extensible Application Markup Language (XAML ) is a declarative XML-based language developed by Microsoft for initializing structured values and objects. It is available under Microsoft's Open Specification Promise.
XAML is used extensively in Windows Presentation Foundation (WPF), Silverlight, Workflow Foundation (WF), Windows UI Library (WinUI), Universal Windows Platform (UWP), and .NET Multi-platform App UI (.NET MAUI). In WPF and UWP, XAML is a user interface markup language to define UI elements, data binding, and events. In WF, however, XAML defines workflows.
XAML elements map directly to Common Language Runtime (CLR) object instances, while XAML attributes map to CLR properties and events on those objects.
Anything that is created or implemented in XAML can be expressed using a more traditional .NET language, such as C# or Visual Basic .NET. However, a key aspect of the technology is the reduced complexity needed for tools to process XAML, because it is based on XML.
Technology
XAML originally stood for Extensible Avalon Markup Language, Avalon being the code-name for Windows Presentation Foundation (WPF). Before the end of .NET Framework 3.0 development, however, Microsoft adopted XAML for Workflow Foundation (WF).
In WPF, XAML describes visual user interfaces. WPF allows for the definition of both 2D and 3D objects, rotations, animations, and a variety of other effects and features. A XAML file can be compiled into a Binary Application Markup Language (BAML) file, which may be inserted as a resource into a .NET Framework assembly. At run-time, the framework engine extracts the BAML file from assembly resources, parses it, and creates a corresponding WPF visual tree or workflow.
In WF contexts, XAML describes potentially long-running declarative logic, such as those created by process modeling tools and rules systems. The serialization format for workflows was previously called XOML, to differentiate it from UI markup use of XAML, but now they are no longer distinguished. However, the file extension for files containing the workflow markup is still ".xoml".
XAML uses a specific way to define look and feel called Templates; differing from Cascading Style Sheet syntax, it is closer to XBL.
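As a minimal illustrative sketch (not taken from this article; the resource key, corner radius, and padding values are invented for illustration), a WPF Style containing a ControlTemplate placed in a window's resources might look like the following. A button would then opt in with Style="{StaticResource RoundedButtonStyle}".
<Window.Resources>
    <!-- Hypothetical style: replaces the Button's default visual tree with a rounded border -->
    <Style x:Key="RoundedButtonStyle" TargetType="Button">
        <Setter Property="Template">
            <Setter.Value>
                <ControlTemplate TargetType="Button">
                    <Border Background="{TemplateBinding Background}" CornerRadius="8" Padding="8,4">
                        <!-- ContentPresenter displays whatever Content the Button was given -->
                        <ContentPresenter HorizontalAlignment="Center" VerticalAlignment="Center" />
                    </Border>
                </ControlTemplate>
            </Setter.Value>
        </Setter>
    </Style>
</Window.Resources>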
To create XAML files, one could use Microsoft Expression Blend, Microsoft Visual Studio, the hostable WF visual designer, or XAMLPad.
Examples
This Windows Presentation Foundation example shows the text "Hello, world!" in the top-level XAML container called Canvas.
<Canvas xmlns="http://schemas.microsoft.com/client/2010"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
<TextBlock>Hello, world!</TextBlock>
</Canvas>
The schema (the xmlns declaration) may have to be changed to work on your computer.
Using a schema that Microsoft recommends, the example can also be
<Canvas xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
<TextBlock>Hello, world!</TextBlock>
</Canvas>
A crucial part of using XAML to its full potential is making appropriate use of data binding, as well as being comfortable with creating your own custom user elements as required for your specific needs. Binding between elements can be done as follows:
<TextBox x:Name="txtInput" />
<TextBlock Text="{Binding ElementName=txtInput,Path=Text}" />
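For comparison, here is a minimal sketch of binding against a view model rather than another element. The class name DemoApp.MainWindow and the property UserName are hypothetical, and the sketch assumes an object exposing a UserName property (ideally implementing INotifyPropertyChanged) has been assigned to the window's DataContext in code-behind.
<Window x:Class="DemoApp.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Binding sketch" Width="320" Height="140">
    <StackPanel Margin="10">
        <!-- Edits are pushed to the UserName property on every keystroke -->
        <TextBox Text="{Binding UserName, UpdateSourceTrigger=PropertyChanged}" />
        <!-- Displays the same property, so it updates as the TextBox above is edited -->
        <TextBlock Text="{Binding UserName}" />
    </StackPanel>
</Window>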
Differences between versions of XAML
There are three main Microsoft implementations of XAML:
Windows Presentation Foundation (WPF), first available with .NET Framework 3.0
Silverlight 3 and 4, first available for Internet Explorer 6 and now deprecated
Windows UI Library (formerly UWP XAML and WinRT XAML), first shipped with Windows 8 and Windows Server 2012, but now available as a part of the Windows App SDK
These versions have some differences in the parsing behavior. Additionally, the Silverlight 4 XAML parser is not 100% backward-compatible with Silverlight 3 files. Silverlight 3 XAML files may be rejected or parsed differently by the Silverlight 4 parser.
XAML Applications in Web Browsers
Historically, XAML based applications could be run in some web browsers, such as Internet Explorer and Firefox. This could be achieved through XBAP files created from WPF applications, or via the Silverlight browser plugin. However, both these methods are now unsupported on all major browsers due to their reliance on the discontinued NPAPI browser plugin interface.
Lock-in Concerns
In 2007, the European Committee for Interoperable Systems (ECIS) – a coalition of mostly American software companies – accused Microsoft of attempting to hijack HTML and replace it with XAML, thus creating vendor lock-in. Jeremy Reimer, writing for Ars Technica, described this comment as "the most egregious error" and added that XAML is unlikely to ever replace HTML.
See also
List of user interface markup languages
Comparison of user interface markup languages:
EMML
FXML
MXML
XPS
XUL
ZUML
Interface Builder
Layout manager
References
External links
XAML Language Reference
XAML for UWP: Overview
XAML for WPF: Overview
System.Windows.Markup Namespace
System.Xaml Namespace
.NET terminology
Declarative markup languages
Declarative programming languages
Markup languages
Microsoft application programming interfaces
Microsoft Windows multimedia technology
User interface markup languages
Vector graphics markup languages
XML-based standards | Extensible Application Markup Language | [
"Technology"
] | 1,248 | [
"Computer standards",
"XML-based standards"
] |
626,702 | https://en.wikipedia.org/wiki/Chamber%20pot | A chamber pot is a portable toilet, meant for nocturnal use in the bedroom. It was common in many cultures before the advent of indoor plumbing and flushing toilets.
Names and etymology
"Chamber" is an older term for bedroom. The chamber pot is also known as a , a jerry, a guzunder, a po (possibly from ), a potty pot, a potty, a thunder pot or a thunder mug. It was also known as a chamber utensil or bedroom ware.
History
Chamber pots were used in ancient Greece at least since the 6th century BC and were known under different names: (amis), (ouranē) and (ourētris, from - ouron, "urine"), / (skōramis), (chernibion).
The introduction of indoor flush toilets started to displace chamber pots in the 19th century, but they remained common until the mid-20th century. The alternative to using the chamber pot was a trip to the outhouse.
In China, the chamber pot (便壶, biàn hú) was common. A wealthy salt merchant in the city of Yangzhou became the symbol of conspicuous excess when he commissioned a chamber pot made of gold which was so tall that he had to climb a ladder to use it.
Modern use
Chamber pots continue in use in areas lacking indoor plumbing.
In the Philippines, chamber pots are used as urinals and are known as arinola in most Philippine languages, such as Cebuano and Tagalog.
In Korea, chamber pots are referred to as yogang (요강). They were used by people who did not have indoor plumbing to avoid the cold elements during the winter months.
Children's potties
The term "potty" is usually used to refer to the small, toilet-shaped devices made especially for children training to use the toilet, also called potty training, which are similar to chamber pots. These "potties" are generally a large plastic bowl with an ergonomically designed back and front to protect against splashes. They may have a built-in handle or grasp at the back to allow easy emptying and a non-slip bottom to prevent the child from sliding while in use. Some are given bright colors, and others may feature gentle or unoffensive drawings or cartoon characters. In many cases they are used since it is difficult for children to maneuver themselves up onto the normal toilet; in addition the larger opening in the regular toilet is much too wide for a child to sit over comfortably and can be intimidating when they first start learning. The size of a potty chair means they can be packed away in a bag for days out or when camping with young children.
Shapes and related items
A chamber pot might be disguised in a sort of chair (a close stool). It might be stored in a cabinet with doors to hide it; this sort of nightstand was known as a commode, hence the latter word came to mean "toilet" as well. For homes without these items of furniture, the chamber pot was stored under the bed.
The modern commode toilet and bedpan, used by bedbound or disabled persons, are variants of the chamber pot.
A related item was the bourdalou or bourdaloue, a small handheld oblong ceramic pot used in 17th- and 18th-century France to allow women to urinate conveniently. This item, similar in shape to a deep gravy boat, could be held between the legs and urinated into while standing or crouching, with little risk of soiling their clothing. At the time, women did not customarily wear two-legged underwear as they do today.
Cultural references
"The Crabfish" is a 17th-century folk song about what is most likely a common lobster, stored in a chamber pot by an unwise fisherman. The moral of the song is that one should look into a chamberpot before using it.
Philippine mythology recounts that giving newlyweds a chamber pot assures them of prosperity. President Elpidio Quirino, as part of a smear campaign against him, was falsely rumoured to possess a golden arinola.
In his satire Utopia, Thomas More had chamberpots made out of gold.
See also
History of water supply and sanitation
References
Toilets | Chamber pot | [
"Biology"
] | 877 | [
"Excretion",
"Toilets"
] |
626,709 | https://en.wikipedia.org/wiki/Hydroiodic%20acid | Hydroiodic acid (or hydriodic acid) is a colorless liquid. It is an aqueous solution of hydrogen iodide with the chemical formula . It is a strong acid, in which hydrogen iodide is ionized completely in an aqueous solution. Concentrated aqueous solutions of hydrogen iodide are usually 48% to 57% HI by mass.
Preparation
Reactions
Hydroiodic acid reacts with oxygen in air to give iodine:

4 HI + O₂ → 2 H₂O + 2 I₂
Like hydrogen halides, hydroiodic acid adds to alkenes to give alkyl iodides. It can also be used as a reducing agent, for example in the reduction of aromatic nitro compounds to anilines.
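As an illustrative, textbook-style sketch of these reactions (not reproduced from this article, and written for a generic substrate): addition to propene follows Markovnikov's rule, CH₃CH=CH₂ + HI → CH₃CHI–CH₃, while six equivalents of HI supply the six electrons needed to reduce an aromatic nitro group:

ArNO₂ + 6 HI → ArNH₂ + 2 H₂O + 3 I₂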
Cativa process
The Cativa process is a major end use of hydroiodic acid, which serves as a co-catalyst for the production of acetic acid by the carbonylation of methanol.
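A sketch of the overall stoichiometry (standard textbook chemistry; the details of the catalytic cycle are not given in this article): hydroiodic acid converts methanol to methyl iodide as an intermediate step, and the net carbonylation produces acetic acid.

CH₃OH + HI → CH₃I + H₂O
CH₃OH + CO → CH₃COOH (overall)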
Illicit uses
Hydroiodic acid is listed as a U.S. Federal DEA List I Chemical, owing to its use as a reducing agent related to the production of methamphetamine from ephedrine or pseudoephedrine (recovered from nasal decongestant pills).
References
External links
International Chemical Safety Card 1326
European Chemicals Bureau
Viscosities of Aqueous Hydrochloric Acid Solutions, and Densities and Viscosities of Aqueous Hydroiodic Acid Solutions
Iodides
Acids
Nonmetal halides
Reducing agents
"Chemistry"
] | 316 | [
"Acids",
"Redox",
"Reducing agents"
] |
626,814 | https://en.wikipedia.org/wiki/TV%20tuner%20card | A TV tuner card is a kind of television tuner that allows television signals to be received by a computer. Most TV tuners also function as video capture cards, allowing them to record television programs onto a hard disk much like the digital video recorder (DVR) does.
The interfaces for TV tuner cards are most commonly either a PCI bus expansion card or, for many modern cards, the newer PCI Express (PCIe) bus, but PCMCIA, ExpressCard, or USB devices also exist. In addition, some video cards double as TV tuners, notably the ATI All-In-Wonder series. The card contains a tuner and an analog-to-digital converter (collectively known as the analog front end) along with demodulation and interface logic. Some lower-end cards lack an onboard processor and, like a Winmodem, rely on the system's CPU for demodulation.
Types
There are many types of tuner cards.
Analog tuners
Analog television cards output a raw video stream, suitable for real-time viewing but ideally requiring some sort of video compression if it is to be recorded.
Some cards also have analog input (composite video or S-Video) and many also provide a radio tuner.
An early example was the Aapps Corp. MicroTV for Apple Macintosh II, which debuted in 1989.
More-advanced TV tuners encode the signal to Motion JPEG or MPEG, relieving the main CPU of this load.
Hybrid tuners
A hybrid tuner has one tuner that can be configured to act as an analog tuner or a digital tuner. Switching between the systems is fairly easy, but cannot be done immediately. The card operates as a digital tuner or an analog tuner until reconfigured.
Combo tuners
This is similar to a hybrid tuner, except there are two separate tuners on the card. One can watch analog while recording digital, or vice versa. The card operates as an analog tuner and a digital tuner simultaneously. The advantages over two separate cards are cost and utilization of expansion slots in the computer. As many regions around the world convert from analog to digital broadcasts, these tuners are gaining popularity.
Like the analog cards, the Hybrid and Combo tuners can have specialized chips on the tuner card to perform the encoding, or leave this task to the CPU. The tuner cards with this 'hardware encoding' are generally thought of as being higher quality. Small USB tuner sticks have become more popular in 2006 and 2007 and are expected to increase in popularity. These small tuners generally do not have hardware encoding due to size and heat constraints.
While most TV tuners are limited to the radio frequencies and video formats used in the country of sale, many TV tuners used in computers use DSP, so a firmware upgrade is often all that is necessary to change the supported video format. Many newer TV tuners have flash memory big enough to hold the firmware sets for decoding several different video formats, making it possible to use the tuner in many countries without having to flash the firmware. However, while it is generally possible to flash a card from one analog format to another due to the similarities, it is generally not possible to flash a card from one digital format to another due to differences in the decoding logic required.
Radio tuners
Many TV tuners can function as FM radios; this is because there are similarities between broadcast television and FM radio. The FM radio spectrum is close to (or even inside) that used by VHF terrestrial TV broadcasts. And many broadcast television systems around the world use FM audio. So listening to an FM radio station is simply a case of configuring existing hardware.
Mobile TV adapter
External TV tuner card attachments are available for mobile phones and also smartphones, for watching mobile TV, via TV stations on 1seg in Japan, Latin America and the Philippines. There was also a "converter" for watching DVB-H in Europe and elsewhere via Wi-Fi streaming video (PacketVideo).
Video capture
Video capture cards are a class of video capture devices designed to plug directly into expansion slots in personal computers and servers. Models from many manufacturers are available; all comply with one of the popular host bus standards including PCI, newer PCI Express (PCIe) or AGP bus interfaces.
These cards typically include one or more software drivers to expose the cards' features, via various operating systems, to software applications that further process the video for specific purposes. As a class, the cards are used to capture baseband analog composite video, S-Video, and, in models equipped with tuners, RF modulated video. Some specialized cards support digital video via digital video delivery standards including serial digital interface (SDI) and, more recently, the emerging HDMI standard. These models often support both standard definition (SD) and high definition (HD) variants.
While most PCI and PCI-Express capture devices are dedicated to that purpose, AGP capture devices are usually included with the graphics adapter on the board as an all-in-one package. Unlike video editing cards, these cards tend not to have dedicated hardware for processing video beyond the analog-to-digital conversion. Most, but not all, video capture cards also support one or more channels of audio. New technologies allow PCI-Express and HD-SDI to be implemented on video capture cards at lower costs than before.
An early example is the Mass Microsystems Colorspace FX card from 1989.
Applications
There are many applications for video capture cards, including converting a live analog source into some type of analog or digital media, (such as a VHS tape to a DVD), archiving, video editing, scheduled recording (such as a DVR), television tuning, or video surveillance. The cards may have significantly different designs to optimally support each of these functions. Capture cards can be used for recording a video game longplay (LP) so gamers can make walkthrough gameplay videos.
One of the most popular applications for video capture cards is to capture video and audio for live Internet video streaming. The live stream can also be simultaneously archived and formatted for video on demand. The capture cards used for this purpose are typically purchased, installed, and configured in host PC systems by hobbyists or systems integrators. Some care is required to select suitable host systems for video encoding, particularly HD applications which are more affected by CPU performance, number of CPU cores, and certain motherboard characteristics that heavily influence capture performance.
See also
Comparison of PVR software packages
Digital video recorder
Frame grabber
TV gateway
References
External links
"PC Project: Choosing a TV tuner card to record digital TV".pcauthority.com.au, Retrieved 9 January 2011
Computing input devices
Set-top box
Television technology | TV tuner card | [
"Technology"
] | 1,397 | [
"Information and communications technology",
"Television technology"
] |
626,855 | https://en.wikipedia.org/wiki/Webcast | A webcast is a media presentation distributed over the Internet using streaming media technology to distribute a single content source to many simultaneous listeners/viewers. A webcast may either be distributed live or on demand. Essentially, webcasting is "broadcasting" over the Internet.
The largest "webcasters" include existing radio and TV stations, who "simulcast" their output through online TV or online radio streaming, as well as a multitude of Internet-only "stations". Webcasting usually consists of providing non-interactive linear streams or events. Rights and licensing bodies offer specific "webcasting licenses" to those wishing to carry out Internet broadcasting using copyrighted material.
Overview
Webcasting is used extensively in the commercial sector for investor relations presentations (such as annual general meetings), in e-learning (to transmit seminars), and for related communications activities. However, webcasting does not bear much, if any, relationship to web conferencing, which is designed for many-to-many interaction.
The ability to webcast using cheap/accessible technology has allowed independent media to flourish. There are many notable independent shows that broadcast regularly online. Often produced by average citizens in their homes they cover many interests and topics. Webcasts relating to computers, technology, and news are particularly popular and many new shows are added regularly.
Webcasting differs from podcasting in that webcasting refers to live streaming while podcasting simply refers to media files placed on the Internet.
The term "webcast" had previously been used to describe the distribution of Web or Internet content using conventional broadcast technologies such as those intended for digital video (Digital Video Broadcasting) and audio (Digital Audio Broadcasting), and in some cases even leveraging analogue broadcasting techniques traditionally used by Teletext services to deliver a limited "Best of the Web" selection of content to audiences. Overnight broadcasts of data via analogue television signals were claimed by WebTV representatives to be able to offer "a fresh gigabyte of data every day... while you sleep". Typically, webcasting referred to a form of datacasting involving higher bandwidth broadcast technologies delivering Web content, multimedia files in particular, and with any interactivity supported by lower bandwidth return channels such as dial-up Internet access over the public telephone network or communication over mobile telephone networks. Such return channels conveyed each user's requests for the delivery of specific content over the broadcast medium. Eventually, DVB satellite operators were to offer a higher bandwidth return channel using DVB-RCS, raising the prospect of "point-to-point connections with users' satellite dishes". Webcasting had been regarded as a way of providing higher bandwidth Internet access to home computer users as well as enabling television-based Internet access, driving the development of smart television products.
History
The earliest graphically oriented web broadcasts were not streaming video, but were in fact still frames which were photographed with a web camera every few minutes while they were being broadcast live over the Internet. One of the earliest instances of sequential live image broadcasting was in 1991 when a camera was set up next to the Trojan Room in the computer laboratory of the University of Cambridge. It provided a live picture every few minutes of the office coffee pot to all desktop computers on that office's network. A couple of years later its broadcasts went to the Internet, became known as the Trojan Room Coffee Pot webcam, and gained international notoriety as a feature of the fledgling World Wide Web.
Later, in 1996, an American college student and conceptual artist, Jennifer Ringley, set up a web camera similar to the Trojan Room coffee pot webcam in her dorm room. That webcam photographed her every few minutes while it broadcast those images live over the Internet on a site called JenniCam. Ringley wanted to portray all aspects of her lifestyle, and the camera captured her doing almost everything – brushing her teeth, doing her laundry, and even having sex with her boyfriend. Her website generated millions of hits on the Internet, became a pay site in 1998, and spawned hundreds of female imitators who would then use streaming video to create a new billion-dollar industry called camming and brand themselves as camgirls or webcam models.
One of the earliest webcast equivalent of an online concert and one of the earliest examples of webcasting itself was by Apple Computer's Webcasting Group in partnership with the entrepreneurs Michael Dorf and Andrew Rasiej. Together with David B. Pakman from Apple, they launched the Macintosh New York Music Festival from July 17–22, 1995. This event audio webcast concerts from more than 15 clubs in New York City. Apple later webcast a concert by Metallica on June 10, 1996, live from Slim's in San Francisco.
In 1995, Benford E. Standley produced one of the first audio/video webcasts in history.
On October 31, 1996, UK rock band Caduseus broadcast their one-hour concert from 11pm to 12midnight (UT) at Celtica in Machynlleth, Wales, UK – the first live streamed audio and simultaneous live streamed video multicast – around the globe to more than twenty direct "mirrors" in more than twenty countries.
In September 1997, Nebraska Public Television started webcasting Big Red Wrap Up from Lincoln, Nebraska which combined highlights from every Cornhusker football game, coverage of the coaches' weekly press conferences, analysis with Nebraska sportswriters, appearances by special guests and questions and answers with viewers.
On August 8, 1997, the American jam band Phish webcast one of their concerts for the first time.
On October 22, 1998, the first Billy Graham Crusade was broadcast live to a worldwide audience from the Raymond James Stadium in Tampa Florida courtesy of Dale Ficken and the WebcastCenter in Pennsylvania. The live signal was broadcast via satellite to PA, then encoded and streamed via the BGEA website.
On February 6, 1999, a 21-minute Victoria's Secret fashion show featuring supermodel Tyra Banks aired exclusively on Broadcast.com. The webcast was promoted by a 30-second television spot during Super Bowl XXXIII and drew an estimated 1.5 million viewers. Broadcast.com servers were reportedly overwhelmed by the spike in traffic, locking out many potential viewers.
Virtually all major broadcasters now have a webcast of their output, from the BBC to CNN to Al Jazeera to UNTV in television to Radio China, Vatican Radio, United Nations Radio and the World Service in radio.
On November 4, 1994, Stef van der Ziel distributed the first live video images over the web from the Simplon venue in Groningen. On November 7, 1994, WXYC, the college radio station of the University of North Carolina at Chapel Hill, became the first radio station in the world to broadcast its signal over the internet.
Translated versions, including subtitling, are now possible using SMIL (Synchronized Multimedia Integration Language).
Wedcast
A webcast of a wedding may be called a wedcast; it allows family and friends of the couple to watch the wedding in real time on the Internet. It is sometimes used for weddings in exotic locations, where it would be expensive or difficult for people to travel to see the wedding in person.
On August 13, 1998, the first webcast wedding took place, between Alan K'necht and Carrie Silverman in Toronto Canada.
The first webcast teleconference wedding to date is believed to have occurred on December 31, 1998. Dale Ficken and Lorrie Scarangella wed on this date as they stood in a church in Pennsylvania, and were married by Jerry Falwell while he sat in his office in Lynchburg, Virginia.
Webcasting a funeral is also a service provided by some funeral homes. Although it has been around since at least 2005, cheaper broadband access, the financial strain of travel, and deployments to Iraq and Afghanistan have all led to increased use of the technology.
See also
Internet radio
Live streaming
Media clip
Streaming media
Video blog
Webisode
Webinar
References
Streaming television
Internet radio
Streaming
Broadcasting
Online services | Webcast | [
"Technology"
] | 1,640 | [
"Multimedia",
"Internet radio",
"Streaming",
"Streaming television"
] |
626,861 | https://en.wikipedia.org/wiki/Situs%20inversus | Situs inversus (also called situs transversus or oppositus) is a congenital condition in which the major visceral organs are reversed or mirrored from their normal positions. The normal arrangement of internal organs is known as situs solitus. Although cardiac problems are more common, many people with situs inversus have no medical symptoms or complications resulting from the condition, and until the advent of modern medicine, it was usually undiagnosed.
Situs inversus is found in about 0.01% of the population, or about 1 person in 10,000. In the most common situation, situs inversus totalis, it involves complete transposition (right to left reversal) of all of the viscera. The heart is not in its usual position in the left chest, but is on the right, a condition known as dextrocardia. Because the relationship between the organs is not changed, most people with situs inversus have no associated medical symptoms or complications.
An uncommon form of situs inversus is isolated levocardia, in which the position of the heart is not mirrored alongside the other organs. Isolated levocardia carries a risk of heart defects, and so patients with the condition may require surgery to correct them.
In rarer cases such as situs ambiguus or heterotaxy, situs cannot be determined. In these patients, the liver may be midline, the spleen absent or multiple, and the bowel malrotated. Often, structures are duplicated or absent altogether. This is more likely to cause medical problems than situs inversus totalis.
Signs and symptoms
In the absence of congenital heart defects, individuals with situs inversus are homeostatically normal, and can live standard healthy lives, without any complications related to their medical condition. There is a 5–10% prevalence of congenital heart disease in individuals with situs inversus totalis, most commonly transposition of the great vessels. The incidence of congenital heart disease is 95% in situs inversus with levocardia.
Many people with situs inversus totalis are unaware of their unusual anatomy until they seek medical attention for an unrelated condition, such as a rib fracture or a bout of appendicitis. The condition may also be discovered during the administration of certain medicines or during tests such as a barium meal or enema. The reversal of the organs may then lead to some confusion, as many signs and symptoms will be on the atypical side. For example, if an individual with situs inversus develops appendicitis, they will present to the physician with lower left abdominal pain, since that is where their appendix lies. Thus, in the event of a medical problem, the knowledge that the individual has situs inversus can expedite diagnosis. People with this rare condition should inform their doctors before an examination, so the doctor can redirect their search for heart sounds and other signs. Wearing a medical identification tag can help inform health care providers in the event the person is unable to communicate.
Situs inversus also complicates organ transplantation operations, as donor organs will more likely come from situs solitus (normal) donors. As hearts and livers are chiral, geometric problems arise when placing an organ into a cavity shaped in the mirror image. For example, a person who requires a heart transplant needs all their great vessels reattached to the donor heart. However, the orientation of these vessels in a person with situs inversus is reversed, necessitating steps so that the blood vessels join properly.
Cause
Situs inversus is generally an autosomal recessive genetic condition, although it can be X-linked or found in identical "mirror image" twins.
About 25% of individuals with situs inversus have an underlying condition known as primary ciliary dyskinesia (PCD). PCD is a dysfunction of the cilia that occurs during early embryonic development. Normally functioning cilia determine the position of the internal organs during early development, and so embryos with PCD have a 50% chance of developing situs inversus. If they do, they are said to have Kartagener syndrome, characterized by the triad of situs inversus, chronic sinusitis, and bronchiectasis. Cilia are also responsible for clearing mucus from the lung, and the dysfunction causes increased susceptibility to lung infections. Kartagener syndrome can also manifest with male infertility as functional cilia are required for proper sperm flagella function.
A marked increase in cases was observed several months after the lifting of the zero-COVID-19 policy in China, which coincided with a rise in SARS-CoV-2 infections. This rare clinical evidence suggests a possible link between infection during pregnancy and the development of situs inversus in the fetus, specifically during gestational weeks 4–6, the critical period for organ positioning.
Effect on anatomy
The condition affects all major structures within the thorax and abdomen. Generally, the organs are simply transposed through the sagittal plane. The heart is located on the right side of the thorax, the stomach and spleen on the right side of the abdomen and the liver and gall bladder on the left side. The heart's normal right atrium occurs on the left, and the left atrium is on the right. The lung anatomy is reversed and the left lung has three lobes while the right lung has two lobes. The intestines and other internal structures are also reversed from the normal, and the blood vessels, nerves, and lymphatics are also transposed.
If the heart is swapped to the right side of the thorax, it is known as "situs inversus with dextrocardia" or "situs inversus totalis". If the heart remains on the normal left side of the thorax, a much rarer condition (1 in 2,000,000 of the general population), it is known as "situs inversus with levocardia" or "situs inversus incompletus".
Situs inversus of the optic disc may occur unilaterally or bilaterally, associated with reduced binocularity and stereoacuity resembling monofixation syndrome. It is characterized by emergence of the retinal vessels in an anomalous direction (from the nasal rather than the temporal aspect) with dysversion (tilt) of the optic disc.
Situs inversus does not appear to significantly affect rates of handedness. Based on a 2004 study documenting situs inversus in individuals with primary ciliary dyskinesia, the proportion of right-handedness among those with situs inversus did not differ significantly from that of those with situs solitus. A more recent 2023 study also failed to find statistically significant differences in cognition, although left-handedness was significantly more common (26%, compared with 10.6% in the general population).
Diagnosis
Diagnosis of situs inversus can be made using imaging techniques such as x-ray, ultrasound, CT scan, and magnetic resonance imaging (MRI).
Any potential treatment would involve a complete and highly invasive surgical rearrangement of the internal viscera of the patient. Such a procedure is unnecessary, given that situs inversus rarely causes any additional symptoms. No treatment, medical or surgical, is prescribed for situs inversus individuals, with medical professionals instead treating any other symptoms the patient may have with awareness of the unique anatomy of the patient.
Occurrence
Situs inversus is rare, affecting 0.01% of the population.
History
Dextrocardia (the heart being located on the right side of the thorax) was seen and drawn by Leonardo da Vinci, and then recognised by Marco Aurelio Severino in 1643. Situs inversus was first described more than a century later by Matthew Baillie.
Etymology
The term situs inversus is a short form of the Latin phrase situs inversus viscerum, meaning "inverted position of the internal organs".
Notable cases
Notable individuals with documented cases of situs inversus include:
Enrique Iglesias, a Spanish singer, songwriter, actor and record producer.
Catherine O'Hara, Canadian-American actress, writer, and comedian.
Randy Foye, an American basketball player in the NBA. He has suffered no discernible complications, and the condition is not expected to affect his career as a professional athlete, since all of his organs are simply reversed and retain their normal relationship to one another.
Ginggaew Lorsoungnern, a Thai convict executed by firing squad. Her condition was discovered after she was shot in the left side of her chest and survived. After waking up in the morgue she was taken back and executed.
Tim Miller, director of the Ashtanga Yoga Center in Carlsbad, California.
Rose Marie Bentley, a Molalla, Oregon woman who unknowingly had the rare variant situs inversus with levocardia, and lived to 99 years without any complications. She donated her body to Oregon Health & Science University, where her condition was discovered during an anatomy class after students noticed the unusual arrangement of her heart's blood vessels, prompting further investigation of the cadaver.
See also
Asplenia
Chirality (mathematics)
Ectopia cordis''
Johann Friedrich Meckel, the Elder
Polysplenia
Notes
References
Further reading
This book, the 2003 Aventis Prize winner, includes a description of the history behind the discovery of this medical condition.
External links
Chest X-ray & CT scan Radiology Teaching File
Autosomal recessive disorders
Motor skills
Rare diseases
Congenital disorders | Situs inversus | [
"Biology"
] | 2,062 | [
"Behavior",
"Motor skills",
"Motor control"
] |
627,008 | https://en.wikipedia.org/wiki/Skyhook%20%28structure%29 | A skyhook is a proposed momentum exchange tether that aims to reduce the cost of placing payloads into low Earth orbit. A heavy orbiting station is connected to a cable which extends down towards the upper atmosphere. Payloads, which are much lighter than the station, are hooked to the end of the cable as it passes, and are then flung into orbit by rotation of the cable around the center of mass. The station can then be reboosted to its original altitude by electromagnetic propulsion, rocket propulsion, or by deorbiting another object with the same kinetic energy as transferred to the payload.
A skyhook differs from a geostationary orbit space elevator in that a skyhook would be much shorter and would not come in contact with the surface of the Earth. A skyhook would require a suborbital launch vehicle to reach its lower end, while a space elevator would not.
History
Different synchronous non-rotating orbiting skyhook concepts and versions have been proposed, starting with Isaacs in 1966, Artsutanov in 1967, Pearson and Colombo in 1975, Kalaghan in 1978, and Braginski in 1985. The versions with the best potential involve a much shorter tether in low Earth orbit, which rotates in its orbital plane and whose ends brush the upper Earth atmosphere, with the rotational motion cancelling the orbital motion at ground level. These "rotating" skyhook versions were proposed by Moravec in 1976, and Sarmont in 1994.
This interest in space tethers led to a Shuttle-based tether experiment: the TSS-1R mission, launched on 22 February 1996 aboard STS-75, which focused on characterizing basic space tether behavior and space plasma physics. The Italian satellite was deployed to a distance of 19.7 km from the Space Shuttle.
Sarmont theorized in 1994 that the skyhook could be cost competitive with what is realistically thought to be achievable using a space elevator.
In 2000 and 2001, Boeing Phantom Works, with a grant from NASA Institute for Advanced Concepts, performed a detailed study of the engineering and commercial feasibility of various skyhook designs. They studied in detail a specific variant of this concept, called "Hypersonic Airplane Space Tether Orbital Launch System" or HASTOL. This design called for a hypersonic ramjet or scramjet aircraft to intercept a rotating hook while flying at Mach 10.
While no skyhook has yet been built, there have been a number of flight experiments exploring various aspects of the space tether concept in general.
Rotating skyhook
By rotating the tether around the orbiting center of mass in a direction opposite to the orbital motion, the speed of the hook relative to the ground can be reduced. This reduces the required strength of the tether, and makes coupling easier.
The rotation of the tether can be made to exactly match the orbital speed (around 7–8 km/s). In this configuration, the hook would trace out a path similar to a cardioid. From the point of view of the ground, the hook would appear to descend almost vertically, come to a halt, and then ascend again. This configuration minimises aerodynamic drag, and thus allows the hook to descend deep into the atmosphere. However, according to the HASTOL study, a skyhook of this kind in Earth orbit would require a very large counterweight, on the order of 1000–2000 times the mass of the payload, and the tether would need to be mechanically reeled in after collecting each payload in order to maintain synchronization between the tether rotation and its orbit.
Phase I of Boeing's Hypersonic Airplane Space Tether Orbital Launch (HASTOL) study, published in 2000, proposed a 600 km-long tether, in an equatorial orbit at 610–700 km altitude, rotating with a tip speed of 3.5 km/s. This would give the tip a ground speed of 3.6 km/s (Mach 10), which would be matched by a hypersonic airplane carrying the payload module, with transfer at an altitude of 100 km. The tether would be made of existing commercially available materials: mostly Spectra 2000 (a kind of ultra-high-molecular-weight polyethylene), except for the outer 20 km which would be made of heat-resistant Zylon PBO. With a nominal payload mass of 14 tonnes, the Spectra/Zylon tether would weigh 1300 tonnes, or 90 times the mass of the payload. The authors stated:
The primary message we want to leave with the Reader is: "We don't need magic materials like 'Buckminster-Fuller-carbon-nanotubes' to make the space tether facility for a HASTOL system. Existing materials will do."
The second phase of the HASTOL study, published in 2001, proposed increasing the intercept airspeed to Mach 15–17, and increasing the intercept altitude to 150 km, which would reduce the necessary tether mass by a factor of three. The higher speed would be achieved by using a reusable rocket stage instead of a purely air-breathing aircraft. The study concluded that although there are no "fundamental technical show-stoppers", substantial improvement in technology would be needed. In particular, there was concern that a bare Spectra 2000 tether would be rapidly eroded by atomic oxygen; this component was given a technology readiness level of 2.
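The speeds quoted in the HASTOL study follow from straightforward orbital mechanics: the tether tip, rotating against the direction of orbital motion, subtracts its rotational speed from the facility's orbital velocity, and the Earth's own rotation further lowers the airspeed an aircraft near the equator would need to match. The short Python sketch below reproduces that arithmetic. It is only an illustrative back-of-the-envelope check, not part of the study itself: the 650 km altitude is simply the midpoint of the 610–700 km range quoted above, a circular equatorial orbit is assumed, and sea-level sound speed is used for the rough Mach figure.

```python
import math

# Standard values, not taken from the HASTOL study itself
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.378e6           # Earth's equatorial radius, m
V_EQUATOR = 465.0           # eastward ground speed from Earth's rotation, m/s

def circular_orbit_speed(altitude_m):
    """Speed of a circular orbit at the given altitude above the equator."""
    return math.sqrt(MU_EARTH / (R_EARTH + altitude_m))

# Phase I HASTOL figures quoted in the text
altitude = 650e3   # facility altitude, midpoint of the 610-700 km range
tip_speed = 3.5e3  # rotational speed of the tether tip, m/s

v_orbit = circular_orbit_speed(altitude)
v_tip_inertial = v_orbit - tip_speed          # tip rotates against the orbital motion
v_tip_airspeed = v_tip_inertial - V_EQUATOR   # prograde equatorial orbit assumed

print(f"orbital speed        : {v_orbit / 1000:.2f} km/s")
print(f"tip speed (inertial) : {v_tip_inertial / 1000:.2f} km/s")
print(f"tip airspeed         : {v_tip_airspeed / 1000:.2f} km/s "
      f"(about Mach {v_tip_airspeed / 340:.0f} at sea-level sound speed)")
```

With these assumptions the tip airspeed comes out at roughly 3.6 km/s, consistent with the Mach 10 intercept speed cited for the Phase I design.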
Similar concepts
The capture-ejector rim is a variation that consists of a rim- or ring-shaped structure. Like a rotating skyhook, it would rotate in a direction opposite to its orbital motion, allowing a spacecraft at suborbital velocity to attach to its lower portion and later be flung into orbit from its upper portion. It would be easier for a spacecraft to attach to the lower portion of a capture-ejector rim than to attach to the end of a skyhook (which would only point downwards for a brief period of time).
See also
Mass driver
Orbital ring
Railgun
Space elevator
Space tether missions
Momentum exchange tether
References
External links
by Kurzgesagt
Megastructures
Space elevator
Spacecraft propulsion
Vertical transport devices | Skyhook (structure) | [
"Astronomy",
"Technology"
] | 1,256 | [
"Exploratory engineering",
"Astronomical hypotheses",
"Transport systems",
"Space elevator",
"Vertical transport devices",
"Megastructures"
] |
627,040 | https://en.wikipedia.org/wiki/Electronic%20signature | An electronic signature, or e-signature, is data that is logically associated with other data and which is used by the signatory to sign the associated data. This type of signature has the same legal standing as a handwritten signature as long as it adheres to the requirements of the specific regulation under which it was created (e.g., eIDAS in the European Union, NIST-DSS in the USA or ZertES in Switzerland).
Electronic signatures are a legal concept distinct from digital signatures, a cryptographic mechanism often used to implement electronic signatures. While an electronic signature can be as simple as a name entered in an electronic document, digital signatures are increasingly used in e-commerce and in regulatory filings to implement electronic signatures in a cryptographically protected way. Standardization agencies like NIST or ETSI provide standards for their implementation (e.g., NIST-DSS, XAdES or PAdES). The concept itself is not new, with common law jurisdictions having recognized telegraph signatures as far back as the mid-19th century and faxed signatures since the 1980s.
Description
The USA's E-Sign Act, signed into law on June 30, 2000, by President Clinton, was described months later as "more like a seal than a signature."
An electronic signature is intended to provide a secure and accurate identification method for the signatory during a transaction.
Definitions of electronic signatures vary depending on the applicable jurisdiction. A common denominator in most countries is the level of an advanced electronic signature requiring that:
The signatory can be uniquely identified and linked to the signature
The signatory must have sole control of the private key that was used to create the electronic signature
The signature must be capable of identifying if its accompanying data has been tampered with after the message was signed
In the event that the accompanying data has been changed, the signature must be invalidated
Electronic signatures may be created with increasing levels of security, with each having its own set of requirements and means of creation on various levels that prove the validity of the signature. To provide an even stronger probative value than the above described advanced electronic signature, some countries like member states of the European Union or Switzerland introduced the qualified electronic signature. It is difficult to challenge the authorship of a statement signed with a qualified electronic signature - the statement is non-repudiable. Technically, a qualified electronic signature is implemented through an advanced electronic signature that utilizes a digital certificate, which has been encrypted through a security signature-creating device and which has been authenticated by a qualified trust service provider.
In contract law
Since well before the American Civil War began in 1861, Morse code was used to send messages electrically via the telegraph. Some of these messages were agreements to terms that were intended as enforceable contracts. An early acceptance of the enforceability of telegraphic messages as electronic signatures came from a New Hampshire Supreme Court case, Howley v. Whipple, in 1869.
In the 1980s, many companies and even some individuals began using fax machines for high-priority or time-sensitive delivery of documents. Although the original signature on the original document was on paper, the image of the signature and its transmission was electronic.
Courts in various jurisdictions have decided that legally enforceable electronic signatures can include agreements made by email, entering a personal identification number (PIN) into a bank ATM, signing a credit or debit slip with a digital pen pad device (an application of graphics tablet technology) at a point of sale, installing software with a clickwrap software license agreement on the package, and signing electronic documents online.
The first agreement signed electronically by two sovereign nations was a Joint Communiqué recognizing the growing importance of the promotion of electronic commerce, signed by the United States and Ireland in 1998.
Enforceability
In 1996 the United Nations published the UNCITRAL Model Law on Electronic Commerce. Article 7 of the UNCITRAL Model Law on Electronic Commerce was highly influential in the development of electronic signature laws around the world, including in the US. In 2001, UNCITRAL concluded work on a dedicated text, the UNCITRAL Model Law on Electronic Signatures, which has been adopted in some 30 jurisdictions. Article 9, paragraph 3 of the United Nations Convention on the Use of Electronic Communications in International Contracts (2005) establishes a mechanism for functional equivalence between electronic and handwritten signatures at the international level, as well as for their cross-border recognition. The latest UNCITRAL text dealing with electronic signatures is article 16 of the UNCITRAL Model Law on the Use and Cross-border Recognition of Identity Management and Trust Services (2022).
Canadian law (PIPEDA) attempts to clarify the situation by first defining a generic electronic signature as "a signature that consists of one or more letters, characters, numbers or other symbols in digital form incorporated in, attached to or associated with an electronic document," then defining a secure electronic signature as an electronic signature with specific properties. PIPEDA's secure electronic signature regulations refine the definition as being a digital signature applied and verified in a specific manner.
In the European Union, EU Regulation No 910/2014 on electronic identification and trust services for electronic transactions in the European internal market (eIDAS) sets the legal frame for electronic signatures. It repeals Directive 1999/93/EC. The current and applicable version of eIDAS was published by the European Parliament and the European Council on July 23, 2014. Following Article 25 (1) of the eIDAS regulation, an advanced electronic signature shall “not be denied legal effect and admissibility as evidence in legal proceedings". However it will reach a higher probative value when enhanced to the level of a qualified electronic signature. By requiring the use of a qualified electronic signature creation device and being based on a certificate that has been issued by a qualified trust service provider, the upgraded advanced signature then carries according to Article 25 (2) of the eIDAS Regulation the same legal value as a handwritten signature. However, this is only regulated in the European Union and similarly through ZertES in Switzerland. A qualified electronic signature is not defined in the United States.
The U.S. Code defines an electronic signature for the purpose of US law as "an electronic sound, symbol, or process, attached to or logically associated with a contract or other record and executed or adopted by a person with the intent to sign the record." It may be an electronic transmission of the document which contains the signature, as in the case of facsimile transmissions, or it may be encoded message, such as telegraphy using Morse code.
In the United States, the definition of what qualifies as an electronic signature is wide and is set out in the Uniform Electronic Transactions Act ("UETA") released by the National Conference of Commissioners on Uniform State Laws (NCCUSL) in 1999. It was influenced by ABA committee white papers and the uniform law promulgated by NCCUSL. Under UETA, the term means "an electronic sound, symbol, or process, attached to or logically associated with a record and executed or adopted by a person with the intent to sign the record." This definition and many other core concepts of UETA are echoed in the U.S. ESign Act of 2000. 48 US states, the District of Columbia, and the US Virgin Islands have enacted UETA. Only New York and Illinois have not enacted UETA, but each of those states has adopted its own electronic signatures statute. On June 11, 2020, the Washington State Office of the CIO adopted UETA.
In Australia, an electronic signature is recognised as "not necessarily the writing in of a name, but may be any mark which identifies it as the act of the party." Under the Electronic Transactions Acts in each Federal, State and Territory jurisdiction, an electronic signature may be considered enforceable if (a) there was a method used to identify the person and to indicate that person's intention in respect of the information communicated, and the method was either: (i) as reliable as appropriate for the purpose for which the electronic communication was generated or communicated, in light of all the circumstances, including the relevant agreement; or (ii) proven in fact to have fulfilled the functions above by itself or together with further evidence; and the person to whom the signature is required to be given consents to that method.
Legal definitions
Various laws have been passed internationally to facilitate commerce by using electronic records and signatures in interstate and foreign commerce. The intent is to ensure the validity and legal effect of contracts entered electronically. For instance,
PIPEDA (Canadian federal law)
(1) An electronic signature is "a signature that consists of one or more letters, characters, numbers or other symbols in digital form incorporated in, attached to or associated with an electronic document";
(2) A secure electronic signature is an electronic signature that
(a) is unique to the person making the signature;
(b) the technology or process used to make the signature is under the sole control of the person making the signature;
(c) the technology or process can be used to identify the person using the technology or process; and
(d) the electronic signature can be linked with an electronic document in such a way that it can be used to determine whether the electronic document has been changed since the electronic signature was incorporated in, attached to, or associated with the electronic document.
ESIGN Act Sec 106 (US federal law)
(2) ELECTRONIC- The term 'electronic' means relating to technology having electrical, digital, magnetic, wireless, optical, electromagnetic, or similar capabilities.
(4) ELECTRONIC RECORD- The term 'electronic record' means a contract or other record created, generated, sent, communicated, received, or stored by electronic means.
(5) ELECTRONIC SIGNATURE- The term 'electronic signature' means an electronic sound, symbol, or process, attached to or logically associated with a contract or other record and executed or adopted by a person with the intent to sign the record.
Regulation No 910/2014 on electronic identification and trust services for electronic transactions in the internal market Art 3 (European Union regulation)
(10) ‘electronic signature’ means data in electronic form which is attached to or logically associated with other data in electronic form and which is used by the signatory to sign;
(11) ‘advanced electronic signature’ means an electronic signature which meets the requirements set out in Article 26;
(12) ‘qualified electronic signature’ means an advanced electronic signature that is created by a qualified electronic signature creation device, and which is based on a qualified certificate for electronic signatures;
GPEA Sec 1710 (US federal law)
(1) ELECTRONIC SIGNATURE.—the term "electronic signature" means a method of signing an electronic message that—
(A) identifies and authenticates a particular person as the source of the electronic message; and
(B) indicates such person's approval of the information contained in the electronic message.
UETA Sec 2 (US state law)
(5) "Electronic" means relating to technology having electrical, digital, magnetic, wireless, optical, electromagnetic, or similar capabilities.
(6) "Electronic agent" means a computer program or an electronic or other automated means used independently to initiate an action or respond to electronic records or performances in whole or in part, without review or action by an individual.
(7) "Electronic record" means a record created, generated, sent, communicated, received, or stored by electronic means.
(8) "Electronic signature" means an electronic sound, symbol, or process attached to or logically associated with a record and executed or adopted by a person with the intent to sign the record.
Federal Reserve 12 CFR 202 (US federal regulation) refers to the ESIGN Act
Commodity Futures Trading Commission 17 CFR Part 1 Sec. 1.3 (US federal regulations)
(tt) Electronic signature means an electronic sound, symbol, or process attached to or logically associated with a record and executed or adopted by a person with the intent to sign the record.
Food and Drug Administration 21 CFR Sec. 11.3 (US federal regulations)
(5) Digital signature means an electronic signature based upon cryptographic methods of originator authentication, computed by using a set of rules and a set of parameters such that the signer's identity and the integrity of the data can be verified.
(7) Electronic signature means a computer data compilation of any symbol or series of symbols executed, adopted, or authorized by an individual to be the legally binding equivalent of the individual's handwritten signature.
United States Patent and Trademark Office 37 CFR Sec. 1.4 (federal regulation)
(d)(2) S-signature. An S-signature is a signature inserted between forwarding slash marks, but not a handwritten signature ... (i)The S-signature must consist only of letters, or Arabic numerals, or both, with appropriate spaces and commas, periods, apostrophes, or hyphens for punctuation... (e.g., /Dr. James T. Jones, Jr./)...
(iii) The signer's name must be:
(A) Presented in printed or typed form preferably immediately below or adjacent to the S-signature, and
(B) Reasonably specific enough so that the identity of the signer can be readily recognized.
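As an illustration of the S-signature format quoted above, the following Python sketch checks a string against a simplified reading of that rule: text enclosed in forward slashes and made up of letters, Arabic numerals, spaces, commas, periods, apostrophes, and hyphens. The pattern is an informal approximation written for this illustration only, not an authoritative or legally sufficient implementation of 37 CFR 1.4.

```python
import re

# Simplified, illustrative pattern: letters and/or Arabic numerals between
# forward slashes, with spaces, commas, periods, apostrophes, and hyphens
# permitted as punctuation. This is NOT a legal test of 37 CFR 1.4 compliance.
S_SIGNATURE_RE = re.compile(r"^/(?=.*[A-Za-z0-9])[A-Za-z0-9 ,.'\-]+/$")

def looks_like_s_signature(text):
    """Return True if the string superficially matches the S-signature format."""
    return bool(S_SIGNATURE_RE.match(text))

print(looks_like_s_signature("/Dr. James T. Jones, Jr./"))   # True - the example from the regulation
print(looks_like_s_signature("John Smith"))                   # False - no enclosing slashes
print(looks_like_s_signature("/@handle/"))                    # False - disallowed character
```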
Laws regarding their use
Australia - Electronic Transactions Act 1999 (which incorporates amendments from Electronic Transactions Amendment Act 2011), Section 10 - Signatures specifically relates to electronic signatures.
Azerbaijan - Electronic Signature and Electronic Document Law (2004)
Brazil - 2020 Electronic signature Law (Lei de assinaturas eletrônicas); Brazil's National Public Key Certificate Infrastructure Act (Infraestrutura de Chaves Públicas Brasileira - ICP-Brasil)
Bulgaria - Electronic Document and Electronic Certification Services Act
Canada - PIPEDA, its regulations, and the Canada Evidence Act.
China - Law of the People's Republic of China on Electronic Signature (effective April 1, 2005)
Costa Rica - Digital Signature Law 8454 (2005)
Croatia 2002, updated 2008
Czech Republic – currently directly applicable eIDAS and Zákona o službách vytvářejících důvěru pro elektronické transakce - 297/2016 Sb. (effective from 19 September 2016), formerly Zákon o elektronickém podpisu - 227/2000 Sb. (effective from 1 October 2000 until 19 September 2016 when it was derogated)
Ecuador – Ley de Comercio Electronico Firmas y Mensajes de Datos
European Union - eIDAS regulation on implementation within the EU is set out in the Digital Signatures and the Law.
India - Information Technology Act
Indonesia - Law No. 11/2008 on Information and Electronic Transactions
Iraq - Electronic Transactions and Electronic Signature Act No 78 in 2012
Ireland - Electronic Commerce Act 2000
Japan - Law Concerning Electronic Signatures and Certification Services, 2000
Kazakhstan - Law on Electronic Document and Electronic Signature (07.01.2003)
Lithuania - Law on Electronic Identification and Trust Services for Electronic Transactions
Mexico - E-Commerce Act [2000]
Malaysia - Digital Signature Act 1997 and Digital Signature Regulation 1998 (https://www.mcmc.gov.my/sectors/digital-signature)
Moldova - Privind semnătura electronică şi documentul electronic (http://lex.justice.md/md/353612/)
New Zealand - Contract and Commercial Law Act 2017
Paraguay - Ley 4017: De validez jurídica de la Firma Electrónica, la Firma Digital, los Mensajes de Datos y el Expediente Electrónico (12/23/2010) , Ley 4610: Que modifica y amplia la Ley 4017/10 (05/07/2012)
Peru - Ley Nº 27269. Ley de Firmas y Certificados Digitales (28MAY2000)
the Philippines - Electronic Commerce Act of 2000
Poland - Ustawa o podpisie elektronicznym (Dziennik Ustaw z 2001 r. Nr 130 poz. 1450)
Romania - LEGE nr. 214 din 5 iulie 2024 privind utilizarea semnăturii electronice, a mărcii temporale și prestarea serviciilor de încredere bazate pe acestea
Russian Federation - Federal Law of Russian Federation about Electronic Signature (06.04.2011)
Singapore - Electronic Transactions Act (2010) (background information, differences between ETA 1998 and ETA 2010)
Slovakia - Zákon č.215/2002 o elektronickom podpise
Slovenia - Slovene Electronic Commerce and Electronic Signature Act
South Africa - Electronic Communications and Transactions Act [No. 25 of 2002]
Spain - Ley 6/2020, de 11 de noviembre, reguladora de determinados aspectos de los servicios electrónicos de confianza
Switzerland - ZertES
Republika Srpska (entity of the Bosnia and Herzegovina) 2005
Thailand - Electronic Transactions Act B.E.2544 (2001)
Turkey - Electronic Signature Law
Ukraine - Electronic Signature Law, 2003
UK - s.7 Electronic Communications Act 2000
U.S. - Electronic Signatures in Global and National Commerce Act
U.S. - Uniform Electronic Transactions Act - adopted by 48 states
U.S. - Government Paperwork Elimination Act (GPEA)
U.S. - The Uniform Commercial Code (UCC)
Usage
In 2016, Aberdeen Strategy and Research reported that 73% of "best-in-class" and 34% of all other respondents surveyed made use of electronic signature processes in supply chain and procurement, delivering benefits in the speed and efficiency of key procurement activities. The percentages of their survey respondents using electronic signatures in accounts payable and accounts receivable processes were a little lower, 53% of "best-in-class" respondents in each case.
Technological implementations (underlying technology)
Digital signature
Digital signatures are cryptographic implementations of electronic signatures used as a proof of authenticity, data integrity and non-repudiation of communications conducted over the Internet. When implemented in compliance to digital signature standards, digital signing should offer end-to-end privacy with the signing process being user-friendly and secure. Digital signatures are generated and verified through standardized frameworks such as the Digital Signature Algorithm (DSA) by NIST or in compliance to the XAdES, PAdES or CAdES standards, specified by the ETSI.
There are typically three algorithms involved with the digital signature process:
Key generation – This algorithm provides a private key along with its corresponding public key.
Signing – This algorithm produces a signature upon receiving a private key and the message that is being signed.
Verification – This algorithm checks for the message's authenticity by verifying it along with the signature and public key.
The process of digital signing requires that its accompanying public key can then authenticate the signature generated by both the fixed message and private key. Using these cryptographic algorithms, the user's signature cannot be replicated without having access to their private key. A secure channel is not typically required. By applying asymmetric cryptography methods, the digital signature process prevents several common attacks where the attacker attempts to gain access through the following attack methods.
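To make the three algorithms above concrete, the sketch below performs key generation, signing, and verification with ECDSA over the NIST P-256 curve using the third-party Python cryptography package (assumed to be installed). It is a minimal illustration of the signing primitive only; real electronic-signature formats such as XAdES, PAdES, or CAdES wrap this primitive in certificates, timestamps, and additional metadata.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# 1. Key generation: a private key and its corresponding public key.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# 2. Signing: the private key and the message together produce a signature.
message = b"I agree to the terms of this contract."
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# 3. Verification: the public key checks the signature against the message.
def is_authentic(data, sig):
    try:
        public_key.verify(sig, data, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

print(is_authentic(message, signature))                         # True
print(is_authentic(b"I agree to different terms.", signature))  # False: altered data invalidates the signature
```

Changing even one byte of the signed message causes verification to fail, which is the tamper-evidence property that legal definitions of advanced electronic signatures rely on.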
The most relevant standards on digital signatures with respect to size of domestic markets are the Digital Signature Standard (DSS) by the National Institute of Standards and Technology (NIST) and the eIDAS Regulation enacted by the European Parliament. OpenPGP is a non-proprietary protocol for email encryption through public key cryptography. It is supported by PGP and GnuPG, and some of the S/MIME IETF standards and has evolved into the most popular email encryption standard in the world.
Biometric signature
An electronic signature may also refer to electronic forms of processing or verifying identity through the use of biometric "signatures" or biologically identifying qualities of an individual. Such signatures use the approach of attaching some biometric measurement to a document as evidence. Biometric signatures include fingerprints, hand geometry (finger lengths and palm size), iris patterns, voice characteristics, retinal patterns, or any other human body property. All of these are collected using electronic sensors of some kind.
Biometric measurements of this type are useless as passwords because they can't be changed if compromised. However, they might be serviceable, except that to date, they have been so easily deceived that they can carry little assurance that the person who purportedly signed a document was actually the person who did. For example, a replay of the electronic signal produced and submitted to the computer system responsible for 'affixing' a signature to a document can be collected via wiretapping techniques. Many commercially available fingerprint sensors have low resolution and can be deceived with inexpensive household items (for example, gummy bear candy gel). In the case of a user's face image, researchers in Vietnam successfully demonstrated in late 2017 how a specially crafted mask could beat Apple's Face ID on iPhone X.
See also
Authentication
Long-term validation
UNCITRAL Model Law on Electronic Signatures (MLES)
References
External links
E-Sign Final Report (2005, European Union)
Judicial Studies Board Digital Signature Guidelines
Dynamic signatures
Authentication methods
Biometrics
Cryptography
Computer law
Electronic identification
Signature
Records management technology | Electronic signature | [
"Mathematics",
"Technology",
"Engineering"
] | 4,347 | [
"Cybersecurity engineering",
"Cryptography",
"Applied mathematics",
"Computer law",
"Computing and society",
"Electronic identification"
] |
627,059 | https://en.wikipedia.org/wiki/Lothar%20Meyer | Julius Lothar Meyer (19 August 1830 – 11 April 1895) was a German chemist. He was one of the pioneers in developing the earliest versions of the periodic table of the chemical elements. The Russian chemist Dmitri Mendeleev (his chief rival) and he had both worked with Robert Bunsen. Meyer never used his first given name and was known throughout his life simply as Lothar Meyer.
Career
Meyer was born in Varel, Germany (then part of the Duchy of Oldenburg). He was the son of Friedrich August Meyer, a physician, and Anna Biermann. After attending the Altes Gymnasium in Oldenburg, he studied medicine at the University of Zurich in 1851. Two years later, he studied pathology at the University of Würzburg as a student of Rudolf Virchow. At Zurich, he had studied under Carl Ludwig, which had prompted him to devote his attention to physiological chemistry. After graduating as a Doctor of Medicine from Würzburg in 1854, he went to Heidelberg University, where Robert Bunsen held the chair of chemistry. In 1858, he received a Ph.D. in chemistry from the University of Breslau with a thesis on the effects of carbon monoxide on the blood. With this interest in the physiology of respiration, he had recognized that oxygen combines with the hemoglobin in blood.
Influenced by the mathematical teaching of Gustav Kirchhoff, he took up the study of mathematical physics at the University of Königsberg under Franz Ernst Neumann and in 1859, after having received his habilitation (certification for university teaching), became Privatdozent in physics and chemistry at the University of Breslau. In 1866, Meyer accepted a post at the Eberswalde Forestry Academy at Neustadt-Eberswalde but two years later was appointed to a professorship at the Karlsruhe Polytechnic.
In 1872, Meyer was the first to suggest that the six carbon atoms in the benzene ring (that had been proposed a few years earlier by August Kekulé) were interconnected by single bonds only, the fourth valence of each carbon atom being directed toward the interior of the ring.
During the Franco-Prussian War, the Polytechnic was used as a hospital and Meyer took an active role in the care of the wounded. In 1876, Meyer became Professor of Chemistry at the University of Tübingen, where he served until his death from a stroke on 11 April 1895 at the age of 64.
Periodic table
Meyer is best known for his part in the periodic classification of the elements. He noted, as John A. R. Newlands did in England, that if the elements were arranged in the order of their atomic weights, they fell into groups of similar chemical and physical properties repeated at periodic intervals. He also showed that if the atomic weights were plotted as abscissae and the atomic volumes as ordinates, the resulting curve displayed a series of maxima and minima, with the most electropositive elements appearing at the peaks of the curve in the order of their atomic weights.
His book, Die modernen Theorien der Chemie, which he began writing in Breslau in 1862 and published two years later, contained an early version of the periodic table. It contained 28 elements, grouping them for the first time into six families by their valence. Works on organizing the elements by atomic weight, until then had been stymied by the widespread use of equivalent weights for the elements, rather than atomic weights.
He published articles about classification table of the elements in horizontal form (1864) and vertical form (1870), in which the series of periods are properly ended by an element of the alkaline earth metal group.
Table of Meyer, 1864
Table of Meyer, 1870
In 1869, Dmitri Mendeleev published a periodic table of all elements known at that time (he later predicted several new elements to complete the table, and corrected some atomic weights). A few months later, Meyer published a paper that included a revised version of his 1864 table that now included virtually all of the known elements, which was similar to the table published by Mendeleev:
Meyer had developed his fuller periodic table independently, but he acknowledged Mendeleev's priority. Included in Meyer's paper was a line chart of atomic volumes as a function of atomic weights, showing graphically the periodicity of the elements. Like Mendeleev, he also included predictions of future elements, but unlike Mendeleev did not emphasize these predictions nor suggest details of the physical and chemical properties of the future elements.
In 1882, both Meyer and Mendeleev received the Davy Medal from the Royal Society in recognition of their work on the Periodic Law.
The mineral lotharmeyerite, , was discovered in 1983 and named in recognition of Meyer's work on the Periodic Law. The type locality is the Ojuela mine, Mapimí, Durango, Mexico. Four closely related minerals have been described since 1983: ferrilotharmeyerite (1992); cobaltlotharmeyerite (1997); nickellotharmeyerite (1999); and manganlotharmeyerite (2002).
Personal life
Meyer married Johanna Volkmann in 1866.
Tribute
On 19 August 2020, Google celebrated his 190th birthday with a Google Doodle.
See also
Dmitri Mendeleev
History of the periodic table
Notes
References
Harald Kluge and Ingrid Kaestner, Ein Wegbereiter der physikalischen Chemie im 19. Jahrhundert, Julius Lothar Meyer (1830–1895) (Aachen: Shaker-Verlag, 2014).
Otto Kraetz, "Lothar Meyer," Neue Deutsche Biographie, 17 (1994), 304–06.
External links
Periodic table according to Lothar Meyer (1870)
Video of a talk by Michael Gordin titled "Periodicity, Priority, Pedagogy: Mendeleev and Lothar Meyer"
The Internet Database of Periodic Tables. Chemogenesis web book.
1830 births
1895 deaths
Corresponding members of the Saint Petersburg Academy of Sciences
Academic staff of the Eberswalde University for Sustainable Development
19th-century German chemists
Academic staff of the Karlsruhe Institute of Technology
People from Oldenburg (state)
People from Varel
Academic staff of the University of Tübingen
University of Würzburg alumni
People involved with the periodic table | Lothar Meyer | [
"Chemistry"
] | 1,289 | [
"Periodic table",
"People involved with the periodic table"
] |
627,071 | https://en.wikipedia.org/wiki/Computer-aided%20software%20engineering | Computer-aided software engineering (CASE) is a domain of software tools used to design and implement applications. CASE tools are similar to and are partly inspired by computer-aided design (CAD) tools used for designing hardware products. CASE tools are intended to help develop high-quality, defect-free, and maintainable software. CASE software was often associated with methods for the development of information systems together with automated tools that could be used in the software development process.<ref>P. Loucopoulos and V. Karakostas (1995). System Requirements Engineerinuality software which will perform effectively.</ref>
History
The Information System Design and Optimization System (ISDOS) project, started in 1968 at the University of Michigan, initiated a great deal of interest in the whole concept of using computer systems to help analysts in the very difficult process of analysing requirements and developing systems. Several papers by Daniel Teichroew fired a whole generation of enthusiasts with the potential of automated systems development. His Problem Statement Language / Problem Statement Analyzer (PSL/PSA) tool was a CASE tool although it predated the term.
Another major thread emerged as a logical extension to the data dictionary of a database. By extending the range of metadata held, the attributes of an application could be held within a dictionary and used at runtime. This "active dictionary" became the precursor to the more modern model-driven engineering capability. However, the active dictionary did not provide a graphical representation of any of the metadata. It was the linking of the concept of a dictionary holding analysts' metadata, as derived from the use of an integrated set of techniques, together with the graphical representation of such data that gave rise to the earlier versions of CASE.
The next entrant into the market was Excelerator from Index Technology in Cambridge, Mass. While DesignAid ran on Convergent Technologies and later Burroughs Ngen networked microcomputers, Index launched Excelerator on the IBM PC/AT platform. While, at the time of launch, and for several years, the IBM platform did not support networking or a centralized database as did the Convergent Technologies or Burroughs machines, the allure of IBM was strong, and Excelerator came to prominence. Hot on the heels of Excelerator were a rash of offerings from companies such as Knowledgeware (James Martin, Fran Tarkenton and Don Addington), Texas Instrument's CA Gen and Andersen Consulting's FOUNDATION toolset (DESIGN/1, INSTALL/1, FCP).
CASE tools were at their peak in the early 1990s. According to the PC Magazine of January 1990, over 100 companies were offering nearly 200 different CASE tools. At the time IBM had proposed AD/Cycle, which was an alliance of software vendors centered on IBM's Software repository using IBM DB2 in mainframe and OS/2: "The application development tools can be from several sources: from IBM, from vendors, and from the customers themselves. IBM has entered into relationships with Bachman Information Systems, Index Technology Corporation, and Knowledgeware wherein selected products from these vendors will be marketed through an IBM complementary marketing program to provide offerings that will help to achieve complete life-cycle coverage."
With the decline of the mainframe, AD/Cycle and the Big CASE tools died off, opening the market for the mainstream CASE tools of today. Many of the leaders of the CASE market of the early 1990s ended up being purchased by Computer Associates, including IEW, IEF, ADW, Cayenne, and Learmonth & Burchett Management Systems (LBMS). The other trend that led to the evolution of CASE tools was the rise of object-oriented methods and tools. Most of the various tool vendors added some support for object-oriented methods and tools. In addition new products arose that were designed from the bottom up to support the object-oriented approach. Andersen developed its project Eagle as an alternative to Foundation. Several of the thought leaders in object-oriented development each developed their own methodology and CASE tool set: Jacobson, Rumbaugh, Booch, etc. Eventually, these diverse tool sets and methods were consolidated via standards led by the Object Management Group (OMG). The OMG's Unified Modelling Language (UML) is currently widely accepted as the industry standard for object-oriented modeling.
CASE software
Tools
CASE tools support specific tasks in the software development life-cycle. They can be divided into the following categories:
Business and analysis modeling: Graphical modeling tools. E.g., E/R modeling, object modeling, etc.
Development: Design and construction phases of the life-cycle. Debugging environments. E.g., IISE LKO.
Verification and validation: Analyze code and specifications for correctness, performance, etc.
Configuration management: Control the check-in and check-out of repository objects and files. E.g., SCCS, IISE.
Metrics and measurement: Analyze code for complexity, modularity (e.g., no "go to's"), performance, etc.
Project management: Manage project plans, task assignments, scheduling.
Another common way to distinguish CASE tools is the distinction between Upper CASE and Lower CASE. Upper CASE Tools support business and analysis modeling. They support traditional diagrammatic languages such as ER diagrams, Data flow diagram, Structure charts, Decision Trees, Decision tables, etc. Lower CASE Tools support development activities, such as physical design, debugging, construction, testing, component integration, maintenance, and reverse engineering. All other activities span the entire life-cycle and apply equally to upper and lower CASE.
Workbenches
Workbenches integrate two or more CASE tools and support specific software-process activities. Hence they achieve:
A homogeneous and consistent interface (presentation integration)
Seamless integration of tools and toolchains (control and data integration)
An example workbench is Microsoft's Visual Basic programming environment. It incorporates several development tools: a GUI builder, a smart code editor, debugger, etc. Most commercial CASE products tended to be such workbenches that seamlessly integrated two or more tools. Workbenches also can be classified in the same manner as tools; as focusing on Analysis, Development, Verification, etc. as well as being focused on the upper case, lower case, or processes such as configuration management that span the complete life-cycle.
Environments
An environment is a collection of CASE tools or workbenches that attempts to support the complete software process. This contrasts with tools that focus on one specific task or a specific part of the life-cycle. CASE environments are classified by Fuggetta as follows:
Toolkits: Loosely coupled collections of tools. These typically build on operating system workbenches such as the Unix Programmer's Workbench or the VMS VAX set. They typically perform integration via piping or some other basic mechanism to share data and pass control. The strength of easy integration is also one of the drawbacks. Simple passing of parameters via technologies such as shell scripting can't provide the kind of sophisticated integration that a common repository database can.
Fourth generation: These environments are also known as 4GL standing for fourth generation language environments due to the fact that the early environments were designed around specific languages such as Visual Basic. They were the first environments to provide deep integration of multiple tools. Typically these environments were focused on specific types of applications. For example, user-interface driven applications that did standard atomic transactions to a relational database. Examples are Informix 4GL, and Focus.
Language-centered: Environments based on a single, often object-oriented, language, such as the Symbolics Lisp Genera environment or VisualWorks Smalltalk from ParcPlace. In these environments all the operating system resources were objects in the object-oriented language. This provides powerful debugging and graphical opportunities, but the code developed is mostly limited to the specific language. For this reason, these environments were mostly a niche within CASE. Their use was mostly for prototyping and R&D projects. A common core idea for these environments was the model–view–controller user interface that facilitated keeping multiple presentations of the same design consistent with the underlying model (a minimal sketch of this pattern appears after this list). The MVC architecture was adopted by the other types of CASE environments as well as many of the applications that were built with them.
Integrated: These environments are an example of what most IT people tend to think of first when they think of CASE. Environments such as IBM's AD/Cycle, Andersen Consulting's FOUNDATION, the ICL CADES system, and DEC Cohesion. These environments attempt to cover the complete life-cycle from analysis to maintenance and provide an integrated database repository for storing all artifacts of the software process. The integrated software repository was the defining feature for these kinds of tools. They provided multiple different design models as well as support for code in heterogenous languages. One of the main goals for these types of environments was "round trip engineering": being able to make changes at the design level and have those automatically be reflected in the code and vice versa. These environments were also typically associated with a particular methodology for software development. For example, the FOUNDATION CASE suite from Andersen was closely tied to the Andersen Method/1 methodology.
Process-centered: This is the most ambitious type of integration. These environments attempt to not just formally specify the analysis and design objects of the software process but the actual process itself and to use that formal process to control and guide software projects. Examples are East, Enterprise II, Process Wise, Process Weaver, and Arcadia. These environments were by definition tied to some methodology since the software process itself is part of the environment and can control many aspects of tool invocation.
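The following Python sketch illustrates the model–view–controller idea mentioned in the language-centered entry above: two views observe a single model, so an update made through the controller is reflected in both presentations at once. The class and method names are hypothetical and chosen for illustration; this is a sketch of the pattern, not code from any of the CASE environments named in this article.

```python
class DesignModel:
    """Holds the design data and notifies registered views of every change."""
    def __init__(self):
        self._entities = []
        self._views = []

    def attach(self, view):
        self._views.append(view)

    def add_entity(self, name):
        self._entities.append(name)
        for view in self._views:
            view.refresh(self._entities)

class ListView:
    """One presentation of the model: a flat listing."""
    def refresh(self, entities):
        print("List view :", ", ".join(entities))

class CountView:
    """Another presentation of the same model: a summary count."""
    def refresh(self, entities):
        print("Count view:", len(entities), "entities")

class Controller:
    """Translates user actions into model updates."""
    def __init__(self, model):
        self._model = model

    def user_adds_entity(self, name):
        self._model.add_entity(name)

# Wire the pieces together: both views stay consistent with the one model.
model = DesignModel()
model.attach(ListView())
model.attach(CountView())
controller = Controller(model)
controller.user_adds_entity("Customer")
controller.user_adds_entity("Order")
```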
In practice, the distinction between workbenches and environments was flexible. Visual Basic for example was a programming workbench but was also considered a 4GL environment by many. The features that distinguished workbenches from environments were deep integration via a shared repository or common language and some kind of methodology (integrated and process-centered environments) or domain (4GL) specificity.
Major CASE risk factors
Some of the most significant risk factors for organizations adopting CASE technology include:
Inadequate standardization: Organizations usually have to tailor and adopt methodologies and tools to their specific requirements. Doing so may require significant effort to integrate both divergent technologies as well as divergent methods. For example, before the adoption of the UML standard the diagram conventions and methods for designing object-oriented models were vastly different among followers of Jacobsen, Booch, and Rumbaugh.
Unrealistic expectations: The proponents of CASE technology—especially vendors marketing expensive tool sets—often raise expectations that the new approach will be a silver bullet that solves all problems. In reality no technology can do that, and organizations that approach CASE with unrealistic expectations will inevitably be disappointed.
Inadequate training: As with any new technology, CASE requires time to train people in how to use the tools and to get up to speed with them. CASE projects can fail if practitioners are not given adequate time for training or if the first project attempted with the new technology is itself highly mission critical and fraught with risk.
Inadequate process control: CASE provides significant new capabilities to utilize new types of tools in innovative ways. Without the proper process guidance and controls these new capabilities can cause significant new problems as well.
See also
Data modeling
Domain-specific modeling
Method engineering
Model-driven architecture
Modeling language
Rapid application development
Automatic programming
Test automation
Build automation
References
Data management | Computer-aided software engineering | [
"Technology"
] | 2,334 | [
"Data management",
"Data"
] |
627,096 | https://en.wikipedia.org/wiki/Intel%20Developer%20Forum | The Intel Developer Forum (IDF) was a biannual gathering of technologists to discuss Intel products and products based on Intel products. The first IDF was held in 1997.
To emphasize the importance of China, the Spring 2007 IDF was held in Beijing instead of San Francisco, and San Francisco and Taipei shared the Fall IDF event in September and October, respectively. Three IDF shows were scheduled in 2008, with the date of IDF San Francisco notably moving to August rather than September. In previous years, events were held in major cities around the world such as San Francisco, Mumbai, Bangalore, Moscow, Cairo, São Paulo, Amsterdam, Munich and Tokyo.
On April 17, 2017, Intel announced that it would no longer be hosting IDF. As a result of this announcement, IDF17, which was scheduled for August in San Francisco, was canceled.
2007 events
April 17–18, 2007 – Beijing, China
September 18–20, 2007 – San Francisco, United States
October 15–16, 2007 – Taipei, Taiwan
2008 events
April 2–3, 2008 – Shanghai, China
August 19–21, 2008 – San Francisco, United States
October 20–21, 2008 – Taipei, Taiwan
2009 events
April 8–9, 2009 – Beijing, China
September 22–24, 2009 – San Francisco, United States
November 16–17, 2009 – Taipei, Taiwan
2010 events
April 13–14, 2010 – Beijing, China
September 13–15, 2010 – San Francisco, United States
2011 events
April 12–13, 2011 – Beijing, China
September 13–15, 2011 – San Francisco, United States
2012 events
April 11–12, 2012 – Beijing, China
May 15, 2012 – São Paulo, Brazil
September 11–13, 2012 – San Francisco, United States
2013 events
April 10–11, 2013 – Beijing, China
September 10–12, 2013 – San Francisco, United States
2014 events
March 8–10, 2014 – Shenzhen, China
September 9–11, 2014 – San Francisco, United States
2015 events
April 7–10, 2015 – Shenzhen, China
August 18–20, 2015 – San Francisco, United States
2016 events
April 13–14, 2016 – Shenzhen, China
August 16–18, 2016 – San Francisco, United States
2017 events
Intel originally announced that in 2017, no event would be hosted in China and that the San Francisco event would feature a new format.
On April 17, the event was canceled and the entire program was retired.
References
External links
Developer Forum
Computer conferences
Recurring events established in 1997
Recurring events disestablished in 2017 | Intel Developer Forum | [
"Technology"
] | 513 | [
"Computing stubs",
"Computer hardware stubs"
] |
627,159 | https://en.wikipedia.org/wiki/Nicholas%20Aylward%20Vigors | Nicholas Aylward Vigors (1785 – 26 October 1840) was an Irish zoologist and politician. He popularized the classification of birds on the basis of the quinarian system.
Early life
Vigors was born at Old Leighlin, County Carlow, in 1785. He was the first son of Capt. Nicholas Aylward Vigors, who served in the 29th (Worcestershire) Regiment, and his first wife, Catherine Vigors, daughter of Solomon Richards of Solsborough. He matriculated at Trinity College, Oxford, in November 1803, and was admitted at Lincoln's Inn in November 1806. Without completing his studies, he served in the army during the Peninsular War from 1809 to 1811 and was wounded in the Battle of Barossa on 5 March 1811. During this period he published "An inquiry into the nature and extent of poetick licence" in London in 1810. He then returned to Oxford to continue his studies and took his Bachelor of Arts in 1817 and Master of Arts in 1818. He practiced as a barrister and became a Doctor of Civil Law in 1832.
Zoology
Vigors was a co-founder of the Zoological Society of London in 1826, and its first secretary until 1833. In that year, he founded what became the Royal Entomological Society of London. He was a fellow of the Linnean Society and the Royal Society. He was the author of 40 papers, mostly on ornithology. He described 110 species of birds, enough to rank him among the top 30 bird authors historically. He provided the text for John Gould's A Century of Birds from the Himalaya Mountains (1830–32).
One bird that he described was "Sabine's snipe". This was treated as a common snipe by Barrett-Hamilton in 1895 and by Meinertzhagen in 1926 but was thought to be probably a Wilson's snipe in 1945. Vigors lent a skin for later editions of Thomas Bewick's History of British Birds.
Politics
Vigors succeeded to his father's estate in 1828. He was MP for the borough of Carlow from 1832 until 1835. He briefly represented County Carlow in 1835. Vigors had been elected in a by-election in June after the Conservative MPs originally returned at the 1835 United Kingdom general election were unseated on petition and a new writ issued. On 19 August 1835, Vigors and his running mate, in the two-member county constituency, were unseated on petition. The same two Conservatives who had previously been unseated were awarded the seats. On the death of one of them, Vigors won the subsequent by-election in 1837 and retained the seat until his own death.
References
Bibliography
Parliamentary Election Results in Ireland, 1801-1922, edited by B.M. Walker (Royal Irish Academy 1978)
External links
Art UK: Toucan by Vigors
1785 births
1840 deaths
Alumni of Trinity College, Oxford
British ornithologists
Irish ornithologists
British zoologists
Irish zoologists
Taxon authorities
Members of the Parliament of the United Kingdom for County Carlow constituencies (1801–1922)
Fellows of the Royal Society
Fellows of the Linnean Society of London
Secretaries of the Zoological Society of London
UK MPs 1832–1835
UK MPs 1835–1837
UK MPs 1837–1841
Grenadier Guards officers
British Army personnel of the Napoleonic Wars
Politicians from County Carlow
Irish Repeal Association MPs
Committee members of the Society for the Diffusion of Useful Knowledge
Scientists from County Carlow
Military personnel from County Carlow | Nicholas Aylward Vigors | [
"Biology"
] | 720 | [
"Taxon authorities",
"Taxonomy (biology)"
] |
627,206 | https://en.wikipedia.org/wiki/Transhuman%20Space | Transhuman Space (THS) is a role-playing game by David Pulver, published by Steve Jackson Games as part of the "Powered by GURPS" (Generic Universal Role-Playing System) line. Set in the year 2100, humanity has begun to colonize the Solar System. The pursuit of transhumanism is now in full swing, as more and more people reach fully posthuman states.
Transhuman Space was one of the first role-playing games to tackle postcyberpunk and transhumanist themes. In 2002, the Transhuman Space adventure "Orbital Decay" received an Origins Award nomination for Best Role-Playing Game Adventure. Transhuman Space won the 2003 Grog d'Or Award for Best Role-playing Game, Game Line or RPG Setting.
Setting
The game assumes that no cataclysm — natural or human-induced — swept Earth in the 21st century. Instead, constant developments in information technology, genetic engineering, nanotechnology and nuclear physics have generally improved the conditions of average human life. Plagues of the 20th century (like cancer or AIDS) have been suppressed, the ozone layer is being restored and Earth's ecosystems are recovering (although thermal emission by fusion power plants poses an environmental threat—albeit a much lesser one than previous sources of energy). Thanks to modern medicine humans live biblical timespans surrounded by various artificially intelligent helper applications and robots (cybershells), sensory experience broadcasts (future TV) and cyberspace telepresence. Thanks to cheap and clean fusion energy humanity has the power to fuel all these wonders, restore and transform its home planet and finally settle on other heavenly bodies.
Human genetic engineering has advanced to the point that anyone—single individuals, same-sex couples, groups of three or more—can reproduce. The embryos can be allowed to be developed naturally, or they can undergo three levels of tinkering:
1. Genefixing, which corrects defects;
2. Upgrades, which boost natural abilities (Ishtar Upgrades are slightly more attractive than usual, Metanoia Upgrades are more intelligent, etc.); and
3. Full transition to parahuman status (Nyx Parahumans need only a few hours of sleep per week, Aquamorphs can live underwater, etc.).
Another type of human genetic engineering, far more controversial, is the creation of bioroids, fully sentient slave races.
People can "upload" by recording the simulation of their brains on computer disks. The emulated individual then becomes a ghost, an infomorph very easily confused with "sapient artificial intelligence". However, this technology has several problems as the solely available "brainpeeling" technique is fatal to the original biological lifeform being simulated, has a significant failure rate and the philosophical questions regarding personal identity remain equivocal. Any infomorph, regardless of its origin, can be plugged into a "cybershell" (robotic or cybernetic body), or a biological body, or "bioshell". Or, the individual can illegally make multiple "xoxes", or copies of themselves, and scatter them throughout the system, exponentially increasing the odds that at least one of them will live for centuries more, if not forever.
This is also a time of space colonization. First, humanity (specifically China, followed by the United States and others) colonized Mars in a fashion resembling that outlined in the Mars Direct project. The Moon, Lagrangian points, inner planets and asteroids soon followed. In the late 21st century even some of Saturn's moons have been settled as a base for that planet's Helium-3 scooping operations.
Transhuman Space's setting is neither utopia nor dystopia, however: several problems have arisen from these otherwise beneficial developments. The generation gap has become a chasm as lifespans increase. No longer do the elite fear death, and no longer can the young hope to replace them. While it seemed that outworld colonies would offer accommodation and work for the young, those roles are increasingly filled by genetically tailored bioroids and AI-powered cybershells. The concept of humanity is no longer clear in a world where even some animals speak of their rights and the dead haunt both cyberspace and reality (in the form of infomorph-controlled bioshells or cybershells).
And the wonders of high science are not universally shared — some countries merely struggle with informatization while others suffer from nanoplagues, defective drugs, implants and software tested on their populace. In some poor countries high-tech tyrants oppress their backward people. And in outer space all sorts of modern crime thrive, barely suppressed by military forces.
Publication history
After the initial set of GURPS books that were published using GURPS Lite, later publications such as Transhuman Space by David Pulver were labelled simply "Powered by GURPS" without using the name "GURPS" in the book title. Transhuman Space received a significant number of supporting publications, and was the largest original background setting that Steve Jackson Games produced in 15 years. Shannon Appelcline noted that by its inclusion of posthuman characters, the book began to show the limits of the GURPS system as it was, which is something that Pulver would address soon thereafter.
Steve Jackson Games has not updated the core book (GURPS Transhuman Space) to 4th edition, although the supplement Transhuman Space: Changing Times provides a path for migrating to 4th edition. It has produced several 4th edition supplements for the setting: Transhuman Space: Bioroid Bazaar, Transhuman Space: Cities on the Edge, Transhuman Space: Martial Arts 2100, Transhuman Space: Personnel Files 2-5, Transhuman Space: Shell-Tech, GURPS Spaceships 8: Transhuman Spacecraft, Transhuman Space: Transhuman Mysteries, and Transhuman Space: Wings of the Rising Sun.
Reception
In a review of Transhuman Space in Black Gate, William Stoddard said "Transhuman Space was a richly detailed setting; if it had imperfections, it had enough depth to make up for them. I think it has the potential to become a classic in its field. Perhaps a campaign set in its default start year of 2100 could leave the early twenty-first century blurry enough to avoid obvious incongruities."
Reviews
Review in Vol. 20, No. 1 of Prometheus, the journal of the Libertarian Futurist Society.
See also
Eclipse Phase
Orion's Arm
Hard science fiction
List of GURPS books
GURPS Basic Set
Pyramid, a monthly online magazine with GURPS support
References
External links
Transhuman Space Official web site
Review of Transhuman Space at RPGnet
2003 Grog d'Or Announcement
Fiction about artificial intelligence
Fiction about augmented reality
Biopunk
Biorobotics in fiction
Fiction about brain–computer interface
Fiction about cyborgs
Fiction about consciousness transfer
Fiction about robots
Fiction about the Solar System
Fiction about genetic engineering
GURPS 3rd edition
GURPS 4th edition
GURPS books
Nanopunk
Fiction about nanotechnology
Postcyberpunk
Fiction about prosthetics
Role-playing game supplements introduced in 2002
Science fiction role-playing games
Steve Jackson Games games
Transhumanism in fiction
Fiction about virtual reality | Transhuman Space | [
"Materials_science",
"Engineering",
"Biology"
] | 1,530 | [
"Fiction about cyborgs",
"Genetic engineering",
"Fiction about genetic engineering",
"Fiction about nanotechnology",
"Cyborgs",
"Nanotechnology"
] |
627,405 | https://en.wikipedia.org/wiki/Telephone%20numbering%20plan | A telephone numbering plan is a type of numbering scheme used in telecommunication to assign telephone numbers to subscriber telephones or other telephony endpoints. Telephone numbers are the addresses of participants in a telephone network, reachable by a system of destination code routing. Telephone numbering plans are defined in each of the administrative regions of the public switched telephone network (PSTN) and in private telephone networks.
For public numbering systems, geographic location typically plays a role in the sequence of numbers assigned to each telephone subscriber. Many numbering plan administrators subdivide their territory of service into geographic regions designated by a prefix, often called an area code or city code, which is a set of digits forming the most-significant part of the dialing sequence to reach a telephone subscriber.
Numbering plans may follow a variety of design strategies which have often arisen from the historical evolution of individual telephone networks and local requirements. A broad division is commonly recognized between closed and open numbering plans. A closed numbering plan, as found in North America, features fixed-length area codes and local numbers, while an open numbering plan has a variance in the length of the area code, local number, or both of a telephone number assigned to a subscriber line. The latter type developed predominantly in Europe.
The International Telecommunication Union (ITU) has established a comprehensive numbering plan, designated E.164, for uniform interoperability of the networks of its member state or regional administrations. It is an open numbering plan but imposes a maximum length of 15 digits to telephone numbers. The standard defines a country code for each member region which is prefixed to each national telephone number for international destination routing.
Private numbering plans exist in telephone networks that are privately operated in an enterprise or organizational campus. Such systems may be supported by a private branch exchange (PBX), which provides a central access point to the PSTN and also controls internal calls between telephone extensions.
In contrast to numbering plans, which determine telephone numbers assigned to subscriber stations, dialing plans establish the customer dialing procedures, i.e., the sequence of digits or symbols to be dialed to reach a destination. It is the manner in which the numbering plan is used. Even in closed numbering plans, it is not always necessary to dial all digits of a number. For example, an area code may often be omitted when the destination is in the same area as the calling station.
Telephone number structure
National or regional telecommunication administrations that are members of the International Telecommunication Union (ITU) use national telephone numbering plans that conform to international standard E.164.
E.164 specifies that a telephone number consist of a country code and a national telephone number. National telephone numbers are defined by national or regional numbering plans, such as the European Telephony Numbering Space, the North American Numbering Plan (NANP), or the UK number plan.
Within a national numbering plan, a complete destination telephone number is typically composed of an area code and a subscriber telephone number.
Many national numbering plans have developed from local historical requirements and progress or technological advancements, which resulted in a variety of structural characteristics of the telephone numbers assigned to telephones. In the United States, the industry decided in 1947 to unite all local telephone networks under one common numbering plan with a fixed length of ten digits for the national telephone number of each telephone, of which the last seven digits were known as the local directory number, or subscriber number. Such a numbering plan became known as a closed numbering plan. In several European countries, a different strategy prevailed, known as the open numbering plan, which features a variance in the length of the area code, the local number, or both.
Subscriber number
The subscriber number is the address assigned to a telephone line or wireless communication channel terminating at the customer equipment. The first few digits of the subscriber number may indicate smaller geographical scopes, such as towns or districts, based on municipal aspects, or individual telephone exchanges (central office code), such as wire centers. In mobile networks they may indicate the network provider. Callers in a given area sometimes do not need to include area prefixes when dialing within the same area, but devices that dial telephone numbers automatically may include the full number with area and access codes.
The subscriber number is typically listed in local telephone directories, and is therefore often referred to as the directory number.
Area code
Telephone administrations that manage telecommunication infrastructure of extended size, such as a large country, often divide the territory into geographic areas. This benefits independent management by administrative or historical subdivisions, such as states and provinces, of the territory or country. Each area of subdivision is identified in the numbering plan with a routing code. This concept was first developed in the planning for a nationwide numbering plan for Operator Toll Dialing and direct distance dialing (DDD) in the Bell System in the United States in the 1940s, a system that resulted in the North American Numbering Plan for World Zone 1. AT&T divided the United States and Canada into numbering plan areas (NPAs), and assigned to each NPA a unique three-digit prefix, the numbering plan area code, which became known in short-form as NPA code or simply area code. The area code is prefixed to each telephone number issued in its service area.
Other national telecommunication authorities use various formats and dialing rules for area codes. The size of area code prefixes may be either fixed or variable. Area codes in the NANP have three digits, while two digits are used in Brazil, one digit in Australia and New Zealand. Variable-length formats exist in multiple countries, including Argentina, Austria (1 to 4 digits), Germany (2 to 5 digits), Japan (1 to 5), Mexico (2 or 3 digits), Peru (1 or 2), Syria (1 or 2) and the United Kingdom. In addition to digit count, the format may be restricted to certain digit patterns. For example, the NANP at times placed specific restrictions on the range of digits for the three positions, and assigned codes so that geographically adjacent areas did not receive similar area codes, to avoid confusion and misdialing.
Some countries, such as Denmark and Uruguay, have merged variable-length area codes and telephone numbers into fixed-length numbers that must always be dialed independently of location. In such administrations, the area code is not distinguished formally in the telephone number.
In the UK, area codes were first known as subscriber trunk dialling (STD) codes. Depending on local dialing plans, they are often necessary only when dialed from outside the code area or from mobile phones. In North America ten-digit dialing is required in areas with overlay numbering plans, in which multiple area codes are assigned to the same area.
The strict correlation of a telephone number to a geographical area has been broken by technical advances, such as local number portability in the North American Numbering Plan and voice over IP services.
When dialing a telephone number, the area code may have to be preceded by a trunk prefix or national access code for domestic calls, and for international calls by the international access code and country code.
Area codes are often quoted by including the national access code. For example, a number in London may be listed as 020 7946 0321. Users must correctly interpret 020 as the code for London. If they call from another station within London, they may merely dial 7946 0321, or if dialing from another country, the initial 0 should be omitted after the country code.
International numbering plan
The E.164 standard of the International Telecommunication Union is an international numbering plan and establishes a country calling code (country code) for each member organization. Country codes are prefixes to national telephone numbers that denote call routing to the network of a subordinate number plan administration, typically a country, or group of countries with a uniform numbering plan, such as the NANP. E.164 permits a maximum length of 15 digits for the complete international phone number consisting of the country code, the national routing code (area code), and the subscriber number. E.164 does not define regional numbering plans, however, it does provide recommendations for new implementations and uniform representation of all telephone numbers.
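The structural rule just described (a country code of one to three digits followed by the national number, with at most 15 digits in total) can be checked mechanically. The following Python sketch is purely illustrative; the country-code set is a small assumed sample rather than the full ITU assignment list.

```python
# Minimal E.164 structure check: "+", a 1-3 digit country code, then the
# national number, with at most 15 digits overall.
SAMPLE_COUNTRY_CODES = {"1", "44", "49", "61", "420"}  # assumed sample, not the full ITU list

def split_e164(number: str):
    """Return (country_code, national_number) or raise ValueError."""
    if not number.startswith("+"):
        raise ValueError("E.164 numbers are written with a leading '+'")
    digits = number[1:].replace(" ", "")
    if not digits.isdigit() or len(digits) > 15:
        raise ValueError("at most 15 digits are allowed after the '+'")
    for length in (3, 2, 1):  # country codes are 1-3 digits; try the longest match first
        if digits[:length] in SAMPLE_COUNTRY_CODES:
            return digits[:length], digits[length:]
    raise ValueError("unknown country code")

print(split_e164("+61 2 9876 5432"))  # ('61', '298765432')
```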
Country code
Country codes are necessary only when dialing telephone numbers in countries other than that of the originating telephone, but many networks permit them for all calls. These are dialed before the national telephone number.
Following ITU-T specification E.123, international telephone numbers are commonly indicated in listings by prefixing the country code with a plus sign (+). This reminds the subscriber to dial the international access code of the country from which the call is placed. For example, the international dialing prefix or access code in all NANP countries is 011, and 00 in most other countries. On modern mobile telephones and many voice over IP services, the plus sign can usually be dialed directly and functions as the international access code. Peer-to-peer SIP uses the Dynamic Delegation Discovery System to perform endpoint discovery, and can therefore make use of E.164 numbers.
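As a sketch of the prefix substitution described above, the routine below rewrites a number given in the "+" convention into a digit string dialable from a particular location, assuming the caller's international access code (011 in NANP countries, 00 in most others) is supplied; it is an illustration, not a complete dialing-rule engine.

```python
def dialable(plus_number: str, intl_access_code: str = "00") -> str:
    """Replace the leading '+' with the caller's international access code."""
    digits = plus_number.replace(" ", "")
    if digits.startswith("+"):
        return intl_access_code + digits[1:]
    return digits  # already a plain digit string; leave it unchanged

print(dialable("+44 20 7946 0321", "011"))  # 011442079460321 (from a NANP country)
print(dialable("+44 20 7946 0321", "00"))   # 00442079460321  (from most other countries)
```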
Special services
Within the system of country calling codes, the ITU has defined certain prefixes for special services, and assigns such codes for independent international networks, such as satellite systems, spanning beyond the scope of regional authorities.
Some special service codes are the following:
+388 5 – shared code for groups of nations
+388 3 – European Telephony Numbering Space – Europe-wide services (discontinued)
+800 – International Freephone (UIFN)
+808 – reserved for Shared Cost Services
+878 – Universal Personal Telecommunications services
+881 – Global Mobile Satellite System
+882 and +883 – International Networks
+888 – international disaster relief operations
+979 – International Premium Rate Service
+991 – International Telecommunications Public Correspondence Service trial (ITPCS)
+999 – reserved for future global service
Satellite telephone systems
Satellite phones are typically issued with telephone numbers with a special country calling code, for example:
Inmarsat: +870: SNAC (Single Network Access Code)
ICO Global: +881 0, +881 1
Ellipso: +881 2, +881 3
Iridium: +881 6, +881 7
Globalstar: +881 8, +881 9
Emsat: +882 13
Thuraya: +882 16
ACeS: +882 20
Some satellite telephones are issued with telephone numbers from a national numbering plan; for example, Globalstar issues NANP telephone numbers.
Private numbering plan
Like a public telecommunications network, a private telephone network in an enterprise or within an organizational campus may implement a private numbering plan for the installed base of telephones for internal communication. Such networks operate a private switching system or a private branch exchange (PBX) within the network. The internal numbers assigned are often called extension numbers, as the internal numbering plan extends an official, published main access number for the entire network. A caller from within the network only dials the extension number assigned to another internal destination telephone.
A private numbering plan provides the convenience of mapping station telephone numbers to other commonly used numbering schemes in an enterprise. For example, station numbers may be assigned as the room number of a hotel or hospital. Station numbers may also be strategically mapped to certain keywords composed from the letters on the telephone dial, such as 4357 (help) to reach a help desk.
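The keyword mapping mentioned above, with 4357 standing for "help", follows the conventional arrangement of letters on the telephone keypad (2 = ABC, 3 = DEF, and so on). A small illustrative sketch:

```python
# Conventional keypad letter layout; Q and Z sit on 7 and 9 on modern keypads.
KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

def keyword_to_digits(word: str) -> str:
    """Convert a mnemonic such as 'help' into the digits a caller would dial."""
    return "".join(LETTER_TO_DIGIT[ch] for ch in word.lower())

print(keyword_to_digits("help"))  # 4357
```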
The internal number assignments may be independent of any direct inward dialing (DID) services provided by external telecommunication vendors. For numbers without DID access, the internal switch relays externally originated calls via an operator, an automated attendant or an electronic interactive voice response system. Telephone numbers for users within such systems are often published by suffixing the official telephone number with the extension number, e.g., 1 800 555-0001 x2055.
Some systems may automatically map a large block of DID numbers (differing only in a trailing sequence of digits) to a corresponding block of individual internal stations, allowing each of them to be reached directly from the public switched telephone network. In some of these cases, a special shorter dial-in number can be used to reach an operator who can be asked for general information, e.g. help looking up or connecting to internal numbers. For example, individual extensions at Universität des Saarlandes can be dialed directly from outside via their four-digit internal extension +49-681-302-xxxx, whereas the university's official main number is +49-681-302-0 (49 is the country code for Germany, 681 is the area code for Saarbrücken, 302 the prefix for the university).
Callers within a private numbering plan often dial a trunk prefix to reach a national or international destination (outside line) or to access a leased line (or tie-line) to another location within the same enterprise. A large manufacturer with factories and offices in multiple cities may use a prefix (such as '8') followed by an internal routing code to indicate a city or location, then an individual four- or five-digit extension number at the destination site. A common trunk prefix for an outside line on North American systems is the digit 9, followed by the outside destination number.
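A toy digit-analysis routine for the conventions just described might look as follows; the tie-line site codes and the example numbers are assumed values chosen only to mirror the text, not a real PBX configuration.

```python
# '9' routes to the public network, '8' plus a two-digit site code to a
# tie line, and anything else is treated as an internal extension.
TIE_LINE_SITES = {"21": "factory-east", "22": "head-office"}  # hypothetical site codes

def route(dialed: str):
    if dialed.startswith("9"):
        return ("outside line", dialed[1:])
    if dialed.startswith("8"):
        site, extension = dialed[1:3], dialed[3:]
        return ("tie line to " + TIE_LINE_SITES.get(site, "unknown site"), extension)
    return ("internal extension", dialed)

print(route("914155550123"))  # ('outside line', '14155550123')
print(route("8212055"))       # ('tie line to factory-east', '2055')
print(route("2055"))          # ('internal extension', '2055')
```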
Additional dial plan customisations, such as single-digit access to a hotel front desk or room service from an individual room, are available at the sole discretion of the PBX owner.
Numbering plan indicator
Signaling in telecommunication networks is specific to the technology in use for each link. During signaling, it is common that additional information is passed between switching systems that is not represented in telephone numbers, which serve only as network addresses of endpoints. One such information element is the numbering plan indicator (NPI). It is a number defined in the ITU standard Q.713, paragraph 3.4.2.3.3, indicating the numbering plan of the attached telephone number. NPIs can be found in Signalling Connection Control Part (SCCP) and short message service (SMS) messages. Several numbering plans and their respective numbering plan indicator values have been defined.
Subscriber dialing procedures
While a telephone numbering plan specifies the digit sequence assigned to each telephone or wire line, establishing the network addresses needed for routing calls, numbering plan administrators may define certain dialing procedures for placing calls. This may include the dialing of additional prefixes necessary for administrative or technical reasons, or it may permit short code sequences for convenience or speed of service, such as in cases of emergency. The body of dialing procedures of a numbering plan administration is often called a dial plan.
A dial plan establishes the expected sequence of digits dialed on subscriber premises equipment, such as telephones, in private branch exchange (PBX) systems, or in other telephone switches to effect access to the telephone networks for the routing of telephone calls, or to effect or activate specific service features by the local telephone company, such as 311 or 411 service.
Variable-length dialing
Within the North American Numbering Plan (NANP), the administration defines standard and permissive dialing procedures, specifying the number of mandatory digits to be dialed for local calls within a single numbering plan area (NPA), as well as alternate, optional sequences, such as adding the prefix 1 before the telephone number.
Despite the closed numbering plan in the NANP, different dialing procedures exist in many of the territories for local and long-distance telephone calls. This means that to call another number within the same city or area, callers need to dial only a subset of the full telephone number. For example, in the NANP, only the seven-digit number may need to be dialed, but for calls outside the local numbering plan area, the full number including the area code is required. In these situations, ITU-T Recommendation E.123 suggests to list the area code in parentheses, signifying that in some cases the area code is optional or may not be required.
Internationally, an area code is typically prefixed by a domestic trunk access code (usually 0) when dialing from inside a country, but is not necessary when calling from other countries; there are exceptions, such as for Italian land lines.
To call a number in Sydney, Australia, for example:
xxxx xxxx (within Sydney and other locations within New South Wales and the Australian Capital Territory - no area code required)
(02) xxxx xxxx (outside New South Wales and the Australian Capital Territory, but still within Australia - the area code is required)
+61 2 xxxx xxxx (outside Australia)
The plus character (+) in the markup signifies that the following digits are the country code, in this case 61. Some phones, especially mobile telephones, allow the + to be entered directly. For other devices the user must replace the + with the international access code for their current location. In the United States, most carriers require the caller to dial 011 before the destination country code.
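The three forms of the Sydney number above can all be produced from one stored E.164 number once the caller's location is known. The sketch below hard-codes the Australian conventions quoted in the example (country code 61, trunk prefix 0) purely for illustration.

```python
def dial_string(country_code, area_code, local, caller_country, caller_area=None):
    """Render a stored number the way a caller in the given context would dial it."""
    if caller_country != country_code:
        return "+{} {} {}".format(country_code, area_code, local)  # international form
    if caller_area == area_code:
        return local                                               # same area: subscriber number only
    return "0{} {}".format(area_code, local)                       # domestic: trunk prefix + area code

number = ("61", "2", "9876 5432")
print(dial_string(*number, caller_country="61", caller_area="2"))  # 9876 5432
print(dial_string(*number, caller_country="61", caller_area="3"))  # 02 9876 5432
print(dial_string(*number, caller_country="64"))                   # +61 2 9876 5432
```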
New Zealand requires the area code to be dialed when calling between two local calling areas. During the 1970s and 1980s, each local calling area had its own area code. For example, Christchurch and Nelson in the late 1980s:
(03) xxx xxx — to call Christchurch from Nelson.
(054) xx xxx — to call Nelson from Christchurch.
During the early 1990s, the numbering plan was reorganised with the numerous area codes merged into just five, but the requirement to dial the area code between local calling areas remained. This means even though Christchurch and Nelson are now both in the same area code, the area code has to be dialed for calls between the two cities.
(03) 3xx xxxx — to call Christchurch from Nelson.
(03) 54x xxxx — to call Nelson from Christchurch.
In many areas of the NANP, the domestic trunk code (long-distance access code) must also be dialed along with the area code for long-distance calls even within the same numbering plan area. For example, to call a number in Regina in area code 306:
306 xxx xxxx — within Regina, Lumsden and other local areas
1 306 xxx xxxx — within Saskatchewan, but not within the Regina local calling area, e.g., Saskatoon
1 306 xxx xxxx — anywhere within the NANP outside Saskatchewan
In many parts of North America, especially in area code overlay complexes, dialing the area code, or 1 and the area code, is required even for local calls. Dialing from mobile phones does not require the trunk code in the US, although it is still necessary for calling all long-distance numbers from a mobile phone in Canada. Many mobile handsets automatically add the area code of the set's telephone number for outbound calls, if not dialed by the user.
In some parts of the United States, especially northeastern states such as Pennsylvania served by Verizon Communications, the ten-digit number must be dialed. If the call is not local, the call fails unless the dialed number is preceded by digit 1. Thus:
610 xxx xxxx — local calls within the 610 area code and its overlay (484), as well as calls to or from the neighboring 215 area code and its overlay, 267. Area code is required; one of two completion options for mobile phones within the U.S.
1 610 xxx xxxx — calls from numbers outside the 610/484 and 215/267 area codes; second of two completion options for mobile phones within the U.S.
In California and New York, because of the existence of both overlay area codes (where an area code must be dialed for every call) and non-overlay area codes (where an area code is dialed only for calls outside the subscriber's home area code), "permissive home area code dialing" of 1 + the area code within the same area code, even if no area code is required, has been permitted since the mid-2000s. For example, in the 559 area code (a non-overlay area code), calls may be dialed as seven digits (XXX-XXXX) or 1 559 + 7 digits. The manner in which a call is dialed does not affect the billing of the call. This "permissive home area code dialing" helps maintain uniformity and eliminates confusion given the different types of area code relief that have made California the nation's most "area code"-intensive state. Unlike other states with overlay area codes (Texas, Maryland, Florida and Pennsylvania and others), the California Public Utilities Commission and the New York State Public Service Commission maintain two different dial plans: landlines must dial 1 + area code whenever an area code is part of the dialed digits, while cellphone users can omit the "1" and just dial 10 digits.
Many organizations have private branch exchange systems which permit dialing the access digit(s) for an outside line (usually 9 or 8), a "1" and finally the local area code and xxx xxxx in areas without overlays. This aspect is unintentionally helpful for employees who reside in one area code and work in an area code with one, two, or three adjacent area codes. 1+ dialing to any area code by an employee can be done quickly, with all exceptions processed by the private branch exchange and passed onto the public switched telephone network.
Full-number dialing
In small countries or areas, the full telephone number is used for all calls, even in the same area. This has traditionally been the case in small countries and territories where area codes have not been required. However, there has been a trend in many countries towards making all numbers a standard length, and incorporating the area code into the subscriber's number. This usually makes the use of a trunk code obsolete.
For example, to call someone in Oslo in Norway before 1992, it was necessary to dial:
xxx xxx (within Oslo - no area code required)
(02) xxx xxx (within Norway - outside Oslo)
47 2 xxx xxx (outside Norway)
After 1992, this changed to a closed eight-digit numbering plan, e.g.:
22xx xxxx (within Norway - including Oslo)
47 22xx xxxx (outside Norway)
However, in other countries, such as France, Belgium, Japan, Switzerland, South Africa and some parts of North America, the trunk code is retained for domestic calls, whether local or national, e.g.,
Paris 01 xx xx xx xx (outside France +33 1 xxxx xxxx)
Brussels 02 xxx xxxx (outside Belgium +32 2 xxx xxxx)
Geneva 022 xxx xxxx (outside Switzerland +41 22 xxx xxxx)
Cape Town 021 xxx xxxx (outside South Africa +27 21 xxx xxxx)
New York 1 212 xxx xxxx (outside the North American Numbering Plan +1 212 xxx xxxx)
Fukuoka 092 xxx xxxx (outside the Japanese Numbering Plan +81 92 xxx xxxx)
India "0-10 Digit Number" (outside India +91 XXXXXXXXXX). In India due to the availability of multiple operators, the metro cities have short codes which range from 2 to 8 digits.
While some, like Italy, require the initial zero to be dialed, even for calls from outside the country, e.g.,
Rome 06 xxxxxxxx (outside Italy +39 06 xxxxxxxx)
While dialing of full national numbers takes longer than a local number without the area code, the increased use of phones that can store numbers means that this is of decreasing importance. It also makes it easier to display numbers in the international format, as no trunk code is required—hence a number in Prague, Czech Republic, can now be displayed as:
2xx xxx xxx (inside Czech Republic)
+420 2xx xxx xxx (outside Czech Republic)
as opposed to before September 21, 2002:
02 / xx xx xx xx (inside Czech Republic)
+420 2 / xx xx xx xx (outside Czech Republic)
Some countries switched to a closed dialing plan but later re-added the trunk prefix; for example, in Bangkok, Thailand, before 1997:
xxx-xxxx (inside Bangkok)
02-xxx-xxxx (inside Thailand)
+66 2-xxx-xxxx (outside Thailand)
This was changed in 1997:
2-xxx-xxxx (inside Thailand)
+66 2-xxx-xxxx (outside Thailand)
The trunk prefix was re-added in 2001:
02-xxx-xxxx (inside Thailand)
+66 2-xxx-xxxx (outside Thailand)
See also
:Category:Telephone numbers by country
National conventions for writing telephone numbers
List of country calling codes
List of North American Numbering Plan area codes
Carrier access code
Telephone exchange names
Area code 000
References
External links
List of ITU-T Recommendation E.164 assigned country codes as of 15 Dec 2016
List of ITU-T Recommendation E.164 Dialling Procedures as of 15 DEC 2011
Telephone numbers
ITU-T recommendations
Identifiers
numbering plan | Telephone numbering plan | [
"Mathematics"
] | 5,123 | [
"Mathematical objects",
"Numbers",
"Telephone numbers"
] |
627,411 | https://en.wikipedia.org/wiki/Jewel%20bearing | A jewel bearing is a plain bearing in which a metal spindle turns in a jewel-lined pivot hole. The hole is typically shaped like a torus and is slightly larger than the shaft diameter. The jewels are typically made from the mineral corundum, usually either synthetic sapphire or synthetic ruby. Jewel bearings are used in precision instruments where low friction, long life, and dimensional accuracy are important. Their main use is in mechanical watches.
History
Jewel bearings were invented in 1704 for use in watches by Nicolas Fatio de Duillier, Peter Debaufre, and Jacob Debaufre, who received an English patent for the idea. Originally natural jewels were used, such as diamond, sapphire, ruby, and garnet. In 1902, a process to make synthetic sapphire and ruby (crystalline aluminium oxide, also known as corundum) was invented by Auguste Verneuil, making jewelled bearings much cheaper. Today most jewelled bearings are synthetic ruby or sapphire.
Historically, jewel pivots were made by grinding using diamond abrasive. Modern jewel pivots are often made using high-powered lasers, chemical etching, and ultrasonic milling.
During World War II jewel bearings were one of the products restricted by the United States government War Production Board as critical to the war effort.
Characteristics
The advantages of jewel bearings include high accuracy, very small size and weight, low and predictable friction, good temperature stability, and the ability to operate without lubrication and in corrosive environments. They are known for their low kinetic friction and highly consistent static friction. The static coefficient of friction of brass-on-steel is 0.35, while that of sapphire-on-steel is 0.10–0.15. Sapphire surfaces are very hard and durable, with Mohs hardness of 9 and Knoop hardness of 1800, and can maintain smoothness over decades of use, thus reducing friction variability. Disadvantages include brittleness and fragility, limited availability and applicability in medium and large bearing sizes and capacities, and friction variations if the load is not axial. Like other bearings, most jeweled pivots use oil lubrication to reduce friction.
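Because static friction force scales linearly with the coefficient (F = μN), the quoted figures imply that a sapphire-on-steel pivot has roughly 2.3 to 3.5 times less breakaway friction than a brass-on-steel one under the same load. The load used in the sketch below is an assumed, purely illustrative value.

```python
# Compare breakaway friction for the two coefficient ranges quoted above.
load_newtons = 0.05            # assumed illustrative pivot load, not a specification
mu_brass = 0.35
mu_sapphire = (0.10, 0.15)

f_brass = mu_brass * load_newtons
f_sapphire = [mu * load_newtons for mu in mu_sapphire]
print(f"brass-on-steel:    {f_brass * 1000:.1f} mN")                                   # 17.5 mN
print(f"sapphire-on-steel: {f_sapphire[0] * 1000:.1f}-{f_sapphire[1] * 1000:.1f} mN")  # 5.0-7.5 mN
print(f"reduction factor:  {f_brass / f_sapphire[1]:.1f}-{f_brass / f_sapphire[0]:.1f}x")  # 2.3-3.5x
```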
Uses
The predominant use of jewel bearings is in mechanical watches, where their low and predictable friction improves watch accuracy as well as improving bearing life. Manufacturers traditionally listed the number of jewels prominently on the watch face or back, as an advertising point. A typical fully jeweled time-only watch has 17 jewels: two cap jewels, two pivot jewels and an impulse jewel for the balance wheel, two pivot jewels and two pallet jewels for the pallet fork, and two pivot jewels each for the escape, fourth, third, and center wheels.
In modern quartz watches, the timekeeper is a quartz crystal in an electronic circuit, powering a small stepper motor. Because of the small amount of torque needed to move the hands, there is almost no pressure on the bearings and no real gain by using a jewel bearing, hence they are not used in a large proportion of quartz movements.
The other major use of jeweled bearings is in sensitive mechanical measuring instruments. They are typically used for delicate linkages that must carry very small forces, in instruments such as galvanometers, compasses, gyroscopes, gimbals, dial indicators, dial calipers, and turbine flow meters. In such instruments, jewel bearings are often used as pivots for their needles which need to move reliably and with low variability even when measuring small changes. Bearing bores are typically smaller than 1 mm and support loads weighing less than 1 gram, although they are made as large as 10 mm and may support loads up to about 500 g. Their use has diminished with the popularization of digital measuring instruments.
See also
Incabloc shock protection system
The Clock of the Long Now which will use ceramic bearings with no lubrication at low speed
References
Footnotes
Bearings (mechanical)
Timekeeping components | Jewel bearing | [
"Technology"
] | 811 | [
"Timekeeping components",
"Components"
] |
627,413 | https://en.wikipedia.org/wiki/John%20Maynard%20Smith%20Prize | The John Maynard Smith Prize is a prize given by the European Society for Evolutionary Biology on odd years to an outstanding young researcher. It was first awarded in 1997 and is named after the evolutionary biologist John Maynard Smith (1920–2004).
List of winners
Source: European Society for Evolutionary Biology
See also
List of biology awards
References
Biology awards
Awards established in 1997
European science and technology awards | John Maynard Smith Prize | [
"Technology"
] | 77 | [
"Science and technology awards",
"Biology awards"
] |
627,476 | https://en.wikipedia.org/wiki/Udder | An udder is an organ formed of two or four mammary glands on the females of dairy animals and ruminants such as cattle, goats, and sheep. An udder is equivalent to the breast in primates, elephantine pachyderms and other mammals. The udder is a single mass hanging beneath the animal, consisting of pairs of mammary glands with protruding teats. In cattle, camels and deer, there are normally two pairs, in sheep and goats, there is one pair, and in some animals, there are many pairs. In animals with udders, the mammary glands develop on the milk line near the groin. Mammary glands that develop on the chest (such as in primates and elephants) are generally referred to as breasts.
Udder care and hygiene in cows is important in milking, aiding uninterrupted and untainted milk production, and preventing mastitis. Products exist to soothe the chapped skin of the udder. This helps prevent bacterial infection, and reduces irritation during milking by the cups, and so the cow is less likely to kick the cups off. It has been demonstrated that incorporating nutritional supplements into diet, including vitamin E, is an additional method of improving udder health and reducing infection.
Etymology
Udder has been attested in Middle English as or (also as , ), and in Old English as . It evolved from the Proto-Germanic reconstructed root *eudrą or *ūdrą, which in turn descended from Proto-Indo-European *h₁ówHdʰr̥ (“udder”). It is cognate with Saterland Frisian (“udder”), Dutch (“udder”), German (“udder”), Swedish (“udder”), Icelandic (“udder”), Vedic Sanskrit ऊधर् (ū́dhar), Ancient Greek (), and Latin .
As food
The udder of a slaughtered cow, known as elder in Ireland, Scotland and northern England, was in times past prepared and consumed. In other countries, like Italy, parts of Pakistan, and some South American countries, cow udder is still consumed in dishes like the traditional and ubres asada.
References
External links
Mammal anatomy
Breast
Exocrine system
Mammal female reproductive system
Glands
Secondary sexual characteristics
Dairy farming | Udder | [
"Biology"
] | 477 | [
"Exocrine system",
"Organ systems"
] |
627,496 | https://en.wikipedia.org/wiki/Nonviolent%20Communication | Nonviolent Communication (NVC) is an approach to enhanced communication, understanding, and connection based on the principles of nonviolence and humanistic psychology. It is not an attempt to end disagreements, but rather a way that aims to increase empathy and understanding to improve the overall quality of life. It seeks empathic dialogue and understanding among all parties. Nonviolent Communication evolved from concepts used in person-centered therapy, and was developed by clinical psychologist Marshall Rosenberg beginning in the 1960s and 1970s. There are a large number of workshops and clinical materials about NVC, including Rosenberg's book Nonviolent Communication: A Language of Life. Marshall Rosenberg also taught NVC in a number of video lectures available online; the workshop recorded in San Francisco is the most well-known.
NVC is a communication tool with the goal of first creating empathy in the conversation. The idea is that once people hear one another, it will be much easier to talk about a solution which satisfies all parties' fundamental needs. The goal is interpersonal harmony and obtaining knowledge for future cooperation. Notable concepts include rejecting coercive forms of discourse, gathering facts through observing without evaluating, genuinely and concretely expressing feelings and needs, and formulating effective and empathetic requests. Nonviolent Communication is used as a clinical psychotherapy modality and it is also offered in workshops for the general public, particularly in regard to seeking harmony in relationships and at workplaces.
History
Marshall Rosenberg's motivation for developing NVC was based on his own experiences at the Detroit race riot of 1943, as well as the antisemitism that he experienced in his early life.
According to Marion Little (2008), the roots of the NVC model developed in the late 1960s, when Rosenberg was working on racial integration in schools and organizations in the Southern United States. The earliest version of the model (observations, feelings, needs, and action-oriented wants) was part of a training manual Rosenberg prepared in 1972.
The development of NVC is highly reliant on concepts developed by Carl Rogers and person-centered therapy. Rogers emphasized: 1) experiential learning, 2) "frankness about one's emotional state," 3) the satisfaction of hearing others "in a way that resonates for them," 4) the enriching and encouraging experience of "creative, active, sensitive, accurate, empathic listening," 5) the "deep value of congruence between one's own inner experience, one's conscious awareness, and one's communication," and, subsequently, 6) the enlivening experience of unconditionally receiving love or appreciation and extending the same. These influenced the concepts described in the section below.
Rosenberg was influenced by Erich Fromm, George Albee, and George Miller to adopt a community focus in his work, moving away from clinical psychological practice. The central ideas influencing this shift by Rosenberg were that: (1) individual mental health depends on the social structure of a community (Fromm), (2) therapists alone are unable to meet the psychological needs of a community (Albee), and (3) knowledge about human behavior will increase if psychology is freely given to the community (Miller).
Rosenberg's early work with children with learning disabilities shows his interest in psycholinguistics and the power of language, as well as his emphasis on collaboration. In its initial development, the NVC model re-structured the pupil-teacher relationship to give students greater responsibility for, and decision-making related to, their own learning. The model has evolved over the years to incorporate institutional power relationships (i.e., police-citizen, boss-employee) and informal ones (i.e. man-woman, rich-poor, adult-youth, parent-child). The ultimate aim is to develop societal relationships based on a restorative, "partnership" paradigm and mutual respect, rather than a retributive, fear-based, "domination" paradigm.
In order to show the differences between communication styles, Rosenberg started to use two animals. Violent communication was represented by the carnivorous Jackal as a symbol of aggression and especially dominance. The herbivorous Giraffe, on the other hand, represented his NVC strategy. The Giraffe was chosen as the symbol for NVC because its long neck is meant to suggest a clear-sighted speaker, aware of fellow speakers' reactions, and because the Giraffe has a large heart, representing the compassionate side of NVC. In his courses he tended to use these animals in order to make the differences in communication clearer to the audience.
The model had evolved to its present form (observations, feelings, needs and requests) by 1992. Since the late 2000s, there has been more emphasis on self-empathy as a key to the model's effectiveness. Another shift in emphasis, since 2000, has been the reference to the model as a process. The focus is thus less on the "steps" themselves and more on the practitioner's intentions in speaking ("Is the intent to get others to do what one wants, or to foster more meaningful relationships and mutual satisfaction?") in listening ("Is the intent to prepare for what one has to say, or to extend heartfelt, respectful attentiveness to another?") and the quality of connection experienced with others.
In 2019, a group of certified NVC trainers published a #MeToo statement honouring Marshall Rosenberg's legacy but also acknowledging he had slept with students at some times of his life. The trainers encourage all facilitators to share a warning with prospective clients and students about the potential risks of empathy work and recommended sexual boundaries.
Overview
Nonviolent Communication holds that most conflicts between individuals or groups arise from miscommunication about their human needs, due to coercive or manipulative language that aims to induce fear, guilt, shame, etc. These "violent" modes of communication, when used during a conflict, divert the attention of the participants away from clarifying their needs, their feelings, their perceptions, and their requests, thus perpetuating the conflict.
Alternative names
In a recorded lecture, Marshall Rosenberg describes the origins of the name Nonviolent Communication. He explains that the name was chosen to connect his work to the word "nonviolence" that was used by the peace movement, thus showing the ambition to create peace on the planet. Meanwhile, Marshall did not like that name since it described what NVC is not, rather than what NVC is. In fact, this goes against an important principle in the fourth component of NVC, i.e. requests. Specifically, in an NVC request, one should ask for what one does want, not what one doesn't want. Because of this, a number of alternative names have become common, most importantly giraffe language, compassionate communication or collaborative communication.
Components
There are four components to practice nonviolent communication, and in this order:
Observation: These are facts (what we are seeing, hearing, or touching) as distinct from our evaluation of meaning and significance. NVC discourages static generalizations. It is said that "When we combine observation with evaluation, others are apt to hear criticism and resist what we are saying." Instead, a focus on observations specific to time and context is recommended.
Feelings: These are emotions or sensations, free of thought and story. These are to be distinguished from thoughts (e.g., "I feel I didn't get a fair deal") and from words colloquially used as feelings but which convey what we think we are (e.g., "inadequate"), how we think others are evaluating us (e.g., "unimportant"), or what we think others are doing to us (e.g., "misunderstood", "ignored"). Feelings are said to reflect whether we are experiencing our needs as met or unmet. Identifying feelings is said to allow us to more easily connect with one another, and "Allowing ourselves to be vulnerable by expressing our feelings can help resolve conflicts."
Needs: These are universal human needs, as distinct from particular strategies for meeting needs. It is posited that "Everything we do is in service of our needs." Marshall Rosenberg refers to Max-Neef's model where needs may be categorised into 9 classes: sustenance, safety, love, understanding/empathy, creativity, recreation, sense of belonging, autonomy and meaning. For more information, the Center for Nonviolent Communication has developed a needs inventory.
Requests: Requests are distinguished from demands in that one is open to hearing a response of "no" without this triggering an attempt to force the matter. If one makes a request and receives a "no" it is not recommended that one gives up, but that one empathizes with what is preventing the other person from saying "yes," before deciding how to continue the conversation. It is recommended that requests use clear, positive, concrete action language.
Modes
There are three primary modes of application of NVC:
Self-empathy involves compassionately connecting with what is going on inside us. This may involve, without blame, noticing the thoughts and judgments we are having, noticing our feelings, and most critically, connecting to the needs that are affecting us.
Receiving empathically, in NVC, involves "connection with what's alive in the other person and what would make life wonderful for them... It's not an understanding of the head where we just mentally understand what another person says... Empathic connection is an understanding of the heart in which we see the beauty in the other person, the divine energy in the other person, the life that's alive in them... It doesn't mean we have to feel the same feelings as the other person. That's sympathy, when we feel sad that another person is upset. It doesn't mean we have the same feelings; it means we are with the other person... If you're mentally trying to understand the other person, you're not present with them." Empathy involves "emptying the mind and listening with our whole being." NVC suggests that however the other person expresses themselves, we focus on listening for the underlying observations, feelings, needs, and requests. It is suggested that it can be useful to reflect a paraphrase of what another person has said, highlighting the NVC components implicit in their message, such as the feelings and needs you guess they may be expressing.
Expressing honestly, in NVC, is likely to involve expressing an observation, feeling, need, and request. An observation may be omitted if the context of the conversation is clear. A feeling might be omitted if there is sufficient connection already, or the context is one where naming a feeling isn't likely to contribute to connection. It is said that naming a need in addition to a feeling makes it less likely that people will think you are making them responsible for your feeling. Similarly, it is said that making a request in addition to naming a need makes it less likely that people will infer a vague demand that they address your need. The components are thought to work together synergistically. According to NVC trainer Bob Wentworth, "an observation sets the context, feelings support connection and getting out of our heads, needs support connection and identify what is important, and a request clarifies what sort of response you might enjoy. Using these components together minimizes the chances of people getting lost in potentially disconnecting speculation about what you want from them and why."
Research
A systematic review of research as of 2013 analyzed 13 studies picked from 2,634 citations. Two of these studies came from peer-reviewed journals. Eleven of these suggested an increase in empathy subsequent to the application of NVC (five of these with evidence of statistical significance) and two did not. There have been no randomized studies into NVC. Academic research into NVC only began in the 1990s, and has been increasing with time.
As of 2017, fifteen master's theses and doctoral dissertations are known to have tested the model on sample sizes of 108 or smaller and generally have found the model to be effective.
While it is widely applied in clinical and lay contexts, and the limited research available generally shows the technique to be effective in conflict resolution and in increasing empathy, psychologists generally do not consider it to have the same standing as evidence-based practices such as cognitive-behavioral therapy, owing to the small amount of academic research on the method.
Allan Rohlfs, who first met Rosenberg in 1972 and was a founder of the Center for Nonviolent Communication, in 2011 explained a paucity of academic literature as follows:
Virtually all conflict resolution programs have an academic setting as their foundation and therefore have empirical studies by graduate students assessing their efficacy. NVC is remarkable for its roots. Marshall Rosenberg, Ph.D. (clinical psychology, U of Wisconsin) comes from a full time private practice in clinical psychology and consultation, never an academic post. NVC, his creation, is entirely a grassroots organization and never had until recently any foundation nor grant monies, on the contrary funded 100% from trainings which were offered in public workshops around the world. ... Empirical data is now coming slowly as independent researchers find their own funding to conduct and publish empirical studies with peer review.
Bowers and Moffett (2012) assert that NVC has been absent from academic programs due to a lack of research into the theoretical basis for the model and a lack of research on the reliability of positive results.
Connor and Wentworth (2012) examined the impact of 6-months of NVC training and coaching on 23 executives in a Fortune 100 corporation. A variety of benefits were reported, including "conversations and meetings were notably more efficient, with issues being resolved in 50-80 percent less time."
A 2014 study examined the effects of combined NVC and mindfulness training on 885 male inmates of the Monroe Correctional Complex in Monroe, Washington. The training was found to reduce recidivism from 37% to 21%, and the training was estimated as having saved the state $5 million per year in reduced incarceration costs. The training was found to increase equanimity, decrease anger, and lead to abilities to take responsibility for one's feelings, express empathy, and to make requests without imposing demands.
Relationship to spirituality
In the introduction to Rosenberg's book Nonviolent Communication: A Language of Life,
NVC is described as a framework built on several pre-existing concepts that Rosenberg found useful on the topic of communication and conflict resolution. It is therefore perhaps not surprising that some Christians have found NVC to be complementary to their Christian faith. Many people have found Nonviolent Communication to be very complementary to Buddhism, both in theory and in manifesting Buddhist ideals in practice. Furthermore, the "NVC consciousness" described in NVC has several similarities to the concepts of presence and patience in mindfulness.
As Theresa Latini notes, "Rosenberg understands NVC to be a fundamentally spiritual practice." Marshall Rosenberg describes the influence of his spiritual life on the development and practice of NVC:
I think it is important that people see that spirituality is at the base of Nonviolent Communication, and that they learn the mechanics of the process with that in mind. It's really a spiritual practice that I am trying to show as a way of life. Even though we don't mention this, people get seduced by the practice. Even if they practice this as a mechanical technique, they start to experience things between themselves and other people they weren't able to experience before. So eventually they come to the spirituality of the process. They begin to see that it's more than a communication process and realize it's really an attempt to manifest a certain spirituality.
Rosenberg further states that he developed NVC as a way to "get conscious of" what he calls the "Beloved Divine Energy". Rosenberg considered NVC to be much more than a four-step process for communication, but rather a way of living.
Relationship to other models
Marion Little examines theoretical frameworks related to NVC. The influential interest-based model for conflict resolution, negotiation, and mediation developed by Fisher, Ury, and Patton at the Harvard Negotiation Project and at the Program on Negotiation in the 1980s appears to have some conceptual overlap with NVC, although neither model references the other. Little suggests The Gordon Model for Effective Relationships (1970) as a likely precursor to both NVC and interest-based negotiation, based on conceptual similarities, if not any direct evidence of a connection. Like Rosenberg, Gordon had worked with Carl Rogers, so the models' similarities may reflect common influences.
Suzanne Jones sees a substantive difference between active listening as originated by Gordon and empathic listening as recommended by Rosenberg, insofar as active listening involves a specific step of reflecting what a speaker said to let them know you are listening, whereas empathic listening involves an ongoing process of listening with both heart and mind and being fully present to the other's experience, with an aim of comprehending and empathizing with the needs of the other, the meaning of the experience for that person.
Gert Danielsen and Havva Kök both note an overlap between the premises of NVC and those of Human Needs Theory (HNT), an academic model for understanding the sources of conflict and designing conflict resolution processes, with the idea that "Violence occurs when certain individuals or groups do not see any other way to meet their need, or when they need understanding, respect and consideration for their needs."
Chapman Flack sees an overlap between what Rosenberg advocates and critical thinking, especially Bertrand Russell's formulation uniting kindness and clear thinking.
Martha Lasley sees similarities with the Focused Conversation Method developed by the Institute of Cultural Affairs (ICA), with NVC's observations, feelings, needs, and requests components relating to FCM's objective, reflective, interpretive, and decisional stages.
Applications
NVC has been applied in organizational and business settings, in parenting, in education, in mediation, in psychotherapy, in healthcare, in addressing eating issues, in justice, and as a basis for a children's book, among other contexts.
Rosenberg related ways he used Nonviolent Communication in peace programs in conflict zones including Rwanda, Burundi, Nigeria, Malaysia, Indonesia, Sri Lanka, Colombia, Serbia, Croatia, Ireland, and the Middle East including the occupied West Bank.
Reportedly, one of the first acts of Satya Nadella when he became CEO of Microsoft in 2014 was to ask top company executives to read Rosenberg's book, Nonviolent Communication.
Criticisms
Several researchers have attempted a thorough evaluation of criticisms and weaknesses of NVC and assessed significant challenges in its application. These span a range of potential problems, from the practical to the theoretical, and include concerns gathered from study participants and researchers.
The difficulty of using NVC as well as the dangers of misuse are common concerns. In addition, Bitschnau and Flack find a paradoxical potential for violence in the use of NVC, occasioned by its unskilled use. Bitschnau further suggests that the use of NVC is unlikely to allow everyone to express their feelings and have their needs met in real life as this would require inordinate time, patience and discipline. Those who are skilled in the use of NVC may become prejudiced against those who are not and prefer to converse only among themselves.
Furthermore, the exclusivity of NVC appears to favor the well-educated, valuing those with more awareness of grammar, word choice, and syntax. This could lead to problems of accessibility for the underprivileged and favoring a higher social class.
Oboth suggests that people might hide their feelings in the process of empathy, subverting the nonviolence of communication.
Though intended to strengthen relationships between loved ones, NVC may instead lead to a relationship ending: people are finite creatures with finite resources, and understanding one another's needs through NVC may reveal that the relationship creates too much strain for all of those needs to be met.
The massive investment of time and effort in learning to use NVC has been noted by a number of researchers.
Chapman Flack, in reviewing a training video by Rosenberg, finds the presentation of key ideas "spell-binding" and the anecdotes "humbling and inspiring", notes the "beauty of his work", and his "adroitly doing fine attentive thinking" when interacting with his audience. Yet Flack wonders what to make of aspects of Rosenberg's presentation, such as his apparent "dim view of the place for thinking" and his building on Walter Wink's account of the origins of our way of thinking. To Flack, some elements of what Rosenberg says seem like pat answers at odds with the challenging and complex picture of human nature that history, literature, and art offer.
Flack notes a distinction between the "strong sense" of Nonviolent Communication as a virtue that is possible with care and attention, and the "weak sense," a mimicry of this born of ego and haste. The strong sense offers a language to examine one's thinking and actions, support understanding, bring one's best to the community, and honor one's emotions. In the weak sense, one may take the language as rules and use these to score debating points, label others for political gain, or insist that others express themselves in this way. Though concerned that some of what Rosenberg says could lead to the weak sense, Flack sees evidence confirming that Rosenberg understands the strong sense in practice. Rosenberg's work with workshop attendees demonstrates "the real thing." Yet Flack warns that "the temptation of the weak sense will not be absent." As an antidote, Flack advises, "Be conservative in what you do, be liberal in what you accept from others," (also known as the robustness principle) and guard against the "metamorphosis of nonviolent communication into subtle violence done in its name."
Ellen Gorsevski, assessing Rosenberg's book, Nonviolent Communication: A Language of Compassion (1999) in the context of geopolitical rhetoric, states that "the relative strength of the individual is vastly overestimated while the key issue of structural violence is almost completely ignored."
PuddleDancer Press reports that NVC has been endorsed by a variety of public figures.
Sven Hartenstein has created a series of cartoons spoofing NVC.
While a number of studies have indicated a high degree of effectiveness, there has been limited academic research into NVC in general. From an evidence-based standpoint, it does not have the same standing as practices such as cognitive-behavioral therapy. Supporters of the theory have generally relied on clinical and anecdotal experience to support its efficacy. Critics generally assume the efficacy of the method on an individual level; most criticisms concern issues of equity and consistency. In Internet blog posts, some have described its model as self-contradictory, viewing NVC as a potentially coercive (and thus “violent”) technique with significant potential for misuse. The method requires a substantial amount of effort (time) to learn and apply, and assumes a certain level of education.
Organizations
The Center for Nonviolent Communication (CNVC), founded by Marshall Rosenberg, has trademarked the terms Nonviolent Communication: A Language of Life, The Center for Nonviolent Communication and CNVC.
CNVC certifies trainers who wish to teach NVC in a manner aligned with CNVC's understanding of the NVC process. CNVC also offers trainings by certified trainers.
Some trainings in Nonviolent Communication are offered by trainers sponsored by organizations considered as allied with, but having no formal relationship with, the Center for Nonviolent Communication founded by Marshall Rosenberg. Some of these trainings are announced through CNVC. Numerous NVC organizations have sprung up around the world, many with regional focuses.
See also
References
Further reading
Atlee, T. "Thoughts on Nonviolent Communication and Social Change." Co-intelligence Institute.
Branch, K. (2017) "How to Survive Thanksgiving Drama With This Smart Conflict-Management Strategy" Vogue Magazine November, 2017.
Evans, Louise (2016) The Five Chairs: Own Your Behaviours, Master Your Communication, Determine Your Success (book; TEDx talk)
Kabatznick, R. and M. Cullen (2004) "The Traveling Peacemaker: A Conversation with Marshall Rosenberg." Inquiring Mind, Fall issue.
Kashtan, M. (2010-ongoing), blog about applying NVC The Fearless Heart by the co-founder of Bay Area Nonviolent Communication.
Kashtan, M. (2012) "Nonviolent Communication: Gandhian Principles for Everyday Living", Satyagraha Foundation for Nonviolence Studies, April 2012.
Latini, T. (2009). Nonviolent Communication: A Humanizing Ecclesial and Educational Practice. Journal of Education & Christian Belief.
Moore, P. (2004) "NonViolent Communication as an Evolutionary Imperative-The InnerView of Marshall Rosenberg" Alternatives, Issue 29, Spring.
Sauer, M. (2004) "Expert on conflict resolution believes nonviolence is in our nature" San Diego Union-Tribune, October 14, 2004.
van Gelder, S. (1998) "The Language of Nonviolence" Yes Magazine, Summer 1998.
External links
The Center for Nonviolent Communication - nonprofit international organization
Nonviolent Communication by the authentic communication group - online skills training that helps to improve personal development
Human communication
Mindfulness movement
Nonviolence | Nonviolent Communication | [
"Biology"
] | 5,269 | [
"Human communication",
"Behavior",
"Human behavior"
] |
627,501 | https://en.wikipedia.org/wiki/Littlewood%20conjecture | In mathematics, the Littlewood conjecture is an open problem in Diophantine approximation, proposed by John Edensor Littlewood around 1930. It states that for any two real numbers α and β,

$$\liminf_{n\to\infty}\, n\,\Vert n\alpha\Vert\,\Vert n\beta\Vert = 0,$$

where $\Vert x\Vert$ is the distance from x to the nearest integer.
Formulation and explanation
This means the following: take a point (α, β) in the plane, and then consider the sequence of points
(2α, 2β), (3α, 3β), ... .
For each of these, multiply the distance to the closest line with integer x-coordinate by the distance to the closest line with integer y-coordinate. This product will certainly be at most 1/4. The conjecture makes no statement about whether this sequence of values will converge; it typically does not, in fact. The conjecture states something about the limit inferior, and says that there is a subsequence for which the distances decay faster than the reciprocal, i.e.
o(1/n)
in the little-o notation.
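The quantity in the conjecture is straightforward to compute numerically for small n. The following minimal Python sketch is an illustration added here, not part of the original article; the choice of α and β is arbitrary, and a finite run can only suggest the trend, since the conjecture concerns the limit inferior as n → ∞ and floating-point error eventually dominates.

from math import sqrt

def dist_to_nearest_int(x: float) -> float:
    """Distance from x to the nearest integer, i.e. ||x||."""
    return abs(x - round(x))

def littlewood_term(alpha: float, beta: float, n: int) -> float:
    """The quantity n * ||n*alpha|| * ||n*beta|| appearing in the conjecture."""
    return n * dist_to_nearest_int(n * alpha) * dist_to_nearest_int(n * beta)

# Example pair: the golden ratio and sqrt(2), two badly approximable numbers.
alpha, beta = (1 + sqrt(5)) / 2, sqrt(2)
smallest = min(littlewood_term(alpha, beta, n) for n in range(1, 100_001))
print(f"min of n*||n*alpha||*||n*beta|| for n <= 100000: {smallest:.6f}")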
Connection to further conjectures
It is known that this would follow from a result in the geometry of numbers, about the minimum on a non-zero lattice point of a product of three linear forms in three real variables: the implication was shown in 1955 by Cassels and Swinnerton-Dyer. This can be formulated another way, in group-theoretic terms. There is now another conjecture, expected to hold for n ≥ 3: it is stated in terms of G = SLn(R), Γ = SLn(Z), and the subgroup D of diagonal matrices in G.
Conjecture: for any g in G/Γ such that Dg is relatively compact (in G/Γ), Dg is closed.
This in turn is a special case of a general conjecture of Margulis on Lie groups.
Partial results
Borel showed in 1909 that the exceptional set of real pairs (α,β) violating the statement of the conjecture is of Lebesgue measure zero. Manfred Einsiedler, Anatole Katok and Elon Lindenstrauss have shown that it must have Hausdorff dimension zero; and in fact is a union of countably many compact sets of box-counting dimension zero. The result was proved by using a measure classification theorem for diagonalizable actions of higher-rank groups, and an isolation theorem proved by Lindenstrauss and Barak Weiss.
These results imply that non-trivial pairs satisfying the conjecture exist: indeed, given a real number α such that $\liminf_{n\to\infty}\, n\,\Vert n\alpha\Vert > 0$, it is possible to construct an explicit β such that (α,β) satisfies the conjecture.
See also
Littlewood polynomial
References
Further reading
Diophantine approximation
Conjectures
Unsolved problems in mathematics | Littlewood conjecture | [
"Mathematics"
] | 557 | [
"Unsolved problems in mathematics",
"Diophantine approximation",
"Conjectures",
"Mathematical relations",
"Mathematical problems",
"Approximations",
"Number theory"
] |
627,542 | https://en.wikipedia.org/wiki/Rolling-element%20bearing | In mechanical engineering, a rolling-element bearing, also known as a rolling bearing, is a bearing which carries a load by placing rolling elements (such as balls, cylinders, or cones) between two concentric, grooved rings called races. The relative motion of the races causes the rolling elements to roll with very little rolling resistance and with little sliding.
One of the earliest and best-known rolling-element bearings is a set of logs laid on the ground with a large stone block on top. As the stone is pulled, the logs roll along the ground with little sliding friction. As each log comes out the back, it is moved to the front, where the block then rolls onto it. It is possible to imitate such a bearing by placing several pens or pencils on a table and placing an item on top of them. See "bearings" for more on the historical development of bearings.
A rolling element rotary bearing uses a shaft in a much larger hole, and spheres or cylinders called "rollers" tightly fill the space between the shaft and hole. As the shaft turns, each roller acts as the logs in the above example. However, since the bearing is round, the rollers never fall out from under the load.
Rolling-element bearings have the advantage of a good trade-off between cost, size, weight, carrying capacity, durability, accuracy, friction, and so on. Other bearing designs are often better on one specific attribute, but worse in most other attributes, although fluid bearings can sometimes simultaneously outperform on carrying capacity, durability, accuracy, friction, rotation rate and sometimes cost. Only plain bearings are used as widely as rolling-element bearings. They are widely used in automotive, industrial, marine, and aerospace applications, and are essential to much of modern technology. The rolling-element bearing was developed from a foundation built over thousands of years: the concept emerged in primitive form in Roman times; after a long inactive period in the Middle Ages, it was revived during the Renaissance by Leonardo da Vinci, and it developed steadily in the seventeenth and eighteenth centuries.
History of Bearings Timeline
Beginning in 2600 BCE - The ancient Egyptians were the first to make notable use of the concept behind rolling bearings, placing logs under heavy stones, with groups of builders on either side to push and pull the load.
40 BC - Rolling-element bearings were found in the remains of a sunken Roman ship in Lake Nemi. The discovery shows the continuing development of the principle, although the remains do not clearly indicate what these bearings were used for.
17th century - Galileo described the functioning of a caged bearing.
1740 - John Harrison invented the first caged roller bearing for his H3 marine timekeeper.
1794 - The first patent for the ball race was granted to Philip Vaughan of Carmarthen, Wales. This was the first design in which a ball ran along a groove.
1869 - Jules Suriray received the first patent for a radial ball bearing; his design was used by James Moore to win the first 80-mile bicycle race from Paris to Rouen.
Overall Design
Design description
Bearings, and rolling-element bearings in particular, share a similar basic design across the board, consisting of an outer and an inner track (race), a central bore, a retainer (cage) that keeps the rolling elements from clashing into one another or seizing the bearing's movement, and the rolling elements themselves.
The internal rolling components differ in design according to the bearing's intended application. The five main types are ball, cylindrical, tapered, barrel, and needle.
Ball - the simplest type, following the basic principles with minimal design complexity. It is worth noting that seizing is more likely because of the freedom of the track design.
Cylindrical - for single-axis, straight-line motion. The shape puts more surface area in contact, allowing more weight to be moved with less force over a greater distance.
Tapered - designed primarily to carry both axial and radial loads, which it does by using a conical structure that lets the elements roll diagonally.
Barrel - accommodates high radial shock loads that cause misalignment, using its shape and size to compensate.
Needle - varying in size, diameter, and material, these bearings are best suited to reducing weight and to applications with small cross sections; they typically offer higher load capacity than ball bearings and suit rigid-shaft applications.
Specific Design Types
Ball bearing
A particularly common kind of rolling-element bearing is the ball bearing. The bearing has inner and outer races between which balls roll. Each race features a groove usually shaped so the ball fits slightly loose. Thus, in principle, the ball contacts each race across a very narrow area. However, a load on an infinitely small point would cause infinitely high contact pressure. In practice, the ball deforms (flattens) slightly where it contacts each race much as a tire flattens where it contacts the road. The race also yields slightly where each ball presses against it. Thus, the contact between ball and race is of finite size and has finite pressure. The deformed ball and race do not roll entirely smoothly because different parts of the ball are moving at different speeds as it rolls. Thus, there are opposing forces and sliding motions at each ball/race contact. Overall, these cause bearing drag.
Roller bearings
Cylindrical roller
Roller bearings are the earliest known type of rolling-element-bearing, dating back to at least 40 BC. Common roller bearings use cylinders of slightly greater length than diameter. Roller bearings typically have a higher radial load capacity than ball bearings, but a lower capacity and higher friction under axial loads. If the inner and outer races are misaligned, the bearing capacity often drops quickly compared to either a ball bearing or a spherical roller bearing.
As in all radial bearings, the outer load is continuously re-distributed among the rollers. Often fewer than half of the total number of rollers carry a significant portion of the load. The animation on the right shows how a static radial load is supported by the bearing rollers as the inner ring rotates.
Spherical roller
Spherical roller bearings have an outer race with an internal spherical shape. The rollers are thicker in the middle and thinner at the ends. Spherical roller bearings can thus accommodate both static and dynamic misalignment. However, spherical rollers are difficult to produce and thus expensive, and the bearings have higher friction than an ideal cylindrical or tapered roller bearing since there will be a certain amount of sliding between rolling elements and races.
Gear bearing
Gear bearings are similar to epicyclic gearing. They consist of a number of smaller 'satellite' gears which revolve around the center of the bearing along a track on the outsides of the internal and satellite gears, and on the inside of the external gear. The downside to this bearing is manufacturing complexity.
Tapered roller
Tapered roller bearings use conical rollers that run on conical races. Most roller bearings only take radial or axial loads, but tapered roller bearings support both radial and axial loads, and generally can carry higher loads than ball bearings due to greater contact area. Tapered roller bearings are used, for example, as the wheel bearings of most wheeled land vehicles. The downsides to this bearing are that, due to manufacturing complexities, tapered roller bearings are usually more expensive than ball bearings; additionally, under heavy loads the tapered roller acts like a wedge and bearing loads tend to eject the roller, and the force from the collar which keeps the roller in the bearing adds to bearing friction compared to ball bearings.
Needle roller
The needle roller bearing is a special type of roller bearing which uses long, thin cylindrical rollers resembling needles. Often the ends of the rollers taper to points, and these are used to keep the rollers captive, or they may be hemispherical and not captive but held by the shaft itself or a similar arrangement. Since the rollers are thin, the outside diameter of the bearing is only slightly larger than the hole in the middle. However, the small-diameter rollers must bend sharply where they contact the races, and thus the bearing fatigues relatively quickly.
CARB toroidal roller bearings
CARB bearings are toroidal roller bearings and similar to spherical roller bearings, but can accommodate both angular misalignment and also axial displacement. Compared to a spherical roller bearing, their radius of curvature is longer than a spherical radius would be, making them an intermediate form between spherical and cylindrical rollers. Their limitation is that, like a cylindrical roller, they do not locate axially. CARB bearings are typically used in pairs with a locating bearing, such as a spherical roller bearing. This non-locating bearing can be an advantage, as it can be used to allow a shaft and a housing to undergo thermal expansion independently.
Toroidal roller bearings were introduced in 1995 by SKF as "CARB bearings". The inventor behind the bearing was the engineer Magnus Kellström.
Configurations
The configuration of the races determine the types of motions and loads that a bearing can best support. A given configuration can serve multiple of the following types of loading.
Thrust loadings
Thrust bearings are used to support axial loads, such as vertical shafts. Common designs are Thrust ball bearings, spherical roller thrust bearings, tapered roller thrust bearings or cylindrical roller thrust bearings. Also non-rolling-element bearings such as hydrostatic or magnetic bearings see some use where particularly heavy loads or low friction is needed.
Radial loadings
Rolling-element bearings are often used for axles due to their low rolling friction. For light loads, such as bicycles, ball bearings are often used. For heavy loads and where the loads can greatly change during cornering, such as cars and trucks, tapered rolling bearings are used.
Linear motion
Linear motion roller-element bearings are typically designed for either shafts or flat surfaces. Flat surface bearings often consist of rollers and are mounted in a cage, which is then placed between the two flat surfaces; a common example is drawer-support hardware. Roller-element bearing for a shaft use bearing balls in a groove designed to recirculate them from one end to the other as the bearing moves; as such, they are called linear ball bearings or recirculating bearings.
Bearing failure
Rolling-element bearings often work well in non-ideal conditions, but sometimes minor problems cause bearings to fail quickly and mysteriously. For example, with a stationary (non-rotating) load, small vibrations can gradually press out the lubricant between the races and rollers or balls (false brinelling). Without lubricant the bearing fails, even though it is not rotating and thus is apparently not being used. For these sorts of reasons, much of bearing design is about failure analysis. Vibration based analysis can be used for fault identification of bearings.
There are three usual limits to the lifetime or load capacity of a bearing: abrasion, fatigue and pressure-induced welding.
Abrasion occurs when the surface is eroded by hard contaminants scraping at the bearing materials.
Fatigue results when a material becomes brittle after being repeatedly loaded and released. Where the ball or roller touches the race there is always some deformation, and hence a risk of fatigue. Smaller balls or rollers deform more sharply, and so tend to fatigue faster.
Pressure-induced welding can occur when two metal pieces are pressed together at very high pressure and they become one. Although balls, rollers and races may look smooth, they are microscopically rough. Thus, there are high-pressure spots which push away the bearing lubricant. Sometimes, the resulting metal-to-metal contact welds a microscopic part of the ball or roller to the race. As the bearing continues to rotate, the weld is then torn apart, but it may leave race welded to bearing or bearing welded to race.
Although there are many other apparent causes of bearing failure, most can be reduced to these three. For example, a bearing which is run dry of lubricant fails not because it is "without lubricant", but because lack of lubrication leads to fatigue and welding, and the resulting wear debris can cause abrasion. Similar events occur in false brinelling damage. In high speed applications, the oil flow also reduces the bearing metal temperature by convection. The oil becomes the heat sink for the friction losses generated by the bearing.
ISO has categorised bearing failures in the standard ISO 15243.
Life calculation models
The life of a rolling bearing is expressed as the number of revolutions or the number of operating hours at a given speed that the bearing is capable of enduring before the first sign of metal fatigue (also known as spalling) occurs on the race of the inner or outer ring, or on a rolling element. Calculating the endurance life of bearings is possible with the help of so-called life models. More specifically, life models are used to determine the bearing size – since this must be sufficient to ensure that the bearing is strong enough to deliver the required life under certain defined operating conditions.
Under controlled laboratory conditions, however, seemingly identical bearings operating under identical conditions can have different individual endurance lives. Thus, bearing life cannot be calculated based on specific bearings, but is instead expressed in statistical terms, referring to populations of bearings. All information with regard to load ratings is then based on the life that 90% of a sufficiently large group of apparently identical bearings can be expected to attain or exceed. This gives a clearer definition of the concept of bearing life, which is essential to calculate the correct bearing size. Life models can thus help to predict the performance of a bearing more realistically.
The prediction of bearing life is described in ISO 281 and the ANSI/American Bearing Manufacturers Association Standards 9 and 11.
The traditional life prediction model for rolling-element bearings uses the basic life equation:

$$L_{10} = \left(\frac{C}{P}\right)^{p}$$

Where:
$L_{10}$ is the 'basic life' (usually quoted in millions of revolutions) for a reliability of 90%, i.e. no more than 10% of bearings are expected to have failed
$C$ is the dynamic load rating of the bearing, quoted by the manufacturer
$P$ is the equivalent dynamic load applied to the bearing
$p$ is a constant: 3 for ball bearings, 4 for pure line contact and 3.33 for roller bearings
Basic life or $L_{10}$ is the life that 90% of bearings can be expected to reach or exceed. The median or average life, sometimes called Mean Time Between Failure (MTBF), is about five times the calculated basic rating life.
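As a concrete illustration, the following short Python sketch applies the basic life equation; it is an example added here, not part of any standard, and the load rating, applied load, and speed are hypothetical placeholder values rather than data for a real bearing.

def basic_rating_life_revolutions(C: float, P: float, p: float = 3.0) -> float:
    """L10 in millions of revolutions: L10 = (C / P) ** p.

    C -- dynamic load rating quoted by the manufacturer
    P -- equivalent dynamic load applied to the bearing
    p -- life exponent (3 for ball bearings, 3.33 for roller bearings)
    """
    return (C / P) ** p

def basic_rating_life_hours(C: float, P: float, rpm: float, p: float = 3.0) -> float:
    """Convert the basic rating life to operating hours at a constant speed."""
    return basic_rating_life_revolutions(C, P, p) * 1_000_000 / (rpm * 60)

# Hypothetical ball bearing: C = 35.1 kN, P = 4.5 kN, running at 1500 rpm.
print(f"L10  = {basic_rating_life_revolutions(35.1, 4.5):.0f} million revolutions")
print(f"L10h = {basic_rating_life_hours(35.1, 4.5, 1500):.0f} hours")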
Several factors, the 'ASME five factor model', can be used to further adjust the life depending upon the desired reliability, lubrication, contamination, etc.
The major implication of this model is that bearing life is finite, and reduces by a cube power of the ratio between design load and applied load. The model was developed in work published in 1924, 1947 and 1952 by Arvid Palmgren and Gustaf Lundberg in their paper Dynamic Capacity of Rolling Bearings; the model itself dates from 1924, the values of the constant from the post-war works. Higher values of the exponent may be seen both as a longer lifetime for a correctly-used bearing below its design load, and as the increased rate at which lifetime is shortened when the bearing is overloaded.
This model was recognised to have become inaccurate for modern bearings. Particularly owing to improvements in the quality of bearing steels, the mechanisms for how failures develop in the 1924 model are no longer as significant. By the 1990s, real bearings were found to give service lives up to 14 times longer than those predicted. An explanation was put forward based on fatigue life; if the bearing was loaded to never exceed the fatigue strength, then the Lundberg-Palmgren mechanism for failure by fatigue would simply never occur. This relied on homogeneous vacuum-melted steels, such as AISI 52100, that avoided the internal inclusions that had previously acted as stress risers within the rolling elements, and also on smoother finishes to bearing tracks that avoided impact loads. The constant $p$ now had values of 4 for ball and 5 for roller bearings. Provided that load limits were observed, the idea of a 'fatigue limit' entered bearing lifetime calculations. If the bearing was not loaded beyond this limit, its theoretical lifetime would be limited only by external factors, such as contamination or a failure of lubrication.
A new model of bearing life was put forward by FAG and developed by SKF as the Ioannides-Harris model. ISO 281:2000 first incorporated this model and ISO 281:2007 is based on it.
The concept of fatigue limit, and thus ISO 281:2007, remains controversial, at least in the US.
Generalized Bearing Life Model (GBLM)
In 2015, the SKF Generalized Bearing Life Model (GBLM) was introduced. In contrast to previous life models, GBLM explicitly separates surface and subsurface failure modes – making the model flexible to accommodate several different failure modes. Modern bearings and applications show fewer failures, but the failures that do occur are more linked to surface stresses. By separating surface from the subsurface, mitigating mechanisms can more easily be identified. GBLM makes use of advanced tribology models to introduce a surface distress failure mode function, obtained from the evaluation of surface fatigue. For the subsurface fatigue, GBLM uses the classical Hertzian rolling contact model. With all this, GBLM includes the effects of lubrication, contamination, and race surface properties, which together influence the stress distribution in the rolling contact.
In 2019, the Generalized Bearing Life Model was relaunched. The updated model offers life calculations also for hybrid bearings, i.e. bearings with steel rings and ceramic (silicon nitride) rolling elements. Even if the 2019 GBLM release was primarily developed to realistically determine the working life of hybrid bearings, the concept can also be used for other products and failure modes.
Constraints and trade-offs
All parts of a bearing are subject to many design constraints. For example, the inner and outer races are often complex shapes, making them difficult to manufacture. Balls and rollers, though simpler in shape, are small; since they bend sharply where they run on the races, the bearings are prone to fatigue. The loads within a bearing assembly are also affected by the speed of operation: rolling-element bearings may spin over 100,000 rpm, and the principal load in such a bearing may be momentum rather than the applied load. Smaller rolling elements are lighter and thus have less momentum, but smaller elements also bend more sharply where they contact the race, causing them to fail more rapidly from fatigue. Maximum rolling-element bearing speeds are often specified in 'nDm', which is the product of the mean diameter (in mm) and the maximum RPM. For angular contact bearings nDms over 2.1 million have been found to be reliable in high performance rocketry applications.
There are also many material issues: a harder material may be more durable against abrasion but more likely to suffer fatigue fracture, so the material varies with the application, and while steel is most common for rolling-element bearings, plastics, glass, and ceramics are all in common use. A small defect (irregularity) in the material is often responsible for bearing failure; one of the biggest improvements in the life of common bearings during the second half of the 20th century was the use of more homogeneous materials, rather than better materials or lubricants (though both were also significant). Lubricant properties vary with temperature and load, so the best lubricant varies with application.
Although bearings tend to wear out with use, designers can make tradeoffs of bearing size and cost versus lifetime. A bearing can last indefinitely—longer than the rest of the machine—if it is kept cool, clean, lubricated, is run within the rated load, and if the bearing materials are sufficiently free of microscopic defects. Cooling, lubrication, and sealing are thus important parts of the bearing design.
The needed bearing lifetime also varies with the application. For example, Tedric A. Harris reports in his Rolling Bearing Analysis on an oxygen pump bearing in the U.S. Space Shuttle which could not be adequately isolated from the liquid oxygen being pumped. All lubricants reacted with the oxygen, leading to fires and other failures. The solution was to lubricate the bearing with the oxygen. Although liquid oxygen is a poor lubricant, it was adequate, since the service life of the pump was just a few hours.
The operating environment and service needs are also important design considerations. Some bearing assemblies require routine addition of lubricants, while others are factory sealed, requiring no further maintenance for the life of the mechanical assembly. Although seals are appealing, they increase friction, and in a permanently sealed bearing the lubricant may become contaminated by hard particles, such as steel chips from the race or bearing, sand, or grit that gets past the seal. Contamination in the lubricant is abrasive and greatly reduces the operating life of the bearing assembly. Another major cause of bearing failure is the presence of water in the lubrication oil. Online water-in-oil monitors have been introduced in recent years to monitor the effects of both particles and the presence of water in oil and their combined effect.
Designation
Metric rolling-element bearings have alphanumerical designations, defined by ISO 15, to define all of the physical parameters. The main designation is a seven digit number with optional alphanumeric digits before or after to define additional parameters. Here the digits will be defined as: 7654321. Any zeros to the left of the last defined digit are not printed; e.g. a designation of 0007208 is printed 7208.
Digits one and two together are used to define the inner diameter (ID), or bore diameter, of the bearing. For diameters between 20 and 495 mm, inclusive, the designation is multiplied by five to give the ID; e.g. designation 08 is a 40 mm ID. For inner diameters less than 20 the following designations are used: 00 = 10 mm ID, 01 = 12 mm ID, 02 = 15 mm ID, and 03 = 17 mm ID. The third digit defines the "diameter series", which defines the outer diameter (OD). The diameter series, defined in ascending order, is: 0, 8, 9, 1, 7, 2, 3, 4, 5, 6. The fourth digit defines the type of bearing:
The fifth and sixth digit define structural modifications to the bearing. For example, on radial thrust bearings the digits define the contact angle, or the presence of seals on any bearing type. The seventh digit defines the "width series", or thickness, of the bearing. The width series, defined from lightest to heaviest, is: 7, 8, 9, 0, 1 (extra light series), 2 (light series), 3 (medium series), 4 (heavy series). The third digit and the seventh digit define the "dimensional series" of the bearing.
There are four optional prefix characters, here defined as A321-XXXXXXX (where the X's are the main designation), which are separated from the main designation with a dash. The first character, A, is the bearing class, which is defined, in ascending order: C, B, A. The class defines extra requirements for vibration, deviations in shape, the rolling surface tolerances, and other parameters that are not defined by a designation character. The second character is the frictional moment (friction), which is defined, in ascending order, by a number 1–9. The third character is the radial clearance, which is normally defined by a number between 0 and 9 (inclusive), in ascending order, however for radial-thrust bearings it is defined by a number between 1 and 3, inclusive. The fourth character is the accuracy ratings, which normally are, in ascending order: 0 (normal), 6X, 6, 5, 4, T, and 2. Ratings 0 and 6 are the most common; ratings 5 and 4 are used in high-speed applications; and rating 2 is used in gyroscopes. For tapered bearings, the values are, in ascending order: 0, N, and X, where 0 is 0, N is "normal", and X is 6X.
There are five optional characters that can be defined after the main designation: A, E, P, C, and T; these are tacked directly onto the end of the main designation. Unlike the prefix, not all of the designations must be defined. "A" indicates an increased dynamic load rating. "E" indicates the use of a plastic cage. "P" indicates that heat-resistant steel is used. "C" indicates the type of lubricant used (C1–C28). "T" indicates the degree to which the bearing components have been tempered (T1–T5).
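The bore-diameter rule described above lends itself to a small decoder. The following Python sketch is an illustration added here, not an official ISO 15 implementation; it applies only the bore-size rule from this section, ignores the type, series, prefix and suffix fields, and uses arbitrary example designations.

def bore_diameter_mm(designation: str) -> int:
    """Decode the bore (inner) diameter from digits one and two of a
    metric bearing designation, per the rule described above."""
    code = int(designation[-2:])                 # the last two printed digits
    small_bores = {0: 10, 1: 12, 2: 15, 3: 17}   # special cases below code 04
    if code in small_bores:
        return small_bores[code]
    return code * 5                              # codes 04-99 give 20-495 mm

for d in ("7208", "6205", "6301"):
    print(d, "->", bore_diameter_mm(d), "mm bore")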
While manufacturers follow ISO 15 for part number designations on some of their products, it is common for them to implement proprietary part number systems that do not correlate to ISO 15.
See also
References
Further reading
External links
Technical publication about bearing lubrication
NASA technical handbook Rolling-Element Bearing (NASA-RP-1105)
NASA technical handbook Lubrication of Machine Elements (NASA-RP-1126)
How rolling-element bearings work
Kinematic Models for Design Digital Library (KMODDL) - Movies and photos of hundreds of working mechanical-systems models at Cornell University. Also includes an e-book library of classic texts on mechanical design and engineering.
Damping and Stiffness Characteristics of Rolling Element Bearings - Theory and Experiment (PhD thesis, Paul Dietl, TU Vienna, 1997
Bearings (mechanical)
Tribology
ru:Подшипник#Подшипники качения | Rolling-element bearing | [
"Chemistry",
"Materials_science",
"Engineering"
] | 5,272 | [
"Tribology",
"Mechanical engineering",
"Materials science",
"Surface science"
] |
2,270,459 | https://en.wikipedia.org/wiki/Photostimulation | Photostimulation is the use of light to artificially activate biological compounds, cells, tissues, or even whole organisms. Photostimulation can be used to noninvasively probe various relationships between different biological processes, using only light. In the long run, photostimulation has the potential for use in different types of therapy, such as migraine headache. Additionally, photostimulation may be used for the mapping of neuronal connections between different areas of the brain by “uncaging” signaling biomolecules with light. Therapy with photostimulation has been called light therapy, phototherapy, or photobiomodulation.
Photostimulation methods fall into two general categories: one set of methods uses light to uncage a compound that then becomes biochemically active, binding to a downstream effector. For example, uncaging glutamate is useful for finding excitatory connections between neurons, since the uncaged glutamate mimics the natural synaptic activity of one neuron impinging upon another. The other major photostimulation method is the use of light to activate a light-sensitive protein such as rhodopsin, which can then excite the cell expressing the opsin.
Scientists have long postulated the need to control one type of cell while leaving those surrounding it untouched and unstimulated. Well-known scientific advancements such as the use of electrical stimuli and electrodes have succeeded in neural activation but fail to achieve the aforementioned goal because of their imprecision and inability to distinguish between different cell types. The use of optogenetics (artificial cell activation via the use of light stimuli) is unique in its ability to deliver light pulses in a precise and timely fashion. Optogenetics is somewhat bidirectional in its ability to control neurons. Channels can be either depolarized or hyperpolarized depending on the wavelength of light that targets them. For instance, the technique can be applied to channelrhodopsin cation channels to initiate neuronal depolarization and eventually activation upon illumination. Conversely, activity inhibition of a neuron can be triggered via the use of optogenetics as in the case of the chloride pump halorhodopsin which functions to hyperpolarize neurons.
Before optogenetics can be performed, however, the subject at hand must express the targeted channels. Natural and abundant in microbials, rhodopsins—including bacteriorhodopsin, halorhodopsin and channelrhodopsin—each have a different characteristic action spectrum which describes the set of colors and wavelengths that they respond to and are driven to function by.
It has been shown that channelrhodopsin-2, a monolithic protein containing a light sensor and a cation channel, provides electrical stimulation of appropriate speed and magnitude to activate neuronal spike firing. Recently, photoinhibition, the inhibition of neural activity with light, has become feasible with the application of molecules such as the light-activated chloride pump halorhodopsin to neural control. Together, blue-light activated channelrhodopsin-2 and the yellow light-activated chloride pump halorhodopsin enable multiple-color, optical activation and silencing of neural activity. (See also Photobiomodulation)
Methods
A caged protein is a protein that is activated in the presence of a stimulating light source. In most cases, photo-uncaging is the technique revealing the active region of a compound by the process of photolysis of the shielding molecule (‘cage’). However, uncaging the protein requires an appropriate wavelength, intensity, and timing of the light. Achieving this is possible due to the fact that the optical fiber may be modified to deliver specific amounts of light. In addition, short bursts of stimulation allow results similar to the physiological norm. The steps of photostimulation are time independent in that protein delivery and light activation can be done at different times. This is because the two steps are dependent on each other for activation of the protein.
Some proteins are innately photosensitive and function in the presence of light. Proteins known as opsins form the crux of the photosensitive proteins. These proteins are often found in the eye. In addition, many of these proteins function as ion channels and receptors. One example is when a certain wavelength of light is put onto certain channels, the blockage in the pore is relieved and allows ion transduction.
To uncage molecules, a photolysis system is required to cleave the covalent bond. An example system can consist of a light source (generally a laser or a lamp), a controller for the amount of light that enters, a guide for the light, and a delivery system. Often, the design function in such a way that a medium is met between the diffusing light that may cause additional, unwanted photolysis and light attenuation; both being significant problems with a photolysis system.
History
The idea of photostimulation as a method of controlling biomolecule function was developed in the 1970s. In 1971, two researchers, Walther Stoeckenius and Dieter Oesterhelt, discovered an ion pump known as bacteriorhodopsin, which functions in the presence of light. In 1978, J.F. Hoffman invented the term “caging”. Unfortunately, this term caused some confusion among scientists because it is often used to describe a molecule which is trapped within another molecule. It could also be confused with the “caged effect” in the recombination of radicals. Therefore, some authors decided to use the term “light-activated” instead of “caging”. Both terms are currently in use. The first “caged molecule” synthesized by Hoffman et al. at Yale was the caged precursor to ATP derivative 1.
Applications
Photostimulation is notable for its temporal precision, which may be used to obtain an accurate starting time of activation of caged effectors. In conjunction with caged inhibitors, the role of biomolecules at specific timepoints in an organism's lifecycle may be studied. A caged inhibitor of N-ethylmaleimide sensitive fusion protein (NSF), a key mediator of synaptic transmission, has been used to study the time dependency of NSF. Several other studies have effected action potential firing through use of caged neurotransmitters such as glutamate. Caged neurotransmitters, including photolabile precursors of glutamate, dopamine, serotonin, and GABA, are commercially available.
Signaling during mitosis has been studied using reporter molecules with a caged fluorophore, which is not phosphorylated if photolysis has not occurred. The advantage of this technique is that it provides a “snapshot” of kinase activity at specific timepoints rather than recording all activity since the reporter's introduction.
Calcium ions play an important signaling role, and controlling their release with caged channels has been extensively studied.
Unfortunately, not all organisms produce or hold sufficient amounts of opsins. Thus, the opsin gene must be introduced to target neurons if they are not already present in the organism of study. The addition and expression of this gene is sufficient for the use of optogenetics. Possible means of achieving this include the construction of transgenic lines containing the gene or acute gene transfer to a specific area or region within an individual. These methods are known as germline transgenesis and somatic gene delivery, respectively.
Optogenetics has shown significant promise in the treatment of a series of neurological disorders such as Parkinson's disease and epilepsy. Optogenetics has the potential to facilitate the manipulation and targeting of specific cell types or neural circuits, characteristics that are lacking in current brain stimulation techniques like DBS. At this point, the use of optogenetics in treating neural diseases has only been practically implemented in the field of neurobiology to reveal more about the mechanisms of specific disorders. Before the technique can be implemented to directly treat these disorders developments in other related fields such as gene therapy, opsin engineering, and optoelectronics must also make certain developments.
References
External links
Channelrhodopsin and halorhodopsin mediating photostimulation
Optogenetics Resource Center
Biochemistry methods
Diagnostic neurology | Photostimulation | [
"Chemistry",
"Biology"
] | 1,743 | [
"Biochemistry methods",
"Biochemistry"
] |
2,270,854 | https://en.wikipedia.org/wiki/Polyethylene%20naphthalate | Polyethylene naphthalate (poly(ethylene 2,6-naphthalate) or PEN) is a polyester derived from naphthalene-2,6-dicarboxylic acid and ethylene glycol. As such it is related to poly(ethylene terephthalate), but with superior barrier properties.
Production
Two major manufacturing routes exist for polyethylene naphthalate (PEN), i.e. an ester or an acid process, named according to whether the starting monomer is a diester or a diacid derivative, respectively. In both cases for PEN, the glycol monomer is ethylene glycol. Solid-state polymerization (SSP) of the melt-produced resin pellets is the preferred process to increase the average molecular weight of PEN.
Applications
Because it provides a very good oxygen barrier, it is well-suited for bottling beverages that are susceptible to oxidation, such as beer. It is also used in making high performance sailcloth.
Significant commercial markets have been developed for its application in textile and industrial fibers, films, and foamed articles, containers for carbonated beverages, water and other liquids, and thermoformed applications. It is also an emerging material for modern electronic devices.
PEN was the medium for Advanced Photo System film (discontinued in 2011).
PEN is used for manufacturing high performance fibers that have very high modulus and better dimensional stability than PET or Nylon fibers.
PEN is used as the substrate for most Linear Tape-Open (LTO) cartridges.
It also has been found to show excellent scintillation properties and is expected to replace classic plastic scintillators.
Benefits when compared to polyethylene terephthalate
The two condensed aromatic rings of PEN confer on it improvements in strength and modulus, chemical and hydrolytic resistance, gaseous barrier, thermal and thermo-oxidative resistance and ultraviolet (UV) light barrier resistance compared to polyethylene terephthalate (PET). PEN is intended as a PET replacement, especially when used as a substrate for flexible integrated circuits.
References
Polyesters
Plastics
ja:ポリエステル#ポリエチレンナフタレート | Polyethylene naphthalate | [
"Physics"
] | 477 | [
"Amorphous solids",
"Unsolved problems in physics",
"Plastics"
] |
2,270,861 | https://en.wikipedia.org/wiki/NGC%201705 | NGC 1705 is a peculiar lenticular galaxy and a blue compact dwarf galaxy (BCD) in the southern constellation of Pictor, positioned less than a degree to the east of Iota Pictoris, and is undergoing a starburst. With an apparent visual magnitude of 12.6 it requires a telescope to observe. It is estimated to be approximately 17 million light-years from the Earth, and is a member of the Dorado Group.
This is a relatively isolated galaxy, with its nearest neighbors being more than distant. However, its neutral hydrogen disk shows a significant amount of warp, suggesting that the outer gas is still settling into place. The mass models of the galaxy suggest the dominant source of mass is a dark matter halo. It has a super star cluster located near the galactic center, and shows strong galactic winds. Designated NGC1705–1, this cluster has a maximum radius of and is Myr old.
The major starburst activity is happening at the core of the galaxy, within the central , and this is providing the main ionizing source out to distance of or more. Over the last 10 million years it has added worth of stars. The younger stars in the galaxy with an age below a billion years have an estimated and are mainly concentrated near the center, while the older star populations have and form a more extended distribution. The total mass of neutral hydrogen in the galaxy is estimated at .
References
External links
NGC 1705 at ESA/Hubble
Lenticular galaxies
Peculiar galaxies
Dorado Group
Pictor
1705
16282 | NGC 1705 | [
"Astronomy"
] | 309 | [
"Pictor",
"Constellations"
] |
2,270,916 | https://en.wikipedia.org/wiki/Duru%E2%80%93Kleinert%20transformation | The Duru–Kleinert transformation, named after İsmail Hakkı Duru and Hagen Kleinert, is a mathematical method for solving path integrals of physical systems with singular potentials, which is necessary for the solution of all atomic path integrals due to the presence of Coulomb potentials (singular like ).
The Duru–Kleinert transformation replaces the diverging time-sliced path integral of Richard Feynman (which thus does not exist) by a well-defined convergent one.
Papers
H. Duru and H. Kleinert, Solution of the Path Integral for the H-Atom, Phys. Letters B 84, 185 (1979)
H. Duru and H. Kleinert, Quantum Mechanics of H-Atom from Path Integrals, Fortschr. d. Phys. 30, 401 (1982)
H. Kleinert, Path Integrals in Quantum Mechanics, Statistics, Polymer Physics, and Financial Markets 3. ed., World Scientific (Singapore, 2004) (read book here)
Quantum mechanics | Duru–Kleinert transformation | [
"Physics"
] | 212 | [
"Theoretical physics",
"Quantum mechanics",
"Quantum physics stubs"
] |
2,270,963 | https://en.wikipedia.org/wiki/NGC%201999 | NGC 1999, also known as The Cosmic Keyhole, is a dust-filled bright nebula with a vast hole of empty space represented by a black patch of sky, as can be seen in the photograph. It is a reflection nebula, and shines from the light of the variable star V380 Orionis.
It was previously believed that the black patch was a dense cloud of dust and gas that blocked light which would normally pass through, called a dark nebula. Analysis of this patch by the Herschel infrared telescope (October 9, 2009), which is capable of penetrating such dense cloud material, still showed only black space. This led to the belief that either the cloud material was immensely dense or an unexplained phenomenon had been detected.
With support from ground-based observations made using the submillimeter bolometer cameras on the Atacama Pathfinder Experiment radio telescope (November 29, 2009) and the Mayall (Kitt Peak) and Magellan telescopes (December 4, 2009), it was determined that the patch looks black not because it is an extremely dense pocket of gas, but because it is truly empty. The exact cause of this phenomenon is still being investigated, although it has been hypothesized that narrow jets of gas from some of the young stars in the region punctured the sheet of dust and gas, and that powerful radiation from a nearby mature star may have helped to create the hole. Researchers believe this discovery should lead to a better understanding of the entire star forming process.
It is located 1,500 light-years away from Earth in the constellation Orion. HH 1/2, the first recognized Herbig-Haro Object, is located near NGC 1999.
References
External links
1999
Reflection nebulae
Orion molecular cloud complex
Orion (constellation) | NGC 1999 | [
"Astronomy"
] | 368 | [
"Constellations",
"Orion (constellation)"
] |
2,271,084 | https://en.wikipedia.org/wiki/Query%20by%20Example | Query by Example (QBE) is a database query language for relational databases. It was devised by Moshé M. Zloof at IBM Research during the mid-1970s, in parallel to the development of SQL. It is the first graphical query language, using visual tables where the user would enter commands, example elements and conditions. Many graphical front-ends for databases use the ideas from QBE today. Originally limited only for the purpose of retrieving data, QBE was later extended to allow other operations, such as inserts, deletes and updates, as well as creation of temporary tables.
The motivation behind QBE is that a parser can convert the user's actions into statements expressed in a database manipulation language, such as SQL. Behind the scenes, it is this statement that is actually executed. A suitably comprehensive front-end can minimize the burden on the user to remember the finer details of SQL, and it is easier and more productive for end-users (and even programmers) to select tables and columns by selecting them rather than typing in their names.
In the context of information retrieval, QBE has a somewhat different meaning. The user can submit a document, or several documents, and ask for "similar" documents to be retrieved from a document database [see search by multiple examples]. Similarity search is based on comparing document vectors (see Vector Space Model).
QBE represents seminal work in end-user development, frequently cited in research papers as an early example of this topic.
Currently, QBE is supported in several relational database front ends, notably Microsoft Access, which implements "Visual Query by Example", as well as Microsoft SQL Server Enterprise Manager. It is also implemented in several object-oriented databases (e.g. in db4o).
QBE is based on the logical formalism called tableau query, although QBE adds some extensions to that, much like SQL is based on the relational algebra.
Example
An example using the Suppliers and Parts database is given here to illustrate how QBE works.
As a general technique
The term also refers to a general technique influenced by Zloof's work whereby only items with search values are used to "filter" the results. It provides a way for a software user to perform queries without having to know a query language (such as SQL). The software can automatically generate the queries for the user (usually behind the scenes). Here are some examples:
Example Form B:
.....Name: Bob
..Address:
.....City:
....State: TX
..Zipcode:
Resulting SQL:
SELECT * FROM Contacts WHERE Name='Bob' AND State='TX';
Note how blank items do not generate SQL terms. Since "Address" is blank, there is no clause generated for it.
For Example Form C:
.....Name:
..Address:
.....City: Sampleton
....State:
..Zipcode: 12345
Resulting SQL:
SELECT * FROM Contacts WHERE City='Sampleton' AND Zipcode='12345';
More advanced versions of QBE have other comparison operator options, often via a pull-down menu, such as "Contains", "Not Contains", "Starts With", "Greater-Than", and so forth.
Another approach to text comparisons is to allow one or more wildcard characters. For example, if an asterisk is designated as a wildcard character in a particular system, then searching for last names using "Rob*" would return (match) last names such as "Rob", "Robert", "Robertson", "Roberto", etc.
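A minimal sketch of how such a front end might generate SQL from a filled-in form is shown below (in Python). The Contacts table, the field names and the use of '*' as the wildcard are illustrative assumptions, and a production implementation would rely on parameterized queries rather than string building:

def qbe_to_sql(table, form):
    # Build a SELECT using only the fields the user actually filled in.
    clauses, params = [], []
    for column, value in form.items():
        if not value:                     # blank fields generate no SQL term
            continue
        if "*" in value:                  # treat '*' as a wildcard
            clauses.append(f"{column} LIKE ?")
            params.append(value.replace("*", "%"))
        else:
            clauses.append(f"{column} = ?")
            params.append(value)
    where = " AND ".join(clauses) if clauses else "1=1"
    return f"SELECT * FROM {table} WHERE {where};", params

sql, params = qbe_to_sql("Contacts", {"Name": "Bob", "Address": "", "City": "", "State": "TX"})
print(sql)     # SELECT * FROM Contacts WHERE Name = ? AND State = ?;
print(params)  # ['Bob', 'TX']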
See also
CRUD
Microsoft Query by Example
GraphQL, a QBE for JSON front-ends.
QBIC
References
Sources
External links
Relational model
Query languages
Human–computer interaction
1970s software
IBM software | Query by Example | [
"Engineering"
] | 802 | [
"Human–computer interaction",
"Human–machine interaction"
] |
2,271,116 | https://en.wikipedia.org/wiki/EFA%20%28mobile%20bridge%29 | The EFA or Engin de Franchissement de l'Avant (forward crossing apparatus) is a field-deployable river crossing vehicle, used by combat engineers in the French Army. Unlike a bridge layer, which transports a bridge that is deployed off of the host vehicle, the EFA itself is a combined pontoon bridge and amphibious vehicle, enabling much more rapid redeployment of the mobile bridge structure and an additional use as a ferry (at the cost of being useless in returning to service damaged bridges). When needed, multiple EFA's can be combined in a series to create a traditional pontoon bridge. It has been built since 1989 by Chaudronnerie et Forges d'Alsace (CEFA), located in Soultz-sous-Forêts in the Bas-Rhin.
Characteristics
A single EFA in ferry configuration has a length of 34.55 m and a loading surface of 96 m2, and is ready in less than five minutes to transport up to 70 tons of goods. In one hour it can make about 10 to 12 crossings over a gap of 100 m and eight to 10 crossings over a gap of 200 m. Two EFAs coupled together at the ramp allow the carriage of up to 150 tons of cargo, and a floating bridge of four EFAs, for example, offers in less than 10 minutes a 100 m long crossing with an estimated flow of 200 vehicles per hour.
The EFA is capable of astern propulsion, thus allowing fording without having to reorient the direction of the vehicle to the opposite shore which allows for more fluid ferry operations and rapid bridge assembly.
The crew consists of four people:
An equipment commander
A driver
A pilot
A crewman
Predecessor
The EFA is the heir to the first self-propelled bridging ferry, invented in 1955 by the French military engineer and general Jean Gillois (born in Châteaubriant in 1909). Called the "Amphibious Bac" or "Gillois", it entered service with the French army in 1963. A version modified by EWK was successively adopted by the German, British and, to a limited extent, American militaries, and was used by Israel in the 1973 Yom Kippur War. At the time of its introduction it was able to carry vehicles up to a maximum weight of 25 tons, and while configured as a bridge it could support loads of about 50 tons. It took between 45 and 65 minutes to form a bridge 100 meters long. Such equipment allows an armed force to avoid the heavy and bulky convoys of barges brought in by road, which are vulnerable to enemy attack.
Users
: Contract for 10 units of more than 60 million euros signed in 2006 for EFA X1 motorized with Friedrichshafen MTU of 760 hp. Delivery from September 2008
: 39 units built for the French army since 1989, in active service since 1993. As of December 31, 2013, 30 units were in service with an average age of 25 years. They are assigned to the following units:
3rd engineer regiment,
6th engineer regiment,
19th engineer regiment,
School of Engineering,
Champagne Training Park.
The three EFA sections are theoretically equipped with four groups of two vehicles, i.e. eight EFAs per regiment. In practice, by 2014 it would seem that there were only four EFAs per regiment, the rest being distributed between the Engineering School, the Training Park and the industrial contractor holding the maintenance contract.
See also
Bailey bridge
Pontoon bridge
Armoured vehicle-launched bridge
References
External links
Description page on the site of the French Ministry of Defence
Military vehicles of France
Portable bridges
Military bridging equipment
Military vehicles introduced in the 1980s | EFA (mobile bridge) | [
"Engineering"
] | 739 | [
"Military bridging equipment",
"Military engineering"
] |
2,271,290 | https://en.wikipedia.org/wiki/Salvia%20splendens | Salvia splendens, the scarlet sage, is a tender herbaceous perennial plant native to Brazil, growing at elevation where it is warm year-round and with high humidity. The wild form, rarely seen in cultivation, reaches tall. Smaller cultivars are very popular as bedding plants, seen in shopping malls and public gardens all over the world.
Taxonomy
Salvia splendens was first described and named in 1822. At that time it was given the common name "Lee's scarlet sage". Before dwarf forms were selected, an early Dutch selection named 'Van Houttei' was introduced, and it is still popular in the horticulture trade.
Description
The native type is rarely used or described, though it grew from in height. Its leaves are in even, elliptical arrangements, 7 × 5 cm, with dentate margin and long petioles. It may branch, where upper branches are finely hairy and lower parts hairless. Erect spikes of flowers sprout from the centre of the plant in groups of 2 to 6 from each leaf node; scarlet, tubular or bell-shaped, 35 mm long, with two lobes towards the apex; the upper lobe is 13 mm long. It flowers a good part of summer and autumn.
Cultivation
It is widely grown as an ornamental plant, with a large number of cultivars selected for different colours ranging from white to dark purple. It is a subtropical species that does not survive freezing temperatures, but can be grown in cold climates as an annual plant. The most common selections are the dwarf sizes that go by names such as 'Sizzler' and 'Salsa', and are planted en masse in gardens and malls. 'Van Houttei' reaches in height. The various types typically have red flowers.
Cultivars
Named cultivars include:
S. splendens 'Alba', with white flowers
'Atropurpurea', with dark violet to purple flowers
'Atrosanguinea', flowers dark red
'Bicolor', flowers white and red
'Bruantii', small, with red flowers
'Compacta', small, flowers in dense racemes, white or red
'Grandiflora', large, with large red flowers
'Issanchon', small, with white flowers striped pink to red
'Nana', an early-flowering cultivar, with red blossoms
'Scarlet Pygmy', a very dwarf, early flowering seed race with intense scarlet blossoms
'Semperflorens', continuous flowering
'Souchetii', small, with white or red flowers
'St. John's Fire', dwarf plants with dense, abundant, scarlet, early-flowering, long-lasting blossoms
'Violacea', flowers dark violet to purple.
The cultivars 'Vanguard' and 'Van-Houttei' have gained the Royal Horticultural Society's Award of Garden Merit.
References
External links
UC Berkeley: Observations on Salvia splendens
Interview with Daniel Siebert on S. splendens and S. divinorum
splendens
Flora of Brazil
Medicinal plants
Plants used in traditional Chinese medicine
Garden plants of South America
Plants that can bloom all year round | Salvia splendens | [
"Biology"
] | 662 | [
"Plants that can bloom all year round",
"Plants"
] |
2,271,535 | https://en.wikipedia.org/wiki/List%20of%20Dacian%20plant%20names | This is a list of plant names in Dacian, surviving from ancient botanical works such as Dioscorides' De Materia Medica (abb. MM) and Pseudo-Apuleius' Herbarius (abb. Herb.). Dacian plant names are one of the primary sources left to us for studying the Dacian language, an ancient language of South Eastern Europe. This list also includes a Bessian plant name and a Moesian plant name, both neighboring Daco-Thracian tribes, as well as a clear Albanoid name. According to linguist Vladimir I. Georgiev, the suffixes -dela, -dil(l)a, -zila and -tilia indicate names of medicinal plants.
See also
Dacian language
List of Dacian words
List of Romanian words of possible pre-Roman origin
List of Dacian names
References
External links
Sorin Olteanu's Project: Linguae Thraco-Daco-Moesorum - The Dacian Plant Names
Dacian language
Dacian plant names
Flora of Romania | List of Dacian plant names | [
"Biology"
] | 221 | [
"Lists of biota",
"Lists of plants",
"Plants"
] |
2,271,552 | https://en.wikipedia.org/wiki/Polidocanol | Polidocanol is a local anaesthetic and antipruritic component of ointments and bath additives. It relieves itching caused by eczema and dry skin. It has also been used to treat varicose veins, hemangiomas, and vascular malformations. It is formed by the ethoxylation of dodecanol.
Sclerotherapy
Polidocanol is also used as a sclerosant, an irritant injected to treat varicose veins, under the trade names Asclera, Aethoxysklerol and Varithena. Polidocanol causes fibrosis inside varicose veins, occluding the lumen of the vessel, and reducing the appearance of the varicosity.
The FDA has approved polidocanol injections for the treatment of small varicose (less than 1 mm in diameter) and reticular veins (1 to 3 mm in diameter). Polidocanol works by damaging the cell lining of blood vessels, causing them to close and eventually be replaced by other types of tissue. Polidocanol in the form of Varithena injected in the greater saphenous vein can cause the eruption of varicose and spider veins throughout the lower leg. This procedure should be done with caution and with the knowledge that the appearance of the leg may be forever compromised.
References
Fatty alcohols
Polyethers
Polymers
Antipruritics
Vascular surgery
Non-ionic surfactants | Polidocanol | [
"Chemistry",
"Materials_science"
] | 309 | [
"Polymers",
"Polymer chemistry"
] |
2,271,614 | https://en.wikipedia.org/wiki/Arrayed%20waveguide%20grating | Arrayed waveguide gratings (AWG) are commonly used as optical (de)multiplexers in wavelength division multiplexed (WDM) systems. These devices are capable of multiplexing many wavelengths into a single optical fiber, thereby increasing the transmission capacity of optical networks considerably.
The devices are based on a fundamental property of light: waves of different wavelengths propagate through a linear medium without interfering with one another. This means that, if each channel in an optical communication network uses light of a slightly different wavelength, then the light from many of these channels can be carried by a single optical fiber with negligible crosstalk between the channels. The AWGs are used to multiplex channels of several wavelengths onto a single optical fiber at the transmission end and are also used as demultiplexers to retrieve individual channels of different wavelengths at the receiving end of an optical communication network.
Operation of AWG devices
Conventional silica-based AWGs, as illustrated in the figure above, are planar lightwave circuits fabricated by depositing layers of doped and undoped silica on a silicon substrate.
The AWG consists of a number of input (1) and output (5) couplers, free-space propagation regions (2) and (4), and the grating waveguides (3). The grating comprises many waveguides, each longer than its neighbour by a constant length increment (ΔL).
Light is coupled into the device via an optical fiber (1) connected to the input port.
Light diffracting out of the input waveguide at the coupler/slab interface propagates through the free-space region (2) and illuminates the grating with a Gaussian distribution.
Each wavelength of light coupled to the grating waveguides (3) undergoes a constant change of phase attributed to the constant length increment in grating waveguides.
The diffracted light from each waveguide within the grating undergoes constructive interference, resulting in a refocusing of the light at the output waveguides (5). The spatial position of the output channels is wavelength-dependent, determined by the array phase shift induced by the constant length increment in the grating waveguides.
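As a rough numerical illustration of the wavelength-dependent phase increment described above, the following sketch (in Python) computes the phase accumulated over one length increment for three closely spaced channels; the effective index, the length increment and the channel wavelengths are assumed example values, not a real device design:

import math

n_eff = 1.45        # assumed effective refractive index of the silica waveguides
delta_L = 63.0e-6   # assumed constant length increment between waveguides, in metres

for wavelength in (1.549e-6, 1.550e-6, 1.551e-6):   # three example WDM channels, in metres
    # phase gained over one extra length increment, reduced modulo 2*pi
    phase_step = (2 * math.pi * n_eff * delta_L / wavelength) % (2 * math.pi)
    print(f"{wavelength * 1e9:.1f} nm: phase step = {phase_step:.4f} rad")

# Because the phase step differs slightly from channel to channel, the wavefront
# leaving the array is tilted by a wavelength-dependent angle, refocusing each
# channel onto a different output waveguide.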
References
Optical devices
Photonics
Fiber optics
Multiplexing | Arrayed waveguide grating | [
"Materials_science",
"Engineering"
] | 478 | [
"Glass engineering and science",
"Optical devices"
] |
2,271,697 | https://en.wikipedia.org/wiki/Giuseppe%20Melfi | Giuseppe Melfi (June 11, 1967) is an Italo-Swiss mathematician who works on practical numbers and modular forms.
Career
He gained his PhD in mathematics in 1997 at the University of Pisa. After some time spent at the University of Lausanne during 1997-2000, Melfi was appointed at the University of Neuchâtel, as well as at the University of Applied Sciences Western Switzerland and at the local University of Teacher Education.
Work
His major contributions are in the field of practical numbers. This prime-like sequence of numbers is known for having asymptotic behavior and other distribution properties similar to those of the sequence of primes. Melfi proved two conjectures, both raised in 1984, one of which is the analogue for practical numbers of the Goldbach conjecture: every even number is a sum of two practical numbers. He also proved that there exist infinitely many triples of practical numbers of the form .
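A brute-force sketch of this Goldbach-type statement, workable only for small numbers, is given below (in Python). It uses the standard definition that n is practical when every positive integer smaller than n can be written as a sum of distinct divisors of n:

def is_practical(n):
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    sums = {0}
    for d in divisors:                     # accumulate all subset sums of the divisors
        sums |= {s + d for s in sums}
    return all(m in sums for m in range(1, n))

practical = [n for n in range(1, 200) if is_practical(n)]
print(practical[:10])                      # [1, 2, 4, 6, 8, 12, 16, 18, 20, 24]
for even in range(2, 101, 2):              # every even number up to 100
    assert any(even - p in practical for p in practical if p <= even)
print("every even number up to 100 is a sum of two practical numbers")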
Another notable contribution has been in an application of the theory of modular forms, where he found new Ramanujan-type identities for the sum-of-divisor functions. His seven new identities extended the ten other identities found by Ramanujan in 1913. In particular he found the remarkable identity
where σ(n) is the sum of the divisors of n and σ3(n) is the sum of the third powers of the divisors of n.
Among other problems in elementary number theory, he is the author of a theorem that allowed him to find a 5328-digit number which was for a while the largest known primitive weird number.
In applied mathematics his research interests include probability and simulation.
Selected research publications
See also
Applications of randomness
References
External links
Giuseppe Melfi's home page
The proof of conjectures on practical numbers and the joint work with Paul Erdős on Zentralblatt.
Tables of practical numbers compiled by Giuseppe Melfi
Academic research query for "Giuseppe Melfi"
1967 births
20th-century Italian mathematicians
21st-century Italian mathematicians
Living people
Number theorists
Mathematicians from Sicily
Academic staff of the University of Neuchâtel | Giuseppe Melfi | [
"Mathematics"
] | 407 | [
"Number theorists",
"Number theory"
] |
2,271,744 | https://en.wikipedia.org/wiki/Business%20rule%20management%20system | A BRMS or business rule management system is a software system used to define, deploy, execute, monitor and maintain the variety and complexity of decision logic that is used by operational systems within an organization or enterprise. This logic, also referred to as business rules, includes policies, requirements, and conditional statements that are used to determine the tactical actions that take place in applications and systems.
Overview
A BRMS includes, at minimum:
A repository, allowing decision logic to be externalized from core application code
Tools, allowing both technical developers and business experts to define and manage decision logic
A runtime environment, allowing applications to invoke decision logic managed within the BRMS and execute it using a business rules engine
The top benefits of a BRMS include:
Reduced or removed reliance on IT departments for changes in live systems, although QA and rules testing would still be needed in any enterprise system.
Increased control over implemented decision logic for compliance and better business management including audit logs, impact simulation and edit controls.
The ability to express decision logic with increased precision, using a business vocabulary syntax and graphical rule representations (decision tables, decision models, trees, scorecards and flows)
Improved efficiency of processes through increased decision automation.
Some disadvantages of the BRMS include:
Extensive subject matter expertise can be required for vendor specific products. In addition to appropriate design practices (such as Decision Modeling), technical developers must know how to write rules and integrate software with existing systems
Poor rule harvesting approaches can lead to long development cycles, though this can be mitigated with modern approaches like the Decision Model and Notation (DMN) standard.
Integration with existing systems is still required and a BRMS may add additional security constraints.
Reduced reliance on the IT department may never materialize, owing to the continual introduction of new business rule considerations or changes to the object model.
The coupling of a BRMS vendor application to the business application may be too tight to replace with another BRMS vendor application, which can lead to cost-benefit issues. The emergence of the DMN standard has mitigated this to some degree.
Most BRMS vendors have evolved from rule engine vendors to provide business-usable software development lifecycle solutions, based on declarative definitions of business rules executed in their own rule engine. BRMSs are increasingly evolving into broader digital decisioning platforms that also incorporate decision intelligence and machine learning capabilities.
However, some vendors come from a different approach (for example, they map decision trees or graphs to executable code). Rules in the repository are generally mapped to decision services that are naturally fully compliant with the latest SOA, Web Services, or other software architecture trends.
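A minimal sketch of this repository/engine split appears below (in Python); the class names and the loan-approval rules are invented for illustration and do not correspond to any particular vendor's product or API:

class RuleRepository:
    """Holds named (condition, action) rules outside the application code."""
    def __init__(self):
        self.rules = []

    def add(self, name, condition, action):
        self.rules.append((name, condition, action))

class RuleEngine:
    """Evaluates the repository's rules against facts supplied by a calling application."""
    def __init__(self, repository):
        self.repository = repository

    def decide(self, facts):
        return [(name, action(facts))
                for name, condition, action in self.repository.rules
                if condition(facts)]

# Hypothetical loan-approval decision service built from two rules.
repo = RuleRepository()
repo.add("reject_low_score", lambda f: f["credit_score"] < 600, lambda f: "reject")
repo.add("refer_large_amount", lambda f: f["amount"] > 100000, lambda f: "refer to underwriter")

engine = RuleEngine(repo)
print(engine.decide({"credit_score": 580, "amount": 150000}))
# [('reject_low_score', 'reject'), ('refer_large_amount', 'refer to underwriter')]

Because the rules live in the repository rather than in the calling code, they can be edited, audited and redeployed without rebuilding the application, which is the central point of a BRMS.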
Related software approaches
In a BRMS, a representation of business rules maps to a software system for execution. A BRMS therefore relates to model-driven engineering, such as the model-driven architecture (MDA) of the Object Management Group (OMG). It is no coincidence that many of the related standards come under the OMG banner.
A BRMS is a critical component for Enterprise Decision Management as it allows for the transparent and agile management of the decision-making logic required in systems developed using this approach.
Associated standards
The OMG Decision Model and Notation standard is designed to standardize elements of business rules development, especially decision table representations. There is also a standard for a Java runtime API for rule engines, JSR-94.
OMG Business Motivation Model (BMM): A model of how strategies, processes, rules, etc. fit together for business modeling
OMG SBVR: Targets business constraints as opposed to automating business behavior
OMG Production Rule Representation (PRR): Represents rules for production rule systems that make up most BRMS' execution targets
OMG Decision Model and Notation (DMN): Represents models of decisions, which are typically managed by a BRMS
RuleML provides a family of rule mark-up languages that could be used in a BRMS and with W3C RIF it provides a family of related rule languages for rule interchange in the W3C Semantic Web stack
Many standards, such as domain-specific languages, define their own representation of rules, requiring translations to generic rule engines or their own custom engines.
Other domains, such as PMML, also define rules.
See also
BPMS
DBMS
RDMS
Business rules
Business rules approach
Business rules engine
Decision Model and Notation
References
External links
Workshop summary paper: Six Views on the Business Rule Management System
Data modeling
Rule engines
Decision support systems
Expert systems | Business rule management system | [
"Technology",
"Engineering"
] | 906 | [
"Decision support systems",
"Data modeling",
"Data engineering",
"Information systems",
"Expert systems"
] |
2,272,102 | https://en.wikipedia.org/wiki/Protoplanetary%20nebula | A protoplanetary nebula or preplanetary nebula (PPN, plural PPNe) is an astronomical object which is at the short-lived episode during a star's rapid evolution between the late asymptotic giant branch (LAGB) phase and the subsequent planetary nebula (PN) phase. A PPN emits strongly in infrared radiation, and is a kind of reflection nebula. It is the second-from-the-last high-luminosity evolution phase in the life cycle of intermediate-mass stars (1–8 ).
Naming
The name protoplanetary nebula is an unfortunate choice because of possible confusion with the unrelated concept of protoplanetary disks. It is a consequence of the older term planetary nebula, which was chosen because early astronomers looking through telescopes found these objects similar in appearance to the gas giants Neptune and Uranus. To avoid any possible confusion, the term preplanetary nebula, which does not overlap with any other discipline of astronomy, has been suggested. Protoplanetary nebulae are often referred to as post-AGB stars, although that category also includes stars that will never ionize their ejected matter.
Evolution
Beginning
During the late asymptotic giant branch (LAGB) phase, when mass loss reduces the hydrogen envelope's mass to around 10−2 for a core mass of 0.60 , a star will begin to evolve towards the blue side of the Hertzsprung–Russell diagram. When the hydrogen envelope has been further reduced to around 10−3 , the envelope will have been so disrupted that it is believed further significant mass loss is not possible. At this point, the effective temperature of the star, T*, will be around 5,000 K and it is defined to be the end of the LAGB and the beginning of the PPN.
Protoplanetary nebula phase
During the ensuing protoplanetary nebula phase, the central star's effective temperature will continue rising as a result of the envelope's mass loss as a consequence of the hydrogen shell's burning. During this phase, the central star is still too cool to ionize the slow-moving circumstellar shell ejected during the preceding AGB phase. However, the star does appear to drive high-velocity, collimated winds which shape and shock this shell, and almost certainly entrain slow-moving AGB ejecta to produce a fast molecular wind. Observations and high-resolution imaging studies from 1998 to 2001, demonstrate that the rapidly evolving PPN phase ultimately shapes the morphology of the subsequent PN. At a point during or soon after the AGB envelope detachment, the envelope shape changes from roughly spherically symmetric to axially symmetric. The resultant morphologies are bipolar, knotty jets and Herbig–Haro-like "bow shocks". These shapes appear even in relatively "young" PPNe.
End
The PPN phase continues until the central star reaches around 30,000 K and is hot enough (producing enough ultraviolet radiation) to ionize the circumstellar nebula (ejected gases), at which point it becomes a kind of emission nebula called a planetary nebula. This transition must take place in less than around 10,000 years, or else the density of the circumstellar envelope will fall below the PN formation density threshold of around 100 particles per cm3 and no PN will result; such a case is sometimes referred to as a 'lazy planetary nebula'.
Recent conjectures
Bujarrabal et al. (2001)
found that the "interacting stellar winds" model of Kwok
et al. (1978) of radiatively-driven winds is insufficient to account for their CO observations of PPN fast winds which imply high momentum and energy inconsistent with that model. Complementarily, theorists (Soker & Livio 1994;
Reyes-Ruiz & Lopez 1999; Soker & Rappaport 2000; Blackman, Frank & Welch 2001) investigated whether accretion disk scenarios, similar to models used to explain jets from active galactic nuclei and young stars, could account for both the point symmetry and the high degree of collimation seen in many PPN jets. In such models applied to the PPN context, the accretion disk forms through binary interactions. Magneto-centrifugal launching from the disk surface is then a way to convert gravitational energy into the kinetic energy of a fast wind in these systems. If the accretion-disk jet paradigm is correct and magneto-hydrodynamics (MHD) processes mediate the energetics and collimation of PPN outflows, then they will also determine physics of the shocks in these flows, and this can be confirmed with high-resolution pictures of the emission regions that go with the shocks.
See also
Bipolar nebula
Bipolar outflow
List of protoplanetary nebulae
Planetary nebula
Notes
The late asymptotic giant branch begins at the point on the asymptotic giant branch (AGB) where a star is no longer observable in visible light and becomes an infrared object.
References
Nebulae
Stellar evolution | Protoplanetary nebula | [
"Physics",
"Astronomy"
] | 1,047 | [
"Nebulae",
"Astronomical objects",
"Astrophysics",
"Stellar evolution"
] |
2,272,185 | https://en.wikipedia.org/wiki/Millman%27s%20theorem | In electrical engineering, Millman's theorem (or the parallel generator theorem) is a method to simplify the solution of a circuit. Specifically, Millman's theorem is used to compute the voltage at the ends of a circuit made up of only branches in parallel.
It is named after Jacob Millman, who proved the theorem.
Explanation
Let e1, e2, …, en be the generators' voltages and R1, R2, …, Rn the resistances of the branches containing them. Millman's theorem then states that the voltage V at the ends of the circuit is given by

V = (e1/R1 + e2/R2 + … + en/Rn) / (1/R1 + 1/R2 + … + 1/Rn)

That is, V equals the sum of the short-circuit currents of the branches divided by the sum of the conductances of the branches.
It can be proved by considering the circuit as a single supernode. Then, according to Ohm and Kirchhoff, the voltage between the ends of the circuit is equal to the total current entering the supernode divided by the total equivalent conductance of the supernode. The total current is the sum of the currents in each branch. The total equivalent conductance of the supernode is the sum of the conductance of each branch, since all the branches are in parallel.
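A minimal numerical sketch of the formula (in Python; the branch values are arbitrary example data):

def millman_voltage(branches):
    # branches: iterable of (e_k, r_k) pairs, one per parallel branch
    numerator = sum(e / r for e, r in branches)     # sum of short-circuit currents
    denominator = sum(1 / r for _, r in branches)   # sum of branch conductances
    return numerator / denominator

# Three branches: 10 V in series with 2 ohms, 5 V with 4 ohms, and a plain 8 ohm resistor
print(millman_voltage([(10.0, 2.0), (5.0, 4.0), (0.0, 8.0)]))   # about 7.14 V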
Branch variations
Current sources
One method of deriving Millman's theorem starts by converting all the branches to current sources (which can be done using Norton's theorem). A branch that is already a current source is simply not converted. In the expression above, this is equivalent to replacing the term in the numerator of the expression above with the current of the current generator, where the kth branch is the branch with the current generator. The parallel conductance of the current source is added to the denominator as for the series conductance of the voltage sources. An ideal current source has zero conductance (infinite resistance) and so adds nothing to the denominator.
Ideal voltage sources
If one of the branches is an ideal voltage source, Millman's theorem cannot be used, but in this case the solution is trivial, the voltage at the output is forced to the voltage of the ideal voltage source. The theorem does not work with ideal voltage sources because such sources have zero resistance (infinite conductance) so the summation of both the numerator and denominator are infinite and the result is indeterminate.
See also
Analysis of resistive circuits
References
Bakshi, U.A.; Bakshi, A.V., Network Analysis, Technical Publications, 2009 .
Ghosh, S.P.; Chakraborty, A.K., Network Analysis and Synthesis, Tata McGraw-Hill, 2010 .
Singh, S.N., Basic Electrical Engineering, PHI Learning, 2010 .
Wadhwa, C.L., Network Analysis and Synthesis, New Age International.
Electrical engineering
Circuit theorems | Millman's theorem | [
"Physics",
"Engineering"
] | 563 | [
"Electrical engineering",
"Equations of physics",
"Circuit theorems",
"Physics theorems"
] |
2,272,236 | https://en.wikipedia.org/wiki/Student%20Space%20Exploration%20%26%20Technology%20Initiative | The Student Space Exploration & Technology Initiative (SSETI) is a unique project put into execution by students from different universities spread over European countries. In collaboration with space industry they aim to build microsatellites together.
Most universities do not have capabilities to build their own complete satellite. The SSETI aims to combine different academic capabilities to realise pan-European student missions. Space projects, which are beyond the local existing capabilities, will be made possible through the fragmentation and redistribution of many small, locally achievable tasks. Ambitious projects, such as a lunar lander, may be realised by this distributed development.
SWARM
SWARM is the youngest project in line. It will consist of a nanosatellite that releases femtosatellites into low Earth orbit, from where scientific and/or technical experiments will be conducted.
ESEO
The ESEO (European Student Earth Orbiter) project was started as part of the SSETI project family, in collaboration with the European Space Agency who ran the project independently from SSETI. ESEO was launched from Vandenberg AFB on 3 December 2018 as a part of Spaceflight's SmallSat Express and was expected to function for 6 to 18 months.
The Mission Control Center was located in Forlì (Italy).
The ESEO project concluded in December 2020, after operating for two years.
ESMO
The ESMO (European Student Moon Orbiter) project was started as part of the SSETI project family, in collaboration with the European Space Agency who now run the project independently from SSETI.
SSETI Express
See also
Mozhayets-5
ESA
SSETI Express Satellite
SEDS
References
External links
SSETI Website
A3E Website (SPAIN/Catalan Division)
Viktoria Schöneich "No, we are not calling E.T., we're going to the Moon! (YouTube Video) SpaceUp Stuttgart 2012
Proposed spacecraft
Space organizations
European Space Agency | Student Space Exploration & Technology Initiative | [
"Astronomy"
] | 384 | [
"Astronomy organizations",
"Space organizations"
] |
2,272,402 | https://en.wikipedia.org/wiki/Natural%20region | A natural region (landscape unit) is a basic geographic unit. Usually, it is a region which is distinguished by its common natural features of geography, geology, and climate.
From the ecological point of view, the naturally occurring flora and fauna of the region are likely to be influenced by its geographical and geological factors, such as soil and water availability, in a significant manner. Thus most natural regions are homogeneous ecosystems. Human impact can be an important factor in the shaping and destiny of a particular natural region.
Main terms
The concept "natural region" is a large basic geographical unit, like the vast boreal forest region. The term may also be used generically, like in alpine tundra, or specifically to refer to a particular place.
The term is particularly useful where there is no corresponding or coterminous official region. The Fens of eastern England, the Thai highlands, and the Pays de Bray in Normandy, are examples of this. Others might include regions with particular geological characteristics, like badlands, such as the Bardenas Reales, an upland massif of acidic rock, or The Burren, in Ireland.
By Country
Natural regions of Burundi
Natural regions of Chile
Natural regions of Colombia
Natural regions of Germany
Natural regions of Venezuela
References
External links
Natural regions of Texas
Alberta's Natural Regions
Natural regions in Valencia
Ecology | Natural region | [
"Biology"
] | 266 | [
"Ecology"
] |
2,272,644 | https://en.wikipedia.org/wiki/Daina%20Taimi%C5%86a | Daina Taimiņa (born August 19, 1954) is a Latvian mathematician, retired adjunct associate professor of mathematics at Cornell University, known for developing a way of modeling hyperbolic geometry with crocheted objects.
Education and career
Taimiņa received all of her formal education in Riga, Latvia, where in 1977 she graduated summa cum laude from the University of Latvia and completed her graduate work in Theoretical Computer Science (with thesis advisor Prof. Rūsiņš Mārtiņš Freivalds) in 1990. As one of the restrictions of the Soviet system at that time, a doctoral thesis was not allowed to be defended in Latvia, so she defended hers in Minsk, receiving the title of Candidate of Sciences. This explains the fact that Taimiņa's doctorate was formally issued by the Institute of Mathematics of the National Academy of Sciences of Belarus. After Latvia regained independence in 1991, Taimiņa received her higher doctoral degree (doktor nauk) in mathematics from the University of Latvia, where she taught for 20 years.
Daina Taimiņa joined the Cornell Math Department in December 1996.
Combining her interests in mathematics and crocheting, she is one of 24 mathematicians and artists who make up the Mathemalchemy Team.
Hyperbolic crochet
While attending a geometry workshop at Cornell University about teaching geometry for university professors in 1997, Taimiņa was presented with a fragile paper model of a hyperbolic plane, made by the professor in charge of the workshop, David Henderson (designed by geometer William Thurston.) It was made «out of thin, circular strips of paper taped together». She decided to make more durable models, and did so by crocheting them. The first night after first seeing the paper model at the workshop she began experimenting with algorithms for a crocheting pattern, after visualising hyperbolic planes as exponential growth.
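The exponential growth she exploited can be sketched in a few lines (in Python; the starting stitch count and the increase ratio are arbitrary example values, not Taimiņa's actual pattern). Working one extra stitch after every k ordinary stitches multiplies each row's length by roughly (1 + 1/k), so the circumference grows exponentially with distance from the foundation row, which is the defining behaviour of a hyperbolic plane:

k = 5              # assumed: one increase after every 5 stitches
stitches = 20      # assumed: stitches in the foundation row
for row in range(1, 11):
    stitches += stitches // k     # work an extra stitch after every k stitches
    print(f"row {row:2d}: {stitches} stitches")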
The following fall, Taimiņa was scheduled to teach a geometry class at Cornell. She was determined to find what she thought was the best possible way to teach her class. So while she, together with her family, spent the preceding summer at a tree farm in Pennsylvania, she also spent her days by the pool watching her two daughters learning how to swim whilst simultaneously making a classroom set of models of the hyperbolic plane. This was the first ever made from yarn and crocheting.
The models made a significant difference to her students, according to themselves. They said they "liked the tactile way of exploring hyperbolic geometry" and that it helped them acquire experiences that helped them move on in said geometry. This was what Taimina herself had been missing when first learning about hyperbolic planes and is also what has made her models so effective, as these models have later become the preferred way of explaining hyperbolic space within geometry.
In a TEDxRiga talk, Taimiņa tells the story of how the need for a visual, intuitive way of understanding hyperbolic planes spurred her toward inventing crocheted geometry models. In the talk she also gives a basic introduction to hyperbolic geometry using her models, and recounts some of the negative responses she initially received from people who viewed crocheting as unfitting in mathematics.
In the foreword to Taimiņa's book Crocheting Adventures with Hyperbolic Planes, mathematician William Thurston, the designer of the paper model of hyperbolic planes, called Taimiņa's models «deceptively interesting». He attributed much of his view of them to how they make possible a tactile, non-symbolic, cognitively holistic way of understanding non-Euclidean geometry, a highly abstract and complex part of mathematics.
Taimiņa has led several workshops at Cornell University for college geometry instructors together with professor David Henderson (of the aforementioned 1997 workshop and who later became her husband).
Crocheted mathematical models later appeared in three geometry textbooks they wrote together, of which the most popular is Experiencing Geometry: Euclidean and non-Euclidean with History. In 2020 Taimiņa published the 4th edition of this book as the open-source Experiencing Geometry.
An article about Taimiņa's innovation in New Scientist was spotted by the Institute For Figuring, a small non-profit organisation based in Los Angeles, and she was invited to speak about hyperbolic space and its connections with nature to a general audience which included artists and movie producers. Taimiņa's initial lecture and following other public presentations sparked great interest in this new tactile way of exploring concepts of hyperbolic geometry, making this advanced topic accessible to wide audiences. Originally creating purely mathematical models, Taimiņa soon became popular as a fiber artist and public presenter for general audiences of ages five and up. In June 2005, her work was first shown as art in an exhibition "Not The Knitting You Know" at Eleven Eleven Sculpture Space, an art gallery in Washington, D.C. Since then she has participated regularly in various shows in galleries in US, UK, Latvia, Italy, Belgium, Ireland, Germany. Her artwork is in the collections of several private collectors, colleges and universities, and has been included in the American Mathematical Model Collection of the Smithsonian Museum, Cooper–Hewitt, National Design Museum, and Institut Henri Poincaré.
Her work and its far-flung influence has received wide interest in media. It has been written about in 'Knit Theory' in Discover magazine and in The Times, explaining how a hyperbolic plane can be crocheted by increasing the number of stitches:
Margaret Wertheim interviewed Daina Taimiņa and David Henderson for Cabinet Magazine
Later, based on Taimiņa's work, the Institute For Figuring published a brochure "A Field Guide to Hyperbolic Space". In 2005 the IFF decided to incorporate Taimiņa's ideas and approach of explaining hyperbolic space in their mission of popularizing mathematics, and curated an exhibition at Machine Project gallery, which was the subject of a piece in the Los Angeles Times.
Taimiņa's way of exploring hyperbolic space via crochet and connections with nature, combatting math phobia, was adapted by Margaret Wertheim in her talks and became highly successful in the IFF-curated Hyperbolic Crochet Coral Reef project.
Books
Taimiņa's book "Crocheting Adventures with Hyperbolic Planes" (A K Peters, Ltd., 2009, ) won the 2009 Bookseller/Diagram Prize for Oddest Title of the Year.
It also won the 2012 Euler Book Prize of the Mathematical Association of America.
Taimiņa also contributed to David W. Henderson's book Differential Geometry: A Geometric Introduction (Prentice Hall, 1998) and, with Henderson, wrote Experiencing Geometry: Euclidean and Non-Euclidean with History (Prentice Hall, 2005).
See also
Mathematics and fiber arts
Notes
References
David W. Henderson, Daina Taimina Experiencing Geometry: Euclidean and non-Euclidean with History, Pearson Prentice Hall, 2005 Experiencing Geometry
Further reading
External links
Personal web page at Cornell University
TEDxRiga talk Crocheting Hyperbolic Planes: Daina Taimiņa at TEDxRiga
1954 births
Topologists
Hyperbolic geometers
Latvian emigrants to the United States
Women mathematicians
Cornell University faculty
20th-century Latvian mathematicians
Living people
Latvian women writers
Mathematical artists
Mathematics popularizers
Riga State Gymnasium No.1 alumni
University of Latvia alumni
Latvian women scientists
21st-century Latvian mathematicians | Daina Taimiņa | [
"Mathematics"
] | 1,470 | [
"Topologists",
"Topology"
] |
2,272,662 | https://en.wikipedia.org/wiki/Charles%20J.%20Joachain | Charles J. Joachain is a Belgian physicist.
Biography
Born in Brussels on 9 May 1937, Charles J. Joachain obtained his Ph.D. in Physics in 1963 at the Université Libre de Bruxelles (Free University of Brussels). From 1964 to 1965 he was a Postdoctoral Fellow of the Belgian American Educational Foundation at the University of California, Berkeley and the Lawrence Berkeley Laboratory, and from 1965 to 1966 a Research Physicist at these institutions. At the Université Libre de Bruxelles he was appointed chargé de cours associé in 1965, chargé de cours in 1968, professeur extraordinaire in 1971 and professeur ordinaire in 1978. He was chairman of the Department of Physics in 1980 and 1981. He was also appointed professor at the Université Catholique de Louvain in 1984. In 2002, he became professeur ordinaire émérite (Emeritus Professor) at the Université Libre de Bruxelles and professeur honoraire at the Université Catholique de Louvain.
Professor Joachain has been a visiting professor in several universities and laboratories in Europe and the United States, in particular at the University of California, Berkeley and the Lawrence Berkeley Laboratory, the Université Pierre et Marie Curie (Paris VI), the University of Rome “La Sapienza” and the Max Planck Institute for Quantum Optics in Garching.
Research activities
The research activities of Professor Joachain concern two areas of theoretical physics:
1) Quantum collision theory: electron and positron collisions with atomic systems, atom-atom collisions, nuclear reactions, high-energy hadron collisions with nuclei.
2) High-intensity laser-atom interactions: multiphoton ionization, harmonic generation, laser-assisted atomic collisions, attophysics, relativistic effects in laser-atom interactions.
Publications
Professor Joachain has published five books :
1) "Quantum Collision Theory", North Holland, Amsterdam (1975), 2d edition (1979); 3d edition (1983).
2) "Physics of Atoms and Molecules" (with B.H. Bransden), Longman, London (1983); 2d edition, Prentice Hall-Pearson (2003).
3) "Quantum Mechanics" (with B.H. Bransden), Longman, London (1989), 2d edition, Prentice Hall- Pearson (2000).
4) "Theory of Electron-Atom Collisions. Part I. Potential Scattering" (with P.G. Burke), Plenum Press, New York (1995).
5) "Atoms in Intense Laser Fields" (with N.J. Kylstra and R.M. Potvliege), Cambridge University Press (2012).
He has co-edited four books :
1) "Atomic and Molecular Physics of Controlled Thermonuclear Fusion"(with D.E. Post), Plenum, New York (1983).
2) "Photon and Electron Collisions with Atoms and Molecules"(with P.G. Burke), Plenum, New York (1997).
3) "Atoms, Solids and Plasmas in Super-Intense Laser Fields" (with D. Batani, S. Martellucci and A.N. Chester), Kluwer Academic-Plenum, New-York (2001).
4) "Atoms and Plasmas in Super-Intense Laser Fields" (with D. Batani and S. Martellucci), Conference Proceedings, Volume 88, Italian Physical Society, Bologna (2004).
He is also the author of one hundred and forty-seven research articles and forty-five review articles in theoretical physics, devoted mainly to quantum collision theory with applications to atomic, nuclear and high-energy processes and to the theory of high-intensity laser-atom interactions.
Distinctions and prizes
Joachain has received many scientific distinctions and prizes, in particular the Prix Louis Empain in 1963, the Alexander von Humboldt Prize in 1998 and the Blaise Pascal Medal for Physics of the European Academy of Sciences in 2012. He was President of the Belgian Physical Society from 1987 to 1989 and of the “Institut des Hautes Etudes” of Belgium from 2006 to 2011. He has been a Fellow of the Institute of Physics (UK) since 1974, a Fellow of the American Physical Society since 1977 and a Doctor Honoris Causa of the University of Durham since 1989. He is a member of the Royal Academy of Science, Letters and Fine Arts of Belgium (President, 2015–16), of the Academia Europaea and of the European Academy of Sciences.
References
1937 births
Living people
Belgian nuclear physicists
Free University of Brussels (1834–1969) alumni
University of California, Berkeley alumni
Science teachers
Belgian science writers
Fellows of the Institute of Physics
Theoretical physicists
Fellows of the American Physical Society | Charles J. Joachain | [
"Physics"
] | 984 | [
"Theoretical physics",
"Theoretical physicists"
] |
2,272,780 | https://en.wikipedia.org/wiki/Paperweight | A paperweight is a small solid object heavy enough, when placed on top of papers, to keep them from blowing away in a breeze or from moving under the strokes of a painting brush (as with Chinese calligraphy). While any object, such as a stone, can serve as a paperweight, decorative paperweights of metal, glass, jade or other material are also produced, either by individual artisans or factories.
In the West, the decorative paperweights are usually in limited editions, and are collected as works of fine glass art, some of which are exhibited in museums. First produced in about 1845, particularly in France, such decorative paperweights declined in popularity before undergoing a revival in the mid-twentieth century.
Basic features
Decorative glass paperweights have a flat or slightly concave base, usually polished but sometimes frosted, cut in one of several variations (e.g. star-cut bases have a multi-pointed star, while a diamond cut base has grooves cut in a criss-cross pattern), although a footed weight has a flange in the base. The ground on which the inner parts rest may be clear or colored, made of unfused sand, or resemble lace (latticinio). The domed top is usually faceted or cut and made of lead glass and may be coated with one or more thin layers of colored glass, and have windows cut through it to reveal the interior motif. The exact shape or profile of the dome varies from one artist or factory to another, but in fine examples will act as a lens that, as one moves the weight about, attractively varies the inner design's appearance. A magnifying glass is often used to gain appreciation of the fine detail of the work within. In a modern piece, an identifying mark and date are imperative.
Paperweights are made by individual artisans or in factories where many artists and technicians collaborate; both may produce inexpensive as well as "collector" weights.
Workmanship, design, rarity, and condition determine a paperweight's value: its glass should not have a yellow or greenish cast, and there should be no unintentional asymmetries, or unevenly spaced or broken elements. Visible flaws, such as bubbles, striations and scratches lessen the value.
Antique paperweights, of which perhaps 10,000 or so survive (mostly in museums), generally appreciate steadily in value; as of August 2018 the record price was the $258,500 paid in 1990 for an antique French weight.
History
In China, the paperweight is as old as paper itself, having already existed in the Han dynasty (202 BC – 220 AD), while its predecessor, used for holding down floor mats, existed in the Warring States period (475–221 BC).
Western paperweights started in the "classic" years between 1845 and 1860 primarily in three French factories named Baccarat, Saint-Louis and Clichy. Together, they made between 15,000 and 25,000 weights in the classic period. Weights (mainly of lesser quality) were also made in the United States, United Kingdom, and elsewhere, though Bacchus (UK) and New England Glass Company (US) produced some that equaled the best of the French. Modern weights have been made from about 1950 to the present.
In the US, Charles Kaziun started in 1940 to produce buttons, paperweights, inkwells and other bottles, using lamp-work of elegant simplicity. In Scotland, the pioneering work of Paul Ysart from the 1930s onward preceded a new generation of artists such as William Manson, Peter McDougall, Peter Holmes and John Deacons. A further impetus to reviving interest in paperweights was the publication of Evangeline Bergstrom's book, Old Glass Paperweights, the first of a new genre.
A number of small studios appeared in the mid-20th century, particularly in the US. These may have several to some dozens of workers with various levels of skill cooperating to produce their own distinctive line. Notable examples are Lundberg Studios, Orient and Flume, Correia Art Glass, St. Clair, Lotton, and Parabelle Glass.
Starting in the late 1960s and early 1970s, artists such as Francis Whittemore, Paul Stankard, his former assistant Jim D'Onofrio, Chris Buzzini, Delmo and daughter Debbie Tarsitano, Victor Trabucco and sons, Gordon Smith, Rick Ayotte and his daughter Melissa, the father and son team of Bob and Ray Banford, and Ken Rosenfeld began breaking new ground and were able to produce fine paperweights rivaling anything produced in the classic period.
Types of glass paperweights
Collectors may specialize in one of several types of paperweights, but more often they wind up with an eclectic mix.
Millefiori (Italian—'thousand flowers') paperweights contain thin cross-sections of cylindrical composite canes made from colored rods and usually resemble little flowers, although they can be designed after anything, even letters and dates. These are usually made in a factory setting. They exist in many variations such as scattered, patterned, close concentric or carpet ground. Sometimes the canes are formed into a sort of upright tuft shaped like a mushroom that is encased in the dome. The year of manufacture is sometimes enclosed in one of the canes.
Lampwork paperweights have objects such as flowers, fruit, butterflies or animals constructed by shaping and working bits of colored glass with a gas burner or torch and assembling them into attractive compositions, which are then incorporated into the dome. This is a form particularly favored by studio artists. The objects are often stylized, but may be highly realistic.
Sulphide paperweights have an encased cameo-like medallion or portrait plaque made from a special ceramic that is able to reproduce very fine detail. These are known as incrustations, cameo incrustations, or sulphides. They often are produced to commemorate some person or event. From the late 1700s through the end of the 1900s, an amazing variety of glass objects, including paperweights, were made with incrustations. The finest collection of incrustations ever assembled was by Paul Jokelson, collector, author and founder of the Paperweight Collectors' Association. A part of his collection was gifted to the Corning Museum of Glass, with the remaining portion being sold in London in the 1990s.
Most paperweights, which are considered works of art, use one of the above techniques; millefiori, lampwork or sulphide — all techniques that had been around long before the advent of paperweights. A fourth technique, a crimp flower, usually a rose, originated in the Millville, New Jersey area in the first decade of the twentieth century. Often called a Millville rose, these weights range from simple folk art to fine works of art, depending on the maker.
Fine weights not made with any of the major techniques include swirls, marbries and crowns. Swirl paperweights have opaque rods of two or three colors radiating like a pinwheel from a central millefiori floret. A similar style, the marbrie, is a paperweight that has several bands of color close to the surface that descend from the apex in a looping pattern to the bottom of the weight. Crown paperweights have twisted ribbons, alternately colored and white filigree which radiate from a central millefiori floret at the top, down to converge again at the base. This was first devised in the Saint Louis factory and remains popular today.
Miniature weights have a diameter of less than approximately , and magnums have a diameter greater than about .
California-style paperweights are made by "painting" the surface of the dome with colored molten glass (torchwork), and manipulated with picks or other tools. They may also be sprayed while hot with various metallic salts to achieve an iridescent look.
Victorian portrait and advertising paperweights were dome glass paperweights first made in Pittsburgh, Pennsylvania using a process patented in 1882 by William H. Maxwell. The portrait paperweights contained pictures of ordinary people reproduced on a milk glass disk and encased within clear glass. This same process was also used to produce paperweights with the owner's name encased or an advertisement of a business or product. Pittsburgher Albert A. Graeser patented a different process for making advertising paperweights in 1892. The Graeser process involved sealing an image to the underside of a rectangular glass blank using a milk glass or enamel-like glaze. Many paperweights of the late 19th century are marked either J. N. Abrams or Barnes and Abrams and may list either the 1882 Maxwell or 1892 Graeser patent date. It has been theorized that Barnes and Abrams did not actually manufacture advertising paperweights for their customers, but instead subcontracted the actual manufacturing task out to Pittsburgh-area glasshouses. The Paperweight Collectors Association Annual Bulletins published in 2000, 2001 and 2002 describe these in detail.
Bohemian paperweights were particularly popular in Victorian times. Large engraved or cut hollow spheres of ruby glass were a common form.
Museum collections
The United States has a number of museums exhibiting outstanding paperweight collections. Many collectors consider the finest of these to be the Arthur Rubloff collection at the Art Institute of Chicago, which expanded its exhibition in 2012. The Bergstrom-Mahler Museum in Neenah, Wisconsin, exhibits the Evangeline Bergstrom collection. The Corning Museum of Glass in Corning, New York, exhibits the Amory Houghton collection. The Yelverton Paperweight Centre in Devon, England, a collection of over 1,000 paperweights, closed in 2013.
Another museum with a notable exhibition of outstanding American paperweights is in the Museum of American Glass at the Wheaton Arts and Cultural Center in Millville, New Jersey. In 1998, Henry Melville Fuller donated 330 twentieth-century paperweights to the Currier Museum of Art in Manchester, New Hampshire.
Paperweight Collectors
There are many paperweight collectors worldwide. Several collectors' associations hold national or regional conventions, and sponsor activities such as tours, lectures, and auctions. Famous collectors include the literary figures Colette, Oscar Wilde and Truman Capote. Empress Eugenie (Napoleon III's wife), Empress Carlotta (wife of Maximilian I of Mexico) and Farouk, King of Egypt were also avid collectors. The collecting histories of Rubloff, Bergstrom, and Houghton were similar. They had two things in common—a passion for their collecting, and the privilege of having sufficient financial resources to build extensive collections of very rare and expensive weights. Another famous collector was Lothar-Günther Buchheim, the German author and painter, best known for his novel Das Boot. His collection of about 3,000 paperweights can be seen at his museum in Germany—Museum der Phantasie—in Bernried, Bavaria, Starnberger See.
In May 1953, collector Paul Jokelson founded the Paperweight Collectors Association (PCA), the world's first collecting group dedicated to glass paperweights. Interest grew rapidly, and by May 1954 membership had risen to 280 and the PCA published its first bulletin. The PCA held its first convention in May 1961 in New York City, with 100 members in attendance. In September 1968, Paul Jokelson published the first PCA newsletter. In September 1995, the PCA entered the digital era, going online with the PCA, Inc. website. In December 2010 the PCA Facebook page was created, connecting casual observers, aficionados, artists, and collectors and broadening appreciation of the art form. Today membership spans the globe.
PCA members receive a newsletter four times a year and a printed annual bulletin. The annual bulletin contains up-to-date research on the great paperweight makers of the 19th century and on the leading artists working today. The PCA holds a convention biennially, where collectors, artists, dealers and scholars from around the world meet. Conventions include demonstrations by leading glass artists, presentations by paperweight scholars and artists, and displays of some of the world's finest paperweights.
See also
Glass museums and galleries
Snow globe
Marble (toy)
References
Further reading
Dunlop, Paul H. (2009) The Dictionary of Glass Paperweights
Dunlop, Paul H. (1991) The Jokelson Collection of Cameo Incrustation
Reilly, Pat, (1994) Paperweights: The Collector's Guide to Identifying, Selecting, and Enjoying New and Vintage Paperweights
Selman, Lawrence H. (1992) All About Paperweights
Jargstorf, Sibylle (1997) Paperweights.
Stankard, Paul J. (2007) No Green Berries or Leaves: The Creative Journey of an Artist in Glass
External links
Collecting
Paper
Glass art
Glass production
Weights | Paperweight | [
"Physics",
"Materials_science",
"Engineering"
] | 2,719 | [
"Glass engineering and science",
"Glass production",
"Weights",
"Physical objects",
"Matter"
] |
2,272,790 | https://en.wikipedia.org/wiki/Equivalent%20series%20inductance | Equivalent series inductance (ESL) is an effective inductance that is used to describe the inductive part of the impedance of certain electrical components.
Overview
The theoretical treatment of devices such as capacitors and resistors tends to assume they are ideal or "perfect" devices, contributing only capacitance or resistance to the circuit. However, all physical devices are connected to a circuit through conductive leads and paths, which contain inherent, usually unwanted, inductance. This means that physical components contain some inductance in addition to their other properties.
An easy way to deal with these inherent inductances in circuit analysis is by using a lumped element model to express each physical component as a combination of an ideal component and a small inductor in series, the inductor having a value equal to the inductance present in the non-ideal, physical device.
Effects
Ideally, the impedance of a capacitor falls with increasing frequency at 20 dB/decade. However, due partly to the inductance of the connections and partly to non-ideal characteristics of the capacitor material, real capacitors also have inductive properties whose impedance rises with frequency at 20 dB/decade. At the self-resonant frequency the total impedance is at a minimum; above it, the parasitic series inductance of the capacitor dominates.
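A worked example can make the crossover concrete. The following Python sketch (not from the source; the part values are assumed purely for illustration) models a real capacitor as a series R-L-C network and locates the self-resonant frequency f = 1/(2π√(LC)), below which the capacitive term dominates and above which the ESL dominates:

```python
import numpy as np

# Hypothetical part values, chosen only for illustration: a 100 nF capacitor
# with 1 nH of equivalent series inductance (ESL) and 10 mOhm of ESR.
C = 100e-9   # farads
L = 1e-9     # henries (ESL)
R = 0.01     # ohms (ESR)

f = np.logspace(3, 9, 1000)                      # 1 kHz to 1 GHz
w = 2 * np.pi * f
Z = np.sqrt(R**2 + (w * L - 1.0 / (w * C))**2)   # series R-L-C impedance magnitude

f_res = 1.0 / (2 * np.pi * np.sqrt(L * C))       # self-resonant frequency
print(f"self-resonant frequency: {f_res / 1e6:.1f} MHz")   # about 15.9 MHz here
print(f"impedance at that minimum: about {R * 1000:.0f} mOhm (set by the ESR)")
```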
See also
Equivalent series resistance (ESR)
References
Electrical parameters
Capacitors | Equivalent series inductance | [
"Physics",
"Engineering"
] | 298 | [
"Physical quantities",
"Capacitors",
"Capacitance",
"Electrical engineering",
"Electrical parameters"
] |
2,273,245 | https://en.wikipedia.org/wiki/Pittsburgh%20toilet | A Pittsburgh toilet, or Pittsburgh potty, is a basement toilet configuration commonly found in the area of Pittsburgh in the United States. It consists of an ordinary flush toilet with no surrounding walls. Most of these toilets are paired with a crude basement shower apparatus and large sink, which often doubles as a laundry room.
Origin
The most popular explanation for the Pittsburgh toilet is related to Pittsburgh's status as a major industrial city in the 20th century. According to this explanation, toilets such as these were said to be used by steelworkers and miners who, grimy from the day's labor, could use an exterior door to enter the basement directly from outside and use the basement's shower and toilet before heading upstairs.
Alternatively, they may have served to prevent sewage backups from flooding the living areas of homes. As sewage backups tend to flood the lowest fixture in a residence, a Pittsburgh toilet would be the fixture to overflow, containing the sewage leak in the basement.
References
External links
Pittsburgh Post-Gazette article mentions Pittsburgh toilet
Pittsburgh Magazine article on the "Pittsburgh Potty"
Toilets
History of Pittsburgh
Culture of Pittsburgh
Working-class culture in Pennsylvania | Pittsburgh toilet | [
"Biology"
] | 232 | [
"Excretion",
"Toilets"
] |
2,273,293 | https://en.wikipedia.org/wiki/Federation%20of%20Astronomical%20Societies | The Federation of Astronomical Societies (FAS) is an international union of astronomical societies formed in 1974 and will be celebrating its 50th anniversary in 2024. Its motto is "Supporting UK Astronomy", and there is also one member society from Spain. As of November 2021, it has over 200 member societies.
FAS publishes a newsletter 6 times a year, which is sent to its member societies, and holds in-person conventions as well as events online. The FAS is run by a council of elected volunteers.
A major benefit for local societies is to be able to arrange Public Liability Insurance from a policy which is shared between the members, greatly reducing the policy fee.
Conventions
The FAS online convention in April 2021 was addressed by Lord Martin Rees, the UK Astronomer Royal. Another speaker was George Tahu, the lead program executive for Mars 2020, speaking from Washington, D.C.
The Autumn 2021 Convention was the first national in-person astronomy event in the UK since 2019, due to COVID. It took place at the National Space Centre in Leicester on Saturday 13 November.
The 2022 convention was held on 12 November in Oxford. The theme was "Women in Astronomy", and the keynote speaker was Dame Jocelyn Bell Burnell.
See also
List of astronomical societies
References
External links
The Federation of Astronomical Societies
The Irish Federation of Astronomical Societies (IFAS)
Federations
Astronomy organizations
Astronomy societies
Astronomy in the United Kingdom
Scientific organizations established in 1974 | Federation of Astronomical Societies | [
"Astronomy"
] | 294 | [
"Astronomy societies",
"Astronomy stubs",
"Astronomy organizations",
"Astronomy organization stubs"
] |
2,273,378 | https://en.wikipedia.org/wiki/Arene%20substitution%20pattern | Arene substitution patterns are part of organic chemistry IUPAC nomenclature and pinpoint the position of substituents other than hydrogen in relation to each other on an aromatic hydrocarbon.
Ortho, meta, and para substitution
In ortho-substitution, two substituents occupy positions next to each other, which may be numbered 1 and 2. In the diagram, these positions are marked R and ortho.
In meta-substitution, the substituents occupy positions 1 and 3 (corresponding to R and meta in the diagram).
In para-substitution, the substituents occupy the opposite ends (positions 1 and 4, corresponding to R and para in the diagram).
The toluidines serve as an example for these three types of substitution.
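Because the names depend only on how far apart the two substituted positions lie around the six-membered ring, the relationship can be expressed as a simple rule. The following Python sketch (an illustration, not part of the source) maps a ring separation of 1, 2, or 3 to ortho, meta, or para (and 0 to ipso, discussed below):

```python
# A toy helper (not from the source): classify the relationship between two
# substituent positions on a benzene ring, with the carbons numbered 1-6.
def substitution_pattern(pos_a, pos_b):
    separation = abs(pos_a - pos_b) % 6
    separation = min(separation, 6 - separation)  # shortest path around the ring
    return {0: "ipso", 1: "ortho", 2: "meta", 3: "para"}[separation]

print(substitution_pattern(1, 2))  # ortho
print(substitution_pattern(1, 3))  # meta
print(substitution_pattern(1, 4))  # para
print(substitution_pattern(2, 6))  # meta (positions 2 and 6 are two bonds apart)
```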
Synthesis
Electron-donating groups, for example amino, hydroxyl, alkyl, and phenyl groups, tend to be ortho/para directors, while electron-withdrawing groups, such as nitro, nitrile, and ketone groups, tend to be meta directors.
Properties
Although the specifics vary depending on the compound, in simple disubstituted arenes, the three isomers tend to have rather similar boiling points. However, the para isomer usually has the highest melting point, and the lowest solubility in a given solvent, of the three isomers.
Separation of ortho and para isomers
Because electron donating groups are both ortho and para directors, separation of these isomers is a common problem in synthetic chemistry. Several methods exist in order to separate these isomers:
Column chromatography will often separate these isomers, as the ortho is more polar than the para in general.
Fractional crystallisation can be used to obtain pure para product, relying on the principle that it is less soluble than the ortho and thus will crystallise first. Care must be taken to avoid cocrystallisation of the ortho isomer.
Many nitro compounds' ortho and para isomers have quite different boiling points. These isomers can often be separated by distillation. These separated isomers can be converted to diazonium salts and used to prepare other pure ortho or para compounds.
Ipso, meso, and peri substitution
Ipso-substitution describes two substituents sharing the same ring position in an intermediate compound in an electrophilic aromatic substitution. Trimethylsilyl, tert-butyl, and isopropyl groups can form stable carbocations, hence are ipso directing groups.
Meso-substitution refers to the substituents occupying a benzylic position. It is observed in compounds such as calixarenes and acridines.
Peri-substitution occurs in naphthalenes for substituents at the 1 and 8 positions.
Cine and tele substitution
In cine-substitution, the entering group takes up a position adjacent to that occupied by the leaving group. For example, cine-substitution is observed in aryne chemistry.
Tele-substitution occurs when the new position is more than one atom away on the ring.
Origins
The prefixes ortho, meta, and para are all derived from Greek, meaning correct, following, and beside, respectively. The relationship to the current meaning is perhaps not obvious. The ortho description was historically used to designate the original compound, and an isomer was often called the meta compound. For instance, the trivial names orthophosphoric acid and trimetaphosphoric acid have nothing to do with aromatics at all. Likewise, the description para was reserved for just closely related compounds. Thus Jöns Jakob Berzelius originally called the racemic form of tartaric acid "paratartaric acid" (another obsolete term: racemic acid) in 1830. The use of the prefixes ortho, meta and para to distinguish isomers of disubstituted aromatic rings starts with Wilhelm Körner in 1867, although he applied the ortho prefix to a 1,4-isomer and the meta prefix to a 1,2-isomer. It was the German chemist Karl Gräbe who, in 1869, first used the prefixes ortho-, meta-, para- to denote specific relative locations of the substituents on a disubstituted aromatic ring (namely naphthalene). In 1870, the German chemist Viktor Meyer first applied Gräbe's nomenclature to benzene. The current nomenclature was introduced by the Chemical Society in 1879.
Examples
Examples of the use of this nomenclature are given for the isomers of cresol, C6H4(OH)(CH3): the ortho isomer o-cresol, the meta isomer m-cresol, and the para isomer p-cresol.
There are three arene substitution isomers of dihydroxybenzene (C6H4(OH)2) – the ortho isomer catechol, the meta isomer resorcinol, and the para isomer hydroquinone.
There are three arene substitution isomers of benzenedicarboxylic acid (C6H4(COOH)2) – the ortho isomer phthalic acid, the meta isomer isophthalic acid, and the para isomer terephthalic acid.
These terms can also be used in six-membered heterocyclic aromatic systems such as pyridine, where the nitrogen atom is considered one of the substituents. For example, nicotinamide and niacin show meta substitution on a pyridine ring, while the cation of pralidoxime is an ortho isomer.
See also
Descriptor (chemistry)
Isomer
Structural isomerism
References
Aromatic compounds
Chemical nomenclature | Arene substitution pattern | [
"Chemistry"
] | 1,191 | [
"Organic compounds",
"Aromatic compounds",
"nan"
] |
2,273,604 | https://en.wikipedia.org/wiki/Coordination%20geometry | The coordination geometry of an atom is the geometrical pattern defined by the atoms around the central atom. The term is commonly applied in the field of inorganic chemistry, where diverse structures are observed. The coordination geometry depends on the number, not the type, of ligands bonded to the metal centre as well as their locations. The number of atoms bonded is the coordination number.
The geometrical pattern can be described as a polyhedron where the vertices of the polyhedron are the centres of the coordinating atoms in the ligands.
The coordination preference of a metal often varies with its oxidation state. The number of coordination bonds (coordination number) can vary from two to as high as 20, depending on the compound.
One of the most common coordination geometries is octahedral, where six ligands are coordinated to the metal in a symmetrical distribution, leading to the formation of an octahedron if lines were drawn between the ligands. Other common coordination geometries are tetrahedral and square planar.
Crystal field theory may be used to explain the relative stabilities of transition metal compounds of different coordination geometry, as well as the presence or absence of paramagnetism, whereas VSEPR may be used for complexes of main group element to predict geometry.
Crystallography usage
In a crystal structure the coordination geometry of an atom is the geometrical pattern of coordinating atoms where the definition of coordinating atoms depends on the bonding model used. For example, in the rock salt ionic structure each sodium atom has six near neighbour chloride ions in an octahedral geometry and each chloride has similarly six near neighbour sodium ions in an octahedral geometry. In metals with the body centred cubic (bcc) structure each atom has eight nearest neighbours in a cubic geometry. In metals with the face centred cubic (fcc) structure each atom has twelve nearest neighbours in a cuboctahedral geometry.
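As an illustration (not from the source), a short Python sketch can reproduce the coordination numbers quoted above by generating a block of unit cells for each lattice and counting the atoms closest to the one at the origin:

```python
import numpy as np
from itertools import product

# A small sketch (not from the source): count the nearest neighbours of the
# atom at the origin for conventional bcc and fcc cells, reproducing the
# coordination numbers 8 (cubic) and 12 (cuboctahedral) quoted above.
def nearest_neighbour_count(basis):
    cells = np.array(list(product(range(-2, 3), repeat=3)), dtype=float)
    atoms = (cells[:, None, :] + np.array(basis)[None, :, :]).reshape(-1, 3)
    dists = np.linalg.norm(atoms, axis=1)
    dists = dists[dists > 1e-9]          # drop the origin atom itself
    return int(np.sum(np.isclose(dists, dists.min())))

bcc = [[0, 0, 0], [0.5, 0.5, 0.5]]
fcc = [[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]]
print("bcc nearest neighbours:", nearest_neighbour_count(bcc))  # 8
print("fcc nearest neighbours:", nearest_neighbour_count(fcc))  # 12
```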
Table of coordination geometries
A table of the coordination geometries encountered is shown below with examples of their occurrence in complexes found as discrete units in compounds and coordination spheres around atoms in crystals (where there is no discrete complex).
Naming of inorganic compounds
IUPAC have introduced the polyhedral symbol as part of their IUPAC nomenclature of inorganic chemistry 2005 recommendations to describe the geometry around an atom in a compound.
IUCr have proposed a symbol which is shown as a superscript in square brackets in the chemical formula. For example, CaF2 would be written Ca[8cb]F2[4t], where [8cb] means cubic coordination and [4t] means tetrahedral. The equivalent symbols in IUPAC are CU−8 and T−4 respectively.
The IUPAC symbol is applicable to complexes and molecules whereas the IUCr proposal applies to crystalline solids.
See also
Molecular geometry
VSEPR
Ligand field theory
Cis effect
Addition to pi ligands
References
Molecular physics
Chemical bonding
Coordination chemistry
Inorganic chemistry | Coordination geometry | [
"Physics",
"Chemistry",
"Materials_science"
] | 586 | [
"Molecular physics",
"Coordination chemistry",
" molecular",
"Condensed matter physics",
"nan",
"Atomic",
"Chemical bonding",
" and optical physics"
] |
2,273,689 | https://en.wikipedia.org/wiki/Commons | The commons is the cultural and natural resources accessible to all members of a society, including natural materials such as air, water, and a habitable Earth. These resources are held in common even when owned privately or publicly. Commons can also be understood as natural resources that groups of people (communities, user groups) manage for individual and collective benefit. Characteristically, this involves a variety of informal norms and values (social practice) employed for a governance mechanism.
Commons can also be defined as a social practice of governing a resource not by state or market but by a community of users that self-governs the resource through institutions that it creates.
Definition and modern use
The Digital Library of the Commons defines "commons" as "a general term for shared resources in which each stakeholder has an equal interest".
The term "commons" derives from the traditional English legal term for common land, which are also known as "commons", and was popularised in the modern sense as a shared resource term by the ecologist Garrett Hardin in an influential 1968 article called "The Tragedy of the Commons". As Frank van Laerhoven and Elinor Ostrom have stated; "Prior to the publication of Hardin's article on the tragedy of the commons (1968), titles containing the words 'the commons', 'common pool resources', or 'common property' were very rare in the academic literature."
Some texts make a distinction in usage between common ownership of the commons and collective ownership among a group of colleagues, such as in a producers' cooperative. The precision of this distinction is not always maintained. Others conflate open access areas with commons; however, open access areas can be used by anybody while the commons has a defined set of users.
The use of "commons" for natural resources has its roots in European intellectual history, where it referred to shared agricultural fields, grazing lands and forests that were, over a period of several hundred years, enclosed, claimed as private property for private use. In European political texts, the common wealth was the totality of the material riches of the world, such as the air, the water, the soil and the seed, all nature's bounty regarded as the inheritance of humanity as a whole, to be shared together. In this context, one may go back further, to the Roman legal category res communis, applied to things common to all to be used and enjoyed by everyone, as opposed to res publica, applied to public property managed by the government.
Types
Environmental resource
The examples below illustrate types of environmental commons.
European land use
Originally in medieval England the common was an integral part of the manor, and was thus legally part of the estate in land owned by the lord of the manor, but over which certain classes of manorial tenants and others held certain rights. By extension, the term "commons" has come to be applied to other resources which a community has rights or access to. The older texts use the word "common" to denote any such right, but more modern usage is to refer to particular rights of common, and to reserve the name "common" for the land over which the rights are exercised. A person who has a right in, or over, common land jointly with another or others is called a commoner.
In central Europe, commons (relatively small-scale agriculture, especially in southern Germany, Austria, and the alpine countries) were kept in some parts until the present. Some studies have compared the German and English treatment of the commons between late medieval times and the agrarian reforms of the 18th and 19th centuries. The UK was quite radical in doing away with and enclosing former commons, while southwestern Germany (and alpine countries such as Switzerland) had the most advanced commons structures and were more inclined to keep them. The Lower Rhine region took an intermediate position. However, the UK and the former dominions retain to this day a large amount of Crown land, which is often used for community or conservation purposes.
Mongolian grasslands
Based on a research project by the Environmental and Cultural Conservation in Inner Asia (ECCIA) from 1992 to 1995, satellite images were used to compare the amount of land degradation due to livestock grazing in the regions of Mongolia, Russia, and China. In Mongolia, where shepherds were permitted to move collectively between seasonal grazing pastures, degradation remained relatively low at approximately 9%. Comparatively, Russia and China, which mandated state-owned pastures involving immobile settlements and in some cases privatization by household, had much higher degradation, at around 75% and 33% respectively. A collaborative effort on the part of Mongolians proved much more efficient in preserving grazing land.
United States
Trawl fisheries of New York
A trawl fishery in the Bight region of New York provides a different example of a community-based solution to what is sometimes referred to as the dilemma or "tragedy of the commons". The fishermen in the region make up a fishing cooperative that specializes in harvesting whiting. Membership in the cooperative gives them consistent access to the best whiting grounds in the area, which allows them to be highly successful, sometimes even dominating regional whiting markets during the winter season. Membership is relatively expensive, which limits entry, and catch quotas are established for members. The cooperative caps the number of members through a closed membership policy and through control over the docking spaces, which serves to exclude outsiders from the regional whiting market. The quotas are based on what the cooperative estimates can be sold to the regional markets. This contrasts with government-imposed regulations, which local fishermen typically consider inflexible; the cooperative, by contrast, is regarded as effective and flexible in its sustainable use of the resources in the region.
Lobster fishery of Maine
The widespread success of the Maine lobster industry is often attributed to the willingness of Maine's lobstermen to uphold and support lobster conservation rules. These rules include harbor territories not recognized by the state, informal trap limits, and laws imposed by the state of Maine (which are largely influenced by lobbying from the lobster industry itself). Lobster is another resource that is sometimes considered vulnerable to overharvesting, and many people within the industry itself have been predicting a collapse for years. Nonetheless, the lobster industry has remained relatively unscathed by resource depletion. The state government of Maine establishes certain regulations, but it does not limit the number of licenses itself. In practice there are many restrictive exclusionary systems that are generated, dictated, and upheld by the community through a series of "traditional fishing rights" that have been locally grandfathered in. One must obtain the community's approval to be granted access to fish, and once granted access, an individual may still only fish the territories held by that community. Outsiders may even be deterred by threats of violence. It is impossible to know whether the lobster resource would have been used sustainably with more regulation, or without the internal regulation, but it is being used sustainably in its current state of affairs, and the fishery appears to be run relatively efficiently. This case study of Maine lobster fisheries reflects how a group was able to restrict outsiders' access to a resource while regulating communal use in an effective manner. This has allowed the local communities to reap the benefits of their restraint for decades. Essentially, the local lobster fishers collaborate without much government intervention to sustain their common-pool resource.
Irrigation systems of New Mexico
Acequia is a method of collective responsibility and management for irrigation systems in desert areas. In New Mexico, community-run organizations known as acequia associations supervise water in terms of diversion, distribution, utilization, and recycling, in order to reinforce agricultural traditions and preserve water as a common resource for future generations. The Congreso de las Acequias, formed in the 1990s, is a statewide federation that represents several hundred acequia systems in New Mexico.
Community forests in Nepal
In the late 1980s, Nepal chose to decentralize government control over forests. Community forest programs work by giving local areas a financial stake in nearby woodlands, and thereby increasing the incentive to protect them from overuse. Local institutions regulate harvesting and selling of timber and land, and must use any profit towards community development and preservation of the forests. In twenty years, some locals, especially in the middle hills, have noticed a visible increase in the number of trees, although other places have not seen tangible results, especially where opportunity costs to land are high. Community forestry may also contribute to community development in rural areas – for instance school construction, irrigation and drinking water channel construction, and road construction. Community forestry has proven conducive to democratic practices at grass roots level. Many Nepalese forest user groups generate income from the community forests, although the amount can vary widely among groups and is often invested in the community rather than flowing directly to individual households. Such income is generated from external sources involving the sales of timber from thinned pine plantations such as in the community forest user groups of Sindhu Palchok and Rachma, and internally in Nepal's mid-hills' broad leaf forests from membership fees, penalties and fines on rule-breakers, in addition to the sales of forest products. Some of the most significant benefits are that locals are able to use the products they gather directly in their own homes for subsistence use.
Beaver hunting in James Bay, Quebec, Canada
Wildlife hunting territories in James Bay, Quebec, in the northeastern part of Canada, provide an example of resources being effectively shared by a community. An extensive heritage of local customary practices is used to regulate beaver hunting in the region. Beaver has been an important source of food and commerce for the area since the fur trade began in 1670. Beavers are an easy target for resource degradation and depletion because their colonies are easily spotted, but the area has retained many traditions and stewards of the land who safeguard the populations of certain territories.
In the 1920s there was a massive influx of non-native trappers in the region due to a new railroad coming to the area, as well as an increase in fur prices at the time. The Amerindian communities lost control over these territories for a short time during this period, which helped lead to what is known as a "tragedy of the commons". In the 1930s conservation laws were enacted which prohibited outsiders from trapping in the area and reinforced locals' customary laws. This led, by the 1950s, to a restoration of the population and the commerce that beavers provided. The experience of the 1920s is not an isolated incident in the community either. Business conflicts among fur trading companies have led to a few other episodes of resource overuse, but gradually resource use was restored to a proper balance once local control was restored. This case study reflects how communal resource sharing can be effectively propagated by a community.
Free drinkable water fountains in Paris
In Paris, France, there are over 1,200 free drinkable water fountains distributed throughout the city. The first 100, called the Wallace fountains, were donated by the Englishman Sir Richard Wallace (1818–1890) in 1872, and since then the Parisian water company Eau de Paris has installed more of them around the city, giving residents of Paris and visitors from around the world access to free drinkable fresh water. Several other countries, such as Spain, Brazil, Italy and Portugal, have since installed such fountains on a smaller scale.
Allotment gardens in Stockholm
In the Stockholm region, green spaces are predominantly owned and managed in either private or municipal forms, allotment gardens being the most common form. The system provides cultural ecosystem services to lot holders, as well as vegetables, fruits, and ornamental flowers.
The majority of allotment land in Stockholm is owned by the local municipality, and leaseholds are set for extended periods of time (up to 25 years). The local allotment association makes the decisions about who gets land rights. Only residents of multifamily homes inside the municipality were permitted to sign contracts, signifying a commitment to the original goals of allotments, which were to enhance the health of city dwellers in outdoor settings.
Land is organised and managed cooperatively; outside enterprises are not involved in any way. The allotment association recognises lot holders as official members, granting them equal voting rights and shares. In turn, the association represents the land owners in various administrative proceedings.
Urban green commons in Cape Town
In the post-apartheid metropolis of Cape Town, South Africa, the history of land rights is particularly noticeable since a large number of residents have vivid memories of being forcibly evacuated from their homes or of being assigned to live in specific regions.
In 2005, the city re-zoned the Northern shore of Zeekoevlei – a seasonal lake and wetland area – into smaller parcels of land that were bought by people from Grassy Park who shared experiences of oppression and marginalization during apartheid. After 10 years of being utilised as a landfill, the area was covered in "non-indigenous" plants. While constructing their homes, the locals decided to do something different: rather than erecting security walls to demarcate and guard their individual property, they would restore the fynbos and wetland ecology and establish a public communal garden. As stated by the locals, the initial plan was to build a "blueprint" for communal gardening that would serve as an example for other abandoned green areas, with the goal of "correcting the imbalances of apartheid" and "beautifying and dignifying".
The nine residents and the city's conservation managers signed an agreement that allowed the residents to incorporate the public shoreline area into the rehabilitation project, even though the city had retained the area closest to the shoreline as public property. Meanwhile, the city saw an opportunity to restore the fynbos and provided labour and plants for clearing and planting.
About 50,000 plants were planted (and "weeds" eradicated) along Bottom Road over the course of four years, drawing bees, birds, dragonflies, and toads in addition to humans through the addition of walkways, benches, and areas for barbecues. Here, management is done by the locals themselves, often with assistance from the local government, through paid employees and voluntary labour.
Because of its immense size, governance is extremely difficult. Currently, the project spans 6-7 ha, potentially even more. Its proximity to a busy road and hundreds of residential homes exacerbates the traffic issue. In addition to the disregard shown by the city administration, the neighbourhood has deteriorated as a result of people setting up barbecues at random and cars driving around freely, both of which have been linked to criminal activity.
Cultural and intellectual commons
Today, the commons are also understood within a cultural sphere. These commons include literature, music, arts, design, film, video, television, radio, information, software and sites of heritage. Wikipedia is an example of the production and maintenance of common goods by a contributor community in the form of encyclopedic knowledge that can be freely accessed by anyone without a central authority.
A tragedy of the commons is avoided on Wikipedia through community control exercised by individual authors within the Wikipedia community.
The information commons may help protect users of commons. Companies that pollute the environment release information about what they are doing. The Corporate Toxics Information Project and information like the Toxic 100, a list of the top 100 polluters, helps people know what these corporations are doing to the environment.
Digital commons
Mayo Fuster Morell proposed a definition of digital commons as "information and knowledge resources that are collectively created and owned or shared between or among a community and that tend to be non-exclusive, that is, be (generally freely) available to third parties. Thus, they are oriented to favor use and reuse, rather than to exchange as a commodity. Additionally, the community of people building them can intervene in the governing of their interaction processes and of their shared resources."
Examples of digital commons are Wikipedia, free software and open-source hardware projects.
Following the narrative of post-growth, the digital commons can present a model of progress that guides commoners to build counter-power in the economic and political field. The ability to digitally share knowledge and resources through internet platforms is a new capacity that challenges traditional hierarchical structures of production, allowing for a higher collective benefit and a sustainable management of resources. Non-material resources are digitally reproducible and can therefore be shared at low cost, in contrast to physical resources, which are quite limited. Shared resources in this context are data, information, culture and knowledge that are produced and accessible online. In accordance with the "design global, manufacture local" approach, digital commons may link traditional commons theory with existing physical infrastructures. They further connect with degrowth communities, since they envision transforming use-value creation through new technologies, decoupling society from GDP growth, and lowering CO2 emissions.
Moreover, as a decentralized approach, there is a strong emphasis on inclusion and democratic regulation, which has led to the commons being seen as an alternative, emancipatory and emerging form of social organization that goes beyond democratic capitalism. Accordingly, through the cooperation of diverse stakeholders and the equitable distribution of the means of production, technological development becomes more accessible and bottom-up projects are fostered in communities.
Urban commons
Urban commons present the opportunity for citizens to gain power over the management of urban resources and to reframe city-life costs based on use value and maintenance costs, rather than market-driven value.
Urban commons situate citizens as key players rather than public authorities, private markets and technologies. David Harvey (2012) defines the distinction between public spaces and urban commons. He highlights that the former are not to be equated automatically with urban commons. Public spaces and goods in the city make a commons when part of the citizenry takes political action. Syntagma Square in Athens, Tahrir Square in Cairo, Maidan Nezalezhnosti in Kyiv, and the Plaza de Catalunya in Barcelona were public spaces that were transformed into urban commons as people protested there to support their political statements. Streets are public spaces that have often become urban commons through social action and revolutionary protests. Urban commons operate in cities in a way complementary to the state and the market. Some examples are community gardening, urban farms on rooftops and cultural spaces. More recently, participatory studies of commons and infrastructures under the conditions of the financial crisis have emerged.
Knowledge commons
In 2007, Elinor Ostrom and her colleague Charlotte Hess succeeded in extending the commons debate to knowledge, approaching knowledge as a complex ecosystem that operates as a common – a shared resource that is subject to social dilemmas and political debates. The focus here was on the ready availability of digital forms of knowledge and the associated possibilities to store, access and share it as a common. The connection between knowledge and commons may be made by identifying typical problems associated with natural resource commons, such as congestion, overharvesting, pollution and inequities, which also apply to knowledge. Effective alternative solutions (community-based, non-private, non-state), in line with those for natural commons (involving social rules, appropriate property rights and management structures), are then proposed. Thus, the commons metaphor is applied to social practice around knowledge. Work in this context discusses the creation of depositories of knowledge through the organised, voluntary contributions of scholars (the research community, itself a social common), the problems that such knowledge commons might face (such as free-riding or disappearing assets), and the protection of knowledge commons from enclosure and commodification (in the form of intellectual property legislation, patenting, licensing and overpricing). At this point, it is important to note the nature of knowledge and its complex and multi-layered qualities of non-rivalry and non-excludability. Unlike natural commons – which are both rival and excludable (only one person can use any one item or portion at a time, and in so doing they use it up; it is consumed) and characterised by scarcity (they can be replenished, but there are limits to this, such that consumption/destruction may overtake production/creation) – knowledge commons are characterised by abundance (they are non-rival and non-excludable and thus, in principle, not scarce, so neither impelling competition nor compelling governance). This abundance of knowledge commons has been celebrated through alternative models of knowledge production, such as Commons-based peer production (CBPP), and embodied in the free software movement. The CBPP model showed the power of networked, open collaboration and non-material incentives to produce better-quality products (mainly software).
Kopli 93 Community Center in Tallinn, Estonia
Kopli 93 is a historic building in Tallinn that has undergone several transformations, notably evolving from a Soviet army sailors' club into a vibrant community center equipped with a garden, an apiary, and a workshop. Originally opened in 1937 as a cultural and educational hub, it was repurposed by the Soviet army in 1940 to serve as a sailors' club. In the 1990s, the building transitioned to private ownership, housing a university until 2011. In 2019, it was acquired by the Salme Cultural Center, ushering in its current chapter as a community hub.
The transformation of Kopli 93 into a community center was driven by a commitment to sustainable development and fostering community engagement. The center organizes workshops, training sessions, and events that encourage community members to collaborate and contribute to a sustainable future for the building and its garden. One notable initiative, "Community Wednesday," takes place every Wednesday at 6:00 PM, inviting locals to participate in gardening and connect with one another. Furthermore, the workshop operates on Mondays, Wednesdays, and Thursdays from 5:00 PM to 9:00 PM, providing free access to community members who bring their own materials. Participants receive guidance from the in-house foreman and are required to clean up after their sessions, including dedicating 15 minutes to tidying the workshop.
This initiative is part of the CENTRINNO 2020–2024 project, supported by the Horizon 2020 innovation funding program of the European Commission (H2020 grant no. 869595). The project began with the establishment of a community garden and later expanded to include a workshop and cultural events.
Commoning as a process
Scholars such as David Harvey have adopted the term commoning, which as a verb serves to emphasize an understanding of the commons as a process and a practice rather than as "a particular kind of thing" or static entity:
"The common is not to be construed, therefore, as a particular kind of thing, asset or even social process, but as an unstable and malleable social relation between a particular self-defined social group and those aspects of its actually existing or yet-to-be-created social and/or physical environment deemed crucial to its life and livelihood. There is, in effect, a social practice of commoning. This practice produces or establishes a social relation with a common whose uses are either exclusive to a social group or partially or fully open to all and sundry. At the heart of the practice of commoning lies the principle that the relation between the social group and that aspect of the environment being treated as a common shall be both collective and non-commodified – off-limits to the logic of market exchange and market valuations."
Some authors distinguish between the resources shared (the common-pool resources), the community that governs them, and commoning, that is, the process of coming together to manage such resources. Commoning thus adds another dimension to the commons, acknowledging the social practices entailed in the process of establishing and governing a commons. These practices entail, for the community of commoners, the creation of a new way of living and acting together, thus involving a collective psychological shift: it also entails a process of subjectivization, where the commoners produce themselves as common subjects.
Economic theories
Tragedy of the commons
A commons failure theory, now called the tragedy of the commons, originated in the 18th century. In 1833 William Forster Lloyd introduced the concept with a hypothetical example of herders overusing a shared parcel of land on which they are each entitled to let their cows graze, to the detriment of all users of the common land. The same concept has been called the "tragedy of the fishers" when over-fishing causes stocks to plummet. Lloyd's pamphlet was little known, and it was not until 1968, with the publication by the ecologist Garrett Hardin of the article "The Tragedy of the Commons", that the term gained relevance. Hardin presented this tragedy as a social dilemma and aimed to expose the inevitability of failure that he saw in the commons.
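As a rough illustration (not from Lloyd or Hardin, and using an assumed linear payoff model), the following Python sketch shows why each herder gains by adding animals even though everyone is worse off when all of them do so:

```python
# A minimal sketch of the herder example: each of N herders chooses how many
# cows to graze on a shared pasture. The value per cow falls as total grazing
# rises, so an extra cow benefits its owner fully while the damage is shared.

def value_per_cow(total_cows, capacity=100):
    """Assumed payoff model: value declines linearly toward zero at capacity."""
    return max(0.0, 1.0 - total_cows / capacity)

def herder_payoff(own_cows, others_cows):
    return own_cows * value_per_cow(own_cows + others_cows)

N = 10
# Cooperative outcome: the herders jointly cap the herd at half of capacity
# (the joint optimum for this linear model), i.e. 5 cows each.
coop = herder_payoff(5, 45)
# Unilateral defection: one herder adds cows while the others keep to 5 each.
defect = max(herder_payoff(k, 45) for k in range(0, 56))

print(f"payoff when everyone restrains (5 cows each): {coop:.2f}")   # 2.50
print(f"best payoff for a lone defector:              {defect:.2f}") # about 7.56
```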
However, Hardin's (1968) argument has been widely criticized, since he is accused of having mistaken the commons, that is, resources held and managed in common by a community, with open access, that is, resources that are open to everyone but where it is difficult to restrict access or to establish rules. In the case of the commons, the community manages and sets the rules of access and use of the resource held in common: the fact of having a commons, then, does not mean that anyone is free to use the resource as they like. Studies by Ostrom and others have shown that managing a resource as a commons often has positive outcomes and avoids the so-called tragedy of the commons, a fact that Hardin overlooked.
It has been said the dissolution of the traditional land commons played a watershed role in landscape development and cooperative land use patterns and property rights. However, as in the British Isles, such changes took place over several centuries as a result of land enclosure.
Economist Peter Barnes has proposed a 'sky trust' to address this tragedy in worldwide common resources. He claims that the sky belongs to all the people, and that companies do not have a right to pollute excessively. It is a type of cap and dividend program. Ultimately the goal would be to make polluting excessively more expensive than cleaning what is being put into the atmosphere.
Successful commons
While the original work on the tragedy of the commons concept suggested that all commons were doomed to failure, they remain important in the modern world. Work by later economists has found many examples of successful commons, and Elinor Ostrom won the Nobel Prize for analysing situations where they operate successfully. For example, Ostrom found that grazing commons in the Swiss Alps have been run successfully for many hundreds of years by the farmers there.
Allied to this is the "comedy of the commons" concept, where users of the commons are able to develop mechanisms to police their use to maintain, and possibly improve, the state of the commons. This term was coined in an essay by legal scholar, Carol M. Rose, in 1986.
Notable theorists
Peter Barnes
Yochai Benkler
David Bollier
Murray Bookchin
Iain Boal
George Caffentzis
Barry Commoner
Silvia Federici
Henry George
Garrett Hardin
Michael Hardt
David Harvey
Silke Helfrich
Lewis Hyde
Lawrence Lessig
Peter Linebaugh
Karl Linn
Vasilis Kostakis
William Forster Lloyd
William Morris
Fred Moten
Antonio Negri
Elinor Ostrom
Raj Patel
John Platt (see Social trap)
Joachim Radkau
Kenneth Rexroth
Gerrard Winstanley
Monica White
Michel Bauwens
Feminist perspectives
Silvia Federici articulates a feminist perspective of the commons in her essay "Feminism and the Politics of the Commons". Since the language around the commons has been largely appropriated by the World Bank as it sought to re-brand itself "the environmental guardian of the planet", she argues that it is important to adopt a commons discourse that actively resists this re-branding. Secondly, articulations of the commons, although historically present and multiple have struggled to come together as a unified front. For the latter to happen she argues that a "commoning" or "commons" movement that is effectively able to resist capitalist forms of organizing labour and our livelihoods must look to women to take the lead in organizing the collectivization of our daily lives and the means of production.
Women and the struggle for the Commons
Women have traditionally been at the forefront of struggles for commoning "as primary subjects of reproductive work". This proximity and dependence on communal natural resources has made women the most vulnerable by their privatization, and made them their most staunch defendants. Examples include: subsistence agriculture, credit associations such as tontine (money commons) and collectivizing reproductive labor. In "Caliban and the Witch", Federici interprets the ascent of capitalism as a reactionary move to subvert the rising tide of communalism and to retain the basic social contract.
"Feminist Reconstructions" of the Commons
The process of commoning the material means of reproduction of human life is most promising in the struggle to "disentangle our livelihoods not only from the world market but also from the war machine and prison system". One of the main aims of the process of commoning is to create "common subjects" that are responsible to their communities. The notion of community is not understood as a "gated community", but as "a quality of relations, a principle of cooperation and responsibility to each other and the earth, the forests, the seas, the animals". In communalizing housework, one of the supporting pillars of human activity, it is imperative that this sphere is "not negated but revolutionized". Communalizing housework also serves to de-naturalize it as women's labour, which has been an important part of the feminist struggle.
Feminist Commons Movement
Abortion and Birth Control
As reproductive rights over unwanted pregnancies have been denied in many countries for many years, several resistance groups have used diverse commoning strategies in order to provide women with safe and affordable abortions. Care, knowledge, and pills have been turned into commons in response to abortion restrictions. In New York, U.S., volunteers of the group Haven Coalition provide pre- and post-abortion care for people who have to travel for an abortion that is considered illegal in their place of origin, and with the New York Abortion Access Fund they are able to provide them with medical and financial assistance. Underground networks outside medical establishments are where women's networks oversee abortions and assist each other physically or emotionally by sharing knowledge of herbalism or home abortion. These underground groups operate under codenames like Jane Collective in Chicago or Renata in Arizona. Some groups, like Women on Waves from the Netherlands, use international waters to conduct abortions. Also, in Italy, the Obiezione Respinta movement collaboratively maps spaces related to birth control, such as pharmacies, consultori (counselling clinics) and hospitals, through which users share their knowledge and experience of each place and provide access to information that is difficult to obtain.
Historical land commons movements
The Carlist Wars
The Diggers
Kett's Rebellion
Contemporary commons movements
Abahlali baseMjondolo in South Africa
The Bhumi Uchhed Pratirodh Committee in India
Electronic Frontier Foundation
The EZLN in Mexico
Fanmi Lavalas in Haiti
Geolibertarianism primarily in the US
The Homeless Workers' Movement in Brazil
The Land is Ours in the UK
The Landless Workers' Movement in Brazil
Movement for Justice en el Barrio in the United States of America
Narmada Bachao Andolan in India
Take Back the Land in the US
Cosmopolitan localism or cosmolocalism
See also
Citizen's dividend
Common good
Common ownership
Creative Commons
Copyleft
Common land – Account of historical and present common land use, mainly British Isles.
Enclosure
Global commons
Game theory
Homo reciprocans
Network effect
"The Magic Cauldron" – essay on the open source economic model
Tragedy of the anticommons
International Association for the Study of the Commons
Municipalization
Nationalization
Patentleft
Public good (economics)
Public land
Reproductive labor
Social ownership
State ownership
Tyranny of small decisions
The Goose and the Common
References
Further reading
Basu, S (2016). Knowledge production, Agriculture and Commons: The case of Generation Challenge Programme. (PhD Thesis). Netherlands: Wageningen University.
Basu, S (2014). An alternative imagination to study commons: beyond state and beyond scientific establishment. Paper presented at the 2nd International Conference on Knowledge Commons for Sustainable Agricultural Innovations. Maringá, Brazil: Maringá State University.
Bowers, Chet. (2006). Revitalizing the Commons: Cultural and Educational Sites of Resistance and Affirmation. Lexington Books.
Bowers, Chet. (2012). The Way Forward: Educational Reforms that Focus on the Cultural Commons and the Linguistic Roots of the Ecological Crisis. Eco-Justice Press.
Bresnihan, P. et Byrne, M. (2015). Escape into the city: Everyday practices of communing and the production of urban space in Dublin. Antipode 47(1), pp. 36–54.
Dalakoglou, Dimitris "Infrastructural gap: Commons, State and Anthropology". City 20(6).
Dellenbaugh, et al. (2015). Urban Commons: Moving beyond State and Market. Birkhäuser.
Fourier, Charles. (1996). The Theory of the Four Movements (Cambridge University Press)
Gregg, Pauline. (2001). Free-Born John: A Biography of John Lilburne (Phoenix Press)
Harvey, Neil. (1998). The Chiapas Rebellion: The Struggle for Land and Democracy (Duke University Press)
Hill, Christopher. (1984). The World Turned Upside Down: Radical Ideas During the English Revolution (Penguin)
Hill, Christopher. (2006). Winstanley 'The Law of Freedom' and other Writings (Cambridge University Press)
Hyde, Lewis. (2010). Common as Air: Revolution, Art and Ownership (Farrar, Straus and Giroux)
Kennedy, Geoff. (2008). Diggers, Levellers, and Agrarian Capitalism: Radical Political Thought in 17th Century England (Lexington Books)
Kostakis, Vasilis and Bauwens, Michel. (2014). Network Society and Future Scenarios for a Collaborative Economy. (Basingstoke, UK: Palgrave Macmillan).
Leaming, Hugo P. (1995). Hidden Americans: Maroons of Virginia and the Carolinas (Routledge)
Linebaugh, Peter, and Marcus Rediker. (2000). The Many-Headed Hydra: Sailors, Slaves, Commoners, and the Hidden History of the Revolutionary Atlantic (Boston: Beacon Press)
Linebaugh, Peter. (2008). The Magna Carta Manifesto: Liberties and Commons for All (University of California Press)
Lummis, Douglas. (1997). Radical Democracy (Cornell University Press)
Mitchel, John Hanson. (1998). Trespassing: An Inquiry into the Private Ownership of Land (Perseus Books)
Neeson, J. M. (1996). Commoners: Common Right, Enclosure and Social Change in England, 1700–1820 (Cambridge University Press)
Negri, Antonio, and Michael Hardt. (2009). Commonwealth. Harvard University Press.
Newfont, Kathyn. (2012). Blue Ridge Commons: Environmental Activism and Forest History in Western North Carolina (The University of Georgia Press)
Patel, Raj. (2010). The Value of Nothing (Portobello Books)
Price, Richard, ed. (1979). Maroon Societies: Rebel Slave Communities in the Americas (The Johns Hopkins University Press)
Proudhon, Pierre-Joseph. (1994). What is Property? (Cambridge University Press)
Rexroth, Kenneth. (1974). Communalism: From Its Origins to the Twentieth Century (Seabury Press)
Rowe, Jonathan. (2013). Our Common Wealth: The Hidden Economy That Makes Everything Else Work (Berrett-Koehler)
Shantz, Jeff. (2013). Commonist Tendencies: Mutual Aid Beyond Communism. (Punctum)
Simon, Martin. (2014). Your Money or Your Life: time for both. Social Commons. (Freedom Favours)
External links
IASC - The International Association for the Study of the Commons - an international association dedicated to the international and interdisciplinary study of commons and commons issues
Foundation for common land – A gathering of those across Great Britain and beyond with a stake in pastoral commons and their future
International Journal of the Commons – an interdisciplinary peer-reviewed open-access journal dedicated to furthering the understanding of institutions for use and management of resources that are (or could be) enjoyed collectively.
Infrastructuring the Commons – Aalto University Special Interest Group SIG in the Commons (peer-production, co-production, co-governance, co-creation) and Public(s)services. The SIG addresses the relevance of the Commons as a framework to expand the understanding of emerging considerations for the design, provision and maintenance of public services and urban space. Helsinki, Finland.
On the Commons – dedicated to exploring ideas and action about the commons—which encompasses natural assets such as oceans and clean air as well as cultural endowments like the Internet, scientific research and the arts.
The Peer to Peer Foundation and the Economics and the Commons Conference.
The Commons Strategies Group.
The Commons Transition Primer.
P2P Lab
Environmental social science concepts | Commons | [
"Environmental_science"
] | 7,690 | [
"Environmental social science concepts",
"Environmental social science"
] |
2,273,690 | https://en.wikipedia.org/wiki/Signal/One | Signal/One was a manufacturer of high performance SSB and CW HF radio communications transceivers initially based in St. Petersburg, Florida, United States.
History
Signal/One's parent company was Electronic Communications, Inc. (ECI), a military division of NCR Corporation located in St. Petersburg, Florida. Key Signal/One executives were general manager Dick Ehrhorn (amateur radio call sign W4ETO) and project engineer Don Fowler (W4YET). Beginning in the 1960s with the Signal/One CX7 ("S1", as they were called), the company made radios that were priced well above the competition and offered many advanced features for the time, such as passband tuning, broadband transmission, dual receive, a built-in iambic keyer, electronic digital readout, solid-state design, QSK and RF clipping. A Signal/One radio was said to be a complete, high-performance station in a box.
While marketed to the affluent radio amateur, it has been suggested that the primary market for Signal/One, like Collins, was military, State Department, and government communications. Although prized for their performance and advanced engineering, Signal/One's products did not sell as well as hoped, and the company gradually fell on hard times. From the 1970s through the 1990s, every few years, Signal/One was spun off, sold, and resurfaced at another location.
Collectors
The surviving Signal/One products are sought after and actively collected. These include the CX7, CX7A, CX7B, CX11 and Milspec models. The last Signal/One radio was a re-engineered ICOM IC-781. Information available indicates there were 1152 Signal Ones built: 850 CX7, 112 CX11, 168 MS1030 (number of "C" versions is not known), 6 MilSpec1030C, 15 MilSpec1030CI Icom IC-781 conversions and 1 Milspec1030E DSP Icom IC-756 Pro conversion.
See also
Collins Radio
Icom
Vintage amateur radio
References
Amateur radio companies
Radio electronics
Defunct manufacturing companies of the United States
Defunct electronics companies of the United States | Signal/One | [
"Engineering"
] | 463 | [
"Radio electronics"
] |
2,274,000 | https://en.wikipedia.org/wiki/Lid%C3%A9rc | A lidérc () is a unique supernatural being of Hungarian folklore. It has three known varieties, which often borrow traits from one another.
The first, more traditional form of the lidérc is as a miracle chicken, csodacsirke in Hungarian, which hatches from the first egg of a black hen kept warm under the arm of a human. Some versions of the legend say that an unusually tiny black hen's egg, or any egg at all, may become a lidérc, or that the egg must be hatched by placing it in a heap of manure.
The lidérc attaches itself to people to become their lover. If the owner is a woman, the being shifts into a man, but instead of pleasuring the woman, it fondles her, sits on her body, and sometimes sucks her blood, making her weak and sick after a time. From this source comes a Hungarian word for nightmare -- lidércnyomás, which literally means "lidérc pressure", from the pressure on the body while the being sits on it. Alternate names for the lidérc are iglic, ihlic in Csallóköz, lüdérc, piritusz in the south, and mit-mitke in the east.
The lidérc hoards gold and thus makes its owner rich. To dispose of this form of the lidérc, it must be persuaded to perform an impossible task, such as haul sand with rope, or water with a sieve. It can also be destroyed by locking it into a tree hollow.
The second variety of the lidérc is as a tiny being, a temporal devil, földi ördög in Hungarian. It has many overlapping qualities with the miracle chicken form, and it may also be obtained from a black hen's egg, but more often it is found accidentally in rags, boxes, glass bottles, or in the pockets of old clothes. A person owning this form of the lidérc suddenly becomes rich and is capable of extraordinary feats, because the person's soul has supposedly been given to the lidérc, or even to the Devil.
The third variety is as a Satanic lover, ördögszerető in Hungarian, quite similar to an incubus or succubus. This form of the lidérc flies at night, appearing as a fiery light, a will o' the wisp, or even as a bird of fire. In the northern regions of Hungary and beyond, it is also known as ludvérc, lucfir. In Transylvania and Moldavia it goes by the names of lidérc, lüdérc, and sometimes ördög, literally, the Devil. While in flight, the lidérc sprinkles flames. On earth, it can assume a human shape, usually the shape of a much lamented dead relative or lover. Its footprints are that of a horse. The lidérc enters houses through chimneys or keyholes, brings sickness and doom to its victims. It leaves the house with a splash of flames and dirties the walls. Burning incense and birch branches prevent the lidérc from entering one's dwelling. In the eastern regions of Hungary and beyond, it is said the lidérc is impossible to outrun, it haunts cemeteries, and it must disappear at the first crow of a rooster at dawn.
Appearances in modern literature
A lidérc is mentioned in the famous historical novel The Name of the Rose by Umberto Eco.
mentioned in Katie MacAlister's novel Fire Me Up as a possible alternative to an incubus that forcibly attacked a human. Page 261.
mentioned in Steven Brust's & Megan Lindholm's novel, The Gypsy.
mentioned in Carol Goodman's novel The Incubus (re-released as The Demon Lover under the penname Juliet Dark).
mentioned in Daniel O'Malley's The Blitz, the third novel of The Rook Files trilogy.
mentioned in Frank Tallis's novel Fatal Lies.
Appearances in media
A shape-shifting lidérc is revealed in Lost Girl episode "Caged Fae" (301).
Notes
References
Magyar Néprajzi Lexicon: Encyclopedia of Hungarian Folklore (in Hungarian)
Eco, Umberto (1980) Il nome della rosa. Gruppo Editoriale Fabbri-Bompiani, Sonzongo, Etas S.p.A
Hungarian mythology
Legendary birds
Slavic mythology
Chuvash folklore
Sleep in mythology and folklore
Hungarian legendary creatures
Succubi
Incubi | Lidérc | [
"Biology"
] | 938 | [
"Behavior",
"Sleep",
"Sleep in mythology and folklore"
] |
2,275,193 | https://en.wikipedia.org/wiki/Cabal%20%28set%20theory%29 | The Cabal was, or perhaps is, a set of set theorists in Southern California, particularly at UCLA and Caltech, but also at UC Irvine. Organization and procedures range from informal to nonexistent, so it is difficult to say whether it still exists or exactly who has been a member, but it has included such notable figures as Donald A. Martin, Yiannis N. Moschovakis, John R. Steel, and Alexander S. Kechris. Others who have published in the proceedings of the Cabal seminar include Robert M. Solovay, W. Hugh Woodin, Matthew Foreman, and Steve Jackson.
The work of the group is characterized by free use of large cardinal axioms, and research into the descriptive set theoretic behavior of sets of reals if such assumptions hold.
Some of the philosophical views of the Cabal seminar were described in and .
Publications
References
Descriptive set theory
Set theory | Cabal (set theory) | [
"Mathematics"
] | 190 | [
"Mathematical logic",
"Set theory"
] |
16,363,080 | https://en.wikipedia.org/wiki/Royal%20Colonial%20Boundary%20of%201665 | The Royal Colonial Boundary of 1665 marked the border between the Colony of Virginia and the Province of Carolina from the Atlantic Ocean westward across North America. The line follows the parallel 36°30′ north latitude that later became a boundary for several U.S. states as far west as the Oklahoma Panhandle, and also came to be associated with the Missouri Compromise of 1820.
It was a brainchild of King Charles II of England, and was intended to stretch from the Atlantic to the Pacific Ocean. The line was selected as a small adjustment to the 36 degree southern border of Virginia colony in the creation of the Province of Carolina. By 1819 it was surveyed as far west as the Mississippi River near New Madrid, Missouri, where it created the Kentucky Bend.
It is a historic civil engineering landmark, as designated by the American Society of Civil Engineers. It would later be said of the project:
The boundary Charles II envisioned was one of the most grandiose in history. To decree an imaginary geographic straight line, 3,000 miles long, as a boundary across an unknown continent that he didn't even own was the height of royal pomposity.
The survey was done in five stages, using cadastral and geodetic surveying, being one of the first attempts to mark a boundary so long that it had to be concerned with the curvature of the Earth.
A major aberration in the line occurs south of Damascus, Virginia due to the surveyor, Peter Jefferson (father of Thomas Jefferson), continually edging north of the proper latitude. There are three theories about this:
The surveyor was drunk.
Iron deposits in the mountains interfered with compass readings.
People who lived in Tennessee exerted influence over the location of the line. (There were few British subjects living in Tennessee at the time Peter Jefferson and his partners marked their segment from the Dan River to what is now the Tennessee/North Carolina state line.)
The line was extended in 1779 and 1780 to the point at which it would first cross the Cumberland River. From there, the state of Virginia hired Thomas Walker to survey the line to the Mississippi River. Walker did not do a perfect job due to dense virgin forest, mountainous terrain, and rough riverbeds. In 1821 the state of Tennessee did a survey of the line to determine its true border with Kentucky, but this was not resolved since Kentucky was not participating. A joint survey by the two states was conducted in 1859, commanded by Austin P. Cox and Benjamin Pebbles. They started a survey from the New Madrid Bend of the Mississippi River to the Cumberland Gap, placing a stone slab every .
In the west, the line would later be used for approximating a de facto boundary north of which slavery could not be practiced, as established in the Missouri Compromise of 1820.
A marker at the Cumberland Gap National Historical Park denotes where the boundaries of Kentucky, Tennessee, and Virginia intersect. Under the Royal Proclamation of 1763, it also marks how far west a British American colonist was allowed to reside. Its exact location is 36°36'2.91" N, 83°40'31.23" W.
References
Pre-statehood history of Kentucky
Pre-statehood history of North Carolina
Pre-statehood history of Tennessee
Colony of Virginia
Historic Civil Engineering Landmarks
Borders of Kentucky
Borders of Tennessee
1665 in the Thirteen Colonies
1665 in the Colony of Virginia | Royal Colonial Boundary of 1665 | [
"Engineering"
] | 673 | [
"Civil engineering",
"Historic Civil Engineering Landmarks"
] |
16,364,229 | https://en.wikipedia.org/wiki/Arakelov%20theory | In mathematics, Arakelov theory (or Arakelov geometry) is an approach to Diophantine geometry, named for Suren Arakelov. It is used to study Diophantine equations in higher dimensions.
Background
The main motivation behind Arakelov geometry is that there is a correspondence between prime ideals and finite places , but there also exists a place at infinity , given by the Archimedean valuation, which does not have a corresponding prime ideal. Arakelov geometry gives a technique for compactifying into a complete space which has a prime lying at infinity. Arakelov's original construction studies one such theory, where a definition of divisors is constructed for a scheme of relative dimension 1 over such that it extends to a Riemann surface for every valuation at infinity. In addition, he equips these Riemann surfaces with Hermitian metrics on holomorphic vector bundles over X(C), the complex points of . This extra Hermitian structure is applied as a substitute for the failure of the scheme Spec(Z) to be a complete variety.
Note that other techniques exist for constructing a complete space extending , which is the basis of F1 geometry.
Original definition of divisors
Let be a field, its ring of integers, and a genus curve over with a non-singular model , called an arithmetic surface. Also, let be an inclusion of fields (which is supposed to represent a place at infinity). Also, let be the associated Riemann surface from the base change to . Using this data, one can define a c-divisor as a formal linear combination where is an irreducible closed subset of of codimension 1, , and , and the sum represents the sum over every real embedding of and over one embedding for each pair of complex embeddings . The set of c-divisors forms a group .
Results
defined an intersection theory on the arithmetic surfaces attached to smooth projective curves over number fields, with the aim of proving certain results, known in the case of function fields,
in the case of number fields. extended Arakelov's work by establishing results such as a Riemann-Roch theorem, a Noether formula, a Hodge index theorem and the nonnegativity of the self-intersection of the dualizing sheaf in this context.
Arakelov theory was used by Paul Vojta (1991) to give a new proof of the Mordell conjecture, and by in his proof of Serge Lang's generalization of the Mordell conjecture.
developed a more general framework to define the intersection pairing defined on an arithmetic surface over the spectrum of a ring of integers by Arakelov. developed a theory of positive line bundles and proved a Nakai–Moishezon type theorem for arithmetic surfaces. Further developments in the theory of positive line bundles by and culminated in a proof of the Bogomolov conjecture by and .
Arakelov's theory was generalized by Henri Gillet and Christophe Soulé to higher dimensions. That is, Gillet and Soulé defined an intersection pairing on an arithmetic variety. One of the main results of Gillet and Soulé is the arithmetic Riemann–Roch theorem of , an extension of the Grothendieck–Riemann–Roch theorem to arithmetic varieties.
For this one defines arithmetic Chow groups CHp(X) of an arithmetic variety X, and defines Chern classes for Hermitian vector bundles over X taking values in the arithmetic Chow groups.
The arithmetic Riemann–Roch theorem then describes how the Chern class behaves under pushforward of vector bundles under a proper map of arithmetic varieties. A complete proof of this theorem was only published recently by Gillet, Rössler and Soulé.
Arakelov's intersection theory for arithmetic surfaces was developed further by . The theory of Bost is based on the use of Green functions which, up to logarithmic singularities, belong to the Sobolev space . In this context, Bost obtains an arithmetic Hodge index theorem and uses this to obtain Lefschetz theorems for arithmetic surfaces.
Arithmetic Chow groups
An arithmetic cycle of codimension p is a pair (Z, g) where Z ∈ Zp(X) is a p-cycle on X and g is a Green current for Z, a higher-dimensional generalization of a Green function. The arithmetic Chow group of codimension p is the quotient of this group by the subgroup generated by certain "trivial" cycles.
The arithmetic Riemann–Roch theorem
The usual Grothendieck–Riemann–Roch theorem describes how the Chern character ch behaves under pushforward of sheaves, and states that ch(f*(E)) = f*(ch(E)·TdX/Y), where f is a proper morphism from X to Y and E is a vector bundle over X. The arithmetic Riemann–Roch theorem is similar, except that the Todd class gets multiplied by a certain power series.
The arithmetic Riemann–Roch theorem states
where
X and Y are regular projective arithmetic schemes.
f is a smooth proper map from X to Y
E is an arithmetic vector bundle over X.
is the arithmetic Chern character.
TX/Y is the relative tangent bundle
is the arithmetic Todd class
is
R(X) is the additive characteristic class associated to the formal power series
See also
Hodge–Arakelov theory
Hodge theory
P-adic Hodge theory
Adelic group
Notes
References
External links
Original paper
Arakelov geometry preprint archive
Algebraic geometry
Diophantine geometry | Arakelov theory | [
"Mathematics"
] | 1,160 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
16,364,924 | https://en.wikipedia.org/wiki/Chemical%20bonding%20model | A chemical bonding model is a theoretical model used to explain atomic bonding structure, molecular geometry, properties, and reactivity of physical matter. This can refer to:
VSEPR theory, a model of molecular geometry.
Valence bond theory, which describes molecular electronic structure with localized bonds and lone pairs.
Molecular orbital theory, which describes molecular electronic structure with delocalized molecular orbitals.
Crystal field theory, an electrostatic model for transition metal complexes.
Ligand field theory, the application of molecular orbital theory to transition metal complexes.
Chemical bonding | Chemical bonding model | [
"Physics",
"Chemistry",
"Materials_science"
] | 109 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
16,365,636 | https://en.wikipedia.org/wiki/Progressive%20provisioning | Progressive provisioning is a term used in entomology to refer to a form of parental behavior in which an adult (most commonly a hymenopteran such as a bee or wasp) feeds its larvae directly after they have hatched, feeding each larva repeatedly until it has completed development. The food is typically in the form of masticated or immobilized prey items (in predatory wasps), or regurgitated nectar mixed with pollen (in bees); only rarely are other sorts of food resources used (such as glandular secretions, or carrion). While this sort of direct and repetitive feeding of offspring is extremely common in groups such as birds and mammals, it is far less common among insects, with the exception of eusocial insects (one of the defining features of eusociality is cooperative brood care). Accordingly, progressive provisioning is universal among ants, and widespread among the social bees and wasps. Certain nonsocial wasps also rear their young with this type of feeding. Young termites (and other hemimetabolous insects) are able to feed themselves, and therefore do not demonstrate any form of provisioning.
One of the few well-known examples of progressive provisioning outside of the Hymenoptera is the burying beetles, which care for their larvae and supply them with a mass of carrion, which the adults chew and regurgitate to the developing larvae.
Many eusocial bees, such as stingless bees and halictids, practice mass provisioning, where all of the larval food is supplied before the egg is laid.
See also
Mass provisioning
References
Ethology
Behavioral ecology
Sociobiology | Progressive provisioning | [
"Biology"
] | 341 | [
"Behavior",
"Ethology stubs",
"Behavioral ecology",
"Behavioural sciences",
"Sociobiology",
"Ethology"
] |
16,365,986 | https://en.wikipedia.org/wiki/Backhouse%27s%20constant | Backhouse's constant is a mathematical constant named after Nigel Backhouse. Its value is approximately 1.456 074 948.
It is defined using the power series whose coefficients are the successive prime numbers,

P(x) = 1 + 2x + 3x^2 + 5x^3 + 7x^4 + 11x^5 + ...

and its multiplicative inverse as a formal power series,

Q(x) = 1/P(x) = q_0 + q_1 x + q_2 x^2 + ...

Then:

B = lim_{k→∞} |q_{k+1}/q_k| ≈ 1.456 074 948.
This limit was conjectured to exist by Backhouse, and later proven by Philippe Flajolet.
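As an illustration (not part of the original article), the limiting ratio can be checked numerically by inverting the prime power series and comparing successive coefficients of Q(x). The sketch below is a minimal Python example; it assumes sympy is available as a convenient prime generator, but any source of primes would do.

```python
# Minimal numerical sketch of Backhouse's constant (illustrative only).
from sympy import prime  # assumed dependency; any prime generator works

def backhouse_ratio(n_terms: int = 150) -> float:
    """Approximate B = lim |q_{k+1}/q_k| where Q(x) = 1/P(x) and
    P(x) = 1 + 2x + 3x^2 + 5x^3 + 7x^4 + ..."""
    # Coefficients of P: p[0] = 1, p[k] = k-th prime for k >= 1.
    p = [1] + [prime(k) for k in range(1, n_terms + 1)]
    # Formal series inversion: q[0] = 1, q[n] = -sum_{i=1..n} p[i] * q[n-i].
    q = [1]
    for n in range(1, n_terms + 1):
        q.append(-sum(p[i] * q[n - i] for i in range(1, n + 1)))
    return abs(q[-1] / q[-2])

print(backhouse_ratio())  # should be close to 1.456074948...
```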
References
Further reading
Mathematical constants
Prime numbers | Backhouse's constant | [
"Mathematics"
] | 94 | [
"Number theory stubs",
"Prime numbers",
"Mathematical objects",
"nan",
"Mathematical constants",
"Numbers",
"Number theory"
] |
16,366,361 | https://en.wikipedia.org/wiki/GW501516 | GW501516 (also known as GW-501,516, GW1516, GSK-516, Cardarine, and on the black market as Endurobol) is a PPARδ receptor agonist that was invented in a collaboration between Ligand Pharmaceuticals and GlaxoSmithKline in the 1990s. It entered into clinical development as a drug candidate for metabolic and cardiovascular diseases, but was abandoned in 2007 because animal testing showed that the drug caused cancer to develop rapidly in several organs.
In 2007, research was published showing that high doses of GW501516 given to mice dramatically improved their physical performance; the work was widely discussed in popular media, and led to a black market for the drug candidate and to its abuse by athletes as a doping agent. The World Anti-Doping Agency (WADA) developed a test for GW501516 and other related chemicals and added them to the prohibited list in 2009; it has issued additional warnings to athletes that GW501516 is not safe.
History
GW501516 was initially discovered during a research collaboration between GSK and Ligand Pharmaceuticals that began in 1992. The discovery of the compound was published in a 2001 issue of PNAS. Oliver et al. reported that they used "combinatorial chemistry and structure-based drug design" to develop it. One of the authors was the son of Leo Sternbach who discovered benzodiazepines in the 1960s.
R & D Focus Drug News reported that GSK began phase I trials of the compound for the treatment of hyperlipidemia in 2000 followed by phase I/II in 2002. In 2003, Ligand Pharmaceuticals earned a $1 million payment as a result of GSK continuing phase I development.
By 2007, GW501516 had completed two phase II clinical studies and other studies relating to obesity, diabetes, dyslipidemia and cardiovascular disease, but GSK abandoned further development of the drug in 2007 for reasons which were not disclosed at the time. It later emerged that the drug was discontinued because animal testing showed that the drug caused cancer to develop rapidly in several organs, at dosages of 3 mg/kg/day in both mice and rats.
Ronald M. Evans's laboratory purchased a sample of GW501516 and gave mice a much higher dose than had been used in GSK's experiments; they found that the compound dramatically increased the physical performance of the mice. The work was published in 2007 in Cell and was widely reported in the popular press including The New York Times and The Wall Street Journal.
Another human study (comparing cardarine with the PPARα agonist GW590735 and placebo) was published in 2021.
Performance-enhancing drug
Concerns were raised prior to the 2008 Beijing Olympics that GW501516 could be used by athletes as a performance-enhancing drug that was not, at the time, controlled by regulations or detected by standard tests. One of the main researchers from the study on enhanced endurance consequently developed a urine test to detect the drug, and made it available to the International Olympic Committee. The World Anti-Doping Agency (WADA) developed a test for GW501516 and other related PPARδ modulators, and added such drugs to the prohibited list in 2009.
GW501516 has been promoted on bodybuilding and athletics websites and by 2011 had already been available for some time on the black market. In 2011, it was reported to cost $1,000 for 10 g. In 2012, WADA recategorised GW501516 from a gene doping compound to a "hormone and metabolic modulator".
In 2013, WADA took the rare step of warning potential users of the compound of the possible health risks, stating that "clinical approval has not, and will not be given for this substance"; the New Scientist attributed the warning to the risks of the drug causing cancer.
A number of athletes have tested positive for GW501516. At the Vuelta Ciclista a Costa Rica in December 2012, four Costa Rican riders tested positive for GW501516. Three of them received two-year suspensions, while the fourth received 12 years as it was his second doping violation. In April 2013, Russian cyclist Valery Kaykov was suspended by cycling's governing body UCI after having tested positive for GW501516. Kaykov's team RusVelo dismissed him immediately and in May 2013, Venezuelan Miguel Ubeto was provisionally suspended by the Lampre team. In February 2014, Russian race walker Elena Lashmanova tested positive for GW501516. In April 2019, American heavyweight boxer Jarrell Miller tested positive for GW501516 which caused his challenge for Anthony Joshua's World Heavyweight titles to be cancelled. In December 2020, Miller was suspended for 2 years for repeated violations. In July 2022, the 2012 800m Olympic silver medalist from Botswana, Nijel Amos tested positive for GW501516 and was provisionally suspended just days before the 2022 World Athletics Championships. Surinam's Issam Asinga, who set the under-20 world track record in the men's 100 meters, was informed on Aug. 9, 2023 by the Athletics Integrity Unit that his July 18 drug test the prior month detected trace amounts of GW501516. Asinga has alleged in a suit filed in the Southern District of New York that Gatorade provided him with Gatorade Recovery Gummies at their awards ceremony one week earlier in Los Angeles tainted with GW501516.
Mode of action
GW501516 is a selective agonist (activator) of the PPARδ receptor. It displays high affinity (Ki = 1 nM) and potency (EC50 = 1 nM) for PPARδ with greater than selectivity over PPARα and PPARγ.
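As a rough, illustrative aside (not taken from the article), the reported Ki of 1 nM can be put in context with the standard Hill–Langmuir occupancy relationship, treating Ki as an estimate of the dissociation constant Kd; the concentrations below are arbitrary examples.

```python
def fractional_occupancy(ligand_nM: float, ki_nM: float = 1.0) -> float:
    """Hill-Langmuir occupancy: fraction of receptors bound at a given free
    ligand concentration, treating Ki (1 nM for PPARdelta) as an estimate of Kd."""
    return ligand_nM / (ligand_nM + ki_nM)

# Arbitrary illustrative concentrations, not values from any study:
for conc in (0.1, 1.0, 10.0, 100.0):
    print(f"{conc:>6.1f} nM -> {fractional_occupancy(conc):.0%} of receptors occupied")
```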
In rats, binding of GW501516 to PPARδ recruits the coactivator PGC-1α. The PPARδ/coactivator complex in turn upregulates the expression of proteins involved in energy expenditure. Furthermore, in rats treated with GW501516, increased fatty acid metabolism in skeletal muscle and protection against diet-induced obesity and type II diabetes was observed. In obese rhesus monkeys, GW501516 increased high-density lipoprotein (HDL) and lowered very-low-density lipoprotein (VLDL).
Activation of PPARδ is also believed to be the mechanism responsible for cancer induction. A 2018 study in finds that GW501516 enhances the growth of colitis-associated colorectal cancer by increasing inflammation and the expression of GLUT1 and SLC1A5. See also .
See also
Acadesine
GFT505
GW0742
Irisin
Peroxisome proliferator-activated receptor
Sodelglitazar
SR9009
References
PPAR agonists
Drugs developed by GSK plc
Carcinogens
Abandoned drugs
Trifluoromethyl compounds
Thioethers
Thiazoles
Carboxylic acids
Exercise mimetics | GW501516 | [
"Chemistry",
"Environmental_science"
] | 1,468 | [
"Toxicology",
"Exercise biochemistry",
"Carboxylic acids",
"Functional groups",
"Drug safety",
"Carcinogens",
"Exercise mimetics",
"Abandoned drugs"
] |
16,367,375 | https://en.wikipedia.org/wiki/Cross-laminated%20timber | Cross-laminated timber (CLT) is a subcategory of engineered wood panel product made from gluing together at least three layers of solid-sawn lumber (i.e. lumber cut from a single log). The grain of each layer of boards is usually rotated 90 degrees from that of adjacent layers and glued on the wide faces of each board, usually in a symmetric way so that the outer layers have the same orientation. An odd number of layers is most common, but there are configurations with even numbers as well (which are then arranged to give a symmetric configuration). Regular timber is an anisotropic material, meaning that the physical properties change depending on the direction at which the force is applied. By gluing layers of wood at right angles, the panel is able to achieve better structural rigidity in both directions. It is similar to plywood but with distinctively thicker laminations (or lamellae).
CLT is distinct from glued laminated timber (known as glulam), which is a product with all laminations orientated in the same way.
History
The first patent resembling CLT was filed in the 1920s by Frank J. Walsh and Robert L. Watts in Tacoma, Washington. Many sources, however, date the first patent to 1985, when it was patented in France. Significant developments were then made in Austria when Gerhard Schickhofer presented his PhD thesis research on CLT in 1994. Utilizing the theories he developed during his research, Schickhofer began working with three small sawmills and the Sawmillers Association to start production of CLT. With help from some government funding, they were able to hand build a test CLT press and create the first few panels. At the same time, the first press system, utilizing water-based pressure, came on the market, enabling Schickhofer and his team to think beyond the capabilities originally thought possible for CLT. After years of extensive research, Schickhofer submitted the results to the Austrian and EU government bodies that dealt with the approval of materials for commercial products, and in December 1998 it was approved. A period of substantial growth in production and projects soon followed in Germany and other European countries as a push for green buildings became more prominent. CLT was slow to take off in North America, but it has begun to gain momentum in more recent years.
Building codes
In 2002, Austria used Schickhofer's research to create the first national CLT guidelines. The International European Technical Assessments (ETA) began to regulate the properties and design of CLT in 2006. Efforts to standardize CLT in Europe started in 2008 and by 2015 the first European product standard for CLT, EN 16351, was approved. Also in 2015, CLT was incorporated into the International Building Code in accordance with ANSI/APA PRG 320 and the National Fire Protection Association (NFPA) began to research and develop codes regarding the fire safety of CLT and other engineered woods. The 2021 revision of the IBC included three new construction types for mass timber buildings, Type IV-A, Type IV-B and Type IV-C. These new types enabled buildings using mass timber to be built taller and over greater areas than before.
Manufacturing
The manufacturing of CLT is generally divided into nine steps: primary lumber selection, lumber grouping, lumber planing, lumber cutting, adhesive application, panel lay-up, assembly pressing, quality control and marking and shipping.
During primary lumber selection, lumber will undergo a moisture content (MC) check and visual grading. Depending on the application, structural testing (E-rating) may also be completed. The moisture content check is conducted because the lumber that is typically used can arrive with an MC of 19% or less, but lumber for CLT needs to have an MC of approximately 12% during manufacturing to avoid internal stress due to shrinkage. This test is also done so that adjacent pieces of lumber do not have an MC difference greater than 5%. In order to conduct an MC check, various hand-held or on-the-line devices can be used. Some are more accurate than others, as they check moisture content within the wood, not just at surface level. Further research and development is ongoing to improve the accuracy of such devices. Temperature in the manufacturing facility is also checked and maintained throughout this process to ensure the quality of the lumber. Visual grading is performed so that any warping in the lumber is prevented from affecting the pressure the bond line can withstand. It also ensures that waning (defects in the wood due to bark, or missing wood due to the curvature of the log) does not significantly reduce the available bonding surface. For a product to be considered an E-class CLT, visual grading must be considered for perpendicular layers, while parallel layers must be determined by the E-rating (the average stiffness of a piece of lumber). Products are classified as V-class if visual grading is used for both perpendicular and parallel layers.
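A small sketch of how the two moisture rules described above might be checked for a batch of readings is given below. It is not from the source: the roughly 12% target and the 5% adjacent-piece limit come from the paragraph, while the 3% tolerance, the function name, and the example readings are illustrative assumptions.

```python
def check_moisture(readings_pct, target_pct=12.0, target_tol_pct=3.0, adjacent_limit_pct=5.0):
    """Flag pieces whose moisture content (MC) is far from the manufacturing
    target, and adjacent pairs whose MC differs by more than the limit.
    readings_pct: MC readings (%) in the order the pieces will be laid up."""
    off_target = [i for i, mc in enumerate(readings_pct)
                  if abs(mc - target_pct) > target_tol_pct]
    over_limit = [(i, i + 1) for i in range(len(readings_pct) - 1)
                  if abs(readings_pct[i] - readings_pct[i + 1]) > adjacent_limit_pct]
    return off_target, over_limit

# Illustrative readings only:
off, pairs = check_moisture([12.1, 11.4, 17.8, 12.0, 13.2])
print("Pieces off target:", off)                     # [2]
print("Adjacent pairs more than 5% apart:", pairs)   # [(1, 2), (2, 3)]
```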
Using the results from lumber selection, the grouping step ensures the timber of various categories are grouped together. Lumber used for the major and minor strength directions are grouped primarily relying on MC and visual grading. Within the major strength direction, all lumber is required to have the same engineering properties so that panel limitations can be determined. Likewise, all lumber for the minor direction must have a single set of properties. Higher quality lumber can also be grouped so that it is reserved for areas in which fasteners are installed to maximize fastener effectiveness. For aesthetic purposes, some lumber will be set aside so that the outermost layer of a panel is visually appealing. Timber that does not fit into either category may be used for different products, such as plywood or glued laminated timber.
The planing step improves the surfaces of the timber to reduce oxidation which increases the effectiveness of the adhesives. Approximately 2.5 mm is trimmed off the top and bottom faces and 3.8 mm is trimmed off the sides to ensure a flat surface. In some cases, when the lumber edges are not glued because they have an acceptable width tolerance, only the top and bottom faces are planed. It is possible that planing may increase the overall moisture content of the timber due to the drying variations throughout the wood. When this occurs, bonding suitability should be assessed and reconditioning may be necessary.
The timber is then cut to a certain length depending on the application and specific client needs. Cut-offs from the longitudinal layers may be used to create the transverse layers if the same specifications are needed for both the parallel and perpendicular layers.
Application of the adhesive occurs shortly after planing to avoid any issues affecting the surface of the lumber. Applying the adhesive is most often done in one of two ways: a through-feed process or side-by-side nozzles. In the through-feed process extruder heads distribute parallel threads of adhesive along the piece of lumber in an airtight system to avoid air gaps in the glue that could affect bonding strength. This is typically used for phenol resorcinol formaldehyde (PRF) or polyurethane-reactive (PUR) adhesives. For PUR adhesives, the layers of lumber may be misted to help with curing. The side-by-side nozzle option is commonly reserved for CLT layers that are formed in advance and works by installing the nozzles along a beam that will travel along the length of the lumber and apply the adhesive. To avoid additional manufacturing costs, adhesive is typically only applied to the top and bottom faces of the lumber, but edge-gluing can be done if necessary.
Panel lay-up is performed next and involves laying the individual pieces of lumber together in preparation for assembly pressing. In accordance with ANSI/APA PRG 320, at least 80% of the surface area between layers must be bound together so that the bond is effective. In order to meet this standard, manufacturers are tasked with finding the most efficient way of laying the lumber. This time between spreading the adhesive and applying pressure is known as assembly time and must fall within the time targeted by the specific adhesive used.
Assembly pressing completes the adhering process with either a vacuum press or a hydraulic press. Vacuum pressing generates a pressure of approximately , which is not always enough to address the warping potential or surface irregularities. To accommodate this, lumber shrinkage reliefs can be cut in the longitudinal direction. These reliefs release the stress in the lumber and decrease the risk of cracking due to drying. They must however, have a maximum width and depth so that the bonding area and panel strength are not impacted significantly. Using a vacuum press may be more beneficial in some circumstances, because they can press more than one CLT panel at once and can be used for curved elements. A hydraulic press on the other hand, generates a greater pressure, ranging from , and applies it to specific faces of the panel. For this reason, panels may need to undergo both vertical and side clamping pressing.
Once assembly pressing is complete, the CLT panels undergo quality control machining. Sanders are used to sand each panel down to the desired thickness with a tolerance of 1 mm, or less if specified by a project. The CLT panels are then moved to a multi-axis numerically controlled machine that makes precision cuts for doors, windows, splices, and connections. Any minor repairs that are necessary at this stage are completed manually.
To meet the requirements of ANSI/APA PRG 320 and ensure that the correct product has been specified, delivered, and installed, CLT panels must be marked to identify a variety of information. This includes the grade, thickness, mill name, agency name or logo, ANSI/APA PRG 320 symbol, manufacturer designations, and a top stamp if it is a custom panel. These markings must be stamped at intervals of or less so that when longer pieces are delivered to site and cut, they still display the necessary information. Further markings may be added to demonstrate the main direction of loading and the zones designed to receive connections. During shipping and construction, the CLT panels must be protected from the weather to maintain their structural integrity.
Advantages
As a building material, CLT has numerous advantages:
Design flexibility – CLT has many applications in construction as it can be used for walls, floors, and roofs. The size of the panels is also easily varied as it is only limited by the site storage and transportation to the site.
Environmentally conscious – CLT is a renewable, green, and sustainable material, if the trees used to make it are sourced from efficiently managed forests.
Carbon capture – Because CLT is made from wood, it sequesters carbon. Different factors will affect how much carbon is sequestered, but numerous studies have shown that the use of CLT (in combination with other engineered wood) in construction could significantly reduce our net carbon emission output.
Prefabrication – CLT panels are fully fabricated before they are transported to site. This enables quicker construction times compared to other materials which can lead to shorter schedules and its subsequent benefits such as cost savings, less risk for accidents, and reduced disruption to the neighborhood.
Thermal insulation – CLT panels provide air tightness and great thermal insulation to buildings, as the thermal transmittance (U-value) of a panel is approximately 0.3458 W/m²K. Other common building materials can have U-values ranging from 0.4 to 2.5 W/m²K (a short worked heat-loss example follows this list). The various layers of wood also serve as a thermal mass, which can help reduce a building's energy use.
Light weight – CLT is significantly lighter than traditional building materials so foundations can be designed to support a smaller load and therefore use less material. The machinery required on-site to move and place the CLT panels are also smaller than those needed to lift heavier building materials. These aspects enable contractors to erect CLT buildings on sites that might otherwise be incapable of supporting heavier projects. This can help ease infilling projects where construction is especially tight or difficult to access due to other preexisting structures around the site.
Strength and stiffness – CLT products have relatively high in and out of plane strength and stiffness due to the perpendicular layers. This reinforcement is comparable to a reinforced concrete slab and increases the panel's resistance to splitting. CLT has also been shown to perform well under seismic forces.
Fire safety – Wood is inherently flammable which leads to the D class fire rating CLT receives. Despite this, CLT ranks highly for its ability to withstand a fire once started. It is classified as REI 90, indicating that it can retain the necessary load bearing capacity and meet integrity requirements for 90 minutes during a fire. This leads to a better overall fire safety performance than unprotected steel, which loses its load bearing capacity after it is exposed to a fire for only 15 minutes.
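As flagged under the thermal insulation item above, the quoted U-values can be turned into an indicative heat-loss comparison with Q = U × A × ΔT. The sketch below is not from the source; the wall area, temperature difference, and the comparison U-value of 1.5 W/m²K (within the 0.4–2.5 range quoted above) are illustrative assumptions.

```python
def heat_loss_watts(u_value: float, area_m2: float, delta_t_k: float) -> float:
    """Steady-state heat loss through an assembly: Q = U * A * dT (in watts)."""
    return u_value * area_m2 * delta_t_k

# Illustrative case: a 10 m^2 wall section with 20 K between inside and outside.
area, delta_t = 10.0, 20.0
for label, u in [("CLT panel (approx. 0.3458 W/m2K)", 0.3458),
                 ("conventional assembly (assumed 1.5 W/m2K)", 1.5)]:
    print(f"{label}: {heat_loss_watts(u, area, delta_t):.0f} W")
```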
Challenges
There are also some drawbacks associated with CLT:
Costs – As CLT is a newer material for North America, it is only produced in a few regions, generally the Pacific Northwest. Transporting CLT panels across potentially vast distances will incur additional upfront costs. Some sources also indicate that the production costs of CLT are greater than other commonly used building materials due to the newness of the system and lack of current demand.
Limited track record – CLT is a new venture for many in North America which can limit the number of engineers and contractors willing to take on a CLT project due to their lack of knowledge and experience with the material. The building codes for mass timber projects are also not as developed as those for concrete and steel which can again make developers hesitant to employ CLT. A considerable amount of technical research has been done on CLT, but it takes time to integrate new practices and results into the building industry because of its path-dependent culture which resists deviating from established practices, especially when the research conducted on CLT has not necessarily reached those who have the opportunity to implement it.
Acoustics – CLT alone does not meet the necessary sound insulation ratings. In order to meet these requirements, additional elements, such as decoupled gypsum board, must be used in conjunction with the CLT panels. Changing the effective mass area of the CLT panel by increasing thickness or adding a second panel (creating an air gap) also improved the sound insulation to meet the standards.
Vibrations – Current standardized methods for testing the vibration performance of floors are not applicable to CLT floors due to their lightweight nature and natural frequency. Looking at deflection under a uniformly distributed load, we can get some idea of the vibration performance of a CLT floor, but it is heavily reliant on a designer's judgement and neglects the influence of mass characteristics. In order to fully test the vibration performance of a CLT floor, a new testing method will need to be developed.
Applications
CLT is used in a number of various different structures around the world.
Pavilions
In September 2016 the world's first timber mega-tube structure was built at the Chelsea College of Arts in London, using hardwood CLT panels. The "Smile" was designed by architect Alison Brooks and engineered by Arup, in collaboration with the American Hardwood Export Council, for the London Design Festival. The structure is a curved tube in a shape of a smile touching the floor at its centre and has a maximum capacity of 60 people.
Plyscrapers
Stadthaus, a residential building in Hackney, London, built in 2009, was the first building constructed using only CLT framing, including for the stairs and elevator shaft. At 9 stories and 30 m tall, at the time of construction, it was the tallest CLT building in the world.
In 2012, Forte Living, an apartment complex in Melbourne, Australia, became the tallest plyscraper framed with CLT alone. The building has 10 stories and stands just over 32 m tall. The 759 CLT panels necessary for the project were manufactured in Austria using European spruce that was grown and harvested there.
In 2022, the Ascent MKE building in Milwaukee, Wisconsin, became the tallest high-rise to utilize CLT components. Reaching 25 stories and 86.6 m, the Ascent relies on concrete, steel, and mass timber components. The CLT was primarily used to create the slabs for each floor.
Bridges
The Mistissini Bridge in Mistissini, Quebec, Canada, is a 160-meter-long bridge that crosses the Uupaachikus Pass. Designed by Stantec and completed in 2014, the Mistissini Bridge employs locally sourced CLT panels and glue-laminated timber girders as the main structural members of the bridge. The bridge won numerous awards, including the National Award of Excellence in the Transportation category at the 48th annual Association of Consulting Engineering Companies (ACEC) awards, as well as the Engineering a Better Canada Award.
The Exploded View is a fully CLT bridge in the design phase . Originally proposed in 2020 by Paul Cocksedge, this bridge will cross the Liesbeek River in Cape Town, South Africa. Cocksedge plans to manufacture the CLT from Eucalyptus trees, an invasive species in the area.
Parking structures
The Glenwood is a CLT parking garage that is part of a larger redevelopment plan in Springfield, Oregon. Construction is underway , but once complete, it will stand at four stories and have 360 parking spots. In order to help protect the CLT from the rain while keeping it exposed, a façade made from overlapping glass panels will be installed.
Open Platform and JAJA Architects won a design competition in 2020 for their plans to create a Park n’ Play garage in Aarhus, Denmark. The garage not only employs the use of CLT for the structure but surrounds the garage with planters and other greenery to promote the use of the space as more than just a place to leave a car. There are six stories with 700 spots, some designed specifically to promote green transportation, including charging stations and carpool only spots. The facility was designed to help the country achieve its goal of reaching carbon neutrality by 2050.
Modular construction
CLT has also been identified as a suitable candidate for use in modular construction. Silicon Valley–based modular construction startup Katerra opened a modular construction CLT factory in Spokane, Washington, in 2019 and some politicians were calling for the use of pre-fabricated modular CLT construction to address the housing crisis in cities like Seattle.
The Dyson Institute Village was built in 2019 on the outskirts of Malmesbury, England, to provide on-campus student housing for the Dyson Institute of Engineering and Technology. The village was designed as a number of stacked studio apartment modules by London architects WilkinsonEyre, and modeled after Montreal's Habitat 67. The pods are constructed from CLT, with each pod wrapped in aluminum.
Mechanical properties and effects
Because cross-laminated timber (CLT) is a novel and renewable construction material, demand for it has increased significantly. However, its mechanical properties have not been fully explored. This section primarily discusses research on the compressive strength and seismic behavior of CLT. In the compressive strength part, the impact of the number of CLT layers and the geometry of openings in a CLT panel is analyzed. Meanwhile, the seismic behavior of CLT is evaluated through a shaking table test that assesses the seismic shear capacity of the material.
The mechanical properties summary is based on the research of Pina et al. and Sato et al.
Summary of properties
Cross-laminated timber (CLT) is an engineered wood product that is gaining popularity in the construction industry due to its numerous advantages, such as sustainability, cost-effectiveness, and ease of construction. Mechanical properties, particularly compressive strength, are key factors to consider when designing and constructing CLT panels. The number of layers in a CLT panel has a direct impact on its compressive strength, with more layers generally resulting in higher strength. However, for a fixed total panel thickness, increasing the number of layers results in a lower buckling capacity. The geometry of openings in a CLT panel can also affect its compressive strength, with larger openings leading to lower strength. In addition, rectangular openings oriented perpendicular to the loading direction are less sensitive to capacity reduction under critical loading changes than openings oriented parallel to the loading direction. To achieve optimal mechanical properties, it is important to carefully consider both the number of layers and the geometry of openings when designing and constructing CLT panels.
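A minimal numerical sketch of the fixed-thickness observation is given below. It is not taken from the cited research: the layer moduli (E0 = 11 GPa, with transverse layers taken as E0/30, a common design simplification), the 100 mm total thickness, and the 3 m buckling length are illustrative assumptions. Because transverse layers contribute little bending stiffness and adding layers moves longitudinal material toward the mid-plane, the effective EI, and with it the Euler buckling load, drops as the layer count rises at constant thickness.

```python
import math

def effective_EI(layer_thicknesses, orientations, E0=11e9, E90=11e9 / 30, width=1.0):
    """Effective bending stiffness (N*m^2 per metre width) of a CLT layup,
    summing E * (I_own + A * d^2) for each layer about the section mid-plane.
    orientations: 0 for longitudinal layers, 90 for transverse layers."""
    total = sum(layer_thicknesses)
    z_bottom, EI = -total / 2, 0.0
    for t, ang in zip(layer_thicknesses, orientations):
        z_mid = z_bottom + t / 2                      # layer centroid position
        E = E0 if ang == 0 else E90
        EI += E * (width * t**3 / 12 + width * t * z_mid**2)
        z_bottom += t
    return EI

def euler_buckling_load(EI, length=3.0):
    """Pin-ended Euler buckling load (N per metre width) for an assumed 3 m storey."""
    return math.pi**2 * EI / length**2

for n in (3, 5):  # same 100 mm total thickness, alternating 0/90 layers
    layup = [0.1 / n] * n
    orientations = [0 if i % 2 == 0 else 90 for i in range(n)]
    EI = effective_EI(layup, orientations)
    print(f"{n}-layer: EI ~ {EI / 1e3:.0f} kN*m^2/m, "
          f"Euler P_cr ~ {euler_buckling_load(EI) / 1e3:.0f} kN/m")
```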
The seismic behavior of cross-laminated timber (CLT) is an area of active research and development. Studies have shown that CLT has good seismic performance due to its high stiffness and strength, as well as its ductility and energy dissipation capacity.
Discussions
Why CLT is sustainable
Cross-laminated timber (CLT) is considered sustainable because it's made from renewable wood that can be harvested responsibly. CLT production is also eco-friendly, generating fewer greenhouse gas emissions and using less energy than traditional materials like concrete and steel. CLT can help reduce a building's carbon footprint, as it absorbs and stores carbon dioxide. Its lightweight, prefabricated design minimizes waste and improves construction efficiency. Finally, CLT is durable and long-lasting, with a projected lifespan of over 100 years, making it a promising option for sustainable construction.
Objective and approaches of the researches
For the compressive strength research, the author uses a computational homogenization procedure that is implemented numerically within a finite element framework using the commercial software ANSYS 15.0. The study aims to determine the buckling strength of Cross-Laminated Timber (CLT) walls by varying the thickness of individual layers while maintaining the total thickness. ANSYS is used to apply critical buckling loads to a set thickness wall and alter the thickness of each layer. The research demonstrates that the buckling resistance strength of CLT walls is influenced by two physical properties: the total thickness of the wall and the number of layers. Additionally, the study investigates the effect of wall openings on the wall's strength by analyzing the opening geometry, loading orientation, and opening position.
For the seismic resistance research, shaking table tests were conducted on the target structure, which was composed of narrow shear walls and high-ductility tensile bolts. The structure was shown to behave well under the severe strong motion specified in the Japanese building standard law, and to withstand input motion recorded during the 1995 Kobe earthquake, despite a compressive rupture in the shear walls, which are the elements that also support the vertical load. Story shear capacity calculated from a numerical model and from element tests (such as connection tests) was evaluated on the safe side.
Assumptions of the research
Linear elastic constitutive law is assumed to reduce the computational time.
The wood is assumed to be in a dry condition.
The interface between different layers is fully bonded with no slide.
Simply supported boundary conditions are used in all the models.
Only rectangular cut-outs for openings are considered.
Meshing was performed in ANSYS using 20 mm cubic SOLID186 elements.
For further research
While linear elastic behavior is often assumed for Cross-Laminated Timber (CLT), in reality, its performance is not always linear and needs to be studied in a non-linear context. Additionally, the connection between different layers cannot always be fully bonded, and the moisture content of the wood will change over time. Furthermore, the impact of different opening shapes on CLT strength requires further investigation, and boundary conditions are not always simply supported. It is important to consider these factors when studying the behavior and performance of CLT in real-world applications.
See also
Engineered Wood
Brettstapel
Glued laminated timber
Laminated veneer lumber
Green building
Plyscraper
Lumber
Plywood
References
External links
Engineered wood
Composite materials | Cross-laminated timber | [
"Physics"
] | 4,851 | [
"Materials",
"Composite materials",
"Matter"
] |
16,367,693 | https://en.wikipedia.org/wiki/Forensic%20polymer%20engineering | Forensic polymer engineering is the study of failure in polymeric products. The topic includes the fracture of plastic products, or any other reason why such a product fails in service, or fails to meet its specification. The subject focuses on the material evidence from crime or accident scenes, seeking defects in those materials that might explain why an accident occurred, or the source of a specific material to identify a criminal. Many analytical methods used for polymer identification may be used in investigations, the exact set being determined by the nature of the polymer in question, be it thermoset, thermoplastic, elastomeric or composite in nature.
One aspect is the analysis of trace evidence such as skid marks on exposed surfaces, where contact between dissimilar materials leaves material traces of one left on the other. Provided the traces can be analyzed successfully, then an accident or crime can often be reconstructed.
Methods of analysis
Thermoplastics can be analysed using infra-red spectroscopy, ultraviolet–visible spectroscopy, nuclear magnetic resonance spectroscopy and the environmental scanning electron microscope. Failed samples can either be dissolved in a suitable solvent and examined directly (UV, IR and NMR spectroscopy), or examined as a thin film cast from solvent or cut by microtomy from the solid product. Infra-red spectroscopy is especially useful for assessing oxidation of polymers, such as the polymer degradation caused by faulty injection moulding. In one case, the spectrum of a failed product showed the characteristic carbonyl group produced by oxidation of polypropylene, which had made the product brittle. The product was a critical part of a crutch, and when it failed, the user fell and was very seriously injured. The spectrum was obtained from a thin film cast from a solution of a sample of the plastic taken from the failed forearm crutch.
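The degree of oxidation seen in such spectra is often quantified as a carbonyl index, the ratio of the absorbance near the carbonyl band to that of a reference band little affected by oxidation. The sketch below is not from the source; the band positions (about 1715 and 2720 cm^-1 for polypropylene), the absence of baseline correction, and the spectra themselves are illustrative assumptions.

```python
def absorbance_at(spectrum, target_wavenumber):
    """Return the absorbance closest to a target wavenumber (cm^-1).
    spectrum: list of (wavenumber, absorbance) pairs."""
    return min(spectrum, key=lambda point: abs(point[0] - target_wavenumber))[1]

def carbonyl_index(spectrum, carbonyl_band=1715.0, reference_band=2720.0):
    """Ratio of carbonyl absorbance to a reference band; higher values indicate
    more oxidation (band choices and baselining vary between laboratories)."""
    return absorbance_at(spectrum, carbonyl_band) / absorbance_at(spectrum, reference_band)

# Illustrative spectra (wavenumber cm^-1, absorbance), not measured data:
virgin = [(2720.0, 0.20), (1715.0, 0.02), (1376.0, 0.55)]
failed = [(2720.0, 0.20), (1715.0, 0.18), (1376.0, 0.54)]
print(f"virgin material: CI = {carbonyl_index(virgin):.2f}")  # 0.10
print(f"failed material: CI = {carbonyl_index(failed):.2f}")  # 0.90
```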
Microtomy is preferable since there are no complications from solvent absorption, and the integrity of the sample is partly preserved. Thermosets, composites and elastomers can often be examined using only microtomy owing to the insoluble nature of these materials.
Fracture
Fractured products can be examined using fractography, an especially useful method for all broken components using macrophotography and optical microscopy. Although polymers usually possess quite different properties to metals, ceramics and glasses, they are just as susceptible to failure from mechanical overload, fatigue and stress corrosion cracking if products are poorly designed or manufactured.
Scanning electron microscopy or ESEM is especially useful for examining fracture surfaces and can also provide elemental analysis of viewed parts of the sample being investigated. It is effectively a technique of microanalysis and valuable for examination of trace evidence. On the other hand, colour rendition is absent in ESEM, and there is no information provided about the way in which those elements are bonded to one another. Specimens will be exposed to a partial vacuum, so any volatiles may be removed, and surfaces may be contaminated by substances used to attach the sample to the mount.
Examples
Many polymers are attacked by specific chemicals in the environment, and serious problems can arise, including road accidents and personal injury. Polymer degradation leads to sample embrittlement, and fracture under low applied loads.
Ozone cracking
Polymers, for example, can be attacked by aggressive chemicals, and if under load, cracks will grow by the mechanism of stress corrosion cracking. Perhaps the oldest known example is the ozone cracking of rubbers, where traces of ozone in the atmosphere attack double bonds in the chains of the materials. Elastomers with double bonds in their chains include natural rubber, nitrile rubber and styrene-butadiene rubber. They are all highly susceptible to ozone attack, and their failure can cause problems like vehicle fires (from cracked rubber fuel lines) and tyre blow-outs. Nowadays, anti-ozonants are widely added to these polymers, so the incidence of cracking has dropped. However, not all safety-critical rubber products are protected, and, since ozone concentrations of only a few parts per billion are enough to start attack, failures are still occurring.
Chlorine-induced cracking
Another highly reactive gas is chlorine, which will attack susceptible polymers such as acetal resin and polybutylene pipework. There have been many examples of such pipes and acetal fittings failing in domestic properties in the US as a result of chlorine-induced cracking. Essentially the gas attacks sensitive parts of the chain molecules (especially secondary, tertiary or allylic carbon atoms), oxidising the chains and ultimately causing chain cleavage. The root cause is traces of chlorine in the water supply, added for its anti-bacterial action, with attack occurring even at parts-per-million concentrations of the dissolved gas. The chlorine attacks weak parts of a product; in one case of an acetal resin junction in a water supply system, the thread roots were attacked first, causing a brittle crack to grow. Discoloration on the fracture surface, caused by deposition of carbonates from the hard water supply, showed that the joint had been in a critical state for many months.
Hydrolysis
Most step-growth polymers can suffer hydrolysis in the presence of water, often a reaction catalysed by acid or alkali. Nylon for example, will degrade and crack rapidly if exposed to strong acids, a phenomenon well known to people who accidentally spill acid onto their tights.
In one case, a broken fuel pipe caused a serious accident when diesel fuel poured out of a van onto the road. A following car skidded, and the driver was seriously injured when she collided with an oncoming lorry. Scanning electron microscopy (SEM) showed that the nylon connector had fractured by stress corrosion cracking owing to a small leak of battery acid. Nylon is susceptible to hydrolysis in contact with sulfuric acid, and only a small leak of acid would have sufficed to start a brittle crack in the injection-moulded connector by the mechanism known as stress corrosion cracking (SCC). The crack took about 7 days to grow across the diameter of the tube, so the van driver should have seen the leak well before the crack grew to a critical size. He did not, and the accident resulted. The fracture surface was mainly brittle, with striations indicating progressive growth of the crack across the diameter of the pipe. Once the crack had penetrated the inner bore, fuel started leaking onto the road. Diesel is especially hazardous on road surfaces because it forms a thin oily film that cannot be seen easily by drivers. It is akin to black ice in lubricity, so skids are common when diesel leaks occur. The insurers of the van driver admitted liability and the injured driver was compensated.
Polycarbonate is susceptible to alkali hydrolysis, the reaction simply depolymerising the material. Polyesters are prone to degrade when treated with strong acids, and in all these cases, care must be taken to dry the raw materials for processing at high temperatures to prevent the problem occurring.
UV degradation
Many polymers are also attacked by UV radiation at vulnerable points in their chain structures. Thus polypropylene suffers severe cracking in sunlight unless anti-oxidants are added. The point of attack is the tertiary carbon atom present in every repeat unit, leading to oxidation and finally chain breakage. Polyethylene is also susceptible to UV degradation, especially those variants that are branched polymers such as LDPE. The branch points are tertiary carbon atoms, so polymer degradation starts there and results in chain cleavage and embrittlement. In one example, carbonyl groups were easily detected by IR spectroscopy of a cast thin film. The product was a road cone that had cracked in service, and many similar cones also failed because an anti-UV additive had not been used.
See also
Applied spectroscopy
Catastrophic failure
Circumstantial evidence
Environmental stress cracking
Forensic chemistry
Forensic electrical engineering
Forensic evidence
Forensic photography
Forensic engineering
Forensic materials engineering
Forensic science
Fractography
Ozone cracking
Polymer degradation
Skid mark
Stress corrosion cracking
Trace evidence
UV degradation
References
Peter R Lewis and Sarah Hainsworth, Fuel Line Failure from stress corrosion cracking, Engineering Failure Analysis,13 (2006) 946–962.
Lewis, Peter Rhys, Reynolds, K, Gagg, C, Forensic Materials Engineering: Case studies, CRC Press (2004).
Wright, D.C., Environmental Stress Cracking of Plastics RAPRA (2001).
Ezrin, Meyer, Plastics Failure Guide: Cause and Prevention, Hanser-SPE (1996).
Lewis, Peter Rhys, Forensic Polymer Engineering: Why polymer products fail in service, 2nd Edition, Elsevier-Woodhead (2016).
Polymers
Polymer engineering
Materials degradation | Forensic polymer engineering | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,741 | [
"Polymers",
"Materials degradation",
"Materials science",
"Polymer chemistry"
] |
16,368,112 | https://en.wikipedia.org/wiki/Lansdowne%20Bank | Lansdowne Bank, sometimes called Landsdowne Bank, is an extensive submerged bank located between the main island of New Caledonia and the Chesterfield Islands, in the easternmost part of the Coral Sea. It covers an area of , making it one of the largest banks of the world, has general depths of , and a largely sandy bottom. Two reefs mark the shallowest spots of the bank, but they are still submerged at low tide.
Fairway Ridge, also called Fairway Plateau, is a submarine feature shown on some maps in that area. The Lansdowne Bank area marked is far larger than Fairway Plateau, but there are smaller, unnamed plateaus nearby. The Lansdowne Bank area, shown at the northeastern end of Lord Howe Rise, is separated from New Caledonia by the northern New Caledonia Trough.
Nereus Reef
Nereus Reef is at the northern end of Lansdowne Bank. Its given position of is doubtful. Nereus Reef has the least depth of the bank, with . A shoal with a depth of is west-northwest of Nereus Reef.
Fairway Reef
The southeastern end of Lansdowne Bank is marked by Fairway Reef, thus named from its lying in the fairway between Australia and New Caledonia, midway between the Bellona Reefs (south of the Chesterfield Islands) and New Caledonia. Fairway Reef is long, about 4 to 5 fathoms (7.3 to 9.1 metres) deep, of coral bottom, and located at . According to some sources, Fairway Reef dries at low tide.
Sandy Island Mystery
Google Maps and other internet maps showed a large landmass named Sandy Island in northern Lansdowne Bank, but did not show an image for it. The non-existence of this phantom island had been proved by French missions in the 1970s, and the information passed to hydrographic services around the world, but it had remained in the World Vector Shoreline Database. In 2012 Australian scientists again proved the island does not exist.
See also
List of reefs
References
External links
Geology
Geology
Part of Nautical Chart n° 6670
Landforms of New Caledonia
Coral reefs
Undersea banks of the Pacific Ocean | Lansdowne Bank | [
"Biology"
] | 441 | [
"Biogeomorphology",
"Coral reefs"
] |
16,368,217 | https://en.wikipedia.org/wiki/List%20of%20sovereign%20states%20by%20number%20of%20broadband%20Internet%20subscriptions | This article contains a sortable list of countries by number of broadband Internet subscriptions and penetration rates, using data compiled by the International Telecommunication Union.
List
The list includes figures for both fixed wired broadband subscriptions and mobile cellular subscriptions:
Fixed-broadband access refers to high-speed fixed (wired) access to the public Internet at downstream speeds equal to, or greater than, 256 kbit/s. This includes satellite Internet access, cable modem, DSL, fibre-to-the-home/building, and other fixed (wired) broadband subscriptions. The totals are measured irrespective of the method of payment.
Mobile-cellular access refers to high-speed mobile access to the public Internet at advertised data speeds equal to, or greater than, 256 kbit/s. To be counted, a mobile subscription must allow access to the greater Internet via HTTP and must have been used to make a data connection using the Internet Protocol in the previous three months. SMS and MMS messaging do not count as an active Internet data connection even if they are delivered via IP.
Penetration rate is the percentage (%) of a country's population that are subscribers. A dash (—) is shown when data for 2012 is not available. Non-country and disputed areas are shown in italics. Taiwan is listed as a sovereign country.
Note: Because a single Internet subscription may be shared by many people and a single person may have more than one subscription, the penetration rate does not directly reflect the actual level of access to broadband Internet in the population, and penetration rates larger than 100% are possible.
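As a rough illustration of how the penetration figures in the table are derived, the short sketch below computes subscriptions per 100 inhabitants. The subscription and population numbers in it are invented for the example and are not taken from the ITU data.

    # Penetration rate as defined above: subscriptions per 100 inhabitants.
    # The figures below are invented for illustration, not ITU data.
    def penetration_rate(subscriptions: int, population: int) -> float:
        """Return subscriptions per 100 inhabitants; values above 100 are possible."""
        return 100.0 * subscriptions / population

    print(penetration_rate(3_500_000, 2_800_000))  # prints 125.0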
See also
List of countries by number of Internet users
List of countries by number of telephone lines in use
List of countries by smartphone penetration
List of mobile network operators
List of multiple-system operators
List of telecommunications companies
References
External links
"Internet Monitor", a research project of the Berkman Center for Internet & Society at Harvard University to evaluate, describe, and summarize the means, mechanisms, and extent of Internet access, content controls and activity around the world.
International telecommunications
Broadband Internet users
Internet-related lists
IT infrastructure | List of sovereign states by number of broadband Internet subscriptions | [
"Technology"
] | 427 | [
"Computing-related lists",
"Information technology",
"IT infrastructure",
"Internet-related lists"
] |
16,369,125 | https://en.wikipedia.org/wiki/Ferropericlase | Ferropericlase or magnesiowüstite is a magnesium/iron oxide with the chemical formula that is interpreted to be one of the main constituents of the Earth's lower mantle together with the silicate perovskite (), a magnesium/iron silicate with a perovskite structure. Ferropericlase has been found as inclusions in a few natural diamonds. An unusually high iron content in one suite of diamonds has been associated with an origin from the lowermost mantle. Discrete ultralow-velocity zones in the deepest parts of the mantle, near the Earth's core, are thought to be blobs of ferropericlase, as seismic waves are significantly slowed as they pass through them, and ferropericlase is known to have this effect at the high pressures and temperatures found deep within the Earth's mantle. In May 2018, ferropericlase was shown to be anisotropic in specific ways in the high pressures of the lower mantle, and these anisotropies may help seismologists and geologists to confirm whether those ultra-low velocity zones are indeed ferropericlase, by passing seismic waves through them from various different directions and observing the exact amount of change in the velocity of those waves.
Spin transition zone
Changes in the spin state of electrons in iron in mantle minerals have been studied experimentally in ferropericlase. Samples are subjected to the conditions of the lower mantle in a laser-heated diamond anvil cell and the spin state is measured using synchrotron X-ray spectroscopy. Results indicate that the change from a high- to a low-spin state in iron occurs with increasing depth over a range from 1000 km to 2200 km.
Mantle abundance
Ferropericlase makes up about 20% of the volume of the lower mantle of the Earth, which makes it the second most abundant mineral phase in that region after silicate perovskite ; it also is the major host for iron in the lower mantle. At the bottom of the transition zone of the mantle, the reaction
γ-(Mg,Fe)2SiO4 ⇌ (Mg,Fe)SiO3 + (Mg,Fe)O
transforms γ-olivine into a mixture of perovskite and ferropericlase and vice versa. In the literature, this mineral phase of the lower mantle is also often called magnesiowüstite.
See also
Ilmenite
Post-perovskite
Wollastonite ()
References
Petrology
Oxide minerals
Earth's mantle | Ferropericlase | [
"Chemistry"
] | 502 | [
"Inorganic compounds",
"Inorganic compound stubs"
] |
16,369,138 | https://en.wikipedia.org/wiki/NewDos/80 | NewDos/80 is a third-party operating system for the Radio Shack TRS-80 line of microcomputers released in 1980. NewDos/80 was developed by Apparat, Inc., of Denver, Colorado. NewDos/80 version 2.0 was released in August 1981. It ran on the TRS-80 Model I and Model III.
Overview
The operating system had additional commands and features that were not available in TRSDOS, the native operating system for TRS-80 computers. NewDos/80 allowed TRS-80 computers to take advantage of advances in floppy disk storage that went beyond the initial 87.5KB 35-track, single-density, single-sided format. The system also corrected issues that early versions of TRSDOS had with arbitrarily losing data due to errors in how it communicated with the contemporary TRS-80 disk drives' 1771 disk controller.
NewDos/80 had many options for specifying specific low-level disk configurations. Settings such as diskette formats, disk drive types, track geometry and controllers could be configured using the PDRIVE command. In version 2.1, Apparat added support for hard disk drives via an external bus adapter.
Additionally, NewDOS/80 incorporated a software fix for the infamous hardware "keybounce" problem associated with the TRS-80 Model I. Without such a fix, keyboard keys would often repeat multiple times when struck. This fix was not built into TRS-DOS that came with the computer, so users were forced to load a separate debounce utility every time they booted.
NewDos/80 was written by Cliff Ide and Jason Matthews. Ide was the primary author of NewDos in all of its incarnations, Matthews wrote "patches" for various applications such as Scripsit and VisiCalc. Ide later retired and Matthews went on to other projects in the software business.
Commands
The following is a list of NewDos/80 commands:
Reception
While criticizing NEWDOS's "nearly incomprehensible documentation", Jerry Pournelle wrote in 1980 that it was "a much better operating system" than the "needlessly complex" TRSDOS and stated that "Tandy ought to be marketing NEWDOS+ themselves".
See also
List of DOS commands
List of Unix commands
References
External links
Mike's Virtual Computer Museum - NEWDOS/80 page
Ira Goldklang's TRS-80 Revived Site - NEWDOS/80 Version 2 reference
Disk operating systems
TRS-80
1980 software | NewDos/80 | [
"Technology"
] | 523 | [
"Operating system stubs",
"Computing stubs"
] |
16,369,918 | https://en.wikipedia.org/wiki/Plutonium%20hexafluoride | Plutonium hexafluoride is the highest fluoride of plutonium, and is of interest for laser enrichment of plutonium, in particular for the production of pure plutonium-239 from irradiated uranium. This isotope of plutonium is needed to avoid premature ignition of low-mass nuclear weapon designs by neutrons produced by spontaneous fission of plutonium-240.
Preparation
Plutonium hexafluoride is prepared by fluorination of plutonium tetrafluoride (PuF4) by powerful fluorinating agents such as elemental fluorine.
PuF4 + F2 → PuF6
This reaction is endothermic. The product forms relatively quickly at temperatures of 750 °C, and high yields may be obtained by quickly condensing the product and removing it from equilibrium.
It can also be obtained by fluorination of plutonium(III) fluoride, plutonium(IV) oxide, or plutonium(IV) oxalate at approximately 700 °C:
2 PuF3 + 3 F2 → 2 PuF6
PuO2 + 3 F2 → PuF6 + O2
Pu(C2O4)2 + 3 F2 → PuF6 + 4 CO2
Alternatively, plutonium(IV) fluoride oxidizes in an 800-°C oxygen atmosphere to plutonium hexafluoride and plutonium(IV) oxide:
3 PuF4 + O2 → 2 PuF6 + PuO2
In 1984, the synthesis of plutonium hexafluoride at near-room temperature was achieved through the use of dioxygen difluoride. Hydrogen fluoride is not sufficient, even though it is a powerful fluorinating agent. Room-temperature syntheses are also possible using krypton difluoride or irradiation with UV light.
Properties
Physical properties
Plutonium hexafluoride is a red-brown volatile solid, crystallizing in the orthorhombic crystal system with space group Pnma and lattice parameters , , and . It sublimes around 60 °C with a heat of sublimation of 12.1 kcal/mol to a gas of octahedral molecules with plutonium–fluorine bond lengths of 197.1 pm. At high pressure, the gas condenses, with a triple point at 51.58 °C and ; the heat of vaporization is 7.4 kcal/mol. At temperatures below −180 °C, plutonium hexafluoride is colorless.
Plutonium hexafluoride is paramagnetic, with molar magnetic susceptibility 0.173 mm3/mol.
Spectroscopic properties
Plutonium hexafluoride has six fundamental vibrational modes: the stretching modes ν1, ν2 and ν3, and the bending modes ν4, ν5 and ν6. The Raman spectrum cannot be observed, because irradiation at 564.1 nm induces photochemical decomposition. Irradiation at 532 nm induces fluorescence at 1900 nm and 4800 nm; irradiation at 1064 nm induces fluorescence at about 2300 nm.
Chemical properties
Plutonium hexafluoride is relatively hard to handle, being very corrosive, poisonous, and prone to auto-radiolysis.
Reactions with other compounds
PuF6 is stable in dry air, but reacts vigorously with water, including atmospheric moisture, to form plutonium(VI) oxyfluoride and hydrofluoric acid.
PuF6 + 2 H2O → PuO2F2 + 4 HF
It can be stored for a long time in a quartz or pyrex ampoule, provided there are no traces of moisture, the glass has been thoroughly outgassed, and any traces of hydrogen fluoride have been removed from the compound.
An important reaction involving PuF6 is the reduction to plutonium dioxide. Carbon monoxide generated from an oxygen-methane flame can perform the reduction.
Decomposition reactions
Plutonium hexafluoride typically decomposes to plutonium tetrafluoride and fluorine gas. Thermal decomposition does not occur at room temperature, but proceeds very quickly at 280 °C. In the absence of any external cause for decomposition, the alpha-particle current from plutonium decay will generate auto-radiolysis, at a rate of 1.5%/day (half-time 1.5 months) in solid phase. Storage in gas phase at pressures 50–100 torr (70–130 mbar) appears to minimize auto-radiolysis, and long-term recombination with freed fluorine does occur.
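As a rough check on the figures above, the sketch below relates the quoted 1.5%/day decomposition rate to the quoted half-time, assuming simple first-order (exponential) kinetics; it is a generic calculation rather than additional data on PuF6.

    # First-order decay: half-time implied by a 1.5%/day decomposition rate.
    import math

    daily_rate = 0.015                          # fraction decomposed per day
    half_time_days = math.log(2) / daily_rate   # about 46 days
    print(f"half-time ~ {half_time_days:.0f} days (~{half_time_days / 30:.1f} months)")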
Likewise, the compound is photosensitive, decomposing (possibly to plutonium pentafluoride and fluorine) under laser irradiation at a wavelength of less than 520 nm.
Exposure to laser radiation at 564.1 nm or gamma rays will also induce rapid dissolution.
Uses
Plutonium hexafluoride plays a role in the enrichment of plutonium, in particular for the isolation of the fissile isotope 239Pu from irradiated uranium. For use in nuclear weaponry, the 241Pu present must be removed for two reasons:
It generates enough neutrons by spontaneous fission to cause an uncontrollable reaction.
It undergoes beta decay to form 241Am, leading to the accumulation of americium over long periods of storage which must be removed.
Separation of the plutonium from the americium it contains proceeds through reaction with dioxygen difluoride. Aged PuF4 is fluorinated at room temperature to gaseous PuF6, which is separated and reduced back to PuF4, whereas any AmF4 present does not undergo the same conversion. The product thus contains only very small amounts of americium, which becomes concentrated in the unreacted solid.
Separation of the hexafluorides of uranium and plutonium is also important in the reprocessing of nuclear waste. From a molten salt mixture containing both elements, uranium can largely be removed by fluorination to UF6, which is stable at higher temperatures, with only small amounts of plutonium escaping as PuF6.
History
Shortly after plutonium's discovery and isolation in 1940, chemists began to postulate the existence of plutonium hexafluoride. Early experiments, which sought to mimic methods for the construction of uranium hexafluoride, had conflicting results; and definitive proof only appeared in 1942. The Second World War then interrupted the publication of further research.
Initial experiments, undertaken with extremely small quantities of plutonium, showed that a volatile plutonium compound would develop in a stream of fluorine gas only at temperatures exceeding 700 °C. Subsequent experiments showed that plutonium on a copper plate volatilized in a 500-°C fluorine stream, and that the reaction rate decreased with atomic number in the series uranium > neptunium > plutonium. Brown and Hill, using milligram-scale samples of plutonium, completed in 1942 a distillation experiment with uranium hexafluoride, suggesting that higher fluorides of plutonium ought to be unstable and decompose to plutonium tetrafluoride at room temperature. Nevertheless, the vapor pressure of the compound appeared to correspond to that of uranium hexafluoride. Davidson, Katz, and Orlemann showed in 1943 that plutonium in a nickel vessel volatilized under a fluorine atmosphere, and that the reaction product precipitated on a platinum surface.
Fisher, Vaslow, and Tevebaugh conjectured that the higher fluorides have a positive enthalpy of formation, so that their formation would be endothermic and they would be stabilized only at high temperatures.
In 1944, a volatile compound of plutonium believed to be the elusive plutonium hexafluoride was prepared, but the product decomposed prior to identification. The fluid substance would collect onto cooled glass and liquefy, but the fluorine atoms would then react with the glass.
By comparison between uranium and plutonium compounds, Brewer, Bromley, Gilles, and Lofgren computed the thermodynamic characteristics of plutonium hexafluoride.
In 1950, Florin's efforts finally yielded the synthesis, and improved thermodynamic data and a new apparatus for its production soon followed. Around the same time, British workers also developed a method for the production of PuF6.
References
Plutonium compounds
Hexafluorides
Octahedral compounds
Actinide halides
Nuclear materials | Plutonium hexafluoride | [
"Physics"
] | 1,685 | [
"Materials",
"Nuclear materials",
"Matter"
] |
16,370,251 | https://en.wikipedia.org/wiki/Central%20Power%20Research%20Institute | Central Power Research Institute (CPRI) is a research institute originally established by the Government of India in 1960, with headquarters in Bangalore. The Institute was re-organized into an Autonomous Society in the year 1978 under the aegis of the Ministry of Power, Government of India. The main objective of setting up the Institute is to serve as a national Level laboratory for undertaking applied research in electrical power engineering besides functioning as an independent national testing and certification authority for electrical equipment and components to ensure reliability in power systems and to innovate and develop new products.
Research and development
CPRI has been undertaking programmes in various areas of electrical power generation, transmission and distribution in an endeavour to assist utilities in supplying reliable, uninterrupted and high-quality service to consumers. The broad objectives of the research projects taken up at CPRI are:
Offering technical advice and trouble shooting.
Product development and improvement to meet global standards.
Bridging the gap in testing and developing special testing technologies.
Large collaborative research projects with utilities and industries.
Electrical products and process improvement
While some of the programmes are undertaken in collaboration with industries and utilities, others are taken up in-house to develop expertise and infrastructure for the power sector. Project proposals are planned well in advance, keeping in mind the needs of industry and utilities, and are cleared by the Committee on Research.
At CPRI, research and development projects of different scales are taken up under four schemes, namely:
IHRD (In-House Research and Development)
RSoP (Research Scheme on Power)
Sponsored projects
NPP (R&D under National Perspective Plan)
Testing and certification
The institute's performance in testing and certification during 2009–2010 improved markedly compared with 2008–2009. A total of 66,347 tests on 22,256 samples were conducted for 6,127 organizations. The revenue realized from testing, certification and consultancy services rendered during the year increased to Rs. 9,600 lakhs, as against Rs. 7,741 lakhs during 2008–2009.
New Testing Facilities
IEC61850 protocol conformance testing laboratory
Mobile field testing laboratory for On-site Accuracy Test and Condition Assessment of EHV Class Instrument Transformers
Setting up 10kA Temperature Rise Test Facility.
Test Facilities for Air-conditioners and Refrigerator.
Automation of Tower Testing Station.
Augmentation of Relay Testing laboratory
Compression Test on Quadruple Spacer
Testing 800 kV HDVC Tower for M/s. POWERGRID, Gurgaon
Seismic pushover Test on Prototype RCC Framed structure
Dynamic Testing of Distance Protection relay P443 for M/S. Areva T & D Ltd., PSP, Chennai.
Testing of Distance Relays using Real Time Digital simulation for M/s.BGR Energy System Ltd.,
Testing of numerical distance relays—SEL &OMEGA
A destruction test on power electronics capacitors rated 560 F, 3.8 kV DC was conducted as per IEC 61881-1999 in the power capacitors laboratory for the first time, which helped the customer verify design aspects with reference to safety and protection.
For the first time in the country, a 1180 kvar, 12 kV shunt capacitor was tested at the Power Capacitor Laboratory. This is the highest unit rating (kvar) tested.
Other important tests
Under the standards and labelling programme of the Government of India, CPRI carried out check testing of BEE star-labelled products such as refrigerators and TFLs against the work order from M/s BEE, New Delhi.
M/s BEE has issued a work order to CPRI for carrying out second-round check testing of refrigerators, TFLs and fans during 2010–2011.
Testing of solar lanterns is now undertaken at ECDD.
Testing of LED lamps as per manufacturers' specifications is undertaken at ECDD
ECDD has taken up field and lab evaluation of street light controllers as per manufacturers' specifications
Under the standards and labelling programme of the Government of India, CPRI carried out check testing of BEE star-labelled products such as refrigerators and TFLs.
Consultancy services
Consultancy is one of the major activities of the institute. The following are some of the important consultancy services rendered by the institute in various areas during the year under report:
Distribution System
BESCOM-DAS Project
KPTCL–SCADA Project
Restructured Accelerated Power development Reforms Programme (R-APDRP)
Rajiv Gandhi Grameen Vidyutikaran Yojana (RGGVY) Project
Power system studies
Study of electrical system for changed systems configuration for increased reliability for M/s Burrup Fertilizers Pty Ltd., Australia.
Power System study and Energy Audit of Reliance Industries Ltd., Hazira.
Consultancy service for augmenting power distribution network at SAIL-ISP, Burnpur township related area for ISP-SAIL, Burnpur.
Technical services for power line design with low loss conductor for sterlite private limited.
Transient simulation studies of 14 MVA and 25 MVA furnace transformer for RSP, Rourkela.
Testing of load management system controller for Jindal steel plant.
Testing of composite intelligent load shedding controllers on RTDS for M/s ABB.
Grounding Studies
Evaluation of earthing system with INDELEC Compound as filler material.
Long-term measurement of ground resistance of Terec+ compound-based earthing system
Condition monitoring and diagnostics of substations and power plants.
Long-term measurement of ground resistance of Curec+ and Curecon compound-based earthing systems
Performance and energy efficiency
Consultancy for thermal/hydro power stations
RLA assignments involving in-situ oxide scale measurements and microstructure replication studies.
Fibroscopic inspection of LTRH/Economizer coils.
Crystal size and lattice strain measurements in nano coatings.
Corrosion mapping of water wall tubes using Low Frequency Electro Magnetic Technique (LFET).
Residual Life Assessment of penstocks & Hydro turbine components.
Hydrogen embrittlement in water wall tubes.
The performance evaluation study of Electrostatic precipitator.
Characterization and beneficiation of fly ash cenospheres generated at Simhadri thermal power project.
Pipe & Hangers Inspection and flexibility analysis for high energy pipeline systems for thermal power plants.
Transmission towers and accessories.
Vibration & seismic performance.
Quality accreditations
The facilities of the Institute are accredited as per ISO/IEC 17025 quality norms, and the Institute has acquired international accreditations such as Short Circuit Testing Liaison recognition for global acceptance. The Institute's certification is gaining acceptance in countries of the Middle East, Africa and South East Asia.
CPRI network
CPRI has its head office in Bangalore and the Institute has six facilities at Bhopal, Hyderabad, Nagpur, Noida, Kolkata and Guwahati.
See also
Electric Power Research Institute
References
External links
CPRI website
CPRI Recruitment Notifications 2012-13
Cheap power, sify.com.
Engineering consulting firms
Electric power in India
Research institutes in Bengaluru
Research institutes in Hyderabad, India
1960 establishments in Mysore State | Central Power Research Institute | [
"Engineering"
] | 1,409 | [
"Engineering consulting firms",
"Engineering companies"
] |
16,370,290 | https://en.wikipedia.org/wiki/Thiophene-3-acetic%20acid | Thiophene-3-acetic acid is an organosulfur compound with the formula HO2CCH2C4H3S. It is a white solid. It is one of two isomers of thiophene acetic acid, the other being thiophene-2-acetic acid.
Thiophene-3-acetic acid has attracted attention as a precursor to functionalized derivatives of polythiophene.
References
Thiophenes
Acetic acids | Thiophene-3-acetic acid | [
"Chemistry"
] | 104 | [
"Organic compounds",
"Organic compound stubs",
"Organic chemistry stubs"
] |
16,370,324 | https://en.wikipedia.org/wiki/Ambu%20%28company%29 | Ambu, or officially Ambu A/S, is a Danish company that develops, produces and markets single-use endoscopy solutions, diagnostic and life-supporting equipment to hospitals, private practices, and rescue services.
It was founded in Denmark in 1937, as Testa Laboratorium, by German engineer Holger Hesse.
The largest business areas are anesthesia, cardiology, neurology, pulmonology, urology and gastroenterology. The company's most important products are devices for artificial ventilation, single-use endoscopes and single-use electrodes for ECG tests and neurophysiological mappings.
External links
Official website
LinkedIn page
References
Health care companies of Denmark
Life science companies based in Copenhagen
Companies based in Ballerup Municipality
Technology companies established in 1937
Danish companies established in 1937
Companies listed on Nasdaq Copenhagen
Companies in the OMX Copenhagen 25
Companies in the OMX Nordic 40 | Ambu (company) | [
"Biology"
] | 195 | [
"Life sciences industry",
"Life science companies based in Copenhagen"
] |
16,370,466 | https://en.wikipedia.org/wiki/ICEARRAY | ICEARRAY is an abbreviation for Icelandic Strong-motion Array. The ICEARRAY network is a seismic array of 14 strong-motion stations located within the South Iceland Seismic Zone. Each station consists of a seismograph situated in a protective housing. The stations are spread across a geographical area of approximately 3 km² in the town of Hveragerdi in south-western Iceland. Most of the units are located in the basements of residential buildings in Hveragerdi town centre, which is approximately 35 km southeast of Iceland's capital, Reykjavík. The ICEARRAY project is supported by the 6th Framework of the European Commission through the Marie Curie International Re-integration Grant, the Iceland Centre for Research and the University of Iceland Earthquake Engineering Research Centre.
Instruments
The instruments used in the seismic array are CUSP-3Clp accelerometers manufactured by Canterbury Seismic Instruments Ltd. based in New Zealand. The instruments record three components of ground motion, i.e. one vertical and two horizontal components, over a high dynamic range. The instruments are connected to a GPS clock, ensuring a uniform time over the network. The instruments communicate via a wireless permanent GPRS connection. This enables remote configuration of individual units and near real-time alerts and data uploading. An important feature that has been developed during the establishment of the array is a common triggering scheme. This feature was designed in collaboration with the manufacturers of the units. In the event of two or more units receiving an earthquake trigger, the common triggering feature activates the entire array to start recording. This scheme ensures complete data coverage and greatly reduces the need to filter out noise and manmade disturbances.
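The common triggering scheme can be pictured as a simple coincidence test: when two or more stations report a trigger close together in time, the whole array is commanded to record. The sketch below illustrates that idea only; the two-station condition follows the description above, while the station identifiers and the 10-second window are assumptions for the example and not details of the actual ICEARRAY firmware.

    # Sketch of a coincidence trigger: an array-wide recording command is issued
    # when at least MIN_STATIONS stations trigger within WINDOW of one another.
    # Window length and station names are illustrative assumptions.
    from datetime import datetime, timedelta

    WINDOW = timedelta(seconds=10)
    MIN_STATIONS = 2
    recent_triggers: dict[str, datetime] = {}

    def on_station_trigger(station_id: str, t: datetime) -> bool:
        """Record a single-station trigger; return True if the array should record."""
        recent_triggers[station_id] = t
        recent = [s for s, ts in recent_triggers.items() if abs(t - ts) <= WINDOW]
        return len(recent) >= MIN_STATIONS

    # Example: a second station triggering shortly after the first fires the array.
    t0 = datetime(2024, 1, 1, 12, 0, 0)
    on_station_trigger("ST01", t0)                                  # returns False
    print(on_station_trigger("ST02", t0 + timedelta(seconds=3)))    # prints True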
Geographical location
Iceland is located on the Mid-Atlantic Ridge, the extensional tectonic plate boundary between the North American Plate and the Eurasian Plate. It is also located over a deep-seated mantle plume known as the Iceland hotspot, which causes dynamic uplift of the Iceland Plateau, with associated volcanism and seismicity. The line of the Mid-Atlantic Ridge is offset by two transform zones in Iceland, the South Iceland Seismic Zone (SISZ) in the south and the Tjornes Fracture Zone in the north. The town of Hveragerdi is located at the western end of the SISZ, an area of considerable seismicity.
Background
The South Iceland Seismic Zone (SISZ) has been the location of numerous large destructive earthquakes in the past. The SISZ is an east-west trending transform zone approximately 70 km long and 10–20 km wide. Destructive earthquake sequences in this region usually consist of several earthquakes exceeding a magnitude of 6.5 and with their epicentres located on north-south trending faults. Such a sequence started on 17 June 2000 at 15:40 local time with an earthquake of magnitude 6.5. It was followed on 21 June 2000 at 00:51 by a magnitude 6.4 event. Earthquake-induced damage was widespread, although fortunately there was no loss of life and no serious injuries.
Purpose
The potential for these large destructive earthquakes is a constant threat to local and national infrastructure, such as pipelines, electrical power transmission, roads, dams and bridges. The spatial variability of earthquake ground motions is a key aspect when designing infrastructure: it can have a dramatic effect on the response of structures and the extent of damage. In order to estimate these effects, it is necessary to develop models from data recorded on a seismograph array, and this is the reason the ICEARRAY was created. The models developed from the data recorded on the ICEARRAY have enabled the first reliable assessment of earthquake effects on infrastructure in the SISZ. The data also provide a physically realistic description of fault rupture. The models and simulations developed can be applied in other regions, and the methods used provide a link between seismology and earthquake engineering.
References
External links
ICEARRAY website
Earthquake Engineering Research Centre website
Canterbury Seismic Instruments website
Earthquake and seismic risk mitigation
Earthquake engineering | ICEARRAY | [
"Engineering"
] | 807 | [
"Earthquake engineering",
"Earthquake and seismic risk mitigation",
"Civil engineering",
"Structural engineering"
] |
16,370,539 | https://en.wikipedia.org/wiki/Fospropofol | Fospropofol (INN), often used as the disodium salt (trade name Lusedra) is an intravenous sedative-hypnotic agent. It is currently approved for use in sedation of adult patients undergoing diagnostic or therapeutic procedures such as endoscopy.
Clinical applications
Several water-soluble derivatives and prodrugs of the widely used intravenous anesthetic agent propofol have been developed, of which fospropofol has been found to be the most suitable for clinical development thus far. Purported advantages of this water-soluble chemical compound include less pain at the site of intravenous administration, less potential for hyperlipidemia with long-term administration, and less chance for bacteremia. Often, fospropofol is administered in conjunction with an opioid such as fentanyl.
Clinical pharmacology
Mechanism of action
Fospropofol is a prodrug of propofol; as an organophosphate, it is metabolized by alkaline phosphatases to yield the active metabolite propofol, along with phosphate and formaldehyde.
Pharmacodynamics
Pharmacokinetics
Initial trial results on fospropofol pharmacokinetics were retracted by the investigators. As of 2011, new results were not available.
Controlled substance
Fospropofol is classified as a Schedule IV controlled substance in the United States' Controlled Substances Act.
See also
Ciprofol
References
General anesthetics
Sedatives
Phenol ethers
Prodrugs
GABAA receptor positive allosteric modulators
Isopropyl compounds
Phosphate esters | Fospropofol | [
"Chemistry"
] | 354 | [
"Chemicals in medicine",
"Prodrugs"
] |
16,371,070 | https://en.wikipedia.org/wiki/Gel%20point%20%28petroleum%29 | The gel point of petroleum products is the temperature at which the liquids gel so they no longer flow by gravity or can be pumped through fuel lines. This phenomenon happens when the petroleum product reaches a low enough temperature to precipitate interlinked paraffin wax crystals throughout the fluid.
More highly distilled petroleum products have fewer paraffins and will have a lower gel point. On the other hand, the gel point of crude oil is dependent upon the composition of the crude oil as some crude oils contain more or less components that dissolve the paraffins. In some cases the gel point of a crude oil may be correlated from the pour point.
The gel points of some common petroleum products are as follows:
#1 diesel fuel: .
#2 diesel fuel: .
Heating oil: .
Kerosene: .
For the petroleum product to flow again, it needs to be brought above the gel point temperature to the ungel point, which is typically near its pour point. However, without stirring the paraffin waxes may still remain in crystal form so the fuel may have to be warmed further to its remix temperature to completely re-dissolve the waxes.
Anti-gel additives are sometimes added to petroleum products where cold temperature may affect their use. The additives act to reduce the formation of wax crystals in the product, thereby lowering the pour point and the gel point of the fuel. Anti-gel additives may not necessarily affect the cloud point.
See also
Cloud point
Cold filter plugging point
Petroleum
Pour point
References
Chemical properties | Gel point (petroleum) | [
"Chemistry"
] | 310 | [
"nan"
] |