id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
14,617,622 | https://en.wikipedia.org/wiki/Pugh%27s%20closing%20lemma | In mathematics, Pugh's closing lemma is a result that links periodic orbit solutions of differential equations to chaotic behaviour. It can be formally stated as follows:
Let f be a diffeomorphism of a compact smooth manifold M. Given a nonwandering point x of f, there exists a diffeomorphism g arbitrarily close to f in the C^1 topology of Diff^1(M) such that x is a periodic point of g.
Interpretation
Pugh's closing lemma means, for example, that any chaotic set in a bounded continuous dynamical system corresponds to a periodic orbit in a different but closely related dynamical system. As such, an open set of conditions on a bounded continuous dynamical system that rules out periodic behaviour also implies that the system cannot behave chaotically; this is the basis of some autonomous convergence theorems.
See also
Smale's problems
References
Further reading
Dynamical systems
Lemmas in analysis
Limit sets | Pugh's closing lemma | [
"Physics",
"Mathematics"
] | 184 | [
"Limit sets",
"Theorems in mathematical analysis",
"Topology",
"Mechanics",
"Lemmas in mathematical analysis",
"Lemmas",
"Dynamical systems"
] |
14,617,718 | https://en.wikipedia.org/wiki/Soredium | Soredia are common reproductive structures of lichens. Lichens reproduce asexually by employing simple fragmentation and production of soredia and isidia. Soredia are powdery propagules composed of fungal hyphae wrapped around cyanobacteria or green algae. These can be either scattered diffusely across the surface of the lichen's thallus, or produced in localized structures called soralia. Fungal hyphae make up the basic body structure of a lichen. The soredia are released through openings in the upper cortex of the lichen structure. After their release, the soredia disperse to establish the lichen in a new location.
References
Fungal morphology and anatomy
Lichenology | Soredium | [
"Biology"
] | 145 | [
"Lichenology"
] |
14,617,864 | https://en.wikipedia.org/wiki/Czech%20chemical%20nomenclature | Foundations of the Czech chemical nomenclature () and terminology were laid during the 1820s and 1830s. These early naming conventions fit the Czech language and, being mostly the work of a single person, Jan Svatopluk Presl, provided a consistent way to name chemical compounds. Over time, the nomenclature expanded considerably, following the recommendations by the International Union of Pure and Applied Chemistry (IUPAC) in the recent era.
Unlike the nomenclature used in biology or medicine, the chemical nomenclature stays closer to the Czech language and uses Czech pronunciation and inflection rules, but it developed its own, very complex, system of morphemes (taken from Greek and Latin), grammar, syntax, punctuation, and use of brackets and numerals. Certain terms use phonetic transcription, but the rules for spelling are inconsistent.
History
Medieval alchemists in the Czech lands used obscure and inconsistent terminology to describe their experiments. Edward Kelley, an alchemist at the court of Rudolf II, even invented his own secret language. Growth of the industry in the region during the 19th century, and the nationalistic fervour of the Czech National Revival, led to the development of Czech terminologies for natural and applied sciences.
Jan Svatopluk Presl (1791–1849), an all-round natural scientist, proposed a new Czech nomenclature and terminology in the books Lučba čili chemie zkusná (1828–1835) and Nerostopis (1837). Presl invented Czech neologisms for most of the then known chemical elements; ten of these have entered the language. Presl also created naming conventions for oxides, in which the electronegative component of the compound became the noun and the electropositive component became an adjective. Each adjective carried a suffix corresponding to the valence number of the component it represented. Originally there were five suffixes; these were later expanded to eight by Vojtěch Šafařík, representing oxidation numbers from 1 to 8.
Salts were identified by the suffix added to the noun. Many of the terms created by Presl derive from Latin, German or Russian; only some were retained in use.
A similar attempt published in Orbis pictus (1852) by Karel Slavoj Amerling (1807–1884) to create Czech names for the chemical elements (and to order the elements into a structure, similar to the work of Russian chemist Nikolay Beketov) was not successful.
Later work on the nomenclature was performed by Vojtěch Šafařík (1829–1902). In 1876 Šafařík started to publish the journal Listy chemické, the first chemistry journal in Austria-Hungary (today issued under the name Chemické Listy), and this journal has played an important role in the codification of the nomenclature and terminology. During a congress of Czech chemists in 1914, the nomenclature was reworked, and the new system became normative in 1918. Alexandr Sommer-Batěk (1874–1944) and Emil Votoček (1872–1950) were the major proponents of this change. Presl's original conventions remained in use, but formed only a small part of the naming system.
Several changes were applied to the basic terminology during the second half of the 20th century, usually moving it closer to the international nomenclature; some long-established terms were officially replaced, in a few cases more than once, and the spelling of some chemical elements also changed. Adoption of these changes by the Czech public has been quite slow, and the older terms are still used decades later.
The Czechoslovak Academy of Sciences, founded in 1953, took over responsibility for maintenance of the nomenclature and proper implementation of the IUPAC recommendations. Since the Velvet Revolution (1989) this activity has slowed down considerably.
Oxidation state suffixes
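The adjective suffixes are keyed to oxidation number. As an illustrative sketch, the standard modern series (not necessarily Presl's original five) can be written as a simple mapping; the code below is an added example, not taken from the article:

```python
# Illustrative mapping of oxidation numbers to the standard modern Czech
# adjective suffixes; added as an example, not quoted from the article.
OXIDATION_SUFFIXES = {
    1: "-ný",
    2: "-natý",
    3: "-itý",
    4: "-ičitý",
    5: "-ičný / -ečný",
    6: "-ový",
    7: "-istý",
    8: "-ičelý",
}

for oxidation_number, suffix in OXIDATION_SUFFIXES.items():
    print(f"oxidation number {oxidation_number}: {suffix}")
```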
Notes
External links
Website about the early history of the Czech chemical nomenclature (in Czech)
Article in a Czech Academy of Sciences bulletin: current problems faced by the Czech chemical nomenclature (2000, section "Současný stav a problémy českého chemického názvosloví")
Organizations
Journal Chemické listy (nomenclature related articles are in Czech, ISSN 1213-7103, printed version ISSN 0009-2770)
Czech Chemical Society (Česká společnost chemická, ČSCH, founded in 1866)
National IUPAC Centre for the Czech Republic
Czech language
Science and technology in the Czech Republic
Chemical nomenclature | Czech chemical nomenclature | [
"Chemistry"
] | 946 | [
"nan"
] |
14,619,183 | https://en.wikipedia.org/wiki/Debraj%20Ray%20%28economist%29 | Debraj Ray (born 3 September 1957) is an Indian-American economist, who is currently teaching and working at New York University. His research interests focus on development economics and game theory. Ray served as Co-editor of the American Economic Review between 2012 and 2020.
Ray has been Julius Silver Professor in the Faculty of Arts and Science at New York University since 2003, and Professor of Economics at New York University since 1999. He is also a part-time professor at the University of Warwick. He is a Research Affiliate at the National Bureau of Economic Research, a council member of the Game Theory Society, and a board member of Theoretical Research in Development Economics (ThReD). He served as a board member of the Bureau for Research and Economic Analysis of Development (BREAD) from its inception until 2023.
Education
Debraj Ray graduated from the University of Calcutta, where he earned a B.A. in Economics in 1977. He then obtained an M.A. (1981) and a Ph.D. (1983), both from Cornell University, where his doctoral supervisor was Mukul Majumdar. The title of his dissertation is Essays in Intertemporal Economics.
Academic career
Prior to joining NYU, Ray held academic positions at Stanford University, the Indian Statistical Institute, and at Boston University, where he was Director of the Institute for Economic Development. He has held visiting appointments at Harvard University, MIT, the Instituto Nacional de Matemática Pura e Aplicada in Rio de Janeiro, Brazil, the People's University of China in Beijing, the London School of Economics, Columbia University, the Instituto de Análisis Económico in Barcelona and the University of Oslo.
Notable contributions by Ray include:
a concept of egalitarianism under participation constraints (Dutta and Ray 1989);
a theory of renegotiation in dynamic games (Bernheim and Ray 1989, Ray 1994);
a theory of poverty traps based on undernutrition, and later related work on persistent inequality (Dasgupta and Ray 1986, Dasgupta and Ray 1987, Baland and Ray 1991, Mookherjee and Ray 2003);
the development of a concept of polarization, and its connections to social conflict (Esteban and Ray 1994, Duclos, Esteban and Ray 2004, Esteban and Ray 2011);
a theory of coalition formation (Ray and Vohra 1997, Ray and Vohra 1999, Ray 2008);
a theory of socially determined aspirations (Ray 2006, Genicot and Ray 2017);
a leading textbook in Development Economics (Ray 1998).
Professional affiliations and awards
Ray is a Fellow of the American Academy of Arts and Sciences, a Fellow of the Econometric Society, a Fellow of the Bureau for Research and Economic Analysis of Development, a Fellow of the Society for the Advancement of Economic Theory, a Guggenheim Fellow, a recipient of the Mahalanobis Memorial Medal, and a recipient of the Outstanding Young Scientists Award in mathematics from the Indian National Science Academy. He was awarded a Doctor Philosophiae Honoris Causa from the University of Oslo in 2011.
Apart from three terms as Co-editor of the American Economic Review, Ray has served on the editorial boards of Econometrica, the Journal of Economic Theory, the Journal of Development Economics, the Journal of Economic Growth, the Japanese Economic Review, Games and Economic Behavior, and the American Economic Journal: Microeconomics. He has served as a Foreign Editor of the Review of Economic Studies, and as Co-editor of the Econometric Society journal, Theoretical Economics.
Among Ray's many public lectures are the 2013 Sir Richard Stone Annual Lecture at the University of Cambridge, the 2016 Laffont Lecture of the Econometric Society (Geneva), the 2017 Richard Ely Distinguished Lectures at Johns Hopkins University, the 2022 Haavelmo Lecture at the University of Oslo, and the inaugural Ashok Kotwal Memorial Lecture (Ideas for India, New Delhi, 2022).
Ray has received many awards for his teaching and research from different institutions around the world. Among them are:
Mahalanobis Memorial Medal of the Indian Econometric Society, 1989
Fellow of the Econometric Society, 1993
Gittner Teaching Award from Boston University, 1996
Guggenheim Fellow, 1997
Dean’s Award for Distinguished Teaching at Stanford University, 1985
Fellow of the American Academy of Arts and Sciences, 2016
Doctor Philosophiae Honoris Causa from University of Oslo, 2011
Fellow of the Society for the Advancement of Economic Theory, 2011
Golden Dozen Teaching Award for excellence in undergraduate teaching from New York University, 2017
Bibliography
Books
Selected Journal Articles
Notes
External links
Debraj Ray's homepage at New York University
1957 births
University of Calcutta alumni
Cornell University alumni
Harvard University staff
Indian development economists
Game theorists
Fellows of the Econometric Society
20th-century Indian economists
21st-century American economists
Living people
New York University faculty
American academics of Indian descent
21st-century Indian economists
Scientists from Kolkata
Scholars from Kolkata
Fellows of the American Academy of Arts and Sciences
Silver professors | Debraj Ray (economist) | [
"Mathematics"
] | 1,017 | [
"Game theorists",
"Game theory"
] |
14,619,198 | https://en.wikipedia.org/wiki/Gyn | A gyn is an improvised three-legged lifting device used on sailing ships. It provides more stability than a derrick or sheers, and requires no rigging for support. Without additional support, however, it can only be used for lifting things directly up and down. Gyns may also be used to support either end of a ropeway.
Two legs, called cheeks, are bound together as in sheer legs, and the third spar, called the prypole, is fixed under the cheek lashing to form the apex of the tripod. Alternatively, a tripod lashing may be used to form the tripod, with the heel of the center spar pointing in the opposite direction to the cheeks to ensure a solid apex when raised. Only four tackles are required: three as 'splay tackles' to prevent the legs of the tripod from spreading, with the fourth tackle as the lifting purchase. A timber hitch, six figure-of-eight turns, and a finishing clove hitch lash the cheeks into a crutch, but not too tightly, because the cheeks need some room to spread their heels. The cheeks of the gyn are now ready to spread and to be erected. The cheek splay tackle is hauled tight, and then the two adjacent prypole splay tackles can be rigged and hauled as the apex of the gyn is raised. The gyn is unstable at the sides, and it is crucial that the cargo is not swung out of the base triangle; consequently the gyn is used only for lifting cargo vertically.
British Army artillery gunners used apparatus such as 'Bell's gyn', designed by the artillerist John Bell, or the 'Gibraltar gyn' for lifting artillery pieces.
Gyns have also been used on land as part of the equipment for pumping water out of wells in the Sinai Peninsula.
See also
Gin pole
External links
Illustrations of a gyn and a gyn ropeway are on page 5-24 of the Sea Cadet Corps Seamanship Training Manual
References
Sailing rigs and rigging
Vertical transport devices | Gyn | [
"Technology"
] | 420 | [
"Vertical transport devices",
"Transport systems"
] |
14,619,329 | https://en.wikipedia.org/wiki/Micro%20Bill%20Systems | Micro Bill Systems, also known as MicroBillSys, MBS and Platte Media, is an online collection service with offices in Leeds, England, considered to be malware. The company states that it is a professional billing company offering "software management solutions that can aid your business in reducing uncollectable payments."
The company's best-known clients are online gambling and pornography sites offering three-day free trials of their subscription-based services.
If users do not cancel during the trial period, the MBS software begins a repeating cycle of full-screen pop-up windows warning users that their account is overdue and demanding payment.
The eleven-page MBS end-user license agreement contains a clause stating that unless the bill is paid, the software will disrupt computer use longer each day, with up to four daily periods of 10 minutes when the pop-up payment demand is locked and cannot be closed or minimized.
Users have complained about the unexpected bills, feel victimized, and deny ever accessing the video sites they are being billed for.
MBS denies installing its software by stealthy means and says that the software is downloaded by consent.
Many consumers are unaware that they have agreed to the download.
Security software company Symantec describes MicroBillSys as a potentially unwanted application that uses aggressive billing and collection techniques to demand payment after a three-day trial period, and says that there are reports of these techniques leaving the computer unable to browse the Internet.
Operation
When a user first accesses an online service whose collections are managed by MBS, the sign-up software creates a unique identifier based on the user's computer configuration and IP address. This identifier permits MBS to maintain a history of user access to supported sites and to send billing notices directly to the user's computer without the consumer ever having entered a name, credit card number, or other personal information.
The billing notices take the form of repeating pop-up windows warning users that their account is overdue and demanding payment for a 30-day subscription. Typical amounts are £19.95 (US$35.00) or £29.95 (US$52.50).
The pop-ups cover a substantial area of the screen and often cannot be closed, effectively preventing use of the computer for up to ten minutes. Their number and frequency increases over time, and to stop them consumers must pay. According to the company's terms and conditions, the agreement can be canceled and the software uninstalled only when no balance is outstanding.
For some who don't pay, Platte sends letters addressed to "the computer owner" threatening legal action in small claims court. The letters, described by one recipient as a "sham county court notice", include a "pre submission" information form which could mislead the unwary into thinking it comes from "Issuing Court Northampton County Court". It is unclear how Platte derives street addresses from IP addresses for these mailings, as ISPs interviewed deny providing such information. By filling out the information form and returning it, users provide Platte with their full name in addition to their correct mailing address. Similarly, users who complain to Platte by email or telephone are asked for their names and addresses so that uninstall codes can be mailed out. Payment demands follow. Later Platte began using a debt collection agency to try to pressurise people into making payments. In these cases, a charge is added to the 'subscription'.
MBS clients
MBS's initial clients were two adult content web sites. After being acquired by Platte Media (Platte International) in early 2008, the company expanded to include the promise of access to Hollywood movies from Getfilmsnow. Film studios Warner Bros. and 20th Century Fox have sent Getfilmsnow a cease and desist order, and say they have not licensed the films Platte is advertising.
While Platte's website presents the company as a mainstream media distribution company, an interview on the Radio Four programme You and Yours with ex-managing director of MBS, Ashley Bateup, indicates that the bulk of the full videos on the site are either black and white, or of a pornographic nature.
Consumer complaints
The UK's Office of Fair Trading (OFT), charged with promoting and protecting consumer interests in the UK, received numerous complaints about the pop-up payment demands from consumers who said they had not realized they were agreeing to be billed. A number of them stated that the pop-up software had been downloaded without the computer having been used to access an MBS client site. The OFT said it was acting in the interests of those consumers whose access to MBS sites was confirmed, but it had no legal jurisdiction to deal with the issue of software being downloaded without consent.
MBS position
MBS denies installing its software by stealthy means, and says that the software is downloaded by consent when users visit an MBS client site. A malware researcher at computer security company Prevx found no evidence of surreptitious installations. A journalist investigating the complaints called the installation process "unmistakable", with "a download, clicking through screens, and entering a four-digit number." Among the required steps is acceptance of an eleven-page end-user license agreement that includes the disruption clause described above.
The company says that when it looks into complaints, usually a member of the household has downloaded the software without reading the terms and conditions, and once the billing pop-ups begin they refuse to admit their use to the computer owner. The owner then assumes that the computer is somehow infected. The company says "Our customer service team's experience is that people seem to move into denial with their spouses or partners when pornography use is at question."
The software is difficult for non-technical users to remove, due in part to its use of mutually protective executable files.
The company says that if the software were easy to remove, many people would not pay for the services already consumed.
Undertakings
In response to the complaints, the Office of Fair Trading reviewed the MBS sign-up process and the fairness of its terms and conditions.
On 27 March 2008, the OFT announced MBS/Platte Media "undertakings", or pledges, to make the sign-up process more fair and setting limits on the amount of disruption the pop-up payment demands could cause.
The company promised to make clear in the sign-up process that the customer is entering into a contract, and that billing pop-ups will appear after the trial period ends. They also promised "to provide information about how consumers can have the 'pop-up' generating software uninstalled at any time".
The company promised
to not cause more than 20 pop-ups,
to not cause more than one pop-up in any 24-hour period, and
to not cause pop-ups "beyond the expiry of six weeks after payment has become due".
They also promised
to not cause more than ten locked-open pop-ups, and
to not cause locked-open pop-ups to remain locked for more than 60 seconds.
Payment demands delivered as other than pop-up windows are not restricted.
Statements by authorities
In announcing the MBS undertakings, the Office of Fair Trading's Head of Consumer Protection said "We believe that [the undertakings] achieve the right balance between protecting consumer interests without stifling innovation in the 'on-line' market place."
A local authority in the locale of the MBS Leeds office charged with preventing exploitation of vulnerable consumers, the West Yorkshire Trading Standards, has received hundreds of complaints about the pop-ups. A spokesman for the authority said "It is our opinion at this time that the company is operating within the bounds of existing legislation and as such it would be difficult to take any formal legal action against them."
One woman whose family computer was caught up in the pop-up cycle was interviewed in The Guardian. She wonders why, if the company's activities are indeed legitimate as maintained by West Yorkshire Trading Standards, pressure has not been put on the Office of Fair Trading to tighten up the law.
Shutdown in the UK
On 9 March 2009, and following a protracted letterwriting campaign conducted by the Platte/MBS Victims Forum Martin Horwood MP raised a question in the House of Commons about the activities of Platte and specifically about the number of complaints that had been received by the OFT and Trading Standards about its activities. In response, he was informed that Platte had ceased trading in the UK with effect from 25 February 2009. No specific reason was given for this withdrawal, but it is fair to assume that the continued resistance by British consumers to what they regarded as an unfair business model must have played a part in its decision, along with the threat of action by HM Revenue concerning possible non-payment of VAT. In an email to Michael Pollitt, the company said it had stopped operating in the UK, and that "Our reasons for this decision and our further intentions are simply related to our original marketing and business model", adding: "Obviously, and just like any other business should and would do, I am making sure that stopping our marketing to the UK Market, is done in such a sensible and orderly manner, that will best preserve the interests of our customers and of our own."
See also
Movieland — a similar business operating in the USA
Ransomware — a malware program that prevents access to files and/or computer unless paid.
References
External links
Is Micro Bill Systems legit or ransomware?
Removing Micro Bill Systems
The EasyPC Company UK - How to remove Platte Media
Malware | Micro Bill Systems | [
"Technology"
] | 1,976 | [
"Malware",
"Computer security exploits"
] |
14,619,769 | https://en.wikipedia.org/wiki/Shear%20legs | Shear legs, also known as sheers, shears, or sheer legs, are a form of two-legged lifting device. Shear legs may be permanent, formed of a solid A-frame and supports, as commonly seen on land and the floating sheerleg, or temporary, as aboard a vessel lacking a fixed crane or derrick.
When fixed, they are often used for very heavy lifting, as in tank recovery, shipbuilding, and offshore salvage operations. At dockyards they hoist masts and other substantial rigging parts on board. They are sometimes temporarily rigged on sailboats for similar tasks.
Uses
On land
Shear legs are a lifting device related to the gin pole, the derrick, and the tripod. Shears are an A-frame made of any suitable material, such as timber or metal; the feet rest on or in the ground, or on a solid surface which will not let them move, and the top is held in place with guy-wires or guy ropes, simply called "guys". Shear legs need only two guys, whereas a gin pole needs at least three. The U.S. Army Field Manual FM 5-125 gives detailed instructions on how to rig shears.
On water
Fixed shear legs are most commonly found on floating cranes known as floating sheerlegs. These have heavy A-frame booms and vary in lifting capacity between 50 and 4,000 tons, and are used principally in shipbuilding, other large scale fabrication, cargo management, and salvage operations.
Temporary sheers comprise two upright spars, lashed together at their heads and their feet splayed apart. Unlike in a gyn, which has three legs and is thus stable without support, stability in sheers (derricks, and single-legged gin poles) is provided by a guy. The heels of the spars are secured by splay and heel tackles. The point at the top of the sheers where the spars cross and are lashed together is the "crutch", to which a block and tackle is attached. Unlike derricks, sheers need no lateral support, and only require either a foreguy and an aftguy or a martingale and a topping lift. Being made of two spars rather than one, sheers are stronger than a derrick of the same size and made of equivalent materials. Unlike the apex of a gyn, which is fixed, the crutch of a sheers can be topped up or lowered, via the topping lift, through a limited angle. In the era of sailing vessels, it was common for dockyards to employ a sheer hulk, an old floating ship's hull fitted with sheer legs, and used to install masts in other ships.
See also
Crane (machine)
Masting sheer
Sheerleg
References
Further reading
Sailing rigs and rigging
Vertical transport devices
Lifting equipment
Cranes (machines) | Shear legs | [
"Physics",
"Technology",
"Engineering"
] | 576 | [
"Machines",
"Transport systems",
"Lifting equipment",
"Physical systems",
"Vertical transport devices",
"Cranes (machines)",
"Engineering vehicles"
] |
14,620,737 | https://en.wikipedia.org/wiki/Ostomachion | In ancient Greek geometry, the Ostomachion, also known as () or syntomachion, is a mathematical treatise attributed to Archimedes. This work has survived fragmentarily in an Arabic version and a copy, the Archimedes Palimpsest, of the original ancient Greek text made in Byzantine times.
The word Ostomachion (Ὀστομάχιον) comes from the Greek ὀστέον (osteon, "bone") and μάχη (machē, "fight"). The manuscripts refer to the word as "Stomachion", an apparent corruption of the original Greek. Ausonius gives us the correct name "Ostomachion" ("which the Greeks called ostomachion").
The Ostomachion which he describes was a puzzle similar to tangrams and was played perhaps by several persons with pieces made of bone. It is not known which is older, Archimedes' geometrical investigation of the figure, or the game. Victorinus, Bassus, Ennodius and Lucretius have also discussed the game.
Game
The game is a 14-piece dissection puzzle forming a square. One form of play to which classical texts attest is the creation of different objects, animals, plants etc. by rearranging the pieces: an elephant, a tree, a barking dog, a ship, a sword, a tower etc. Another suggestion is that it exercised and developed memory skills in the young. James Gow, in his Short History of Greek Mathematics (1884), footnotes that the purpose was to put the pieces back in their box, and this was also a view expressed by W. W. Rouse Ball in some intermediate editions of Mathematical Essays and Recreations, but edited out from 1939.
The number of different ways to arrange the parts of the Stomachion within a square was determined to be 17,152 by Fan Chung, Persi Diaconis, Susan P. Holmes, and Ronald Graham, and confirmed by a computer search by William H. Cutler.
However, this count has been disputed because surviving images of the puzzle show it in a rectangle, not a square, and rotations or reflections of pieces may not have been allowed.
References
Further reading
J. L. Heiberg, Archimedis opera omnia, vol. 2, pp. 420 ff., Leipzig: Teubner 1881
Reviel Netz & William Noel, The Archimedes Codex (Weidenfeld & Nicolson, 2007)
J. Väterlein, Roma ludens (Heuremata - Studien zu Literatur, Sprachen und Kultur der Antike, Bd. 5), Amsterdam: Verlag B. R. Grüner bv 1976
External links
Heinrich Suter, Loculus
James Gow, Short History
W. W. R. Ball, Recreations and Essays
The Ostomachion at the Bibliotheca Augustana
Ostomachion, a Graeco-Roman puzzle
Professor Chris Rorres
Kolata, Gina. "In Archimedes' Puzzle, a New Eureka Moment." The New York Times. December 14, 2003
A tour of Archimedes' Stomachion, by Fan Chung and Ronald Graham.
Ostomachion and others tangram Play with 38 Tangram games online: more than 7,300 shapes proposed by the program.
Ancient Greek mathematical works
Puzzles
Tiling puzzles
Works by Archimedes
Geometric dissection | Ostomachion | [
"Physics",
"Mathematics"
] | 684 | [
"Tessellation",
"Recreational mathematics",
"Tiling puzzles",
"Symmetry"
] |
14,620,877 | https://en.wikipedia.org/wiki/Carlin%20stone | Carlin Stone or Carline Stane is the name given to a number of prehistoric standing stones and natural stone or landscape features in Scotland. The significance of the name is unclear, other than its association with old hags, witches, and the legends of the Cailleach.
Etymology
A 'Carle' in Scots is a commoner, a husband or in a derogatory sense, a churl or male of low birth. The name 'Carline', 'Cairlin', Carlin, 'Cyarlin', 'Kerlin' or 'Kerl' was also used in lowland Scots as a derogatory term for an old woman meaning an 'old hag'. It is from Old Norse Kerling or a corruption or equivalent in Scots of the Gaelic word “Cailleach”, meaning a witch or the 'old Hag', the Goddess of Winter.
Carlin is used as a surname and has several variations e.g., Carlen, Carlon, Carolan, O'Carlin, O'Carlen, O'Carlon, O'Carolan, Carling, Carlton, etc. It is stated as being of Irish Gaelic origin and is found somewhat less frequently in Scotland.
Scottish sites of Carlin stones or natural features
Carlin Skerry, Orkney
This is a rocky islet in the South of Orkney.
Alvah, Aberdeenshire
Near Sandlaw Farm in the parish of Alvah is the Carlin Cist, thought to have been part of a Cromlech at one time.
Backhill of Drachlaw, Aberdeenshire
This stone was part of a recumbent stone circle, around in diameter. It has several alternative names, such as the Caerlin stone; Cairn Riv; Cairn Rib; or Cairn-Rieve. Its map reference in the parish of Inverkeithny is NJ 6744 4659. Three stones remain in line, the Carlin Stone between two others quite small in comparison. In addition, there are two set stones projecting inward from the Carlin Stone.
This boulder is rugged, unshapely, and most unusual in height. Other stones were broken up and removed within relatively recent times; the mounds of stones being carted away for making dikes or drystone walls. In or near the circle were found a small perforated axe-hammer, portions of 3 bronze armlets, flint chips and a jet button.
Bishop Hill, Perth and Kinross
This natural stone outcrop is known as Carlin Maggie and has the look of something imported from Easter Island, but it is natural. It is said to be a witch turned to stone by the Devil after she got on his nerves (carline is an old Scots word for 'witch'). The Devil threw a lightning bolt which had the effect of petrifying her. It is a rock pillar estimated to be high, on the Western slope of Bishop Hill, overlooking Loch Leven. The OS grid reference is NO 18403 04413.
Balgair Muir, Stirlingshire
A "Carlin Stone" is marked on the OS 6 inch series of maps from 1843 to 1882 at this location approximately 5 km NNW of Fintry.
Dunlop, North Ayrshire
On top of the Common Crags overlooking the village of Dunlop and the Glazert Water is a large procumbent boulder known on the OS map as the ‘Carlin’s Stone or Stane’. It is also known locally as the Hag's Stone.
It is not listed by the RCAHMS and is not as well known locally as the nearby megalith known as the Thurgartstone.
Darvel, East Ayrshire
Two farms named High Carlincraig and Low Carlincraig are to be found on the Ordnance Survey maps above Darvel in East Ayrshire.
Waterside, East Ayrshire
A Carlin Stone is situated on Whitelee Moor near Craigends Farm, below Cameron's Moss near Waterside in East Ayrshire. A nearby watercourse is known as the Carlin Burn, joining the Hareshawmuir water just below the site of the Carlin stone. The stone has been much visited in the past; indicated by the remains of a footbridge running to it across the Hareshawmuir Water.
Knockshinnoch, East Ayrshire
Carlin knowe is a low hill with a prehistoric cairn on its summit near Knockshinnoch farm.
Eaglesham, East Renfrewshire
The OS Maps locate a Carlin Stone or Carlin Crags/Craigs near Bonnyton Golf Club on the outskirts of Eaglesham. Cup marked stones are present at the site. At least two fairly horizontal flat rock faces have cups on them, rings being entirely absent. Two sets of crags are present at the site but only the upper has the petroglyphs.
Carlins Cairn, Dumfries and Galloway
This is a mountain in the south-west of Carsphairn parish.
Castle Douglas, Dumfries and Galloway
This town was known as Carlinwark until 1792. The title came from nearby Carlinwark loch in the north of the parish of Kelton.
Wigtown, Dumfries and Galloway
A Carlin Stone is to be found at 'The Derry', near the head of Elrig Loch near Wigtown. It is thought to have been part of a stone circle and is situated at OS map reference NX326497.
Scottish Borders
The Carlin's Tooth is the name of a natural rock outcrop in the borders between Knocks Knowe and Carter Fell.
Miscellany
Near Kirkhill outside Stewarton are several farms having the name 'Kilbride' in their title. Bride - an anglicization of Brìghde, Brìd or Saint Brigid - was originally the Celtic Goddess linked with the festival of Imbolc, the eve of the first of February. She was the goddess of Spring and was associated with healing and sacred wells, therefore the antithesis of the Carlin or Cailleach.
Papers in the Scottish National Archive state that the lands of Kilbride Cunninghame near Stewarton were also called the 'Lands of Carlin.'
References
External links
RCAHMS Canmore archaeology site
General Roy's Military Survey of Scotland 1747 - 52
Old maps of Scotland from the National Library
Old Ordnance Survey Maps
A Researcher's Guide to Local History terminology
Buildings and structures in Scotland
Megalithic monuments in Scotland
Stones
Rock formations of Scotland | Carlin stone | [
"Physics"
] | 1,347 | [
"Stones",
"Physical objects",
"Matter"
] |
14,621,035 | https://en.wikipedia.org/wiki/Similarities%20between%20Wiener%20and%20LMS | The Least mean squares filter solution converges to the Wiener filter solution, assuming that the unknown system is LTI and the noise is stationary. Both filters can be used to identify the impulse response of an unknown system, knowing only the original input signal and the output of the unknown system. By relaxing the error criterion to reduce current sample error instead of minimizing the total error over all of n, the LMS algorithm can be derived from the Wiener filter.
Derivation of the Wiener filter for system identification
Given a known input signal x[n], the output d[n] of an unknown LTI system can be expressed as:

d[n] = \sum_{k=0}^{N-1} h_k x[n-k] + v[n]

where h_k are the unknown filter tap coefficients and v[n] is noise.

The model system y[n], using a Wiener filter solution with an order N, can be expressed as:

y[n] = \sum_{k=0}^{N-1} w_k x[n-k]

where w_k are the filter tap coefficients to be determined.

The error between the model and the unknown system can be expressed as:

e[n] = d[n] - y[n]

The total squared error E can be expressed as:

E = \sum_{n=-\infty}^{\infty} e[n]^2 = \sum_{n=-\infty}^{\infty} (d[n] - y[n])^2

Use the minimum mean-square error criterion over all of n by setting its gradient to zero:

\nabla E = 0

which is

\frac{\partial E}{\partial w_i} = 0 for all i = 0, 1, ..., N-1

Substitute the definition of y[n]:

\frac{\partial E}{\partial w_i} = \frac{\partial}{\partial w_i} \sum_{n=-\infty}^{\infty} \Big( d[n] - \sum_{k=0}^{N-1} w_k x[n-k] \Big)^2

Distribute the partial derivative:

\frac{\partial E}{\partial w_i} = \sum_{n=-\infty}^{\infty} 2 \Big( d[n] - \sum_{k=0}^{N-1} w_k x[n-k] \Big) \big( -x[n-i] \big) = 0

Using the definition of discrete cross-correlation, R_{ab}(i) = \sum_{n=-\infty}^{\infty} a[n]\, b[n-i]:

Rearrange the terms:

\sum_{k=0}^{N-1} w_k R_{xx}(i-k) = R_{dx}(i) for all i = 0, 1, ..., N-1

This system of N equations with N unknowns can then be solved for the coefficients w_k.

The resulting coefficients of the Wiener filter can be written in matrix form as W = R_{xx}^{-1} P, where P is the cross-correlation vector between d[n] and x[n].
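As an illustrative sketch (not from the article) of solving these normal equations numerically for system identification, with all signal names, lengths, and tap values invented for the example and NumPy assumed:

```python
# Illustrative sketch: identify an unknown FIR system by solving the
# Wiener-Hopf (normal) equations built from correlation estimates.
# Names (x, d, h_true, N, etc.) are chosen for this example only.
import numpy as np

rng = np.random.default_rng(0)

# Unknown LTI system: FIR taps h_true, plus stationary measurement noise.
h_true = np.array([0.8, -0.4, 0.2, 0.05])
N = len(h_true)                      # model order matches the true order here

x = rng.standard_normal(10_000)      # known input signal
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))

def xcorr(a, b, lag):
    """Biased estimate of sum_n a[n] * b[n - lag], for lag >= 0."""
    return np.dot(a[lag:], b[:len(b) - lag]) / len(a)

# Autocorrelation matrix R_xx(i - k) and cross-correlation vector R_dx(i).
R = np.array([[xcorr(x, x, abs(i - k)) for k in range(N)] for i in range(N)])
p = np.array([xcorr(d, x, i) for i in range(N)])

w_wiener = np.linalg.solve(R, p)     # W = R_xx^{-1} P
print("true taps   :", h_true)
print("Wiener taps :", np.round(w_wiener, 3))
```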
Derivation of the LMS algorithm
By relaxing the infinite sum of the Wiener filter to just the error at time n, the LMS algorithm can be derived.

The squared error can be expressed as:

E = e[n]^2 = (d[n] - y[n])^2

Using the minimum mean-square error criterion, take the gradient:

\frac{\partial E}{\partial w} = \frac{\partial}{\partial w} (d[n] - y[n])^2

Apply the chain rule and substitute the definition of y[n]:

\frac{\partial E}{\partial w_i} = 2 (d[n] - y[n]) \frac{\partial}{\partial w_i} \Big( d[n] - \sum_{k=0}^{N-1} w_k x[n-k] \Big) = -2 e[n]\, x[n-i]

Using gradient descent and a step size \mu:

w[n+1] = w[n] - \mu \frac{\partial E}{\partial w}

which becomes, for i = 0, 1, ..., N-1,

w_i[n+1] = w_i[n] + 2 \mu\, e[n]\, x[n-i]

This is the LMS update equation.
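A minimal sketch of the LMS update applied to the same kind of system-identification problem; the signals, step size, and tap values below are again invented for illustration:

```python
# Illustrative sketch of the LMS update w_i[n+1] = w_i[n] + 2*mu*e[n]*x[n-i],
# applied to identifying an unknown FIR system; all names are example choices.
import numpy as np

rng = np.random.default_rng(1)

h_true = np.array([0.8, -0.4, 0.2, 0.05])   # unknown system taps
N = len(h_true)
x = rng.standard_normal(20_000)             # known input signal
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))

mu = 0.01                                   # step size (small enough for stability)
w = np.zeros(N)                             # adaptive filter taps

for n in range(N, len(x)):
    x_vec = x[n - N + 1:n + 1][::-1]        # [x[n], x[n-1], ..., x[n-N+1]]
    y = np.dot(w, x_vec)                    # model output y[n]
    e = d[n] - y                            # error e[n]
    w = w + 2 * mu * e * x_vec              # LMS update

print("true taps:", h_true)
print("LMS taps :", np.round(w, 3))
```

With a sufficiently small step size, the estimated taps converge toward the Wiener solution above, which is the point of the comparison between the two filters.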
See also
Wiener filter
Least mean squares filter
References
J.G. Proakis and D.G. Manolakis, Digital Signal Processing: Principles, Algorithms, and Applications, Prentice-Hall, 4th ed., 2007.
Digital signal processing
Filter theory | Similarities between Wiener and LMS | [
"Engineering"
] | 416 | [
"Telecommunications engineering",
"Filter theory"
] |
14,621,467 | https://en.wikipedia.org/wiki/Medical%20equipment%20management | Medical equipment management (sometimes referred to as clinical engineering, clinical engineering management, clinical technology management, healthcare technology management, biomedical maintenance, biomedical equipment management, and biomedical engineering) is a term for the professionals who manage operations, analyze and improve utilization and safety, and support servicing healthcare technology. These healthcare technology managers are, much like other healthcare professionals referred to by various specialty or organizational hierarchy names.
Some of the titles of healthcare technology management professionals are biomed, biomedical equipment technician, biomedical engineering technician, biomedical engineer, BMET, biomedical equipment management, biomedical equipment services, imaging service engineer, imaging specialist, clinical engineer technician, clinical engineering equipment technician, field service engineer, field clinical engineer, clinical engineer, and medical equipment repair person. Regardless of the various titles, these professionals offer services within and outside of healthcare settings to enhance the safety, utilization, and performance of medical devices, applications, and systems.
They are a fundamental part of managing, maintaining, or designing medical devices, applications, and systems for use in various healthcare settings, from the home and the field to the doctor's office and the hospital.
HTM includes the business processes used in interaction and oversight of the technology involved in the diagnosis, treatment, and monitoring of patients. The related policies and procedures govern activities such as the selection, planning, and acquisition of medical devices, and the inspection, acceptance, maintenance, and eventual retirement and disposal of medical equipment.
Responsibilities of the Healthcare Technology Management Professional
The healthcare technology management professional's purpose is to ensure that equipment and systems used in patient care are operational, safe, and properly configured to meet the mission of the healthcare organization; that the equipment is used in an effective way consistent with the highest standards of care by educating the healthcare provider, equipment user, and patient; and that the equipment is designed to limit the potential for loss, harm, or damage to the patient, provider, visitor, and facilities, through various means of analysis prior to and during acquisition, monitoring and foreseeing problems during the lifecycle of the equipment, and collaborating with the parties who manufacture, design, regulate, or recommend safe medical devices and systems.
Some but not all of the healthcare technology management professional's functions are:
Equipment Control & Asset Management
Equipment Inventories
Work Order Management
Data Quality Management
Equipment Maintenance Management
Equipment Maintenance
Personnel Management
Quality Assurance
Patient Safety
Risk Management
Hospital Safety Programs
Radiation Safety
Medical Gas Systems
In-Service Education & Training
Accident Investigation
Analysis of Failures, Root Causes, and Human Factors
Safe Medical Devices Act (SMDA) of 1990
Health Insurance Portability and Accountability Act (HIPAA)
Careers in Facilities Management
Equipment Control & Asset Management
Every medical treatment facility should have policies and processes on equipment control and asset management. Equipment control and asset management involves the management of medical devices within a facility and may be supported by automated information systems (e.g., enterprise resource planning (ERP) systems are often found in U.S. hospitals, and the U.S. military health system uses an advanced automated system known as the Defense Medical Logistics Standard Support (DMLSS) suite of applications) or may use a dedicated equipment management and maintenance software. Equipment control begins with the receipt of a newly acquired equipment item and continues through the item's entire lifecycle. Newly acquired devices should be inspected by in-house or contracted biomedical equipment technicians (BMETs), who will receive an established equipment control/asset number from the facilities equipment/property manager. This control number is used to track and record maintenance actions in their database. This is similar to creating a new chart for a new patient who will be seen at the medical facility. Once an equipment control number is established, the device is safety inspected and readied for delivery to clinical and treatment areas in the facility.
Facilities or healthcare delivery networks may rely on a combination of equipment service providers such as manufacturers, third-party services, in-house technicians, and remote support. Equipment managers are responsible for continuous oversight and responsibility for ensuring safe and effective equipment performance through full-service maintenance. Medical equipment managers are also responsible for technology assessment, planning and management in all areas within a medical treatment facility (e.g. developing policies and procedures for the medical equipment management plan, identifying trends and the need for staff education, resolution of defective biomedical equipment issues).
Work Order Management
Work order management involves systematic, measurable, and traceable methods for documenting all acceptance/initial inspections, preventive maintenance, calibrations, and repairs by generating scheduled and unscheduled work orders. Work order management may be paper-based or computer-based and includes the maintenance of active (open or uncompleted) and completed work orders, which provide a comprehensive maintenance history of all medical equipment devices used in the diagnosis, treatment, and management of patients. Work order management includes all safety, preventive, calibration, test, and repair services performed on all such medical devices. A comprehensive work order management system can also be used as a resource and workload management tool by managers responsible for personnel time, the total number of hours technicians spend working on equipment, the maximum dollar amount for a one-time repair, or the total amount allowed to be spent repairing equipment versus replacing it.
Post-work order quality checks involve one of two methods: a 100% audit of all work orders or statistical sampling of randomly selected work orders. Sampling should apply more stringent statistical controls to devices of higher clinical criticality. For example, 100% of items critical to patient treatment but only 50% of ancillary items may be selected for sampling. In an ideal setting, all work orders are checked, but available resources may dictate a less comprehensive approach. Work orders must be tracked regularly and all discrepancies must be corrected. Managers are responsible for identifying equipment locations.
Data Quality Management
Accurate, comprehensive data are needed in any automated medical equipment management system. Data quality initiatives can help to ensure the accuracy of clinical/biomedical engineering data. The data needed to establish basic, accurate, maintainable automated records for medical equipment management include: nomenclature, manufacturer, nameplate model, serial number, acquisition cost, condition code, and maintenance assessment. Other useful data could include: warranty, location, other contractor agencies, and scheduled maintenance due dates and intervals. These fields are vital to ensure appropriate maintenance is performed, equipment is accounted for, and devices are safe for use in patient care.
Nomenclature: It defines what the device is and what type of maintenance is to be performed. Common nomenclature systems are taken directly from the ECRI Institute Universal Medical Device Nomenclature System.
Manufacturer: This is the name of the company that received approval from the FDA to sell the device, also known as the Original Equipment Manufacturer (OEM).
Nameplate model: The model number is typically located on the front or back of the equipment or on the cover of the service manual and is provided by the OEM. E.g. Medtronic PhysioControl's Lifepak 10 Defibrillator can actually be any one of the following correct model numbers: 10-41, 10-43, 10-47, 10-51, and 10-57.
Serial number: This is usually found on the data plate as well; it is a serialized number (which may contain alpha characters) provided by the manufacturer. This number is crucial to device alerts and recalls.
Acquisition cost: The total purchased price for an individual item or system. This cost should include installation, shipping, and other associated costs. These numbers are crucial for budgeting, maintenance expenditures, and depreciation reporting.
Condition code: This code is mainly used when an item is turned in and should be changed when there are major changes to the device that could affect whether or not an item should be salvaged, destroyed, or used by another Medical Treatment Facility.
Maintenance assessment: This assessment must be validated every time a BMET performs any kind of maintenance on a device.
Several other management tools, such as equipment replacement planning and budgeting, depreciation calculations, and, at the local level, literature, repair parts, and supplies, are directly related to one or more of these fundamental basics. Data quality must be tracked monthly and all discrepancies must be corrected. A minimal sketch of how such an equipment record might be carried is shown below.
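As a rough illustration of the fields above (all field names and sample values are invented for this sketch and are not taken from any particular system):

```python
# Minimal sketch of an equipment record carrying the data-quality fields
# described above; field names and sample values are illustrative only.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class EquipmentRecord:
    control_number: str            # facility-assigned equipment control/asset number
    nomenclature: str              # e.g. from the ECRI UMDNS nomenclature system
    manufacturer: str              # original equipment manufacturer (OEM)
    model: str                     # nameplate model number
    serial_number: str             # OEM serial number (needed for alerts/recalls)
    acquisition_cost: float        # purchase + installation + shipping
    condition_code: str            # e.g. "serviceable", "unserviceable", "salvage"
    maintenance_assessment: str    # validated at every maintenance action
    warranty_expires: Optional[date] = None
    location: Optional[str] = None
    next_scheduled_maintenance: Optional[date] = None

record = EquipmentRecord(
    control_number="EC-000123",
    nomenclature="Defibrillator, External",
    manufacturer="Example Medical Co.",
    model="EX-10",
    serial_number="SN12345",
    acquisition_cost=18500.00,
    condition_code="serviceable",
    maintenance_assessment="passed electrical safety and output checks",
    location="Emergency Department",
)
print(record.control_number, "-", record.nomenclature)
```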
Quality Assurance
Quality assurance provides a systematic way of identifying defective items of supply or equipment. A good quality control/engineering program improves the quality of work and lessens the risk of injury or death to staff and patients.
Patient Safety
Safety of patients and staff is paramount to the success of the organization's mission. The Joint Commission publishes annual lists detailing "National Patient Safety Goals" to be implemented by healthcare organizations. The goals are developed by experts in patient safety: nurses, physicians, pharmacists, risk managers, and other professionals with patient-safety experience in a variety of settings. Patient safety is among the most important goals of every healthcare provider, and participation in a variety of committees and processes concerned with patient safety provides a way for biomedical managers and clinical engineering departments to gain visibility and positively affect their workplace.
Risk management
This program helps the medical treatment facility reduce the likelihood of equipment-related risks, minimize liability from mishaps and incidents, and stay compliant with regulatory reporting requirements. The best practice is to use a rating system for every equipment type. For example, a risk-rating system might rate defibrillators as high risk, general-purpose infusion pumps as medium risk, electronic thermometers as low risk, and otoscopes as no significant risk. This system could be set up in a Microsoft Excel or Access file for a manager's or technician's quick reference; a minimal sketch of such a lookup is shown below.
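A minimal sketch of that quick-reference lookup (the device types and ratings mirror the examples in the text; everything else is illustrative):

```python
# Illustrative risk-rating lookup; device types and ratings follow the
# examples given above, the rest is invented for the sketch.
RISK_RATING = {
    "defibrillator": "high",
    "general-purpose infusion pump": "medium",
    "electronic thermometer": "low",
    "otoscope": "no significant risk",
}

def risk_for(device_type: str) -> str:
    """Return the assigned risk rating, defaulting to 'unrated' for unknown types."""
    return RISK_RATING.get(device_type.lower(), "unrated")

print(risk_for("Defibrillator"))   # -> high
```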
In addition, user error, equipment abuse, no problem/fault found occurrences must be tracked to assist risk management personnel in determining whether additional clinical staff training must be performed.
Risk management for IT networks incorporating medical devices is covered by the standard ISO/IEC 80001. Its purpose is: "Recognizing that MEDICAL DEVICES are incorporated into IT-NETWORKS to achieve desirable benefits (for example, INTEROPERABILITY), this international standard defines the roles, responsibilities and activities that are necessary for RISK MANAGEMENT of IT-NETWORKS incorporating MEDICAL DEVICES to address the KEY PROPERTIES". It adapts some basic ideas of ISO 20000 to the context of medical applications, e.g. configuration, incident, problem, change and release management, and risk analysis, control and evaluation according to ISO 14971. IEC 80001 "applies to RESPONSIBLE ORGANIZATIONS, MEDICAL DEVICE manufacturers and other providers of information technologies for the purpose of comprehensive RISK MANAGEMENT".
Hospital Safety Programs
The Joint Commission stipulates seven management plans for hospital accreditation, one of which is safety. Safety covers a range of hazards, including mishaps, injuries on the job, and patient care hazards. The most common safety mishaps are "needle-sticks" (staff accidentally sticking themselves with a needle) and patient injuries during care. Managers must ensure that all staff and patients are safe within the facility; safety is everyone's responsibility.
There are several meetings that medical equipment managers are required to attend as the organization's technical representative:
Patient Safety
Environment of Care
Space Utilization Committee
Equipment Review Board
Infection Control (optional)
Educational Requirements for Biomedical Engineers:
Students should take the most challenging science, math, and English courses available in high school.
All biomedical engineers have at least a bachelor's degree in engineering. Many have advanced graduate degrees as well. Courses of study include a sound background in mechanical, chemical, or industrial engineering, and specialized biomedical training. Most programs last from four to six years, and all states require biomedical engineers to pass examinations and be licensed.
Duties & Responsibilities for Biomedical Engineers:
Description:
Biomedical Engineers use engineering principles to solve health related and medical problems. They do a lot of research in conjunction with life scientists, chemists, and medical professionals to design medical devices like artificial hearts, pacemakers, dialysis machines, and surgical lasers. Some conduct research on biological and other life systems or investigate ways to modernize laboratory and clinical procedures.
Frequently, biomedical engineers supervise biomedical equipment maintenance technicians, investigate medical equipment failure, and advise hospitals about purchasing and installing new equipment.
Biomedical engineers work in hospitals, universities, industry, and research laboratories.
Working Conditions:
Biomedical engineers work in offices, laboratories, workshops, manufacturing plants, clinics and hospitals. Some local travel may be required if medical equipment is located in various clinics or hospitals.
Most biomedical engineers work standard weekday hours. Longer hours may be required to meet research deadlines, work with patients at times convenient to them, or work on medical equipment that is in use during daytime hours.
Duties:
Biomedical engineers work closely with life scientists, chemists and medical professionals (physicians, nurses, therapists and technicians) on the engineering aspects of biological systems.
Duties and responsibilities vary from one position to another but, in general, biomedical engineers:
• design and develop medical devices such as artificial hearts and kidneys, pacemakers, artificial hips, surgical lasers, automated patient monitors and blood chemistry sensors.
• design and develop engineered therapies (for example, neural-integrative prostheses).
• adapt computer hardware or software for medical science or health care applications (for example, develop expert systems that assist in diagnosing diseases, medical imaging systems, models of different aspects of human physiology or medical data management).
• conduct research to test and modify known theories and develop new theories.
• ensure the safety of equipment used for diagnosis, treatment and monitoring.
• investigate medical equipment failures and provide advice about the purchase and installation of new equipment.
• develop and evaluate quantitative models of biological processes and systems.
• apply engineering methods to answer basic questions about how the body works.
• contribute to patient assessments.
• prepare and present reports for health professionals and the public.
• supervise and train technologists and technicians.
Biomedical engineers may work primarily in one or a combination of the following fields:
• bioinformatics – developing and using computer tools to collect and analyze data.
• bioinstrumentation – applying electronic and measurement techniques.
• biomaterials – developing durable materials that are compatible with a biological environment.
• biomechanics - applying knowledge of mechanics to biological or medical problems.
• bio-nano-engineering – developing novel structures of nanometer dimensions for application to biology, drug delivery, molecular diagnostics, microsystems and nanosystems.
• biophotonics – applying and manipulating light, usually laser light, for sensing or imaging properties of biological tissue.
• cellular and tissue engineering – studying the anatomy, biochemistry and mechanics of cellular and sub-cellular structures, developing technology to repair, replace or regenerate living tissues and developing methods for controlling cell and tissue growth in the laboratory.
• clinical engineering – applying the latest technology to health care and health care systems in hospitals.
• genomics and genetic engineering – mapping, sequencing and analyzing genomes (DNA), and applying molecular biology methods to manipulate the genetic material of cells, viruses and organisms.
• medical or biological imaging – combining knowledge of a physical phenomenon (for example, sound, radiation or magnetism) with electronic processing, analysis and display.
• molecular bioengineering – designing molecules for biomedical purposes and applying computational methods for simulating biomolecular interactions.
• systems physiology - studying how systems function in living organisms.
• therapeutic engineering – developing and discovering drugs and advanced materials and techniques for delivering drugs to local tissues with minimized side effects.
References
Sources
Bowles, Roger. Techcareers: Biomedical Equipment Technicians. TSTC Publishing.
Dyro, Joseph. Clinical Engineering Handbook (Biomedical Engineering).
Khandpur, R. S. Biomedical Instrumentation: Technology and Applications. McGraw-Hill.
Northrop, Robert B. Noninvasive Instrumentation and Measurement in Medical Diagnosis (Biomedical Engineering).
Webb, Andrew G. Introduction to Biomedical Imaging (IEEE Press Series on Biomedical Engineering).
Yadin David, Wolf W. von Maltzahn, Michael R. Neuman, and Joseph D. Bronzino. Clinical Engineering (Principles and Applications in Engineering).
Villafañe, Carlos, CBET. Biomed: From the Student's Perspective. www.Biomedtechnicians.com.
Willson K., Ison K., Tabakov S. Medical Equipment Management. CRC Press.
Medical Equipment Inventory Management WHO(2011)
Medical equipment | Medical equipment management | [
"Biology"
] | 3,323 | [
"Medical equipment",
"Medical technology"
] |
14,621,489 | https://en.wikipedia.org/wiki/Cytochrome%20b559 | Cytochrome b559 is an important component of Photosystem II (PSII) is a multisubunit protein-pigment complex containing polypeptides both intrinsic and extrinsic to the photosynthetic membrane. Within the core of the complex, the chlorophyll and beta-carotene pigments are mainly bound to the antenna proteins CP43 (PsbC) and CP47 (PsbB), which pass the excitation energy on to chlorophylls in the reaction centre proteins D1 (Qb, PsbA) and D2 (Qa, PsbD) that bind all the redox-active cofactors involved in the energy conversion process. The PSII oxygen-evolving complex (OEC) provides electrons to re-reduce the PSII reaction center, and oxidizes 2 water molecules to recover its reduced initial state. It consists of OEE1 (PsbO), OEE2 (PsbP) and OEE3 (PsbQ). The remaining subunits in PSII are of low molecular weight (less than 10 kDa), and are involved in PSII assembly, stabilisation, dimerization, and photoprotection.
Cytochrome b559, which forms part of the reaction centre core of PSII, is a heterodimer composed of one alpha subunit (PsbE), one beta (PsbF) subunit, and a heme cofactor. Two histidine residues, one from each subunit, coordinate the heme. Although cytochrome b559 is a redox-active protein, it is unlikely to be involved in the primary electron transport in PSII due to its very slow photo-oxidation and photo-reduction kinetics. Instead, cytochrome b559 could participate in a secondary electron transport pathway that helps protect PSII from photo-damage. Cytochrome b559 is essential for PSII assembly.
This domain occurs in both the alpha and beta subunits of cytochrome b559. In the alpha subunit, it occurs together with a lumenal domain, while in the beta subunit it occurs on its own.
Cytochrome b559 can exist in three forms, each with a characteristic redox potential. These forms are very low potential (VLP), ≤ zero mV; low potential (LP) at 60 mV; and high potential (HP) at 370 mV. There is also an intermediate potential (IP) form that has a redox potential at pH 6.5-7.0 that ranges from 170 to 240 mV. In oxygen-evolving reaction centers, more than half of the cyt b559 is in the HP form. In manganese-depleted non-oxygen evolving photosystem II reaction centers, cyt b559 is usually in the LP form.
References
Photosynthesis
Protein families | Cytochrome b559 | [
"Chemistry",
"Biology"
] | 595 | [
"Biochemistry",
"Protein families",
"Photosynthesis",
"Protein classification"
] |
14,621,793 | https://en.wikipedia.org/wiki/Lindemann%20mechanism | In chemical kinetics, the Lindemann mechanism (also called the Lindemann–Christiansen mechanism or the Lindemann–Hinshelwood mechanism) is a schematic reaction mechanism for unimolecular reactions. Frederick Lindemann and J.A. Christiansen proposed the concept almost simultaneously in 1921, and Cyril Hinshelwood developed it to take into account the energy distributed among vibrational degrees of freedom for some reaction steps.
It breaks down an apparently unimolecular reaction into two elementary steps, with a rate constant for each elementary step. The rate law and rate equation for the entire reaction can be derived from the rate equations and rate constants for the two steps.
The Lindemann mechanism is used to model gas phase decomposition or isomerization reactions. Although the net formula for decomposition or isomerization appears to be unimolecular and suggests first-order kinetics in the reactant, the Lindemann mechanism shows that the unimolecular reaction step is preceded by a bimolecular activation step so that the kinetics may actually be second-order in certain cases.
Activated reaction intermediates
The overall equation for a unimolecular reaction may be written A → P, where A is the initial reactant molecule and P is one or more products (one for isomerization, more for decomposition).
A Lindemann mechanism typically includes an activated reaction intermediate, labeled A*. The activated intermediate is produced from the reactant only after a sufficient activation energy is acquired by collision with a second molecule M, which may or may not be similar to A. It then either deactivates from A* back to A by another collision, or reacts in a unimolecular step to produce the product(s) P.
The two-step mechanism is then

A + M → A* + M (activation)
A* + M → A + M (deactivation)
A* → P (unimolecular reaction)
Rate equation in steady-state approximation
The rate equation for the rate of formation of product P may be obtained by using the steady-state approximation, in which the concentration of intermediate A* is assumed constant because its rates of production and consumption are (almost) equal. This assumption simplifies the calculation of the rate equation.
For the schematic mechanism of two elementary steps above, rate constants are defined as k1 for the forward reaction rate of the first step, k−1 for the reverse reaction rate of the first step, and k2 for the forward reaction rate of the second step. For each elementary step, the order of reaction is equal to the molecularity.
The rate of production of the intermediate A* in the first elementary step is simply:

d[A*]/dt = k1[A][M] (forward first step)

A* is consumed both in the reverse first step and in the forward second step. The respective rates of consumption of A* are:

−d[A*]/dt = k−1[A*][M] (reverse first step)
−d[A*]/dt = k2[A*] (forward second step)
According to the steady-state approximation, the rate of production of A* equals the rate of consumption. Therefore:

k1[A][M] = k−1[A*][M] + k2[A*]

Solving for [A*], it is found that

[A*] = k1[A][M] / (k−1[M] + k2)

The overall reaction rate is

d[P]/dt = k2[A*]

Now, by substituting the calculated value for [A*], the overall reaction rate can be expressed in terms of the original reactants A and M:

d[P]/dt = k1k2[A][M] / (k−1[M] + k2)
Reaction order and rate-determining step
The steady-state rate equation is of mixed order and predicts that a unimolecular reaction can be of either first or second order, depending on which of the two terms in the denominator is larger. At sufficiently low pressures, k−1[M] ≪ k2, so that

d[P]/dt = k1[A][M],

which is second order. That is, the rate-determining step is the first, bimolecular activation step.

At higher pressures, however, k−1[M] ≫ k2, so that

d[P]/dt = (k1k2/k−1)[A],

which is first order, and the rate-determining step is the second step, i.e. the unimolecular reaction of the activated molecule.

The theory can be tested by defining an effective rate constant (or coefficient) kuni, which would be constant if the reaction were first order at all pressures:

kuni = (d[P]/dt)/[A] = k1k2[M] / (k−1[M] + k2).

The Lindemann mechanism predicts that kuni decreases with decreasing pressure, and that its reciprocal

1/kuni = k−1/(k1k2) + 1/(k1[M])

is a linear function of 1/[M], or equivalently of 1/P. Experimentally for many reactions, kuni does decrease at low pressure, but the graph of 1/kuni as a function of 1/[M] is quite curved. To account accurately for the pressure dependence of rate constants for unimolecular reactions, more elaborate theories are required, such as the RRKM theory.
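To make the falloff behaviour concrete, the sketch below evaluates the effective rate coefficient over a range of collider concentrations. All numerical values are invented for illustration and are not taken from any specific reaction:

```python
# Illustrative sketch of the Lindemann falloff: effective first-order rate
# coefficient k_uni = k1*k2*[M] / (k_-1*[M] + k2). Rate constants are arbitrary.

k1 = 1.0e-12    # activation, cm^3 molecule^-1 s^-1 (made-up value)
k_m1 = 1.0e-11  # deactivation, cm^3 molecule^-1 s^-1 (made-up value)
k2 = 1.0e6      # unimolecular reaction of A*, s^-1 (made-up value)

def k_uni(M):
    """Effective first-order rate coefficient at collider concentration M."""
    return k1 * k2 * M / (k_m1 * M + k2)

# Sweep [M] over several orders of magnitude (molecules per cm^3).
for M in (1e14, 1e16, 1e18, 1e20):
    print(f"[M] = {M:.0e}  k_uni = {k_uni(M):.3e} s^-1")

# At low [M], k_uni ~ k1*[M] (second-order overall); at high [M] it approaches
# the high-pressure limit k1*k2/k_-1 (first-order overall).
print("high-pressure limit:", k1 * k2 / k_m1)
```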
Decomposition of dinitrogen pentoxide
In the Lindemann mechanism for a true unimolecular reaction, the activation step is followed by a single step corresponding to the formation of products. Whether this is actually true for any given reaction must be established from the evidence.
Much early experimental investigation of the Lindemann mechanism involved study of the gas-phase decomposition of dinitrogen pentoxide 2 N2O5 → 2 N2O4 + O2. This reaction was studied by Farrington Daniels and coworkers, and initially assumed to be a true unimolecular reaction. However it is now known to be a multistep reaction whose mechanism was established by Ogg as:
N2O5 ⇌ NO2 + NO3
NO2 + NO3 → NO2 + O2 + NO
NO + N2O5 → 3 NO2
An analysis using the steady-state approximation shows that this mechanism can also explain the observed first-order kinetics and the fall-off of the rate constant at very low pressures.
Mechanism of the isomerization of cyclopropane
The Lindemann-Hinshelwood mechanism explains unimolecular reactions that take place in the gas phase. Usually, this mechanism is used in gas phase decomposition and also in isomerization reactions. An example of isomerization by a Lindemann mechanism is the isomerization of cyclopropane.
cyclo−C3H6 → CH3−CH=CH2
Although it seems like a simple reaction, it is actually a multistep reaction:
cyclo−C3H6 → cyclo−C3H6* (k1)
cyclo−C3H6* → cyclo−C3H6 (k−1)
cyclo−C3H6* → CH3−CH=CH2 (k2)
This isomerization can be explained by the Lindemann mechanism: once the cyclopropane reactant is excited by collision, it becomes an energized cyclopropane molecule, which can then either be deactivated back to the reactant or go on to form propene, the product.
References
Reaction mechanisms | Lindemann mechanism | [
"Chemistry"
] | 1,288 | [
"Reaction mechanisms",
"Chemical kinetics",
"Physical organic chemistry"
] |
14,622,000 | https://en.wikipedia.org/wiki/Antenna%20complex%20in%20purple%20bacteria | The antenna complex in purple photosynthetic bacteria is a protein complex responsible for the transfer of solar energy to the photosynthetic reaction centre. Purple bacteria, particularly Rhodopseudomonas acidophila of the purple non-sulfur bacteria, have been one of the main groups of organisms used to study bacterial antenna complexes, so much is known about this group's photosynthetic components. It is one of the many independent types of light-harvesting complex used by various photosynthetic organisms.
In photosynthetic purple bacteria there are usually two antenna complexes that are generally composed of two types of polypeptides (alpha and beta chains). These proteins are arranged in a ring-like fashion creating a cylinder that spans the membrane; the proteins bind two or three types of bacteriochlorophyll (BChl) molecules and different types of carotenoids depending on the species. LH2 is the outer antenna complex that spans the membrane. It is peripheral to LH1, an antenna complex (also known as the core antenna complex) that is directly associated with the reaction centre, with the RC at the center of its elliptical ring. Unlike LH1 complexes, the number of LH2 complexes present varies with growth conditions and light intensity.
Both the alpha and the beta chains of antenna complexes are small proteins of 42 to 68 residues which share a three-domain organization. They are composed of a N-terminal hydrophilic cytoplasmic domain followed by a transmembrane region and a C-terminal hydrophilic periplasmic domain. In the transmembrane region of both chains there is a conserved histidine which is most probably involved in the binding of the magnesium atom of a bacteriochlorophyll group. The beta chains contain an additional conserved histidine which is located at the C-terminal extremity of the cytoplasmic domain and which is also thought to be involved in bacteriochlorophyll-binding.
The particular chemical environment of the BChl molecules influences the wavelength of light they are able to absorb. LH2 complexes of R. acidophila contain BChl a molecules that absorb at 850 nm and at 800 nm. BChl a molecules that absorb at 850 nm are present in a hydrophobic environment. These pigments are in contact with a number of non-polar, hydrophobic residues. BChl a molecules that absorb at 800 nm are present in a relatively polar environment. The formylated N-terminus of the alpha polypeptide, a nearby histidine, and a water molecule are responsible for this.
Subfamilies
Antenna complex, alpha subunit
Antenna complex, beta subunit
References
Photosynthesis
Protein domains
Transmembrane proteins | Antenna complex in purple bacteria | [
"Chemistry",
"Biology"
] | 570 | [
"Biochemistry",
"Protein domains",
"Photosynthesis",
"Protein classification"
] |
14,622,189 | https://en.wikipedia.org/wiki/Carfecillin | Carfecillin is a beta-lactam antibiotic. It is a phenyl derivative of carbenicillin, acting as a prodrug.
References
Penicillins
Prodrugs | Carfecillin | [
"Chemistry"
] | 43 | [
"Chemicals in medicine",
"Prodrugs"
] |
14,622,190 | https://en.wikipedia.org/wiki/Hachimycin | Hachimycin, also known as trichomycin, is a polyene macrolide antibiotic, antiprotozoal, and antifungal derived from Streptomyces. It was first described in 1950, and in most reported cases it has been used for gynecological infections.
References
Antibiotics
Macrolide antibiotics
Polyenes | Hachimycin | [
"Biology"
] | 72 | [
"Antibiotics",
"Biocides",
"Biotechnology products"
] |
14,622,358 | https://en.wikipedia.org/wiki/Tracked%20loader | A tracked loader or crawler loader is an engineering vehicle consisting of a tracked chassis with a front bucket for digging and loading material. The history of tracked loaders can be defined by three evolutions of their design. Each of these evolutions made the tracked loader a more viable and versatile tool in the excavation industry. These machines are capable of performing nearly every task, but master of none. A bulldozer, excavator, or wheeled loader will outperform a tracked loader under specific conditions, but the ability of a tracked loader to perform almost every task on a job site is why it remains a part of many companies' fleets.
The first tracked loaders were built from tracked tractors with custom-built loader buckets. The first loaders were cable-operated like the bulldozers of the era. These tracked loaders lacked the ability to dig in hard ground, but so did the bulldozers of the day. They were mostly used for moving stockpiled material and loading trucks and rail cars.
The first major design change to tracked loaders came with the integration of hydraulic systems. Using hydraulics to power the loader linkages increased the power of the loader. More importantly, the loaders could apply down pressure to the bucket, vastly increasing their ability to dig compacted ground. Most of the tracked loaders were still based on a bulldozer equivalent. The weight of the engine was still on the front half of the tracks along with the heavy loader components. This caused many problems with heavy wear of the front idler wheels and the undercarriage in general. The Caterpillar 983 tracked loader, the second largest tracked loader ever built, was notorious for heavy undercarriage wear.
The hydrostatic drive system was the second major innovation to affect the design of tracked loaders.
Tracked loaders have become sophisticated machines, using hydrostatic transmissions and electro-hydraulic controls to increase efficiency. Until the rise in popularity of excavators, tracked loaders had little competition with regard to digging and loading jobs.
See also
Drott Manufacturing Company
References
External links
Engineering vehicles | Tracked loader | [
"Engineering"
] | 434 | [
"Engineering vehicles"
] |
14,622,379 | https://en.wikipedia.org/wiki/Photosystem%20II%20light-harvesting%20protein | Photosystem II light-harvesting proteins are the intrinsic transmembrane proteins CP43 (PsbC) and CP47 (PsbB) occurring in the reaction centre of photosystem II (PSII). These polypeptides bind to chlorophyll a and β-Carotene and pass the excitation energy on to the reaction centre.
This family also includes the iron-stress induced chlorophyll-binding protein CP43', encoded by the IsiA gene, which evolved in cyanobacteria from a PSII protein to cope with light limitations and stress conditions. Under iron-deficient growth conditions, CP43' associates with photosystem I (PSI) to form a complex that consists of a ring of 18 or more CP43' molecules around a PSI trimer, which significantly increases the light-harvesting system of PSI. The IsiA protein can also provide photoprotection for PSII.
Plants, algae and some bacteria use two photosystems, PSI with P700 and PSII with P680. Using light energy, PSII acts first to channel an electron through a series of acceptors that drive a proton pump to generate adenosine triphosphate (ATP), before passing the electron on to PSI. Once the electron reaches PSI, it has used most of its energy in producing ATP, but a second photon of light captured by P700 provides the required energy to channel the electron to ferredoxin, generating reducing power in the form of NADPH. The ATP and NADPH produced by PSII and PSI, respectively, are used in the light-independent reactions for the formation of organic compounds. This process is non-cyclic, because the electron from PSII is lost and is only replenished through the oxidation of water. Hence, there is a constant flow of electrons and associated hydrogen atoms from water for the formation of organic compounds. It is this stripping of hydrogens from water that produces the oxygen we breathe.
IsiA has an inverse relationship with the iron stress repressed RNA (IsrR). IsrR is an antisense RNA that acts as a reversible switch that responds to changes in environmental conditions to modulate the expression of IsiA.
Subfamilies
Photosystem II protein PsbC
See also
Light-harvesting complex
References
Photosynthesis
Protein domains
Protein families
Transmembrane proteins | Photosystem II light-harvesting protein | [
"Chemistry",
"Biology"
] | 508 | [
"Photosynthesis",
"Protein classification",
"Protein domains",
"Biochemistry",
"Protein families"
] |
14,622,722 | https://en.wikipedia.org/wiki/Lipid%20A%20deacylase | Lipid A deacylase (PagL) is an outer membrane protein with lipid A 3-O-deacylase activity. It forms an eight-stranded beta-barrel structure.
References
Protein domains
Protein families
Outer membrane proteins | Lipid A deacylase | [
"Biology"
] | 47 | [
"Protein families",
"Protein domains",
"Protein classification"
] |
14,623,014 | https://en.wikipedia.org/wiki/Weierstrass%20ring | In mathematics, a Weierstrass ring, named by Nagata after Karl Weierstrass, is a commutative local ring that is Henselian, pseudo-geometric, and such that any quotient ring by a prime ideal is a finite extension of a regular local ring.
Examples
The Weierstrass preparation theorem can be used to show that the ring of convergent power series over the complex numbers in a finite number of variables is a Weierstrass ring. The same is true if the complex numbers are replaced by a perfect field with a valuation.
Every ring that is a finitely-generated module over a Weierstrass ring is also a Weierstrass ring.
References
Bibliography
Commutative algebra | Weierstrass ring | [
"Mathematics"
] | 151 | [
"Fields of abstract algebra",
"Commutative algebra"
] |
14,623,180 | https://en.wikipedia.org/wiki/Calcium%20iodide | Calcium iodide (chemical formula CaI2) is the ionic compound of calcium and iodine. This colourless deliquescent solid is a salt that is highly soluble in water. Its properties are similar to those for related salts, such as calcium chloride. It is used in photography. It is also used in cat food as a source of iodine.
Reactions
Henri Moissan first isolated pure calcium in 1898 by reducing calcium iodide with pure sodium metal:
CaI2 + 2 Na → 2 NaI + Ca
Calcium iodide can be formed by treating calcium carbonate, calcium oxide, or calcium hydroxide with hydroiodic acid:
CaCO3 + 2 HI → CaI2 + H2O + CO2
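As a quick worked illustration of the stoichiometry of this carbonate route (a sketch using standard atomic masses; the 10 g starting amount is an arbitrary example):

```python
# Stoichiometry sketch for CaCO3 + 2 HI -> CaI2 + H2O + CO2.
# Atomic masses (g/mol) are standard values; the 10 g input is arbitrary.
M_Ca, M_C, M_O, M_H, M_I = 40.078, 12.011, 15.999, 1.008, 126.904

M_CaCO3 = M_Ca + M_C + 3 * M_O   # about 100.09 g/mol
M_CaI2 = M_Ca + 2 * M_I          # about 293.89 g/mol
M_HI = M_H + M_I                 # about 127.91 g/mol

grams_CaCO3 = 10.0
moles_CaCO3 = grams_CaCO3 / M_CaCO3

# 1 mol CaCO3 consumes 2 mol HI and yields 1 mol CaI2.
print(f"HI required : {2 * moles_CaCO3 * M_HI:.1f} g")
print(f"CaI2 yielded: {moles_CaCO3 * M_CaI2:.1f} g (theoretical)")
```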
Calcium iodide slowly reacts with oxygen and carbon dioxide in the air, liberating iodine, which is responsible for the faint yellow color of impure samples.
2 CaI2 + 2 CO2 + O2 → 2 CaCO3 + 2 I2
References
Calcium compounds
Iodides
Alkaline earth metal halides
Deliquescent materials | Calcium iodide | [
"Chemistry"
] | 218 | [
"Deliquescent materials"
] |
14,623,337 | https://en.wikipedia.org/wiki/Biphenylene | Biphenylene is an organic compound with the formula (C6H4)2. It is a pale, yellowish solid with a hay-like odor. Despite its unusual structure, it behaves like a traditional polycyclic aromatic hydrocarbon.
Bonding
Biphenylene is a polycyclic hydrocarbon, composed of two benzene rings joined by two bridging bonds (as opposed to a normal ring fusion), thus forming a 6-4-6 arene system. The resulting planar structure was one of the first π-electronic hydrocarbon systems discovered to show evidence of antiaromaticity. The spectral and chemical properties show the influence of the central [4n] ring, leading to considerable interest in the system in terms of its degree of lessened aromaticity. Questions of bond alternation and ring currents have been investigated repeatedly. Both X-ray diffraction and electron diffraction studies show a considerable alternation of bond lengths, with the bridging bonds between the benzenoid rings having the unusually great length of 1.524 Å. The separation of the rings is also reflected by the absence of the transmission of NMR substituent effects through the central [4n] ring. However, more sensitive NMR evidence, and particularly the shifting of proton resonances to high field, does indicate the existence of electron delocalization in the central [4n] ring. This upfield shift has been interpreted in terms of diminished benzenoid ring currents, either with or without an accompanying paramagnetic ring current in the central [4n] ring. Magnetic susceptibility measurements also show a diminishing of both diamagnetic exaltation and diamagnetic anisotropy, relative to comparable pure [4n+2] systems, which is also consistent with a reduction of ring current diamagnetism.
The electronic structure of biphenylene in the gas phase has the HOMO at a binding energy of 7.8 eV.
Preparation
Biphenylene was first synthesized by Lothrop in 1941.
The biphenylene structure can also be understood as a dimer of the reactive intermediate benzyne, which in fact serves as a major synthetic route, by heating the benzenediazonium-2-carboxylate zwitterion prepared from 2-aminobenzoic acid. Another approach is by N-amination of 1H-benzotriazole with hydroxylamine-O-sulfonic acid. The major product, 1-aminobenzotriazole, forms benzyne in an almost quantitative yield by oxidation with lead(IV) acetate, which rapidly dimerises to biphenylene in good yields.
Higher biphenylenes
Polycycles containing the biphenylene nucleus have also been prepared, some having considerable antiaromatic character. In general, additional 6-membered rings add further aromatic character, and additional 4-membered and 8-membered rings add antiaromatic character. However, the exact natures of the additions and fusions greatly affect the perturbations of the biphenylene system, with many fusions resulting in counter-intuitive stabilization by [4n] rings, or destabilization by 6-membered rings. This has led to significant interest in the systems by theoretical chemists and graph theoreticians. Even a complete 2-dimensional carbon sheet with biphenylene-like subunits has been proposed
and investigated in depth by theoretical means, finding a technologically relevant direct band gap of ca. 1 eV, excitonic binding energies of ca. 500 meV, and potential as a gas sensor.
Network
Researchers synthesized a biphenylene sheet consisting of sp2-hybridized carbon atoms that formed four-, six-, and eight-membered rings on a smooth gold surface. A bottom-up, two-step on-surface polymerization of an adsorbed halogenated terphenyl precursor, followed by interpolymer dehydrofluorination, yielded ultraflat four- and eight-membered rings. The resulting allotrope was metallic.
References
Antiaromatic compounds
Hydrocarbons
Biphenylenes | Biphenylene | [
"Chemistry"
] | 878 | [
"Organic compounds",
"Hydrocarbons"
] |
14,623,383 | https://en.wikipedia.org/wiki/Henselian%20ring | In mathematics, a Henselian ring (or Hensel ring) is a local ring in which Hensel's lemma holds. They were introduced by , who named them after Kurt Hensel. Azumaya originally allowed Henselian rings to be non-commutative, but most authors now restrict them to be commutative.
Some standard references for Hensel rings are , , and .
Definitions
In this article rings will be assumed to be commutative, though there is also a theory of non-commutative Henselian rings.
A local ring R with maximal ideal m is called Henselian if Hensel's lemma holds. This means that if P is a monic polynomial in R[x], then any factorization of its image P in (R/m)[x] into a product of coprime monic polynomials can be lifted to a factorization in R[x].
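As a concrete illustration of this lifting property, here is a standard example over the 7-adic integers (added for clarity; it is not taken from the article itself):

```latex
% Illustrative example: Hensel lifting over the Henselian local ring Z_7.
% The monic polynomial x^2 - 2 factors modulo the maximal ideal (7) into
% coprime monic factors,
\[
  x^2 - 2 \;\equiv\; (x - 3)(x - 4) \pmod{7},
\]
% so Hensel's lemma lifts this factorization to \mathbb{Z}_7[x]:
\[
  x^2 - 2 \;=\; (x - a)(x + a), \qquad a \in \mathbb{Z}_7,\ a \equiv 3 \pmod{7},
\]
% i.e. 2 has a square root in the 7-adic integers.
```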
A local ring is Henselian if and only if every finite ring extension is a product of local rings.
A Henselian local ring is called strictly Henselian if its residue field is separably closed.
By abuse of terminology, a field K with valuation v is said to be Henselian if its valuation ring is Henselian. That is the case if and only if v extends uniquely to every finite extension of K (resp. to every finite separable extension of K, resp. to the algebraic closure of K, resp. to the separable closure of K).
A ring is called Henselian if it is a direct product of a finite number of Henselian local rings.
Properties
Assume that K is a Henselian field. Then every algebraic extension of K is Henselian (by the fourth definition above).
If K is a Henselian field and α is algebraic over K, then for every conjugate α′ of α over K, v(α′) = v(α). This follows from the fourth definition, and from the fact that for every K-automorphism σ of the algebraic closure of K, v ∘ σ is an extension of the valuation v of K. The converse of this assertion also holds, because for a normal field extension L/K, the extensions of v to L are known to be conjugated.
Henselian rings in algebraic geometry
Henselian rings are the local rings with respect to the Nisnevich topology in the sense that if is a Henselian local ring, and is a Nisnevich covering of , then one of the is an isomorphism. This should be compared to the fact that for any Zariski open covering of the spectrum of a local ring , one of the is an isomorphism. In fact, this property characterises Henselian rings, resp. local rings.
Likewise strict Henselian rings are the local rings of geometric points in the étale topology.
Henselization
For any local ring A there is a universal Henselian ring B generated by A, called the Henselization of A, introduced by , such that any local homomorphism from A to a Henselian ring can be extended uniquely to B. The Henselization of A is unique up to unique isomorphism. The Henselization of A is an algebraic substitute for the completion of A. The Henselization of A has the same completion and residue field as A and is a flat module over A. If A is Noetherian, reduced, normal, regular, or excellent then so is its Henselization. For example, the Henselization of the ring of polynomials k[x,y,...] localized at the point (0,0,...) is the ring of algebraic formal power series (the formal power series satisfying an algebraic equation). This can be thought of as the "algebraic" part of the completion.
Similarly there is a strictly Henselian ring generated by A, called the strict Henselization of A. The strict Henselization is not quite universal: it is unique, but only up to non-unique isomorphism. More precisely it depends on the choice of a separable algebraic closure of the residue field of A, and automorphisms of this separable algebraic closure correspond to automorphisms of the corresponding strict Henselization. For example, a strict Henselization of the field of p-adic numbers is given by the maximal unramified extension, generated by all roots of unity of order prime to p. It is not "universal" as it has non-trivial automorphisms.
Examples
Every field is a Henselian local ring. (But not every field with valuation is "Henselian" in the sense of the fourth definition above.)
Complete Hausdorff local rings, such as the ring of p-adic integers and rings of formal power series over a field, are Henselian.
The rings of convergent power series over the real or complex numbers are Henselian.
Rings of algebraic power series over a field are Henselian.
A local ring that is integral over a Henselian ring is Henselian.
The Henselization of a local ring is a Henselian local ring.
Every quotient of a Henselian ring is Henselian.
A ring A is Henselian if and only if the associated reduced ring Ared is Henselian (this is the quotient of A by the ideal of nilpotent elements).
If A has only one prime ideal then it is Henselian since Ared is a field.
References
Commutative algebra | Henselian ring | [
"Mathematics"
] | 1,102 | [
"Fields of abstract algebra",
"Commutative algebra"
] |
14,623,985 | https://en.wikipedia.org/wiki/Kinetic%20chain%20length | In polymer chemistry, the kinetic chain length () of a polymer is the average number of units called monomers added to a growing chain during chain-growth polymerization. During this process, a polymer chain is formed when monomers are bonded together to form long chains known as polymers. Kinetic chain length is defined as the average number of monomers that react with an active center such as a radical from initiation to termination.
This definition is a special case of the concept of chain length in chemical kinetics. For any chemical chain reaction, the chain length is defined as the average number of times that the closed cycle of chain propagation steps is repeated. It is equal to the rate of the overall reaction divided by the rate of the initiation step in which the chain carriers are formed. For example, the decomposition of ozone in water is a chain reaction which has been described in terms of its chain length.
In chain-growth polymerization the propagation step is the addition of a monomer to the growing chain. The word kinetic is added to chain length in order to distinguish the number of reaction steps in the kinetic chain from the number of monomers in the final macromolecule, a quantity named the degree of polymerization. In fact the kinetic chain length is one factor which influences the average degree of polymerization, but there are other factors as described below. The kinetic chain length and therefore the degree of polymerization can influence certain physical properties of the polymer, including chain mobility, glass-transition temperature, and modulus of elasticity.
Calculating chain length
For most chain-growth polymerizations, the propagation steps are much faster than the initiation steps, so that each growing chain is formed in a short time compared to the overall polymerization reaction. During the formation of a single chain, the reactant concentrations and therefore the propagation rate remain effectively constant. Under these conditions, the ratio of the number of propagation steps to the number of initiation steps is just the ratio of reaction rates:
ν = Rp/Ri = Rp/Rt

where Rp is the rate of propagation, Ri is the rate of initiation of polymerization, and Rt is the rate of termination of the polymer chain. The second form of the equation is valid at steady-state polymerization, as the chains are being initiated at the same rate they are being terminated (Ri = Rt).
An exception is the class of living polymerizations, in which propagation is much slower than initiation, and chain termination does not occur until a quenching agent is added. In such reactions the reactant monomer is slowly consumed and the propagation rate varies and is not used to obtain the kinetic chain length. Instead the length at a given time is usually written as:
ν = Δ[M]/[I]

where Δ[M] represents the number of monomer units consumed, and [I] the number of radicals that initiate polymerization. When the reaction goes to completion, all of the monomer has been consumed (Δ[M] = [M]0), and then the kinetic chain length is equal to the number average degree of polymerization of the polymer.
In both cases kinetic chain length is an average quantity, as not all polymer chains in a given reaction are identical in length. The value of ν depends on the nature and concentration of both the monomer and initiator involved.
Kinetic chain length and degree of polymerization
In chain-growth polymerization, the degree of polymerization depends not only on the kinetic chain length but also on the type of termination step and the possibility of chain transfer.
Termination by disproportionation
Termination by disproportionation occurs when an atom is transferred from one polymer free radical to another. The atom is usually hydrogen, and this results in two polymer chains.
With this type of termination and no chain transfer, the number average degree of polymerization (DPn) is then equal to the average kinetic chain length:

DPn = ν
Termination by combination
Combination simply means that two radicals are joined together, destroying the radical character of each and forming one polymeric chain. With no chain transfer, the average degree of polymerization is then twice the average kinetic chain length:

DPn = 2ν
Chain transfer
Some chain-growth polymerizations include chain transfer steps, in which another atom (often hydrogen) is transferred from a molecule in the system to the polymer radical. The original polymer chain is terminated and a new one is initiated. The kinetic chain is not terminated if the new radical can add monomer. However the degree of polymerization is reduced without affecting the rate of polymerization (which depends on kinetic chain length), since two (or more) macromolecules are formed instead of one. For the case of termination by disproportionation, the degree of polymerization becomes:
DPn = Rp/(Rt + Rtr)

where Rtr is the rate of transfer. The greater Rtr is, the shorter the final macromolecule.
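The relations above can be evaluated directly; the following sketch uses arbitrary, illustrative rate values rather than measurements from any real polymerization:

```python
# Illustrative sketch: kinetic chain length and number-average degree of
# polymerization for a steady-state chain-growth polymerization.
# The rate values below are arbitrary and chosen only for demonstration.

R_p = 1.0e-4   # propagation rate, mol L^-1 s^-1
R_t = 5.0e-8   # termination rate (= initiation rate at steady state)
R_tr = 2.0e-8  # chain-transfer rate

nu = R_p / R_t  # kinetic chain length

# Degree of polymerization for the different cases discussed above:
DP_disprop = nu                   # termination by disproportionation
DP_comb = 2 * nu                  # termination by combination
DP_transfer = R_p / (R_t + R_tr)  # disproportionation plus chain transfer

print(f"kinetic chain length nu   = {nu:.0f}")
print(f"DPn (disproportionation)  = {DP_disprop:.0f}")
print(f"DPn (combination)         = {DP_comb:.0f}")
print(f"DPn (with chain transfer) = {DP_transfer:.0f}")
```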
Significance
The kinetic chain length is important in determining the degree of polymerization, which in turn influences many physical properties of the polymer.
Viscosity - Chain entanglements are very important in viscous flow behavior (viscosity) of polymers. As the chain becomes longer, chain mobility decreases; that is, the chains become more entangled with each other.
Glass-transition temperature - An increase in chain length often leads to an increase in the glass-transition temperature, Tg. The increased chain length causes the chains to become more entangled at a given temperature. Therefore, the temperature does not need to be as low for the material to act as a solid.
Modulus of Elasticity - A longer chain length is also associated with a tougher material and a higher modulus of elasticity, E, also known as Young's modulus. The interaction of the chains causes the polymer to become stiffer.
References
Polymer chemistry
Chemical kinetics | Kinetic chain length | [
"Chemistry",
"Materials_science",
"Engineering"
] | 1,131 | [
"Chemical kinetics",
"Chemical reaction engineering",
"Materials science",
"Polymer chemistry"
] |
14,624,054 | https://en.wikipedia.org/wiki/Embutramide | Embutramide (INN, USAN, BAN; brand name Embutane) is a potent sedative drug that is structurally related to GHB. It was developed by Hoechst A.G. in 1958 and was investigated as a general anesthetic agent, but was found to have a very narrow therapeutic window, with a 50 mg/kg dose producing effective sedation and a 75 mg/kg dose being fatal. Along with strong sedative effects, embutramide also produces respiratory depression and ventricular arrhythmia. Because of these properties, it was never adopted for medical use as an anesthetic as it was considered too dangerous for this purpose. Instead it is used for euthanasia in veterinary medicine, mainly for the euthanization of dogs.
Embutramide is formulated as a combination product under the brand name Tributame, which also contains chloroquine and lidocaine.
Embutramide is used for euthanasia of a range of different animals, mainly small animals kept as pets rather than large farm animals. It may cause significant pain to the animal being euthanized, and so may be less humane than older drugs used for this purpose such as pentobarbital; however, it may have less abuse potential than barbiturates especially in the Tributame combination formulation, and so is less likely to be diverted for recreational abuse. Embutramide has however been reported to be used for suicide by people with access to the drug, and was added to the list of Schedule III drugs in the US in 2006, as a Non-Narcotic with ACSCN 2020, which classifies it with depressants such as benzodiazepines, barbiturates, and other sedative-hypnotics.
Chemistry
Embutramide is considered an analog of gamma-hydroxybutyrate (GHB) due to its structural similarity to this naturally occurring neurotransmitter. GHB is known for its medical applications, such as treating narcolepsy and alcohol withdrawal symptoms. However, its recreational use has led to its classification as a controlled substance in many countries. The analog status of embutramide is significant in terms of its regulation and controlled use to prevent any potential misuse or abuse.
Synthesis
Alkylation of (3-methoxyphenyl)acetonitrile (1) with bromoethane gives 2-ethyl-2-(3-methoxyphenyl)butanenitrile (2). Sodium borohydride is used to reduce the nitrile group to give 2-ethyl-2-(3-methoxyphenyl)butan-1-amine (3). Amide formation via reaction with gamma-butyrolactone (GBL) completes the synthesis of embutramide (4).
References
Anesthetics
Hypnotics
Sedatives
3-Methoxyphenyl compounds
Carboxamides
Primary alcohols
Opioids | Embutramide | [
"Biology"
] | 642 | [
"Hypnotics",
"Behavior",
"Sleep"
] |
14,624,822 | https://en.wikipedia.org/wiki/Cam-6 | The CAM-6 accelerator is a PC-compatible expansion board designed to simulate cellular automata, presenting the output to an IBM CGA display. It was designed by Tommaso Toffoli and Norman Margolus and is described at length in "Cellular Automata Machines", by Toffoli and Margolus (MIT Press, 1987). The card was engineered and produced by Systems Concepts but production problems made it very hard for interested customers to acquire one.
References
Describes Rucker's experience acquiring and using the CAM-6 from Systems Concepts in great detail.
Cellular automata | Cam-6 | [
"Mathematics",
"Technology"
] | 123 | [
"Recreational mathematics",
"Computing stubs",
"Computer hardware stubs",
"Cellular automata"
] |
14,625,027 | https://en.wikipedia.org/wiki/Domain%20masking | Domain masking or URL masking is the act of hiding the actual domain name of a website from the URL field of a user's web browser in favor of another name. There are many ways to do this, including the following examples.
An HTML inline frame or frameset, so that a frame embedded in the main website actually points to some other site.
URL rewriting (e.g., mod_rewrite) or aliases to have the web server serve the same page for two different domain names.
Once the URL is masked, the browser displays the URL mask rather than the original URL/domain name. Masking does not affect the content of the actual website; it only covers up the original URL/domain name. Domain masking prevents users from seeing the actual domain of the website, whether this is done for brevity or for privacy/security reasons.
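As a minimal sketch of the inline-frame approach listed above (using only the Python standard library; the target URL is a placeholder, and this is an illustration rather than a recommended practice):

```python
# Minimal illustration of iframe-based domain masking using only the
# standard library. The target URL below is a placeholder assumption.
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET = "https://example.org/"  # the real site being masked (placeholder)

PAGE = f"""<!doctype html>
<html>
  <head><title>Masked site</title></head>
  <body style="margin:0">
    <!-- The visitor's address bar shows this server's domain,
         while the content is loaded from TARGET inside the frame. -->
    <iframe src="{TARGET}" style="border:0;width:100vw;height:100vh"></iframe>
  </body>
</html>"""

class MaskHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = PAGE.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serve on localhost:8000; in practice the masking domain's DNS would
    # point at a host running something like this.
    HTTPServer(("127.0.0.1", 8000), MaskHandler).serve_forever()
```

Note that many sites send X-Frame-Options or Content-Security-Policy frame-ancestors headers that prevent their pages from being embedded in a frame this way.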
See also
Website spoofing
URL shortening
URL redirection
References
Web design | Domain masking | [
"Engineering"
] | 198 | [
"Design",
"Web design"
] |
14,625,352 | https://en.wikipedia.org/wiki/Marsha%20Looper | Marsha Looper (born c. 1959) was a Colorado legislator. Elected to the Colorado House of Representatives as a Republican in 2006, Looper represented House District 19, which encompasses eastern El Paso County, Colorado from 2006 to 2012.
Early career
Born to a family of Eastern European descent, Looper was raised on Colorado's Western Slope. She graduated from Fruita Monument High School in Mesa County in western Colorado and took coursework at Mesa State College. A systems engineer, Looper certified as an IBM Network Engineer and a Novell Systems Engineer, and worked for ROLM, IBM and the Widefield School District before starting a company of her own, Computing Solutions Group, in 1993.
Looper entered the real estate business in 2004 and has earned Associate Broker and Registered Appraiser credentials. Since 2004, she has been a partner in Big Sky Realty, in addition to operating Phoenix & Associates, a home remodeling company. She is now working at Keller Williams Partners, specializing in Military Relocations, Recreational and Country properties.
Looper and her husband, Lynn, have operated their family's ranch near Calhan, Colorado, for two decades, as well as Waterworks Sales, a water pipe distribution company. After Waterworks was purchased by Hughes Supply, Inc., Looper remained with the company as a branch manager. She and Lynn have three children.
Within the community, Looper has been a member of the Pikes Peak Firearms Coalition, the National Rifle Association, the El Paso County Republican Women, the Falcon School District Accountability Committee, the Pikes Peak Range Riders, and the El Paso County Soil and Water Conservation Society, and volunteered with St. Michael's Church, the Special Olympics, and local 4-H and YMCA clubs.
Property-rights activism
Looper was a driving force behind opposition to a proposed toll road project along the Colorado Front Range — the Prairie Falcon Parkway Express, or "Super Slab" project — a highway and rail corridor stretching from Pueblo to Fort Collins. The project would have resulted in the condemnation or taking by eminent domain of privately held properties in seven Colorado counties; Looper's land fell within the corridor designated by the toll road's developers, and subsequently dropped in market value.
As the founder, in 2004, and chair of the Eastern Plains Citizens Coalition and executive director of Colorado Citizens for Property Rights, Looper led grassroots opposition to the toll road and supported several measures during the 2006 legislative session to tighten the rules regarding eminent domain under which toll roads could be constructed.
Among the successful measures lobbied for by Looper and others were rules narrowing the proposed corridor for toll roads from , and new reporting requirements that property owners be informed that their land lay within that corridor.
Looper also led an effort to place a statewide referendum on the 2006 general election ballot to prohibit governments from condemning private property for the purpose of economic development. The citizen initiative gathered over 30,000 signatures, but fell more than 30,000 signatures short of the total required for placement on the statewide ballot.
Legislative career
2006 election
In February 2006, upon the retirement of term-limited Rep. Richard Decker in House District 19, covering eastern El Paso County, Colorado, Looper announced her candidacy for the seat. After her experience pushing for legislation in the Colorado General Assembly to restrict the use of eminent domain, she cited her frustration at the influence of lobbyists, and Looper identified her top legislative concerns as property rights, transportation, and illegal immigration. She also identified water issues and renewable energy as areas of interest. Facing military veteran and school board member Jim Brewer, Looper won the Republican primary with 62% of the vote.
In the general election, Looper faced former Fountain, Colorado mayor and Democrat Ken Barela. Barela criticized Looper's emphasis on property rights, calling her a "one issue candidate;" in response, Looper characterized Barela as "too liberal" for the district. Although she was endorsed by Republican Rep. David Schultheis, she was not endorsed by Republican and outgoing Rep. Richard Decker, who criticized her for possible involvement in an independent publication promoting her campaign, and for donating over $50,000 of personal money to her legislative race; Looper outraised Barela by roughly 10 to 1, and won the general election by a 2 to 1 margin.
2007 legislative session
In the 2007 session of the Colorado General Assembly, Looper sat on the House Agriculture, Livestock and Natural Resources Committee and the House Local Government Committee.
Stemming from her work on toll road issues, including opposition to the "Super Slab" project as an activist, Looper sponsored legislation to impose new requirements, including planning in conjunction with the Colorado Department of Transportation, on new toll road development in Colorado. Other opponents of the "Super Slab" project criticized the bill for removing requirements that property owners be informed of planned development; the requirements had resulted in a decrease in property values for many in the proposed project's corridor. Looper contended that the purpose of the bill was to reduce the potential property value impact of speculative toll road projects. Although the bill passed the Colorado House of Representatives 61-3, the bill was postponed indefinitely in a Senate committee.
Looper also sponsored legislation to require disclosure of water sources for newly sold homes, a move designed to inform homeowners of possibly scarce groundwater resources. Unsuccessfully put forward in three previous years, the bill passed unanimously through committee and the full House before being signed by Gov. Ritter.
2008 legislative session
In the 2008 session of the Colorado General Assembly, Looper sat on the House Agriculture, Livestock, and Natural Resources Committee, and the House State, Veterans, and Military Affairs Committee.
In response to concerns about agricultural labor shortages and the difficulty of hiring legal foreign guest workers, Looper and Democratic Sen. Abel Tapia drafted legislation to create a state office to assist with the logistics of clearing guest workers for jobs in Colorado; under their proposal, the state of Colorado would seek a waiver from the federal government to process H-2A visas applications, including operating a guest-worker screening office in Mexico. The bill, which also contained a provision requiring that guest workers have 20% of their wages withheld until they returned to their home countries, was criticized as a possible violation of federal law. After 26 amendments, including removal of the wage withholding provision, the bill passed House committee with support from farming and ranching groups.
Looper's guest-worker bill became the center of controversy and widespread attention in April, when Rep. Douglas Bruce made controversial comments concerning guest workers during House debate. Looper had previously received death threats for sponsoring the bill, and received additional threats in the wake of the controversy. The bill ultimately passed both houses of the legislature, and was signed into law by Gov. Ritter.
Continuing her work on toll road legislation, Looper again sponsored a bill to alter the reporting and disclosure requirements surrounding planned toll roads, in an effort to reduce the property value impact on homeowners who live within a proposed toll road corridor. The bill was met with opposition from some toll road opponents for being ineffective at halting toll road development, and Looper herself postponed consideration of the bill in favor of a more expansive measure introduced by Rep. Debbie Stafford. Stafford's bill, however, was killed in a House committee, and Looper's measure passed the state house.
Looper has also introduced legislation to require property buyers to be informed of paperwork tracking residential well ownership, and sponsored a bill to allow judges to include restorative justice as part of sentencing for juveniles.
Following the legislative session, Looper was recognized by the Colorado Farm Bureau with their 2008 Pinnacle Award for legislative support of agriculture. In December 2008, she was named Colorado Legislator of the Year by the Rocky Mountain Farmers Union, citing her guest worker legislation.
2008 election
In the 2008 Congressional election, Looper supported Bentley Rayburn's challenge to incumbent Rep. Doug Lamborn in the Republican party primary for Colorado's 5th congressional district. Looper also stood against some fellow Republicans by opposing Amendment 52, a ballot measure on the November ballot that would reallocate some severance tax revenue from water projects to transportation.
Looper herself sought a second term in the legislature, facing Democrat Jimmy Phillips. Looper's re-election bid was endorsed by the Denver Post. She won re-election with 67 percent of the popular vote.
2009 legislative session
For the 2009 legislative session, Looper was named to seats on the House Agriculture, Livestock, and Natural Resources Committee and the House Transportation and Energy Committee. Looper sponsored bills to expand unemployment benefits for the spouses of Colorado military personnel killed in the line of duty, to pilot test electronic online voting for military personnel, and to extend the statute of limitations for vehicular homicides. Looper was also the House sponsor of a proposal to create a Fountain Creek Watershed, Flood Control and Greenway District in Pueblo and El Paso counties.
Looper's most prominent legislative work during the 2009 session surrounded two proposals on rainwater harvesting, previously not allowed under Colorado's prior appropriation water rights law. Looper was the Senate sponsor of a bill to allow residents on wells to collect rainwater, which was signed by Gov. Ritter, revising more than a century of Colorado water law. Another proposal sponsored by Looper and enacted into law created a pilot program to study the effects of rainwater diversion for landscaping in mixed-use developments.
2010 legislative session
During the 2010 legislative session, Looper sponsored a bill to allow the creation of Veterans' court, and sponsored legislation to revise how Colorado's $1.50/tire recycling fee is spent, after proposals to use the funds for purposes other than tire disposal, and in response to the growth of tire dumps in her district. With Democratic Rep. Joe Rice, Looper introduced legislation to require automobile manufacturers to, when opening a dealership in a market where they had previously closed one, offer the right of first refusal to the previous dealer. After the bill passed the state house, a brief but intense lobbying campaign against it by Chrysler and General Motors resulted in some concessions to automobile makers before it moved forward in the Colorado Senate. In February, Looper introduced a measure to block the transfer of prisoners held at the Guantanamo Bay detention camp to facilities in Colorado.
In March 2010, Looper was one of four legislators named by Gov. Ritter to a 12-member Carbon Capture and Sequestration Task force, convened to consider "complex legal, regulatory and policy issues" surrounding the topic.
References
External links
Marsha Looper official site
Colorado General Assembly - Representative Marsha Looper official CO House website
Project Vote Smart - Representative Marsha Looper (CO) profile
Colorado House GOP - Marsha Looper profile
Marsha Looper's Business Website
1959 births
Ranchers from Colorado
American real estate brokers
Computer systems engineers
Living people
Members of the Colorado House of Representatives
Colorado Mesa University alumni
People from El Paso County, Colorado
Women state legislators in Colorado
21st-century American women politicians
21st-century members of the Colorado General Assembly | Marsha Looper | [
"Technology"
] | 2,236 | [
"Computer systems engineers",
"Computer systems"
] |
14,625,611 | https://en.wikipedia.org/wiki/Cytochrome%20c%20family | Cytochromes c (cyt c, c-type cytochromes) are cytochromes, or heme-containing proteins, that have heme C covalently attached to the peptide backbone via one or two thioether bonds. These bonds are in most cases part of a specific Cys-X-X-Cys-His (CXXCH) binding motif, where X denotes a miscellaneous amino acid. Two thioether bonds of cysteine residues bind to the vinyl sidechains of heme, and the histidine residue coordinates one axial binding site of the heme iron. Less common binding motifs can include a single thioether linkage, a lysine or a methionine instead of the axial histidine or a CXnCH binding motif with n>2. The second axial site of the iron can be coordinated by amino acids of the protein, substrate molecules or water. Cytochromes c possess a wide range of properties and function as electron transfer proteins or catalyse chemical reactions involving redox processes. A prominent member of this family is mitochondrial cytochrome c.
Classification
(Protein family infobox: Cytochrome c, Class II. Pfam PF01322; InterPro IPR002321; PROSITE PDOC00169; SCOP 1cgo. Image: atomic structure of a cytochrome c' with an unusual ligand-controlled dimer dissociation at a resolution of 1.8 Ångström, PDB 1bbh.)
Cytochrome c proteins can be divided in four classes based on their size, number of heme groups and reduction potentials:
Class I
Small soluble cytochrome c proteins with a molecular weight of 8-12 kDa and a single heme group belong to class I. It includes the low-spin soluble cytC of mitochondria and bacteria, with the heme-attachment site located towards the N-terminus, and the sixth ligand provided by a methionine residue about 40 residues further on towards the C-terminus. The typical class I fold contains five α-helices. On the basis of sequence similarity, class I cytC were further subdivided into five classes, IA to IE. Class IB includes the eukaryotic mitochondrial cyt c and prokaryotic 'short' cyt c2 exemplified by Rhodopila globiformis cyt c2; class IA includes 'long' cyt c2, such as Rhodospirillum rubrum cyt c2 and Aquaspirillum itersonii cyt c550, which have several extra loops by comparison with class IB cyt c.
The linked InterPro entry represents mono-haem cytochrome c proteins (excluding class II and f-type cytochromes), such as cytochromes c, c1, c2, c5, c555, c550-c553, c556, c6 and cbb3. Diheme cytochromes c are proteins with a class I cluster and a unique cluster.
Subclasses
Cytochrome c, class IA/IB
Cytochrome c, class IC
Cytochrome c, class ID
Cytochrome c, class IE
Class II
The heme group in class II cytochrome c proteins is attached to a C-terminal binding motif. The structural fold of class II c-type cytochromes contains a four α-helix bundle with the covalently attached heme group at its core. Representatives of class II are the high-spin cytochrome c' and a number of low-spin cytochromes c, e.g. cyt c556. The cyt c' are capable of binding such ligands as CO, NO or CN−, albeit with rate and equilibrium constants 100 to 1,000,000-fold smaller than other high-spin hemeproteins. This, coupled with its relatively low redox potential, makes it unlikely that cyt c' is a terminal oxidase. Thus cyt c' probably functions as an electron transfer protein. The 3D structures of a number of cyt c' have been determined which show that the proteins usually exist as a dimer. The Chromatium vinosum cyt c' exhibits dimer dissociation upon ligand binding.
Class III
Proteins containing multiple covalently attached heme groups with low redox potential are included in class III. The heme C groups, all bis-histidinyl coordinated, are structurally and functionally nonequivalent and present different redox potentials in the range 0 to -400 mV. Members of this class are e.g. cytochrome c7 (triheme), cytochrome c3 (tetraheme), and high-molecular-weight cytochrome c (Hmc), containing 16 heme groups with only 30-40 residues per heme group. The 3D structures of a number of cyt c3 proteins have been determined. The proteins consist of 4-5 α-helices and 2 β-sheets wrapped around a compact core of four non-parallel hemes, which present a relatively high degree of exposure to the solvent. The overall protein architecture, heme plane orientations and iron-iron distances are highly conserved.
An example is the Photosynthetic reaction centre of Rhodopseudomonas viridis that contains a tetraheme cytochrome c subunit.
Class IV
According to Ambler (1991), cytochrome c proteins containing other prosthetic groups besides heme C, such as flavocytochromes c (sulfide dehydrogenase) and cytochromes cd1 (nitrite reductase), belong to class IV. As this grouping is more related to how the heme group is used than to what the domains themselves look like, proteins placed in this class tend to be scattered among the others in bioinformatic groupings.
Biogenesis
The attachment of the heme group is physically separated from the protein biosynthesis. Proteins are synthesized within the cytoplasm and endoplasmic reticulum, while the maturation of cytochromes c occurs in the periplasm of prokaryotes, the intermembrane space of mitochondria or the stroma of chloroplasts. Several biochemical pathways have been discovered that differ depending on organism.
System I
Also called cytochrome c maturation (ccm) and found in Pseudomonadota, plant mitochondria, some protozoal mitochondria, deinococci, and archaea. Ccm comprises at least eight membrane proteins (CcmABCDEFGH) that are needed for electron transfer to the heme group, apo-cytochrome handling and attachment of the heme to the apo-cytochrome. An ABC-transporter-like complex formed by CcmA2BCD attaches a heme group to CcmE with the use of ATP. CcmE transports the heme to CcmF where the attachment to the apo-cytochrome occurs. Transport of the apoprotein from the cytoplasm to the periplasm happens via the Sec translocation system. CcmH is used by the system to recognize the apo-cytochrome and direct it to CcmF.
System II
Cytochromes c in chloroplasts, Gram-positive bacteria, cyanobacteria, and some Pseudomonadota are produced by the cytochrome c synthesis (ccs) system. It is composed of two membrane proteins CcsB and CcsA. The CcsBA protein complex was suggested to act as a heme transporter during the attachment process. In some organisms such as Helicobacter hepaticus both proteins are found as a fused single protein. Apoprotein transport occurs via the Sec translocon as well.
System III
Fungal, vertebrate and invertebrate mitochondria produce cytochrome c proteins with a single enzyme called HCCS (holocytochrome c synthase) or cytochrome c heme lyase (CCHL). The protein is attached to the inner membrane of the intermembrane space. In some organisms, such as Saccharomyces cerevisiae, cytochrome c and cytochrome c1 are synthesized by separate heme lyases, CCHL and CC1HL respectively. In Homo sapiens a single HCCS is used for the biosynthesis of both cytochrome c proteins.
System IV
Four membrane proteins are necessary for the attachment of a heme in cytochrome b6. A major difference to systems I-III is that the heme attachment occurs at the opposite side of the lipid bilayer compared to the other systems.
Human proteins containing this domain
CYCS; CYC1
References
Protein domains
Peripheral membrane proteins | Cytochrome c family | [
"Biology"
] | 1,946 | [
"Protein domains",
"Protein classification"
] |
9,460,040 | https://en.wikipedia.org/wiki/Lyman-alpha%20emitter | A Lyman-alpha emitter (LAE) is a type of distant galaxy that emits Lyman-alpha radiation from neutral hydrogen.
Most known LAEs are extremely distant, and because of the finite travel time of light they provide glimpses into the history of the universe. They are thought to be the progenitors of most modern Milky Way type galaxies. These galaxies can nowadays be found rather easily in narrow-band searches by an excess of their narrow-band flux at a wavelength which follows from their redshift:

$\lambda_{\mathrm{obs}} = (1 + z) \times 1215.67\ \text{Å},$

where z is the redshift, $\lambda_{\mathrm{obs}}$ is the observed wavelength, and 1215.67 Å is the rest-frame wavelength of Lyman-alpha emission. The Lyman-alpha line in most LAEs is thought to be caused by recombination of interstellar hydrogen that is ionized by an ongoing burst of star formation. Such Lyman-alpha emission was first suggested as a signature of young galaxies by Bruce Partridge and P. J. E. Peebles in 1967. Experimental observations of the redshift of LAEs are important in cosmology because they trace dark matter halos and subsequently the evolution of matter distribution in the universe.
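As a small worked illustration of this relation (not part of the original article; the function names are illustrative), the following Python sketch converts between redshift and the observed Lyman-alpha wavelength used to choose a narrow-band filter:

```python
LYA_REST_ANGSTROM = 1215.67  # rest-frame Lyman-alpha wavelength

def observed_lya_wavelength(z):
    """Observed wavelength (in Angstrom) of Lyman-alpha emitted at redshift z."""
    return (1.0 + z) * LYA_REST_ANGSTROM

def redshift_from_observed(wavelength_angstrom):
    """Redshift implied by a narrow-band detection at the given wavelength."""
    return wavelength_angstrom / LYA_REST_ANGSTROM - 1.0

# A narrow-band filter centred near 8150 Angstrom targets Lyman-alpha at z ~ 5.7
print(observed_lya_wavelength(5.7))     # ~8145 Angstrom
print(redshift_from_observed(8150.0))   # ~5.70
```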
Properties
Lyman-alpha emitters are typically low mass galaxies of 10^8 to 10^10 solar masses. They are typically young galaxies that are 200 to 600 million years old, and they have the highest specific star formation rate of any galaxies known. All of these properties indicate that Lyman-alpha emitters are important clues as to the progenitors of modern Milky Way type galaxies.
Lyman-alpha emitters have many unknown properties. The Lyman-alpha photon escape fraction varies greatly in these galaxies. This is the fraction of the light emitted at the Lyman-alpha line wavelength inside the galaxy that actually escapes and is visible to distant observers. There is much evidence that the dust content of these galaxies could be significant and therefore is obscuring the brightness of these galaxies. It is also possible that an anisotropic distribution of hydrogen density and velocity plays a significant role in the varying escape fraction due to the photons' continued interaction with the hydrogen gas (radiative transfer). Evidence now shows strong evolution in the Lyman-alpha escape fraction with redshift, most likely associated with the buildup of dust in the ISM. Dust is shown to be the main parameter setting the escape of Lyman-alpha photons. Additionally, the metallicity, outflows, and detailed evolution with redshift are unknown.
Importance in cosmology
LAEs are important probes of reionization, cosmology (BAO), and they allow probing of the faint end of the luminosity function at high redshift.
The baryonic acoustic oscillation signal should be evident in the power spectrum of Lyman-alpha emitters at high redshift. Baryonic acoustic oscillations are imprints of sound waves on scales where radiation pressure stabilized the density perturbations against gravitational collapse in the early universe. The three-dimensional distribution of the characteristically homogeneous Lyman-alpha galaxy population will allow a robust probe of cosmology. They are a good tool because the Lyman-alpha bias, the propensity for galaxies to form in the highest overdensity of the underlying dark matter distribution, can be modeled and accounted for. Lyman-alpha emitters are overdense in clusters.
See also
Damped Lyman-alpha system
Lyman-alpha blob
Lyman-alpha forest
Lyman-break galaxy
Lyman limit
Lyman series
References
External links
Physical cosmology
Galaxies | Lyman-alpha emitter | [
"Physics",
"Astronomy"
] | 710 | [
"Galaxies",
"Theoretical physics",
"Astrophysics",
"Physical cosmology",
"Astronomical objects",
"Astronomical sub-disciplines"
] |
9,460,255 | https://en.wikipedia.org/wiki/Inventec | Inventec Corporation is a Taiwan-based Original Design Manufacturer (ODM) making notebook computers, servers and mobile devices. The company was originally established in 1975 to develop and manufacture electronic calculators; major customers include Hewlett-Packard, Toshiba, Acer, and Fujitsu-Siemens.
Inventec Corporation has major development and manufacturing facilities in China and is one of their largest exporters. The company opened its first development center in China in 1991 and its first manufacturing facility in Shanghai in 1995. In addition, the company has configuration, and service centers in the United States, Europe, and Mexico.
The company has a workforce of over 23,000 employees, including over 3,000 engineers. It partially owns a Japan-based mini notebook brand vendor, Kohjinsha (KJS), which was established in Yokohama.
Group information
Inventec Group comprises five companies:
Inventec Corporation
Noted above
Inventec BESTA
BESTA is an independent subsidiary company of the Inventec Group first launched in Taipei in 1989 to produce compact English/Chinese electronic dictionaries. BESTA has expanded its product line to PDAs, tablet computers and translators in multiple languages (including Korean and Japanese).
BESTA currently produces over 30 models on the market in Taiwan, China, Thailand, Malaysia, Indonesia, and Singapore. The Thai distributor CyberDict offers customized products with additional Thai dictionaries.
BESTA also manufactures a line of language products designed specifically for the North American market, where it has become the leading provider of English/Chinese and English/Korean electronic dictionaries. In the US, BESTA products are sold under the BESTA (Chinese) or OPTIMEC (South Korean) labels and are exclusively distributed and serviced by Moy Sam Corporation (New York) and Maxmile Corporation (Los Angeles). In Canada, BESTA products are found in Toronto and Markham.
Several BESTA models come with slots for inserting SD/MMC data cards containing additional specialized dictionaries (such as medical or business). It has been ranked in 1st place for "Taiwan's Ideal Electronic Dictionary Brand" for twelve consecutive years. Inventec Besta became a listed company in Taiwan Stock Exchange in 2007.
Key Development of Inventec Besta Co:
Year 1989—Inventec Besta Co., Ltd was founded.
Year 1999—Merged with Inventec's References System Division, Lin Kou Factory, and Inventec (Xi'an) Company
Year 2000—Acquired Golden Atom Holdings Ltd. and invested in Besta Technology (HK) Co., Ltd. and Besta Technology (China) Co., Ltd
Tablets
Amazon Kindle Fire
Barnes & Noble Nook
N18C (Dr.Eye)
Lyon
Mobile phones
OKWAP
J98
PHS-I99
PHS-PG900
PHS-PG901
PHS-I92
PHS-i501
See also
List of companies of Taiwan
References
1975 establishments in Taiwan
Computer hardware companies
Computer systems companies
Electronics companies of Taiwan
Mobile phone manufacturers
Companies based in Taipei
Electronics companies established in 1975 | Inventec | [
"Technology"
] | 637 | [
"Computer hardware companies",
"Computer systems companies",
"Computers",
"Computer systems"
] |
9,460,850 | https://en.wikipedia.org/wiki/Conservation%20and%20restoration%20of%20new%20media%20art | The conservation and restoration of new media art is the study and practice of techniques for sustaining new media art created from materials such as digital, biological, performative, and other variable media.
New media art runs a unique risk when it comes to longevity that has resulted in the development of new and different preservation and restoration strategies and tools.
To preserve and restore these pieces of new media art, there are a variety of strategies including storage, migration, emulation, and reinterpretation. There are even more tools used to implement these strategies including Archivematica, BitCurator, Conifer, Media Info, PRONOM, QC Tools, and the Variable Media Questionnaire. The common metadata schema used for new media art is Media Art Notation System (MANS). Despite the name "new media art", there is a diverse history of preservation and restoration efforts including both individual efforts and consortium efforts.
Preservation strategies
Storage
The acquisition and storage of the physical media-equipment, such as DVD players or computers, used in multi-media or digital artworks has proven a short-term tactic at best, as hardware can quickly become obsolete or can go 'stale' in storage. Storage is also notoriously bad at capturing the contextual and live aspects of works such as Internet art, performance art and live electronic music.
Storage involves keeping documents in their original formats whenever possible to maintain authenticity; keeping metadata updated to aid in finding and understanding the preservation strategies taken so far; keeping documents on reliable, non-proprietary software that users would be the most likely to already have or easily get access to; storing multiple copies of bitstreams; replacing the carriers when new, more widely used ones become available.
Migration
To migrate a work of art is to upgrade its format from an aged medium to a more current one, such as from VHS to DVD, accepting that some changes in quality may occur while still maintaining the integrity of the original. This strategy assumes that preserving the content or information of an artwork, despite its change in media, trumps concerns over fidelity to the original look and feel.
Migration must take place regularly or the original piece may become obsolete with no way to update it to a newer format for accessibility. Migration is especially important when the file is saved on proprietary software like Microsoft Word, Prezi, Archives Space, etc. In the process of migration, a document can be stored in its original form and also migrated to a non-proprietary form in order to maintain authenticity while also providing long-term access.
Emulation
The process of simulating an older operating system (or by extension, other supporting infrastructure) on a newer software or hardware platform is called emulation. The idea behind emulation is to maintain the original format and feel of the piece of new media art. Emulation software allows users and researchers to view complex pieces of art like video games, virtual reality, etc. in a way that it was intended to be viewed. Emulation is especially important for art created on proprietary software or software that many users and researchers might not have access to. The emulation software allows them to view the document even without the original software.
Seeing Double: an emulation testbed
In 2004, the Guggenheim Museum, in conjunction with the Daniel Langlois Foundation, held an exhibition entitled Seeing Double: Emulation in Theory and Practice as a trial of emulation. In the exhibition, artworks operating on their original physical media were displayed alongside versions emulated on newer physical media. The exhibition was organized with the participation of computer researcher and emulation specialist, Jeff Rothenberg. In 1998, Rothenberg had published "Avoiding Technological Quicksand: Finding a Viable Technical Foundation for Digital Preservation".
Reinterpretation
Reinterpretation is the final storage form and is only considered when all other storage forms are not available. Reinterpretation involves changing the essence of the art with or without the artist's approval for preservation purposes. This could involve re-coding for access, recasting a piece in a more modern, durable medium, and more. This technique does not maintain authenticity the way the other strategies do, but it can be the most effective. Therefore, it is considered best practice to only use reinterpretation when all other strategies are deemed inappropriate.
Preservation tools
Because the conservation and restoration of new media art is a craft, not a science, not every preservation strategy will work for every piece of new media art. Repositories have to make decisions based on the complexities of each individual piece. They will each have their own unique needs, interests, and priorities. Repositories and individual conservators keep up with new tools and technologies available to aid in preservation.
Archivematica
Archivematica is an "integrated suite of open source software tools". It allows repositories to store their documents there for the long-term while also keeping up to date with current industry standards such as Dublin Core, AIPs, etc. Repositories started using Archivematica to address the gap between storage and actual preservation. It helps them along with every step in archival processing.
Bit Curator
Bit Curator can be used as a way to examine a collection without going through each individual piece of art. Conservators can upload bulk files and Bit Curator will examine the trends and patterns. From there, repositories can decide what to focus on and which pieces need attention.
Conifer
Conifer creates an archive of any page you visit while you browse. It is useful for conservators because they do not have to collect the webpage materials themselves. Everything you see is archived. Unlike other web archives like Wayback Machine, Conifer captures images, video, etc. of pages that can only be seen by you. They capture material that is password protected. From there, conservators can go through the collection themselves to sort, arrange, describe, add metadata, etc.
Media Info
Media Info is primarily used for audio and visual files. It only accepts certain formats, so more unconventional formats must be converted. This software verifies technical metadata and makes sure everything is working properly and up to date.
PRONOM
PRONOM is a resource for information on "file formats, software products, and other technical components". It helps to ensure the conservation and long-term access to a variety of documents. This information is marketed toward anyone interested in learning more. It is not exclusive to archivists and conservators.
QC Tools
QC Tools filters video files to help repositories analyze the contents of the video.
Variable Media Questionnaire
The Variable Media Questionnaire is a free web service that allows new media curators and repositories to share the most effective strategies of preservation for different forms of new media art. It focuses particularly on creating guidelines for preserving the art once the original medium or software is not available. They utilize the four main preservation strategies while recommending the specific mediums and software and work for different types of art.
Involving artists in preservation
The future of new media conservation and restoration involves more collaboration between artists and curators. When preservation efforts are taken earlier in the creation of the work, future preservation becomes easier and more effective. The artist does not necessarily know the steps that must be taken to accurately preserve new media art and the curator does not necessarily know the artistic intentions of the creator. When these two work together throughout the creation and transfer to a repository, the conservation of the piece will last longer and the intentions of the artist will be honored. Without these efforts, many new media art pieces will not be properly preserved and will never be moved to a repository. The earlier the intervention, the easier it is going to be for the curator to ensure long-term preservation. Steps can be taken to make sure the new media art is around for future use. Those steps can involve the preservation strategies and tools described above, but the piece can only be preserved if it exists in a state where curators can access and modify it. For example, if it exists on a software that is already obsolete, it cannot be migrated.
Metadata standards
Media Art Notation System (MANS)
The Media Art Notation System is a formal notation system introduced by Richard Rinehart, Digital Media Director and Adjunct Curator, Berkeley Art Museum/Pacific Film Archive, in 2007. It was developed in response to a need for a "new approach to conceptualizing digital and media art forms". Rinehart compares MANS to a musical score. An ensemble can change out the instruments, but it will still be the same piece of music as long as they follow the score. In the same way, digital media can be separated from its software and still produce the same computational result. When digital media is presented using a different hardware or software, it may appear slightly different, but it will still be the same piece of media art.
MANS uses XML to present the metadata specifically because it allows the coder to define the framework of the digital media while allowing for variations in how it presents itself. This is particularly useful for conservation because it allows future users to examine the document in a system that works for multiple different pieces of art. If the software or materials for one piece of art becomes obsolete, future researchers will be able to examine the new media art via XML and map it onto a newer schema.
MANS has three levels of implementation. The first level is Score which is mostly metadata with minimal XML. The second level is the machine-processable Score. It includes sub-component description, more XML, and even images and other media. The third level is the machine-processable Score that serves as a working model of the original. This level contains technical metadata, bitstreams, very granular description, and structural markup.
Exhibiting new media art
New media art is unique from other types of art in that the tools and strategies used to create the art are often the same tools and strategies used to display or exhibit the art. Because of this, exhibiting new media art becomes a part of conserving new media art. Often, a curator or specialist will be on site at the exhibit to ensure the art is being displayed and used correctly by audiences. It would be easy to assume a computer is for administrative use when really the coding on the computer is part of the exhibit. This everchanging medium is difficult to conserve, restore, and exhibit.
Because of the innovative nature of new media art, it is very common for exhibits to include audience interaction. Artists will create work that is only fully complete during audience interaction such as movement, tactile pieces, or even changes made by audience members. This creates a unique challenge where only the initial artist-created portion of the piece can be conserved and the audience-interaction portion of the piece will change overtime and depending on the actions of the audience members.
It is considered best practice when conserving or restoring new media art to consider the relationship with the audience. Often the aspect that sets new media art from other types of art is the "liveliness" that is represented by the relationship between the piece and the audience. In order to exhibit this type of art, curators and repositories must first accept this relationship as a type of art, and thus, worth exhibiting. Then, they will attempt to conserve the relationship built between the art and the audience.
Relationship to other preservation efforts
The catchall term sometimes applied to such genres, variable media, suggests that it is possible to recapture the experience of these works independently of the specific physical material and equipment used to display them in a given exhibition or performance. As the nature of multi-media artworks calls for the development of new standards, techniques, and metadata within preservation strategies, the idea that certain artworks incorporating an array of media elements could be variable opens up the possibility for experimental standards of preservation and reinterpretation.
Nevertheless, many new media preservationists work to integrate new preservation strategies with existing documentation techniques and metadata standards. This effort is made in order to remain compatible with previous frameworks and models on how to archive, store and maintain variable media objects in a standardized repository utilizing a systematized vocabulary, such as the Open Archival Information System model.
While some of this research parallels and exploits progress made in the practice of Digital preservation and Web archiving, the preservation of new media art offers special challenges and opportunities. Whereas scientific data and legal records may be easily migrated from one platform to another without losing their essential function, artworks are often sensitive to the look and feel of the media in which they are embedded. On the other hand, artists who are invited to help imagine a long-term plan for their work often respond with creative solutions.
History of new media art preservation
Individual efforts
Numerous contemporary art conservators have contributed individual efforts toward new media art preservation:
Carol Stringari of the Solomon R. Guggenheim Museum in New York
As a deputy director and chief conservator, Stringari led laser research of a monochromatic painting by Ad Reinhardt and project on conservation of the works of László Moholy-Nagy. She later won the CAA/Heritage Preservation Award for Distinction for Scholarship and Conservation for her work on Ad Reinhardt's technique.
Professor Pip Laurenson was formerly the Head of Time-Based Media Conservation at Tate Gallery in London where she ran the influential Andrew W. Mellon foundation supported programme Reshaping the Collectible: When Artworks Live in the Museum. Laurenson is currently head of the UK's first conservation programme dedicated to contemporary art and media, based at UCL East.
Jill Sterret of the San Francisco Museum of Modern Art.
Director of Collections & Conservation at SFMOMA, Sterret is an avid collector and preserver of artworks made by contemporary artists. She is committed to the vital collaborations between artists, curators, technical experts, registrars, and conservators that support contemporary art conservation practice.
Consortium efforts
The variable media concept was developed in 1998, first as a creative strategy Ippolito brought to the adversarial collaborations produced with artists Janet Cohen and Keith Frank, and later as a preservation strategy called the Variable Media Initiative that he applied to endangered artworks in the Solomon R. Guggenheim Museum's collection. In 2002 the Guggenheim partnered with the Daniel Langlois Foundation for Art, Science and Technology in Montreal to form the Variable Media Network, a concerted effort to develop a museum-standard, best practice for the collection and preservation of new media art. Apart from Stringari and Ippolito, other key members of the Variable Media Network included Alain Depocas, Director of the Centre for Research and Documentation, Daniel Langlois Foundation; and Caitlin Jones, former Daniel Langlois Variable Media Preservation Fellow at the Guggenheim Museum.
Around this time similar investigations into the preservation of digital/media art were being led on the West Coast by Richard Rinehart, who published an article on the subject, "The Straw that Broke the Museum's Back? Collecting and Preserving Digital/Media Art for the Next Century", in 2000. Rinehart had also established Conceptual & Intermedia Arts Online (CIAO) with Franklin Furnace, the New York-based performance art-grants giving organization and archive/advocate of performance, 'ephemeral' or non-traditional art under the directorship of Martha Wilson.
Members of the Variable Media Network and CIAO subsequently joined forces with other organizations, including Rhizome.org, an affiliate of New York's New Museum of Contemporary Art, for collective preservation endeavors such as Archiving the Avant Garde. This broader coalition, operating under the rubric Forging the Future, is managed by the Still Water lab at the University of Maine and offers free, open-source tools for new media preservation, including the 3rd-generation Variable Media Questionnaire.
In 2002, Timothy Murray founded the Rose Goldsen Archive of New Media Art. Named after the pioneering critic of the commercialization of mass media, the late Professor Rose Goldsen of Cornell University. The Archive hosts international art work produced on CD-Rom, DVD-Rom, video, digital interfaces, and the internet. Its collection of supporting materials includes unpublished manuscripts and designs, catalogues, monographs, and resource guides to new media art. The curatorial vision emphasizes digital interfaces and artistic experimentation by international, independent artists. Designed as an experimental center of research and creativity, the Goldsen Archive includes materials by individual artists and collaborates on conceptual experimentation and archival strategies with international curatorial and fellowship projects.
Other important initiatives include DOCAM, an international research alliance on the documentation and the conservation of the media arts heritage organized by the Daniel Langlois Foundation, and the International Network for the Conservation of Contemporary Art (INCCA), organized by the Netherlands Institute for Cultural Heritage (ICN).
See also
Art conservation
Digital preservation
Digital art
Internet art
National Digital Library Program (NDLP)
National Digital Information Infrastructure and Preservation Program (NDIIPP)
New media art
Virtual art
References
Alain Depocas, Jon Ippolito, and Caitlin Jones, eds., Permanence Through Change: The Variable Media Approach, co-published by the Guggenheim Museum and The Daniel Langlois Foundation for Art, Science & Technology, 2003.
Jon Ippolito, "Death by Wall Label", 2007.
Jeff Rothenberg, "Avoiding Technological Quicksand: Finding a Viable Technical Foundation for Digital Preservation", 1998.
Variable Media Network
Richard Rinehart, The Media Art Notation System: Documenting and Preserving Digital / Media Art, 2007.
Further reading
Steve Dietz. Collecting New Media Art: Just Like Anything Else, Only Different
Oliver Grau. "For an Expanded Concept of Documentation: The Database of Virtual Art", ICHIM, École du Louvre, Paris 2003, Proceedings, pp. 2–15. Expanded Concept of Documentation
Jones, Caitlin. "Does Hardware Dictate Meaning? Three Variable Media Conservation Case Studies" Horizon article
Jones, Caitlin. "Seeing Double: Emulation in Theory and Practice, The Erl King Case Study" Case Study
Jones, Caitlin. "Understanding Medium: preserving content and context in variable media art" Article from Keep Moving Images
Christiane Paul. Challenges for a Ubiquitous Museum: Presenting and Preserving New Media
Quaranta, Domenico. Interview with Jon Ippolito published in "Noemalab" Leaping into the abyss and resurfacing with a pearl
External links
erpanet The Preservation of Digital-Born Art
Preserving the Immaterial – A conference on variable media
DOCAM – Documentation and Conservation of the Media Arts Heritage / Documentation et Conservation du Patrimoine des Arts Médiatiques
Media Art and Museums: Guidelines and Case Studies
Variable Media Network – A resource from CHIN (Canadian Heritage Information Network)
Conservation and restoration of cultural heritage
Computer art
Digital art
Digital preservation
Internet culture
Multimedia
New media
Preservation (library and archival science) | Conservation and restoration of new media art | [
"Technology"
] | 3,806 | [
"Multimedia",
"New media"
] |
9,460,972 | https://en.wikipedia.org/wiki/Fluproquazone | Fluproquazone (trade name Tormosyl, RF 46-790) was a quinazolinone derivative with potent analgesic, antipyretic, and anti-inflammatory effects discovered by Sandoz. It was withdrawn during development due to liver toxicity.
References
Nonsteroidal anti-inflammatory drugs
Quinazolines
Lactams
4-Fluorophenyl compounds
Abandoned drugs
Isopropyl compounds | Fluproquazone | [
"Chemistry"
] | 89 | [
"Drug safety",
"Abandoned drugs"
] |
9,461,013 | https://en.wikipedia.org/wiki/Finger%20Touching%20Cell%20Phone | The Finger Touching Cell Phone was a concept cell-phone developed by Samsung and Sunman Kwon at Hong-ik University, South Korea.
Concept
The phone was designed to be worn as a wristband. It would project a 3 × 4 mobile-style keypad onto the wearer's fingers, with each joint acting as a button. The product won an iF Concept Product Award in 2007.
References
External links
http://digital.no.msn.com/article.aspx?cp-documentid=2890839 (Norwegian)
http://techdigest.tv/2007/02/turn_your_finge.html
Mobile phones
Pointing-device text input | Finger Touching Cell Phone | [
"Technology"
] | 144 | [
"Mobile technology stubs",
"Mobile phone stubs"
] |
9,461,236 | https://en.wikipedia.org/wiki/Riemann%20problem | A Riemann problem, named after Bernhard Riemann, is a specific initial value problem composed of a conservation equation together with piecewise constant initial data which has a single discontinuity in the domain of interest. The Riemann problem is very useful for the understanding of equations like Euler conservation equations because all properties, such as shocks and rarefaction waves, appear as characteristics in the solution. It also gives an exact solution to some complex nonlinear equations, such as the Euler equations.
In numerical analysis, Riemann problems appear in a natural way in finite volume methods for the solution of conservation law equations due to the discreteness of the grid. For that it is widely used in computational fluid dynamics and in computational magnetohydrodynamics simulations. In these fields, Riemann problems are calculated using Riemann solvers.
The Riemann problem in linearized gas dynamics
As a simple example, we investigate the properties of the one-dimensional Riemann problem in linearised gas dynamics (Toro, Eleuterio F. (1999). Riemann Solvers and Numerical Methods for Fluid Dynamics, p. 44, Example 2.5).
The initial conditions are given by

$$\begin{pmatrix} \rho \\ u \end{pmatrix}(x, 0) = \begin{pmatrix} \rho_L \\ u_L \end{pmatrix} \ \text{for} \ x \leq 0 \qquad \text{and} \qquad \begin{pmatrix} \rho \\ u \end{pmatrix}(x, 0) = \begin{pmatrix} \rho_R \\ u_R \end{pmatrix} \ \text{for} \ x > 0,$$

where x = 0 separates two different states, together with the linearised gas dynamic equations (see gas dynamics for derivation)

$$\frac{\partial \rho}{\partial t} + \rho_0 \frac{\partial u}{\partial x} = 0, \qquad \frac{\partial u}{\partial t} + \frac{a^2}{\rho_0} \frac{\partial \rho}{\partial x} = 0,$$

where $\rho_0$ is a constant reference density, $a$ is the speed of sound, and we can assume without loss of generality $a \geq 0$.

We can now rewrite the above equations in a conservative form:

$$U_t + A\,U_x = 0,$$

where

$$U = \begin{pmatrix} \rho \\ u \end{pmatrix}, \qquad A = \begin{pmatrix} 0 & \rho_0 \\ a^2/\rho_0 & 0 \end{pmatrix},$$

and the index denotes the partial derivative with respect to the corresponding variable (i.e. x or t).

The eigenvalues of the system are the characteristics of the system, $\lambda_1 = -a$ and $\lambda_2 = +a$. They give the propagation speed of the medium, including that of any discontinuity, which is the speed of sound here. The corresponding eigenvectors are

$$\mathbf{e}^{(1)} = \begin{pmatrix} \rho_0 \\ -a \end{pmatrix}, \qquad \mathbf{e}^{(2)} = \begin{pmatrix} \rho_0 \\ a \end{pmatrix}.$$

By decomposing the left state $U_L$ in terms of the eigenvectors, we get for some coefficients $\alpha_1$, $\alpha_2$

$$U_L = \alpha_1 \mathbf{e}^{(1)} + \alpha_2 \mathbf{e}^{(2)}.$$

Now we can solve for $\alpha_1$ and $\alpha_2$:

$$\alpha_1 = \frac{a \rho_L - \rho_0 u_L}{2 a \rho_0}, \qquad \alpha_2 = \frac{a \rho_L + \rho_0 u_L}{2 a \rho_0}.$$

Analogously

$$U_R = \beta_1 \mathbf{e}^{(1)} + \beta_2 \mathbf{e}^{(2)}$$

for

$$\beta_1 = \frac{a \rho_R - \rho_0 u_R}{2 a \rho_0}, \qquad \beta_2 = \frac{a \rho_R + \rho_0 u_R}{2 a \rho_0}.$$

Using this, in the domain in between the two characteristics $x = -a t$ and $x = +a t$, we get the final constant solution:

$$U^* = \begin{pmatrix} \rho^* \\ u^* \end{pmatrix} = \beta_1 \mathbf{e}^{(1)} + \alpha_2 \mathbf{e}^{(2)}, \qquad \rho^* = \frac{\rho_L + \rho_R}{2} - \frac{\rho_0}{2a}(u_R - u_L), \qquad u^* = \frac{u_L + u_R}{2} - \frac{a}{2\rho_0}(\rho_R - \rho_L),$$

and the (piecewise constant) solution in the entire domain $t > 0$:

$$\begin{pmatrix} \rho \\ u \end{pmatrix}(x, t) = \begin{cases} (\rho_L, u_L), & x < -a t, \\ (\rho^*, u^*), & -a t < x < +a t, \\ (\rho_R, u_R), & x > +a t. \end{cases}$$
Although this is a simple example, it still shows the basic properties. Most notably, the characteristics decompose the solution into three domains. The propagation speed of these two equations is equivalent to the propagation speed of sound.
The fastest characteristic defines the Courant–Friedrichs–Lewy (CFL) condition, which sets the restriction for the maximum time step for which an explicit numerical method is stable. Generally as more conservation equations are used, more characteristics are involved.
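As an illustration of this closed-form solution, the following short Python sketch (not part of the original article; function and variable names are illustrative) samples the exact solution of the linearised Riemann problem at a given point (x, t):

```python
def linearised_riemann(rho_L, u_L, rho_R, u_R, rho0, a, x, t):
    """Sample the exact solution of the linearised gas-dynamics Riemann problem
    at position x and time t > 0 (illustrative sketch)."""
    # Middle ("star") state between the two characteristics x = -a*t and x = +a*t
    rho_star = 0.5 * (rho_L + rho_R) - 0.5 * (rho0 / a) * (u_R - u_L)
    u_star = 0.5 * (u_L + u_R) - 0.5 * (a / rho0) * (rho_R - rho_L)
    xi = x / t  # similarity variable: the solution depends on x/t only
    if xi < -a:
        return rho_L, u_L
    if xi > a:
        return rho_R, u_R
    return rho_star, u_star

# Example: a density jump initially at rest produces two acoustic waves from x = 0
print(linearised_riemann(1.2, 0.0, 1.0, 0.0, rho0=1.1, a=340.0, x=10.0, t=0.05))
```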
References
See also
Computational fluid dynamics
Computational magnetohydrodynamics
Riemann solver
Conservation equations
Fluid dynamics
Computational fluid dynamics
Bernhard Riemann | Riemann problem | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 540 | [
"Computational fluid dynamics",
"Chemical engineering",
"Conservation laws",
"Mathematical objects",
"Equations",
"Computational physics",
"Piping",
"Fluid dynamics",
"Conservation equations",
"Symmetry",
"Physics theorems"
] |
9,461,390 | https://en.wikipedia.org/wiki/Riemann%20solver | A Riemann solver is a numerical method used to solve a Riemann problem. They are heavily used in computational fluid dynamics and computational magnetohydrodynamics.
Definition
Generally speaking, Riemann solvers are specific methods for computing the numerical flux across a discontinuity in the Riemann problem. They form an important part of high-resolution schemes; typically the right and left states for the Riemann problem are calculated using some form of nonlinear reconstruction, such as a flux limiter or a WENO method, and then used as the input for the Riemann solver.
Exact solvers
Sergei K. Godunov is credited with introducing the first exact Riemann solver for the Euler equations, by extending the previous CIR (Courant-Isaacson-Rees) method to non-linear systems of hyperbolic conservation laws. Modern solvers are able to simulate relativistic effects and magnetic fields.
More recent research shows that an exact series solution to the Riemann problem exists, which may converge fast enough in some cases to avoid the iterative methods required in Godunov's scheme.
Approximate solvers
As iterative solutions are too costly, especially in magnetohydrodynamics, some approximations have to be made. Some popular solvers are:
Roe solver
Philip L. Roe used the linearisation of the Jacobian, which he then solved exactly.
HLLE solver
The HLLE solver (developed by Ami Harten, Peter Lax, Bram van Leer and Einfeldt) is an approximate solution to the Riemann problem, which is based only on the integral form of the conservation laws and the largest and smallest signal velocities at the interface. The stability and robustness of the HLLE solver are closely related to the signal velocities and a single central average state, as proposed by Einfeldt in the original paper.
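To illustrate the general idea behind HLL-type solvers, here is a minimal Python sketch of the standard two-wave HLL interface flux; the function name and the example are illustrative, and the wave-speed estimates (e.g. Einfeldt-type bounds) are left to the caller rather than taken from the article:

```python
def hll_flux(U_L, U_R, F_L, F_R, s_L, s_R):
    """Two-wave HLL approximate Riemann flux at a cell interface (illustrative sketch).

    U_L, U_R : conserved variables left/right of the interface
    F_L, F_R : physical fluxes evaluated from U_L and U_R
    s_L, s_R : estimates of the slowest and fastest signal speeds
    """
    if s_L >= 0.0:
        return F_L    # all waves travel to the right
    if s_R <= 0.0:
        return F_R    # all waves travel to the left
    # Single intermediate state obtained from the integral form of the conservation laws
    return (s_R * F_L - s_L * F_R + s_L * s_R * (U_R - U_L)) / (s_R - s_L)

# Example: Burgers' equation u_t + (u^2/2)_x = 0 with simple wave-speed estimates
uL, uR = 1.0, -0.5
print(hll_flux(uL, uR, 0.5 * uL**2, 0.5 * uR**2, s_L=min(uL, uR), s_R=max(uL, uR)))
```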
HLLC solver
The HLLC (Harten-Lax-van Leer-Contact) solver was introduced by Toro. It restores the missing rarefaction wave by using an estimation technique, such as linearisation. More advanced techniques exist, like using the Roe average velocity for the middle wave speed. These schemes are quite robust and efficient but somewhat more diffusive.
Rotated-hybrid Riemann solvers
These solvers were introduced by Hiroaki Nishikawa and Kitamura in order to overcome the carbuncle problems of the Roe solver and the excessive diffusion of the HLLE solver at the same time. They developed robust and accurate Riemann solvers by combining the Roe solver and the HLLE/Rusanov solvers: they show that being applied in two orthogonal directions the two Riemann solvers can be combined into a single Roe-type solver (the Roe solver with modified wave speeds). In particular, the one derived from the Roe and HLLE solvers, called Rotated-RHLL solver, is extremely robust (carbuncle-free for all possible test cases on both structured and unstructured grids) and accurate (as accurate as the Roe solver for the boundary layer calculation).
Other solvers
There are a variety of other solvers available, including more variants of the HLL scheme and solvers based on flux-splitting via characteristic decomposition.
Notes
See also
Godunov's scheme
Computational fluid dynamics
Computational magnetohydrodynamics
References
External links
Numerical analysis
Computational fluid dynamics
Conservation equations
Bernhard Riemann | Riemann solver | [
"Physics",
"Chemistry",
"Mathematics"
] | 715 | [
"Computational fluid dynamics",
"Conservation laws",
"Mathematical objects",
"Computational mathematics",
"Equations",
"Computational physics",
"Mathematical relations",
"Numerical analysis",
"Fluid dynamics",
"Conservation equations",
"Approximations",
"Symmetry",
"Physics theorems"
] |
9,461,413 | https://en.wikipedia.org/wiki/ABT-239 | ABT-239 is an H3-receptor inverse agonist developed by Abbott. It has stimulant and nootropic effects, and has been investigated as a treatment for ADHD, Alzheimer's disease, and schizophrenia. ABT-239 is more active at the human H3 receptor than comparable agents such as thioperamide, ciproxifan, and cipralisant. It was ultimately dropped from human trials after showing the dangerous cardiac side effect of QT prolongation, but is still widely used in animal research into H3 antagonists / inverse agonists.
References
External links
Nootropics
H3 receptor antagonists
Benzonitriles
Benzofuranethanamines
Pyrrolidines
Biphenyls
Nitriles | ABT-239 | [
"Chemistry"
] | 162 | [
"Nitriles",
"Functional groups"
] |
9,462,323 | https://en.wikipedia.org/wiki/Wilhelmy%20plate | A Wilhelmy plate is a thin plate that is used to measure equilibrium surface or interfacial tension at an air–liquid or liquid–liquid interface. In this method, the plate is oriented perpendicular to the interface, and the force exerted on it is measured. Based on the work of Ludwig Wilhelmy, this method finds wide use in the preparation and monitoring of Langmuir films.
Detailed description
The Wilhelmy plate consists of a thin plate usually on the order of a few square centimeters in area. The plate is often made from filter paper, glass or platinum which may be roughened to ensure complete wetting. In fact, the results of the experiment do not depend on the material used, as long as the material is wetted by the liquid. The plate is cleaned thoroughly and attached to a balance with a thin metal wire. The force on the plate due to wetting is measured using a tensiometer or microbalance and used to calculate the surface tension ($\gamma$) using the Wilhelmy equation:

$$\gamma = \frac{F}{l \cos \theta},$$

where $l = 2(w + d)$ is the wetted perimeter, $w$ is the plate width, $d$ is the plate thickness, and $\theta$ is the contact angle between the liquid phase and the plate. In practice the contact angle is rarely measured; instead, either literature values are used or complete wetting ($\theta = 0$) is assumed.
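A minimal Python sketch of this calculation (illustrative only; the function name, units and example numbers are assumptions, and the buoyancy correction is taken as already applied):

```python
import math

def wilhelmy_surface_tension(force_N, width_m, thickness_m, contact_angle_rad=0.0):
    """Surface tension (N/m) from the measured wetting force on a Wilhelmy plate.

    Assumes complete wetting (theta = 0) by default and a buoyancy-corrected force.
    """
    wetted_perimeter = 2.0 * (width_m + thickness_m)
    return force_N / (wetted_perimeter * math.cos(contact_angle_rad))

# Example: a 19.9 mm x 0.1 mm plate pulled with 2.91 mN gives about 72.8 mN/m (water-like)
print(wilhelmy_surface_tension(2.91e-3, 19.9e-3, 0.1e-3))
```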
In general, surface tension may be measured with high sensitivity using very thin plates ranging in thickness from 0.1 to 0.002 mm. The device is calibrated with pure liquids like water and ethanol. The buoyancy adjustment is minimized by utilizing a thin plate and dipping it as little as feasible. Wetting water on a platinum plate is accomplished by using commercially available platinum plates that have been roughened to improve wettability.
Advantages and short brief
If complete wetting is assumed (contact angle = 0), no correction factors are required to calculate surface tensions when using the Wilhelmy plate, unlike for a du Noüy ring. In addition, because the plate is not moved during measurements, the Wilhelmy plate allows accurate determination of surface kinetics on a wide range of timescales, and it displays low operator variance. In a typical plate experiment, the plate is lowered to the surface being analyzed until a meniscus is formed, and then raised so that the bottom edge of the plate lies on the plane of the undisturbed surface. If measuring a buried interface, the second (less dense) phase is then added on top of the undisturbed primary (denser) phase in such a way as to not disturb the meniscus. The force at equilibrium can then be used to determine the absolute surface or interfacial tension. Due to a large wetted area of the plate, the measurement is less susceptible for measurement errors than when using a smaller probe. Also, the method has been described in several international measurement standards.
See also
Tensiometer (surface tension)
du Noüy ring method
Sessile drop technique
Further reading
Holmberg, K (ed.) Handbook of Applied Surface and Colloid Chemistry New York, Wiley and Sons: 2002. Vol. 2, p. 219
References
Laboratory equipment
Materials science | Wilhelmy plate | [
"Physics",
"Materials_science",
"Engineering"
] | 635 | [
"Applied and interdisciplinary physics",
"Materials science",
"nan"
] |
9,462,509 | https://en.wikipedia.org/wiki/Armstrong%27s%20acid | Armstrong's acid (naphthalene-1,5-disulfonic acid) is a fluorescent organic compound with the formula C10H6(SO3H)2. It is one of several isomers of naphthalenedisulfonic acid. It is a colorless solid, typically obtained as the tetrahydrate. Like other sulfonic acids, it is a strong acid. It is named for British chemist Henry Edward Armstrong.
Production and use
It is prepared by disulfonation of naphthalene with oleum:
C10H8 + 2 SO3 → C10H6(SO3H)2
Further sulfonation gives the 1,3,5-trisulfonic acid derivative.
Reactions and uses
Fusion of Armstrong's acid in NaOH gives the disodium salt of 1,5-dihydroxynaphthalene, which can be acidified to give the diol. The intermediate in this hydrolysis, 1-hydroxynaphthalene-5-sulfonic acid, is also useful. Nitration gives nitrodisulfonic acids, which are precursors to amino derivatives.
The disodium salt is sometimes used as a divalent counterion for forming salts of basic drug compounds, as an alternative to the related mesylate or tosylate salts. When used in this way such a salt is called a naphthalenedisulfonate salt, as seen with the most common salt form of the stimulant drug CFT. The disodium salt is also used as an electrolyte in certain kinds of chromatography.
References
Reagents for organic chemistry
Naphthalenesulfonic acids | Armstrong's acid | [
"Chemistry"
] | 355 | [
"Reagents for organic chemistry"
] |
9,462,739 | https://en.wikipedia.org/wiki/FURPS | FURPS is an acronym representing a model for classifying software quality attributes (functional and non-functional requirements):
Functionality - capability (size and generality of feature set), reusability (compatibility, interoperability, portability), security (safety and exploitability)
Usability (UX) - human factors, aesthetics, consistency, documentation, responsiveness
Reliability - availability (failure frequency (robustness/durability/resilience), failure extent and time-length (recoverability/survivability)), predictability (stability), accuracy (frequency/severity of error)
Performance - speed, efficiency, resource consumption (power, ram, cache, etc.), throughput, capacity, scalability
Supportability (serviceability, maintainability, sustainability, repair speed) - testability, flexibility (modifiability, configurability, adaptability, extensibility, modularity), installability, localizability
The model, developed at Hewlett-Packard, was first publicly elaborated by Grady and Caswell. FURPS+ is now widely used in the software industry. The + was later added to the model after various campaigns at HP to extend the acronym to emphasize various attributes.
See also
Types of requirements
Expanded list of types of requirements
Further reading
External links
IBM on Furps+
Software requirements
Mnemonics | FURPS | [
"Engineering"
] | 279 | [
"Software engineering",
"Software engineering stubs",
"Software requirements"
] |
9,463,085 | https://en.wikipedia.org/wiki/Hydrogen%20darkening | Hydrogen darkening is a physical degradation of the optical properties of glass. Free hydrogen atoms are able to bind to the SiO2 silica glass compound, forming hydroxyl (OH) groups that interfere with the passage of light through the glass.
The problem is particularly relevant to fiber-optic cables—particularly in oil and gas wells where fiber optic cables are used for distributed temperature sensing (DTS). Hydrogen can be present due to the cracking of hydrocarbons in the well. The darkening of the fiber can distort the DTS reading and possibly render the DTS system inoperable due to the optical loss budget being exceeded.
To prevent this, coatings such as carbon are applied to the fiber, hydrogen-capturing gels are used to buffer the fiber, and other proprietary techniques may be used to prevent hydrogen atoms from reaching the glass fiber via the cable sheath.
References
Elizabeth Ann Bonnell (2015). Temperature dependent behavior of optical loss from hydrogen species in optical fibers at high temperature. (masters thesis), 2015-04-07.
Sensors
Fiber optics
Petroleum production
Glass chemistry
Glass engineering and science
Hydroxides | Hydrogen darkening | [
"Chemistry",
"Materials_science",
"Technology",
"Engineering"
] | 231 | [
"Glass engineering and science",
"Glass chemistry",
"Hydroxides",
"Measuring instruments",
"Materials science",
"Sensors",
"Bases (chemistry)"
] |
9,463,295 | https://en.wikipedia.org/wiki/Scan%20conversion | Scan conversion or scan converting rate is a video processing technique for changing the vertical / horizontal scan frequency of video signal for different purposes and applications. The device which performs this conversion is called a scan converter.
The application of scan conversion is wide and covers video projectors, cinema equipment, TV and video capture cards, standard and HDTV televisions, LCD monitors, radar displays and many different aspects of picture processing.
Mechanisms and methods
Scan conversion involves changing the picture information data rate and wrapping the new picture in appropriate synchronization signals.
There are two distinct methods for changing a picture's data rate:
Analog Methods (Non retentive, memory-less or real time method)
This conversion is done using large numbers of delay cells and is appropriate for analog video. It may also be performed using a specialized scan converter vacuum tube. In this case, polar-coordinate (angle and distance) data from a source such as a radar receiver is converted so that it can be displayed on a raster scan (TV type) display.
Digital methods (Retentive or buffered method)
In this method, a picture is stored in a line or frame buffer at one speed (data rate) n1 and is read out at another speed n2. While the picture is held in buffer memory, several picture-processing techniques can be applied, from simple interpolation to smart higher-order comparisons and motion detection, to improve picture quality and prevent conversion artifacts.
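As a hedged sketch of this buffered method, the short Python fragment below resamples the vertical scan-line count of a stored frame by simple linear interpolation; the function and variable names are illustrative and not taken from any particular device:

```python
import numpy as np

def convert_line_count(frame, out_lines):
    """Resample the vertical scan-line count of a frame by linear interpolation.

    frame     : 2-D array (input_lines x pixels_per_line) held in the buffer
    out_lines : number of scan lines required by the output timing
    """
    in_lines = frame.shape[0]
    # Position of each output line expressed in input-line coordinates
    pos = np.linspace(0, in_lines - 1, out_lines)
    lower = np.floor(pos).astype(int)
    upper = np.minimum(lower + 1, in_lines - 1)
    frac = (pos - lower)[:, None]
    return (1 - frac) * frame[lower] + frac * frame[upper]

# Example: up-convert a 576-line frame to 720 lines
field = np.random.rand(576, 720)
print(convert_line_count(field, 720).shape)   # (720, 720)
```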
How to realize
The process in practice is applicable only using integrated circuits in LSI and VLSI scales. Timing, interference between digital and analog signals, clocks, noise and exact synchronization have important roles in the circuit.
Digital conversion method needs the analog video signal to be converted to digital data at the first step.
A scan converter can be built in its basic structure as a circuit board using some high-speed integrated circuits; however, there are also integrated circuits which perform this function together with other picture-processing functions such as scissoring and change of aspect ratio. An easy-to-use example was the SDA9401.
Some examples
Up conversion (interpolation):
In many LCD monitors there is a native picture mode; however, the monitor can display different graphical modes using a scan converter.
In a 100 Hz/120 Hz analog TV, there is a scan converter circuit which converts the vertical frequency (refresh rate) from standard 50/60 Hz to 100/120 Hz to achieve a low level of flicker which is important in large screen (high inch) TVs.
An external TV card receives the TV signals and converts them to VGA or SVGA format for display on a monitor.
Down conversion (decimation):
Many graphic cards have output for standard-definition television. Here there is a conversion from computer graphical modes to TV standard formats.
Other graphic cards lack an SDTV output, but their VGA outputs can still be connected to an SDTV through an external scan converter.
Scan conversion serves as a bridge between TV and computer graphics technology.
See also
VESA
Analog-to-digital converter
Digital-to-analog converter
International Telecommunication Union
Video scaler
References
Television Engineering Handbook (K. Blair Benson)
A technical introduction to digital video. (Charles A. Poynton)
Television and Video Systems. (Charles G. Buscombe)
Printed Circuits Handbook. (Clyde F. Coombs)
Handbook of Filter Synthesis (Anatol I. Zverev)
External links
A/D, D/A Conversion for HDTV
NTSC to VGA scan converter circuit
Video Duplication with Access Scanning
Scan Converter General Idea
Video
Display technology
Television terminology
Video signal | Scan conversion | [
"Engineering"
] | 738 | [
"Electronic engineering",
"Display technology"
] |
9,463,447 | https://en.wikipedia.org/wiki/CTCF | Transcriptional repressor CTCF also known as 11-zinc finger protein or CCCTC-binding factor is a transcription factor that in humans is encoded by the CTCF gene. CTCF is involved in many cellular processes, including transcriptional regulation, insulator activity, V(D)J recombination and regulation of chromatin architecture.
Discovery
CCCTC-Binding factor or CTCF was initially discovered as a negative regulator of the chicken c-myc gene. This protein was found to be binding to three regularly spaced repeats of the core sequence CCCTC and thus was named CCCTC binding factor.
Function
The primary role of CTCF is thought to be in regulating the 3D structure of chromatin. CTCF binds together strands of DNA, thus forming chromatin loops, and anchors DNA to cellular structures like the nuclear lamina. It also defines the boundaries between active and heterochromatic DNA.
Since the 3D structure of DNA influences the regulation of genes, CTCF's activity influences the expression of genes. CTCF is thought to be a primary part of the activity of insulators, sequences that block the interaction between enhancers and promoters. CTCF binding has also been shown to both promote and repress gene expression. It is unknown whether CTCF affects gene expression solely through its looping activity, or if it has some other, unknown, activity. In a recent study, it has been shown that, in addition to demarcating TADs, CTCF mediates promoter–enhancer loops, often located in promoter-proximal regions, to facilitate the promoter–enhancer interactions within one TAD. This is in line with the concept that a subpopulation of CTCF associates with the RNA polymerase II (Pol II) protein complex to activate transcription. It is likely that CTCF helps to bridge the transcription factor-bound enhancers to transcription start site-proximal regulatory elements and to initiate transcription by interacting with Pol II, thus supporting a role of CTCF in facilitating contacts between transcription regulatory sequences. This model has been demonstrated by the previous work on the beta-globin locus.
Observed activity
The binding of CTCF has been shown to have many effects, which are enumerated below. In each case, it is unknown if CTCF directly evokes the outcome or if it does so indirectly (in particular through its looping role).
Transcriptional regulation
The protein CTCF plays a major role in repressing the insulin-like growth factor 2 gene, by binding to the H-19 imprinting control region (ICR) along with differentially-methylated region-1 (DMR1) and MAR3.
Insulation
Binding of targeting sequence elements by CTCF can block the interaction between enhancers and promoters, therefore limiting the activity of enhancers to certain functional domains. Besides acting as an enhancer blocker, CTCF can also act as a chromatin barrier by preventing the spread of heterochromatin structures.
Regulation of chromatin architecture
CTCF physically binds to itself to form homodimers,
which causes the bound DNA to form loops. CTCF also occurs frequently at the boundaries of sections of DNA bound to the nuclear lamina. Using chromatin immuno-precipitation (ChIP) followed by ChIP-seq, it was found that CTCF localizes with cohesin genome-wide and affects gene regulatory mechanisms and the higher-order chromatin structure. It is currently believed that the DNA loops are formed by the loop extrusion mechanism, whereby the cohesin ring is actively being translocated along the DNA until it meets CTCF. CTCF has to be in a proper orientation to stop cohesin.
Regulation of RNA splicing
CTCF binding has been shown to influence mRNA splicing.
DNA binding
CTCF binds to the consensus sequence CCGCGNGGNGGCAG (in IUPAC notation). This sequence is defined by 11 zinc finger motifs in its structure. CTCF's binding is disrupted by CpG methylation of the DNA it binds to. On the other hand, CTCF binding may set boundaries for the spreading of DNA methylation. In recent studies, CTCF binding loss has been reported to increase localized CpG methylation, which reflects another epigenetic remodeling role of CTCF in the human genome.
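To make the IUPAC notation concrete, the following small Python sketch (illustrative, not from the article) expands the ambiguity code N and scans a toy sequence for the consensus:

```python
import re

# IUPAC nucleotide codes needed for the CTCF consensus CCGCGNGGNGGCAG (N = any base)
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "N": "[ACGT]"}

def consensus_to_regex(consensus):
    """Turn an IUPAC consensus string into a regular expression."""
    return "".join(IUPAC[base] for base in consensus)

pattern = re.compile(consensus_to_regex("CCGCGNGGNGGCAG"))
sequence = "TTACCGCGTGGAGGCAGGAT"   # toy sequence, not a real genomic fragment
print([m.start() for m in pattern.finditer(sequence)])   # [3]
```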
CTCF binds to an average of about 55,000 DNA sites in 19 diverse cell types (12 normal and 7 immortal) and in total 77,811 distinct binding sites across all 19 cell types.
CTCF's ability to bind to multiple sequences through the usage of various combinations of its zinc fingers earned it the status of a “multivalent protein”. More than 30,000 CTCF binding sites have been characterized. The human genome contains anywhere between 15,000 and 40,000 CTCF binding sites depending on cell type, suggesting a widespread role for CTCF in gene regulation. In addition CTCF binding sites act as nucleosome positioning anchors so that, when used to align various genomic signals, multiple flanking nucleosomes can be readily identified. On the other hand, high-resolution nucleosome mapping studies have demonstrated that the differences of CTCF binding between cell types may be attributed to the differences in nucleosome locations. Methylation loss at CTCF-binding site of some genes has been found to be related to human diseases, including male infertility.
Protein-protein interactions
CTCF binds to itself to form homodimers. CTCF has also been shown to interact with Y box binding protein 1. CTCF also co-localizes with cohesin, which extrudes chromatin loops by actively translocating one or two DNA strands through its ring-shaped structure, until it meets CTCF in a proper orientation. CTCF is also known to interact with chromatin remodellers such as Chd4 and Snf2h (SMARCA5).
References
Further reading
External links
https://www.ctcfemory.com/ A Group for families affected by CTCF mutations
Transcription factors
Gene expression
Nuclear organization | CTCF | [
"Chemistry",
"Biology"
] | 1,302 | [
"Transcription factors",
"Gene expression",
"Signal transduction",
"Nuclear organization",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Induced stem cells"
] |
9,463,516 | https://en.wikipedia.org/wiki/Delbr%C3%BCck%20scattering | Delbrück scattering, the deflection of high-energy photons in the Coulomb field of nuclei as a consequence of vacuum polarization, was observed in 1975. The related process of the scattering of light by light, also a consequence of vacuum polarization, was not observed until 1998. In both cases, it is a process described by quantum electrodynamics.
Discovery
From 1932 to 1937, Max Delbrück worked in Berlin as an assistant to Lise Meitner, who was collaborating with Otto Hahn on the results of irradiating uranium with neutrons. During this period he wrote a few papers, one of which turned out to be an important contribution on the scattering of gamma rays by a Coulomb field due to polarization of the vacuum produced by that field (1933). His conclusion proved to be theoretically sound but inapplicable to the case in point, but 20 years later Hans Bethe confirmed the phenomenon and named it "Delbrück scattering".
In 1953, Robert Wilson observed Delbrück scattering of 1.33 MeV gamma-rays by the electric fields of lead nuclei.
Description
Delbrück scattering is the coherent elastic scattering of photons in the Coulomb field of heavy nuclei. It is one of the two nonlinear effects of quantum electrodynamics (QED) in the Coulomb field investigated experimentally. The other is the splitting of a photon into two photons. Delbrück scattering was introduced by Max Delbrück in order to explain discrepancies between experimental and predicted data in a Compton scattering experiment on heavy atoms carried out by Meitner and Kösters. Delbrück's arguments were based on the relativistic quantum mechanics of Dirac according to which the QED vacuum is filled with electrons of negative energy or – in modern terms – with electron-positron pairs. These electrons of negative energy should be capable of producing coherent-elastic photon scattering because the recoil momentum during absorption and emission of the photon is transferred to the total atom while the electrons remain in their state of negative energy. This process is the analog of atomic Rayleigh scattering with the only difference that in the latter case the electrons are bound in the electron cloud of the atom. The experiment of Meitner and Kösters was the first in a series of experiments where the discrepancy between experimental and predicted differential cross sections for elastic scattering by heavy atoms were interpreted in terms of Delbrück scattering. From the present point of view these early results are not trustworthy. Reliable investigations were possible only after modern QED techniques based on Feynman diagrams were available for quantitative predictions, and on the experimental side photon detectors with high energy resolution and high detection efficiency had been developed. This was the case at the beginning of the 1970s when also computers with high computing capacity were in operation which delivered numerical results for Delbrück scattering amplitudes with sufficient precision.
A first observation of Delbrück scattering was achieved in a high-energy, small-angle photon scattering experiment carried out at DESY (Germany) in 1973, where only the imaginary part of the scattering amplitude is of importance. Agreement was obtained with the predictions of Cheng and Wu, which were later verified by Milstein and Strakhovenko. These latter authors made use of a quasi-classical approximation very different from the one of Cheng and Wu. It could, however, be shown that both approximations are equivalent and lead to the same numerical results.
The essential breakthrough came with the Göttingen (Germany) experiment in 1975, carried out at an energy of 2.754 MeV. In the Göttingen experiment Delbrück scattering was observed as the dominant contribution to the coherent-elastic scattering process, in addition to minor contributions stemming from atomic Rayleigh scattering and nuclear Rayleigh scattering. This experiment was the first in which exact predictions based on Feynman diagrams were confirmed with high precision and, therefore, has to be considered the first definite observation of Delbrück scattering. A comprehensive description of the present status of Delbrück scattering can be found in the review literature. Nowadays, the most accurate measurements of high-energy Delbrück scattering are performed at the Budker Institute of Nuclear Physics (BINP) in Novosibirsk (Russia). The experiment in which photon splitting was observed for the first time was also performed at the BINP.
There are a number of experimental works published prior to the 1975 Göttingen experiment (or even to the 1973 DESY one), most notably those of Jackson and Wetzel in 1969 and of Moreh and Kahane in 1973. Both of these works used higher-energy gamma rays than the Göttingen experiment, giving Delbrück scattering a larger contribution to the overall measured cross section. In general, in the low-energy nuclear physics region, i.e. <10–20 MeV, a Delbrück experiment measures a number of competing coherent processes, including Rayleigh scattering from electrons, Thomson scattering from the point nucleus, and nuclear excitation via the giant dipole resonance. Apart from Thomson scattering, which is well known, the other two (namely Rayleigh and GDR) have considerable uncertainties. The interference of these effects with Delbrück scattering is by no means "minor" (again, "at classical nuclear physics energies"). Even at very forward scattering angles, where Delbrück scattering is very strong, there is a substantial interference with Rayleigh scattering, the amplitudes of both effects being of the same order of magnitude.
References
Quantum electrodynamics
Scattering | Delbrück scattering | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,095 | [
"Condensed matter physics",
"Scattering",
"Particle physics",
"Nuclear physics"
] |
9,463,527 | https://en.wikipedia.org/wiki/Algebraic%20modeling%20language | Algebraic modeling languages (AML) are high-level computer programming languages for describing and solving high complexity problems for large scale mathematical computation (i.e. large scale optimization type problems). One particular advantage of some algebraic modeling languages like AIMMS, AMPL, GAMS, Gekko, MathProg, Mosel, and OPL is the similarity of their syntax to the mathematical notation of optimization problems. This allows for a very concise and readable definition of problems in the domain of optimization, which is supported by certain language elements like sets, indices, algebraic expressions, powerful sparse index and data handling variables, and constraints with arbitrary names. The algebraic formulation of a model does not contain any hints as to how to process it.
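For illustration, a small linear program such as "maximize 3x + 2y subject to x + y ≤ 10 with x, y ≥ 0" can be written almost verbatim in an AML. The sketch below uses the open-source Pyomo package for Python (listed under "Notable AMLs" below); the variable and constraint names are purely illustrative:

    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                               NonNegativeReals, maximize, SolverFactory)

    model = ConcreteModel()
    model.x = Var(within=NonNegativeReals)
    model.y = Var(within=NonNegativeReals)
    # The objective and constraint read much like the mathematical statement of the problem.
    model.profit = Objective(expr=3 * model.x + 2 * model.y, sense=maximize)
    model.capacity = Constraint(expr=model.x + model.y <= 10)
    # Solving is delegated to an external solver (here GLPK, if one is installed):
    # SolverFactory('glpk').solve(model)

As the next paragraph notes, the model itself contains no instructions on how it is to be solved; that task falls to an external solver.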
An AML does not solve those problems directly; instead, it calls appropriate external algorithms to obtain a solution. These algorithms are called solvers and can handle certain kinds of mathematical problems such as:
linear problems
integer problems
(mixed integer) quadratic problems
mixed complementarity problems
mathematical programs with equilibrium constraints
constrained nonlinear systems
general nonlinear problems
non-linear programs with discontinuous derivatives
nonlinear integer problems
global optimization problems
stochastic optimization problems
Core elements
The core elements of an AML are:
a modeling language interpreter (the AML itself)
solver links
user interfaces (UI)
data exchange facilities
Design principles
Most AML follow certain design principles:
a balanced mix of declarative and procedural elements
open architecture and interfaces to other systems
different layers with separation of:
model and data
model and solution methods
model and operating system
model and interface
Data driven model generation
Most modeling languages exploit the similarities between structured models and relational databases by providing a database access layer, which enables the modelling system to directly access data from external data sources (e.g. these table handlers for AMPL).
With the refinement of analytic technologies applied to business processes, optimization models are becoming an integral part of decision support systems; optimization models can be structured and layered to represent and support complex business processes. In such applications, the multi-dimensional data structure typical of OLAP systems can be directly mapped to the optimization models, and typical MDDB operations can be translated into aggregation and disaggregation operations on the underlying model.
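As a rough sketch of this data-driven style (the set, parameter and data values below are invented for illustration), an abstract Pyomo model can be declared once and then instantiated from a plain Python dictionary, which could equally well be populated from a relational database or OLAP query:

    from pyomo.environ import (AbstractModel, Set, Param, Var, Objective,
                               Constraint, NonNegativeReals, minimize)

    model = AbstractModel()
    model.FOODS = Set()                          # index set, supplied by the data
    model.cost = Param(model.FOODS)              # one cost figure per food
    model.demand = Param()                       # total quantity required
    model.buy = Var(model.FOODS, within=NonNegativeReals)
    model.total_cost = Objective(
        rule=lambda m: sum(m.cost[f] * m.buy[f] for f in m.FOODS), sense=minimize)
    model.meet_demand = Constraint(
        rule=lambda m: sum(m.buy[f] for f in m.FOODS) >= m.demand)

    # The same model structure can be instantiated over any compatible dataset.
    data = {None: {
        'FOODS': {None: ['bread', 'milk']},
        'cost': {'bread': 2.0, 'milk': 1.5},
        'demand': {None: 3.0},
    }}
    instance = model.create_instance(data)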
History
Algebraic modelling languages find their roots in matrix-generator and report-writer programs (MGRW), developed in the late seventies. Some of these are MAGEN, MGRW (IBM), GAMMA.3, DATAFORM and MGG/RWG. These systems simplified the communication of problem instances to the solution algorithms and the generation of a readable report of the results.
An early matrix-generator for LP was developed around 1969 at the Mathematisch Centrum (now CWI), Amsterdam.
Its syntax was very close to the usual mathematical notation, using subscripts and sigmas. Input for the generator consisted of separate sections for the model and the data. It found users at universities and in industry. The main industrial user was the steel maker Hoogovens (now Tata Steel), where it was used for nearly 25 years.
A big step towards the modern modelling languages is found in UIMP, where the structure of the mathematical programming models taken from real life is analyzed for the first time, to highlight the natural grouping of variables and constraints arising from such models. This led to data-structure features, which supported structured modelling; in this paradigm, all the input and output tables, together with the decision variables, are defined in terms of these structures, in a way comparable to the use of subscripts and sets.
This is probably the single most notable feature common to all modern AMLs and enabled, in time, a separation between the model structure and its data, and a correspondence between the entities in an MP model and data in relational databases. So, a model could be finally instantiated and solved over different datasets, just by modifying its datasets.
The correspondence between modelling entities and relational data models then made it possible to seamlessly generate model instances by fetching data from corporate databases.
This feature now accounts for much of the usability of optimization in real-life applications, and is supported by most well-known modelling languages.
While algebraic modelling languages were typically isolated, specialized and commercial languages, more recently algebraic modelling languages started to appear in the form of open-source, specialized libraries within a general-purpose language, like Gekko or Pyomo for Python or JuMP for the Julia language.
Notable AMLs
Specialized AMLs
AIMMS
AMPL
GAMS
MathProg
MiniZinc
AML Packages in Generic Programming Languages
FlopC++ for C++
OptimJ for Java
JuMP for Julia
GBOML for Python
Pyomo for Python
References
Computer algebra systems
Mathematical optimization software
Specification languages | Algebraic modeling language | [
"Mathematics",
"Engineering"
] | 962 | [
"Software engineering",
"Specification languages",
"Computer algebra systems",
"Mathematical software"
] |
9,463,674 | https://en.wikipedia.org/wiki/Vainu%20Bappu | Manali Kallat Vainu Bappu (10 August 1927 – 19 August 1982) was an Indian astronomer and president of the International Astronomical Union. Bappu helped to establish several astronomical institutions in India, including the Vainu Bappu Observatory which is named after him, and he also contributed to the establishment of the modern Indian Institute of Astrophysics. In 1957, he discovered the Wilson–Bappu effect jointly with American astronomer Olin Chaddock Wilson.
On 2 July 1949, when Bappu was taking pictures of the night sky, he spotted a bright moving object which he rightly understood to be a comet. When he turned to his professor, Bart Bok, and his colleague Gordon Newkirk, they confirmed the discovery. They calculated the orbit of the comet, which revealed that it would reappear only after 60,000 years.
The International Astronomical Union officially named the comet as the Bappu-Bok-Newkirk comet (C/1949N1). Bappu also received the Donohoe Comet Medal of the Astronomical Society of the Pacific.
This is the only comet with an Indian name.
Early life
Vainu Bappu was born on 10 August 1927, in Chennai, as the only child of Manali Kukuzhi Bappu and Kallat Sunanna Bappu. His family originally hails from Thalassery in Kerala. His father was an astronomer at the Nizamiah Observatory in Telangana. He attended the Harvard Graduate School of Astronomy for his PhD after obtaining a postgraduate degree from Madras University.
Discoveries
Bappu, along with two of his colleagues, discovered the 'Bappu-Bok-Newkirk' comet. He was awarded the Donohoe Comet Medal by the Astronomical Society of the Pacific in 1949.
In a paper published in 1957, American astronomer Olin Chaddock Wilson and Bappu had described what would later be known as the Wilson–Bappu effect. The effect as described by L.V. Kuhi is: 'The width of the Ca II emission in normal, nonvariable, G, K, and M stars is correlated with the visual absolute magnitude in the sense that the brighter the star the wider the emission.' The paper opened up the field of stellar chromospheres for research.
Vainu Bappu Observatory
On his return to India, Bappu was appointed to head a team of astronomers to build an observatory at Nainital. His efforts of building an indigenous large optical telescope and a research observatory led to the founding of the optical observatory of Kavalur and its large telescope. The Vainu Bappu Observatory is one of the main observatories of the Indian Institute of Astrophysics, also initiated in its modern avatar by Bappu in 1971. Later, a number of discoveries were made from the Vainu Bappu Observatory.
Career overview
See also
Cosmic distance ladder
References
1927 births
Harvard University alumni
Indian astrophysicists
1982 deaths
Recipients of the Padma Bhushan in science & engineering
People from Thalassery
Scientists from Kerala
20th-century Indian astronomers
Presidents of the International Astronomical Union | Vainu Bappu | [
"Astronomy"
] | 639 | [
"Astronomers",
"Presidents of the International Astronomical Union"
] |
9,463,925 | https://en.wikipedia.org/wiki/Wilson%E2%80%93Bappu%20effect | The Ca II K line in cool stars is among the strongest emission lines originating in the star's chromosphere. In 1957, Olin C. Wilson and M. K. Vainu Bappu reported on the remarkable correlation between the measured width of the aforementioned emission line and the absolute visual magnitude of the star. This is known as the Wilson–Bappu effect. The correlation is independent of spectral type and is applicable to main sequence types G and K and to red giant type M. The wider the emission band, the brighter the star, a correlation that can be used empirically to estimate distance.
The main interest of the Wilson–Bappu effect is in its use for determining the distance of stars too remote for direct measurements. It can be studied using nearby stars, for which independent distance measurements are possible, and it can be expressed in a simple analytical form. In other words, the Wilson–Bappu effect can be calibrated with stars within 100 parsecs from the Sun. The width of the emission core of the K line (W0) can be measured in distant stars, so, knowing W0 and the analytical form expressing the Wilson–Bappu effect, we can determine the absolute magnitude of a star. The distance of a star follows immediately from the knowledge of both absolute and apparent magnitude, provided that the interstellar reddening of the star is either negligible or well known.
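Explicitly, the apparent magnitude m and the absolute magnitude M are linked to the distance d (in parsecs) through the distance modulus m − M = 5 log10(d) − 5, so that d = 10^((m − M + 5)/5) parsecs; a star with m = 10 and M = 0, for instance, lies at 1000 parsecs.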
The first calibration of the Wilson–Bappu effect using distances from Hipparcos parallaxes was made in 1999 by Wallerstein et al. A later work also used W0 measurements on high-resolution CCD spectra, but with a smaller sample.
According to the latest calibration, the relation between the absolute visual magnitude (Mv), expressed in magnitudes, and W0, expressed in km/s, is the following:
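Mv ≈ 33.2 − 18.0 log(W0)

For example, a measured width W0 of 100 km/s corresponds to Mv ≈ 33.2 − 18.0 × 2 = −2.8.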
The data error, however, is quite large: about 0.5 mag, rendering the effect too imprecise to significantly improve the cosmic distance ladder. Another limitation comes from the fact that the measurement of W0 in distant stars is very challenging and requires long observations at large telescopes. Sometimes the emission feature in the core of the K line is affected by interstellar extinction. In these cases an accurate measurement of W0 is not possible.
The Wilson–Bappu effect is also valid for the Mg II k line. However, the Mg II k line is at 2796.34 Å in the ultraviolet, and since the radiation at this wavelength does not reach the Earth's surface it can only be observed with satellites such as the International Ultraviolet Explorer.
In 1977, Stencel published a spectroscopic survey that showed that the wing emission features seen in the broad wings of the K line among higher luminosity late type stars, share a correlation of line width and Mv similar to the Wilson–Bappu effect.
References
Astronomical spectroscopy | Wilson–Bappu effect | [
"Physics",
"Chemistry"
] | 591 | [
"Astronomical spectroscopy",
"Spectroscopy",
"Spectrum (physical sciences)",
"Astrophysics"
] |
9,464,457 | https://en.wikipedia.org/wiki/Acidophil%20cell | In the anterior pituitary, the term "acidophil" is used to describe two different types of cells which stain well with acidic dyes.
somatotrophs, which secrete growth hormone (a peptide hormone)
lactotrophs, which secrete prolactin (a peptide hormone)
When using standard staining techniques, they cannot be distinguished from each other (though they can be distinguished from basophils and chromophobes), and are therefore identified simply as "acidophils".
See also
Eosinophilic
Acidophile (histology)
Basophilic
Chromophobe cell
Melanotroph
Chromophil
Basophil cell
Oxyphil cell
Oxyphil cell (parathyroid)
Pituitary gland
Neuroendocrine cell
References
Histology | Acidophil cell | [
"Chemistry"
] | 175 | [
"Histology",
"Microscopy"
] |
9,464,769 | https://en.wikipedia.org/wiki/Brusselization | In urban planning, Brusselization (UK and US) or Brusselisation (UK variant) (, ) is "the indiscriminate and careless introduction of modern high-rise buildings into gentrified neighbourhoods" and has become a byword for "haphazard urban development and redevelopment."
The notion applies to anywhere whose development follows the pattern of the uncontrolled development of Brussels in the 1960s and 1970s, that resulted from a lack of zoning regulations and the city authorities' laissez-faire approach to city planning.
Brussels
Historical precedent and underpinnings for modernization in Brussels
The 1950s was not the first time that Brussels had been radically altered by major redevelopment. Two prior sweeping changes to the city's urban fabric were the straight-lined central boulevards modeled after Paris, which were created following the covering and diverting of the river Senne, as well as the North–South railway connection, which took around forty years to finish (1911–1952), and which had left swaths of the city center filled with debris and craters for decades. Another precedent was the construction of the Palace of Justice, the largest building erected in the 19th century (1866–1883), for which a section of the Marolles/Marollen neighbourhood was demolished.
The writer André de Vries asserts that the penchant for heavy-handedness can be traced back to the reign of King Leopold II in the late 19th century, and possibly even all the way back to the bombardment of the city by Louis XIV's troops in 1695. "There is barely one building still standing", he says, "from before 1695, with the exception of some churches and the Town Hall". Leopold II sought to give Brussels the image of a grand capital city of an imperial/colonial power. By the middle 20th century, there was a tacit alliance between urban development entrepreneurs and local government, with a modernist agenda and with their sights set firmly on large-scale development projects. The citizens of Brussels were largely left out of the process.
From the 1960s to the 1980s
The original Brusselization was the type of urban regeneration performed by Brussels in connection with the 1958 Brussels World's Fair (Expo 58). In order to prepare the city for Expo 58, buildings were torn down without regard either to their architectural or historical importance, high-capacity square office or apartment buildings were built, boulevards were created and tunnels dug. Among the most controversial was the large-scale demolition of town houses for development of the high-rise business district in the Northern Quarter. All of these changes were designed to quickly increase the number of people working and living in the city and improve transportation.
Further radical changes resulted from Brussels's role as the center of the EU and NATO, beginning with the construction of the European Commission's headquarters in 1959. The introduction of a high-speed rail network in the 1990s was the latest excuse to speculate on multiple rows of properties for modern office or hotel redevelopment, which led to the razing of neighborhood blocks near Brussels-South railway station.
These changes caused outcry amongst the citizens of Brussels and by environmentalist and preservationist organizations. The demolition of Victor Horta's Art Nouveau Maison du Peuple/Volkshuis in 1965 was one focus of such protests, as was the construction of the IBM Tower in 1978. Many architects also protested, and it was the architectural world that coined the name Brusselization for what was happening to Brussels. Architects such as Léon Krier and Maurice Culot formulated an anti-capitalist urban planning theory, as a rejection of the rampant modernism that they saw overtaking Brussels.
The 1990s: From Brusselization to façadism
In the early 1990s, laws were introduced in Brussels restricting the demolition of buildings that were deemed to have architectural or historical significance, and in 1999, the city authorities' urban development plan explicitly declared high-rise buildings to be architecturally incompatible with the existing aesthetics of the city centre. This led to the rise of what was termed façadisme, i.e. the destruction of the whole interior of a historic building while preserving its historic façade, with new buildings erected behind or around it.
These laws were the Town Planning Act 1991, which gave local authorities the powers to refuse demolition requests on the grounds of historical, aesthetic, or cultural significance, and to designate architectural heritage zones; and the Heritage Conservation Act of 1993, which gave the government of the Brussels-Capital Region the power to designate buildings to be protected for historic reasons. However, this system had its deficiencies. Whilst the Capital Region's government could designate historic buildings, it was the nineteen municipal authorities within it that were responsible for demolition permits. Not until the introduction of a system was this internecine conflict resolved.
See also
Californication
Historic preservation
Manhattanization
Venice Charter
Redevelopment of Norrmalm
Vancouverism
References
Cross-reference
Sources used
Further reading
Urban studies and planning terminology
Historic preservation
Architectural history
20th century in Brussels
Urban decay in Europe | Brusselization | [
"Engineering"
] | 1,023 | [
"Architectural history",
"Architecture"
] |
9,465,204 | https://en.wikipedia.org/wiki/Guard%20tour%20patrol%20system | A guard tour patrol system is a system for logging the rounds of employees in a variety of situations such as security guards patrolling property, technicians monitoring climate-controlled environments, and correctional officers checking prisoner living areas. It helps ensure that the employee makes their appointed rounds at the correct intervals and can offer a record for legal or insurance reasons. Such systems have existed for many years using mechanical watchclock-based systems (watchman clocks/guard tour clocks/patrol clocks). Computerized systems were first introduced in Europe in the early 1980s, and in North America in 1986. Modern systems are based on handheld data loggers and RFID sensors.
The system provides a means to record the time when the employee reaches certain points on their tour. Checkpoints or watchstations are commonly placed at the extreme ends of the tour route and at critical points such as vaults, specimen refrigerators, vital equipment, and access points. Some systems are set so that the interval between stations is timed so if the employee fails to reach each point within a set time, other staff are dispatched to ensure the employee's well-being.
An example of a modern set-up might work as follows: the employee carries a portable electronic sensor (PES) or electronic data collector which is activated at each checkpoint. Checkpoints can consist of iButton semiconductors, magnetic strips, proximity microchips such as RFIDs or NFC- or optical barcodes. The data collector stores the serial number of the checkpoint with the date and time. Later, the information is downloaded from the collector into a computer where the checkpoint's serial number will have an assigned location (i.e. North Perimeter Fence, Cell Number 1, etc.). Data collectors can also be programmed to ignore duplicate checkpoint activations that occur sequentially or within a certain time period. Computer software used to compile the data from the collector can print out summaries that pinpoint missed checkpoints or patrols without the operator having to review all the data collected. Because devices can be subject to misuse, some have built-in microwave, g-force, and voltage detection.
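As a rough illustration of how downloaded tour data might be summarized (the checkpoint serial numbers, locations, times and thresholds below are invented), the following sketch drops duplicate activations of the same checkpoint and flags overlong gaps between consecutive stations:

    from datetime import datetime, timedelta

    # Hypothetical download from a data collector: (checkpoint serial, timestamp)
    raw_scans = [
        ("0001A7", datetime(2024, 1, 5, 22, 0)),
        ("0001A7", datetime(2024, 1, 5, 22, 1)),   # duplicate activation, to be ignored
        ("0002B3", datetime(2024, 1, 5, 22, 40)),
    ]
    locations = {"0001A7": "North Perimeter Fence", "0002B3": "Cell Number 1"}
    duplicate_window = timedelta(minutes=5)
    max_interval = timedelta(minutes=30)

    # Drop sequential duplicates of the same checkpoint within the window.
    scans = []
    for serial, stamp in raw_scans:
        if scans and scans[-1][0] == serial and stamp - scans[-1][1] <= duplicate_window:
            continue
        scans.append((serial, stamp))

    # Report intervals that exceed the allowed patrol time between stations.
    for (s1, t1), (s2, t2) in zip(scans, scans[1:]):
        if t2 - t1 > max_interval:
            print(f"Late arrival at {locations[s2]}: {(t2 - t1).seconds // 60} min after {locations[s1]}")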
System composition
It combines readers, tags and software.
Guard patrol reader
The first guard tour systems were touch readers with accompanying software. Upon further development, more working modes for the readers became available, such as RFID and GPS. Readers communicated with the software through USB cables or download stations. For USB connection, the Pogo Pin connection is very popular because the gold-plated contacts are very stable and waterproof.
Newer, light-weight guard touring systems utilize QR codes or barcodes rather than expensive electronic components. A mobile phone app is used to scan (take a photo) of the QR code which creates a time stamp in the system.
Guard patrol tags
The reader reads the tags to record information such as the time and the tag's ID, and then uploads the information to the software to produce the report.
Guard ID tags: these are touch iButtons or RFID tags. A guard ID tag stands in for the name of a guard.
Checkpoint tags: these are touch iButtons or RFID tags. A checkpoint tag marks a checkpoint which guards need to patrol.
Event wallets: these hold many event tags, which are touch iButtons or RFID tags. Each tag represents one kind of event, for example fire, theft or damage; when the reader reads the tag, it records that event.
QR Codes: some systems use QR codes or barcodes instead of electronic tags and the associated readers. The codes are often printed on stickers which can be easily placed nearly anywhere, changed (to ensure accountability) or added to change touring routes.
Guard patrol software
There are three types of guard patrol software. They are desktop, local network client-server, and web-based versions.
The desktop version can only work on one computer.
The local network client server type can work using the local area network.
The web-based version can work everywhere with internet access.
In the analog age, the device used for this purpose was the watchclock. Watchclocks often had a paper or light cardboard disk placed inside for each 24-hour period. The user would carry the clock to each checkpoint, where a numbered key could be found (typically chained in place). The key would be inserted into the clock where it would imprint the disk. At the end of the shift or 24-hour period an authorized person (usually a supervisor) would unlock the watchclock and retrieve the disk.
As guard tour systems developed, devices gained more functions, such as sending data to the software in real time over GPRS, and GPS location and tracking modes.
In the software, the Patrol Department, Patrol Route, Guard, Checkpoint, Event and Patrol Plan are generally set up, depending on the software purchased. The software will then have specific tours set for officers to complete, and can indicate whether an inspection was completed properly, record a specific temperature for an inspection, or hold any other notes necessary. Guard tour software systems seem to be becoming the norm for tracking officers' tours.
New touring solutions rely on cloud-based Software as a Service (SaaS) combined with mobile or fixed on-site devices. These offer the advantages of lower installation and maintenance costs, forgoing the need for hardware, software upgrades, data backups and computer maintenance. On-site systems need all the usual software patches, backups and periodic hardware replacement. In operation, the role of the watchclock system, described above, has largely been replaced by some combination of GPS, RFID/NFC, or QR coded labels. Users prove that they have visited particular locations or performed tasks by scanning these tags or via GPS generated maps. These technologies result in lower costs, while increasing the flexibility of the systems to handle changes or new uses. This is important when routes change, or if a solution is needed on short notice. Tag-based touring systems typically utilize a mobile phone or tablet app to scan the tags and then upload that information along with a time stamp, phone's location information, and optionally other information the guard enters into the app on the phone. These systems provide instant access to tour information as it is uploaded by the application or device carried by the user, rather than requiring the officer to return to an upload station.
References
Automatic identification and data capture
Crime prevention
Recording devices
Security engineering | Guard tour patrol system | [
"Technology",
"Engineering"
] | 1,323 | [
"Systems engineering",
"Security engineering",
"Data",
"Automatic identification and data capture",
"Recording devices"
] |
9,465,208 | https://en.wikipedia.org/wiki/Ammonium%20cerium%28IV%29%20sulfate | Ammonium cerium(IV) sulfate is an inorganic compound with the formula (NH4)4Ce(SO4)4·2H2O. It is an orange-colored solid. It is a strong oxidant; its reduction potential is about +1.44 V. Cerium(IV) sulfate is a related compound.
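Its oxidizing action corresponds to the one-electron reduction of cerium(IV) to cerium(III), Ce4+ + e− → Ce3+, to which the quoted potential refers.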
Structure
A crystallographic study shows that the compound contains the [Ce2(SO4)8]8− anion, in which each cerium atom is 9-coordinate, bound by oxygen atoms belonging to sulfate groups in a distorted tricapped trigonal prism. The compound is thus sometimes formulated as (NH4)8[Ce2(SO4)8]·4H2O.
References
Cerium(IV) compounds
Sulfates
Ammonium compounds
Oxidizing agents | Ammonium cerium(IV) sulfate | [
"Chemistry"
] | 165 | [
"Redox",
"Sulfates",
"Oxidizing agents",
"Salts",
"Ammonium compounds"
] |
9,465,302 | https://en.wikipedia.org/wiki/Thallium%20halides | The thallium halides include monohalides, where thallium has oxidation state +1, trihalides in which thallium generally has oxidation state +3, and some intermediate halides containing thallium with mixed +1 and +3 oxidation states. These salts find use in specialized optical settings, such as focusing elements in research spectrophotometers. Compared to the more common zinc selenide-based optics, materials such as thallium bromoiodide enable transmission at longer wavelengths. In the infrared, this allows for measurements as low as 350 cm−1 (28 μm), whereas zinc selenide is opaque by 21.5 μm, and ZnSe optics are generally only usable to 650 cm−1 (15 μm).
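For reference, wavenumber and wavelength are related by λ (μm) = 10,000 / ν (cm−1), so 350 cm−1 corresponds to about 28.6 μm and 650 cm−1 to about 15.4 μm.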
Monohalides
The monohalides, also known as thallous halides, all contain thallium with oxidation state +1. Parallels can be drawn between the thallium(I) halides and their corresponding silver salts; for example, thallium(I) chloride and bromide are light-sensitive, and thallium(I) fluoride is more soluble in water than the chloride and bromide.
Thallium(I) fluoride
TlF is a white crystalline solid, with a mp of 322 °C. It is readily soluble in water unlike the other Tl(I) halides. The normal room-temperature form has a similar structure to α-PbO which has a distorted rock salt structure with essentially five coordinate thallium, the sixth fluoride ion is at 370 pm. At 62 °C it transforms to a tetragonal structure. This structure is unchanged up to pressure of 40 GPa.
The room temperature structure has been explained in terms of interaction between Tl 6s and the F 2p states producing strongly antibonding Tl-F states. The structure distorts to minimise these unfavourable covalent interactions.
Thallium(I) chloride
TlCl is a light sensitive, white crystalline solid, mp 430 °C. The crystal structure is the same as CsCl.
Thallium(I) bromide
TlBr is a light sensitive, pale yellow crystalline solid, mp 460 °C. The crystal structure is the same as CsCl.
Thallium(I) iodide
At room temperature, TlI is a yellow crystalline solid, mp 442 °C. The crystal structure is a distorted rock salt structure known as the β-TlI structure. At higher temperatures the colour changes to red with a structure the same as CsCl.
Thallium(I) mixed halides
Thallium bromoiodide / thallium bromide iodide () and thallium bromochloride / thallium bromide chloride () are mixed salts of thallium(I) that are used in spectroscopy as an optical material for transmission, refraction, and focusing of infrared radiation. The materials were first grown by R. Koops in the laboratory of Olexander Smakula at the Carl Zeiss Optical Works, Jena in 1941. The red bromoiodide was coded KRS-5 and the colourless bromochloride, KRS-6 and this is how they are commonly known. The KRS prefix is an abbreviation of "Kristalle aus dem Schmelz-fluss", (crystals from the melt). The compositions of KRS-5 and KRS-6 approximate to and . KRS-5 is the most commonly used, its properties of being relatively insoluble in water and non-hygroscopic, make it an alternative to KBr, CsI, and AgCl.
Trihalides
The thallium trihalides, also known as thallic halides, are less stable than their corresponding aluminium, gallium, and indium counterparts and chemically quite distinct. The triiodide does not contain thallium with oxidation state +3 but is a thallium(I) compound and contains the linear ion.
Thallium(III) fluoride
TlF3 is a white solid, mp 550 °C. Its structure is the same as and β-: thallium atom is 9 coordinate (tricapped trigonal prismatic). It can be synthesised by fluoridation of the oxide, Tl2O3, with F2, BrF3, or SF4 at 300 °C.
Thallium(III) chloride
has a distorted Cr(III) chloride structure like and . It can be prepared by treating with gas. Crystallization from water gives the tetrahydrate. Solid decomposes at 40 °C, losing chlorine to give .
Thallium(III) bromide
can be prepared by treating with gas. Crystallization from water gives the tetrahydrate. Solid decomposes at 40 °C, losing bromine to give .
Thallium(I) triiodide
is a black crystalline solid prepared from and in aqueous HI. It does not contain thallium(III), but has the same structure as containing the linear ion.
Mixed-valence halides
As a group, these are not well characterised. They contain both Tl(I) and Tl(III), where the thallium(III) atom is present as complex anions, e.g. .
This is formulated as .
This yellow compound is formulated .
This compound is similar to and is formulated
This pale brown solid is formulated
This compound has been reported as an intermediate in the synthesis of from and . The structure is not known.
Halide complexes
Thallium(I) complexes
Thallium(I) can form complexes of the type and both in solution and when thallium(I) halides are incorporated into alkali metal halides. These doped alkali metal halides have new absorption and emission bands and are used as phosphors in scintillation radiation detectors.
Thallium(III) fluoride complexes
The salts and do not contain discrete tetrahedral and octahedral anions. The structure of is the same as fluorite (CaF2) with and atoms occupying the 8 coordinate sites. Na3TlF6 has the same structure as cryolite, . In this the thallium atoms are octahedrally coordinated. Both compounds are usually considered to be mixed salts of and .
Thallium(III) chloride complexes
Salts of tetrahedral and octahedral are known with various cations.
Salts containing with a square pyramidal structure are known. Some salts that nominally contain actually contain the dimeric anion , long chain anions where is 6 coordinate and the octahedral units are linked by bridging chlorine atoms, or mixed salts of and .
The ion , where thallium atoms are octahedrally coordinated with three bridging chlorine atoms, has been identified in the caesium salt, .
Thallium(III) bromide complexes
Salts of and are known with various cations.
The anion has been characterised in a number of salts and is trigonal bipyramidal. Some other salts that nominally contain are mixed salts containing and .
Thallium(III) iodide complexes
Salts of are known. The anion is stable even though the triiodide is a thallium(I) compound.
References
Further information
Metal halides
Mixed valence compounds
Thallium compounds | Thallium halides | [
"Chemistry"
] | 1,554 | [
"Mixed valence compounds",
"Inorganic compounds",
"Metal halides",
"Salts"
] |
9,465,380 | https://en.wikipedia.org/wiki/Watchclock | A watchclock is a mechanical clock used by security guards as part of their guard tour patrol system which require regular patrols. The most commonly used form was the mechanical clock systems that required a key for manual punching of a number to a strip of paper inside with the time pre-printed on it. Recently, electronic systems have increased in popularity due to their light weight, ease of use, and downloadable logging capabilities.
This increase in the electronic systems led the largest U.S. manufacturer of watchclocks, Detex, to discontinue all of their mechanical watchclocks on December 31, 2011, including the Detex Newman which had been manufactured for 130 years.
Watchclocks often had a paper or light cardboard disk or paper tape placed inside for a set period of time, usually 24 hours for disk models, and 96 hours for tape models. The user would carry the clock to each checkpoint where a numbered key was mounted (typically chained in place, ensuring that the user was present). That key was then inserted into the clock and turned, which would imprint the disk with the key number. The paper disk or tape had the times pre-printed and the key impressed the key number on the corresponding time. After the shift (or a specified time period, up to 96 hours in the case of the Detex Guardsman clocks), an authorized person (usually a supervisor), would unlock the watchclock and retrieve the disk or tape and insert a new one. In the case of Detex brand clocks, each time the cover is opened or closed, a mechanical device would puncture the disk or tape at the current time; if a disk had more than two perforations on it, it proved that the clock had been opened and possibly tampered with, or records forged.
The approximately five pound circular watchclock was enclosed in a black leather pouch attached to a leather strap and carried over the shoulder. Inside buildings, mounted near doors, were watchclock stations consisting of a small metal box with a hinged lid, which contained a numbered key affixed by a twelve-inch chain. The watchman would insert the key into the clock and rotate it, and a numeric stamp would be pressed onto a roll or disk of paper locked inside the clock.
Gallery
References
External links
Detex Corporation official page
Watchclocks at Watchcloks.org
Watchclocks
Automatic identification and data capture
Recording devices | Watchclock | [
"Technology"
] | 491 | [
"Recording devices",
"Data",
"Automatic identification and data capture"
] |
9,465,500 | https://en.wikipedia.org/wiki/Mycotoxicology | Mycotoxicology is the branch of mycology that focuses on analyzing and studying the toxins produced by fungi, known as mycotoxins. In the food industry it is important to adopt measures that keep mycotoxin levels as low as practicable, especially those that are heat-stable. These chemical compounds are the result of secondary metabolism initiated in response to specific developmental or environmental signals. This includes biological stress from the environment, such as lower nutrients or competition for those available. Under this secondary path the fungus produces a wide array of compounds in order to gain some level of advantage, such as incrementing the efficiency of metabolic processes to gain more energy from less food, or attacking other microorganisms and being able to use their remains as a food source.
Mycotoxins are made by fungi and are toxic to vertebrates and other animal groups in low concentrations. Low-molecular-weight fungal metabolites such as ethanol that are toxic only in high concentrations are not considered mycotoxins. Mushroom poisons are fungal metabolites that can cause disease and death in humans and other animals; they are rather arbitrarily excluded from discussions of mycotoxicology. Molds make mycotoxins; mushrooms and other macroscopic fungi make mushroom poisons. The distinction between a mycotoxin and a mushroom poison is based not only on the size of the producing fungus, but also on human intention. Mycotoxin exposure is almost always accidental. In contrast, with the exception of the victims of a few mycologically accomplished murderers, mushroom poisons are usually ingested by amateur mushroom hunters who have collected, cooked, and eaten what was misidentified as a harmless, edible species.
Mycotoxins are hard to define and are also very difficult to classify. Mycotoxins have diverse chemical structures and biosynthetic origins, exert myriad biological effects, and are produced by numerous different fungal species. Classification generally reflects the training of the categorizer and does not adhere to any set system. Mycotoxins are often arranged by physicians depending on what organ they affect. Mycotoxins can be categorized as nephrotoxins, hepatotoxins, immunotoxins, neurotoxins, etc. Generic groups created by cell biologists are teratogens, mutagens, allergens, and carcinogens. Organic chemists have attempted to classify them by their chemical structures (e.g., lactones, coumarins); biochemists according to their biosynthetic origins (polyketides, amino acid-derived, etc.); physicians by the illnesses they cause (e.g., St. Anthony's fire, stachybotryotoxicosis), and mycologists by the fungi that produce them (e.g., Aspergillus toxins, Penicillium toxins). None of these classifications is entirely satisfactory. Aflatoxin, for example, is a hepatotoxic, mutagenic, carcinogenic, difuran-containing, polyketide-derived Aspergillus toxin. Zearalenone is a Fusarium metabolite with potent estrogenic activity; hence, in addition to being called (probably erroneously) a mycotoxin, it also has been labeled a phytoestrogen, a mycoestrogen, and a growth promotant.
Types of mycotoxins
Citrinin
Citrinin was first isolated from Penicillium citrinum prior to World War II; subsequently, it was identified in over a dozen species of Penicillium and several species of Aspergillus (e.g., Aspergillus terreus and Aspergillus niveus), including certain strains of Penicillium camemberti (used to produce cheese) and Aspergillus oryzae (used to produce sake, miso, and soy sauce). More recently, citrinin has also been isolated from Monascus ruber and Monascus purpureus, industrial species used to produce red pigments.
Aflatoxins
The aflatoxins were isolated and characterized after the death of more than 100,000 turkey poults (turkey X disease) was traced to the consumption of a mold-contaminated peanut meal. The four major aflatoxins are called B1, B2, G1, and G2 based on their fluorescence under UV light (blue or green) and relative chromatographic mobility during thin-layer chromatography. Aflatoxin B1 is the most potent natural carcinogen known and is usually the major aflatoxin produced by toxigenic strains. It is also the best studied: in a large percentage of the papers published, the term aflatoxin can be construed to mean aflatoxin B1. However, well over a dozen other aflatoxins (e.g., P1, Q1, B2a, and G2a) have been described, especially as mammalian biotransformation products of the major metabolites. The classic book Aflatoxin: Scientific Background, Control, and Implications, published in 1969, is still a valuable resource for reviewing the history, chemistry, toxicology, and agricultural implications of aflatoxin research.
Fumonisins
Fumonisins were first described and characterized in 1988. The most abundantly produced member of the family is fumonisin B1. They are thought to be synthesized by condensation of the amino acid alanine into an acetate-derived precursor. Fumonisins are produced by a number of Fusarium species, notably Fusarium verticillioides (formerly Fusarium moniliforme = Gibberella fujikuroi), Fusarium proliferatum, and Fusarium nygamai, as well as Alternaria alternata f. sp. lycopersici. These fungi are taxonomically challenging, with a complex and rapidly changing nomenclature which has perplexed many nonmycologists (and some mycologists, too). The major species of economic importance is Fusarium verticillioides, which grows as a corn endophyte in both vegetative and reproductive tissues, often without causing disease symptoms in the plant. However, when weather conditions, insect damage, and the appropriate fungal and plant genotype are present, it can cause seedling blight, stalk rot, and ear rot. Fusarium verticillioides is present in virtually all corn samples. Most strains do not produce the toxin, so the presence of the fungus does not necessarily mean that fumonisin is also present. Although it is phytotoxic, fumonisin B1 is not required for plant pathogenesis.
Ochratoxins
Ochratoxin A was discovered as a metabolite of Aspergillus ochraceus in 1965 during a large screen of fungal metabolites that was designed specifically to identify new mycotoxins. Shortly thereafter, it was isolated from a commercial corn sample in the United States and recognized as a potent nephrotoxin. Members of the ochratoxin family have been found as metabolites of many different species of Aspergillus, including Aspergillus alliaceus, Aspergillus auricomus, Aspergillus carbonarius, Aspergillus glaucus, Aspergillus melleus, and Aspergillus niger. Because Aspergillus niger is used widely in the production of enzymes and citric acid for human consumption, it is important to ensure that industrial strains are nonproducers. Although some early reports implicated several Penicillium species, it is now thought that Penicillium verrucosum, a common contaminant of barley, is the only confirmed ochratoxin producer in this genus. Nevertheless, many mycotoxin reviews reiterate erroneous species lists.
Patulin
Patulin is produced by many different molds but was first isolated as an antimicrobial active principle during the 1940s from Penicillium patulum (later called Penicillium urticae, now Penicillium griseofulvum). The same metabolite was also isolated from other species and given the names clavacin, claviformin, expansin, mycoin C, and penicidin. A number of early studies were directed towards harnessing its antibiotic activity. For example, it was tested as both a nose and throat spray for treating the common cold and as an ointment for treating fungal skin infections. However, during the 1950s and 1960s, it became apparent that, in addition to its antibacterial, antiviral, and antiprotozoal activity, patulin was toxic to both plants and animals, precluding its clinical use as an antibiotic. During the 1960s, patulin was reclassified as a mycotoxin.
Trichothecenes
The trichothecenes constitute a family of more than sixty sesquiterpenoid metabolites produced by a number of fungal genera, including Fusarium, Myrothecium, Phomopsis, Stachybotrys, Trichoderma, Trichothecium, and others. The term trichothecene is derived from trichothecin, which was one of the first members of the family identified. All trichothecenes contain a common 12,13-epoxytrichothene skeleton and an olefinic bond with various side chain substitutions. They are commonly found as food and feed contaminants, and consumption of these mycotoxins can result in alimentary hemorrhage and vomiting; direct contact causes dermatitis.
Zearalenone
Zearalenone, a secondary metabolite from Fusarium graminearum (teleomorph Gibberella zeae), was given the trivial name zearalenone as a combination of G. zeae, resorcylic acid lactone, -ene (for the presence of the C-1′ to C-2 double bond), and -one, for the C-6′ ketone. Almost simultaneously, a second group isolated, crystallized, and studied the metabolic properties of the same compound and named it F-2. Much of the early literature uses zearalenone and F-2 as synonyms; the family of analogues are known as zearalenones and F-2 toxins, respectively. Perhaps because the original work on these fungal macrolides coincided with the discovery of aflatoxins, chapters on zearalenone have become a regular fixture in monographs on mycotoxins (see, for example, Mirocha and Christensen and Betina). Nevertheless, the word toxin is almost certainly a misnomer because zearalenone, while biologically potent, is hardly toxic; rather, it sufficiently resembles 17β-estradiol, the principal hormone produced by the human ovary, to allow it to bind to estrogen receptors in mammalian target cells. Zearalenone is better classified as a nonsteroidal estrogen or mycoestrogen. Sometimes it is called a phytoestrogen. For the structure-activity relationships of zearalenone and its analogues, see Hurd and Shier.
References
See also
Mycology
Foodborne illness
Poisonous fungi
Branches of mycology | Mycotoxicology | [
"Biology",
"Environmental_science"
] | 2,406 | [
"Branches of mycology",
"Toxicology",
"Poisonous fungi"
] |
9,465,520 | https://en.wikipedia.org/wiki/7-Dehydrositosterol | 7-Dehydrositosterol is a sterol which serves as a precursor for sitocalciferol (vitamin D5).
External links
Sterols
Vitamin D | 7-Dehydrositosterol | [
"Chemistry",
"Biology"
] | 39 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
9,465,910 | https://en.wikipedia.org/wiki/Zero%20one%20infinity%20rule | The Zero one infinity (ZOI) rule is a rule of thumb in software design proposed by early computing pioneer Willem van der Poel. It argues that arbitrary limits on the number of instances of a particular type of data or structure should not be allowed. Instead, an entity should either be forbidden entirely, only one should be allowed, or any number of them should be allowed. Although various factors outside that particular software could limit this number in practice, it should not be the software itself that puts a hard limit on the number of instances of the entity.
Examples of this rule may be found in the structure of many file systems' directories (also known as folders):
0 – The topmost directory has zero parent directories; that is, there is no directory that contains the topmost directory.
1 – Each subdirectory has exactly one parent directory (not including shortcuts to the directory's location; while such files may have similar icons to the icons of the destination directories, they are not directories at all).
Infinity – Each directory, whether the topmost directory or any of its subdirectories, according to the file system's rules, may contain any number of files or subdirectories. Practical limits to this number are caused by other factors, such as space available on storage media and how well the computer's operating system is maintained.
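A minimal sketch in Python of the same idea (the class and names are invented for illustration): the parent reference allows zero or one parent, the child list allows any number of entries, and the commented-out check shows the kind of arbitrary limit the rule forbids.

    class Directory:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent      # zero parents (the topmost directory) or exactly one
            self.children = []        # any number of entries; practical limits come from elsewhere

        def add(self, entry):
            # A design violating the rule would impose an arbitrary cap, e.g.:
            # if len(self.children) >= 16:
            #     raise ValueError("too many entries")
            self.children.append(entry)
            return entry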
Authorship
Van der Poel confirmed that he was the originator of the rule, but Bruce MacLennan has also claimed authorship (in the form "The only reasonable numbers are zero, one and infinity."), writing about the claim in 2015.
See also
Magic number (programming)#Unnamed numerical constants
References
Software engineering folklore
Programming principles | Zero one infinity rule | [
"Engineering"
] | 354 | [
"Software engineering",
"Software engineering folklore"
] |
9,466,823 | https://en.wikipedia.org/wiki/Wildlife%20of%20India | India is one of the most biodiverse regions and is home to a large variety of wildlife. It is one of the 17 megadiverse countries and includes three of the world's 36 biodiversity hotspots – the Western Ghats, the Eastern Himalayas, and the Indo-Burma hotspot.
About 24.6% of the total land area is covered by forests. It has various ecosystems ranging from the high altitude Himalayas, tropical evergreen forests along the Western Ghats, desert in the north-west, coastal plains and mangroves along the peninsular region. India lies within the Indomalayan realm and is home to about 7.6% of mammal, 14.7% of amphibian, 6% of bird, 6.2% of reptilian, and 6.2% of flowering plant species.
Human encroachment, deforestation and poaching are significant challenges that threaten the existence of certain fauna and flora. Government of India established a system of national parks and protected areas in 1935, which have been subsequently expanded to nearly 1022 protected areas by 2023. India has enacted the Wildlife Protection Act of 1972 and special projects such as Project Tiger, Project Elephant and Project Dolphin for protection of critical species.
Fauna
India has an estimated 92,873 species of fauna, roughly about 7.5% of the species available worldwide. Insects form the major category with 63423 recorded species. India is home to 423 mammals, 1233 birds, 526 reptiles, 342 amphibians, 3022 fish apart from other species which form 7.6% of mammal, 14.7% of amphibian, 6% of bird, 6.2% of reptilian species worldwide. Among Indian species, only 12.6% of mammals and 4.5% of birds are endemic, contrasting with 45.8% of reptiles and 55.8% of amphibians.
The Indian subcontinent was formerly an island landmass (Insular India) that split away from Gondwana around 125 million years ago, during the Early Cretaceous. Late Cretaceous Insular Indian faunas were very similar to those found on Madagascar due to their shared connection until around 90 million years ago. The Cretaceous-Paleogene extinction event around 66 million years ago caused the extinction of many animals native to Insular India, such as its titanosaurian and abelisaurid dinosaurs. During the early Cenozoic era, around 55-50 million years ago, the Indian subcontinent collided with Laurasia, allowing animals from Asia to migrate into the Indian subcontinent. Some elements of India's modern fauna, such as the frog family Nasikabatrachidae and the caecillian family Chikilidae, are suggested to have been present in India prior to its collision with Asia.
Four species of megafauna (large animals) native to India became extinct during the Late Pleistocene, around 10,000-50,000 years ago, as part of a global wave of megafauna extinctions; these include the very large elephant Palaeoloxodon namadicus (possibly the largest land mammal to have ever lived), the elephant relative Stegodon, the hippopotamus Hexaprotodon, and the equine Equus namadicus. These extinctions are thought to have occurred after the arrival of modern humans on the Indian subcontinent. Ostriches were also formerly native to India, but also became extinct during the Late Pleistocene.
India is home to several well-known large animals, including the Indian elephant, Indian rhinoceros, and Gaur. India is the only country where the big cats tiger and lion exist in the wild. Members of the cat family include Bengal tiger, Asiatic lion, Indian leopard, snow leopard, and clouded leopard. Representative and endemic species include blackbuck, nilgai, bharal, barasingha, Nilgiri tahr, and Nilgiri langur.
There are about 31 species of aquatic mammals including dolphins, whales, porpoises, and dugong. Reptiles include the gharial, the only living members of Gavialis and saltwater crocodiles. Birds include peafowl, pheasants, geese, ducks, mynas, parakeets, pigeons, cranes, hornbills, and sunbirds. Endemic bird species include great Indian hornbill, great Indian bustard, nicobar pigeon, ruddy shelduck, Himalayan monal, and Himalayan quail.
Flora
About 24.6% of the total land area is covered by forests. It has various ecoregions ranging from the high altitude Himalayas, tropical evergreen forests along the Western Ghats, desert in the north-west, coastal plains and mangroves along the peninsular region. India's climate has become progressively drier since the late Miocene, reducing forest cover in northern India in favour of grassland.
There are about 29,015 species of plants including 17,926 species of flowering plants. This is about 9.1% of the total plant species identified worldwide and 6,842 species are endemic to India. Other plant species include 7,244 algae, 2,504 bryophytes, 1,267 pteridophytes and 74 gymnosperms. One-third of the fungal diversity of the world exists in India with over 27,000 recorded species, making it the largest biotic community after insects.
Conservation
India harbors 172 (2.9%) IUCN-designated threatened species. These include 39 species of mammals, 72 species of birds, 17 species of reptiles, three species of amphibians, two species of fish, and a number of insects including butterflies, moths, and beetles.
Human encroachment, deforestation and poaching are significant challenges that threaten the existence of certain fauna and flora. Government of India established a system of national parks and protected areas in 1935, which have been subsequently expanded to nearly 1022 protected areas by 2023. Various laws have been enacted such as Indian Forest Act, 1927 and Wildlife Protection Act of 1972 and special projects such as Project Tiger, Project Elephant and Project Dolphin have been initiated for the protection of forests, wildlife and critical species.
As of 2023, there are 1022 protected areas including 106 national parks, 573 wildlife sanctuaries, 220 conservation reserves and 123 community reserves. In addition, there are 55 tiger reserves, 18 biosphere reserves and 32 elephant reserves.
National symbols
See also
List of birds of India
List of mammals of India
List of reptiles of South Asia
Wildlife population of India
References
Further reading
Saravanan, Velayutham. Environmental History of Modern India: Land, Population, Technology and Development (Bloomsbury Publishing India, 2022) online review
External links
Official website of: Government of India, Ministry of Environment & Forests
"Legislations on Environment, Forests, and Wildlife" from the Official website of: Government of India, Ministry of Environment & Forests
"India's Forest Conservation Legislation: Acts, Rules, Guidelines", from the official website of the Government of India, Ministry of Environment & Forests
Wildlife Legislations, including - "The Indian Wildlife (Protection) Act" from the Official website of: Government of India, Ministry of Environment & Forests
India
Biota of India | Wildlife of India | [
"Biology"
] | 1,483 | [
"Biota by country",
"Biota of India",
"Wildlife by country"
] |
9,467,104 | https://en.wikipedia.org/wiki/Gromov%20Flight%20Research%20Institute | The Gromov Flight Research Institute or GFRI for short (, ) is an important Russian State Research Centre which operates an aircraft test base located in Zhukovsky, 40 km south-east of Moscow. The airfield is also known as Ramenskoye air base.
The airfield was used as the backup landing site for the Shuttle Buran test program and also as a test base for Buran's aerodynamic prototype, the BTS-002.
GFRI periodically hosts the MAKS International Air Show (Aviasalon).
At present, GFRI also hosts Zhukovsky International Airport.
History
Foundation
The Flight Research Institute was founded on March 8, 1941, in accordance with a decree of Sovnarkom and the Central Committee of the Communist Party of the Soviet Union. Mikhail Gromov, a test pilot and Hero of the Soviet Union, became its first chief. From the very beginning the institute participated in the development and testing of aircraft and airborne systems and conducted flight research to pave the way for further scientific activities.
The institute's first years coincided with the war. During the war its experts developed recommendations for eliminating defects in the flight qualities and war-fighting capabilities of aircraft, flight-tested aircraft prototypes, and studied foreign aircraft and equipment, both purchased and captured.
Cold War
Zhukovsky airfield was the Soviet Union's equivalent of the US Edwards AFB, and as such many types of aircraft underwent evaluation there.
Several Western aircraft were tested or analyzed here:
Wrecks from F-111s shot down over North Vietnam were sent to Zhukovskiy to be analyzed.
Pieces of US planes shot down in North Vietnam and their captured electronic countermeasures equipment were taken for evaluation (F-111, A-6, A-7, B-52, F-4, F-105, etc.).
Captured VNAF helicopters are believed to have been tested (UH-1H, CH-47).
Perestroika times
In 2001 GFRI had a staff of about 5,000 and was headed by Vyacheslav M. Bakaev. It operated about 70 research flying testbeds, complemented by 20 multipurpose test stands and simulators, and also supported the Fedotov Test Pilot School. A centrifuge newly built by AMST of Austria was then among the most advanced in the world, with a gondola featuring a 3D visual projection system, and formed the core of GFRI's aerospace medical research complex. According to Vilgelm I. Vid, GFRI deputy chief for civil aviation, the institute pioneered a civil aircraft upset recovery system intended to reduce the number of CFIT accidents originating in aeroplane upsets. However, Bakaev said GFRI was passing through economic difficulties, as were most Russian aeronautical facilities: the institute had been downsized by about 30% since 1996, and most of the test aircraft were underutilized.
Due to financial problems in the 1990s (the so-called perestroika times), tourist fighter flights in formerly secret jets became available, mainly to wealthy Western tourists. The security check was comparable to that for a Russian visa. On offer were the Aero L-39 Albatros jet trainer, the Soviet-built Mikoyan-Gurevich MiG-21 and MiG-23, the MiG-25 for stratospheric "Edge of Space" flights, the MiG-29 Fulcrum and even the Sukhoi Su-27 Flanker. Such flights were stopped in June 2006.
An airline was established by the institute in 1995 as a wholly owned commercial subsidiary and named Gromov Air (later Moskovia Airlines).
Current research and development activities
Aerospace flight research and testing in low and high speed aerodynamics, flight dynamics, propulsion and avionics technologies (GLL-8 (Gll-VK) Igla).
Testing and certification services for prototype aircraft and on-board equipment.
Research in aircraft flight safety, reliability, maintainability and other operating capabilities.
Fedotov Test Pilot School for training test pilots, navigators, and on-board test engineers.
Development, production and operation of a variety of flying testbeds including those based on the Tu-154, Su-30, Il-76, Il-103 aeroplanes, Mi-8 helicopters, etc.
Development and production of flight testing instrumentation (low and high frequency data collection solid state storage systems, vibration parameters measuring devices, instant temperature sensors, miniaturized flat piezoresistance beat and pressure distribution sensors, hot-wire airflow velocity vector transducers and aerodynamic friction stress measurement products, etc.).
Testbed aeroplanes
Notable employees
Heads of the institute
Mikhail Gromov (March – August 1941)
(1941–1942 and 1943–1947)
Vasily Molokov (1942–1943)
(1947–1951)
Alexandr Kobzarev (1951–1954)
(1954–1966)
(1966–1981)
Arseny Mironov (1981–1985)
(1985–1995)
Felix Zolotariev (1995–1998)
Viacheslav Bakaev (1998–2004)
Yury Klishin (2005–2006)
Vadim Shalygin (2006–2007)
Evgeny Gorbunov (2007–2009)
Pavel Vlasov (2010–2017)
(since 2017)
Scientists, test pilots, navigators, and engineers
Sergei Anokhin
Yuri Garnaev
Anatoly Kvochur
Viktor Korostiev
Anatoly Levchenko
Leonid Lobas
Guy Severin
Rimantas Stankevičius
Amet-khan Sultan
Ural Sultanov
Max Taitz
Igor Volk
See also
Armstrong Flight Research Center – the USA counterpart of the Gromov Flight Research Institute
List of aerospace flight test centres
References
External links
GFRI airfield at Google Maps
Historical video to celebrate 80 years of Gromov Flight Research Institute (in Russian)
Airports in Moscow Oblast
Airports built in the Soviet Union
United Aircraft Corporation
Companies based in Moscow Oblast
Buran program
Research institutes in Russia
Research institutes in the Soviet Union
Aviation in the Soviet Union
Aerospace research institutes
Aviation research institutes
Aerospace engineering organizations
Golden Idea national award winners | Gromov Flight Research Institute | [
"Engineering"
] | 1,270 | [
"Aerospace engineering",
"Aerospace engineering organizations",
"Aeronautics organizations"
] |
9,467,349 | https://en.wikipedia.org/wiki/Spatial%20statistics | Spatial statistics is a field of applied statistics dealing with spatial data.
It involves stochastic processes (random fields, point processes), sampling, smoothing and interpolation, regional (areal unit) and lattice (gridded) data, point patterns, as well as image analysis and stereology.
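As a minimal sketch of one of the listed techniques, the snippet below performs inverse-distance-weighted interpolation of point observations onto an unobserved location; the function, the weighting exponent and the sample values are illustrative assumptions rather than part of any standard library.

```python
import math

def idw_interpolate(points, target, power=2.0):
    """Inverse-distance-weighted estimate at `target` from (x, y, value) samples."""
    num, den = 0.0, 0.0
    for x, y, value in points:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return value          # target coincides with a sample point
        w = 1.0 / d ** power      # nearer samples receive larger weights
        num += w * value
        den += w
    return num / den

# Illustrative rainfall readings (x, y, mm) and an unsampled location.
samples = [(0, 0, 12.0), (3, 1, 20.0), (1, 4, 16.0)]
print(idw_interpolate(samples, (1.5, 1.5)))
```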
See also
Geostatistics
Modifiable areal unit problem
Spatial analysis
Spatial econometrics
Statistical geography
Spatial epidemiology
Spatial network
Statistical shape analysis
References
Applied statistics
Statistics | Spatial statistics | [
"Physics",
"Mathematics"
] | 99 | [
"Applied mathematics",
"Spatial analysis",
"Space",
"Spacetime",
"Applied statistics"
] |
9,467,708 | https://en.wikipedia.org/wiki/Room%20box | A room box is a display box used for three-dimensional miniature scale environments, or scale models. Although the name would suggest room boxes generally only represent typical rooms such as those found in houses or other buildings (bedrooms, kitchens, offices, etc.), room boxes are used for all sorts of environments – exterior views as well as interior ones, realistic ones as well as fantastical ones. While some miniaturists concentrate their efforts specifically on room boxes, many use them to take a break from larger projects, such as dollhouses or miniature villages, to create a smaller environment on a different theme. A room box can be tailored to one’s interests or mirror an important step in life - for example, a bakery or restaurant scene might be created by or for a baker or cook, and a wedding dress storefront might be created for a bride to be or as a reminiscence of one's wedding. Making a room box is often a first step to learning new techniques in miniature making; such projects are popular at miniaturists' events where attendees have only 1–2 days to make and finish a project. Once techniques are perfected in these smaller settings, craftspersons and hobbyists often reapply them to larger projects.
Room boxes are a cost- and time-effective way to make miniature settings without attempting larger setups such as a dollhouse or train set. Commercially bought room boxes tend to be made of wood, pressed wood products or plywood, with the top and front window made of removable clear acrylic that lets in light and enables access and viewing from two perspectives. Dimensions usually meet standard dollhouse proportions ("1:12 scale" in dollhouse speak means that 1" in the dollhouse world represents 1' in the real world), but anyone can make a room box from a leftover shoebox, orange crate, etc. and adapt an idea to suit the box's scale. Since any material can be used, whether leftover or new, people of all economic classes express themselves through this craft.
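As a quick illustration of the scale arithmetic mentioned above, the hypothetical helper below converts a real-world length to its miniature equivalent at 1:12 or another scale; the function name and example values are invented for this sketch.

```python
def to_miniature(real_inches, scale=12):
    """Return the miniature length, in inches, of a real-world length at 1:scale."""
    return real_inches / scale

# A 7-foot (84 in) real sofa becomes a 7-inch miniature at 1:12.
print(to_miniature(84))        # 7.0
# The same sofa at 1:24 would be 3.5 inches.
print(to_miniature(84, 24))    # 3.5
```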
An elaborate example of 1:12 scale miniature rooms is the set of 68 Thorne Rooms, each with a different theme. They were designed by Narcissa Niblack Thorne, and their furniture was created by craftsmen in the 1930s and 1940s. They are now held at the Art Institute of Chicago and the Phoenix Art Museum.
As evidenced by the recent increase in craft book and magazine publishing on different types of miniatures, interest in making room boxes for miniature settings has grown steadily since the 1990s. Room boxes have even found a place during prime-time television: the winter 2007 season of CSI: Crime Scene Investigation included a storyline recurring throughout the season in which a murderer known as the Miniature Killer leaves clues for investigators in the form of intricately made 3-D room boxes reproducing the scenes of her crimes in scale miniature.
See also
Model
Scale model
Dollhouse
Model building
References
Scale modeling
Dollhouses | Room box | [
"Physics"
] | 600 | [
"Scale modeling"
] |
9,467,758 | https://en.wikipedia.org/wiki/Cryptococcus%20gattii | Cryptococcus gattii, formerly known as Cryptococcus neoformans var. gattii, is an encapsulated yeast found primarily in tropical and subtropical climates. Its teleomorph is Filobasidiella bacillispora, a filamentous fungus belonging to the class Tremellomycetes.
C. gattii is one of two organisms causing the infectious disease cryptococcosis (along with C. neoformans). Clinical manifestations of C. gattii infection include pulmonary cryptococcosis (lung infection), basal meningitis, and cerebral cryptococcomas. Occasionally, the fungus is associated with skin, soft tissue, lymph node, bone, and joint infections. In recent years, it has appeared in British Columbia, Canada, and the Pacific Northwest. It has been suggested that tsunamis, such as the one following the 1964 Alaska earthquake, might have been responsible for carrying the fungus to North America and its subsequent spread there. From 1999 through early 2008, 216 people in British Columbia were infected with C. gattii, and eight died from complications related to it. The fungus also infects animals, such as dogs, koalas, and dolphins. In 2007, the fungus appeared for the first time in the United States, in Whatcom County, Washington, and by April 2010 it had spread to Oregon. The most recently identified strain, designated VGIIc, is particularly virulent, having proved fatal in 19 of 218 known cases.
Nomenclature
Cryptococcus gattii has recently been divided into five species. These are C. gattii, C. bacillisporus, C. deuterogattii, C. tetragattii, and C. decagattii.
Environmental microbiology
C. gattii occupies an environmental niche in decaying hollows of trees native to tropical as well as subtropical and temperate regions. It may then contaminate nearby soil or persist in wood products.
Distribution
Soil debris associated with certain tree species has been found frequently to contain C. gattii VGIII MATα and MATa, and less commonly VGI MATα, in Southern California. These isolates were fertile, were found to be indistinguishable from the human isolates by genome sequence, and were virulent in in vitro and animal tests. Isolates were found associated with Canary Island pine (Pinus canariensis), American sweetgum (Liquidambar styraciflua), and Pohutukawa tree (Metrosideros excelsa).
One study concluded "[j]ust as people who travel to South America are told to be careful about drinking the water, people who visit other areas like California, the Pacific Northwest, and Oregon need to be aware that they are at risk for developing a fungal infection, especially if their immune system is compromised."
Epidemiology
C. gattii infections were initially thought to be restricted to tropical and subtropical regions. C. gattii is the predominant cause of cryptococcosis in sub-Saharan Africa. The highest incidences of C. gattii infections occur in Papua New Guinea and Northern Australia. However cases have been reported in various other regions including Brazil, India and the Pacific Northwest of North America.
In the United States, C. gattii serotype B, subtype VGIIa, is largely responsible for clinical cases. The VGIIa subtype was responsible for the outbreaks in Canada; it then appeared in the U.S. Pacific Northwest.
According to a CDC summary, from 2004 to 2010, 60 cases were identified in the U.S.: 43 in Oregon, 15 in Washington, and one each in Idaho and California. Slightly more than half of these cases were immunocompromised; 92% of all isolates were of the VGIIa subtype. In 2007, the first case in North Carolina was reported, subtype VGI, which is identical to the isolates found in Australia and California.
The multiple clonal clusters in the Pacific Northwest likely arose independently of each other as a result of sexual reproduction occurring within the highly sexual VGII population. VGII C. gattii have probably undergone either bisexual or unisexual reproduction in multiple different locales, thus giving rise to novel virulent phenotypes.
Pathology
C. gattii is notable for causing cryptococcosis even in immunocompetent, otherwise healthy individuals. Unlike Cryptococcus neoformans, C. gattii is not particularly associated with human immunodeficiency virus infection or other forms of immunosuppression. Increased virulence may be related to its capability to proliferate rapidly within lymphocytes.
C. gattii infection is more likely to be limited to the lung (rather than disseminating to the CNS). When CNS infection does occur, it may involve more localised lesions (cryptococcomas) rather than the diffuse infection characteristic of C. neoformans.
Diagnosis
Culture of sputum, bronchoalveolar lavage, lung biopsy, cerebrospinal fluid or brain biopsy specimens on selective agar allows differentiation between the five members of the C. gattii species complex and the two members of the C. neoformans species complex.
Treatment
Medical treatment consists of prolonged intravenous therapy (for 6–8 weeks or longer) with the antifungal drug amphotericin B, either in its conventional or lipid formulation. The addition of oral or intravenous flucytosine improves response rates. Oral fluconazole is then administered for six months or more.
Antifungals alone are often insufficient to cure C. gattii infections, and surgery to resect infected lung (lobectomy) or brain is often required. Ventricular shunts and Ommaya reservoirs are sometimes employed in the treatment of central nervous system infection.
People who have C. gattii infection need to take prescription antifungal medication for at least 6 months; usually the type of treatment depends on the severity of the infection and the parts of the body that are affected.
For people who have asymptomatic infections or mild-to-moderate pulmonary infections, the treatment is usually fluconazole.
For people who have severe lung infections, or infections in the central nervous system (brain and spinal cord), the treatment is amphotericin B in combination with flucytosine.
See also
References
Further reading
Fungal pathogens of humans
Animal fungal diseases
Fungal plant pathogens and diseases
Fungi and humans
Fungi of Africa
Fungi of Asia
Tremellomycetes
Yeasts
Fungus species | Cryptococcus gattii | [
"Biology"
] | 1,389 | [
"Fungi",
"Fungus species",
"Yeasts",
"Fungi and humans",
"Humans and other species"
] |
9,468,040 | https://en.wikipedia.org/wiki/Scotobiology | Scotobiology is the study of biology as directly and specifically affected by darkness, as opposed to photobiology, which describes the biological effects of light.
Overview
The science of scotobiology gathers together under a single descriptive heading a wide range of approaches to the study of the biology of darkness. This includes work on the effects of darkness on the behavior and metabolism of animals, plants, and microbes. Some of this work has been going on for over a century, and lays the foundation for understanding the importance of dark night skies, not only for humans but for all biological species.
The great majority of biological systems have evolved in a world of alternating day and night and have become irrevocably adapted to and dependent on the daily and seasonally changing patterns of light and darkness. Light is essential for many biological activities such as sight and photosynthesis. These are the focus of the science of photobiology. But the presence of uninterrupted periods of darkness, as well as their alternation with light, is just as important to biological behaviour. Scotobiology studies the positive responses of biological systems to the presence of darkness, and not merely the negative effects caused by the absence of light.
Effects of darkness
Many of the biological and behavioural activities of plants, animals (including birds and amphibians), insects, and microorganisms are either adversely affected by light pollution at night or can only function effectively either during or as the consequence of nightly darkness. Such activities include foraging, breeding and social behavior in higher animals, amphibians, and insects, which are all affected in various ways if light pollution occurs in their environment. These are not merely photobiological phenomena; light pollution acts by interrupting critical dark-requiring processes.
But perhaps the most important scotobiological phenomena relate to the regular periodic alternation of light and darkness. These include breeding behavior in a range of animals, the control of flowering and the induction of winter dormancy in many plants, and the operational control of the human immune system. In many of these biological processes the critical point is the length of the dark period rather than that of the light. For example, "short-day" and "long-day" plants are, in fact, "long-night" and "short-night" respectively. That is to say, plants do not measure the length of the light period, but of the dark period. One consequence of artificial light pollution is that even brief periods of relatively bright light during the night may prevent plants or animals (including humans) from measuring the length of the dark period, and therefore from behaving in a normal or required manner. This is a critical aspect of scotobiology, and one of the major areas in the study of the responses of biological systems to darkness.
In discussing scotobiology, it is important to remember that darkness (the absence of light) is seldom absolute. An important aspect of any scotobiological phenomenon is the level and quality (wavelength) of light that is below the threshold of detection for that phenomenon and in any specific organism. This important variable in scotobiological studies is not always properly noted or examined. There are substantial levels of natural light pollution at night, of which moonlight is usually the strongest. For example, plants that rely on night length to program their behaviour have the capacity to ignore full moonlight during an otherwise dark night. If this ability had not evolved, plants would not be able to respond to changing night-length for such behavioural programs as the initiation of flowering and the onset of dormancy. On the other hand, some animal behavioural patterns are strongly responsive to moonlight. It is thus most important in any scotobiological study to determine the threshold level of light that may be required to interfere with or negate the normal pattern of dark-night activity.
Etymology
In 2003, at a symposium on the Ecology of the Night held in Muskoka, Canada, discussion centered around the many effects of night-time light pollution on the biology of a wide range of organisms, but it went far beyond this in describing darkness as a biological imperative for the functioning of biological systems. Presentations focused on the absolute requirement of darkness for many aspects of normal behaviour and metabolism of many organisms and for the normal progression of their life cycles. Because there was no suitable term to describe the Symposium's main focus, the term scotobiology was introduced. The word is derived from the Greek scotos, σκότος, "dark," and relates to photobiology, which describes the biological effects of light (φῶς, phos; root: φωτ-, phot-). The term scotobiology appears not to have been used previously, although related terms such as skototropism and scotophyle have appeared in the literature.
See also
Dark-sky movement
Dark-sky preserve
Ecological light pollution
Light effects on circadian rhythm
Photoperiodism
Sky brightness
References
Branches of biology | Scotobiology | [
"Biology"
] | 1,011 | [
"nan"
] |
9,469,328 | https://en.wikipedia.org/wiki/Abhyankar%E2%80%93Moh%20theorem | In mathematics, the Abhyankar–Moh theorem states that if L is a complex line in the complex affine plane C², then every embedding of L into C² extends to an automorphism of the plane. It is named after Shreeram Shankar Abhyankar and Tzuong-Tsieng Moh, who published it in 1975. More generally, the same theorem applies to lines and planes over any algebraically closed field of characteristic zero, and to certain well-behaved subsets of higher-dimensional complex affine spaces.
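For readers who prefer the algebraic form, an equivalent "embedded line" formulation is often quoted; the sketch below states it in notation chosen here (a paraphrase of the standard statement, not a quotation from the 1975 paper).

```latex
% Let k be an algebraically closed field of characteristic 0 and let
% f, g in k[t] have degrees n >= 1 and m >= 1. If the map
% t -> (f(t), g(t)) embeds the affine line in the affine plane, i.e.
% k[f(t), g(t)] = k[t], then one of the degrees divides the other:
\[
  k[f(t),\, g(t)] = k[t] \;\Longrightarrow\; n \mid m \ \text{or} \ m \mid n .
\]
```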
References
Theorems in algebraic geometry | Abhyankar–Moh theorem | [
"Mathematics"
] | 118 | [
"Theorems in algebraic geometry",
"Theorems in geometry"
] |
9,470,070 | https://en.wikipedia.org/wiki/Henrik%20Svensmark | Henrik Svensmark (born 1958) is a Danish physicist and professor in the Division of Solar System Physics at the Danish National Space Institute (DTU Space) in Copenhagen. He is known for his work on the hypothesis that fewer cosmic rays are an indirect cause of global warming via cloud formation.
Early life and education
Henrik Svensmark obtained a Master of Science in Engineering (Cand. Polyt) in 1985 and a Ph.D. in 1987 from the Physics Laboratory I at the Technical University of Denmark.
Career
Henrik Svensmark is director of the Center for Sun-Climate Research at the Danish Space Research Institute (DSRI), a part of the Danish National Space Center. He previously headed the sun-climate group at DSRI. He held postdoctoral positions in physics at three other organizations: University of California, Berkeley, Nordic Institute for Theoretical Physics, and the Niels Bohr Institute.
In 1997, Svensmark and Eigil Friis-Christensen popularised a theory that linked galactic cosmic rays and global climate change mediated primarily by variations in the intensity of the solar wind, which they have termed cosmoclimatology. This theory had earlier been reviewed by Dickinson.
One of the small-scale processes related to this link was studied in a laboratory experiment performed at the Danish National Space Center (paper published in the Proceedings of the Royal Society A, February 8, 2007).
Svensmark's conclusions from his research downplay the significance of the effects of man-made increases in atmospheric CO2 on recent and historical global warming, with him arguing that while the climate change role of greenhouse gases is considerable, solar variations play a larger role.
Cosmoclimatology theory of climate change
Svensmark detailed his theory of cosmoclimatology in a paper published in 2007.
The Center for Sun-Climate Research at the Danish National Space Institute "investigates the connection between solar activity and climatic changes on Earth".
Its homepage lists several publications and earlier works related to cosmoclimatology.
Svensmark and Nigel Calder published a book The Chilling Stars: A New Theory of Climate Change (2007) describing the Cosmoclimatology theory that cosmic rays "have more effect on the climate than manmade CO2":
"During the last 100 years cosmic rays became scarcer because unusually vigorous action by the Sun batted away many of them. Fewer cosmic rays meant fewer clouds—and a warmer world."
A documentary film on Svensmark's theory, The Cloud Mystery, was produced by Lars Oxfeldt Mortensen and premiered in January 2008 on Danish TV 2.
In April 2012, Svensmark published an expansion of his theory in the Monthly Notices of the Royal Astronomical Society.
In the new work he claims that the diversity of life on Earth over the last 500 million years might be explained by tectonics affecting the sea-level together with variations in the local supernova rate, and virtually nothing else. This suggests that the progress of evolution is affected by climate variation depending on the galactic cosmic ray flux.
The director of DTU Space, Prof. Eigil Friis-Christensen, commented: "When this enquiry into effects of cosmic rays from supernova remnants began 16 years ago, we never imagined that it would lead us so deep into time, or into so many aspects of the Earth's history. The connection to evolution is a culmination of this work."
Hypothesis tests
Preliminary experimental tests have been conducted in the SKY Experiment at the Danish National Space Science Center. CERN, the European Organization for Nuclear Research in Geneva, is preparing comprehensive verification in the CLOUD Project.
SKY Experiment
Svensmark conducted proof of concept experiments in the SKY Experiment at the Danish National Space Institute.
To investigate the role of cosmic rays in cloud formation low in the Earth's atmosphere, the SKY experiment used natural muons (heavy electrons) that can penetrate even to the basement of the National Space Institute in Copenhagen. The hypothesis, verified by the experiment, is that electrons released in the air by the passing muons promote the formation of molecular clusters that are building blocks for cloud condensation nuclei.
Critics of the hypothesis claimed that particle clusters produced measured just a few nanometres across, whereas aerosols typically need to have a diameter of at least 50 nm in order to serve as so-called cloud condensation nuclei. Further experiments by Svensmark and collaborators published in 2013 showed that aerosols with diameter larger than 50 nm are produced by ultraviolet light (from trace amounts of ozone, sulfur dioxide, and water vapor), large enough to serve as cloud condensation nuclei.
CLOUD Project Experiments
Scientists are preparing detailed atmospheric physics experiments to test Svensmark's thesis, building on the Danish findings. CERN started a multi-phase project in 2006, including rerunning the Danish experiment. CERN plans to use an accelerator rather than rely on natural cosmic rays. CERN's multinational project will give scientists a permanent facility where they can study the effects of both cosmic rays and charged particles in the Earth's atmosphere. CERN's project is named CLOUD (Cosmics Leaving OUtdoor Droplets).
Dunne et al. (2016) presented the main outcomes of 10 years of results obtained at the CLOUD experiment at CERN. They studied in detail the physico-chemical mechanisms and the kinetics of aerosol formation. The nucleation of water droplets and ice micro-crystals from water vapor, reproduced in the CLOUD experiment and also directly observed in the Earth's atmosphere, does not only involve ion formation due to cosmic rays but also a range of complex chemical reactions with sulfuric acid, ammonia and organic compounds emitted into the air by human activities and by organisms living on land or in the oceans (plankton). Although they observe that a fraction of cloud nuclei is effectively produced by ionisation due to the interaction of cosmic rays with the constituents of the Earth's atmosphere, this process is insufficient to attribute the present climate modifications to fluctuations in cosmic ray intensity modulated by changes in solar activity and the Earth's magnetosphere.
Debate and controversy
Galactic Cosmic Rays vs Global Temperature
Oceanographer Paul Farrar (2000) argued that, based on the spatial distribution of the cloud variation during Svensmark's study period, the variation was due to an El Niño which was synchronized with the cosmic ray signal used by Svensmark during the data period of his study.
A 2003 critique by physicist Peter Laut of Svensmark's theory reanalyzed Svensmark's data and suggested that they do not support a correlation between cosmic rays and global temperature changes; it also disputed some of the theoretical bases for the theory. Svensmark replied to the paper, stating that "...nowhere in Peter Laut’s (PL) paper has he been able to explain, where physical data have been handled incorrectly, how the character of my papers are misleading, or where my work does not live up to scientific standards"
Mike Lockwood of the UK's Rutherford Appleton Laboratory and Claus Froehlich of the World Radiation Center in Switzerland published a paper in 2007 which concluded that the increase in mean global temperature observed since 1985 correlates so poorly with solar variability that no type of causal mechanism may be ascribed to it, although they accept that there is "considerable evidence" for solar influence on Earth's pre-industrial climate and to some degree also for climate changes in the first half of the 20th century.
Svensmark's coauthor Calder responded to the study in an interview with LondonBookReview.com, where he put forth the counterclaim that global temperature had not risen since 1999.
Later in 2007, Svensmark and Friis-Christensen brought out a Reply to Lockwood and Fröhlich which concludes that surface air temperature records used by Lockwood and Fröhlich apparently are a poor guide to Sun-driven physical processes, but tropospheric air temperature records do show an impressive negative correlation between cosmic-ray flux and air temperatures up to 2006 if a warming trend, oceanic oscillations and volcanism are removed from the temperature data. They also point out that Lockwood and Fröhlich present their data by using running means of around 10 years, which creates the illusion of a continued temperature rise, whereas all unsmoothed data point to a flattening of the temperature, coincident with the present maxing out of the magnetic activity of the Sun, and which the continued rapid increase in CO2 concentrations seemingly has been unable to overrule.
Galactic Cosmic Rays vs Cloud Cover
In April 2008, Professor Terry Sloan of Lancaster University published a paper in the journal Environmental Research Letters titled "Testing the proposed causal link between cosmic rays and cloud cover", which found no significant link between cloud cover and cosmic ray intensity in the last 20 years. Svensmark responded by saying "Terry Sloan has simply failed to understand how cosmic rays work on clouds". Dr. Giles Harrison of Reading University describes the work as important "as it provides an upper limit on the cosmic ray-cloud effect in global satellite cloud data". Harrison studied the effect of cosmic rays in the UK. He states: "Although the statistically significant non-linear cosmic ray effect is small, it will have a considerably larger aggregate effect on longer timescale (e.g. century) climate variations when day-to-day variability averages out". Brian H. Brown (2008) of Sheffield University further found a statistically significant (p<0.05) short-term 3% association between galactic cosmic rays (GCR) and low-level clouds over 22 years, with a 15-hour delay. Long-term changes in cloud cover (> 3 months) and GCR gave correlations of p=0.06.
Debate updates
More recently, Laken et al. (2012) found that new high-quality satellite data show that the El Niño Southern Oscillation is responsible for most changes in cloud cover at the global and regional levels. They also found that galactic cosmic rays and total solar irradiance did not have any statistically significant influence on changes in cloud cover.
Lockwood (2012) conducted a thorough review of the scientific literature on the "solar influence" on climate. It found that when this influence is included appropriately in climate models, causal climate-change claims such as those made by Svensmark are shown to have been exaggerated. Lockwood's review also highlighted the strength of evidence in favor of a solar influence on regional climates.
Sloan and Wolfendale (2013) demonstrated that while temperature models showed a small correlation every 22 years, less than 14 percent of global warming since the 1950s could be attributed to cosmic ray rate. The study concluded that the cosmic ray rate did not match the changes in temperature, indicating that it was not a causal relationship. Another 2013 study found, contrary to Svensmark's claims, "no statistically significant correlations between cosmic rays and global albedo or globally averaged cloud height."
In 2013, a laboratory study by Svensmark, Pepke and Pedersen published in Physics Letters A showed that there is in fact a correlation between cosmic rays and the formation of aerosols of the type that seed clouds. Extrapolating from the laboratory to the actual atmosphere, the authors asserted that solar activity is responsible for approximately 50 percent of temperature variation.
In a detailed 2007 post on the scientists' blog RealClimate, Rasmus E. Benestad presented arguments for considering Svensmark's claims to be "wildly exaggerated". (Time magazine has characterized the main purpose of this blog as a "straightforward presentation of the physical evidence for global warming".)
Selected publications
Books
Contribution in Die kalte Sonne. Warum die Klimakatastrophe nicht stattfindet (The Cold Sun), by Fritz Vahrenholt and Sebastian Lüning (eds.)
Film
The Cloud Mystery
Awards
2001, the Energy-E2 Research Prize
1997, Knud Hojgaard Anniversary Research Prize
References
External links
Calder, Nigel, An experiment that hints we are wrong on climate change Nigel Calder, former editor of New Scientist, says the orthodoxy must be challenged, TimesOnline, February 11, 2007
DISCOVER Interview with Henrik Svensmark, by Marion Long. Sun's shift may cause global warming - June 2007
LondonBookReview.com - Book review of The Chilling Stars
The CLOUD project
Danish climatologists
21st-century Danish physicists
Danish nuclear physicists
1958 births
Environmental scientists
Living people | Henrik Svensmark | [
"Environmental_science"
] | 2,544 | [
"Environmental scientists"
] |
9,470,331 | https://en.wikipedia.org/wiki/Cell-penetrating%20peptide | Cell-penetrating peptides (CPPs) are short peptides that facilitate cellular intake and uptake of molecules ranging from nanosize particles to small chemical compounds to large fragments of DNA. The "cargo" is associated with the peptides either through chemical linkage via covalent bonds or through non-covalent interactions.
CPPs deliver the cargo into cells, commonly through endocytosis, for use in research and medicine. Current use is limited by a lack of cell specificity in CPP-mediated cargo delivery and insufficient understanding of the modes of their uptake. Other delivery mechanisms that have been developed include CellSqueeze and electroporation.
CPPs typically have an amino acid composition that either contains a high relative abundance of positively charged amino acids such as lysine or arginine, or has sequences that contain an alternating pattern of polar, charged amino acids and non-polar, hydrophobic amino acids. These two types of structures are referred to as polycationic or amphipathic, respectively. A third class of CPPs is the hydrophobic peptides, containing only apolar residues with low net charge or hydrophobic amino acid groups that are crucial for cellular uptake.
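As a rough illustration of the composition-based classification described above, the short sketch below counts charged residues in a one-letter peptide sequence. The helper function, the residue sets, and the example sequences are illustrative assumptions for this sketch, not values taken from the article or from any particular library.

```python
CATIONIC = set("KR")   # lysine and arginine; histidine is ignored for simplicity
ANIONIC = set("DE")    # aspartate and glutamate

def charge_summary(sequence):
    """Return (approximate net charge, fraction of cationic residues) for a peptide."""
    pos = sum(aa in CATIONIC for aa in sequence)
    neg = sum(aa in ANIONIC for aa in sequence)
    return pos - neg, pos / len(sequence)

# Sequences often quoted for TAT(47-57) and penetratin; treat them as illustrative.
for name, seq in [("TAT", "YGRKKRRQRRR"), ("penetratin", "RQIKIWFQNRRMKWKK")]:
    charge, frac = charge_summary(seq)
    print(f"{name}: net charge {charge:+d}, {frac:.0%} cationic residues")
```

A high cationic fraction of this kind is what the polycationic label refers to; the amphipathic class is distinguished less by overall charge than by the alternation of charged and hydrophobic residues along the chain.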
Transactivating transcriptional activator (TAT), from human immunodeficiency virus 1 (HIV-1), was the first CPP discovered. In 1988, two laboratories independently found that TAT could be efficiently taken up from the surrounding media by numerous cell types in culture. Since then, the number of known CPPs has expanded considerably, and small molecule synthetic analogues with more effective protein transduction properties have been generated.
A recent discovery found that Papillomaviridae, such as the human papillomavirus, use CPPs to penetrate the intracellular membrane to trigger retrograde trafficking of the viral unit to the nucleus.
Mechanisms of membrane translocation
Cell-penetrating peptides are of different sizes, amino acid sequences, and charges, but all CPPs have the ability to translocate the plasma membrane and facilitate the delivery of various molecular cargoes to the cytoplasm or an organelle. No real consensus explains the translocation mechanism, but candidates can be classified into three mechanisms: direct penetration in the membrane, endocytosis-mediated entry, and translocation through a transitory structure. CPP transduction is an area of ongoing research.
Cell-penetrating peptides (CPP) are able to transport different types of cargo molecules across plasma membrane; thus, they act as molecular delivery vehicles. They have numerous applications in medicine as drug delivery agents in the treatment of different diseases including cancer and virus inhibitors, as well as contrast agents for cell labeling. Examples of the latter include acting as a carrier for GFP, MRI contrast agents, or quantum dots.
Direct penetration
The majority of early research suggested that the translocation of polycationic CPPs across biological membranes occurred via an energy-independent cellular process. It was believed that translocation could progress at 4 °C and most likely involved a direct electrostatic interaction with negatively charged phospholipids. Researchers proposed several models in attempts to elucidate the biophysical mechanism of this energy-independent process. Although CPPs promote direct effects on the biophysical properties of pure membrane systems, the identification of fixation artifacts when using fluorescent labeled probe CPPs caused a reevaluation of CPP-import mechanisms. These studies promoted endocytosis as the translocation pathway. An example of direct penetration has been proposed for TAT. The first step in this proposed model is an interaction with the unfolded fusion protein (TAT) and the membrane through electrostatic interactions, which disrupt the membrane enough to allow the fusion protein to cross the membrane. After internalization, the fusion protein refolds due to the chaperone system. This mechanism was not agreed upon, and other mechanisms involving clathrin-dependent endocytosis have been suggested.
Many more detailed methods of CPP uptake have been proposed including transient pore formation. This mechanism involves strong interactions between cell-penetrating peptides and the phosphate groups on both sides of the lipid bilayer, the insertion of positively charged arginine side-chains that nucleate the formation of a transient pore, followed by the translocation of cell-penetrating peptides by diffusing on the pore surface. This mechanism explains how key ingredients, such as the cooperation among the peptides, the large positive charge, and specifically the guanidinium groups, contribute to the uptake. The proposed mechanism also illustrates the importance of membrane fluctuations. Indeed, mechanisms that involve large fluctuations of the membrane structure, such as transient pores and the insertion of charged amino acid side-chains, may be common and perhaps central to the functions of many membrane protein functions.
Endocytosis-mediated translocation
Endocytosis is the second mechanism responsible for cellular internalization. Endocytosis is the process of cellular ingestion by which the plasma membrane folds inward to bring substances into the cell: cells absorb material from outside by engulfing it with their cell membrane. Classification of cellular localization using fluorescence or endocytosis inhibitors is the basis of most examinations. However, the procedures used during preparation of these samples create questionable information regarding endocytosis. Moreover, studies show that cellular entry of penetratin by endocytosis is an energy-dependent process. This process is initiated by polyarginines interacting with heparan sulphates that promote endocytosis. Research has shown that TAT is internalized through a form of endocytosis called macropinocytosis.
Studies have illustrated that endocytosis is involved in the internalization of CPPs, but it has been suggested that different mechanisms could transpire at the same time. This is established by the behavior reported for penetratin and transportan wherein both membrane translocation and endocytosis occur concurrently.
Translocation through the formation of a transitory structure
The third mechanism responsible for the translocation is based on the formation of the inverted micelles. Inverted micelles are aggregates of colloidal surfactants in which the polar groups are concentrated in the interior and the lipophilic groups extend outward into the solvent. According to this model, a penetratin dimer combines with the negatively charged phospholipids, thus generating the formation of an inverted micelle inside of the lipid bilayer. The structure of the inverted micelles permits the peptide to remain in a hydrophilic environment.
Nonetheless, this mechanism is still a matter of discussion, because the distribution of penetratin between the inner and outer membrane leaflets is asymmetric. This asymmetric distribution produces an electric field, an effect that is well established. Increasing the amount of peptide on the outer leaflet causes the electric field to reach a critical value that can generate an electroporation-like event.
The last mechanism applies to peptides that belong to the family of primary amphipathic peptides, MPG and Pep-1. Two similar models have been proposed based on physicochemical studies comprising circular dichroism, Fourier transform infrared, and nuclear magnetic resonance spectroscopy, combined with electrophysiological measurements and investigations of model membranes such as monolayers at the air–water interface. The structure giving rise to the pores is the major difference between the proposed MPG and Pep-1 models: in the MPG model the pore is formed by a β-barrel structure, whereas Pep-1 is associated with helices. In addition, strong hydrophobic phospholipid–peptide interactions have been found in both models.
In the two peptide models, the folded parts of the carrier molecule correlate to the hydrophobic domain, although the rest of the molecule remains unstructured.
Cell-penetrating peptide facilitated translocation is a topic of great debate. Evidence has been presented that translocation could use several different pathways for uptake. In addition, the mechanism of translocation can be dependent on whether the peptide is free or attached to cargo. The quantitative uptake of free CPP or of CPP connected to cargo can differ greatly, but studies have not proven whether this change is a result of translocation efficiency or a difference in translocation pathway. It is probable that the results indicate that several CPP mechanisms are in competition and that several pathways contribute to CPP internalization.
Applications
Nucleic acid delivery
Nucleic acid-based macromolecules such as siRNA, antisense oligonucleotides, decoy DNA, and plasmids are promising biological and pharmacological therapeutics for the regulation of gene expression. However, unlike small-molecule drugs, their development and application are limited by high molecular weight and negative charge, which result in poor uptake efficiency and low cellular trafficking. To overcome these problems, several different delivery systems have been developed, including the CPP-nucleic acid conjugate, which is a powerful tool.
Formation of CPP-nucleic acid complexes
Most CPP-nucleic acid complexes that have been proposed so far are formed through covalent bonding. A range of CPP-nucleic acid complexes have been synthesized through different chemistries yielding either stable or cleavable linkages. The most widely used method in the literature is the cleavable disulfide linkage, formed through total stepwise solid-phase synthesis or through solution-phase or solid-phase fragment coupling. Some other strategies, such as stable amide, thiazolidine, oxime and hydrazine linkages, have also been developed.
However, those covalent linking methods are limited by the concern that the synthetic covalent bond between CPP and nucleic acid may alter the biological activity of the latter. Thus, a new non-covalent strategy requiring no chemical modification with short amphipathic CPPs, like MPG and Pep-1 as carriers has been successfully applied for delivery of cargoes. These non-covalent conjugates are formed through either electrostatic or hydrophobic interactions. With this method, cargoes such as nucleic acids and proteins could be efficiently delivered while maintaining full biological activity.
siRNA delivery
Short interfering RNA (siRNA) is a powerful new tool that can interfere with and silence the expression of specific disease gene. To improve cellular uptake of siRNA, CPP strategies have been applied to facilitate the delivery of siRNA into cells through either covalent or non-covalent linkages. In one study, siRNA is covalently linked to transportan and penetratin by disulfide-linkage at 5'-end of the sense strands of siRNA to target luciferase or eGFP mRNA reporters. In another study, TAT-siRNA conjugate through a stable thiomaleimide linkage at 3'-end of siRNA was delivered into HeLa cells for eGFP gene silencing.
However, non-covalent strategies appear to be better for siRNA delivery with a more significant biological response. In one study, MPG/siRNA complexes formed through stable non-covalent strategy showed successful introduction of siRNA into cultured cells and induced robust regulation of target mRNA. Furthermore, MPG/siRNA complexes have also been applied for delivery of siRNA in vivo into mouse blastocytes for gene regulation. MPG forms stable complexes with siRNA with a low degradation rate and can be easily functionalized for specific targeting, which are major advantages compared with the covalent CPP technology.
New substrate design for siRNA delivery
siRNA cell delivery represent a valuable tool for treatment of cancer disease, viral infections and genetic disorders. However, classical strategies involve covalent linking of cargo molecules and CPPs, which does not provide efficient protection of siRNA molecules in vivo; thus results reported in literature are not consistent. Recently, non-covalent strategies have been successfully reported. Secondary amphipathic peptides based on aromatic tryptophan and arginine residues linked with lysine as spacer have been reported under the name of CADY. CADY contains a short peptide sequence of 20 amino acids, with the sequence “Ac-GLWRALWRLLRSLWRLLWRA-cysteamide."
This peptide is able to self-assemble into a helical shape with hydrophilic and hydrophobic residues on different sides of the molecule; it has two different surface orientations that represent the lowest energy, and it is able to form complexes with siRNA at molar ratios varying from 1:1 to 80:1.
CADY is able to form a shield around the siRNA molecule, protecting it from biodegradative processes that may occur before cellular penetration. These types of substrates may have important applications in vivo.
Antisense oligomer delivery
Antisense oligonucleotides (asONs) have been used in basic research and are being developed as possible medical treatments. CPP strategies have been developed to deliver antisense oligomers such as PNA and PMO into cells. By overcoming the repulsion of negatively charged ONs by the cell membrane and the degradation of asONs by enzymes, CPPs increase asON bioavailability. Two types of neutral ON analogues, peptide nucleic acid (PNA) and phosphorodiamidate morpholino oligomers (PMO or Morpholino), are becoming dominant in this area. PNA has been conjugated with various CPPs either through disulfide linkages or through stable amide bonds. For example, antisense activity within cells that blocked expression of the galanin receptor was observed when a 21-mer PNA was coupled to penetratin. Antiviral activity with PNA targeting HIV-1 has also been reported through disulfide linkage with TAT. CPP-PMO conjugates have been successfully used to inhibit the replication of several viruses such as SARS and influenza, and attachment of CPPs has improved the efficacy of splice-modifying Morpholinos in development for the treatment of Duchenne muscular dystrophy.
Decoy DNA delivery
Decoy DNA is an exogenous double-strand DNA (dsDNA), which can mimic a promoter sequence that can inhibit the activity of a specific transcription factor. But dsDNA has the same problem as other therapeutics, poor bioavailability. In one study, CPPs TP and TP10 were coupled to NFкB decoy DNA, which blocked the effect of interleukin-1-induced NFкB activation and IL-6 gene expression. In another study, TP10 coupled Myc decoy DNA decreased proliferative capacity of N2a cells.
Plasmid delivery
Individual genes can be inserted into specific sites on plasmids, and recombinant plasmids can be introduced into living cells. A method using macro-branched TAT has been proposed for plasmid DNA delivery into various cell lines and showed significant transfection capabilities. Multimers of TAT have been found to increase transfection efficiency of plasmid DNA by 6-8 times more than poly-L-arginine or mutant TAT2-M1, and by 390 times compared with the standard vectors.
Protein delivery
The development of therapeutic proteins, a valuable approach to treating disease, is limited by the low efficiency of traditional delivery methods. The evaluation of cytosolic delivery of CPP-linked proteins has been found to be prone to artifacts and therefore requires the use of evaluation methods that distinguish true cytosolic delivery from cell-surface-attached or endosomally entrapped CPP-proteins. Recently, several methods using CPPs as vehicles to deliver biologically active, full-length proteins into living cells and animals have been reported.
Several groups have successfully delivered CPP-fused proteins in vitro. TAT was able to deliver different proteins, such as horseradish peroxidase and RNase A, across the cell membrane into the cytoplasm of different cell lines in vitro. The size range of proteins with effective delivery is from 30 kDa to 120-150 kDa. In one study, TAT-fused proteins were shown to be rapidly internalized by lipid raft-dependent macropinocytosis using a transducible TAT-Cre recombinase reporter assay on live cells. In another study, a TAT-fused protein was delivered into the mitochondria of breast cancer cells and decreased their survival, which showed the capability of TAT-fusion proteins to modulate mitochondrial function and cell survival. Moreover, cR10, a cyclic polyarginine CPP, enabled the endocytosis-independent transduction of antigen-binding proteins across the cell membrane with immediate bioavailability; the authors of that study were thereby able to deliver fluorescent antigen-binding proteins into cells, facilitating live-cell immunostaining. However, few in vivo studies have succeeded. In one study, in vivo delivery of TAT- or penetratin-crosslinked Fab fragments yielded varied organ distributions and an overall increase in organ retention, which indicated tissue localization.
A non-covalent method that forms CPP/protein complexes has also been developed to address the limitations in covalent methods, such as chemical modification before crosslinking, and denaturation of proteins before delivery. In one study, a short amphipathic peptide carrier, Pep-1, and protein complexes have proven effective for delivery. It was shown that Pep-1 could facilitate rapid cellular uptake of various peptides, proteins, and even full-length antibodies with high efficiency and less toxicity. This approach has greatly simplified the formulation of reagents.
Contrast agent transport
CPPs found applications as transporters of contrast agents across plasma membranes. These contrast agents are able to label the tumor cells, making the compounds important tools in cancer diagnosis; they are also used in in vivo and in vitro cellular experiments.
The most important classes of CPP include the virus-derived TAT (transactivating transcriptional activator) peptide from HIV-1, as well as penetratin and transportan. The most widely used CPPs are based on TAT derivatives. TAT is an arginine-rich CPP. Several improvements to this substrate include the use of unnatural β- or γ-amino acids. This strategy offers resistance to proteolytic degradation, the natural process by which peptide bonds are hydrolyzed to amino acids. Insertion of unnatural amino acids in the peptide chain also facilitates the formation of stable foldamers with distinct secondary structure. β-Peptides are conformationally more stable in aqueous solution than naturally occurring peptides, especially for small chains. The secondary structure is reinforced by the presence of a rigid β-amino acid containing cyclohexane or cyclopentane fragments; these fragments generate a more rigid structure and influence the opening angle of the foldamer. These features are important for new peptide design. Helical β-peptides mimic the antimicrobial activities of host defense peptides. This requires the orientation of cationic, hydrophilic residues on one side and hydrophobic residues on the other side of the helix. The attachment of a fluorescent group at one end of the molecule confers contrast properties.
A newer strategy to enhance the cellular uptake capacity of CPPs is based on the association of polycationic and polyanionic domains separated by a linker. Cellular association of the polycationic residues (polyarginine) with negatively charged cell membranes is effectively blocked by the presence of the polyanionic residues (polyglutamic acid) and the linker, which confers the proper distance between the two charged domains to maximize their interaction. These peptides adopt a hairpin structure, confirmed by nuclear Overhauser effect correlations showing proton–proton proximities of the two charged moieties.
In this arrangement only the linker is exposed to protease hydrolysis in in vivo applications. Once the linker is hydrolyzed, the two charged fragments gain conformational freedom, and in the absence of the linker the cationic peptide can interact more efficiently with the target cell, so cellular uptake occurs before further proteolysis. This strategy has found applications in labeling tumor cells in vivo; tumor cells were marked within minutes.
Linker degradation can be predicted by the amount of D-aminoacids (the unnatural isomer) incorporated in the peptide chain, this restricts in vivo proteolysis to the central linker.
Contrast agents as cargo molecules
Quantum dots
Quantum dots (QD) represent a relative new class of fluorescent probes that have superior optical properties than classical organic dyes based on fluorescent groups. The main advantages of QD include high quantum yields, broad absorption spectra, size-tunable emission spectra, and good resistance to chemical and photochemical degradation.
In vivo tests have shown that several positively charged peptides (based on guanidine residues) are able to cross cell membranes and to promote cellular uptake of attached molecules including quantum dots.
QD properties can be easily modified by changing the organic substrates linked to them, offering a versatile biological tool as cell markers. Research is in progress to optimize the methodologies for the intracellular delivery of QD and QD bioconjugates, and characterization of long-term in vivo photophysical properties.
Quantum dots are colloidal nanocrystals, typically based on a cadmium selenide (CdSe) core covered with a zinc sulfide (ZnS) layer. This substrate has been used intensively as a cellular marker because CdSe emits in the visible domain and is an excellent contrast agent, while the ZnS layer protects the core from oxidation and from leaching of CdSe into the surrounding solution; the shell also improves the photoluminescence yield. The properties can be tuned by the thickness of the ZnS protective layer. Colloidal QD emission can be modulated from the UV-visible range to the infrared by using different core and coating materials, such as ZnS, CdS, ZnSe, CdTe and PbSe. The properties of quantum dots can also be tuned by the synthetic scheme: high-temperature solvent/ligand mixtures influence the nanocrystal properties. High-quality QD contrast agents are obtained at elevated temperatures; however, because they have lower water solubility, their usage as cell markers is limited, and further functionalization with hydrophilic ligands is required.
The advantages of QD are represented by their fast action; they are able to label a target tissue or cell in seconds. In vivo studies show that QD are able to selectively label cancer cells, and they accumulate at tumor sites. Tumor cells labeled with QD can be tracked with multiphoton microscopy as they invade lung tissue. In both studies, spectral imaging and autofluorescent subtraction allowed multicolour in vivo visualization of cells and tissues. A major drawback of QD is their relatively high toxicity. Functionalizations with different substrates that increase bioaffinity and decrease toxicity are in progress. For instance, sulfur from the QD shell is able to form reversible disulfide bonds with a wide class of organic compounds.
Magnetic resonance imaging
Magnetic resonance imaging (MRI) is a powerful tool for the diagnosis of diseases such as cancer metastasis and inflammation, using different metal chelates. Metal chelates increase the contrast signal between normal and diseased tissues by catalyzing the relaxation of water protons in their vicinity. Typical examples are low-molecular-weight Gd3+ chelates and superparamagnetic iron oxide (SPIO). In vivo administration of these agents allows the labeling of tumor cells; alternatively, cells can be labeled in vitro with contrast agents and then injected and monitored in vivo using MRI techniques.
SPIO nanoparticles confer high sensitivity in MRI but have lower affinity for cells and work only at high concentrations. Functionalization of these compounds with dendrimeric guanidines showed activity similar to that of TAT-based CPPs but higher toxicity. New substrates based on dendrons with hydroxyl or amine peripheries show low toxicity. Applications of SPIO include cell labeling in vivo; owing to their low toxicity, they are clinically approved for use in liver, spleen, and gastrointestinal imaging.
The presence of octamer arginine residues allows cell-membrane transduction of various cargo molecules including peptides, DNA, siRNA, and contrast agents. However, the ability to cross the membrane is not unidirectional; arginine-based CPPs are able both to enter and to exit the cell, so the concentration of contrast agent and the magnetic resonance (MR) signal decrease over time, which limits their application in vivo. To solve this problem, contrast agents with a reversible disulfide bond between the metal chelate and the transduction moiety enhance cell-associated retention. The disulfide bond is reduced in the target-cell environment and the metal chelate remains trapped in the cytoplasm, increasing the retention time of the chelate in the target cell.
References
Peptides
Cell biology | Cell-penetrating peptide | [
"Chemistry",
"Biology"
] | 5,226 | [
"Biomolecules by chemical classification",
"Cell biology",
"Peptides",
"Molecular biology"
] |
9,470,663 | https://en.wikipedia.org/wiki/Wishaw%20and%20Coltness%20Railway | The Wishaw and Coltness Railway was an early Scottish mineral railway. It ran for approximately 11 miles from Chapel Colliery, at Newmains in North Lanarkshire connecting to the Monkland and Kirkintilloch Railway near Whifflet, giving a means of transport for minerals around Newmains to market in Glasgow and Edinburgh.
Shortage of capital made construction slow, and the line was opened in stages from 1833, opening fully on 9 March 1844.
It was built to the track gauge of 4 ft 6 in, commonly used in Scotland for coal railways. It had several branches serving pits and ironworks.
In 1849 it became part of the Caledonian Railway and sections of the original network form part of the modern West Coast Main Line railway.
Formation of the railway
In the early decades of the nineteenth century, the pace of industrialisation in central Scotland accelerated considerably, generating a huge demand for the raw materials of coal and iron ore. Transport of these heavy materials to market was a key issue. Canals offered some solution to this problem, but railways came to be seen as a more accessible option. The Monkland and Kirkintilloch Railway was opened in 1828, giving access to Monklands pits to Glasgow and Edinburgh via the Forth and Clyde Canal, vastly reducing the cost of carriage. Pits further afield saw the benefit to their competitors, and thought of constructing their own lines.
A Garturk and Garion Railway bill was presented to Parliament in 1829 (though Awdry calls it the Garion and Garturk Railway).
During the parliamentary process the name was changed to the Wishaw and Coltness Railway, and under that name it was incorporated by an act of Parliament, the (10 Geo. 4. c. cvii), on 1 June 1829. This authorised "making a railway from Chapel, in the parish of Cambusnethan, ...by Coltness and Gariongill, to join the Monkland and Kirkintilloch Railway ... in the parish of Old Monkland". Share capital was £80,000 with borrowing powers of £20,000. Tolls were laid down, and "the company may provide carriages for the conveyance of passengers, and charge for each person conveyed a rate of 4d per mile" and "locomotive engines may be used on the railway"
The name of the company refers to the area where minerals would originate. Coltness Colliery was in the area of Wishaw, and both places were some distance from the present-day communities. The northern end of the proposed system was a junction to another railway at Whifflet, and perhaps did not seem an attractive component when the company's name was being chosen.
Priestley says that the line was "designed to pass from the collieries of Chapel and Crawfoot, in the parish of Cambusnethan, in the county of Lanark, through Daiziel, Hamilton, Bothwell, Coltness, Overtown, Wishawtown, Motherwell, Burnhouse and Carnbroe, to join the Monkland and Kirkintilloch Railway at Old Monkland; with a branch to Rosehall; a second to the collieries of Stevenson, Carfin and Cleland; and a third from these last places to Law, in the parish of Carluke, in the same county of Lanark". Several of these objectives were never achieved in the independent lifetime of the company.
In securing parliamentary authority, the company had to accept a clause in its act forbidding the use of locomotives, to overcome the opposition of Drysdale of Jerviston, to a line which, he claimed, "would enable the Landed Proprietors south of Major Drysdale to enhance the value of their estates at the expense of his". The prohibition was later bought out for £1,000, of which the company paid half.
Shortage of funds led to further acts, the (4 & 5 Will. 4. c. xli) and the (7 Will. 4 & 1 Vict. c. c) to obtain three-year extensions for completion of the railway; a further £80,000 of capital was raised by 1840 and this was followed by a further act, the (4 & 5 Vict. c. xi) to raise an additional £160,000 of capital.
Opening in stages
First section to Chapelknowe
The new company found raising money difficult, and this considerably slowed completion of the line.
The first section was opened on 23 January 1834, running southwards from a junction with the Monkland and Kirkintilloch Railway at Whifflat Junction (as it was then spelt: immediately east of the present A725 road) to "Holytown", actually on the Holytown Road, now the A775. There was a tunnel at Carnbroe Iron Works and a "Holytown Tunnel" a short distance north of the present A8 road crossing.
Nine weeks later, on 31 March 1834 the line was opened to the pits at Chapelknowe: immediately south of Holytown Road the line swerved eastwards. When other parts of the line were built later, there was a continuation south and the site became, much later, Mossend North Junction. However a passenger station called Holytown was opened there in 1844, changing its name to Mossend in 1882.
Now running south-east to east, the line passed a Carfin station (renamed Holytown in 1882) and a Newarthill station (renamed Carfin in 1928). As far as this point, the route is still in operation today as Whifflet - Mossend North Jn - Holytown - Carfin, but here the original line turned south to serve several pits in the Cleland estate at Chapelknowe.
Coltness Estate
In 1836 Thomas Houldsworth, a Manchester entrepreneur, purchased the Coltness Estate, with its extensive mineral rights. At this time the Railway Company had run out of money to complete the building of their line. "In order to allow the development of his minerals, Houldsworth agreed that if the company extended their line to Coltness, for which they would require to borrow £20,000 and to get an Act allowing an extension of time for construction, he would personally guarantee the interest on the loan, and pay £300 towards the Act. He also agreed to give the necessary land on the Coltness Estate free (apart from compensation to tenants) and to send all his goods by the railway, if the Company charged him the prevailing rate." He also agreed to pay half of the payment to the Drysdale estate to buy out the prohibition on locomotives.
Reaching Jerviston
As money became available, the next extension (actually the intended "main line") was built, about a mile long, and opening in 1838, running due south from the junction at "Holytown" (i.e. Mossend North Junction) to "Jerviston", serving several pits. The location was at the north end of the (later) Ravenscraig complex, and the route was to the east of the present Mossend - Motherwell line (opened in 1857).
On to "Coltness"
In 1841 the next section was opened, from Jerviston to Overton (now spelt Overtown). Continuing south from Jerviston and crossing the Clyde, the line turned south-east at Motherwell Junction, a little distance south-east of the present-day station there; from there the line formed the route of the present West Coast Main Line, serving coal and ironstone pits and clay pits and reaching Coltness Colliery, which was in the area between the bridges at Pather and Overtown.
To Chapel at last
In 1842 a further extension was opened, taking the line on from Coltness Colliery to Chapel Colliery—the first-named objective in Priestley's description—in the area of Morningside. This was immediately east of the Stirling Road (later A73). Passenger stations were opened in 1845 at Overtown and "Carluke"; the Carluke station was on the Stirling Road near the site of the old Law Hospital.
Morningside
The final stage in the gradual extension of the route was achieved on 9 March 1844; continuing from Chapel Colliery, the line ran the short distance to pits and a brick and tile works close to Morningside, involving a bridge over the Auchter Water.
Wilsontown, Morningside and Coltness Railway
In 1845 the Wilsontown, Morningside and Coltness Railway opened its line, on 2 June. The new railway sought to connect pits further east to the developing network towards the Monklands and the Clyde, and was therefore dependent on the Wishaw and Coltness Railway. The newcomer was always short of money and never succeeded in connecting the important iron works at Wilsontown, and the Inspecting Officer for passenger operation wrote that "the line terminates in a large field, about a mile from a small village called Whitburn."
The newcomer built a Morningside terminus, facing east, in the north-east angle of Mill Road and Morningside Road, and built a short connection to the Wishaw and Coltness (W&CR) line on the west side of Morningside Road. The W&CR promptly built its own Morningside station at that point, abutting the road, and 2 chains (44 yards, 40 m) from the WM&CR station.
Operation
Like the other coal railways built in the same period, the railway thought of itself as analogous to a canal, where it provided a route and independent hauliers provided wagons and horses to pull them, and paid the company a toll for the privilege. In fact the original Act stipulated that "Owners of land may erect wharfs, warehouses and cranes on the line, and if they refuse the company may do so, charging for the use thereof [certain laid down charges]"
By 1838 the Glasgow and Garnkirk Railway (G&GR) was operating locomotives over both the G&GR and the W&CR.
In 1839 the company decided to adopt locomotive traction, and to reduce the multiplicity of horse traders, in order to "do away with the collisions which are daily taking place between the drivers".
In 1842 the company bought 323 wagons from the independent hauliers on their line to reduce the number of traders on the line and to keep down the complaints from traders that locomotives were damaging their wagons.
The wagons were primitive and unsprung, and when the Caledonian Railway was later negotiating to purchase the line, it was essential to be able to report that "new wagons with springs and entire new engines have been put on".
In 1843–1845, 1,057,431 tons were carried by the company, of which 61.2% were conveyed by the company itself, and the balance by independent hauliers.
In this period, attention was being drawn to the high proportion of operating costs: of £11,125 annual average revenue in 1839–1843, 36.9% of that figure was expended in operation.
Locomotives
The Wishaw and Coltness company acquired three locomotives designed by Robert Dodds and built by James M Rowan of Glasgow: they were named Wishaw, Coltness and Cleland; they started work at the end of 1840. It was reported that
When received, they were forthwith applied to the purpose of the traffic, and they have provided the advantages to be derived on this as on the adjoining railways from the more general employment of engines instead of horse haulage. In all cases they have been found to be of great service to the prosperity of the Company's revenue.
Passengers
The Company started a passenger service to Coatbridge on 8 May 1845 leaving Morningside at 7.07 a.m., and calling at Stirling Road, Overtown Road, Wishaw, Motherwell, Holytown and Carnbroe Iron Works, arriving at Coatbridge at 8.50 a.m. A through coach was taken on by the Glasgow, Garnkirk and Coatbridge Railway (GG&CR) to its Townhead terminal in Glasgow, and there was a corresponding late afternoon return service. Through tickets were issued from Lanark, by omnibus to Stirling Road to connect, and the GG&CR later developed tourism by this route, advertising the scenic beauty of the Falls of Clyde at Lanark, using a morning outward service and afternoon return.
Names of the early passenger stations bear little resemblance to modern naming. From the junction at Whifflet, they were:
Cleland line:
Carnbroe (or Carnbroe Iron Works), opened 1843, closed about 1846
Holytown (located at the later Mossend North Junction); opened 1844, renamed Mossend 1882, closed 1962
Carfin, opened in about 1834, renamed Holytown in 1882, still in operation
Newarthill (located at the present-day Carfin station); opened in 1834, closed in 1880
Coltness line:
Motherwell (located a short distance south of the present station); opened 1845, closed in 1885 when the present station opened
Flemington, opened in 1891, closed in 1965
Shieldmuir, opened 1990
Wishaw, opened in 1845; renamed Wishaw South in 1880 when the new line from Holytown Junction to Law Junction opened; closed in 1958
Overtown, opened in 1845; closed in 1881
Carluke (located a considerable distance from the present Carluke station); opened 1845, renamed Stirling Road, Morningside in 1848; closed in 1853
Morningside, opened in 1845, closed in 1930
The Morningside line became a minor branch off the West Coast Main Line when the Caledonian Railway took over the Company, and the passenger service probably ceased in 1853.
When the Caledonian Railway built a new line via Newmains, it approached Morningside from the north and a passenger service from Holytown terminated there. In 1895 and in 1922 there were five trains each way with one extra on Saturdays. The Wilsontown, Morningside and Coltness Railway operated a service from its own Morningside station, 40 m away by rail, heading east to Bathgate. There was not much co-ordination of train times, and the 40 m of railway between the two stations did not have a passenger service.
Wider horizons
The line was built as a coal railway, with the primary object of conveying the mineral to market in central Scotland. As industrial processes developed rapidly during the first decade of the line's existence, so did the demand for efficient transport. As the Coltness Iron Works and other industries developed, they needed to bring in materials from further afield, and to dispatch their products to far-off destinations.
In common with the other "coal railways" the technical limitations of the little railway became more obvious, and the most important of these was the track gauge of 4 ft 6 in, which required transshipment of loads at the point of connection with standard gauge lines. In 1847, the railway changed its gauge to the standard 4 ft 8½ in.
The Caledonian Railway was being promoted about the same time, with the object of participating in forming a main line route between Central Scotland and Carlisle, connecting there with the English railway trunk network. At the time the Grand Junction Railway was planning an approach from the south to Carlisle.
The Caledonian Railway promoters planned an entry through Annandale. To get access to Glasgow, the Caledonian secured agreement from the Wishaw and Coltness Railway and the Glasgow Garnkirk and Coatbridge Railway to use their lines for the approach to the city. The Caledonian took a lease of the Wishaw and Coltness from 1 January 1847, guaranteeing 10.5% on the W&C capitalisation of £240,000. (Agreement to lease the GG&CR had been obtained a year earlier at 8%.)
For the time being the Caledonian used the GG&CR Townhead terminus in Glasgow, but soon extended to a new terminal at Glasgow, Buchanan Street.
Parts of the Wishaw and Coltness routes remain in use at the present day: the section from the original Motherwell station, just south-east of the present-day station, to Garriongill Junction, and the section from Whifflet to Mossend South Junction follows the original construction.
References
Sources
Notes
Caledonian Railway
Closed railway lines in Scotland
Horse-drawn railways
Mining railways
Early Scottish railway companies
Pre-grouping British railway companies
4 ft 6 in gauge railways in Scotland
Railway companies established in 1829
Railway lines opened in 1844
Railway companies disestablished in 1849
1829 establishments in Scotland
British companies established in 1829
Transport in North Lanarkshire
British companies disestablished in 1849 | Wishaw and Coltness Railway | [
"Engineering"
] | 3,352 | [
"Mining equipment",
"Mining railways"
] |
13,468,756 | https://en.wikipedia.org/wiki/Leukotriene%20B4%20receptor%201 |
Leukotriene B4 receptor 1, also known as BLT1 or BLT1 receptor, is a protein that in humans is encoded by the LTB4R gene.
See also
Eicosanoid receptor
Etalocib, an antagonist at the leukotriene B4 receptor
References
Further reading
External links
G protein-coupled receptors | Leukotriene B4 receptor 1 | [
"Chemistry"
] | 91 | [
"G protein-coupled receptors",
"Signal transduction"
] |
13,469,869 | https://en.wikipedia.org/wiki/Peter%20Johnstone%20%28mathematician%29 | Peter Tennant Johnstone (born December 28, 1948) is Professor of the Foundations of Mathematics at the University of Cambridge, and a fellow of St. John's College.
He invented or developed a broad range of fundamental ideas in topos theory. His thesis, completed at the University of Cambridge in 1974, was entitled "Some Aspects of Internal Category Theory in an Elementary Topos".
Peter Johnstone is a choral singer, having sung for over thirty years with the Cambridge University Musical Society and since 2004 with the (London) Bach Choir. Following a severe bout of COVID-19 in 2020, he was invited by the Bach Choir's musical director David Hill to provide the text for a new choral work about the pandemic which the Choir commissioned from the composer Richard Blackford; the piece, 'Vision of a Garden', was performed at the Bach Choir's first post-lockdown concert in October 2021 in the Royal Festival Hall, London, and again in July 2023 in King's College Chapel, Cambridge.
He is a great-great-great nephew of the Reverend George Gilfillan who was eulogised in William McGonagall's first poem.
Books
.
— "[F]ar too hard to read, and not for the faint-hearted"
.
.
(v.3 in preparation)
References
External links
Johnstone's web page
Category theorists
Living people
Cambridge mathematicians
Fellows of St John's College, Cambridge
1948 births | Peter Johnstone (mathematician) | [
"Mathematics"
] | 297 | [
"Category theorists",
"Mathematical structures",
"Category theory"
] |
13,471,217 | https://en.wikipedia.org/wiki/Russell%20bodies | Russell bodies are inclusion bodies usually found in atypical plasma cells that become known as Mott cells. Russell bodies are eosinophilic, homogeneous immunoglobulin (Ig)-containing inclusions usually found in cells undergoing excessive synthesis of Ig; the Russell body is characteristic of the distended endoplasmic reticulum. Russell bodies are large and globular of varying size, and become packed into the cell's cytoplasm pushing the nucleus to the edge of the cell, and are found in the peripheral areas of tumors. Russell bodies are thought to have originated as abnormal proteins that have not been secreted. The excess immunoglobulin builds up and forms intracytoplasmic globules, which is thought to be a result of insufficient protein transport within the cell. This causes the proteins to neither be degraded or secreted and stay stored in dilated cisternae. In 1949, Pearse discovered that Russell bodies also contain mucoproteins that are secreted by plasma cells. Russell bodies are not tissue specific; during research they were induced in rat glioma cells. Russell bodies were found to have positive reactions to PAS stain, CD 38 and CD 138 stains. Plasma cells that contain Russell bodies and are stained with H&E stain are found to be autofluorescent, while those without Russell bodies are not. Russell bodies tend to be found in places with chronic inflammation.
This is one cell variation found in multiple myeloma.
Similar inclusion bodies that tend to overlie the nucleus or invaginate into it are known as Dutcher bodies.
They are named for William Russell (1852–1940), a Scottish physician.
References
External links
http://www.healthsystem.virginia.edu/internet/hematology/hessimages/russell-bodies-website-arrow.jpg
Histopathology | Russell bodies | [
"Chemistry"
] | 396 | [
"Histopathology",
"Microscopy"
] |
13,471,415 | https://en.wikipedia.org/wiki/Anaphase%20lag | Anaphase lag is a consequence of an event during cell division where sister chromatids do not properly separate from each other because of improper spindle formation. The chromosome or chromatid does not properly migrate during anaphase and the daughter cells will lose some genetic information. It is one of many causes of aneuploidy. This event can occur during both meiosis and mitosis with unique repercussions. In either case, anaphase lag will cause one daughter cell to receive a complete set of chromosomes while the other lacks one paired set of chromosomes, creating a form of monosomy. Whether the cell survives depends on which sister chromatid was lost and the background genomic state of the cell. The passage of abnormal numbers of chromosomes will have unique consequences with regards to mosaicism and development as well as the progression and heterogeneity of cancers.
Mechanisms
There are two notable mechanisms that cause anaphase lag, each of which is characterized by merotelic attachment of kinetochores to the microtubules responsible for chromatid separation. Merotelic attachments occur when a single centromere kinetochore attaches to microtubules originating from both spindle poles of the dividing cell. The merotelic attachments can occur in two ways: centrosome spindle attachments from both poles on the same chromatid kinetochore, or the formation of a third centrosome whose microtubule spindles attach to a chromatid kinetochore. Because the chromatid is being pulled in two opposing directions or away from the correct centriole, it cannot migrate to the mass of segregated chromatids at either pole. If the migration is significantly delayed, the reformation of nuclei will begin to occur without a full complement of chromosomes. This nuclear envelope formation is also seen for the lone lagging sister chromatid, forming a micronucleus. The micronucleus has the capacity to persist in the daughter cell but with abnormal replication and maintenance machinery. This allows for the accumulation of mutations, increasing the potential for future mis-segregation events. In total these events cause problematic aneuploid cells with increased genomic instability. This has important implications in the development and persistence of cancers as well as debilitating developmental diseases.
Hallmark of cancer
One of the hallmarks of cancer formation and persistence is genomic instability, referring to the increased frequency in sequence mutation, chromosome rearrangement, and aneuploidy. The instability allows a cancerous growth to increasingly diverge from normal cell growth and division, with the potential to gain new traits such as angiogenesis, immune system evasion, and loss of cell cycle checkpoint genes. Aneuploidy is a drastic divergence from the normal karyotype, as such the potential heterogeneity within these cells makes diagnosis and treatment increasingly difficult.
Genomic causes
The increasing importance of genomic instability on cancer progression has been emphasized in recent years. There are many ways to cause aneuploidy, however the genomic predispositions for these events are less well understood. In regards to the merotelic kinetochore attachments associated with anaphase lag, several genes have been implicated. Aurora B is a kinase active in late metaphase, and has been shown to function as a checkpoint for the proper attachments of centriole spindles to the chromatid kinetochores. When Aurora B was partially inhibited by a small molecule drug, Cimini et al. observed lagging chromatids at increasing frequency. Similarly, mutations to the gene Stag2 have been associated with increased aneuploidy in cancers. Stag2 encodes a cohesin protein responsible for holding sister chromatids together pre-anaphase. Imaging of cells with Stag2 knock-outs showed increased frequency of lagging anaphase chromatids; subsequent gene correction in human glioblastoma cell lines reduced the occurrence of this genomic instability.
Prognosis and treatment
As a consequence of this genomic instability, the resulting cancer cells have the potential to diverge in sequence and gain new traits. This intratumoral heterogeneity creates a tumor mass with different genomic backgrounds as well as unique cellular traits and drug susceptibilities. Several research groups have shown that heterogeneity and genomic instability are heavily correlated with poor patient outcomes and aggressive cancers. Chang-Min Choi et al. examined the survival of individuals with adenocarcinoma of the lung: those individuals with higher rates of chromosome instability had worse 5-year survival curves. This was similarly observed in a colorectal study by Walther et al. These more aggressive heterogeneous tumors also present unique difficulties for treatment regimens. To support this hypothesis, Duesberg et al. tested drug susceptibility on cell lines with and without aneuploidy. While the diploid cell lines remained drug sensitive, the aneuploid lines showed marked increases in mutation rates, drug resistance, and unintended morphological changes to cell phenotypes. As the importance of genomic instability in cancer prognosis and treatment becomes clearer, identifying the causes and consequences of mechanisms such as anaphase lag will be critical to understanding how cancer develops as well as to developing better multi-target therapies.
References
Chromosomal abnormalities
Cytogenetics
Meiosis
Mitosis | Anaphase lag | [
"Biology"
] | 1,134 | [
"Molecular genetics",
"Meiosis",
"Cellular processes",
"Mitosis"
] |
13,471,652 | https://en.wikipedia.org/wiki/Generalized%20forces | In analytical mechanics (particularly Lagrangian mechanics), generalized forces are conjugate to generalized coordinates. They are obtained from the applied forces , acting on a system that has its configuration defined in terms of generalized coordinates. In the formulation of virtual work, each generalized force is the coefficient of the variation of a generalized coordinate.
Virtual work
Generalized forces can be obtained from the computation of the virtual work, $\delta W$, of the applied forces.
The virtual work of the forces, $\mathbf{F}_i$, acting on the particles $P_i$, $i = 1, \dots, n$, is given by
$$\delta W = \sum_{i=1}^n \mathbf{F}_i \cdot \delta \mathbf{r}_i,$$
where $\delta \mathbf{r}_i$ is the virtual displacement of the particle $P_i$.
Generalized coordinates
Let the position vectors of each of the particles, $\mathbf{r}_i$, be a function of the generalized coordinates, $q_j$, $j = 1, \dots, m$. Then the virtual displacements $\delta \mathbf{r}_i$ are given by
$$\delta \mathbf{r}_i = \sum_{j=1}^m \frac{\partial \mathbf{r}_i}{\partial q_j}\, \delta q_j, \quad i = 1, \dots, n,$$
where $\delta q_j$ is the virtual displacement of the generalized coordinate $q_j$.
The virtual work for the system of particles becomes
$$\delta W = \mathbf{F}_1 \cdot \sum_{j=1}^m \frac{\partial \mathbf{r}_1}{\partial q_j}\, \delta q_j + \dots + \mathbf{F}_n \cdot \sum_{j=1}^m \frac{\partial \mathbf{r}_n}{\partial q_j}\, \delta q_j.$$
Collect the coefficients of $\delta q_j$ so that
$$\delta W = \sum_{i=1}^n \mathbf{F}_i \cdot \frac{\partial \mathbf{r}_i}{\partial q_1}\, \delta q_1 + \dots + \sum_{i=1}^n \mathbf{F}_i \cdot \frac{\partial \mathbf{r}_i}{\partial q_m}\, \delta q_m.$$
Generalized forces
The virtual work of a system of particles can be written in the form
$$\delta W = Q_1\, \delta q_1 + \dots + Q_m\, \delta q_m,$$
where
$$Q_j = \sum_{i=1}^n \mathbf{F}_i \cdot \frac{\partial \mathbf{r}_i}{\partial q_j}, \quad j = 1, \dots, m,$$
are called the generalized forces associated with the generalized coordinates $q_j$, $j = 1, \dots, m$.
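As a brief worked illustration (the pendulum example below is an addition for clarity, not part of the original derivation), consider a point mass $m$ on a massless rod of length $\ell$, with the angle $\theta$ from the downward vertical as the single generalized coordinate, so that $\mathbf{r} = (\ell \sin\theta,\, -\ell \cos\theta)$ and gravity is $\mathbf{F} = (0,\, -mg)$. Then
$$\frac{\partial \mathbf{r}}{\partial \theta} = (\ell \cos\theta,\ \ell \sin\theta), \qquad Q_\theta = \mathbf{F} \cdot \frac{\partial \mathbf{r}}{\partial \theta} = -mg\,\ell \sin\theta,$$
which is the gravitational torque about the pivot, the generalized force conjugate to $\theta$.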
Velocity formulation
In the application of the principle of virtual work it is often convenient to obtain virtual displacements from the velocities of the system. For the $n$ particle system, let the velocity of each particle $P_i$ be $\mathbf{v}_i$; then the virtual displacement $\delta \mathbf{r}_i$ can also be written in the form
$$\delta \mathbf{r}_i = \sum_{j=1}^m \frac{\partial \mathbf{v}_i}{\partial \dot{q}_j}\, \delta q_j, \quad i = 1, \dots, n.$$
This means that the generalized force, $Q_j$, can also be determined as
$$Q_j = \sum_{i=1}^n \mathbf{F}_i \cdot \frac{\partial \mathbf{v}_i}{\partial \dot{q}_j}, \quad j = 1, \dots, m.$$
D'Alembert's principle
D'Alembert formulated the dynamics of a particle as the equilibrium of the applied forces with an inertia force (apparent force), called D'Alembert's principle. The inertia force of a particle, $P_i$, of mass $m_i$ is
$$\mathbf{F}_i^* = -m_i \mathbf{A}_i, \quad i = 1, \dots, n,$$
where $\mathbf{A}_i$ is the acceleration of the particle.
If the configuration of the particle system depends on the generalized coordinates $q_j$, $j = 1, \dots, m$, then the generalized inertia force is given by
$$Q_j^* = \sum_{i=1}^n \mathbf{F}_i^* \cdot \frac{\partial \mathbf{v}_i}{\partial \dot{q}_j}, \quad j = 1, \dots, m.$$
D'Alembert's form of the principle of virtual work then yields
$$\delta W = \left(Q_1 + Q_1^*\right) \delta q_1 + \dots + \left(Q_m + Q_m^*\right) \delta q_m = 0.$$
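Continuing the illustrative pendulum example from above (again an addition, not part of the original text): with $\mathbf{A} = \ddot{\mathbf{r}}$ one finds the generalized inertia force
$$Q_\theta^* = -m\,\mathbf{A} \cdot \frac{\partial \mathbf{r}}{\partial \theta} = -m \ell^2 \ddot{\theta},$$
so D'Alembert's form $(Q_\theta + Q_\theta^*)\,\delta\theta = 0$ gives $-mg\ell\sin\theta - m\ell^2\ddot{\theta} = 0$, i.e. the familiar equation of motion $\ddot{\theta} + (g/\ell)\sin\theta = 0$.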
See also
Lagrangian mechanics
Generalized coordinates
Degrees of freedom (physics and chemistry)
Virtual work
References
Mechanical quantities
Classical mechanics
Lagrangian mechanics | Generalized forces | [
"Physics",
"Mathematics"
] | 401 | [
"Mechanical quantities",
"Physical quantities",
"Quantity",
"Lagrangian mechanics",
"Classical mechanics",
"Mechanics",
"Dynamical systems"
] |
13,472,059 | https://en.wikipedia.org/wiki/1972%20Great%20Daylight%20Fireball | The Great Daylight Fireball (also known as the Grand Teton Meteor) was an Earth-grazing fireball that passed within of Earth's surface at 20:29 UTC on August 10, 1972. It entered Earth's atmosphere at a speed of in daylight over Utah, United States (14:30 local time) and passed northwards leaving the atmosphere over Alberta, Canada. It was seen by many people and recorded on film and by space-borne sensors. An eyewitness to the event, located in Missoula, Montana, saw the object pass directly overhead and heard a double sonic boom. The smoke trail lingered in the atmosphere for several minutes.
The atmospheric pass modified the object's mass and orbit around the Sun. A 1994 study found that it is probably still in an Earth-crossing orbit and predicted that it would pass close to Earth again in August 1997. However, the object has not been observed again and so its post-encounter orbit remains unknown.
Description
Analysis of its appearance and trajectory showed the object was about in diameter, depending on whether it was a comet made of ice or a stony and therefore denser asteroid. Other sources identified it as an Apollo asteroid in an Earth-crossing orbit that would make a subsequent close approach to Earth in August 1997. In 1994, Czech astronomer Zdeněk Ceplecha reanalysed the data and suggested the passage would have reduced the asteroid's mass to about a third or half of its original mass, reducing its diameter accordingly.
The object was tracked by military surveillance systems and sufficient data obtained to determine its orbit both before and after its 100-second passage through Earth's atmosphere. Its velocity was reduced by about and the encounter significantly changed its orbital inclination from 15 degrees to 7 degrees. If it had not entered at such a grazing angle, this meteoroid would have lost all its velocity in the upper atmosphere, possibly ending in an airburst, and any remnant would have fallen at terminal velocity.
See also
List of asteroid close approaches to Earth
References
Further reading
External links
US19720810 (Daylight Earth grazer) orbital characteristics from Global Superbolide Network Archive, 2000
Fireball, meteorite, bolide, meteor, video and photo link to photos and cine film by Linda Baker
Earthgrazer: The Great Daylight Fireball of 1972 overview of the event including photo by NASA's Astronomy Picture of the Day
Astronomical Society of the Pacific: Observation of Meteoroid Impacts by Space-Based Sensors – one of several similar events; includes ground track
Earth Impact Calculator
Meteoroids
19720810
19720810
Modern Earth impact events
Earth-grazing fireballs
August 1972 events in North America
August 1972 events in the United States
20th-century astronomical events | 1972 Great Daylight Fireball | [
"Astronomy"
] | 555 | [
"Astronomical events",
"20th-century astronomical events"
] |
13,472,894 | https://en.wikipedia.org/wiki/Server%20room | A server room is a room, usually air-conditioned, devoted to the continuous operation of computer servers. An entire building or station devoted to this purpose is a data center.
The computers in server rooms are usually headless systems that can be operated remotely via KVM switch or remote administration software, such as Secure Shell, VNC, and remote desktop.
Climate is one of the factors that affects the energy consumption and environmental impact of a server room. In areas where the climate favours cooling and there is an abundance of renewable electricity, the environmental effects will be more moderate. Thus, countries with favourable conditions, such as Canada, Finland, Sweden, and Switzerland, are trying to attract companies to site server rooms there.
Design considerations
Building a server or computer room requires detailed attention to five main design considerations:
Location
Computer or server room location is the first consideration, even before considering the layout of the room's contents. Most designers agree that, where possible, the computer room should not be built where one of its walls is an exterior wall of the building. Exterior walls can often be quite damp and can contain water pipes that could burst and drench the equipment.
Avoiding exterior windows means avoiding a security risk, and breakages. Avoiding both the top floors and basements means avoiding flooding, and leaks in the case of roofs. Lastly, server rooms should be centrally located because of the horizontal cabling involved which extends from this room to devices in other rooms. If a centralized computer room is not feasible, server closets on each floor may be an option. This is where computer, network and phone equipment are housed in closets and each closet is stacked above each other on the floor that they service.
In addition to the hazards of exterior walls, designers need to evaluate any potential sources of interference in proximity to the computer room. Designing such a room means keeping clear of radio transmitters and electrical interference from power plants or lift rooms, etc.
Other physical design considerations range from room size, door sizes and access ramps (to get equipment in and out) to cable organization, physical security and maintenance access.
Air conditioning
Computer equipment generates heat and is sensitive to heat, humidity, and dust; server rooms also have very high resilience and failover requirements. Maintaining a stable temperature and humidity within tight tolerances is critical to IT system reliability.
In most server rooms "close control air conditioning" systems, also known as PAC (precision air conditioning) systems, are installed. These systems control temperature, humidity and particle filtration within tight tolerances 24 hours a day and can be remotely monitored. They can have built-in automatic alerts when conditions within the server room move outside defined tolerances.
Air conditioning designs for most computer or server rooms will vary depending on various design considerations, but they are generally one of two types: "up-flow" and "down-flow" configurations.
Up-flow air conditioning
This type of air conditioning draws air into the front of the air handler unit (AHU), cools the air over the heat exchanger, then distributes the cooled air out through the top or through duct work. This air conditioning configuration is well suited to retro-fitted computer rooms when raised floors are either of inadequate depth or do not exist at all.
Down-flow air conditioning
Typically, this type of air conditioning unit draws the air into the top of the air handling unit, cools the air over the heat exchanger, then distributes the air out of the bottom into the floor void. This conditioned air is then discharged into the server room via strategically placed floor grilles and onwards to equipment racks. These systems are well suited to new office buildings where the design can encompass raised floors suitable for ducting to computer racks.
Hot aisle / cold aisle
Hot aisle / cold aisle configurations switch the forward direction of every other row so that two rows face each other and have their backs to the next row.
This avoids the hot exhaust of one row of racks being sucked into the cooling intake of an adjacent row. Air conditioning ducts or vents are located between the two fronts since most equipment vents front to rear. A drawback of unenclosed hot aisle / cold aisle configuration is that there is a significant amount of uncontrolled or bypass mixing of hot and cold air outside the equipment.
Aisle containment
In an aisle containment configuration one of the aisles is enclosed with walls, ceilings and access doors to create an enclosed space. Aisle containment does not allow bypass mixing of hot and cold air. This forces all cold to hot air transformation to happen inside the equipment. Careful attention is paid to avoid open rack slots or other air flow leaks to make the front of the rack a continuous wall of the contained aisle.
Liquid cooling and energy efficiency
The adoption of liquid cooling technologies has allowed for highly efficient server room designs. When liquid cooling technologies are applied, server rooms no longer rely on energy-consuming air conditioning systems. Instead, all heat is captured in liquid, which can be rejected with a simple and efficient dry cooler.
Another benefit of using liquid is the potential for heat reuse. Server rooms are slowly becoming part of heating systems, either integrated within the same rooms or connected to the utility space of buildings through a water circuit. This allows the heating installation to use server heat before resorting to alternative means of heating. Temperature-chaining principles are slowly being adopted to generate temperature levels sufficient for reuse scenarios.
Fire protection
The fire protection system's main goal should be to detect and alert of fire in the early stages, then bring fire under control without disrupting the flow of business and without threatening the personnel in the facility.
Server room fire suppression technology has been around for as long as there have been server rooms. Traditionally, most computer rooms used Halon gas, but this has been shown to be environmentally unfriendly (ozone depleting) and unsafe for humans. Modern computer rooms use combinations of inert gases such as nitrogen, argon and carbon dioxide. Other solutions include clean chemical agents such as FM200 and also hypoxic air solutions that keep oxygen levels down. To prevent fires from spreading due to heat generated by data cables and cords, organizations have also used plenum cable coated with FEP tubing, a plastic that reduces heat generation and protects the cable material.
Future-proofing
The demands of server rooms are constantly changing as organizations evolve and grow and as technology changes. An essential part of computer room design is future proofing so that new requirements can be accommodated with minimal effort.
As computing requirements grow, so will a server room's power and cooling requirements. As a rough guide, for every additional 100 kW of equipment installed, a further 30 kW of power is required to cool it. As a result, air conditioning designs will need to have scalability designed in from the outset.
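The rough sizing rule above can be put into a few lines of Python; this is only an illustrative sketch of the 30-kW-per-100-kW rule of thumb quoted here, and the example load figure is made up:

def cooling_overhead_kw(it_load_kw, overhead_ratio=0.30):
    """Estimate extra cooling power from the rule of thumb (30 kW per 100 kW of IT load)."""
    return it_load_kw * overhead_ratio

it_load = 250.0                         # hypothetical IT equipment load in kW
cooling = cooling_overhead_kw(it_load)  # 75.0 kW of additional cooling power
print(f"IT load {it_load:.0f} kW needs about {cooling:.0f} kW of cooling, {it_load + cooling:.0f} kW in total")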
The choice of racks in a server room is usually the prime factor when determining space. Many organisations use telco racks or enclosed cabinets to make the most of the space they have. Today, with servers that are one-rack-unit (1U) high and new blade servers, a single 19- or 23-inch rack can accommodate anywhere from 42 to hundreds of servers.
Redundancy
If the computer systems in a server room are mission critical, removing single points of failure and common-mode failures may be of high importance. The level of desired redundancy is determined by factors such as whether the organisation can tolerate interruption whilst failover systems are activated, or must they be seamless without any business impacts. Other than computer hardware redundancy, the main consideration here is the provisioning of failover power supplies and cooling.
See also
Equipment room
Facility management
Server farm
Datacenter
Wiring closet
Distribution frame
References
Rooms
Room | Server room | [
"Engineering"
] | 1,559 | [
"Rooms",
"Architecture"
] |
13,473,033 | https://en.wikipedia.org/wiki/Bending%20stiffness | The bending stiffness () is the resistance of a member against bending deflection/deformation. It is a function of the Young's modulus , the second moment of area of the beam cross-section about the axis of interest, length of the beam and beam boundary condition. Bending stiffness of a beam can analytically be derived from the equation of beam deflection when it is applied by a force.
where is the applied force and is the deflection. According to elementary beam theory, the relationship between the applied bending moment and the resulting curvature of the beam is:
where is the deflection of the beam and is the distance along the beam. Double integration of the above equation leads to computing the deflection of the beam, and in turn, the bending stiffness of the beam.
Bending stiffness in beams is also known as Flexural rigidity.
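As an illustrative worked case (added here; a cantilever with an end load is assumed, which is only one possible boundary condition), integrating $M(x) = F(L - x)$ twice for a cantilever of length $L$ fixed at $x = 0$ and loaded by a force $F$ at its free end gives the tip deflection and bending stiffness
$$w(L) = \frac{F L^3}{3 E I}, \qquad K = \frac{F}{w(L)} = \frac{3 E I}{L^3},$$
showing explicitly how the stiffness depends on the modulus, the cross-section and the length.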
See also
Applied mechanics
Beam theory
Bending
Stiffness
References
External links
Efunda's beam calculator
Beam theory
Continuum mechanics
Structural analysis | Bending stiffness | [
"Physics",
"Engineering"
] | 208 | [
"Structural engineering",
"Continuum mechanics",
"Structural analysis",
"Classical mechanics",
"Mechanical engineering",
"Aerospace engineering"
] |
13,473,147 | https://en.wikipedia.org/wiki/Lyotropic%20liquid%20crystal | Lyotropic liquid crystals result when amphiphiles, which are both hydrophobic and hydrophilic, dissolve into a solution that behaves both like a liquid and a solid crystal. This liquid crystalline mesophase includes everyday mixtures like soap and water.
The term derives from Greek roots meaning 'to dissolve' (lyo-) and 'change' (-tropic). Historically, the term was used to describe the common behavior of materials composed of amphiphilic molecules upon the addition of a solvent. Such molecules comprise a hydrophilic (literally 'water-loving') head-group (which may be ionic or non-ionic) attached to a hydrophobic ('water-hating') group.
The micro-phase segregation of two incompatible components on a nanometer scale results in different type of solvent-induced extended anisotropic arrangement, depending on the volume balances between the hydrophilic part and hydrophobic part. In turn, they generate the long-range order of the phases, with the solvent molecules filling the space around the compounds to provide fluidity to the system.
In contrast to thermotropic liquid crystals, lyotropic liquid crystals have therefore an additional degree of freedom, that is the concentration that enables them to induce a variety of different phases. As the concentration of amphiphilic molecules is increased, several different type of lyotropic liquid crystal structures occur in solution. Each of these different types has a different extent of molecular ordering within the solvent matrix, from spherical micelles to larger cylinders, aligned cylinders and even bilayered and multiwalled aggregates.
Types of lyotropic systems
Examples of amphiphilic compounds are the salts of fatty acids and phospholipids. Many simple amphiphiles are used as detergents. A mixture of soap and water is an everyday example of a lyotropic liquid crystal.
Biological structures such as fibrous proteins showing relatively long and well-defined hydrophobic and hydrophilic "blocks" of amino acids can also show lyotropic liquid crystalline behaviour.
Amphiphile self-assembly
A typical amphiphilic flexible surfactant can form aggregates through a self-assembly process that results of specific interactions between the molecules of the amphiphilic mesogen and those of the non-mesogenic solvent.
In aqueous media, the driving force of the aggregation is the "hydrophobic effect". The aggregates formed by amphiphilic molecules are characterised by structures in which the hydrophilic head-groups expose their surface to aqueous solution, shielding the hydrophobic chains from contact with water.
For most lyotropic systems aggregation occurs only when the concentration of the amphiphile exceeds a critical concentration (known variously as the critical micelle concentration (CMC) or the critical aggregation concentration (CAC)).
At very low amphiphile concentration, the molecules will be dispersed randomly without any ordering. At slightly higher (but still low) concentration, above the CMC, self-assembled amphiphile aggregates exist as independent entities in equilibrium with monomeric amphiphiles in solution, but with no long ranged orientational or positional (translational) order. As a result, phases are isotropic (i.e. not liquid crystalline). These dispersions are generally referred to as 'micellar solutions', often denoted by the symbol L1, while the constituent spherical aggregates are known as 'micelles'.
At higher concentration, the assemblies will become ordered. True lyotropic liquid crystalline phases are formed as the concentration of amphiphile in water is increased beyond the point where the micellar aggregates are forced to be disposed regularly in space. For amphiphiles that consist of a single hydrocarbon chain the concentration at which the first liquid crystalline phases are formed is typically in the range 25–30 wt%.
Liquid crystalline phases and composition/temperature
The simplest liquid crystalline phase that is formed by spherical micelles is the 'micellar cubic', denoted by the symbol I1. This is a highly viscous, optically isotropic phase in which the micelles are arranged on a cubic lattice. Prior to becoming macroscopic liquid crystals, tactoids are formed, which are liquid crystal microdomains in an isotropic phase. At higher amphiphile concentrations the micelles fuse to form cylindrical aggregates of indefinite length, and these cylinders are arranged on a long-ranged hexagonal lattice. This lyotropic liquid crystalline phase is known as the 'hexagonal phase', or more specifically the 'normal topology' hexagonal phase and is generally denoted by the symbol HI.
At higher concentrations of amphiphile the 'lamellar phase' is formed. This phase is denoted by the symbol Lα and can be considered the lyotropic equivalent of a smectic A mesophase. This phase consists of amphiphilic molecules arranged in bilayer sheets separated by layers of water. Each bilayer is a prototype of the arrangement of lipids in cell membranes.
For most amphiphiles that consist of a single hydrocarbon chain, one or more phases having complex architectures are formed at concentrations that are intermediate between those required to form a hexagonal phase and those that lead to the formation of a lamellar phase. Often this intermediate phase is a bicontinuous cubic phase.
Increasing the amphiphile concentration beyond the point where lamellar phases are formed would lead to the formation of the inverse topology lyotropic phases, namely the inverse cubic phases, the inverse hexagonal columnar phase (columns of water encapsulated by amphiphiles, HII) and the inverse micellar cubic phase (a bulk liquid crystal sample with spherical water cavities). In practice inverse topology phases are more readily formed by amphiphiles that have at least two hydrocarbon chains attached to a headgroup. The most abundant phospholipids that are found in cell membranes of mammalian cells are examples of amphiphiles that readily form inverse topology lyotropic phases.
Even within the same phases, self-assembled structures are tunable by the concentration: For example, in lamellar phases, the layer distances increase with the solvent volume. Since lyotropic liquid crystals rely on a subtle balance of intermolecular interactions, it is more difficult to analyze their structures and properties than those of thermotropic liquid crystals.
The objects created by the amphiphiles are usually spherical (as in the case of micelles), but may also be disc-like (bicelles), rod-like, or biaxial (all three micelle axes are distinct). These anisotropic self-assembled nano-structures can then order themselves in much the same way as thermotropic liquid crystals do, forming large-scale versions of all the thermotropic phases (such as a nematic phase of rod-shaped micelles).
Host molecules
It is possible that specific molecules are dissolved in lyotropic mesophases, where they can be located mainly inside, outside, or at the surface of the aggregates.
Some of such molecules act as dopants, inducing specific properties to the whole phase, other ones can be considered simple guests with limited effect on the surrounding environment but possibly strong consequences on their physico-chemical properties, and some of them are used as probe to detect molecular-level properties of the whole mesophase in specific analytical techniques.
Rod-like macromolecules
The term lyotropic has also been applied to the liquid crystalline phases that are formed by certain polymeric materials, particularly those consisting of rigid rod-like macromolecules, when they are mixed with appropriate solvents. Examples are suspensions of rod-like viruses such as the tobacco mosaic virus as well as synthetic macromolecules, such as Li2Mo6Se6 nanowire or colloidal suspensions of non-spherical colloidal particles. Cellulose and cellulose derivatives form lyotropic liquid crystal phases as do nanocrystalline (nanocellulose) suspensions. Other examples include DNA and Kevlar, which dissolve in sulfuric acid to give a lyotropic phase. It is noted that in these cases the solvent acts to lower the melting point of the materials thereby enabling the liquid crystalline phases to be accessible. These liquid crystalline phases are closer in architecture to thermotropic liquid crystalline phases than to the conventional lyotropic phases. In contrast to the behaviour of amphiphilic molecules, the lyotropic behaviour of the rod-like molecules does not involve self-assembly.
Disk-like macromolecules / Nanosheets
Examples of lyotropic liquid crystals can also be generated using 2D nanosheets. The most striking example of a true nematic phase has been demonstrated for many smectite clays. The issue of the existence of such a lyotropic phase was raised by Langmuir in 1938, but remained an open question for a very long time and was only confirmed recently. With the rapid development of nanosciences, and the synthesis of many new anisotropic 2D nanoparticles, the number of such nematic mesophases based on 2D nanosheets has increased quickly, with, for example, graphene oxide colloidal suspensions.
Notably, a lamellar phase, H3Sb3P2O14, was even discovered which exhibits hyperswelling of the interlamellar distance up to ~250 nm.
References
Further reading
Chemical properties
Phases of matter
Liquid crystals | Lyotropic liquid crystal | [
"Physics",
"Chemistry"
] | 1,996 | [
"nan",
"Phases of matter",
"Matter"
] |
13,473,221 | https://en.wikipedia.org/wiki/Quantum%20bus | A quantum bus is a device which can be used to store or transfer information between independent qubits in a quantum computer, or combine two qubits into a superposition. It is the quantum analog of a classical bus.
There are several physical systems that can be used to realize a quantum bus, including trapped ions, photons, and superconducting qubits. Trapped ions, for example, can use the quantized motion of ions (phonons) as a quantum bus, while photons can act as a carrier of quantum information by utilizing the increased interaction strength provided by cavity quantum electrodynamics. Circuit quantum electrodynamics, which uses superconducting qubits coupled to a microwave cavity on a chip, is another example of a quantum bus that has been successfully demonstrated in experiments.
History
The concept was first demonstrated by researchers at Yale University and the National Institute of Standards and Technology (NIST) in 2007. Prior to this experimental demonstration, the quantum bus had been described by scientists at NIST as one of the possible cornerstone building blocks in quantum computing architectures.
Mathematical description
A quantum bus for superconducting qubits can be built with a resonance cavity. The Hamiltonian for a system with qubit A, qubit B, and the resonance cavity or quantum bus connecting the two is
$$H = \hbar \omega_r \left(a^\dagger a + \tfrac{1}{2}\right) + \sum_{j=A,B}\left[ H_j + \hbar g_j \left(a^\dagger \sigma_j^- + a\,\sigma_j^+\right)\right],$$
where $H_j = \tfrac{1}{2}\hbar\omega_j \sigma_j^z$ is the single qubit Hamiltonian, $\sigma_j^\pm$ is the raising or lowering operator for creating or destroying excitations in the $j$th qubit, and $\omega_j$ is controlled by the amplitude of the D.C. and radio frequency flux bias.
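A minimal numerical sketch of a Hamiltonian of this kind is given below; it assumes a Jaynes–Cummings-type coupling of two qubits to a single cavity mode truncated to a few photon states, and the frequencies and coupling strengths are illustrative values rather than parameters from any specific device:

import numpy as np

def destroy(n_max):
    """Annihilation operator for a cavity mode truncated to n_max Fock states."""
    return np.diag(np.sqrt(np.arange(1, n_max)), k=1)

def kron3(x, y, z):
    """Tensor product ordered as cavity (x) ⊗ qubit A (y) ⊗ qubit B (z)."""
    return np.kron(np.kron(x, y), z)

def build_hamiltonian(w_r, w_a, w_b, g_a, g_b, n_max=5):
    """Jaynes-Cummings-type bus Hamiltonian (hbar = 1) for two qubits sharing one cavity."""
    a = destroy(n_max)                   # cavity lowering operator
    sm = np.array([[0., 1.], [0., 0.]])  # qubit lowering operator sigma^-
    sz = np.array([[1., 0.], [0., -1.]])
    i_c, i_q = np.eye(n_max), np.eye(2)
    H = w_r * kron3(a.T @ a, i_q, i_q)                          # cavity photon energy
    H += 0.5 * w_a * kron3(i_c, sz, i_q)                        # qubit A splitting
    H += 0.5 * w_b * kron3(i_c, i_q, sz)                        # qubit B splitting
    H += g_a * (kron3(a.T, sm, i_q) + kron3(a, sm.T, i_q))      # qubit A <-> cavity exchange
    H += g_b * (kron3(a.T, i_q, sm) + kron3(a, i_q, sm.T))      # qubit B <-> cavity exchange
    return H

H = build_hamiltonian(w_r=1.0, w_a=0.9, w_b=0.95, g_a=0.05, g_b=0.05)  # illustrative values
print(H.shape, np.allclose(H, H.T))  # (20, 20) True -- Hermitian as required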
References
Quantum information science
Quantum electronics | Quantum bus | [
"Physics",
"Materials_science"
] | 325 | [
"Quantum electronics",
"Quantum mechanics",
"Condensed matter physics",
"Nanotechnology",
"Quantum physics stubs"
] |
13,473,292 | https://en.wikipedia.org/wiki/Biochemical%20Society%20Transactions | Biochemical Society Transactions is a bimonthly peer-reviewed scientific journal which publishes the transactions of the annual conference and focused meetings of the Biochemical Society, together with independent meetings supported by the society. The society's annual symposium, previously published only in Biochemical Society Symposium, was first published in the Transactions in 2008. The journal was established in 1973 and is published by Portland Press, the Society's publishing arm.
The journal was issued quarterly until 1999. Since 2004, issues have been made up entirely of full papers, having previously alternated between an issue of abstracts and an issue of full papers. Transactions take the form of short papers, usually of 3–4 pages; the journal also publishes longer papers from the society's award lectures.
Since 2005, David J. Richardson (University of East Anglia) has been honorary editor; as of 2020, the editor-in-chief is Colin Bingle. According to the Journal Citation Reports, the journal has a 2020 impact factor of 6.5.
References
External links
Delayed open access journals
Academic journals established in 1973
Biochemistry journals
Bimonthly journals
Academic journals published by learned and professional societies
English-language journals | Biochemical Society Transactions | [
"Chemistry"
] | 233 | [
"Biochemistry stubs",
"Biochemistry journals",
"Biochemistry literature",
"Biochemistry journal stubs"
] |
13,473,328 | https://en.wikipedia.org/wiki/SharePoint | SharePoint is a collection of enterprise content management and knowledge management tools developed by Microsoft. Launched in 2001, it was initially bundled with Windows Server as Windows SharePoint Server, then renamed to Microsoft Office SharePoint Server, and then finally renamed to SharePoint. It is provided as part of Microsoft 365, but can also be configured to run as on-premises software.
According to Microsoft, SharePoint had over 200 million users.
Applications
The most common uses of the SharePoint include:
Enterprise content and document management
SharePoint allows for storage, retrieval, searching, archiving, tracking, management, and reporting on electronic documents and records. Many of the functions in this product are designed around various legal, information management, and process requirements in organizations. SharePoint also provides search and 'graph' functionality. SharePoint's integration with Microsoft Windows and Microsoft 365 (previously known as Office) allows for collaborative real-time editing, and encrypted/information rights managed synchronization.
This capability is often used to replace an existing corporate file server, and is typically coupled with an enterprise content management policy.
Intranet and social network
A SharePoint intranet or intranet portal is a way to centralize access to enterprise information and applications. It is a tool that helps an organization manage its internal communications, applications and information more easily. Microsoft claims that this has organizational benefits such as increased employee engagement, centralizing process management, reducing new staff on-boarding costs, and providing the means to capture and share tacit knowledge (e.g. via tools such as wikis, media libraries, etc.).
Group collaboration
SharePoint contains team collaboration groupware capabilities, including: document management, project scheduling (integrated with Outlook and Project), and other information tracking. This capability is centred around the concept of a "Team Site". Team sites can be independent, or linked to a Microsoft Teams team.
File hosting service (personal cloud)
SharePoint hosts OneDrive for Business, which allows storage and synchronization of an individual's personal work documents, as well as public/private file sharing of those documents.
Custom web applications
SharePoint's custom development capabilities provide an additional layer of services that allow rapid prototyping of integrated (typically line-of-business) web applications. SharePoint provides developers with integration into corporate directories and data sources through standards such as REST/OData/OAuth. Enterprise application developers use SharePoint's security and information management capabilities across a variety of development platforms and scenarios. SharePoint also contains an enterprise "app store" that has different types of external applications which are encapsulated and managed to access to resources such as corporate user data and document data.
Configuration and customization
Web-based configuration
SharePoint is primarily configured through a web browser. The web-based user interface provides most of the configuration capability of the product.
SharePoint Designer
SharePoint Designer is a semi-deprecated product that provided 'advanced editing' capabilities for HTML/ASPX pages, but remains the primary method of editing SharePoint workflows.
A significant subset of HTML editing features were removed in Designer 2013, and the product is expected to be deprecated in 2016–7.
Microsoft SharePoint's Server Features are configured either using PowerShell, or a Web UI called "Central Administration". Configuration of server farm settings (e.g. search crawl, web application services) can be handled through these central tools.
While Central Administration is limited to farm-wide settings (config DB), it provides access to tools such as the 'SharePoint Health Analyzer', a diagnostic health-checking tool.
In addition to PowerShell's farm configuration features, some limited tools are made available for administering or adjusting settings for sites or site collections in content databases.
A limited subset of these features is available from SharePoint's SaaS providers, including Microsoft.
Custom development
The SharePoint Framework (SPFx) provides a development model based on the TypeScript language. The technical stack is Node.js, Yeoman, Gulp, NPM, and Webpack. It is the only supported way to customize the new modern experience user interface (UI). It has been globally available since mid 2017. It allows a web developer to step into SharePoint development more easily.
The SharePoint "App Model", later renamed to the "Add-in model" provides various types of external applications that offer the capability to show authenticated web-based applications through a variety of UI mechanisms. Apps may be either "SharePoint-hosted", or "Provider-hosted". Provider hosted apps may be developed using most back-end web technologies (e.g. ASP.NET, Node.js, PHP). Apps are served through a proxy in SharePoint, which requires some DNS/certificate manipulation in on-premises versions of SharePoint. Microsoft announced the retirement of the Add-in model in November 2023 with an end-of-life date set to April 2026).
The SharePoint "Client Object Model" (available for JavaScript and .NET), and REST/SOAP APIs can be referenced from many environments, providing authenticated users access to a wide variety of SharePoint capabilities.
"Sand-boxed" plugins can be uploaded by any end-user who has been granted permission. These are security-restricted, and can be governed at multiple levels (including resource consumption management). In multi-tenant cloud environments, these are the only customizations that are typically allowed.
Farm features are typically fully trusted code that need to be installed at a farm-level. These are considered deprecated for new development.
Service applications: It is possible to integrate directly into the SharePoint SOA bus, at a farm level.
Customization may appear through:
Application-to-application integration with SharePoint.
Extensions to SharePoint functionality (e.g. custom workflow actions).
'Web Parts' (also known as "portlets", "widgets", or "gadgets") that provide new functionality when added to a page.
Pages/sites or page/site templates.
Server architecture
SharePoint Server can be scaled down to operate entirely from one developer machine, or scaled up to be managed across hundreds of machines.
Farms
A SharePoint farm is a logical grouping of SharePoint servers that share common resources. A farm typically operates stand-alone, but can also subscribe to functions from another farm, or provide functions to another farm. Each farm has its own central configuration database, which is managed through either a PowerShell interface, or a Central Administration website (which relies partly on PowerShell's infrastructure). Each server in the farm is able to directly interface with the central configuration database. Servers use this to configure services (e.g. IIS, windows features, database connections) to match the requirements of the farm, and to report server health issues, resource allocation issues, etc...
Web applications
Web applications (WAs) are top-level containers for content in a SharePoint farm. A web application is associated primarily with IIS configuration. A web application consists of a set of access mappings or URLs defined in the SharePoint central management console, which are replicated by SharePoint across every IIS Instance (e.g. Web Application Servers) configured in the farm.
Site collections
A site collection is a hierarchical group of 'SharePoint Sites'. Each web application must have at least one site collection. Site collections share common properties, common subscriptions to service applications, and can be configured with unique host names. A site collection may have a distinct content database, or may share a content database with other site collections in the same web application.
Service applications
Service applications provide granular pieces of SharePoint functionality to other web and service applications in the farm. Examples of service applications include the User Profile Sync service, and the Search Indexing service. A service application can be turned off, exist on one server, or be load-balanced across many servers in a farm. Service Applications are designed to have independent functionality and independent security scopes.
Administration, security, compliance
SharePoint's architecture enables a 'least-privileges' execution permission model.
SharePoint Central Administration (the CA) is a web application that typically exists on a single server in the farm; however, it is also able to be deployed for redundancy to multiple servers. This application provides a complete centralized management interface for web and service applications in the SharePoint farm, including Active Directory account management for web and service applications. In the event of the failure of the CA, Windows PowerShell is typically used on the CA server to reconfigure the farm.
The structure of the SharePoint platform enables multiple WAs to exist on a single farm. In a shared (cloud) hosting environment, owners of these WAs may require their own management console. The SharePoint 'Tenant Administration' (TA) is an optional web application used by web application owners to manage how their web application interacts with the shared resources in the farm.
History
Origins
SharePoint evolved from projects codenamed "Office Server" and "Tahoe" during the Office XP development cycle.
"Office Server" evolved out of the FrontPage and Office Server Extensions and "Team Pages". It targeted simple, bottom-up collaboration.
"Tahoe", built on shared technology with Exchange and the "Digital Dashboard", targeted top-down portals, search and document management. The searching and indexing capabilities of SharePoint came from the "Tahoe" feature set. The search and indexing features were a combination of the index and crawling features from the Microsoft Site Server family of products and from the query language of Microsoft Index Server.
The GAC (Global Assembly Cache) is used to accommodate the shared assemblies that are specifically designated to be shared by applications executed on a system.
See also
Enterprise portal
List of collaborative software
List of content management systems
References
External links
SharePoint Roadmap
2001 software
Content management systems
Document management systems
Information management
Portal software
Proprietary database management systems
Proprietary wiki software
Records management technology
Microsoft Office servers
Android (operating system) software | SharePoint | [
"Technology"
] | 2,077 | [
"Information systems",
"Information management"
] |
13,474,394 | https://en.wikipedia.org/wiki/Burt%20strut | A Burt strut, also known as a timing strut or beam splitter, is a black, rectangular plate attached to the front of a competition vehicle, usually a racing car, to provide a standardised, repeatable method by which to break a timing light beam at the start and finish of events timed to high-degrees of accuracy. These events are commonly those in which competitors race against the clock, rather than physically against another vehicle, such as sprint or hillclimb races. The strut was invented in 1967 by Ron Smith; manager, chief mechanic and future husband to 1970 British sprint champion Patsy Burt. As the strut made its first appearance on the front of Burt's McLaren-Oldsmobile her name was used as its official title. In recent years the generic term timing strut has also become common.
The Burt strut was introduced to replace previous timing mechanisms, whereby a chock with a sensor was placed behind the rear wheel of a car at the start. Due to the chock being related to the rear of the car at the start, and readings being taken from the front of the car at the finish, the degree of precision within which cars could be timed was limited. As most British hillclimb courses are somewhat less than 1500 yards (1372 m) long, it is not uncommon for competitors' times to be separated by only a few hundredths of a second. The introduction of more accurate light beam timing required that all cars provide a consistent surface with which to break the beam at both start and finish of the timed section. However, owing to the variable shape of vehicles and inconsistencies in the placing of the beam sensors, this was not a simple condition to meet before the introduction of the Burt strut.
The Burt strut has since been made compulsory in most national and international timed sprint events. Within the United Kingdom the rules governing the size and position of the strut are determined by the Royal Automobile Club Motor Sports Association. The strut is currently defined in the MSA Competitors' Yearbook (the Blue Book) regulations document as being a single rectangular plate, painted matt black on both sides, no less than in height and in width, the lower edge of which should be mounted not more than from the ground, with the upper edge being at least from the ground. The strut must be the most forward part of the vehicle. There are no restrictions on the material used to make the strut, so long as the strut itself conforms to the regulations, and some designs can be very simple to construct.
References
Motorsport terminology
Auto racing equipment
Hillclimbing
Timekeeping | Burt strut | [
"Physics"
] | 533 | [
"Spacetime",
"Timekeeping",
"Physical quantities",
"Time"
] |
13,474,567 | https://en.wikipedia.org/wiki/Block%20Error%20Rate | Block Error Rate (BLER) is a ratio of the number of erroneous blocks to the total number of blocks ansmitted on a digital circuit.
It is used in measuring the error rate when extracting data frames from a Compact Disc (CD). The BLER measurement is often used as a quality control measure with regards to how well audio is retained on a compact disc over time.
BLER is also used for W-CDMA performance requirements tests (demodulation tests in multipath conditions, etc.). BLER is measured after channel de-interleaving and decoding by evaluating the Cyclic Redundancy Check (CRC) on each transport block.
Block Error Rate (BLER) is used in LTE/4G technology to determine the in-sync or out-of-sync indication during radio link monitoring (RLM). Normal BLER is 2% for an in-sync condition and 10% for an out-of-sync condition.
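A minimal sketch (not from the original article) of how BLER might be computed from per-block CRC results and compared against the in-sync/out-of-sync thresholds mentioned above; the CRC result list is made up for illustration.

```python
# Minimal sketch: compute Block Error Rate (BLER) from per-transport-block CRC
# results and classify against the LTE radio-link-monitoring thresholds quoted
# in the text (2% in-sync, 10% out-of-sync).  The crc_results data is made up.
def block_error_rate(crc_results):
    """crc_results: iterable of booleans, True if the block's CRC check passed."""
    results = list(crc_results)
    if not results:
        raise ValueError("no blocks received")
    errors = sum(1 for ok in results if not ok)
    return errors / len(results)

crc_results = [True] * 97 + [False] * 3        # hypothetical: 3 errors in 100 blocks
bler = block_error_rate(crc_results)
print(f"BLER = {bler:.2%}")

if bler <= 0.02:
    print("in-sync")
elif bler >= 0.10:
    print("out-of-sync")
else:
    print("between thresholds")
```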
References
Compact disc
Audio software | Block Error Rate | [
"Engineering"
] | 211 | [
"Audio engineering",
"Audio software"
] |
13,474,685 | https://en.wikipedia.org/wiki/Ground-effect%20vehicle | A ground-effect vehicle (GEV), also called a wing-in-ground-effect (WIGE or WIG), ground-effect craft/machine (GEM), wingship, flarecraft, surface effect vehicle or ekranoplan (), is a vehicle that is able to move over the surface by gaining support from the reactions of the air against the surface of the earth or water. Typically, it is designed to glide over a level surface (usually over the sea) by making use of ground effect, the aerodynamic interaction between the moving wing and the surface below. Some models can operate over any flat area such as frozen lakes or flat plains similar to a hovercraft. The term Ground-Effect Vehicle originally referred to any craft utilizing ground effect, including what is known later as hovercraft, in descriptions of patents during the 1950s. However, this term is nowadays regarded as distinct from air-cushion vehicles or hovercraft. The definition of GEVs does not include racecars utilizing ground-effect for increasing downforce.
Design
A ground-effect vehicle needs some forward velocity to produce lift dynamically, and the principal benefit of operating a wing in ground effect is to reduce its lift-dependent drag. The basic design principle is that the closer the wing operates to an external surface such as the ground, when it is said to be in ground effect, the less drag it experiences.
An airfoil passing through air increases air pressure on the underside, while decreasing pressure across the top. The high and low pressures are maintained until they flow off the ends of the wings, where they form vortices which in turn are the major cause of lift-induced drag—normally a significant portion of the drag affecting an aircraft. The greater the span of a wing, the less induced drag created for each unit of lift and the greater the efficiency of the particular wing. This is the primary reason gliders have long wings.
Placing the same wing near a surface such as the water or the ground has the same effect as increasing the aspect ratio because the ground prevents wingtip vortices from expanding, but without having the complications associated with a long and slender wing, so that the short stubs on a GEV can produce just as much lift as the much larger wing on a transport aircraft, though it can do this only when close to the earth's surface. Once sufficient speed has built up, some GEVs may be capable of leaving ground effect and functioning as normal aircraft until they approach their destination. The distinguishing characteristic is that they are unable to land or take off without a significant amount of help from the ground effect cushion, and cannot climb until they have reached a much higher speed.
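As an illustrative sketch (not drawn from the original article), the reduction in lift-induced drag near the ground can be estimated with the classical lifting-line formula together with an empirical ground-effect factor; the McCormick-style factor and all numerical values below are assumptions for illustration only, not design data.

```python
# Illustrative sketch: estimated reduction of lift-induced drag near the ground.
# Uses the classical induced-drag formula C_Di = C_L^2 / (pi * e * AR) and a
# commonly cited empirical ground-effect factor (attributed to McCormick),
# phi = (16 h/b)^2 / (1 + (16 h/b)^2).  All numbers are placeholder assumptions.
import math

def induced_drag_coeff(CL, AR, e=0.8):
    """Out-of-ground-effect induced drag coefficient."""
    return CL**2 / (math.pi * e * AR)

def ground_effect_factor(h, b):
    """Approximate factor multiplying induced drag at height h above the
    surface for a wing of span b (h and b in the same units)."""
    x = (16.0 * h / b) ** 2
    return x / (1.0 + x)

CL, AR, b = 0.8, 3.0, 10.0          # hypothetical lift coefficient, aspect ratio, span (m)
for h in (0.5, 1.0, 2.0, 5.0):      # heights above the surface (m)
    CDi = induced_drag_coeff(CL, AR) * ground_effect_factor(h, b)
    print(f"h = {h:4.1f} m  ->  C_Di ≈ {CDi:.4f}")
```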
A GEV is sometimes characterized as a transition between a hovercraft and an aircraft, although this is not correct as a hovercraft is statically supported upon a cushion of pressurized air from an onboard downward-directed fan. Some GEV designs, such as the Russian Lun and Dingo, have used forced blowing under the wing by auxiliary engines to increase the high pressure area under the wing to assist the takeoff; however they differ from hovercraft in still requiring forward motion to generate sufficient lift to fly.
Although the GEV may look similar to the seaplane and share many technical characteristics, it is generally not designed to fly out of ground effect. It differs from the hovercraft in lacking low-speed hover capability in much the same way that a fixed-wing airplane differs from the helicopter. Unlike the hydrofoil, it does not have any contact with the surface of the water when in "flight". The ground-effect vehicle constitutes a unique class of transportation.
The Boston-based (United States) company REGENT proposed an electric-powered high-wing design with a standard hull for water operations that also incorporates fore- and aft-mounted hydrofoil units designed to lift the craft out of the water during the takeoff run, facilitating lower liftoff speeds.
Wing configurations
Straight wing
Used by the Russian Rostislav Alexeyev for his ekranoplan. The wings are significantly shorter than those of comparable aircraft, and this configuration requires a high aft-placed horizontal tail to maintain stability. The pitch and altitude stability comes from the lift slope difference between a front low wing in ground-effect (commonly the main wing) and an aft, higher-located second wing nearly out of ground-effect (generally named a stabilizer).
Reverse-delta wing
Developed by Alexander Lippisch, this wing allows stable flight in ground-effect through self-stabilization. This is the main Class B form of GEV. Hanno Fischer later developed WIG craft based on the configuration, which were then transferred to multiple companies in Asia, thus becoming one of the "standards" in GEV design.
Tandem wings
Tandem wings can have three configurations:
A biplane-style type-1 utilising a shoulder-mounted main lift wing and belly-mounted sponsons similar to those on combat and transport helicopters.
A canard-style type-2 with a mid-size horizontal wing near the nose of the craft directing airflow under the main lift airfoil. This type-2 tandem design is a major improvement during takeoff, as it creates an air cushion to lift the craft above the water at a lower speed, thereby reducing water drag, which is the biggest obstacle to successful seaplane launches.
Two stubby wings as in the tandem-airfoil flairboat produced by Günther Jörg in Germany. His particular design is self-stabilizing longitudinally.
Advantages and disadvantages
Given similar hull size and power, and depending on its specific design, the lower lift-induced drag of a GEV, as compared to an aircraft of similar capacity, will improve its fuel efficiency and, up to a point, its speed. GEVs are also much faster than surface vessels of similar power, because they avoid drag from the water.
On the water the aircraft-like construction of GEVs increases the risk of damage in collisions with surface objects. Furthermore, the limited number of egress points make it more difficult to evacuate the vehicle in an emergency. According to WST, the builders of the WIG craft WSH-500, GEVs furthermore have the advantage of avoiding conflict with ocean currents by flying over them.
Since most GEVs are designed to operate from water, accidents and engine failure typically are less hazardous than in a land-based aircraft, but the lack of altitude control leaves the pilot with fewer options for avoiding collision, and to some extent that negates such benefits. Low altitude brings high-speed craft into conflict with ships, buildings and rising land, which may not be sufficiently visible in poor conditions to avoid. GEVs may be unable to climb over or turn sharply enough to avoid collisions, while drastic, low-level maneuvers risk contact with solid or water hazards beneath. Aircraft can climb over most obstacles, but GEVs are more limited.
In high winds, take-off must be into the wind, which takes the craft across successive lines of waves, causing heavy pounding, stressing the craft and creating an uncomfortable ride. In light winds, waves may be in any direction, which can make control difficult as each wave causes the vehicle to both pitch and roll. The lighter construction of GEVs makes their ability to operate in higher sea states less than that of conventional ships, but greater than the ability of hovercraft or hydrofoils, which are closer to the water surface.
Like conventional aircraft, greater power is needed for takeoff, and, like seaplanes, ground-effect vehicles must get on the step before they can accelerate to flight speed. Careful design, usually with multiple redesigns of hullforms, is required to get this right, which increases engineering costs. This obstacle is more difficult for GEVs with short production runs to overcome. For the vehicle to work, its hull needs to be stable enough longitudinally to be controllable yet not so stable that it cannot lift off the water.
The bottom of the vehicle must be formed to avoid excessive pressures on landing and taking off without sacrificing too much lateral stability, and it must not create too much spray, which damages the airframe and the engines. The Russian ekranoplans show evidence of fixes for these problems in the form of multiple chines on the forward part of the hull undersides and in the forward location of the jet engines.
Finally, limited utility has kept production levels low enough that it has been impossible to amortize development costs sufficiently to make GEVs competitive with conventional aircraft.
A 2014 study by students at NASA's Ames Research Center claims that use of GEVs for passenger travel could lead to cheaper flights, increased accessibility and less pollution.
Classification
One obstacle to GEV development is the classification and legislation to be applied. The International Maritime Organization has studied the application of rules based on the International Code of Safety for High-Speed Craft (HSC code) which was developed for fast ships such as hydrofoils, hovercraft, catamarans and the like. The Russian Rules for classification and construction of small type A ekranoplans is a document upon which most GEV design is based. However, in 2005, the IMO classified the WISE or GEV under the category of ships.
The International Maritime Organization recognizes three types of GEVs:
At the time of writing, those classes only applied to craft carrying 12 passengers or more, and (as of 2019) there was disagreement between national regulatory agencies about whether these vehicles should be classified, and regulated, as aircraft or as boats.
History
By the 1920s, the ground effect phenomenon was well-known, as pilots found that their airplanes appeared to become more efficient as they neared the runway surface during landing. In 1934 the US National Advisory Committee for Aeronautics issued Technical Memorandum 771, Ground Effect on the Takeoff and Landing of Airplanes, which was a translation into English of a summary of French research on the subject. The French author Maurice Le Sueur had added a suggestion based on this phenomenon: "Here the imagination of inventors is offered a vast field. The ground interference reduces the power required for level flight in large proportions, so here is a means of rapid and at the same time economic locomotion: Design an airplane which is always within the ground-interference zone. At first glance this apparatus is dangerous because the ground is uneven and the altitude called skimming permits no freedom of maneuver. But on large-sized aircraft, over water, the question may be attempted ..."
By the 1960s, the technology started maturing, in large part due to the independent contributions of Rostislav Alexeyev in the Soviet Union and German Alexander Lippisch, working in the United States. Alexeyev worked from his background as a ship designer whereas Lippisch worked as an aeronautical engineer. The influence of Alexeyev and Lippisch remains noticeable in most GEVs seen today.
Canada
The research hydrofoil HD-4 by Alexander Graham Bell is said to have derived part of its dynamic lift from its pair of wings operating in ground effect. However, it is doubtful whether the designer was aware of the effect, given the relative infancy of aerodynamics at the time.
Avro Canada investigated aircraft with a Coanda-effect propulsion system. Such jets were intended to create an air cushion below the airframe that would allow the aircraft to hover near the ground. In fact, because of stability issues during takeoff, this was the only mode in which the sole test aircraft built could operate. The designs were later further developed in the United States, and Convair may have been inspired by them when it produced a preliminary design of a large ocean-going ground-effect ship called the Hydroskimmer.
Soviet Union
Led by Alexeyev, the Soviet Central Hydrofoil Design Bureau () was the center of ground-effect craft development in the USSR. The vehicle came to be known as an ekranoplan (, экран screen + план plane, from , literally screen effect, or ground effect in English). The military potential for such a craft was soon recognized, and Alexeyev received support and financial resources from Soviet leader Nikita Khrushchev.
Some manned and unmanned prototypes were built, ranging up to eight tonnes in displacement. This led to the development of a 550-tonne military ekranoplan of length. The craft was dubbed the Caspian Sea Monster by U.S. intelligence experts, after a huge, unknown craft was spotted on satellite reconnaissance photos of the Caspian Sea area in the 1960s. With its short wings, it looked airplane-like in planform, but would probably be incapable of flight. Although it was designed to travel a maximum of above the sea, it was found to be most efficient at , reaching a top speed of in research flights.
The Soviet ekranoplan program continued with the support of Minister of Defence Dmitriy Ustinov. It produced the most successful ekranoplan so far, the 125-tonne A-90 Orlyonok. These craft were originally developed as high-speed military transports and were usually based on the shores of the Caspian Sea and Black Sea. The Soviet Navy ordered 120 Orlyonok-class ekranoplans, but this figure was later reduced to fewer than 30 vessels, with planned deployment mainly in the Black Sea and Baltic Sea fleets.
A few Orlyonoks served with the Soviet Navy from 1979 to 1992. In 1987, the 400-tonne Lun-class ekranoplan was built as an anti-ship missile launch platform. A second Lun, renamed Spasatel, was laid down as a rescue vessel, but was never finished. The two major problems that the Soviet ekranoplans faced were poor longitudinal stability and a need for reliable navigation.
Minister Ustinov died in 1984, and the new Minister of Defence, Marshal Sokolov, cancelled funding for the program. Only three operational Orlyonok-class ekranoplans (with revised hull design) and one Lun-class ekranoplan remained at a naval base near Kaspiysk.
Since the dissolution of the Soviet Union, ekranoplans have been produced by the Volga Shipyard in Nizhniy Novgorod. Smaller ekranoplans for non-military use have been under development. The CHDB had already developed the eight-seat Volga-2 in 1985, and Technologies and Transport is developing a smaller version called the Amphistar. Beriev proposed a large craft of the type, the Be-2500, as a "flying ship" cargo carrier, but nothing came of the project.
United States of America
During the 1950s, the US Navy investigated anti-submarine vessels operating on the ram effect, a product of ground effect. Such vessels were to use this effect to create an air cushion below the hulls that would allow hovering; where this was not possible, additional engines were to be used to blow air underneath the craft artificially. The project was designated RAM-2. Several other projects were proposed throughout the early Cold War, some using a similar mix of wings and lift engines while others were more akin to Russian types. More than a decade later, General Dynamics designed catamaran vessels using ground effect and filed patents for them.
Germany
Lippisch Type and Hanno Fischer
In Germany, Lippisch was asked to build a very fast boat for American businessman Arthur A. Collins. In 1963 Lippisch developed the X-112, a revolutionary design with reversed delta wing and T-tail. This design proved to be stable and efficient in ground effect, and even though it was successfully tested, Collins decided to stop the project and sold the patents to the German company Rhein Flugzeugbau (RFB), which further developed the inverse delta concept into the X-113 and the six-seat X-114. These craft could be flown out of ground effect so that, for example, peninsulas could be overflown.
Hanno Fischer took over the works from RFB and created his own company, Fischer Flugmechanik, which eventually completed two models. The Airfisch 3 carried two persons, and the FS-8 carried six persons. The FS-8 was to be developed by Fischer Flugmechanik for a Singapore-Australian joint venture called Flightship. Powered by a V8 Chevrolet automobile engine rated at 337 kW, the prototype made its first flight in February 2001 in the Netherlands. The company no longer exists but the prototype craft was bought by Wigetworks, a company based in Singapore and renamed as AirFish 8. In 2010, that vehicle was registered as a ship in the Singapore Registry of Ships.
The University of Duisburg-Essen is supporting an ongoing research project to develop the Hoverwing.
Günther Jörg-type tandem-airfoil flairboat
German engineer Günther Jörg, who had worked on Alexeyev's first designs and was familiar with the challenges of GEV design, developed a GEV with two wings in a tandem arrangement, the Jörg-II. It was the third, manned, tandem-airfoil boat, named "Skimmerfoil", which was developed during his consultancy period in South Africa. It was a simple and low-cost design of a first 4-seater tandem-airfoil flairboat completely constructed of aluminium. The prototype was in the SAAF Port Elizabeth Museum from 4 July 2007 until 2013, and is now in private use. Pictures of the museum show the boat after some years outside the museum and without protection against the sun.
The consultancy of Günther Jörg, a specialist and insider of German airplane industry from 1963 and a colleague of Alexander Lippisch and Hanno Fischer, was founded with a fundamental knowledge of wing in ground effect physics, as well as results of fundamental tests under different conditions and designs having begun in 1960. For over 30 years, Jörg built and tested 15 different tandem-airfoil flairboats in different sizes and made of different materials.
The following tandem-airfoil flairboat (TAF) types had been built after a previous period of nearly 10 years of research and development:
TAB VII-3: First manned tandem W.I.G type Jörg, being built at Technical University of Darmstadt, Akaflieg
TAF VII-5: Second manned tandem-airfoil Flairboat, 2 seater made of wood
TAF VIII-1: 2-seater tandem-airfoil flairboat built of glass-reinforced plastic (GRP) and aluminium. A small series of 6 Flairboats had been produced by the former Botec Company
TAF VIII-2: 4-seater tandem-airfoil Flairboat built of full aluminium (2 units) and built of GRP (3 units)
TAF VIII-3: 8-seater tandem-airfoil Flairboat built of aluminium combined with GRP parts
TAF VIII-4: 12-seater tandem-airfoil Flairboat built of aluminium combined with GRP parts
TAF VIII-3B: 6-seater tandem-airfoil flairboat under carbon fibre composite construction
Bigger concepts are: 25-seater, 32-seater, 60-seater, 80-seater and bigger up to the size of a passenger airplane.
1980-1999
Since the 1980s GEVs have been primarily smaller craft designed for the recreational and civilian ferry markets. Germany, Russia and the United States have provided most of the activity with some development in Australia, China, Japan, Korea and Taiwan. In these countries and regions, small craft with up to ten seats have been built. Other larger designs such as ferries and heavy transports have been proposed but have not been carried to completion.
Besides the development of appropriate design and structural configuration, automatic control and navigation systems have been developed. These include high-accuracy altimeters for low-altitude flight with less dependence on weather conditions. "Phase radio altimeters" have become the choice for such applications, outperforming laser, isotropic and ultrasonic altimeters.
With Russian consultation, the United States Defense Advanced Research Projects Agency (DARPA) studied the Aerocon Dash 1.6 wingship.
Universal Hovercraft developed a flying hovercraft, first flying a prototype in 1996. Since 1999, the company has offered plans, parts, kits and manufactured ground effect hovercraft called the Hoverwing.
2000-2019
Iran deployed three squadrons of Bavar 2 two-seat GEVs in September 2010. This GEV carries one machine gun and surveillance gear, and incorporates features to reduce its radar signature. In October 2014, satellite images showed the GEV in a shipyard in southern Iran. The GEV has two engines and no armament.
In Singapore, Wigetworks obtained certification from Lloyd's Register for entry into class. On 31 March 2011, AirFish 8-001 became one of the first GEVs to be flagged with the Singapore Registry of Ships, one of the largest ship registries. Wigetworks partnered with National University of Singapore's Engineering Department to develop higher capacity GEVs.
Burt Rutan in 2011 and Korolev in 2015 showed GEV projects.
In Korea, Wing Ship Technology Corporation developed and tested a 50-seat passenger GEV named the WSH-500 in 2013.
Estonian transport company Sea Wolf Express planned to launch passenger service in 2019 between Helsinki and Tallinn, a distance of 87 km taking only half an hour, using a Russian-built ekranoplan. The company ordered 15 ekranoplans with maximum speed of 185 km/h and capacity of 12 passengers, built by Russian RDC Aqualines.
2020-
In 2021 Brittany Ferries announced that they were looking into using REGENT (Regional Electric Ground Effect Naval Transport) ground effect craft "seagliders" for cross English Channel services. Southern Airways Express also placed firm orders for seagliders with intent to operate them along Florida's east coast.
Around mid-2022, the US Defense Advanced Research Projects Agency (DARPA) launched its Liberty Lifter project, with the goal of creating a low-cost seaplane that would use the ground-effect to extend its range. The program aims to carry 90 tons over , operate at sea without ground-based maintenance, all using low-cost materials.
In May 2024, Ocean Glider announced a deal with UK-based investor MONTE to finance $145m of a $700m deal to begin operating 25 REGENT seagliders between destinations in New Zealand. The order includes 15 12-seater Viceroys and 10 100-seater Monarchs.
See also
Aerodynamically alleviated marine vehicle
Flying Platform
Ground effect (aerodynamics)
Ground-effect train
Hovercraft
List of ground-effect vehicles
Surface effect ship
Caspian Sea Monster
Footnotes
Notes
Citations
Bibliography
.
External links
Amphibious vehicles
Aircraft configurations
Ekranoplan
Soviet inventions | Ground-effect vehicle | [
"Engineering"
] | 4,633 | [
"Aircraft configurations",
"Aerospace engineering"
] |
13,474,705 | https://en.wikipedia.org/wiki/Tilted%20large%20deviation%20principle | In mathematics — specifically, in large deviations theory — the tilted large deviation principle is a result that allows one to generate a new large deviation principle from an old one by exponential tilting, i.e. integration against an exponential functional. It can be seen as an alternative formulation of Varadhan's lemma.
Statement of the theorem
Let X be a Polish space (i.e., a separable, completely metrizable topological space), and let (με)ε>0 be a family of probability measures on X that satisfies the large deviation principle with rate function I : X → [0, +∞]. Let F : X → R be a continuous function that is bounded from above. For each Borel set S ⊆ X, let

$$Z_\varepsilon^F(S) = \int_S \exp\!\left(\frac{F(x)}{\varepsilon}\right) \mathrm{d}\mu_\varepsilon(x),$$

and define a new family of probability measures (νε)ε>0 on X by

$$\nu_\varepsilon(S) = \frac{Z_\varepsilon^F(S)}{Z_\varepsilon^F(X)}.$$

Then (νε)ε>0 satisfies the large deviation principle on X with rate function IF : X → [0, +∞] given by

$$I^F(x) = \sup_{y \in X} \bigl[ F(y) - I(y) \bigr] - \bigl( F(x) - I(x) \bigr).$$
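A simple worked example (added for illustration; not part of the original article): take $X = \mathbb{R}$ and let $\mu_\varepsilon$ be the centred Gaussian measure of variance $\varepsilon$, which satisfies the large deviation principle with rate function $I(x) = x^2/2$. Formally taking the linear tilt $F(x) = cx$ for a constant $c$ (strictly speaking, $F$ should be truncated at large $|x|$ to be bounded from above), one finds

$$\sup_{y \in \mathbb{R}} \bigl[F(y) - I(y)\bigr] = \sup_{y \in \mathbb{R}} \left( cy - \frac{y^2}{2} \right) = \frac{c^2}{2},$$

so the tilted rate function is

$$I^F(x) = \frac{c^2}{2} - \left( cx - \frac{x^2}{2} \right) = \frac{(x - c)^2}{2},$$

i.e. the tilted measures $\nu_\varepsilon$ concentrate around $x = c$ rather than around $0$.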
References
Asymptotic analysis
Mathematical principles
Probability theorems
Large deviations theory | Tilted large deviation principle | [
"Mathematics"
] | 224 | [
"Mathematical principles",
"Mathematical analysis",
"Mathematical theorems",
"Theorems in probability theory",
"Asymptotic analysis",
"Mathematical problems"
] |
13,475,603 | https://en.wikipedia.org/wiki/Index%20selection | Index selection is a method of artificial selection in which several useful traits are selected simultaneously. First, each trait that is going to be selected is assigned a weight – the importance of the trait. I.e., if you were selecting for both height and the coat darkness in dogs, if height were the more important of the two one would assign that a higher weighting. For instance, height's weighting could be ten and coat darkness could be one. This weighting value is then multiplied by the observed value in each individual animal and then the score for each of the characteristics is summed for each individual. This result is the index score and can be used to compare the worth of each organism being selected. Therefore, only those with the highest index score are selected for breeding via artificial selection.
This method has advantages over other methods of artificial selection, such as tandem selection, in that you can select for traits simultaneously rather than sequentially. Thereby, no useful traits are being excluded from selection at any one time and so none will stagnate or reverse while you concentrate on improving another property of the organism. However, its major disadvantage is that the weightings assigned to each characteristic are inherently quite hard to calculate precisely and so require some elements of trial and error before they become optimal to the breeder.
The selection index theory is well described in Erling Strandberg and Birgitte Malmfors's notes under the headings Genetic Evaluation.
Calculation of a selection index based on actual data can be carried out using an applet made by Knud Christensen.
References
Breeding | Index selection | [
"Biology"
] | 327 | [
"Behavior",
"Breeding",
"Reproduction"
] |
13,475,684 | https://en.wikipedia.org/wiki/Microbial%20biodegradation | Microbial biodegradation is the use of bioremediation and biotransformation methods to harness the naturally occurring ability of microbial xenobiotic metabolism to degrade, transform or accumulate environmental pollutants, including hydrocarbons (e.g. oil), polychlorinated biphenyls (PCBs), polyaromatic hydrocarbons (PAHs), heterocyclic compounds (such as pyridine or quinoline), pharmaceutical substances, radionuclides and metals.
Interest in the microbial biodegradation of pollutants has intensified in recent years, and recent major methodological breakthroughs have enabled detailed genomic, metagenomic, proteomic, bioinformatic and other high-throughput analyses of environmentally relevant microorganisms, providing new insights into biodegradative pathways and the ability of organisms to adapt to changing environmental conditions.
Biological processes play a major role in the removal of contaminants and take advantage of the catabolic versatility of microorganisms to degrade or convert such compounds. In environmental microbiology, genome-based global studies are increasing the understanding of metabolic and regulatory networks, as well as providing new information on the evolution of degradation pathways and molecular adaptation strategies to changing environmental conditions.
Aerobic biodegradation of pollutants
The increasing amount of bacterial genomic data provides new opportunities for understanding the genetic and molecular bases of the degradation of organic pollutants. Aromatic compounds are among the most persistent of these pollutants and lessons can be learned from the recent genomic studies of Burkholderia xenovorans LB400 and Rhodococcus sp. strain RHA1, two of the largest bacterial genomes completely sequenced to date. These studies have helped expand our understanding of bacterial catabolism, non-catabolic physiological adaptation to organic compounds, and the evolution of large bacterial genomes. First, the metabolic pathways from phylogenetically diverse isolates are very similar with respect to overall organization. Thus, as originally noted in pseudomonads, a large number of "peripheral aromatic" pathways funnel a range of natural and xenobiotic compounds into a restricted number of "central aromatic" pathways. Nevertheless, these pathways are genetically organized in genus-specific fashions, as exemplified by the b-ketoadipate and Paa pathways. Comparative genomic studies further reveal that some pathways are more widespread than initially thought. Thus, the Box and Paa pathways illustrate the prevalence of non-oxygenolytic ring-cleavage strategies in aerobic aromatic degradation processes. Functional genomic studies have been useful in establishing that even organisms harboring high numbers of homologous enzymes seem to contain few examples of true redundancy. For example, the multiplicity of ring-cleaving dioxygenases in certain rhodococcal isolates may be attributed to the cryptic aromatic catabolism of different terpenoids and steroids. Finally, analyses have indicated that recent genetic flux appears to have played a more significant role in the evolution of some large genomes, such as LB400's, than others. However, the emerging trend is that the large gene repertoires of potent pollutant degraders such as LB400 and RHA1 have evolved principally through more ancient processes. That this is true in such phylogenetically diverse species is remarkable and further suggests the ancient origin of this catabolic capacity.
Anaerobic biodegradation of pollutants
Anaerobic microbial mineralization of recalcitrant organic pollutants is of great environmental significance and involves intriguing novel biochemical reactions. In particular, hydrocarbons and halogenated compounds have long been doubted to be degradable in the absence of oxygen, but the isolation of hitherto unknown anaerobic hydrocarbon-degrading and reductively dehalogenating bacteria during the last decades provided ultimate proof for these processes in nature. While such research involved mostly chlorinated compounds initially, recent studies have revealed reductive dehalogenation of bromine and iodine moieties in aromatic pesticides. Other reactions, such as biologically induced abiotic reduction by soil minerals, has been shown to deactivate relatively persistent aniline-based herbicides far more rapidly than observed in aerobic environments. Many novel biochemical reactions were discovered enabling the respective metabolic pathways, but progress in the molecular understanding of these bacteria was rather slow, since genetic systems are not readily applicable for most of them. However, with the increasing application of genomics in the field of environmental microbiology, a new and promising perspective is now at hand to obtain molecular insights into these new metabolic properties. Several complete genome sequences were determined during the last few years from bacteria capable of anaerobic organic pollutant degradation. The ~4.7 Mb genome of the facultative denitrifying Aromatoleum aromaticum strain EbN1 was the first to be determined for an anaerobic hydrocarbon degrader (using toluene or ethylbenzene as substrates). The genome sequence revealed about two dozen gene clusters (including several paralogs) coding for a complex catabolic network for anaerobic and aerobic degradation of aromatic compounds. The genome sequence forms the basis for current detailed studies on regulation of pathways and enzyme structures. Further genomes of anaerobic hydrocarbon degrading bacteria were recently completed for the iron-reducing species Geobacter metallireducens (accession nr. NC_007517) and the perchlorate-reducing Dechloromonas aromatica (accession nr. NC_007298), but these are not yet evaluated in formal publications. Complete genomes were also determined for bacteria capable of anaerobic degradation of halogenated hydrocarbons by halorespiration: the ~1.4 Mb genomes of Dehalococcoides ethenogenes strain 195 and Dehalococcoides sp. strain CBDB1 and the ~5.7 Mb genome of Desulfitobacterium hafniense strain Y51. Characteristic for all these bacteria is the presence of multiple paralogous genes for reductive dehalogenases, implicating a wider dehalogenating spectrum of the organisms than previously known. Moreover, genome sequences provided unprecedented insights into the evolution of reductive dehalogenation and differing strategies for niche adaptation.
Recently, it has become apparent that some organisms, including Desulfitobacterium chlororespirans, originally evaluated for halorespiration on chlorophenols, can also use certain brominated compounds, such as the herbicide bromoxynil and its major metabolite as electron acceptors for growth. Iodinated compounds may be dehalogenated as well, though the process may not satisfy the need for an electron acceptor.
Bioavailability, chemotaxis, and transport of pollutants
Bioavailability, or the amount of a substance that is physiochemically accessible to microorganisms is a key factor in the efficient biodegradation of pollutants. O'Loughlin et al. (2000) showed that, with the exception of kaolinite clay, most soil clays and cation exchange resins attenuated biodegradation of 2-picoline by Arthrobacter sp. strain R1, as a result of adsorption of the substrate to the clays. Chemotaxis, or the directed movement of motile organisms towards or away from chemicals in the environment is an important physiological response that may contribute to effective catabolism of molecules in the environment. In addition, mechanisms for the intracellular accumulation of aromatic molecules via various transport mechanisms are also important.
Oil biodegradation
Petroleum oil contains aromatic compounds that are toxic to most life forms. Episodic and chronic pollution of the environment by oil causes major disruption to the local ecological environment. Marine environments in particular are especially vulnerable, as oil spills near coastal regions and in the open sea are difficult to contain and make mitigation efforts more complicated. In addition to pollution through human activities, approximately 250 million litres of petroleum enter the marine environment every year from natural seepages. Despite its toxicity, a considerable fraction of petroleum oil entering marine systems is eliminated by the hydrocarbon-degrading activities of microbial communities, in particular by a recently discovered group of specialists, the hydrocarbonoclastic bacteria (HCB). Alcanivorax borkumensis was the first HCB to have its genome sequenced. In addition to hydrocarbons, crude oil often contains various heterocyclic compounds, such as pyridine, which appear to be degraded by similar mechanisms to hydrocarbons.
Cholesterol biodegradation
Many synthetic steroidic compounds like some sexual hormones frequently appear in municipal and industrial wastewaters, acting as environmental pollutants with strong metabolic activities negatively affecting the ecosystems. Since these compounds are common carbon sources for many different microorganisms their aerobic and anaerobic mineralization has been extensively studied. The interest of these studies lies on the biotechnological applications of sterol transforming enzymes for the industrial synthesis of sexual hormones and corticoids. Very recently, the catabolism of cholesterol has acquired a high relevance because it is involved in the infectivity of the pathogen Mycobacterium tuberculosis (Mtb). Mtb causes tuberculosis disease, and it has been demonstrated that novel enzyme architectures have evolved to bind and modify steroid compounds like cholesterol in this organism and other steroid-utilizing bacteria as well. These new enzymes might be of interest for their potential in the chemical modification of steroid substrates.
Analysis of waste biotreatment
Sustainable development requires the promotion of environmental management and a constant search for new technologies to treat vast quantities of wastes generated by increasing anthropogenic activities. Biotreatment, the processing of wastes using living organisms, is an environmentally friendly, relatively simple and cost-effective alternative to physico-chemical clean-up options. Confined environments, such as bioreactors, have been engineered to overcome the physical, chemical and biological limiting factors of biotreatment processes in highly controlled systems. The great versatility in the design of confined environments allows the treatment of a wide range of wastes under optimized conditions. To perform a correct assessment, it is necessary to consider various microorganisms having a variety of genomes and expressed transcripts and proteins. A great number of analyses are often required. Using traditional genomic techniques, such assessments are limited and time-consuming. However, several high-throughput techniques originally developed for medical studies can be applied to assess biotreatment in confined environments.
Metabolic engineering and biocatalytic applications
The study of the fate of persistent organic chemicals in the environment has revealed a large reservoir of enzymatic reactions with a large potential in preparative organic synthesis, which has already been exploited for a number of oxygenases on pilot and even on industrial scale. Novel catalysts can be obtained from metagenomic libraries and DNA sequence based approaches. Our increasing capabilities in adapting the catalysts to specific reactions and process requirements by rational and random mutagenesis broadens the scope for application in the fine chemical industry, but also in the field of biodegradation. In many cases, these catalysts need to be exploited in whole cell bioconversions or in fermentations, calling for system-wide approaches to understanding strain physiology and metabolism and rational approaches to the engineering of whole cells as they are increasingly put forward in the area of systems biotechnology and synthetic biology.
Fungal biodegradation
In the ecosystem, different substrates are attacked at different rates by consortia of organisms from different kingdoms. Aspergillus and other moulds play an important role in these consortia because they are adept at recycling starches, hemicelluloses, celluloses, pectins and other sugar polymers. Some aspergilli are capable of degrading more refractory compounds such as fats, oils, chitin, and keratin. Maximum decomposition occurs when there is sufficient nitrogen, phosphorus and other essential inorganic nutrients. Fungi also provide food for many soil organisms.
For Aspergillus the process of degradation is the means of obtaining nutrients. When these moulds degrade human-made substrates, the process usually is called biodeterioration. Both paper and textiles (cotton, jute, and linen) are particularly vulnerable to Aspergillus degradation. Our artistic heritage is also subject to Aspergillus assault. To give but one example, after Florence in Italy flooded in 1969, 74% of the isolates from a damaged Ghirlandaio fresco in the Ognissanti church were Aspergillus versicolor.
See also
Biodegradation
Bioremediation
Biotransformation
Bioavailability
Chemotaxis
Microbiology
Environmental microbiology
Industrial microbiology
References
Bioremediation
Biotechnology
Environmental microbiology
Environmental soil science
Soil contamination
Biodegradation
Environmental science | Microbial biodegradation | [
"Chemistry",
"Biology",
"Environmental_science"
] | 2,714 | [
"Environmental chemistry",
"Biotechnology",
"Biodegradation",
"Environmental microbiology",
"Ecological techniques",
"Soil contamination",
"nan",
"Bioremediation",
"Environmental soil science"
] |
13,475,776 | https://en.wikipedia.org/wiki/Resistive%20skin%20time | The resistive skin time is a characteristic time of typical magnetohydrodynamic (MHD) phenomena, describing the diffusion time associated with a resistive wall mode (RWM). Due to this, it is also sometimes referred to as the wall skin time or resistive wall skin time.
Definition
The resistive skin time is defined as:

$$\tau_\eta = \frac{\mu_0 a^2}{\eta},$$

where $\eta$ is the resistivity, $a$ is a typical radius of the RWM and $\mu_0$ is the magnetic permeability. This formula is distinct from, but analogous to, the generalized diffusion time formula $\tau = a^2/D$, where $D$ is the diffusion coefficient. The interpretation is that the quantity $\eta/\mu_0$ (which has units of $\mathrm{m^2\,s^{-1}}$) serves as the diffusion coefficient when describing RWMs.
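As an illustrative sketch (not from the original article), the definition above can be evaluated numerically; the resistivity and radius below are placeholder values, not data from any particular device.

```python
# Illustrative sketch: evaluate the resistive skin time tau = mu0 * a^2 / eta.
# The resistivity and radius are placeholder values, not from any real device.
import math

mu0 = 4 * math.pi * 1e-7     # vacuum permeability, H/m
eta = 7e-7                   # assumed wall resistivity, ohm*m (placeholder)
a = 0.5                      # assumed typical radius of the RWM, m (placeholder)

tau = mu0 * a**2 / eta       # seconds
print(f"resistive skin time ≈ {tau * 1e3:.1f} ms")
```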
Uses
While the resistive skin time is often referenced in journals concerning RWMs, it is almost never a primary focus of study, but rather a time scale used to reference other occurrences in the RWM. Most commonly, it is used to describe events whose timescales are short enough that the overall evolution of the RWM has little impact on individual events. It may also be compared to the Alfvén time to describe a specific wave interaction with the RWM.
Notes
References
Magnetohydrodynamics | Resistive skin time | [
"Chemistry"
] | 252 | [
"Magnetohydrodynamics",
"Fluid dynamics"
] |
13,475,840 | https://en.wikipedia.org/wiki/Pentadin | Pentadin, a sweet-tasting protein, was discovered and isolated in 1989, in the fruit of oubli (Pentadiplandra brazzeana ), a climbing shrub growing in some tropical countries of Africa. Sweet tasting proteins are often used in the treatment of diabetes, obesity, and other metabolic disorders that one can experience. These proteins are isolated from the pulp of various fruits, typically found in rain forests and are also used as low calorie sweeteners that can enhance and modify existing foods.
Pentadin (discovered in 1989) and brazzein (discovered in 1994) are the two sweet-tasting proteins found in the African fruit Pentadiplandra brazzeana. The fruit consists of a red outer shell containing three to five seeds, which are covered by a layer of red pulp that contains brazzein and pentadin. Pentadiplandra brazzeana Baillon bears red berries about 2 inches in diameter that contain these sweet-tasting proteins. Brazzein and pentadin are extracted from the same fruit; however, pentadin is extracted after the fruit has been heat-dried, whereas brazzein is extracted from the fresh fruit. The fruit has long been consumed by apes and by local people. The berries are so sweet that African locals call them "j'oublie" (French for "I forget") because their taste helps nursing infants forget their mothers' milk.
Sweet-tasting proteins have been known to exist for many years, and indigenous people have long used them to add sweetness to their foods without other sweetening agents such as sucrose. The sweetness of pentadin has been estimated at about 500 times that of sucrose on a weight basis.
The molecular weight of pentadin is estimated to be 12 kDa, and the protein is about 500 times sweeter than sucrose. Its sweetness is comparable to that of monellin and higher than that of thaumatin.
Pentadin is the second protein discovered in oubli (Pentadiplandra brazzeana) and is similar to brazzein, the first protein discovered in the same fruit. More structural analysis has been done on brazzein than on pentadin, and the precise structure of pentadin remains poorly understood; however, some of the structural properties of brazzein may be applicable to pentadin. Brazzein contains two regions that are particularly critical for its sweetness: the N- and C-termini of the protein, and a region containing the flexible loop around Arg43. The exact properties of pentadin are unknown, but since both proteins are derived from the same fruit (Pentadiplandra brazzeana), features of the N- and C-terminal regions of brazzein may tentatively be extrapolated to pentadin.
There are six sweet-tasting proteins (pentadin, thaumatin, monellin, mabinlin, brazzein, and curculin), all of which are isolated from plants in tropical forests. These proteins differ in molecular length and show no sequence homology and little to no structural homology. Efforts to identify structural similarities among sweet-tasting proteins have compared their 3D structures using DALI, but only a vague resemblance was found among the three proteins tested (monellin, thaumatin, and brazzein). Brazzein and thaumatin evoke responses in humans through the T1R2-T1R3 receptor, and the same mechanism is thought to apply to Pentadin because Brazzein and Pentadin are similar to one another. These responses at the T1R2-T1R3 receptor resemble those produced by small-molecular-weight sweeteners, including popular commercial sweeteners. Proteins generally do not stimulate taste receptors the way sugars do; however, the identified sweet-tasting proteins, such as monellin, thaumatin, pentadin, curculin, and mabinlin, can interact with taste receptors to create a sweet taste. Only very low concentrations of these proteins are needed to activate the receptors, which is why they are also regarded as low-calorie sweeteners.
Physical properties
The amino acid composition of pentadin contains:
Aspartic Acid
Glutamic Acid
Serine
Proline
Glycine
Alanine
Valine
Methionine
Isoleucine
Leucine
Tyrosine
Phenylalanine
Lysine
Arginine
Histidine
Studies of the electrophoretic profile of pentadin, conducted with and without 2-mercaptoethanol, revealed subunits joined by disulfide bonds in the mature protein. The most prevalent amino acids in Pentadin are aspartic acid, glutamic acid, tyrosine, lysine, and proline, with proline being the most abundant. The protein is soluble in water, and its structure consists of subunits coupled by disulfide bonds. Pentadin also withstands heating: its potency remains unchanged when it is exposed to temperatures at or below 100 °C for extended periods (up to 5 hours).
Uses
The six sweet-tasting proteins can be used as natural low-calorie sweeteners to replace certain sugars, and they are also considered beneficial for the insulin response of people with diabetes. Because they are far sweeter than sucrose while contributing fewer calories, they serve as naturally occurring low-calorie sweeteners. Pentadin in particular can be used as a substitute for commonly used sugars such as sucrose, glucose, and fructose.
Growing interests in artificial sweeteners and sweet-tasting proteins
Interest in low-calorie sweeteners is growing because the average American consumes approximately 17 teaspoons of sugar daily, whereas the recommended amounts are 9 teaspoons for men and 6 teaspoons for women. Such excess consumption increases the risk of numerous health issues, including high blood pressure, cardiovascular disease, and obesity. Sweet-tasting proteins are being introduced as alternatives to other sweetening agents because they are also thought to offer health benefits.
There are two forms of sweeteners available: natural sweeteners and artificial (synthetic) sweeteners. Natural sweeteners are derived from plants and include Brazzein, Pentadin, and Thaumatin. These compounds provide sweetness with little to no calories; however, their long-term effects have not been studied intensively enough to determine possible adverse effects. Some researchers have suggested that these naturally derived sweet-tasting proteins may cause weight gain and increased insulin secretion when consumed over long periods.
A link between excessive sugar consumption and chronic diseases such as cardiovascular disease, diabetes, hypertension, and obesity has been established over the years. Increased sugar consumption raises energy intake, leading to weight gain and chronic disease. Because of this association, it is important to recognize that sugar substitutes are available. Many sugar substitutes are on the market today, but more research is required to determine whether sweet proteins such as Pentadin are safe for human consumption over extended periods.
Sweet-tasting proteins and taste-modifying proteins, such as Pentadin and Miraculin, are being used as safer alternatives to table sugar because of their low caloric content. These proteins are isolated from fruits and have no unpleasant aftertaste; however, their nature does not allow mass production on the scale possible for artificial sweeteners.
See also
Brazzein
Mabinlin
Monellin
Thaumatin
Miraculin
Curculin
Lysozyme
References
Sugar substitutes
Proteins | Pentadin | [
"Chemistry"
] | 1,742 | [
"Biomolecules by chemical classification",
"Proteins",
"Molecular biology"
] |
13,475,854 | https://en.wikipedia.org/wiki/Max%20Steenbeck | Max Christian Theodor Steenbeck (21 March 1904 – 15 December 1981) was a German nuclear physicist who invented the betatron in 1934 during his employment at the Siemens AG.
After World War II, Steenbeck was taken into Soviet custody and held in Russia, where he was one of many German nuclear physicists working in the Soviet nuclear weapons program. After accepting a teaching position at the University of Jena, Steenbeck was repatriated to Germany, where he devoted the rest of his career to university teaching.
Early life
Steenbeck was born in Kiel, Schleswig-Holstein, on 21 March 1904. From 1920–29, he attended the University of Kiel where he earned his bachelor's degree in physics and completed his doctoral studies in physics. He completed his thesis on x-rays under Walther Kossel; he submitted the thesis in 1927/1928 and his doctorate was awarded in January 1929.
While a student at Kiel, he formulated the concept of the cyclotron.
Career
Early years
From 1927 to 1945, Steenbeck was a senior staff scientist at the Siemens AG in Berlin. From 1934, he was a laboratory director, and it was in that year that he submitted a patent for the betatron. In 1943, he was appointed technical director of a static converter plant at Siemens, conducting research in gas-discharge physics. Additionally, at his plant, he was head of the Volkssturm (people's army), the organised civilian resistance at the plant, which was to, as a last resort, defend the territory.
In Russia
At the close of World War II, Steenbeck was taken into Soviet custody, with the Red Army holding him at a concentration camp in Poznań, Poland. Eventually, he directed a letter to the Soviet intelligence service, the NKVD, explaining his scientific background, and at the end of 1945 he was allowed to recuperate at a dacha at Opalikha railway station, after which he was sent to work at Manfred von Ardenne's Institute A, in Sinop, a suburb of Sukhumi. He headed a group working on both electromagnetic and centrifugal isotope separation for the enrichment of uranium, with the latter having the highest priority. Steenbeck and his group were pioneers in the development of supercritical centrifuges. Steenbeck's group, at its largest, included from 60 to 100 German and Russian personnel. Steenbeck was kept in Soviet custody until 1956, when he went to East Germany.
While Steenbeck developed the theory of the centrifugal isotope separation process, Gernot Zippe, an Austrian engineer, headed the experimental effort in Steenbeck’s group. Zippe, a POW from the Krasnogorsk camp, joined the group in the summer of 1946. Zippe returned to Germany in 1956. In 1957, he attended a conference on centrifugal isotope separation; it was then that he realized how advanced the work had been in Steenbeck’s group, and Zippe then applied for a patent on short-bowl centrifuge technology, known as the Zippe-type centrifuge. He was invited to repeat the experiments at the University of Virginia. Shortly after completing the work, at the request of the United States, all centrifuge research in Germany became classified on August 1, 1960. The work of Steenbeck and Zippe shaped European, Japanese, and Pakistan's enrichment processes.
Steenbeck and Zippe, before being allowed to leave the Soviet Union, were put into quarantine in the second half of 1952. During the quarantine period, they only performed unclassified work. First they went to Leningrad, after which they worked in the Institute of Semiconductors of the Academy of Sciences in Kiev. They both left the Soviet Union in 1956.
Return to (East) Germany
In 1956, Steenbeck became an ordinarius professor of plasma physics at the University of Jena, and, from 1956 to 1959, he was also director of the Institute for Magnetic Materials at Jena. From 1958 to 1969, he was director of the German Academy of Science Institute for Magnetohydrodynamics, also in Jena. From 1957 to 1963, he was the head of the Technological Science Bureau on Reactor Construction, in Berlin. From 1962 to 1964, he was vice-president and in 1965 president of the German Academy of Science. In 1970, he was president of the East German Committee on European Security. In 1976, Steenbeck was honorary president of the East German Research Council. He died in East Berlin.
The Max-Steenbeck Gymnasium in Cottbus, an academic high school offering extended mathematical-scientific-technical training, was named in his honour.
Selected literature
W. Kossel and M. Steenbeck Absolute Messung des Quantenstroms im Röntgenstrahl, Zeitschrift für Physik Volume 42, Numbers 11-12, 832-834 (1927). The authors were cited as being from the Physikalisches Institut, Kiel. The article was received on 14. March 1927.
Alfred von Engel and Max Steenbeck On the Gas-Temperature in the Positive Column of an Arc Phys. Rev. Volume 37, Issue 11, 1554 - 1554 (1931). The authors were cited as being at Wissenschaftliche Abteilung, der Siemens-Schuckertwerke A.-G., Berlin. The article was received on 28 April 1931.
Books
Max Steenbeck Probleme und Ergebnisse der Elektro- und Magnetohydrodynamik (Akademie-Verl., 1961)
Max Steenbeck, Fritz Krause, and Karl-Heinz Rädler Elektrodynamische Eigenschaften turbulenter Plasmen (Akademie-Verl., 1963)
Max Steenbeck Wilhelm Wien und sein Einfluss auf die Physik seiner Zeit (Akademie-Verl., 1964)
Max Steenbeck Die wissenschaftlich-technische Entwicklung und Folgerungen für den Lehr- und Lernprozess im System der Volksbildung der Deutschen Demokratischen Republik (VEB Verl. Volk u. Wissen, 1964)
Max Steenbeck Wachsen und Wirken der sozialistischen Persönlichkeit in der wissenschaftlich-technischen Revolution (Dt. Kulturbund, 1968)
Max Steenbeck Impulse und Wirkungen. Schritte auf meinem Lebensweg. (Verlag der Nation, 1977)
Bibliography
Albrecht, Ulrich, Andreas Heinemann-Grüder, and Arend Wellmann Die Spezialisten: Deutsche Naturwissenschaftler und Techniker in der Sowjetunion nach 1945 (Dietz, 1992, 2001)
Barwich, Heinz and Elfi Barwich Das rote Atom (Fischer-TB.-Vlg., 1984)
Heinemann-Grüder, Andreas Keinerlei Untergang: German Armaments Engineers during the Second World War and in the Service of the Victorious Powers in Monika Renneberg and Mark Walker (editors) Science, Technology and National Socialism 30-50 (Cambridge, 2002 paperback edition)
Hentschel, Klaus (editor) and Ann M. Hentschel (editorial assistant and translator) Physics and National Socialism: An Anthology of Primary Sources (Birkhäuser, 1996)
Holloway, David Stalin and the Bomb: The Soviet Union and Atomic Energy 1939 – 1956 (Yale, 1994)
Naimark, Norman M. The Russians in Germany: A History of the Soviet Zone of Occupation, 1945-1949 (Hardcover - Aug 11, 1995) Belknap
Oleynikov, Pavel V. German Scientists in the Soviet Atomic Project, The Nonproliferation Review Volume 7, Number 2, 1 – 30 (2000). The author has been a group leader at the Institute of Technical Physics of the Russian Federal Nuclear Centre in Snezhinsk (Chelyabinsk-70).
Riehl, Nikolaus and Frederick Seitz Stalin’s Captive: Nikolaus Riehl and the Soviet Race for the Bomb (American Chemical Society and the Chemical Heritage Foundations, 1996) . This book is a translation of Nikolaus Riehl’s book Zehn Jahre im goldenen Käfig (Ten Years in a Golden Cage) (Riederer-Verlag, 1988); Seitz has written a lengthy introduction to the book. This book is a treasure trove with its 58 photographs.
External links
Lawrence and His Laboratory - II — A Million Volts or Bust in Heilbron, J. L., and Robert W. Seidel Lawrence and His Laboratory: A History of the Lawrence Berkeley Laboratory', Volume I. (Berkeley: University of California Press, 2000)
Tracking the technology – Nuclear Engineering International, 31 August 2004
NYT – William J. Broad Slender and Elegant, It Fuels the Bomb, New York Times March 23, 2004
Notes
1904 births
1981 deaths
People from Kiel
People from the Province of Schleswig-Holstein
University of Kiel alumni
German physicists
20th-century German physicists
Scientists from Kiel
Volkssturm personnel
German expatriates in the Soviet Union
Nuclear weapons program of the Soviet Union people
East German scientists
Foreign members of the USSR Academy of Sciences
Recipients of the Lomonosov Gold Medal
Members of the German Academy of Sciences at Berlin | Max Steenbeck | [
"Technology"
] | 1,977 | [
"Science and technology awards",
"Recipients of the Lomonosov Gold Medal"
] |
13,476,645 | https://en.wikipedia.org/wiki/Synthome | The synthome comprises the set of all reactions that are available to a chemist for the synthesis of small molecules. The word was coined by Stephen F. Martin.
References
Chemical synthesis | Synthome | [
"Chemistry"
] | 37 | [
"nan",
"Chemical synthesis"
] |
13,477,275 | https://en.wikipedia.org/wiki/Lesk%20algorithm | The Lesk algorithm is a classical algorithm for word sense disambiguation introduced by Michael E. Lesk in 1986. It operates on the premise that words within a given context are likely to share a common meaning. This algorithm compares the dictionary definitions of an ambiguous word with the words in its surrounding context to determine the most appropriate sense. Variations, such as the Simplified Lesk algorithm, have demonstrated improved precision and efficiency. However, the Lesk algorithm has faced criticism for its sensitivity to definition wording and its reliance on brief glosses. Researchers have sought to enhance its accuracy by incorporating additional resources like thesauruses and syntactic models.
Overview
The Lesk algorithm is based on the assumption that words in a given "neighborhood" (section of text) will tend to share a common topic. A simplified version of the Lesk algorithm is to compare the dictionary definition of an ambiguous word with the terms contained in its neighborhood. Versions have been adapted to use WordNet. An implementation might look like this:
for every sense of the word being disambiguated one should count the number of words that are in both the neighborhood of that word and in the dictionary definition of that sense
the sense that is to be chosen is the sense that has the largest number of this count.
A frequently used example illustrating this algorithm is for the context "pine cone". The following dictionary definitions are used:
PINE
1. kinds of evergreen tree with needle-shaped leaves
2. waste away through sorrow or illness
CONE
1. solid body which narrows to a point
2. something of this shape whether solid or hollow
3. fruit of certain evergreen trees
As can be seen, the best intersection is Pine #1 ⋂ Cone #3 = 2.
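The overlap count in this example can be reproduced with a short Python sketch; the glosses are taken from the definitions above, while the tokenization (and the decision not to remove stop words or stem) is an illustrative simplification, so real implementations may count shared words slightly differently:
PINE = {1: "kinds of evergreen tree with needle-shaped leaves",
        2: "waste away through sorrow or illness"}
CONE = {1: "solid body which narrows to a point",
        2: "something of this shape whether solid or hollow",
        3: "fruit of certain evergreen trees"}
def tokens(gloss):
    # naive tokenization; a real system would remove stop words and stem
    return set(gloss.lower().replace("-", " ").split())
overlaps = {(p, c): len(tokens(gp) & tokens(gc))
            for p, gp in PINE.items() for c, gc in CONE.items()}
best_pair = max(overlaps, key=overlaps.get)
print(best_pair, overlaps[best_pair])  # (1, 3) with an overlap of 2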
Simplified Lesk algorithm
In the Simplified Lesk algorithm, the correct meaning of each word in a given context is determined individually by locating the sense whose dictionary definition has the largest overlap with the given context. Rather than simultaneously determining the meanings of all words in a given context, this approach tackles each word individually, independent of the meaning of the other words occurring in the same context.
"A comparative evaluation performed by Vasilescu et al. (2004) has shown that the simplified Lesk algorithm can significantly outperform the original definition of the algorithm, both in terms of precision and efficiency. By evaluating the disambiguation algorithms on the Senseval-2 English all words data, they measure a 58% precision using the simplified Lesk algorithm compared to the only 42% under the original algorithm.
Note: Vasilescu et al. implementation considers a back-off strategy for words not covered by the algorithm, consisting of the most frequent sense defined in WordNet. This means that words for which all their possible meanings lead to zero overlap with current context or with other word definitions are by default assigned sense number one in WordNet."
Simplified LESK Algorithm with smart default word sense (Vasilescu et al., 2004)
The COMPUTEOVERLAP function returns the number of words in common between two sets, ignoring function words or other words on a stop list. The original Lesk algorithm defines the context in a more complex way.
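Since the original pseudocode is not reproduced here, the following Python sketch gives one possible reading of the simplified algorithm with the smart default; the stop-word list, the dictionary layout, and the assumption that the first listed sense is the most frequent one are illustrative choices, not part of the published description:
STOP_WORDS = {"a", "an", "the", "of", "in", "on", "to", "and", "or", "is"}  # assumed list
def compute_overlap(signature, context):
    # number of shared words, ignoring function words on the stop list
    return len((signature & context) - STOP_WORDS)
def simplified_lesk(word, sentence, senses):
    # senses: dict mapping sense id -> gloss text, assumed ordered by frequency
    context = set(sentence.lower().split())
    best_sense = next(iter(senses))  # smart default: the most frequent sense
    best_overlap = 0
    for sense, gloss in senses.items():
        overlap = compute_overlap(set(gloss.lower().split()), context)
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense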
Criticisms
Unfortunately, Lesk’s approach is very sensitive to the exact wording of definitions, so the absence of a certain word can radically change the results. Further, the algorithm determines overlaps only among the glosses of the senses being considered. This is a significant limitation in that dictionary glosses tend to be fairly short and do not provide sufficient vocabulary to relate fine-grained sense distinctions.
A lot of work has appeared offering different modifications of this algorithm. These works use other resources for analysis (thesauruses, synonyms dictionaries or morphological and syntactic models): for instance, it may use such information as synonyms, different derivatives, or words from definitions of words from definitions.
Lesk variants
Original Lesk (Lesk, 1986)
Adapted/Extended Lesk (Banerjee and Pedersen, 2002/2003): In the adapted Lesk algorithm, a word vector is created corresponding to every content word in the WordNet gloss. Concatenating glosses of related concepts in WordNet can be used to augment this vector. The vector contains the co-occurrence counts of words co-occurring with w in a large corpus. Adding all the word vectors for all the content words in its gloss creates the gloss vector g for a concept. Relatedness is determined by comparing gloss vectors using the cosine similarity measure.
There are a lot of studies concerning Lesk and its extensions:
Wilks and Stevenson, 1998, 1999;
Mahesh et al., 1997;
Cowie et al., 1992;
Yarowsky, 1992;
Pook and Catlett, 1988;
Kilgarriff and Rosensweig, 2000;
Kwong, 2001;
Nastase and Szpakowicz, 2001;
Gelbukh and Sidorov, 2004.
See also
Word-sense disambiguation
References
Natural language processing
Semantics
Computational linguistics
Word-sense disambiguation | Lesk algorithm | [
"Technology"
] | 1,033 | [
"Natural language processing",
"Natural language and computing",
"Computational linguistics"
] |
13,477,931 | https://en.wikipedia.org/wiki/Computer%20Russification | In computing, Russification involves the localization of computers and software, allowing the user interface of a computer and its software to communicate in the Russian language using Cyrillic script.
Problems associated with Russification before the advent of Unicode included the absence of a single character-encoding standard for Cyrillic (see Cyrillic script#Computer encoding).
History of the MS-DOS Russification
The first official Russification of MS-DOS was carried out for MS-DOS 4.01 in 1989/1990, released on . In Microsoft, the Russification project manager and one of its main developers was Nikolai Lyubovny (Николай Любовный). A Russian version of MS-DOS 5.0 was also developed in 1991, released on . Based on an initiative of Microsoft Germany in March 1991, derivates of the Russian MS-DOS 5.0 drivers used for keyboard, display and printer localization support (DISPLAY.SYS, EGS.CPI, EGA2.CPI, KEYB.COM, KEYBOARD.SYS, MSPRINT.SYS, COUNTRY.SYS, ALPHA.EXE) could also be purchased separately (with English messages) as part of Microsoft's AlphabetPlus kit. This enabled English issues of MS-DOS 3.3, 4.01 and 5.0 to be set up for Eastern European countries like Czechoslovakia, Poland, Hungary, Yugoslavia, Romania and Bulgaria.
Russification of Microsoft Windows
A comprehensive instruction set for computer Russification is maintained by Paul Gorodyansky. It is mirrored in many places and recommended by the U.S. Library of Congress.
See also
Cyrillization
GOST 10859
Romanization of Russian
АДОС, unrelated to Russian MS-DOS
PTS-DOS
Mojibake
References
External links
Modern Online (Virtual) Keyboard for Russian (not just alphabet order)
Online Keyboard for Russian
Virtual Russian Online Keyboard with Spellcheck
User interfaces
Russian language
Russification
Computing in the Soviet Union | Computer Russification | [
"Technology"
] | 407 | [
"User interfaces",
"Computing in the Soviet Union",
"Computer science stubs",
"Computer science",
"Interfaces",
"Computing stubs",
"Natural language and computing",
"History of computing"
] |
13,478,005 | https://en.wikipedia.org/wiki/Ballistic%20photon | Ballistic light, also known as ballistic photons, consists of photons that have traveled through a scattering (turbid) medium in a straight line.
When pulses of laser light pass through a turbid medium such as fog or body tissue, most of the photons are either scattered or absorbed. However, across short distances, a few photons pass through the scattering medium in straight lines. These coherent photons are referred to as ballistic photons. Photons that are slightly scattered, retaining some degree of coherence, are referred to as snake photons.
The aim of ballistic imaging modalities is to efficiently detect ballistic photons that carry useful information, while rejecting non-ballistic photons. To perform this task, specific characteristics of ballistic photons vs. non-ballistic photons are used, such as time of flight through coherence-gated imaging, collimation, wavefront propagation, and polarization. Slightly scattered "quasi-ballistic" photons are often measured as well, to increase the signal 'strength' (i.e., signal-to-noise ratio).
Ballistic photons have many applications, especially in high-resolution medical imaging systems. Ballistic scanners (using ultrafast time gates) and optical coherence tomography (OCT) (using the interferometry principle) are just two popular imaging systems that rely on ballistic photon detection to create diffraction-limited images. Advantages over other existing imaging modalities (e.g., ultrasound and magnetic resonance imaging) is that ballistic imaging can achieve a higher resolution in the order of 1 to 10 micro-meters, however it suffers from limited imaging depth.
Due to the exponential reduction of ballistic photons as thickness of the scattering medium increases, the images often have a low number of photons per pixel, resulting in shot noise. Digital image processing and noise reduction are often applied to reduce that noise.
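The exponential reduction mentioned above follows a Beer-Lambert-type law for the unscattered component; the short Python sketch below estimates the surviving ballistic fraction for assumed, purely illustrative scattering and absorption coefficients:
import math
def ballistic_fraction(mu_s, mu_a, thickness_mm):
    # fraction of photons that are neither scattered nor absorbed:
    # exp(-(mu_s + mu_a) * L), with coefficients in 1/mm
    return math.exp(-(mu_s + mu_a) * thickness_mm)
for depth in (0.1, 1.0, 5.0):  # slab thickness in mm
    print(depth, ballistic_fraction(mu_s=10.0, mu_a=0.1, thickness_mm=depth))
# the rapid fall-off with depth is what limits the imaging depth of ballistic methods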
See also
Medical optical imaging
References
K. Yoo and R. R. Alfano, "Time-resolved coherent and incoherent components of forward light scattering in random media", Optics Letters 15, 320–322 (1990).
L Wang, P P Ho, C Liu, G Zhang, R R Alfano "Ballistic 2-d imaging through scattering walls using an ultrafast optical kerr gate" 1991, August 16th
K. M. Yoo and R. R. Alfano "Time-resolved coherent and incoherent components of forward light scattering in random media" 1990
K. M. Yoo, Feng Liu, and R. R. Alfano "When does the diffusion approximation fail to describe photon transport in random media?" 28 May 1990
S. Farsiu, J. Christofferson, B. Eriksson, P. Milanfar, B. Friedlander, A. Shakouri, R. Nowak, "Statistical detection and imaging of objects hidden in turbid media using ballistic photons", Applied Optics, vol. 46, no. 23, pp. 5805–5822, Aug. 2007.
Light
Optical imaging | Ballistic photon | [
"Physics"
] | 628 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Waves",
"Light"
] |
13,478,284 | https://en.wikipedia.org/wiki/Differentiation%20of%20integrals | In mathematics, the problem of differentiation of integrals is that of determining under what circumstances the mean value integral of a suitable function on a small neighbourhood of a point approximates the value of the function at that point. More formally, given a space X with a measure μ and a metric d, one asks for what functions f : X → R does
lim_{r → 0} 1/μ(Br(x)) ∫_{Br(x)} f(y) dμ(y) = f(x)
for all (or at least μ-almost all) x ∈ X? (Here, as in the rest of the article, Br(x) denotes the open ball in X with d-radius r and centre x.) This is a natural question to ask, especially in view of the heuristic construction of the Riemann integral, in which it is almost implicit that f(x) is a "good representative" for the values of f near x.
Theorems on the differentiation of integrals
Lebesgue measure
One result on the differentiation of integrals is the Lebesgue differentiation theorem, as proved by Henri Lebesgue in 1910. Consider n-dimensional Lebesgue measure λn on n-dimensional Euclidean space Rn. Then, for any locally integrable function f : Rn → R, one has
lim_{r → 0} 1/λn(Br(x)) ∫_{Br(x)} f(y) dλn(y) = f(x)
for λn-almost all points x ∈ Rn. It is important to note, however, that the measure zero set of "bad" points depends on the function f.
Borel measures on Rn
The result for Lebesgue measure turns out to be a special case of the following result, which is based on the Besicovitch covering theorem: if μ is any locally finite Borel measure on Rn and f : Rn → R is locally integrable with respect to μ, then
lim_{r → 0} 1/μ(Br(x)) ∫_{Br(x)} f(y) dμ(y) = f(x)
for μ-almost all points x ∈ Rn.
Gaussian measures
The problem of the differentiation of integrals is much harder in an infinite-dimensional setting. Consider a separable Hilbert space (H, ⟨ , ⟩) equipped with a Gaussian measure γ. As stated in the article on the Vitali covering theorem, the Vitali covering theorem fails for Gaussian measures on infinite-dimensional Hilbert spaces. Two results of David Preiss (1981 and 1983) show the kind of difficulties that one can expect to encounter in this setting:
There is a Gaussian measure γ on a separable Hilbert space H and a Borel set M ⊆ H so that, for γ-almost all x ∈ H,
There is a Gaussian measure γ on a separable Hilbert space H and a function f ∈ L1(H, γ; R) such that
However, there is some hope if one has good control over the covariance of γ. Let the covariance operator of γ be S : H → H given by
or, for some countable orthonormal basis (ei)i∈N of H,
In 1981, Preiss and Jaroslav Tišer showed that if there exists a constant 0 < q < 1 such that
then, for all f ∈ L1(H, γ; R),
where the convergence is convergence in measure with respect to γ. In 1988, Tišer showed that if
for some α > 5 ⁄ 2, then
for γ-almost all x and all f ∈ Lp(H, γ; R), p > 1.
As of 2007, it is still an open question whether there exists an infinite-dimensional Gaussian measure γ on a separable Hilbert space H so that, for all f ∈ L1(H, γ; R),
lim_{r → 0} 1/γ(Br(x)) ∫_{Br(x)} f(y) dγ(y) = f(x)
for γ-almost all x ∈ H. However, it is conjectured that no such measure exists, since the σi would have to decay very rapidly.
See also
References
Differentiation rules
Measure theory
Theorems in analysis
Theorems in calculus | Differentiation of integrals | [
"Mathematics"
] | 756 | [
"Theorems in mathematical analysis",
"Mathematical analysis",
"Mathematical theorems",
"Theorems in calculus",
"Calculus",
"Mathematical problems"
] |
13,478,498 | https://en.wikipedia.org/wiki/Digital%20Preservation%20Award | The Digital Preservation Award is an international award sponsored by the Digital Preservation Coalition. The award 'recognises the many new initiatives being undertaken in the challenging field of digital preservation'. It was inaugurated in 2004 and was initially presented as part of the Institute of Conservation's Conservation Awards. Since 2012 the prize, which includes a trophy and a cheque, has been presented independently. Awards ceremonies have taken place at the British Library, the British Museum and the Wellcome Trust.
Winners and shortlisted entries
2004
Winner
The Digital Archive: The National Archives of the United Kingdom
Shortlisted
The CAMiLEON Project: University of Leeds & University of Michigan (Special Commendation)
JISC Continuing Access and Digital Preservation Strategy: Jisc
Preservation Metadata Extraction Tool: National Library of New Zealand
Wellcome Library/JISC Web Archiving Project: Wellcome Library & Jisc
2005
Winner
PREMIS (Preservation Metadata: Implementation Strategies): PREMIS Working Group
Shortlisted
Choosing the optimal digital preservation strategy: Vienna University of Technology
Digital Preservation Testbed: National Archives of the Netherlands
Reverse Standards Conversion: British Broadcasting Corporation
UK Web Archiving Consortium
2007
Winner
Active Preservation at The National Archives - PRONOM Technical Registry and DROID file format identification tool: The National Archives of the United Kingdom
Shortlisted
LIFE: British Library
Web Curator Tool software development project: National Library of New Zealand & British Library
PARADIGM (The Personal Archives Accessible in Digital Media): Bodleian Library, University of Oxford, & John Rylands University Library, University of Manchester
Digital Repository Audit and Certification: Center for Research Libraries, RLG-OCLC, NARA, Digital Curation Centre, Digital Preservation Europe and NESTOR
2010
Winner
The Memento Project: Time Travel for the Web : Old Dominion University and the Los Alamos National Laboratory in the United States
Shortlisted
Web Continuity: ensuring access to online government information, from The National Archives UK
PLATO 3: Preservation Planning made simple from Vienna University of Technology and the PLANETS Project
The Blue Ribbon Task Force on Sustainable Digital Preservation and Access
Preserving Virtual Worlds, University of Illinois at Urbana Champaign with Rochester Institute of Technology, University of Maryland, Stanford University and Linden Lab in the United States
2012
Winner - outstanding contribution to teaching and communication in digital preservation in the last 2 years
The Digital Preservation Training Programme, University of London Computing Centre
Shortlisted - outstanding contribution to teaching and communication in digital preservation in the last 2 years
The Signal, Library of Congress
Keeping Research Data Safe Project, Charles Beagrie Ltd and partners
Digital Archaeology Exhibition, Story Worldwide Ltd
Winner - outstanding contribution to research and innovation in digital preservation in the last two years
The PLANETS Project Preservation and Long-term Access through Networked Services, The Open Planets Foundation and partners
Shortlisted - outstanding contribution to research and innovation in digital preservation in the last two years
Data Management Planning Toolkit, The Digital Curation Centre and partners
TOTEM Trustworthy Online Technical Environment Metadata Registry, University of Portsmouth and partners
The KEEP Emulation Framework, Koninklijke Bibliotheek (National Library of the Netherlands) and partners
Winner - most outstanding contribution to digital preservation in the last decade
The Archaeology Data Service at the University of York
Shortlisted - most outstanding contribution to digital preservation in the last decade
The International Internet Preservation Consortium
The National Archives for the PRONOM and DROID services
The PREMIS Preservation Metadata Working Group for the PREMIS Standard
2014
Winner - OPF Award for Research and Innovation
bwFLA Functional Long Term Archiving and Access by the University of Freiburg and partners
Shortlisted - OPF Award for Research and Innovation
Jpylyzer by the KB (National Library of the Netherlands) and partners
The SPRUCE Project by The University of Leeds and partners
Winner - NCDD Award for Teaching and Communications
Practical Digital Preservation: a how to guide for organizations of any size by Adrian Brown
Shortlisted - NCDD Award for Teaching and Communications
Skilling the Information Professional by Aberystwyth University
Introduction to Digital Curation: An open online UCLeXtend Course by University College London
Winner - Award for the Most Distinguished Student Work in Digital Preservation
Game Preservation in the UK by Alasdair Bachell, University of Glasgow
Shortlisted - Award for the Most Distinguished Student Work in Digital Preservation
Voices from a Disused Quarry by Kerry Evans, Ann MacDonald and Sarah Vaughan, University of Aberystwyth and partners
Emulation v Format Conversion by Victoria Sloyan, University College London
Winner - Award for Safeguarding the Digital Legacy
Carcanet Press Email Archive, University of Manchester
Shortlisted - Award for Safeguarding the Digital Legacy
Conservation and Re-enactment of Digital Art Ready-Made, by the University of Freiburg and Rhizome
Inspiring Ireland, Digital Repository of Ireland and Partners
The Cloud and the Cow, Archives and Records Council of Wales
2016
Winner - SSI Award for Research and Innovation
NCDD and NDE, ‘Constructing a network of nationwide facilities together.’
Winner - NCDD Award for Teaching and Communications
The National Archives and The Scottish Council on Archives: ‘Transforming Archives/Opening Up Scotland’s Archives.’
Winner - Award for the Most Distinguished Student Work in Digital Preservation
Anthea Seles, University College London and ‘The Transferability of Trusted Digital Repository Standards to an East African context.’
Winner - The National Archives Award for Safeguarding the Digital Legacy
Amsterdam Museum and Partners, ‘The Digital City revives: A case study of web archaeology.’
Winner - DPC Award for the Most Outstanding Digital Preservation Initiative in Industry
HSBC, 'The Global Digital Archive'
DPC Fellowship
Brewster Kahle, the Internet Archive
Full List of Finalists 2016
List of Finalists
2018
Winner - Software Sustainability Institute Award for Research and Innovation
ePADD, University of Stanford
Shortlisted - Software Sustainability Institute Award for Research and Innovation
VeraPDF, Open Preservation Foundation
Contributions towards Defining the Discipline, Sarah Higgins - Aberystwyth University
Flashback: Preservation of legacy digital collections, British Library
Winner - DPC Award for Teaching and Communications
The Archivist’s Guide to KryoFlux, Universities of Texas, Duke, Los Angeles, Yale and Emory
Shortlisted - DPC Award for Teaching and Communications
Evidence-based postgraduate education in digital information management, University College Dublin
Leren Preserveren (Learning Digital Preservation), Digital Heritage Network and Het Nieuwe Instituut
Ibadan/Liverpool Digital Curation Curriculum Review Project, Universities of Ibadan and Liverpool
Winner - National Records of Scotland Award for the Most Distinguished Student Work in Digital Preservation
'Navigating the PDF/A Standard: A Case Study of Theses' by Anna Oates, University of Illinois at Urbana-Champaign
Shortlisted - National Records of Scotland Award for the Most Distinguished Student Work in Digital Preservation
'Preserving the past: the challenge of digital archiving within a Scottish Local Authority' by Lorraine Murray, University of Glasgow
'Essay on the record-making and record-keeping issues implicit in Wearables' by Philippa Turner, University of Liverpool
Winner - Open Data Institute Award for the Most Outstanding Digital Preservation Initiative in Commerce, Industry and the Third sector
Archiving Crossrail - Europe’s largest infrastructure project, Crossrail and Transport for London
Shortlisted - Open Data Institute Award for the Most Outstanding Digital Preservation Initiative in Commerce, Industry and the Third sector
Music Treasures, Stichting Omroep Muziek (SOM)
Heritage preservation of contemporary dance and choreography through research and innovation in digital documentation and annotation of creative processes, ICKamsterdam and Motion Bank
Winner - The National Archives Award for Safeguarding the Digital Legacy
IFI Open Source tools: IFIscripts/ Loopline project, IFI Irish Film Archive
Shortlisted - The National Archives Award for Safeguarding the Digital Legacy
Cloud-Enabled Preservation of Life in the 20th Century White House, White House Historical Association Digital Library
Design, Deliver, Embed: Establishing Digital Transfer in Parliament, UK Parliamentary Archives
Local Authority Digital Preservation Consortium: Dorset History Centre, West Sussex Records Office, Wiltshire & Swindon History Centre
DPC Fellowship
Barbara Sierman, KB Netherlands
See also
List of computer science awards
Digital preservation
Digital Preservation Coalition
References
External links
Digital Preservation Awards website
Conservation Awards website
Digital preservation
Computer science awards | Digital Preservation Award | [
"Technology"
] | 1,611 | [
"Science and technology awards",
"Computer science",
"Computer science awards"
] |
13,478,591 | https://en.wikipedia.org/wiki/Thomas%20M.%20Connelly | Thomas M. Connelly Jr. (born June 1952) is an American business executive with a focus on chemical engineering. In February 2015, he succeeded Madeleine Jacobs as chief executive officer and executive director of the American Chemical Society.
In November 2014, E. I. du Pont de Nemours and Company announced that Connelly was retiring from his position as executive vice president and chief innovation officer after 36 years with the company.
Education
Connelly studied at Princeton University earning degrees in Chemical Engineering and Economics in 1974. He then attended the University of Cambridge as a Winston Churchill Scholar, where he received a Ph.D. in chemical engineering.
DuPont
Connelly was employed by E. I. du Pont de Nemours and Company for 36 years. He joined the company in 1977 as a research engineer at the DuPont Experimental Station in Wilmington, Delaware. He had assignments in Kentucky and West Virginia before starting his overseas assignments. He had positions in England, Switzerland and China – the final position with responsibility for DuPont's Asia Pacific businesses. He then returned to Wilmington in 1999 and was named vice president and general manager of DuPont Fluoroproducts. He was named senior vice-president of research and Chief Science and Technology Officer in 2001. He was promoted to Executive Vice President, the Chief Innovation Officer and a member of the Office of the Chief Executives of DuPont in 2006. In this position, he had responsibility for DuPont's Applied BioSciences, Nutrition & Health, Performance Polymers and Packaging & Industrial Polymers businesses. He also had responsibility for Integrated Operations which includes Operations, Sourcing & Logistics and Engineering. DuPont announced he was retiring from the company in 2014.
Other positions and honors
He is a member of the Department of Chemical Engineering Advisory Committee of Princeton University. As part of the Chemical Heritage Foundation "Heritage Day 2005" ceremonies, Connelly received the 2005 Award for Executive Excellence of the Commercial Development and Marketing Association (CDMA).
References
External links
Engineers from Ohio
Businesspeople from Toledo, Ohio
American chemical industry businesspeople
DuPont people
1952 births
Living people
American Chemical Society | Thomas M. Connelly | [
"Chemistry"
] | 418 | [
"American Chemical Society"
] |
13,478,962 | https://en.wikipedia.org/wiki/Cultural%20algorithm | Cultural algorithms (CA) are a branch of evolutionary computation where there is a knowledge component that is called the belief space in addition to the population component. In this sense, cultural algorithms can be seen as an extension to a conventional genetic algorithm. Cultural algorithms were introduced by Reynolds (see references).
Belief space
The belief space of a cultural algorithm is divided into distinct categories. These categories represent different domains of knowledge that the population has of the search space.
The belief space is updated after each iteration by the best individuals of the population. The best individuals can be selected using a fitness function that assesses the performance of each individual in population much like in genetic algorithms.
List of belief space categories
Normative knowledge: A collection of desirable value ranges for the individuals in the population component, e.g. acceptable behavior for the agents in the population.
Domain-specific knowledge: Information about the domain of the problem the cultural algorithm is applied to.
Situational knowledge: Specific examples of important events, e.g. successful or unsuccessful solutions.
Temporal knowledge: History of the search space, e.g. the temporal patterns of the search process.
Spatial knowledge: Information about the topography of the search space.
Population
The population component of the cultural algorithm is approximately the same as that of the genetic algorithm.
Communication protocol
Cultural algorithms require an interface between the population and belief space. The best individuals of the population can update the belief space via the update function. Also, the knowledge categories of the belief space can affect the population component via the influence function. The influence function can affect population by altering the genome or the actions of the individuals.
Pseudocode for cultural algorithms
Initialize population space (choose initial population)
Initialize belief space (e.g. set domain specific knowledge and normative value-ranges)
Repeat until termination condition is met
Perform actions of the individuals in population space
Evaluate each individual by using the fitness function
Select the parents to reproduce a new generation of offspring
Let the belief space alter the genome of the offspring by using the influence function
Update the belief space by using the accept function (this is done by letting the best individuals to affect the belief space)
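A minimal Python sketch of this loop for a one-dimensional real-valued problem is given below; it keeps only normative knowledge in the belief space, and the fitness function, acceptance fraction and mutation scheme are illustrative assumptions rather than part of Reynolds' original formulation:
import random
def cultural_algorithm(fitness, generations=100, pop_size=30, accept_frac=0.2):
    population = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]  # population space
    low, high = -10.0, 10.0  # belief space: normative value range
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        elite = ranked[: max(1, int(accept_frac * pop_size))]
        low, high = min(elite), max(elite)  # accept function: best individuals update the belief space
        offspring = []
        for parent in ranked:
            # influence function: mutate, then pull the child back into the normative range
            child = parent + random.gauss(0.0, 0.1 * (high - low) + 1e-9)
            offspring.append(min(max(child, low), high))
        population = offspring
    return max(population, key=fitness)
best = cultural_algorithm(lambda x: -(x - 3.0) ** 2)  # toy fitness function, assumed for illustration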
Applications
Various optimization problems
Social simulation
Real-parameter optimization
See also
Artificial intelligence
Artificial life
Evolutionary computation
Genetic algorithm
Harmony search
Machine learning
Memetic algorithm
Memetics
Metaheuristic
Social simulation
Sociocultural evolution
Stochastic optimization
Swarm intelligence
References
Robert G. Reynolds, Ziad Kobti, Tim Kohler: Agent-Based Modeling of Cultural Change in Swarm Using Cultural Algorithms
R. G. Reynolds, “An Introduction to Cultural Algorithms, ” in Proceedings of the 3rd Annual Conference on Evolutionary Programming, World Scientific Publishing, pp 131–139, 1994.
Robert G. Reynolds, Bin Peng. Knowledge Learning and Social Swarms in Cultural Systems. Journal of Mathematical Sociology. 29:1-18, 2005
Reynolds, R. G., and Ali, M. Z, “Embedding a Social Fabric Component into Cultural Algorithms Toolkit for an Enhanced Knowledge-Driven Engineering Optimization”, International Journal of Intelligent Computing and Cybernetics (IJICC), Vol. 1, No 4, pp. 356–378, 2008
Reynolds, R G., and Ali, M Z., Exploring Knowledge and Population Swarms via an Agent-Based Cultural Algorithms Simulation Toolkit (CAT), in proceedings of IEEE Congress on Computational Intelligence 2007.
Evolutionary algorithms
Genetic algorithms
Nature-inspired metaheuristics | Cultural algorithm | [
"Biology"
] | 681 | [
"Genetics techniques",
"Genetic algorithms"
] |
13,479,645 | https://en.wikipedia.org/wiki/Check%20weigher | A checkweigher is an automatic or manual machine for checking the weight of packaged commodities.
It is normally found at the offgoing end of a production process and is used to ensure that the weight of a pack of the commodity is within specified limits. Any packs that are outside the tolerance are taken out of line automatically.
A checkweigher can weigh in excess of 500 items per minute (depending on carton size and accuracy requirements).
Checkweighers can be used with metal detectors and X-ray machines to enable other attributes of the pack to be checked and acted upon accordingly.
A typical machine
An automatic checkweigher incorporates a series of conveyor belts. These checkweighers are known also as belt weighers, in-motion scales, conveyor scales, dynamic scales, and in-line scales. In filler applications, they are known as check scales. Typically, there are three belts or chain beds:
An infeed belt that may change the speed of the package and to bring it up or down to a speed required for weighing. The infeed is also sometimes used as an indexer, which sets the gap between products to an optimal distance for weighing. It sometimes has special belts or chains to position the product for weighing.
A weigh belt. This is typically mounted on a weight transducer, which may be a strain-gauge load cell or a servo-balance (also known as a force-balance or split-beam). Some older machines pause the weigh bed belt before taking the weight measurement, which may limit line speed and throughput.
A reject belt that provides a method of removing an out-of-tolerance package from the conveyor line. The reject can vary by application. Some require an air-amplifier to blow small products off the belt, but heavier applications require a linear or radial actuator. Some fragile products are rejected by "dropping" the bed so that the product can slide gently into a bin or other conveyor.
For high-speed precision scales, a load cell using electromagnetic force restoration (EMFR) is appropriate. This kind of system charges an inductive coil, effectively floating the weigh bed in an electromagnetic field. When the weight is added, the movement of a ferrous material through that coil causes a fluctuation in the coil current proportional to the weight of the object. Other technologies used include strain gauges and vibrating wire load cells.
It is usual for a built-in computer to take many weight readings from the transducer over the time that the package is on the weigh bed to ensure an accurate weight reading.
Calibration is critical. A lab scale, which is usually housed in an isolated chamber pressurized with dry nitrogen at sea-level pressure, can weigh an object to within a hundredth of a gram, but ambient air pressure is a factor. Weighing is straightforward when there is no motion; in motion, however, there is a less obvious factor: noise from the motion of the weigh belt, vibration, and drafts caused by air-conditioning or refrigeration. Torque on a load cell also causes erratic readings.
A dynamic, in-motion checkweigher takes samples and analyzes them to form an accurate weight over a given time period. In most cases, a trigger from an optical (or ultrasonic) device signals the passing of a package. Once the trigger fires, a delay allows the package to reach the "sweet spot" (center) of the weigh bed, and the weight is then sampled for a given duration. If either of these times is wrong, the weight will be wrong. There is no exact method to predict these timings; some systems provide a "graphing" feature to assist, but in practice they are usually tuned empirically.
A reject conveyor to enable the out-of-tolerance packages to be removed from the normal flow while still moving at the conveyor velocity. The reject mechanism can be one of several types. Among these are a simple pneumatic pusher to push the reject pack sideways from the belt, a diverting arm to sweep the pack sideways, and a reject belt that lowers or lifts to divert the pack vertically. A typical checkweigher usually has a bin to collect the out-of-tolerance packs. Sometimes these bins are provided with a lock, to prevent out-of-specification items from being fed back onto the conveyor belt.
Tolerance methods
There are several tolerance methods:
The traditional "minimum weight" system where weights below a specified weight are rejected. Normally the minimum weight is the weight that is printed on the pack or a weight level that exceeds that to allow for weight losses after production such as evaporation of commodities that have a moisture content. The larger wholesale companies have mandated that any product shipped to them have accurate weight checks such that a customer can be confident that they are getting the amount of product for which they paid. These wholesalers charge large fees for inaccurately filled packages.
The European Average Weight System which follows three specified rules known as the "Packers Rules".
Other published standards and regulations such as NIST Handbook 133
Data collection
There is also a requirement under the European Average Weight System that data collected by checkweighers is archived and is available for inspection. Most modern checkweighers are therefore equipped with communications ports to enable the actual pack weights and derived data to be uploaded to a host computer. This data can also be used for management information enabling processes to be fine-tuned and production performance monitored.
Checkweighers that are equipped with high speed communications such as Ethernet ports are capable of integrating themselves into groups such that a group of production lines that are producing identical products can be considered as one production line for the purposes of weight control. For example, a line that is running with a low average weight can be complemented by another that is running with a high average weight such that the aggregate of the two lines will still comply with rules.
An alternative is to program the checkweigher to check bands of different weight tolerances. For instance, the total valid weight is 100 grams ±15 grams. This means that the product can weigh 85 g to 115 g. However, if 10,000 packs a day are being produced, and most are 110 g, 100 kg of product is being lost. If it is run closer to 85 g, there may be a high rejection rate.
Example: A checkweigher is programmed to indicate 5 zones with a resolution of 1 g.
With a check weigher programmed as a zone checkweigher, the data collection over the networks, as well as local statistics, can indicate the need to check the settings on the upstream equipment to better control flow into the packaging. In some cases the dynamic scale sends a signal to a filler, for instance, in real-time, controlling the actual flow into a barrel, can, bag, etc. In many cases a checkweigher has a light-tree with different lights to indicate the variation of the zone weight of each product.
This data can be used to determine if an issue exists with an upstream filling, or packaging, machine. A checkweigher can send a signal to the machine to increase or decrease the amount put into a package. This can result in a payback associated with the checkweigher since producers will be better able to control the amount of give-away. See checkweigher case study outlining ground beef and packaging savings.
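A small Python sketch of this zone logic is shown below; the zone boundaries are assumed for illustration (the original five-zone table is not reproduced in this text), and real systems would derive them from the product specification:
def classify(weight, target=100.0, tolerance=15.0, band=5.0):
    # tolerance marks the reject limits; band splits the accepted range into zones
    if weight < target - tolerance:
        return "reject under"
    if weight > target + tolerance:
        return "reject over"
    if weight < target - band:
        return "accept low"
    if weight > target + band:
        return "accept high"
    return "accept target"
counts = {}
for w in (84.0, 88.5, 99.2, 101.0, 107.3, 116.2):  # assumed sample pack weights in grams
    zone = classify(w)
    counts[zone] = counts.get(zone, 0) + 1
# zone counts like these can be reported upstream to nudge the filler setpoint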
Application considerations
Speed and accuracy that can be achieved by a checkweigher is influenced by the following:
Pack length or dia
Pack weight
Line speed required
Pack content (solid or liquid)
Motor technology
Stabilization time of the weight transducer
Airflow causing readings in error
Vibrations from machinery causing unnecessary rejects
Sensitivity to temperature, as the load cells can be temperature sensitive
Applications
In-motion scales are dynamic machines that can be designed to perform thousands of tasks. Some are used as simple case weighers at the end of the conveyor line to ensure the overall finished package product is within its target weight.
An in-motion conveyor checkweigher can be used to detect missing pieces of a kit, such as a cell phone package that is missing the manual or other collateral. In a poultry processing plant, checkweighers are typically used on the incoming conveyor chain and on the output pre-packaging conveyor chain. The bird is weighed when it comes onto the conveyor and again after processing and washing; the network computer can then determine whether the bird absorbed too much water, which will drain during further processing and leave the bird under its target weight.
A high speed conveyor scale can be used to change the pacing, or pitch of the products on the line by speeding, or slowing the product speed to change the distance between packs before reaching a different speed going into a conveyor machine that is boxing multiple packs into a box. The "pitch" is the measurement of the product as it comes down the conveyor line from leading edge to leading edge.
A checkweigher can be used to count packs, and the aggregate (total) weight of the boxes going onto a pallet for shipment, including the ability to read each package's weight and cubic dimensions. The controller computer can print a shipping label and a bar-code label to identify the weight, the cubic dimensions, ship-to address, and other data for machine ID through the shipment of the product. A receiving checkweigher for the shipment can read the label with a bar code scanner, and determine if the shipment is as it was before the transportation carrier received it from the shipper's loading dock, and determine if a box is missing, or something was pilfered or broken in transit.
Checkweighers are also used for Quality management. For instance, raw material for machining a bearing is weighed prior to beginning the process, and after the process, the quality inspector expects that a certain amount of metal was removed in the finishing process. The finished bearings are weighed by the checkweigher, and bearings over- or underweight are rejected for physical inspection. This is a benefit to the inspector, since he can have a high confidence that the ones not rejected are within machining tolerance. A common usage is for throttling plastic extruders such that a bottle used to package detergent meets that requirements of the finished packager.
Quality management can use a checkweigher for Nondestructive testing to verify finished goods using common Evaluation methods to detect pieces missing from a "finished" product, such as grease from a bearing, or a missing roller within the housing.
Checkweighers can be built with metal detectors, x-ray machines, open-flap detection, bar-code scanners, holographic scanners, temperature sensors, vision inspectors, timing screws to set the timing and spacing between product, indexing gates and concentrator ducts to line up the product into a designated area on the conveyor. An industrial motion checkweigher can sort products from a fraction of a gram to many, many kilograms. In English units, is this from less than 100th of an ounce to as much as 500 lbs or more. Specialized checkweighers can weigh commercial aircraft, and even find their center-of-gravity.
Checkweighers can operate at very high speeds, processing products weighing fractions of a gram at over 100m/m (meters per minute) and materials such as pharmaceuticals and 200 lb bags of produce at over 100fpm (feet per minute). They can be designed in many shapes and sizes, hung from ceilings, raised on mezzanines, operated in ovens or in refrigerators. Their conveying medium can be industrial belting, low-static belting, chains similar to bicycle chains (but much smaller), or interlocked chain belts of any width. They can have chain belts made of special materials, different polymers, metals, etc.
Checkweighers are used in cleanrooms, mints (rolls of coins), dry atmosphere environments, wet environments, produce barns, food processing, drug processing, etc. Checkweighers are specified by the kind of environment, and the kind of cleaning will be used. Typically, a checkweigher for produce is made of mild steel, and one that will be cleaned with harsh chemicals, such as bleach, will be made with all stainless steel parts, even the Load cells. These machines are labeled "full washdown", and must have every part and component specified to survive the washdown environment.
Checkweighers are operated in some applications for extremely long periods of time- 24/7 year round. Generally, conveyor lines are not stopped unless there is maintenance required, or there is an emergency stop, called an E-stop. Checkweighers operating in high density conveyor lines may have numerous special equipments in their design to ensure that if an E-stop occurs, all power going to all motors is removed until the E-stop is cleared and reset.
References
Books
Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009,
External links
Industrial equipment
Weighing instruments
Industrial automation
Meat industry
Quality management
Nondestructive testing
Applications of computer vision
Holography
Packaging machinery | Check weigher | [
"Physics",
"Materials_science",
"Technology",
"Engineering"
] | 2,700 | [
"Weighing instruments",
"Packaging machinery",
"Mass",
"Measuring instruments",
"Industrial engineering",
"Automation",
"Nondestructive testing",
"Materials testing",
"Industrial machinery",
"nan",
"Industrial automation",
"Matter"
] |
1,014,518 | https://en.wikipedia.org/wiki/Volatile%20organic%20compound | Volatile organic compounds (VOCs) are organic compounds that have a high vapor pressure at room temperature. They are common and exist in a variety of settings and products, not limited to house mold, upholstered furniture, arts and crafts supplies, dry cleaned clothing, and cleaning supplies. VOCs are responsible for the odor of scents and perfumes as well as pollutants. They play an important role in communication between animals and plants, such as attractants for pollinators, protection from predation, and even inter-plant interactions. Some VOCs are dangerous to human health or cause harm to the environment, often despite the odor being perceived as pleasant, such as "new car smell".
Anthropogenic VOCs are regulated by law, especially indoors, where concentrations are the highest. Most VOCs are not acutely toxic, but may have long-term chronic health effects. Some VOCs have been used in pharmaceutical settings, while others are the target of administrative controls because of their recreational use. The high vapor pressure of VOCs correlates with a low boiling point, which relates to the number of the sample's molecules in the surrounding air, a trait known as volatility.
Definitions
Diverse definitions of the term VOC are in use. Some examples are presented below.
Canada
Health Canada classifies VOCs as organic compounds that have boiling points roughly in the range of . The emphasis is placed on commonly encountered VOCs that would have an effect on air quality.
European Union
The European Union defines a VOC as "any organic compound as well as the fraction of creosote, having at 293.15 K a vapour pressure of 0.01 kPa or more, or having a corresponding volatility under the particular conditions of use;". The VOC Solvents Emissions Directive was the main policy instrument for the reduction of industrial emissions of volatile organic compounds (VOCs) in the European Union. It covers a wide range of solvent-using activities, e.g. printing, surface cleaning, vehicle coating, dry cleaning and manufacture of footwear and pharmaceutical products. The VOC Solvents Emissions Directive requires installations in which such activities are applied to comply either with the emission limit values set out in the Directive or with the requirements of the so-called reduction scheme. Article 13 of The Paints Directive, approved in 2004, amended the original VOC Solvents Emissions Directive and limits the use of organic solvents in decorative paints and varnishes and in vehicle finishing products. The Paints Directive sets out maximum VOC content limit values for paints and varnishes in certain applications. The Solvents Emissions Directive was replaced by the Industrial Emissions Directive from 2013.
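The quantitative part of the EU definition reduces to a single threshold test, sketched below. The vapour-pressure figures in the example are approximate values used only for illustration.

```python
# Minimal sketch of the EU criterion quoted above: an organic compound counts
# as a VOC if its vapour pressure at 293.15 K is 0.01 kPa or more.

EU_VOC_THRESHOLD_KPA = 0.01  # at 293.15 K

def is_voc_eu(vapour_pressure_kpa_at_293k: float) -> bool:
    return vapour_pressure_kpa_at_293k >= EU_VOC_THRESHOLD_KPA

examples = {
    "acetone": 24.0,      # highly volatile -> VOC (rough value)
    "toluene": 2.9,       # VOC (rough value)
    "glycerol": 0.00002,  # effectively non-volatile -> not a VOC by this test
}
for name, vp in examples.items():
    print(f"{name}: {'VOC' if is_voc_eu(vp) else 'not a VOC'} under the EU definition")
```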
China
The People's Republic of China defines a VOC as those compounds that have "originated from automobiles, industrial production and civilian use, burning of all types of fuels, storage and transportation of oils, fitment finish, coating for furniture and machines, cooking oil fume and fine particles (PM 2.5)", and similar sources. The Three-Year Action Plan for Winning the Blue Sky Defence War released by the State Council in July 2018 creates an action plan to reduce 2015 VOC emissions 10% by 2020.
India
The Central Pollution Control Board of India released the Air (Prevention and Control of Pollution) Act in 1981, amended in 1987, to address concerns about air pollution in India. While the document does not differentiate between VOCs and other air pollutants, the CPCB monitors "oxides of nitrogen (NOx), sulphur dioxide (SO2), fine particulate matter (PM10) and suspended particulate matter (SPM)".
United States
The definitions of VOCs used for control of precursors of photochemical smog used by the U.S. Environmental Protection Agency (EPA) and state agencies in the US with independent outdoor air pollution regulations include exemptions for VOCs that are determined to be non-reactive, or of low-reactivity in the smog formation process. Prominent is the VOC regulation issued by the South Coast Air Quality Management District in California and by the California Air Resources Board (CARB). However, this specific use of the term VOCs can be misleading, especially when applied to indoor air quality because many chemicals that are not regulated as outdoor air pollution can still be important for indoor air pollution.
Following a public hearing in September 1995, California's ARB uses the term "reactive organic gases" (ROG) to measure organic gases. The CARB revised the definition of "Volatile Organic Compounds" used in their consumer products regulations, based on the committee's findings.
In addition to drinking water, VOCs are regulated in pollutant discharges to surface waters (both directly and via sewage treatment plants) as hazardous waste, but not in non-industrial indoor air. The Occupational Safety and Health Administration (OSHA) regulates VOC exposure in the workplace. Volatile organic compounds that are classified as hazardous materials are regulated by the Pipeline and Hazardous Materials Safety Administration while being transported.
Biologically generated VOCs
Most VOCs in Earth's atmosphere are biogenic, largely emitted by plants.
Biogenic volatile organic compounds (BVOCs) encompass VOCs emitted by plants, animals, or microorganisms, and while extremely diverse, are most commonly terpenoids, alcohols, and carbonyls (methane and carbon monoxide are generally not considered). Not counting methane, biological sources emit an estimated 760 teragrams of carbon per year in the form of VOCs. The majority of VOCs are produced by plants, the main compound being isoprene. Small amounts of VOCs are produced by animals and microbes. Many VOCs are considered secondary metabolites, which often help organisms in defense, such as plant defense against herbivory. The strong odor emitted by many plants consists of green leaf volatiles, a subset of VOCs. Although intended for nearby organisms to detect and respond to, these volatiles can be detected and communicated through wireless electronic transmission, by embedding nanosensors and infrared transmitters into the plant materials themselves.
Emissions are affected by a variety of factors, such as temperature, which determines rates of volatilization and growth, and sunlight, which determines rates of biosynthesis. Emission occurs almost exclusively from the leaves, the stomata in particular. VOCs emitted by terrestrial forests are often oxidized by hydroxyl radicals in the atmosphere; in the absence of NOx pollutants, VOC photochemistry recycles hydroxyl radicals to create a sustainable biosphere–atmosphere balance. Due to recent climate change developments, such as warming and greater UV radiation, BVOC emissions from plants are generally predicted to increase, thus upsetting the biosphere–atmosphere interaction and damaging major ecosystems. A major class of VOCs is the terpene class of compounds, such as myrcene.
Providing a sense of scale, a forest in area, the size of the U.S. state of Pennsylvania, is estimated to emit of terpenes on a typical August day during the growing season. Maize produces the VOC (Z)-3-hexen-1-ol and other plant hormones.
Anthropogenic sources
Anthropogenic sources emit about 142 teragrams (1.42 × 1011 kg, or 142 billion kg) of carbon per year in the form of VOCs.
The major source of man-made VOCs are:
Fossil fuel use and production, e.g. incompletely combusted fossil fuels or unintended evaporation of fuels. The most prevalent VOC is ethane, a relatively inert compound.
Solvents used in coatings, paints, and inks. Approximately 12 billion litres of paint are produced annually. Typical solvents include aliphatic hydrocarbons, ethyl acetate, glycol ethers and acetone. Motivated by cost, environmental concerns, and regulation, the paint and coating industries are increasingly shifting toward aqueous solvents.
Compressed aerosol products, mainly butane and propane, estimated to contribute 1.3 million tonnes of VOC emissions per year globally.
Biofuel use, e.g., cooking oils in Asia and bioethanol in Brazil.
Biomass combustion, especially from rain forests. Although combustion principally releases carbon dioxide and water, incomplete combustion affords a variety of VOCs.
Indoor VOCs
Because of their numerous indoor sources, concentrations of VOCs are consistently higher indoors (up to ten times higher) than outdoors. VOCs are emitted by thousands of indoor products. Examples include: paints, varnishes, waxes and lacquers, paint strippers, cleaning and personal care products, pesticides, building materials and furnishings, office equipment such as copiers and printers, correction fluids and carbonless copy paper, graphics and craft materials including glues and adhesives, permanent markers, and photographic solutions. Human activities such as cooking and cleaning can also emit VOCs. Cooking can release long-chain aldehydes and alkanes when oil is heated, and terpenes can be released when spices are prepared and/or cooked. Cleaning products contain a range of VOCs, including monoterpenes, sesquiterpenes, alcohols and esters. Once released into the air, VOCs can undergo reactions with ozone and hydroxyl radicals to produce other VOCs, such as formaldehyde.
Some VOCs are emitted directly indoors, and some are formed through the subsequent chemical reactions. The total concentration of all VOCs (TVOC) indoors can be up to five times higher than that of outdoor levels.
New buildings experience particularly high levels of VOC off-gassing indoors because of the abundant new materials (building materials, fittings, surface coverings and treatments such as glues, paints and sealants) exposed to the indoor air, emitting multiple VOC gases. This off-gassing has a multi-exponential decay trend that is discernible over at least two years, with the most volatile compounds decaying with a time-constant of a few days, and the least volatile compounds decaying with a time-constant of a few years.
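One way to picture the multi-exponential decay described above is as a sum of exponential terms with very different time constants. The sketch below uses made-up amplitudes and time constants (a few days and roughly two years) purely to show the shape of such a decay; it is not fitted to any measured data.

```python
# Rough sketch of multi-exponential off-gassing: total emission modelled as a
# sum of A * exp(-t / tau) terms, one fast component (tau ~ days) and one slow
# component (tau ~ years). All numbers are illustrative assumptions.

import math

def tvoc_emission(t_days: float,
                  components=((400.0, 5.0), (60.0, 730.0))) -> float:
    """Emission rate (arbitrary units) as a sum of A * exp(-t / tau) terms."""
    return sum(a * math.exp(-t_days / tau) for a, tau in components)

for t in (0, 7, 30, 180, 730):
    print(f"day {t:4d}: relative emission {tvoc_emission(t):7.1f}")
```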
New buildings may require intensive ventilation for the first few months, or a bake-out treatment. Existing buildings may be replenished with new VOC sources, such as new furniture, consumer products, and redecoration of indoor surfaces, all of which lead to a continuous background emission of TVOCs, and requiring improved ventilation.
There are strong seasonal variations in indoors VOC emissions, with emission rates increasing in summer. This is largely due to the rate of diffusion of VOC species through materials to the surface, increasing with temperature. This leads to generally higher concentrations of TVOCs indoors in summer.
Indoor air-quality measurements
Measurement of VOCs from the indoor air is done with sorption tubes e. g. Tenax (for VOCs and SVOCs) or DNPH-cartridges (for carbonyl-compounds) or air detector. The VOCs adsorb on these materials and are afterwards desorbed either thermally (Tenax) or by elution (DNPH) and then analyzed by GC–MS/FID or HPLC. Reference gas mixtures are required for quality control of these VOC measurements. Furthermore, VOC emitting products used indoors, e.g. building products and furniture, are investigated in emission test chambers under controlled climatic conditions. For quality control of these measurements round robin tests are carried out, therefore reproducibly emitting reference materials are ideally required. Other methods have used proprietary Silcosteel-coated canisters with constant flow inlets to collect samples over several days. These methods are not limited by the adsorbing properties of materials like Tenax.
Regulation of indoor VOC emissions
In most countries, a separate definition of VOCs is used with regard to indoor air quality that comprises each organic chemical compound that can be measured as follows: adsorption from air on Tenax TA, thermal desorption, gas chromatographic separation over a 100% nonpolar column (dimethylpolysiloxane). VOC (volatile organic compounds) are all compounds that appear in the gas chromatogram between and including n-hexane and n-hexadecane. Compounds appearing earlier are called VVOC (very volatile organic compounds); compounds appearing later are called SVOC (semi-volatile organic compounds).
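A minimal sketch of this reporting convention: anything eluting before n-hexane is reported as VVOC, anything between n-hexane and n-hexadecane as VOC, and anything later as SVOC. The retention times used below are invented for illustration only.

```python
# Sketch of the indoor-air convention described above, approximating elution
# order by retention time relative to the two marker alkanes on a nonpolar
# GC column. Marker and compound retention times are assumed values.

def classify_by_retention(rt: float, rt_hexane: float, rt_hexadecane: float) -> str:
    if rt < rt_hexane:
        return "VVOC"
    if rt <= rt_hexadecane:
        return "VOC"
    return "SVOC"

RT_C6, RT_C16 = 3.2, 21.5  # assumed retention times (minutes) of the markers
for name, rt in [("acetaldehyde", 1.8), ("toluene", 7.4), ("DEHP", 29.0)]:
    print(f"{name}: {classify_by_retention(rt, RT_C6, RT_C16)}")
```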
France, Germany (AgBB/DIBt), Belgium, Norway (TEK regulation) and Italy (CAM Edilizia) have enacted regulations to limit VOC emissions from commercial products. European industry has developed numerous voluntary ecolabels and rating systems, such as EMICODE, M1, Blue Angel, GuT (textile floor coverings), Nordic Swan Ecolabel, EU Ecolabel, and Indoor Air Comfort. In the United States, several standards exist; California Standard CDPH Section 01350 is the most common one. These regulations and standards changed the marketplace, leading to an increasing number of low-emitting products.
Health risks
Respiratory, allergic, or immune effects in infants or children are associated with man-made VOCs and other indoor or outdoor air pollutants.
Some VOCs, such as styrene and limonene, can react with nitrogen oxides or with ozone to produce new oxidation products and secondary aerosols, which can cause sensory irritation symptoms. VOCs contribute to the formation of tropospheric ozone and smog.
Health effects include eye, nose, and throat irritation; headaches, loss of coordination, nausea, hearing disorders and damage to the liver, kidney, and central nervous system. Some VOCs are suspected or known to cause cancer in humans. Key signs or symptoms associated with exposure to VOCs include conjunctival irritation, nose and throat discomfort, headache, allergic skin reaction, dyspnea, declines in serum cholinesterase levels, nausea, vomiting, nose bleeding, fatigue, dizziness.
The ability of organic chemicals to cause health effects varies greatly from those that are highly toxic to those with no known health effects. As with other pollutants, the extent and nature of the health effect will depend on many factors including level of exposure and length of time exposed. Eye and respiratory tract irritation, headaches, dizziness, visual disorders, and memory impairment are among the immediate symptoms that some people have experienced soon after exposure to some organics. At present, not much is known about what health effects occur from the levels of organics usually found in homes.
Ingestion
While small in comparison to the concentrations found in indoor air, benzene, toluene, and methyl tert-butyl ether (MTBE) have been found in samples of human milk, adding to the VOC concentrations people are exposed to throughout the day. A study notes the difference between VOCs in alveolar breath and inspired air, suggesting that VOCs are ingested, metabolized, and excreted via the extra-pulmonary pathway. VOCs are also ingested through drinking water in varying concentrations. Some VOC concentrations exceeded the EPA's National Primary Drinking Water Regulations and China's National Drinking Water Standards set by the Ministry of Ecology and Environment.
Dermal absorption
The presence of VOCs in the air and in groundwater has prompted further studies. Several studies have measured the effects of dermal absorption of specific VOCs. Dermal exposure to VOCs like formaldehyde and toluene downregulates antimicrobial peptides in the skin such as cathelicidin LL-37 and human β-defensins 2 and 3. Xylene and formaldehyde worsen allergic inflammation in animal models. Toluene also increases the dysregulation of filaggrin, a key protein in dermal regulation; this was confirmed in human skin samples by immunofluorescence showing protein loss and by western blotting showing mRNA loss. Toluene exposure also decreased water in the trans-epidermal layer, leaving the skin's layers vulnerable.
Limit values for VOC emissions
Limit values for VOC emissions into indoor air are published by AgBB, AFSSET, the California Department of Public Health, and others. These regulations have prompted several companies in the paint and adhesive industries to reduce VOC levels in their products. VOC labels and certification programs may not properly assess all of the VOCs emitted from a product, including some chemical compounds that may be relevant for indoor air quality. Each ounce of colorant added to tint paint may contain between 5 and 20 grams of VOCs. A dark color, however, could require 5–15 ounces of colorant, adding up to 300 or more grams of VOCs per gallon of paint.
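The colorant figures above imply a simple range calculation, sketched here; only the multiplication is new, the input ranges are taken directly from the text.

```python
# Worked version of the colorant arithmetic above: 5–20 g of VOCs per ounce of
# colorant, and 5–15 ounces of colorant for a dark tint.

def colorant_voc_range(ounces_min: float, ounces_max: float,
                       g_per_ounce_min: float = 5.0,
                       g_per_ounce_max: float = 20.0) -> tuple:
    """Return (low, high) grams of VOCs contributed by the colorant per gallon."""
    return ounces_min * g_per_ounce_min, ounces_max * g_per_ounce_max

low, high = colorant_voc_range(5, 15)
print(f"Dark tint adds roughly {low:.0f}-{high:.0f} g of VOCs per gallon of paint")
# -> about 25-300 g, consistent with the "300 or more grams" figure above
```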
VOCs in healthcare settings
VOCs are also found in hospital and health care environments. In these settings, these chemicals are widely used for cleaning, disinfection, and hygiene of the different areas. Thus, health professionals such as nurses, doctors, sanitation staff, etc., may present with adverse health effects such as asthma; however, further evaluation is required to determine the exact levels and determinants that influence the exposure to these compounds.
Concentration levels of individual VOCs such as halogenated and aromatic hydrocarbons vary substantially between areas of the same hospital. Generally, ethanol, isopropanol, ether, and acetone are the main compounds in the interior of the site. Following the same line, in a study conducted in the United States, it was established that nursing assistants are the most exposed to compounds such as ethanol, while medical equipment preparers are most exposed to 2-propanol.
In relation to exposure to VOCs by cleaning and hygiene personnel, a study conducted in 4 hospitals in the United States established that sterilization and disinfection workers are linked to exposures to d-limonene and 2-propanol, while those responsible for cleaning with chlorine-containing products are more likely to have higher levels of exposure to α-pinene and chloroform. Those who perform floor and other surface cleaning tasks (e.g., floor waxing) and who use quaternary ammonium, alcohol, and chlorine-based products are associated with a higher VOC exposure than the two previous groups, that is, they are particularly linked to exposure to acetone, chloroform, α-pinene, 2-propanol or d-limonene.
Other healthcare environments such as nursing and age care homes have been rarely a subject of study, even though the elderly and vulnerable populations may spend considerable time in these indoor settings where they might be exposed to VOCs, derived from the common use of cleaning agents, sprays and fresheners. In one study, more than 200 chemicals were identified, of which 41 have adverse health effects, 37 of them being VOCs. The health effects include skin sensitization, reproductive and organ-specific toxicity, carcinogenicity, mutagenicity, and endocrine-disrupting properties. Furthermore, in another study carried out in the same European country, it was found that there is a significant association between breathlessness in the elderly population and elevated exposure to VOCs such as toluene and o-xylene, unlike the remainder of the population.
VOCs in hospitality and retail
Workers in hospitality are also exposed to VOCs from a variety of sources including cleaning products (air fresheners, floor cleaners, disinfectants, etc.), building materials and furnishings, as well as fragrances. Among the most common VOCs found in hospitality settings are alkanes, which are a major ingredient in cleaning products (35%). Other products present in hospitality that contain alkanes are laundry detergents, paints, and lubricants. Housekeepers in particular may also be exposed to formaldehyde, which is present in some fabrics used to make towels and bedding; however, exposure decreases after several washes. Some hotels still use bleach to clean, and this bleach can form chloroform and carbon tetrachloride. Fragrances are often used in hotels and are composed of many different chemicals.
There are many negative health outcomes associated with VOC exposure in hospitality. VOCs present in cleaning supplies can cause skin, eye, nose, and throat irritation, which can develop into dermatitis. VOCs in cleaning supplies can also cause more serious conditions, such as respiratory diseases and cancer. One study found that n-nonane and formaldehyde were the main drivers of eye and upper respiratory tract irritation while cancer risks were driven by chloroform and formaldehyde. Some solvent-based products have also been shown to cause damage to the kidneys and reproductive organs. One study showed that the star rating of the hotel may influence VOC exposure, as hotels with lower star ratings tend to have lower quality materials for the furnishings. Additionally, due to a movement among higher-end hotels to be more environmentally friendly, there has been a shift to using less harsh cleaning agents.
Another similar environment that exposes workers to VOCs is retail spaces. Studies have shown that retail spaces have the highest VOC concentrations compared to all other indoor spaces such as residences, offices, and vehicles. The concentration and types of VOCs present depend on the type of store, but common sources of VOCs in retail spaces include motor vehicle exhaust, building materials, cleaning products, and fragrances. One study found that VOC concentrations were higher in retail storage spaces compared to the sales areas, particularly formaldehyde. In sales areas, formaldehyde concentrations ranged from 8.0 to 19.4 µg m−3 compared to 14.2 to 45.0 µg m−3 in storage spaces. Occupational exposure to VOCs also depends on the task. One study found that workers were exposed to peak total VOC concentrations when they were removing the plastic film off new products. This peak was 7 times higher than the total VOC concentration peaks of all other tasks, contributing greatly to retail workers' exposure to VOCs despite being a relatively short task.
One way that VOC concentrations can be kept minimal within retail and hospitality is by ensuring there is proper air ventilation. Employers can ensure proper ventilation by placing furniture in a way that enhances air circulation, as well as checking that the HVAC (heating, ventilation, and air conditioning) system is working properly to remove pollutants from the air. Workers can make sure that air vents are not blocked.
Analytical methods
Sampling
Obtaining samples for analysis is challenging. VOCs, even when at dangerous levels, are dilute, so preconcentration is typically required. Many components of the atmosphere are mutually incompatible, e.g. ozone and organic compounds, peroxyacyl nitrates and many organic compounds. Furthermore, collection of VOCs by condensation in cold traps also accumulates a large amount of water, which generally must be removed selectively, depending on the analytical techniques to be employed.
Solid-phase microextraction (SPME) techniques are used to collect VOCs at low concentrations for analysis. As applied to breath analysis, the following modalities are employed for sampling: gas sampling bags, syringes, evacuated steel and glass containers.
Principle and measurement methods
In the U.S., standard methods have been established by the National Institute for Occupational Safety and Health (NIOSH) and another by U.S. OSHA. Each method uses a single component solvent; butanol and hexane cannot be sampled, however, on the same sample matrix using the NIOSH or OSHA method.
VOCs are quantified and identified by two broad techniques. The major technique is gas chromatography (GC). GC instruments allow the separation of gaseous components. When coupled to a flame ionization detector (FID) GCs can detect hydrocarbons at the parts per trillion levels. Using electron capture detectors, GCs are also effective for organohalide such as chlorocarbons.
The second major technique associated with VOC analysis is mass spectrometry, which is usually coupled with GC, giving the hyphenated technique of GC-MS.
Direct injection mass spectrometry techniques are frequently utilized for the rapid detection and accurate quantification of VOCs. PTR-MS is among the methods that have been used most extensively for the on-line analysis of biogenic and anthropogenic VOCs. PTR-MS instruments based on time-of-flight mass spectrometry have been reported to reach detection limits of 20 pptv after 100 ms and 750 ppqv after 1 min. measurement (signal integration) time. The mass resolution of these devices is between 7000 and 10,500 m/Δm, thus it is possible to separate most common isobaric VOCs and quantify them independently.
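The quoted mass resolution can be read as R = m/Δm, so two ions near mass m are separable if their exact masses differ by more than m/R. A small sketch of that calculation; the example mass is approximate and used only for illustration.

```python
# Sketch of what the quoted mass resolution means in practice: with
# R = m / delta_m, two ions near mass m can be separated if their exact
# masses differ by at least m / R.

def min_separable_dm(m: float, resolution: float) -> float:
    return m / resolution

m_nominal = 69.07  # e.g. the protonated-isoprene region (approximate value)
for r in (7000, 10500):
    print(f"R = {r}: ions near m/z {m_nominal} separable if they differ "
          f"by > {min_separable_dm(m_nominal, r):.4f} u")
```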
Chemical fingerprinting and breath analysis
The exhaled human breath contains a few thousand volatile organic compounds and is used in breath biopsy to serve as a VOC biomarker to test for diseases, such as lung cancer. One study has shown that "volatile organic compounds ... are mainly blood borne and therefore enable monitoring of different processes in the body." And it appears that VOC compounds in the body "may be either produced by metabolic processes or inhaled/absorbed from exogenous sources" such as environmental tobacco smoke. Chemical fingerprinting and breath analysis of volatile organic compounds has also been demonstrated with chemical sensor arrays, which utilize pattern recognition for detection of component volatile organics in complex mixtures such as breath gas.
Metrology for VOC measurements
To achieve comparability of VOC measurements, reference standards traceable to SI units are required. For a number of VOCs, gaseous reference standards are available from specialty gas suppliers or national metrology institutes, either in the form of cylinders or dynamic generation methods. However, for many VOCs, such as oxygenated VOCs, monoterpenes, or formaldehyde, no standards are available at the appropriate amount fraction because of the chemical reactivity or adsorption of these molecules. Currently, several national metrology institutes are working on the missing standard gas mixtures at trace-level concentrations, minimising adsorption processes and improving the zero gas. The ultimate aim is for the traceability and long-term stability of the standard gases to be in accordance with the data quality objectives (DQO, maximum uncertainty of 20% in this case) required by the WMO/GAW program.
See also
Aroma compound
Criteria air contaminants
Fugitive emission
Non-methane volatile organic compound
Organic compound
Trichloroethylene
Vapor intrusion
VOC contamination of groundwater
Volatile Organic Compounds Protocol
References
External links
Volatile Organic Compounds (VOCs) web site of the Chemicals Control Branch of Environment Canada
EPA New England: Ground-level Ozone (Smog) Information
VOC emissions and calculations
Examples of product labels with low VOC emission criteria
KEY-VOCS: Metrology for VOC indicators in air pollution and climate change, a European Metrology Research Project.
VOCs in Paints
Chemical Safety in the Workplace, by the US National Institute for Occupational Safety and Health
Building biology
Organic compounds
Pollutants
Smog
Flavors
Perfumes
Pollution
Chemical hazards
Indoor air pollution | Volatile organic compound | [
"Physics",
"Chemistry",
"Engineering"
] | 5,649 | [
"Visibility",
"Physical quantities",
"Building engineering",
"Smog",
"Chemical hazards",
"Organic compounds",
"Building biology"
] |
1,014,534 | https://en.wikipedia.org/wiki/Projectively%20extended%20real%20line | In real analysis, the projectively extended real line (also called the one-point compactification of the real line), is the extension of the set of the real numbers, , by a point denoted . It is thus the set with the standard arithmetic operations extended where possible, and is sometimes denoted by or The added point is called the point at infinity, because it is considered as a neighbour of both ends of the real line. More precisely, the point at infinity is the limit of every sequence of real numbers whose absolute values are increasing and unbounded.
The projectively extended real line may be identified with a real projective line in which three points have been assigned the specific values 0, 1 and ∞. The projectively extended real number line is distinct from the affinely extended real number line, in which +∞ and −∞ are distinct.
Dividing by zero
Unlike most mathematical models of numbers, this structure allows division by zero:
a / 0 = ∞
for nonzero a. In particular, 1/0 = ∞ and 1/∞ = 0, making the reciprocal function a total function in this structure. The structure, however, is not a field, and none of the binary arithmetic operations are total – for example, 0 ⋅ ∞ is undefined, even though the reciprocal is total. It has usable interpretations, however – for example, in geometry, the slope of a vertical line is ∞.
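A small sketch of how this arithmetic can be modelled in code, using Python's float('inf') as a stand-in for the single unsigned point at infinity. It illustrates the rules discussed above; it is not the article's formal definition table, and the stand-in is imperfect (Python's infinity is signed).

```python
# Sketch of division on R ∪ {∞} with one unsigned infinity: a/0 = ∞ for
# nonzero a, a/∞ = 0 for finite a, and the genuinely undefined combinations
# (0/0 and ∞/∞) reported as errors.

INF = float("inf")  # stand-in for the single point at infinity

def proj_div(a: float, b: float) -> float:
    if a == INF and b == INF:
        raise ArithmeticError("inf / inf is undefined")
    if a == 0 and b == 0:
        raise ArithmeticError("0 / 0 is undefined")
    if b == 0:
        return INF   # a / 0 = inf for every nonzero a
    if b == INF:
        return 0.0   # a / inf = 0 for every finite a
    if a == INF:
        return INF   # inf / b = inf for finite nonzero b (keep it unsigned)
    return a / b

print(proj_div(1.0, 0.0))   # inf
print(proj_div(5.0, INF))   # 0.0
```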
Extensions of the real line
The projectively extended real line extends the field of real numbers in the same way that the Riemann sphere extends the field of complex numbers, by adding a single point conventionally called ∞.
In contrast, the affinely extended real number line (also called the two-point compactification of the real line) distinguishes between +∞ and −∞.
Order
The order relation cannot be extended to ℝ ∪ {∞} in a meaningful way. Given a number a, there is no convincing argument to define either a > ∞ or a < ∞. Since ∞ cannot be compared with any of the other elements, there is no point in retaining this relation on ℝ ∪ {∞}. However, order on ℝ is used in definitions in ℝ ∪ {∞}.
Geometry
Fundamental to the idea that ∞ is a point no different from any other is the way the real projective line is a homogeneous space, in fact homeomorphic to a circle. For example the general linear group of 2 × 2 real invertible matrices has a transitive action on it. The group action may be expressed by Möbius transformations (also called linear fractional transformations), with the understanding that when the denominator of the linear fractional transformation is 0, the image is ∞.
The detailed analysis of the action shows that for any three distinct points P, Q and R, there is a linear fractional transformation taking P to 0, Q to 1, and R to ∞; that is, the group of linear fractional transformations is triply transitive on the real projective line. This cannot be extended to 4-tuples of points, because the cross-ratio is invariant.
The terminology projective line is appropriate, because the points are in 1-to-1 correspondence with one-dimensional linear subspaces of ℝ².
Arithmetic operations
Motivation for arithmetic operations
The arithmetic operations on this space are an extension of the same operations on reals. A motivation for the new definitions is the limits of functions of real numbers.
Arithmetic operations that are defined
In addition to the standard operations on the subset of , the following operations are defined for , with exceptions as indicated:
Arithmetic operations that are left undefined
The following expressions cannot be motivated by considering limits of real functions, and no definition of them allows the statement of the standard algebraic properties to be retained unchanged in form for all defined cases. Consequently, they are left undefined:
The exponential function cannot be extended to .
Algebraic properties
The following equalities mean: Either both sides are undefined, or both sides are defined and equal. This is true for any
The following is true whenever expressions involved are defined, for any
In general, all laws of arithmetic that are valid for are also valid for whenever all the occurring expressions are defined.
Intervals and topology
The concept of an interval can be extended to . However, since it is not an ordered set, the interval has a slightly different meaning. The definitions for closed intervals are as follows (it is assumed that
):
With the exception of when the end-points are equal, the corresponding open and half-open intervals are defined by removing the respective endpoints. This redefinition is useful in interval arithmetic when dividing by an interval containing 0.
and the empty set are also intervals, as is excluding any single point.
The open intervals as a base define a topology on . Sufficient for a base are the bounded open intervals in and the intervals for all such that
As said, the topology is homeomorphic to a circle. Thus it is metrizable corresponding (for a given homeomorphism) to the ordinary metric on this circle (either measured straight or along the circle). There is no metric which is an extension of the ordinary metric on
Interval arithmetic
Interval arithmetic extends to from . The result of an arithmetic operation on intervals is always an interval, except when the intervals with a binary operation contain incompatible values leading to an undefined result. In particular, we have, for every :
irrespective of whether either interval includes and .
Calculus
The tools of calculus can be used to analyze functions of . The definitions are motivated by the topology of this space.
Neighbourhoods
Let and .
is a neighbourhood of , if contains an open interval that contains .
is a right-sided neighbourhood of , if there is a real number such that and contains the semi-open interval .
is a left-sided neighbourhood of , if there is a real number such that and contains the semi-open interval .
is a punctured neighbourhood (resp. a right-sided or a left-sided punctured neighbourhood) of , if and is a neighbourhood (resp. a right-sided or a left-sided neighbourhood) of .
Limits
Basic definitions of limits
Let and .
The limit of f (x) as approaches p is L, denoted
if and only if for every neighbourhood A of L, there is a punctured neighbourhood B of p, such that implies .
The one-sided limit of f (x) as x approaches p from the right (left) is L, denoted
if and only if for every neighbourhood A of L, there is a right-sided (left-sided) punctured neighbourhood B of p, such that implies
It can be shown that if and only if both and .
Comparison with limits in
The definitions given above can be compared with the usual definitions of limits of real functions. In the following statements, the first limit is as defined above, and the second limit is in the usual sense:
is equivalent to
is equivalent to
is equivalent to
is equivalent to
is equivalent to
is equivalent to
Extended definition of limits
Let . Then p is a limit point of A if and only if every neighbourhood of p includes a point such that
Let , p a limit point of A. The limit of f (x) as x approaches p through A is L, if and only if for every neighbourhood B of L, there is a punctured neighbourhood C of p, such that implies
This corresponds to the regular topological definition of continuity, applied to the subspace topology on and the restriction of f to
Continuity
The function
is continuous at if and only if is defined at and
If the function
is continuous in if and only if, for every , is defined at and the limit of as tends to through is
Every rational function , where and are polynomials, can be prolongated, in a unique way, to a function from to that is continuous in In particular, this is the case of polynomial functions, which take the value at if they are not constant.
Also, if the tangent function is extended so that
then is continuous in but cannot be prolongated further to a function that is continuous in
Many elementary functions that are continuous in cannot be prolongated to functions that are continuous in This is the case, for example, of the exponential function and all trigonometric functions. For example, the sine function is continuous in but it cannot be made continuous at As seen above, the tangent function can be prolongated to a function that is continuous in but this function cannot be made continuous at
Many discontinuous functions that become continuous when the codomain is extended to remain discontinuous if the codomain is extended to the affinely extended real number system This is the case of the function On the other hand, some functions that are continuous in and discontinuous at become continuous if the domain is extended to This is the case for the arctangent.
As a projective range
When the real projective line is considered in the context of the real projective plane, then the consequences of Desargues' theorem are implicit. In particular, the construction of the projective harmonic conjugate relation between points is part of the structure of the real projective line. For instance, given any pair of points, the point at infinity is the projective harmonic conjugate of their midpoint.
As projectivities preserve the harmonic relation, they form the automorphisms of the real projective line. The projectivities are described algebraically as homographies, since the real numbers form a ring, according to the general construction of a projective line over a ring. Collectively they form the group PGL(2, R).
The projectivities which are their own inverses are called involutions. A hyperbolic involution has two fixed points. Two of these correspond to elementary, arithmetic operations on the real projective line: negation and reciprocation. Indeed, 0 and ∞ are fixed under negation, while 1 and −1 are fixed under reciprocation.
See also
Real projective plane
Complex projective plane
Wheel theory
Notes
References
Real analysis
Topological spaces
Projective geometry
Infinity | Projectively extended real line | [
"Mathematics"
] | 1,994 | [
"Mathematical structures",
"Mathematical objects",
"Infinity",
"Space (mathematics)",
"Topological spaces",
"Topology"
] |
1,014,590 | https://en.wikipedia.org/wiki/Cannabis%20sativa | Cannabis sativa is an annual herbaceous flowering plant. The species was first classified by Carl Linnaeus in 1753. The specific epithet sativa means 'cultivated'. Indigenous to Eastern Asia, the plant is now of cosmopolitan distribution due to widespread cultivation. It has been cultivated throughout recorded history and used as a source of industrial fiber, seed oil, food, and medicine. It is also used as a recreational drug and for religious and spiritual purposes.
Description
The flowers of Cannabis sativa plants are most often either male or female, but only plants displaying female pistils can be or become hermaphroditic; males can never become hermaphrodites. It is a short-day flowering plant, with staminate (male) plants usually taller and less robust than pistillate (female) plants. The flowers of the female plant are arranged in racemes and can produce hundreds of seeds. Male plants shed their pollen and die several weeks prior to seed ripening on the female plants. Under typical conditions with a light period of 12 to 14 hours, both sexes are produced in equal numbers because of heritable X and Y chromosomes. Although genetic factors dispose a plant to become male or female, environmental factors, including the diurnal light cycle, can alter sexual expression. Naturally occurring monoecious plants, with both male and female parts, are either sterile or fertile, but artificially induced "hermaphrodites" can have fully functional reproductive organs. "Feminized" seed sold by many commercial seed suppliers is derived from artificially "hermaphroditic" females that lack the male gene, or from plants treated with hormones or silver thiosulfate.
Chemical constituents
Although the main psychoactive constituent of Cannabis is tetrahydrocannabinol (THC), the plant is known to contain more than 500 compounds, among them at least 113 cannabinoids; however, most of these "minor" cannabinoids are only produced in trace amounts. Besides THC, another cannabinoid produced in high concentrations by some plants is cannabidiol (CBD), which is not psychoactive but has recently been shown to block the effect of THC in the nervous system. Differences in the chemical composition of Cannabis varieties may produce different effects in humans. Synthetic THC, called dronabinol, does not contain cannabidiol (CBD), cannabinol (CBN), or other cannabinoids, which is one reason why its pharmacological effects may differ significantly from those of natural Cannabis preparations.
Beside cannabinoids, the chemical constituents of Cannabis include about 120 compounds responsible for its characteristic aroma. These are mainly volatile terpenes and sesquiterpenes.
α-Pinene
Myrcene
Linalool
Limonene
Trans-β-ocimene
α-Terpinolene
Trans-caryophyllene
α-Humulene, contributes to the characteristic aroma of Cannabis sativa
Caryophyllene, with which some hashish detection dogs are trained
A 1980 study identifying constituents of C. sativa established 19 major chemical families (number of chemicals within group):
Acids (18)
Alcohols (6)
Aldehydes (12)
Amino Acids (18)
Cannabinoids (55)
Esters/Lactones (11)
Flavonoid Glycosides (14)
Fatty Acids (20)
Hydrocarbons (46)
Ketones (13)
Nitrogenous Compounds (18)
Non-Cannabinoid Phenols (14)
Phytocannabinoids (111)
Pigments (2)
Proteins (7)
Steroids (9)
Sugars (32)
Terpenes (98)
Vitamins (1)
Cannabis also produces numerous volatile sulfur compounds that contribute to the plant's skunk-like aroma, with Prenylthiol (3-methyl-2-butene-1-thiol) identified as the primary odorant. These compounds are found in much lower concentrations than the major terpenes and sesquiterpenes. However, they contribute significantly to the pungent aroma of cannabis due to their low odor thresholds as often seen with thiols or other sulfur-containing compounds.
A number of specific aromatic compounds have been implicated in variety-specific aromas. These include another class of volatile sulfur compounds, referred to as tropical volatile sulfur compounds, that include 3-mercaptohexanol, 3-mercaptohexyl acetate, and 3-mercaptohexyl butyrate. These compounds possess powerful and distinctive fruity, tropical, and citrus aromas in low concentrations such as those found in certain cannabis varieties. These compounds are also important in the citrus and tropical flavors of hops, beer, wine, and tropical fruits.
In addition to volatile sulfur compounds, the heterocyclic compounds indole and skatole (3-Methyl-1H-indole) contribute to the chemical or savory aromas of certain varieties. Skatole in particular was identified as a key contributor to this scent. This compound is found in mammalian feces and is used in the perfuming industry. It possesses a complex aroma that is highly dependent on concentration.
Cultivation
A Cannabis plant in the vegetative growth phase of its life requires more than 16–18 hours of light per day to stay vegetative. Flowering usually occurs when darkness equals at least 12 hours per day. The flowering cycle can last anywhere between seven and fifteen weeks, depending on the strain and environmental conditions. When the production of psychoactive cannabinoids is sought, female plants are grown separately from male plants to induce parthenocarpy in the female plant's fruits (popularly called "sin semilla", which is Spanish for "without seed") and increase the production of cannabinoid-rich resin.
In soil, the optimum pH for the plant is 6.3 to 6.8. In hydroponic growing, the nutrient solution is best at 5.2 to 5.8, making Cannabis well-suited to hydroponics because this pH range is hostile to most bacteria and fungi.
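A small helper summarizing the figures above (the optimum pH bands and the roughly 12-hour darkness threshold for flowering). The function names and structure are assumptions made for this sketch, not part of any horticultural software.

```python
# Illustrative checks based on the figures in the text: optimum pH 6.3–6.8 in
# soil and 5.2–5.8 in hydroponics; flowering once the dark period reaches
# about 12 hours per day.

def ph_in_optimum(ph: float, medium: str) -> bool:
    low, high = (6.3, 6.8) if medium == "soil" else (5.2, 5.8)
    return low <= ph <= high

def likely_phase(dark_hours: float) -> str:
    return "flowering" if dark_hours >= 12 else "vegetative"

print(ph_in_optimum(6.5, "soil"))        # True
print(ph_in_optimum(6.5, "hydroponic"))  # False
print(likely_phase(13))                  # flowering
```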
Tissue culture multiplication has become important in producing medically important clones, while seed production remains the generally preferred means of multiplication. Sativa plants have narrow leaves and grow best in warm environments. They do, however, take longer to flower than their Indica counterparts, and they grow taller than the Indica cannabis strains as well.
Cultivars
Broadly, there are three main cultivar groups of cannabis that are cultivated today:
Cultivars primarily cultivated for their fibre, characterized by long stems and little branching.
Cultivars grown for seed which can be eaten entirely raw or from which hemp oil is extracted.
Cultivars grown for medicinal or recreational purposes, characterized by extensive branching to maximize the number of flowers.
A nominal if not legal distinction is often made between industrial hemp, with concentrations of psychoactive compounds far too low to be useful for that purpose, and marijuana.
Uses
Cannabis sativa seeds are chiefly used to make hempseed oil, which can be used for cooking, lamps, lacquers, or paints. They can also be used as caged-bird feed, as they provide a source of nutrients for most animals. The flowers and fruits (and to a lesser extent the leaves, stems, and seeds) contain psychoactive chemical compounds known as cannabinoids that are consumed for recreational, medicinal, and spiritual purposes. When so used, preparations of flowers and fruits (called marijuana) and leaves and preparations derived from resinous extract (e.g., hashish) are consumed by smoking, vaporising, and oral ingestion. Historically, tinctures, teas, and ointments have also been common preparations. In the traditional medicine of India in particular, Cannabis sativa has been used as a hallucinogenic, hypnotic, sedative, analgesic, and anti-inflammatory agent. Terpenes have gained public awareness through the growth of medical and recreational cannabis and the education efforts that accompany it. Organizations and companies operating in cannabis markets have pushed education and marketing of terpenes in their products as a way to differentiate the taste and effects of cannabis. The entourage effect, which describes the synergy of cannabinoids, terpenes, and other plant compounds, has also helped further awareness and demand for terpenes in cannabis products.
See also
Cannabis indica
Cannabis ruderalis
Cannabis strains
Difference between C. indica and C. sativa
References
External links
Biopiracy
Cannabis strains
Crops originating from Asia
Entheogens
Euphoriants
Flora of Central Asia
Hemp
Medicinal plants of Asia
Phytoremediation plants
Plants described in 1753
Plants used in traditional Chinese medicine
Taxa named by Carl Linnaeus | Cannabis sativa | [
"Biology"
] | 1,801 | [
"Biopiracy",
"Phytoremediation plants",
"Cannabis strains",
"Biodiversity",
"Bioremediation"
] |
1,014,591 | https://en.wikipedia.org/wiki/Eden%20%28Lem%20novel%29 | Eden is a 1958 social science fiction novel by Polish writer Stanisław Lem. It was first published in 1958 in issues 211-271 of the newspaper Trybuna Robotnicza. The first book edition was in 1959. It was first published in English in 1989 ().
Plot
A starship crew—Captain (in the original, Coordinator), Doctor, Engineer, Chemist, Physicist and Cyberneticist (robotics expert)—crash land on an alien world they call Eden. After escaping their wrecked ship they set out to explore the planet, first traveling through an unsettling wilderness and coming upon an abandoned automated factory. There they find a constant cycle of materials being produced and then destroyed and recycled. Perplexed, they return to their ship. At the crash site they find a local sentient alien has entered their vessel. They name these large creatures, with small torsos retractable into their large bodies, doublers (a translation of Lem's neologism dubelt, to mean "double-bodied").
The next day the expedition begins to come into contact with the local civilization, and their strange, wheel-like vehicles. Eventually they come into conflict with a vehicle's pilot, who is a doubler. Killing the pilot and fleeing in his vehicle, they return to the ship and prepare defenses. After an attack never comes, they assemble their jeep and half the team sets out to explore further, the other half remaining behind to repair the ship.
The jeep team eventually discovers structures resembling graves and hundreds of preserved skeletons, and adjacent to it, a settlement. Two expedition members exploring the settlement become caught in a stampede of doublers, who seem totally indifferent to the presence of the alien expedition. One doubler however, comes to the jeep and refuses to return to the settlement, and is brought back to the ship. While the expedition explored the settlement, a large group of doubler vehicles had reconnoitered the crash site and then fled.
After a while the crew learns that the local civilization is planning to act against them. Shortly thereafter the area around the ship is bombarded for several hours, with all hits falling into a circular ditch made earlier. It turns out they were bombarded with "micromechanical devices", from which a wall of glass begins to grow and eventually assembles into a dome, an attempt to isolate the ship.
The doubler that has joined the group proves to be uncommunicative, leading some of the crew to suggest that it has some sort of intellectual disability. The crew also begins to postulate that the "naked" doublers they have seen are the victims of genocide. Choosing to explore further, the crew activates "Defender", a large tank which they have managed to repair. Blasting through the glass dome they travel far southwest, observing from a distance, for the first time, everyday doubler life.
Returning to the ship in the night, the crew encounters a group of doublers being gassed to death, and act in self defense with their antimatter weapons, killing an indeterminate number of both "naked" and "soldier" doublers. When the Defender team returns to the ship, they find that most of the glass wall has repaired itself, and blast another hole. Returning to the ship until the radioactivity dies down, the expedition plans its next move. In the middle of the meeting a dressed doubler suddenly enters, and the crew makes contact, discovering the doubler to have knowledge of astronomy.
The first contact however, is soon turned into a bitter victory, as the crew learns that this doubler has unwittingly exposed himself to radiation by entering the hole made by Defender. Informing the doubler of his impending death, both parties struggle to learn as much as they can. Through a developed computer translator, the crew and the doubler can speak to one another and begin to gain an understanding of the other's species.
An indistinct image emerges of doublers' Orwellian information-controlled civilization that is almost self-regulating, with a special kind of system of government—one that officially does not exist and is thus impossible to destroy. The society is controlled through a fictitious advanced branch of information science Lem dubs procrustics, based on the control and stratification of information flows within the society. It is used for molding groups within a society and ultimately a society as a whole to behave as designed by secret hidden rulers. One example described in the novel is the above-mentioned settlement, kind of a "concentration camp" without any guards, designed so that the prisoners stay inside apparently of their own "free" will.
Although the doublers know almost nothing of nuclear science, they once conducted a massive genetic experiment to enhance the species. This attempt failed miserably, resulting in deformed doublers who, if they survive, are often driven to the fringes of society. Much like the government, the very existence of this experiment, and the factories created for it, are denied, and anyone with the knowledge of them is eliminated. The doubler explains that the information disseminated to the higher echelons of doubler society was that humans, having been subjected to the effects of cosmic rays throughout their space journey, were mutant monsters that were being quarantined, but he had seen it as a once-in-a-lifetime opportunity and chose to pursue it, a choice the humans greatly empathize with.
Finally the ship is repaired and the crew is ready to leave Eden. The astronomer doubler, although recovering fully from his radiation exposure, decides to stay behind, and as the starship takes off, much to the crew's sadness, the two doublers stand by the ship's exhaust, choosing to die rather than return to their oppressive society.
The planet is seen from the distance once again, a beautiful violet sphere, whose beauty, they now recall, was the very reason they crashed while attempting too close a fly-by and hitting the atmosphere by mistake. It was because of its beauty that they called it, when first seeing it, Eden.
Literary criticism
Lem's own opinion about the book was unflattering. He wrote "From today's perspective Eden is neutral in my eyes. It is so-so. From the point of view of literature it is a rather unsuccessful book; its characters tend to be schematic and the pictured universe is a bit "flat" and one-dimensional."
Marek Oramus in his essay Doktryna nieingerencji ("Doctrine of Noninterfence") writes that the novel marks the beginning of the full maturity of Lem as a writer. He also notes that, although the technology in the novel overall is rather archaic from today's point of view, the novel describes something that is called nanotechnology today. He also remarks that Eden is the first of Lem's stories about failed "first contact". Lem's later writings of this type are much more skeptical, the best known case being Solaris.
Background
The book was written in the Polish People's Republic while under a Soviet-style regime, which in all Eastern Bloc countries had a system of all-pervasive censorship and information control and stratification.
See also
Double bind
Some other novels in which information manipulation is a major part of the plot
The Bull's Hour (planet-wide totalitarian state through information control and stratification)
Prisoners of Power (anonymous totalitarian oligarchy through propaganda and mind control)
The Lucifer Principle (society as memes carrier, shaped by them in return)
Nineteen Eighty-Four
Brave New World (hedonistic caste-based society with everyone conditioned to be non-curious, content with their place; full information available to a select few "World Controllers")
References
Sources
Harvest Books; Reprint edition. (1991)
External links
1958 science fiction novels
Space exploration novels
Novels by Stanisław Lem
Fiction about nanotechnology
Polish novels
Polish science fiction novels
Science fiction about first contact | Eden (Lem novel) | [
"Materials_science"
] | 1,645 | [
"Fiction about nanotechnology",
"Nanotechnology"
] |