id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
962,739 | https://en.wikipedia.org/wiki/Institute%20of%20Chemistry%2C%20Slovak%20Academy%20of%20Sciences | The research activities of the Institute of Chemistry of the Slovak Academy of Sciences are aimed at the chemistry and biochemistry of saccharides. The main fields of interest may be classified into the following directions:
Synthesis and structure of biologically important mono- and oligosaccharides and their derivatives
Structure and functional properties of polysaccharides, their derivatives, and conjugates with other polymers
Structure, function, and mechanism of action of glycanases
Development of physicochemical methods for structural analysis of carbohydrates
Gene engineering and nutritional and biologically active proteins
Glycobiotechnology
Ecology, taxonomy, and phylogenesis of yeasts and yeast-like fungi
Development of technologies for isolation of natural compounds and preparation of saccharides and their derivatives for commercial purposes
References
Biochemistry research institutes
Slovak Academy of Sciences | Institute of Chemistry, Slovak Academy of Sciences | [
"Chemistry"
] | 174 | [
"Biochemistry research institutes",
"Chemistry organization stubs",
"Biochemistry organizations"
] |
962,752 | https://en.wikipedia.org/wiki/EXSLT | EXSLT is a community initiative to provide extensions to XSLT, which are broken down into a number of modules, listed below.
The creators (Jeni Tennison, Uche Ogbuji, Jim Fuller, Dave Pawson, et al.) of EXSLT aim to encourage the implementers of XSLT processors to use these extensions, in order to increase the portability of stylesheets.
List of functions
Common EXSLT
Common covers common, basic extension elements and functions.
Math EXSLT
Math covers extension elements and functions that provide facilities to do with mathematics.
Sets EXSLT
Sets covers those extension elements and functions that provide facilities to do with set manipulation.
Dates and Times EXSLT
Dates and Times covers date and time-related extension elements and functions.
Strings EXSLT
Strings covers extension elements and functions that provide facilities to do with string manipulation.
Regular Expressions EXSLT
Regular Expressions covers extension elements and functions that provide facilities to do with regular expressions.
Dynamic EXSLT
Dynamic covers extension elements and functions that deal with the dynamic evaluation of strings containing XPath expressions.
Random EXSLT
Random covers extension elements and functions that provide facilities to do with randomness.
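As a concrete illustration, the following Python sketch runs an XSLT transform that calls an EXSLT Math function through lxml/libxslt, one of the processors implementing these extensions. The sample document, stylesheet, and expected output are illustrative assumptions, not taken from the article; only the EXSLT namespace URI (http://exslt.org/math) is the published one.

```python
# Minimal sketch: invoking an EXSLT Math extension function from an XSLT
# stylesheet via lxml/libxslt. The input data and stylesheet are hypothetical.
from lxml import etree

xml = etree.XML("<prices><p>3.5</p><p>7.25</p><p>1.0</p></prices>")

xslt = etree.XML("""\
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:math="http://exslt.org/math">
  <xsl:output method="text"/>
  <xsl:template match="/prices">
    <!-- math:max() is an EXSLT Math extension function -->
    <xsl:value-of select="math:max(p)"/>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt)
print(transform(xml))  # expected output: 7.25
```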
References
External links
EXSLT Tools
XML-based standards
Markup languages
Functional languages | EXSLT | [
"Technology"
] | 260 | [
"Computer standards",
"XML-based standards"
] |
962,908 | https://en.wikipedia.org/wiki/Change%20control | Within quality management systems (QMS) and information technology (IT) systems, change control is a process—either formal or informal—used to ensure that changes to a product or system are introduced in a controlled and coordinated manner. It reduces the possibility that unnecessary changes will be introduced to a system without forethought, introducing faults into the system or undoing changes made by other users of software. The goals of a change control procedure usually include minimal disruption to services, reduction in back-out activities, and cost-effective utilization of resources involved in implementing change. According to the Project Management Institute, change control is a "process whereby modifications to documents, deliverables, or baselines associated with the project are identified, documented, approved, or rejected."
Change control is used in various industries, including in IT, software development, the pharmaceutical industry, the medical device industry, and other engineering/manufacturing industries. For the IT and software industries, change control is a major aspect of the broader discipline of change management. Typical examples from the computer and network environments are patches to software products, installation of new operating systems, upgrades to network routing tables, or changes to the electrical power systems supporting such infrastructure.
Certain portions of ITIL cover change control.
The process
There is considerable overlap and confusion between change management, configuration management and change control. The definition below is not yet integrated with definitions of the others.
Change control can be described as a set of six steps:
Plan / scope
Assess / analyze
Review / approval
Build / test
Implement
Close
Plan / scope
Consider the primary and ancillary detail of the proposed change. This should include aspects such as identifying the change, its owner(s), how it will be communicated and executed, how success will be verified, its estimated importance, its added value, its conformity to business and industry standards, and its target date for completion.
Assess / analyze
Impact and risk assessment is the next vital step. When executed, will the proposed plan cause something to go wrong? Will related systems be impacted by the proposed change? Even minor details should be considered during this phase. Afterwards, a risk category should ideally be assigned to the proposed change: high-, moderate-, or low-risk. High-risk change requires many additional steps such as management approval and stakeholder notification, whereas low-risk change may only require project manager approval and minimal documentation. If not addressed in the plan/scope, the desire for a backout plan should be expressed, particularly for high-risk changes that have significant worst-case scenarios.
Review / approval
Whether it's a change controller, change control board, steering committee, or project manager, a review and approval process is typically required. The plan/scope and impact/risk assessments are considered in the context of business goals, requirements, and resources. If, for example, the change request is deemed to address a low severity, low impact issue that requires significant resources to correct, the request may be made low priority or shelved altogether. In cases where a high-impact change is requested without a strong plan, the review/approval entity may request a full business case for further analysis.
Build / test
If the change control request is approved to move forward, the delivery team will execute the solution through a small-scale development process in test or development environments. This allows the delivery team an opportunity to design and make incremental changes, with unit and/or regression testing. Little in the way of testing and validation may occur for low-risk changes, though major changes will require significant testing before implementation. They will then seek approval and request a time and date to carry out the implementation phase. In rare cases where the solution can't be tested, special consideration should be made towards the change/implementation window.
Implement
In most cases a special implementation team with the technical expertise to quickly move a change along is used to implement the change. The team should implement the change not only according to the approved plan but also according to organizational standards, industry standards, and quality management standards. The implementation process may also require additional staff responsibilities outside the implementation team, including stakeholders who may be asked to assist with troubleshooting. Following implementation, the team may also carry out a post-implementation review, which would take place at another stakeholder meeting or during project closing procedures.
Close
The closing process can be one of the more difficult and important phases of change control. Three primary tasks at this end phase include determining that the project is actually complete, evaluating "the project plan in the context of project completion," and providing tangible proof of project success. If despite best efforts something went wrong during the change control process, a post-mortem on what happened will need to be run, with the intent of applying lessons learned to future changes.
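To make the six steps above concrete, here is a minimal, hypothetical Python sketch of a change request record moving through the stages, with a risk-dependent approval gate. The class, field, and approver names are illustrative assumptions, not drawn from PMI, ITIL, or any specific tool.

```python
# Toy model of the six-step change control flow described above.
from dataclasses import dataclass, field
from enum import Enum, auto

class Risk(Enum):
    LOW = auto()
    MODERATE = auto()
    HIGH = auto()

class Stage(Enum):
    PLAN = auto()
    ASSESS = auto()
    REVIEW = auto()
    BUILD_TEST = auto()
    IMPLEMENT = auto()
    CLOSE = auto()

@dataclass
class ChangeRequest:
    summary: str
    owner: str
    risk: Risk = Risk.LOW
    stage: Stage = Stage.PLAN
    approvals: list[str] = field(default_factory=list)

    def required_approvers(self) -> list[str]:
        # Higher-risk changes need broader sign-off (management, stakeholders),
        # mirroring the assess/analyze guidance above.
        if self.risk is Risk.HIGH:
            return ["project_manager", "management", "stakeholders"]
        if self.risk is Risk.MODERATE:
            return ["project_manager", "change_control_board"]
        return ["project_manager"]

    def advance(self) -> None:
        # Move to the next stage, but only pass the review gate once
        # all required approvals have been recorded.
        if self.stage is Stage.REVIEW and set(self.required_approvers()) - set(self.approvals):
            raise RuntimeError("cannot advance: approvals missing")
        stages = list(Stage)
        self.stage = stages[min(stages.index(self.stage) + 1, len(stages) - 1)]
```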
Regulatory environment
In industries regulated by good manufacturing practice (GMP), change control is a frequently encountered requirement. Various industry guidances and commentaries are available to help practitioners understand the concept. As a common practice, the activity is usually directed by one or more standard operating procedures (SOPs). From the information technology perspective for clinical trials, change control has also been addressed in a separate U.S. Food and Drug Administration guidance document.
See also
Change request
Change order
Engineering change order
Documentation
Identifier
Version control
Changelog
Living document
Specification (technical standard)
Standardization
Scope management
Citations
References
Information technology management
Project management
Software project management | Change control | [
"Technology"
] | 1,092 | [
"Information technology",
"Information technology management"
] |
962,982 | https://en.wikipedia.org/wiki/XFA | XFA (also known as XFA forms) stands for XML Forms Architecture, a family of proprietary XML specifications that was suggested and developed by JetForm to enhance the processing of web forms. It can be also used in PDF files starting with the PDF 1.5 specification. The XFA specification is referenced as an external specification necessary for full application of the ISO 32000-1 specification (PDF 1.7). The XML Forms Architecture was not standardized as an ISO standard, and has been deprecated in PDF 2.0.
Overview
XFA's main extension to XML is computationally active tags. In addition, all instances created from a given XFA form template keep the specification of data capture, rendering, and manipulation rules from the original. Another major advantage of XFA is that its data format allows compatibility with other systems, and with changes to other applications and technology standards.
According to JetForm's submission to the World Wide Web Consortium, "XFA addresses the needs of organizations to securely capture, present, move, process, output and print information associated with electronic forms." The XFA proposal was submitted to the W3C in May 1999.
In 2002, the JetForm Corporation was acquired by Adobe Systems, and the latter introduced XFA forms with PDF 1.5 and the subsequent Acrobat releases (6 and 7) in 2003.
XFA forms are saved internally in PDF files or as XDP (XML Data Package) files which can be opened in Adobe's LiveCycle Designer software.
An XDP can package a PDF file, along with XML form and template data. XDP provides a mechanism for packaging form components within a surrounding XML container.
Although XFA can make use of PDF, XFA is not tied to a particular page description language.
The XFA specification includes an appendix that discusses details of the Adobe-specific XFA implementation and behaviors of Adobe products that deviate from the XFA specification.
Data filled into an XFA form may be submitted to a host using an HTTP POST operation in XDP format, PDF format, XFDF format, XML 1.0 format, or in URL-encoded form.
XFA supports the use of XSLT to transform the XML data before it is loaded to XFA Data DOM or after it is unloaded from XFA Data DOM.
One of XFA's approaches to pagination duplicates the pagination logic and much of the syntax of XSL-FO.
XFA forms are synonymous with SmartForms in the Australian government.
Static and dynamic forms
XFA defines static forms (available in XFA 2.0 and earlier) and dynamic forms (introduced in XFA 2.1 or 2.2).
In a static form, the form's appearance and layout are fixed, regardless of the field content. Any unfilled fields are present in the form. By default, static forms do not require re-rendering. XFA recognises two types of static forms: "old-style static forms" (using "full XFA") and XFAF (a subset of full XFA, defined since XFA 2.5).
Dynamic forms (defined since XFA 2.1 or 2.2) can change in appearance in several ways in response to changes in the data. A dynamic form requires rendering of its content when the file is opened. Dynamic forms may also be designed to change structure to accommodate changes in the structure of the data supplied to the form. For example, a page of a form may be omitted if there is no data for it. Another example is a field that may occupy a variable amount of space on the page, resizing itself to efficiently hold its content. A dynamic form cannot rely on a PDF representation of its boilerplate, because the positioning and layout of the boilerplate change as the fields grow and shrink or as subforms are omitted and included.
Usage with Portable Document Format
PDF 1.7 supports two different methods for integrating data and PDF forms.
AcroForms (also known as Acrobat forms), introduced and included in the PDF 1.2 format specification.
Adobe XML Forms Architecture (XFA) forms, introduced in the PDF 1.5 format specification as an optional feature (The XFA specification is not included in the PDF specification, it is only referenced.)
Adobe XFA Forms are not compatible with AcroForms. When an XFA is packaged inside a PDF file, it is placed in the AcroForm document resources dictionary ("Shell PDF") or referenced from the AcroForm entry in the document catalog.
Creating XFA Forms for use in Adobe Reader requires Adobe LiveCycle Designer. Adobe Reader contains "disabled features" for use of XFA Forms, that will activate only when opening a PDF document that was created using enabling technology available only from Adobe. The XFA Forms are not compatible with Adobe Reader prior to version 6.
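Since, as noted above, an XFA resource is referenced from the AcroForm entry in the document catalog, one way to check whether a given PDF carries an XFA form is sketched below using the pypdf library. This is an illustrative assumption-based sketch (the file name is a placeholder), not a procedure from the XFA specification.

```python
# Minimal sketch: detect an XFA form by looking for the /XFA key under the
# /AcroForm entry of the document catalog, as described in the text above.
from pypdf import PdfReader

def has_xfa(path: str) -> bool:
    reader = PdfReader(path)
    catalog = reader.trailer["/Root"].get_object()
    acroform = catalog.get("/AcroForm")
    if acroform is None:
        return False
    # /XFA may hold a single stream or an array of (name, stream) pairs.
    return "/XFA" in acroform.get_object()

print(has_xfa("form.pdf"))  # "form.pdf" is a placeholder file name
```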
Profiles
Starting with XFA 2.5, forms can use a subset of the full XFA capability. Currently, the only specified profile is XFAF.
XFA can be used as:
full XFA - which expresses all of the form, including boilerplate, directly in XFA (without any PDF, or without a complete PDF background). It can be packaged inside a "shell PDF" with minimal PDF markup or as a standalone XDP. It is used for dynamic XFA forms (since XFA 2.1) and also for so-called "traditional" (old-style) static XFA forms. Optionally it may include a pre-rendered depiction of the XFA form as PDF pages - but this is useful only for traditional static forms. Dynamic XFA must be rendered when the file is opened.
XFAF (XFA Foreground) subset - (introduced in XFA 2.5) - a form in which each page of the XFA form overlays a PDF background. It can be used only for static XFA forms. This architecture uses only a subset of XFA. It can be packaged inside a regular PDF document or as a standalone XDP file with embedded PDF. In XFAF each XFA field corresponds to a PDF interactive field (AcroForm field).
Packaging
XFA forms can be created and used as PDF 1.5 - 1.7 files or as XDP (XML Data Package). The format of an XFA resource in PDF is described by the XML Data Package Specification. PDF may contain XFA in XDP format, but XFA may also contain PDF.
When the XFA (XML Forms Architecture) grammars used for an XFA form are moved from one application to another, they must be packaged as an XML Data Package. The XDP may be a standalone document or it may in turn be carried inside a PDF document.
XFA Form packaging variants (using XDP):
as a standalone XML Data Package (XDP) (.xdp file) which can optionally also include a PDF file
inside a regular PDF Document - used for static forms - XFAF.
inside a "Shell PDF" - used for the "full XFA" form (dynamic or traditional static) - A Shell PDF file contains only a minimal skeleton of PDF markup plus the complete XFA content, any fonts and images needed for rendering of the form. It minimizes the file size and the rendering overhead is moved from the server to the client.
Packaging an XDP within PDF has the advantage that it is more compact, because PDF is compressed. XDP in PDF can be digitally signed in ways that a standalone XDP cannot.
In contrast, packaging form components within an XML container (XDP) makes it easy for standard XML applications to work with XFA forms. The XML components are human-readable and easily editable (in contrast with PDF source code). When in XDP form, an XFA document may be validated using schemas attached to the XFA specification.
Compatibility
Most PDF processors do not handle XFA content. When generating a shell PDF it is recommended to include in the PDF markup a simple one-page PDF image displaying a warning message (e.g. "To view the full contents of this document, you need a later version of the PDF viewer.", "The full content of this file cannot be displayed with your current PDF viewer.", "Please wait... If this message is not eventually replaced by the proper contents of the document, your PDF viewer may not be able to display this type of document.", etc.). PDF processors that can render XFA content should either not display the supplied warning page image or replace it quickly with the dynamic form content.
In 2013, as a solution for mobile platforms and desktop platforms without XFA support, Adobe created software that creates online HTML5 fillable forms from XFA (known as Adobe "Mobile Forms"). Mobile Forms are not a single file like a PDF or XDP.
Rich text
Rich text may appear in data supplied to the XFA forms, in XFA templates as default text values, as field captions, or as boilerplate (draw) content.
Starting with PDF 1.5 (XFA 2.02), the text contents of variable text form fields, as well as markup annotations, may include formatting information (style information). These rich text strings are XML documents that conform to the rich text conventions specified for the XML Forms Architecture specification, which is itself a subset of the XHTML 1.0 specification, augmented with a restricted set of CSS2 style attributes.
In PDF 1.6, PDF supports the rich text elements and attributes specified in the XML Forms Architecture (XFA) Specification, 2.2. In PDF 1.7, PDF supports the rich text elements and attributes specified in the XML Forms Architecture (XFA) Specification, 2.4. It was announced in 2011 that PDF 2.0 (ISO 32000 Part 2) would reference XFA 3.1, but when published, PDF 2.0 deprecated it.
PDF/A
When an XFA form is converted to PDF/A, both the boilerplate and field content are flattened into a PDF appearance stream. PDF/A forbids active content and all XFA content except, optionally, the XML Data Document (forms data created by a user).
Standardization
The XML Forms Architecture specification is not included in the PDF 1.7 standard (ISO 32000-1:2008); it is only referenced as an external proprietary specification created and published by Adobe. However, ISO 32000-1 references XFA as normative and indispensable for the application of the ISO 32000-1 specification. XFA was not standardized as an ISO standard.
Since 2007, development of PDF standard has been conducted by ISO's Technical Committee 171/Subcommittee 2/Working Group 8 (TC 171/SC 2/WG 8).
In 2011 the ISO Committee urged Adobe Systems to submit the XFA Specification, XML Forms Architecture (XFA), to ISO for standardization, and requested that Adobe Systems stabilize the XFA specification. The Committee expressed its concerns about the stability of the XFA specification.
In 2017 the ISO Committee deprecated XFA from PDF 2.0.
XFA versions
See also
Portable Document Format
XML Data Package
References
External links
Adobe XML Forms Architecture (XFA) - developer resources
XML-based standards
Markup languages | XFA | [
"Technology"
] | 2,324 | [
"Computer standards",
"XML-based standards"
] |
963,042 | https://en.wikipedia.org/wiki/Finitely%20generated%20group | In algebra, a finitely generated group is a group G that has some finite generating set S so that every element of G can be written as the combination (under the group operation) of finitely many elements of S and of inverses of such elements.
By definition, every finite group is finitely generated, since S can be taken to be G itself. Every infinite finitely generated group must be countable but countable groups need not be finitely generated. The additive group of rational numbers Q is an example of a countable group that is not finitely generated.
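The standard argument behind this last claim, added here for completeness, is that any finite set of rationals has a common denominator and therefore generates only a small subgroup:

```latex
% Standard argument (not quoted from the article) that (Q, +) is not finitely
% generated: a finite generating set would have a common denominator d, so the
% subgroup it generates lies inside (1/d)Z and misses, for example, 1/(2d).
\[
  \left\langle \tfrac{p_1}{q_1}, \dots, \tfrac{p_n}{q_n} \right\rangle
  \;\subseteq\; \tfrac{1}{d}\mathbb{Z}
  \;=\; \left\{ \tfrac{k}{d} : k \in \mathbb{Z} \right\},
  \qquad d = q_1 q_2 \cdots q_n,
  \qquad \tfrac{1}{2d} \notin \tfrac{1}{d}\mathbb{Z}.
\]
```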
Examples
Every quotient of a finitely generated group G is finitely generated; the quotient group is generated by the images of the generators of G under the canonical projection.
A group that is generated by a single element is called cyclic. Every infinite cyclic group is isomorphic to the additive group of the integers Z.
A locally cyclic group is a group in which every finitely generated subgroup is cyclic.
The free group on a finite set is finitely generated by the elements of that set.
A fortiori, every finitely presented group is finitely generated.
Finitely generated abelian groups
Every abelian group can be seen as a module over the ring of integers Z, and in a finitely generated abelian group with generators x1, ..., xn, every group element x can be written as a linear combination of these generators,
x = α1⋅x1 + α2⋅x2 + ... + αn⋅xn
with integers α1, ..., αn.
Subgroups of a finitely generated abelian group are themselves finitely generated.
The fundamental theorem of finitely generated abelian groups states that a finitely generated abelian group is the direct sum of a free abelian group of finite rank and a finite abelian group, each of which are unique up to isomorphism.
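For concreteness, the theorem can be written out as follows (a standard formulation added here, not quoted from the article):

```latex
% Fundamental theorem of finitely generated abelian groups: there exist a rank
% r >= 0 and prime powers q_1, ..., q_t (not necessarily distinct) such that
\[
  G \;\cong\; \mathbb{Z}^{\,r} \,\oplus\, \mathbb{Z}/q_1\mathbb{Z} \,\oplus\, \cdots \,\oplus\, \mathbb{Z}/q_t\mathbb{Z},
\]
% where r and the multiset \{q_1, \dots, q_t\} are uniquely determined by G.
```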
Subgroups
A subgroup of a finitely generated group need not be finitely generated. The commutator subgroup of the free group on two generators is an example of a subgroup of a finitely generated group that is not finitely generated.
On the other hand, all subgroups of a finitely generated abelian group are finitely generated.
A subgroup of finite index in a finitely generated group is always finitely generated, and the Schreier index formula gives a bound on the number of generators required.
In 1954, Albert G. Howson showed that the intersection of two finitely generated subgroups of a free group is again finitely generated. Furthermore, if m and n are the numbers of generators of the two finitely generated subgroups, then their intersection is generated by at most 2mn − m − n + 1 generators. This upper bound was then significantly improved by Hanna Neumann to 2(m − 1)(n − 1) + 1; see Hanna Neumann conjecture.
The lattice of subgroups of a group satisfies the ascending chain condition if and only if all subgroups of the group are finitely generated. A group such that all its subgroups are finitely generated is called Noetherian.
A group such that every finitely generated subgroup is finite is called locally finite. Every locally finite group is periodic, i.e., every element has finite order. Conversely, every periodic abelian group is locally finite.
Applications
Finitely generated groups arise in diverse mathematical and scientific contexts. A frequent way they do so is by the Švarc-Milnor lemma, or more generally thanks to an action through which a group inherits some finiteness property of a space. Geometric group theory studies the connections between algebraic properties of finitely generated groups and topological and geometric properties of spaces on which these groups act.
Differential geometry and topology
Fundamental groups of compact manifolds are finitely generated. Their geometry coarsely reflects the possible geometries of the manifold: for instance, non-positively curved compact manifolds have CAT(0) fundamental groups, whereas uniformly positively-curved manifolds have finite fundamental group (see Myers' theorem).
Mostow's rigidity theorem: for compact hyperbolic manifolds of dimension at least 3, an isomorphism between their fundamental groups extends to a Riemannian isometry.
Mapping class groups of surfaces are also important finitely generated groups in low-dimensional topology.
Algebraic geometry and number theory
Lattices in Lie groups, in p-adic groups...
Superrigidity, Margulis' arithmeticity theorem
Combinatorics, algorithmics and cryptography
Infinite families of expander graphs can be constructed thanks to finitely generated groups with property T
Algorithmic problems in combinatorial group theory
Group-based cryptography attempts to make use of hard algorithmic problems related to group presentations in order to construct quantum-resilient cryptographic protocols
Analysis
Probability theory
Random walks on Cayley graphs of finitely generated groups provide approachable examples of random walks on graphs
Percolation on Cayley graphs
Physics and chemistry
Crystallographic groups
Mapping class groups appear in topological quantum field theories
Biology
Knot groups are used to study molecular knots
Related notions
The word problem for a finitely generated group is the decision problem of whether two words in the generators of the group represent the same element. The word problem for a given finitely generated group is solvable if and only if the group can be embedded in every algebraically closed group.
The rank of a group is often defined to be the smallest cardinality of a generating set for the group. By definition, the rank of a finitely generated group is finite.
See also
Finitely generated module
Presentation of a group
Notes
References
Group theory
Properties of groups | Finitely generated group | [
"Mathematics"
] | 1,135 | [
"Mathematical structures",
"Properties of groups",
"Group theory",
"Fields of abstract algebra",
"Algebraic structures"
] |
963,140 | https://en.wikipedia.org/wiki/Magic%20%28game%20terminology%29 | Magic or mana is an attribute assigned to characters within a role-playing or video game that indicates their power to use special magical abilities or "spells". Magic is usually measured in magic points or mana points, shortened as MP. Different abilities will use up different amounts of MP. When the MP of a character reaches zero, the character will not be able to use special abilities until some of their MP is recovered.
Much like health, magic might be displayed as a numeric value, such as "50/100". Here, the first number indicates the current amount of MP a character has whereas the second number indicates the character's maximum MP. In video games, magic can also be displayed visually, such as with a gauge that empties itself as a character uses their abilities.
History
The magic system in tabletop role-playing games such as Dungeons & Dragons is largely based on patterns established in the Dying Earth novels of author Jack Vance. In this system, the player character can only memorize a fixed number of spells from a list of spells. Once a spell is cast, the character forgets it and becomes unable to use it again.
"Mana" is a word that comes from Polynesian languages with a complex meaning. Mostly, it loosely represents power, respect and dignity. The concept of mana was introduced in Europe by missionary Robert Henry Codrington in 1891 and was popularized by Mircea Eliade in the 1950s. It was first introduced as a magical fuel used to cast spells in the 1969 short story, "Not Long Before the End", by Larry Niven, which is part of and later popularized by his The Magic Goes Away setting. It has since become a common staple in both role-playing and video games.
Mechanisms
Because skills and abilities are not usually lost, a game designer might decide to limit the use of such an ability by linking its use to magic points. This way, after using an ability, the player is required to rest or use an item to replenish their character's MP. This is done for balancing, so that each skill does not have an infinite casting ability with equal results every time.
"Magic" may be substituted with psychic powers, spiritual power, advanced technology or other concepts that would allow a character to influence the world around them that is not available in real life. Magic is often restricted to a specific class of character, such as a "mage" or "spellcaster", while other character classes have to rely on melee combat or physical projectiles. Other character classes, such as those that rely on melee attacks, may also have a "magic" bar that limits their special abilities, although they are usually called something different, such as the Barbarian's "Fury" in Diablo 3.
In video games, MP can often be restored by consuming magic potions or it may regenerate over time. Status effects are temporary modifications to a game character's original set of stats. A character may cast a spell that inflicts a positive or negative status effect on another character.
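The following short Python sketch models this mechanism in a generic way; the class and method names are hypothetical and not taken from any particular game.

```python
# Toy sketch of the magic-point mechanism described above: abilities spend MP,
# MP is clamped between zero and its maximum, and it can be restored by items
# or passive regeneration.
from dataclasses import dataclass

@dataclass
class ManaPool:
    current: int
    maximum: int

    def can_cast(self, cost: int) -> bool:
        return self.current >= cost

    def cast(self, cost: int) -> bool:
        # Returns False (the ability fails) when MP is insufficient.
        if not self.can_cast(cost):
            return False
        self.current -= cost
        return True

    def restore(self, amount: int) -> None:
        # Potions or regeneration top MP back up, capped at the maximum.
        self.current = min(self.maximum, self.current + amount)

mp = ManaPool(current=50, maximum=100)   # displayed as "50/100"
print(mp.cast(30), mp.current)           # True 20
print(mp.cast(30), mp.current)           # False 20 (not enough MP)
mp.restore(45)
print(f"{mp.current}/{mp.maximum}")      # 65/100
```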
In role-playing games
In both tabletop role-playing games and role-playing video games, magic is most often used to cast spells during battles. However, magic has many uses outside of combat situations, such as using love spells on NPCs to gain information. Some games base the strength and amount of a character's magic on stats such as "wisdom" or "intelligence". These stats are used because they are easy to keep track of and develop in pen-and-paper RPGs.
Some games introduce a separate point system per skill. For example, in the Pokémon games, each skill of each fighting character has its own "Power Points" (PP). If the PP of one of its skills is depleted, that specific Pokémon still has three other skills to choose from.
In god games
In god games, the player's power is usually called mana and grows along with the number and prosperity of the player's worshipers. Here, the population size influences the maximum amount of mana the player has and the rate at which their mana restores itself when it is below that maximum. Using "godly powers" consumes mana, but such actions are necessary to increase the number and prosperity of the population.
References
See also
Health (game terminology)
Experience point
Video game terminology
Magic (supernatural)
Fantasy games | Magic (game terminology) | [
"Technology"
] | 884 | [
"Computing terminology",
"Video game terminology"
] |
963,160 | https://en.wikipedia.org/wiki/Messier%2046 | Messier 46 or M46, also known as NGC 2437, is an open cluster of stars in the slightly southern constellation of Puppis. It was discovered by Charles Messier in 1771. Dreyer described it as "very bright, very rich, very large." It is about 5,000 light-years away. There are an estimated 500 stars in the cluster with a combined mass of , and it is thought to be a mid-range estimate of 251.2 million years old.
The cluster has a broadest (tidal) radius of and a core radius of . It has a greater spatial extent in infrared than in visible light, suggesting it is undergoing some mass segregation, with the fainter (redder) stars migrating to a coma (tail) region. The fainter stars that extend out to the south and west may form a tidal tail due to a past interaction.
The planetary nebula NGC 2438 appears to lie within the cluster near its northern edge (the faint almost rainbow array of colored smudge at the top-center of the image), but it is most likely unrelated since it does not share the cluster's radial velocity. This makes for superimposed objects of interest, another instance perhaps being NGC 2818.
On the other hand, the illuminating star of the bipolar Calabash Nebula shares the radial velocity and proper motion of Messier 46, and is at the same distance, so is a bona fide member of the open cluster.
M46 is located close to another open cluster, Messier 47. M46 is about a degree east of M47 in the sky, so the two fit well in a binocular or wide-angle telescope field.
See also
List of Messier objects
Messier object
References
External links
Messier 46, SEDS Messier pages
Messier 46, Amateur Astronomer Image – Waid Observatory
Dark Atmospheres Photography – M46 w/ NGC 2438 detail
– featuring M46
Orion–Cygnus Arm
Discoveries by Charles Messier | Messier 46 | [
"Astronomy"
] | 423 | [
"Puppis",
"Constellations"
] |
963,195 | https://en.wikipedia.org/wiki/Messier%2047 | Messier 47 (M47 or NGC 2422) and also known as NGC 2478 is an open cluster in the mildly southern constellation of Puppis. It was discovered by Giovanni Batista Hodierna before 1654 and in his then keynote work re-discovered by Charles Messier on 1771. It was also independently discovered by Caroline Herschel.
There is no cluster in the position indicated by Messier, which he expressed in terms of its right ascension and declination with respect to the star 2 Puppis. However, if the signs (+ and −) he wrote are swapped, the position matches. Until this equivalency was found, M47 was considered a lost Messier Object. The identification of the two as the same object (ad idem) only came in 1959, with a realization by Canadian astronomer T. F. Morris.
M47 is centered about 1,600 light-years away and is about 78 million years old. The member stars have been measured down to red dwarfs at about apparent magnitude 19. There are around 500 members, the brightest being HD 60855, a magnitude 5.7 Be star. The cluster is dominated by hot class B main-sequence and giant stars, but a noticeable colour contrast comes from its brightest red giants.
It is about a degree from Messier 46, which is much older and much farther away.
Gallery
See also
List of Messier objects
References and footnotes
External links
Messier 47, SEDS Messier pages
Messier 47 Amateur Image - Waid Observatory
Orion–Cygnus Arm | Messier 47 | [
"Astronomy"
] | 325 | [
"Puppis",
"Constellations"
] |
963,213 | https://en.wikipedia.org/wiki/Messier%2048 | Messier 48 or M48, also known as NGC 2548, is an open cluster of stars in the equatorial constellation of Hydra. It sits near Hydra's westernmost limit with Monoceros, about to the east and slightly south of Hydra's brightest star, Alphard. This grouping was discovered by Charles Messier in 1771, but there is no cluster precisely where Messier indicated; he made an error, as he did with M47. The value that he gave for the right ascension matches, however, his declination is off by five degrees. Credit for discovery is sometimes given instead to Caroline Herschel in 1783. Her nephew John Herschel described it as, "a superb cluster which fills the whole field; stars of 9th and 10th to the 13th magnitude – and none below, but the whole ground of the sky on which it stands is singularly dotted over with infinitely minute points".
M48 is visible to the naked eye under good atmospheric conditions. The brightest member is the star HIP 40348 at visual magnitude 8.3. The cluster is located some from the Sun. The age estimated from isochrones is Myr, while the gyrochronology age estimate is Myr – in good agreement. This makes it intermediate in age between the Pleiades, at around 100 Myr, and the Hyades, at about 650 Myr. The metallicity of the cluster, based on the abundance of iron (Fe), is [Fe/H] = , where −1 would be ten times lower than in the Sun. It is more metal-poor than the Pleiades, Hyades, and Praesepe clusters.
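For reference, the bracket notation used above is the standard logarithmic iron-abundance measure relative to the Sun (a general definition added here, not a value from the article):

```latex
% Standard definition of stellar metallicity [Fe/H]: the logarithmic
% iron-to-hydrogen number ratio of the star relative to the Sun, so
% [Fe/H] = 0 is solar and [Fe/H] = -1 is one tenth of the solar value.
\[
  [\mathrm{Fe}/\mathrm{H}] \;=\;
  \log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\!\star}
  \;-\;
  \log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\!\odot}
\]
```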
The cluster has a tidal radius of with at least 438 members and a mass of . The general structure of the cluster is fragmented and lumpy, which may be due to interactions with the galactic disk. The cluster is now subdivided into three groups, each of which has its own collective proper motion.
See also
List of Messier objects
References
External links
Messier 48, SEDS Messier pages
Orion–Cygnus Arm
Discoveries by Charles Messier | Messier 48 | [
"Astronomy"
] | 451 | [
"Hydra (constellation)",
"Constellations"
] |
963,313 | https://en.wikipedia.org/wiki/Hemp | Hemp, or industrial hemp, is a plant in the botanical class of Cannabis sativa cultivars grown specifically for industrial and consumable use. It can be used to make a wide range of products. Along with bamboo, hemp is among the fastest growing plants on Earth. It was also one of the first plants to be spun into usable fiber 50,000 years ago. It can be refined into a variety of commercial items, including paper, rope, textiles, clothing, biodegradable plastics, paint, insulation, biofuel, food, and animal feed.
Although chemotype I cannabis and hemp (types II, III, IV, V) are both Cannabis sativa and contain the psychoactive component tetrahydrocannabinol (THC), they represent distinct cultivar groups, typically with unique phytochemical compositions and uses. Hemp typically has lower concentrations of total THC and may have higher concentrations of cannabidiol (CBD), which potentially mitigates the psychoactive effects of THC. The legality of hemp varies widely among countries. Some governments regulate the concentration of THC and permit only hemp that is bred with an especially low THC content into commercial production.
Etymology
The etymology is uncertain, but there appears to be no common Proto-Indo-European source for the various forms of the word; the Greek term kánnabis is the oldest attested form, which may have been borrowed from an earlier Scythian or Thracian word. It then appears to have been borrowed into Latin, and separately into Slavic and from there into Baltic, Finnish, and Germanic languages.
In the Germanic languages, following Grimm's law, the "k" would have changed to "h" with the first Germanic sound shift, giving Proto-Germanic *hanapiz, which may then have been adapted into the Old English form hænep. Barber (1991), however, argued that the spread of the name "kannabis" was due to its historically more recent plant use, starting from the south, around Iran, whereas non-THC varieties of hemp are older and prehistoric. Another possible source of origin is an Assyrian word that was the name for a source of oil, fiber, and medicine in the 1st millennium BC.
Cognates of hemp exist in other Germanic languages, including Dutch, Danish, Norwegian, Saterland Frisian, German, Icelandic, and Swedish. In those languages "hemp" can refer to either industrial fiber hemp or narcotic cannabis strains.
Uses
Hemp is used to make a variety of commercial and industrial products, including rope, textiles, clothing, shoes, food, paper, bioplastics, insulation, and biofuel. The bast fibers can be used to make textiles that are 100% hemp, but they are commonly blended with other fibers, such as flax, cotton or silk, as well as virgin and recycled polyester, to make woven fabrics for apparel and furnishings. The inner two fibers of the plant are woodier and typically have industrial applications, such as mulch, animal bedding, and litter. When oxidized (often erroneously referred to as "drying"), hemp oil from the seeds becomes solid and can be used in the manufacture of oil-based paints, in creams as a moisturizing agent, for cooking, and in plastics. Hemp seeds have been used in bird feed mix as well. A survey in 2003 showed that more than 95% of hemp seed sold in the European Union was used in animal and bird feed.
Food
Hemp seeds can be eaten raw, ground into hemp meal, sprouted or made into dried sprout powder. Hemp seeds can also be made into a slurry used for baking or for beverages, such as hemp milk and tisanes. Hemp oil is cold-pressed from the seed and is high in unsaturated fatty acids.
In the UK, the Department for Environment, Food and Rural Affairs treats hemp as a purely non-food crop, but with proper licensing and proof of less than 0.3% THC concentration, hemp seeds can be imported for sowing or for sale as a food or food ingredient. In the US, hemp can be used legally in food products and was typically sold in health food stores or through mail order.
Nutrition
A portion of hulled hemp seeds supplies of food energy. They contain 5% water, 5% carbohydrates, 49% total fat, and 31% protein.
The share of protein obtained from hemp seeds can be increased by processing the seeds, such as by dehulling them, or by using the meal or cake (also called hemp seed flour), that is, the remaining fraction of hemp seed obtained after expelling its oil fraction. The proteins are mostly located in the inner layer of the seed, whereas the hull is poor in proteins, as it mostly contains fiber.
Hemp seeds are notable in providing 64% of the Daily Value (DV) of protein per 100-gram serving. The three main proteins in hemp seeds are edestin (83% of total protein content), albumin (13%) and ß-conglycinin (up to 5%). Hemp seed proteins are highly digestible compared to soy proteins when untreated (unheated). The amino acid profile of hemp seeds is comparable to the profiles of other protein-rich foods, such as meat, milk, eggs, and soy. Protein digestibility-corrected amino acid scores were 0.49–0.53 for whole hemp seed, 0.46–0.51 for hemp seed meal, and 0.63–0.66 for hulled hemp seed. The most abundant amino acid in hemp seed is glutamic acid (3.74–4.58% of whole seed) followed by arginine (2.28–3.10% of whole seed). Whole hemp seed can be considered a protein-rich source, containing an amount of protein higher than or similar to that of other protein-rich products, such as quinoa (13.0%), chia seeds (18.2–19.7%), buckwheat seeds (27.8%) and linseeds (20.9%). Nutritionally, the protein fraction of hemp seed is highly digestible compared to other plant-based proteins such as soy protein. Hemp seed protein has a good profile of essential amino acids; still, this profile is inferior to that of soy or casein.
Hemp seeds are a rich source of dietary fiber (20% DV), B vitamins, and the dietary minerals manganese (362% DV), phosphorus (236% DV), magnesium (197% DV), zinc (104% DV), and iron (61% DV). About 73% of the energy in hemp seeds is in the form of fats and essential fatty acids, mainly polyunsaturated fatty acids: linoleic, oleic, and alpha-linolenic acids. Of the 38.100 grams of polyunsaturated fat per 100 grams, 9.301 grams are omega-3 and 28.698 grams are omega-6 fatty acids. Typically, the portion suggested on packages for an adult is 30 grams, approximately three tablespoons.
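Worked out from the figures quoted above, the omega-6 to omega-3 ratio in hemp seed fat is therefore roughly three to one:

```latex
% Ratio computed directly from the per-100 g figures quoted above.
\[
  \frac{\text{omega-6}}{\text{omega-3}} \;=\; \frac{28.698\ \text{g}}{9.301\ \text{g}} \;\approx\; 3.09,
  \qquad\text{i.e. close to a 3:1 ratio.}
\]
```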
With its gluten content as low as 4.78 ppm, hemp is attracting attention as a gluten-free (<20 ppm) food material.
Despite the rich nutrient content of hemp seeds, the seeds contain antinutritional compounds, including phytic acid, trypsin inhibitors, and tannins, in statistically significant concentrations.
Storage
Hemp oil oxidizes and turns rancid within a short period of time if not stored properly; its shelf life is extended when it is stored in a dark airtight container and refrigerated. Both light and heat can degrade hemp oil.
Fiber
Hemp fiber has been used extensively throughout history, with production climaxing soon after being introduced to the New World. For centuries, items ranging from rope, to fabrics, to industrial materials were made from hemp fiber. Hemp was also commonly used to make sail canvas. The word "canvas" is derived from the word cannabis. Pure hemp has a texture similar to linen. Because of its versatility for use in a variety of products, today hemp is used in a number of consumer goods, including clothing, shoes, accessories, dog collars, and home wares. For clothing, in some instances, hemp is mixed with lyocell. Its benefits in terms for sustainability also increase its appeal in industries, such as the clothing industry.
Building material
Hemp as a building construction material provides solutions to a variety of issues facing current building standards. Its light weight, mold resistance, breathability, and other properties make hemp products versatile in a multitude of uses. Following the co-heating tests of the NNFCC Renewable House at the Building Research Establishment (BRE), hemp has been reported to be a more sustainable construction material in comparison to most building methods used today. In addition, its practical use in building construction could result in the reduction of both energy consumption costs and the creation of secondary pollutants.
In 2022, hemp-lime, also known as hempcrete, was accepted as a building material, along with methodologies for its use, by the International Code Council, and was included in the 2024 edition of the International Residential Code as an appendix: "Appendix BL Hemp-Lime (Hempcrete) Construction". This inclusion in the IRC model code is expected to promote expansion of the use and legitimacy of hemp-lime in construction in the United States.
The hemp market was at its largest during the 17th century. From the 19th century onward, the market saw a decline, accelerated by the plant's rapid illegalization in many countries. Hemp has resurfaced in green building construction, primarily in Europe. The modern-day disputes regarding the legality of hemp lead to its main disadvantages: importing and regulating costs. The Final Report on the Construction of the Hemp Houses at Haverhill, UK concludes that hemp construction exceeds the cost of traditional building materials by £48 per square meter.
Currently, the University of Bath researches the use of hemp-lime panel systems for construction. Funded by the European Union, the research tests panel designs for use in high-quality construction, covering on-site assembly, humidity and moisture penetration, temperature change, daily performance, and energy-saving documentation. The program, focusing on the British, French, and Spanish markets, aims to perfect protocols of use and application, manufacturing, data gathering, certification for market use, as well as warranty and insurance.
The most common use of hemp-lime in building is by casting the hemp-hurd and lime mix while wet around a timber frame with temporary shuttering and tamping the mix to form a firm mass. After the removal of the temporary shuttering, the solidified hemp mix is then ready to be plastered with lime plaster.
Sustainability
Hemp is classified under the green category of building design, primarily due to its positive effects on the environment. A few of its benefits include but are not limited to the suppression of weed growth, anti-erosion, reclamation properties, and the ability to remove poisonous substances and heavy metals from soil.
The use of hemp is beginning to gain popularity alongside other natural materials. This is because cannabis processing is done mechanically with minimal harmful effects on the environment. A part of what makes hemp sustainable is its minimal water usage and non-reliance on pesticides for proper growth. It is recyclable, non-toxic, and biodegradable, making hemp a popular choice in green building construction.
Hemp fiber is known to have high strength and durability, and has been known to be a good protector against vermin. The fiber has the capability to reinforce structures by embossing threads and cannabis shavers. Hemp has been involved more recently in the building industry, producing building construction materials including insulation, hempcrete, and varnishes.
Hemp-made materials have low embodied energy. The plant has the ability to absorb large amounts of CO2, improving air quality and thermal balance and creating a positive environmental impact.
Hemp's properties allow mold resistance, and its porous materiality makes the building materials made of it breathable. In addition, hemp possesses the ability to absorb and release moisture without deteriorating. Hemp can be non-flammable if mixed with lime and can be applied to numerous parts of the building (walls, roofs, etc.) due to its lightweight properties.
Insulation
Hemp is commonly used as an insulation material. Its flexibility and toughness during compression allow for easier implementation within structural framing systems. The insulation material can also be easily adjusted to different sizes and shapes by being cut during the installation process. Because it does not settle, and therefore avoids the development of cavities, it requires little maintenance.
Hemp insulation is naturally lightweight and non-toxic, allowing for an exposed installation in a variety of spaces, including flooring, walling, and roofing. Compared to mineral insulation, hemp absorbs roughly double the amount of heat and can be compared to wood, in some cases even surpassing some wood types.
Hemp insulation's porous materiality allows for air and moisture penetration, with a bulk density going up to 20% without losing any thermal properties. In contrast, the commonly used mineral insulation starts to fail after 2%. The insulation evenly distributes vapor and allows for air circulation, constantly carrying out used air and replacing with fresh. Its use on the exterior of the structure, overlaid with breathable water-resistive barriers, eases the withdrawal of moisture from within the wall structure.
In addition, the insulation doubles as a sound barrier, weakening airborne sound waves passing through it.
Hempcrete
In addition to the CO2 absorbed during its growth period, hemp-lime, also known as hempcrete, continues absorption during the curing process. The mixture hardens when the silica contained in hemp shives mixes with hydraulic lime, resulting in the mineralization process called "carbonation".
Though not a load-bearing material, hempcrete is most commonly used as infill in building construction due to its light weight (roughly seven times lighter than common concrete) and vapor permeability. The building material is made of hemp hurds (shiv or shives), hydraulic lime, and water mixed in varying ratios. The mix depends on the use of the material within the structure and could differ in physical properties. Surfaces such as flooring interact with a multitude of loads and would have to be more resistive, while walls and roofs are required to be more lightweight. The application of this material in construction requires minimal skill.
Hempcrete can be formed in-situ or formed into blocks. Such blocks are not strong enough to be used for structural elements and must be supported by brick, wood, or steel framing. At the end of the twentieth century, during his renovation of Maison de la Turquie in Nogent-sur-Seine, France, Charles Rasetti first invented and applied the use of hempcrete in construction. Shortly after, in the 2000s, Modece Architects used hemp-lime for test designs in Haverhill. The dwellings were studied and monitored for comparison with other building performances by BRE. Completed in 2009, the Center for the Built Environment's Renewable House was found to be among the most technologically advanced structures made of hemp-based material. A year later the first home made of hemp-based materials was completed in Asheville, North Carolina, US.
Oils and varnishes
Cannabis seeds have a high fat content, containing 30–35% fatty acids. The extracted oil is suited for a variety of construction applications. The biodegradable hemp oil acts as a wood varnish, protecting flooring from mold, pests, and wear. Its use prevents water from penetrating the wood while still allowing air and vapor to pass through. Its most common use can be seen in wood framing construction, one of the most common construction methods in the world. Because of its low UV-resistance rating, the finish is most often used indoors, on surfaces such as flooring and wood paneling.
Plaster
Hemp-based insulating plaster is created by combining hemp fibers with calcium lime and sand. This material, when applied on internal walls, ceilings, and flooring, can be layered up to ten centimeters in thickness. Its porous materiality allows the created plaster to regulate air humidity and evenly distribute it. The gradual absorption and release of water prevent the material from cracking and breaking apart. Similar to high-density fiber cement, hemp plaster can naturally vary in color and be manually pigmented.
Ropes and strands
Hemp ropes can be woven in various diameters and possess high strength, making them suitable for a variety of uses in building construction. Some of these uses include the installation of frames in building openings and the connection of joints. The ropes are also used in bridge construction, tunnels, traditional homes, etc. One of the earliest examples of hemp rope and other textile use can be traced back to 1500 BC Egypt.
Plastics
Cannabis geotextiles can be used in both wet and dry conditions. Hemp-based bioplastic is a biodegradable alternative to regular plastic and can potentially replace polyvinyl chloride (PVC), a material used for plumbing pipes.
Wood
Hemp growth lasts roughly 100 days, a much shorter period than for an average tree used for construction purposes. Once dried, the fibers can be pressed into dense wood alternatives for wood-frame construction, wall/ceiling paneling, and flooring. In addition, hemp is flexible and versatile, allowing it to be used in a greater number of ways than wood. Similarly, hemp wood can also be made of recycled hemp-based paper.
Composite materials
A mixture of fiberglass, hemp fiber, kenaf, and flax has been used since 2002 to make composite panels for automobiles. The choice of which bast fiber to use is primarily based on cost and availability.
Various car makers are beginning to use hemp in their cars, including Audi, BMW, Ford, GM, Chrysler, Honda, Iveco, Lotus, Mercedes, Mitsubishi, Porsche, Saturn, Volkswagen and Volvo. For example, the Lotus Eco Elise and the Mercedes C-Class both contain hemp (up to 20 kg in each car in the case of the latter).
Paper
Hemp paper refers to paper varieties consisting exclusively or to a large extent of pulp obtained from fibers of industrial hemp. The products are mainly specialty papers such as cigarette paper, banknotes and technical filter papers. Compared to wood pulp, hemp pulp offers a four to five times longer fiber, a significantly lower lignin fraction, as well as higher tear resistance and tensile strength. However, production costs are about four times higher than for paper from wood, since the infrastructure for using hemp is underdeveloped. If the paper industry were to switch from wood to hemp for sourcing its cellulose fibers, the following benefits could be realized:
Hemp yields three to four times more usable fiber per hectare per annum than forests, and hemp does not need pesticides or herbicides.
Hemp has a much faster crop yield. It takes about 3–4 months for hemp stalks to reach maturity, while trees can take between 20 and 80 years. Not only does hemp grow at a faster rate, but it also contains a high level of cellulose. This quick return means that paper can be produced at a faster rate if hemp were used in place of wood.
Hemp paper does not require the use of toxic bleaching or as many chemicals as wood pulp because it can be whitened with hydrogen peroxide. This means using hemp instead of wood for paper would end the practice of poisoning Earth's waterways with chlorine or dioxins from wood paper manufacturing.
Hemp paper can be recycled up to 8 times, compared to just 3 times for paper made from wood pulp.
Compared to its wood pulp counterpart, paper from hemp fibers resists decomposition and does not yellow or brown with age. It is also one of the strongest natural fibers in the world - one of the reasons for its longevity and durability.
Several factors favor the increased use of wood substitutes for paper, especially agricultural fibers such as hemp. Deforestation, particularly the destruction of old growth forests, and the world's decreasing supply of wild timber resources are today major ecological concerns. Hemp's use as a wood substitute will contribute to preserving biodiversity.
However, hemp has had a hard time competing with paper from trees or recycled newsprint. Only the outer part of the stem consists mainly of fibers which are suitable for the production of paper. Numerous attempts have been made to develop machines that efficiently and inexpensively separate useful fibers from less useful fibers, but none have been completely successful. This has meant that paper from hemp is still expensive compared to paper from trees.
Jewelry
Hemp jewelry is the product of knotting hemp twine through the practice of macramé. Hemp jewelry includes bracelets, necklaces, anklets, rings, watches, and other adornments. Some jewelry features beads made from crystals, glass, stone, wood and bones. The hemp twine varies in thickness and comes in a variety of colors. There are many different stitches used to create hemp jewelry, however, the half knot and full knot stitches are most common.
Cordage
Hemp rope was used in the age of sailing ships, though the rope had to be protected by tarring, since hemp rope has a propensity for breaking from rot, as the capillary effect of the rope-woven fibers tended to hold liquid at the interior, while seeming dry from the outside. Tarring was a labor-intensive process, and earned sailors the nickname "Jack Tar". Hemp rope was phased out when manila rope, which does not require tarring, became widely available. Manila is sometimes referred to as Manila hemp, but is not related to hemp; it is abacá, a species of banana.
Animal bedding
Hemp shives are the core of the stem, hemp hurds are broken parts of the core. In the EU, they are used for animal bedding (horses, for instance), or for horticultural mulch. Industrial hemp is much more profitable if both fibers and shives (or even seeds) can be used.
Water and soil purification
Hemp can be used as a "mop crop" to clear impurities out of wastewater, such as sewage effluent, excessive phosphorus from chicken litter, or other unwanted substances or chemicals. Additionally, hemp is being used to clean contaminants at the Chernobyl nuclear disaster site, by way of a process which is known as phytoremediation – the process of clearing radioisotopes and a variety of other toxins from the soil, water, and air.
Weed control
Hemp crops are tall, have thick foliage, and can be planted densely, and thus can be grown as a smother crop to kill tough weeds. Using hemp this way can help farmers avoid the use of herbicides, gain organic certification, and gain the benefits of crop rotation. However, due to the plant's rapid and dense growth characteristics, some jurisdictions consider hemp a prohibited and noxious weed, much like Scotch Broom.
Biofuels
Biodiesel can be made from the oils in hemp seeds and stalks; this product is sometimes called "hempoline". Alcohol fuel (ethanol or, less commonly, methanol) can be made by fermenting the whole plant.
Filtered hemp oil can be used directly to power diesel engines. In 1892, Rudolf Diesel invented the diesel engine, which he intended to power "by a variety of fuels, especially vegetable and seed oils, which earlier were used for oil lamps, i.e. the Argand lamp".
Production of vehicle fuel from hemp is very small. Commercial biodiesel and biogas is typically produced from cereals, coconuts, palm seeds, and cheaper raw materials like garbage, wastewater, dead plant and animal material, animal feces and kitchen waste.
Processing
Separation of hurd and bast fiber is known as decortication. Traditionally, hemp stalks would be water-retted first before the fibers were beaten off the inner hurd by hand, a process known as scutching. As mechanical technology evolved, separating the fiber from the core was accomplished by crushing rollers and brush rollers, or by hammer-milling, wherein a mechanical hammer mechanism beats the hemp against a screen until hurd, smaller bast fibers, and dust fall through the screen. After the Marijuana Tax Act was implemented in 1938, the technology for separating the fibers from the core remained "frozen in time". Recently, new high-speed kinematic decortication has come about, capable of separating hemp into three streams; bast fiber, hurd, and green microfiber.
Only in 1997 did Ireland, parts of the Commonwealth and other countries begin to legally grow industrial hemp again. Iterations of the 1930s decorticator have been met with limited success, along with steam explosion and chemical processing known as thermomechanical pulping.
Cultivation
Hemp is usually planted between March and May in the northern hemisphere, between September and November in the southern hemisphere. It matures in about three to four months, depending on various conditions.
Millennia of selective breeding have resulted in varieties that display a wide range of traits; e.g. suited for particular environments/latitudes, producing different ratios and compositions of terpenoids and cannabinoids (CBD, THC, CBG, CBC, CBN...etc.), fiber quality, oil/seed yield, etc. Hemp grown for fiber is planted closely, resulting in tall, slender plants with long fibers.
The use and cultivation of the industrial hemp plant were commonplace until the 1900s, when it became associated with its genetic sibling, drug-type Cannabis (which contains higher levels of psychoactive THC). Influential groups misconstrued hemp as a dangerous "drug", even though hemp is not a recreational drug and has the potential to be a sustainable and profitable crop for many farmers due to hemp's medical, structural and dietary uses. In the United States, the public's perception of hemp as marijuana has blocked hemp from becoming a useful crop and product, in spite of its vital importance prior to World War II.
Ideally, according to Britain's Department for Environment, Food and Rural Affairs, the herb should be desiccated and harvested toward the end of flowering. This early cropping reduces the seed yield but improves the fiber yield and quality.
The seeds are sown with grain drills or other conventional seeding equipment to a depth of . Greater seeding depths result in increased weed competition. Nitrogen should not be placed with the seed, but phosphate may be tolerated. The soil should have available 89 to 135 kg/ha of nitrogen, 46 kg/ha phosphorus, 67 kg/ha potassium, and 17 kg/ha sulfur. Organic fertilizers such as manure are one of the best methods of weed control.
Cultivars
In contrast to cannabis for medical use, varieties grown for fiber and seed have less than 0.3% THC and are unsuitable for producing hashish and marijuana. Present in industrial hemp, cannabidiol is a major constituent among some 560 compounds found in hemp.
Cannabis sativa L. subsp. sativa var. sativa is the variety grown for industrial use, while C. sativa subsp. indica generally has poor fiber quality and female buds from this variety are primarily used for recreational and medicinal purposes. The major differences between the two types of plants are the appearance, and the amount of Δ9-tetrahydrocannabinol (THC) secreted in a resinous mixture by epidermal hairs called glandular trichomes, although they can also be distinguished genetically. Oilseed and fiber varieties of Cannabis approved for industrial hemp production produce only minute amounts of this psychoactive drug, not enough for any physical or psychological effects. Typically, hemp contains below 0.3% THC, while cultivars of Cannabis grown for medicinal or recreational use can contain anywhere from 2% to over 20%.
Harvesting
Smallholder plots are usually harvested by hand. The plants are cut at 2 to 3 cm above the soil and left on the ground to dry. Mechanical harvesting is now common, using specially adapted cutter-binders or simpler cutters.
The cut hemp is laid in swathes to dry for up to four days. This was traditionally followed by retting, either water retting (the bundled hemp floats in water) or dew retting (the hemp remains on the ground and is affected by the moisture in dew and by molds and bacterial action).
Pests
Several arthropods can cause damage or injury to hemp plants, but the most serious species are insects. The most problematic for outdoor crops are the voracious stem-boring caterpillars, which include the European corn borer, Ostrinia nubilalis, and the Eurasian hemp borer, Grapholita delineana. As the names imply, they target the stems, reducing the structural integrity of the plant. Another lepidopteran, the corn earworm, Helicoverpa zea, is known to damage flowering parts and can be challenging to control. Other foliar pests, found in both indoor and outdoor crops, include the hemp russet mite, Aculops cannabicola, and the cannabis aphid, Phorodon cannabis. They cause injury by reducing plant vigor because they feed on the phloem of the plant. Root feeders can be difficult to detect and control because of their below-surface habitat. A number of beetle grubs and chafers are known to cause damage to hemp roots, including the flea beetle and the Japanese beetle, Popillia japonica. The rice root aphid, Rhopalosiphum rufiabdominale, has also been reported but primarily affects indoor growing facilities. Integrated pest management strategies should be employed to manage these pests, with prevention and early detection being the foundation of a resilient program. Cultural and physical controls should be employed in conjunction with biological pest controls; chemical applications should only be used as a last resort.
Diseases
Hemp plants can be vulnerable to various pathogens, including bacteria, fungi, nematodes, viruses and other miscellaneous pathogens. Such diseases often lead to reduced fiber quality, stunted growth, and death of the plant. These diseases rarely affect the yield of a hemp field, so hemp production is not traditionally dependent on the use of pesticides.
Environmental impact
Hemp is considered by a 1998 study in Environmental Economics to be environmentally friendly due to a decrease of land use and other environmental impacts, indicating a possible decrease of ecological footprint in a US context compared to typical benchmarks. A 2010 study, however, that compared the production of paper specifically from hemp and eucalyptus concluded that "industrial hemp presents higher environmental impacts than eucalyptus paper"; however, the article also highlights that "there is scope for improving industrial hemp paper production". Hemp is also claimed to require few pesticides and no herbicides, and it has been called a carbon negative raw material.
Results indicate that high yield of hemp may require high total nutrient levels (field plus fertilizer nutrients) similar to a high yielding wheat crop.
A United Nations report endorses the versatility and sustainability of hemp and its productive potential in developing countries. Hemp uses a quarter of the water required by cotton, and absorbs more carbon dioxide than other crops and most trees.
Producers
The world-leading producer of hemp is China, which produces more than 70% of the world output. France ranks second with about a quarter of the world production. Smaller production occurs in the rest of Europe, Chile, and North Korea. Over 30 countries produce industrial hemp, including Australia, Austria, Canada, Chile, China, Denmark, Egypt, Finland, Germany, Greece, Hungary, India, Italy, Japan, Korea, Netherlands, New Zealand, Poland, Portugal, Romania, Russia, Slovenia, Spain, Sweden, Switzerland, Thailand, Turkey, the United Kingdom and Ukraine.
The United Kingdom and Germany resumed commercial production in the 1990s. British production is mostly used as bedding for horses; other uses are under development. Companies in Canada, the UK, the United States, and Germany, among many others, process hemp seed into a growing range of food products and cosmetics; many traditional growing countries continue to produce textile-grade fiber.
Air-dried stem yields in Ontario have, from 1998 onward, ranged from 2.6 to 14.0 tons of dry, retted stalks per hectare (1–5.5 t/ac) at 12% moisture. Yields in Kent County have averaged 8.75 t/ha (3.5 t/ac). Northern Ontario crops averaged 6.1 t/ha (2.5 t/ac) in 1998. Statistics for the European Union for 2008 to 2010 indicate that the average yield of hemp straw varied between 6.3 and 7.3 tons per ha. Only a part of that is bast fiber. Around one ton of bast fiber and 2–3 tons of core material can be decorticated from 3–4 tons of good-quality, dry-retted straw. For an annual yield of this level, it is recommended in Ontario to add nitrogen (N): 70–110 kg/ha, phosphate (P2O5): up to 80 kg/ha and potash (K2O): 40–90 kg/ha.
The average yield of dry hemp stalks in Europe was 6 ton/ha (2.4 ton/ac) in 2001 and 2002.
The FAO argues that an optimum yield of hemp fiber is more than 2 tons per hectare, while average yields are around 650 kg/ha.
Australia
In the Australian states of Tasmania, Victoria, Queensland, Western Australia, New South Wales, and most recently, South Australia, the state governments have issued licenses to grow hemp for industrial use. The first to initiate modern research into the potential of cannabis was the state of Tasmania, which pioneered the licensing of hemp during the early 1990s. The state of Victoria was an early adopter in 1998, and reissued the regulation in 2008.
Queensland has allowed industrial production under license since 2002, where the issuance is controlled under the Drugs Misuse Act 1986.
Western Australia enabled the cultivation, harvest and processing of hemp under its Industrial Hemp Act 2004. New South Wales now issues licenses under the Hemp Industry Regulations Act 2008 (No 58), which came into effect on 6 November 2008.
Most recently, South Australia legalized industrial hemp under South Australia's Industrial Hemp Act 2017, which commenced on 12 November 2017.
Canada
Commercial production (including cultivation) of industrial hemp has been permitted in Canada since 1998 under licenses and authorization issued by Health Canada.
In the early 1990s, industrial hemp agriculture in North America began with the Hemp Awareness Committee at the University of Manitoba. The Committee worked with the provincial government to get research and development assistance and was able to obtain test plot permits from the Canadian government. Their efforts led to the legalization of industrial hemp (hemp with only minute amounts of tetrahydrocannabinol) in Canada and the first harvest in 1998.
In 2017, the cultivated area for hemp in the Prairie provinces included Saskatchewan with more than , Alberta with , and Manitoba with . Canadian hemp is cultivated mostly for its food value as hulled hemp seeds, hemp oils, and hemp protein powders, with only a small fraction devoted to production of hemp fiber used for construction and insulation.
France
France is Europe's biggest producer (and the world's second largest producer) with cultivated. 70–80% of the hemp fiber produced in 2003 was used for specialty pulp for cigarette papers and technical applications. About 15% was used in the automotive sector, and 5–6% was used for insulation mats. About 95% of hurds were used as animal bedding, while almost 5% was used in the building sector. In 2010–2011, a total of was cultivated with hemp in the EU, a decline compared with the previous year.
Russia and Ukraine
From the 1950s to the 1980s, the Soviet Union was the world's largest producer of hemp ( in 1970). The main production areas were in Ukraine, the Kursk and Orel regions of Russia, and near the Polish border. Since its inception in 1931, the Hemp Breeding Department at the Institute of Bast Crops in Hlukhiv (Glukhov), Ukraine, has been one of the world's largest centers for developing new hemp varieties, focusing on improving fiber quality, per-hectare yields, and low THC content.
After the collapse of the Soviet Union, the commercial cultivation of hemp declined sharply. However, at least an estimated 2.5 million acres of hemp grow wild in the Russian Far East and the Black Sea regions.
United Kingdom
In the United Kingdom, cultivation licenses are issued by the Home Office under the Misuse of Drugs Act 1971. When grown for nondrug purposes, hemp is referred to as industrial hemp, and a common product is fiber for use in a wide variety of products, as well as the seed for nutritional aspects and the oil. Feral hemp or ditch weed is usually a naturalized fiber or oilseed strain of Cannabis that has escaped from cultivation and is self-seeding.
United States
In October 2019, hemp became legal to grow in 46 U.S. states under federal law. As of 2019, 47 states have enacted legislation to make hemp legal to grow at the state level, with several states implementing medical provisions regarding the growing of plants specifically for non-psychoactive CBD.
The 2018 Farm Bill, which incorporated the Hemp Farming Act of 2018, removed hemp as a Schedule I drug and instead made it an agricultural commodity. This legalized hemp at the federal level, which made it easier for hemp farmers to get production licenses, acquire loans, and receive federal crop insurance.
Examples of state hemp statutes include New Hampshire's 2014 law (2014 N.H. Laws, Chap. 18) and South Dakota's HB 1008 (2020), codified at S.D. Codified Laws Ann. §38-35-1 et seq. The South Dakota law:
Authorizes the growth, production and transportation of hemp with a license, and directs the Department of Agriculture to submit a state plan to USDA.
Requires a minimum of five contiguous outdoor acres for grower license applications, and requires any license applicants to submit to a state and federal criminal background investigation.
Requires a transportation permit for any transporter traveling within or through the state and creates two types of industrial hemp transportation permits (grower licensee and general) provided by the Department of Public Safety.
Creates the Hemp Regulatory Program Fund.
The process to legalize hemp cultivation began in 2009, when Oregon began approving licenses for industrial hemp. Then, in 2013, after the legalization of marijuana, several farmers in Colorado planted and harvested several acres of hemp, bringing in the first hemp crop in the United States in over half a century. After that, the federal government created a Hemp Farming Pilot Program as a part of the Agricultural Act of 2014. This program allowed institutions of higher education and state agricultural departments to begin growing hemp without the consent of the Drug Enforcement Administration (DEA). Hemp production in Kentucky, formerly the United States' leading producer, resumed in 2014. Hemp production in North Carolina resumed in 2017, and in Washington State the same year. By the end of 2017, at least 34 U.S. states had industrial hemp programs. In 2018, New York began taking strides in industrial hemp production, along with hemp research pilot programs at Cornell University, Binghamton University and SUNY Morrisville.
As of 2017, the hemp industry estimated that annual sales of hemp products were around $820 million annually; hemp-derived CBD have been the major force driving this growth.
Despite this progress, hemp businesses in the US have had difficulties expanding as they have faced challenges in traditional marketing and sales approaches. According to a case study done by Forbes, hemp businesses and startups have had difficulty marketing and selling non-psychoactive hemp products, as the majority of online advertising platforms and financial institutions do not distinguish between hemp and marijuana.
History
Gathered hemp fiber was used to make cloth long before agriculture, nine to fifty thousand years ago. It may also be one of the earliest plants to have been cultivated. An archeological site in the Oki Islands of Japan contained cannabis achenes from about 8000 BC, probably signifying use of the plant. Hemp use archaeologically dates back to the Neolithic Age in China, with hemp fiber imprints found on Yangshao culture pottery dating from the 5th millennium BC. The Chinese later used hemp to make clothes, shoes, ropes, and an early form of paper. The classical Greek historian Herodotus (ca. 480 BC) reported that the inhabitants of Scythia would often inhale the vapors of hemp-seed smoke, both as ritual and for their own pleasurable recreation.
Textile expert Elizabeth Wayland Barber summarizes the historical evidence that Cannabis sativa, "grew and was known in the Neolithic period all across the northern latitudes, from Europe (Germany, Switzerland, Austria, Romania, Ukraine) to East Asia (Tibet and China)," but, "textile use of Cannabis sativa does not surface for certain in the West until relatively late, namely the Iron Age."
"I strongly suspect, however, that what catapulted hemp to sudden fame and fortune as a cultigen and caused it to spread rapidly westwards in the first millennium B.C. was the spread of the habit of pot-smoking from somewhere in south-central Asia, where the drug-bearing variety of the plant originally occurred. The linguistic evidence strongly supports this theory, both as to time and direction of spread and as to cause."
Jews living in Palestine in the 2nd century were familiar with the cultivation of hemp, as witnessed by a reference to it in the Mishna (Kil'ayim 2:5) as a variety of plant, along with arum, that sometimes takes as many as three years to grow from a seedling. In late medieval Holy Roman Empire (Germany) and Italy, hemp was employed in cooked dishes, as filling in pies and tortes, or boiled in a soup. Hemp in later Europe was mainly cultivated for its fibers and was used for ropes on many ships, including those of Christopher Columbus. The use of hemp as a cloth was centered largely in the countryside, with higher quality textiles being available in the towns.
The Spaniards brought hemp to the Americas and cultivated it in Chile starting about 1545. Similar attempts were made in Peru, Colombia, and Mexico, but only in Chile did the crop find success. In July 1605, Samuel Champlain reported the use of grass and hemp clothing by the (Wampanoag) people of Cape Cod, and the (Nauset) people of Plymouth Bay told him they harvested hemp in their region, where it grew wild to a height of 4 to 5 ft.
In May 1607, "hempe" was among the crops Gabriel Archer observed being cultivated by the natives at the main Powhatan village, where Richmond, Virginia, is now situated; and in 1613, Samuell Argall reported wild hemp "better than that in England" growing along the shores of the upper Potomac. As early as 1619, the first Virginia House of Burgesses passed an Act requiring all planters in Virginia to sow "both English and Indian" hemp on their plantations. The Puritans are first known to have cultivated hemp in New England in 1645.
United States
George Washington pushed for the growth of hemp, as it was a cash crop commonly used to make rope and fabric. In May 1765 he noted in his diary the sowing of seeds each day until mid-April, and he later recounts the harvest in October, in which he grew 27 bushels that year.
It is sometimes supposed that an excerpt from Washington's diary, which reads "Began to separate the Male from the Female hemp at Do.—rather too late", is evidence that he was trying to grow female plants for the THC found in the flowers. However, the editorial remark accompanying the diary states that "This may arise from their [the male] being coarser, and the stalks larger." In subsequent days, he describes soaking the hemp (to make the fibers usable) and harvesting the seeds, suggesting that he was growing hemp for industrial purposes, not recreational ones.
George Washington also imported the Indian hemp plant from Asia, which was used for fiber and, by some growers, for intoxicating resin production. In a 1796 letter to William Pearce who managed the plants for him, Washington says, "What was done with the Indian Hemp plant from last summer? It ought, all of it, to be sown again; that not only a stock of seed sufficient for my own purposes might have been raised, but to have disseminated seed to others; as it is more valuable than common hemp."
Other presidents known to have farmed hemp for alternative purposes include Thomas Jefferson, James Madison, James Monroe, Andrew Jackson, Zachary Taylor, and Franklin Pierce.
Historically, hemp production had made up a significant portion of antebellum Kentucky's economy. Before the American Civil War, many slaves worked on plantations producing hemp.
In 1937, the Marihuana Tax Act of 1937 was passed in the United States, levying a tax on anyone who dealt commercially in cannabis, hemp, or marijuana. The passing of the Act to destroy the U.S. hemp industry has been reputed to involve businessmen Andrew Mellon, Randolph Hearst and the Du Pont family.
One claim is that Hearst believed that his extensive timber holdings were threatened by the invention of the decorticator that he feared would allow hemp to become a cheap substitute for the paper pulp used for newspaper. Historical research indicates this fear was unfounded because improvements of the decorticators in the 1930s – machines that separated the fibers from the hemp stem – could not make hemp fiber a cheaper substitute for fibers from other sources. Further, decorticators did not perform satisfactorily in commercial production.
Another claim is that Mellon, Secretary of the Treasury and the wealthiest man in America at that time, had invested heavily in DuPont's new synthetic fiber, nylon, and believed that the replacement of the traditional resource, hemp, was integral to the new product's success. DuPont and many industrial historians dispute a link between nylon and hemp; nylon immediately became a scarce commodity upon its introduction. Nylon had characteristics that could be used for toothbrushes (sold from 1938), and very thin nylon fiber could compete with silk and rayon in various textiles normally not produced from hemp fiber, such as very thin stockings for women.
Although the Marijuana Tax Act of 1937 had only recently been signed into law, the United States Department of Agriculture lifted the tax on hemp cultivation during WWII. Before WWII, the U.S. Navy used jute and Manila hemp from the Philippines and Indonesia for the cordage on its ships. During the war, Japan cut off those supply lines, and America was forced to turn inward and revitalize the cultivation of hemp on U.S. soil.
Hemp was used extensively by the United States during World War II to make uniforms, canvas, and rope. Much of the hemp used was cultivated in Kentucky and the Midwest. During World War II, the U.S. produced a short 1942 film, Hemp for Victory, promoting hemp as a necessary crop to win the war. By the 1980s the film was largely forgotten, and the U.S. government even denied its existence. The film, and the important historical role of hemp in U.S. agriculture and commerce was brought to light by hemp activist Jack Herer in the book The Emperor Wears No Clothes.
U.S. farmers participated in the campaign to increase U.S. hemp production to 36,000 acres in 1942. This increase amounted to more than 20 times the production in 1941 before the war effort.
In the United States, Executive Order 12919 (1994) identified hemp as a strategic national product that should be stockpiled.
Historical cultivation
Hemp has been grown for millennia in Asia and the Middle East for its fiber. Commercial production of hemp in the West took off in the eighteenth century, but it was already being grown in the sixteenth century in eastern England. Because of the colonial and naval expansion of the era, economies needed large quantities of hemp for rope and oakum. In the early 1940s, world production of hemp fiber ranged from 250,000 to 350,000 metric tons; Russia was the biggest producer.
In Western Europe, the cultivation of hemp was not legally banned by the 1930s, but commercial cultivation had stopped by then, due to decreased demand compared to increasingly popular artificial fibers. Speculation about the potential for commercial cultivation of hemp in large quantities has been criticized due to successful competition from other fibers for many products. The world production of hemp fiber fell from over 300,000 metric tons in 1961 to about 75,000 metric tons in the early 1990s and has since remained stable at about that level.
Japan
In Japan, hemp was historically used as a paper and fiber crop. There is archaeological evidence that cannabis was used for clothing and that the seeds were eaten in Japan as far back as the Jōmon period (10,000 to 300 BC). Many kimono designs portray hemp, or asa (), as a beautiful plant. In 1948, marijuana was restricted as a narcotic drug. The ban on marijuana imposed by the United States authorities was alien to Japanese culture, as the drug had never been widely used in Japan before. Though these laws against marijuana are some of the world's strictest, allowing five years' imprisonment for possession of the drug, they exempt hemp growers, whose crop is used to make robes for Buddhist monks and loincloths for sumo wrestlers. Because marijuana use in Japan has doubled in the past decade, these exemptions have recently been called into question.
Portugal
The cultivation of hemp in Portuguese lands began around the fourteenth century. The raw material was used for the preparation of rope and plugs for the Portuguese ships. Portugal also utilized its colonies to support its hemp supply, including in certain parts of Brazil.
In order to recover the ailing Portuguese naval fleet after the Restoration of Independence in 1640, King John IV put a renewed emphasis on the growing of hemp. He ordered the creation of the Royal Linen and Hemp Factory in the town of Torre de Moncorvo to increase production and support the effort.
In 1971, the cultivation of hemp became illegal, and production was substantially reduced. Because of EU regulations 1308/70, 619/71 and 1164/89, this law was revoked (for some certified seed varieties).
See also
Cannabis flower essential oil
Fiber crop
Fiber rope
Flax seed
Hemp Industries Association
Industrial Hemp Farming Act of 2009
International Year of Natural Fibres
Natural fiber
The Emperor Wears No Clothes (book)
References
Articles containing video clips
Biofuels
Fiber plants
Herbs
Non-food crops
Traditional knowledge
Biopiracy
Food sovereignty | Hemp | [
"Biology"
] | 10,741 | [
"Biopiracy",
"Biodiversity"
] |
963,403 | https://en.wikipedia.org/wiki/Messier%2049 | Messier 49 (also known as M49 or NGC 4472) is a giant elliptical galaxy about away in the equatorial constellation of Virgo. This galaxy was discovered by astronomer Charles Messier in 1777.
As an elliptical galaxy, Messier 49 has the physical form of a radio galaxy, but it only has the radio emission of a normal galaxy. From the detected radio emission, the core region has roughly 10⁵³ erg (10⁴⁶ J or 10²² YJ) of synchrotron energy. The nucleus of this galaxy is emitting X-rays, suggesting the likely presence of a supermassive black hole with an estimated mass of , or 565 million times the mass of the Sun (). X-ray emission shows a structure to the north of Messier 49 that resembles a bow shock. To the southwest of the core, the luminous outline of the galaxy can be traced out to a distance of 260 kpc.
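The energy figures quoted above are mutually consistent under the standard unit definitions (1 erg = 10⁻⁷ J and 1 YJ = 10²⁴ J); as a quick check:

\[ 10^{53}\,\mathrm{erg} \times 10^{-7}\,\mathrm{J/erg} = 10^{46}\,\mathrm{J} = \frac{10^{46}}{10^{24}}\,\mathrm{YJ} = 10^{22}\,\mathrm{YJ} \]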
This galaxy has many globular clusters: an estimated 5,900. This is far more than the roughly 200 orbiting the Milky Way, but dwarfed by the 13,450 orbiting the supergiant elliptical galaxy Messier 87. On average, the globular clusters of M49 are about 10 billion years old. Between 2000 and 2009, strong evidence for a stellar-mass black hole was discovered in one of them. A second candidate was announced in 2011.
Messier 49 was the first member of the Virgo Cluster of galaxies to be discovered. It is the most luminous member of that cluster and more luminous than any galaxy closer to the Earth. This galaxy forms part of the smaller Virgo B subcluster 4.5° away from the dynamic center of the Virgo Cluster, centered on Messier 87. Messier 49 is gravitationally interacting with the dwarf irregular galaxy UGC 7636. The dwarf shows a trail of debris spanning roughly 1 × 5 arcminutes, which corresponds to a physical dimension of .
One supernova has been observed in M49: SN 1969Q (type unknown, mag. 13) was discovered by Evans on 12 June 1969. [Note: some sources incorrectly report the discovery date as 1 June 1969.]
See also
List of Messier objects
References and footnotes
External links
SEDS: Elliptical Galaxy M49
Black hole found in a star cluster in M49
Messier 049
Messier 049
Messier 049
049
Messier 049
07629
41220
134
17710219
Discoveries by Charles Messier | Messier 49 | [
"Astronomy"
] | 510 | [
"Virgo (constellation)",
"Constellations"
] |
963,466 | https://en.wikipedia.org/wiki/Messier%2050 | Messier 50 or M 50, also known as NGC 2323 or the Heart-shaped Cluster, is an open cluster of stars in the constellation Monoceros. It was recorded by G. D. Cassini before 1711 and independently discovered by Charles Messier in 1772 while observing Biela's Comet. It is sometimes described as a 'heart-shaped' figure or a blunt arrowhead.
M50 is about 2,900 light-years away from Earth and is near to, but narrowly estimated not to be gravitationally tied to, the Canis Major (CMa) OB1 association. It has a core radius of and spans . The cluster has 508 confirmed and 109 probable members; their combined mass is more than , so the mean stellar density would be 1.3 stars per cubic parsec. It is around 140 million years old, with two high-mass white dwarfs and two chemically peculiar stars.
Gallery
See also
List of Messier objects
References and footnotes
External links
Messier 50 - at Deep Sky Videos
Messier 50, SEDS Messier pages
M50 Image by Waid Observatory
Messier 050
Orion–Cygnus Arm
Messier 050
050
Messier 050
? | Messier 50 | [
"Astronomy"
] | 242 | [
"Monoceros",
"Constellations"
] |
963,679 | https://en.wikipedia.org/wiki/Messier%2052 | Messier 52 or M52, also known as NGC 7654 or the Scorpion Cluster, is an open cluster of stars in the highly northern constellation of Cassiopeia. It was discovered by Charles Messier in 1774. It can be seen from Earth under a good night sky with binoculars. The brightness of the cluster is influenced by extinction, which is stronger in the southern half. Its metallicity is somewhat below that of the Sun, and is estimated to be [Fe/H] = −0.05 ± 0.01.
R. J. Trumpler classified the cluster appearance as II2r, indicating a rich cluster with little central concentration and a medium range in the brightness of the stars. This was later revised to I2r, denoting a dense core. The cluster has a core radius of and a tidal radius of . It has an estimated age of 158.5 million years and a mass of .
The magnitude 8.3 supergiant star BD +60°2532 is a probable member of the cluster, as are 18 candidate slowly pulsating B stars (one being a Delta (δ) Scuti variable) and three candidate Gamma Doradus (γ Dor) variables. There may also be three Be stars. The core of the cluster shows a lack of interstellar matter, which may be due to supernova explosion(s) early in the cluster's history.
See also
List of Messier objects
References and footnotes
External links
Messier 52, SEDS Messier pages
Messier 052
Messier 052
052
Messier 052
Perseus Arm
?
Discoveries by Charles Messier | Messier 52 | [
"Astronomy"
] | 334 | [
"Cassiopeia (constellation)",
"Constellations"
] |
963,711 | https://en.wikipedia.org/wiki/Messier%2053 | Messier 53 (also known as M53 or NGC 5024) is a globular cluster in the Coma Berenices constellation. It was discovered by Johann Elert Bode in 1775. M53 is one of the more outlying globular clusters, being about light-years away from the Galactic Center, and almost the same distance (about ) from the Solar System. The cluster has a core radius (rc) of 2.18 pc, a half-light radius (rh) of 5.84 pc, and a tidal radius (rtr) of 239.9 pc.
This is considered a metal-poor cluster and at one time was thought to be the most metal-poor cluster in the Milky Way. Abundance measurements of cluster members on the red giant branch show that most are first-generation stars. That is, they did not form from gas recycled from previous generations of stars. This differs from the majority of globular clusters that are more dominated by second generation stars. The second generation stars in NGC 5024 tend to be more concentrated in the core region. Overall, the stellar composition of cluster members is similar to members of the Milky Way halo.
The cluster displays various tidal-like features including clumps and ripples around the cluster, and tails along the cluster's orbit in an east–west direction. A tidal bridge-like structure appears to connect M53 with the close, very diffuse neighbor NGC 5053, as well as an envelope surrounding both clusters. These may indicate a dynamic tidal interaction has occurred between the two clusters; a possibly unique occurrence within the Milky Way since there are no known binary clusters within the galaxy. In addition, M53 is a candidate member of the Sagittarius dwarf galaxy tidal stream.
Among the variable star population in the cluster, there are 55 known to be RR Lyrae variables. Of these, a majority of 34 display behavior typical of the Blazhko effect, including 23 of type RRc – the largest known population of the latter in any globular cluster. There are also at least three variables of type SX Phe and a semi-regular red giant.
Gallery
See also
List of Messier objects
Notes
References
External links
SEDS: Messier Object 53
Messier 53, Galactic Globular Clusters Database page
Messier 053
Messier 053
053
Messier 053
Astronomical objects discovered in 1775
Discoveries by Johann Elert Bode | Messier 53 | [
"Astronomy"
] | 495 | [
"Coma Berenices",
"Constellations"
] |
963,881 | https://en.wikipedia.org/wiki/Alarm%20management | Alarm management is the application of human factors and ergonomics along with instrumentation engineering and systems thinking to manage the design of an alarm system to increase its usability. Most often the major usability problem is that there are too many alarms annunciated in a plant upset, commonly referred to as alarm flood (similar to an interrupt storm), since it is so similar to a flood caused by excessive rainfall input with a basically fixed drainage output capacity. However, there can also be other problems with an alarm system such as poorly designed alarms, improperly set alarm points, ineffective annunciation, unclear alarm messages, etc. Poor alarm management is one of the leading causes of unplanned downtime, contributing to over $20B in lost production every year, and of major industrial incidents. Developing good alarm management practices is not a discrete activity, but more of a continuous process (i.e., it is more of a journey than a destination).
Alarm problem history
From their conception, large chemical, refining, power generation, and other processing plants required the use of a control system to keep the process operating successfully and producing products. Due to the fragility of the components as compared to the process, these control systems often required a control room to protect them from the elements and process conditions. In the early days of control rooms, they used what were referred to as "panel boards" which were loaded with control instruments and indicators. These were tied to sensors located in the process streams and on the outside of process equipment. The sensors relayed their information to the control instruments via analogue signals, such as a 4-20 mA current loop in the form of twisted pair wiring. At first these systems merely yielded information, and a well-trained operator was required to make adjustments either by changing flow rates, or altering energy inputs to keep the process within its designed limits.
Alarms were added to alert the operator to a condition that was about to exceed a design limit, or had already exceeded a design limit. Additionally, shutdown systems were employed to halt a process that was in danger of exceeding either safety, environmental or monetarily acceptable process limits. Alarms were indicated to the operator by annunciator horns and lights of different colours (for instance, green lights meant OK, yellow meant not OK, and red meant BAD). Panel boards were usually laid out in a manner that replicated the process flow in the plant. So instrumentation indicating operating units within the plant was grouped together for recognition's sake and ease of problem solution. It was a simple matter to look at the entire panel board and discern whether any section of the plant was running poorly. This was due to both the design of the instruments and the implementation of the alarms associated with the instruments. Instrumentation companies put a lot of effort into the design and individual layout of the instruments they manufactured. To do this they employed behavioural psychology practices which revealed how much information a human being could collect in a quick glance. More complex plants had more complex panel boards, and therefore often more human operators or controllers.
Thus, in the early days of panel board systems, alarms were regulated by both size and cost. In essence, they were limited by the amount of available board space and by the cost of running wiring and hooking up an annunciator (horn), an indicator (light), and switches to flip in order to acknowledge and clear a resolved alarm. It was often the case that if a new alarm was needed, an old one had to be given up.
As technology developed, the control system and control methods were tasked to continue to advance a higher degree of plant automation with each passing year. Highly complex material processing called for highly complex control methodologies. Also, global competition pushed manufacturing operations to increase production while using less energy and producing less waste. In the days of the panel boards, a special kind of engineer was required to understand a combination of the electronic equipment associated with process measurement and control, the control algorithms necessary to control the process (PID basics), and the actual process that was being used to make the products. Around the mid-1980s, the industry entered the digital revolution. Distributed control systems (DCS) were a boon to the industry. The engineer could now control the process without having to understand the equipment necessary to perform the control functions. Panel boards were no longer required, because all of the information that once came across analogue instruments could be digitised, fed into a computer and manipulated to achieve the same control actions once performed with amplifiers and potentiometers.
As a side effect, that also meant that alarms were easy and cheap to configure and deploy. One simply typed in a location and a value to alarm on, and set it to active. The unintended result was that soon people alarmed everything. Initial installers set an alarm at 80% and 20% of the operating range of any variable just as a habit. The integration of programmable logic controllers, safety instrumented systems, and packaged equipment controllers has been accompanied by an overwhelming increase in associated alarms. One other unfortunate part of the digital revolution was that what once covered several square yards of panel space now had to fit into a 17-inch computer monitor. Multiple pages of information were thus employed to replicate the information on the replaced panel board. Alarms were used to tell an operator to go look at a page he was not viewing. Alarms were used to tell an operator that a tank was filling. Every mistake made in operations usually resulted in a new alarm. With the implementation of the OSHA 1910 regulations, HAZOPS studies usually requested several new alarms. Alarms were everywhere. Incidents began to accrue as too much data collided with too little useful information.
Alarm management history
Recognizing that alarms were becoming a problem, industrial control system users banded together and formed the Alarm Management Task Force, which was a customer advisory board led by Honeywell in 1990. The AMTF included participants from chemical, petrochemical, and refining operations. They gathered and wrote a document on the issues associated with alarm management. This group quickly realised that alarm problems were simply a subset of a larger problem, and formed the Abnormal Situation Management Consortium (ASM is a registered trademark of Honeywell). The ASM Consortium developed a research proposal and was granted funding from the National Institute of Standards and Technology (NIST) in 1994. The focus of this work was addressing the complex human-system interaction and factors that influence successful performance for process operators. Automation solutions have often been developed without consideration of the human that needs to interact with the solution. In particular, alarms are intended to improve situation awareness for the control room operator, but a poorly configured alarm system does not achieve this goal.
The ASM Consortium has produced documents on best practices in alarm management, as well as operator situation awareness, operator effectiveness, and other operator-oriented issues. These documents were originally for ASM Consortium members only, but the ASMC has recently offered these documents publicly.
The ASM consortium also participated in development of an alarm management guideline published by the Engineering Equipment & Materials Users' Association (EEMUA) in the UK. The ASM Consortium provided data from their member companies, and contributed to the editing of the guideline. The result is EEMUA 191 "Alarm Systems- A Guide to Design, Management and Procurement".
Several institutions and societies are producing standards on alarm management to assist their members in the best practices use of alarms in industrial manufacturing systems. Among them are the ISA (ISA 18.2), API (API 1167) and NAMUR (Namur NA 102). Several companies also offer software packages to assist users in dealing with alarm management issues. Among them are DCS manufacturing companies, and third-party vendors who offer add-on systems.
Concepts
The fundamental purpose of alarm annunciation is to alert the operator to deviations from normal operating conditions, i.e. abnormal operating situations. The ultimate objective is to prevent, or at least minimise, physical and economic loss through operator intervention in response to the condition that was alarmed. For most digital control system users, losses can result from situations that threaten environmental safety, personnel safety, equipment integrity, economy of operation, and product quality control as well as plant throughput. A key factor in operator response effectiveness is the speed and accuracy with which the operator can identify the alarms that require immediate action.
By default, the assignment of alarm trip points and alarm priorities constitutes basic alarm management. Each individual alarm is designed to provide an alert when that process indication deviates from normal. The main problem with basic alarm management is that these features are static. The resultant alarm annunciation does not respond to changes in the mode of operation or the operating conditions.
When a major piece of process equipment like a charge pump, compressor, or fired heater shuts down, many alarms become unnecessary. These alarms are no longer independent exceptions from normal operation. They indicate, in that situation, secondary, non-critical effects and no longer provide the operator with important information. Similarly, during start-up or shutdown of a process unit, many alarms are not meaningful. This is often the case because the static alarm conditions conflict with the required operating criteria for start-up and shutdown.
In all cases of major equipment failure, start-ups, and shutdowns, the operator must search alarm annunciation displays and analyse which alarms are significant. This wastes valuable time when the operator needs to make important operating decisions and take swift action. If the resultant flood of alarms becomes too great for the operator to comprehend, then the basic alarm management system has failed as a system that allows the operator to respond quickly and accurately to the alarms that require immediate action. In such cases, the operator has virtually no chance to minimise, let alone prevent, a significant loss.
In short, one needs to extend the objectives of alarm management beyond the basic level. It is not sufficient to utilise multiple priority levels because priority itself is often dynamic. Likewise, alarm disabling based on unit association or suppressing audible annunciation based on priority do not provide dynamic, selective alarm annunciation. The solution must be an alarm management system that can dynamically filter the process alarms based on the current plant operation and conditions so that only the currently significant alarms are annunciated.
The fundamental purpose of dynamic alarm annunciation is to alert the operator to relevant abnormal operating situations. They include situations that have a necessary or possible operator response to ensure:
Personnel and Environmental Safety,
Equipment Integrity,
Product Quality Control.
The ultimate objectives are no different from the previous basic alarm annunciation management objectives. Dynamic alarm annunciation management focuses the operator's attention by eliminating extraneous alarms, providing better recognition of critical problems, and ensuring swifter, more accurate operator response.
The need for alarm management
Alarm management is usually necessary in a process manufacturing environment that is controlled by an operator using a supervisory control system, such as a DCS, a SCADA or a programmable logic controller (PLC). Such a system may have hundreds of individual alarms that until very recently have probably been designed with only limited consideration of other alarms in the system. Since humans can only do one thing at a time and can pay attention to a limited number of things at a time, there needs to be a way to ensure that alarms are presented at a rate that can be assimilated by a human operator, particularly when the plant is upset or in an unusual condition. Alarms also need to be capable of directing the operator's attention to the most important problem that he or she needs to act upon, using a priority to indicate degree of importance or rank, for instance. To ensure continuous production, seamless service and consistent quality at any time of day or night, there must be an organisation in which several teams of people handle occurring events, one after the other.
This is more commonly called on-call management. On-call management relies on a team of one or more persons (site manager, maintenance staff) or on an external organisation (guards, telesurveillance centre). To avoid dedicating a full-time person to monitoring a single process or level, the transmission of information and events is mandatory. This transmission enables the on-call staff to be more mobile and more efficient, and allows them to perform other tasks at the same time.
Some improvement methods
The techniques for achieving rate reduction range from the extremely simple ones of reducing nuisance and low value alarms to redesigning the alarm system in a holistic way that considers the relationships among individual alarms.
Design guide
This step involves documenting the methodology or philosophy of how to design alarms. It can include things such as what to alarm, standards for alarm annunciation and text messages, and how the operator will interact with the alarms.
Rationalization and Documentation
This phase is a detailed review of all alarms to document their design purpose and to ensure that they are selected and set properly and meet the design criteria. Ideally this stage will result in a reduction of alarms, but it does not always do so.
Advanced methods
The above steps will often still fail to prevent an alarm flood in an operational upset, so advanced methods such as alarm suppression under certain circumstances are then necessary. As an example, shutting down a pump will always cause a low flow alarm on the pump outlet flow, so the low flow alarm may be suppressed if the pump was shut down since it adds no value for the operator, because he or she already knows it was caused by the pump being shut down. This technique can of course get very complicated and requires considerable care in design. In the above case for instance, it can be argued that the low flow alarm does add value as it confirms to the operator that the pump has indeed stopped. Process boundaries (Boundary Management) must also be taken into account.
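As an illustration of the kind of logic involved, the following is a minimal sketch in Python of state-based alarm suppression. It is not taken from any particular DCS product; the tag and state names (such as P-101.STOPPED and FI-101.LOW) are hypothetical, and real systems add time delays, shelving, priorities and audit trails on top of this idea.

```python
# Minimal sketch of state-based alarm suppression (illustrative only).
from dataclasses import dataclass

@dataclass
class Alarm:
    tag: str        # e.g. "FI-101.LOW" = low flow on the pump outlet (hypothetical tag)
    priority: str   # e.g. "high", "medium", "low"
    message: str

# While a keyed plant condition is true, the listed alarms add no information
# for the operator and are withheld from annunciation.
SUPPRESSION_RULES = {
    "P-101.STOPPED": {"FI-101.LOW", "PI-102.LOW"},  # pump down: low flow/pressure expected
    "UNIT.SHUTDOWN": {"TI-201.LOW", "LI-305.LOW"},  # unit down: many low alarms expected
}

def annunciate(alarm, plant_state):
    """Return True if the alarm should be presented to the operator."""
    for condition in plant_state:
        if alarm.tag in SUPPRESSION_RULES.get(condition, set()):
            return False  # suppressed: already explained by the current plant state
    return True

if __name__ == "__main__":
    state = {"P-101.STOPPED"}
    for a in (Alarm("FI-101.LOW", "low", "Pump outlet flow low"),
              Alarm("TI-150.HIGH", "high", "Reactor temperature high")):
        print(a.tag, "->", "ANNUNCIATE" if annunciate(a, state) else "suppress")
```

In this sketch the low-flow alarm on the stopped pump is withheld while the unrelated high-temperature alarm still reaches the operator; whether a confirming indication of the pump stop should still be shown is exactly the design judgment discussed above.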
Alarm management becomes more and more necessary as the complexity and size of manufacturing systems increase. Much of the need for alarm management also arises because alarms can be configured on a DCS at nearly zero incremental cost, whereas in the past, on physical control panel systems that consisted of individual pneumatic or electronic analogue instruments, each alarm required expenditure and control panel area, so more thought usually went into the need for an alarm. Numerous disasters, such as Three Mile Island, the Chernobyl accident and Deepwater Horizon, have established a clear need for alarm management.
The seven steps to alarm management
Step 1: Create and adopt an alarm philosophy
A comprehensive design and guideline document is produced which defines a plant standard employing a best-practise alarm management methodology.
Step 2: Alarm performance benchmarking
Analyze the alarm system to determine its strengths and deficiencies, and effectively map out a practical solution to improve it.
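A minimal sketch of what such benchmarking can look like, assuming a time-stamped alarm log is available, is given below. The 10-minute interval and the flood limit of 10 alarms per interval are illustrative figures in the spirit of published guidance (for example EEMUA 191 and ISA-18.2 quote targets on the order of one alarm per ten minutes on average, with roughly ten or more alarms in ten minutes treated as a flood); consult the applicable standard for exact targets.

```python
# Minimal sketch of alarm-rate benchmarking from a time-stamped alarm log.
from collections import Counter
from datetime import datetime, timedelta

def alarm_rate_kpis(timestamps, interval=timedelta(minutes=10), flood_limit=10):
    """timestamps: sorted list of alarm datetimes.
    Returns (average alarms per interval, fraction of intervals in flood)."""
    if not timestamps:
        return 0.0, 0.0
    start = timestamps[0]
    n_intervals = int((timestamps[-1] - start) / interval) + 1
    buckets = Counter(int((t - start) / interval) for t in timestamps)
    avg_per_interval = len(timestamps) / n_intervals
    flood_fraction = sum(1 for c in buckets.values() if c > flood_limit) / n_intervals
    return avg_per_interval, flood_fraction

if __name__ == "__main__":
    t0 = datetime(2024, 1, 1)
    log = [t0 + timedelta(minutes=m) for m in (1, 2, 2, 3, 55, 300, 301)]
    print(alarm_rate_kpis(log))
```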
Step 3: “Bad actor” alarm resolution
From experience, it is known that around half of the entire alarm load usually comes from a relatively small number of alarms. The methods for making them work properly are documented, and can be applied with minimum effort and maximum performance improvement.
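The first step of a "bad actor" review is usually a simple frequency count, as in this small sketch; the tag names and log contents are hypothetical.

```python
# Minimal sketch of "bad actor" identification: count occurrences per alarm
# tag and report the few tags that dominate the load.
from collections import Counter

def bad_actors(alarm_tags, top_n=10):
    """Return (tag, count, share of total load) for the top_n most frequent tags."""
    counts = Counter(alarm_tags)
    total = sum(counts.values())
    return [(tag, n, n / total) for tag, n in counts.most_common(top_n)]

if __name__ == "__main__":
    log = ["FI-101.LOW"] * 40 + ["TI-150.HIGH"] * 25 + ["LI-305.LOW"] * 5 + ["PI-102.LOW"] * 2
    for tag, n, share in bad_actors(log, top_n=3):
        print(f"{tag}: {n} occurrences ({share:.0%} of load)")
```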
Step 4: Alarm documentation and rationalisation (D&R)
A full overhaul of the alarm system to ensure that each alarm complies with the alarm philosophy and the principles of good alarm management.
Step 5: Alarm system audit and enforcement
DCS alarm systems are notoriously easy to change and generally lack proper security. Methods are needed to ensure that the alarm system does not drift from its rationalised state.
Step 6: Real-time alarm management
More advanced alarm management techniques are often needed to ensure that the alarm system properly supports, rather than hinders, the operator in all operating scenarios. These include Alarm Shelving, State-Based Alarming, and Alarm Flood Suppression technologies.
Step 7: Control and maintain alarm system performance
Proper management of change and longer-term analysis and KPI monitoring are needed to ensure that the gains achieved from performing the steps above do not dwindle away over time. Otherwise they will; the principle of "entropy" definitely applies to an alarm system.
See also
List of human-computer interaction topics, since most control systems are computer-based
Design, especially interaction design
Detection theory
Physical security
Annunciator panel
Alarm fatigue
Fault management
Notes
References
EPRI (2005) Advanced Control Room Alarm System: Requirements and Implementation Guidance. Palo Alto, CA. EPRI report 1010076.
EEMUA 191 Alarm Systems - A Guide to Design, Management and Procurement - Edition 3 (2013)
PAS - The Alarm Management Handbook - Second Edition (2010)
ASM Consortium (2009) - Effective Alarm Management Practices
ANSI/ISA–18.2–2009 - Management of Alarm Systems for the Process Industries
IEC 62682 Management of alarms systems for the process industries
Ako-Tec AG - Description of a modern Alarm Management System
Alarm Management and ISA-18 A Journey Not a Destination
RFC8632 A YANG Data Model for Alarm Management
External links
"Principles for alarm system design" YA-711 Norwegian Petroleum Directorate
Alarms
Safety
Security
Process safety
Production and manufacturing | Alarm management | [
"Chemistry",
"Technology",
"Engineering"
] | 3,459 | [
"Warning systems",
"Safety engineering",
"Alarms",
"Process safety",
"Chemical process engineering"
] |
963,914 | https://en.wikipedia.org/wiki/Braun%27s%20lipoprotein | Braun's lipoprotein (BLP, Lpp, murein lipoprotein, or major outer membrane lipoprotein), found in some gram-negative cell walls, is one of the most abundant membrane proteins; its molecular weight is about 7.2 kDa. It is bound at its C-terminal end (a lysine) by a covalent bond to the peptidoglycan layer (specifically to diaminopimelic acid molecules) and is embedded in the outer membrane by its hydrophobic head (a cysteine with lipids attached). BLP tightly links the two layers and provides structural integrity to the outer membrane.
Characteristics
The gene encoding Braun's lipoprotein initially produces a protein composed of 78 amino acids, which includes a 20 amino acid signal peptide at the amino terminus. The mature protein is 6 kDa in size. Three monomers of Lpp assemble into a leucine zipper coiled-coil trimer.
Braun's lipoprotein is present in larger amounts than any other protein in E. coli. Unlike other lipoproteins, it is linked covalently to the peptidoglycan, and it connects the outer membrane to the peptidoglycan. Lpp is anchored to the outer membrane by its amino-terminal lipid group. In E. coli, one third of Lpp proteins form a peptide bond, via the side chain of their carboxy-terminal lysine, with diaminopimelic acid in the peptidoglycan layer. The rest of the Lpp molecules are present in a "free" form unlinked to peptidoglycan. The free form is exposed on the surface of E. coli.
Functions
Lpp, along with another OmpA-like lipoprotein called Pal/OprL (), maintains the stability of the cell envelope by attaching the outer membrane to the cell wall.
Lpp has been proposed as a virulence factor of Yersinia pestis, the cause of plague. Y. pestis needs lpp for maximum survival in macrophages and to efficiently kill mouse models of bubonic and pneumonic plague.
Immunology
Braun's lipoprotein binds to the pattern recognition receptor TLR2. Lpp induces adhesion of neutrophils to human endothelial cells by activating the latter.
References
Lipoproteins
Peripheral membrane proteins | Braun's lipoprotein | [
"Chemistry"
] | 511 | [
"Lipid biochemistry",
"Lipoproteins"
] |
963,963 | https://en.wikipedia.org/wiki/Israeli%20Combat%20Engineering%20Corps | The Israeli Combat Engineering Corps (, Heil HaHandasa HaKravit) is part of the Israel Defense Forces with responsibility for mobility assurance, road breaching, defense and fortifications, counter-mobility of enemy forces, construction and destruction under fire, sabotage, explosives, bomb disposal, counter-weapons of mass destruction (NBC) and special engineering missions.
The Combat Engineering Corps beret's color is silver and its symbol features a sword on a defensive tower with an explosion halo on the background. The Combat Engineering Corps mottos are "Always First" (ראשונים תמיד Rishonim Tamid) and the unofficial "The hard, we shall do today; the impossible, we shall do tomorrow".
In addition to Combat Engineering Corps sappers, each infantry brigade has an engineering company trained with basic engineering and explosive ordnance disposal (EOD) skills (called פלח"הן). Combat Engineering Corps sappers and heavy equipment operators are often attached to other units (such as armored or infantry brigades) in order to help them breach through obstacles and handle explosive threats.
Roles
Besides extensive training in basic combat engineering, combat engineers receive specialized training in their respective professions. These are:
Sapper: trained with all the basic engineering skills and also trained at high infantry level (רובאי 07). Their main role is to breach through terrain obstacles (natural and artificial), breach through minefields and enable ground forces to advance in the battlefield. They are trained to supply close combat support for both armored fighting vehicles and infantry. Some of them are trained in driving the Combat Engineering Corps standard CEV: the IDF Puma. Their professional ranks after advanced training are Rifleman 07 (רובאי 07) and Sapper 06 (פלס 06).
Engineering Vehicles Operator (EVO): less combatant but nonetheless important, these soldiers are skilled in the operation of heavy mechanical equipment and engineering vehicles such as heavy bulldozers, excavators, cranes, tractors and mine-breaching devices. EVO units are called צמ"ה (Tzama) in Hebrew, acronym of Tziyud Mechani Handasi (Mechanical Engineering Equipment). Their professional ranks are Rifleman 05 (רובאי 05) and EVO Operator 07 (מפעיל צמ"ה 07).
Bulldozer Operators: belonging to the EVO, these soldiers operate the IDF Caterpillar D9 armored bulldozers, including under heavy fire. Their roles are versatile and differ according to the units to which they are attached. The D9 operators perform construction, destruction, breaching and EOD missions while assisting tanks, infantry and even special forces during battle.
NBC Disposal: called "purifiers", they are experts in handling nuclear, biological and chemical threats.
EOD experts: the EOD are experts in bomb disposal and in the controlled detonation of explosives. Among their equipment are the Barrett M82A1 and remote-control EOD robots with shotguns and mechanical arms. The EOD are the military equivalent of the police's bomb squad. In the IDF, they are a part of the elite Engineering unit Yahalom.
Demolition experts: they are specially trained in blowing up targets in the most accurate and effective way. They demolish targets ranging from cellular phones and door locks up to tanks and large buildings. In the IDF, the demolition experts are united in Sayeret Yael of Yahalom (Sayeret is the Hebrew term for an elite SF unit) and therefore receive high-level infantry training as well.
Fortification experts: assigned to designing and overseeing the construction of bases, outposts, bridges and fortifications. Construction itself is usually done by the EVOs.
Counter-Tunnels experts: established in 2003 by the late Captain Aviv Hakani, these Combat Engineering Corps soldiers are experts in finding smuggling tunnels and weapon caches, and demolishing them. They operated in Rafah during the al-Aqsa Intifada and received a recommendation of honor for their activity. After the 2004 APC incident the Rafah tunnel team was united with the Combat Engineering Corps elite unit Yahalom and was renamed Sayeret Samur ("Samur" means "Weasel" in Hebrew).
Units
Active engineering units
Assigned to commands, divisions, and brigades:
Northern Command Engineering Unit 801
Central Command Engineering Unit 802
Southern Command Engineering Unit 803
601st Combat Engineering Battalion "Asaf" (Assigned to the 401st Armored Brigade "I'kvot haBarzel")
603rd Combat Engineering Battalion "Lahav" (Assigned to the 7th Armored Brigade "Saar me-Golan")
605th Combat Engineering Battalion "HaMahatz" (Assigned to the 188th Armored Brigade "Barak")
614th Combat Engineering Battalion (Assigned to the 460th Armored Brigade "Bnei Or")
Combat Engineering Company "Galilee Cats" (Assigned to the 91st Division)
Combat Engineering Company "Steel Cats" (Assigned to the 143rd Division)
Combat Engineering Company "Steel Knights" (Assigned to the 143rd Division)
Combat Engineering Company "Plateau Cats" (Assigned to the 210th Division)
Combat Engineering Company "Wild Cats" (Assigned to the 877th Division)
Assigned to other commands:
Yahalom – special operations engineering unit
Sayeret Yael – commando reconnaissance unit
SAP – EOD and bomb disposal unit
SAMUR – counter-tunneling unit
Hevzek – Robotics unit
76th CBRN defense Battalion (Assigned to Home Front Command)
Military Engineering School (BAHALATZ 14)
YANSHUF – CBRN Defense Training Center
Reserve engineering units
271st Combat Engineering Battalion (Assigned to the 14th Armored Brigade "Machatz")
710th Combat Engineering Battalion (Assigned to the 179th Armored Brigade "Re'em")
749th Combat Engineering Battalion (Assigned to the 828th Infantry Brigade)
924th Combat Engineering Battalion (Assigned to the 10th Armored Brigade "Harel")
5280th Combat Engineering Battalion (Assigned to the 3rd Infantry Brigade "Alexandroni")
7071st Combat Engineering Battalion (Assigned to the 4th Armored Brigade "Kiryati")
7086th Combat Engineering Battalion "Alon" (Assigned to the 1st Infantry Brigade "Golani")
7107th Combat Engineering Battalion "Raz" (Assigned to the 933rd Infantry Brigade "Nahal")
8170th Combat Engineering Battalion (Assigned to the 84th Infantry Brigade "Givati")
8173rd Combat Engineering Battalion (Assigned to the 6th Infantry Brigade "Etzioni")
8219th Combat Engineering Battalion (Assigned to the 551st Paratroopers Brigade "Hetzei HaEsch")
9227th Combat Engineering Battalion (Assigned to the 679th Armored Brigade "Yiftach")
Equipment
Personal gear
The Israeli combat engineers and sappers are combat soldiers and therefore carry personal gear and weapons like infantry soldiers. Their issued rifles are the M-16A1 (short 13/14-inch barrel) and the M4 carbine. Other weapons include hand grenades, the M203 grenade launcher, IMI Negev, FN MAG and M2 Browning machine guns, and M24 SWS and Barrett M82A1 sniper rifles.
Vehicles
The combat engineering soldiers are mobilized by APCs and armored 4×4 vehicles.
The armoured personnel carriers include the Centurion tank-based IDF Puma, a heavy combat engineering vehicle equipped with engineering devices such as mine plows. Reserve forces use the old and versatile M113 APC. In 2016 the 603rd Combat Engineering Battalion ("Lahav") started to receive the IDF Namer combat engineering vehicle, based on the Namer APC.
Wheeled armored vehicles include the HMMWV ("Hummer"), Wolf Armoured Vehicle and M240 Sufa.
Heavy equipment
The Combat Engineering Corps operates heavy equipment and engineering vehicles (called TZAMA in Hebrew) such as armored bulldozers, armored excavators, armored wheeled loaders, armored backhoe loaders and more. The best known tool is the heavily armored IDF Caterpillar D9 bulldozer.
Mine breaching devices
The Combat Engineering Corps has various means of breaching quickly through minefields. These include personal sapper gear, vehicle-mounted mine plows and mine rollers which can be attached to engineering vehicles and tanks, CARPET air-fuel rockets and the "Tzefa Shiryon" (Hebrew for "Armor's Viper"), which is extremely powerful and can clear large minefields.
Explosives
The Combat Engineering Corps has a wide range of explosives, demolition charges and different land mines.
Robots
Yahalom SF Unit operates many types of robots, including bomb disposal robots, reconnaissance robots and remote-controlled heavy equipment (such as "Raam HaShachar" D9N bulldozer, and the "Front-Runner" mini-cat loader).
NBC
Counter-NBC soldiers are equipped with protective suits and gas masks, chemical ID systems and purification vehicles.
Gallery
History
Founding
The Combat Engineering Corps has a record of professional achievement and decoration. Its best known operation is the bridging of the Suez Canal during the Yom Kippur War.
The corps was formed from the sabotage unit of the Palmach and the tractor operator units of the 1947–1949 Palestine war. In its early years, the Combat Engineering Corps drew its soldiers mainly from Jews who had served in the United Kingdom's Royal Engineers.
ICEC chief engineer, Brigadier General David Leskov (not to be confused with the Chief Engineering Officer, קצין הנדסה ראשי, the commander of the ICEC), developed many combat engineering systems for the Israel Defense Forces and won three Israel Security Prizes. He served in the IDF until his death at the age of 86, reportedly making him the oldest serving soldier in the world.
In Israel's wars
In the 1947–1949 Palestine war, the Combat Engineering Corps blasted bridges over the Jordan River and the streams of the southern Coastal plain in order to stop the advance of the Arab armored forces into the Israeli civilian rear. The Combat Engineering Corps also helped in breaching the "Burma Road" into besieged Jerusalem.
In the 1956 Sinai war, the Combat Engineering Corps destroyed Egyptian military infrastructure in the Sinai Peninsula and was awarded with a battalion recommendation of honor.
In the 1967 Six-Day War the Combat Engineering Corps stormed Jordanian fortifications along the walls of the Old City of Jerusalem. After Israel annexed the Old City, the Combat Engineering Corps removed landmines planted in the city by the Jordanians. This was the first war in which Caterpillar D9 bulldozers were employed by the corps.
After the war, the Combat Engineering Corps helped to build a fortification line of defense along the Suez Canal and were awarded the Israel Security Prize in 1969. The Israeli Engineering Corps were the first corps to win the award.
In the 1973 Yom Kippur War the combat engineering battalions attached to Ariel Sharon's armored division bridged the Suez Canal during "Operation Knights of Heart", while carrying tanks and paratroopers across the canal with Gillois amphibious tank-carriers. This effort enabled Sharon and Avraham "Bren" Adan's armored divisions to cross the canal and surround the 3rd Egyptian Army, forcing it to surrender. The bridging of the canal is regarded by many as the turning point of the war on the southern front. On the northern front, a Combat Engineering Corps Caterpillar D9 bulldozer was the first ever motorized vehicle to reach the summit of the Hermon.
In Operation Peace for Galilee the Combat Engineering Corps worked intensively to open routes for Israeli forces. Their duties also included the disarming of landmines and improvised explosive devices as well as building fortifications and outposts.
In the 1991 Gulf War, the NBC purifiers of the Combat Engineering Corps were on a "code red" alert for disarming Iraqi Scud missiles, armed with non-conventional warheads.
The October 2000 Lebanon abduction
On 7 October 2000 three Israeli combat engineering soldiers were abducted by Hezbollah from the Shebaa Farms, in the Golan Heights. The soldiers, Beni Avraham, Adi Avitan and Omar Sawaed, suffered fatal injuries during their abduction. Their bodies were retrieved in 2004 in a prisoner swap with Hezbollah.
A series of accusations were made against the United Nations Interim Force in Lebanon (UNIFIL) by press and partisan web sites for having cooperated with the abduction. Those accusations stem from a video, whose existence was originally denied by UN officials, recorded by Indian peacekeepers one day after the abduction. The video, which the UN agreed to provide to Israeli officials in June 2001 with civilian faces blurred, showed abandoned vehicles with fake UN license plates and uniforms, and Hezbollah supporters intercepting UN efforts to retrieve the vehicles. A UN investigation found no evidence to support accusations of peacekeepers involvement in the abduction. Although the bereaved families met with Kofi Annan, they refused to accept the UN version. In September 2004, the bereaved families announced their intention to sue the UN, Hezbollah, Iran, Syria and Lebanon for their parts in the abduction.
The Second Intifada
For further discussions see: al-Aqsa Intifada, IDF Caterpillar D9, Operation Defensive Shield, Battle of Jenin 2002, Operation Rainbow.
During the al-Aqsa Intifada, which erupted in September 2000, the Combat Engineering Corps were employed to disarm many Palestinian improvised explosive devices and booby traps. In many cases, Combat Engineering EOD operators, together with Israeli Police bomb disposal operators, also detonated explosive belts captured on Palestinian suicide bombers. The Combat Engineering Corps also dynamited Palestinian houses, bomb labs and smuggling tunnels.
However, the Combat Engineering Corps became best known for operating the IDF Caterpillar D9 armored bulldozers. For Palestinians, the bulldozers became a nightmare, as they demolished many Palestinian buildings and shrubbery and were almost impervious to Palestinian attacks. The Combat Engineering Corps bulldozer operators' unit received a recommendation of honor for its activity in Jenin during Operation Defensive Shield.
Armored bulldozers were also employed extensively in Rafah to counter terrorist smuggling tunnels. Human Rights Watch published a report criticizing the extensive destruction of Palestinian houses in the southern Gaza Strip as unlawful, claiming that Israel used the Palestinian smuggling tunnels as a pretext to create a "buffer zone" along the Gaza–Egypt border. In Rafah, the Combat Engineering Corps formed a special unit designated for searching out and destroying smuggling tunnels; it is called SAMUR and now belongs to Yahalom. It too received a recommendation of honor for its conduct. Until the Gaza disengagement plan, the 603rd Combat Engineering Battalion's reconnaissance platoon (מחס"ר) held a record of over 70 terrorists killed in 2004–2005 on the border between the Gaza Strip and Israel, and received a recommendation of honor for this achievement.
Second Lebanon War
The Combat Engineering Corps took a significant part in the Second Lebanon War, which erupted in 2006 after Hizbullah attacked an IDF patrol, abducted two soldiers and killed another eight with anti-tank missiles and improvised explosive devices that hit the rescuers.
On 16 July, combat engineering forces from the Asaf battalion were the first to enter Lebanon. Their mission was to clear improvised explosive devices, open safe routes for ground forces and demolish Hizbullah infrastructure. Yahalom bomb disposal experts and IDF Caterpillar D9 bulldozers cleared most of Hizbullah's IEDs. During the war, a D9 drove over a 500 kg belly-charge improvised explosive device but survived without significant damage.
During the war, combat engineers used bulldozers and explosives to destroy Hezbollah outposts, bunkers, warehouses and headquarters, mainly along the border. The work intensified as the war neared its end, and the border area was indeed cleared in time.
Combat engineers also rescued damaged tanks, often under fire.
Two combat engineers were awarded the Medal of Distinguished Service and two others received a recommendation of honor from the Chief of the General Staff. Many others received recommendations of honor from less senior commanders.
Operation Cast Lead
During the Gaza War (2008–2009), codenamed "Operation Cast Lead" by the IDF, combat engineering forces were the first to enter the Gaza Strip, clearing IEDs and booby traps and opening safe routes for armor and infantry.
Many booby traps, rigged structures and tunnels were present in the Gaza Strip as part of Hamas' efforts to prepare for the war. These were often concealed in civilian structures, and were even found in schools and mosques. However, most of the Palestinian booby traps were successfully countered by the IDF Combat Engineering Corps: bomb disposal experts (part of the Yahalom special engineering unit) dismantled the bombs, while armored D9 bulldozers detonated bombs and booby traps while sustaining no damage from the explosions. IDF Caterpillar D9R and unmanned "Raam HaShachar" D9N armored bulldozers that opened routes in dangerous areas took many hits from improvised explosive devices, landmines, explosive charges and RPGs, but no crewmen were killed. However, a Yahalom bomb disposal expert was killed after entering a house and encountering a suicide bomber. He was the only fatality of the Combat Engineering Corps during the war.
Besides neutralizing Hamas IEDs and traps, combat engineering forces demolished Hamas infrastructure and other structures used as outposts, shooting positions, traps, cover for tunnels, headquarters and warehouses. The head officer of the Combat Engineering Corps (קהנ"ר) estimated that about 600 buildings were bulldozed or blown up by his troops.
The Combat Engineering Corps' success heightened its reputation within the IDF and with the Israeli public. This was manifested in an increased number of conscripts who chose the Combat Engineering Corps as their first priority in the draft preference questionnaire ("Manila", מנילה), a form in which the conscript indicates in which unit he would like to serve; the IDF tries to fulfill the request as far as possible.
Operation Protective Edge
During Operation Protective Edge (July–August 2014) Combat Engineers played a major role in destroying Hamas' cross-border underground infiltration tunnels. The tunnels were exposed and cleared by armored bulldozers and excavators, and then detonated by Yahalom's Samoor unit. In total, about 32 tunnels were destroyed. In addition, combat engineers participated in the battles, neutralized Hamas-planted improvised explosive devices, cleared booby-traps, opened routes for armor and infantry, and destroyed terrorist infrastructure. Six combat engineers were killed during the battles in the Gaza Strip.
Operation Northern Shield
In late 2018 the IDF commenced Operation Northern Shield to detect, locate and destroy Hezbollah tunnels dug into northern Israel from southern Lebanon. The Combat Engineering Corps played the major role in the operation, operating above and below ground and deploying Yahalom tunnel-warfare teams and heavy equipment. As of 12 January 2019, the IDF had discovered six tunnels.
2023 Israel–Hamas war
References
External links
Combat Engineering Corps, IDF official website (English)
Official website by "Palas" – Combat Engineers Association (Hebrew)
Combat Engineering page – IDF's Ground Command website (Hebrew)
Combat Engineering Corps – Israel Defense Forces YouTube channel, 2011
Captain Aviv Hakani
Military engineer corps
Military units and formations established in 1947 | Israeli Combat Engineering Corps | [
"Engineering"
] | 3,980 | [
"Engineering units and formations",
"Military engineer corps"
] |
963,970 | https://en.wikipedia.org/wiki/Messier%2055 | Messier 55 (also known as M55, NGC 6809, or Specter Cluster) is a globular cluster in the south of the constellation Sagittarius. It was discovered by Nicolas Louis de Lacaille in 1752 while observing from what today is South Africa. Starting in 1754, Charles Messier made several attempts to find this object from Paris but its low declination meant from there it rises daily very little above the horizon, hampering observation. He observed and catalogued it in 1778. The cluster can be seen with 50 mm binoculars; resolving individual stars needs a medium-sized telescope.
It is about 17,600 light-years away from Earth. It contains about 269,000 solar masses. As with other Milky Way globular clusters, it has few elements other than hydrogen and helium compared to the Sun, so Messier 55 has "low metallicity". This quantity is normally expressed as the base-10 logarithm of the abundance relative to the Sun; for NGC 6809 the metallicity is [Fe/H] = −1.94 dex, where a value of −2 would mean 100 times less iron than in the Sun. The cluster therefore has about 1.1% of the Sun's proportion of iron relative to hydrogen and helium.
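The 1.1% figure follows directly from the dex value; a minimal Python sketch (illustrative only, using the numbers quoted above):

    fe_h = -1.94              # [Fe/H] in dex, the value quoted above for NGC 6809
    ratio = 10 ** fe_h        # linear iron abundance relative to the Sun
    print(round(ratio, 4))    # 0.0115, i.e. about 1.1% of the solar proportion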
Only about 55 variable stars have been found in the central part of M55.
Gallery
See also
List of Messier objects
References and footnotes
External links
Messier 55, SEDS Messier pages
Messier 55, Galactic Globular Clusters Database page
Carina–Sagittarius Arm
Messier 055
Messier 055
055
Messier 055
17520616 | Messier 55 | [
"Astronomy"
] | 342 | [
"Sagittarius (constellation)",
"Constellations"
] |
963,988 | https://en.wikipedia.org/wiki/Messier%2056 | Messier 56 (also known as M56 or NGC 6779) is a globular cluster in the constellation Lyra. It was discovered by Charles Messier in 1779. It is angularly found about midway between Albireo (Beta (β) Cygni) and Sulafat (Gamma (γ) Lyrae). In a good night sky it is tricky to find with large (50–80 mm) binoculars, appearing as a slightly fuzzy star. The cluster can be resolved using a telescope with an aperture of or larger.
M56 is about 32,900 light-years away from Earth and measures roughly 84 light-years across, containing 230,000 solar masses. It is about from the Galactic Center and above the galactic plane. This cluster has an estimated age of 13.70 billion years and is following a retrograde orbit through the Milky Way. The properties of this cluster suggest that it may have been acquired during the merger of a dwarf galaxy, of which Omega Centauri forms the surviving nucleus. For Messier 56, the abundance of elements other than hydrogen and helium, what astronomers term the metallicity, has a very low value of [Fe/H] = –2.00 dex, which is about 1% of the abundance in the Sun.
The brightest stars in M56 are of 13th magnitude, while it contains only about a dozen known variable stars, such as V6 (RV Tauri star; period: 90 days) or V1 (Cepheid: 1.510 days); other variable stars are V2 (irregular) and V3 (semiregular). In 2000, a diffuse X-ray emission was tentatively identified coming from the vicinity of the cluster. This is most likely interstellar medium that has been heated by the passage of the cluster through the galactic halo. The relative velocity of the cluster is about 177 km/s, which is sufficient to heat the medium in its wake to a temperature of 940,000 K.
M56 is part of the Gaia Sausage, the hypothesised remains of a merged dwarf galaxy.
Gallery
See also
List of Messier objects
References and footnotes
External links
Messier 56, SEDS Messier pages
Messier 56, Galactic Globular Clusters Database page
Hubble snaps a collection of ancient stars, August 26, 2012, TG Daily
Messier 056
Messier 056
056
Messier 056
Gaia-Enceladus
Astronomical objects discovered in 1779
Discoveries by Charles Messier | Messier 56 | [
"Astronomy"
] | 508 | [
"Lyra",
"Constellations"
] |
964,004 | https://en.wikipedia.org/wiki/Messier%2058 | Messier 58 (also known as M58 and NGC 4579) is an intermediate barred spiral galaxy with a weak inner ring structure located within the constellation Virgo, approximately 68 million light-years away from Earth. It was discovered by Charles Messier on April 15, 1779 and is one of four barred spiral galaxies that appear in Messier's catalogue. M58 is one of the brightest galaxies in the Virgo Cluster. From 1779 it was arguably (though unknown at that time) the farthest known astronomical object until the release of the New General Catalogue in the 1880s and even more so the publishing of redshift values in the 1920s.
Early observations
Charles Messier discovered Messier 58, along with the elliptical galaxies Messier 59 and Messier 60, on April 15, 1779. M58 was reported on the chart of the Comet of 1779 as it was almost on the same parallel as the star Epsilon Virginis. Messier described M58 as a very faint nebula in Virgo which would disappear in the slightest amount of light he used to illuminate the micrometer wires. This description was later contradicted by John Herschel's observations in 1833 where he described it as a very bright galaxy, especially towards the middle. Herschel's observations were also similar to the descriptions of both John Dreyer and William Henry Smyth who said that M58 was a bright galaxy, mottled, irregularly round and very much brighter toward the middle.
Characteristics
Like many other spiral galaxies of the Virgo Cluster (e.g. Messier 90), Messier 58 is an anemic galaxy with low star formation activity concentrated within the galaxy's optical disk, and relatively little neutral hydrogen, also located inside its disk, concentrated in clumps, compared with other galaxies of similar morphological type. This deficiency of gas is believed to be caused by interactions with Virgo's intracluster medium.
Messier 58 has a low-luminosity active galactic nucleus, where a starburst may be present as well as a supermassive black hole with a mass of around 70 million solar masses. It is also one of the very few galaxies known to possess a UCNR (ultra-compact nuclear ring), a series of star-forming regions located in a very small ring around the center of the galaxy. This led to its being dubbed the "ring bearer galaxy" by the popular astronomy YouTube program "Deep Sky videos".
Supernovae
Two supernovae have been observed in the M58 galaxy:
SN 1988A (type II, mag. 13.5) was discovered by Kaoru Ikeya, Robert Evans, Christian Pollas and Shingo Horiguchi on January 18, 1988. It was found 40 arcseconds south of the galaxy center.
SN 1989M (Type Ia, mag. 12.2) was discovered by Givi N. Kimeridze on 28 June 1989. It was found 33 arcseconds north and 44 arcseconds west of M58's nucleus.
See also
List of Messier objects
Messier 91
Messier 95
Messier 109
M100
NGC 4536
Notes
References
External links
SEDS Messier: M58
Spitzer Space Telescope page on Messier 58
Intermediate spiral galaxies
Messier 058
Messier 058
058
Messier 058
07796
42168
Astronomical objects discovered in 1779
Discoveries by Charles Messier | Messier 58 | [
"Astronomy"
] | 698 | [
"Virgo (constellation)",
"Constellations"
] |
964,023 | https://en.wikipedia.org/wiki/Messier%2059 | Messier 59 or M59, also known as NGC 4621, is an elliptical galaxy in the equatorial constellation of Virgo. It is a member of the Virgo Cluster, with the nearest fellow member away and around 5 magnitudes fainter. The nearest cluster member of comparable brightness is the lenticular galaxy NGC 4638, which is around away. It and the angularly nearby elliptical galaxy Messier 60 were both discovered by Johann Gottfried Koehler in April 1779 when observing comet seeming close by. Charles Messier listed both in the Messier Catalogue about three days after Koehler's discovery.
This is an elliptical galaxy of type E5 with a position angle of 163.3°; the E5 classification indicates an overall flattening of 50%. However, isophotes for this galaxy deviate from perfect ellipses, showing pointed shapes instead. These can be decomposed mathematically into a three-component model, with each part having a different eccentricity. The main elliptical component appears to be superimposed upon a flatter, disk-like feature, with the entirety embedded within a circular halo. The luminosity contribution of the components is 62% for the pure elliptical part, 22% for the halo, and the remainder comes from the disk. The light ratio of the disk to the main elliptical body is 0.25, whereas it is typically closer to 0.5 in a lenticular galaxy.
The core contains a supermassive black hole (SMBH) with a mass estimated at 270 million times that of the Sun; the core counter-rotates with respect to the rest of the galaxy and is bluer. The SMBH is quiescent, but is detectable as an X-ray and radio source that indicates an outflow. The nucleus contains an embedded stellar disk that is bluer (younger) than the bulge region, with a blue component stretching along a position angle of around 150°. This extended disk feature may be the result of a galactic merger followed by a starburst event.
Messier 59 is very rich in globular clusters, with a population of them that has been estimated to be around 2,200. It has two satellites, the ultra compact dwarf galaxy M59-UCD3 and M59cO, which is a rare example of a galaxy in between compact ellipticals such as Messier 32 and ultra compact dwarfs.
Supernova
One supernova has been recorded in M59: SN 1939B (type Ia, mag. 15) was discovered by Fritz Zwicky on 19 May 1939. It reached a peak magnitude of 11.9. The region where this supernova occurred shows no trace of star formation, which suggests this was a type Ia supernova.
See also
List of Messier objects
References
External links
Elliptical Galaxy M59 @ SEDS Messier pages
Messier 059
Messier 059
Messier 059
059
Messier 059
07858
042628
Astronomical objects discovered in 1779
+02-32-183 | Messier 59 | [
"Astronomy"
] | 618 | [
"Virgo (constellation)",
"Constellations"
] |
964,071 | https://en.wikipedia.org/wiki/Messier%2060 | Messier 60 or M60, also known as NGC 4649, is an elliptical galaxy approximately 57 million light-years away in the equatorial constellation of Virgo. Together with NGC 4647, it forms a pair known as Arp 116. Messier 60 and nearby elliptical galaxy Messier 59 were discovered by Johann Gottfried Koehler in April 1779, observing a comet in the same part of the sky. Charles Messier added both to his catalogue about three days after this.
Characteristics
This is an elliptical galaxy of type E (E1.5), although some sources class it as S0 – a lenticular galaxy. An E2 class indicates a flattening of 20%, which has a nearly round appearance. The isophotes of the galaxy are boxy in shape, rather than simple ellipses. The mass-to-light ratio is a near-constant 9.5 in the V (visual) band of the UBV system. The galaxy has an effective radius of (translating, at its distance, to about 10 kpc), with an estimated mass of ~10¹² within a threefold volume, of which nearly half is dark matter. The mass estimated from X-ray emission is within 5 effective radii.
Supermassive black hole
At the center of M60 is a supermassive black hole (SMBH) of billion solar masses, one of the largest ever found. It is currently inactive. X-ray emission from the galaxy shows a cavity created by jets emitted by the hole during past active periods, which correspond to weak radio lobes. The power needed to generate these features is in the range (ergs per second).
Supernovae
In 2004, supernova SN 2004W was observed in Messier 60. It was a type Ia supernova found west and south of the nucleus.
Environment
M60 is the third-brightest giant elliptical galaxy of the Virgo cluster of galaxies, and is the dominant member of a subcluster of four galaxies, the M60 group, which is the closest-known isolated compact group of galaxies. It has several satellite galaxies, one of them being the ultracompact dwarf galaxy M60-UCD1, discovered in 2013. The motion of M60 through the intercluster medium is resulting in ram-pressure stripping of gas from the galaxy's outer halo, beyond a radius of 12 kpc.
NGC 4647 appears approximately 2.5 from Messier 60; the optical disks of the two galaxies overlap. Although this overlap suggests that the galaxies are interacting, photographic images of the two galaxies do not reveal any evidence for gravitational interactions between the two galaxies as would be suggested if the two galaxies were physically close to each other. This suggests that the galaxies are at different distances and are only weakly interacting if at all. However, studies with the Hubble Space Telescope show indications that a tidal interaction may have just begun.
Recession speed and distance estimations
Messier 60 was the fastest-moving galaxy included in Edwin Hubble's landmark 1929 paper concerning the relationship between recession speed and distance. He used a value of 1090 km/s for the recession speed, 1.8% less than the more recent value of about 1110 km/s (based on a redshift of 0.003726). But he estimated the distance of this galaxy as well as of the three nebulas of the Virgo Cluster which he included (Messier 85, 49, and 87), to be only two million parsecs, rather than the accepted value today of around 16 million parsecs. These errors in distance led him to propose a Hubble constant of 500 km/s/Mpc, whereas the present estimate is around 70 km/s/Mpc.
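As a rough, illustrative check (an addition, not part of the article), the Hubble-law arithmetic v = H0 × d can be run with the figures quoted above:

    v_1929, d_1929 = 1090.0, 2.0    # km/s and Mpc: Hubble's adopted values for M60
    v_now, d_now = 1110.0, 16.0     # km/s and Mpc: more recent values
    print(v_1929 / d_1929)          # ~545 km/s/Mpc implied by this one galaxy (his full sample gave ~500)
    print(v_now / d_now)            # ~69 km/s/Mpc, close to the modern estimate of about 70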
Gallery
See also
List of Messier objects
NGC 7318
References
External links
StarDate: M60 Fact Sheet
Messier 060
Messier 060
Messier 060
Messier 060
Messier 060
060
Messier 060
07898
42831
116
Astronomical objects discovered in 1779 | Messier 60 | [
"Astronomy"
] | 829 | [
"Virgo (constellation)",
"Constellations"
] |
964,083 | https://en.wikipedia.org/wiki/Messier%2062 | Messier 62 or M62, also known as NGC 6266 or the Flickering Globular Cluster, is a globular cluster of stars in the south of the equatorial constellation of Ophiuchus. It was discovered in 1771 by Charles Messier, then added to his catalogue eight years later.
M62 is about from Earth and from the Galactic Center. It is among the ten most massive and luminous globular clusters in the Milky Way, showing an integrated absolute magnitude of −9.18. It has an estimated mass of and a mass-to-light ratio of in the core visible light band, the V band. It has a projected ellipticity of 0.01, meaning it is essentially spherical. The density profile of its member stars suggests it has not yet undergone core collapse. It has a core radius of , a half-mass radius of , and a half-light radius of . The stellar density at the core is per cubic parsec. It has a tidal radius of .
The cluster shows at least two distinct populations of stars, which most likely represent two separate episodes of star formation. Of the main sequence stars in the cluster, are from the first generation and from the second. The second is enriched by elements released by the first. In particular, abundances of helium, carbon, magnesium, aluminium, and sodium differ between these two.
Indications are this is an Oosterhoff type I, or "metal-rich" system. A 2010 study identified 245 variable stars in the cluster's field, of which 209 are RR Lyrae variables, four are Type II Cepheids, 25 are long period variables, and one is an eclipsing binary. The cluster may prove to be the galaxy's richest in terms of RR Lyrae variables. It has ten binary millisecond pulsars, including one (M62B) that is displaying eclipsing behavior from gas streaming off its companion, and one (M62H) with an orbiting exoplanet about three times the mass of Jupiter. There are multiple X-ray sources, including 50 within the half-mass radius. 47 blue straggler candidates have been identified, formed from the merger of two stars in a binary system, and these are preferentially concentrated near the core region.
It is hypothesized that this cluster may host an intermediate-mass black hole (IMBH), and it is considered well-suited for searches for such an object. A brief study, before 2013, of the proper motion of stars near the core did not require an IMBH to explain the observed motions. However, simulations cannot rule out one with a mass of a few thousand solar masses in M62's core. For example, based upon radial velocity measurements within an arcsecond of the core, Kiselev et al. (2008) made the claim of an IMBH in M15, likewise with a mass of a few thousand solar masses.
Gallery
See also
List of Messier objects
References and footnotes
External links
Messier 62, Galactic Globular Clusters Database page
M62 on willig.net
Messier 062
Messier 062
062
Messier 062
Discoveries by Charles Messier | Messier 62 | [
"Astronomy"
] | 657 | [
"Ophiuchus",
"Constellations"
] |
964,113 | https://en.wikipedia.org/wiki/Messier%2063 | Messier 63 or M63, also known as NGC 5055 or the seldom-used Sunflower Galaxy, is a spiral galaxy in the northern constellation of Canes Venatici with approximately 400 billion stars. M63 was first discovered by the French astronomer Pierre Méchain, then later verified by his colleague Charles Messier on 14 June 1779. The galaxy became listed as object 63 in the Messier Catalogue. In the mid-19th century, Anglo-Irish astronomer Lord Rosse identified spiral structures within the galaxy, making this one of the first galaxies in which such structure was identified.
The shape or morphology of this galaxy has a classification of SAbc, indicating a spiral form with no central bar feature (SA) and moderate to loosely wound arms (bc). There is a general lack of large-scale continuous spiral structure in visible light, so it is considered a flocculent galaxy. However, when observed in the near infrared, a symmetric, two-arm structure is seen. Each arm wraps 150° around the galaxy and extends out to from the nucleus.
M63 is a weakly active galaxy with a LINER nucleus – short for 'low-ionization nuclear emission-line region'. This displays as an unresolved source at the galactic nucleus that is cloaked in a diffuse emission. The latter is extended along a position angle of 110° relative to the north celestial pole, and both soft X-rays and hydrogen (H-alpha) emission can be observed coming from nearly the same direction. The existence of a supermassive black hole (SMBH) at the nucleus is uncertain; if it does exist, then the mass is estimated at around 850 million times the mass of the Sun.
Radio observations at the 21-cm hydrogen line show the gaseous disk of M63 extends outward to a radius of , well past the bright optical disk. This gas shows a symmetrical form that is warped in a pronounced manner, starting at a radius of . The form suggests a dark matter halo that is offset with respect to the inner region. The reason for the warp is unclear, but the position angle points toward the smaller companion galaxy, UGC 8313.
The distance to M63, based upon the luminosity-distance measurement is . The radial velocity relative to the Local Group yields an estimate of . Estimates based on the Tully–Fisher relation range over . The tip of the red-giant branch technique gives a distance of . M63 is part of the M51 Group, a group of galaxies that also includes M51 (the 'Whirlpool Galaxy').
One supernova has been observed in M63: a type Ia event of magnitude 11.8 was discovered by Glenn Jolly on 24 May 1971 and independently by Roger Clark on 29 May 1971. It reached peak light around 26 May. While the spectrum was consistent with a supernova of type I, the spectroscopic behavior appeared anomalous.
Gallery
See also
List of Messier objects
References
External links
Sunflower Galaxy @ SEDS Messier pages
Sunflower Galaxy (M63) at Constellation Guide
Unbarred spiral galaxies
LINER galaxies
M51 Group
Canes Venatici
063
NGC objects
08334
46153
Astronomical objects discovered in 1779
Discoveries by Pierre Méchain | Messier 63 | [
"Astronomy"
] | 674 | [
"Canes Venatici",
"Constellations"
] |
964,134 | https://en.wikipedia.org/wiki/Messier%2065 | Messier 65 (also known as NGC 3623) is an intermediate spiral galaxy about 35 million light-years away in the constellation Leo, within its highly equatorial southern half. It was discovered by Charles Messier in 1780. With M66 and NGC 3628, it forms the Leo Triplet, a small close group of galaxies.
Discovery
M65 was discovered by Charles Messier and included in his Messier Objects list. However, William Henry Smyth accidentally attributed the discovery to Pierre Méchain in his popular 19th-century astronomical work A Cycle of Celestial Objects (stating "They [M65 and M66] were pointed out by Méchain to Messier in 1780"). This error was in turn picked up by Kenneth Glyn Jones in Messier's Nebulae and Star Clusters, and has since propagated into a number of other books by a variety of authors.
Star formation
The galaxy is low in dust and gas, and there is little star formation in it, although there has been some relatively recently in the arms. The ratio of old stars to new stars is correspondingly quite high. In most wavelengths it is quite uninteresting, though there is a radio source visible in the NVSS, offset from the core by about two arc-minutes. The identity of the source is uncertain, as it has not been identified visually, or formally studied in any published papers.
Interaction with other galaxies
To the eye, M65's disk appears slightly warped, and its relatively recent burst of star formation is also suggestive of some external disturbance. Rots (1978) suggests that the two other galaxies in the Leo Triplet interacted with each other about 800 million years ago. Recent research by Zhiyu Duan suggests that M65 may also have interacted, though much less strongly. He also notes that M65 may have a central bar—it is difficult to tell because the galaxy is seen from an oblique angle—a feature which is suggestive of tidal disruption.
Gallery
See also
List of Messier objects
References
External links
SEDS Messier: Spiral Galaxy M65
Intermediate spiral galaxies
Messier 065
Messier 065
065
Messier 065
06328
34612
Astronomical objects discovered in 1780
Discoveries by Charles Messier | Messier 65 | [
"Astronomy"
] | 458 | [
"Leo (constellation)",
"Constellations"
] |
964,138 | https://en.wikipedia.org/wiki/Universal%20Abit | Universal ABIT Co., Ltd (formerly ABIT Computer Corporation) was a computer components manufacturer, based in Taiwan, active since the 1980s. Its core product line were motherboards aimed at the overclocker market. ABIT experienced serious financial problems in 2005. The brand name "ABIT" and other intangible properties, including patents and trademarks, were acquired by Universal Scientific Industrial Co., Ltd. (USI) in May 2006.
The parent firm discontinued the brand as of 31 March 2009.
History
ABIT was founded in 1989. In 1991, the company had become the fastest growing motherboard manufacturer, claiming US$10 million in sales.
In 2000, ABIT underwent an initial public offering (IPO) on the TAIEX stock exchange. To keep pace with their "good" sales figures, they opened a factory in Suzhou, China, and moved to new headquarters in Neihu, Taipei. The number of motherboards sold was claimed to have doubled between 2000 and 2001.
Abit chose to outsource trial production of two low-end boards to Elitegroup Computer Systems starting in June 2002. The outsourcing move was confirmed publicly in July 2002; the first model accounted for 10% of Abit's motherboard shipments, and by August 2002 this was expected to rise to 15–20% with the second model. The company's niche products, such as servers and routers, would continue to be based at Abit's factory in Taoyuan, Taiwan.
Abit suffered something of a blow in March 2003, when Oskar Wu, a leading engineer on the famous ABIT NF7-S motherboard, resigned after the NForce series to become head of the LANParty range at competitor DFI.
On 15 December 2004, the Taiwan Stock Exchange downgraded ABIT's stock due to questionable accounting practices. Investigations revealed that the majority of their import/export business was conducted through seven companies, all located at the same address and each of which had a capital of only HK$2. This made it easy to inflate the reported number of motherboards sold. The Hong Kong media also reported that the management was being investigated for embezzling funds from the company.
In June 2005, ABIT partnered with Wan Hai Industries. This container shipping company, also a principal investor in China Airlines, brought the company much needed capital, since the company had financial problems at this time, partly due to a class action lawsuit involving faulty capacitors on their products, but also because of marketing highly technical products to the general public while offering longer-than-average warranties and generous return policies.
On 25 January 2006, ABIT announced that USI intended to purchase ABIT Computer's motherboard business and brand and announced a special shareholders' meeting to discuss the sale of ABIT's Neihu building, changing ABIT's company name, the disposition of the company's assets, and the release of the directors from non-competition restrictions. ABIT sold its own office building in Taipei to Deutsche Bank in order to raise money to cut its debt.
Following USI's acquisition of the motherboard business, the remaining divisions of ABIT switched to distributing components and networking products, while using its Suzhou, China plant only to offer some motherboard contract manufacturing services.
The acquired motherboard business and the 'ABIT' brand name were used by USI under the new brand name Universal Abit. In the US, it was known as Universal Abit USA Corporation. The old company, ABIT Computer Corporation (USA), is now dissolved and is no longer in existence.
Universal Abit later announced that it would close on 31 December 2008, and officially cease to exist on 1 January 2009.
By 2009, Abit no longer sold motherboards.
Universal Abit was located in Neihu, Taiwan with regional offices in China, USA, Iran and the Netherlands.
Technical achievements
ABIT had a reputation among PC enthusiasts for producing motherboards that support overclocking. In the late 1990s, the company introduced their Softmenu feature, one of the first jumperless CPU configuration systems that enable overclocking to be adjusted from the BIOS instead of fiddling with jumpers. Softmenu was later extended with the development of the μGuru chip. μGuru is a custom microprocessor on Abit motherboards which, in conjunction with ABIT software, gives the ability to modify overclocking settings in real-time while the OS is running. By providing instant feedback on the results of a particular overclock setting, μGuru reduced the time required to discover optimal settings. μGuru provided a special connector for a panel in a 5¼" drive bay to display CPU speed and voltage settings. They were also one of the first motherboard manufacturers to enable undervolting.
ABIT was the first motherboard manufacturer to introduce 133 MHz FSB operation for the Intel BX chipset with the aptly named AB-BX133. ABIT also achieved symmetric multiprocessing (SMP) operation for Intel's Mendocino Celeron CPU, in their BP6 motherboard. This was an achievement because Intel had blocked SMP operation in the Celeron.
In 2004, they introduced the OTES cooling system. This heat pipe based cooling system is intended to transfer heat from the chipset or the motherboard's voltage regulators and expel it out of the system through the rear I/O panel.
During Computex 2008, Universal Abit unveiled the FunFab P80 Digital Photo Frame and Printer. It integrated a photo printer directly to a mobile phone.
Products
References
S. Chen, S. Shen. "Abit cuts debts by selling properties, but trouble remains", DigiTimes.com, 28 December 2005.
E. Wang. "Abit reaches tentative agreement with creditor banks", DigiTimes.com, 21 January 2005.
E. Wang. "Abit stock downgraded to requiring full delivery", DigiTimes.com, 15 December 2004.
External links
Archive of the Abit Website prior to closure
Archive of the Abit FTP Server prior to closure (including BIOS updates, Manuals and proprietary Abit software)
1989 establishments in Taiwan
2008 disestablishments in Taiwan
Companies established in 1989
Companies disestablished in 2008
Computer companies of Taiwan
Computer hardware companies
Electronics companies of Taiwan
Motherboard companies
Companies listed on the Taiwan Stock Exchange
Taiwanese brands | Universal Abit | [
"Technology"
] | 1,335 | [
"Computer hardware companies",
"Computers"
] |
964,142 | https://en.wikipedia.org/wiki/Leo%20Triplet | The Leo Triplet (also known as the M66 Group) is a small group of galaxies about 35 million light-years away in the constellation Leo. This galaxy group consists of the spiral galaxies M65, M66, and NGC 3628.
Members
The table below lists galaxies that have been consistently identified as group members in the Nearby Galaxies Catalog, the Lyons Groups of Galaxies (LGG) Catalog, and the group lists created from the Nearby Optical Galaxy sample of Giuricin et al.
Member list
Additionally, some of the references cited above indicate that one or two other nearby galaxies may be group members. NGC 3593 is frequently but not consistently identified as a member of this group.
Nearby groups
The M96 Group is located physically near the Leo Triplet. These two groups may actually be separate parts of a much larger group, and some group identification algorithms actually identify the Leo Triplet as part of the M96 Group.
See also
NGC 5866 Group – another small group of galaxies
References
External links
The Leo Triplett (M66 group) at messier.seds.org
Finder chart at freestarcharts.com (PDF format)
Leo (constellation)
317 | Leo Triplet | [
"Astronomy"
] | 242 | [
"Leo (constellation)",
"Constellations"
] |
964,161 | https://en.wikipedia.org/wiki/Modulus%20of%20continuity | In mathematical analysis, a modulus of continuity is a function ω : [0, ∞] → [0, ∞] used to measure quantitatively the uniform continuity of functions. So, a function f : I → R admits ω as a modulus of continuity if
|f(x) − f(y)| ≤ ω(|x − y|)
for all x and y in the domain of f. Since moduli of continuity are required to be infinitesimal at 0, a function turns out to be uniformly continuous if and only if it admits a modulus of continuity. Moreover, the relevance of the notion comes from the fact that sets of functions sharing the same modulus of continuity are exactly equicontinuous families. For instance, the modulus ω(t) := kt describes the k-Lipschitz functions, the moduli ω(t) := kt^α describe the Hölder continuity, the modulus ω(t) := kt(|log t|+1) describes the almost Lipschitz class, and so on. In general, the role of ω is to fix some explicit functional dependence of ε on δ in the (ε, δ) definition of uniform continuity. The same notions generalize naturally to functions between metric spaces. Moreover, a suitable local version of these notions allows one to describe quantitatively the continuity at a point in terms of moduli of continuity.
A special role is played by concave moduli of continuity, especially in connection with extension properties, and with approximation of uniformly continuous functions. For a function between metric spaces, it is equivalent to admit a modulus of continuity that is either concave, or subadditive, or uniformly continuous, or sublinear (in the sense of growth). Actually, the existence of such special moduli of continuity for a uniformly continuous function is always ensured whenever the domain is either a compact, or a convex subset of a normed space. However, a uniformly continuous function on a general metric space admits a concave modulus of continuity if and only if the ratios
dY(f(x), f(x′)) / dX(x, x′)
are uniformly bounded for all pairs (x, x′) bounded away from the diagonal of X × X. The functions with the latter property constitute a special subclass of the uniformly continuous functions, which in the following we refer to as the special uniformly continuous functions. Real-valued special uniformly continuous functions on the metric space X can also be characterized as the set of all functions that are restrictions to X of uniformly continuous functions over any normed space isometrically containing X. Also, it can be characterized as the uniform closure of the Lipschitz functions on X.
Formal definition
Formally, a modulus of continuity is any increasing extended real-valued function ω : [0, ∞] → [0, ∞], vanishing at 0 and continuous at 0, that is
ω(0) = 0 and ω(t) → 0 as t → 0+.
Moduli of continuity are mainly used to give a quantitative account both of the continuity at a point, and of the uniform continuity, for functions between metric spaces, according to the following definitions.
A function f : (X, dX) → (Y, dY) admits ω as (local) modulus of continuity at the point x in X if and only if
dY(f(x), f(y)) ≤ ω(dX(x, y)) for all y in X.
Also, f admits ω as (global) modulus of continuity if and only if
dY(f(x), f(y)) ≤ ω(dX(x, y)) for all x and y in X.
One equivalently says that ω is a modulus of continuity (resp., at x) for f, or shortly, f is ω-continuous (resp., at x). Here, we mainly treat the global notion.
Elementary facts
If f has ω as modulus of continuity and ω1 ≥ ω, then f admits ω1 too as modulus of continuity.
If f : X → Y and g : Y → Z are functions between metric spaces with moduli respectively ω1 and ω2, then the composition map g ∘ f : X → Z has modulus of continuity ω2 ∘ ω1 (a one-line check is sketched after this list).
If f and g are functions from the metric space X to the Banach space Y, with moduli respectively ω1 and ω2, then any linear combination af+bg has modulus of continuity |a|ω1+|b|ω2. In particular, the set of all functions from X to Y that have ω as a modulus of continuity is a convex subset of the vector space C(X, Y), closed under pointwise convergence.
If f and g are bounded real-valued functions on the metric space X, with moduli respectively ω1 and ω2, then the pointwise product fg has modulus of continuity sup|g|·ω1 + sup|f|·ω2.
If (fλ)λ∈Λ is a family of real-valued functions on the metric space X with common modulus of continuity ω, then the inferior envelope infλ fλ, respectively, the superior envelope supλ fλ, is a real-valued function with modulus of continuity ω, provided it is finite valued at every point. If ω is real-valued, it is sufficient that the envelope be finite at one point of X at least.
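A one-line check of the composition fact above (a sketch using only the definitions and the fact that ω2 is increasing): for all x and y in X, dZ(g(f(x)), g(f(y))) ≤ ω2(dY(f(x), f(y))) ≤ ω2(ω1(dX(x, y))), so ω2 ∘ ω1 is indeed a modulus of continuity for g ∘ f.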
Remarks
Some authors do not require monotonicity, and some require additional properties such as ω being continuous. However, if f admits a modulus of continuity in the weaker definition, it also admits a modulus of continuity which is increasing and infinitely differentiable in (0, ∞). For instance,
ω1(t) := sup { ω(s) : 0 ≤ s ≤ t }
is increasing, and ω1 ≥ ω;
ω2(t) := (1/t) ∫[t, 2t] ω1(s) ds
is also continuous, and ω2 ≥ ω1, and a suitable variant of the preceding definition also makes ω2 infinitely differentiable in [0, ∞].
Any uniformly continuous function admits a minimal modulus of continuity ωf, that is sometimes referred to as the (optimal) modulus of continuity of f:
ωf(t) := sup { dY(f(x), f(y)) : x, y in X, dX(x, y) ≤ t },  for t ≥ 0.
Similarly, any function continuous at the point x admits a minimal modulus of continuity at x, ωf(t; x) (the (optimal) modulus of continuity of f at x):
ωf(t; x) := sup { dY(f(x), f(y)) : y in X, dX(x, y) ≤ t },  for t ≥ 0.
However, these restricted notions are not as relevant, for in most cases the optimal modulus of f cannot be computed explicitly, but only bounded from above (by any modulus of continuity of f). Moreover, the main properties of moduli of continuity concern directly the unrestricted definition.
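As a small illustration (an addition, not part of the original text), the optimal modulus ωf can be estimated numerically on a grid; here for f(x) = √x on [0, 1], whose optimal modulus is ωf(t) = √t:

    import numpy as np

    # Brute-force estimate of omega_f(t) = sup{ |f(x) - f(y)| : |x - y| <= t } on a grid.
    x = np.linspace(0.0, 1.0, 401)
    fx = np.sqrt(x)
    dist = np.abs(x[:, None] - x[None, :])
    diff = np.abs(fx[:, None] - fx[None, :])
    for t in (0.01, 0.1, 0.5):
        print(t, diff[dist <= t].max(), np.sqrt(t))   # the estimate tracks sqrt(t)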
In general, the modulus of continuity of a uniformly continuous function on a metric space may need to take the value +∞. For instance, the function f : N → R such that f(n) := n² is uniformly continuous with respect to the discrete metric on N, and its minimal modulus of continuity is ωf(t) = +∞ for any t ≥ 1, and ωf(t) = 0 otherwise. However, the situation is different for uniformly continuous functions defined on compact or convex subsets of normed spaces.
Special moduli of continuity
Special moduli of continuity also reflect certain global properties of functions such as extendibility and uniform approximation. In this section we mainly deal with moduli of continuity that are concave, or subadditive, or uniformly continuous, or sublinear. These properties are essentially equivalent in that, for a modulus ω (more precisely, its restriction on [0, ∞)) each of the following implies the next:
ω is concave;
ω is subadditive;
ω is uniformly continuous;
ω is sublinear, that is, there are constants a and b such that ω(t) ≤ at+b for all t;
ω is dominated by a concave modulus, that is, there exists a concave modulus of continuity ωc such that ω(t) ≤ ωc(t) for all t.
Thus, for a function f between metric spaces it is equivalent to admit a modulus of continuity which is either concave, or subadditive, or uniformly continuous, or sublinear. In this case, the function f is sometimes called a special uniformly continuous map. This is always the case when the domain is either compact or convex. Indeed, a uniformly continuous map f : C → Y defined on a convex set C of a normed space E always admits a subadditive modulus of continuity, in particular one that is real-valued, ω : [0, ∞) → [0, ∞). Indeed, it is immediate to check that the optimal modulus of continuity ωf defined above is subadditive if the domain of f is convex: we have, for all s and t,
ωf(s + t) ≤ ωf(s) + ωf(t),
since any pair of points at distance at most s + t can be joined through a point of the segment between them lying at distance at most s from one and at most t from the other.
Note that as an immediate consequence, any uniformly continuous function on a convex subset of a normed space has sublinear growth: there are constants a and b such that |f(x)| ≤ a|x| + b for all x. However, a uniformly continuous function on a general metric space admits a concave modulus of continuity if and only if the ratios dY(f(x), f(x′)) / dX(x, x′) are uniformly bounded for all pairs (x, x′) with distance bounded away from zero; this condition is certainly satisfied by any bounded uniformly continuous function, hence in particular by any continuous function on a compact metric space.
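A short sketch of the sublinear-growth claim (the argument is not spelled out above): fix a point x0 in the convex domain and some t0 > 0 with ω(t0) finite, where ω is a subadditive modulus for f. Splitting the segment from x0 to x into at most |x − x0|/t0 + 1 pieces of length at most t0 and applying subadditivity gives |f(x) − f(x0)| ≤ (|x − x0|/t0 + 1)·ω(t0), which is a bound of the form a|x| + b.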
Sublinear moduli, and bounded perturbations from Lipschitz
A sublinear modulus of continuity can easily be found for any uniformly continuous function which is a bounded perturbation of a Lipschitz function: if f is a uniformly continuous function with modulus of continuity ω, and g is a k-Lipschitz function with uniform distance r from f, then f admits the sublinear modulus of continuity min{ω(t), 2r + kt}. Conversely, at least for real-valued functions, any special uniformly continuous function is a bounded, uniformly continuous perturbation of some Lipschitz function; indeed more is true, as shown below (Lipschitz approximation).
Subadditive moduli, and extendibility
The above property for uniformly continuous functions on convex domains admits a sort of converse, at least in the case of real-valued functions: that is, every special uniformly continuous real-valued function f : X → R defined on a metric space X, which is a metric subspace of a normed space E, admits extensions over E that preserve any subadditive modulus ω of f. The least and the greatest of such extensions are, respectively,

f∗(x) := sup_{y∈X} { f(y) − ω(‖x − y‖) },   f*(x) := inf_{y∈X} { f(y) + ω(‖x − y‖) }.
As remarked, any subadditive modulus of continuity is uniformly continuous: in fact, it admits itself as a modulus of continuity. Therefore, f∗ and f* are respectively inferior and superior envelopes of ω-continuous families; hence still ω-continuous. Incidentally, by the Kuratowski embedding any metric space is isometric to a subset of a normed space. Hence, special uniformly continuous real-valued functions are essentially the restrictions of uniformly continuous functions on normed spaces. In particular, this construction provides a quick proof of the Tietze extension theorem on compact metric spaces. However, for mappings with values in more general Banach spaces than R, the situation is considerably more complicated; the first non-trivial result in this direction is the Kirszbraun theorem.
Concave moduli and Lipschitz approximation
Every special uniformly continuous real-valued function f : X → R defined on the metric space X is uniformly approximable by means of Lipschitz functions. Moreover, the speed of convergence in terms of the Lipschitz constants of the approximations is strictly related to the modulus of continuity of f. Precisely, let ω be the minimal concave modulus of continuity of f, that is, the least concave majorant of the optimal modulus ωf.
Let δ(s) be the uniform distance between the function f and the set Lip_s of all Lipschitz real-valued functions on X having Lipschitz constant s:

δ(s) := inf { ‖f − u‖∞,X : u ∈ Lip_s }.
Then the functions ω(t) and δ(s) can be related with each other via a Legendre transformation: more precisely, the functions 2δ(s) and −ω(−t) (suitably extended to +∞ outside their domains of finiteness) are a pair of conjugated convex functions; in particular, for every s ≥ 0,

2δ(s) = sup_{t ≥ 0} { ω(t) − st }.
Since ω(t) = o(1) for t → 0+, it follows that δ(s) = o(1) for s → +∞, which means exactly that f is uniformly approximable by Lipschitz functions. Correspondingly, an optimal approximation is given by a family of functions fs, s > 0: each function fs has Lipschitz constant s and

‖f − fs‖∞,X = δ(s);

in fact, fs is the greatest s-Lipschitz function that realizes the distance δ(s). For example, the α-Hölder real-valued functions on a metric space are characterized as those functions that can be uniformly approximated by s-Lipschitz functions with speed of convergence δ(s) = O(s^{−α/(1−α)}), while the almost Lipschitz functions are characterized by an exponential speed of convergence.
Examples of use
Let f : [a, b] → R be a continuous function. In the proof that f is Riemann integrable, one usually bounds the distance between the upper and lower Riemann sums with respect to the Riemann partition P := {t0, ..., tn} in terms of the modulus of continuity of f and the mesh of the partition P (which is the number max_{0 ≤ i < n} (t_{i+1} − t_i)).
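As a worked version of this estimate, here is a sketch of the standard bound, with S(f;P) and s(f;P) denoting the upper and lower Riemann sums and |P| the mesh; this notation is introduced here for illustration only.

```latex
% Upper minus lower Riemann sum, bounded via the modulus of continuity
% evaluated at the mesh |P| := max_i (t_{i+1} - t_i).
\[
  S(f;P) - s(f;P)
  \;=\; \sum_{i=0}^{n-1} \Bigl( \sup_{[t_i,\,t_{i+1}]} f - \inf_{[t_i,\,t_{i+1}]} f \Bigr)\,(t_{i+1}-t_i)
  \;\le\; \omega_f\bigl(|P|\bigr)\,(b-a),
\]
% which tends to 0 as |P| -> 0, because f is continuous on the compact
% interval [a,b], hence uniformly continuous, so \omega_f(t) -> 0 as t -> 0+.
```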
For an example of use in the Fourier series, see Dini test.
History
Steffens (2006, p. 160) attributes the first usage of omega for the modulus of continuity to Lebesgue (1909, p. 309/p. 75) where omega refers to the oscillation of a Fourier transform. De la Vallée Poussin (1919, pp. 7-8) mentions both names (1) "modulus of continuity" and (2) "modulus of oscillation" and then concludes "but we choose (1) to draw attention to the usage we will make of it".
The translation group of Lp functions, and moduli of continuity Lp
Let 1 ≤ p ≤ ∞; let f : Rn → R be a function of class Lp, and let h ∈ Rn. The h-translation of f, the function defined by (τhf)(x) := f(x−h), belongs to the Lp class; moreover, if 1 ≤ p < ∞, then as ‖h‖ → 0 we have

‖τhf − f‖p → 0.
Therefore, since translations are in fact linear isometries, also

‖τ_{v+h}f − τ_vf‖p → 0

as ‖h‖ → 0, uniformly in v ∈ Rn.
In other words, the map h → τh defines a strongly continuous group of linear isometries of Lp. In the case p = ∞ the above property does not hold in general: actually, it exactly reduces to uniform continuity, and defines the uniformly continuous functions. This leads to the following definition, which generalizes the notion of a modulus of continuity of the uniformly continuous functions: a modulus of continuity Lp for a measurable function f : Rn → R is a modulus of continuity ω : [0, ∞] → [0, ∞] such that

‖τhf − f‖p ≤ ω(‖h‖) for all h ∈ Rn.
This way, moduli of continuity also give a quantitative account of the continuity property shared by all Lp functions.
Modulus of continuity of higher orders
It can be seen that the formal definition of the modulus uses the notion of a finite difference of first order:

ω(f, δ) = ω1(f, δ) = sup_{x; |h| ≤ δ} |f(x + h) − f(x)|.

If we replace that difference with a difference of order n, we get a modulus of continuity of order n:

ωn(f, δ) = sup_{x; |h| ≤ δ} |Δ_h^n f(x)|,  where  Δ_h^n f(x) := Σ_{k=0}^{n} (−1)^{n−k} C(n, k) f(x + kh), with C(n, k) the binomial coefficients.
See also
Constructive analysis
Modulus of convergence
References
Reproduced in:
Lipschitz maps
Approximation theory
Constructivism (mathematics)
Fourier analysis | Modulus of continuity | [
"Mathematics"
] | 3,069 | [
"Approximation theory",
"Mathematical logic",
"Mathematical relations",
"Constructivism (mathematics)",
"Approximations"
] |
964,167 | https://en.wikipedia.org/wiki/DSLReports | DSLReports was a North American-oriented broadband information and review site based in New York City. The site's main focus is on internet, phone, cable TV, fiber optics, and wireless services in the United States and Canada, as well as other countries (United Kingdom and Australia).. On January 15, 2025, the site went offline with a gateway timeout error. It is unknown if this is due to a technical issue or if the site has been shut down. 
DSLReports was created by Justin Beech in June 1999. According to Alexa and WHOIS records, the dslreports.com domain was registered on May 28, 1999.
History
"Broadband Reports"
In the 2000s, DSLReports was concurrently branded as "BroadbandReports.com," a domain that now redirects to dslreports.com.
2011 SQL Injection attack
Over a four-hour period on April 27, 2011, an automated SQL Injection attack occurred on the DSLReports website. The attack was able to extract 8% of the site's username/password pairs, which amounted to approximately 8,000 of the 9,000 active accounts and 90,000 old or inactive accounts created during the site's 10-year history. Once the intrusion was detected, stopped and the extent of the compromised accounts had been assessed, passwords for those accounts were automatically reset.
Content
DSLReports rates and reviews cable, DSL and fiber optic internet services from providers all over North America. The site also runs support and discussion forums and offers online tools for testing internet connection.
Reviews
DSLReports allows its users to submit reviews of their Internet service provider (ISP), Web hosting service, digital phone service (VOIP), and more. Users may also read reviews written by others. Many large ISPs have over a thousand reviews on the site. Reviews may be filtered for the user's location and/or connectivity preference.
News
The site is a source of internet-related news and opinion, and occasionally breaks stories about broadband internet service providers, such as Time Warner Cable's 2008 decision to test consumption-based billing with subscribers. That same year, when Charter Communications began sending letters to high-speed internet customers regarding a new website tracking policy, reports of the letters first appeared on DSLReports. DSLReports' editors post Internet-related news and opinion items on the site's front page throughout the day. Common topics of news items and features include wireless technologies, peer-to-peer file sharing, upgrades and new offerings from ISPs, legal issues, regulatory issues, and security issues. However, since July 2, 2018, the site has not published new articles, as its main editor, Karl Bode, was laid off due to funding; compilations of links to articles on other sites have nevertheless been published every weekday.
Tools
DSLReports has been reported to offer one of the most comprehensive packages of internet and connection testing tools available.
Speed tests
The DSLReports speed test claims to be the first popular speed test and the best available. The speed test uses HTML5.
Ping tests
DSLReports also offers a ping and jitter test.
Other tests and tools
Other tools include stream tests, line monitoring, tweak testing, packet loss testing, and many other tools. Some of these services are provided free of charge, but others require the user to purchase "tool points", which cost approximately $1 each.
Community
DSLReports operated over 200 forums, many of which focused on Internet and computer-related topics. Other forums were dedicated to general conversation, political discussions, do-it-yourself projects, or regional discussions. There were over 1.8 million registered users on the DSLReports forums. A discussion forum was automatically created for every news and opinion article posted on the front page, which allowed members to discuss the article in question. Although membership was free, the forums allowed anonymous posting, so the information or sources in anonymous posts could be more questionable than in posts made by frequent members of the site. There were also well-hidden, invitation-only and controversial private forums such as the "meatlocker", which could be seen by adding the /forums/meatlocker suffix to the website address; this private area was said to be for nude and pornographic material submitted by the moderators and special guests.
Robb Topolski, a software tester whose findings and subsequent political activities have contributed to the movement for net neutrality has contributed to the site.
Influence
DSLReports has been written about or had their reports featured in CNN, USA Today, Forbes, NBC News, The Washington Post, The New York Times and Ars Technica, among others.
The site has been described by The Washington Post as a "comprehensive reference" for internet services. Discussion topics on DSLReports frequently generate thousands of comments. The Associated Press reported that over 5,000 messages were posted to a forum discussing a potential data cap imposed upon Comcast Corp. customers in 2003.
CNN has rated DSLReports as one of the best free online services.
References
External links
American review websites
Internet forums
Technology websites
Consumer guides
Recommender systems
Companies based in New York City
Internet properties established in 1999
1999 establishments in New York (state) | DSLReports | [
"Technology"
] | 1,106 | [
"Information systems",
"Recommender systems"
] |
964,170 | https://en.wikipedia.org/wiki/NGC%203628 | NGC 3628, also known as the Hamburger Galaxy or Sarah's Galaxy, is an unbarred spiral galaxy about 35 million light-years away in the constellation Leo. It was discovered by William Herschel in 1784. It has an approximately 300,000 light-years long tidal tail. Along with M65 and M66, NGC 3628 forms the Leo Triplet, a small group of galaxies. Its most conspicuous feature is the broad and obscuring band of dust located along the outer edge of its spiral arms, effectively transecting the galaxy to the view from Earth.
Due to the presence of an x-shaped bulge, visible in multiple wavelengths, it has been argued that NGC 3628 is instead a barred spiral galaxy with the bar seen end-on. Simulations have shown that bars often form in disk galaxies during interactions and mergers, and NGC 3628 is known to be interacting with its two large neighbors.
The name "Hamburger Galaxy" is a reference to its shape resembling a hamburger, while the name "Sarah's Galaxy" is thought to refer to poet Sarah Williams (1837–1868), most famous for the poem "The Old Astronomer".
References
External links
SEDS: Spiral Galaxy NGC 3628
Unbarred spiral galaxies
Peculiar galaxies
Leo Triplet
Leo (constellation)
3628
06350
34697
Astronomical objects discovered in 1784 | NGC 3628 | [
"Astronomy"
] | 277 | [
"Leo (constellation)",
"Constellations"
] |
964,174 | https://en.wikipedia.org/wiki/Messier%2066 | Messier 66 or M66, also known as NGC 3627, is an intermediate spiral galaxy in the southern, equatorial half of Leo. It was discovered by French astronomer Charles Messier on 1 March 1780, who described it as "very long and very faint". This galaxy is a member of a small group of galaxies that includes M65 and NGC 3628, known as the Leo Triplet or the M66 Group. M65 and M66 are a common object for amateur astronomic observation, being separated by only .
M66 has a morphological classification of SABb, indicating a spiral shape with a weak bar feature and loosely wound arms. The isophotal axis ratio is 0.32, indicating that it is being viewed at an angle. M66 is receding from us with a heliocentric radial velocity of . It lies 31 million light-years away and is about 95 thousand light-years across with striking dust lanes and bright star clusters along sweeping spiral arms.
Gravitational interaction from its past encounter with neighboring NGC 3628 has resulted in an extremely high central mass concentration; a high molecular to atomic mass ratio; and a resolved non-rotating clump of H I material apparently removed from one of the spiral arms. The latter feature shows up visually as an extremely prominent and unusual spiral arm and dust lane structures as originally noted in the Atlas of Peculiar Galaxies.
Supernovae
Five supernovae have been observed in M66:
SN 1973R (type II, mag. 14.5) was discovered by Leonida Rosino on 19 December 1973.
SN 1989B (type Ia, mag. 13) was discovered by Robert Evans on 30 January 1989.
SN 1997bs (type uncertain, mag. 17) was discovered by the Lick Observatory Supernova Search (LOSS) on 15 April 1997. This event was initially classified as a type IIn supernova, but more recent analysis suggests that it is instead either a luminous blue variable or a "gap" transient.
SN 2009hd (type II, mag. 15.8) was discovered by Libert (Berto) Monard on 2 July 2009.
SN 2016cok (type IIP, mag. 16.6) was discovered by the All Sky Automated Survey for Supernovae on 28 May 2016.
Gallery
See also
List of Messier objects
References
External links
Spiral Galaxy M66
Astronomy Picture of the Day – Unusual Spiral Galaxy M66 from Hubble – 2010 April 13
Messier 66 Close Up, APOD June 13, 2024
Intermediate spiral galaxies
Messier 066
Messier 066
066
Messier 066
06346
34695
016
Astronomical objects discovered in 1780
Articles containing video clips
Discoveries by Charles Messier | Messier 66 | [
"Astronomy"
] | 554 | [
"Leo (constellation)",
"Constellations"
] |
964,177 | https://en.wikipedia.org/wiki/Maurer%E2%80%93Cartan%20form | In mathematics, the Maurer–Cartan form for a Lie group is a distinguished differential one-form on that carries the basic infinitesimal information about the structure of . It was much used by Élie Cartan as a basic ingredient of his method of moving frames, and bears his name together with that of Ludwig Maurer.
As a one-form, the Maurer–Cartan form ω is peculiar in that it takes its values in the Lie algebra associated to the Lie group G. The Lie algebra is identified with the tangent space of G at the identity, denoted T_eG. The Maurer–Cartan form ω is thus a one-form defined globally on G, that is, a linear mapping of the tangent space T_gG at each g ∈ G into T_eG. It is given as the pushforward of a vector in T_gG along the left-translation by g^{-1} in the group:

ω(v) = (L_{g^{-1}})_* v,  v ∈ T_gG.
Motivation and interpretation
A Lie group G acts on itself by multiplication under the mapping

G × G → G,  (g, h) ↦ gh.
A question of importance to Cartan and his contemporaries was how to identify a principal homogeneous space of G. That is, a manifold P identical to the group G, but without a fixed choice of unit element. This motivation came, in part, from Felix Klein's Erlangen programme, where one was interested in a notion of symmetry on a space, where the symmetries of the space were transformations forming a Lie group. The geometries of interest were homogeneous spaces G/H, but usually without a fixed choice of origin corresponding to the coset eH.

A principal homogeneous space of G is a manifold P abstractly characterized by having a free and transitive action of G on P. The Maurer–Cartan form gives an appropriate infinitesimal characterization of the principal homogeneous space. It is a one-form defined on P satisfying an integrability condition known as the Maurer–Cartan equation. Using this integrability condition, it is possible to define the exponential map of the Lie algebra and in this way obtain, locally, a group action on P.
Construction
Intrinsic construction
Let T_eG be the tangent space of a Lie group G at the identity e (its Lie algebra). G acts on itself by left translation

L : G × G → G

such that for a given g ∈ G we have

L_g : G → G,  L_g(h) = gh,

and this induces a map of the tangent bundle to itself: (L_g)_* : T_hG → T_{gh}G.

A left-invariant vector field is a section X of TG such that

(L_g)_* X = X  for all g ∈ G.

The Maurer–Cartan form ω is a T_eG-valued one-form on G, defined on vectors v ∈ T_gG by the formula

ω_g(v) = (L_{g^{-1}})_* v.
Extrinsic construction
If G is embedded in GL(n) by a matrix-valued mapping g = (g_{ij}), then one can write ω explicitly as

ω = g^{-1} dg.

In this sense, the Maurer–Cartan form is always the left logarithmic derivative of the identity map of G.
Characterization as a connection
If we regard the Lie group G as a principal bundle over a manifold consisting of a single point, then the Maurer–Cartan form can also be characterized abstractly as the unique principal connection on the principal bundle G. Indeed, it is the unique T_eG-valued 1-form on G satisfying

ω_e(X) = X for all X ∈ T_eG,  and  R_h^* ω = Ad(h^{-1}) ∘ ω for all h ∈ G,

where R_h^* is the pullback of forms along the right-translation R_h in the group and Ad is the adjoint action on the Lie algebra.
Properties
If X is a left-invariant vector field on G, then ω(X) is constant on G. Furthermore, if X and Y are both left-invariant, then

ω([X, Y]) = [ω(X), ω(Y)],

where the bracket on the left-hand side is the Lie bracket of vector fields, and the bracket on the right-hand side is the bracket on the Lie algebra T_eG. (This may be used as the definition of the bracket on T_eG.) These facts may be used to establish an isomorphism of Lie algebras between T_eG and the Lie algebra of left-invariant vector fields on G.
By the definition of the exterior derivative, if X and Y are arbitrary vector fields then

dω(X, Y) = Xω(Y) − Yω(X) − ω([X, Y]).

Here ω(Y) is the T_eG-valued function obtained by duality from pairing the one-form ω with the vector field Y, and Xω(Y) is the Lie derivative of this function along X. Similarly, Yω(X) is the Lie derivative along Y of the T_eG-valued function ω(X).

In particular, if X and Y are left-invariant, then

Xω(Y) = 0 and Yω(X) = 0,

so

dω(X, Y) + [ω(X), ω(Y)] = 0,
but the left-invariant fields span the tangent space at any point (the push-forward of a basis in T_eG under a diffeomorphism is still a basis), so the equation is true for any pair of vector fields X and Y. This is known as the Maurer–Cartan equation. It is often written as

dω + ½[ω, ω] = 0.

Here [ω, ω] denotes the bracket of Lie algebra-valued forms.
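As a sanity check of the equation above, here is a short LaTeX sketch for a matrix group, using ω = g^{-1} dg as in the extrinsic construction; the identification [ω, ω] = 2 ω ∧ ω used below is the standard convention for matrix-valued one-forms and should be read as a notational assumption rather than as part of the original article.

```latex
% For a matrix group (e.g. a subgroup of GL(n,R)) with \omega = g^{-1}\,dg,
% differentiate using d(g^{-1}) = -g^{-1}\,(dg)\,g^{-1}:
\[
  d\omega \;=\; d(g^{-1})\wedge dg
          \;=\; -\,g^{-1}\,dg \wedge g^{-1}\,dg
          \;=\; -\,\omega\wedge\omega .
\]
% For matrix-valued one-forms, [\omega,\omega](X,Y) = 2[\omega(X),\omega(Y)]
% = 2(\omega\wedge\omega)(X,Y), so the computation above says exactly
\[
  d\omega + \tfrac12[\omega,\omega] \;=\; 0 .
\]
```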
Maurer–Cartan frame
One can also view the Maurer–Cartan form as being constructed from a Maurer–Cartan frame. Let {X_1, ..., X_n} be a basis of sections of TG consisting of left-invariant vector fields, and let {θ^1, ..., θ^n} be the dual basis of sections of T*G such that θ^i(X_j) = δ^i_j, the Kronecker delta. Then {X_i} is a Maurer–Cartan frame, and {θ^i} is a Maurer–Cartan coframe.

Since each X_i is left-invariant, applying the Maurer–Cartan form to it simply returns its value at the identity. Thus ω(X_i) = X_i(e) ∈ T_eG. Thus, the Maurer–Cartan form can be written

ω = Σ_i X_i(e) ⊗ θ^i.
Suppose that the Lie brackets of the vector fields X_i are given by

[X_i, X_j] = Σ_k c_{ij}^k X_k.

The quantities c_{ij}^k are the structure constants of the Lie algebra (relative to the basis {X_i}). A simple calculation, using the definition of the exterior derivative d, yields

dθ^k(X_i, X_j) = X_i θ^k(X_j) − X_j θ^k(X_i) − θ^k([X_i, X_j]) = −c_{ij}^k,

so that by duality

dθ^k + ½ Σ_{i,j} c_{ij}^k θ^i ∧ θ^j = 0.
The frame components are given by
which establishes the equivalence of the two forms of the Maurer–Cartan equation.
On a homogeneous space
Maurer–Cartan forms play an important role in Cartan's method of moving frames. In this context, one may view the Maurer–Cartan form as a connection defined on the tautological principal bundle associated with a homogeneous space. If H is a closed subgroup of G, then G/H is a smooth manifold of dimension dim G − dim H. The quotient map G → G/H induces the structure of an H-principal bundle over G/H. The Maurer–Cartan form on the Lie group G yields a flat Cartan connection for this principal bundle. In particular, if H = {e}, then this Cartan connection is an ordinary connection form, and we have

dω + ½[ω, ω] = 0,
which is the condition for the vanishing of the curvature.
In the method of moving frames, one sometimes considers a local section of the tautological bundle, say . (If working on a submanifold of the homogeneous space, then need only be a local section over the submanifold.) The pullback of the Maurer–Cartan form along defines a non-degenerate -valued -form over the base. The Maurer–Cartan equation implies that
Moreover, if and are a pair of local sections defined, respectively, over open sets and , then they are related by an element of in each fibre of the bundle:
The differential of gives a compatibility condition relating the two sections on the overlap region:
where is the Maurer–Cartan form on the group .
A system of non-degenerate -valued -forms defined on open sets in a manifold , satisfying the Maurer–Cartan structural equations and the compatibility conditions endows the manifold locally with the structure of the homogeneous space . In other words, there is locally a diffeomorphism of into the homogeneous space, such that is the pullback of the Maurer–Cartan form along some section of the tautological bundle. This is a consequence of the existence of primitives of the Darboux derivative.
Notes
References
Lie groups
Equations
Differential geometry | Maurer–Cartan form | [
"Mathematics"
] | 1,462 | [
"Lie groups",
"Mathematical structures",
"Mathematical objects",
"Equations",
"Algebraic structures"
] |
964,229 | https://en.wikipedia.org/wiki/Protoplast | Protoplast (), is a biological term coined by Hanstein in 1880 to refer to the entire cell, excluding the cell wall. Protoplasts can be generated by stripping the cell wall from plant, bacterial, or fungal cells by mechanical, chemical or enzymatic means.
Protoplasts differ from spheroplasts in that their cell wall has been completely removed. Spheroplasts retain part of their cell wall. In the case of Gram-negative bacterial spheroplasts, for example, the peptidoglycan component of the cell wall has been removed but the outer membrane component has not.
Enzymes for the preparation of protoplasts
Cell walls are made of a variety of polysaccharides. Protoplasts can be made by degrading cell walls with a mixture of the appropriate polysaccharide-degrading enzymes, for example cellulase and pectinase for plant cells, chitinase for fungal cells, and lysozyme for bacterial cells.
During and subsequent to digestion of the cell wall, the protoplast becomes very sensitive to osmotic stress. This means cell wall digestion and protoplast storage must be done in an isotonic solution to prevent rupture of the plasma membrane.
Uses for protoplasts
Protoplasts can be used to study membrane biology, including the uptake of macromolecules and viruses. They are also used in studies of somaclonal variation.
Protoplasts are widely used for DNA transformation (for making genetically modified organisms), since the cell wall would otherwise block the passage of DNA into the cell. In the case of plant cells, protoplasts may be regenerated into whole plants first by growing into a group of plant cells that develops into a callus and then by regeneration of shoots (caulogenesis) from the callus using plant tissue culture methods. Growth of protoplasts into callus and regeneration of shoots requires the proper balance of plant growth regulators in the tissue culture medium that must be customized for each species of plant. Unlike protoplasts from vascular plants, protoplasts from mosses, such as Physcomitrella patens, do not need phytohormones for regeneration, nor do they form a callus during regeneration. Instead, they regenerate directly into the filamentous protonema, mimicking a germinating moss spore.
Protoplasts may also be used for plant breeding, using a technique called protoplast fusion. Protoplasts from different species are induced to fuse by using an electric field or a solution of polyethylene glycol. This technique may be used to generate somatic hybrids in tissue culture.
Additionally, protoplasts of plants expressing fluorescent proteins in certain cells may be used for Fluorescence Activated Cell Sorting (FACS), where only cells fluorescing a selected wavelength are retained. Among other things, this technique is used to isolate specific cell types (e.g., guard cells from leaves, pericycle cells from roots) for further investigations, such as transcriptomics.
See also
Bacterial morphological plasticity
L-form bacteria
Spheroplasts
References
Cell biology
Membrane biology
Molecular biology
Plant physiology
Plant reproduction | Protoplast | [
"Chemistry",
"Biology"
] | 649 | [
"Plant physiology",
"Behavior",
"Cell biology",
"Plant reproduction",
"Plants",
"Reproduction",
"Membrane biology",
"Molecular biology",
"Biochemistry"
] |
964,312 | https://en.wikipedia.org/wiki/12AX7 | 12AX7 (also known as ECC83) is a miniature dual-triode vacuum tube with high voltage gain. Developed around 1946 by RCA engineers in Camden, New Jersey, under developmental number A-4522, it was released for public sale under the 12AX7 identifier on September 15, 1947.
The 12AX7 was originally intended as replacement for the 6SL7 family of dual-triode amplifier tubes for audio applications. As a popular choice for guitar tube amplifiers, its ongoing use in such equipment makes it one of the few small-signal vacuum tubes in continuous production since it was introduced.
History
The 12AX7 is a twin triode basically composed of two of the triodes from a 6AV6, a double diode triode. The 6AV6 is a miniature repackaging (with just a single cathode) of the triode and twin diodes from the octal 6SQ7 (a double-diode triode used in AM radios), which itself is very similar to the older type 75 triode-diode dating from 1930.
Application
The 12AX7 is a high-gain (typical amplification factor 100), low-plate-current triode best suited for low-level audio voltage amplification. In this role it is widely used for the preamplifier (input and mid-level) stages of audio amplifiers. It has relatively high Miller capacitance, making it unsuitable for radio-frequency use.
Typically a 12AX7 triode is configured with a high-value plate resistor, 100 kΩ in most guitar amps and 220 kΩ or more in high-fidelity equipment. Grid bias is most often provided by a cathode resistor. If the cathode resistor is unbypassed, negative feedback is introduced and each half of a 12AX7 provides a typical voltage gain of about 30; the amplification factor is basically twice the maximum stage gain, as the plate impedance must be matched, so at rest half the supply voltage is across the tube and half across the load resistor. The cathode resistor can be bypassed to reduce or eliminate AC negative feedback and thereby increase gain; maximum gain is about 60 times with a 100 kΩ plate load and a center-biased, bypassed cathode, and higher with a larger plate load.
The stage gain with an unbypassed cathode resistor is approximately

A = μ · R_L / (r_a + R_L + (μ + 1) · R_k)

where A is the voltage gain, μ is the amplification factor of the valve, r_a is the internal plate resistance, R_k is the cathode resistor, and R_L is the parallel combination of R_a (the external plate resistor) and the external load, typically the following stage's grid resistor. If the cathode resistor is bypassed, use R_k = 0.
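As an illustration of the formula above, here is a minimal Python sketch; the plate resistance r_a ≈ 62.5 kΩ is a typical datasheet figure used here as an assumption, and the resistor values are example choices only, not prescriptions from the article.

```python
# Rough common-cathode gain estimate for one triode half (sketch).
MU = 100.0        # amplification factor, typical for a 12AX7
RA = 62.5e3       # internal plate resistance in ohms (assumed typical value)

def stage_gain(r_plate, r_load_next, r_cathode, cathode_bypassed=False):
    """Approximate voltage gain of one triode stage.

    r_plate: external plate (anode) resistor in ohms
    r_load_next: load of the following stage in ohms
    r_cathode: cathode resistor in ohms (ignored when bypassed)
    """
    r_l = r_plate * r_load_next / (r_plate + r_load_next)  # parallel combination
    r_k = 0.0 if cathode_bypassed else r_cathode
    return MU * r_l / (RA + r_l + (MU + 1.0) * r_k)

# Example: 100 kΩ plate resistor, 1 MΩ following grid resistor, 1.5 kΩ cathode resistor
print(stage_gain(100e3, 1e6, 1.5e3, cathode_bypassed=False))  # roughly 30
print(stage_gain(100e3, 1e6, 1.5e3, cathode_bypassed=True))   # roughly 60
```

The two printed values line up with the "about 30" unbypassed and "about 60" bypassed figures quoted in the text.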
The initial “12” in the designator implies a 12-volt heater requirement; however, the tube has a center-tapped heater so it can be used in either 6.3-V or 12.6-V heater circuits.
Similar twin-triode designs
The 12AX7 is the most common member of what eventually became a large family of twin-triode vacuum tubes, manufactured all over the world, all sharing the same pinout (EIA 9A). Most use heaters which can be optionally wired in series (12.6V, 150 mA) or parallel (6.3V, 300 mA). Other tubes, which in some cases can be used interchangeably in an emergency or for different performance characteristics, include the 12AT7, 12AU7, 12AV7, 12AY7, and the low-voltage 12U7, plus many four-digit EIA series dual triodes. They span a wide range of voltage gain and transconductance. Different versions of each were designed for enhanced ruggedness, low microphonics, stability, lifespan, etc.
Those other designs offer lower voltage gain (traded off for higher plate current) than the 12AX7 (which has an amplification factor μ of 100), and are more suitable for high-frequency applications.
Some American designs similar to the 12AX7:
12AD7 (October 10, 1955 - 225mA heater - low hum)
12AT7 (May 20, 1947, dual 6AB4, μ = 60)
12AU7 (October 18, 1946, dual 6C4, μ = 17-20)
12AV7 (February 14, 1950 - dual 6BC4, μ = 37-41)
12AX7 (September 15, 1947 - dual 6DR4, also like octal 6SL7, μ = 100) dual 12AV6 (6AV6)
12AY7 (December 7, 1948 - μ = 44, for audio preamp use)
12AZ7 (March 2, 1951 - 225mA heater, μ = 60)
12DF7 (μ = 100, low microphonics)
12DT7 (μ = 100)
12DW7 (First triode: μ = 100, Second triode: μ = 17)
12U7 (μ = 20, for use in automotive radios on 12-volt plate supply)
Although commonly known in Europe by its Mullard–Philips tube designation of ECC83, other European variations also exist including the low-noise versions 12AX7A, 12AD7, 6681, 7025, and 7729; European versions B339, B759, CV492, CV4004, CV8156, CV8222, ECC803, ECC803S, E2164, and M8137; and the lower-gain low-noise versions 5751 and 6851, intended for avionics equipment.
In European usage special-quality valves of some sort were often indicated by exchanging letters and digits in the name: the E83CC was a special-quality ECC83.
In the US a "W" in the designation, as in 12AX7WA, designates the tube as complying with military grade, higher reliability specifications.
The 'E' in the European designation classifies this as having a 6.3 volt heater, whereas the American designation of 12AX7 classifies it as having a 12.6 volt heater. It can, of course, be wired for operation off either voltage.
Manufacturers
Current-production versions of the 12AX7/ECC83 are available from the following manufacturers:
In Russia: New Sensor, which produces tubes under the Sovtek, Electro-Harmonix, Svetlana, Tung-Sol, and Mullard brands
In Slovakia: JJ Electronic (annual production of approximately two million units)
In China: Hengyang Electronics, Tubes sold under Psvane and TAD brand names. Company owns its independent manufacturing facility in Southern China (former Guiguang tube factory) and recently they acquired former small signal tube manufacturing line from Tianjin Quanzheng factory (TJ Full Music)
Gallery
See also
List of vacuum tubes
References
External links
Duncan's Amps TDSL.
Several tube datasheets.
Reviews of 12ax7 tubes.
The 12AX7 tube.
12AX7/ECC83 page (with datasheet) at JJ Electronic
Vacuum tubes
Guitar amplification tubes
RCA | 12AX7 | [
"Physics"
] | 1,462 | [
"Vacuum tubes",
"Vacuum",
"Matter"
] |
964,328 | https://en.wikipedia.org/wiki/Wolfgang%20Krull | Wolfgang Krull (26 August 1899 – 12 April 1971) was a German mathematician who made fundamental contributions to commutative algebra, introducing concepts that are now central to the subject.
Krull was born and went to school in Baden-Baden. He attended the Universities of Freiburg, Rostock and finally Göttingen from 1919 to 1921, where he earned his doctorate under Alfred Loewy. He worked as an instructor and professor at Freiburg, then spent a decade at the University of Erlangen. In 1939, Krull moved to become chair at the University of Bonn, where he remained for the rest of his life. Wolfgang Krull was a member of the Nazi Party.
His 35 doctoral students include Wilfried Brauer, Karl-Otto Stöhr and Jürgen Neukirch.
See also
Cohen structure theorem
Jacobson ring
Local ring
Prime ideal
Real algebraic geometry
Regular local ring
Valuation ring
Krull dimension
Krull ring
Krull topology
Krull–Azumaya theorem
Krull–Schmidt category
Krull–Schmidt theorem
Krull's intersection theorem
Krull's principal ideal theorem
Krull's separation lemma
Krull's theorem
Publications
References
External links
1899 births
1971 deaths
20th-century German mathematicians
Nazi Party members
Algebraists | Wolfgang Krull | [
"Mathematics"
] | 263 | [
"Algebra",
"Algebraists"
] |
964,342 | https://en.wikipedia.org/wiki/Enterprise%20value | Enterprise value (EV), total enterprise value (TEV), or firm value (FV) is an economic measure reflecting the market value of a business (i.e. as distinct from market price). It is a sum of claims by all claimants: creditors (secured and unsecured) and shareholders (preferred and common). Enterprise value is one of the fundamental metrics used in business valuation, financial analysis, accounting, portfolio analysis, and risk analysis.
Enterprise value is more comprehensive than market capitalization, which only reflects common equity. Importantly, EV reflects the opportunistic nature of business and may change substantially over time because of both external and internal conditions. Therefore, financial analysts often use a comfortable range of EV in their calculations.
EV equation
For detailed information on the valuation process see Valuation (finance).
Enterprise value =
common equity at market value (this line item is also known as "market cap")
+ debt at market value (here debt refers to interest-bearing liabilities, both long-term and short-term)
+ preferred equity at market value
+ unfunded pension liabilities and other debt-deemed provisions
– value of associate companies
– cash and cash equivalents.
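As a minimal illustration of the sum above, here is a Python sketch; the component names and sample figures are hypothetical placeholders (including the minority-interest term discussed later in the article), not data for any real company.

```python
# Enterprise value from its components (all figures in the same currency unit).
def enterprise_value(market_cap, debt, preferred_equity=0.0,
                     minority_interest=0.0, pension_deficit=0.0,
                     associates=0.0, cash=0.0):
    """Sum the claims of all capital providers, net of cash and associates."""
    return (market_cap + debt + preferred_equity + minority_interest
            + pension_deficit - associates - cash)

# Hypothetical example values:
ev = enterprise_value(market_cap=500.0, debt=200.0, preferred_equity=20.0,
                      minority_interest=10.0, pension_deficit=15.0,
                      associates=30.0, cash=65.0)
print(ev)            # 650.0
print(ev / 80.0)     # an EV/EBITDA multiple of 8.125, assuming EBITDA of 80.0
```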
Understanding
A simplified way to understand the EV concept is to envision purchasing an entire business. If you settle with all the security holders, you pay EV. Counterintuitively, increases or decreases in enterprise value do not necessarily correspond to "value creation" or "value destruction". Any acquisition of assets (whether paid for in cash or through share issues) will increase EV, whether or not those assets are productive. Similarly, reductions in capital intensity (for example by reducing working capital) will reduce EV.
EV can be negative if the company, for example, holds abnormally high amounts of cash that are not reflected in the market value of the stock and total capitalization.
All the components are relevant in liquidation analysis, since using absolute priority in bankruptcy all securities senior to the equity have par claims. Generally, also, debt is less liquid than equity, so the "market price" may be significantly different from the price at which an entire debt issue could be purchased. In valuing equities, this approach is more conservative than using the "market price".
Cash is subtracted because it reduces the net cost to a potential purchaser. The effect applies whether the cash is used to issue dividends or to pay down debt.
Value of minority interest is added because it reflects the claim on assets consolidated into the firm in question.
Value of associate companies is subtracted because it reflects the claim on assets consolidated into other firms.
EV should also include such special components as unfunded pension liabilities, employee stock options, environmental provisions, abandonment provisions, and so on since they also reflect claims on the company.
There are certain limitations and traps in using enterprise value. One of them is that EV is a simplified aggregation of a company's financial situation: one unit of additional debt may not be of the same importance as one unit of missing cash.
It can be demonstrated that enterprise value depends on the probability of default (the rating) and works as a "negative growth rate" in the future.
Usage
Because EV is a capital structure-neutral metric, it is useful when comparing companies with diverse capital structures. Price/earnings ratios, for example, will be significantly more volatile in companies that are highly leveraged.
Stock market investors use EV/EBITDA to compare returns between equivalent companies on a risk-adjusted basis. They can then superimpose their own choice of debt levels. In practice, equity investors may have difficulty accurately assessing EV if they do not have access to the market quotations of the company debt. It is not sufficient to substitute the book value of the debt because a) the market interest rates may have changed, and b) the market's perception of the risk of the loan may have changed since the debt was issued. Remember, the point of EV is to neutralize the different risks, and costs of different capital structures.
Buyers of controlling interests in a business use EV to compare returns between businesses, as above. They also use the EV valuation (or a debt free cash free valuation) to determine how much to pay for the whole entity (not just the equity) since the change of control may require debt repayment. They may also want to change the capital structure once in control.
Technical considerations
Data availability
Unlike market capitalization, where both the market price and the outstanding number of shares in issue are readily available and easy to find, it is virtually impossible to calculate an EV without making a number of adjustments to published data, including often subjective estimations of value:
The vast majority of corporate debt is not publicly traded. Most corporate debt is in the form of bank financing, finance leases and other forms of debt for which there is no market price.
Associates and minority interests are stated at historical book values in the accounts, which may be very different from their market values.
Unfunded pension liabilities rely on a variety of actuarial assumptions and represent an estimate of the outstanding liability, not a true “market” value.
Public data for certain key inputs of EV, such as cash balances, debt levels and provisions are only published infrequently (often only once a year in the annual report & accounts of the company).
Published accounts are only disclosed weeks or months after the year-end date, meaning that the information disclosed is already out of date.
In practice, EV calculations rely on reasonable estimates of the market value of these components. For example, in many professional valuations:
Unfunded pension liabilities are valued at face value as set out in notes to the latest available accounts.
Debt that is not publicly traded is usually taken at face value, unless the company is highly geared (in which case a more sophisticated analysis is required).
Associates & minority interests are usually valued either at book value or as a multiple of their earnings.
Avoiding temporal mismatches
When using valuation multiples such as EV/EBITDA and EV/EBIT, the numerator should correspond to the denominator. In other words, the profitability metric in the denominator should be available to all stakeholders represented in the numerator. The EV should, therefore, correspond to the market value of the assets that were used to generate the profits in question, excluding assets acquired (and including assets disposed) during a different financial reporting period. This requires restating EV for any mergers and acquisitions (whether paid in cash or equity), significant capital investments or significant changes in working capital occurring after or during the reporting period being examined. Ideally, multiples should be calculated using the market value of the weighted average capital employed of the company during the comparable financial period.
When calculating multiples over different time periods (e.g. historic multiples vs forward multiples), EV should be adjusted to reflect the weighted average invested capital of the company in each period.
See also
DCF, discounted cash flow method of valuation
Capital structure
WACC, weighted average cost of capital
Social accounting
Residual income valuation
Notes
References
External links
Investopedia Video: Introduction To Enterprise Value
Mathematical finance
Fundamental analysis | Enterprise value | [
"Mathematics"
] | 1,459 | [
"Applied mathematics",
"Mathematical finance"
] |
964,378 | https://en.wikipedia.org/wiki/Ramachandran%20plot | In biochemistry, a Ramachandran plot (also known as a Rama plot, a Ramachandran diagram or a [φ,ψ] plot), originally developed in 1963 by G. N. Ramachandran, C. Ramakrishnan, and V. Sasisekharan, is a way to visualize energetically allowed regions for backbone dihedral angles ( also called as torsional angles , phi and psi angles ) ψ against φ of amino acid residues in protein structure. The figure on the left illustrates the definition of the φ and ψ backbone dihedral angles (called φ and φ' by Ramachandran). The ω angle at the peptide bond is normally 180°, since the partial-double-bond character keeps the peptide bond planar. The figure in the top right shows the allowed φ,ψ backbone conformational regions from the Ramachandran et al. 1963 and 1968 hard-sphere calculations: full radius in solid outline, reduced radius in dashed, and relaxed tau (N-Cα-C) angle in dotted lines. Because dihedral angle values are circular and 0° is the same as 360°, the edges of the Ramachandran plot "wrap" right-to-left and bottom-to-top. For instance, the small strip of allowed values along the lower-left edge of the plot are a continuation of the large, extended-chain region at upper left.
Uses
A Ramachandran plot can be used in two somewhat different ways. One is to show in theory which values, or conformations, of the ψ and φ angles are possible for an amino-acid residue in a protein (as at top right). A second is to show the empirical distribution of datapoints observed in a single structure (as at right, here), as used for structure validation, or else in a database of many structures (as in the lower 3 plots at left). It is also used to study drug–ligand interactions and is helpful in the pharmaceutical industry. Either case is usually shown against outlines for the theoretically favored regions.
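As a minimal sketch of the second usage, the snippet below scatter-plots φ,ψ pairs with matplotlib; the angle values are made-up placeholders standing in for dihedrals extracted from a real structure, and the axis conventions simply follow the description above.

```python
import matplotlib.pyplot as plt

# Hypothetical (phi, psi) pairs in degrees; in practice these would be one
# point per residue, extracted from a PDB file with a structural-biology tool.
phi_psi = [(-60, -45), (-65, -40), (-120, 135), (-140, 150), (60, 45)]

phi = [p for p, _ in phi_psi]
psi = [q for _, q in phi_psi]

plt.scatter(phi, psi, s=10)
plt.xlim(-180, 180)          # dihedral angles are circular: -180 deg == +180 deg
plt.ylim(-180, 180)
plt.xlabel("phi (degrees)")
plt.ylabel("psi (degrees)")
plt.title("Ramachandran plot (sketch)")
plt.show()
```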
Amino-acid preferences
One might expect that larger side chains would result in more restrictions and consequently a smaller allowable region in the Ramachandran plot, but the effect of side chains is small. In practice, the major effect seen is that of the presence or absence of the methylene group at Cβ. Glycine has only a hydrogen atom for its side chain, with a much smaller van der Waals radius than the CH3, CH2, or CH group that starts the side chain of all other amino acids. Hence it is least restricted, and this is apparent in the Ramachandran plot for glycine (see Gly plot in gallery) for which the allowable area is considerably larger. In contrast, the Ramachandran plot for proline, with its 5-membered-ring side chain connecting Cα to backbone N, shows a limited number of possible combinations of ψ and φ (see Pro plot in gallery). The residue preceding proline ("pre-proline") also has limited combinations compared to the general case.
More recent updates
The first Ramachandran plot was calculated just after the first protein structure at atomic resolution was determined (myoglobin, in 1960), although the conclusions were based on small-molecule crystallography of short peptides. Now, many decades later, there are tens of thousands of high-resolution protein structures determined by X-ray crystallography and deposited in the Protein Data Bank (PDB). Many studies have taken advantage of this data to produce more detailed and accurate φ,ψ plots (e.g., Morris et al. 1992; Kleywegt & Jones 1996; Hooft et al. 1997; Hovmöller et al. 2002; Lovell et al. 2003; Anderson et al. 2005; Ting et al. 2010).
The four figures below show the datapoints from a large set of high-resolution structures and contours for favored and for allowed conformational regions for the general case (all amino acids except Gly, Pro, and pre-Pro), for Gly, and for Pro. The most common regions are labeled: α for α helix, Lα for left-handed helix, β for β-sheet, and ppII for polyproline II. Such a clustering is alternatively described in the ABEGO system, where each letter stands for α (and 310) helix, right-handed β sheets (and extended structures), left-handed helices, left-handed sheets, and finally unplottable cis peptide bonds sometimes seen with proline; it has been used in the classification of motifs and more recently for designing proteins.
While the Ramachandran plot has been a textbook resource for explaining the structural behavior of peptide bond, an exhaustive exploration of how a peptide behaves in every region of the Ramachandran plot was only recently published (Mannige 2017).
The Molecular Biophysics Unit at Indian Institute of Science celebrated 50 years of Ramachandran Map by organizing International Conference on Biomolecular Forms and Functions from 8–11 January 2013.
Related conventions
One can also plot the dihedral angles in polysaccharides (e.g. with CARP).
Gallery
Software
Web-based Structural Analysis tool for any uploaded PDB file, producing Ramachandran plots, computing dihedral angles and extracting sequence from PDB
Web-based tool showing Ramachandran plot of any PDB entry
MolProbity web service that produces Ramachandran plots and other validation of any PDB-format file
SAVES (Structure Analysis and Verification) — uses WHATCHECK, PROCHECK, and does its own internal Ramachandran Plot
STING
Pymol with the DynoPlot extension
VMD, distributed with dynamic Ramachandran plot plugin
WHAT CHECK, the stand-alone validation routines from the WHAT IF software
UCSF Chimera, found under the Model Panel.
Sirius
Swiss PDB Viewer
TALOS
Zeus molecular viewer — found under "Tools" menu, high quality plots with regional contours
Procheck
Neighbor-Dependent and Neighbor-Independent Ramachandran Probability Distributions
See also PDB for a list of similar software.
References
Further reading
, available on-line at Anatax
External links
DynoPlot in PyMOL wiki
Link to Ramachandran Plot Map of alpha-helix and beta-sheet locations
Link to Ramachandran plot calculated from protein structures determined by X-ray crystallography compared to the original Ramachan.
Proteopedia Ramachandran Plot
Biochemistry methods
Plots (graphics) | Ramachandran plot | [
"Chemistry",
"Biology"
] | 1,353 | [
"Biochemistry methods",
"Biochemistry"
] |
964,428 | https://en.wikipedia.org/wiki/Weighing%20scale | A scale or balance is a device used to measure weight or mass. These are also known as mass scales, weight scales, mass balances, massometers, and weight balances.
The traditional scale consists of two plates or bowls suspended at equal distances from a fulcrum. One plate holds an object of unknown mass (or weight), while objects of known mass or weight, called weights, are added to the other plate until mechanical equilibrium is achieved and the plates level off, which happens when the masses on the two plates are equal. The perfect scale rests at neutral. A spring scale will make use of a spring of known stiffness to determine mass (or weight). Suspending a certain mass will extend the spring by a certain amount depending on the spring's stiffness (or spring constant). The heavier the object, the more the spring stretches, as described in Hooke's law. Other types of scales making use of different physical principles also exist.
Some scales can be calibrated to read in units of force (weight) such as newtons instead of units of mass such as kilograms. Scales and balances are widely used in commerce, as many products are sold and packaged by mass.
Pan balance
History
The balance scale is such a simple device that its usage likely far predates the evidence. What has allowed archaeologists to link artifacts to weighing scales are the stones for determining absolute mass. The balance scale itself was probably used to determine relative mass long before absolute mass.
The oldest attested evidence for the existence of weighing scales dates to the Fourth Dynasty of Egypt, with deben balance weights from the reign of Sneferu (c. 2600 BC) having been excavated, though earlier usage has been proposed. Carved stones bearing marks denoting mass and the Egyptian hieroglyphic symbol for gold have been discovered, which suggests that Egyptian merchants had been using an established system of mass measurement to catalog gold shipments or gold mine yields. Although no actual scales from this era have survived, many sets of weighing stones as well as murals depicting the use of balance scales suggest widespread usage.
Examples, dating , have also been found in the Indus River valley. Uniform, polished stone cubes discovered in early settlements were probably used as mass-setting stones in balance scales. Although the cubes bear no markings, their masses are multiples of a common denominator. The cubes are made of many different kinds of stones with varying densities. Clearly their mass, not their size or other characteristics, was a factor in sculpting these cubes.
In China, the earliest weighing balance excavated was from a tomb of the State of Chu of the Chinese Warring States Period dating back to the 3rd to 4th century BC in Mount Zuojiagong near Changsha, Hunan. The balance was made of wood and used bronze masses.
Variations on the balance scale, including devices like the cheap and inaccurate bismar (unequal-armed scales), began to see common usage by c. 400 BC by many small merchants and their customers. A plethora of scale varieties each boasting advantages and improvements over one another appear throughout recorded history, with such great inventors as Leonardo da Vinci lending a personal hand in their development.
Even with all the advances in weighing scale design and development, all scales until the seventeenth century AD were variations on the balance scale. The standardization of the weights used – and ensuring traders used the correct weights – was a considerable preoccupation of governments throughout this time.
The original form of a balance consisted of a beam with a fulcrum at its center. For highest accuracy, the fulcrum would consist of a sharp V-shaped pivot seated in a shallower V-shaped bearing. To determine the mass of the object, a combination of reference masses was hung on one end of the beam while the object of unknown mass was hung on the other end (see balance and steelyard balance). For high precision work, such as empirical chemistry, the center beam balance is still one of the most accurate technologies available, and is commonly used for calibrating test masses.
However, bronze fragments discovered in central Germany and Italy had been used during the Bronze Age as an early form of currency. In the same time period, merchants had used standard weights of equivalent value between 8 and 10.5 grams from Great Britain to Mesopotamia.
Mechanical balances
The balance (also balance scale, beam balance and laboratory balance) was the first mass-measuring instrument invented. In its traditional form, it consists of a pivoted horizontal lever with arms of equal length (the beam, or tron) and a weighing pan suspended from each arm (hence the plural name "scales" for a weighing instrument). The unknown mass is placed in one pan and standard masses are added to the other pan until the beam is as close to equilibrium as possible. In precision balances, a more accurate determination of the mass is given by the position of a sliding mass moved along a graduated scale. A decimal balance uses a lever in which the arm for the weights is 10 times longer than the arm for the weighed object, so that much lighter weights may be used to weigh a heavy object. Similarly, a centesimal balance uses arms in a ratio of 1:100.
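To make the lever-ratio statement concrete, here is a short worked equation (a sketch using the moment balance about the fulcrum; the 10:1 arm lengths are simply the decimal-balance case described above, with symbols introduced here for illustration):

```latex
% Moment balance about the fulcrum of a decimal balance:
% reference mass m_w on the long arm L_w, load m_o on the short arm L_o.
\[
  m_w\,g\,L_w = m_o\,g\,L_o
  \quad\Longrightarrow\quad
  m_o = m_w\,\frac{L_w}{L_o} = 10\,m_w \qquad (L_w = 10\,L_o),
\]
% so a 1 kg reference weight balances a 10 kg load.
```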
Unlike spring-based scales, balances are used for the precision measurement of mass as their accuracy is not affected by variations in the local gravitational field. (On Earth, for example, these can amount to ±0.5% between locations.) A change in the strength of the gravitational field caused by moving the balance does not change the measured mass, because the moments of force on either side of the center balanced beam are affected equally. A center beam balance will render an accurate measurement of mass at any location experiencing a constant gravity or acceleration.
Very precise measurements are achieved by ensuring that the balance's fulcrum is essentially friction-free (a knife edge is the traditional solution), by attaching a pointer to the beam which amplifies any deviation from a balance position; and finally by using the lever principle, which allows fractional masses to be applied by movement of a small mass along the measuring arm of the beam, as described above. For greatest accuracy, there needs to be an allowance for the buoyancy in air, whose effect depends on the densities of the masses involved.
To reduce the need for large reference masses, an off-center beam can be used. A balance with an off-center beam can be almost as accurate as a scale with a center beam, but the off-center beam requires special reference masses and cannot be intrinsically checked for accuracy by simply swapping the contents of the pans as a center-beam balance can. To reduce the need for small graduated reference masses, a sliding weight called a poise can be installed so that it can be positioned along a calibrated scale. A poise adds further intricacies to the calibration procedure, since the exact mass of the poise must be adjusted to the exact lever ratio of the beam.
For greater convenience in placing large and awkward loads, a platform can be floated on a cantilever beam system which brings the proportional force to a noseiron bearing; this pulls on a steelyard rod to transmit the reduced force to a conveniently sized beam.
One still sees this design in portable beam balances of 500 kg capacity which are commonly used in harsh environments without electricity, as well as in the lighter duty mechanical bathroom scale (which actually uses a spring scale, internally). The additional pivots and bearings all reduce the accuracy and complicate calibration; the float system must be corrected for corner errors before the span is corrected by adjusting the balance beam and poise.
Roberval balance
In 1669 the Frenchman Gilles Personne de Roberval presented a new kind of balance scale to the French Academy of Sciences. This scale consisted of a pair of vertical columns separated by a pair of equal-length arms and pivoting in the center of each arm from a central vertical column, creating a parallelogram. From the side of each vertical column a peg extended. To the amazement of observers, no matter where Roberval hung two equal weight along the peg, the scale still balanced. In this sense, the scale was revolutionary: it evolved into the more-commonly encountered form consisting of two pans placed on vertical column located above the fulcrum and the parallelogram below them. The advantage of the Roberval design is that no matter where equal weights are placed in the pans, the scale will still balance.
Further developments have included a "gear balance" in which the parallelogram is replaced by any odd number of interlocking gears greater than one, with alternating gears of the same size and with the central gear fixed to a stand and the outside gears fixed to pans, as well as the "sprocket gear balance" consisting of a bicycle-type chain looped around an odd number of sprockets with the central one fixed and the outermost two free to pivot and attached to a pan.
Because it has more moving joints which add friction, the Roberval balance is consistently less accurate than the traditional beam balance, but for many purposes this is compensated for by its usability.
Torsion balance
The torsion balance is one of the most mechanically accurate of analog balances. Pharmacy schools in the U.S. still teach how to use torsion balances. It utilizes pans like a traditional balance that lie on top of a mechanical chamber which bases measurements on the amount of twisting of a wire or fiber inside the chamber. The scale must still use a calibration weight to compare against, and can weigh objects greater than 120 mg to within a margin of error of ±7 mg. Many microbalances and ultra-microbalances that weigh fractional gram values are torsion balances. A common fiber type is quartz crystal.
Electronic devices
Microbalance
A microbalance (also called an ultramicrobalance, or nanobalance) is an instrument capable of making precise measurements of the mass of objects of relatively small mass: on the order of a millionth of a gram and below.
Analytical balance
An analytical balance is a class of balance designed to measure small mass in the sub-milligram range. The measuring pan of an analytical balance (0.1 mg or better) is inside a transparent enclosure with doors so that dust does not collect and so any air currents in the room do not affect the balance's operation. This enclosure is often called a draft shield. The use of a mechanically vented balance safety enclosure, which has uniquely designed acrylic airfoils, allows a smooth turbulence-free airflow that prevents balance fluctuation and the measure of mass down to 1 μg without fluctuations or loss of product. Also, the sample must be at room temperature to prevent natural convection from forming air currents inside the enclosure from causing an error in reading. Single-pan mechanical substitution balances maintain consistent response throughout the useful capacity, which is achieved by maintaining a constant load on the balance beam and thus the fulcrum by subtracting mass on the same side of the beam to which the sample is added.
Electronic analytical scales measure the force needed to counter the mass being measured rather than using actual masses. As such they must have calibration adjustments made to compensate for gravitational differences. They use an electromagnet to generate a force to counter the sample being measured and output the result by measuring the force needed to achieve balance. Such a measurement device is called an electromagnetic force restoration sensor.
Pendulum balance scales
Pendulum type scales do not use springs. These designs use pendulums and operate as a balance that is unaffected by differences in gravity. An example of the application of this design is the line of scales made by the Toledo Scale Company.
Programmable scales
A programmable scale has a programmable logic controller in it, allowing it to be programmed for various applications such as batching, labeling, filling (with check weight function), truck scales, and more.
Another important function is counting, e.g. counting small parts in larger quantities during the annual stock-taking. Counting scales (which can also perform simple weighing) range in capacity from milligrams to tonnes.
Symbolism
The scales (specifically, a two-pan, beam balance) are one of the traditional symbols of justice, as wielded by statues of Lady Justice. This corresponds to the use in a metaphor of matters being "held in the balance". It has its origins in ancient Egypt.
Scales also are widely used as a symbol of finance, commerce, or trade, in which they have played a traditional, vital role since ancient times. For instance, balance scales are depicted in the seal of the U.S. Department of the Treasury and the Federal Trade Commission.
Scales are also the symbol for the astrological sign Libra.
Scales (specifically, a two-pan, beam balance in a state of equal balance) are the traditional symbol of Pyrrhonism indicating the equal balance of arguments used in inducing epoche.
Force-measuring (weight) scales
History
Although records dating to the 1700s refer to spring scales for measuring mass, the earliest design for such a device dates to 1770 and credits Richard Salter, an early scale-maker. Spring scales came into wide usage in the United Kingdom after 1840 when R. W. Winfield developed the candlestick scale for weighing letters and packages, required after the introduction of the Uniform Penny Post. Postal workers could work more quickly with spring scales than balance scales because they could be read instantaneously and did not have to be carefully balanced with each measurement.
By the 1940s, various electronic devices were being attached to these designs to make readings more accurate. Load cells – transducers that convert force to an electrical signal – have their beginnings as early as the late nineteenth century, but it was not until the late twentieth century that their widespread usage became economically and technologically viable.
Mechanical scales
A mechanical scale or balance is a weighing device used to measure the mass, force exertion, tension, or resistance of an object without the need for a power supply. Types of mechanical scales include decimal balances, spring scales, hanging scales, triple beam balances, and force gauges.
Spring scales
A spring scale measures mass by reporting the distance that a spring deflects under a load. This contrasts to a balance, which compares the torque on the arm due to a sample weight to the torque on the arm due to a standard reference mass using a horizontal lever. Spring scales measure force, which is the tension force of constraint acting on an object, opposing the local force of gravity. They are usually calibrated so that measured force translates to mass at earth's gravity. The object to be weighed can be simply hung from the spring or set on a pivot and bearing platform.
In a spring scale, the spring either stretches (as in a hanging scale in the produce department of a grocery store) or compresses (as in a simple bathroom scale). By Hooke's law, every spring has a proportionality constant that relates how hard it is pulled to how far it stretches. Weighing scales use a spring with a known spring constant (see Hooke's law) and measure the displacement of the spring by any variety of mechanisms to produce an estimate of the gravitational force applied by the object. Rack and pinion mechanisms are often used to convert the linear spring motion to a dial reading.
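To make the Hooke's-law relationship above concrete, the short sketch below converts a measured spring deflection into an estimated mass; the spring constant, deflection, and gravity figures are illustrative assumptions rather than the parameters of any particular scale.

```python
# Minimal sketch: estimating mass from spring deflection via Hooke's law.
# All numeric values are illustrative assumptions, not real scale data.

def mass_from_deflection(spring_constant_n_per_m: float,
                         deflection_m: float,
                         local_gravity_m_s2: float = 9.80665) -> float:
    """Return the estimated mass in kilograms.

    Hooke's law gives the spring force F = k * x; at equilibrium this
    balances the weight m * g, so m = k * x / g.
    """
    force_n = spring_constant_n_per_m * deflection_m  # F = k * x
    return force_n / local_gravity_m_s2               # m = F / g

# Example: an assumed 2000 N/m spring deflected by 5 mm under a load.
print(round(mass_from_deflection(2000.0, 0.005), 4), "kg")  # -> 1.0197 kg
```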
Spring scales have two sources of error that balances do not: the measured mass varies with the strength of the local gravitational force (by as much as 0.5% at different locations on Earth), and the elasticity of the measurement spring can vary slightly with temperature. With proper manufacturing and setup, however, spring scales can be rated as legal for commerce. To remove the temperature error, a commerce-legal spring scale must either have temperature-compensated springs or be used at a fairly constant temperature. To eliminate the effect of gravity variations, a commerce-legal spring scale must be calibrated where it is used.
Hydraulic or pneumatic scale
It is also common in high-capacity applications such as crane scales to use hydraulic force to sense mass. The test force is applied to a piston or diaphragm and transmitted through hydraulic lines to a dial indicator based on a Bourdon tube or electronic sensor.
Domestic weighing scale
Electronic digital scales display weight as a number, usually on a liquid crystal display (LCD). They are versatile because they may perform calculations on the measurement and transmit it to other digital devices. On a digital scale, the force of the weight causes a spring to deform, and the amount of deformation is measured by one or more transducers called strain gauges. A strain gauge is a conductor whose electrical resistance changes when its length changes. Strain gauges have limited capacity, and larger digital scales may instead use a higher-capacity transducer assembly called a load cell. A voltage is applied to the device, and the weight causes the current through it to change. The current is converted to a digital number by an analog-to-digital converter, translated by digital logic to the correct units, and displayed on the display. Usually, the device is run by a microprocessor chip.
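As a rough illustration of the signal chain just described, here is a minimal sketch that takes an assumed strain-gauge bridge voltage through an idealized analog-to-digital converter and a calibration factor to a displayed mass; the ADC resolution, full-scale voltage, zero offset, and counts-per-kilogram values are all hypothetical example numbers, not the specification of any real device.

```python
# Minimal sketch of a digital scale signal chain: strain-gauge bridge voltage
# -> ideal ADC counts -> calibrated mass. All constants are assumptions.

ADC_BITS = 24                # assumed ADC resolution
ADC_FULL_SCALE_V = 0.040     # assumed full-scale bridge output (40 mV)
ZERO_OFFSET_COUNTS = 1200    # assumed tare/zero offset in counts
COUNTS_PER_KG = 167772.16    # assumed calibration factor from a test mass

def adc_counts(bridge_voltage_v: float) -> int:
    """Quantize the bridge voltage into counts with an idealized converter."""
    full_scale_counts = (1 << ADC_BITS) - 1
    ratio = max(0.0, min(1.0, bridge_voltage_v / ADC_FULL_SCALE_V))
    return round(ratio * full_scale_counts)

def displayed_mass_kg(bridge_voltage_v: float) -> float:
    """Convert a bridge voltage to a displayed mass using the calibration."""
    net_counts = adc_counts(bridge_voltage_v) - ZERO_OFFSET_COUNTS
    return net_counts / COUNTS_PER_KG

# Example: an assumed 10 mV bridge output from the load cell.
print(round(displayed_mass_kg(0.010), 3), "kg")  # -> about 25 kg
```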
Digital bathroom scale
A digital bathroom scale is a scale on the floor which a person stands on. The weight is shown on an LED or LCD display. The digital electronics may do more than just display weight; they may also estimate body fat, BMI, lean mass, muscle mass, and water ratio. Some modern bathroom scales are wirelessly or cellularly connected and have features like smartphone integration, cloud storage, and fitness tracking. They are usually powered by a button cell or by AA or AAA batteries.
Digital kitchen scale
Digital kitchen scales are used for weighing food in a kitchen during cooking. These are usually lightweight and compact.
Strain gauge scale
In electronic versions of spring scales, the deflection of a beam supporting the unknown mass is measured using a strain gauge, which is a length-sensitive electrical resistance. The capacity of such devices is only limited by the resistance of the beam to deflection. The results from several supporting locations may be added electronically, so this technique is suitable for determining the mass of very heavy objects, such as trucks and rail cars, and is used in a modern weighbridge.
Supermarket and other retail scale
These scales are used in the modern bakery, grocery, delicatessen, seafood, meat, produce and other perishable goods departments. Supermarket scales can print labels and receipts, mark mass and count, unit price, total price and in some cases tare. Some modern supermarket scales print an RFID tag that can be used to track the item for tampering or returns. In most cases, these types of scales have a sealed calibration so that the reading on the display is correct and cannot be tampered with. In the US, the scales are certified by the National Type Evaluation Program (NTEP), in South Africa by the South African Bureau of Standards, in Australia, they are certified by the National Measurement Institute (NMI) and in the UK by the International Organization of Legal Metrology.
Industrial weighing scale
An industrial weighing scale is a device that measures the weight or mass of objects in various industries. It can range from small bench scales to large weighbridges, and it can have different features and capacities. Industrial weighing scales are used for quality control, inventory management, and trade purposes.
There are many kinds of industrial weighing scales that are used for different purposes and applications. Some of the common types are:
Weighbridges: Large scales that can weigh trucks, lorries, containers, and other heavy-duty vehicles. They are used in industries such as manufacturing, shipping, mining, and agriculture.
Container stacker scale: A specialized weighing system designed for accurately measuring the weight of shipping containers. It is typically integrated into the equipment used for loading and unloading containers, such as container handlers or stacker cranes, and provides real-time weight measurements, allowing logistics professionals to ensure that each container is loaded within the specified weight limits. Container stacker scales are used in ports, shipping, and logistics.
Forklift scale: A weighing system built into a forklift truck. It allows loads to be weighed while they are being lifted and transported by the forklift, eliminating the need for separate weighing operations and reducing the time and labor required for material handling. Forklift scales are used in industries such as manufacturing, logistics, and shipping.
Material handler scale: A weighing system integrated into a material handler machine, such as a grapple or a magnet. It allows materials to be weighed accurately and efficiently while they are being moved, unloaded, or loaded, and can transfer the weighing information to a cloud service or an ERP system for real-time monitoring and management of material flow. Material handler scales are used in industries such as scrap, recycling, waste, and ports and harbors.
Pallet jack scale: A device that combines a pallet jack and a weighing scale, allowing pallets to be weighed and moved at the same time, saving time and labor. Pallet jack scales are used in industries such as manufacturing, logistics, and shipping.
Crane scale: A device that measures the weight or mass of objects suspended from a crane. It has a hook at the bottom and a large display that allows distant viewing. Crane scales are used for industrial applications such as manufacturing, shipping, mining, and recycling.
Wheel loader scale: A system that measures the weight of the materials lifted by a wheel loader, a type of heavy machinery used for moving large amounts of earth, sand, gravel, or other materials. A wheel loader scale can improve the efficiency and accuracy of loading operations, as well as inventory management and safety. It typically consists of a hydraulic sensor, a display unit, and a data management system: the hydraulic sensor is installed in the wheel loader and detects the pressure changes caused by the load; the display unit shows the weight information to the operator and allows them to set target loads, select products and customers, and export data; and the data management system can store, analyze, and transmit the weight data to other devices or platforms.
Testing and certification
Most countries regulate the design and servicing of scales used for commerce. For example, in the European Union weighing instruments are subject to the 2014/31/EU and 2014/32/EU directives. A conformity assessment procedure is carried out before placing the instrument on the market, and the instruments are verified after a given period of time in member states of the European Union. This has tended to cause scale technology to lag behind other technologies because expensive regulatory hurdles are involved in introducing new designs. Nevertheless, there has been a trend toward "digital load cells", which are actually strain-gauge cells with dedicated analog converters and networking built into the cell itself. Such designs have reduced the service problems inherent in combining and transmitting a number of 20 millivolt signals in hostile environments.
Government regulation generally requires periodic inspections by licensed technicians, using masses whose calibration is traceable to an approved laboratory. Scales intended for non-trade use, such as those used in bathrooms, doctor's offices, kitchens (portion control), and price estimation (but not official price determination), may be produced, but must by law be labelled "Not Legal for Trade" to ensure that they are not re-purposed in a way that jeopardizes commercial interests. In the United States, the document describing how scales must be designed, installed, and used for commercial purposes is NIST Handbook 44. Legal for Trade (LFT) certification usually approves readability by testing the repeatability of measurements to ensure a maximum margin of error of 10%.
Because gravity varies by over 0.5% over the surface of the earth, the distinction between force due to gravity and mass is relevant for accurate calibration of scales for commercial purposes. Usually, the goal is to measure the mass of the sample rather than its force due to gravity at that particular location.
Traditional mechanical balance-beam scales intrinsically measured mass. But ordinary electronic scales intrinsically measure the gravitational force between the sample and the earth, i.e. the weight of the sample, which varies with location. So such a scale has to be re-calibrated after installation, for that specific location, in order to obtain an accurate indication of mass.
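A brief numerical sketch of the re-calibration just described: a force-measuring scale calibrated at one site will misreport mass at a site with different local gravity unless its indication is rescaled. The two gravity values below are typical approximate figures chosen only for illustration.

```python
# Minimal sketch: correcting a force-based scale reading for local gravity.
# Gravity values are approximate and assumed, for illustration only.

G_CALIBRATION_SITE = 9.819    # m/s^2, assumed gravity where the scale was calibrated
G_INSTALLATION_SITE = 9.780   # m/s^2, assumed gravity where the scale is installed

def corrected_mass_kg(indicated_mass_kg: float) -> float:
    """Rescale an indicated mass for a change in local gravity.

    The scale really measures force F = m * g_installation but converts it
    to mass using g_calibration, so the indication is off by the factor
    g_installation / g_calibration; inverting that factor corrects it.
    """
    return indicated_mass_kg * G_CALIBRATION_SITE / G_INSTALLATION_SITE

# Example: a nominal 100 kg indication at the installation site.
print(round(corrected_mass_kg(100.0), 3), "kg")  # -> about 100.399 kg
```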
Sources of error
Some of the sources of error in weighing are:
Buoyancy – Objects in air develop a buoyancy force that is directly proportional to the volume of air displaced. Differences in air density due to barometric pressure and temperature create errors (a worked correction is sketched after this list).
Error in the mass of reference weight
Air gusts, even small ones, which push the scale up or down
Friction in the moving components, which causes the scale to settle in a different configuration than it would at a frictionless equilibrium
Settling airborne dust contributing to the weight
Mis-calibration over time, due to drift in the circuit's accuracy, or temperature change
Mis-aligned mechanical components due to thermal expansion or contraction of components
Magnetic fields acting on ferrous components
Forces from electrostatic fields, for example, from feet shuffled on carpets on a dry day
Chemical reactivity between air and the substance being weighed (or the balance itself, in the form of corrosion)
Condensation of atmospheric water on cold items
Evaporation of water from wet items
Convection of air from hot or cold items
Gravitational differences for a scale which measures force, but not for a balance.
Vibration and seismic disturbances
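As a worked illustration of the buoyancy item above, the sketch below applies the conventional air-buoyancy correction, in which the indicated mass is adjusted by the ratio of air-to-reference-weight and air-to-sample density terms; the density figures are typical assumed values, not measured data.

```python
# Minimal sketch of the conventional air-buoyancy correction for weighing.
# Density values are typical assumed figures, for illustration only.

AIR_DENSITY = 1.2                  # kg/m^3, assumed laboratory air
REFERENCE_WEIGHT_DENSITY = 8000.0  # kg/m^3, typical stainless-steel standard

def buoyancy_corrected_mass(reading_kg: float, sample_density_kg_m3: float) -> float:
    """Correct a balance reading for air buoyancy.

    m_true ~= m_reading * (1 - rho_air / rho_reference)
                        / (1 - rho_air / rho_sample)
    """
    numerator = 1.0 - AIR_DENSITY / REFERENCE_WEIGHT_DENSITY
    denominator = 1.0 - AIR_DENSITY / sample_density_kg_m3
    return reading_kg * numerator / denominator

# Example: a 1 kg reading for a water-like sample (assumed 1000 kg/m^3).
print(round(buoyancy_corrected_mass(1.0, 1000.0), 6), "kg")  # -> ~1.001051 kg
```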
Hybrid spring and balance scales
Elastic arm scale
In 2014 the concept of a hybrid scale was introduced: the elastically deformable arm scale, which combines a spring scale and a beam balance, exploiting simultaneously both the equilibrium and the deformation principles. In this scale, the rigid arms of a classical beam balance (for example a steelyard) are replaced with a flexible elastic rod in an inclined frictionless sliding sleeve. The rod can reach a unique sliding equilibrium when two vertical dead loads (or masses) are applied at its edges. Equilibrium, which would be impossible with rigid arms, is guaranteed because configurational forces develop at the two edges of the sleeve as a consequence of both the free sliding condition and the nonlinear kinematics of the elastic rod. This mass measuring device can also work without a counterweight.
See also
Ampere balance
Apparent weight
Auncel
Combination weigher
Digital spoon scale
Digital Weight Indicator
Evans balance
Faraday balance
Gouy balance
Kibble balance, also known as a Watt balance
Mass versus weight
Multihead weigher
Nutrition scale
On-board scale, an on-vehicle truck scale
Themis
Weigh house - historic public building for the weighing of goods
Weigh lock - for weighing canal barges
Weigh station, a checkpoint to inspect vehicular weights, usually equipped with a truck scale (weigh bridge)
References
External links
A comprehensive review of the history and contemporary state of weighing machines.
National Conference on Weights and Measures, NIST Handbook 44, Specifications, Tolerances, And Other Technical Requirements for Weighing and Measuring Devices, 2003
Analytical Balance article at ChemLab
"The Precious Necklace Regarding Weigh Scales" is an 18th-century manuscript by Abd al-Rahman al-Jabarti about the "design and operation" of scales
Ancient Egyptian technology
Professional symbols
Weighing instruments | Weighing scale | [
"Physics",
"Technology",
"Engineering"
] | 5,638 | [
"Weighing instruments",
"Mass",
"Matter",
"Measuring instruments"
] |
964,578 | https://en.wikipedia.org/wiki/Rossmann%20fold | The Rossmann fold is a tertiary fold found in proteins that bind nucleotides, such as enzyme cofactors FAD, NAD+, and NADP+. This fold is composed of alternating beta strands and alpha helical segments where the beta strands are hydrogen bonded to each other forming an extended beta sheet and the alpha helices surround both faces of the sheet to produce a three-layered sandwich. The classical Rossmann fold contains six beta strands whereas Rossmann-like folds, sometimes referred to as Rossmannoid folds, contain only five strands. The initial beta-alpha-beta (bab) fold is the most conserved segment of the Rossmann fold. The motif is named after Michael Rossmann who first noticed this structural motif in the enzyme lactate dehydrogenase in 1970 and who later observed that this was a frequently occurring motif in nucleotide binding proteins.
Rossmann and Rossmannoid fold proteins are extremely common. They make up 20% of proteins with known structures in the Protein Data Bank, and are found in more than 38% of KEGG metabolic pathways. The fold is extremely versatile in that it can accommodate a wide range of ligands. Rossmann-fold proteins can function as metabolic enzymes, DNA/RNA-binding proteins, and regulatory proteins in addition to their traditional cofactor-binding role.
History
The Rossmann fold was first described by Dr. Michael Rossmann and coworkers in 1974. He was the first to deduce the structure of lactate dehydrogenase and characterized the structural motif within this enzyme which would later be called the Rossmann fold. It was subsequently found that most dehydrogenases that utilize NAD or NADP contain this same structurally conserved Rossmann fold motif.
In 1989, Israel Hanukoglu from the Weizmann Institute of Science discovered that the consensus sequence for NADP+ binding site in some enzymes that utilize NADP+ differs from the NAD+ binding motif. This discovery was used to re-engineer coenzyme specificities of enzymes.
Structure
The Rossmann fold is composed of six parallel beta strands that form an extended beta sheet. The first three strands are connected by α- helices resulting in a beta-alpha-beta-alpha-beta structure. This pattern is duplicated once to produce an inverted tandem repeat containing six strands. Overall, the strands are arranged in the order of 321456 (1 = N-terminal, 6 = C-terminal). Five stranded Rossmann-like folds are arranged in the order 32145. The overall tertiary structure of the fold resembles a three-layered sandwich wherein the filling is composed of an extended beta sheet and the two slices of bread are formed by the connecting parallel alpha-helices.
One of the features of the Rossmann fold is its co-factor binding specificity. Through the analysis of four NADH-binding enzymes, it was found that in all four enzymes the nucleotide co-factor entailed the same conformation and orientation with respect to the polypeptide chain.
The fold may contain additional strands joined by short helices or coils. The most conserved segment of Rossmann folds is the first beta-alpha-beta segment. The phosphate-binding loop is located between the first beta-strand and the first alpha-helix. On the tip of the second beta-strand, there is a conserved aspartate residue that is involved in ribose binding. Since this segment is in contact with the ADP portion of dinucleotides such as FAD, NAD and NADP, it is also called an "ADP-binding beta-alpha-beta fold".
Function
The function of the Rossmann fold in enzymes is to bind nucleotide cofactors. It also often contributes to substrate binding.
Metabolic enzymes normally have one specific function, and in the case of UDP-glucose 6-dehydrogenase, the primary function is to catalyze the two step NAD(+)-dependent oxidation of UDP-glucose into UDP-glucuronic acid. The N- and C-terminal domains of UgdG share structural features with ancient mitochondrial ribonucleases named MAR. MARs are present in lower eukaryotic microorganisms, have a Rossmannoid-fold and belong to the isochorismatase superfamily. This observation reinforces that the Rossmann structural motifs found in NAD(+)-dependent dehydrogenases can have a dual function working as a nucleotide cofactor binding domain and as a ribonuclease.
Evolution
Rossman and Rossmannoids
The evolutionary relationship between the Rossmann fold and Rossmann-like folds is unclear. These folds are referred to as Rossmannoids. It has been hypothesized that all these folds, including the Rossmann fold, originated from a single common ancestral fold that had nucleotide-binding capabilities in addition to non-specific catalytic activity.
However, an analysis of the PDB finds evidence of convergent evolution with 156 separate H-groups of demonstrable homology, from which 123 X-groups of probable homology can be found. The groups have been integrated into ECOD.
Conventional Rossman group
Phylogenetic analysis of the NADP-binding enzyme adrenodoxin reductase revealed that, from prokaryotes through metazoa and up to primates, its sequence motif, which differs from that of most FAD- and NAD-binding sites, is strictly conserved.
In many articles and textbooks, a Rossmann fold is defined as a strict repeated series of βαβ structure. Yet, comprehensive examination of the Rossmann folds in many NAD(P) and FAD binding sites revealed that only the first βα structure is strictly conserved. In some enzymes, there may be many loops and several helices (i.e., not a single helix) between the beta strands that form the beta-sheet. These enzymes have a common origin indicated by conserved sequence and structural features, according to Hanukoglu.
The result by Hanukoglu (2017) is corroborated by Medvedev et al. (2020), in the form of an ECOD "H-group" called "Rossmann-related". Even within this group, ECOD describes a wide range of non-nucleotide activities.
References
External links
Proteopedia page on the Rossmann folds
Protein folds
Protein structural motifs
Protein superfamilies | Rossmann fold | [
"Biology"
] | 1,302 | [
"Protein structural motifs",
"Protein superfamilies",
"Protein classification"
] |
964,610 | https://en.wikipedia.org/wiki/Border%20states%20%28Eastern%20Europe%29 | Border states, or European buffer states, were the European nations that won their independence from the Russian Empire after the Bolshevik Revolution of 1917, the Treaty of Brest-Litovsk and ultimately the defeat of the German Empire and Austria-Hungary in World War I. During the interwar period, the nations of Western Europe implemented a border states policy, which aimed at uniting them in protection against the Soviet Union and communist expansionism. The border states were interchangeably Finland, Estonia, Latvia, Lithuania, Poland, Romania and, until their annexation into the Soviet Union, short-lived Belarus and Ukraine.
The policy tended to see the border states as a cordon sanitaire, or buffer states, separating Western Europe from the newly formed Soviet Union. The policy was very successful. At the time, Soviet foreign policy was driven by the Trotskyist idea of permanent revolution, the end goal of which was to spread communism worldwide through perpetual warfare. However, the Soviet advance to the west was halted by Poland, which managed to defeat the Red Army during the Polish–Soviet War. After the war, Polish leader Józef Piłsudski made attempts to unify the border states under a federation called Intermarium, but disputes and different allegiances between and within the group of states prevented such a thing from happening, leaving them more susceptible to possible incursions by their more powerful neighbors. The matter was further complicated by the rise of the expansionist Nazi Germany to the west. In 1939, Germany and the Soviet Union signed the Molotov–Ribbentrop Pact, which included a secret clause that sanctioned the partitioning of several border states between the two regimes in the event of war. Only nine days after the pact was signed, Nazi Germany invaded Poland, and the Soviets followed suit shortly after, beginning World War II in Europe. After the end of the war, all border states except for Finland were transferred to Soviet occupation as a result of the Western betrayal although Finland had already ceded some of its territory to the Soviet Union following the Winter War.
See also
Limitrophe states
Mitteleuropa
March (territory)
Post-Soviet states
References
History of Europe
Borders
Geopolitics | Border states (Eastern Europe) | [
"Physics"
] | 441 | [
"Spacetime",
"Borders",
"Space"
] |
964,617 | https://en.wikipedia.org/wiki/Chinese%20architecture | Chinese architecture is the embodiment of an architectural style that has developed over millennia in China and has influenced architecture throughout East Asia. Since its emergence during the early ancient era, the structural principles of its architecture have remained largely unchanged. The main changes involved diverse decorative details. Starting with the Tang dynasty, Chinese architecture has had a major influence on the architectural styles of neighbouring East Asian countries such as Japan, Korea, Vietnam, and Mongolia in addition to minor influences on the architecture of Southeast and South Asia including the countries of Malaysia, Singapore, Indonesia, Sri Lanka, Thailand, Laos, Cambodia, and the Philippines.
Chinese architecture is characterized by bilateral symmetry, use of enclosed open spaces, feng shui (e.g. directional hierarchies), a horizontal emphasis, and an allusion to various cosmological, mythological or in general symbolic elements. Chinese architecture traditionally classifies structures according to type, ranging from pagodas to palaces. Due to the frequent use of wood, a relatively perishable material, as well as few monumental structures built of more durable materials, much historical knowledge of Chinese architecture derives from surviving miniature models in ceramic and published diagrams and specifications.
Although unifying aspects exist, Chinese architecture varies widely based on status or affiliation, such as whether the structures were constructed for emperors, commoners, or for religious purposes. Other variations in Chinese architecture are shown in vernacular styles associated with different geographic regions and different ethnic heritages.
In more recent times, China has become the most rapidly modernizing country in the world. In the past few decades, cities like Shanghai have completely changed their skylines, with some of the world's tallest skyscrapers dotting the horizon. China also has one of the most extensive high-speed rail networks, connecting its cities and allowing its large population to travel more efficiently.
Throughout the 20th century, Chinese architects have attempted to bring traditional Chinese designs into modern architecture. Moreover, the pressure for urban development throughout China requires high speed construction and a greater floor area ratio: thus, in cities the demand for traditional Chinese buildings (which are normally less than 3 levels) has declined in favor of high-rises. However, the traditional skills of Chinese architecture, including major and minor carpentry, masonry, and stonemasonry, are used in the construction of vernacular architecture in China's rural areas.
History
Neolithic and early antiquity
Chinese civilizations and cultures developed in the plains along China's numerous rivers that emptied into Bohai and Hangzhou bays. The most prominent of these rivers, the Yellow and the Yangtze, hosted many villages. The climate was warmer and more humid than today, allowing millet to be grown in the north and rice in the south. However, Chinese civilization has no single "origin". Instead, it featured a gradual multinuclear development between 4000 and 2000 BC – from village communities to what anthropologists call cultures to states.
Two of the more important cultures were Hongshan culture (4700–2900 BC) to the north of Bohai Bay in Inner Mongolia and Hebei Province and contemporaneous Yangshao culture (5000–3000 BC) in Henan Province. Between the two, and developing later, was Longshan culture (3000–2000 BC) in the central and lower Yellow River valley. These combined areas gave rise to thousands of small/proto-states by 3000 BC. Some shared a common ritual center that linked them to a single symbolic order, but others developed more independently. The emergence of walled cities during this time is a clear indication that the political landscape was often unstable.
The Hongshan culture of Inner Mongolia (located along the Laoha, Yingjin, and Daling rivers that empty into Bohai Bay) was scattered over a large area but had a single, common ritual center of at least 14 burial mounds and altars over several ridges. It is dated to around 3500 BC, or possibly earlier. Although no evidence suggests village settlements nearby, its size is much larger than one clan or village could support. In other words, though rituals would have been performed there for the elites, the large area implies that audiences for the ritual would have encompassed all the villages of the Hongshan. As a sacred landscape, the center might have attracted supplicants from even further afield.
20th century
Rammed earth construction was both practically and ideologically important during the rapid construction of the Daqing oil field and the related development of Daqing. The "Daqing Spirit" represented deep personal commitment in pursuing national goals, self-sufficient and frugal living, and urban-rural integrated land use. Daqing's urban-rural landscape was said to embody the ideal communist society described by Karl Marx because it eliminated (1) the gap between town and country, (2) the gap between workers and peasants, and (3) the gap between manual and mental labor.
Drawing on the Daqing experience, China encouraged rammed earth construction in the mid-1960s. Starting in 1964, Mao Zedong advocated for a "mass design revolution movement". In the context of the Sino-Soviet split, Mao urged that planners should avoid the use of Soviet-style prefabricated materials and instead embrace the proletarian spirit of on-site construction using rammed earth. The Communist Party promoted the use of rammed earth construction as a low-cost method which was indigenous to China and required little technical skill.
Reinforced concrete, brick-infill, and prefabricated materials were used increasingly following the Wall Reform Movement of 1973–1976 and were promoted in publications such as Architectural Journal.
In 2014, the city of Datong started to rebuild the Datong ancient city wall and buildings in traditional architectural style. Although the project initially met with skepticism and opposition from citizens, many later praised the mayor for bringing back traditional Chinese aesthetics. Starting with the Northern Wei dynasty 1,600 years ago, Datong was a beautiful capital. It continued to thrive in the Liao and Jin dynasties, and later regained prominence as a major strategic centre in the Ming dynasty (1368–1644).
Geography
Vernacular Chinese architecture shows variations related to local terrain and climate.
Features
Bilateral symmetry
An important feature in Chinese architecture is its emphasis on articulation and bilateral symmetry, which signifies balance. These are found everywhere in Chinese architecture, from palace complexes to humble farmhouses. Secondary elements are positioned on either side of the main structures as wings to maintain overall symmetry. Buildings are typically planned to contain an even number of columns, producing an odd number of bays (間). Placing the main door in the center bay maintains symmetry.
In contrast to buildings, Chinese gardens tend to be asymmetrical. Gardens are designed to provide enduring flow. The design of the classic Chinese garden is based on the ideology of "Nature and Man in One", as opposed to the home itself, which shows the human sphere co-existing with, but separate from nature. The intent is that people feel surrounded by, and in harmony with, nature. The two essential garden elements are stones and water. The stones signify the pursuit of immortality, while water represents emptiness and existence. The mountain belongs to yang (static beauty), and the water belongs to yin (dynamic wonder). They depend on each other and complete each other.
Enclosure
In much Chinese architecture, buildings or building complexes surround open spaces. These enclosed spaces come in two forms:
Courtyard (院): Open courtyards are a common feature in many projects. This is best exemplified in Siheyuan: It consisted of an empty space surrounded by buildings connected with one another either directly or through verandas.
"Sky well" (天井): Although large open courtyards are less commonly found in southern Chinese architecture, the concept of an "open space" surrounded by buildings can be seen in the southern building structure known as the "sky well". This structure is essentially a relatively enclosed courtyard formed from the intersections of closely spaced buildings and offers a small opening to the sky through the roof space.
These enclosures aid in temperature regulation and in ventilation. Northern courtyards are typically open and face south to allow the maximum exposure of the building windows and walls to the sun while keeping out the cold north winds. Southern sky wells are relatively small and collect rainwater from the roof tops. They perform the same duties as the Roman impluvium while restricting the amount of sunlight that enters the building. Sky wells also vent hot air skyward, which draws cool air from the lower areas and the outside.
Hierarchy
The projected hierarchy, importance, and uses of buildings in Chinese architecture are based on the strict placement of buildings within a property or complex. Buildings with doors facing the front of the property are considered more important than those facing the sides. Buildings facing away from the front are the least important.
South-facing buildings in the rear, in more private areas with higher exposure to sunlight, are held in higher esteem and reserved for elders or ancestral plaques. Buildings facing east and west are generally for junior members or branches of the family, while buildings near the front are typically for servants and hired help.
Front-facing buildings in the back of properties are used for celebratory rites and for the placement of ancestral halls and plaques. In multi-courtyard complexes, central courtyards and their buildings are considered more important than peripheral ones, the latter typically for storage, servants' rooms, or kitchens.
Horizontal emphasis
Classical Chinese buildings, especially those of the wealthy, are built with an emphasis on breadth and less on height, featuring an enclosed heavy platform and a large roof that floats over this base, with the vertical walls deemphasized. Buildings that were too high and large were considered unsightly, and therefore generally avoided. Chinese architecture stresses the visual impact of the width of the buildings, using sheer scale to inspire awe. This preference contrasts with Western architecture, which tends to emphasize height and depth. This often meant that pagodas towered above other buildings.
The halls and palaces in the Forbidden City have rather low ceilings when compared to equivalent stately buildings in the West, but their external appearance suggests the all-embracing nature of imperial China. These ideas have found their way into modern Western architecture, for example through the work of Jørn Utzon.
Cosmological concepts
Chinese architecture used concepts from Chinese cosmology such as feng shui (geomancy) and Taoism to organize construction and layout. These include:
Screen walls to face the main entrance, which stems from the belief that evil things travel in straight lines.
Talismans and imagery of good fortune:
Door gods displayed on doorways to ward off evil and encourage good fortune
Three anthropomorphic figures representing Fu Lu Shou (福祿壽 fú-lù-shòu) stars are prominently displayed, sometimes with the proclamation "the three stars are present" (三星宅 sān-xīng-zhài)
Animals and fruits that symbolize good fortune and prosperity, such as bats and pomegranates, respectively. The association is often done through rebuses.
Orienting the structure with its back to an elevated landscape and placing water in the front.
Ponds, pools, wells, and other water sources are built into the structure.
Aligning a building along a north–south axis, with the building facing south (in the north where the wind is coldest in winter). The two sides face east and west respectively. The back of the structure is generally windowless.
The use of certain colors, numbers and the cardinal directions reflected the belief in a type of immanence, where the nature of a thing could be wholly contained in its own form.
Beijing and Chang'an are examples of traditional Chinese town planning that represent these cosmological concepts.
Architectural types
The types of Chinese architecture may relate to the use of the structures, such as whether they were built for royals, commoners, or the religious.
Commoners
Due to primarily wooden construction and poor maintenance, far fewer examples of commoner's homes survive compared to those of nobles. Korman claimed the average commoner's home did not change much, even centuries after the establishment of the universal style: early-20th-century homes were similar to late and mid-imperial homes.
These homes tended to follow a set pattern: the center of the building was a shrine for deities and ancestors, and was also used during festivities. On its two sides were bedrooms for elders; the two wings (known as "guardian dragons") were for junior members, as well as the living room, the dining room, and the kitchen, although sometimes the living room was close to the center.
Sometimes the extended families became so large that one or two extra pairs of "wings" had to be built. This produced a U-shape, with a courtyard suitable (e.g., for farm work). Merchants and bureaucrats preferred to close off the front with an imposing gate. All buildings were legally regulated, and the law required that the number of stories, the length of the building and the building colours reflect the owner's class.
Some commoners living in areas plagued by bandits built communal fortresses called Tulou for protection. Often favoured by the Hakka in Fujian and Jiangxi, the design of Tulou shows the ancient philosophy of harmony between people and environment. People used local materials, often building the walls with rammed earth. No window reached the outside on the lower two floors (for defense), but the inside included a common courtyard and let people gather.
Imperial
Certain architectural features were reserved for buildings built for the emperor of China. One example is the use of yellow (the imperial color) roof tiles; yellow tiles still adorn most of the buildings within the Forbidden City. Only the emperor could use hip roofs, with all four sides sloping. The two types of hip roof were single-eave and double-eave. The Hall of Supreme Harmony is the archetypal example of double eaves. The Temple of Heaven uses blue roof tiles to symbolize the sky. The roofs are almost invariably supported by brackets ("dougong"), a feature shared only with the largest of religious buildings. The building's wooden columns, as well as the wall surfaces, tend to be red. Black is often used in pagodas, as it was believed that the black color inspired the gods to visit earth.
The 5-clawed dragon, adopted by the Hongwu Emperor (the first emperor of the Ming dynasty) for his personal use, was used to decorate the beams, pillars, and doors of imperial architecture. Curiously, the dragon was never used on the roofs of imperial buildings.
Only buildings used by the imperial family were allowed to have nine jian (間, the space between two columns); only gates used by the emperor could have five arches, with the centre one reserved for the emperor himself. The ancient Chinese favored the color red.
Beijing became the capital of China after the Mongol invasion of the 13th century, completing the easterly migration of the Chinese capital begun in the Jin dynasty. The Ming uprising in 1368 reasserted Chinese authority and fixed Beijing as the seat of imperial power for the next five centuries. The emperor and the empress lived in palaces on the central axis of the Forbidden City, the crown prince at the eastern side, and the concubines at the back (the imperial concubines were often referred to as "The Back Palace Three Thousand"). During the mid-Qing dynasty, the emperor's residence was moved to the western side of the complex. It is misleading to speak of an axis in the Western sense of a visual perspective ordering facades. The Chinese axis is a line of privilege, usually built upon, regulating access—instead of vistas, a series of gates and pavilions are used.
Numerology influenced imperial architecture, hence the use of nine (the greatest single digit number) in much of construction and the reason why the Forbidden City in Beijing is said to have 9,999.9 rooms—just short of heaven's mythical 10,000 rooms. The importance of the East (the direction of the rising sun) in orienting and siting imperial buildings is a form of solar worship found in many ancient cultures, reflecting the affiliation of Ruler with the Sun.
The tombs and mausoleums of imperial family members, such as the 8th-century Tang dynasty tombs at the Qianling Mausoleum, can be counted as part of the imperial tradition. These above-ground earthen mounds and pyramids had subterranean shaft-and-vault structures that were lined with brick walls since at least the Warring States period (481–221 BC).
Religious
Generally speaking, Buddhist architecture follows the imperial style. A large Buddhist monastery normally has a front hall, housing the statues of the Four Heavenly Kings, followed by a great hall, housing statues of the Buddhas. Accommodations are located at the two sides. Some of the greatest examples of this come from the 18th-century Puning Temple and Putuo Zongcheng Temple. Buddhist monasteries sometimes also have pagodas, which may house relics of the Gautama Buddha; older pagodas tend to be four-sided, while later pagodas usually have eight sides.
Daoist architecture usually follows the commoners' style. The main entrance is, however, usually at the side, out of superstition about demons that might try to enter the premise (see feng shui.) In contrast to the Buddhists, in a Daoist temple the main deity is located in the main hall at the front, with lesser deities in the back hall and at the sides. This is because Chinese people believe that even after the body has died, the soul is still alive. From the Han grave design, it shows the forces of cosmic yin/yang, the two forces from the heaven and earth that create eternity.
The tallest pre-modern building in China was built for both religious and martial purposes. The Liaodi Pagoda of 1055 AD holds this distinction: although it served as the crowning pagoda of the Kaiyuan monastery in old Dingzhou, Hebei, it was also used as a military watchtower for Song dynasty soldiers to observe potential Liao dynasty troop movements.
The architecture of the mosques and gongbei tomb shrines of Chinese Muslims often combines traditional Chinese styles with Middle Eastern influences. The royal and nonroyal tombs of the third through sixth centuries trace back to Han construction. Some tombs were two-chamber spaces whose focal point was a central pagoda pillar. This focal point served as what Buddhists call a pagoda, a symbol of the Buddha and his death. The layout of such tombs places the corpse in the back chamber, with the pillar location marking the Buddha's death. There would sometimes be interior tomb decoration to portray immortal or divine meaning.
Dome ceilings of the 4th through 7th centuries were representations of the heavens, a practice originating in Roman provincial art and ancient Egypt. While most of these representations are circular, other forms are also present: dodecagonal, octagonal, and square. Many caves of the 4th–7th centuries were probably carved throughout the Han and Tang periods.
Urban planning
Chinese urban planning is based on feng shui geomancy and the well-field system of land division, both used since the Neolithic age. The basic well-field diagram is overlaid with the luoshu, a magic square divided into 9 sub-squares, and linked with Chinese numerology. In the Southern Song dynasty (1131 AD), the design of Hongcun in Anhui was based around "harmony between man and nature", facing south and surrounded by mountains and water. According to feng shui, it is a carefully planned ancient village and shows the concept of human–nature integrated ecological planning.
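For readers unfamiliar with the luoshu mentioned above, the short sketch below lays out its traditional 3×3 arrangement and verifies the magic-square property (every row, column, and diagonal sums to 15); the grid numbers are the traditional luoshu values, while the code itself is only an illustrative aid.

```python
# Minimal sketch: the traditional luoshu 3x3 magic square and a check that
# every row, column, and diagonal sums to the same constant (15).

LUOSHU = [
    [4, 9, 2],
    [3, 5, 7],
    [8, 1, 6],
]

def is_magic(square):
    """Return True if all rows, columns, and diagonals share one sum."""
    n = len(square)
    target = sum(square[0])
    rows = all(sum(row) == target for row in square)
    cols = all(sum(square[r][c] for r in range(n)) == target for c in range(n))
    diag_main = sum(square[i][i] for i in range(n)) == target
    diag_anti = sum(square[i][n - 1 - i] for i in range(n)) == target
    return rows and cols and diag_main and diag_anti

print(is_magic(LUOSHU))  # -> True; every line sums to 15
```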
Since wars were frequent in northern China, many people moved to southern China. The building method of a courtyard house was adapted to southern China. The village of Tungyuan in Fujian Province is a good example of a planned settlement that shows the feng shui elements – psychological self-defense and building structure – in the form of material self-defense.
Construction
Materials and history
Wood was typically utilised as a primary building material. Also, Chinese culture holds that life connects with nature and that humans should interact with animated things. By contrast stone was associated with the homes of the dead. However, unlike other building materials, wooden structures are less durable. The Songyue Pagoda (built in 523) is China's oldest extant pagoda; its use of brick instead of wood allowed it to endure across the centuries. From the Tang dynasty (618–907) onwards, brick and stone architecture gradually became more common. The earliest examples of this transition can be seen in building projects such as the Zhaozhou Bridge completed in 605 or the Xumi Pagoda built in 636. Some stone and brick architecture was used in subterranean tomb architecture of earlier dynasties.
In the early 20th century no known fully wood-constructed Tang dynasty buildings still existed; the oldest so far discovered was the 1931 find of Guanyin Pavilion at Dule Monastery, dated 984 during the Song dynasty. Later architectural historians Liang Sicheng, Lin Huiyin, Mo Zongjiang, discovered that the Great East Hall of Foguang Temple on Mount Wutai in Shanxi dated to 857. The ground floor of this monastic hall measures . The main hall of nearby Nanchan Temple on Mount Wutai was later dated to 782. Six Tang era wooden buildings had been found by the 21st century. The oldest intact fully wooden pagoda is the Pagoda of Fogong Temple of the Liao dynasty, located in Ying County of Shanxi. While the East Hall of Foguang Temple features seven types of bracket arms in its construction, the 11th-century Pagoda of Fogong Temple features fifty-four.
The earliest walls and platforms used rammed earth construction. Ancient sections of the Great Wall of China used brick and stone, although the brick and stone Great Wall seen today is a Ming dynasty renovation.
Buildings for public use and for elites usually consisted of earth mixed with bricks or stones on raised platforms, which allowed them to survive. The earliest of this sort of construction dates to the Shang dynasty (c. 1600 – 1046 BCE).
Structure
Ceilings: The form that drew the greatest interest was the English vault or dome. The ceiling could have the appearance of being composed of flat beams, diagonal-support planks (xiecheng banliang), a broken-line wedge shape with an inserted plank, tongue-and-groove joints, a barrel vault, or a domical vault. Most of this construction was done with wood.
Foundation: Most buildings typically use raised platforms (臺基) as their foundations. Vertical structural beams may rest on stone pedestals (柱础) that occasionally rest on piles. In lower class construction, the platforms are constructed of rammed earth, either unpaved or paved with brick or ceramics. In the simplest cases vertical structural beams are driven into the ground. Upper class constructions typically sit on raised stone-paved rammed earth or stone foundations with ornately carved heavy stone pedestals for supporting large vertical structural beams. The beams remain on their pedestals solely by friction and the weight of the building structure.
Framing: Dating back to the 5th and 6th centuries, timber framing is evident in cave-temples like Mogao, Yungang, Maijishan and Tianlongshan. Most of these caves use the same method: eight sided columns, two-plate capitals, and alternating bracket arms and V-shaped braces. Whether or not certain structural supports were included was entirely up to what the artisans chose. There were no symbolic meanings behind these designs.
Structural beams: Large structural timbers support the roof. Timber, usually large trimmed logs, are used as load-bearing columns and lateral beams. These beams are connected to each other directly or, in larger and higher class structures, tied through the use of brackets. These structural timbers are prominently displayed in finished structures. It is not definitively known how ancient builders raised the columns into position.
Structural connections: Timber frames are typically constructed with joinery and dowelling, seldom with glue or nails. These types of semi-rigid structural joints allow the timber structure to resist bending and torsion under high compression. Structural stability is enhanced through the use of heavy beams and roofs. The lack of glue or nails in joinery, the use of non-rigid support such as dougong, and the use of wood as structural members allow the buildings to slide, flex, and hinge while absorbing shock, vibration, and ground shifts from earthquakes without significant damage. The rich decorated the Dougong with valuable materials to display their wealth. Common people used artwork to express their appreciation to the house.
Walls: Curtain walls or door panels delineated rooms or enclosed a building, with the general de-emphasis of load-bearing walls in most higher class construction. However, later dynasties faced a shortage of trees, leading to the use of load-bearing walls in non-governmental or religious construction, made of brick and stone.
Roofs: Flat roofs are uncommon while gabled roofs are omnipresent. Roofs are either built on roof cross-beams or rest directly on vertical structural beams. In higher class construction, roof beams are supported through complex dougong bracketing systems that indirectly connect them to the primary structural beams. The three main types of roofs are:
Straight inclined: Roofs with a single incline. These are the most economical and are most prevalent in commoner structures.
Multi-inclined: Roofs with 2 or more sections of incline. These roofs are used in higher class constructions.
Sweeping: Roofs with a sweeping curvature that rises at the corners. This type is usually reserved for temples and palaces although it may also be found in the homes of the wealthy. In the former cases, the roof ridges are usually highly decorated with ceramic figurines.
Roof apex: The roof apex of a large hall is usually topped with a ridge of tiles and statues for decorative purposes as well as to weigh down the tiles for stability. These ridges are often well decorated, especially for religious or palatial structures. In some regions, the ridges are sometimes extended or incorporated into the walls of the building to form matouqiang (horse-head walls), which served as a fire deterrent from drifting embers.
Roof top decorations: Symbolism can be found in the colors of the eaves, roofing materials and roof top decorations. Gold/yellow is an auspicious (good) color, imperial roofs are gold or yellow. Green roofs symbolize bamboo shafts, which in turn represent youth and longevity.
Patterns, decoration, elaboration, and ornament are all signatures of Chinese architecture dating back to the 5th and 6th centuries, and many cave temples demonstrate such practice. Studies find that certain patterns were repeated often in different locations across different dynasties. It was also found that designs from western Asian art made their way into patterns found in Chinese timber architecture.
Classification by structure
Chinese classifications for architecture include:
亭 ting (Chinese pavilions)
臺 tai (terraces)
樓 lou (multistory buildings)
閣 ge (two-story pavilions)
軒 (轩) xuan (verandas with windows)
塔 ta (Chinese pagodas)
榭 xie (pavilions or houses on terraces)
屋 wu (rooms along roofed corridors)
斗拱 dougong (interlocking wooden brackets, often used in clusters to support roofs and add ornamentation)
藻井 caisson (domed or coffered ceiling)
宮 gong (palaces: larger buildings used as imperial residences, temples, or centers for cultural activities)
Miniature models
Although mostly only ruins of brick and rammed earth walls and towers from ancient China (i.e. before the 6th century AD) survive, information on ancient Chinese architecture (especially wooden architecture) can be discerned from clay models of buildings created as funerary items. This is similar to the paper joss houses burned in some modern Chinese funerals. The following models were made during the Han dynasty (202 BC – AD 220):
During the Jin dynasty (266–420) and the Six Dynasties, miniature models of buildings or entire architectural ensembles were often made to decorate the tops of the so-called "soul vases" (hunping), found in many tombs of that period.
Culture
Beyond China's physically creative architecture techniques lies an "imaginary architecture". This imaginary architecture reflected three major principles that carry messages about the relations between inhabitants, society, and the cosmos, and that depict gender power imbalances.
Confucius
The first design principle was that the Chinese house was the embodiment of Neo-Confucian values. These collaborative values were loyalty, respect, and service. They were depicted through representations of generations, gender, and age. Unlike western homes, the Chinese home was not a private space or a place separated from the state. It was a community in itself that sheltered a patrilineal kinship clan. It was quite common for houses to shelter "five generations under one roof". Social concepts reflected the Five Relationships between "ruler and subject, father and child, husband and wife, elder and younger brother and friends." The unequal relationship between the superior and subordinate in these relationships was emphasized. The relationship between husband and wife was patriarchal. The husband was required to treat the spouse with kindness, consideration, and understanding.
Cosmic space
The Chinese house was a cosmic space. The house was designed as a shelter to foil evil influences by channeling cosmic energies (qi) by respecting feng shui. Depending on the season, astral cycle, landscape, and the house's design, orientation, and architectural details, some amount of energy would be produced. However, cosmic energy could be used in both moral and immoral ways. The moral way is by adding feng shui to a local community temple. Feng shui could also be used competitively to raise the value of one's house at the expense of others. For example, if someone built part of their house against the norm, their house could be considered a threat, because it was recklessly throwing off cosmic energy. In one detailed account, a fight broke out over feng shui.
Feng shui was also incorporated inside the home. Symmetry, orientations, arrangements of objects, and cleanliness were important factors in directing cosmic energy. Even in poorer homes, cleanliness and tidiness were highly desired to compensate for the lack of space. Sweeping was a daily task that was thought to be a purifying act. Chinese historian Sima Guang writes, "The servants of the inner and outer quarters and the concubines all rise at the first crow of the cock. After combing their hair, washing, and getting dressed, the male servants should sweep the halls and front courtyard; the doorman and older servants should sweep the middle courtyard, while the maids sweep the living quarters, arrange tables and chairs, and prepare for the toilet of the master and mistress." The task of cleaning further illustrates the gender segregation of the Chinese household.
Culture
The house was a space of culture that depicted the Chinese view of humanity. The house was a domestic domain, separated from the undomesticated world. The separation was commonly realized through walls and gates. Gates were first a physical barrier and second a notice board.
The home was where family rules could be enforced, dividing the upbringing of the inhabitants.
Women were often hidden away within the inner walls to perform domestic duties, while men would freely interact with the outside.
While brides entered an unknown and potentially hostile environment, the husband "never had to leave his parents or his home, he knew which lineage and which landscape he belonged to from the time he began to understand the world." New brides were typically treated badly by senior household members. Junior brides might be treated like unpaid servants and forced to do unpleasant chores. Bray characterized marriage as the bride's descent into hell. "The analogy of the wedding process with death is made explicit: the bride describes herself as being prepared for death, and the wedding process as the crossing of the yellow river that is the boundary between this life and the next. She appeals for justice, citing the valuable and unrecognized contribution she has made to her family. Her language is bitter and unrestrained, and she even curses the matchmaker and her future husband's family. Such lamenting can take place only within her parents' household and must cease halfway on the road to her new home, when the invisible boundary has been crossed." Women were fully accepted into a new home only after bearing a child.
The confinement of women was also a method of controlling their sexual lives. Confinement was used to prevent impregnation by an outsider who might thereby claim a slice of the family's wealth. Bray claimed that wives were often represented as "gossiping troublemakers eager to stir up strife between otherwise devoted brothers, the root of family discord, requiring strict patriarchal control."
Husbands and wives did not stay in the same private room for long periods. During the day, men would go out or work in their studies, avoiding unnecessary contact with female relatives. Women were generally confined to the inner perimeter. When leaving the inner perimeter, they had to cover their faces with a veil or a sleeve. Conversely, men were not usually permitted to enter the inner perimeter, providing women some control over their daily experience.
Influence from outside of China
The architecture of some Chinese mosques received influence from abroad, particularly during dynasties such as the Yuan and Qing, which were more outward-facing. The arrival of many Muslim officials, architects and scholars from the Islamic world during the Yuan dynasty led to an influx of Islamic elements, especially in Chinese mosques.
The Zhenghai Mosque in Ningbo is an example of Islamic architecture that appeared in China during the Song dynasty. When Arab traders settled in Ningbo, they spread Muslim culture and built a mosque. Later, mosques were built around Beijing. The mosques of Xi'an, such as the Xi'an Great Mosque and the Daxuexi Alley Mosque, reflected similar influences. Beijing's mosques essentially follow the norms of Chinese layout, design, and traditional wooden structure.
Many miniature pagodas exist in Northeast China. They were built by Buddhists during the Liao dynasty (907–1125), which supported Buddhism. They developed Buddhist architecture that used bricks. Many such pagodas spread from Hebei Province to Beijing and Inner Mongolia.
Influence beyond China
Chinese architecture has influenced the architecture of many other East Asian countries. During the Tang dynasty, much Chinese culture was imported by neighboring nations. Chinese architecture had a major influence on the architectural styles of Japan, Korea, Mongolia, and Vietnam where the East Asian hip-and-gable roof design is ubiquitous.
Chinese architecture influenced the architecture of various Southeast Asian countries. Chinese architectural elements were adopted by Thai artisans after trade commenced with the Yuan and Ming dynasties. Temple and palace rooftops adopted the Chinese style. Chinese-style buildings can be found in Ayutthaya, a nod towards the many Chinese shipbuilders, sailors and traders who came to the country. In Indonesia, mosques bearing Chinese influence can be found. This influence is recent in comparison to other parts of Asia and is largely due to the Chinese Indonesian community.
In South Asia, Chinese architecture played a significant role in shaping Sri Lankan architecture, alongside influences from other parts of Southeast Asia. The Kandyan roof style, for example, bears many similarities to the East Asian hip-and-gable roof technique.
The Chinese-origin guardian lion is also found in front of Buddhist temples, buildings and some Hindu temples (in Nepal) across Asia including Japan, Korea, Thailand, Myanmar, Vietnam, Sri Lanka, Nepal, Cambodia and Laos.
Regional variation
Chinese architecture varied across regions. Several of the more notable regional styles include:
Hui Style architecture
Shanxi architecture
Shanxi preserves the oldest wooden structures in China, dating from the Tang dynasty, including the Foguang Temple and Nanchan Temple. The Yungang Grottoes in Datong and numerous Buddhist temples on the sacred Mount Wutai exemplify Chinese religious architecture. Shanxi family compounds are representative of vernacular architecture in North China. In the mountainous areas of Shanxi, the yaodong is a type of earth shelter that is commonly found.
Lingnan (Cantonese) architecture
Classical Lingnan architecture is used primarily in Guangdong and the eastern half of Guangxi. It is noted for its use of carvings and sculptures for decorations, green brick, balconies, "Cold alleys", "Narrow doors", and many other characteristics adaptive to the subtropical region.
Minnan (Hokkien) architecture
Minnan architecture, or Hokkien architecture, refers to the architectural style of the Hoklo people, the Han Chinese group who are the dominant demographic of Southern Fujian and Taiwan. This style is noted for its use of swallowtail roofs (heavily decorated upward-curving roof ridges) and "cut porcelain carving" for decorations. The swallowtail roof is a signature of Hokkien architecture, commonly used for religious buildings like shrines and temples, but also in dwellings. Hokkien architecture is dominated by decorations from carvings of natural elements like plants and animals, or figures from Chinese mythology.
Teochew architecture
Teochew architecture is the architectural style of the Teochew people, who come from the Chaoshan region of Guangdong province. It is characterised by its "curly grass roofs" (with the ridges curving into a loop) and wood carvings, and shares the "cut porcelain carving" tradition with the closely related Hokkien people.
Hakka architecture
Hakka people are noted for building distinctive walled villages in order to protect themselves from clan wars.
Gan architecture
Architecture in the Gan Chinese-speaking province of Jiangxi makes use of brick, wood, and stone as materials, primarily within wooden frames.
Sui architecture
During the Sui period in the 7th century, structures were carved into the Hebei mountains. These structures had a quadrilateral ground plan intended to produce a cubic interior. Pillars inside were octagonal. Other features included mullioned windows and anterooms in the form of small Buddhist caves.
Yaodong architecture
The Jin Chinese cultural area of Shanxi and northern Shaanxi is noted for carving homes into the sides of mountains. The soft rock of the Loess Plateau in this region makes an excellent insulating material.
Tibetan architecture
Xinjiang architecture
Early architecture
Early Xinjiang architecture was influenced by Buddhist, Manichaean, Sogdian, Uyghur and Chinese cultural groups, with the most prominent examples including the cave temples of Bezeklik, religious and residential buildings at Jiaohe, and temples and shrines at Gaochang.
Islamic architecture
The first Muslims came to Xinjiang in the eighth or ninth centuries CE, yet only became a significant presence during the Yuan dynasty.
Islam came to Hami province in eastern Xinjiang at the end of the fourteenth century, and the province's first mosque was built in 1490, with ten generations of Muslim kings of Hami buried in the complex from the 1690s to 1932. The mausoleum complex of Hami was built in 1840 – the tomb of King Boxi'er is the complex's most prominent feature, having been constructed after the Muslim rebellion of 1867.
The mud-brick Emin Minaret (or Sugongta) in Turpan is 44 metres (144 ft) tall and is the tallest minaret in China. The tower is decorated with sixteen patterns on the exterior, with textured bricks carved into intricate, repetitive, geometric and floral mosaic patterns, such as stylized flowers and rhombuses. The minaret was started in 1777 during the reign of the Qianlong Emperor (r. 1735–1796) and was completed only one year later.
Others
Other regional styles include the hutong of northern China, and the longtang and shikumen of Haipai (Shanghainese) architecture.
See also
Ancient Chinese wooden architecture
Architecture of the Song dynasty
Architecture of Hong Kong
Architecture of Penang
Chinese garden
Chinese pagodas
Caihua
Feng Shui
Hutong
Imperial roof decoration
Imperial guardian lions
Shanghai – for a gallery of modern buildings
Shikumen
Siheyuan
Walled villages of Hong Kong
Yu Hao
References
Citations
Sources
Liang, Ssu-ch'eng (1984). A Pictorial History of Chinese Architecture: A Study of the Development of Its Structural System and the Evolution of Its Types. Edited by Wilma Fairbank. Cambridge, Mass.: MIT Press.
Steinhardt, Nancy Shatzman. "Liao: An Architectural Tradition in the Making," Artibus Asiae (Volume 54, Number 1/2, 1994): 5–39.
Steinhardt, Nancy Shatzman. "The Tang Architectural Icon and the Politics of Chinese Architectural History," The Art Bulletin (Volume 86, Number 2, 2004): 228–254.
Weston, Richard (2002). Utzon: Inspiration, Vision, Architecture. Hellerup: Blondal.
"China From Above", National Geographic
Further reading
Fletcher, Banister; Cruickshank, Dan, Sir Banister Fletcher's a History of Architecture, Architectural Press, 20th edition, 1996 (first published 1896). Cf. Part Four, Chapter 24.
Sickman L and Soper A. The Art and Architecture of China (Penguin Books, 1956).
Genovese Paolo Vincenzo Harmony in Space. Introduction to Chinese Architecture (Libria, 2017)
External links
Yin Yu Tang: A Chinese Home An interactive, in-depth exploration of the Huang family's house and domestic life, offering a view of the typical domestic architecture of the Qing dynasty.
Herbert Offen Research Collection An excellent bibliography of publicly accessible books and manuscripts on Chinese architecture.
Islamic Architecture in China Introduction to the Chinese Mosques in South, West, and North respectively
Chinese Vernacular Architecture & General Chinese Architecture—Web Links
Chinese Residential Houses Ten types of Chinese residential houses
Asian Historical Architecture
Web Resources of Chinese Architecture History
Architectural history
Architectural styles | Chinese architecture | [
"Engineering"
] | 8,495 | [
"Architectural history",
"Architecture"
] |
964,630 | https://en.wikipedia.org/wiki/CATH%20database | The CATH Protein Structure Classification database is a free, publicly available online resource that provides information on the evolutionary relationships of protein domains. It was created in the mid-1990s by Professor Christine Orengo and colleagues including Janet Thornton and David Jones, and continues to be developed by the Orengo group at University College London. CATH shares many broad features with the SCOP resource, however there are also many areas in which the detailed classification differs greatly.
Hierarchical organization
Experimentally determined protein three-dimensional structures are obtained from the Protein Data Bank and split into their consecutive polypeptide chains, where applicable. Protein domains are identified within these chains using a mixture of automatic methods and manual curation.
The domains are then classified within the CATH structural hierarchy: at the Class (C) level, domains are assigned according to their secondary structure content, i.e. all alpha, all beta, a mixture of alpha and beta, or little secondary structure; at the Architecture (A) level, information on the secondary structure arrangement in three-dimensional space is used for assignment; at the Topology/fold (T) level, information on how the secondary structure elements are connected and arranged is used; assignments are made to the Homologous superfamily (H) level if there is good evidence that the domains are related by evolution i.e. they are homologous.
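As an illustration of this hierarchy, a domain's classification is conventionally written as a dot-separated code whose four numbers correspond to the C, A, T and H levels. The following minimal Python sketch shows that mapping; the example code "3.40.50.300" and the helper name are illustrative assumptions, not data taken from the database itself:

# Minimal sketch: split a CATH-style code into its four hierarchical levels.
# The example code below is illustrative, not an authoritative CATH entry.
def parse_cath_code(code: str) -> dict:
    levels = ["Class", "Architecture", "Topology/fold", "Homologous superfamily"]
    parts = code.split(".")
    if len(parts) != 4:
        raise ValueError("expected a four-part C.A.T.H code")
    return dict(zip(levels, parts))

print(parse_cath_code("3.40.50.300"))
# {'Class': '3', 'Architecture': '40', 'Topology/fold': '50', 'Homologous superfamily': '300'}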
Additional sequence data for domains with no experimentally determined structures are provided by CATH's sister resource, Gene3D, which are used to populate the homologous superfamilies. Protein sequences from UniProtKB and Ensembl are scanned against CATH HMMs to predict domain sequence boundaries and make homologous superfamily assignments.
Releases
The CATH team releases new data both as daily snapshots, and official releases approximately annually. The latest release of CATH-Gene3D (v4.3) was released in December 2020 and consists of:
500,238 structural protein domain entries
151 million non-structural protein domain entries
5,481 homologous superfamily entries
212,872 functional family entries
Open-source software
CATH is also an open-source software project; its developers build and maintain a number of open-source tools, which are publicly available on GitHub.
References
Protein structure databases
Protein structure
Protein folds
Protein classification
Protein superfamilies
University College London | CATH database | [
"Chemistry",
"Biology"
] | 479 | [
"Protein structure",
"Protein superfamilies",
"Structural biology",
"Protein classification"
] |
964,703 | https://en.wikipedia.org/wiki/Messier%2068 | Messier 68 (also known as M68 or NGC 4590) is a globular cluster found in the east south-east of Hydra, away from its precisely equatorial part. It was discovered by Charles Messier in 1780. William Herschel described it as "a beautiful cluster of stars, extremely rich, and so compressed that most of the stars are blended together". His son John noted that it was "all clearly resolved into stars of 12th magnitude, very loose and ragged at the borders".
M68 is centred about 33,600 light-years away from Earth. It orbits our galaxy's galactic bulge with a high eccentricity of 0.5, which takes it as far as 100,000 light-years from the galactic center. It is one of the most metal-poor globular clusters, which means it has a paucity of elements other than hydrogen and helium. The cluster may be undergoing core collapse, and it displays signs of being in rotation. The cluster may have become gravitationally bound to the Milky Way through accretion from a satellite galaxy.
As of 2015, 50 variable stars have been identified in this cluster; the first 28 being identified as early as 1919–20 by American astronomer Harlow Shapley. Most of the variables are of type RR Lyrae, or periodic variables. Six of the variables are of the SX Phoenicis variety, which display short pulsating behavior.
Gallery
See also
List of Messier objects
References
External links
Globular Cluster M68 @ SEDS Messier pages
Messier 68, Galactic Globular Clusters Database page
Globular clusters
Hydra (constellation)
068
NGC objects
Astronomical objects discovered in 1780
Discoveries by Charles Messier | Messier 68 | [
"Astronomy"
] | 352 | [
"Hydra (constellation)",
"Constellations"
] |
964,704 | https://en.wikipedia.org/wiki/Messier%2067 | Messier 67 (also known as M67 or NGC 2682) and sometimes called the King Cobra Cluster or the Golden Eye Cluster is an open cluster in the southern, equatorial half of Cancer. It was discovered by Johann Gottfried Koehler in 1779. Estimates of its age range between 3.2 and 5 billion years. Distance estimates are likewise varied, but typically are . Estimates of 855, 840, and 815 pc were established via binary star modelling and infrared color-magnitude diagram fitting.
Description
M67 is not the oldest known open cluster; several Milky Way clusters are known to be older, though they lie farther away than M67. It is a paradigm study object in stellar evolution:
it is well-populated
has negligible amounts of dust obscuration
all its stars are at the same distance and age, save for approximately 30 anomalous blue stragglers
M67 is one of the most-studied open clusters, yet estimates of its physical parameters such as age, mass, and number of stars of a given type, vary substantially. Richer et al. estimate its age to be 4 billion years, its mass to be 1080 solar masses, and number its white dwarfs at 150. Hurley et al. estimate its initial mass to have been approximately 10 times its current mass.
It has more than 100 stars similar to the Sun, and numerous red giants. The total star count has been estimated at well over 500. The ages and prevalence of Sun-like stars had led some astronomers to theorize it as the possible parent cluster of the Sun. However, computer simulations disagree on whether the outer Solar System would have survived an ejection from M67, and the cluster itself would probably not have survived such an ejection event.
The cluster contains no main sequence stars bluer (hotter) than spectral type F, other than perhaps some of the blue stragglers, since the brighter stars of that age have already left the main sequence. In fact, when the stars of the cluster are plotted on the Hertzsprung-Russell diagram, there is a distinct "turn-off" representing the stars which have terminated hydrogen fusion in the core and are destined to become red giants. As a cluster ages, the turn-off moves progressively down the main sequence to cooler stars.
It appears that M67 has a bias toward heavier stars. One cause of this is mass segregation, the process by which lighter stars gain speed at the expense of more massive stars during close encounters, which moves them to greater average distance from the center of the cluster or allows escape altogether.
A March 2016 joint AIP/JHU study by Barnes et al. on rotational periods of 20 Sun-like stars, measured by the effects of moving starspots on light curves, suggests that these approximately 4-billion-year-old stars spin in about 26 days – like the Sun, which has a period at the equator of 25.38 days. Measurements were carried out as part of the extended K2 mission of the Kepler space telescope. This reinforces the applicability of many key properties of the Sun to stars of the same size and age, a fundamental principle of modern solar and stellar physics. The authors abbreviate this as the "solar-stellar connection".
Planets
A radial velocity survey of M67 has found exoplanets around five stars in the cluster: YBP 1194, YBP 1514, YBP 401, Sand 978, and Sand 1429. A sixth star, Sand 364, was also thought to have a planet, but a follow-up study did not find evidence for it and concluded that the radial velocity variations have a non-planetary origin, likely stellar variability.
Gallery
See also
List of Messier objects
List of open clusters
Open cluster family
Open cluster remnant
References
External links
Messier 67, SEDS Messier pages
Messier 067
Messier 067
067
Messier 067
Orion–Cygnus Arm
Astronomical objects discovered in 1779 | Messier 67 | [
"Astronomy"
] | 817 | [
"Cancer (constellation)",
"Constellations"
] |
964,729 | https://en.wikipedia.org/wiki/Messier%2069 | Messier 69 or M69, also known NGC 6637, and NGC 6634, is a globular cluster in the southern constellation of Sagittarius. It can be found 2.5° to the northeast of the star Epsilon Sagittarii and is dimly visible in 50 mm aperture binoculars. The cluster was discovered by Charles Messier on August 31, 1780, the same night he discovered M70. At the time, he was searching for an object described by Nicolas-Louis de Lacaille in 1751–2 and thought he had rediscovered it, but it is unclear if Lacaille actually described M69.
This cluster is about 28,700 light-years away from Earth and from the Galactic Center, with a spatial radius of 45 light-years. It is a relatively metal-rich globular cluster that is a likely member of the galactic bulge population. It has a mass of with a half-mass radius of , a core radius of , and a tidal radius of . Its center has a bright luminosity density of ·pc−3 (meaning per cubic parsec). It is a close neighbor of its analog M70 – possibly only 1,800 light-years separates the two.
Gallery
See also
List of Messier objects
References and footnotes
External links
Messier 69, Galactic Globular Clusters Database page
Globular clusters
Sagittarius (constellation)
069
NGC objects
Astronomical objects discovered in 1780
Discoveries by Charles Messier | Messier 69 | [
"Astronomy"
] | 303 | [
"Sagittarius (constellation)",
"Constellations"
] |
964,752 | https://en.wikipedia.org/wiki/Messier%2070 | Messier 70 or M70, also known as NGC 6681, is a globular cluster of stars to be found in the south of Sagittarius. It was discovered by Charles Messier in 1780. The famous comet Hale–Bopp was discovered near this cluster in 1995.
It is about 29,400 light years away from Earth and around from the Galactic Center. It is roughly the same size and luminosity as its neighbour in space, M69. M70 has a very small core radius of and a half-light radius of . This cluster has undergone core collapse, leaving it centrally concentrated with the luminosity distribution following a power law.
There are two distinct stellar populations in the cluster, each displaying unique abundances. These likely represent different generations of stars. Five known variable stars lie within its tidal radius (its broadest radius), all of which are RR Lyrae variables. The cluster may have two blue stragglers near the core.
Gallery
See also
List of Messier objects
References and footnotes
External links
Messier 70, Galactic Globular Clusters Database page
Globular clusters
Sagittarius (constellation)
070
NGC objects
Astronomical objects discovered in 1780
Discoveries by Charles Messier | Messier 70 | [
"Astronomy"
] | 255 | [
"Sagittarius (constellation)",
"Constellations"
] |
964,760 | https://en.wikipedia.org/wiki/Messier%2071 | Messier 71 (also known as M71, NGC 6838, or the Angelfish Cluster) is a globular cluster in the small northern constellation Sagitta. It was discovered by Philippe Loys de Chéseaux in 1745 and included by Charles Messier in his catalog of non-comet-like objects in 1780. It was also noted by Koehler at Dresden around 1775. Messier 71 is also known as NGC 6839 and The Bernardo Star, though this identification is very uncertain.
This star cluster is about 13,000 light years away from Earth and spans . The irregular variable star Z Sagittae is a member.
M71 was for many decades (until the 1970s) thought to be a densely packed open cluster and was classified as such by leading astronomers in the field of star cluster research due to its lacking a dense central compression, and to its stars having more "metals" than is usual for an ancient globular cluster; furthermore, it lacks the RR Lyrae "cluster" variable stars that are common in most globulars. However, modern photometry has detected a short "horizontal branch" in the H-R diagram (chart of temperature versus luminosity) which is characteristic of a globular cluster. The shortness of the branch explains the lack of RR Lyrae variables and is due to the globular's relatively young age of 9–10 billion years. Its formation from relatively late, metal-enriched (Population I-like) material explains the comparatively high metal content of its stars. Hence today M71 is designated as a very loosely concentrated globular cluster, much like M68 in Hydra. M71 has a mass of about and a luminosity of around 19,000 .
See also
List of Messier objects
NGC 6366
NGC 6342
References
Gallery
External links
Messier71 @ SEDS Messier pages
Messier 71, Galactic Globular Clusters Database page
Messier 71, LRGB CCD image based on two hours total exposure
Messier 71: an Unusual Globular Cluster, ESA\Hubble picture of the week.
Messier 071
Messier 071
071
Messier 071
"Astronomy"
] | 447 | [
"Sagitta",
"Constellations"
] |
964,775 | https://en.wikipedia.org/wiki/Messier%2074 | Messier 74 (also known as NGC 628 and Phantom Galaxy) is a large spiral galaxy in the equatorial constellation Pisces. It is about 32 million light-years away from Earth. The galaxy contains two clearly defined spiral arms and is therefore used as an archetypal example of a grand design spiral galaxy. The galaxy's low surface brightness makes it the most difficult Messier object for amateur astronomers to observe. Its relatively large angular (that is, apparent) size and the galaxy's face-on orientation make it an ideal object for professional astronomers who want to study spiral arm structure and spiral density waves. It is estimated that M74 hosts about 100 billion stars.
Observation history
M74 was discovered by Pierre Méchain in 1780. He then communicated his discovery to Charles Messier, who listed the galaxy in his catalog. In July 2022, it was observed by the James Webb Space Telescope.
Structure
M74 has two spiral arms that wind counterclockwise from the galaxy's center. The spiral arms widen as they get farther from M74's center, but one of the arms narrows at the end. The arms deviate slightly from a constant angle.
Supernovae
Three supernovae are known to have taken place within it: SN 2002ap, SN 2003gd, and SN 2013ej (the numbers denote the year). The latter was as bright as 10th magnitude when viewed from the surface of Earth, so it was visible in almost all modern telescopes under a good night sky.
SN 2002ap was one of only a few Type Ic supernovae (hypernovae) recorded within 10 Mpc in the past century. This explosion has been used to test theories on the origins of more distant examples and theories on the emission of gamma-ray bursts by supernovae.
SN 2003gd is a Type II-P supernova. Type II supernovae have known luminosities, so they can be used to accurately measure distances. The distance measured to M74 using SN 2003gd is 9.6 ± 2.8 Mpc, or 31 ± 9 million ly. For comparison, distances measured using the brightest supergiants are 7.7 ± 1.7 Mpc and 9.6 ± 2.2 Mpc. Ben Sugerman found a "light echo" – a later reflection of the explosion – associated with SN 2003gd. This is one of the few supernovae in which such a reflection has been found. This reflection appears to be from dust in a sheet-like cloud that lies in front of the supernova, and it can be used to determine the composition of the interstellar dust.
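As background on how a known luminosity yields a distance, the standard distance-modulus relation connects apparent magnitude m, absolute magnitude M and distance d in parsecs. This is a general formula, not a calculation taken from the studies cited above, and the magnitudes in the sketch are placeholders only:

# Standard distance-modulus relation: m - M = 5*log10(d_pc) - 5
def distance_pc(apparent_mag: float, absolute_mag: float) -> float:
    # Solve the distance modulus for d (in parsecs).
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Placeholder magnitudes chosen so that m - M = 29.9, i.e. roughly 9.5 Mpc.
print(distance_pc(13.0, -16.9) / 1e6)  # ≈ 9.5 (Mpc)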
In addition to these supernovae, the astronomical transient AT 2019krl was discovered on 6 July 2019 and classified as either a type IIn supernova or an LBV in outburst. Later analysis argued that it was consistent with known examples of giant LBV eruptions and SN 2008S-like objects.
Galaxy group
This is the brightest member of the M74 Group, a group of 5 to 7 galaxies that also includes the peculiar spiral galaxy NGC 660 and a few irregular galaxies. Different group membership identification methods (ranging from a clear, to likely, to perhaps historic gravitational tie) identify several objects of the group in common, and a few galaxies whose exact status within such groupings is currently uncertain.
Suspected black hole
In 2005 the Chandra X-ray Observatory announced its observation of an ultraluminous X-ray source (ULX) in M74, radiating more X-ray power than a neutron star in periodic intervals of around two hours. It has an estimated mass of . This is an indicator of an intermediate-mass black hole, a rather uncommon class intermediate in size between stellar black holes and the massive black holes theorized to lie at the centers of many galaxies. Such an object is believed to form from lesser ("stellar") black holes within a star cluster. The source has been given the identification number CXOU J013651.1+154547.
Amateur astronomy observation
Messier 74 is 1.5° east-northeast of Eta Piscium. This galaxy has the second-lowest surface brightness, as seen from Earth, of any Messier object. (M101 has the lowest.) It requires a good night sky. This galaxy may be best viewed under low magnification; when highly magnified, the diffuse emission becomes more extended and appears too faint to be seen by many people. Additionally, M74 may be more easily seen when using averted vision when the eyes are fully dark adapted.
See also
List of Messier objects
NGC 3184 – a similar face-on spiral galaxy
Messier 101 – a similar face-on spiral galaxy
Whirlpool Galaxy – a well-known face-on spiral galaxy
References and footnotes
External links
Spiral Galaxy M74 @ SEDS Messier pages
Unbarred spiral galaxies
Messier 074
Messier 074
074
Messier 074
01149
05974
Astronomical objects discovered in 1780
Discoveries by Pierre Méchain | Messier 74 | [
"Astronomy"
] | 1,045 | [
"Pisces (constellation)",
"Constellations"
] |
964,888 | https://en.wikipedia.org/wiki/Domperidone | Domperidone, sold under the brand name Motilium among others, is a dopamine antagonist medication which is used to treat nausea and vomiting and certain gastrointestinal problems like gastroparesis (delayed gastric emptying). It raises the level of prolactin in the human body and is used off label to induce and promote breast milk production. It may be taken by mouth or rectally.
Side effects may include headache, anxiety, dry mouth, abdominal cramps, diarrhea, and elevated prolactin levels. Secondary to increased prolactin levels, breast changes, milk outflow, menstrual irregularities, and hypogonadism can occur. Domperidone may also cause QT prolongation and has rarely been associated with serious cardiac complications such as sudden cardiac death. However, the risks are small and occur more with high doses. Domperidone acts as a peripherally selective antagonist of the dopamine D2 and D3 receptors. Due to its low entry into the brain, the side effects of domperidone are different from those of other dopamine receptor antagonists like metoclopramide and it produces little in the way of central nervous system adverse effects. However, domperidone can nonetheless increase prolactin levels as the pituitary gland is outside of the blood–brain barrier.
Domperidone was discovered in 1974 and was introduced for medical use in 1979. It was developed by Janssen Pharmaceutica. Domperidone is available over-the-counter in many countries, for instance in Europe and elsewhere throughout the world. It is not approved for use in the United States. However, it is available in the United States for people with severe and treatment-refractory gastrointestinal motility problems under an expanded access individual-patient investigational new drug application. An analogue of domperidone called deudomperidone is under development for potential use in the United States and other countries.
Medical uses
Nausea and vomiting
There is some evidence that domperidone has antiemetic activity. It is recommended by the Canadian Headache Society for treatment of nausea associated with acute migraine.
Gastroparesis
Gastroparesis is a medical condition characterised by delayed emptying of the stomach when there is no mechanical gastric outlet obstruction. Its cause is most commonly idiopathic, a diabetic complication or a result of abdominal surgery. The condition causes nausea, vomiting, fullness after eating, early satiety (feeling full before the meal is finished), abdominal pain, and bloating. Domperidone can be used to increase the transit of food through the stomach by increasing gastrointestinal peristalsis and hence to treat gastroparesis. It may be useful in idiopathic and diabetic gastroparesis. However, increased rate of gastric emptying induced by drugs like domperidone does not always correlate well with relief of symptoms.
Lactation
Domperidone is used off-label in some countries to stimulate lactation or enhance breast milk production, but, as of December 2023, it is not approved for that purpose in any country, and is not approved for use in humans in the United States. Domperidone acts as a peripheral dopamine antagonist and is hypothesized to stimulate prolactin secretion, with a 2003 study supporting that hypothesis.
A 2018 meta-analysis of five randomized controlled trials found that domperidone resulted in a moderate increase in breast milk volume for mothers of preterm infants with insufficient milk supply. The analysis also indicated that domperidone was well tolerated, with no significant difference in maternal adverse events compared to placebo. Domperidone has no officially established dosage for increasing milk supply, but most published studies have used 10 mg three times daily for 4 to 10 days (30 mg per day).
The US Food and Drug Administration (FDA) has expressed concerns about serious adverse side effects and concerns about its effectiveness. The FDA identified serious cardiac adverse events associated with domperidone use in lactating individuals, including arrhythmias, cardiac arrest, and sudden death. Additionally, discontinuation or tapering of domperidone has been linked to severe neuropsychiatric adverse events such as agitation, anxiety, and suicidal ideation. Because of these risks, the FDA strongly cautions against the use of domperidone to enhance lactation.
A review by Health Canada also found a link between the sudden discontinuation or tapering of domperidone when used off-label for lactation, and psychiatric withdrawal events, particularly daily doses greater than the maximum recommended dose of 30 mg per day. A 2021 study found that postpartum usage of domperidone increased across five Canadian provinces from 2004 and 2017 with usage plateauing in 2011 and a drop in usage after a 2012 Health Canada advisory warning about domperidone.
Other uses
Parkinson's disease
Parkinson's disease is a degenerative neurological condition where a decrease in dopamine in the brain leads to rigidity (stiffness of movement), tremor, and other symptoms and signs. Poor gastrointestinal function, nausea, and vomiting are major problems for people with Parkinson's disease because most medications used to treat Parkinson's disease are given by mouth. These medications, such as levodopa, can also cause nausea as a side effect. Furthermore, anti-nausea drugs, such as metoclopramide, which do cross the blood–brain barrier, may worsen the extrapyramidal symptoms of Parkinson's disease. Domperidone can be used to relieve nausea and gastrointestinal symptoms in Parkinson's disease; it blocks peripheral D2 receptors but minimally crosses the blood-brain barrier in normal doses, so has no effect on the extrapyramidal symptoms of the disease. In addition, domperidone may be useful in the treatment of orthostatic hypotension caused by dopaminergic therapy in people with Parkinson's disease.
Other gastrointestinal uses
Domperidone may be used in functional dyspepsia in both adults and children. It has also been found effective in the treatment of reflux in children. However, some specialists consider its risks prohibitive for the treatment of infantile reflux.
Available forms
Domperidone is available for use by oral administration in the form of tablets, orally disintegrating tablets (ODTs) and suspension, and by rectal administration in the form of suppositories. The oral tablets are available in a strength of 10 mg. Domperidone has been studied for use by intramuscular injection and an intravenous formulation was previously available, but the medication is now only available in forms for oral and rectal administration.
Veterinary uses
Domperidone is used as immunotherapy to treat leishmania in dogs.
Domperidone also has an FDA-approved formulation for the prevention of fescue toxicosis in periparturient mares.
Contraindications
Domperidone is contraindicated with QT-prolonging drugs like amiodarone.
Side effects
Side effects associated with domperidone include dry mouth, abdominal cramps, diarrhea, nausea, rash, itching, hives, and hyperprolactinemia (the symptoms of which may include breast enlargement, galactorrhea, breast pain/tenderness, gynecomastia, hypogonadism, and menstrual irregularities).
Due to the blockade of D2 receptors in the central nervous system, D2 receptor antagonists like metoclopramide and antipsychotics can also produce a variety of additional side effects including drowsiness, akathisia, restlessness, insomnia, lassitude, fatigue, extrapyramidal symptoms, dystonia, Parkinsonian symptoms, tardive dyskinesia, and depression. However, this is not the case with domperidone, because, unlike other D2 receptor antagonists, it minimally crosses the blood–brain barrier, and for this reason, is rarely associated with such side effects. However, domperidone theoretically might be able to produce some blockade of central D2 receptors at higher doses, in turn producing side effects similar to those of centrally permeable D2 receptor antagonists like antipsychotics.
Elevated prolactin levels
Due to D2 receptor blockade, domperidone causes hyperprolactinemia. Hyperprolactinemia can suppress the secretion of gonadotropin-releasing hormone (GnRH) from the hypothalamus, in turn suppressing the secretion of follicle-stimulating hormone (FSH) and luteinizing hormone (LH) and resulting in hypogonadism and low levels of the sex hormones estradiol and testosterone. Accordingly, 10 to 15% of females have been reported to experience mammoplasia (breast enlargement), mastodynia (breast pain/tenderness), galactorrhea (inappropriate or excessive milk production/secretion), and amenorrhea (cessation of menstrual cycles) with domperidone therapy. Males may experience low libido, erectile dysfunction, and impaired spermatogenesis, as well as galactorrhea and gynecomastia. D2 receptor antagonists like antipsychotics and domperidone may also increase the risk of prolactinomas, but more research is needed to confirm this.
Rare reactions
Cardiac complications
Domperidone use is associated with an increased risk of sudden cardiac death (by 70%) most likely through its prolonging effect of the cardiac QT interval and ventricular arrhythmias. The cause is thought to be blockade of hERG voltage-gated potassium channels. The risks are dose-dependent, and appear to be greatest with high/very high doses via intravenous administration and in the elderly, as well as with drugs that interact with domperidone and increase its circulating concentrations (namely CYP3A4 inhibitors). Conflicting reports exist, however. In neonates and infants, QT prolongation is controversial and uncertain.
In 2014, UK drug regulatory authorities (the MHRA) issued restrictions on domperidone due to the increased risk of adverse cardiac effects.
However, a 2015 Australian review came to a different conclusion.
Possible central toxicity in infants
In Britain, a legal case involved the death of two children of a mother whose three children had all had hypernatraemia. She was charged with poisoning the children with salt. One of the children, who was born at 28 weeks gestation with respiratory complications and had a fundoplication for gastroesophageal reflux and failure to thrive was prescribed domperidone. An advocate for the mother suggested the child may have had neuroleptic malignant syndrome as a side effect of domperidone due to the drug crossing the child's immature blood–brain barrier.
Interactions
In healthy volunteers, the CYP3A4 inhibitor ketoconazole increased the Cmax and AUC concentrations of domperidone by 3- to 10-fold. This was accompanied by a QT interval prolongation of about 10–20 milliseconds when domperidone 10 mg four times daily and ketoconazole 200 mg twice daily were administered, whereas domperidone by itself at the dosage assessed produced no such effect. As such, domperidone with ketoconazole or other CYP3A4 inhibitors is a potentially dangerous combination.
Pharmacology
Pharmacodynamics
Domperidone is a peripherally selective dopamine D2 and D3 receptor antagonist. It has no clinically significant interaction with the D1 receptor, unlike metoclopramide. The medication provides relief from nausea by blocking D2 receptors in the chemoreceptor trigger zone and from gastrointestinal symptoms by blocking D2 receptors in the gut. It blocks D2 receptors in the lactotrophs of the anterior pituitary gland increasing release of prolactin which in turn increases lactation. Domperidone may be more useful in some patients and cause harm in others by way of the genetics of the person, such as polymorphisms in the drug transporter gene ABCB1 (which encodes P-glycoprotein), the voltage-gated potassium channel KCNH2 gene (hERG/Kv11.1), and the α1D-adrenergic receptor ADRA1D gene.
Effects on prolactin levels
A single 20 mg oral dose of domperidone has been found to increase mean serum prolactin levels (measured 90 minutes post-administration) in non-lactating women from 8.1 ng/mL to 110.9 ng/mL (a 13.7-fold increase). This was similar to the increase in prolactin levels produced by a single 20 mg oral dose of metoclopramide (7.4 ng/mL to 124.1 ng/mL; 16.7-fold increase). After two weeks of repeated administration (30 mg/day in both cases), the increase in prolactin levels produced by domperidone was reduced (53.2 ng/mL; 6.6-fold above baseline), but the increase in prolactin levels produced by metoclopramide, conversely, was heightened (179.6 ng/mL; 24.3-fold above baseline). This indicates that acute and continuous administration of both domperidone and metoclopramide is effective in increasing prolactin levels, but that there are different effects on the secretion of prolactin with repeated use. The mechanism of the difference is unknown. The increase in prolactin levels observed with the two drugs was much greater in women than in men. This appears to be due to the higher estrogen levels in women, as estrogen stimulates prolactin secretion from the pituitary gland.
For comparison, normal prolactin levels in women are less than 20 ng/mL, prolactin levels peak at 100 to 300 ng/mL at parturition in pregnant women, and in lactating women, prolactin levels have been found to be 90 ng/mL at 10 days postpartum and 44 ng/mL at 180 days postpartum.
Pharmacokinetics
Absorption
The absolute bioavailability of domperidone is low (13–17%, or approximately 15%). This is due to extensive first-pass metabolism in the intestines and liver. Conversely, its bioavailability is high via intramuscular injection (90%). The onset of action of domperidone taken orally is about 30 to 60 minutes. Peak levels of domperidone following an oral dose occur after about 60 minutes. Domperidone exposure increases proportionally with doses in the 10 to 20 mg dose range. There is a 2- to 3-fold accumulation in levels of domperidone with frequent repeated oral administration (four times per day, i.e. every 5 hours, for 4 days). The oral bioavailability of domperidone is somewhat increased, and its time to peak slightly delayed, when it is taken with food; bioavailability is decreased by prior concomitant administration of cimetidine and sodium bicarbonate.
Distribution
The plasma protein binding of domperidone is 91 to 93%. The tissue distribution of domperidone based on animal studies is wide, but concentrations are low in the brain. The drug is a substrate for the P-glycoprotein (ABCB1) transporter, and animal studies suggest that this is the reason for the low central nervous system penetration of domperidone. Small amounts of domperidone cross the placenta in animals.
Metabolism
Domperidone is extensively metabolized in the liver and intestines with oral administration. This occurs via hydroxylation and N-dealkylation. Domperidone is almost exclusively metabolized by CYP3A4/5, though minor contributions by CYP1A2, CYP2D6, and CYP2C8 have been reported. CYP3A4 is the major enzyme involved in the N-dealkylation of domperidone, while CYP3A4, CYP1A2, and CYP2E1 are involved in its aromatic hydroxylation. All of the metabolites of domperidone are inactive as D2 receptor ligands. Overall and peak levels of domperidone are increased by about 2.9- and 1.5-fold in moderate hepatic impairment, respectively.
Elimination
Domperidone is eliminated 31% in urine and 66% in feces. The proportion of domperidone excreted unchanged is small (10% in feces and 1% in urine). The elimination half-life of domperidone is about 7 to 9 hours in healthy individuals. However, the elimination half-life of domperidone can be prolonged to 20 hours in people with severe renal dysfunction.
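For illustration, the reported 2- to 3-fold accumulation with dosing every 5 hours (see Absorption above) is consistent with this half-life under simple first-order elimination. The following sketch assumes an idealised one-compartment model with first-order kinetics; it is a simplification for illustration, not a model from the cited sources:

# Assumes idealised first-order elimination; an illustrative simplification only.
half_life_h = 8.0          # within the reported 7-9 hour range
dosing_interval_h = 5.0    # "four times per day (every 5 hours)"

# Fraction of a dose remaining after one dosing interval.
remaining = 0.5 ** (dosing_interval_h / half_life_h)

# Steady-state accumulation ratio for repeated dosing at a fixed interval.
accumulation = 1 / (1 - remaining)
print(round(accumulation, 1))  # ≈ 2.8, in line with the reported 2- to 3-fold accumulation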
Chemistry
Domperidone is a derivative of benzimidazolinone. It is structurally related to butyrophenone neuroleptics like haloperidol.
History
Domperidone was synthesized at Janssen Pharmaceutica in 1974 following their research on antipsychotic drugs. Janssen pharmacologists discovered that some antipsychotic drugs had a significant effect on dopamine receptors in the central chemoreceptor trigger zone that regulated vomiting, and started searching for a dopamine antagonist that would not pass the blood–brain barrier, thereby being free of the extrapyramidal side effects that were associated with drugs of this type. This led to the discovery of domperidone as a strong antiemetic with minimal central effects. Domperidone was patented in the United States in 1978, with the patent filed in 1976. In 1979, domperidone was first marketed, under the brand name Motilium, in Switzerland and West Germany. Domperidone was subsequently introduced in the forms of orally disintegrating tablets (based on Zydis technology) in 1999.
In April 2014, the Coordination Group for Mutual Recognition and Decentralised Procedures – Human (CMDh) published an official press release suggesting restricting the use of domperidone-containing medicines. It also approved earlier published suggestions by the Pharmacovigilance Risk Assessment Committee (PRAC) to use domperidone only for treating nausea and vomiting and to reduce the maximum dose to 10 mg up to three times daily.
Society and culture
Generic names
Domperidone is the generic name of the drug and its , , , and .
Regulatory approval
It was reported in 2007 that domperidone is available in 58 countries, but the uses or indications of domperidone vary between nations. In Italy it is used in the treatment of gastroesophageal reflux disease and in Canada, the drug is indicated in upper gastrointestinal motility disorders and to prevent gastrointestinal symptoms associated with the use of dopamine agonist antiparkinsonian agents. In the United Kingdom, domperidone is only indicated for the treatment of nausea and vomiting and the treatment duration is usually limited to 1 week.
In the United States, domperidone is not a legally marketed human drug and it is not approved for sale in the United States. In June 2004, the Food and Drug Administration (FDA) issued a warning that distributing any domperidone-containing products is illegal.
It is available over-the-counter to treat gastroesophageal reflux disease and functional dyspepsia in many countries, such as Ireland, the Netherlands, Italy, South Africa, Mexico, India, Chile, and China.
Domperidone is not approved for use in the United States. There is an exception for use in people with treatment-refractory gastrointestinal symptoms under an FDA Investigational New Drug application.
Formulations
Research
Domperidone has been studied as a potential hormonal contraceptive to prevent pregnancy in women.
References
Antiemetics
Antihypotensive agents
Belgian inventions
Chloroarenes
Dopamine antagonists
HERG blocker
Janssen Pharmaceutica
Motility stimulants
Peripherally selective drugs
Piperidines
Potassium channel blockers
Prolactin releasers
Ureas | Domperidone | [
"Chemistry"
] | 4,335 | [
"Organic compounds",
"Ureas"
] |
964,995 | https://en.wikipedia.org/wiki/Postmodern%20architecture | Postmodern architecture is a style or movement which emerged in the 1960s as a reaction against the austerity, formality, and lack of variety of modern architecture, particularly in the international style advocated by Philip Johnson and Henry-Russell Hitchcock. The movement was formally introduced by the architect and urban planner Denise Scott Brown and architectural theorist Robert Venturi in their 1972 book Learning from Las Vegas. The style flourished from the 1980s through the 1990s, particularly in the work of Scott Brown & Venturi, Philip Johnson, Charles Moore and Michael Graves. In the late 1990s, it divided into a multitude of new tendencies, including high-tech architecture, neo-futurism, new classical architecture, and deconstructivism. However, some buildings built after this period are still considered postmodern.
Origins
Postmodern architecture emerged in the late 1960s as a reaction against the perceived shortcomings of modern architecture, particularly its rigid doctrines, its uniformity, its lack of ornament, and its habit of ignoring the history and culture of the cities where it appeared. In 1966, Venturi formalized the movement in his book, Complexity and Contradiction in Architecture. Venturi summarized the kind of architecture he wanted to see replace modernism:
I speak of a complex and contradictory architecture based on the richness and ambiguity of modern experience, including that experience which is inherent in art. ... I welcome the problems and exploit the uncertainties. ... I like elements which are hybrid rather than "pure", compromising rather than "clean" ... accommodating rather than excluding. ... I am for messy vitality over obvious unity. ... I prefer "both-and" to "either-or", black and white, and sometimes gray, to black or white. ... An architecture of complexity and contradiction must embody the difficult unity of inclusion rather than the easy unity of exclusion.
In place of the functional doctrines of modernism, Venturi proposed giving primary emphasis to the façade, incorporating historical elements, a subtle use of unusual materials and historical allusions, and the use of fragmentation and modulations to make the building interesting. Venturi and his wife, the accomplished architect and urban planner Denise Scott Brown, wrote Learning from Las Vegas (1972), co-authored with Steven Izenour, in which they further developed their joint argument against modernism. They urged architects to take into consideration and to celebrate the existing architecture in a place, rather than to try to impose a visionary utopia from their own fantasies. This was in line with Scott Brown's belief that buildings should be built for people, and that architecture should listen to them. Scott Brown and Venturi argued that ornamental and decorative elements "accommodate existing needs for variety and communication". The book was instrumental in opening readers' eyes to new ways of thinking about buildings, as it drew from the entire history of architecture—both high-style and vernacular, both historic and modern. In response to Mies van der Rohe's famous maxim "Less is more", Venturi responded with "Less is a bore." Venturi cited one of his and his wife's buildings, Guild House in Philadelphia, as an example of a new style that welcomed variety and historical references, without returning to academic revival of old styles.
In Italy at about the same time, a similar revolt against strict modernism was being launched by the architect Aldo Rossi, who criticized the modernist-style rebuilding of Italian cities and buildings destroyed during the war, a style that had no relation to the architectural history, original street plans, or culture of the cities. Rossi insisted that cities be rebuilt in ways that preserved their historical fabric and local traditions. Similar ideas and projects were put forward at the Venice Biennale in 1980. The call for a post-modern style was joined by Christian de Portzamparc in France and Ricardo Bofill in Spain, and in Japan by Arata Isozaki.
Notable postmodern buildings and architects
Robert Venturi
Robert Venturi (1925–2018) was both a prominent theorist of postmodernism and an architect whose buildings illustrated his ideas. After studying at the American Academy in Rome, he worked in the offices of the modernists Eero Saarinen and Louis Kahn until 1958, and then became a professor of architecture at Yale University. Among his first buildings were the Guild House in Philadelphia, built between 1960 and 1963, and a house for his mother in Chestnut Hill, Philadelphia. These two buildings became symbols of the postmodern movement. He went on to design, in the 1960s and 1970s, a series of buildings which took into account both historic precedents and the ideas and forms existing in the real life of the cities around them.
Michael Graves
Michael Graves (1934–2015) designed two of the most prominent buildings in the postmodern style, the Portland Building and the Denver Public Library. He later followed up his landmark buildings by designing large, low-cost retail stores for chains such as Target and J.C. Penney in the United States, which had a major influence on the design of retail stores in city centers and shopping malls. In his early career, he, along with Peter Eisenman, Charles Gwathmey, John Hejduk and Richard Meier, was considered one of the New York Five, a group of advocates of pure modern architecture, but in 1982 he turned toward postmodernism with the Portland Building, one of the first major structures in the style. The building has since been added to the National Register of Historic Places.
Charles Moore
The most famous work of architect Charles Moore (1925–1993) is the Piazza d'Italia in New Orleans (1978), a public square composed of an exuberant collection of pieces of famous Italian Renaissance architecture. Drawing upon the Spanish Revival architecture of the city hall, Moore designed the Beverly Hills Civic Center in a mixture of Spanish Revival, Art Deco and postmodern styles. It includes courtyards, colonnades, promenades, and buildings, with both open and semi-enclosed spaces, stairways and balconies.
The Haas School of Business at the University of California, Berkeley blends in with both the neo-Renaissance architecture of the Berkeley campus and with picturesque early 20th century wooden residential architecture in the neighboring Berkeley Hills.
Philip Johnson
Philip Johnson (1906–2005) began his career as a pure modernist. In 1935, he co-authored the famous catalog of the Museum of Modern Art exposition on the International Style, and studied with Walter Gropius and Marcel Breuer at Harvard. His Glass House in New Canaan, Connecticut (1949), inspired by a similar house by Ludwig Mies van der Rohe, became an icon of the modernist movement. He worked with Mies on another iconic modernist project, the Seagram Building in New York City. However, in the 1950s, he began to include certain playful and mannerist forms in his buildings, such as the Synagogue of Port Chester (1954–1956), with a vaulted plaster ceiling and narrow colored windows, and the Art Gallery of the University of Nebraska (1963). Nevertheless, his major buildings of the 1970s, such as the IDS Center in Minneapolis (1973) and Pennzoil Place in Houston (1970–1976), were massive, sober, and entirely modernist.
With the AT&T Building (now named 550 Madison Avenue) (1978–1982), Johnson turned dramatically toward postmodernism. The building's most prominent feature is a purely decorative top modeled after a piece of Chippendale furniture, and it has other more subtle references to historical architecture. His intention was to make the building stand out as a corporate symbol among the modernist skyscrapers around it in Manhattan, and he succeeded; it became the best-known of all postmodern buildings. Soon afterward he completed another postmodern project, PPG Place in Pittsburgh, Pennsylvania (1979–1984), a complex of six glass buildings for the Pittsburgh Plate Glass Company. These buildings have neo-gothic features, including 231 glass spires, the largest of which is high.
In 1995, he constructed a postmodern gatehouse pavilion for his residence, the Glass House. The gatehouse, called "Da Monsta", is 23 feet high and made of gunite, or concrete shot from a hose, colored gray and red. It is a piece of sculptural architecture with no right angles and very few straight lines, a predecessor of the sculptural contemporary architecture of the 21st century.
Frank Gehry
Frank Gehry (born 1929) was a major figure in postmodernist architecture, and is one of the most prominent figures in contemporary architecture. After studying at the University of Southern California in Los Angeles and then the Harvard Graduate School of Design, he opened his own office in Los Angeles in 1962. Beginning in the 1970s, he began using prefabricated industrial materials to construct unusual forms on private houses in Los Angeles, including, in 1978, his own house in Santa Monica. He broke with their traditional design, giving them an unfinished and unstable look. His Schnabel House in Los Angeles (1986–1989) was broken into individual structures, with a different structure for every room. His Norton Residence in Venice, California (1983), built for a writer and former lifeguard, had a workroom modeled after a lifeguard tower overlooking the Santa Monica beach. In his early buildings, different parts of the buildings were often different bright colors. In the 1980s he began to receive major commissions, including the Loyola Law School (1978–1984) and the California Aerospace Museum (1982–1984), then international commissions in the Netherlands and the Czech Republic. His "Dancing House" in Prague (1996) was constructed with an undulating façade of concrete panels; parts of the walls were composed of glass, revealing the concrete pillars underneath. His most prominent project was the Guggenheim Bilbao museum (1991–1997), clad in undulating skins of titanium, a material which until then had been used mainly in building aircraft, and which changed color depending upon the light. Gehry was often described as a proponent of deconstructivism, but he refused to accept that or any other label for his work.
César Pelli
César Pelli (October 12, 1926 – July 19, 2019) was an Argentine architect who designed some of the world's tallest buildings and other major urban landmarks. Two of his most notable projects are the Petronas Towers in Kuala Lumpur and the World Financial Center in New York City. The American Institute of Architects named him one of the ten most influential living American architects in 1991 and awarded him the AIA Gold Medal in 1995. In 2008, the Council on Tall Buildings and Urban Habitat presented him with the Lynn S. Beedle Lifetime Achievement Award. In 1977, Pelli was selected to be the dean of the Yale School of Architecture in New Haven, Connecticut, and served in that post until 1984. Shortly after Pelli arrived at Yale, he won the commission to design the expansion and renovation of the Museum of Modern Art in New York, which resulted in the establishment of his own firm, Cesar Pelli & Associates. The museum's expansion and renovation and the Museum of Modern Art Residential Tower were completed in 1984; the World Financial Center in New York, which includes the grand public space of the Winter Garden, was completed in 1988. Other significant projects during this period include the Crile Clinic Building in Cleveland, Ohio, and Herring Hall at Rice University in Houston, Texas, both completed in 1984; the Green Building at the Pacific Design Center in West Hollywood, California, completed in 1988; and the Wells Fargo Center in Minneapolis, Minnesota, completed in 1989.
In May 2004, Pelli was awarded an honorary Doctor of Humane Letters degree from the University of Minnesota Duluth, where he designed Weber Music Hall. In 2005, he was honored with the Connecticut Architecture Foundation's Distinguished Leadership Award.
Buildings designed by Pelli during this period are marked by further experimentation with a variety of materials (most prominently stainless steel) and his evolution of the skyscraper. One Canada Square at Canary Wharf in London (opened in 1991); Plaza Tower in Costa Mesa, California (completed in 1991); and the NTT Headquarters in Tokyo (finished in 1995) were preludes to a landmark project that Pelli designed for Kuala Lumpur, Malaysia. The Petronas Towers were completed in 1997, sheathed in stainless steel and reflecting Islamic design motifs. The dual towers were the world's tallest buildings until 2004. That year, Pelli received the Aga Khan Award for Architecture for the design of the Petronas Towers. Pelli's design for the National Museum of Art in Osaka, Japan, was completed in 2005, the same year that Pelli's firm changed its name to Pelli Clarke Pelli Architects to reflect the growing roles of senior principals Fred W. Clarke and Pelli's son Rafael.
Postmodernism in Europe
While postmodernism was best known as an American style, notable examples also appeared in Europe. In 1991 Robert Venturi completed the Sainsbury Wing of the National Gallery in London, which was modern but harmonized with the neoclassical architecture in and around Trafalgar Square. The German-born architect Helmut Jahn (1940–2021) constructed the Messeturm in Frankfurt, Germany, a skyscraper crowned with the pointed spire of a medieval tower.
One of the early postmodernist architects in Europe was James Stirling (1926–1992). He was an early critic of modernist architecture, blaming modernism for the destruction of British cities in the years after World War II. He designed colorful public housing projects in the postmodern style, as well as the Neue Staatsgalerie in Stuttgart, Germany (1977–1983), the Kammertheater in Stuttgart (1977–1982), and the Arthur M. Sackler Museum at Harvard University in the United States.
One of the most visible examples of the postmodern style in Europe is the SIS Building in London by Terry Farrell (1994). The building, next to the Thames, is the headquarters of the British Secret Intelligence Service. In 1992, Deyan Sudjic described it in The Guardian as an "epitaph for the architecture of the eighties ... It's a design which combines high seriousness in its classical composition with a possible unwitting sense of humour. The building could be interpreted equally plausibly as a Mayan temple or a piece of clanking art deco machinery".
The Belgian architectural firm Atelier d'architecture de Genval is renowned for its pioneering work in postmodern architecture in Belgium, particularly in Brussels, with major realizations such as the Espace Leopold complex, which includes the European Parliament, and others such as the Euroclear Building, most of which recall the American postmodernist style.
The Italian architect Aldo Rossi (1931–1997) was known for his postmodern works in Europe, including the Bonnefanten Museum in Maastricht, the Netherlands, completed in 1995. Rossi was the first Italian to win the most prestigious award in architecture, the Pritzker Prize, in 1990. He was noted for combining rigorous and pure forms with evocative and symbolic elements taken from classical architecture.
The Spanish architect Ricardo Bofill (1939–2022) is also known for his early postmodern works, including a residential complex in the form of a castle with red walls at Calp on the coast of Spain (1973) and the social housing complex Les Espaces d'Abraxas (1983) in Noisy-le-Grand, France.
The works of Austrian architect Friedensreich Hundertwasser (1928–2000) are occasionally considered a special expression of postmodern architecture.
Postmodernism in Japan
The Japanese architects Tadao Ando (born 1941) and Isozaki Arata (1931–2022) introduced the ideas of the postmodern movement to Japan. Before opening his studio in Osaka in 1969, Ando traveled widely in North America, Africa and Europe, absorbing European and American styles, and had no formal architectural education, though he later taught at Yale University (1987), Columbia University (1988) and Harvard University (1990). Most of his buildings were constructed of raw concrete in cubic forms, but had wide openings which brought in light and views of the nature outside. Beginning in the 1990s, he began using wood as a building material, and introduced elements of traditional Japanese architecture, particularly in his design of the Museum of Wood Culture (1995). His Benesse House in Naoshima, Kagawa, has elements of classic Japanese architecture and a plan which subtly integrates the house into the natural landscape. He won the Pritzker Prize, the most prestigious award in architecture, in 1995.
Isozaki Arata worked for two years in the studio of Kenzo Tange (1913–2005) before opening his own firm in Tokyo in 1963. His Museum of Contemporary Art in Nagi artfully combined wood, stone and metal, and joined three geometric forms, a cylinder, a half-cylinder and an extended block, to present three different artists in different settings. His Art Tower in Mito, Japan (1986–1990) featured a postmodernist titanium and stainless steel tower that twists around its own axis. In addition to museums and cultural centers in Japan, he designed the Museum of Contemporary Art, Los Angeles (MOCA) (1981–1986), and the COSI Columbus science museum and research center in Columbus, Ohio.
Concert halls – Sydney Opera House and the Berlin Philharmonic
The Sydney Opera House in Sydney, Australia, by the Danish architect Jørn Utzon (1918–2008), is one of the most recognizable of all works of postwar architecture, and spans the transition from modernism to postmodernism. Construction began in 1957, but it was not completed until 1973 due to difficult engineering problems and growing costs. The giant concrete shells soar over the platforms and form the roof of the hall itself. The architect resigned before the structure was completed, and the interior was designed largely after he left the project. The influence of the Sydney Opera House can be seen in later concert halls with soaring roofs made of undulating stainless steel.
One of the most influential buildings of the postmodern period was the Berlin Philharmonic, designed by Hans Scharoun (1893–1972) and completed in 1963. The exterior, with its sloping roofs and gilded façade, was a distinct break from the earlier, more austere modernist concert halls. The real revolution was inside, where Scharoun placed the orchestra in the center, with the audience seated on terraces around it. He described it this way: "The form given to the hall is inspired by a landscape; in the center is a valley, at the bottom of which is found the orchestra. Around it on all sides rise the terraces, like vineyards. Corresponding to an earthly landscape, the ceiling above appears like a sky." Following his description, future concert halls, such as the Walt Disney Concert Hall by Frank Gehry in Los Angeles and the Philharmonie de Paris of Jean Nouvel (2015), used the term "vineyard style" and placed the orchestra in the center, instead of on a stage at the end of the hall.
Characteristics
Complexity and contradiction
Postmodern architecture first emerged as a reaction against the doctrines of modern architecture, as expressed by modernist architects including Le Corbusier and Ludwig Mies van der Rohe. In place of the modernist doctrines of simplicity, expressed by Mies in his famous "less is more", and functionality, expressed in "form follows function" and Le Corbusier's dictum that "a house is a machine to live in", postmodernism, in the words of Robert Venturi, offered complexity and contradiction. Postmodern buildings had curved forms, decorative elements, asymmetry, bright colours, and features often borrowed from earlier periods. Colours and textures were unrelated to the structure or function of the building. Rejecting the "puritanism" of modernism, it called for a return to ornament, and an accumulation of citations and collages borrowed from past styles. It borrowed freely from classical architecture, rococo, neoclassical architecture, the Vienna Secession, the British Arts and Crafts movement, and the German Jugendstil.
Postmodern buildings often combined astonishing new forms and features with seemingly contradictory elements of classicism. James Stirling, the architect of the Neue Staatsgalerie in Stuttgart, Germany (1984), described the style as "representation and abstraction, monumental and informal, traditional and high-tech."
Fragmentation
Postmodern architecture often breaks large buildings into several different structures and forms, sometimes representing different functions of those parts of the building. With the use of different materials and styles, a single building can appear like a small town or village. An example is the Abteiberg Museum by Hans Hollein in Mönchengladbach (1972–1982).
Asymmetric and oblique forms
Asymmetric forms are one of the trademarks of postmodernism. In 1968, the French architect Claude Parent and philosopher Paul Virilio designed the church of Sainte-Bernadette du Banlay in Nevers, France, in the form of a massive block of concrete leaning to one side. Describing the form, they wrote: "a diagonal line on a white page can be a hill, or a mountain, or slope, an ascent, or a descent." Parent's buildings were inspired in part by concrete German blockhouses he discovered on the French coast which had slid down the cliffs, but were perfectly intact, with leaning walls and sloping floors. Postmodernist compositions are rarely symmetric, balanced and orderly. Oblique buildings which tilt, lean, and seem about to fall over are common.
Polychromy
Color is an important element in many postmodern buildings; to give the façades variety and personality, colored glass is sometimes used, or ceramic tiles, or stone. The buildings of Mexican architect Luis Barragán offer bright, sunlit colors that give life to the forms.
Humor and "camp"
Humor is a particular feature of many postmodern buildings, particularly in the United States. An example is the Binoculars Building in the Venice neighborhood of Los Angeles, designed by Frank Gehry in collaboration with the sculptor Claes Oldenburg (1991–2001). The gateway of the building is in the form of an enormous pair of binoculars; cars enter the garage passing under the binoculars. "Camp" humor was popular during the postmodern period; it was an ironic humor based on the premise that something could appear so bad (such as a building that appeared about to collapse) that it was good. In 1964, American critic Susan Sontag defined camp as a style which put its accent on the texture, the surface, and style to the detriment of the content, which adored exaggeration, and things which were not what they seemed. Postmodern architecture sometimes used the same sense of theatricality, sense of the absurd and exaggeration of forms.
The aims of postmodernism, which include solving the problems of Modernism, communicating meanings with ambiguity, and sensitivity for the building's context, are surprisingly unified for a period of buildings designed by architects who largely never collaborated with each other. These aims do, however, leave room for diverse implementations as can be illustrated by the variety of buildings created during the movement.
Theories of postmodern architecture
The characteristics of postmodernism allow its aim to be expressed in diverse ways. These characteristics include the use of sculptural forms, ornaments, anthropomorphism and materials which perform trompe-l'œil. These physical characteristics are combined with conceptual characteristics of meaning. These characteristics of meaning include pluralism, double coding, irony and paradox, and contextualism.
The sculptural forms, not necessarily organic, were created with much ardor. These can be seen in Hans Hollein's Abteiberg Museum (1972–1982). The building is made up of several building units, all very different. Each building's forms are nothing like the conforming rigid ones of Modernism. These forms are sculptural and are somewhat playful. These forms are not reduced to an absolute minimum; they are built and shaped for their own sake. The building units all fit together in a very organic way, which enhances the effect of the forms.
After many years of neglect, ornament returned. Frank Gehry's Venice Beach house, built in 1986, is littered with small ornamental details that would have been considered excessive and needless in Modernism. The Venice Beach House has an assembly of circular logs which exist mostly for decoration. The logs on top do have a minor purpose of holding up the window covers. However, the mere fact that they could have been replaced with a practically invisible nail makes their exaggerated existence largely ornamental. The ornament in Michael Graves' Portland Municipal Services Building ("Portland Building") (1980) is even more prominent. The two protruding triangular forms are largely ornamental. They exist for aesthetic purposes or for their own sake.
Postmodernism, with its sensitivity to the building's context, did not exclude the needs of humans from the building. Carlo Scarpa's Brion Cemetery (1970–1972) exemplifies this. The human requirement of a cemetery is that it possess a solemn nature, yet it must not cause the visitor to become depressed. Scarpa's cemetery achieves the solemn mood with the dull gray colors of the walls and neatly defined forms, but the bright green grass prevents this from being too overwhelming.
Postmodern buildings sometimes utilize trompe-l'œil, creating the illusion of space or depths where none actually exist, as has been done by painters since the Romans. The Portland Building (1980) has pillars represented on the side of the building that to some extent appear to be real, yet they are not.
The Hood Museum of Art (1981–1983) has a typically asymmetrical façade, a feature then prevalent throughout postmodern buildings.
Robert Venturi's Vanna Venturi House (1962–1964) illustrates the postmodernist aim of communicating a meaning and the characteristic of symbolism. The façade is, according to Venturi, a symbolic picture of a house, looking back to the 18th century. This is partly achieved through the use of symmetry and the arch over the entrance.
Perhaps the best example of irony in postmodern buildings is Charles Moore's Piazza d'Italia (1978). Moore quotes (architecturally) elements of Italian Renaissance and Roman Antiquity. However, he does so with a twist. The irony comes when it is noted that the pillars are covered with steel. It is also paradoxical in the way he quotes Italian antiquity far away from the original in New Orleans.
Double coding meant the buildings convey many meanings simultaneously. The Sony Building in New York provides one example. The building is a tall skyscraper which brings with it connotations of very modern technology. However, the top contradicts this. The top section conveys elements of classical antiquity. This double coding is a prevalent trait of postmodernism.
The characteristics of postmodernism were rather unified given their diverse appearances. The most notable among their characteristics is their playfully extravagant forms and the humour of the meanings the buildings conveyed.
Postmodern architecture emerged as an international style – the first examples of which are generally cited as being from the 1950s – but did not become a movement until the late 1970s, and it continues to influence present-day architecture. Postmodernity in architecture is said to be heralded by the return of "wit, ornament and reference" to architecture in response to the formalism of the International Style of modernism. As with many cultural movements, some of postmodernism's most pronounced and visible ideas can be seen in architecture. The functional and formalized shapes and spaces of the modernist style are replaced by diverse aesthetics: styles collide, form is adopted for its own sake, and new ways of viewing familiar styles and space abound. Perhaps most obviously, architects rediscovered past architectural ornament and forms which had been abstracted by the Modernist architects.
Postmodern architecture has also been described as neo-eclectic, where reference and ornament have returned to the façade, replacing the aggressively unornamented modern styles. This eclecticism is often combined with the use of non-orthogonal angles and unusual surfaces, most famously in the State Gallery of Stuttgart by James Stirling and the Piazza d'Italia by Charles Moore. The Scottish Parliament Building in Edinburgh has also been cited as being of postmodern vogue.
Modernist architects may regard postmodern buildings as vulgar, associated with a populist ethic, and sharing the design elements of shopping malls, cluttered with "gew-gaws". Postmodern architects may regard many modern buildings as soulless and bland, overly simplistic and abstract. This contrast was exemplified in the juxtaposition of the "whites" against the "grays," in which the "whites" were seeking to continue (or revive) the modernist tradition of purism and clarity, while the "grays" were embracing a more multifaceted cultural vision, seen in Robert Venturi's statement rejecting the "black or white" world view of modernism in favor of "black and white and sometimes gray." The divergence in opinions comes down to a difference in goals: modernism is rooted in minimal and true use of material as well as absence of ornament, while postmodernism is a rejection of strict rules set by the early modernists and seeks meaning and expression in the use of building techniques, forms, and stylistic references.
One building form that typifies the explorations of postmodernism is the traditional gable roof, in place of the iconic flat roof of modernism. Shedding water away from the center of the building, such a roof form always served a functional purpose in climates with rain and snow, and was a logical way to achieve larger spans with shorter structural members, but it was nevertheless relatively rare in Modernist buildings. However, postmodernism's own modernist roots appear in some of the noteworthy examples of "reclaimed" roofs. For instance, Robert Venturi's Vanna Venturi House breaks the gable in the middle, denying the functionality of the form, and Philip Johnson's 1001 Fifth Avenue building in Manhattan advertises a mansard roof form as an obviously flat, false front. Another alternative to the flat roofs of modernism would exaggerate a traditional roof to call even more attention to it, as when Kallmann McKinnell & Wood's American Academy of Arts and Sciences in Cambridge, Massachusetts, layers three tiers of low hipped roof forms one above another for an emphatic statement of shelter.
Relationship to previous styles
A new trend became evident in the last quarter of the 20th century as some architects started to turn away from modern functionalism which they viewed as boring, and which some of the public considered unwelcoming and even unpleasant. These architects turned toward the past, quoting past aspects of various buildings and melding them together (even sometimes in an inharmonious manner) to create a new means of designing buildings. A vivid example of this new approach was that postmodernism saw the comeback of columns and other elements of premodern designs, sometimes adapting classical Greek and Roman examples. In Modernism, the traditional column (as a design feature) was treated as a cylindrical pipe form, replaced by other technological means such as cantilevers, or masked completely by curtain wall façades. The revival of the column was an aesthetic, rather than a technological necessity. Modernist high-rise buildings had become in most instances monolithic, rejecting the concept of a stack of varied design elements for a single vocabulary from ground level to the top, in the most extreme cases even using a constant "footprint" (with no tapering or "wedding cake" design), with the building sometimes even suggesting the possibility of a single metallic extrusion directly from the ground, mostly by eliminating visual horizontal elements—this was seen most strictly in Minoru Yamasaki's World Trade Center buildings.
Another return was that of the "wit, ornament and reference" seen in older buildings in terra cotta decorative façades and bronze or stainless steel embellishments of the Beaux-Arts and Art Deco periods. In postmodern structures this was often achieved by placing contradictory quotes of previous building styles alongside each other, and even incorporating furniture stylistic references at a huge scale.
Contextualism, a trend in thinking in the later parts of 20th century, influences the ideologies of the postmodern movement in general. Contextualism is centered on the belief that all knowledge is "context-sensitive". This idea was even taken further to say that knowledge cannot be understood without considering its context. While noteworthy examples of modern architecture responded both subtly and directly to their physical context, postmodern architecture often addressed the context in terms of the materials, forms and details of the buildings around it—the cultural context.
Roots of postmodernism
The postmodernist movement is often seen (especially in the US) as an American movement, starting in America around the 1960s–1970s and then spreading to Europe and the rest of the world, to remain right through to the present. In 1966, however, the architectural historian Sir Nikolaus Pevsner spoke of a revived Expressionism as being "a new style, successor to my International Modern of the 1930s, a post-modern style", and included as examples Le Corbusier's work at Ronchamp and Chandigarh, Denys Lasdun at the Royal College of Physicians in London, Richard Sheppard at Churchill College, Cambridge, and James Stirling's and James Gowan's Leicester Engineering Building, as well as Philip Johnson's own guest house at New Canaan, Connecticut. Pevsner disapproved of these buildings for their self-expression and irrationalism, but he acknowledged them as "the legitimate style of the 1950s and 1960s" and defined their characteristics. The job of defining postmodernism was subsequently taken over by a younger generation who welcomed rather than rejected what they saw happening and, in the case of Robert Venturi, contributed to it.
The aims of postmodernism or late-modernism begin with its reaction to modernism; it tries to address the limitations of its predecessor. The list of aims is extended to include communicating ideas with the public, often in a humorous or witty way. Often, the communication is done by quoting extensively from past architectural styles, often many at once. In breaking away from modernism, it also strives to produce buildings that are sensitive to the context within which they are built.
Postmodernism has its origins in the perceived failure of modern architecture. Its preoccupation with functionalism and economical building meant that ornaments were done away with and the buildings were cloaked in a stark rational appearance. Many felt the buildings failed to meet the human need for comfort both for body and for the eye, that modernism did not account for the desire for beauty. The problem worsened when some already monotonous apartment blocks degenerated into slums. In response, architects sought to reintroduce ornament, color, decoration and human scale to buildings. Form was no longer to be defined solely by its functional requirements or minimal appearance.
Changing pedagogies
Critics of the reductionism of modernism often noted the abandonment of the teaching of architectural history as a causal factor. The fact that a number of the major players in the shift away from modernism were trained at Princeton University's School of Architecture, where recourse to history continued to be a part of design training in the 1940s and 1950s, was significant. The rising interest in history had a profound impact on architectural education. History courses became more typical and regularized. With the demand for professors knowledgeable in the history of architecture, programs were developed, including the Advanced Masters-Level Course in the History and Theory of Architecture offered by Dalibor Vesely and Joseph Rykwert at the University of Essex in England between 1968 and 1978. It was the first of its kind.
Other programs followed suit, including several PhD programs in schools of architecture that arose to differentiate themselves from art history PhD programs, where architectural historians had previously trained. In the US, MIT and Cornell were the first, created in the mid-1970s, followed by Columbia, Berkeley, and Princeton. Among the founders of new architectural history programs were Bruno Zevi at the Institute for the History of Architecture in Venice, Stanford Anderson and Henry Millon at MIT, Alexander Tzonis at the Architectural Association, Anthony Vidler at Princeton, Manfredo Tafuri at the University of Venice, Kenneth Frampton at Columbia University, and Werner Oechslin and Kurt Forster at ETH Zürich.
The creation of these programs was paralleled by the hiring, in the 1970s, of professionally trained historians by schools of architecture: Margaret Crawford (with a PhD from UCLA) at SCI-Arc; Elisabeth Grossman (PhD, Brown University) at Rhode Island School of Design; Christian Otto (PhD, Columbia University) at Cornell University; Richard Chafee (PhD, Courtauld Institute) at Roger Williams University; and Howard Burns (MA Kings College) at Harvard, to name just a few examples. A second generation of scholars then emerged that began to extend these efforts in the direction of what is now called "theory": K. Michael Hays (PhD, MIT) at Harvard, Mark Wigley (PhD, Auckland University) at Princeton (now at Columbia University), and Beatriz Colomina (PhD, School of Architecture, Barcelona) at Princeton; Mark Jarzombek (PhD MIT) at Cornell (now at MIT), Jennifer Bloomer (PhD, Georgia Tech) at Iowa State and Catherine Ingraham (PhD, Johns Hopkins) now at Pratt Institute.
Postmodernism with its diversity possesses sensitivity to the building's context and history, and the client's requirements. The postmodernist architects often considered the general requirements of the urban buildings and their surroundings during the building's design. For example, in Frank Gehry's Venice Beach House, the neighboring houses have a similar bright flat color. This vernacular sensitivity is often evident, but other times the designs respond to more high-style neighbors. James Stirling's Arthur M. Sackler Museum at Harvard University features a rounded corner and striped brick patterning that relate to the form and decoration of the polychromatic Victorian Memorial Hall across the street, although in neither case is the element imitative or historicist.
Subsequent movements
Following the postmodern riposte against modernism, various architectural trends became established, though not necessarily following the principles of postmodernism. Concurrently, the recent movements of New Urbanism and New Classical Architecture promote a sustainable approach toward construction that appreciates and develops smart growth, architectural tradition and classical design. This stands in contrast to modernist and globally uniform architecture, as well as to solitary housing estates and suburban sprawl. Both trends started in the 1980s. The Driehaus Architecture Prize is an award that recognizes efforts in New Urbanism and New Classical Architecture, and is endowed with prize money twice as high as that of the modernist Pritzker Prize. Some postmodern architects, such as Robert A. M. Stern and Albert, Righter, & Tittman, have moved from postmodern design to new interpretations of traditional architecture.
The Neo-Andean style takes a similar approach to ornamentation as broader postmodernism. First brought to attention in 1996, the style is notable for being designed and championed by indigenous Peruvians and Bolivians, and takes inspiration from ancient Inca and Andean designs.
Postmodern architects
Some of the best-known and influential architects in the postmodern style are:
Other examples of postmodern architecture
See also
Charles Jencks
New classical architecture, a style that emerged from postmodernism and makes more accurate references to historical architectural styles.
Third Bay Tradition
Explanatory footnotes
References
General and cited references
Klotz, Heinrich (1998). History of Post-Modern Architecture. Cambridge, MA: MIT Press.
Venturi, Robert (1977). Learning from Las Vegas: The Forgotten Symbolism of Architectural Form. Cambridge, MA: MIT Press.
Further reading
Postmodern Architecture: Restoring Context Princeton University Lecture
Postmodern Architecture and Urbanism University of California–Berkeley Lecture
External links
About Postmodernism
Gallery of Postmodern Houses
Post Modern Architecture at Great Buildings Online (archived 10 January 2007)
20th-century architectural styles
House styles | Postmodern architecture | [
"Engineering"
] | 8,400 | [
"Postmodern architecture",
"Architecture"
] |
965,020 | https://en.wikipedia.org/wiki/Pozzolana | Pozzolana or pozzuolana ( , ), also known as pozzolanic ash (), is a natural siliceous or siliceous-aluminous material which reacts with calcium hydroxide in the presence of water at room temperature (cf. pozzolanic reaction). In this reaction insoluble calcium silicate hydrate and calcium aluminate hydrate compounds are formed possessing cementitious properties. The designation pozzolana is derived from one of the primary deposits of volcanic ash used by the Romans in Italy, at Pozzuoli. The modern definition of pozzolana encompasses any volcanic material (pumice or volcanic ash), predominantly composed of fine volcanic glass, that is used as a pozzolan. Note the difference with the term pozzolan, which exerts no bearing on the specific origin of the material, as opposed to pozzolana, which can only be used for pozzolans of volcanic origin, primarily composed of volcanic glass.
Historical use
Pozzolanas such as Santorin earth were used in the Eastern Mediterranean as early as 500–400 BC. Although pioneered by the ancient Greeks, it was the Romans who eventually fully developed the potential of lime-pozzolan pastes as the binder phase in Roman concrete used for buildings and underwater construction. Vitruvius speaks of four types of pozzolana: black, white, grey, and red, all of which can be found in the volcanic areas of Italy, such as Naples. Typically it was very thoroughly mixed two-to-one with lime just prior to mixing with water. The Roman port at Cosa was built of pozzolana-lime concrete that was poured under water, apparently using a long tube to carefully lay it up without allowing sea water to mix with it. The three piers are still visible today, with the underwater portions in generally excellent condition even after more than 2,100 years.
Geochemistry and mineralogy
The major pozzolanically active component of volcanic pumices and ashes is a highly porous glass. The easily alterable, or highly reactive, nature of these ashes and pumices limits their occurrence largely to recently active volcanic areas. Most of the traditionally used natural pozzolans belong to this group, i.e., volcanic pumice from Pozzuoli, Santorin earth and the incoherent parts of German trass.
The chemical composition of pozzolana is variable and reflects the regional type of volcanism. With SiO2 being the major chemical component, most unaltered pumices and ashes fall in the intermediate (52–66 wt% SiO2) to acid (>66 wt% SiO2) composition range for glassy rock types outlined by the IUGS. Basic (45–52 wt% SiO2) and ultrabasic (<45 wt% SiO2) pyroclastics are less commonly used as pozzolans. Al2O3 is present in substantial amounts in most pozzolanas, while Fe2O3 and MgO are present in minor proportions only, as is typical of more acidic rock types. CaO and alkali contents are usually modest but can vary substantially from pozzolana to pozzolana.
The mineralogical composition of unaltered pyroclastic rocks is mainly determined by the presence of phenocrysts and the chemical composition of the parent magma. The major component is volcanic glass, typically present in quantities over 50 wt%. Pozzolanas containing significantly less volcanic glass, such as a trachyandesite from Volvic (France) with only 25 wt%, are less reactive. Apart from the glass content and its morphology, associated with the specific surface area, defects and the degree of strain in the glass also appear to affect the pozzolanic activity.
Typical associated minerals present as large phenocrysts are members of the plagioclase feldspar solid solution series. In pyroclastic rocks in which alkalis predominate over Ca, K-feldspar such as sanidine or albite Na-feldspar are found. Leucite is present in the K-rich, silica-poor Latium pozzolanas. Quartz is usually present in minor quantities in acidic pozzolanas, while pyroxenes and/or olivine phenocrysts are often found in more basic materials. Xenocrysts or rock fragments incorporated during the violent eruptional and depositional events are also encountered.
Zeolite, opal CT and clay minerals are often present in minor quantities as alteration products of the volcanic glass. While zeolitisation or formation of opal CT is in general beneficial for the pozzolanic activity, clay formation has adverse effects on the performance of lime-pozzolan blends or blended cements.
Modern use
Pozzolana is abundant in certain locations and is extensively used as an addition to Portland cement in countries such as Italy, Germany, Kenya, Uganda, Turkey, China and Greece. Compared to industrial by-product pozzolans, natural pozzolanas are characterized by larger ranges in composition and a larger variability in physical properties. The application of pozzolana in Portland cement is mainly controlled by the local availability of suitable deposits and the competition with accessible industrial by-product supplementary cementitious materials. Partly because of the exhaustion of the latter sources and the extensive reserves of pozzolana available, and partly because of the proven technical advantages of an intelligent use of pozzolana, their use is expected to expand strongly in the future.
Pozzolanic reaction
The pozzolanic reaction is the chemical reaction that occurs in Portland cement containing pozzolans. It is the main reaction involved in Roman concrete, invented in Ancient Rome. At the heart of the pozzolanic reaction is a simple acid-base reaction between calcium hydroxide (as portlandite) and silicic acid.
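In simplified form (treating the product as a single calcium silicate hydrate and ignoring the variable stoichiometry of the phases actually formed), the reaction can be written as:

Ca(OH)2 + H4SiO4 → CaH2SiO4 · 2 H2O

In cement chemist notation this is often abbreviated as CH + SH → CSH, the calcium silicate hydrate binder responsible for the strength of the hardened paste.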
See also
Ancient Roman use as underwater cement
Caesarea Maritima, the Herodian port
Ostia Antica, the Trajanic port
Calcium silicate hydrate (CSH)
Cement
Cement chemist notation
Concrete
Energetically modified cement (EMC)
Fly ash
Metakaolin
Portland cement
Pozzolan
Pozzolanic reaction (main page)
Pumice
Rice hull ash
Roman concrete
Silica fume
References
Cook D.J. (1986) Natural pozzolanas. In: Swamy R.N., Editor (1986) Cement Replacement Materials, Surrey University Press, p. 200.
McCann, A.M. (1994) "The Roman Port of Cosa" (273 BC), Scientific American, Ancient Cities, pp. 92–99. Covers the hydraulic concrete ("pozzolana mortar"), the five piers of the Cosa harbor, and the lighthouse on pier 5, with diagrams and photographs. Height of the port city: 100 BC.
Snellings R., Mertens G., Elsen J. (2012) Supplementary cementitious materials. Reviews in Mineralogy and Geochemistry 74:211–278.
Volcanology
Cement
Concrete | Pozzolana | [
"Engineering"
] | 1,471 | [
"Structural engineering",
"Concrete"
] |
965,155 | https://en.wikipedia.org/wiki/Rohm | (styled as ROHM) is a Japanese electronic parts manufacturer based in Kyoto, Japan. Rohm was incorporated as Toyo Electronics Industry Corporation by Kenichiro Sato (佐藤 研一郎) on September 17, 1958.
The company was originally called R.ohm, which was derived from R for resistors, the original product, plus ohm, the unit of measure for resistance.
The name of the company was officially changed to Rohm in 1979 and then changed again to Rohm Semiconductor in January 2009.
When Rohm was established, resistors were its main product. Later, the company began manufacturing semiconductors. ICs and discrete semiconductors now account for about 80% of Rohm's revenue.
Through 2012, Rohm was among the top 20 semiconductor sales leaders.
International expansion
In 2016, Rohm started the construction of a production facility in Kelantan, Malaysia, and commenced operations there in April 2017. In 2022, Rohm expanded the facility to approximately 1.5 times its original size, with the expansion planned to begin operating in 2023. The original plant mainly focuses on the production of discrete semiconductors such as diodes, while the expanded facility will focus on the production of analog LSIs and transistors.
Products
Rohm produces a range of products such as power stage ICs aimed at power supplies, CMOS operational amplifiers used to boost sensor signals, and Hall effect sensors. The company also sells DC to DC controller ICs, current sensor ICs and AC to DC converters.
FeRAM and other LSI integrated circuits are designed and manufactured by the Lapis Semiconductor division of Rohm, formerly OKI Semiconductor, a division of Oki Electric.
References
External links
Companies listed on the Tokyo Stock Exchange
Companies listed on the Osaka Exchange
Electronics companies of Japan
Equipment semiconductor companies
Manufacturing companies based in Kyoto
Electronics companies established in 1958
1958 establishments in Japan
Japanese brands | Rohm | [
"Engineering"
] | 386 | [
"Equipment semiconductor companies",
"Semiconductor fabrication equipment"
] |
965,259 | https://en.wikipedia.org/wiki/Cable%20modem%20termination%20system | A cable modem termination system (CMTS, also called a CMTS Edge Router) is a piece of equipment, typically located in a cable company's headend or hubsite, which is used to provide data services, such as cable Internet or Voice over IP, to cable subscribers.
A CMTS provides similar functions to a DSLAM in a digital subscriber line or an optical line termination in a passive optical network.
Connections
In order to provide high speed data services, a cable company will connect its headend to the Internet via very high capacity data links to a network service provider. On the subscriber side of the headend, the CMTS enables communication with subscribers' cable modems. Different CMTSs are capable of serving different cable modem population sizes, ranging from 4,000 cable modems to 150,000 or more, depending in part on traffic; it is recommended, for example, that a single I-CMTS serve on the order of 30,000 subscribers (cable modems). A given headend may have between 1 and 12 CMTSs to service the cable modem population served by that headend or HFC hub.
One way to think of a CMTS is to imagine a router with Ethernet interfaces (connections) on one side and coaxial cable RF interfaces on the other side. The Ethernet side is known as the Network Side Interface or NSI.
A service group is a group of customers that share communication channels and thus bandwidth. A CMTS has separate RF interfaces and connectors for downlink and uplink signals. The RF/coax interfaces carry RF signals to and from coaxial "trunks" connected to subscribers' cable modems, using one pair of connectors per trunk, one for downlink and the other for uplink. In other words, there can be a pair of RF connectors for every service group, although it is possible to configure a network with different numbers of connectors that service a set of service groups, based on the number of downstream and upstream channels the cable modems in every service group use. Every connector has a finite number of channels it can carry, such as 16 channels per downstream connector, and 4 channels per upstream connector, depending on the CMTS. For example, if the cable modems on every service group use 24 channels for downstream, and 2 channels for upstream, then 3 downstream connectors can service the cable modems on two service groups, and be serviced by 1 upstream connector. A service group may serve up to 500 households. A service group has channels, whose bandwidth is shared among all members of the service group. The channels are later regrouped at the cable headend or distribution hub and serviced by CMTSs and other equipment such as Edge QAMs.
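To illustrate the provisioning arithmetic in the example above, the following short Python sketch estimates connector counts; the per-connector channel capacities (16 downstream, 4 upstream) are the hypothetical figures quoted in this section, not the limits of any particular CMTS model.

import math

def connectors_needed(service_groups, ds_channels_per_group, us_channels_per_group,
                      ds_channels_per_connector=16, us_channels_per_connector=4):
    # Total channels needed across all service groups, divided by the
    # (assumed) per-connector capacity, rounded up to whole connectors.
    ds_total = service_groups * ds_channels_per_group
    us_total = service_groups * us_channels_per_group
    return (math.ceil(ds_total / ds_channels_per_connector),
            math.ceil(us_total / us_channels_per_connector))

# Example from the text: two service groups, each using 24 downstream and 2 upstream channels
print(connectors_needed(2, 24, 2))  # -> (3, 1): 3 downstream connectors, 1 upstream connector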
The RF signals from a CMTS are connected via coaxial cable to headend RF management modules for RF splitting and combining, with other equipment such as other CMTSs, so that several CMTSs can service one service group, and then to an "optics platform" or headend platform, which has transmitter and receiver modules that turn the RF signals into light pulses for delivery over fiber optics through an HFC network. Examples of optics platforms are the Arris CH3000 and Cisco Prisma II. At the other end of the network, an optical node converts the light pulses into RF signals again and sends them through a coaxial cable "trunk". The trunk has one or more amplifiers along its length, and on the trunk there are distribution "taps" to which customers' modems are connected via coaxial cable.
In fact, most CMTSs have both Ethernet interfaces (or other more traditional high-speed data interfaces like SONET) as well as RF interfaces. In this way, traffic that is coming from the Internet can be routed (or bridged) through the Ethernet interface, through the CMTS and then onto the RF interfaces that are connected to the cable company's hybrid fiber coax (HFC). The traffic winds its way through the HFC to end up at the cable modem in the subscriber's home. Traffic from a subscriber's home system goes through the cable modem and out to the Internet in the opposite direction.
CMTSs typically carry only IP traffic. Traffic destined for the cable modem from the Internet, known as downstream traffic, is carried in IP packets encapsulated according to the DOCSIS standard. These packets are carried on data streams that are typically modulated onto a TV channel using either the 64-QAM or 256-QAM version of quadrature amplitude modulation.
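As a rough sketch of how the modulation order relates to channel capacity, the raw (pre-overhead) bit rate of a single downstream channel is the symbol rate multiplied by the bits per symbol, i.e. log2 of the QAM order. The symbol rates used below are approximate figures commonly cited for a 6 MHz North American downstream channel and are assumptions for illustration, not values taken from this article.

import math

def raw_channel_rate_mbps(qam_order, symbol_rate_msym_per_s):
    # Bits per symbol is log2 of the constellation size;
    # the raw rate excludes FEC and framing overhead.
    bits_per_symbol = math.log2(qam_order)
    return bits_per_symbol * symbol_rate_msym_per_s

print(raw_channel_rate_mbps(64, 5.057))   # roughly 30 Mbit/s raw for 64-QAM
print(raw_channel_rate_mbps(256, 5.361))  # roughly 43 Mbit/s raw for 256-QAM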
Upstream data (data from cable modems to the headend or Internet) is carried in Ethernet frames encapsulated inside DOCSIS frames modulated with QPSK, 16-QAM, 32-QAM, 64-QAM or 128-QAM using TDMA, ATDMA or S-CDMA frequency sharing mechanisms. This is usually done at the "subband" or "return" portion of the cable TV spectrum (also known as the "T" channels), a much lower part of the frequency spectrum than the downstream signal, usually 5–42 MHz in DOCSIS 2.0 or 5–65 MHz in EuroDOCSIS.
A typical CMTS allows a subscriber's computer to obtain an IP address by forwarding DHCP requests to the relevant servers. This DHCP server returns, for the most part, what looks like a typical response including an assigned IP address for the computer, gateway/router addresses to use, DNS servers, etc.
The CMTS may also implement some basic filtering to protect against unauthorized users and various attacks. Traffic shaping is sometimes performed to prioritize application traffic, perhaps based upon subscribed plan or download usage, and also to provide guaranteed Quality of Service (QoS) for the cable operator's own PacketCable-based VoIP service. However, the function of traffic shaping is more likely done by a cable modem or policy traffic switch. A CMTS may also act as a bridge or router.
A customer's cable modem cannot communicate directly with other modems on the line. In general, cable modem traffic is routed to other cable modems or to the Internet through a series of CMTSs and traditional routers. However, a route could conceivably pass through a single CMTS.
A CCAP (Converged Cable Access Platform) combines CMTS and Edge QAM functionality in a single device so that it can provide both data (internet) with CMTS functionality, and video (TV channels) with Edge QAM functionality. An Edge QAM (Quadrature Amplitude Modulator/Modulation) converts video, sent via IP (Internet Protocol) or otherwise, into a QAM signal for delivery over a cable network. Edge QAMs are normally standalone devices placed at the "edge" of a network. They can also be connected to a CMTS core to make up an M-CMTS system, which is more scalable. A CMTS core is normally a conventional or I-CMTS that supports operation as a CMTS core in an M-CMTS system.
Architectures
A CMTS can be broken down into several different architectures: Integrated CMTS (I-CMTS), Modular CMTS (M-CMTS), Virtual CMTS (vCMTS) and Remote CMTS. An I-CMTS incorporates into a single unit all components necessary for its operation. There are both pros and cons to each type of architecture.
Modular CMTS (M-CMTS)
In an M-CMTS solution the architecture of an I-CMTS is broken up into two components. The first part is the physical downstream component (PHY), which is known as the Edge QAM (EQAM). The second part is the IP networking and DOCSIS MAC component, which is referred to as the M-CMTS Core. There are also several new protocols and components introduced with this type of architecture. One is the DOCSIS Timing Interface, which provides a reference frequency between the EQAM and M-CMTS Core via a DTI server. The second is the Downstream External PHY Interface (DEPI). The DEPI protocol controls the delivery of DOCSIS frames from the M-CMTS Core to the EQAM devices. Some of the challenges of an M-CMTS platform are increased complexity in RF combining and an increase in the number of failure points. One of the benefits of an M-CMTS architecture is that it is extremely scalable to larger numbers of downstream channels.
Virtual CMTS
Virtual CCAPs (vCCAPs) or virtual CMTSs (vCMTSs) are implemented with specialized software on commercial off-the-shelf x86-based servers, and can be used to increase service capacity without purchasing new CMTS/CCAP chassis, or to add features to the CMTS/CCAP more quickly.
Remote CMTS
Remote CMTS/Remote CCAP moves all CMTS/CCAP functionality to the outside plant, in stark contrast to conventional CMTSs or CCAPs which are installed at a service provider location.
Manufacturers
Current
ARRIS Group
C9 Networks
Catapult Technologies
Coaxial Networks Inc.
Casa Systems
Cisco Systems
Chongqing Jinghong
Damery sa
Gainspeed (Nokia company)
WISI Communications GmbH
Kathrein
Suma Scientific
Huawei Technologies
Harmonic Inc.
Teleste
Historical
3COM (Acquired by HP)
Broadband Access Systems (Acquired by ADC Telecommunications)
ADC Telecommunications (CMTS business acquired by BigBand Networks)
BigBand Networks (Exited CMTS business, remaining business later acquired by ARRIS)
Cadant (Acquired by ARRIS)
Com21 (CMTS business acquired by ARRIS)
RiverDelta (Acquired by Motorola)
Terayon (Acquired by Motorola)
Pacific Broadband Communications (Acquired by Juniper Networks)
Juniper Networks (Exited CMTS business)
LanCity (Acquired by BayNetworks)
Motorola (Acquired by ARRIS)
Daphne sa (Acquired by Damery sa)
Scientific Atlanta (Acquired by Cisco)
See also
DOCSIS
References
External links
Digital cable
Internet access | Cable modem termination system | [
"Technology"
] | 2,080 | [
"Internet access",
"IT infrastructure"
] |
965,308 | https://en.wikipedia.org/wiki/SARS%20conspiracy%20theory | The SARS conspiracy theory began to emerge during the severe acute respiratory syndrome (SARS) outbreak in China in the spring of 2003, when Sergei Kolesnikov, a Russian scientist and a member of the Russian Academy of Medical Sciences, first publicized his claim that the SARS coronavirus is a synthesis of measles and mumps. According to Kolesnikov, this combination cannot be formed in the natural world and thus the SARS virus must have been produced under laboratory conditions. Another Russian scientist, Nikolai Filatov, head of Moscow's epidemiological services, had earlier commented that the SARS virus was probably man-made.
However, independent labs concluded these claims to be premature since the SARS virus is a coronavirus, whereas measles and mumps are paramyxoviruses. The primary differences between a coronavirus and a paramyxovirus are in their structures and method of infection, thus making it implausible for a coronavirus to have been created from two paramyxoviruses.
The widespread reporting of claims by Kolesnokov and Filatov caused controversy in many Chinese internet discussion boards and chat rooms. Many Chinese believed that the SARS virus could be a biological weapon manufactured by the United States, which perceived China as a potential threat.
The failure to find the source of the SARS virus further convinced these people and many more that SARS was artificially synthesised and spread by some individuals and even governments. Circumstantial evidence suggests that the SARS virus crossed over to humans from Asian palm civets ("civet cats"), a type of animal that is often killed and eaten in Guangdong, where SARS was first discovered.
Supporters of the conspiracy theory suggest that SARS caused the most serious harm in mainland China, Hong Kong, Taiwan and Singapore, regions where most Chinese reside, while the United States, Europe and Japan were not affected as much. However, the highest mortality from SARS outside of China occurred in Canada where 43 died. Conspiracists further take as evidence the idea that, although SARS has an average mortality rate of around 10% around the world, no one died in the United States from SARS. However, there were only 8 confirmed cases out of 27 probable cases in the US (10% of 8 people is less than 1 person). Regarding reasons why SARS patients in the United States experienced a relatively mild illness, the U.S. Centers for Disease Control has explained that anybody with fever and a respiratory symptom who had traveled to an affected area was included as a SARS patient in the U.S., even though many of these were found to have had other respiratory illnesses.
Tong Zeng, an activist with no medical background, authored the book The Last Defense Line: Concerns About the Loss of Chinese Genes, published in 2003. In the book, Zeng suggested researchers from the United States may have created SARS as an anti-Chinese bioweapon after taking blood samples in China for a longevity study in the 1990s. The book's hypothesis was a front-page report in the Guangzhou newspaper Southern Metropolis Daily.
Coronaviruses similar to SARS have been found in bats in China, suggesting they may be their natural reservoir.
See also
Misinformation related to the COVID-19 pandemic
References
External links
ParaPundit: Conspiracy theories in China
San Francisco Chroncle's report
SARS Crisis: Don't Rule Out Linkages To China's Biowarfare Article by Richard D. Fisher Jr. for The Jamestown Foundation.
People's Daily's report on Tong Zeng's book (simplified Chinese)
Singapore's Lianhe Zaobao reports the conspiracy theory and Hou's assertion
conspiracy theory
Health-related conspiracy theories
Anti-American sentiment in China
Pseudohistory
Biological warfare
China–United States relations
Conspiracy theories in China
Pseudoscience
Conspiracy Theory | SARS conspiracy theory | [
"Technology",
"Biology"
] | 805 | [
"Biological warfare",
"Health-related conspiracy theories",
"Science and technology-related conspiracy theories"
] |
965,319 | https://en.wikipedia.org/wiki/Hendrik%20Poinar | Hendrik Nicholas Poinar (born May 31, 1969 in D.C, United States) is an evolutionary biologist specializing in ancient DNA. Poinar first became known for extracting DNA sequences from ground sloth coprolites. He is currently director of the Ancient DNA Centre at McMaster University in Hamilton, Ontario.
Education and academic career
The son of noted entomologist George Poinar Jr. and Eva Hecht-Poinar, Poinar received his B.S. and M.S. degrees from California Polytechnic University, San Luis Obispo in 1992 and 1999 respectively before earning a Ph.D. in 1999 from the Ludwig Maximilian University of Munich, after which he was a postdoctoral researcher from 2000 to 2003 at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. In 2003 he was hired as an assistant professor in the anthropology department at McMaster University in Canada.
In a joint 2000 paper in Science, Poinar and Dr. Alan Cooper argued that much existing work in human ancient DNA has not been sufficiently rigorous to prevent DNA contamination from modern human sources, and that many reported results for ancient human DNA may therefore be suspect.
In 2003, Poinar and others from the Max Planck Institute published genetic sequences isolated from coprolites of the extinct Shasta giant ground sloth, with an estimated age of 10500 years using radiocarbon dates. These were the first genetic sequences retrieved from any extinct ground sloth.
In September 2008, Poinar's laboratory published results showing that after a long period of separation in the mammoth populations of Siberia and North America, the Siberian mammoth population had been completely replaced by mammoths of North American origin.
In 2014, Poinar and colleagues published the first genomic data from victims of the Plague of Justinian in Bavaria, demonstrating that this plague was caused by a strain of Yersinia pestis now extinct.
References
External links
TED talk: Bring Back the Woolly Mammoth
1969 births
Living people
American evolutionary biologists
21st-century American biologists
Ludwig Maximilian University of Munich alumni
Feces
Ancient DNA (human) | Hendrik Poinar | [
"Biology"
] | 424 | [
"Excretion",
"Feces",
"Animal waste products"
] |
965,323 | https://en.wikipedia.org/wiki/Antimicrobial | An antimicrobial is an agent that kills microorganisms (microbicide) or stops their growth (bacteriostatic agent). Antimicrobial medicines can be grouped according to the microorganisms they act primarily against. For example, antibiotics are used against bacteria, and antifungals are used against fungi. They can also be classified according to their function. The use of antimicrobial medicines to treat infection is known as antimicrobial chemotherapy, while the use of antimicrobial drugs to prevent infection is known as antimicrobial prophylaxis.
The main classes of antimicrobial agents are disinfectants (non-selective agents, such as bleach), which kill a wide range of microbes on non-living surfaces to prevent the spread of illness, antiseptics (which are applied to living tissue and help reduce infection during surgery), and antibiotics (which destroy microorganisms within the body). The term antibiotic originally described only those formulations derived from living microorganisms but is now also applied to synthetic agents, such as sulfonamides or fluoroquinolones. Though the term used to be restricted to antibacterials, and is often used as a synonym for them by medical professionals and in medical literature, its context has broadened to include all antimicrobials. Antibacterial agents can be further subdivided into bactericidal agents, which kill bacteria, and bacteriostatic agents, which slow down or stall bacterial growth. Further advancements in antimicrobial technologies have resulted in solutions that can go beyond simply inhibiting microbial growth; for example, certain types of porous media have been developed to kill microbes on contact. The misuse and overuse of antimicrobials in humans, animals and plants are the main drivers in the development of drug-resistant pathogens. It is estimated that bacterial antimicrobial resistance (AMR) was directly responsible for 1.27 million global deaths in 2019 and contributed to 4.95 million deaths.
History
Antimicrobial use has been common practice for at least 2000 years. Ancient Egyptians and ancient Greeks used specific molds and plant extracts to treat infection.
In the 19th century, microbiologists such as Louis Pasteur and Jules Francois Joubert observed antagonism between some bacteria and discussed the merits of controlling these interactions in medicine. Louis Pasteur's work in fermentation and spontaneous generation led to the distinction between anaerobic and aerobic bacteria. The information garnered by Pasteur led Joseph Lister to incorporate antiseptic methods, such as sterilizing surgical tools and debriding wounds, into surgical procedures. The implementation of these antiseptic techniques drastically reduced the number of infections and subsequent deaths associated with surgical procedures. Louis Pasteur's work in microbiology also led to the development of many vaccines for life-threatening diseases such as anthrax and rabies. On September 3, 1928, Alexander Fleming returned from a vacation and discovered that a Petri dish filled with Staphylococcus was separated into colonies due to the antimicrobial fungus Penicillium rubens. Fleming and his associates struggled to isolate the antimicrobial but referenced its therapeutic potential in 1929 in the British Journal of Experimental Pathology. In 1942, Howard Florey, Ernst Chain, and Edward Abraham built on Fleming's work to purify and extract penicillin for medicinal use; Fleming, Florey, and Chain shared the 1945 Nobel Prize in Physiology or Medicine for this work.
Chemical
Antibacterials
Antibacterials are used to treat bacterial infections. Antibiotics are classified generally as beta-lactams, macrolides, quinolones, tetracyclines or aminoglycosides. Their classification within these categories depends on their antimicrobial spectra, pharmacodynamics and chemical composition. Prolonged use of certain antibacterials can decrease the number of enteric bacteria, which may have a negative impact on health. Consumption of probiotics and healthy eating may help to replace destroyed gut flora. Stool transplants may be considered however for patients who are having difficulty recovering from prolonged antibiotic treatment, such as recurrent Clostridioides difficile infections.
The discovery, development and use of antibacterials during the 20th century have reduced mortality from bacterial infections. The antibiotic era began with the therapeutic application of sulfonamide drugs in 1936, followed by a "golden" period of discovery from about 1945 to 1970, when a number of structurally diverse and highly effective agents were discovered and developed. Since 1980, the introduction of new antimicrobial agents for clinical use has declined, in part because of the enormous expense of developing and testing new drugs. In parallel, there has been an alarming increase in antimicrobial resistance of bacteria, fungi, parasites and some viruses to multiple existing agents.
Antibacterials are among the most commonly used and misused drugs by physicians, for example, in viral respiratory tract infections. As a consequence of widespread and injudicious use of antibacterials, there has been an accelerated emergence of antibiotic-resistant pathogens, resulting in a serious threat to global public health. The resistance problem demands that a renewed effort be made to seek antibacterial agents effective against pathogenic bacteria resistant to current antibacterials. Possible strategies towards this objective include increased sampling from diverse environments and application of metagenomics to identify bioactive compounds produced by currently unknown and uncultured microorganisms as well as the development of small-molecule libraries customized for bacterial targets.
Antifungals
Antifungals are used to kill or prevent further growth of fungi. In medicine, they are used as a treatment for infections such as athlete's foot, ringworm and thrush and work by exploiting differences between mammalian and fungal cells. Unlike bacteria, both fungi and humans are eukaryotes. Thus, fungal and human cells are similar at the molecular level, making it more difficult to find a target for an antifungal drug to attack that does not also exist in the host organism. Consequently, there are often side effects to some of these drugs. Some of these side effects can be life-threatening if the drug is not used properly.
As well as their use in medicine, antifungals are frequently sought after to control indoor mold in damp or wet home materials. Sodium bicarbonate (baking soda) blasted on to surfaces acts as an antifungal. Another antifungal solution applied after or without blasting by soda is a mix of hydrogen peroxide and a thin surface coating that neutralizes mold and encapsulates the surface to prevent spore release. Some paints are also manufactured with an added antifungal agent for use in high humidity areas such as bathrooms or kitchens. Other antifungal surface treatments typically contain variants of metals known to suppress mold growth e.g. pigments or solutions containing copper, silver or zinc. These solutions are not usually available to the general public because of their toxicity.
Antivirals
Antiviral drugs are a class of medication used specifically for treating viral infections. Like antibiotics, specific antivirals are used for specific viruses. They should be distinguished from viricides, which actively deactivate virus particles outside the body.
Many antiviral drugs are designed to treat infections by retroviruses, including HIV. Important antiretroviral drugs include the class of protease inhibitors. Herpes viruses, best known for causing cold sores and genital herpes, are usually treated with the nucleoside analogue acyclovir. Viral hepatitis is caused by five unrelated hepatotropic viruses (A-E) and may be treated with antiviral drugs depending on the type of infection. Some influenza A and B viruses have become resistant to neuraminidase inhibitors such as oseltamivir, and the search for new substances continues.
Antiparasitics
Antiparasitics are a class of medications indicated for the treatment of infectious diseases such as leishmaniasis, malaria and Chagas disease, which are caused by parasites such as nematodes, cestodes, trematodes and infectious protozoa. Antiparasitic medications include metronidazole, iodoquinol and albendazole. Like all therapeutic antimicrobials, they must kill the infecting organism without serious damage to the host.
Broad-spectrum therapeutics
Broad-spectrum therapeutics are active against multiple classes of pathogens. Such therapeutics have been suggested as potential emergency treatments for pandemics.
Non-pharmaceutical
A wide range of chemical and natural compounds are used as antimicrobials. Organic acids and their salts are used widely in food products, e.g. lactic acid, citric acid, acetic acid, either as ingredients or as disinfectants. For example, beef carcasses often are sprayed with acids, and then rinsed or steamed, to reduce the prevalence of Escherichia coli.
Heavy metal cations such as Hg2+ and Pb2+ have antimicrobial activities, but can be toxic. In recent years, the antimicrobial activity of coordination compounds has been investigated.
Traditional herbalists used plants to treat infectious disease. Many of these plants have been investigated scientifically for antimicrobial activity, and some plant products have been shown to inhibit the growth of pathogenic microorganisms. A number of these agents appear to have structures and modes of action that are distinct from those of the antibiotics in current use, suggesting that cross-resistance with agents already in use may be minimal.
Copper
Copper-alloy surfaces have natural intrinsic antimicrobial properties and can kill microorganisms such as E. coli and Staphylococcus. The United States Environmental Protection Agency approved the registration of antimicrobial copper alloy surfaces for use in addition to regular cleaning and disinfection to control infections. Antimicrobial copper alloys are being installed in some healthcare facilities and subway transit systems as a public hygienic measure. Copper nanoparticles are attracting interest for their intrinsic antimicrobial behaviour.
Essential oils
Many essential oils included in herbal pharmacopoeias are claimed to possess antimicrobial activity, with the oils of bay, cinnamon, clove and thyme reported to be the most potent in studies with foodborne bacterial pathogens. Coconut oil is also known for its antimicrobial properties. Active constituents include terpenoids and secondary metabolites. Despite their prevalent use in alternative medicine, essential oils have seen limited use in mainstream medicine. While 25 to 50% of pharmaceutical compounds are plant-derived, none are used as antimicrobials, though there has been increased research in this direction. Barriers to increased usage in mainstream medicine include poor regulatory oversight and quality control, mislabeled or misidentified products, and limited modes of delivery.
Antimicrobial pesticides
According to the U.S. Environmental Protection Agency (EPA), and defined by the Federal Insecticide, Fungicide, and Rodenticide Act, antimicrobial pesticides are used to control growth of microbes through disinfection, sanitation, or reduction of development and to protect inanimate objects, industrial processes or systems, surfaces, water, or other chemical substances from contamination, fouling, or deterioration caused by bacteria, viruses, fungi, protozoa, algae, or slime. The EPA monitors products, such as disinfectants/sanitizers for use in hospitals or homes, to ascertain efficacy. Products that are meant for public health are therefore under this monitoring system, including products used for drinking water, swimming pools, food sanitation, and other environmental surfaces. These pesticide products are registered under the premise that, when used properly, they do not demonstrate unreasonable side effects to humans or the environment. Even once certain products are on the market, the EPA continues to monitor and evaluate them to make sure they maintain efficacy in protecting public health.
Public health products regulated by the EPA are divided into three categories:
Disinfectants: Destroy or inactivate microorganisms (bacteria, fungi, viruses) but may not act as sporicides (as those are the most difficult form to destroy). According to efficacy data, the EPA will classify a disinfectant as limited, general/broad spectrum, or as a hospital disinfectant.
Sanitizers: Reduce the number of microorganisms, but may not kill or eliminate all of them.
Sterilizers (Sporicides): Eliminate all bacteria, fungi, spores, and viruses.
Antimicrobial pesticide safety
Antimicrobial pesticides have the potential to be a major factor in drug resistance. Organizations such as the World Health Organization call for significant reduction in their use globally to combat this. According to a 2010 Centers for Disease Control and Prevention report, health-care workers can take steps to improve their safety measures against antimicrobial pesticide exposure. Workers are advised to minimize exposure to these agents by wearing personal protective equipment such as gloves and safety glasses. Additionally, it is important to follow the handling instructions properly, as that is how the EPA has deemed them as safe to use. Employees should be educated about the health hazards and encouraged to seek medical care if exposure occurs.
Ozone
Ozone can kill microorganisms in air, water and process equipment and has been used in settings such as kitchen exhaust ventilation, garbage rooms, grease traps, biogas plants, wastewater treatment plants, textile production, breweries, dairies, food and hygiene production, pharmaceutical industries, bottling plants, zoos, municipal drinking-water systems, swimming pools and spas, and in the laundering of clothes and treatment of in–house mold and odors.
Antimicrobial scrubs
Antimicrobial scrubs can reduce the accumulation of odors and stains on scrubs, which in turn improves their longevity. These scrubs also come in a variety of colors and styles. As antimicrobial technology develops at a rapid pace, these scrubs are readily available, with more advanced versions hitting the market every year. Bacteria carried on conventional scrubs can be spread to office desks, break rooms, computers, and other shared surfaces. This can lead to outbreaks of infections such as methicillin-resistant Staphylococcus aureus, treatments for which cost the healthcare industry $20 billion a year.
Halogens
Elements such as chlorine, iodine, fluorine, and bromine are nonmetallic in nature and constitute the halogen family. Each of these halogens has a different antimicrobial effect that is influenced by various factors such as pH, temperature, contact time, and type of microorganism. Chlorine and iodine are the two most commonly used antimicrobials. Chlorine is extensively used as a disinfectant in water treatment plants and in the drug and food industries. In wastewater treatment plants, chlorine is widely used as a disinfectant. It oxidizes soluble contaminants and kills bacteria and viruses. It is also highly effective against bacterial spores. The mode of action is by breaking the bonds present in these microorganisms. When a bacterial enzyme comes in contact with a compound containing chlorine, the hydrogen atom in that molecule gets displaced and is replaced with chlorine. This in turn changes the enzyme function, which ultimately leads to the death of the bacterium. Iodine is most commonly used for sterilization and wound cleaning. The three major antimicrobial compounds containing iodine are alcohol-iodine solution, an aqueous solution of iodine, and iodophors. Iodophors are more bactericidal and are used as antiseptics as they are less irritating when applied to the skin. Bacterial spores, on the other hand, cannot be killed by iodine, but they can be inhibited by iodophors. The growth of microorganisms is inhibited when iodine penetrates into the cells and oxidizes proteins, genetic material, and fatty acids. Bromine is also an effective antimicrobial that is used in water treatment plants. When mixed with chlorine, it is highly effective against bacteria such as S. faecalis.
Alcohols
Alcohols are commonly used as disinfectants and antiseptics. Alcohols kill vegetative bacteria, most viruses and fungi. Ethyl alcohol, n-propanol and isopropyl alcohol are the most commonly used antimicrobial agents. Methanol is also a disinfecting agent but is not generally used as it is highly poisonous. Escherichia coli, Salmonella, and Staphylococcus aureus are a few bacteria whose growth can be inhibited by alcohols. Alcohols have high efficacy against enveloped viruses; solutions of 60–70% ethyl alcohol or 70% isopropyl alcohol are highly effective antimicrobial agents. In the presence of water, 70% alcohol causes coagulation of the proteins, thus inhibiting microbial growth. Alcohols are not very effective against spores. The mode of action is by denaturing the proteins. Alcohols interfere with the hydrogen bonds present in the protein structure. Alcohols also dissolve the lipid membranes that are present in microorganisms. Disruption of the cell membrane is another property of alcohols that aids in cell death. Alcohols are cheap and effective antimicrobials. They are widely used in the pharmaceutical industry. Alcohols are commonly used in hand sanitizers, antiseptics, and disinfectants.
Phenol and Phenolic compounds
Phenol, also known as carbolic acid, was one of the first chemicals used as an antimicrobial agent. It has high antiseptic properties. It is bacteriostatic at concentrations of 0.1%–1% and is bactericidal/fungicidal at 1%–2%. A 5% solution kills anthrax spores in 48 hours. Phenols are most commonly used in oral mouth washes and household cleaning agents. They are active against a wide range of bacteria, fungi and viruses. Today phenol derivatives such as thymol and cresol are used because they are less toxic compared to phenol. These phenolic compounds have a benzene ring along with the –OH group incorporated into their structures. They have a higher antimicrobial activity. These compounds inhibit microbial growth by precipitating proteins, which leads to their denaturation, and by penetrating into the cell membrane of microorganisms and disrupting it. Phenolic compounds can also deactivate enzymes and damage the amino acids in microbial cells. Phenolics such as Fentichlor, an antibacterial and antifungal agent, are used as an oral treatment for fungal infections. Triclosan is highly effective against both gram-positive and gram-negative bacteria. Hexachlorophene (a bisphenol) is used as a surfactant. It is widely used in soaps, handwashes, and skin products because of its antiseptic properties. It is also used as a sterilizing agent. Cresol is an effective antimicrobial and is widely used in mouthwashes and cough drops. Phenolics have high antimicrobial activity against bacteria such as Staphylococcus epidermidis and Pseudomonas aeruginosa. 2-Phenylphenol-water solutions are used in immersion treatments of fruit for packing. (It is not used on the packing materials however.) Ihloff and Kalitzki (1961) found that a small but measurable amount remains in the skin of fruits processed in this manner.
Aldehydes
Aldehydes are highly effective against bacteria, fungi, and viruses. Aldehydes inhibit bacterial growth by disrupting the outer membrane. They are used in the disinfection and sterilization of surgical instruments. As they are highly toxic, they are not used in antiseptics. Currently, only three aldehyde compounds are of widespread practical use as disinfectant biocides, namely glutaraldehyde, formaldehyde, and ortho-phthalaldehyde (OPA), despite the demonstration that many other aldehydes possess good antimicrobial activity. However, because of the long contact times they require, other disinfectants are commonly preferred.
Physical
Heat
Microorganisms have a minimum temperature, an optimum, and a maximum temperature for growth. Both high and low temperatures are used as physical agents of control. Different organisms show different degrees of resistance or susceptibility to heat: some structures, such as bacterial endospores, are more resistant, while vegetative cells are less resistant and are easily killed at lower temperatures. Another method that involves the use of heat to kill microorganisms is fractional sterilization. This process involves exposure to a temperature of 100 degrees Celsius for an hour per day for several days. Fractional sterilization is also called tyndallization. Bacterial endospores can be killed using this method. Both dry and moist heat are effective in eliminating microbial life. For example, jars used to store preserves such as jam can be sterilized by heating them in a conventional oven. Heat is also used in pasteurization, a method for slowing the spoilage of foods such as milk, cheese, juices, wines and vinegar. Such products are heated to a certain temperature for a set period of time, which greatly reduces the number of harmful microorganisms. Low temperature is also used to inhibit microbial activity by slowing down microbial metabolism.
Radiation
Foods are often irradiated to kill harmful pathogens. There are two types of radiation that are used to inhibit the growth of microorganisms – ionizing and non-ionizing. Common sources of radiation used in food sterilization include cobalt-60 (a gamma emitter) and electron beams. Ultraviolet light is also used to disinfect drinking water, both in small-scale personal-use systems and larger-scale community water purification systems.
Desiccation
Desiccation is also known as dehydration. It is the state of extreme dryness or the process of extreme drying. Some microorganisms like bacteria, yeasts and molds require water for their growth. Desiccation dries up the water content thus inhibiting microbial growth. On the availability of water, the bacteria resume their growth, thus desiccation does not completely inhibit bacterial growth. The instrument used to carry out this process is called a desiccator. This process is widely used in the food industry and is an efficient method for food preservation. Desiccation is also largely used in the pharmaceutical industry to store vaccines and other products.
Antimicrobial surfaces
Antimicrobial surfaces are designed either to inhibit the growth of microorganisms or to damage them by chemical (copper toxicity) or physical processes (micro/nano-pillars that rupture cell walls). These surfaces are especially important for the healthcare industry. Designing effective antimicrobial surfaces demands an in-depth understanding of the initial microbe-surface adhesion mechanisms. Molecular dynamics simulation and time-lapse imaging are typically used to investigate these mechanisms.
Osmotic pressure
Osmotic pressure is the pressure that must be applied to a solution to prevent the net flow of solvent into it across a semipermeable membrane. When the concentration of dissolved solute is higher inside the cell than outside, the cell is said to be in a hypotonic environment and water will flow into the cell. When bacteria are placed in a hypertonic solution, they undergo plasmolysis (cell shrinkage); in a hypotonic solution, they take up water and become turgid (plasmoptysis). Both plasmolysis and plasmoptysis can kill bacteria because of the resulting change in osmotic pressure.
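For a rough sense of the magnitudes involved, the osmotic pressure of a dilute solution can be estimated with the van 't Hoff relation. This is a standard textbook approximation, not a value given in the article, and the 1 mol/L NaCl figures below are an illustrative assumption:

```latex
% van 't Hoff relation for dilute solutions (illustrative; not from the article text)
\Pi = i\,M\,R\,T,\qquad
\Pi \approx 2 \times (1\ \mathrm{mol\,L^{-1}}) \times (0.08206\ \mathrm{L\,atm\,mol^{-1}\,K^{-1}}) \times (298\ \mathrm{K}) \approx 49\ \mathrm{atm}
```

Pressures of this order across a bacterial membrane are what drive the plasmolysis and turgor effects described above.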
Antimicrobial resistance
Antimicrobial resistance
The misuse and overuse of antimicrobials in humans, animals and plants are the main drivers in the development of drug-resistant pathogens. It is estimated that bacterial antimicrobial resistance (AMR) was directly responsible for 1.27 million global deaths in 2019 and contributed to 4.95 million deaths.
See also
Biocide
Antiviral drug
References
External links
BURDEN of Resistance and Disease in European Nations – An EU-Project to estimate the financial burden of antibiotic resistance in European Hospitals
Cochrane Wounds list of antimicrobials (PDF)
https://courses.lumenlearning.com/microbiology/chapter/using-physical-methods-to-control-microorganis
National Pesticide Information Center
Overview of the use of Antimicrobials in plastic applications
The Antimicrobial Index – A continuously updated list of antimicrobial agents found in scientific literature (includes plant extracts and peptides)
Biocides | Antimicrobial | [
"Biology",
"Environmental_science"
] | 5,212 | [
"Biocides",
"Antimicrobials",
"Toxicology"
] |
965,348 | https://en.wikipedia.org/wiki/Functional%20calculus | In mathematics, a functional calculus is a theory allowing one to apply mathematical functions to mathematical operators. It is now a branch (more accurately, several related areas) of the field of functional analysis, connected with spectral theory. (Historically, the term was also used synonymously with calculus of variations; this usage is obsolete, except for functional derivative. Sometimes it is used in relation to types of functional equations, or in logic for systems of predicate calculus.)
If f is a function, say a numerical function of a real number, and T is an operator, there is no particular reason why the expression f(T) should make sense. If it does, then we are no longer using f on its original function domain. In the tradition of operational calculus, algebraic expressions in operators are handled irrespective of their meaning. This passes nearly unnoticed if we talk about 'squaring a matrix', though, which is the case of f(x) = x² and T an n × n matrix. The idea of a functional calculus is to create a principled approach to this kind of overloading of the notation.
The most immediate case is to apply polynomial functions to a square matrix, extending what has just been discussed. In the finite-dimensional case, the polynomial functional calculus yields quite a bit of information about the operator. For example, consider the family of polynomials which annihilates an operator T. This family is an ideal in the ring of polynomials. Furthermore, it is a nontrivial ideal: let n be the finite dimension of the algebra of matrices; then the set {1, T, T², …, Tⁿ} is linearly dependent, so Σᵢ αᵢTⁱ = 0 for some scalars αᵢ, not all equal to 0. This implies that the polynomial Σᵢ αᵢxⁱ lies in the ideal. Since the ring of polynomials is a principal ideal domain, this ideal is generated by some polynomial m. Multiplying by a unit if necessary, we can choose m to be monic. When this is done, the polynomial m is precisely the minimal polynomial of T. This polynomial gives deep information about T. For instance, a scalar α is an eigenvalue of T if and only if α is a root of m. Also, m can sometimes be used to calculate the exponential of T efficiently.
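A small worked example (the matrix below is an illustrative choice, not taken from the article) shows both claims at once: the eigenvalues are exactly the roots of the minimal polynomial, and the minimal polynomial gives the exponential without summing the power series.

```latex
T = \begin{pmatrix} 1 & 1 \\ 0 & 0 \end{pmatrix},\qquad
T^{2} = T \;\Longrightarrow\; m(x) = x^{2} - x = x(x-1),
% so the eigenvalues of T are the roots 0 and 1 of m, and, since T^{k} = T for k \ge 1,
e^{T} = I + \sum_{k \ge 1} \frac{T^{k}}{k!} = I + (e - 1)\,T .
```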
The polynomial calculus is not as informative in the infinite-dimensional case. Consider the unilateral shift with the polynomial calculus; the ideal defined above is now trivial. Thus one is interested in functional calculi more general than polynomials. The subject is closely linked to spectral theory, since for a diagonal matrix or multiplication operator, it is rather clear what the definitions should be.
See also
References
External links | Functional calculus | [
"Mathematics"
] | 504 | [
"Mathematical objects",
"Functions and mappings",
"Mathematical relations",
"Functional calculus"
] |
965,373 | https://en.wikipedia.org/wiki/Silver%20bromide | Silver bromide (AgBr) is a soft, pale-yellow, water-insoluble salt well known (along with other silver halides) for its unusual sensitivity to light. This property has allowed silver halides to become the basis of modern photographic materials. AgBr is widely used in photographic films and is believed by some to have been used for making the Shroud of Turin. The salt can be found naturally as the mineral bromargyrite (bromyrite).
Preparation
Although the compound can be found in mineral form, AgBr is typically prepared by the reaction of silver nitrate with an alkali bromide, typically potassium bromide:
AgNO3(aq) + KBr(aq) → AgBr(s) + KNO3(aq)
Although less convenient, the salt can also be prepared directly from its elements.
Modern preparation of a simple, light-sensitive surface involves forming an emulsion of silver halide crystals in gelatine, which is then coated onto a film or other support. The crystals are formed by precipitation in a controlled environment to produce small, uniform crystals (typically < 1 μm in diameter and containing ~10¹² Ag atoms) called grains.
Reactions
Silver bromide reacts readily with liquid ammonia to generate a variety of ammine complexes, such as the diamminesilver(I) ion:
AgBr + 2 NH3 → [Ag(NH3)2]+ + Br−
Silver bromide reacts with triphenylphosphine to give a tris(triphenylphosphine) product:
AgBr + 3 PPh3 → AgBr(PPh3)3
Physical properties
Crystal structure
AgF, AgCl, and AgBr all have a face-centered cubic (fcc) rock-salt (NaCl) lattice structure, with lattice parameters that increase with the size of the halide ion.
The larger halide ions are arranged in a cubic close-packing, while the smaller silver ions fill the octahedral gaps between them, giving a 6-coordinate structure where a silver ion Ag+ is surrounded by 6 Br− ions, and vice versa. The coordination geometry for AgBr in the NaCl structure is unexpected for Ag(I) which typically forms linear, trigonal (3-coordinated Ag) or tetrahedral (4-coordinated Ag) complexes.
Unlike the other silver halides, iodargyrite (AgI) contains a hexagonal zincite lattice structure.
Solubility
The silver halides have a wide range of solubilities. The solubility of AgF is about 6 × 10⁷ times that of AgI. These differences are attributed to the relative solvation enthalpies of the halide ions; the enthalpy of solvation of fluoride is anomalously large.
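As an illustrative estimate of how sparingly soluble AgBr itself is, the equilibrium solubility can be back-calculated from the solubility product; the Ksp value used below is an assumed representative literature figure, not one given in the article:

```latex
% Illustrative calculation; the K_sp value is an assumed literature figure.
\mathrm{AgBr(s)} \rightleftharpoons \mathrm{Ag^{+}(aq)} + \mathrm{Br^{-}(aq)},\qquad
K_{\mathrm{sp}} = [\mathrm{Ag^{+}}][\mathrm{Br^{-}}] \approx 5\times10^{-13}
\;\Longrightarrow\;
s = \sqrt{K_{\mathrm{sp}}} \approx 7\times10^{-7}\ \mathrm{mol\,L^{-1}} .
```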
Photosensitivity
Although photographic processes have been in development since the mid-1800s, there were no suitable theoretical explanations until 1938 with the publication of a paper by R.W. Gurney and N.F. Mott. This paper triggered a large amount of research in fields of solid-state chemistry and physics, as well more specifically in silver halide photosensitivity phenomena.
Further research into this mechanism revealed that the photographic properties of silver halides (in particular AgBr) were a result of deviations from an ideal crystal structure. Factors such as crystal growth, impurities, and surface defects all affect concentrations of point ionic defects and electronic traps, which affect the sensitivity to light and allow for the formation of a latent image.
Frenkel defects and quadrupolar deformation
The major defect in silver halides is the Frenkel defect, where silver ions are located interstitially (Agi+) in high concentration with their corresponding negatively charged silver-ion vacancies (Agv−). What is unique about AgBr Frenkel pairs is that the interstitial Agi+ are exceptionally mobile, and that its concentration in the layer below the grain surface (called the space-charge layer) far exceeds that of the intrinsic bulk. The formation energy of the Frenkel pair is low at 1.16 eV, and the migration activation energy is unusually low at 0.05 eV (compare to NaCl: 2.18 eV for the formation of a Schottky pair and 0.75 eV for cationic migration). These low energies result in large defect concentrations, which can reach near 1% near the melting point.
The low activation energy in silver bromide can be attributed to the silver ion's high quadrupolar polarizability; that is, it can easily deform from a sphere into an ellipsoid. This property, a result of the d10 electronic configuration of the silver ion, facilitates migration in both the silver ion and in silver-ion vacancies, thus giving the unusually low migration energy (for Agv−: 0.29–0.33 eV, compared to 0.65 eV for NaCl).
Studies have demonstrated that the defect concentrations are strongly affected (up to several powers of 10) by crystal size. Most defects, such as interstitial silver ion concentration and surface kinks, are inversely proportional to crystal size, although vacancy defects are directly proportional. This phenomenon is attributed to changes in the surface chemistry equilibrium, and thus affects each defect concentration differently.
Impurity concentrations can be controlled by crystal growth or direct addition of impurities to the crystal solutions. Although impurities in the silver bromide lattice are necessary to encourage Frenkel defect formation, studies by Hamilton have shown that above a specific concentration of impurities, the numbers of defects of interstitial silver ions and positive kinks reduce sharply by several orders of magnitude. After this point, only silver-ion vacancy defects, which actually increase by several orders of magnitude, are prominent.
Electron traps and hole traps
When light is incident on the silver halide grain surface, a photoelectron is generated when a halide loses its electron to the conduction band:
X− + hν → X + e−
After the electron is released, it will combine with an interstitial Agi+ to create a silver metal atom Agi0:
e− + Agi+ → Agi0
Through the defects in the crystal, the electron is able to reduce its energy and become trapped in the atom. The extent of grain boundaries and defects in the crystal affect the lifetime of the photoelectron, where crystals with a large concentration of defects will trap an electron much faster than a purer crystal.
When a photoelectron is mobilized, a photohole h• is also formed, which also needs to be neutralized. The lifetime of a photohole, however, does not correlate with that of a photoelectron. This detail suggests a different trapping mechanism; Malinowski suggests that the hole traps may be related to defects as a result of impurities. Once trapped, the holes attract mobile, negatively charged defects in the lattice: the interstitial silver vacancy Agv−:
h• + Agv− ⇌ h.Agv
The formation of the h.Agv lowers its energy sufficiently to stabilize the complex and reduce the probability of ejection of the hole back into the valence band (the equilibrium constant for the hole-complex in the interior of the crystal is estimated at 10⁻⁴).
Additional investigations on electron- and hole-trapping demonstrated that impurities also can be a significant trapping system. Consequently, interstitial silver ions may not be reduced. Therefore, these traps are actually loss mechanisms, and are considered trapping inefficiencies. For example, atmospheric oxygen can interact with photoelectrons to form an O2− species, which can interact with a hole to reverse the complex and undergo recombination. Metal ion impurities such as copper(I), iron(II), and cadmium(II) have demonstrated hole-trapping in silver bromide.
Crystal surface chemistry
Once the hole-complexes are formed, they diffuse to the surface of the grain as a result of the formed concentration gradient. Studies demonstrated that the lifetimes of holes near the surface of the grain are much longer than those in the bulk, and that these holes are in equilibrium with adsorbed bromine. The net effect is an equilibrium push at the surface to form more holes. Therefore, as the hole-complexes reach the surface, they disassociate:
h.Agv → h• + Agv− → Br → ½ Br2
By this reaction equilibrium, the hole-complexes are constantly consumed at the surface, which acts as a sink, until removed from the crystal. This mechanism provides the counterpart to the reduction of the interstitial Agi+ to Agi0, giving an overall equation of:
AgBr → Ag + ½ Br2
Latent image formation and photography
Now that some of the theory has been presented, the actual mechanism of the photographic process can be discussed. To summarize, as a photographic film is subjected to an image, photons incident on the grain produce electrons which interact to yield silver metal. More photons hitting a particular grain will produce a larger concentration of silver atoms, containing between 5 and 50 silver atoms (out of ~10¹² atoms), depending on the sensitivity of the emulsion. The film now has a concentration gradient of silver atom specks based upon the varying light intensity across its area, producing an invisible "latent image".
While this process is occurring, bromine atoms are being produced at the surface of the crystal. To collect the bromine, a layer on top of the emulsion, called a sensitizer, acts as a bromine acceptor.
During film development the latent image is intensified by addition of a chemical, typically hydroquinone, that selectively reduces those grains which contain atoms of silver. The process, which is sensitive to temperature and concentration, will completely reduce grains to silver metal, intensifying the latent image on the order of 10¹⁰ to 10¹¹. This step demonstrates the advantage and superiority of silver halides over other systems: the latent image, which takes only milliseconds to form and is invisible, is sufficient to produce a full image.
After development, the film is "fixed", during which the remaining silver salts are removed to prevent further reduction, leaving the "negative" image on the film. The agent used is sodium thiosulfate, and reacts according to the following equation:
AgX(s) + 2 Na2S2O3(aq) → Na3[Ag(S2O3)2](aq) + NaX(aq)
An indefinite number of positive prints can be generated from the negative by passing light through it and undertaking the same steps outlined above.
Semiconductor properties
As silver bromide is heated within 100 °C of its melting point, an Arrhenius plot of the ionic conductivity shows the value increasing and "upward-turning". Other physical properties such as elastic moduli, specific heat, and the electronic energy gap also increase, suggesting the crystal is approaching instability. This behavior, typical of a semi-conductor, is attributed to a temperature-dependence of Frenkel defect formation, and, when normalized against the concentration of Frenkel defects, the Arrhenius plot linearizes.
See also
Photography
Science of photography
Silver chloride
References
Metal halides
Bromides
Silver compounds
Photographic chemicals
Light-sensitive chemicals | Silver bromide | [
"Chemistry"
] | 2,310 | [
"Light-sensitive chemicals",
"Inorganic compounds",
"Salts",
"Light reactions",
"Bromides",
"Metal halides"
] |
965,376 | https://en.wikipedia.org/wiki/Minimal%20counterexample | In mathematics, a minimal counterexample is the smallest example which falsifies a claim, and a proof by minimal counterexample is a method of proof which combines the use of a minimal counterexample with the ideas of proof by induction and proof by contradiction. More specifically, in trying to prove a proposition P, one first assumes by contradiction that it is false, and that therefore there must be at least one counterexample. With respect to some idea of size (which may need to be chosen carefully), one then concludes that there is such a counterexample C that is minimal. In regard to the argument, C is generally something quite hypothetical (since the truth of P excludes the possibility of C), but it may be possible to argue that if C existed, then it would have some definite properties which, after applying some reasoning similar to that in an inductive proof, would lead to a contradiction, thereby showing that the proposition P is indeed true.
If the form of the contradiction is that we can derive a further counterexample D, that is smaller than C in the sense of the working hypothesis of minimality, then this technique is traditionally called proof by infinite descent. In which case, there may be multiple and more complex ways to structure the argument of the proof.
The assumption that if there is a counterexample, there is a minimal counterexample, is based on a well-ordering of some kind. The usual ordering on the natural numbers is clearly possible, by the most usual formulation of mathematical induction; but the scope of the method can include well-ordered induction of any kind.
Examples
The minimal counterexample method has been much used in the classification of finite simple groups. The Feit–Thompson theorem, that finite simple groups that are not cyclic groups have even order, was proved by assuming the existence of some, and therefore some minimal, counterexample: a non-cyclic simple group G of odd order. Every proper subgroup of such a minimal G can be assumed to be a solvable group, meaning that much of the theory of solvable groups could be applied.
Euclid's proof of the fundamental theorem of arithmetic is a simple proof which uses a minimal counterexample.
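A compact sketch of the method in that spirit (the statement and wording below are an illustrative standard argument, not quoted from the article):

```latex
\paragraph{Illustration.}
Claim: every integer $n \ge 2$ has a prime divisor.
Suppose not, and let $C$ be the least integer $\ge 2$ with no prime divisor.
$C$ cannot be prime, since a prime divides itself, so $C = ab$ with $1 < a, b < C$.
By minimality of $C$, the factor $a$ has a prime divisor $p$; then $p \mid a$ and
$a \mid C$ give $p \mid C$, contradicting the choice of $C$.
Hence no counterexample, and in particular no minimal counterexample, exists.
```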
Courant and Robbins used the term minimal criminal for a minimal counter-example in the context of the four color theorem.
References
Mathematical proofs
Mathematical terminology | Minimal counterexample | [
"Mathematics"
] | 481 | [
"nan"
] |
965,387 | https://en.wikipedia.org/wiki/Bathymetry | Bathymetry is the study of underwater depth of ocean floors (seabed topography), lake floors, or river floors. In other words, bathymetry is the underwater equivalent to hypsometry or topography. The first recorded evidence of water depth measurements is from Ancient Egypt over 3000 years ago. Bathymetry has various uses including the production of bathymetric charts to guide vessels and identify underwater hazards, the study of marine life near the floor of water bodies, coastline analysis and ocean dynamics, including predicting currents and tides.
Bathymetric charts (not to be confused with hydrographic charts), are typically produced to support safety of surface or sub-surface navigation, and usually show seafloor relief or terrain as contour lines (called depth contours or isobaths) and selected depths (soundings), and typically also provide surface navigational information. Bathymetric maps (a more general term where navigational safety is not a concern) may also use a digital terrain model and artificial illumination techniques to illustrate the depths being portrayed. The global bathymetry is sometimes combined with topography data to yield a global relief model. Paleobathymetry is the study of past underwater depths.
Synonyms include seafloor mapping, seabed mapping, seafloor imaging and seabed imaging. Bathymetric measurements are conducted with various methods, from depth sounding, sonar and lidar techniques, to buoys and satellite altimetry. Various methods have advantages and disadvantages and the specific method used depends upon the scale of the area under study, financial means, desired measurement accuracy, and additional variables. Despite modern computer-based research, the ocean seabed in many locations is less measured than the topography of Mars.
Seabed topography
Measurement
Originally, bathymetry involved the measurement of ocean depth through depth sounding. Early techniques used pre-measured heavy rope or cable lowered over a ship's side. This technique measures the depth at only a single point at a time, and is therefore inefficient. It is also subject to movements of the ship and to currents moving the line out of true, and is therefore not accurate.
The data used to make bathymetric maps today typically comes from an echosounder (sonar) mounted beneath or over the side of a boat, "pinging" a beam of sound downward at the seafloor or from remote sensing LIDAR or LADAR systems. The amount of time it takes for the sound or light to travel through the water, bounce off the seafloor, and return to the sounder informs the equipment of the distance to the seafloor. LIDAR/LADAR surveys are usually conducted by airborne systems.
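A minimal sketch of the underlying travel-time calculation (illustrative only; the function name and the nominal 1500 m/s sound speed are assumptions, and real surveys apply measured sound-speed profiles plus attitude and tide corrections):

```python
def echo_depth(two_way_time_s: float, sound_speed_m_s: float = 1500.0) -> float:
    """Estimate depth (m) from a single echo-sounder ping.

    two_way_time_s : time for the pulse to reach the seafloor and return
    sound_speed_m_s: assumed mean sound speed in seawater (nominal value;
                     operational systems use measured profiles)
    """
    # Depth is half the two-way travel distance.
    return sound_speed_m_s * two_way_time_s / 2.0


# Example: a ping returning after 0.2 s implies roughly 150 m of water.
print(echo_depth(0.2))  # 150.0
```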
Starting in the early 1930s, single-beam sounders were used to make bathymetry maps. Today, multibeam echosounders (MBES) are typically used, which use hundreds of very narrow adjacent beams (typically 256) arranged in a fan-like swath of typically 90 to 170 degrees across. The tightly packed array of narrow individual beams provides very high angular resolution and accuracy. In general, a wide swath, which is depth dependent, allows a boat to map more seafloor in less time than a single-beam echosounder by making fewer passes. The beams update many times per second (typically 0.1–50 Hz depending on water depth), allowing faster boat speed while maintaining 100% coverage of the seafloor. Attitude sensors allow for the correction of the boat's roll and pitch on the ocean surface, and a gyrocompass provides accurate heading information to correct for vessel yaw. (Most modern MBES systems use an integrated motion-sensor and position system that measures yaw as well as the other dynamics and position.) A satellite-based global navigation system positions the soundings with respect to the surface of the earth. Sound speed profiles (speed of sound in water as a function of depth) of the water column correct for refraction or "ray-bending" of the sound waves owing to non-uniform water column characteristics such as temperature, conductivity, and pressure. A computer system processes all the data, correcting for all of the above factors as well as for the angle of each individual beam. The resulting sounding measurements are then processed either manually, semi-automatically or automatically (in limited circumstances) to produce a map of the area. A number of different outputs are generated, including a sub-set of the original measurements that satisfy some conditions (e.g., most representative likely soundings, shallowest in a region, etc.) or integrated digital terrain models (DTM) (e.g., a regular or irregular grid of points connected into a surface). Historically, selection of measurements was more common in hydrographic applications while DTM construction was used for engineering surveys, geology, flow modeling, etc. Since the mid-2000s, DTMs have become more accepted in hydrographic practice.
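To illustrate why swath coverage is depth dependent, a simplified flat-seafloor geometry (ignoring refraction) gives the across-track width directly from depth and swath angle; the 150-degree figure below is just an example within the range quoted above, and the function name is an assumption:

```python
import math


def swath_width(depth_m: float, swath_angle_deg: float) -> float:
    """Approximate across-track coverage of a multibeam swath.

    Assumes a flat seafloor and straight-line (unrefracted) beams,
    so the result is only a rough survey-planning figure.
    """
    half_angle = math.radians(swath_angle_deg / 2.0)
    return 2.0 * depth_m * math.tan(half_angle)


# Example: in 100 m of water, a 150-degree swath covers roughly 746 m across track.
print(round(swath_width(100.0, 150.0)))  # 746
```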
Satellites are also used to measure bathymetry. Satellite radar maps deep-sea topography by detecting the subtle variations in sea level caused by the gravitational pull of undersea mountains, ridges, and other masses. On average, sea level is higher over mountains and ridges than over abyssal plains and trenches.
In the United States the United States Army Corps of Engineers performs or commissions most surveys of navigable inland waterways, while the National Oceanic and Atmospheric Administration (NOAA) performs the same role for ocean waterways. Coastal bathymetry data is available from NOAA's National Geophysical Data Center (NGDC), which is now merged into National Centers for Environmental Information. Bathymetric data is usually referenced to tidal vertical datums. For deep-water bathymetry, this is typically Mean Sea Level (MSL), but most data used for nautical charting is referenced to Mean Lower Low Water (MLLW) in American surveys, and Lowest Astronomical Tide (LAT) in other countries. Many other datums are used in practice, depending on the locality and tidal regime.
Occupations or careers related to bathymetry include the study of oceans and rocks and minerals on the ocean floor, and the study of underwater earthquakes or volcanoes. The taking and analysis of bathymetric measurements is one of the core areas of modern hydrography, and a fundamental component in ensuring the safe transport of goods worldwide.
Satellite imagery
Another form of mapping the seafloor is through the use of satellites. The satellites are equipped with hyper-spectral and multi-spectral sensors which are used to provide constant streams of images of coastal areas providing a more feasible method of visualising the bottom of the seabed.
Hyper-spectral sensors
The data-sets produced by hyper-spectral (HS) sensors tend to range between 100 and 200 spectral bands of approximately 5–10 nm bandwidths. Hyper-spectral sensing, or imaging spectroscopy, is a combination of continuous remote imaging and spectroscopy producing a single set of data. Two examples of this kind of sensing are AVIRIS (airborne visible/infrared imaging spectrometer) and HYPERION.
The application of HS sensors in regards to the imaging of the seafloor is the detection and monitoring of chlorophyll, phytoplankton, salinity, water quality, dissolved organic materials, and suspended sediments. However, this does not provide a great visual interpretation of coastal environments.
Multi-spectral sensors
The other method of satellite imaging, multi-spectral (MS) imaging, tends to divide the EM spectrum into a small number of bands, unlike its partner hyper-spectral sensors which can capture a much larger number of spectral bands.
MS sensing is used more in the mapping of the seabed due to its fewer spectral bands with relatively larger bandwidths. The larger bandwidths allow for a larger spectral coverage, which is crucial in the visual detection of marine features and general spectral resolution of the images acquired.
Airborne laser bathymetry
High-density airborne laser bathymetry (ALB) is a modern, highly technical, approach to the mapping the seafloor. First developed in the 1960s and 1970s, ALB is a "light detection and ranging (LiDAR) technique that uses visible, ultraviolet, and near infrared light to optically remote sense a contour target through both an active and passive system." What this means is that airborne laser bathymetry also uses light outside the visible spectrum to detect the curves in underwater landscape.
LiDAR (light detection and ranging) is, according to the National Oceanic and Atmospheric Administration, "a remote sensing method that uses light in the form of a pulsed laser to measure distances". These light pulses, along with other data, generate a three-dimensional representation of whatever the light pulses reflect off, giving an accurate representation of the surface characteristics. A LiDAR system usually consists of a laser, scanner, and GPS receiver. Airplanes and helicopters are the most commonly used platforms for acquiring LIDAR data over broad areas. One application of LiDAR is bathymetric LiDAR, which uses water-penetrating green light to also measure seafloor and riverbed elevations.
ALB generally operates in the form of a pulse of non-visible light being emitted from a low-flying aircraft and a receiver recording two reflections from the water. The first of which originates from the surface of the water, and the second from the seabed. This method has been used in a number of studies to map segments of the seafloor of various coastal areas.
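A hedged sketch of that two-return geometry (assuming a nadir-pointing pulse and ignoring off-nadir angles, surface refraction and waveform processing; the function name and the 1.33 refractive index are assumptions): water depth follows from the time separation of the surface and seabed returns and the reduced speed of light in water.

```python
def alb_depth(surface_return_s: float, bottom_return_s: float,
              refractive_index: float = 1.33) -> float:
    """Estimate water depth (m) from the two returns of an airborne laser
    bathymetry pulse, assuming a vertical (nadir) path through the water.
    """
    c = 299_792_458.0                         # speed of light in vacuum, m/s
    dt = bottom_return_s - surface_return_s   # two-way time spent in the water column
    # Light travels at roughly c / n in water; depth is half the one-way distance.
    return (c / refractive_index) * dt / 2.0


# Example: a 133 ns separation between returns implies roughly 15 m of water.
print(round(alb_depth(0.0, 133e-9), 1))  # ~15.0
```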
Examples of commercial LIDAR bathymetry systems
There are various LIDAR bathymetry systems that are commercially accessible. Two of these systems are the Scanning Hydrographic Operational Airborne Lidar Survey (SHOALS) and the Laser Airborne Depth Sounder (LADS). SHOALS was first developed in the 1990s by a company called Optech to help the United States Army Corps of Engineers (USACE) in bathymetric surveying. SHOALS operates by transmitting a laser, of wavelength between 530 and 532 nm, from a height of approximately 200 m at a speed of 60 m/s on average.
High resolution orthoimagery
High resolution orthoimagery (HRO) is the process of creating an image that combines the geometric qualities with the characteristics of photographs. The result of this process is an orthoimage, a scale image which includes corrections made for feature displacement such as building tilt. These corrections are made through the use of a mathematical equation, information on sensor calibration, and the application of digital elevation models.
An orthoimage can be created through the combination of a number of photos of the same target. The target is photographed from a number of different angles to allow for the perception of the true elevation and tilting of the object. This gives the viewer an accurate perception of the target area.
High resolution orthoimagery is currently being used in the 'terrestrial mapping program', the aim of which is to 'produce high resolution topography data from Oregon to Mexico'. The orthoimagery will be used to provide the photographic data for these regions.
History
The earliest known depth measurements were made about 1800 BCE by Egyptians by probing with a pole. Later a weighted line was used, with depths marked off at intervals. This process was known as sounding. Both these methods were limited by being spot depths, taken at a point, and could easily miss significant variations in the immediate vicinity. Accuracy was also affected by water movement–current could swing the weight from the vertical and both depth and position would be affected. This was a laborious and time-consuming process and was strongly affected by weather and sea conditions.
There were significant improvements with the voyage of HMS Challenger in the 1870s, when similar systems using wires and a winch were used for measuring much greater depths than previously possible, but this remained a one-depth-at-a-time procedure which required very low speed for accuracy. Greater depths could be measured using weighted wires deployed and recovered by powered winches. The wires had less drag and were less affected by current, did not stretch as much, and were strong enough to support their own weight to considerable depths. The winches allowed faster deployment and recovery, necessary when the depths measured were of several kilometers. Wire drag surveys continued to be used until the 1990s due to reliability and accuracy. This procedure involved a cable towed between two boats, supported by floats and weighted to keep a constant depth. The wire would snag on obstacles shallower than the cable depth. This was very useful for finding navigational hazards which could be missed by soundings, but was limited to relatively shallow depths.
Single-beam echo sounders were used from the 1920s to the 1930s to measure the distance of the seafloor directly below a vessel at relatively close intervals along the line of travel. By running roughly parallel lines, data points could be collected at better resolution, but this method still left gaps between the data points, particularly between the lines. The mapping of the sea floor began with the use of sound waves; depths were contoured into isobaths to produce early bathymetric charts of shelf topography. These provided the first insight into seafloor morphology, though mistakes were made due to limited horizontal positional accuracy and imprecise depths. Sidescan sonar was developed in the 1950s to 1970s and could be used to create an image of the bottom, but the technology lacked the capacity for direct depth measurement across the width of the scan. In 1957, Marie Tharp, working with Bruce Charles Heezen, created the first three-dimensional physiographic map of the world's ocean basins. Tharp's discovery was made at the perfect time. It was one of many discoveries that took place near the same time as the invention of the computer. Computers, with their ability to compute large quantities of data, have made research much easier, including research of the world's oceans. The development of multibeam systems made it possible to obtain depth information across the width of the sonar swath, at higher resolutions, and, with precise position and attitude data for the transducers, to get multiple high-resolution soundings from a single pass.
The US Naval Oceanographic Office developed a classified version of multibeam technology in the 1960s. NOAA obtained an unclassified commercial version in the late 1970s and established protocols and standards. Data acquired with multibeam sonar have vastly increased understanding of the seafloor.
The U.S. Landsat satellites of the 1970s and later the European Sentinel satellites, have provided new ways to find bathymetric information, which can be derived from satellite images. These methods include making use of the different depths to which different frequencies of light penetrate the water. When water is clear and the seafloor is sufficiently reflective, depth can be estimated by measuring the amount of reflectance observed by a satellite and then modeling how far the light should penetrate in the known conditions. The Advanced Topographic Laser Altimeter System (ATLAS) on NASA's Ice, Cloud, and land Elevation Satellite 2 (ICESat-2) is a photon-counting lidar that uses the return time of laser light pulses from the Earth's surface to calculate altitude of the surface. ICESat-2 measurements can be combined with ship-based sonar data to fill in gaps and improve precision of maps of shallow water.
Mapping of continental shelf seafloor topography using remotely sensed data has applied a variety of methods to visualise the bottom topography. Early methods included hachure maps and were generally based on the cartographer's personal interpretation of limited available data. Acoustic mapping methods developed from military sonar images produced a more vivid picture of the seafloor. Further development of sonar-based technology has allowed more detail and greater resolution, and ground-penetrating techniques provide information on what lies below the bottom surface. Airborne and satellite data acquisition have made further advances possible in the visualisation of underwater surfaces: high-resolution aerial photography and orthoimagery are powerful tools for mapping shallow clear waters on continental shelves; airborne laser bathymetry, using reflected light pulses, is also very effective in those conditions; and hyperspectral and multispectral satellite sensors can provide a nearly constant stream of benthic environmental information. Remote sensing techniques have been used to develop new ways of visualising dynamic benthic environments, from general geomorphological features to biological coverage.
Charts
See also
Seabed 2030 Project
References
External links
Bathymetric Data Viewer from NOAA's NCEI
Overview for underwater terrain, data formats, etc. (vterrain.org)
High resolution bathymetry for the Great Barrier Reef and Coral Sea
A.PO.MA.B.-Academy of Positioning Marine and Bathymetry
WebMapping Application for searching free and open source Bathymetry datasets
Interactive Web Map, Set Negative Elevation for Bathymetry
NOAA Ocean Explorer
Schmidt Ocean Institute: Seafloor Mapping
Seafloormapping.co.uk
Coastal Bathymetry Map for US, Canda, Europe & Australia
Seabed 2030
Cartography
Geomorphology
Oceanography
Topography techniques | Bathymetry | [
"Physics",
"Environmental_science"
] | 3,449 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
965,390 | https://en.wikipedia.org/wiki/Matching%20hypothesis | The matching hypothesis (also known as the matching phenomenon) argues that people are more likely to form and succeed in a committed relationship with someone who is equally socially desirable, typically in the form of physical attraction. The hypothesis is derived from the discipline of social psychology and was first proposed by American social psychologist Elaine Hatfield and her colleagues in 1966.
Successful couples of differing physical attractiveness may be together due to other matching variables that compensate for the difference in attractiveness. For instance, some men with wealth and status desire younger, more attractive women. Some women are more likely to overlook physical attractiveness for men who possess wealth and status.
It is also similar to some of the theorems outlined in uncertainty reduction theory, from the post-positivist discipline of communication studies. These theorems include constructs of nonverbal expression, perceived similarity, liking, information seeking, and intimacy, and their correlations to one another.
Research
Walster et al. (1966)
Walster advertised a "Computer Match Dance". 752 student participants were rated on physical attractiveness by four independent judges, as a measure of social desirability. Participants were told to fill in a questionnaire for the purposes of computer matching based on similarity. Instead, participants were randomly paired, except that no man was paired with a taller woman. During an intermission of the dance, participants were asked to assess their date. People with higher attractiveness ratings were found to judge their dates more harshly, and higher levels of attractiveness were associated with lower levels of satisfaction with the pairing, even when the partners were at the same level. It was also found that both men and women were more satisfied with their dates if their dates had high levels of attractiveness. Physical attractiveness was found to be the most important factor in how much participants enjoyed the date and in whether or not they would sleep with their date when propositioned; it was more important than intelligence and personality.
One criticism Walster assigned to the study was that the four judges who assigned the attractiveness ratings to the participants had very brief interactions with them. Longer exposure may have changed the attraction ratings. In a follow-up of the experiment, it was found that couples were more likely to continue interacting if they held similar attraction ratings.
Walster and Walster (1971)
Walster and Walster ran a follow-up to the Computer Dance, but instead allowed participants to meet beforehand in order to give them greater chance to interact and think about their ideal qualities in a partner. The study had greater ecological validity than the original study, and the finding was that partners that were similar in terms of physical attractiveness expressed the most liking for each other – a finding that supports the matching hypothesis.
Murstein (1972)
Murstein also found evidence that supported the matching hypothesis. Photos of 197 couples in various statuses of relationship (from casually dating to married), were rated in terms of attractiveness by eight judges. Each person was photographed separately. The judges did not know which photographs went together within romantic partnerships. The ratings from the judges supported the matching hypothesis.
Self-perception and perception of the partner were included in the first round of the study; however, in the later rounds they were removed, as partners not only rated themselves unrealistically high, but their partners even higher.
Huston (1973)
Huston argued that the evidence for the matching hypothesis did not come from matching as such but from the tendency of people to avoid rejection, hence choosing someone similarly attractive to themselves so as not to be rejected by someone more attractive. Huston attempted to demonstrate this by showing participants photos of people who had already indicated that they would accept the participant as a partner. The participant usually chose the person rated as most attractive; however, the study has very flawed ecological validity because acceptance was certain, whereas in real life people cannot be certain and are therefore still more likely to choose someone of equal attractiveness to avoid possible rejection.
White (1980)
White conducted a study on 123 dating couples at UCLA. He stated that good physical matches may be conducive to good relationships. The study reported that partners most similar in physical attractiveness were found to rate themselves happier and report deeper feelings of love.
The study also supported that some, especially men, view relationships as a marketplace. If the partnership is weak, an individual may devalue it if they have many friends of the opposite sex who are more attractive. They may look at the situation as having more options present that are more appealing. At the same time, if the relationship is strong, they may value the relationship more because they are passing up on these opportunities in order to remain in the relationship.
Brown (1986)
Brown argued for the matching hypothesis, but maintained that it results from a learned sense of what is "fitting" – we adjust our expectation of a partner in line with what we believe we have to offer others, instead of a fear of rejection.
Garcia and Khersonsky (1996)
Garcia and Khersonsky studied this effect and how others view matching and non-matching couples. Participants viewed photos of couples who matched or did not match in physical attractiveness and completed a questionnaire. The questionnaire included ratings of how satisfied the couples appear in their current relationship, their potential marital satisfaction, how likely it is that they will break up, and how likely it is that they will be good parents. Results showed that the matched attractive couple was rated as currently more satisfied than the non-matching couple in which the male was more attractive than the female. Additionally, in the non-matching couple the unattractive male was rated as more satisfied (currently and in the projected marriage) than the attractive female, while the attractive woman in the matched attractive couple was also rated as more satisfied (currently and in the projected marriage).
Shaw Taylor et al. (2011)
Shaw Taylor performed a series of studies involving the matching hypothesis in online dating. In one of the studies, the attractiveness of 60 males and 60 females was measured and their interactions were monitored, recording whom they contacted and who returned their messages. What they found differed from the original construct of matching: people contacted others who were significantly more attractive than they were, but a recipient was more likely to reply if the sender was closer to their own level of attractiveness. This study supported matching, but not as something that is intentional.
Other studies
Further evidence supporting the matching hypothesis was found by:
Berscheid and Dion (1974)
Berscheid and Walster et al. (1974)
Quotations
Price and Vandenberg stated that "the matching phenomenon [of physical attractiveness between marriage partners] is stable within and across generations".
"Love is often nothing but a favorable exchange between two people who get the most of what they can expect, considering their value on the personality market." — Erich Fromm
See also
Assortative mating
Uncertainty reduction theory
References
1966 introductions
Interpersonal relationships | Matching hypothesis | [
"Biology"
] | 1,401 | [
"Behavior",
"Interpersonal relationships",
"Human behavior"
] |
965,409 | https://en.wikipedia.org/wiki/HD-MAC | HD-MAC (High Definition Multiplexed Analogue Components) was a broadcast television standard proposed by the European Commission in 1986, as part of Eureka 95 project. It belongs to the MAC - Multiplexed Analogue Components standard family. It is an early attempt by the EEC to provide High-definition television (HDTV) in Europe. It is a complex mix of analogue signal (based on the Multiplexed Analogue Components standard), multiplexed with digital sound, and assistance data for decoding (DATV). The video signal (1250 lines/50 fields per second in 16:9 aspect ratio, with 1152 visible lines) was encoded with a modified D2-MAC encoder.
HD-MAC could be decoded by normal D2-MAC standard definition receivers, but no extra resolution was obtained and certain artifacts were visible. To decode the signal in full resolution, a specific HD-MAC tuner was required.
Naming convention
The European Broadcasting Union video format description is as follows: width x height [scan type: i or p] / number of full frames per second
European standard definition digital broadcasts use 720×576i/25, meaning 25 interlaced frames per second, each 720 pixels wide and 576 pixels high: the odd lines (1, 3, 5 ...) are grouped into the odd field, which is transmitted first, followed by the even field containing lines 2, 4, 6... Thus there are two fields per frame, giving a field frequency of 25 × 2 = 50 Hz.
The visible part of the video signal provided by an HD-MAC receiver was 1152i/25, which exactly doubles the vertical resolution of standard definition. The amount of information is multiplied by 4, considering the encoder started its operations from a 1440x1152i/25 sampling grid.
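The factor-of-four increase follows directly from the sample counts quoted above; the short calculation below is just an arithmetic check, not part of the standard itself.

```python
# Luminance samples per second for SD and for the HD-MAC encoder input grid.
sd_samples_per_s = 720 * 576 * 25      # 720x576i/25 standard definition
hd_samples_per_s = 1440 * 1152 * 25    # 1440x1152i/25 HD-MAC encoder input

print(sd_samples_per_s)                       # 10,368,000
print(hd_samples_per_s)                       # 41,472,000
print(hd_samples_per_s / sd_samples_per_s)    # 4.0 -> four times the information
```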
Standard history
Work on the HD-MAC specification started officially in May 1986. The purpose was to react against a Japanese proposal, supported by the US, which aimed to establish the NHK-designed Hi-Vision (also known as MUSE) system as a world standard. Besides preserving the European electronics industry, there was also a need to produce a standard that would be compatible with the 50 Hz field-frequency systems used by a large majority of countries in the world. In fact, the exactly 60 Hz field rate of the Japanese proposal also worried the US, as their NTSC M-based standard definition infrastructure used a practical frequency of 59.94 Hz, potentially leading to incompatibility problems.
In September, 1988, the Japanese performed the first High Definition broadcasts of the Olympic games, using their Hi-Vision system (NHK produced material using this format since 1982). In that same month of September, Europe showed for the first time a credible alternative, namely a complete HD-MAC broadcasting chain, at IBC 88 in Brighton. This show included the first progressive scan HD video camera prototypes (Thomson/LER).
Golden SCART was developed as a transmission interface for consumer devices, a special and backward-compatible implementation of the normal SCART connection. Some television sets from Philips and Telefunken are said to have been equipped with it.
For the Albertville 1992 Winter Olympics and Barcelona 1992 Summer Olympics, a public demonstration of HD-MAC broadcasting took place. 60 HD-MAC receivers for the Albertville games and 700 for the Barcelona games were set up in "Eurosites" to show the capabilities of the standard. 1250 lines (1152 visible) CRT projectors were used to create an image a few meters wide in public spaces in Barcelona for the Olympics. There were some Thomson "Space system" 16:9 CRT TV sets as well. The project sometimes used rear-projection televisions. In addition, some 80,000 viewers of D2-MAC receivers were also able to watch the channel (though not in HD). It is estimated that 350,000 people across Europe were able to see this demonstration of European HDTV. This project was financed by the EEC. The PAL-converted signal was used by mainstream broadcasters such as SWR, BR and 3sat. The HD-MAC standard was also demonstrated at Seville Expo '92, exclusively using equipment designed to work with the standard such as Plumbicon and CCD cameras, direct view and rear projection CRT TVs, BCH 1000 Type B VTRs, single mode fiber optic cables, and Laserdisc players with their respective discs. Production equipment was visible to the public through windows.
Because spare UHF bandwidth was very scarce, HD-MAC was usable de facto only by cable and satellite providers, whose bandwidth was less constrained, similarly to Hi-Vision, which was only broadcast by NHK through a dedicated satellite channel called BShi. However, the standard never became popular among broadcasters. As a result, analogue HDTV could not replace conventional terrestrial SDTV in PAL/SECAM, making HD-MAC sets unattractive to potential consumers.
From 1986, all high-powered satellite broadcasters were required to use MAC. However, the launch of medium-powered satellites by SES and the use of PAL allowed broadcasters to bypass HD-MAC, reducing their transmission costs; HD-MAC was then left for transcontinental satellite links.
The HD-MAC standard was abandoned in 1993, and since then all EU and EBU efforts have focused on the DVB system (Digital Video Broadcasting), which allows both SDTV and HDTV.
This article about IFA 1993 provides a view of the project's status close to its end. It mentions "a special BBC compilation encoded in HD-MAC and replayed from a D1 Video Tape Recorder".
HD-MAC development was stopped alongside the EUREKA project in 1996 because picture quality was not deemed good enough, receiving TVs did not have enough resolution, the 16:9 aspect ratio that would later become standard was seen as exotic, and receiving TVs were not large enough to exhibit the image quality of the standard; those that were large enough were CRT sets, which made them extremely heavy.
Technical details
Transmission
PAL/SECAM analogue SDTV broadcasts use 6-, 7- (VHF), or 8 MHz (UHF). The 819-line (System E) used 14 MHz wide VHF channels. For HD-MAC, the transmission medium must guarantee a baseband bandwidth of at least 11.14 MHz. This translates to a 12 MHz channel spacing in cable networks. The specification allows for 8 MHz channels, but in this case assistance data can no longer be correctly decoded, and it is only possible to extract a standard definition signal, using a D2-MAC receiver.
For satellite broadcasting, due to FM modulation spectrum expansion, an entire satellite transponder would be used, resulting in 27 to 36 MHz of bandwidth. The situation is much the same for analogue standard definition: a given transponder can only support one analogue channel, so from this point of view going to HD was not a disadvantage.
Bandwidth reduction
BRE (Bandwidth Reduction Encoding) operation started with analogue HD video (even when the source was a digital recorder, it was reconverted to analogue to feed the encoder). It was specified to have a 50 Hz field frequency. It could be interlaced, with 25 frames a second (called 1250/50/2 in the recommendation), or it could be progressively scanned with 50 full frames a second (called 1250/50/1). The interlaced version was the one used in practice. In either case, the number of visible lines was 1152, twice the 576-line vertical definition of standard definition. The full number of lines in a frame period, including those that cannot be displayed, was 1250. This made for a 32 μs line period. According to the ITU recommendation for HDTV standards parameters, the active part of the line was 26.67 μs long (see also the LDK 9000 camera document).
Had the modern trend for square pixels applied, this would have yielded a 2048x1152 sampling grid. There was no such requirement in the standard, though, since CRT monitors don't need any extra scaling to be able to show non-square pixels. According to the specification, the sampling rate for the interlaced input to use was 72 MHz, resulting in 72 x 26.67 = 1920 horizontal samples. It was then reconverted to 1440 from within the sampled domain. The input signal often originated from sources previously sampled at only 54 MHz, for economical reasons, and therefore already containing no more than the analogue equivalent of 1440 samples per line.
Ultimately, the starting point for BRE was a 1440x1152 sampling grid (twice the horizontal and vertical resolutions of digital SD), interlaced, at 25 fps.
To improve horizontal resolution of the D2-MAC norm, only its bandwidth had to be increased. This was easily done as, unlike PAL, the sound is not sent on a sub-carrier, but multiplexed with the picture.
Increasing the vertical resolution, however, was more complex, as the line frequency had to stay at 15.625 kHz to be compatible with D2-MAC. This left three choices:
50 frames per second with only 288 lines for fast moving scenes (20 ms mode)
25 frames per second with 576 lines for normally moving scenes (40 ms mode)
12.5 frames per second with all 1152 lines for slow motion (80 ms mode)
As none of the three modes would have been sufficient, the choice during encoding was not made for the whole picture, but for little blocks of 16×16 pixels. The signal then contained hints (the DATV digital stream) that controlled which de-interlacing method the decoder should use.
The 20 ms mode offered improved temporal resolution, but the 80 ms mode was the only one that provided high spatial definition in the usual sense. The 40 ms mode threw away one of the HD fields and reconstructed it in the receiver with the assistance of motion compensation data. Additional indications were also provided in the case of whole-frame movement (camera panning, etc.) to improve the quality of the reconstruction.
The encoder could work in "Camera" operating mode, using the three coding modes, but also in "film" mode where the 20 ms coding mode was not used.
The 80 ms mode took advantage of its reduced 12.5 fps frame rate to spread the contents of an HD frame over two SD frames, meaning four 20 ms fields = 80 ms, hence the name.
But that was not enough, as a single HD frame contains the equivalent of 4 SD frames. This could have been "solved" by doubling the bandwidth of the D2-MAC signal, thus increasing the allowed horizontal resolution by the same factor. Instead, the standard D2-MAC channel bandwidth was preserved, and one pixel out of two was dropped from each line. This sub-sampling was done in a quincunx pattern. With the pixels on each line independently numbered from 1 to 1440, only pixels 1, 3, 5... were retained from the first line, pixels 2, 4, 6... from the second, pixels 1, 3, 5... again from the third, and so on. That way, information from all the columns of the HD frame was conveyed to the receiver. Each missing pixel was surrounded by 4 transmitted ones (except at the edges) and could be interpolated from them. The resulting 720-sample horizontal resolution was further truncated to the 697 samples per line limit of the D2-HDMAC video multiplex.
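The pixel-dropping pattern described above can be sketched in a few lines of code. The example below only illustrates the quincunx decimation and a simple four-neighbour interpolation at the receiver; it is not the actual HD-MAC encoder or decoder logic.

```python
import numpy as np

def quincunx_subsample(frame: np.ndarray) -> np.ndarray:
    """Keep columns 0, 2, 4... on even rows and 1, 3, 5... on odd rows
    (0-based indexing here; the text numbers pixels and lines from 1)."""
    out = np.zeros_like(frame)
    out[0::2, 0::2] = frame[0::2, 0::2]
    out[1::2, 1::2] = frame[1::2, 1::2]
    return out

def interpolate_missing(sub: np.ndarray) -> np.ndarray:
    """Fill each dropped pixel with the mean of its transmitted neighbours."""
    rec = sub.astype(float).copy()
    h, w = sub.shape
    for y in range(h):
        for x in range((y + 1) % 2, w, 2):          # positions that were dropped
            neigh = [sub[y, x + dx] for dx in (-1, 1) if 0 <= x + dx < w]
            neigh += [sub[y + dy, x] for dy in (-1, 1) if 0 <= y + dy < h]
            rec[y, x] = sum(float(v) for v in neigh) / len(neigh)
    return rec

frame = np.arange(8 * 8, dtype=float).reshape(8, 8)  # toy 8x8 "HD" frame
print(interpolate_missing(quincunx_subsample(frame)))
```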
As a consequence of those operations, a 4:1 reduction factor was achieved, allowing the high definition video signal to be transported in a standard D2-MAC channel. The samples retained by the BRE were assembled into a valid standard definition D2-MAC vision signal and finally converted to analogue for transmission. The modulation parameters were such that the independence of the samples was preserved.
To fully decode the picture, the receiver had to sample the signal again and then read from the memory several times. The BRD (Bandwidth Restoration Decoder) in the receiver would then reconstruct a 1394x1152 sampling grid from it, under the control of the DATV stream, to be fed into its DAC.
The final output was a 1250 (1152 visible) lines, 25 fps, interlaced, analogue HD video signal, with a 50 Hz field frequency.
Progressive scanning
European systems are generally referred to as 50 Hz standards (field frequency). The two fields are 20 ms apart in time. The Eu95 project stated it would evolve towards 1152p/50, and it is taken into account as a possible source in the D2-HDMAC specification. In that format, a full frame is captured every 20 ms, thus preserving the quality of motion of television and topping it with solid artifact-free frames representing only one instant in time, as is done for cinema. The 24 fps frame frequency of cinema is a bit low, though, and a generous amount of motion smear is required to allow the eye to perceive a smooth motion. 50 Hz is more than twice that rate, and the motion smear can be reduced in proportion, allowing for sharper pictures.
In practice, 50P was not used very much. Some tests were even done by having film shot at 50 fps and subsequently telecined.
Thomson / LER presented a progressive camera. However, it used a form of quincunx sampling and had therefore some bandwidth constraints.
This requirement meant pushing the technology boundaries of the time, and would have added to the notorious lack of sensitivity of some Eu 95 cameras (particularly CRT ones).
This thirst for light was one of the problems that plagued the operators shooting the French film "L'affaire Seznec (The Seznec case)" in 1250i.
Some CCD cameras were developed in the context of the project; see for example the LDK 9000: 50 dB signal-to-noise ratio at 30 MHz, at f/4.
The Eu95 system would have provided better compatibility with cinema technology than its competitor, first because of progressive scanning, and second because of the convenience and quality of transfer between 50 Hz standards and film (no motion artifacts, one just needs to invert the usual "PAL speed-up" process by slowing down the frame rate in a 25/24 ratio). Taking one frame out of two from a 50P stream would have provided a suitable 25P video as a starting point for this operation. If the sequence is shot at 50 P with a fully opened shutter, it will produce the same amount of motion smear as a 25P shot with a half opened shutter, a common setting when shooting with a standard movie camera.
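The 25/24 relationship mentioned above is easy to quantify; the figures below are only an arithmetic illustration.

```python
# Arithmetic illustration of the film <-> 50 Hz video frame-rate relationship.
film_fps, video_fps = 24.0, 25.0

print(f"PAL-style speed-up: {video_fps / film_fps:.2%} of original speed")        # ~104.17%

film_minutes = 120.0   # a two-hour film (illustrative)
print(f"running time at 25 fps: {film_minutes * film_fps / video_fps:.1f} min")   # ~115.2 min
```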
In practice, Hi-Vision seems to have been more successful in that regard, having been used for films such as Giulia e Giulia (1987) and Prospero's Books (1991).
Recording
Consumer
A consumer tape recorder prototype was presented in 1988. It had an 80-minute recording time and used a 1.25 cm "metal" tape. Bandwidth was 10.125 MHz and signal to noise ratio 42 dB.
An HD-MAC videodisc prototype had been designed as well. The version that was presented in 1988 could record 20 min per side of a 30 cm disc. Bandwidth was 12 MHz and S/N 32 dB. This media was used for several hours at Expo 92.
Professional equipment
On the studio and production side, it was entirely different. HD-MAC bandwidth reduction techniques bring the HD pixel rate down to the level of SD. So in theory, it would have been possible to use an SD digital video recorder, assuming it provides enough room for the DATV assistance stream, which requires less than 1.1 Mbit/s. SD video using 4:2:0 format (12 bits per pixel) needs 720x576x25x12 bits per second, which is slightly less than 125 Mbit/s, to be compared with the 270 Mbit/s available from a D-1 machine.
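The bit rates quoted in the previous paragraph are straightforward to verify; the snippet below simply reproduces that arithmetic.

```python
# SD 4:2:0 video rate (12 bits per pixel) versus the D-1 recorder's capacity.
video_bits_per_s = 720 * 576 * 25 * 12   # = 124,416,000, "slightly less than 125 Mbit/s"
datv_bits_per_s = 1.1e6                  # DATV assistance-data budget (< 1.1 Mbit/s)
d1_bits_per_s = 270e6                    # serial interface rate of a D-1 machine

print(video_bits_per_s / 1e6)                                # ~124.4 Mbit/s
print((video_bits_per_s + datv_bits_per_s) / 1e6)            # still far below 270 Mbit/s
print(d1_bits_per_s / (video_bits_per_s + datv_bits_per_s))  # more than 2x headroom
```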
But there is no real reason for the studio equipment to be constrained by HD-MAC, as the latter is only a transmission standard, used to convey the HD material from the transmitter to the viewers. Furthermore, technical and financial resources are available to store the HD video with better quality, for editing and archiving.
So in practice, other methods were used. At the start of the Eureka95 project the only means of recording the HD signal from a camera was on a massive 1-inch reel-to-reel tape machine, the BTS BCH 1000, which was based on the Type B videotape format but with 8 video heads instead of the two normally used, together with a higher linear tape speed of 66 cm/s, thus accommodating the higher bandwidth requirements of HD Video.
The plan within the Eureka95 project was to develop an uncompressed 72 MHz sampling digital recorder, dubbed the "Gigabit" recorder. It was expected to take a year to develop, so in the interim, two alternative digital recording systems were assembled, both using the standard definition "D1" uncompressed digital component recorder as starting points.
The Quincunx-subsampled, or double/dual D1 system developed by Thomson used two D-1 digital recorders which were synchronized in a master/slave relationship. Odd fields could then be recorded on one of the D-1 and even fields on the other. Horizontally the system recorded just half the horizontal bandwidth, with samples taken in a quincunx sampling grid. This gave the system a full bandwidth performance in the diagonal direction, but halved horizontally or vertically depending on the exact image temporal-spatial characteristics.
The Quadriga system was developed by the BBC in 1988 using 4 synchronised D1 recorders, 54 MHz sampling, and distributed the signal in such a way that blocks of 4 pixels were sent to each recorder in turn. Thus if a single tape was viewed, the image would appear as a fair but distorted representation of the whole image, enabling edit decisions to be taken on a single recording, and a three-machine edit was possible on a single quadriga by processing each of the four channels in turn, with identical edits made on the other three channels subsequently under the control of a programmed edit controller.
The original D1 recorders were restricted to a parallel video interface with very bulky short cables, but this was not a problem, since the digital signals were contained with the 5 half-height racks (4 D1s and the interface/control/interleaving rack) which made up the Quadriga, and initially all external signals were analogue components. The introduction of SDI (the 270 Mbit/s Serial Digital Interface) simplified cabling by the time the BBC constructed a second Quadriga.
Philips also constructed a Quadriga but used a slightly different format, with the HD image divided into four quadrants, each quadrant going to one of the four recorders. Excepting a slightly longer processing delay, it otherwise worked similarly to the BBC approach, and both versions of the Quadriga equipment were made to be interoperable, switchable between interleaved and quadrant modes.
In about 1993 Philips, in a joint venture with Bosch (BTS), produced a "BRR" (or Bit Rate Reduction) recording system to enable the full HD signal to be recorded onto a single D1 (or D5 HD) recorder. A low-resolution version of the image could be viewed in the centre of the screen if the tape was replayed on a conventional D1 recorder, and was surrounded by what appeared to be noise, but was in fact simply coded/compressed data, in a similar way to later MPEG digital compression techniques, with a compression rate of 5:1, starting with 72 MHz sampling. Some BRR equipment also contained Quadriga interfaces, for ease of conversion between recording formats, also being switchable between BBC and Philips versions of the Quadriga format. By this time, Quadriga signals were being carried on four SDI cables.
Finally, with help from Toshiba, in around 2000, the Gigabit recorder, by now known as the D6 HDTV VTR "Voodoo", was produced, some years after work on the 1250-line system had ceased in favour of the Common Image Format, the HDTV system as it is known today.
Hence the quality of Eureka 95 archives is higher than what viewers could see at the output of an HD-MAC decoder.
Transfer to film
For the making of the HD-based movie L'affaire Seznec, the Thomson company certified it would be able to transfer HD to 35 mm film. But none of the attempts were successful (shooting was done on dual-D1).
However, another French movie shot in 1994, Du fond du coeur: Germaine et Benjamin, allegedly achieved such a transfer. It is said to have been shot in digital high definition in 1250 lines.
If so, it would arguably be the first digital high-definition movie using a film-friendly 50 Hz field rate, 7 years before Vidocq and 8 years before Star Wars: Episode II – Attack of the Clones. For a historical perspective on HD-originated movies, one can mention early attempts such as 'Harlow', shot in 1965 using a near-HD analogue 819-line process that later evolved to higher resolutions (see Electronovision).
Project's afterlife
Experience was gained on important building blocks like HD digital recording, digital processing including motion compensation, HD CCD cameras, and also in factors driving acceptance or rejection of a new format by the professionals, and all of that was put to good use in the subsequent Digital Video Broadcasting project which, in contrast to HD-MAC, is a great worldwide success. Despite early claims by competitors that it could not do HD, it was soon deployed in Australia for just that purpose.
The cameras and tape recorders were reused for early experiments in digital high definition cinema.
The US brought home some of the Eu95 cameras to be studied in the context of their own HDTV standard development effort.
In France, a company called VTHR (Video Transmission Haute Resolution) used the Eu95 hardware for some time to retransmit cultural events to small villages (later, they switched to upscaled 15 Mbit/s MPEG2 SD).
In 1993, Texas Instruments built a 2048x1152 DMD prototype. No rationale is given in the papers for choosing this specific resolution over the Japanese 1035-active-line system, or over simply doubling the 480 lines of standard US TV to 960, but in this way the device could cover all resolutions expected to be present on the market, including the European one, which happened to be the highest. Some legacy of this development may be seen in "2K" and "4K" digital movie projectors using TI DLP chips, which run a slightly wider than usual 2048x1080 or 4096x2160 resolution. This gives a 1.896:1 aspect ratio without anamorphic stretching (versus the 1.778:1 of regular 16:9, with 1920 or 3840 horizontal pixels), provides a little (6.7%) more horizontal resolution with anamorphic lenses when showing 2.21:1 (or wider) movies specifically prepared for them, and allows further enhancement (~13.78%) through reduced letterboxing when used without such lenses.
As of 2010, some computer monitors with 2048x1152 resolution were available (e.g. Samsung 2343BWX, Dell SP2309W). This is unlikely to be a reference to Eu95, especially as the refresh rate generally defaults to "60 Hz" (or 59.94 Hz); it is simply a convenient "HD+" resolution that offers a modest improvement over ubiquitous 1920x1080 HD panels while keeping a 16:9 aspect ratio for video playback without cropping or letterboxing (the next step up in "convenient" resolutions being the comparatively much larger, and so much more expensive, 2560x1600 "2.5K" used in e.g. Apple Cinema and Retina displays). It is also a "neat" power-of-two width, twice the width of the one-time standard XGA (so, for example, websites designed for that width can be smoothly zoomed to 200%), and happens to be four times the size of the 1024x576 panels commonly used for cheaper netbooks and mobile tablets (much as the 2.5K standard is four times the 1280x800 WXGA used in ultraportable laptops and midrange tablets). In this way it can be considered a form of convergent specification evolution: although there is little chance the two standards are directly related, their particulars were arrived at by broadly similar reasoning.
Although the fact is now mainly of historical interest, most larger-tube CRT PC monitors had a maximum horizontal scan rate of 70 kHz or higher, which means they could have handled 2048x1152 at 60 Hz progressive if set to use a custom resolution (with slimmer vertical blanking margins than HD-MAC/Eu95 itself for those rated at less than 75 kHz). Smaller monitors incapable of 70 kHz but good for at least 58 kHz (preferably 62.5 kHz), and able to accept the lower vertical refresh rate, could instead be set to run 50 Hz progressive, or even 100 Hz interlace to avert the flicker that 50 Hz would otherwise cause.
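The scan-rate figures above follow from multiplying the number of lines per frame by the refresh rate. The line totals used below (visible lines plus assumed vertical blanking) are rough illustrative values.

```python
# Required horizontal scan frequency = total lines per frame x vertical refresh rate.
def h_scan_khz(total_lines: int, refresh_hz: float) -> float:
    return total_lines * refresh_hz / 1000.0

print(h_scan_khz(1250, 50))   # 62.5 kHz: full 1250-line raster at 50 Hz progressive
print(h_scan_khz(1152, 50))   # 57.6 kHz: visible lines only, roughly the "58 kHz" figure
print(h_scan_khz(1200, 60))   # 72.0 kHz: a 2048x1152 raster at 60 Hz with ~48 blanking lines
```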
See also
TV transmission systems
Analog high-definition television system
PAL, what MAC technology tried to replace
SECAM, what MAC technology tried to replace
A-MAC
B-MAC
C-MAC
D-MAC
E-MAC
S-MAC
D2-MAC
DVB-S, MAC technology was replaced by this standard
DVB-T, MAC technology was replaced by this standard
Related standards:
NICAM-like audio coding is used in the HD-MAC system.
Chroma subsampling in TV indicated as 4:2:2, 4:1:1 etc...
References
External links
Multiplexed Analogue Components in "Analog TV Broadcast Systems" by Paul Schlyter
ETSI specification of the D2-HDMAC/Packet system (ETS 300 352)
HDTV programme production
HDTV coverage of the Barcelona Olympic Games
Analogue HDTV in europe (Includes description of EBU HD-MAC evaluation tests)
1152p50 CCD camera developed for Eureka 95
Richard Russel career in BBC (Section "High definition TV" talks about preservation of HD-MAC archives by the BBC)
The hdtv demonstrations at expo 92
IMDB link to "L'affaire Seznec", partially shot in 1250i
TVHD document including problems encountered when shooting "L'affaire Seznec"
High-definition television
Satellite television
Television technology
Television transmission standards
Audiovisual introductions in 1986
1986 establishments in Europe
Products and services discontinued in 1993
1993 disestablishments in Europe | HD-MAC | [
"Technology"
] | 5,609 | [
"Information and communications technology",
"Television technology"
] |
965,419 | https://en.wikipedia.org/wiki/Stochastic%20resonance | Stochastic resonance (SR) is a phenomenon in which a signal that is normally too weak to be detected by a sensor can be boosted by adding white noise to the signal, which contains a wide spectrum of frequencies. The frequencies in the white noise corresponding to the original signal's frequencies will resonate with each other, amplifying the original signal while not amplifying the rest of the white noise – thereby increasing the signal-to-noise ratio, which makes the original signal more prominent. Further, the added white noise can be enough to be detectable by the sensor, which can then filter it out to effectively detect the original, previously undetectable signal.
This phenomenon of boosting undetectable signals by resonating with added white noise extends to many other systems – whether electromagnetic, physical or biological – and is an active area of research.
Stochastic resonance was first proposed by the Italian physicists Roberto Benzi, Alfonso Sutera and Angelo Vulpiani in 1981, and the first application they proposed (together with Giorgio Parisi) was in the context of climate dynamics.
Technical description
Stochastic resonance (SR) is observed when noise added to a system changes the system's behaviour in some fashion. More technically, SR occurs if the signal-to-noise ratio of a nonlinear system or device increases for moderate values of noise intensity. It often occurs in bistable systems or in systems with a sensory threshold and when the input signal to the system is "sub-threshold." For lower noise intensities, the signal does not cause the device to cross threshold, so little signal is passed through it. For large noise intensities, the output is dominated by the noise, also leading to a low signal-to-noise ratio. For moderate intensities, the noise allows the signal to reach threshold, but the noise intensity is not so large as to swamp it. Thus, a plot of signal-to-noise ratio as a function of noise intensity contains a peak.
Strictly speaking, stochastic resonance occurs in bistable systems when a small periodic (sinusoidal) force is applied together with a large wide-band stochastic force (noise). The system response is driven by the combination of the two forces, which compete/cooperate to make the system switch between the two stable states. The degree of order is related to the amount of periodicity shown in the system response. When the periodic force is too small on its own to make the system response switch, the presence of non-negligible noise is required for switching to happen. When the noise is small, very few switches occur, mainly at random and with no significant periodicity in the system response. When the noise is very strong, a large number of switches occur during each period of the sinusoid, and the system response again shows no remarkable periodicity. Between these two conditions, there exists an optimal value of the noise that cooperates with the periodic forcing to produce almost exactly one switch per period (a maximum in the signal-to-noise ratio).
Such a favorable condition is quantitatively determined by the matching of two timescales: the period of the sinusoid (the deterministic time scale) and the Kramers rate (i.e., the average switch rate induced by the sole noise: the inverse of the stochastic time scale).
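A standard toy model for this behaviour is an overdamped particle in a quartic double-well potential, driven by a weak sinusoid plus Gaussian white noise. The sketch below integrates such a system so that the noise-induced switching can be inspected at different noise levels; all parameter values are illustrative assumptions, and the forcing amplitude is chosen to be subthreshold (too weak to cause switching on its own).

```python
import numpy as np

def simulate_double_well(noise_std, amp=0.3, freq=0.01, dt=0.01, steps=100_000, seed=0):
    """Euler-Maruyama integration of  dx/dt = x - x**3 + amp*sin(2*pi*freq*t) + noise.

    The wells sit at x = -1 and x = +1; with amp = 0.3 the sinusoid alone
    cannot push the particle over the barrier, so switching needs the noise.
    """
    rng = np.random.default_rng(seed)
    x = np.empty(steps)
    x[0] = -1.0
    for i in range(1, steps):
        t = i * dt
        drift = x[i - 1] - x[i - 1] ** 3 + amp * np.sin(2 * np.pi * freq * t)
        x[i] = x[i - 1] + drift * dt + noise_std * np.sqrt(dt) * rng.standard_normal()
    return x

for sigma in (0.1, 0.35, 1.5):          # weak, intermediate, strong noise
    x = simulate_double_well(sigma)
    crossings = int(np.sum(np.diff(np.sign(x)) != 0))
    print(f"noise std {sigma:4.2f}: {crossings} zero crossings of x(t)")
```

Computing the power of the simulated response at the forcing frequency for a range of noise intensities (for example with a discrete Fourier transform) would reveal the peak in the signal-to-noise ratio described above.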
Stochastic resonance was discovered and proposed for the first time in 1981 to explain the periodic recurrence of ice ages. Since then, the same principle has been applied in a wide variety of systems. Nowadays stochastic resonance is commonly invoked when noise and nonlinearity concur to determine an increase of order in the system response.
Suprathreshold
Suprathreshold stochastic resonance is a particular form of stochastic resonance in which random fluctuations, or noise, provide a signal-processing benefit in a nonlinear system. Unlike most of the nonlinear systems in which stochastic resonance occurs, suprathreshold stochastic resonance occurs when the strength of the fluctuations is small relative to that of the input signal, or even for small amounts of random noise. It is not restricted to a subthreshold signal, hence the qualifier.
Neuroscience, psychology and biology
Stochastic resonance has been observed in the neural tissue of the sensory systems of several organisms. Computationally, neurons exhibit SR because of non-linearities in their processing. SR has yet to be fully explained in biological systems, but neural synchrony in the brain (specifically in the gamma wave frequency band) has been suggested as a possible neural mechanism for SR by researchers who have investigated the perception of "subconscious" visual sensation. Single neurons in vitro, including cerebellar Purkinje cells and the squid giant axon, have also demonstrated inverse stochastic resonance, in which spiking is inhibited by synaptic noise of a particular variance.
Medicine
SR-based techniques have been used to create a novel class of medical devices for enhancing sensory and motor functions such as vibrating insoles especially for the elderly, or patients with diabetic neuropathy or stroke.
See the Review of Modern Physics article for a comprehensive overview of stochastic resonance.
Stochastic resonance has also found noteworthy application in the field of image processing.
Signal analysis
A related phenomenon is dithering applied to analog signals before analog-to-digital conversion. Stochastic resonance can be used to measure transmittance amplitudes below an instrument's detection limit. If Gaussian noise is added to a subthreshold (i.e., immeasurable) signal, then it can be brought into a detectable region. After detection, the noise is removed. A fourfold improvement in the detection limit can be obtained.
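The measurement trick can be illustrated with a toy threshold detector: a sub-threshold sinusoid produces no detector output on its own, but after adding Gaussian noise and averaging over many trials, the detector's crossing rate tracks the hidden signal. All amplitudes, the threshold, and the noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

threshold = 1.0
t = np.linspace(0, 1, 1000, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 5 * t)     # sub-threshold: never exceeds 1.0

def detector(x):
    """Hard-threshold sensor: 1 wherever the input exceeds the threshold."""
    return (x > threshold).astype(float)

print(detector(signal).sum())                # 0.0 -> the bare signal is undetectable

trials, noise_std = 2000, 0.6                # moderate noise level (assumption)
avg = np.mean([detector(signal + noise_std * rng.standard_normal(t.size))
               for _ in range(trials)], axis=0)

# The averaged output now varies with the hidden sinusoid: the threshold-crossing
# probability is highest near the signal's peaks and lowest near its troughs.
print(avg[::50].round(2))
```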
See also
Mutual coherence (linear algebra)
Signal detection theory
Stochastic resonance (sensory neurobiology)
References
Bibliography
Hannes Risken The Fokker-Planck Equation, 2nd edition, Springer, 1989
Bibliography for suprathreshold stochastic resonance
N. G. Stocks, "Suprathreshold stochastic resonance in multilevel threshold systems," Physical Review Letters, 84, pp. 2310–2313, 2000.
M. D. McDonnell, D. Abbott, and C. E. M. Pearce, "An analysis of noise enhanced information transmission in an array of comparators," Microelectronics Journal 33, pp. 1079–1089, 2002.
M. D. McDonnell and N. G. Stocks, "Suprathreshold stochastic resonance," Scholarpedia 4, Article No. 6508, 2009.
M. D. McDonnell, N. G. Stocks, C. E. M. Pearce, D. Abbott, Stochastic Resonance: From Suprathreshold Stochastic Resonance to Stochastic Signal Quantization, Cambridge University Press, 2008.
External links
Scholar Google profile on stochastic resonance
Newsweek Being messy, both at home and in foreign policy, may have its own advantages Retrieved 3 Jan 2011
Stochastic Resonance Conference 1998–2008 ten years of continuous growth. 17-21 Aug. 2008, Perugia (Italy)
Stochastic Resonance - From Suprathreshold Stochastic Resonance to Stochastic Signal Quantization (book)
Review of Suprathreshold Stochastic Resonance
A.S. Samardak, A. Nogaret, N. B. Janson, A. G. Balanov, I. Farrer and D. A. Ritchie. "Noise-Controlled Signal Transmission in a Multithread Semiconductor Neuron" // Phys. Rev. Lett. 102 (2009) 226802,
Biophysics
Stochastic processes
Statistical mechanics
Oscillation
Signal processing
Sensory systems | Stochastic resonance | [
"Physics",
"Technology",
"Engineering",
"Biology"
] | 1,601 | [
"Telecommunications engineering",
"Applied and interdisciplinary physics",
"Computer engineering",
"Signal processing",
"Mechanics",
"Biophysics",
"Oscillation",
"Statistical mechanics"
] |
965,522 | https://en.wikipedia.org/wiki/PHIGS | PHIGS (Programmer's Hierarchical Interactive Graphics System) is an application programming interface (API) standard for rendering 3D computer graphics, considered to be the 3D graphics standard for the 1980s through the early 1990s. Subsequently, a combination of features and power led to the rise of OpenGL, which became the most popular professional 3D API of the mid to late 1990s.
Large vendors typically offered versions of PHIGS for their platforms, including DEC PHIGS, IBM's graPHIGS and Sun's SunPHIGS. It could also be used within the X Window System, supported via PEX. PEX consisted of an extension to X, adding commands that would be forwarded from the X server to the PEX system for rendering. Workstations were typically placed in windows, but could also take over the whole screen, or be directed to various printer-output devices.
PHIGS was designed in the 1980s, inheriting many of its ideas from the Graphical Kernel System (GKS) of the late 1970s, and became a standard by 1988: ANSI (ANSI X3.144-1988), FIPS (FIPS 153) and then ISO (ISO/IEC 9592 and ISO/IEC 9593). Due to its early gestation, the standard supports only the most basic 3D graphics, including basic geometry and meshes, and only the basic Gouraud, "Dot", and Phong shading for rendering scenes. Although PHIGS ultimately expanded to contain advanced functions (including the more accurate Phong lighting model and Data Mapping), other features considered standard by the mid-1990s were not supported (notably texture mapping), nor were many machines of the era physically capable of rendering such scenes in real time.
Technical details
The word "hierarchical" in the name refers to a notable feature of PHIGS: unlike most graphics systems, PHIGS included a scene graph system as a part of the basic standard. Models were built up in a Centralized Structure Store (CSS), a database containing a "world" including both the drawing primitives and their attributes (color, line style, etc.). CSSes could be shared among a number of virtual devices, known under PHIGS as workstations, each of which could contain any number of views.
Displaying graphics on the screen in PHIGS was a three-step process; first the model would be built into a CSS, then a workstation would be created and opened, and finally the model would be connected to the workstation. At that point the workstation would immediately render the model, and any future changes made to the model would instantly be reflected in all applicable workstation views.
PHIGS originally lacked the capability to render illuminated scenes, and was superseded by PHIGS+. PHIGS+ worked in essentially the same manner, but added methods for lighting and filling surfaces within a 3D scene. PHIGS+ also introduced more advanced graphics primitives, such as Non-uniform rational B-spline (NURBS) surfaces. An ad hoc ANSI committee was formed around these proposed extensions to PHIGS, renaming them with the more descriptive (and optimistically extensible) "PHIGS PLUS" -- "PLUS" being a slightly tongue-in-cheek acronym for "Plus Lumière Und Surfaces" (the two major areas of advancement over the base PHIGS standard).
The rise of OpenGL and the decline of PHIGS
OpenGL, unlike PHIGS, was an immediate-mode rendering system with no "state"; once an object is sent to a view to be rendered it essentially disappears. Changes to the model had to be re-sent into the system and re-rendered, a dramatically different programming mindset. For simple projects, PHIGS was considerably easier to use and work with.
However, OpenGL's "low-level" API allowed the programmer to make dramatic improvements in rendering performance by first examining the data on the CPU side before trying to send it over the bus to the graphics engine. For instance, the programmer could "cull" objects by examining which ones were actually visible in the scene, and send only those objects that would actually end up on the screen. In PHIGS the model data was kept private inside the CSS, which made it much more difficult for the application to tune performance in this way, but enabled such tuning to happen "for free" within the PHIGS implementation.
Given the low performance systems of the era and the need for high-performance rendering, OpenGL was generally considered to be much more "powerful" for 3D programming. PHIGS fell into disuse. Version 6.0 of the PEX protocol was designed to support other 3D programming models as well, but did not regain popularity. PEX was mostly removed from XFree86 4.2.x (2002) and finally removed from the X Window System altogether in X11R6.7.0 (April 2004).
Standards
ISO
ISO/IEC 9592 Information technology – Computer graphics and image processing – Programmer's Hierarchical Interactive Graphics System (PHIGS)
ISO/IEC 9592-1:1997 Part 1: Functional description
ISO/IEC 9592-2:1997 Part 2: Archive file format
ISO/IEC 9592-3:1997 Part 3: Specification for clear-text encoding of archive file
ISO/IEC 9593 Information technology – Computer graphics – Programmer's Hierarchical Interactive Graphics System (PHIGS) language bindings
ISO/IEC 9593-1:1990 Part 1: FORTRAN
ISO/IEC 9593-3:1990 Part 3: ADA
ISO/IEC 9593-4:1991 Part 4: C
See also
OpenGL
Vulkan
DirectX
Notes
References
comp.windows.x.pex FAQ (28 March 1994)
An Introduction to PHIGS (actually PHIGS+)
External links
Open Source Implementation of PHIGS using OpenGL
3D scenegraph APIs
American National Standards Institute standards
Graphics libraries
Graphics standards
ISO standards
X-based libraries | PHIGS | [
"Technology"
] | 1,218 | [
"American National Standards Institute standards",
"Computer standards",
"Graphics standards"
] |
965,569 | https://en.wikipedia.org/wiki/Jason-1 | Jason-1 was a satellite altimeter oceanography mission. It sought to monitor global ocean circulation, study the ties between the ocean and the atmosphere, improve global climate forecasts and predictions, and monitor events such as El Niño and ocean eddies. Jason-1 was launched in 2001 and was followed by OSTM/Jason-2 in 2008 and Jason-3 in 2016, forming the Jason satellite series. Jason-1 was launched alongside the TIMED spacecraft.
Naming
The lineage of the name begins with the JASO1 meeting (JASO=Journées Altimétriques Satellitaires pour l'Océanographie) in Toulouse, France to study the problems of assimilating altimeter data in models. Jason as an acronym also stands for "Joint Altimetry Satellite Oceanography Network". Additionally, it is used to reference the mythical quest for knowledge of Jason and the Argonauts.
History
Jason-1 is the successor to the TOPEX/Poseidon mission, which measured ocean surface topography from 1992 through 2005. Like its predecessor, Jason-1 is a joint project between the NASA (United States) and CNES (France) space agencies. Jason-1's successor, the Ocean Surface Topography Mission on the Jason-2 satellite, was launched in June 2008. These satellites provide a unique global view of the oceans that is impossible to acquire using traditional ship-based sampling.
Jason-1 was built by Thales Alenia Space using a Proteus platform, under a contract from CNES, as well as the main Jason-1 instrument, the Poseidon-2 altimeter (successor to the Poseidon altimeter on-board TOPEX/Poseidon).
Jason-1 was designed to measure climate change through very precise millimeter-per-year measurements of global sea level changes. As did TOPEX/Poseidon, Jason-1 uses an altimeter to measure the hills and valleys of the ocean's surface. These measurements of sea surface topography allow scientists to calculate the speed and direction of ocean currents and monitor global ocean circulation. The global ocean is Earth's primary storehouse of solar energy. Jason-1's measurements of sea surface height reveal where this heat is stored, how it moves around Earth by ocean currents, and how these processes affect weather and climate.
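In essence, a radar altimeter measures the two-way travel time of a microwave pulse to the sea surface; subtracting the resulting range from the independently determined orbital altitude gives the sea surface height. The sketch below uses illustrative numbers only (the echo delay is invented, and real processing applies ionospheric, tropospheric, sea-state and tide corrections).

```python
# Illustrative sketch of the satellite altimetry principle:
# sea surface height = orbital altitude - radar-measured range.
C = 299_792_458.0                    # speed of light, m/s

def range_from_travel_time(two_way_time_s: float) -> float:
    return C * two_way_time_s / 2.0

orbit_altitude_m = 1_336_000.0       # roughly the Jason-class orbit altitude
two_way_time_s = 0.0089127           # example echo delay (invented for illustration)

ssh_m = orbit_altitude_m - range_from_travel_time(two_way_time_s)
print(f"sea surface height relative to the reference ellipsoid ~ {ssh_m:.1f} m")
```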
Jason-1 was launched on 7 December 2001 from Vandenberg Air Force Base, in California, aboard a Delta II Launch vehicle. During the first months Jason-1 shared an almost identical orbit to TOPEX/Poseidon, which allowed for cross calibration. At the end of this period, the older satellite was moved to a new orbit midway between each Jason ground track. Jason had a repeat cycle of 10 days.
On 16 March 2002, Jason-1 experienced a sudden attitude upset, accompanied by temporary fluctuations in the onboard electrical systems. Soon after this incident, two new small pieces of space debris were observed in orbits slightly lower than Jason-1's, and spectroscopic analysis eventually proved them to have originated from Jason-1. In 2011, it was determined that the pieces of debris had most likely been ejected from Jason-1 by an unidentified, small "high-speed particle" hitting one of the spacecraft's solar panels.
Orbit maneuvers in 2009 put the Jason-1 satellite on the opposite side of Earth from the OSTM/Jason-2 satellite, which is operated by the United States and French weather agencies. At that time, Jason-1 flew over the same region of the ocean that OSTM/Jason-2 flew over five days earlier. Its ground tracks fell midway between those of OSTM/Jason-2, which are about apart at the equator.
This interleaved tandem mission provided twice the number of measurements of the ocean's surface, bringing smaller features such as ocean eddies into view. The tandem mission also helped pave the way for a future ocean altimeter mission that would collect much more detailed data with its single instrument than the two Jason satellites now do together.
In early 2012, having helped cross-calibrate the OSTM/Jason-2 replacement mission, Jason-1 was maneuvered into its graveyard orbit and all remaining fuel was vented. The mission was still able to return science data, measuring Earth's gravity field over the ocean. On 21 June 2013, contact with Jason-1 was lost; multiple attempts to re-establish communication failed. It was determined that the last remaining transmitter on board the spacecraft had failed. Operators sent commands to the satellite to turn off remaining functioning components on 1 July 2013, rendering it decommissioned. It is estimated that the spacecraft will remain on orbit for at least 1,000 years.
The program is named after the Greek mythological hero Jason.
Satellite instruments
Jason-1 carried five instruments:
Poseidon 2 – Nadir-pointing radar altimeter using C band and Ku band, for measuring height above the sea surface.
Jason Microwave Radiometer (JMR) – measures water vapor along altimeter path to correct for pulse delay
DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) for orbit determination to within 10 cm or less and ionospheric correction data for Poseidon 2.
BlackJack Global Positioning System receiver provides precise orbit ephemeris data
Laser retroreflector array works with ground stations to track the satellite and calibrate and verify altimeter measurements.
The Jason-1 satellite, its altimeter instrument and a position-tracking antenna were built in France. The radiometer, Global Positioning System receiver and laser retroreflector array were built in the United States.
Use of information
TOPEX/Poseidon and Jason-1 have led to major advances in the science of physical oceanography and in climate studies. Their 15-year data record of ocean surface topography has provided the first opportunity to observe and understand the global change of ocean circulation and sea level. The results have improved the understanding of the role of the ocean in climate change and improved weather and climate predictions. Data from these missions are used to improve ocean models, forecast hurricane intensity, and identify and track large ocean/atmosphere phenomena such as El Niño and La Niña. The data are also used every day in applications as diverse as routing ships, improving the safety and efficiency of offshore industry operations, managing fisheries, and tracking marine mammals.
TOPEX/Poseidon and Jason-1 have made major contributions to the understanding of:
Ocean variability
The missions revealed the surprising variability of the ocean, how much it changes from season to season, year to year, decade to decade and on even longer time scales. They ended the traditional notion of a quasi-steady, large-scale pattern of global ocean circulation by proving that the ocean is changing rapidly on all scales, from huge features such as El Niño and La Niña, which can cover the entire equatorial Pacific, to tiny eddies swirling off the large Gulf Stream in the Atlantic.
Sea level change
Measurements by Jason-1 indicate that mean sea level has been rising at an average rate of 2.28 mm (0.09 inch) per year since 2001. This is somewhat less than the rate measured by the earlier TOPEX/Poseidon mission, but over four times the rate measured by the later Envisat mission. Mean sea level measurements from Jason-1 are continuously graphed at the Centre National d'Études Spatiales web site, on the Aviso page. A composite sea level graph, using data from several satellites, is also available on that site.
The data record from these altimetry missions has given scientists important insights into how global sea level is affected by natural climate variability, as well as by human activities.
Planetary Waves
TOPEX/Poseidon and Jason-1 made clear the importance of planetary-scale waves, such as Rossby and Kelvin waves. No one had realized how widespread these waves are. Thousands of kilometers wide, these waves are driven by wind under the influence of Earth's rotation and are important mechanisms for transmitting climate signals across the large ocean basins. At high latitudes, they travel twice as fast as scientists believed previously, showing the ocean responds much more quickly to climate changes than was known before these missions.
Ocean tides
The precise measurements from TOPEX/Poseidon and Jason-1 have brought knowledge of ocean tides to an unprecedented level. The change of water level due to tidal motion in the deep ocean is known everywhere on the globe to within 2.5 centimeters (1 inch). This new knowledge has revised notions about how tides dissipate. Instead of losing all their energy over shallow seas near the coasts, as previously believed, about one third of tidal energy is actually lost to the deep ocean. There, the energy is consumed by mixing water of different properties, a fundamental mechanism in the physics governing the general circulation of the ocean.
Ocean models
TOPEX/Poseidon and Jason-1 observations provided the first global data for improving the performance of the numerical ocean models that are a key component of climate prediction models. TOPEX/Poseidon and Jason-1 data are available at the University of Colorado Center for Astrodynamics Research, NASA's Physical Oceanography Distributed Active Archive Center, and the French data archive center AVISO.
Benefits to society
Altimetry data have a wide variety of uses from basic scientific research on climate to ship routing. Applications include:
Climate Research: altimetry data are incorporated into computer models to understand and predict changes in the distribution of heat in the ocean, a key element of climate.
El Niño and La Niña Forecasting: understanding the pattern and effects of climate cycles such as El Niño helps predict and mitigate the disastrous effects of floods and drought.
Hurricane Forecasting: altimeter data and satellite ocean wind data are incorporated into atmospheric models for hurricane season forecasting and individual storm severity.
Ship Routing: maps of ocean currents, eddies, and vector winds are used in commercial shipping and recreational yachting to optimize routes.
Offshore Industries: cable-laying vessels and offshore oil operations require accurate knowledge of ocean circulation patterns to minimize impacts from strong currents.
Marine Mammal Research: sperm whales, fur seals, and other marine mammals can be tracked, and therefore studied, around ocean eddies where nutrients and plankton are abundant.
Fisheries Management: satellite data identify ocean eddies which bring an increase in organisms that comprise the marine food web, attracting fish and fishermen.
Coral Reef Research: remotely sensed data are used to monitor and assess coral reef ecosystems, which are sensitive to changes in ocean temperature.
Marine Debris Tracking: the amount of floating and partially submerged material, including nets, timber and ship debris, is increasing with human population. Altimetry can help locate these hazardous materials.
See also
Argo - a project to measure the temperature and salinity of the upper 2 km of the water column
Seasat - an early radar altimeter satellite
TOPEX/Poseidon - the immediate predecessor to Jason-1
Ocean Surface Topography Mission/Jason-2 – the immediate successor to Jason-1
2004 Indian Ocean earthquake - Energy of the earthquake
French space program
References
External links
Jason 1 and 2 site at CNES (in French)
Jason 1 and 2 site at CNES (in English)
TOPEX/Jason site at NASA
DEOS: the Radar Altimeter Database System (RADS)
NASA Jason-1 mission page
Earth observation satellites of the United States
Earth observation satellites of France
2001 in France
Spacecraft launched in 2001
Spacecraft launched by Delta II rockets
Physical oceanography
Earth satellite radar altimeters
NASA satellites orbiting Earth
Jason satellite series
CNES | Jason-1 | [
"Physics"
] | 2,443 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
965,606 | https://en.wikipedia.org/wiki/Standpipe%20%28firefighting%29 | A standpipe or riser is a type of rigid water piping which is built into multi-story buildings in a vertical position, or into bridges in a horizontal position, to which fire hoses can be connected, allowing manual application of water to the fire. Within the context of a building or bridge, a standpipe serves the same purpose as a fire hydrant.
Dry standpipe
When standpipes are fixed into buildings, the pipe is in place permanently with an intake usually located near a road or driveway, so that a fire engine can supply water to the system. The standpipe extends into the building to supply fire fighting water to the interior of the structure via hose outlets, often located between each pair of floors in stairwells in high rise buildings. Dry standpipes are not filled with water until needed in fire fighting. Fire fighters often bring hoses in with them and attach them to standpipe outlets located along the pipe throughout the structure. This type of standpipe may also be installed horizontally on bridges.
Wet standpipe
A "wet" standpipe is filled with water and is pressurized at all times. In contrast to dry standpipes, which can be used only by firefighters, wet standpipes can be used by building occupants. Wet standpipes generally already come with hoses so that building occupants may fight fires quickly. This type of standpipe may also be installed horizontally on bridges.
Advantages
Laying a firehose up a stairwell takes time, and this time is saved by having fixed hose outlets already in place. There is also a tendency for heavy wet hoses to slide downward when placed on an incline (such as the incline seen in a stairwell), whereas standpipes do not move. The use of standpipes keeps stairwells clear and is safer for exiting occupants.
Standpipes go in a direct up and down direction rather than looping around the stairwell, greatly reducing the length and thus the loss of water pressure due to friction loss. Additionally, standpipes are rigid and do not kink, which can occur when a firehose is improperly laid on a stairwell.
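The pressure argument can be put in rough numbers with the Hazen–Williams friction-loss formula commonly used in fire-protection hydraulics (a generic engineering estimate, not a figure taken from this article):

    h_f = 10.67 · L · Q^1.852 / (C^1.852 · d^4.87)    (SI units)

where h_f is the head lost to friction, L the length of the run, Q the flow rate, C the roughness coefficient and d the internal diameter. Because the loss scales linearly with L, a straight vertical riser loses proportionally less pressure than a longer hose lay looped around the same stairwell, all else being equal.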
Standpipe systems also provide a level of redundancy, should the main water distribution system within a building fail or be otherwise compromised by a fire or explosion.
Disadvantages
Standpipes are not fail-safe systems and there have been many instances where fire operations have been compromised by standpipe systems which were damaged or otherwise not working properly. Firefighters must take precautions to flush the standpipe before use to clear out debris and ensure that water is available.
See also
Fire Equipment Manufacturers' Association
Fire sprinkler
References
Essentials of Fire Fighting, Fourth Edition, copyright 1998 by the Board of Regents, Oklahoma State University
Firefighting equipment
Fire suppression
Piping | Standpipe (firefighting) | [
"Chemistry",
"Engineering"
] | 561 | [
"Piping",
"Chemical engineering",
"Mechanical engineering",
"Building engineering"
] |
965,610 | https://en.wikipedia.org/wiki/Tim%20Russ | Timothy Darrell Russ (born June 22, 1956) is an American actor, musician, screenwriter, director and amateur astronomer. He is best known for his roles as Lieutenant Commander Tuvok on Star Trek: Voyager, Robert Johnson in Crossroads (1986), Casey in East of Hope Street (1998), Frank on Samantha Who?, Principal Franklin on the Nickelodeon sitcom iCarly, and D. C. Montana on The Highwaymen (1987–1988). He appeared in The Rookie: Feds (2022) and reprised his role as Captain Tuvok on Season 3 of Star Trek: Picard.
Early life, family and education
Russ was born in Washington, D.C., on June 22, 1956, to a government employee mother and a U.S. Air Force officer father. He spent part of his childhood in Turkey. He attended his senior year of high school at Rome Free Academy, from which he graduated in 1974. He graduated from St. Edward's University with a degree in theater arts. He additionally attended graduate school at Illinois State University where he was inducted into its Hall of Fame.
Career
Acting
In 1985, Russ appeared in The Twilight Zone episode "Kentucky Rye" as Officer #2. He made a brief appearance in the comedy film Spaceballs as a trooper who "combs" the desert with a giant comb. Russ had a prominent role in the Charles Bronson film Death Wish 4.
Russ has been involved in the Star Trek franchise as a voice and film actor, writer, director, and producer. He played several minor roles before landing the role of the main character Tuvok in Star Trek: Voyager. Russ screen-tested in 1987 for the role of Geordi La Forge on Star Trek: The Next Generation before being cast as Tuvok. Russ went into Voyager as a dedicated Trekkie with an extensive knowledge of Vulcan lore, and has played the following roles in the Star Trek universe:
Devor, a mercenary aboard the Enterprise-D disguised as a service engineer in The Next Generation episode "Starship Mine" (1993)
T'Kar, a Klingon in the Deep Space Nine episode "Invasive Procedures" (1993)
A human tactical Lieutenant on the USS Enterprise-B in the film Star Trek Generations (1994).
Tuvok's Mirror Universe counterpart in the Deep Space Nine episode "Through the Looking Glass" (1995).
A changeling impersonating Tuvok in Star Trek: Picard season 3.
In 1995, Russ co-wrote the story for the Malibu Comics Star Trek: Deep Space Nine #29 and 30, with Mark Paniccia. Russ performed voice acting roles as Tuvok for the video games Star Trek: Voyager – Elite Force and Star Trek: Elite Force II. Russ is the director and one of the stars of the fan series Star Trek: Of Gods and Men, the first third of which was released in December 2007, with the remaining two-thirds released in 2008.
Russ's character's name D. C. Montana in The Highwayman was a reference to Trek writer D. C. Fontana.
In 1990, he appeared in an episode of Freddy's Nightmares.
Russ directed and co-starred in Star Trek: Renegades, and in both 2013 and 2014 reprised his role as the voice of Tuvok in the massively multiplayer online game Star Trek Online.
Later work
Russ appeared as Frank, a sarcastic doorman in the sitcom Samantha Who? from 2007 to 2009, and appeared for six seasons as Principal Ted Franklin in Nickelodeon's show iCarly. He also portrayed a doctor on an episode of Hannah Montana, "I Am Hannah, Hear Me Croak."
Russ won an Emmy Award in 2014 for public service ads he did for the FBI's Los Angeles Field Office on intellectual property theft and cyberbullying.
He played Captain Kells in the 2015 Bethesda Game Studios video game Fallout 4.
Music and astronomy
Russ has been a lifelong musician and a singer. In addition, Russ has been an avid amateur astronomer most of his adult life, and is a member of the Los Angeles Astronomical Society. In 2021 he was among a small group of citizen astronomers who assisted in observations of the asteroid 617 Patroclus in preparation for NASA's Lucy probe. In February 2022, he stated that he owned a 10-inch Dobsonian telescope, an 8" Schmidt-Cassegrain telescope, and a Unistellar eVscope.
Filmography
References
External links
1956 births
Living people
African-American film directors
African-American male singers
African-American male writers
African-American screenwriters
African-American television directors
American expatriates in Turkey
American male film actors
American male screenwriters
American male singers
American male television actors
American male video game actors
American male voice actors
American television directors
Film directors from Washington, D.C.
Illinois State University alumni
Male actors from Washington, D.C.
Screenwriters from Washington, D.C.
Singers from Washington, D.C.
St. Edward's University alumni
20th-century African-American male actors
20th-century American male actors
21st-century African-American male actors
21st-century American male actors
20th-century American screenwriters
21st-century American screenwriters
20th-century American singers
21st-century American singers
Amateur astronomers | Tim Russ | [
"Astronomy"
] | 1,078 | [
"Astronomers",
"Amateur astronomers"
] |
965,646 | https://en.wikipedia.org/wiki/Module%20file | Module file (MOD music, tracker music) is a family of music file formats originating from the MOD file format on Amiga systems used in the late 1980s. Those who produce these files (using the software called music trackers) and listen to them form the worldwide MOD scene, a part of the demoscene subculture.
The mass interchange of "MOD music" or "tracker music" (music stored in module files created with trackers) evolved from early FIDO networks. Many websites host large numbers of these files, the most comprehensive of them being the Mod Archive.
Nowadays, most module files, including ones in compressed form, are supported by most popular media players such as VLC, Foobar2000, Exaile and many others (mainly due to inclusion of common playback libraries such as libmodplug for gstreamer).
Structure
Module files store digitally recorded samples and several "patterns" or "pages" of music data in a form similar to that of a spreadsheet. These patterns contain note numbers, instrument numbers, and controller messages. The number of notes that can be played simultaneously depends on how many "tracks" there are per pattern. And the song is built of a pattern list, that tells in what order these patterns shall be played in the song.
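As a rough sketch only (not the byte layout of any particular tracker format), the structure described above — samples, patterns laid out as rows of tracks, and an order list — might be modelled like this:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Note:
        pitch: Optional[int]        # note number; None means no new note on this row
        instrument: Optional[int]   # which sample/instrument to trigger
        effect: Optional[int]       # controller/effect command
        effect_param: int = 0       # parameter byte for the effect

    @dataclass
    class Pattern:
        rows: List[List[Note]] = field(default_factory=list)  # rows x tracks grid, like a spreadsheet page

    @dataclass
    class Module:
        samples: List[bytes] = field(default_factory=list)    # raw PCM sample data
        patterns: List[Pattern] = field(default_factory=list)
        order: List[int] = field(default_factory=list)        # pattern play list defining the song
        channels: int = 4                                      # simultaneous tracks per pattern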
A disadvantage of module files is that there is no real standard specification in how the modules should be played back properly, which may result in modules sounding different in different players, sometimes quite significantly so. This is mostly due to effects that can be applied to the samples in the module file and how the authors of different players choose to implement them. However, tracker music has the advantage of requiring very little CPU overhead for playback, and is executed in real-time.
Popular formats
Each module file format builds on concepts introduced in its predecessors.
The MOD format (.MOD)
The MOD format was the first file format for tracked music. A very basic version of this format (with only very few pattern commands and short samples supported) was introduced by Karsten Obarski’s Ultimate Soundtracker in 1987 for the Amiga. It was designed to use 4 channels and fifteen samples. Ultimate SoundTracker was soon superseded by NoiseTracker and Protracker, which allowed for more tracker commands (effects) and instruments. Later, variants of the MOD format that appeared on the Personal Computer extended the number of channels, added panning commands (the Amiga’s four hardware channels had a pre-defined stereo setup) and expanded the Amiga’s frequency limit, allowing for more octaves of notes to be supported.
Arguably one of the most widespread tracker formats (also due to its use in many computer games and demos), it is also one of the simplest to use, but also only provides few pattern commands to use.
The Oktalyzer format (.OKT)
This was an early effort to bring eight-channel sound to the Amiga. Later replayers have improved on the sound quality attainable from these modules by more demanding mixing technologies.
The MultiTracker format (.MTM)
Produced by American Demoscene group Renaissance, MultiTracker brought up to 32-channel sound to the PC tracker community. Songs that took full advantage of the 32 simultaneous channels were extremely taxing to typical computers of the era.
The MED/OctaMED format (.MED)
This format is very similar to sound/pro/noisetracker, but the way the data is stored is different. MED was not a direct clone of SoundTracker, and had different features and file formats. OctaMED was an eight-channel version of MED, which eventually evolved into OctaMED Soundstudio (which offers 128-channel sound, optional synth sounds, MIDI support and many other high-end features).
The AHX format (.AHX)
This format is a synth-tracker. There are no samples in the module file, only descriptions of how to synthesize the required sound. This results in very small files (AHX modules are typically 1–4 kB in size) and a very characteristic sound. AHX is designed for chiptune-style music. The AHX tracker requires Kickstart 2.0 and 2 MB of RAM.
The ScreamTracker 3 format (.S3M)
The Scream Tracker 3 S3M format added sample tuning (defining the exact frequency of the middle C for samples), increased the number of playback channels, made use of an extra column specifically for volume control (which was extended by other trackers to handle panning commands as well), and compressed pattern data for smaller file sizes. It is also one of the few widespread formats that support both sample playback and realtime synthesis (through the OPL2 chip) at the same time.
The FastTracker 2 format (.XM)
With the XM format, FastTracker 2 introduced the concept of "instruments", which applied volume and panning envelopes to samples. It also added the ability to map several samples to the same instrument for multi-sampled instruments or drum sets. XM uses instrument-based panning – instrument numbers in patterns always reset the channel’s panning to the current sample's initial panning. It uses MOD effect command letters, plus a few of its own for more sound control. The composer can define initial tempos and speeds; provide envelopes to samples by assigning them to instruments; set sample looping and apply automatic sample vibrato oscillation.
The Impulse Tracker format (.IT)
Impulse Tracker introduced the IT format, which, in comparison to the XM format, allows instruments to specify the transposition of assigned samples depending on the note being played, to apply resonant filters to samples, and to define “New Note Actions” (NNAs) that release playing notes on a pattern channel while a new note is already playing, which helps to keep the number of pattern channels to a minimum while still allowing high polyphony. Like S3M files (and contrary to XM files), panning is channel-based, meaning that channels have an initial pan position which can be overridden by panning commands or instruments’ and samples’ default panning settings.
Scene
The process of composing module files, known as tracking, is a skillful activity that involves a much closer contact with musical sound than conventional composition, as every aspect of each sonic event is coded, from pitch and duration to exact volume, panning, and laying in numerous effects such as echo, tremolo and fades. Once the module file is finished, it is released to the tracker community. The composer uploads the new composition to one or more of several sites where module files are archived, making it available to their audience, who will download the file on their own computers. By encoding textual information within each module file, composers maintain contact with their audiences and with one another by including their email addresses, greetings to fans and other composers, and virtual signatures.
Although trackers can be considered to have some technical limitations, they do not prevent a creative individual from producing music that is indistinguishable from professionally created music. The demosceners were focused on pushing the limits of technology. Many tracker musicians gained international prominence among MOD software users, and some of them went on to work for high-profile video game studios or began to appear on large record labels. Notable artists include Andrew Sega, Purple Motion, Darude, Alexander Brandon, Peter Hajba, Axwell, Venetian Snares, Jesper Kyd, TDK, Thomas J. Bergersen, Markus Kaarlonen, Michiel van den Bos and Dan Gardopée. It is also widely known that many of Aphrodite's early releases were made on two synchronized Amigas running OctaMED, and that James Holden made the majority of his early material in Jeskola Buzz. Deadmau5 and Erez Eisen of Infected Mushroom both used Impulse Tracker early in their careers.
Music disk
Music disk, or musicdisk, is a term used by the demoscene to describe a collection of songs made on a computer. They are essentially the computer equivalent of an album. A music disk is typically packaged in the form of a program with a custom user interface, so the listener does not need other software to play the songs. The "disk" part of the term comes from the fact that music disks were once made to fit on a single floppy disk, so they could be easily distributed at demo parties. On modern platforms, music disks are usually downloaded to a hard disk drive.
Amiga music disks usually consist of MOD files, while PC music disks often contain multichannel formats such as XM or IT. Music disks are also common on the Commodore 64 and Atari ST, where they use their own native formats.
Related terms include music pack, which can refer to a demoscene music collection that does not include its own player, and chipdisk, a music disk containing only chiptunes, which have become popular on the PC given the large size of MP3 music disks.
Software module file players and converters
Players
XMPlay (Windows), from Un4seen Developments, which also created the MO3 format
OZMod (Java, cross-platform)
Winamp (Windows)
AIMP
BZR Player (Windows)
OpenCubicPlayer (Linux/BSD port is actively maintained)
XMP (Linux, Android)
foobar2000 (Windows) (with foo_dumb or foo_openmpt plugin)
Mod4Win (Windows), one of the first MOD players for Windows
K-Multimedia Player (Windows)
Audacious (Linux, Windows)
XMMS and XMMS2 (Linux)
Music Player Daemon (Linux)
DeaDBeeF (Linux, Windows, Android)
MikMod (Linux, macOS, Windows, DOS)
Modo Computer Music Player (Android)
DeliPlayer (Windows)
Amigaamp (Amiga)
JavaMod (Linux, macOS, Windows)
VLC
Converters and trackers
Cog (macOS)
Audacious (Linux)
OpenMPT (Windows)
SunVox (Windows, macOS, Linux, Android, iOS)
MilkyTracker (Windows, macOS, Linux, Android)
Schism Tracker (Windows, macOS, Linux)
Protracker (Amiga, Windows, macOS, Linux)
OctaMED (Amiga)
Renoise (Windows, macOS, Linux)
Unix Amiga Delitracker Emulator (Linux)
HoustonTracker (TI-82/83/84)
Radium (Windows, macOS, Linux)
Libraries
libmikmod - maintained in MikMod project
libmodplug - maintained in ModPlug XMMS Plugin project
libopenmpt - maintained in OpenMPT project
libBASS - developed by Un4seen Developments and used in XMPlay
libxmp
uFMOD
See also
Tracker
MOD (file format)
:Category:Tracker musicians
Demoscene
TraxWeekly
References
Further reading
External links
The Mod Archive
Amiga Music Preservation
The Tracker's Handbook
Demoscene
Chiptune
Video game culture
Video game terminology
Electronica
Digital audio
Articles containing video clips
Video game music file formats | Module file | [
"Technology"
] | 2,295 | [
"Computing terminology",
"Video game terminology"
] |
965,651 | https://en.wikipedia.org/wiki/Oxprenolol | Oxprenolol (brand names Trasacor, Trasicor, Coretal, Laracor, Slow-Pren, Captol, Corbeton, Slow-Trasicor, Tevacor, Trasitensin, Trasidex) is a non-selective beta blocker with some intrinsic sympathomimetic activity. It is used for the treatment of angina pectoris, abnormal heart rhythms and high blood pressure.
Oxprenolol is a lipophilic beta blocker which passes the blood–brain barrier more easily than water-soluble beta blockers. As such, it is associated with a higher incidence of CNS-related side effects than beta blockers with more hydrophilic molecules such as atenolol, sotalol and nadolol.
Oxprenolol is a potent beta blocker and should not be administered to people with asthma, because blockade of β2-adrenergic receptors in the airways can provoke severe, potentially fatal bronchospasm.
Pharmacology
Pharmacodynamics
Oxprenolol is a beta blocker. In addition, it has been found to act as an antagonist of the serotonin 5-HT1A and 5-HT1B receptors with respective Ki values of 94.2 nM and 642 nM in rat brain tissue.
Chemistry
Stereochemistry
Oxprenolol is a chiral compound; the beta blocker is used as a racemate, i.e. a 1:1 mixture of (R)-(+)-oxprenolol and (S)-(–)-oxprenolol. Analytical methods (HPLC) for the separation and quantification of (R)-(+)-oxprenolol and (S)-(–)-oxprenolol in urine and in pharmaceutical formulations have been described in the literature.
References
5-HT1A antagonists
5-HT1B antagonists
Abandoned drugs
Allyl compounds
Beta blockers
N-isopropyl-phenoxypropanolamines
Sympathomimetic amines
Catechol ethers | Oxprenolol | [
"Chemistry"
] | 472 | [
"Drug safety",
"Abandoned drugs"
] |
965,675 | https://en.wikipedia.org/wiki/Messier%2075 | Messier 75 or M75, also known as NGC 6864, is a giant globular cluster of stars in the southern constellation Sagittarius. It was discovered by Pierre Méchain in 1780 and included in Charles Messier's catalog of comet-like objects that same year.
M75 is about 67,500 light years away from Earth and is 14,700 light years away from, and on the opposite side of, the Galactic Center. Its apparent size on the sky translates to a true radius of 67 light years. M75 is classified as class I, meaning it is one of the more densely concentrated globular clusters known. It shows a slow rotation around an axis that is inclined along a position angle of . The absolute magnitude of M75 is about −8.5, making it roughly 180,000 times as luminous as the Sun.
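The luminosity follows from the standard relation between absolute magnitude and luminosity (general formula shown here; the exact factor depends on the adopted solar absolute magnitude and bandpass):

    L / L_Sun = 10^(0.4 (M_Sun − M)) ≈ 10^(0.4 (4.8 − (−8.5))) ≈ 2 × 10^5

which is of the same order as the quoted figure.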
The cluster has a half-light radius of with a core radius of about and appears not to have undergone core collapse yet. The mass density at the core is ·pc−3. There are 38 RR Lyrae variable stars and the cluster appears to be Oosterhoff-intermediate in terms of metallicity. 62 candidate blue stragglers have been identified in the cluster field, with 60% being in the core region.
Messier 75 is part of the Gaia Sausage, the hypothesized remains of a dwarf galaxy that merged with the Milky Way. It is a halo object that takes about 0.4 billion years to travel around the galaxy on a very pronounced ellipse with an eccentricity of 0.87. The apocenter (maximum distance from the Galactic Center) is about .
Gallery
See also
List of Messier objects
References and footnotes
External links
Messier 75, Galactic Globular Clusters Database page
Messier 075
Messier 075
075
Messier 075
Gaia-Enceladus
Astronomical objects discovered in 1780
Discoveries by Pierre Méchain | Messier 75 | [
"Astronomy"
] | 402 | [
"Sagittarius (constellation)",
"Constellations"
] |
965,698 | https://en.wikipedia.org/wiki/Little%20Dumbbell%20Nebula | The Little Dumbbell Nebula, also known as Messier 76, NGC 650/651, the Barbell Nebula, or the Cork Nebula, is a planetary nebula in the northern constellation of Perseus. It was discovered by Pierre Méchain in 1780 and included in Charles Messier's catalog of comet-like objects as number 76. It was first classified as a planetary nebula in 1918 by the astronomer Heber Doust Curtis. However, others might have previously recognized it as a planetary nebula; for example, William Huggins found its spectrum indicated it was a nebula (instead of a galaxy or a star cluster); and Isaac Roberts in 1891 suggested that M76 might be similar to the Ring Nebula (M57), as seen instead from the side view.
M76 is currently classed as a type of bipolar planetary nebula (BPN), composed of a ring which we see edge-on as the central bar structure, and two lobes on either opening of the ring. The progenitor star ejected the ring when it was in the asymptotic giant branch, before it had become a planetary nebula. Soon afterward the star expelled the rest of its outer layers, creating the two lobes, and leaving a white dwarf as the remnant of the star's core. Distance to M76 is currently estimated to be 780 parsecs or 2,500 light years, making the average dimensions about 0.378 pc. (1.23 ly.) across.
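As a check, the quoted physical size is consistent with the small-angle relation between angular diameter and distance:

    θ ≈ D / d = 0.378 pc / 780 pc ≈ 4.8 × 10^−4 rad ≈ 100 arcseconds

i.e. the figures given correspond to an apparent diameter of roughly 100 arcseconds (about 1.7 arcminutes).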
The nebula as a whole shines at an apparent magnitude of +10.1, with its central white dwarf or planetary nebula nucleus (PNN) at magnitude +15.9v (16.1B). The nucleus has a surface temperature of about 88,400 K. The nebula has a radial velocity of −19.1 km/s.
The Little Dumbbell Nebula derives its common name from its resemblance to the Dumbbell Nebula (M27) in the constellation of Vulpecula. It was originally thought to consist of two separate emission nebulae so it bears the New General Catalogue numbers NGC 650 and 651.
See also
The Dumbbell (M27), Ring (M57), and Helix (NGC 7293) Nebulae (three other nebulae of the same type as M76)
List of Messier objects
List of planetary nebulae
References
External links
NightSkyInfo.com – M76, the Little Dumbbell Nebula
Little Dumbbell Nebula (M76, NGC 650 and 651)
The Little Dumbbell Nebula @ SEDS Messier pages
Messier objects
NGC objects
Perseus (constellation)
Planetary nebulae
Orion–Cygnus Arm
Astronomical objects discovered in 1780
Discoveries by Pierre Méchain | Little Dumbbell Nebula | [
"Astronomy"
] | 554 | [
"Perseus (constellation)",
"Constellations"
] |
965,734 | https://en.wikipedia.org/wiki/Special%20sensor%20microwave/imager | The Special Sensor Microwave/Imager (SSM/I) is a seven-channel, four-frequency, linearly polarized passive microwave radiometer system. It is flown on board the United States Air Force Defense Meteorological Satellite Program (DMSP) Block 5D-2 satellites. The instrument measures surface/atmospheric microwave brightness temperatures (TBs) at 19.35, 22.235, 37.0 and 85.5 GHz. The four frequencies are sampled in both horizontal and vertical polarizations, except the 22 GHz which is sampled in the vertical only.
The SSM/I has been a very successful instrument, superseding the across-track and Dicke radiometer designs of previous systems. Its combination of constant-angle rotary-scanning and total power radiometer design has become standard for passive microwave imagers, e.g. TRMM Microwave Imager, AMSR.
Its predecessor, the Scanning Multichannel Microwave Radiometer (SMMR), provided similar information. Its successor, the Special Sensor Microwave Imager / Sounder (SSMIS), is an enhanced eleven-channel, eight-frequency system.
Products
Along with its predecessor SMMR, the SSM/I contributes to an archive of global passive microwave products from late 1978 to present.
Information within the SSM/I TB measurements allows the retrieval of four important meteorological parameters over the ocean: near-surface wind speed (a scalar, not a vector), total columnar water vapor, total columnar cloud liquid water (liquid water path) and precipitation. Accurate and quantitative retrieval of these parameters from the SSM/I TBs is, however, a non-trivial task, because variations in the meteorological parameters significantly modify the TBs. As well as open-ocean retrievals, it is also possible to retrieve quantitatively reliable information on sea ice, land snow cover and over-land precipitation.
Instrument characteristics
The Block 5D-2 satellites are in circular or near-circular Sun-synchronous and near-polar orbits at altitudes of 833 km with inclinations of 98.8° and orbital periods of 102.0 minutes, each making 14.1 full orbits per day. The scan direction is from left to right when looking in the direction of spacecraft travel, with the active scene measurements lying ±51.2 degrees about the forward (F10–F15) or aft (F8) direction of travel. This results in a nominal swath width of 1394 km allowing frequent ground coverage, especially at higher latitudes. All parts of the globe at latitudes greater than 58° are covered at least twice daily except for small unmeasured circular sectors of 2.4° about the poles. Extreme polar regions (> 72° N or S) receive coverage from two or more overpasses from both the ascending and descending orbits each day.
The spin rate of the SSM/I gives a scan period of 1.9 sec, during which the DMSP spacecraft sub-satellite point travels 12.5 km. On each scan, 128 discrete, uniformly spaced radiometric samples are taken at the two 85 GHz channels and, on alternate scans, 64 discrete samples are taken at the remaining five lower-frequency channels. The spatial resolution is determined by the sampling (Nyquist) interval and by the antenna's 3 dB footprint on the Earth's surface, which varies with frequency. The radiometer boresight intersects the Earth's surface at a nominal incidence angle of 53.1 degrees, as measured from the local Earth normal.
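The 12.5 km figure quoted above is simple orbital arithmetic (rounded values): the sub-satellite point circles the Earth (circumference ≈ 40,000 km) once per 102-minute orbit, giving a ground speed of roughly 40,000 km / 6,120 s ≈ 6.5 km/s, and 6.5 km/s × 1.9 s ≈ 12.4 km of along-track travel per scan.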
Instrument history
The SMMR was flown on Seasat and NASA Nimbus 7 in 1978. Seasat operated only for a few months until the satellite suffered an electrical short that ended the mission, while Nimbus 7 unexpectedly operated for 9 years, returning data until 1987.
The SSM/I has been operating almost continuously on Block 5D-2 flights F8-F15 (not F9) since June 1987. Concerns about the radiometer's performance over the full range of space environmental conditions led to the F8 instrument being switched off in early December 1987 to avoid overheating. The 85 GHz vertical polarization channel failed to switch on in January 1988. Analysis showed inadequate thermal shielding of the sensor's radiometers due to excessive heating at perihelion. The 85 GHz horizontal polarization subsequently had a large increase in radiometric errors and was switched off in summer 1988.
The launch of the next SSM/I, on board the F10 satellite, took place on 1 December 1990, but was not fully successful. The explosion of the booster rocket left the F10 in an elliptical orbit. The incidence angle of the F10 SSM/I boresight would vary in relation to the Earth throughout each orbit and this also altered the surface area of the Earth viewed by the radiometer. The deviations in the incidence angle of up to 1.4° were quite large and would alter the responses of several geophysical algorithms if not taken into consideration. Further, related changes in the swath width from a minimum of 1226 km at perigee to 1427 km at apogee altered the amounts of radiation viewed by the F10 SSM/I radiometers. The non-circular orbit also caused slight precession of the equatorial crossing time of the F10 by 50 seconds per week.
The F12 imager had a delayed launch date (the spacecraft was out of the DMSP build sequence) due to a faulty SSM/I. The extra time and costs taken to rectify the problem did not, however, help. The SSM/I failed to ‘spin-up’ after launch, and consequently data were not available from this instrument. The SSM/Is on F11, F13, F14 and F15 have all produced excellent data.
Before the F8 was decommissioned, it aided investigations into measuring passive microwaves at higher Earth incidence angles (i.e. > 51 degrees). An increase in angle would allow a greater swath width to be utilised, giving a greater amount of coverage at the Earth's surface. The F8 Tilt Experiment (see links) was carried out between 25 June and 13 July 1993.
F17, F18, and F19 all carry SSMIS.
References
External links
F8 Tilt experiment
SSM/I daily over-ocean atmospheric retrievals
Near real-time multi-DMSP SSM/I meteorological parameter retrievals from NESDIS
USAF SSM/I users' guide
Radiometry
Satellite meteorology
Spacecraft instruments
Earth observation satellite sensors | Special sensor microwave/imager | [
"Engineering"
] | 1,328 | [
"Telecommunications engineering",
"Radiometry"
] |
965,817 | https://en.wikipedia.org/wiki/Center%20tap | In electronics, a center tap (CT) is a contact made to a point halfway along a winding of a transformer or inductor, or along the element of a resistor or a potentiometer.
Taps are sometimes used on inductors for the coupling of signals, and may not necessarily be at the half-way point, but rather, closer to one end. A common application of this is in the Hartley oscillator. Inductors with taps also permit the transformation of the amplitude of alternating current (AC) voltages for the purpose of power conversion, in which case, they are referred to as autotransformers, since there is only one winding. An example of an autotransformer is an automobile ignition coil.
Potentiometer tapping provides one or more connections along the device's element, along with the usual connections at each of the two ends of the element, and the slider connection. Potentiometer taps allow for circuit functions that would otherwise not be available with the usual construction of just the two end connections and one slider connection.
Volts center tapped
Volts center tapped (VCT) describes the voltage output of a center tapped transformer. For example, a 24 VCT transformer will measure 24 VAC across the outer two taps (winding as a whole), and 12 VAC from each outer tap to the center-tap (half winding). These two 12 VAC supplies are 180 degrees out of phase with each other, measured with respect to the tap, thus making it easy to derive positive and negative 12 volt DC power supplies from them.
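As a worked example with generic values (not tied to any particular product): each 12 VAC half-winding has a peak voltage of about √2 × 12 V ≈ 17 V, so after rectification and smoothing each half can feed a linear regulator producing a clean +12 V or −12 V rail referenced to the grounded center tap.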
Applications and history
In vacuum tube audio amplifiers, center-tapped transformers were sometimes used as the phase inverter to drive the two output tubes of a push-pull stage. The technique is nearly as old as electronic amplification and is well documented, for example, in The Radiotron Designer's Handbook, Third Edition of 1940. This technique was carried over into transistor designs also, part of the reason for which was that capacitors were large, expensive and unreliable. However, since that era, capacitors have become vastly smaller, cheaper and more reliable, whereas transformers are still relatively expensive. Furthermore, as designers acquired more experience with transistors, they stopped trying to treat them like tubes. Coupling a class A intermediate amplification stage to a class AB power stage using a transformer doesn't make sense anymore even in small systems powered from a single-voltage supply. Modern higher-end equipment is based on dual-supply designs which eliminates coupling. It is possible for an amplifier, from the input all the way to the loudspeaker, to be DC coupled without any capacitance or inductance. Nevertheless, this use is still relevant in the 21st century because tubes and tube amplifiers continue to be produced for niche markets.
In analog telecommunications systems center-tapped transformers can be used to provide a DC path around an AC coupled amplifier for signalling purposes.
Three-wire power distribution can be used, e.g. with 240 VCT to provide two 120 VAC circuits in the US and Canada.
Low-frequency mains transformers often have center taps. Historically, rectifier costs were high, so DC power supplies with a center-tapped transformer and two diodes justified the extra cost of copper windings and iron laminations, even though only half of the secondary coil is used per half-cycle. Consumer products like cassette recorders often used 18 VCT transformers to obtain 9 VDC until the 1980s. With four diodes, both halves can be used, which leads to efficient designs for symmetrical voltages with the center tap as common ground. For example, in arcade machines like Atari Asteroids (1979), a 36 VCT transformer is used in a four-diode configuration to produce ±15 VDC (after regulation), while the same power supply provides 10.3 VDC unregulated from a two-diode configuration. By the late 1970s, bridge rectifiers had become cheap enough that using them made for a better business case and simpler assembly.
In switch-mode power supplies, center-tapped transformers are often used, sometimes with single diodes or a dual diode half-bridge to optimize their dynamic electromagnetic behavior at the expense of the extra windings.
Phantom power can be supplied to a condenser microphone using center tap transformers. One method, called "direct center tap" uses two center tap transformers, one at the microphone body and one at the microphone preamp. Filtered DC voltage is connected to the microphone preamp center tap, and the microphone body center tap is grounded through the cable shield. The second method uses the same center tap transformer topology at the microphone body, but at the microphone preamp, a matched pair of resistors spanning the signal lines in series creates an "artificial center tap".
References
F. Langford Smith, The Radiotron Designer's Handbook Third Edition, (1940), The Wireless Press, Sydney, Australia, no ISBN, no Library of Congress card
Electrical circuits
Electric transformers | Center tap | [
"Engineering"
] | 1,046 | [
"Electrical engineering",
"Electronic engineering",
"Electrical circuits"
] |
965,842 | https://en.wikipedia.org/wiki/CA%20Gen | Gen is a Computer Aided Software Engineering (CASE) application development environment marketed by Broadcom Inc. Gen was previously known as CA Gen, IEF (Information Engineering Facility), Composer by IEF, Composer, COOL:Gen, Advantage:Gen and AllFusion Gen.
The toolset originally supported the information technology engineering methodology developed by Clive Finkelstein, James Martin and others in the early 1980s. Early versions supported IBM's DB2 database, 3270 'block mode' screens and generated COBOL code.
In the intervening years the toolset has been expanded to support additional development techniques such as component-based development; creation of client/server and web applications and generation of C, Java and C#. In addition, other platforms are now supported such as many variants of Unix-like Operating Systems (AIX, HP-UX, Solaris, Linux) as well as Windows.
Its range of supported database technologies have widened to include ORACLE, Microsoft SQL Server, ODBC, JDBC as well as the original DB2.
The toolset is fully integrated - objects identified during analysis carry forward into design without redefinition. All information is stored in a repository (central encyclopedia). The encyclopedia allows for large team development - controlling access so that multiple developers may not change the same object simultaneously.
Overview
It was initially produced by Texas Instruments, with input from James Martin and his consultancy firm James Martin Associates, and was based on the Information Engineering Methodology (IEM). The first version was launched in 1987.
IEF (Information Engineering Facility) became popular among large government departments and public utilities. It initially supported a CICS/COBOL/DB2 target environment. However, it now supports a wider range of relational databases and operating systems. IEF was intended to shield the developer from the complexities of building complete multi-tier cross-platform applications.
In 1995, Texas Instruments decided to change their marketing focus for the product. Part of this change included a new name - "Composer".
By 1996, IEF had become a popular tool. However, it was criticized by some IT professionals for being too restrictive, as well as for having a high per-workstation cost ($15K USD). But it is claimed that IEF reduces development time and costs by removing complexity and allowing rapid development of large scale enterprise transaction processing systems.
In 1997, Composer had another change of branding, Texas Instruments sold the Texas Instruments Software division, including the Composer rights, to Sterling Software. Sterling software changed the well known name "Information Engineering Facility" to "COOL:Gen". COOL was an acronym for "Common Object Oriented Language" - despite the fact that there was little object orientation in the product.
In 2000, Sterling Software was acquired by Computer Associates (now CA). CA has rebranded the product three times to date and the product is still used widely today. Under CA, recent releases of the tool added support for the CA-Datacom DBMS, the Linux operating system, C# code generation and ASP.NET web clients. The current version is known as CA Gen - version 8 being released in May 2010, with support for customised web services, and more of the toolset being based around the Eclipse framework.
As of 2020, CA Gen is owned and marketed by Broadcom Inc., which rebranded the product to Gen to avoid confusion with the former owner of the product.
There are a variety of "add-on" tools available for Gen, including Project Phoenix from Jumar, a collection of software tools and services focused on the modernisation and re-platforming of existing/legacy Gen applications to new environments; GuardIEn, a configuration management and developer productivity suite; QAT Wizard, an interview-style wizard that takes advantage of the Gen meta model; products for multi-platform application reporting and XML/SOAP enabling of Gen applications; and developer productivity tools such as Access Gen, APMConnect, QA Console and Upgrade Console from Response Systems.
Version 8.6 of CA Gen came to market in June 2016.
Version 8.6.3 of CA Gen was released in 2021. Following this release, Broadcom have switched to a continuous delivery model with new features to be delivered as patches.
References
External links
CA Gen official site
Computer-aided software engineering tools
Data management
CA Technologies
Fourth-generation programming languages | CA Gen | [
"Technology"
] | 880 | [
"Data management",
"Data"
] |
965,874 | https://en.wikipedia.org/wiki/Nomex | Nomex is a trademarked term for an inherently flame-resistant fabric with meta-aramid chemistry widely used for industrial applications and fire protection equipment. It was developed in the early 1960s by DuPont and first marketed in 1967.
The fabric is often combined with Kevlar to increase its resistance for breakage or tear.
Properties
Nomex and related aramid polymers are related to nylon, but have aromatic backbones, and hence are more rigid and more durable. Nomex is an example of a meta variant of the aramids (Kevlar is a para aramid). Unlike Kevlar, Nomex strands cannot align during filament polymerization and have less strength: its ultimate tensile strength is . However, it has excellent thermal, chemical, and radiation resistance for a polymer material. It can withstand temperatures of up to .
Production
Nomex is produced by condensation reaction from the monomers m-phenylenediamine and isophthaloyl chloride.
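Schematically, the polycondensation can be written (in simplified form) as:

    n ClOC–C6H4–COCl + n H2N–C6H4–NH2 → –[–CO–C6H4–CO–NH–C6H4–NH–]n– + 2n HCl

with both rings substituted in the meta (1,3) positions; the hydrogen chloride by-product is removed as the polyamide chain grows.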
It is sold in both fiber and sheet forms and is used as a fabric where resistance from heat and flame is required. Nomex sheet is actually a calendered paper and made in a similar fashion. Nomex Type 410 paper was the first Nomex paper developed and one of the higher volume grades made, mostly for electrical insulation purposes.
Wilfred Sweeny (1926–2011), the DuPont scientist responsible for discoveries leading to Nomex, earned a DuPont Lavoisier Medal in 2002 partly for this work.
Applications
Nomex Paper is used in electrical laminates such as circuit boards and transformer cores as well as fireproof honeycomb structures where it is saturated with a phenolic resin. Honeycomb structures such as these, as well as mylar-Nomex laminates, are used extensively in aircraft construction. Firefighting, military aviation, and vehicle racing industries use Nomex to create clothing and equipment that can withstand intense heat.
A Nomex hood is a common piece of racing and firefighting equipment. It is placed on the head on top of a firefighter's face mask. The hood protects the portions of the head not covered by the helmet and face mask from the intense heat of the fire.
Wildland firefighters wear Nomex shirts and trousers as part of their personal protective equipment during wildfire suppression activities.
Racing car drivers wear driving suits constructed of Nomex and or other fire retardant materials, along with Nomex gloves, long underwear, balaclavas, socks, helmet lining and shoes, to protect them in the event of a fire.
Military pilots and aircrew wear flight suits made of over 92 percent Nomex to protect them from cockpit fires (previously issued flight suits were treated in borax solution prior to the introduction). It is also worn as sailors' anti-flash gear. Troops riding in ground vehicles often wear Nomex for fire protection. Kevlar thread is often used to hold the fabric together at seams.
Military tank drivers also typically use Nomex hoods as protection against fire.
In the U.S. space program, Nomex has been used for the Thermal Micrometeoroid Garment on the Extravehicular Mobility Unit (in conjunction with Kevlar and Gore-Tex) and ACES pressure suit, both for fire and extreme environment (water immersion to near vacuum) protection, and as thermal blankets on the payload bay doors, fuselage, and upper wing surfaces of the Space Shuttle Orbiter. It has also been used for the airbags for the Mars Pathfinder and Mars Exploration Rover missions, the Galileo atmospheric probe, the Cassini-Huygens Titan probe, as an external covering on the AERCam Sprint, and is planned to be incorporated into NASA's upcoming Crew Exploration Vehicle.
Nomex has been used as an acoustic material in Troy, NY, at Rensselaer Polytechnic Institute's Experimental Media and Performing Arts Center (EMPAC) main concert hall. A ceiling canopy of Nomex reflects high and mid frequency sound, providing reverberation, while letting lower frequency sound partially pass through the canopy. According to RPI President Shirley Ann Jackson, EMPAC is the first venue in the world to use Nomex as an architectural material for acoustic reasons.
Nomex (like Kevlar) is also used in the production of loudspeaker drivers.
Honeycomb-structured Nomex paper is used as a spacer between layers of lead in the ATLAS Liquid Argon Calorimeter, and as a laminate core for hull and deck construction in custom boats such as Stiletto Catamarans like the Stiletto 27.
Nomex is used in industrial applications as a filter in exhaust filtration systems, typically a baghouse, that deal with hot gas emissions found in asphalt plants, cement plants, steel smelting facilities, and non-ferrous metal production facilities.
Nomex is used in some classical guitar tops in order to create a 'composite' soundboard. When Nomex is laminated between 2 spruce or cedar 'skins', a rigid and lightweight plate is produced, which can improve the efficiency of the soundboard. While the 'laminated' technique was created by Matthias Dammann, the use of Nomex within was first employed by luthier Gernot Wagner.
History
The deaths in fiery crashes of race car drivers Fireball Roberts at Charlotte, and Eddie Sachs and Dave MacDonald at Indianapolis in 1964, led to the use of flame-resistant fabrics such as Nomex. In early 1966 Competition Press and Autoweek reported: "During the past season, experimental driving suits were worn by Walt Hansgen, Masten Gregory, Marvin Panch and Group 44's Bob Tullius; these four representing a fairly good cross section in the sport. The goal was to get use-test information on the comfort and laundering characteristics of Nomex. The Chrysler-Plymouth team at the recent Motor Trend 500 at Riverside also wore these suits."
See also
Aramid
Gore-Tex
Kevlar
Marlan
PET film
Silica Aerogel
Thermal Micrometeoroid Garment
Twaron
Vectran
References
External links
DuPont Nomex
Dupont.com - 40th anniversary of Nomex - 2007
Comparison of single-layer Nomex suits
Flame retardant fabrics
Synthetic materials
Firefighting equipment
Synthetic fibers
Brand name materials
DuPont products | Nomex | [
"Chemistry"
] | 1,315 | [
"Synthetic fibers",
"Synthetic materials",
"Chemical synthesis"
] |
965,929 | https://en.wikipedia.org/wiki/Communication%20with%20submarines | Communication with submarines is a field within military communications that presents technical challenges and requires specialized technology. Because radio waves do not travel well through good electrical conductors like salt water, submerged submarines are cut off from radio communication with their command authorities at ordinary radio frequencies. Submarines can surface and raise an antenna above the sea level, or float a tethered buoy carrying an antenna, then use ordinary radio transmissions; however, this makes them vulnerable to detection by anti-submarine warfare forces.
Early submarines during World War II mostly travelled on the surface because of their limited underwater speed and endurance, and dived mainly to evade immediate threats or for stealthy approach to their targets. During the Cold War, however, nuclear-powered submarines were developed that could stay submerged for months.
In the event of a nuclear war, submerged ballistic missile submarines have to be ordered quickly to launch their missiles. Transmitting messages to these submarines is an active area of research. Very low frequency (VLF) radio waves can penetrate seawater just over one hundred feet (30 metres), and many navies use powerful shore VLF transmitters for submarine communications. A few nations have built transmitters which use extremely low frequency (ELF) radio waves, which can penetrate seawater to reach submarines at operating depths, but these require huge antennas. Other techniques that have been used include sonar and blue lasers.
Acoustic transmission
Sound travels far in water, and underwater loudspeakers and hydrophones can bridge considerable distances. Apparently, both the American (SOSUS) and the Russian navies have placed sonic communication equipment on the seabed of areas frequently travelled by their submarines and connected it by underwater communications cables to their land stations. If a submarine hides near such a device, it can stay in contact with its headquarters. An underwater telephone, sometimes called Gertrude, is also used to communicate with submersibles.
Very low frequency
VLF radio waves (3–30 kHz) can penetrate seawater to a few tens of metres and a submarine at shallow depth can use them to communicate. A deeper vessel can use a buoy equipped with an antenna on a long cable. The buoy rises to a few metres below the surface, and may be small enough to remain undetected by enemy sonar and radar. However these depth requirements restrict submarines to short reception periods, and antisubmarine warfare technology may be capable of detecting the sub or antenna buoy at these shallow depths.
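A rough guide to the attenuation is the electromagnetic skin depth δ = √(2 / (ω μ σ)). Taking a typical textbook conductivity for seawater of σ ≈ 4 S/m (a value assumed here, not given in this article), a 20 kHz VLF signal has δ ≈ 1.8 m, and the field weakens by a factor e for each skin depth travelled; reception at depths of a few tens of metres therefore already represents many skin depths of loss and relies on very powerful shore transmitters and sensitive receivers.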
Natural background noise increases as frequency decreases, so a lot of radiated power is required to overcome it. Worse, small antennas (relative to a wavelength) are inherently inefficient. This implies high transmitter powers and very large antennas covering square kilometres. This precludes submarines from transmitting VLF, but a relatively simple antenna (usually a long trailing wire) will suffice for reception. Hence, VLF is always one-way, from land to boat. If two-way communication is needed, the boat must ascend nearer to the surface and raise an antenna mast to communicate on higher frequencies, usually HF and above.
Because of the narrow bandwidths available, voice transmission is impossible; only slow data is supported. VLF data transmission rates are around 300 bits/sec, so data compression is essential.
Only a few countries operate VLF facilities for communicating with their submarines: Norway, France, United States, Russia, United Kingdom, Germany, Australia, Pakistan, and India.
Extremely low frequency
Electromagnetic waves in the ELF and SLF frequency ranges (3–300 Hz) can penetrate seawater to depths of hundreds of metres, allowing signals to be sent to submarines at their operating depths. Building an ELF transmitter is a formidable challenge, as it has to work at extremely long wavelengths: the U.S. Navy's Project ELF system, which was a variant of a larger system proposed under codename Project Sanguine, operated at 76 hertz, and the Soviet/Russian system (called ZEVS) at 82 hertz. The latter corresponds to a wavelength of 3,656.0 kilometres. That is more than a quarter of the Earth's diameter. The usual half-wavelength dipole antenna cannot feasibly be constructed, as it would have to be roughly 1,800 kilometres long.
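The quoted wavelength is simply the speed of light divided by the frequency:

    λ = c / f ≈ 299,792 km/s ÷ 82 Hz ≈ 3,656 km

and a half-wave dipole is half of that.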
Instead, someone who wishes to construct such a facility has to find an area with very low ground conductivity (a requirement opposite to usual radio transmitter sites), bury two huge electrodes in the ground at different sites, and then feed lines to them from a station in the middle, in the form of wires on poles. Although other separations are possible, the distance used by the ZEVS transmitter located near Murmansk is . As the ground conductivity is poor, the current between the electrodes will penetrate deep into the Earth, essentially using a large part of the globe as an antenna. The antenna length in Republic, Michigan, was approximately . The antenna is very inefficient. To drive it, a dedicated power plant seems to be required, although the power emitted as radiation is only a few watts. Its transmission can be received virtually anywhere. A station in Antarctica at 78° S 167° W detected transmission when the Soviet Navy put their ZEVS antenna into operation.
Owing to the technical difficulty of building an ELF transmitter, the U.S., China, Russia, and India are the only nations known to have constructed ELF communication facilities:
Until it was dismantled in late September 2004, the American Seafarer, later called Project ELF system (76 Hz), consisted of two antennas, located at Clam Lake, Wisconsin (since 1977), and at Republic, Michigan, in the Upper Peninsula (since 1980).
The Russian antenna (ZEVS, 82 Hz) is installed at the Kola Peninsula, near Murmansk. It was noticed by the West in the early 1990s.
The Indian Navy has an operational VLF communication facility at the INS Kattabomman naval base to communicate with its Arihant-class and Akula-class submarines. Beginning in 2012, this facility was being upgraded to also transmit ELF communications.
China, on the other hand, has recently constructed the world's largest ELF facility – roughly the size of New York City – in order to communicate with its submarine forces without requiring them to surface.
ELF transmissions
The coding used for U.S. military ELF transmissions employed a Reed–Solomon error correction code using 64 symbols, each represented by a very long pseudo-random sequence. The entire transmission was then encrypted. The advantages of such a technique are that by correlating multiple transmissions, a message could be completed even with very low signal-to-noise ratios, and because only a very few pseudo-random sequences represented actual message characters, there was a very high probability that if a message was successfully received, it was a valid message (anti-spoofing).
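The principle behind correlating very long pseudo-random sequences can be illustrated with a small simulation. The sketch below is not the actual ELF waveform or code; the sequence length, alphabet size, and noise level are arbitrary assumptions chosen only to show that a matched-sequence receiver can pick the right symbol even when the noise power greatly exceeds the signal power.

```python
# Illustrative sketch (not the real ELF waveform): matching a received signal
# against a small set of long pseudo-random sequences recovers the symbol even
# at very low signal-to-noise ratios. All parameters below are assumptions.
import numpy as np

rng = np.random.default_rng(seed=1)

NUM_SYMBOLS = 64       # the text mentions 64 symbols
CHIP_LENGTH = 100_000  # length of each pseudo-random sequence (assumed)

# One long +/-1 pseudo-random sequence per symbol.
codebook = rng.choice([-1.0, 1.0], size=(NUM_SYMBOLS, CHIP_LENGTH))

def transmit(symbol: int, noise_std: float) -> np.ndarray:
    """Send one symbol's sequence through a very noisy channel."""
    return codebook[symbol] + rng.normal(0.0, noise_std, CHIP_LENGTH)

def decode(received: np.ndarray) -> int:
    """Correlate against every candidate sequence and pick the best match."""
    scores = codebook @ received  # correlation with each codeword
    return int(np.argmax(scores))

# Signal power 1, noise power 100 -> about -20 dB SNR per sample.
sent = 42
received = transmit(sent, noise_std=10.0)
print("sent:", sent, "decoded:", decode(received))  # typically recovers 42
```

Because only 64 of the astronomically many possible sequences correspond to valid symbols, a random noise burst is very unlikely to correlate strongly with any of them, which is the anti-spoofing property mentioned above.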
The communication link is one-way. No submarine could have its own ELF transmitter on board, owing to the sheer size of such a device. Attempts to design a transmitter that could be immersed in the sea or flown on an aircraft were soon abandoned.
Owing to the limited bandwidth, information can only be transmitted very slowly, on the order of a few characters per minute (see Shannon's coding theorem). Thus the US Navy only ever used it to instruct a submarine to establish another form of communication, and it is reasonable to assume that the actual messages were mostly generic instructions or requests to set up a different, two-way form of communication with the relevant authority.
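A hedged back-of-the-envelope figure using the Shannon–Hartley theorem shows why the rate is so low. With an assumed usable bandwidth of only a few hertz and a signal arriving well below the noise floor (the numbers below are illustrative, not measured values):

\[
C = B \log_2\!\left(1 + \frac{S}{N}\right) \approx 5\ \text{Hz} \times \log_2(1 + 0.1) \approx 0.7\ \text{bit/s},
\]

which, after error-correction overhead, corresponds to only a handful of characters per minute.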
Standard radio technology
A surfaced submarine, or a submarine floating a tethered antenna buoy on the surface, can use ordinary radio communications. From the surface, submarines may use naval frequencies in the HF, VHF, and UHF bands, and transmit information via both voice and teleprinter modulation techniques. Where available, dedicated military communications satellite systems using line-of-sight frequencies are preferred for long-distance communications, as HF transmissions are more likely to betray the location of the submarine. The U.S. Navy's system is called Submarine Satellite Information Exchange Sub-System (SSIXS), a component of the Navy Ultra High Frequency Satellite Communications System (UHF SATCOM).
Combining acoustic and radio transmissions
A recent technology developed by a team at MIT combines acoustic signals and radar to enable submerged submarines to communicate with airplanes. An underwater transmitter uses an acoustic speaker pointed upward to the surface. The transmitter sends multichannel sound signals, which travel as pressure waves. When these waves hit the surface, they cause tiny vibrations. Above the water, a radar operating in the 300 GHz range continuously bounces a radio signal off the water surface. When the surface vibrates slightly due to the sound signal, the radar can detect the vibrations, completing the signal's journey from the underwater speaker to an in-air receiver. The technology is called TARF (Translational Acoustic-RF) communication, since it uses a translation between acoustic and RF signals. While promising, this technology is still in its infancy and has only been successfully tested in relatively controlled environments with small surface ripples of up to approximately 200 mm; larger waves prevented successful data communication.
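The signal chain can be sketched numerically. The toy model below uses assumed values (a 200 Hz acoustic tone, a surface displacement of a few tens of micrometres, and a 1 mm radar wavelength, i.e. roughly a 300 GHz carrier); it only illustrates how a vibration far smaller than the radar wavelength still appears as a measurable phase modulation from which the acoustic frequency can be recovered, and is not a model of the actual MIT system.

```python
# Toy model of the TARF idea: an underwater acoustic tone vibrates the water
# surface; a downward-looking radar sees that vibration as phase modulation of
# its reflected carrier; an FFT of the measured phase recovers the tone.
import numpy as np

fs = 2_000.0          # radar phase sampling rate, Hz (assumed)
duration = 1.0        # seconds of observation
t = np.arange(0, duration, 1 / fs)

f_acoustic = 200.0    # underwater acoustic tone, Hz (assumed)
amplitude_m = 20e-6   # surface displacement of ~20 micrometres (assumed)
wavelength_m = 1e-3   # radar wavelength of ~1 mm, i.e. ~300 GHz carrier

# Surface height modulated by the acoustic signal.
surface = amplitude_m * np.sin(2 * np.pi * f_acoustic * t)

# Round-trip phase shift of the radar return: 4*pi*displacement / wavelength.
phase = 4 * np.pi * surface / wavelength_m
phase_noisy = phase + np.random.normal(0, 0.05, t.size)  # measurement noise

# Recover the acoustic frequency from the phase spectrum.
spectrum = np.abs(np.fft.rfft(phase_noisy - phase_noisy.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print("recovered tone:", freqs[np.argmax(spectrum)], "Hz")  # ~200 Hz
```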
Underwater modems
In April 2017, NATO's Centre for Maritime Research and Experimentation announced the approval of JANUS, a standardised protocol to transmit digital information underwater using acoustic sound (as modems with acoustic couplers did in order to make use of analogue telephone lines). Documented in STANAG 4748, it uses 900 Hz to 60 kHz frequencies at distances of up to . It is available for use with military and civilian, NATO and non-NATO devices; it was named after the Roman god of gateways, openings, etc.
Blue lasers
In 2009, a US military report stated that "Practical laser-based systems for deep depths were unavailable because lasers operating at the right colour with enough power efficiency to be used in satellites did not exist. DARPA is striving towards a blue laser efficient enough to make submarine laser communications at depth and speed a near-term reality. A recently demonstrated laser will be matched with a special optical filter to form the core of a communications system with a signal-to-noise ratio thousands of times better than other proposed laser systems. If DARPA can demonstrate such a system under realistic conditions, it would dramatically change how submarines can communicate and operate, thereby greatly enhancing mission effectiveness, for example, in anti-submarine warfare."
See also
Extremely low frequency
Ground dipole
Super low frequency
TACAMO, radio system intended to survive nuclear attack
References
External links
Submarines
Submarines
Military radio systems
Radio frequency propagation | Communication with submarines | [
"Physics",
"Engineering"
] | 2,134 | [
"Physical phenomena",
"Telecommunications engineering",
"Spectrum (physical sciences)",
"Radio frequency propagation",
"Electromagnetic spectrum",
"Waves",
"Military communications"
] |
966,106 | https://en.wikipedia.org/wiki/Fluidics | Fluidics, or fluidic logic, is the use of a fluid to perform analog or digital operations similar to those performed with electronics.
The physical basis of fluidics is pneumatics and hydraulics, based on the theoretical foundation of fluid dynamics. The term fluidics is normally used when devices have no moving parts, so ordinary hydraulic components such as hydraulic cylinders and spool valves are not considered or referred to as fluidic devices.
A jet of fluid can be deflected by a weaker jet striking it at the side. This provides nonlinear amplification, similar to the transistor used in electronic digital logic. It is used mostly in environments where electronic digital logic would be unreliable, as in systems exposed to high levels of electromagnetic interference or ionizing radiation.
Fluidics is also regarded as one of the instruments of nanotechnology. In this domain, effects such as fluid–solid and fluid–fluid interface forces are often highly significant. Fluidics has also been used for military applications.
History
In 1920, Nikola Tesla patented a valvular conduit, or Tesla valve, that works as a fluidic diode. It was a leaky diode; that is, the reverse flow was non-zero for any applied pressure difference. The valve also had a non-linear response, as its diodicity was frequency-dependent. It could be used in fluid circuits, such as a full-wave rectifier, to convert AC to DC.
In 1957, Billy M. Horton of the Harry Diamond Laboratories (which later became a part of the Army Research Laboratory) first came up with the idea for the fluidic amplifier when he realized that he could redirect the direction of flue gases using a small bellows. He proposed a theory on stream interaction, stating that one can achieve amplification by deflecting a stream of fluid with a different stream of fluid. In 1959, Horton and his associates, Dr. R. E. Bowles and Ray Warren, constructed a family of working vortex amplifiers out of soap, linoleum, and wood. Their published result caught the attention of several major industries and created a surge of interest in applying fluidics (then called fluid amplification) to sophisticated control systems, which lasted throughout the 1960s. Horton is credited for developing the first fluid amplifier control device and launching the field of fluidics. In 1961, Horton, Warren, and Bowles were among the 27 recipients to receive the first Army Research and Development Achievement Award for developing the fluid amplifier control device.
Logic elements
Logic gates can be built that use water instead of electricity to power the gating function. These devices depend on being positioned in a particular orientation to perform correctly. An OR gate is simply two pipes being merged, and a NOT gate (inverter) consists of "A" deflecting a supply stream to produce Ā. The AND and XOR gates are sketched in the diagram. An inverter could also be implemented with the XOR gate, since A XOR 1 = Ā.
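The gate behaviour described above can be captured in a minimal Boolean model (this reflects only the logic, not the fluid dynamics or the orientation constraints):

```python
# Boolean model of the fluidic gates described above: an OR gate merges two
# streams, a NOT gate uses input A to deflect a constant supply stream, and
# AND/XOR follow from stream interaction as sketched in the article's diagram.
def fluidic_or(a: bool, b: bool) -> bool:
    return a or b      # merged pipes: output flows if either input flows

def fluidic_not(a: bool) -> bool:
    return not a       # supply stream is deflected away when A flows

def fluidic_and(a: bool, b: bool) -> bool:
    return a and b

def fluidic_xor(a: bool, b: bool) -> bool:
    return a != b

# An inverter can also be built from XOR, since A XOR 1 = NOT A.
for a in (False, True):
    assert fluidic_xor(a, True) == fluidic_not(a)
```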
Another kind of fluidic logic is bubble logic. Bubble logic gates conserve the number of bits entering and exiting the device, because bubbles are neither produced nor destroyed in the logic operation, analogous to billiard-ball computer gates.
Components
Amplifiers
In a fluidic amplifier, a fluid supply, which may be air, water, or hydraulic fluid, enters at the bottom. Pressure applied to the control ports C1 or C2 deflects the stream, so that it exits via either port O1 or O2. The stream entering the control ports may be much weaker than the stream being deflected, so the device has gain.
This basic device can be used to construct other fluidic logic elements as well as fluidic oscillators, which can be used in a way analogous to flip-flops. Simple digital logic systems can thus be built.
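A bistable version of such an amplifier, the fluidic analogue of a flip-flop, can be abstracted as a small state machine. The sketch below is a logical abstraction under the assumption of ideal wall-attachment behaviour, not a flow simulation; the port names C1/C2 and O1/O2 follow the description above, and the mapping of control ports to outputs is assumed for illustration.

```python
# Minimal state-machine sketch of a bistable fluidic amplifier used as a
# flip-flop: the supply jet keeps flowing out of one output port until a
# pulse on the opposite control port switches it over.
class FluidicFlipFlop:
    def __init__(self) -> None:
        self.output = "O1"   # assume the jet initially attaches toward O1

    def pulse(self, control_port: str) -> str:
        # A pulse on C1 pushes the jet toward O2 and vice versa (mapping
        # assumed); pulsing the same side again has no effect, which gives
        # the memory action.
        if control_port == "C1":
            self.output = "O2"
        elif control_port == "C2":
            self.output = "O1"
        return self.output

ff = FluidicFlipFlop()
print(ff.pulse("C1"))  # O2
print(ff.pulse("C1"))  # still O2 (state is retained)
print(ff.pulse("C2"))  # O1
```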
Fluidic amplifiers typically have bandwidths in the low kilohertz range, so systems built from them are quite slow compared to electronic devices.
Triodes
The fluidic triode, an amplification device that uses a fluid to convey the signal, has been invented, as have fluid diodes, a fluid oscillator and a variety of hydraulic "circuits," including one that has no electronic counterpart.
Uses
The MONIAC computer, built in 1949, was a fluid-based analogue computer used for teaching economic principles, as it could run complex simulations that digital computers of the time could not. Twelve to fourteen were built and acquired by businesses and teaching establishments.
The FLODAC Computer was built in 1964 as a proof of concept fluid-based digital computer.
Fluidic components appear in some hydraulic and pneumatic systems, including some automotive automatic transmissions. As electronic digital logic has become more accepted in industrial control, the role of fluidics in industrial control has declined.
In the consumer market, fluidically controlled products are increasing in both popularity and presence, installed in items ranging from toy spray guns through shower heads and hot tub jets; all provide oscillating or pulsating streams of air or water. Logic-enabled textiles for applications in wearable technology have also been researched.
Fluid logic can be used to create a valve with no moving parts such as in some anaesthetic machines.
Fluidic oscillators were used in the design of pressure-triggered, 3D printable, emergency ventilators for the COVID-19 pandemic.
Fluidic amplifiers are used to generate ultrasound for non-destructive testing by quickly switching pressurized air from one outlet to another.
A fluidic sound amplification system has been demonstrated in a synagogue, where regular electronic sound amplification cannot be used for religious reasons.
Fluidic injection is being researched for use in aircraft to control direction, in two ways: circulation control and thrust vectoring. In both, larger, more complex mechanical parts are replaced by fluidic systems, in which larger forces in fluids are diverted by smaller jets or flows of fluid intermittently, to change the direction of vehicles. In circulation control, near the trailing edges of wings, aircraft flight control systems such as ailerons, elevators, elevons, flaps, and flaperons are replaced by openings, usually rows of holes or elongated slots, which emit fluid flows. In thrust vectoring, in jet engine nozzles, swiveling parts are replaced by openings which inject fluid flows into jets. Such systems divert thrust via fluid effects. Tests show that air forced into a jet engine exhaust stream can deflect thrust up to 15 degrees. In such uses, fluidics is desirable for its lower mass, cost (up to 50% less), drag (up to 15% less during use), inertia (giving faster, stronger control response), complexity (mechanically simpler, fewer or no moving parts or surfaces, less maintenance), and radar cross-section for stealth. This will likely be used in many unmanned aerial vehicles (UAVs), 6th generation fighter aircraft, and ships.
At least two countries are known to be researching fluidic control. In Britain, BAE Systems has tested two fluidically controlled unmanned aircraft: one named Demon, starting in 2010, and another named MAGMA, starting in 2017 with the University of Manchester. In the United States, the Defense Advanced Research Projects Agency (DARPA) program named Control of Revolutionary Aircraft with Novel Effectors (CRANE) seeks "... to design, build, and flight test a novel X-plane that incorporates active flow control (AFC) as a primary design consideration. ... In 2023, the aircraft received its official designation as X-65." Construction began in winter 2024 at Boeing subsidiary Aurora Flight Sciences, with flight testing planned to start in summer 2025.
Octobot, a 2016 proof of concept soft-bodied autonomous robot containing a microfluidic logic circuit, has been developed by researchers at Harvard University's Wyss Institute for Biologically Inspired Engineering.
See also
Water integrator
Microfluidics
Bio-MEMS
Lab-on-a-chip
MONIAC
Unconventional computing
References
Further reading
FLODAC – A Pure Fluid Digital Computer:
Stanley W. Angrist: Fluid control devices. In: Scientific American, December 1964, pp. 80–88.
Pneumatic logic elements from 1969
External links
Fluidics: How They've Taught A Stream of Air to Think pp. 118–121,196.197, illustrating several switch designs and discussing applications. Scanned article available online from Google Books: Popular Science June 1967
Visualization of the flow field of a fluidic oscillator
Fluid dynamics
Logic | Fluidics | [
"Chemistry",
"Engineering"
] | 1,753 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
966,255 | https://en.wikipedia.org/wiki/Participatory%20design | Participatory design (originally co-operative design, now often co-design) is an approach to design attempting to actively involve all stakeholders (e.g. employees, partners, customers, citizens, end users) in the design process to help ensure the result meets their needs and is usable. Participatory design is an approach which is focused on processes and procedures of design and is not a design style. The term is used in a variety of fields e.g. software design, urban design, architecture, landscape architecture, product design, sustainability, graphic design, industrial design, planning, and health services development as a way of creating environments that are more responsive and appropriate to their inhabitants' and users' cultural, emotional, spiritual and practical needs. It is also one approach to placemaking.
Recent research suggests that designers create more innovative concepts and ideas when working within a co-design environment with others than they do when creating ideas on their own. Companies increasingly rely on their user communities to generate new product ideas, marketing them as "user-designed" products to the wider consumer market; consumers who are not actively participating but observe this user-driven approach show a preference for products from such firms over those driven by designers. This preference is attributed to an enhanced identification with firms adopting a user-driven philosophy, consumers experiencing empowerment by being indirectly involved in the design process, leading to a preference for the firm's products. If consumers feel dissimilar to participating users, especially in demographics or expertise, the effects are weakened. Additionally, if a user-driven firm is only selectively open to user participation, rather than fully inclusive, observing consumers may not feel socially included, attenuating the identified preference.
Participatory design has been used in many settings and at various scales. For some, this approach has a political dimension of user empowerment and democratization. This inclusion of external parties in the design process does not excuse designers of their responsibilities. In their article "Participatory Design and Prototyping", Wendy Mackay and Michel Beaudouin-Lafon support this point by stating that "[a] common misconception about participatory design is that designers are expected to abdicate their responsibilities as designers and leave the design to users. This is never the case: designers must always consider what users can and cannot contribute."
In several Scandinavian countries, during the 1960s and 1970s, participatory design was rooted in work with trade unions; its ancestry also includes action research and sociotechnical design.
Definition
In participatory design, participants (putative, potential or future) are invited to cooperate with designers, researchers and developers during an innovation process. Co-design requires the end user's participation: not only in decision making but also in idea generation. They may participate during several stages of an innovation process: during the initial exploration and problem definition, they help define the problem and focus ideas for solutions; during development, they help evaluate proposed solutions. Maarten Pieters and Stefanie Jansen describe co-design as part of a complete co-creation process, which refers to the "transparent process of value creation in ongoing, productive collaboration with, and supported by all relevant parties, with end-users playing a central role" and covers all stages of a development process.
Differing terms
In "Co-designing for Society", Deborah Szebeko and Lauren Tan list various precursors of co-design, starting with the Scandinavian participatory design movement and then state "Co-design differs from some of these areas as it includes all stakeholders of an issue not just the users, throughout the entire process from research to implementation."
In contrast, Elizabeth Sanders and Pieter Stappers state that "the terminology used until the recent obsession with what is now called co-creation/co-design" was "participatory design". They also discuss the differences between co-design and co-creation and how they are "often confused and/or treated synonymously with one another". In their words, "Co-creation is a very broad term with applications ranging from the physical to the metaphysical and from the material to the spiritual", while they see "co-design [as] a specific instance of co-creation". Drawing on this idea of co-creation, the definition of co-design in the context of their paper developed into "the creativity of designers and people not trained in design working together in the design development process". Another term brought up in this article is front end design, formerly known as pre-design: "The goal of the explorations in the front end is to determine what is to be designed and sometimes what should not be designed and manufactured", and this phase provides a space for the initial stages of co-design to take place.
An alternate definition of co-design has been brought up by Maria Gabriela Sanchez and Lois Frankel. They proposed that "Co-design may be considered, for the purpose of this study, as an interdisciplinary process that involves designers and non-designers in the development of design solutions" and that "the success of the interdisciplinary process depends on the participation of all the stakeholders in the project".
"Co-design is a perfect example of interdisciplinary work, where designer, researcher, and user work collaboratively in order to reach a common goal. The concept of interdisciplinarity, however, becomes broader in this context where it not only results from the union of different academic disciplines, but from the combination of different perspectives on a problem or topic."
Fourth Order Design
Similarly, another perspective comes from Golsby-Smith's "Fourth Order Design" which outlines a design process in which end-user participation is required and favours individual process over outcome. Buchanan's definition of culture as a verb is a key part of Golsby-Smith's argument in favour of fourth order design. In Buchanan's words, "Culture is not a state, expressed in an ideology or a body of doctrines. It is an activity. Culture is the activity of ordering, disordering and reordering in the search for understanding and for values which guide action." Therefore, to design for the fourth-order one must design within the widest scope. The system is discussion and the focus falls onto process rather than outcome. The idea that culture and people are an integral part of participatory design is supported by the idea that a "key feature of the field is that it involves people or communities: it is not merely a mental place or a series of processes". "Just as a product is not only a thing, but exists within a series of connected processes, so these processes do not live in a vacuum, but move through a field of less tangible factors such as values, beliefs and the wider context of other contingent processes."
Different dimensions
As described by Sanders and Stappers, one could position co-design as a form of human-centered design across two different dimensions. One dimension is the emphasis on research or design; the other is how strongly people are involved. Therefore, there are many forms of co-design, with different degrees of emphasis on research or design and different degrees of stakeholder involvement. For instance, generative co-design is a form that involves stakeholders strongly in the creative activities early in the front-end design process. It is increasingly being used to involve stakeholders such as patients, care professionals, and designers actively in the creative making process to develop health services.
Another dimension to consider is that of the crossover between design research and education. An example of this is a study that was completed at the Middle East Technical University in Turkey, the purpose of which was to look into the use of “team development [in] enhancing interdisciplinary collaboration between design and engineering students using design thinking”. The students in this study were tasked with completing a group project and reporting on the experience of working together. One of the main takeaways was that "Interdisciplinary collaboration is an effective way to address complex problems with creative solutions. However, a successful collaboration requires teams first to get ready to work in harmony towards a shared goal and to appreciate interdisciplinarity"
History
From the 1960s onward there was a growing demand for greater consideration of community opinions in major decision-making. In Australia many people believed that they were not being planned 'for' but planned 'at'. (Nichols 2009). A lack of consultation made the planning system seem paternalistic and without proper consideration of how changes to the built environment affected its primary users. In Britain "the idea that the public should participate was first raised in 1965." However the level of participation is an important issue. At a minimum public workshops and hearings have now been included in almost every planning endeavour. Yet this level of consultation can simply mean information about change without detailed participation. Involvement that 'recognises an active part in plan making' has not always been straightforward to achieve. Participatory design has attempted to create a platform for active participation in the design process, for end users.
History in Scandinavia
Participatory design was originally developed in Scandinavia, where it was called cooperative design. When the methods were presented to the US community, however, 'cooperation' was a word that did not resonate with the strong separation between workers and managers, who were not expected to discuss ways of working face-to-face. Hence 'participatory' was used instead: the initial Participatory Design sessions were not a direct cooperation between workers and managers sitting in the same room to discuss how to improve their work environment and tools; rather, separate sessions were held for workers and for managers. Each group participated in the process without directly cooperating (as recounted in a historical review of cooperative design at a Scandinavian conference).
In Scandinavia, research projects on user participation in systems development date back to the 1970s. The so-called "collective resource approach" developed strategies and techniques for workers to influence the design and use of computer applications at the workplace: The Norwegian Iron and Metal Workers Union (NJMF) project took a first move from traditional research to working with people, directly changing the role of the union clubs in the project.
The Scandinavian projects developed an action research approach, emphasizing active co-operation between researchers and workers of the organization to help improve the latter's work situation. While researchers got their results, the people whom they worked with were equally entitled to get something out of the project. The approach built on people's own experiences, providing for them resources to be able to act in their current situation. The view of organizations as fundamentally harmonious—according to which conflicts in an organization are regarded as pseudo-conflicts or "problems" dissolved by good analysis and increased communication—was rejected in favor of a view of organizations recognizing fundamental "un-dissolvable" conflicts in organizations (Ehn & Sandberg, 1979).
In the Utopia project (Bødker et al., 1987, Ehn, 1988), the major achievements were the experience-based design methods, developed through the focus on hands-on experiences, emphasizing the need for technical and organizational alternatives (Bødker et al., 1987).
The parallel Florence project (Gro Bjerkness & Tone Bratteteig) started a long line of Scandinavian research projects in the health sector. In particular, it worked with nurses and developed approaches for nurses to get a voice in the development of work and IT in hospitals. The Florence project put gender on the agenda with its starting point in a highly gendered work environment.
The 1990s led to a number of projects including the AT project (Bødker et al., 1993) and the EureCoop/EuroCode projects (Grønbæk, Kyng & Mogensen, 1995).
In recent years, it has been a major challenge to participatory design to embrace the fact that much technology development no longer happens as design of isolated systems in well-defined communities of work (Beck, 2002). At the dawn of the 21st century, we use technology at work, at home, in school, and while on the move.
Co-design
As mentioned above, one definition of co-design states that it is the process of working with one or more non-designers throughout the design process. This method is focused on the insights, experiences and input from end-users on a product or service, with the aim to develop strategies for improvement. It is often used by trained designers who recognize the difficulty in properly understanding the cultural, societal, or usage scenarios encountered by their user. C. K. Prahalad and Venkat Ramaswamy are usually given credit for bringing co-creation/co-design to the minds of those in the business community with the 2004 publication of their book, The Future of Competition: Co-Creating Unique Value with Customers. They propose:
The phrase co-design is also used in reference to the simultaneous development of interrelated software and hardware systems. The term co-design has become popular in mobile phone development, where the two perspectives of hardware and software design are brought into a co-design process.
Results directly related to integrating co-design into existing frameworks is "researchers and practitioners have seen that co-creation practiced at the early front end of the design development process can have an impact with positive, long-range consequences."
New role of the designer under co-design
Co-design is an attempt to define a new evolution of the design process and, with it, an evolution of the designer. Within the co-design process, the designer is required to shift their role from one of expertise to one of an egalitarian mindset: the designer must believe that all people are capable of creativity and problem solving. The designer no longer works only in the isolated roles of researcher and creator, but must also take on roles such as philosopher and facilitator. This shift allows the designer to position themselves and their designs within the context of the world around them, creating better awareness. This awareness is important because, in the designer's attempt to answer a question, "[they] must address all other related questions about values, perceptions, and worldview". Therefore, by shifting the role of the designer, not only do the designs better address their cultural context, but so do the discussions around them.
Discourses
Discourses in the PD literature have been sculpted by three main concerns: (1) the politics of design, (2) the nature of participation, and (3) methods, tools and techniques for carrying out design projects (Finn Kensing & Jeanette Blomberg, 1998, p. 168).
Politics of design
The politics of design have been a concern for many design researchers and practitioners. Kensing and Blomberg outline the main concerns relating to the introduction of new frameworks, such as system design for computer-based systems, and to the power dynamics that emerge within the workplace. The automation introduced by system design created concern among unions and workers, as it threatened their involvement in production and their ownership of their work situation. Asaro (2000) offers a detailed analysis of the politics of design and the inclusion of "users" in the design process.
Nature of participation
Major international organizations such as Project for Public Spaces create opportunities for rigorous participation in the design and creation of place, believing that it is the essential ingredient for successful environments. Rather than simply consulting the public, PPS creates a platform for the community to participate and co-design new areas, which reflect their intimate knowledge. Providing insights, which independent design professionals such as architects or even local government planners may not have.
Using a method called Place Performance Evaluation or (Place Game), groups from the community are taken on the site of proposed development, where they use their knowledge to develop design strategies, which would benefit the community.
"Whether the participants are schoolchildren or professionals, the exercise produces dramatic results because it relies on the expertise of people who use the place every day, or who are the potential users of the place." This successfully engages with the ultimate idea of participatory design, where various stakeholders who will be the users of the end product, are involved in the design process as a collective.
Similar projects have had success in Melbourne, Australia, particularly in relation to contested sites, where design solutions are often harder to establish. The Talbot Reserve in the suburb of St. Kilda faced numerous problems of use, such as becoming a regular spot for sex workers and drug users to congregate. A "Design In", which asked a variety of key users in the community what they wanted for the future of the reserve, allowed traditionally marginalised voices to participate in the design process. Participants described it as 'a transforming experience as they saw the world through different eyes.' (Press, 2003, p. 62). This is perhaps the key attribute of participatory design: a process which allows multiple voices to be heard and involved in the design, resulting in outcomes which suit a wider range of users. It builds empathy within the system and among the users where it is implemented, which makes it possible to solve larger problems more holistically. As planning affects everyone, it is believed that "those whose livelihoods, environments and lives are at stake should be involved in the decisions which affect them" (Sarkissian and Perglut, 1986, p. 3). C. West Churchman said systems thinking "begins when first you view the world through the eyes of another".
In the built environment
Participatory design has many applications in development and changes to the built environment. It has particular currency to planners and architects, in relation to placemaking and community regeneration projects. It potentially offers a far more democratic approach to the design process as it involves more than one stakeholder. By incorporating a variety of views there is greater opportunity for successful outcomes. Many universities and major institutions are beginning to recognise its importance. The UN, Global studio involved students from Columbia University, University of Sydney and Sapienza University of Rome to provide design solutions for Vancouver's downtown eastside, which suffered from drug- and alcohol-related problems. The process allowed cross-discipline participation from planners, architects and industrial designers, which focused on collaboration and the sharing of ideas and stories, as opposed to rigid and singular design outcomes. (Kuiper, 2007, p. 52)
Public interest design
Public interest design is a design movement, extending to architecture, with the main aim of structuring design around the needs of the community. At the core of its application is participatory design. Through allowing individuals to have a say in the process of design of their own surrounding built environment, design can become proactive and tailored towards addressing wider social issues facing that community. Public interest design is meant to reshape conventional modern architectural practice. Instead of having each construction project solely meet the needs of the individual, public interest design addresses wider social issues at their core. This shift in architectural practice is a structural and systemic one, allowing design to serve communities responsibly. Solutions to social issues can be addressed in a long-term manner through such design, serving the public, and involving it directly in the process through participatory design. The built environment can become the very reason for social and community issues to arise if not executed properly and responsibly. Conventional architectural practice often does cause such problems since only the paying client has a say in the design process. That is why many architects throughout the world are employing participatory design and practicing their profession more responsibly, encouraging a wider shift in architectural practice. Several architects have largely succeeded in disproving theories that deem public interest design and participatory design financially and organizationally not feasible. Their work is setting the stage for the expansion of this movement, providing valuable data on its effectiveness and the ways in which it can be carried out.
Difficulties of Adoption and Involvement
Participatory Design is a growing practice within the field of design yet has not yet been widely implemented. Some barriers to the adoption of participatory design are listed below.
Doubt of universal creativity
A belief that creativity is a restricted skill would invalidate the proposal of participatory design to allow a wider range of affected people to participate in the creative process of designing. However, this belief is based on a limited view of creativity which does not recognize that creativity can manifest in a wide range of activities and experiences. This doubt can be damaging not only to individuals but also to society as a whole: by assuming that only a select few possess creative talent, we may overlook the unique perspectives, ideas, and solutions that a broader group of participants can offer.
Lack of technology in software-based co-operative design
Collaborative design software often assumes that all users have equal knowledge of the technology being used. For example, a collaborative 3D design program may let multiple people design at the same time, but offer no support for guided help, that is, no way to show another participant what to do through markings and text without talking to them directly.
Collaborative programming tools have a similar gap. They support multiple people programming at the same time, but typically lack guided-help features such as suggested code, inline hints from another user, or the ability to mark relevant parts of the screen. This is a problem in pair programming, where communication becomes a bottleneck; ideally, one should be able to mark, configure, and guide another user regardless of that user's prior knowledge.
Self-serving hierarchies
In a profit-motivated system, the commercial field of design may feel fearful of relinquishing some control in order to empower those who are typically not involved in the process of design. Commercial organizational structures often prioritize profit, individual gain, or status over the well-being of the community or other externalities. However, participatory practices are not impossible to implement in commercial settings. It may be difficult for those who have acquired success in a hierarchical structure to imagine alternative systems of open collaboration.
Lack of investment
Although participatory design has been of interest in design academia, applied uses require funding and dedication from many individuals. The high time and financial costs make research and development of participatory design less appealing for speculative investors. It also may be difficult to find or convince enough shareholders or community members to commit their time and effort to a project. However, widespread and involved participation is critical to the process.
Successful examples of participatory design are critical because they demonstrate the benefits of this approach and inspire others to adopt it. A lack of funding or interest can cause participatory projects to revert to practices where the designer initiates and dominates rather than facilitating design by the community.
Differing priorities between designers and participants
Participatory design projects which involve a professional designer as a facilitator to a larger group can have difficulty with competing objectives. Designers may prioritize aesthetics while end-users may prioritize functionality and affordability. Addressing these differing priorities may involve finding creative solutions that balance the needs of all stakeholders, such as using low-cost materials that meet functional requirements while also being aesthetically pleasing. Despite any potential predetermined assumptions, "the users’ knowledge has to be considered as important as the knowledge of the other professionals in the team, [as this] can be an obstacle to the co-design practice." "[The future of] co-designing will be a close collaboration between all the stakeholders in the design development process together with a variety of professionals having hybrid design/research skills."
Emotional and ethical dimensions in participatory design
Recent scholarship has highlighted the complex emotional landscape navigated by researchers engaged in participatory design, especially in contexts involving vulnerable or marginalized communities. Emotional challenges such as guilt and shame often emerge as researchers confront the disparity between their professional objectives and the lived realities of the communities they engage with. These emotions may stem from unmet expectations, perceived exploitation, or limited project impact. For instance, researchers may experience a sense of guilt when project outcomes fail to meet community needs or when research goals appear to benefit academic careers more than the communities themselves. The ethical dilemmas associated with balancing research agendas, funding constraints, and community needs can create a conflict between professional obligations and personal commitments, potentially leading to emotional burnout or moral distress. Consequently, there is a growing call within the field for frameworks that address these emotional aspects, advocate for ethical reflexivity, and promote sustained engagement strategies that align more closely with community well-being and autonomy. This perspective broadens the traditional scope of participatory design by acknowledging the emotional toll on researchers, thereby emphasizing the need for supportive structures that account for these emotional and ethical intricacies.
From Community Consultation to Community Design
Many local governments require community consultation in any major changes to the built environment. Community involvement in the planning process is almost a standard requirement in most strategic changes. Community involvement in local decision making creates a sense of empowerment. The City of Melbourne Swanston Street redevelopment project received over 5000 responses from the public allowing them to participate in the design process by commenting on seven different design options. While the City of Yarra recently held a "Stories in the Street" consultation, to record peoples ideas about the future of Smith Street. It offered participants a variety of mediums to explore their opinions such as mapping, photo surveys and storytelling. Although local councils are taking positive steps towards participatory design as opposed to traditional top down approaches to planning, many communities are moving to take design into their own hands.
Portland, Oregon City Repair Project is a form of participatory design, which involves the community co-designing problem areas together to make positive changes to their environment. It involves collaborative decision-making and design without traditional involvement from local government or professionals but instead runs on volunteers from the community. The process has created successful projects such as intersection repair, which saw a misused intersection develop into a successful community square.
In Malawi, a UNICEF WASH programme trialled participatory design development for latrines in order to ensure that users participate in creating and selecting sanitation technologies that are appropriate and affordable for them. The process provided an opportunity for community members to share their traditional knowledge and skills in partnership with designers and researchers.
Peer-to-peer urbanism is a form of decentralized, participatory design for urban environments and individual buildings. It borrows organizational ideas from the open-source software movement, so that knowledge about construction methods and urban design schemes is freely exchanged.
In software development
In the English-speaking world, the term has a particular currency in the world of software development, especially in circles connected to Computer Professionals for Social Responsibility (CPSR), who have put on a series of Participatory Design Conferences. It overlaps with the approach extreme programming takes to user involvement in design, but (possibly because of its European trade union origins) the Participatory Design tradition puts more emphasis on the involvement of a broad population of users rather than a small number of user representatives.
Participatory design can be seen as a move of end-users into the world of researchers and developers, whereas empathic design can be seen as a move of researchers and developers into the world of end-users. There is a very significant differentiation between user-design and user-centered design in that there is an emancipatory theoretical foundation, and a systems theory bedrock (Ivanov, 1972, 1995), on which user-design is founded. Indeed, user-centered design is a useful and important construct, but one that suggests that users are taken as centers in the design process, consulting with users heavily, but not allowing users to make the decisions, nor empowering users with the tools that the experts use. For example, Wikipedia content is user-designed. Users are given the necessary tools to make their own entries. Wikipedia's underlying wiki software is based on user-centered design: while users are allowed to propose changes or have input on the design, a smaller and more specialized group decide about features and system design.
Participatory work in software development has historically tended toward two distinct trajectories, one in Scandinavia and northern Europe, and the other in North America. The Scandinavian and northern European tradition has remained closer to its roots in the labor movement (e.g., Beck, 2002; Bjerknes, Ehn, and Kyng, 1987). The North American and Pacific rim tradition has tended to be both broader (e.g., including managers and executives as "stakeholders" in design) and more circumscribed (e.g., design of individual features as contrasted with the Scandinavian approach to the design of entire systems and design of the work that the system is supposed to support) (e.g., Beyer and Holtzblatt, 1998; Noro and Imada, 1991). However, some more recent work has tended to combine the two approaches (Bødker et al., 2004; Muller, 2007).
Research methodology
Increasingly, researchers are focusing on co-design as a way of doing research, and therefore are developing parts of its research methodology. For instance, in the field of generative co-design, Vandekerckhove et al. have proposed a methodology for assembling a group of stakeholders to participate in generative co-design activities in the early innovation process. They propose first sampling potential stakeholders through snowball sampling, then interviewing these people to assess their knowledge and inference experience, and lastly assembling a diverse group of stakeholders according to that knowledge and experience.
Though not completely synonymous, research methods of Participatory Design can be defined under Participatory Research (PR): a term for research designs and frameworks using direct collaboration with those affected by the studied issue. More specifically, Participatory Design has evolved from Community-Based Research and Participatory Action Research (PAR). PAR is a qualitative research methodology involving "three types of change, including critical consciousness development of researchers and participants, improvement of lives of those participating in research, and transformation of societal 'decolonizing' research methods with the power of healing and social justice". Participatory Action Research (PAR) is a subset of Community-Based Research aimed explicitly at including participants and empowering people to create measurable action. PAR is practised across various disciplines, with research in Participatory Design being an application of its different qualitative methodologies. Just as PAR is often used in social sciences, for example, to investigate a person's lived experience concerning systemic structures and social power relations, Participatory Design seeks to deeply understand stakeholders' experiences by directly engaging them in the problem-defining and solving processes. Therefore, in Participatory Design, research methods extend beyond simple qualitative and quantitative data collection. Rather than being concentrated within data collection, research methods of Participatory Design are tools and techniques used throughout co-designing research questions, collecting, analyzing, and interpreting data, knowledge dissemination, and enacting change.
When facilitating research in Participatory Design, decisions are made in all research phases to assess what will produce genuine stakeholder participation. By doing so, one of Participatory Design's goals is to dismantle the power imbalance existing between 'designers' and 'users.' Applying PR and PAR research methods seeks to engage communities and question power hierarchies, which "makes us aware of the always contingent character of our presumptions and truths... truths are logical, contingent and intersubjective... not directed toward some specific and predetermined end goal... committed to denying us the (seeming) firmness of our commonsensical assumptions". Participatory design offers this denial of our "commonsensical assumptions" because it forces designers to consider knowledge beyond their craft and education. Therefore, a designer conducting research for Participatory Design assumes the role of facilitator and co-creator.
See also
Co-creation
Computer-supported cooperative work
Design thinking
Participatory action research
Permaculture
Public participation
Service design
User innovation
User participation in architecture (N.J. Habraken, Giancarlo De Carlo, and Structuralists such as Aldo van Eyck)
Notes
References
Asaro, Peter M. (2000). "Transforming society by transforming technology: the science and politics of participatory design." Accounting Management and Information Technology 10: 257–290.
Banathy, B.H. (1992). Comprehensive systems design in education: building a design culture in education. Educational Technology, 22(3) 33–35.
Beck, E. (2002). P for Political - Participation is Not Enough. SJIS, Volume 14 – 2002
Belotti, V. and Bly, S., 1996. Walking away from desktop computer: distributed collaboration and mobility in a product design team. In Proceedings of CSCW "96, Cambridge, Mass., November 16–20, ACM press: 209–218.
Beyer, H., and Holtzblatt, K. (1998). Contextual design: Defining customer-centered systems. San Francisco: Morgan Kaufmann.
Button, G. and Sharrock, W. 1996. Project work: the organisation of collaborative design and development in software engineering. CSCW Journal, 5 (4), p. 369–386.
Bødker, S. and Iversen, O. S. (2002): Staging a professional participatory design practice: moving PD beyond the initial fascination of user involvement. In Proceedings of the Second Nordic Conference on Human-Computer interaction (Aarhus, Denmark, October 19–23, 2002). NordiCHI '02, vol. 31. ACM Press, New York, NY, 11-18
Bødker, K., Kensing, F., and Simonsen, J. (2004). Participatory IT design: Designing for business and workplace realities. Cambridge, MA, USA: MIT Press.
Bødker, S., Christiansen, E., Ehn, P., Markussen, R., Mogensen, P., & Trigg, R. (1993). The AT Project: Practical research in cooperative design, DAIMI No. PB-454. Department of Computer Science, Aarhus University.
Bødker, S., Ehn, P., Kammersgaard, J., Kyng, M., & Sundblad, Y. (1987). A Utopian experience: In G. Bjerknes, P. Ehn, & M. Kyng. (Eds.), Computers and democracy: A Scandinavian challenge (pp. 251–278). Aldershot, UK: Avebury.
Carr, A.A. (1997). User-design in the creation of human learning systems. Educational Technology Research and Development, 45 (3), 5–22.
Carr-Chellman, A.A., Cuyar, C., & Breman, J. (1998). User-design: A case application in health care training. Educational Technology Research and Development, 46 (4), 97–114.
Divitini, M. & Farshchian, B.A. 1999. Using Email and WWW in a Distributed Participatory Design Project. In SIGGROUP Bulletin 20(1), pp. 10–15.
Ehn, P. & Kyng, M., 1991. Cardboard Computers: Mocking-it-up or Hands-on the Future. In, Greenbaum, J. & Kyng, M. (Eds.) Design at Work, pp. 169 – 196. Hillsdale, New Jersey: Laurence Erlbaum Associates.
Ehn, P. (1988). Work-oriented design of computer artifacts. Falköping: Arbetslivscentrum/Almqvist & Wiksell International, Hillsdale, NJ: Lawrence Erlbaum Associates
Ehn, P. and Sandberg, Å. (1979). God utredning: In Sandberg, Å. (Ed.): Utredning och förändring i förvaltningen[Investigation and change in administration]. Stockholm: Liber.
Grudin, J. (1993). Obstacles to Participatory Design in Large Product Development Organizations: In Namioka, A. & Schuler, D. (Eds.), Participatory design. Principles and practices (pp. 99–122). Hillsdale NJ: Lawrence Erlbaum Associates.
Grønbæk, K., Kyng, M. & P. Mogensen (1993). CSCW challenges: Cooperative Design in Engineering Projects, Communications of the ACM, 36, 6, pp. 67–77
Ivanov, K. (1972). Quality-control of information: On the concept of accuracy of information in data banks and in management information systems. The University of Stockholm and The Royal Institute of Technology. Doctoral dissertation.
Ivanov, K. (1995). A subsystem in the design of informatics: Recalling an archetypal engineer. In B. Dahlbom (Ed.), The infological equation: Essays in honor of Börje Langefors, (pp. 287–301). Gothenburg: Gothenburg University, Dept. of Informatics (). Note #16.
Kensing, F. & Blomberg, J. 1998. Participatory Design: Issues and Concerns In Computer Supported Cooperative Work, Vol. 7, pp. 167–185.
Kensing, F. 2003. Methods and Practices in Participatory Design. ITU Press, Copenhagen, Denmark.
Kuiper, Gabrielle, June 2007, Participatory planning and design in the downtown eastside: reflections on Global Studio Vancouver, Australian Planner, v.44, no.2, pp. 52–53
Kyng, M. (1989). Designing for a dollar a day. Office, Technology and People, 4(2): 157–170.
Muller, M.J. (2007). Participatory design: The third space in HCI (revised). In J. Jacko and A. Sears (eds.), Handbook of HCI 2nd Edition. Mahway NJ USA: Erlbaum.
Naghsh, A. M., Ozcan M. B. 2004. Gabbeh - A Tool For Computer Supported Collaboration in Electronic Paper-Prototyping. In *Dearden A & Watts L. (Eds). Proceedings of HCI "04: Design for Life volume 2. British HCI Group pp77–80
Näslund, T., 1997. Computers in Context –But in Which Context? In Kyng, M. & Mathiassen, L. (Eds). Computers and Design in Context. MIT Press, Cambridge, MA. pp. 171–200.
Nichols, Dave, (2009) Planning Thought and History Lecture, The University of Melbourne
Noro, K., & Imada, A. S. (Eds.). (1991) Participatory ergonomics. London: Taylor and Francis.
Perry, M. & Sanderson, D. 1998. Coordinating Joint Design Work: The Role of Communication and Artefacts. Design Studies, Vol. 19, pp. 273–28
Press, Mandy, 2003. "Communities for Everyone: redesigning contested public places in Victoria", Chapter 9 of end Weeks et al. (eds), Community Practices in Australia (French Forests NSW: Pearson Sprint Print), pp. 59–65
Pan, Y., 2018. From Field to Simulator: Visualising Ethnographic Outcomes to Support Systems Developers. University of Oslo. Doctoral dissertation.
Reigeluth, C. M. (1993). Principles of educational systems design. International Journal of Educational Research, 19 (2), 117–131.
Sarkissian, W, Perglut, D. 1986, Community Participation in Practice, The Community Participation handbook, Second edition, Murdoch University
Sanders, E. B. N., & Stappers, P. J. (2008). Co-creation and the new landscapes of design. Codesign, 4(1), 5–18.
Santa Rosa, J.G. & Moraes, A. Design Participativo: técnicas para inclusão de usuários no processo de ergodesign de interfaces. Rio de Janeiro: RioBooks, 2012.
Schuler, D. & Namioka, A. (1993). Participatory design: Principles and practices. Hillsdale, NJ: Erlbaum.
Trainer, Ted 1996, Towards a sustainable economy: The need for fundamental change Envirobook/ Jon Carpenter, Sydney/Oxford, pp. 135–167
Trischler, Jakob, Simon J. Pervan, Stephen J. Kelly and Don R. Scott (2018). The value of codesign: The effect of customer involvement in service design teams. Journal of Service Research, 21(1): 75–100. https://doi.org/10.1177/1094670517714060
Wojahn, P. G., Neuwirth, C. M., Bullock, B. 1998. Effects of Interfaces for Annotation on Communication in a Collaborative Task. In Proceedings of CHI "98, LA, CA, April 18–23, ACM press: 456–463
Von Bertalanffy, L. (1968). General systems theory. New York: Braziller.
Design
Innovation
Product development
Citizen science models | Participatory design | [
"Engineering"
] | 8,600 | [
"Design"
] |
966,262 | https://en.wikipedia.org/wiki/Stunt%20%28botany%29 | In botany and agriculture, stunting describes a plant disease that results in dwarfing and loss of vigor. It may be caused by infectious or noninfectious means. Stunted growth can affect foliage and crop yields, as well as eating quality in edible plants. Stunted growth can be prevented through controlling quality of seeds, soil, and proper watering practices. Treatment will vary greatly depending on the root cause of the stunting.
Infectious
A stunt caused by infection can either be prevented or treated. Anti-microbial peptides may offer generalized protection against plant diseases that cause stunted growth.
Noninfectious
Stunted growth not caused by infection may be due to a wide variety of environmental factors. Environmental factors that affect plant growth include light, temperature, water, humidity and nutrition. There may be water imbalance, poor planting practices, poor nutrition, or physical injury to the plant. Using high quality seed and soil may mitigate stunted growth.
See also
Soil retrogression and degradation
Soil pH
Soil types
Ramu stunt disease, a disease of the sugarcane widespread throughout Papua New Guinea
References
Plant pathogens and diseases | Stunt (botany) | [
"Biology"
] | 226 | [
"Plant pathogens and diseases",
"Plants"
] |
966,493 | https://en.wikipedia.org/wiki/Mountain%20tapir | The mountain tapir, also known as the Andean tapir or woolly tapir (Tapirus pinchaque), is the smallest of the four widely recognized species of tapir. It is found only in certain portions of the Andean Mountain Range in northwestern South America. As such, it is the only tapir species to live outside of tropical rainforests in the wild. It is most easily distinguished from other tapirs by its thick woolly coat and white lips.
The species name comes from the term "La Pinchaque", an imaginary beast said to inhabit the same regions as the mountain tapir.
Description
Mountain tapirs are black or very dark brown, with occasional pale hairs flecked in amongst the darker fur. The fur becomes noticeably paler on the underside, around the anal region, and on the cheeks. A distinct white band runs around the lips, although it may vary in extent, and there are usually also white bands along the upper surface of the ears. In adults, the rump has paired patches of bare skin, which may help to indicate sexual maturity. The eyes are initially blue, but change to a pale brown as the animal ages. Unlike all other species of tapir, the fur is long and woolly, especially on the underside and flanks, reaching or more in some individuals.
Adults are usually around in length and in height at the shoulder. They typically weigh between , and while the sexes are of similar size, females tend to be around heavier than the males.
Like the other types of tapir, they have small, stubby tails and long, flexible proboscises. They have four toes on each front foot and three toes on each back foot, each with large nails and supported by a padded sole. A patch of bare skin, pale pink or grey in colour, extends just above each toe.
Reproduction
Female mountain tapirs have a 30-day estrous cycle, and typically breed only once every other year. During courtship, the male chases the female and uses soft bites, grunts, and squeals to get her attention, while the female responds with frequent squealing. After a gestation period of 392 or 393 days, the female gives birth to a single young; multiple births are very rare.
Newborn mountain tapirs weigh about and have a brown coat with yellowish-white spots and stripes. Like adults, baby mountain tapirs have thick, woolly fur to help keep them warm. Weaning begins at around three months of age. The immature coloration fades after about a year, but the mother continues to care for her young for around 18 months. Mountain tapirs reach sexual maturity at age three and have lived up to 27 years in captivity.
Ecology
Tapirs are herbivores, and eat a wide range of plants, including leaves, grasses, and bromeliads. In the wild, particularly common foods include lupins, Gynoxys, ferns, and umbrella plants. It also seeks out natural salt licks to satisfy its need for essential minerals.
Mountain tapirs are also important seed dispersers in their environments, and have been identified as a keystone species of the high Andes. A relatively high proportion of plant seeds eaten by mountain tapirs successfully germinate in their dung, probably due to a relatively inefficient digestive system and a tendency to defecate near water. Although a wide range of seeds are dispersed in this manner, those of the endangered wax palm seem to rely almost exclusively on mountain tapirs for dispersal, and this plant, along with the highland lupine, declines dramatically whenever the animal is extirpated from an area.
Predators of mountain tapirs include cougars, spectacled bears, and, less commonly, jaguars. Attacks by invasive domestic dogs have also been reported.
Behavior
When around other members of their species, mountain tapirs communicate through high-pitched whistles, and the males occasionally fight over estrous females by trying to bite each other's rear legs. But for the most part, mountain tapirs are shy and lead solitary lives, spending their waking hours foraging for food on their own along well-worn tapir paths. Despite their bulk, they travel easily through dense foliage, up the steep slopes of their hilly habitats, and in water, where they often wallow and swim.
Mountain tapirs are generally crepuscular, although they are more active during the day than other species of tapirs. They sleep from roughly midnight to dawn, with an additional resting period during the hottest time of the day for a few hours after noon, and prefer to bed down in areas with heavy vegetation cover. Mountain tapirs forage for tender plants to eat. When trying to access high plants, they will sometimes rear up on their hind legs to reach and then grab with their prehensile snouts. Though their eyesight is lacking, they get by on their keen senses of smell and taste, as well as the sensitive bristles on their proboscises.
Males will frequently mark their territory with dung piles, urine, and rubbings on trees, and females will sometimes engage in these behaviors, as well. The territories of individuals usually overlap, with each animal claiming over , and females tend to have larger territories than males.
Distribution and habitat
The mountain tapir is found in the cloud forests and páramo of the Eastern and Central Cordilleras mountains in Colombia, Ecuador, and the far north of Peru. Its range may once have extended as far as western Venezuela, but it has long been extirpated from that region. It commonly lives at elevations between , and since at this altitude temperatures routinely fall below freezing, the animal's woolly coat is essential. During the wet season, mountain tapirs tend to inhabit the forests of the Andes, while during the drier months, they move to the páramo, where fewer biting insects pester them.
The mountain tapir has no recognised subspecies.
In Peru, it is protected in the National Sanctuary Tabaconas Namballe. The species needs continuous stretches of cloud forest and páramo, rather than isolated patches, to successfully breed and maintain a healthy population, and this obstacle is a major concern for conservationists trying to protect the endangered animal.
Evolution
The mountain tapir is the least specialised of the living species of tapir, and has changed the least since the origin of the genus in the early Miocene. Genetic studies have shown that mountain tapirs diverged from its closest relative, the Brazilian tapir, in the late Pliocene, around three million years ago. This would have been shortly after the formation of the Panamanian Isthmus, allowing the ancestors of the two living species to migrate southward from their respective points of origin in Central America as part of the Great American Interchange. However, the modern species most likely originated in the Andes, some time after this early migration.
Molecular dating methods based on three mitochondrial cytochrome genes found T. pinchaque to be within a paraphyletic T. terrestris complex.
Vulnerability
The mountain tapir is the most threatened of the five Tapirus species, classified as "Endangered" by the IUCN in 1996. According to the IUCN, there was a 20% chance the species could have been extinct as early as 2014. Due to the fragmentation of its surviving range, populations may already have fallen below the level required to sustain genetic diversity.
Historically, mountain tapirs have been hunted for their meat and hides, while the toes, proboscises, and intestines are used in local folk medicines and as aphrodisiacs. Since they will eat crops when available, they are also sometimes killed by farmers protecting their produce. Today, deforestation for agriculture and mining, and poaching are the main threats to the species.
There may be only 2,500 individuals left in the wild today, making it all the more difficult for scientists to study them. Also, very few individuals are found in zoos. Only a handful of breeding pairs of this species exist in captivity in the world — at the Los Angeles Zoo, the Cheyenne Mountain Zoo in Colorado Springs, and, as of 2006, the San Francisco Zoo. In Canada, a mating pair is kept in Langley, BC, at the Mountain View Conservation and Breeding Centre. The nine individuals in captivity are descendants of just two founder animals. This represents a distinct lack of genetic diversity and may not bode well for their continued existence in captivity. The three zoos that house this species are working to ensure that the remaining wild populations of mountain tapirs are protected. Two mountain tapirs were sent from the San Francisco Zoo to the Cali Zoo, making them the only captive tapirs in their natural home range; one male is kept in Pitalito and could be moved to the Cali Zoo to make a breeding pair.
References
Video/Multimedia
Video - Mountain Tapirs at the San Francisco Zoo
External links
Tapir Specialist Group – Mountain tapir
ARKive – images and movies of the mountain tapir (Tapirus pinchaque)
Tapirs
Mammals of the Andes
Mammals of Colombia
Mammals of Ecuador
Mammals of Peru
EDGE species
Mammals described in 1829
Páramo fauna | Mountain tapir | [
"Biology"
] | 1,859 | [
"EDGE species",
"Biodiversity"
] |
15,876,296 | https://en.wikipedia.org/wiki/Autonomous%20Province%20of%20Kosovo%20and%20Metohija | The Autonomous Province of Kosovo and Metohija, commonly known as Kosovo and abbreviated to Kosmet (from Kosovo and Metohija) or KiM, is an autonomous province that occupies the southernmost corner of Serbia, as defined by the country's constitution. The territory is the subject of an ongoing political and territorial dispute between the Republic of Serbia and the partially recognised Republic of Kosovo, with the APKM being viewed as the de jure interpretation of the territory under Serbian law; however, the Serbian government currently does not control the territories because they are de facto administered by the Republic of Kosovo. Its claimed administrative capital and largest city is Pristina.
The territory of the province, as recognised by Serbian laws, lies in the southern part of Serbia and covers the regions of Kosovo and Metohija. The capital of the province is Priština. The territory was previously an autonomous province of Serbia during Socialist Yugoslavia (1946–1990), and acquired its current status in 1990. The province was governed as part of Serbia until the Kosovo War (1998–99), when it became a United Nations (UN) protectorate in accordance with United Nations Security Council Resolution 1244, but still internationally recognized as part of Serbia. The control was then transferred to the UN administration of UNMIK. On 17 February 2008, representatives of the people of Kosovo () unilaterally and extra-institutionally declared Kosovo's independence, which is internationally recognized by 104 UN members. While it is de facto independent from Serbia, Serbia still regards it as its province.
Overview
In 1990, the Socialist Autonomous Province of Kosovo, an autonomous province of Serbia within Yugoslavia, had undergone the anti-bureaucratic revolution by Slobodan Milošević's government which resulted in the reduction of its powers, effectively returning it to its constitutional status of 1971–74. The same year, its Albanian majority—as well as the Republic of Albania—supported the proclamation of an independent Republic of Kosova. Following the end of the Kosovo War 1999, and as a result of NATO intervention, Serbia and the federal government no longer exercised de facto control over the territory.
In February 2008, the Republic of Kosovo declared independence. While Serbia has not recognised Kosovo's independence, in the 2013 Brussels Agreement, it abolished all its institutions in the Autonomous Province. , Kosovo's independence is currently recognized by UN member states. In 2013, the Serbian government announced it was dissolving the Serb minority assemblies it had created in northern Kosovo, in order to allow the integration of the Kosovo Serb minority into the general population of Kosovo.
History
Constitutional changes were made in Yugoslavia in 1990. The parliaments of all Yugoslavian republics and provinces, which until then had MPs only from the League of Communists of Yugoslavia, were dissolved and multi-party elections were held within them. Kosovar Albanians refused to participate in the elections so they held their own unsanctioned elections instead. As election laws required (and still require) turnout higher than 50%, a parliament in Kosovo could not be established.
The new constitution abolished the individual provinces' official media, integrating them within the official media of Serbia while still retaining some programs in the Albanian language. The Albanian-language media in Kosovo were suppressed. Funding was withdrawn from state-owned media, including those in the Albanian language in Kosovo. The constitution made the creation of privately owned media possible, however their operation was very difficult because of high rents and restrictive laws. State-owned Albanian language television or radio was also banned from broadcasting from Kosovo. However, privately owned Albanian media outlets appeared; of these, probably the most famous is "Koha Ditore", which was allowed to operate until late 1998 when it was closed after publishing a calendar glorifying ethnic Albanian separatists.
The constitution also transferred control over state-owned companies to the Yugoslav central government. In September 1990, up to 123,000 Albanian workers were dismissed from their positions in government and media, as were teachers, doctors, and civil servants, provoking a general strike and mass unrest. Some of those who were not sacked quit in sympathy, refusing to work for the Serbian government. Although the sackings were widely seen as a purge of ethnic Albanians, the government maintained that it was removing former communist directors.
Albanian educational curriculum textbooks were withdrawn and replaced by new ones. The curriculum was (and still is, as this is the curriculum used for Albanians in Serbia outside Kosovo) identical to its Serbian counterpart and that of all other nationalities in Serbia except that it had education on and in the Albanian language. Education in Albanian was withdrawn in 1992 and re-established in 1994. At the University of Pristina, which was seen as a centre of Kosovo Albanian cultural identity, education in the Albanian language was abolished and Albanian teachers were also dismissed in large numbers. Albanians responded by boycotting state schools and setting up an unofficial parallel system of Albanian-language education.
Kosovo Albanians were outraged by what they saw as an attack on their rights. Following mass rioting and unrest from Albanians as well as outbreaks of inter-communal violence, in February 1990, a state of emergency was declared and the presence of the Yugoslav Army and police was significantly increased to quell the unrest.
Unsanctioned elections were held in 1992, which overwhelmingly elected Ibrahim Rugova as "president" of a self-declared Republic of Kosova; Serb authorities rejected the election results, and tried to capture and prosecute those who had voted. In 1995, thousands of Serb refugees from Croatia were settled in Kosovo, which further worsened relations between the two communities.
Albanian opposition to the sovereignty of Yugoslavia and especially Serbia had previously surfaced in rioting (1968 and March 1981) in the capital Pristina. Rugova initially advocated non-violent resistance, but later opposition took the form of separatist agitation by opposition political groups and armed action from 1995 by the "Kosovo Liberation Army" (Ushtria Çlirimtare e Kosovës, or UÇK) whose activities led to the Insurgency in Kosovo which led to the Kosovo War in 1998 ending with the 1999 NATO bombing of the Federal Republic of Yugoslavia and establishment of the United Nations Interim Administration Mission in Kosovo (UNMIK).
In 2003, the Federal Republic of Yugoslavia was renamed the State Union of Serbia and Montenegro (Montenegro left the federation in 2006 and recognised Kosovo's independence in 2008).
Politics
Since 1999, the Serb-inhabited areas of Kosovo have been governed as a de facto independent region from the Albanian-dominated government in Pristina. They continue to use Serbian national symbols and participate in Serbian national elections, which are boycotted in the rest of Kosovo; in turn, they boycott Kosovo's elections. The municipalities of Leposavić, Zvečan and Zubin Potok are run by local Serbs, while the Kosovska Mitrovica municipality had rival Serbian and Albanian governments until a compromise was agreed in November 2002.
The Serb areas have united into a community, the Union of Serbian Districts and District Units of Kosovo and Metohija established in February 2003 by Serbian delegates meeting in North Mitrovica, which has since served as the de facto "capital." The Union's president is Dragan Velić. There is also a central governing body, the Serbian National Council for Kosovo and Metohija (SNV). The President of SNV in North Kosovo is Dr Milan Ivanović, while the head of its Executive Council is Rada Trajković.
Local politics are dominated by the Serbian List for Kosovo and Metohija. The Serbian List was led by Oliver Ivanović, an engineer from Kosovska Mitrovica.
In February 2007 the Union of Serbian Districts and District Units of Kosovo and Metohija has transformed into the Serbian Assembly of Kosovo and Metohija presided by Marko Jakšić. The Assembly strongly criticised the secessionist movements of the Albanian-dominated PISG Assembly of Kosovo and demanded unity of the Serb people in Kosovo, boycott of EULEX and announced massive protests in support of Serbia's sovereignty over Kosovo. On 18 February 2008, day after Kosovo's unilateral declaration of independence, the Assembly declared it "null and void".
Also, there was a Ministry for Kosovo and Metohija within the Serbian government, with Goran Bogdanović as Minister for Kosovo and Metohija. In 2012, that ministry was downgraded to the Office for Kosovo and Metohija, with Aleksandar Vulin as the head of the new office. However, in 2013, the post was raised to that of a Minister without portfolio in charge of Kosovo and Metohija.
Administrative divisions
Under the Serbian system of administration, Kosovo is divided into five districts comprising 28 municipalities and 1 city. In 2000, UNMIK established a system with 7 districts and 30 municipalities. Serbia has not exercised effective control over Kosovo since 1999. For the UNMIK created districts of Kosovo, see Districts of Kosovo.
See also
Albanians in Serbia
North Kosovo
Republic of Serbia (1992–2006)
References
External links
Office for the Autonomous Province of Kosovo and Metohija, Government of Serbia
Kosovo and Metohija
Kosovo and Metohija
States and territories established in 1992
Countries and territories where Albanian is an official language
Countries and territories where Serbian is an official language
History of Kosovo
Historical regions in Serbia
Serbia and Montenegro
1992 establishments in Kosovo
Kosovo–Serbia relations
Kosovo and Metohija | Autonomous Province of Kosovo and Metohija | [
"Mathematics"
] | 1,912 | [
"Statistical regions of Serbia",
"Statistical concepts",
"Statistical regions"
] |
15,876,400 | https://en.wikipedia.org/wiki/Calcium%20chloride%20transformation | Calcium chloride (CaCl2) transformation is a laboratory technique in prokaryotic (bacterial) cell biology. The addition of calcium chloride to a cell suspension promotes the binding of plasmid DNA to lipopolysaccharides (LPS). Positively charged calcium ions attract both the negatively charged DNA backbone and the negatively charged groups in the LPS inner core. The plasmid DNA can then pass into the cell upon heat shock, where chilled cells (+4 degrees Celsius) are heated to a higher temperature (+42 degrees Celsius) for a short time.
History of bacterial transformation
Frederick Griffith published the first report of bacteria's potential for transformation in 1928. Griffith observed that mice did not succumb to the "rough" type of pneumococcus (Streptococcus pneumoniae), referred to as nonvirulent, but did succumb to the "smooth" strain, which is referred to as virulent. The smooth strain's virulence could be suppressed with heat-killing. However, when the nonvirulent rough strain was combined with the heat-killed smooth strain, the rough strain managed to pick up the smooth phenotype and thus become virulent. Griffith's research indicated that the change was brought on by a nonliving, heat-stable substance generated from the smooth strain. Later on, Oswald Avery, Colin MacLeod, and Maclyn McCarty identified this transformational substance as DNA in 1944.
Principle of calcium chloride transformation
Since DNA is a very hydrophilic molecule, it often cannot penetrate the bacterial cell membrane. Therefore, it is necessary to make bacteria competent in order to internalize DNA. This may be accomplished by suspending the bacteria in a solution with a high calcium concentration, which creates small pores in the bacterial cell envelope. Suspension in calcium, together with incubation of the DNA with the competent cells on ice followed by a brief heat shock, forces the extra-chromosomal DNA into the cell.
According to previous research, the LPS receptor molecules on the competent cell surface bind to a bare DNA molecule. This binding occurs because the negatively charged DNA molecules and LPS form coordination complexes with the divalent calcium cations. Due to its size, DNA cannot pass through the cell membrane on its own to reach the cytoplasm. The cell membrane of CaCl2-treated cells is severely depolarized during the heat shock stage; the resulting drop in membrane potential makes the cell interior less negative, allowing the negatively charged DNA to flow into the interior of the cell. Afterwards, the membrane potential can be raised back to its initial value by a subsequent cold shock.
Competent cells
Competent cells are bacterial cells whose cell envelopes have been altered so that foreign DNA can pass through more easily. Without particular chemical or electrical treatments to make them competent, the majority of cell types cannot successfully take up DNA; for that reason, treatment with calcium ions is the typical procedure for making bacteria permeable to DNA. In bacteria, competence is closely regulated, and different bacterial species have different competence-related characteristics. Although they share some similarity, the competence proteins generated by Gram-positive and Gram-negative bacteria are different.
Natural Competence
Naturally, bacteria can acquire DNA from their surroundings by three methods: conjugation, transformation, and transduction. For DNA to be taken up during transformation, the recipient cells must be in a certain physiological condition, known as the competent state. Once the DNA has entered the cell's cytoplasm, enzymes such as nucleases can break it down. In cases where the DNA is extremely similar to the cell's own genetic material, DNA-repair enzymes recombine it with the chromosome instead.
Artificial Competence
Artificial competence, by contrast, is not encoded in a cell's genes. It requires a laboratory process that creates conditions that do not often exist in nature so that cells become permeable to DNA. Although the efficiency of transformation is often poor, the process is relatively simple and quick to apply in bacterial genetic engineering. Mandel and Higa, who created a simple procedure based on soaking the cells in cold CaCl2, provided the basis for obtaining artificially competent cells. Chemical transformation, such as calcium chloride transformation, and electroporation are the most commonly used methods to transform bacterial cells, such as E. coli, with plasmid DNA.
Method for calcium chloride transformation
Calcium chloride treatment is generally used for the transformation of E. coli and other bacteria. It enhances plasmid DNA incorporation by the bacterial cell, promoting genetic transformation. Plasmid DNA can attach to LPS by being added to the cell solution together with CaCl2. Thus, when heat shock is applied, the negatively charged DNA backbone and LPS combine, allowing plasmid DNA to enter the bacterial cell.
The process is summarized in the following steps according to The Undergraduate Journal of Experimental Microbiology and Immunology (UJEMI) protocol:
Prepare a bacterial culture in LB broth
Before starting the main procedure, use the required volume of the previously made culture to inoculate the required volume of fresh LB broth
Pellet the cells by centrifuging at 4°C at 4000 rpm for 10 minutes
Pour off the supernatant and resuspend cells in 20 mL ice-cold 0.1 M CaCl2, then leave immediately on ice for 20 minutes
Centrifuge as in step 3, a more diffused pellet will be obtained as an indication of competent cells
Resuspend in cold CaCl2 as in step 4
Pour off supernatant and resuspend cells in 5 mL ice-cold 0.1 M CaCl2 along with 15% glycerol to combine pellets
Transfer the suspensions to sterile thin glass tubes for effective heat shocks
Add the required mg amount of DNA in the suspension tubes, and immediately leave on ice
Place the tubes on a 42°C water bath for a 30 seconds and return immediately to ice for 2 minutes
Add 1 mL of LB or SOC medium
Transfer each tube to the required mL LB broth amount in a new flask
Incubate with shaking at 37°C and 200 rpm for 60 min; however, 90 minutes is advised in order to allow the bacteria to recover
Plate 1:10 and 1:100 dilutions of the incubated cultures onto selective/screening LB plates (e.g. containing ampicillin and/or X-gal) to which the antibiotics to be used for selection have been added
Incubate overnight at 37°C
Finally, observe isolated colonies on the plates
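The colony counts from the dilution plates are commonly converted into a transformation efficiency, expressed as colony-forming units (CFU) per microgram of plasmid DNA. The short sketch below illustrates that calculation; the colony count, DNA amount, volumes, and dilution factor are hypothetical values chosen only for the example and are not part of the UJEMI protocol.

```python
def transformation_efficiency(colonies, dna_ug, vol_plated_ml, vol_total_ml, dilution_factor):
    """Estimate transformation efficiency in CFU per microgram of plasmid DNA.

    colonies        -- colonies counted on one plate
    dna_ug          -- micrograms of plasmid DNA added to the transformation
    vol_plated_ml   -- volume actually spread on the plate (mL)
    vol_total_ml    -- total volume of the recovery culture (mL)
    dilution_factor -- e.g. 100 for a 1:100 dilution
    """
    # Fraction of the whole transformation reaction that ended up on this plate
    fraction_plated = vol_plated_ml / (vol_total_ml * dilution_factor)
    # Scale the colony count back up to the whole reaction, then normalise by DNA amount
    return colonies / (dna_ug * fraction_plated)

# Hypothetical example: 150 colonies on a 1:100 dilution plate,
# 0.01 ug of plasmid, 0.1 mL plated out of a 1 mL recovery culture.
print(f"{transformation_efficiency(150, 0.01, 0.1, 1.0, 100):.2e} CFU/ug")
```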
References
External links
Animation of Calcium chloride (CaCl2) transformation
https://www.youtube.com/watch?v=7Ul9RVYG5CM&ab_channel=NewEnglandBiolabs
Cell biology
Molecular biology techniques | Calcium chloride transformation | [
"Chemistry",
"Biology"
] | 1,444 | [
"Molecular biology techniques",
"Cell biology",
"Molecular biology"
] |
15,877,107 | https://en.wikipedia.org/wiki/Monge%20equation | In the mathematical theory of partial differential equations, a Monge equation, named after Gaspard Monge, is a first-order partial differential equation for an unknown function u in the independent variables x1,...,xn

$$F\left(u, x_1, \ldots, x_n, \frac{\partial u}{\partial x_1}, \ldots, \frac{\partial u}{\partial x_n}\right) = 0$$

that is a polynomial in the partial derivatives of u. Any Monge equation has a Monge cone.
Classically, putting u = x0, a Monge equation of degree k is written in the form

$$\sum_{i_0 + i_1 + \cdots + i_n = k} A_{i_0 i_1 \cdots i_n}(x_0, x_1, \ldots, x_n)\, (dx_0)^{i_0} (dx_1)^{i_1} \cdots (dx_n)^{i_n} = 0$$

and expresses a relation between the differentials dxk. The Monge cone at a given point (x0, ..., xn) is the zero locus of the equation in the tangent space at the point.
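As an illustration (an example added here, not taken from the original text), consider the eikonal equation in two independent variables,

$$\left(\frac{\partial u}{\partial x_1}\right)^2 + \left(\frac{\partial u}{\partial x_2}\right)^2 = 1,$$

which is a Monge equation of degree 2. Putting u = x0, the corresponding relation between the differentials is

$$(dx_0)^2 - (dx_1)^2 - (dx_2)^2 = 0,$$

so the Monge cone at every point is a right circular cone in the tangent space, opening symmetrically about the x0-direction.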
The Monge equation is unrelated to the (second-order) Monge–Ampère equation.
References
Partial differential equations | Monge equation | [
"Mathematics"
] | 161 | [
"Mathematical analysis",
"Mathematical analysis stubs"
] |
15,878,352 | https://en.wikipedia.org/wiki/Sample%20preparation%20in%20mass%20spectrometry | Sample preparation for mass spectrometry is used for the optimization of a sample for analysis in a mass spectrometer (MS). Each ionization method has certain factors that must be considered for that method to be successful, such as volume, concentration, sample phase, and composition of the analyte solution.
Quite possibly the most important consideration in sample preparation is knowing what phase the sample must be in for analysis to be successful. In some cases the analyte itself must be purified before entering the ion source. In other situations, the matrix, or everything in the solution surrounding the analyte, is the most important factor to consider and adjust. Often, sample preparation itself for mass spectrometry can be avoided by coupling mass spectrometry to a chromatography method, or some other form of separation before entering the mass spectrometer.
In some cases, the analyte itself must be adjusted so that analysis is possible, such as in protein mass spectrometry, where usually the protein of interest is cleaved into peptides before analysis, either by in-gel digestion or by proteolysis in solution.
Sample phase
The first and most important step in sample preparation for mass spectrometry is determining what phase the sample needs to be in. Different ionization methods require different sample phases. Solid phase samples can be ionized through methods such as field desorption, plasma-desorption, fast atom bombardment, and secondary-ion ionization.
Liquids with the analyte dissolved in them, or solutions, can be ionized through methods such as matrix-assisted laser desorption, electrospray ionization, and atmospheric-pressure chemical ionization. Both solid and liquid samples may be ionized with ambient ionization techniques.
Gas samples, or volatile samples, can be ionized using methods such as electron ionization, photoionization, and chemical ionization.
These lists are the most commonly used state of matter for each ionization method, but the ionization methods are not necessarily limited to these states of matter. For example, fast atom bombardment ionization is typically used to ionize solid samples, but this method is typically used on solids dissolved into solutions, and can also be used to analyze components that have entered the gas phase.
Chromatography as a sample preparation method
In many mass spectrometry ionization methods, the sample must be in the liquid or gas phase for the ionization method to work. Sample preparation to ensure proper ionization can be difficult, but can be made easier by coupling the mass spectrometer to some chromatographic equipment. Gas chromatography(GC) or liquid chromatography(LC) can be used as a sample preparation method.
Gas chromatography
GC is a method involving the separation of different analytes within a sample of mixed gases. The separated gases can be detected multiple ways, but one of the most powerful detection methods for gas chromatography is mass spectrometry. After the gases separate, they enter the mass spectrometer and are analyzed. This combination not only separates the analytes, but gives structural information about each one. The GC sample must be volatile, or able to enter the gas phase, while also being thermally stable so that it does not break down as it is heated to enter the gas phase. Mass spectrometry ionization techniques requiring the sample to be in the gas phase have similar concerns.
Electron ionization (EI) in mass spectrometry requires samples that are small molecules, volatile, and thermally stable, similar to that of gas chromatography. This ensures that as long as GC is performed on the sample before entering the mass spectrometer, the sample will be prepared for ionization by EI.
Chemical ionization (CI) is another method that requires samples to be in the gas phase. This is so that the sample can react with a reagent gas to form an ion that can be analyzed by the mass spectrometer. CI has many of the same requirements in sample preparation as EI, such as volatility and thermal stability of the sample. GC is useful for sample preparation for this technique as well. One advantage of CI is that larger molecules separated by GC can be analyzed by this ionization method. CI has a larger mass range than that of EI and can analyze molecules that EI may not be able to. CI also has the advantage of being less damaging to the sample molecule, so that less fragmentation occurs and more information about the original analyte can be determined.
Photoionization (PI) was first applied as an ionization-based detection method for gases separated by GC. Years later, it was also applied as a detector for LC, though the samples must be vaporized first to be detected by the photoionization detector. Eventually PI was applied to mass spectrometry, particularly as an ionization method for gas chromatography-mass spectrometry. Sample preparation for PI includes first ensuring the sample is in the gas phase. PI ionizes molecules by exciting the sample molecules with photons of light. This method only works if the sample and the other components in the gas phase are excited by different wavelengths of light. It is therefore important, when preparing the sample or choosing the photon source, that the ionization wavelengths are adjusted to excite the sample analyte and nothing else.
Liquid chromatography
Liquid chromatography (LC) is a method that in some ways is more powerful than GC, but can be coupled to mass spectrometry just as easily. In LC, the concerns involving sample preparation can be minimal. In LC, both the stationary and mobile phase can affect the separation, whereas in GC only the stationary phase should be influential. This allows for the sample preparation to be minimal if one is willing to adjust the stationary phase or mobile phase before running the sample. The primary concern is the concentration of analyte. If the concentration is too high then separation can be unsuccessful, but mass spectrometry as a detection method does not need complete separation, showing another benefit of coupling LC to a mass spectrometer.
LC can be coupled to mass spectrometry through the vaporization of the liquid samples as they enter the mass spectrometer. This method can allow for ionization methods that require gaseous samples to be used, such as CI or PI, particularly atmospheric-pressure chemical ionization or atmospheric pressure photoionization, which allows for more interactions and more ionization.
Other ionization methods may not require the liquid sample to be vaporized, and can analyze the liquid sample itself. One example is fast-atom bombardment ionization which can allow for liquid samples separated by the LC to flow into the ionization chamber and be ionized easily. The most common ionization method coupled to LC is some form of spray ionization, which includes thermospray ionization and more commonly, electrospray (ESI) ionization.
Thermospray was first developed as a way to effectively remove solvent and vaporize samples more easily. This method involves the liquid sample from the LC flowing through an electrically heated vaporizer that simply heats the sample, removing any solvent and therefore putting the sample in the gas phase. Electrospray ionization (ESI) is similar to thermospray in the principle of removing the liquid solvent from the sample as much as possible, creating charged sample molecules either in small droplets or in gas form. Studies have shown that ESI can be as much as ten times more sensitive than other ionization methods coupled to LC. The spray methods are particularly useful considering that non-volatile samples can be analyzed easily through this method since the sample is not itself turned into a gas, the liquid is simply removed, pushing the sample into a gaseous or mist phase.
One sample preparation issue with liquid chromatography-mass spectrometry is possible matrix effects due to the presence of background molecules. These matrix effects have been shown to decrease the signal in methods such as PI and ESI by as much as 60%, depending on the sample being analyzed. The matrix effect can also cause an increase in signal, producing false positive results. This can be corrected by purifying the sample as much as possible before LC is performed, but in the case of analyzing environmental samples, where everything in the sample is of concern, sample preparation may not be the ideal solution to the problem. Another method that can be applied to correct the issue is the standard addition method.
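In the standard addition method, known amounts of the analyte are spiked into aliquots of the sample, the signal is measured at each spike level, and the original concentration is read off the extrapolated x-intercept of the resulting line. The sketch below illustrates the calculation; the spike levels and signal values are invented for the example and the units are arbitrary.

```python
import numpy as np

# Hypothetical standard-addition data: known spike concentrations (ng/mL)
# added to equal aliquots of the sample, and the corresponding instrument response.
spiked_conc = np.array([0.0, 5.0, 10.0, 20.0])         # added analyte
signal      = np.array([120.0, 195.0, 270.0, 420.0])   # e.g. peak area

# Fit signal = slope * added_concentration + intercept
slope, intercept = np.polyfit(spiked_conc, signal, 1)

# The unspiked sample concentration is the magnitude of the x-intercept.
original_conc = intercept / slope
print(f"Estimated analyte concentration in the sample: {original_conc:.1f} ng/mL")
```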
Fast atom bombardment
Fast atom bombardment (FAB) is a method involving using a beam of high energy atoms to strike a surface and generate ions. These solid analyte particles must be dissolved into some form of matrix, or non-volatile liquid to protect and assist in the ionization of the solid analyte. It has been shown that as the matrix is depleted, the ion formation diminishes, so choosing the right matrix compound is vital.
The overall goal of the matrix compound is to present the sample to the atom beam at a high mobile surface concentration. For maximum sensitivity, the sample should form a perfect monolayer at the surface of a substrate having low volatility. This monolayer effect can be seen in that once a certain concentration of analyte in matrix is reached, any concentration above that is seen to exhibit no effect, because once the monolayer is formed, any additional analyte is beneath the monolayer, and thus not affected by the atom beam. The concentration needed to cause this effect is seen to change as the amount of non-volatile matrix changes. The concentration of solid analyte therefore needs to be considered in the preparation of the solution for analysis so that signal from "hidden" analyte is not missed.
To choose the matrix for each solid analyte, three criteria must be considered. First, it should dissolve the solid compound to be analysed (with or without the aid of a cosolvent or additive), thus allowing molecules of that compound to diffuse to the surface layers, replenishing the sample molecules that have been ionized or destroyed by interaction with the fast atom beam. Another mechanism for explanation of ion formation in FAB involves the idea that sputtering occurs from the bulk rather than the surface, but in that case, the solubility is still largely important to ensure homogeneity of solid analyte in the bulk solution. Secondly, the matrix should have a low volatility under the conditions of the mass spectrometer. As mentioned above, as the matrix is depleted, the ionization decreases as well, so maintaining the matrix is vital. Thirdly, the matrix should not react with the solid analyte in question, or if it does react, it should be in an understood and reproducible way. This ensures reproducibility of analysis and identification of the actual analyte rather than a derivative of the analyte.
The most commonly used compounds as a matrix are variations of glycerol, such as glycerol, deuteroglycerol, thioglycerol, and aminoglycerol. If the sample cannot dissolve in the chosen matrix, such as glycerol, a cosolvent or additive can be mixed with the matrix to facilitate the dissolving of the solid analyte. For example, chlorophyll A is completely insoluble in glycerol, but by mixing in a small amount of Triton X-100, a derivative of polyethylene glycol, the chlorophyll becomes highly soluble within the matrix. It is important to note that though a good signal may be achieved through glycerol or glycerol with an additive, there could be other matrix compounds that can offer an even better signal. Optimization of matrix compounds and concentration of solid analyte are vital for FAB measurements.
Secondary ion mass spectrometry
Secondary ion mass spectrometry (SIMS) is a method very similar to FAB in that a beam of particles is fired against the surface of a sample in order to cause sputtering, in which the molecules of the sample ionize and leave the surface, thus allowing for the ions or the sample to be analyzed. The primary difference is that in SIMS, an ion beam is fired against the surface, but in FAB, an atom beam is fired against the surface. The other primary difference, of more interest to this page, is that, unlike FAB, SIMS is typically performed on a solid sample with little sample preparation required.
The main consideration with SIMS is ensuring that the sample is stable under ultra-high vacuum, or pressures less than 10⁻⁸ torr. The nature of the ultra-high vacuum is that it ensures the sample remains constant during analysis as well as ensuring the high energy ion beam strikes the sample. Ultra-high vacuum solves many of the problems that need to be considered during sample preparation. When preparing the sample for analysis, another thing that should be considered is the thickness of the film. Typically, if a thin monolayer can be deposited onto the surface of a noble metal, analysis should be successful. If the film thickness is too large, which is common in real world analysis, the problem can be solved by methods such as depositing a perforated silver foil over a nickel grid onto the film surface. This yields similar results to thin films deposited directly onto a noble metal.
Matrix-assisted laser desorption/ionization
For matrix-assisted laser desorption/ionization (MALDI) mass spectrometry a solid or liquid sample is mixed with a matrix solution, to help the sample avoid processes such as aggregation or precipitation, while helping the sample remain stable during the ionization process. The matrix crystallizes with the sample and is then deposited on a sample plate, which can be made of a range of materials, from inert metals to inert polymers. The matrix containing the sample molecules is then transferred to the gas phase by pulsed laser irradiation. The makeup of the matrix, interactions between the sample and the matrix, and how the sample is deposited are all extremely important during sample preparation to ensure the best possible results.
The selection of a matrix is the first step when preparing samples for MALDI analysis. The primary goals of the matrix are to absorb the energy from a laser, thus transferring it to the analyte molecules, and to separate the analyte molecules from each other. A consideration that should be taken into account when choosing a matrix is what type of analyte ion is expected or desired. Knowing the acidity or basicity of the analyte molecule compared with the acidity or basicity of the matrix, for example, is valuable knowledge when choosing a matrix. The matrix should not compete with the analyte molecule, so the matrix should not want to form the same type of ion as the analyte. For example, if the desired analyte has a high amount of acidity, it would be logical to choose a matrix with a high amount of basicity to avoid competition and facilitate the formation of an ion. The pH of the matrix can also be used to select what sample you want to obtain spectra for. For example, in the case of proteins, a very acidic pH can show very little of the peptide components, but can show very good signal for those components that are larger. If the pH is increased towards a more basic pH, then smaller components become easier to see.
The concentration of salt in the sample is a factor that needs to be considered when preparing a MALDI sample as well. Salts can aid a MALDI spectra by preventing aggregation or precipitation while stabilizing the sample. However, interfering signals can be observed due to side reactions of the matrix with the sample, such as in the case of the matrix interacting with alkali metal ions which can impair the analysis of the spectra. Typically the amount of salt in the matrix only becomes a problem in very high concentrations, such as 1 molar. The problem of having too high a concentration of salt in the sample can be solved by first running the solution through liquid chromatography to help purify the sample, but this method is time-consuming and results in the loss of some of the sample to be analyzed. Another method is focused on purification once the sample solution is deposited onto the sample probe. Many sample probes can be designed to have a membrane on the surface that can selectively bind the sample in question to the probe surface. The surface can then be rinsed off to remove all unnecessary salts or background molecules. The matrix of appropriate salt concentration can then be deposited directly onto the sample on the probe surface and crystallized there. Despite these negative effects of salt concentration, a separate desalting step is usually not necessary in the case of proteins, because the selection of appropriate buffer salts prevents the occurrence of this problem.
How the sample and matrix are deposited on the surface of the sample probe needs to be a consideration in sample preparation as well. The dried drop method is the simplest of deposition methods. The matrix and sample solution are mixed together and then a small drop of the mixture is placed on the sample probe surface and allowed to dry, thus crystallizing. The sandwich method involves depositing a layer of matrix onto the surface of the probe and allowing it to dry. A drop of the sample followed by a drop of additional matrix is then applied to the layer of dried matrix and allowed to dry as well. Variations on the sandwich technique involve depositing the matrix on the surface and then depositing the sample directly on top of the matrix. A particularly useful method involves depositing the matrix solution on the surface of the sample probe in a solvent that will evaporate very rapidly, thus forming a very thin fine layer of matrix. The sample solution is then placed on top of the matrix layer and allowed to evaporate slowly, thus integrating the sample into the top layer of matrix as the sample solution evaporates. An additional concern when depositing the sample on the surface of the probe is the solubility of the sample in the matrix. If the sample is insoluble in the matrix, additional methods must be employed. A method used in this case involves mechanical grinding and mixing of solid sample and solid matrix crystals. Once blended well, this powder can be deposited on the surface of the sample probe in free powder form or as a pill. Another possible method is placing the sample on the surface of the probe and applying vaporized matrix to the sample probe to allow the matrix to condense around the sample.
Electrospray ionization
Electrospray ionization (ESI) is a technique that uses high voltages to create an electrospray, a fine aerosol of the sample solution. ESI sample preparation can be very important and the quality of results can be heavily determined by the characteristics of the sample. ESI experiments can be run on-line or off-line. In on-line measurements the mass spectrometer is connected to a liquid chromatograph, and as the samples are separated they are ionized into the mass spectrometer by the ESI system; sample preparation is actually performed before the LC separation. In off-line measurements, the analyte solution is applied directly to the mass spectrometer by a spray capillary. Off-line sample preparation has many considerations, such as the fact that the capillary used allows for the application of volumes in the nanoliter range, which can contain a concentration of analyte too small for the analysis of many compounds, such as proteins. An additional problem can be loss of ESI signal due to interference between the analyte sample and background components. Unfortunately, it has been shown that sample preparation itself can only slightly alleviate this problem, which is due more to the nature of the analyte itself than to the preparation. In ESI the principal problem comes not from reactions in the gas phase but rather from problems involving the solution phase of the droplets themselves. Issues can be due to non-volatile substances remaining in the drops, which can change the efficiency of droplet formation or droplet evaporation, which in turn affects the amount of charged ions in the gas phase that ultimately reach the mass spectrometer. These problems can be addressed in multiple ways, including increasing the concentration of analyte relative to matrix in the sample solution or running the sample through a more extensive chromatographic technique before analysis. An example of a chromatographic technique that can aid the ESI signal is 2-D liquid chromatography, or running the sample through two separate chromatography columns, giving better separation of the analyte from the matrix.
ESI variations
There are some ESI methods that require little to no sample preparation. One such method is a method termed extractive electrospray ionization (EESI). This method involves having an electrospray of solvent directed at an angle against a different spray of the sample solution, produced by a separate nebulizer. This method requires no sample preparation in that the electrospray of solvent extracts the sample from the complex mixture, effectively removing any background contaminants. Another particularly powerful variation on ESI is desorption electrospray ionization (DESI), which involves directing an electrospray at a surface with the sample deposited on top of it. The sample is ionized in the electrospray as it splashes off the surface, then traveling to the mass spectrometer. This method is important because no sample preparation is needed for this method. A sample simply needs to be deposited on a surface, such as paper. Atmospheric pressure chemical ionization (APCI) is similar to ESI in that the sample is nebulized in droplets that are then evaporated, leaving behind a charged ion to be analyzed. APCI experiences few of the negative matrix effects experienced by ESI due to the fact that ionization occurs in the gas phase in this method rather than the within the liquid droplets as in ESI and the fact that in APCI there is an overabundance of reaction gas, thus minimizing the effect of the matrix on the ionization process.
Protein ESI
A major application for ESI is the field of protein mass spectrometry. Here, the MS is used for the identification and sizing of proteins. The identification of a protein sample can be done in an ESI-MS by de novo peptide sequencing (using tandem mass spectrometry) or peptide mass fingerprinting. Both methods require the previous digestion of proteins to peptides, mostly accomplished enzymatically using proteases. Both in-solution digestion and in-gel digestion require buffered solutions, whose salt content is too high, and whose analyte content is too low, for a successful ESI-MS measurement. Therefore, a combined desalting and concentration step is performed. Usually reversed-phase liquid chromatography is used, in which the peptides stay bound to the chromatography matrix while the salts are removed by washing. The peptides can then be eluted from the matrix with a small volume of a solution containing a large portion of organic solvent, which reduces the final volume of the analyte. In LC-MS the desalting/concentration is realised with a pre-column; in off-line measurements, reversed-phase micro columns are used, which can be operated directly with microliter pipettes. Here, the peptides are eluted with the spray solution containing an appropriate portion of organic solvent. The resulting solution (usually a few microliters) is enriched with the analyte and, after transfer to the spray capillary, can be directly used in the MS.
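Peptide mass fingerprinting compares the measured peptide masses against masses predicted from an in-silico digest of candidate protein sequences. As a rough illustration of that prediction step, the sketch below performs a naive tryptic digest (cleaving after lysine or arginine, except before proline) and sums approximate monoisotopic residue masses; the example sequence is arbitrary and the mass table is truncated to a few residues, so this is only a sketch rather than analysis software.

```python
# Approximate monoisotopic residue masses (Da) for a few amino acids.
RESIDUE_MASS = {
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "V": 99.06841,
    "L": 113.08406, "K": 128.09496, "E": 129.04259, "F": 147.06841,
    "R": 156.10111, "P": 97.05276,
}
WATER = 18.01056  # added once per peptide (H at the N-terminus, OH at the C-terminus)

def tryptic_digest(sequence):
    """Cleave after K or R, but not when the next residue is P."""
    peptides, start = [], 0
    for i, aa in enumerate(sequence):
        next_aa = sequence[i + 1] if i + 1 < len(sequence) else ""
        if aa in "KR" and next_aa != "P":
            peptides.append(sequence[start:i + 1])
            start = i + 1
    if start < len(sequence):
        peptides.append(sequence[start:])
    return peptides

def monoisotopic_mass(peptide):
    return sum(RESIDUE_MASS[aa] for aa in peptide) + WATER

# Arbitrary example sequence using only residues defined above.
for pep in tryptic_digest("GASVKLEFRPAVGK"):
    print(f"{pep:>10s}  {monoisotopic_mass(pep):10.4f} Da")
```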
See also
In-gel digestion
References
Mass spectrometry
Proteomics | Sample preparation in mass spectrometry | [
"Physics",
"Chemistry"
] | 4,876 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Matter"
] |
15,878,680 | https://en.wikipedia.org/wiki/Butler%E2%80%93Volmer%20equation | In electrochemistry, the Butler–Volmer equation (named after John Alfred Valentine Butler and Max Volmer), also known as Erdey-Grúz–Volmer equation, is one of the most fundamental relationships in electrochemical kinetics. It describes how the electrical current through an electrode depends on the voltage difference between the electrode and the bulk electrolyte for a simple, unimolecular redox reaction, considering that both a cathodic and an anodic reaction occur on the same electrode:

O + ne− ⇌ R
The Butler–Volmer equation
The Butler–Volmer equation is:

$$j = j_0 \left\{ \exp\left[\frac{\alpha_a n F}{RT}\left(E - E_{eq}\right)\right] - \exp\left[-\frac{\alpha_c n F}{RT}\left(E - E_{eq}\right)\right] \right\}$$

or in a more compact form:

$$j = j_0 \left[ \exp\left(\frac{\alpha_a n F \eta}{RT}\right) - \exp\left(-\frac{\alpha_c n F \eta}{RT}\right) \right]$$

where:
j: electrode current density, A/m2 (defined as j = I/S)
j0: exchange current density, A/m2
E: electrode potential, V
Eeq: equilibrium potential, V
T: absolute temperature, K
n: number of electrons involved in the electrode reaction
F: Faraday constant
R: universal gas constant
αc: so-called cathodic charge transfer coefficient, dimensionless
αa: so-called anodic charge transfer coefficient, dimensionless
η: activation overpotential (defined as η = E − Eeq).
The right hand figure shows plots valid for .
The limiting cases
There are two limiting cases of the Butler–Volmer equation:
the low overpotential region (called "polarization resistance", i.e., when E ≈ Eeq), where the Butler–Volmer equation simplifies to:

$$j = j_0 \frac{nF}{RT}\left(E - E_{eq}\right);$$
the high overpotential region, where the Butler–Volmer equation simplifies to the Tafel equation. When E >> Eeq, the first (anodic) term dominates, and when E << Eeq, the second (cathodic) term dominates.
$$E - E_{eq} = a - b \log|j|$$ for a cathodic reaction, when E << Eeq, or
$$E - E_{eq} = a + b \log|j|$$ for an anodic reaction, when E >> Eeq
where a and b are constants (for a given reaction and temperature) and are called the Tafel equation constants. The theoretical values of the Tafel equation constants are different for the cathodic and anodic processes. However, the Tafel slope can be defined as:

$$b = \left(\frac{\partial E}{\partial \ln|j_f|}\right)$$

where jf is the faradaic current, expressed as jf = jc + ja, jc and ja being the cathodic and anodic partial currents, respectively.
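As a numerical illustration of these two limiting regimes, the sketch below evaluates the compact form of the Butler–Volmer equation over a few overpotentials and compares it with the linear (low-overpotential) and anodic Tafel (high-overpotential) approximations; the exchange current density, transfer coefficients, n, and temperature used here are assumed, representative values rather than values taken from the article.

```python
import math

# Assumed, representative parameters (not from the article)
j0      = 1.0e-3    # exchange current density, A/m^2
alpha_a = 0.5       # anodic charge transfer coefficient
alpha_c = 0.5       # cathodic charge transfer coefficient
n       = 1         # electrons transferred
F       = 96485.33  # Faraday constant, C/mol
R       = 8.314     # gas constant, J/(mol K)
T       = 298.15    # temperature, K

f = n * F / (R * T)  # convenient grouping, 1/V

def butler_volmer(eta):
    """Full Butler–Volmer current density (A/m^2) at overpotential eta (V)."""
    return j0 * (math.exp(alpha_a * f * eta) - math.exp(-alpha_c * f * eta))

def linear_limit(eta):
    """Low-overpotential (polarization resistance) approximation, valid when alpha_a + alpha_c = 1."""
    return j0 * f * eta

def tafel_anodic(eta):
    """High positive overpotential: only the anodic exponential term survives."""
    return j0 * math.exp(alpha_a * f * eta)

for eta in (0.005, 0.05, 0.25):
    print(f"eta = {eta:5.3f} V | BV = {butler_volmer(eta):.3e} | "
          f"linear = {linear_limit(eta):.3e} | Tafel = {tafel_anodic(eta):.3e}")
```

At 5 mV the full expression and the linear approximation agree closely, while at 250 mV the full expression is indistinguishable from the anodic Tafel term, illustrating the two limits described above.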
The extended Butler–Volmer equation
The more general form of the Butler–Volmer equation, applicable to the mass transfer-influenced conditions, can be written as:

$$j = j_0 \left\{ \frac{c_r(0,t)}{c_r^*} \exp\left(\frac{\alpha_a n F \eta}{RT}\right) - \frac{c_o(0,t)}{c_o^*} \exp\left(-\frac{\alpha_c n F \eta}{RT}\right) \right\}$$

where:
j is the current density, A/m2,
co and cr refer to the concentrations of the oxidized and the reduced form of the electroactive species, respectively,
c(0,t) is the time-dependent concentration at the distance zero from the surface of the electrode, and c* is the corresponding bulk concentration.
The above form simplifies to the conventional one (shown at the top of the article) when the concentration of the electroactive species at the surface is equal to that in the bulk.
There are two rates which determine the current-voltage relationship for an electrode. First is the rate of the chemical reaction at the electrode, which consumes reactants and produces products. This is known as the charge transfer rate. The second is the rate at which reactants are provided, and products removed, from the electrode region by various processes including diffusion, migration, and convection. The latter is known as the mass-transfer rate. These two rates determine the concentrations of the reactants and products at the electrode, and are in turn determined by those concentrations. The slowest of these rates will determine the overall rate of the process.
The simple Butler–Volmer equation assumes that the concentrations at the electrode are practically equal to the concentrations in the bulk electrolyte, allowing the current to be expressed as a function of potential only. In other words, it assumes that the mass transfer rate is much greater than the reaction rate, and that the reaction is dominated by the slower chemical reaction rate. Despite this limitation, the utility of the Butler–Volmer equation in electrochemistry is wide, and it is often considered to be "central in the phenomenological electrode kinetics".
The extended Butler–Volmer equation does not make this assumption, but rather takes the concentrations at the electrode as given, yielding a relationship in which the current is expressed as a function not only of potential, but of the given concentrations as well. The mass-transfer rate may be relatively small, but its only effect on the chemical reaction is through the altered (given) concentrations. In effect, the concentrations are a function of the potential as well. A full treatment, which yields the current as a function of potential only, will be expressed by the extended Butler–Volmer equation, but will require explicit inclusion of mass transfer effects in order to express the concentrations as functions of the potential.
Derivation
General expression
The following derivation of the extended Butler–Volmer equation is adapted from that of Bard and Faulkner and Newman and Thomas-Alyea. For a simple unimolecular, one-step reaction of the form:
O+ne− → R
The forward and backward reaction rates (vf and vb) and, from Faraday's laws of electrolysis, the associated electrical current densities (j), may be written as:
where kf and kb are the reaction rate constants, with units of frequency (1/time) and co and cr are the surface concentrations (mol/area) of the oxidized and reduced molecules, respectively (written as co(0,t) and cr(0,t) in the previous section). The net rate of reaction v and net current density j are then:
The figure above plots various Gibbs energy curves as a function of the reaction coordinate ξ. The reaction coordinate is roughly a measure of distance, with the body of the electrode being on the left and the bulk solution being on the right. The blue energy curve shows the increase in Gibbs energy for an oxidized molecule as it moves closer to the surface of the electrode when no potential is applied. The black energy curve shows the increase in Gibbs energy as a reduced molecule moves closer to the electrode. The two energy curves intersect at the transition state. Applying a potential E to the electrode moves the oxidized-species (blue) curve downward by nFE to the red curve and shifts the intersection point. The activation energies (energy barriers) that must be overcome by the oxidized and reduced species, respectively, therefore depend on the applied potential E, and their values at E = 0 serve as the reference quantities.
Assume that the rate constants are well approximated by an Arrhenius equation,
where the Af and Ab are constants such that Af co = Ab cr is the "correctly oriented" O-R collision frequency, and the exponential term (Boltzmann factor) is the fraction of those collisions with sufficient energy to overcome the barrier and react.
Assuming that the energy curves are practically linear in the transition region, each of the blue, red, and black curves may be represented there by a straight-line (linear) function of the reaction coordinate.
The charge transfer coefficient for this simple case is equivalent to the symmetry factor, and can be expressed in terms of the slopes of the energy curves:
It follows that:
For conciseness, define:
The rate constants can now be expressed as:
where the rate constants at zero potential are:
The current density j as a function of applied potential E may now be written:
Expression in terms of the equilibrium potential
At a certain voltage Ee, equilibrium is attained and the forward and backward rates (vf and vb) are equal. This is represented by the green curve in the above figure. The equilibrium rate constants will be written as kfe and kbe, and the equilibrium concentrations will be written coe and cre. The equilibrium currents (jce and jae) will be equal and are written as jo, which is known as the exchange current density.
Note that the net current density at equilibrium will be zero. The equilibrium rate constants are then:
Solving the above for kfo and kbo in terms of the equilibrium concentrations coe and cre and the exchange current density jo, the current density j as a function of applied potential E may now be written:
Assuming that equilibrium holds in the bulk solution, with bulk concentrations $c_o^*$ and $c_r^*$, it follows that $c_{oe} = c_o^*$ and $c_{re} = c_r^*$, and the above expression for the current density j is then the Butler–Volmer equation. Note that $E - E_e$ is also known as η, the activation overpotential.
Expression in terms of the formal potential
For the simple reaction, the change in Gibbs energy is:
where $a_{oe}$ and $a_{re}$ are the activities of the oxidized and reduced species at equilibrium. The activities a are related to the concentrations c by a = γc, where γ is the activity coefficient. The equilibrium potential is given by the Nernst equation:
$E_e = E^0 + \dfrac{RT}{nF}\ln\!\left(\dfrac{a_{oe}}{a_{re}}\right)$
where $E^0$ is the standard potential.
Defining the formal potential:
$E^{0'} = E^0 + \dfrac{RT}{nF}\ln\!\left(\dfrac{\gamma_o}{\gamma_r}\right)$
where $\gamma_o$ and $\gamma_r$ are the activity coefficients of the oxidized and reduced species, the equilibrium potential is then:
$E_e = E^{0'} + \dfrac{RT}{nF}\ln\!\left(\dfrac{c_{oe}}{c_{re}}\right)$
Substituting this equilibrium potential into the Butler–Volmer equation yields:
which may also be written in terms of the standard rate constant ko as:
The standard rate constant is an important descriptor of electrode behavior, independent of concentrations. It is a measure of the rate at which the system will approach equilibrium. In the limit as ko → 0, the electrode becomes an ideal polarizable electrode and will behave electrically as an open circuit (neglecting capacitance). For nearly ideal polarizable electrodes with small ko, large changes in the overpotential are required to generate a significant current. In the limit as ko → ∞, the electrode becomes an ideal non-polarizable electrode and will behave as an electrical short. For nearly ideal non-polarizable electrodes with large ko, small changes in the overpotential will generate large changes in current.
See also
Advanced Simulation Library
Nernst equation
Goldman equation
Tafel equation
Notes
References
External links
Chemical kinetics
Electrochemical equations
Physical chemistry | Butler–Volmer equation | [
"Physics",
"Chemistry",
"Mathematics"
] | 1,973 | [
"Chemical reaction engineering",
"Applied and interdisciplinary physics",
"Mathematical objects",
"Equations",
"Electrochemistry",
"nan",
"Chemical kinetics",
"Physical chemistry",
"Electrochemical equations"
] |
15,878,779 | https://en.wikipedia.org/wiki/Contact%20dynamics | Contact dynamics deals with the motion of multibody systems subjected to unilateral contacts and friction. Such systems are omnipresent in many multibody dynamics applications. Consider for example
Contacts between wheels and ground in vehicle dynamics
Squealing of brakes due to friction induced oscillations
Motion of many particles, spheres which fall in a funnel, mixing processes (granular media)
Clockworks
Walking machines
Arbitrary machines with limit stops, friction.
Anatomic tissues (skin, iris/lens, eyelids/anterior ocular surface, joint cartilages, vascular endothelium/blood cells, muscles/tendons, et cetera)
In the following it is discussed how such mechanical systems with unilateral contacts and friction can be modeled and how the time evolution of such systems can be obtained by numerical integration. In addition, some examples are given.
Modeling
The two main approaches for modeling mechanical systems with unilateral contacts and friction are the regularized and the non-smooth approach. In the following, the two approaches are introduced using a simple example. Consider a block which can slide or stick on a table (see figure 1a). The motion of the block is described by the equation of motion, whereas the friction force is unknown (see figure 1b). In order to obtain the friction force, a separate force law must be specified which links the friction force to the associated velocity of the block.
Non-smooth approach
A more sophisticated approach is the non-smooth approach, which uses set-valued force laws to model mechanical systems with unilateral contacts and friction. Consider again the block which slides or sticks on the table. The associated set-valued friction law of type Sgn is depicted in figure 3. Regarding the sliding case, the friction force is given. Regarding the sticking case, the friction force is set-valued and determined according to an additional algebraic constraint.
To conclude, the non-smooth approach changes the underlying mathematical structure if required and leads to a proper description of mechanical systems with unilateral contacts and friction. As a consequence of the changing mathematical structure, impacts can occur, and the time evolutions of the positions and the velocities can not be assumed to be smooth anymore. As a consequence, additional impact equations and impact laws have to be defined. In order to handle the changing mathematical structure, the set-valued force laws are commonly written as inequality or inclusion problems. The evaluation of these inequalities/inclusions is commonly done by solving linear (or nonlinear) complementarity problems, by quadratic programming or by transforming the inequality/inclusion problems into projective equations which can be solved iteratively by Jacobi or Gauss–Seidel techniques.
The non-smooth approach provides a new modeling approach for mechanical systems with unilateral contacts and friction, which also incorporates the whole of classical mechanics subjected to bilateral constraints. The approach is associated with the classical DAE theory and leads to robust integration schemes.
Numerical integration
The integration of regularized models can be done by standard stiff solvers for ordinary differential equations. However, oscillations induced by the regularization can occur. Considering non-smooth models of mechanical systems with unilateral contacts and friction, two main classes of integrators exist, the event-driven and the so-called time-stepping integrators.
Event-driven integrators
Event-driven integrators distinguish between smooth parts of the motion in which the underlying structure of the differential equations does not change, and in events or so-called switching points at which this structure changes, i.e. time instants at which a unilateral contact closes or a stick slip transition occurs. At these switching points, the set-valued force (and additional impact) laws are evaluated in order to obtain a new underlying mathematical structure on which the integration can be continued. Event-driven integrators are very accurate but are not suitable for systems with many contacts.
Time-stepping integrators
Time-stepping integrators are dedicated numerical schemes for mechanical systems with many contacts. The first time-stepping integrator was introduced by J.J. Moreau. The integrators do not aim at resolving switching points and are therefore very robust in application. As the integrators work with the integral of the contact forces and not with the forces itself, the methods can handle both motion and impulsive events like impacts. As a drawback, the accuracy of time-stepping integrators is low. This can be fixed by using a step-size refinement at switching points. Smooth parts of the motion are processed by larger step sizes, and higher order integration methods can be used to increase the integration order.
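The following Python fragment is a minimal sketch of a Moreau-type time-stepping scheme for a single point mass bouncing on a rigid floor; the restriction to one contact and all numerical values are illustrative assumptions rather than a reproduction of any of the integrators cited here.

```python
# Minimal Moreau-style time-stepping sketch: a point mass bouncing on a rigid
# floor. All values are illustrative assumptions, not taken from the cited codes.
m, g = 1.0, 9.81      # mass (kg) and gravitational acceleration (m/s^2)
e = 0.5               # Newton coefficient of restitution (assumed)
h = 1e-3              # time step (s)
q, u = 1.0, 0.0       # initial height (m) and vertical velocity (m/s)

trajectory = []
for _ in range(5000):
    q_mid = q + 0.5 * h * u          # midpoint position used for the gap test
    u_free = u + h * (-g)            # velocity update from smooth forces only
    if q_mid <= 0.0:                 # contact active: enforce the impact law
        # Contact percussion realising u_new = -e*u (Newton), never attractive
        P = max(0.0, -m * (u_free + e * u))
        u_new = u_free + P / m
    else:                            # contact open: plain smooth dynamics
        u_new = u_free
    q = q_mid + 0.5 * h * u_new      # complete the position update
    u = u_new
    trajectory.append(q)
```

Each step advances the velocity with the smooth forces, tests the gap at the midpoint position and, if the contact is active, computes a contact percussion from the Newton impact law before completing the position update; no attempt is made to locate the impact time exactly.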
Examples
This section gives some examples of mechanical systems with unilateral contacts and friction. The results have been obtained by a non-smooth approach using time-stepping integrators.
Granular materials
Time-stepping methods are especially well suited for the simulation of granular materials. Figure 4 depicts the simulation of mixing 1000 disks.
Billiard
Consider two colliding spheres in a billiard play. Figure 5a shows some snapshots of two colliding spheres, figure 5b depicts the associated trajectories.
Wheely of a motorbike
If a motorbike is accelerated too fast, it does a wheelie. Figure 6 shows some snapshots of a simulation.
Motion of the woodpecker toy
The woodpecker toy is a well known benchmark problem in contact dynamics. The toy consists of a pole, a sleeve with a hole that is slightly larger than the diameter of the pole, a spring and the woodpecker body. In operation, the woodpecker moves down the pole performing some kind of pitching motion, which is controlled by the sleeve. Figure 7 shows some snapshots of a simulation.
A simulation and visualization can be found at https://github.com/gabyx/Woodpecker.
See also
Multibody dynamics
Contact mechanics: Applications with unilateral contacts and friction. Static applications (contact between deformable bodies) and dynamic applications (Contact dynamics).
Lubachevsky-Stillinger algorithm of simulating compression of large assemblies of hard particles
References
Further reading
Acary V. and Brogliato, B. Numerical Methods for Nonsmooth Dynamical Systems. Applications in Mechanics and Electronics. Springer Verlag, LNACM 35, Heidelberg, 2008.
Brogliato B. Nonsmooth Mechanics. Models, Dynamics and Control Communications and Control Engineering Series Springer-Verlag, London, 2016 (third Ed.)
Drumwright, E. and Shell, D. Modeling Contact Friction and Joint Friction in Dynamic Robotic Simulation Using the Principle of Maximum Dissipation. Springer Tracks in Advanced Robotics: Algorithmic Foundations of Robotics IX, 2010
Glocker, Ch. Dynamik von Starrkoerpersystemen mit Reibung und Stoessen, volume 18/182 of VDI Fortschrittsberichte Mechanik/Bruchmechanik. VDI Verlag, Düsseldorf, 1995
Glocker Ch. and Studer C. Formulation and preparation for Numerical Evaluation of Linear Complementarity Systems. Multibody System Dynamics 13(4):447-463, 2005
Jean M. The non-smooth contact dynamics method. Computer Methods in Applied mechanics and Engineering 177(3-4):235-257, 1999
Moreau J.J. Unilateral Contact and Dry Friction in Finite Freedom Dynamics, volume 302 of Non-smooth Mechanics and Applications, CISM Courses and Lectures. Springer, Wien, 1988
Pfeiffer F., Foerg M. and Ulbrich H. Numerical aspects of non-smooth multibody dynamics. Comput. Methods Appl. Mech. Engrg 195(50-51):6891-6908, 2006
Potra F.A., Anitescu M., Gavrea B. and Trinkle J. A linearly implicit trapezoidal method for integrating stiff multibody dynamics with contacts, joints and friction. Int. J. Numer. Meth. Engng 66(7):1079-1124, 2006
Stewart D.E. and Trinkle J.C. An Implicit Time-Stepping Scheme for Rigid Body Dynamics with Inelastic Collisions and Coulomb Friction. Int. J. Numer. Methods Engineering 39(15):2673-2691, 1996
Studer C. Augmented time-stepping integration of non-smooth dynamical systems, PhD Thesis ETH Zurich, ETH E-Collection, to appear 2008
Studer C. Numerics of Unilateral Contacts and Friction—Modeling and Numerical Time Integration in Non-Smooth Dynamics, Lecture Notes in Applied and Computational Mechanics, Volume 47, Springer, Berlin, Heidelberg, 2009
External links
Multibody research group, Center of Mechanics, ETH Zurich.
Lehrstuhl für angewandte Mechanik TU Munich.
BiPoP Team, INRIA Rhone-Alpes, France,
Siconos software. An open-source software dedicated to the modeling and the simulation or nonsmooth dynamical systems, especially mechanical systems with contact and Coulomb's friction
Multibody dynamics, Rensselaer Polytechnic Institute.
dynamY software
LMGC90 software
MigFlow software
Solfec software
GRSFramework Granular Rigid Body Simulation Framework developed at IMES in Ch. Glocker's group (High-Performance Computing with MPI), 2016
Chrono, an open source multi-physics simulation engine, see also project website 2017
Mechanics
Dynamical systems | Contact dynamics | [
"Physics"
] | 1,985 | [
"Physical phenomena",
"Motion (physics)",
"Classical mechanics",
"Dynamics (mechanics)"
] |
15,878,841 | https://en.wikipedia.org/wiki/Unilateral%20contact | In contact mechanics, the term unilateral contact, also called unilateral constraint, denotes a mechanical constraint which prevents penetration between two rigid/flexible bodies.
Constraints of this kind are omnipresent in non-smooth multibody dynamics applications, such as granular flows, legged robots, vehicle dynamics, particle damping, imperfect joints, or rocket landings. In these applications, the unilateral constraints can result in impacts, therefore requiring suitable methods to deal with such constraints.
Modelling of the unilateral constraints
There are mainly two kinds of methods to model the unilateral constraints. The first kind is based on smooth contact dynamics, including methods using Hertz's models, penalty methods, and some regularization force models, while the second kind is based on the non-smooth contact dynamics, which models the system with unilateral contacts as variational inequalities.
Smooth contact dynamics
In this method, normal forces generated by the unilateral constraints are modelled according to the local material properties of bodies. In particular, contact force models are derived from continuum mechanics, and expressed as functions of the gap and the impact velocity of bodies. As an example, an illustration of the classic Hertz contact model is shown in the figure on the right. In such model, the contact is explained by the local deformation of bodies. More contact models can be found in some review scientific works or in the article dedicated to contact mechanics.
Non-smooth contact dynamics
In the non-smooth method, unilateral interactions between bodies are fundamentally modelled by the Signorini condition for non-penetration, and impact laws are used to define the impact process. The Signorini condition can be expressed as the complementarity problem:
$0 \le g \;\perp\; \lambda \ge 0$,
where $g$ denotes the distance between the two bodies and $\lambda$ denotes the contact force generated by the unilateral constraints, as shown in the figure below. Moreover, in terms of the concept of proximal point of convex theory, the Signorini condition can be equivalently expressed as:
$\lambda = \operatorname{prox}_{\mathbb{R}^{+}_{0}}\!\left(\lambda - r\,g\right)$,
where $r > 0$ denotes an auxiliary parameter, and $\operatorname{prox}_{C}(x)$ represents the proximal point in the set $C$ to the variable $x$, defined as:
$\operatorname{prox}_{C}(x) = \underset{x^{*} \in C}{\arg\min}\;\lVert x - x^{*} \rVert$.
Both the expressions above represent the dynamic behaviour of unilateral constraints: on the one hand, when the normal distance is above zero ($g > 0$), the contact is open, which means that there is no contact force between the bodies, $\lambda = 0$; on the other hand, when the normal distance is equal to zero ($g = 0$), the contact is closed, resulting in $\lambda \ge 0$.
When implementing non-smooth theory based methods, the velocity Signorini condition or the acceleration Signorini condition are actually employed in most cases. The velocity Signorini condition is expressed as:
$\dot{g} \ge 0,\quad \lambda \ge 0,\quad \dot{g}\,\lambda = 0 \quad (\text{if } g = 0)$,
where $\dot{g}$ denotes the relative normal velocity after impact. The velocity Signorini condition should be understood together with the previous conditions on the gap $g$. The acceleration Signorini condition is considered under closed contact ($g = 0$, $\dot{g} = 0$), as:
$\ddot{g} \ge 0,\quad \lambda \ge 0,\quad \ddot{g}\,\lambda = 0$,
where the overdots denote the second-order derivative with respect to time.
When using this method for unilateral constraints between two rigid bodies, the Signorini condition alone is not enough to model the impact process, so impact laws, which give the information about the states before and after the impact, are also required. For example, when the Newton restitution law is employed, a coefficient of restitution $e$ will be defined as: $e = -\dot{g}^{+}/\dot{g}^{-}$, where $\dot{g}^{-}$ denotes the relative normal velocity before impact and $\dot{g}^{+}$ the relative normal velocity after impact.
Frictional unilateral constraints
For frictional unilateral constraints, the normal contact forces $\lambda_N$ are modelled by one of the methods above, while the friction forces are commonly described by means of Coulomb's friction law. Coulomb's friction law can be expressed as follows: when the tangential velocity $u_T$ is not equal to zero, namely when the two bodies are sliding, the friction force is proportional to the normal contact force, $\lVert\lambda_T\rVert = \mu\,\lambda_N$, and directed opposite to the sliding velocity; when instead the tangential velocity is equal to zero, namely when the two bodies are relatively steady, the friction force is no more than the maximum of the static friction force, $\lVert\lambda_T\rVert \le \mu\,\lambda_N$. This relationship can be summarised using the maximum dissipation principle, as
$\lambda_T = \underset{\lambda_T^{*} \in C}{\arg\min}\;(\lambda_T^{*})^{\mathsf T} u_T$
where
$C = \{\, \lambda_T : \lVert\lambda_T\rVert \le \mu\,\lambda_N \,\}$
represents the friction cone, and $\mu$ denotes the kinematic friction coefficient. Similarly to the normal contact force, the formulation above can be equivalently expressed in terms of the notion of proximal point as:
$\lambda_T = \operatorname{prox}_{C}\!\left(\lambda_T - r\,u_T\right)$,
where $r > 0$ denotes an auxiliary parameter.
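A minimal Python sketch of the two proximal-point maps just described; the function names, the iteration parameter r, and all numerical values are assumptions made for illustration.

```python
import numpy as np

def prox_normal(lam_n, gap, r=1.0):
    """Fixed-point form of the Signorini condition: project lam_n - r*gap onto R+."""
    return max(0.0, lam_n - r * gap)

def prox_friction(lam_t, u_t, lam_n, mu, r=1.0):
    """Project the tangential trial force onto the friction disc of radius mu*lam_n."""
    trial = lam_t - r * u_t
    radius = mu * lam_n
    norm = np.linalg.norm(trial)
    return trial if norm <= radius else trial * (radius / norm)

# One fixed-point sweep for a single closed contact (made-up numbers):
lam_n = prox_normal(5.0, gap=0.0)
lam_t = prox_friction(np.array([2.0, 0.0]), u_t=np.array([0.3, 0.0]),
                      lam_n=lam_n, mu=0.4)
print(lam_n, lam_t)
```

In practice these projections are embedded in an iteration over all contacts (for example Gauss–Seidel sweeps) together with the equations of motion, rather than applied once in isolation.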
Solution techniques
If the unilateral constraints are modelled by the continuum mechanics based contact models, the contact forces can be computed directly through an explicit mathematical formula, that depends on the contact model of choice. If instead the non-smooth theory based method is employed, there are two main formulations for the solution of the Signorini conditions: the nonlinear/linear complementarity problem (N/LCP) formulation and the augmented Lagrangian formulation. With respect to the solution of contact models, the non-smooth method is more tedious, but less costly from the computational viewpoint. A more detailed comparison of solution methods using contact models and non-smooth theory was carried out by Pazouki et al.
N/LCP formulations
Following this approach, the solution of dynamics equations with unilateral constraints is transformed into the solution of N/LCPs. In particular, for frictionless unilateral constraints or unilateral constraints with planar friction, the problem is transformed into LCPs, while for frictional unilateral constraints, the problem is transformed into NCPs. To solve LCPs, the pivoting algorithm, originating from the algorithm of Lemke and Dantzig, is the most popular method. Unfortunately, however, numerical experiments show that the pivoting algorithm may fail when handling systems with a large number of unilateral contacts, even using the best optimizations. For NCPs, using a polyhedral approximation can transform the NCPs into a set of LCPs, which can then be solved by the LCP solver. Other approaches beyond these methods, such as NCP-function or cone complementarity problem (CCP) based methods, are also employed to solve NCPs.
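As an illustration of the LCP route, the following Python fragment solves a small frictionless contact LCP with a projected Gauss–Seidel iteration; this is a sketch, not the Lemke–Dantzig pivoting scheme mentioned above, and the matrix and vector are made-up example data.

```python
import numpy as np

def projected_gauss_seidel(A, b, iterations=200):
    """Solve the LCP: find lam >= 0 with A @ lam + b >= 0 and lam.T @ (A @ lam + b) = 0."""
    lam = np.zeros_like(b)
    for _ in range(iterations):
        for i in range(len(b)):
            residual = b[i] + A[i] @ lam - A[i, i] * lam[i]
            lam[i] = max(0.0, -residual / A[i, i])
    return lam

# Made-up 2-contact Delassus matrix and bias vector, purely for illustration
A = np.array([[2.0, 0.5],
              [0.5, 1.5]])
b = np.array([-1.0, 0.3])
lam = projected_gauss_seidel(A, b)
print(lam, A @ lam + b)   # lam >= 0, residual >= 0, lam[i]*residual[i] ~ 0
```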
Augmented Lagrangian formulation
Different from the N/LCP formulations, the augmented Lagrangian formulation uses the proximal functions described above, . Together with dynamics equations, this formulation is solved by means of root-finding algorithms. A comparative study between LCP formulations and the augmented Lagrangian formulation was carried out by Mashayekhi et al.
See also
Collision response
Variational inequalities
References
Further reading
Open-source software
Open-source codes and non-commercial packages using the non-smooth based method:
Chrono, an open source multi-physics simulation engine, see also project website
Books and articles
Acary V., Brogliato B. Numerical Methods for Nonsmooth Dynamical Systems. Applications in Mechanics and Electronics. Springer Verlag, LNACM 35, Heidelberg, 2008.
Brogliato B. Nonsmooth Mechanics. Communications and Control Engineering Series Springer-Verlag, London, 1999 (2dn Ed.)
Demyanov, V.F., Stavroulakis, G.E., Polyakova, L.N., Panagiotopoulos, P.D. "Quasidifferentiability and Nonsmooth Modelling in Mechanics, Engineering and Economics" Springer 1996
Glocker, Ch. Dynamik von Starrkoerpersystemen mit Reibung und Stoessen, volume 18/182 of VDI Fortschrittsberichte Mechanik/Bruchmechanik. VDI Verlag, Düsseldorf, 1995
Glocker Ch. and Studer C. Formulation and preparation for Numerical Evaluation of Linear Complementarity Systems. Multibody System Dynamics 13(4):447-463, 2005
Jean M. The non-smooth contact dynamics method. Computer Methods in Applied mechanics and Engineering 177(3-4):235-257, 1999
Moreau J.J. Unilateral Contact and Dry Friction in Finite Freedom Dynamics, volume 302 of Non-smooth Mechanics and Applications, CISM Courses and Lectures. Springer, Wien, 1988
Pfeiffer F., Foerg M. and Ulbrich H. Numerical aspects of non-smooth multibody dynamics. Comput. Methods Appl. Mech. Engrg 195(50-51):6891-6908, 2006
Potra F.A., Anitescu M., Gavrea B. and Trinkle J. A linearly implicit trapezoidal method for integrating stiff multibody dynamics with contacts, joints and friction. Int. J. Numer. Meth. Engng 66(7):1079-1124, 2006
Stewart D.E. and Trinkle J.C. An Implicit Time-Stepping Scheme for Rigid Body Dynamics with Inelastic Collisions and Coulomb Friction. Int. J. Numer. Methods Engineering 39(15):2673-2691, 1996
Studer C. Augmented time-stepping integration of non-smooth dynamical systems, PhD Thesis ETH Zurich, ETH E-Collection, to appear 2008
Studer C. Numerics of Unilateral Contacts and Friction -- Modeling and Numerical Time Integration in Non-Smooth Dynamics, Lecture Notes in Applied and Computational Mechanics, Volume 47, Springer, Berlin, Heidelberg, 2009
Mechanics | Unilateral contact | [
"Physics",
"Engineering"
] | 1,914 | [
"Mechanics",
"Mechanical engineering"
] |
15,879,259 | https://en.wikipedia.org/wiki/Boreout | Boredom boreout syndrome is a psychological disorder that causes physical illness, mainly caused by mental underload at the workplace due to lack of either adequate quantitative or qualitative workload. One reason for boreout could be that the initial job description does not match the actual work.
The syndrome was first given this name in 2007 in Diagnose Boreout, a book by Peter Werder and Philippe Rothlin, two Swiss business consultants.
It had earlier been published about under the name "underchallenged burnout" by American teacher Barry A. Farber in 1991.
Symptoms and consequences
Symptoms of the bore-out syndrome are described by the Frankfurt psychotherapist Wolfgang Merkle as similar to the burnout syndrome. These include depression, listlessness and insomnia, but also tinnitus, susceptibility to infection, stomach upset, headache and dizziness.
Elements
According to Peter Werder and Philippe Rothlin, the absence of meaningful tasks, rather than the presence of stress, is many workers' chief problem. Ruth Stock-Homburg defines boreout as a negative psychological state with low work-related arousal.
Boreout has been studied in terms of its key dimensions. In their practitioners book, Werder and Rothlin suggest elements: boredom, lack of challenge, and lack of interest. These authors disagree with the common perceptions that a demotivated employee is lazy; instead, they claim that the employee has lost interest in work tasks. Those suffering from boreout are "dissatisfied with their professional situation" in that they are frustrated at being prevented, by institutional mechanisms or obstacles as opposed to by their own lack of aptitude, from fulfilling their potential (as by using their skills, knowledge, and abilities to contribute to their company's development) and/or from receiving official recognition for their efforts.
Relying on empirical data from service employees, Stock-Homburg identifies three components of boreout: job boredom, crisis of meaning and crisis of growth, which arise from a loss of resources due to a lack of challenges.
Peter Werder and Philippe Rothlin suggest that the reason for researchers' and employers' overlooking the magnitude of boreout-related problems is that they are underreported because revealing them exposes a worker to the risk of social stigma and adverse economic effects. (By the same token, many managers and co-workers consider an employee's level of workplace stress to be indicative of that employee's status in the workplace.)
There are several reasons boreout might occur. The authors note that boreout is unlikely to occur in many non-office jobs where the employee must focus on finishing a specific task (e.g., a surgeon) or helping people in need (e.g., a childcare worker or nanny). In terms of group processes, it may well be that the boss or certain forceful or ambitious individuals with the team take all the interesting work leaving only a little of the most boring tasks for the others. Alternatively, the structure of the organization may simply promote this inefficiency. Of course, few if any employees (even among those who would prefer to leave) want to be fired or laid off, so the vast majority are unwilling and unlikely to call attention to the dispensable nature of their role.
As such, even if an employee has very little work to do or would only expect to be given qualitative inadequate work, they give the appearance of "looking busy" (e.g., ensuring that a work-related document is open on one's computer, covering one's desk with file folders, and carrying briefcases (whether empty or loaded) from work to one's home and vice versa).
Coping strategies
The symptoms of boreout lead employees to adopt coping or work-avoidance strategies that create the appearance that they are already under stress, suggesting to management both that they are heavily "in demand" as workers and that they should not be given additional work: "The boreout sufferer's aim is to look busy, to not be given any new work by the boss and, certainly, not to lose the job."
Boreout strategies include:
Stretching work strategy: This involves drawing out tasks so they take much longer than necessary. For example, if an employee's sole assignment during a work week is a report that takes three work days, the employee will "stretch" this three days of work over the entire work week. Stretching strategies vary from employee to employee. Some employees may do the entire report in the first three days, and then spend the remaining days surfing the Internet, planning their holiday, browsing online shopping websites, sending personal e-mails, and so on (all the while ensuring that their workstation is filled with the evidence of "hard work", by having work documents ready to be switched-to on the screen). Alternatively, some employees may "stretch" the work over the entire work week by breaking up the process with a number of pauses to send personal e-mails, go outside for a cigarette, get a coffee, chat with friends in other parts of the company, or even go to the washroom for a 10-minute nap.
Pseudo-commitment strategy: The pretence of commitment to the job by attending work and sitting at the desk, sometimes after work hours. As well, demotivated employees may stay at their desks to eat their lunch to give the impression that they are working through the lunch hour; in fact, they may be sending personal e-mails or reading online articles unrelated to work. An employee who spends the afternoon on personal phone calls may learn how to mask this by sounding serious and professional during their responses, to give the impression that it is a work-related call. For example, if a bureaucrat is chatting with a friend to set up a dinner date, when the friend suggests a time, the bureaucrat can respond that "we can probably fit that meeting time in."
Consequences for employees
Consequences of boreout for employees include dissatisfaction, fatigue as well as ennui and low self-esteem. The paradox of boreout is that despite hating the situation, employees feel unable to ask for more challenging tasks, to raise the situation with superiors or even look for a new job. The authors do, however, propose a solution: first, one must analyse one's personal job situation, then look for a solution within the company and finally if that does not help, look for a new job. If all else fails, turning to friends, family, or other co-workers for support can be extremely beneficial until any of the previously listed options become viable.
Consequences for businesses
Stock-Homburg empirically investigated the impact of the three boreout dimensions among service employees - showing that a crisis of meaning as well as a crisis of growth had a negative impact on the innovative work behavior. Another study showed that boreout negatively affects customer orientation of service employees.
Prammer studied a variety of boreout effects on businesses:
The continued presence of dissatisfied employees, who no longer work because they have internally resigned, costs the company money.
If employees actively quit internally, they can damage the operation by demonstrating that they have mentally rescinded the employment contract.
The qualification of the employee is not recognized (the company cannot use its potential).
The qualified employee changes jobs (and takes their experience), which can endanger entire business locations.
As long as a recession continues, the affected employee remains in the company and leaves the company at the appropriate opportunity. In-house, a problem of distribution of work orders arises.
Tabooing causes real problems to go undetected.
Whole generations of employees are lost (because they have no opportunity to fully realize their potential).
See also
Acedia (from Greek), a state of listlessness or torpor, of not caring or not being concerned with one's position or condition in the world
Boredom
Bullshit Jobs: A Theory, a 2018 book by anthropologist David Graeber that postulates the existence of meaningless jobs and analyzes their societal harm
Occupational burnout
Group dynamics
Social alienation
Stress (biology)
Office Space, a film that features bored employees in unfulfilling jobs
Banishment room
References
Further reading
Boreout! Overcoming workplace demotivation. Peter Werder and Philippe Rothlin, (English edition) Kogan Page, October 2008.
The Living Dead: Switched Off, Zoned Out – The Shocking Truth About Office Life. David Bolchover, Capstone, September 2005.
City Slackers: Workers of the world you are wasting your time. Steve McKevitt, Cyan Books, April 2006.
Bullshit Jobs: A Theory. David Graeber, May 2018
External links
Wasted Time At Work Costing Companies Billions Salary.com
No-Nonsense Answers and Advice 75 things you can do when you are bored.
Psychopathological syndromes
Organizational behavior
Work | Boreout | [
"Biology"
] | 1,803 | [
"Behavior",
"Organizational behavior",
"Human behavior"
] |
15,879,414 | https://en.wikipedia.org/wiki/SN%201992bd | SN 1992bd was a type II supernova event in NGC 1097, positioned some 1.5″ east and 9″ south of the galactic nucleus. It was discovered by astronomers Chris Smith and Lisa Wells on October 12, 1992. Spectra of the object collected October 17 showed it to have an expansion velocity of 7,500 km/s. Subsequent examination of archival images from the Hubble Space Telescope showed an image of the supernova had been captured on September 20, 1992, 12 days prior to its discovery with ground-based telescopes. The eruption occurred in the circumnuclear star-forming region of the galaxy.
See also
Spiral Galaxy NGC 1097
References
External links
Simbad
SN 1992bd
Astronomical objects discovered in 1992 | SN 1992bd | [
"Astronomy"
] | 153 | [
"Fornax",
"Constellations"
] |
15,879,602 | https://en.wikipedia.org/wiki/SN%201999eu | SN 1999eu was a type IIP supernova that happened in NGC 1097, a barred spiral galaxy about 45 million light years away, in the constellation Fornax. It was discovered 5 November 1999, possibly three months after its initial brightening, and is unusually under-luminous for a type II supernova.
References
External links
Light curves and spectra on the Open Supernova Catalog
Supernovae
SN 1999eu
Astronomical objects discovered in 1999 | SN 1999eu | [
"Chemistry",
"Astronomy"
] | 92 | [
"Supernovae",
"Astronomical events",
"Constellations",
"Explosions",
"Fornax"
] |
15,879,621 | https://en.wikipedia.org/wiki/Stalk%20%28sheaf%29 | In mathematics, the stalk of a sheaf is a mathematical construction capturing the behaviour of a sheaf around a given point.
Motivation and definition
Sheaves are defined on open sets, but the underlying topological space $X$ consists of points. It is reasonable to attempt to isolate the behavior of a sheaf $\mathcal{F}$ at a single fixed point $x$ of $X$. Conceptually speaking, we do this by looking at small neighborhoods of the point. If we look at a sufficiently small neighborhood of $x$, the behavior of the sheaf $\mathcal{F}$ on that small neighborhood should be the same as the behavior of $\mathcal{F}$ at that point. Of course, no single neighborhood will be small enough, so we will have to take a limit of some sort.
The precise definition is as follows: the stalk of $\mathcal{F}$ at $x$, usually denoted $\mathcal{F}_x$, is:
$\mathcal{F}_x := \varinjlim_{U \ni x} \mathcal{F}(U)$
Here the direct limit is indexed over all the open sets $U$ containing $x$, with order relation induced by reverse inclusion ($U < V$ if $U \supseteq V$). By definition (or universal property) of the direct limit, an element of the stalk is an equivalence class of elements $s_U \in \mathcal{F}(U)$, where two such sections $s_U$ and $s_V$ are considered equivalent if the restrictions of the two sections coincide on some neighborhood of $x$.
Alternative definition
There is another approach to defining a stalk that is useful in some contexts. Choose a point of , and let be the inclusion of the one point space into . Then the stalk is the same as the inverse image sheaf . Notice that the only open sets of the one point space are and , and there is no data over the empty set. Over , however, we get:
Remarks
For some categories C the direct limit used to define the stalk may not exist. However, it exists for most categories that occur in practice, such as the category of sets or most categories of algebraic objects such as abelian groups or rings, which are indeed cocomplete.
There is a natural morphism $\mathcal{F}(U) \to \mathcal{F}_x$ for any open set $U$ containing $x$: it takes a section $s$ in $\mathcal{F}(U)$ to its germ at $x$, that is, its equivalence class in the direct limit. This is a generalization of the usual concept of a germ, which can be recovered by looking at the stalks of the sheaf of continuous functions on $X$.
Examples
Constant sheaves
The constant sheaf $\underline{S}$ associated to some set (or group, ring, etc.) $S$ is a sheaf for which $\underline{S}_x = S$ for all $x$ in $X$.
Sheaves of analytic functions
For example, in the sheaf of analytic functions on an analytic manifold, a germ of a function at a point determines the function in a small neighborhood of a point. This is because the germ records the function's power series expansion, and all analytic functions are by definition locally equal to their power series. Using analytic continuation, we find that the germ at a point determines the function on any connected open set where the function can be everywhere defined. (This does not imply that all the restriction maps of this sheaf are injective!)
Sheaves of smooth functions
In contrast, for the sheaf of smooth functions on a smooth manifold, germs contain some local information, but are not enough to reconstruct the function on any open neighborhood. For example, let $f$ be a bump function that is identically one in a neighborhood of the origin and identically zero far away from the origin. On any sufficiently small neighborhood containing the origin, $f$ is identically one, so at the origin it has the same germ as the constant function with value 1. Suppose that we want to reconstruct $f$ from its germ. Even if we know in advance that $f$ is a bump function, the germ does not tell us how large its bump is. From what the germ tells us, the bump could be infinitely wide, that is, $f$ could equal the constant function with value 1. We cannot even reconstruct $f$ on a small open neighborhood $U$ containing the origin, because we cannot tell whether the bump of $f$ fits entirely in $U$ or whether it is so large that $f$ is identically one in $U$.
On the other hand, germs of smooth functions can distinguish between the constant function with value one and a function such as $1 + e^{-1/x^2}$ (defined to equal 1 at the origin), because the latter function is not identically one on any neighborhood of the origin. This example shows that germs contain more information than the power series expansion of a function, because the power series of $1 + e^{-1/x^2}$ is identically one. (This extra information is related to the fact that the stalk of the sheaf of smooth functions at the origin is a non-Noetherian ring. The Krull intersection theorem says that this cannot happen for a Noetherian ring.)
Quasi-coherent sheaves
On an affine scheme $X = \operatorname{Spec}(A)$, the stalk of the quasi-coherent sheaf $\tilde{M}$ corresponding to an $A$-module $M$ in a point $x$ corresponding to a prime ideal $\mathfrak{p}$ is just the localization $M_{\mathfrak{p}}$.
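As a routine instance of this statement (included purely for illustration), consider the structure sheaf of the affine scheme $\operatorname{Spec}(\mathbb{Z})$: at the point corresponding to a prime ideal $(p)$ its stalk is
$\mathcal{O}_{\operatorname{Spec}\mathbb{Z},\,(p)} \;\cong\; \mathbb{Z}_{(p)} \;=\; \left\{ \tfrac{a}{b} \in \mathbb{Q} : a, b \in \mathbb{Z},\; p \nmid b \right\},$
the localization of $\mathbb{Z}$ at $(p)$, that is, the ring of rational numbers whose denominator is not divisible by $p$.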
Skyscraper sheaf
On any topological space, the skyscraper sheaf associated to a closed point $x$ and a group or ring $G$ has the stalks $0$ off $x$ and $G$ on $x$ (hence the name skyscraper). This idea makes more sense if one adopts the common visualisation of functions mapping from some space above to a space below; with this visualisation, the value $G$ sits directly above the point $x$. The same property holds for any point $x$ if the topological space in question is a T1 space, since every point of a T1 space is closed. This feature is the basis of the construction of Godement resolutions, used for example in algebraic geometry to get functorial injective resolutions of sheaves.
Properties of the stalk
As outlined in the introduction, stalks capture the local behaviour of a sheaf. As a sheaf is supposed to be determined by its local restrictions (see gluing axiom), it can be expected that the stalks capture a fair amount of the information that the sheaf is encoding. This is indeed true:
A morphism of sheaves is an isomorphism, epimorphism, or monomorphism, respectively, if and only if the induced morphisms on all stalks have the same property. (However it is not true that two sheaves, all of whose stalks are isomorphic, are isomorphic, too, because there may be no map between the sheaves in question.)
In particular:
A sheaf is zero (if we are dealing with sheaves of groups), if and only if all stalks of the sheaf vanish. Therefore, the exactness of a given functor can be tested on the stalks, which is often easier as one can pass to smaller and smaller neighbourhoods.
Both statements are false for presheaves. However, stalks of sheaves and presheaves are tightly linked:
Given a presheaf $P$ and its sheafification $P^{\#}$, the stalks of $P$ and $P^{\#}$ agree. This follows from the fact that the sheaf $P^{\#}$ is the image of $P$ through the left adjoint (because the sheafification functor is left adjoint to the inclusion functor) and the fact that left adjoints preserve colimits.
Reference
External links
stalk in nLab
Kiran Kedlaya. 18.726 Algebraic Geometry (LEC # 3 - 5 Sheaves), Spring 2009. Massachusetts Institute of Technology: MIT OpenCourseWare Creative Commons BY-NC-SA.
Sheaf theory
"Mathematics"
] | 1,444 | [
"Topology",
"Sheaf theory",
"Mathematical structures",
"Category theory"
] |
15,880,199 | https://en.wikipedia.org/wiki/Polog%20Statistical%20Region | The Polog Statistical Region (; ) is one of eight statistical regions of the Republic of North Macedonia. Polog, located in the northwestern part of the country, borders Albania and Kosovo. Internally, it borders the Southwestern and Skopje statistical regions.
Municipalities
Polog is divided into 9 municipalities:
Bogovinje
Brvenica
Gostivar
Jegunovce
Mavrovo and Rostuša
Tearce
Tetovo
Vrapčište
Želino
Demographics
Population
The population of the Polog statistical region is 304,125, according to the last population census, conducted in 2002.
Ethnicities
Polog is the only statistical region in North Macedonia where Macedonians are not the majority.
See also
Polog
References
Statistical regions of North Macedonia | Polog Statistical Region | [
"Mathematics"
] | 148 | [
"Statistical regions of North Macedonia",
"Statistical concepts",
"Statistical regions"
] |
15,880,542 | https://en.wikipedia.org/wiki/7%20post%20shaker | The 7 post shaker is a piece of test equipment used to perform technical analysis on race cars. By applying shaking forces the shaker can emulate banking loads, lateral load transfer, longitudinal weight transfer and ride height sensitive downforce to emulate specific racetracks.
Uses
The 7 post shaker is used for many vehicles in different driving conditions. Earlier versions were the 5 post shaker and the 4 post shaker. The 4 post shaker is commonly used by vehicle manufacturers to investigate squeaks and rattles. This technology was first used in Formula 1 in the late 1990s, and is now also used by other series such as NASCAR and the Indy Racing League. NASCAR teams with 7 post rigs include Hendrick Motorsports, Richard Childress Racing, Chip Ganassi Racing, Furniture Row Racing, and Roush Fenway Racing. The car driven by Jeff Gordon is shown on a 7 post rig in this video.
Vehicle designers use the results of the testing on the 7 post shaker to adjust spring rates, shock valving and steering ratio to best suit conditions of a specific emulated track.
Manufacturers do not normally use a 7 post rig for road cars because these vehicles are not normally subject to the same aerodynamic effects as a race car operating at high speeds. However, German suspension company KW suspensions is one of the few companies to make use of a 7 post rig for the development of their road suspension components.
Operation
The 7 post shaker places forces on a vehicle and records the forces that the vehicle puts back into the system. The 7 post applies lift, downforce, road irregularity forces and load transfer due to braking, acceleration and cornering. The vehicle suspension and drivetrain components respond to these forces, chassis and suspension frequency oscillations (under 30 Hz), and tire, engine, transmission and drive axle vibrations at higher frequencies. The forces applied are calculated from a model of the racetrack, the weight of the car and driver, tire pressure, engine RPM, and driveline RPM. The forces that the testing engineers want are placed on the car through the use of four main hydraulic actuators, acting on the tires, which can generate large forces at high actuator velocities. While the actuators are capable of producing frequencies as high as 500 Hz, this is not necessary as the elasticity of the rubber and air in the tires will absorb most inputs above 50 Hz. The remaining three posts are known as aeroloader actuators, and are responsible for the sprung mass of the vehicle. The forces that these three actuators represent are inertial loads that come from entering a curve or aerodynamic loading and unloading in the form of downforce or lift from a wing. These forces are small on road cars, where speeds are comparatively low, but are significant on a race car, where speeds are far higher.
The very basic parameters that need to be initialized are the vertical input forces to the vehicle from the road surface. The drivers and engineers want to look at how the car reacts to specific tracks, as the car will respond differently at the tri-oval at Talladega Superspeedway, where speeds are very high, than at Bristol Motor Speedway, where the corners are banked 24 to 30 degrees. This data is extremely hard to collect and assemble as the road surface is highly irregular. Once a racetrack is loaded into the testing computer, the vehicle can be loaded onto the 7 post; in the absence of actual track data, swept-sine waves can be used. Further variables are eliminated by using ballast for the weight of the driver and the weight of the fuel in the tank. The test lab temperature is highly regulated to a standard temperature. Once the unit is started, transducers in the form of accelerometers and strain gauges convert the mechanical movement of the vehicle into an electrical signal. This signal is sent to a processor which converts and amplifies the signal, and sends it on to the computer.
Also of particular interest to the engineer is the force between the tire and the road. This is of interest to the car designer because it reflects the grip that the tire has on the road surface. This is more difficult to test because the sampling frequency has to be at least five times as high as the highest frequency. In this case the incoming frequency is 100 Hz, so the sampling frequency must be at least 500 Hz.
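A minimal Python sketch of a swept-sine drive signal of the kind mentioned above; only the 100 Hz bandwidth and the five-times sampling rule are taken from the text, while the sweep length, amplitude, and start frequency are illustrative assumptions.

```python
import numpy as np
from scipy.signal import chirp

f_max = 100.0            # highest road-input frequency of interest, Hz (from the text)
fs = 5 * f_max           # sampling rate: at least five times the highest frequency
duration = 30.0          # sweep length in seconds (assumed)
amplitude = 0.01         # actuator displacement amplitude in metres (assumed)

t = np.arange(0.0, duration, 1.0 / fs)
# Logarithmic swept-sine from 0.5 Hz up to f_max for one of the four wheel actuators
displacement = amplitude * chirp(t, f0=0.5, t1=duration, f1=f_max,
                                 method='logarithmic')
print(len(t), displacement[:5])
```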
In vibration analysis, as in all engineering problems, the output data must be looked at in a methodical way. When testing on the 7 post shaker, all variables are inter-related and can be analyzed while the effects of the actual installation can be quantified. The damping force curve can be extracted from the data to understand how installation stiffness and other variables affect the damping force. Some seemingly unimportant trends need to be verified so the engineers can be sure that the trend will not continue or that the trend is expected.
The analysis path in this case is:
Input - The road or race track
Unsprung mass - Weight not felt by the springs
Tires - Act as dampers to the input forces
Wheels - Add weight
Brakes - Add weight
Springs - Respond directly to the input forces
Sprung mass - The rest of the vehicle, in particular:
Shocks - Dampen input forces appreciably
Frame/Rollcage - Distributes input forces over the entire vehicle
Driver - Directly fatigued by vibration, body roll, and steering wheel feedback
See also
4-poster
Automobile handling
Automotive suspension design
Nyquist rate
Roll center
Scrub radius
Shock absorber
Suspension (vehicle)
Unsprung mass
References
NASCAR
Auto racing equipment
Automotive engineering | 7 post shaker | [
"Engineering"
] | 1,134 | [
"Automotive engineering",
"Mechanical engineering by discipline"
] |
15,880,710 | https://en.wikipedia.org/wiki/Window%20shutter%20hardware | Window shutter hardware, usually made of iron, are hinges and latches that attach to the shutter and a window frame (and in some cases directly attached to stone or brick). The hinges hold the shutter to the structure and allow the shutter to open and close over the window. The latches secure the shutter in the closed (over the window) position. Tie-back hardware can be used to hold the shutter in the open position.
Exterior shutters were vital elements of homes in the colonies. Raised panel shutters provided security against access from ground level. Exterior shutters also proved a first barrier against the elements. In cities, shutters provided privacy screens for the residents. Louvered upstairs shutters were often later additions to the home.
This article describes the evolution of early exterior window shutter hardware, terms and terminology related to shutter hardware and blacksmithing, and American regional styles of installation.
History
Early hardware
In its earliest forms, most hardware was simple and hand-made – usually of readily available materials such as wood or leather. A patch of leather spanning between the stile and jamb and fastened with wooden pegs served to hinge a door or shutter. Hand-carved wooden hinges and pintles, slide bolts and lift-latches were whittled from a variety of woods.
The earliest examples of iron hardware were sponsored by the nobility. Iron itself was expensive and a valued resource for any kingdom and had many other more valuable uses in weaponry and tools.
In the post-Renaissance period industrial advances provided more iron and the emerging merchant/tradesman classes had money to purchase hardware for their homes and warehouses. Examples of hardware excavated from the Jamestown and Plymouth colonies of the 17th century were very ornate in design – typical of that being produced in England at the time.
In Colonial America, hardware was made in England and imported to the colonies. It was illegal for the colonials to produce manufactured goods. America sold iron and charcoal to the British, who used those raw materials and their resident labor force to produce hardware which was then sold back to the captive market in the colonies. Virtually all of the early hardware in New York, Philadelphia, Annapolis, Alexandria, Key West, or anyplace else where British ships could berth, was made in England. Away from the ports and cities where British authority was centered, many locally-made examples of early hardware can be found. Examples of German, French, and Dutch hardware remain in the inland river valleys – the homelands of the early settlers. English hardware, however, was the overwhelming standard in colonial America and set the pattern for all that evolved.
Virtually all of the shutters in colonial times were hung with strap hinges – following the examples in Britain. Strap hinges were strong and secure. The frames of windows were hewn from a single heavy piece of wood into which a heavy pintle could be driven. The rails of the shutter were often six or eight inches high and provided room to position the strap hinge across most of the width of the shutter. The hinges were fastened to the shutters with rivets or nails driven through and clinched on the inside of the closed shutter. Locks of the period followed the form of the strap hinges. The rolled barrel was replaced by a pin, about twice the length of the thickness of the shutter, mounted perpendicular to the face of the lock. The lock would be nailed or riveted on the lock rail of one shutter with the pin positioned about two inches beyond the edge of the shutter. The opposite shutter would be drilled through with a hole to accept the pin protruding from the lock.
To close and secure the shutter: from the inside close the shutter with the hole then close the shutter with the lock. The lock pin passes through the hole and the user drops a simple nail-like key into the hole in the lock pin. The shutter is virtually impregnable from the exterior.
Tie-backs of the Colonial era were mostly of English origin and many were of the "Rattail" style. Variations are noted as different British manufacturers vied to produce a less expensive product. Inland, where local smiths were producing hardware on their own, a wide range of patterns have been noted.
Shutter hardware and the Industrial Revolution
Around 1750, colonial raw materials poured into the British Isles, and factories began to appear. The earlier hardware with its chiseled and filed details fast gave way to less expensive, but equally functional hardware of similar but unadorned design. H and HL hinges are a good example of this transition.
After the American Revolution machines were invented to make screws and to produce rolled iron in thin sheets. By about 1800 cheap screws were readily available. Cast iron technology had long been available – now machine-made screws allowed such hardware to be economically mounted. Butt type hinges can be seen during this "Federal" Period (1800–1830) – but they quickly fell from favor, probably because they were subject to breakage.
A more obvious change in the shutter hardware was noted in shutter bolts. The common slide plate and keeper style of bolt started to appear. It was simpler to fabricate and operate than the earlier "strap style lock". This bolt relied on both the new cheap fasteners and the readily available plate iron. This bolt also relied on machines and "dies." This form of shutter bolt has been made continually ever since.
Strap hinges continued to dominate in the marketplace for hanging shutters. Drive pintles started to be replaced by similar pintles cut off and mounted on a piece of thin plate material and again fastened with the new screws. This is the precursor of the "plate pintle".
Changes in construction have been noted in the same period. Structures were built with openings into which pre-fabricated windows were installed. The earliest examples date from around 1810 and used a variation on the strap hinge. Instead of mounting the pintle to the surface of the structure, a new form was designed. This pintle was a flat plate of about two inches high and notched to one half of its height and formed to a female barrel. Holes were punched in the side of the pintle, and it was screwed directly to the side of the window before the window was installed on the structure. The strap hinges were modified to match the new pintles and the hinge was of the same width as the pintle and notched to one half of its height. A pin to mate with the female pintle was welded in the hinge. Examples of this type proved to be very durable and were in regular and widespread use through the 1870s.
Often when the shutters were removed – usually in the 20th century – cast type pintles were hit with a hammer and broken off flush with the edge of the window. The shutters often found their way into the basements of the home to provide coal bins for newly installed central heat or were nailed up in the barn to partition off pig sties or calf pens.
Cast iron tie-backs became much more popular during the Federal period – usually mounted on arms extending from the window sills. The "Federal Shell" was the dominant pattern in this period.
The American Civil War Era and beyond
The next major change in shutter hardware coincided with the American Civil War era. Heavy presses and punches were in use in factories around the country and a maturing rail transportation system opened inland areas for the products of the factories. Iron was the norm up until that time – steel had been expensive to produce. Hardware makers were quick to take advantage of this new material. They produced the first of the "butt" and "H" or "Parliament" style lift-off hinges. Quick and easy to produce and strong enough to hold heavy shutters, they found favor in the new construction of the period.
Around 1880 the first examples of "New York" style hardware appeared. Plate steel elements were assembled by unskilled labor in sprawling factories. This hardware style evolved into the many imported forms seen today. It provided the ability to surface mount hinges and tie the wooden elements of the shutters together, and also allowed for smaller and less expensive window and shutter elements. About this time the first commercially produced "S" style tie-backs were seen – manufactured by Stanley Works in Connecticut. Historically an "S" is a very difficult form to forge. Stanley forged the first simple styles for commercial consumption but it wasn't until the 1930s that they started to stamp them.
Shutter and hardware terminology
Nuts and bolts terms
Battens – the horizontal elements on "board and batten" shutters. Strap hinges usually mount centered on the battens. This is the standard construction approach for most barn doors.
Butt mounted – hinges that mortise into the sides of the hinges – only the barrel of the hinge is visible when the shutter is in the closed position.
Casement – the wood surrounding the window upon which the pintle is typically mounted.
Hinges – mate with the Pintles and are mounted on the shutter.
Pintles – the "pins" on which hinges swing. The pintles are, by definition, mounted to the structure. Pintles are offered in various configurations to match different installation situations.
Rails – with louvered or raised panel shutters, the rails are the horizontal elements of wood that frame the shutter. The width of the rails is an important consideration when choosing surface mounted hardware.
Show hinges – hinges arranged to mount so as to be visible when the shutter is in the open position.
Stiles – when a shutter is louvered or of the raised panel style, the stiles are the vertical elements of the frame. Knowing the width of the stiles allows positioning of the first fastener on strap hinges on their mid-line.
Surface mounted – hinges that mount to the face of the shutter – strap hinges and the "New York Style" hinges are examples. The hinges are visible when the shutter is in the closed position.
Installation terminology
Offset – the total dimension that the shutter will travel outwards when moved from the closed to the open position. The offset is typically the distance from the face of the casement to the outermost surface of the structure.
The offset is developed in shutter hardware by selection of a pintle made to "stand off" the casement a given distance – the shutter hinge has a sharp bend which moves the hinge barrel away from the face of the shutter at a distance to match the pintle standoff.
When measuring offset, it is critical to allow for irregularities in construction. Because brick and stone openings are rarely plumb or perfectly flat, it is typical to use the greatest dimension and allow about ½" of cushion. If the offset is too small, the shutters will not open fully. If the offset is too great, the shutter will still function well but will sit off the wall.
Standoff – The pintle standoff is the distance from the face of the casement to the mid-line of the pintle pin. The hinge standoff is the distance from the face of the shutter to the center-line of the hinge barrel. Adding the pintle standoff to the hinge standoff results in the total offset.
Virtually all commercially available shutter hardware is provided with matching standoff on the hinge and pintle. This assumes that the face of the shutter will lie on the same plane as the casement with the shutter in the closed position.
Hinge and pintle standoffs can be custom-made for a user's situation. This eases installation and ensures proper shutter function.
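As a minimal sketch of the arithmetic described above (the dimensions below are made-up illustrations, not recommendations), the total offset is simply the sum of the pintle and hinge standoffs, and the measured offset should include the roughly ½" cushion mentioned earlier for openings that are not plumb or flat:

```python
def total_offset(pintle_standoff_in, hinge_standoff_in):
    """Total offset (inches) = pintle standoff + hinge standoff."""
    return pintle_standoff_in + hinge_standoff_in

def required_offset(greatest_measured_in, cushion_in=0.5):
    """Use the greatest measured dimension plus ~1/2" of cushion to
    allow for masonry openings that are not plumb or perfectly flat."""
    return greatest_measured_in + cushion_in

# Example: a 1.25" pintle standoff with a 1.0" hinge standoff gives a
# 2.25" total offset, which covers a greatest measured offset of 1.5".
print(total_offset(1.25, 1.0) >= required_offset(1.5))  # True
```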
Throw – the measure of the horizontal movement of the edge of the shutter as it swings from the open to the closed position; it varies greatly between hinge styles. With too little throw, the open shutter will cover the window molding; with too much, excess brick or siding shows between the open shutter edge and the window frame. Proper throw ensures that the shutter comfortably "frames" the window rather than obstructing or detracting from window detail.
Regional installation styles
Shutter mounts on face of structure and closes within masonry opening
The shutter is fitted to the dimensions of the masonry opening. The pintle is embedded or surface mounted to the structure itself. The pintle pin is positioned on the outside corner of the masonry. This approach can be seen on brick structures, especially post-civil war commercial multi-story buildings. Also common in the south of Europe, France, Italy, and Austria, it allows the shutter to sit almost fully parallel to the structure.
The European structures are typically stucco coated, with a drive type pintle built diagonally into the masonry prior to the stucco finish. A lag screw pintle can be substituted for the drive pintle. Brick structures can employ a similar embedded pintle or a surface mounted pintle. Storm-type strap hinges are typical in Europe; American examples are often tapered.
Flush installation with shutter closing within casement
The shutter in the closed position fits within the window casement. This was the prevalent approach in the Colonies from New York and south. An advantage is the additional security, because the shutters cannot be lifted from the pintles in the closed position. A disadvantage is that the shutters must be matched closely to the inside dimension of the casing, and the shutter rabbet should match the thickness of the shutters.
Any surface-mounted hinge and pintle can be used, assuming the casing is wide enough to accept the pintle. The hinge has a minimal standoff and the pintle has the same matching standoff; together, an offset of 1–1½ inches will hold the shutter at a consistent distance from the structure, though not quite parallel to the wall.
Flush installation shutter sits proud on casing
Historically, this approach was seen through the New England colonies. Virtually every old home is a clapboard structure fitted with shutters applied in this manner. They were likely hung on the casing to allow for the frost heaves and movement of the structures in the harsh New England winters. The shutters simply allowed the house to heave and settle behind them.
A strap hinge with a zero offset and an angle pintle matched to the thickness of the shutter will serve in every case. The shutter is removed from the face of the casing by the thickness of the shutter plus the diameter of the pintle pin, leaving the shutter clear of the corner of the casing.
Offset installation shutter closes within casing
This style is traditional to suburbs of Philadelphia, Pennsylvania, including Chester, Bucks, and Montgomery Counties. The amount of the required offset is divided evenly between the hinge and the pintle.
References
Windows
Window coverings
History of metallurgy | Window shutter hardware | [
"Chemistry",
"Materials_science"
] | 2,972 | [
"Metallurgy",
"History of metallurgy"
] |
15,880,746 | https://en.wikipedia.org/wiki/Samuel%20Karlin | Samuel Karlin (June 8, 1924 – December 18, 2007) was an American mathematician at Stanford University in the late 20th century.
Education and career
Karlin was born in Janów, Poland and immigrated to Chicago as a child. Raised in an Orthodox Jewish household, Karlin became an atheist in his teenage years and remained an atheist for the rest of his life. Later in life he told his three children, who all became scientists, that walking down the street without a yarmulke on his head for the first time was a milestone in his life.
Karlin earned his undergraduate degree from Illinois Institute of Technology; and then his doctorate in mathematics from Princeton University in 1947 (at the age of 22) under the supervision of Salomon Bochner. He was on the faculty of Caltech from 1948 to 1956, before becoming a professor of mathematics and statistics at Stanford.
Throughout his career, Karlin made fundamental contributions to the fields of mathematical economics, bioinformatics, game theory, evolutionary theory, biomolecular sequence analysis, and total positivity. Karlin authored ten books and more than 450 articles. He did extensive work in mathematical population genetics. In the early 1990s, Karlin and Stephen Altschul developed the Karlin-Altschul statistics, a basis for the highly used sequence similarity software program BLAST.
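The Karlin–Altschul statistic estimates the expected number of chance local alignments with score at least S between two random sequences as E = K·m·n·e^(−λS). A minimal sketch in Python follows; the sequence lengths and the λ and K values are illustrative assumptions only, since in practice they depend on the scoring system and are estimated by BLAST itself.

```python
import math

def karlin_altschul_evalue(score, m, n, lam, k):
    """Expected number of local alignments scoring >= `score` between
    random sequences of lengths m and n: E = K * m * n * exp(-lambda * S)."""
    return k * m * n * math.exp(-lam * score)

# Illustrative parameters only (lambda and K depend on the scoring matrix
# and background residue frequencies).
print(karlin_altschul_evalue(score=48, m=350, n=2_500_000, lam=0.27, k=0.13))
```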
Honors and awards
Karlin was a member of the American Academy of Arts and Sciences, the National Academy of Sciences, and the American Philosophical Society. He won a Lester R. Ford Award in 1973. In 1989, President George H. W. Bush bestowed Karlin the National Medal of Science "for his broad and remarkable research in mathematical analysis, probability theory and mathematical statistics, and in the application of these ideas to mathematical economics, mechanics, and population genetics." He was elected to the 2002 class of Fellows of the Institute for Operations Research and the Management Sciences.
Personal life
One of Karlin's sons, Kenneth D. Karlin, is a professor of chemistry at Johns Hopkins University and the 2009 winner of the American Chemical Society's F. Albert Cotton Award for Synthetic Chemistry. His other son, Manuel, is a physician in Portland, Oregon. His daughter, Anna R. Karlin, is a theoretical computer scientist, the Microsoft Professor of Computer Science & Engineering at the University of Washington.
Selected publications
S. Karlin and H. M. Taylor. A First Course in Stochastic Processes. Academic Press, 1975 (second edition).
S. Karlin and H. M. Taylor. A Second Course in Stochastic Processes. Academic Press, 1981.
S. Karlin and H. M. Taylor. An Introduction to Stochastic Modeling, Third Edition. Academic Press, 1998.
S. Karlin, D. Eisenberg, and R. Altman. Bioinformatics: Unsolved Problems and Challenges. National Academic Press Inc., 2005.
S. Karlin (Ed.). Econometrics, Time Series, and Multivariate Statistics. Academic Press, 1983.
S. Karlin (Author) and E. Nevo (Editor). Evolutionary Processes and Theory. Academic Press, 1986.
S. Karlin. Mathematical Methods and Theory in Games, Programming, and Economics. Dover Publications, 1992.
S. Karlin and E. Nevo (Eds.). Population Genetics and Ecology. Academic Press, 1976.
S. Karlin and W. J. Studden. Tchebycheff systems: With applications in analysis and statistics (pure and applied mathematics). Interscience Publishers, 1966 (1st edition). ASIN B0006BNV2C.
S Karlin and S. Lessard. Theoretical Studies on Sex Ratio Evolution. Princeton University Press, 1986.
S. Karlin. Theory of Infinite Games. Addison Wesley Longman Ltd. Inc., 1959. ASIN B000SNID12.
S. Karlin. Total Positivity, Vol. 1. Stanford, 1968. ASIN B000LZG0Xu.
See also
Karlin–McGregor polynomials
References
External links
"Math in the News: Mathematician Sam Karlin, Known for Contributions in Computational Biology, has Died." Math Gateway of the Mathematical Association of America, February 5, 2008.
Obituary, I.M.S. Bulletin, May 2008
Biography of Samuel Karlin from the Institute for Operations Research and the Management Sciences
National Medal of Science laureates
Members of the United States National Academy of Sciences
Fellows of the American Academy of Arts and Sciences
Fellows of the Institute for Operations Research and the Management Sciences
John von Neumann Theory Prize winners
American geneticists
Probability theorists
American operations researchers
Game theorists
Mathematical economists
Functional analysts
20th-century American mathematicians
Stanford University Department of Mathematics faculty
Stanford University Department of Statistics faculty
Princeton University alumni
Illinois Institute of Technology alumni
Jewish American atheists
American atheists
American people of Polish-Jewish descent
Polish emigrants to the United States
1924 births
2007 deaths
Members of the American Philosophical Society | Samuel Karlin | [
"Mathematics"
] | 1,018 | [
"Game theorists",
"Game theory"
] |
15,880,848 | https://en.wikipedia.org/wiki/Northeastern%20Statistical%20Region | The Northeastern Statistical Region (; Albanian: Rajoni verilindor) is one of eight statistical regions in North Macedonia. It borders Kosovo and Serbia to the north and Bulgaria to the east, while internally, it borders the Skopje and Eastern statistical regions.
Municipalities
Northeastern Statistical Region is divided into six municipalities:
Kratovo
Kriva Palanka
Kumanovo
Lipkovo
Rankovce
Staro Nagoričane
Demographics
Population
The current population of the Northeastern Statistical Region is 152,982 citizens or 8.3% of the total population of North Macedonia, according to the last population census in 2021.
Ethnicities
The largest ethnic group in the region is the Macedonians; Albanians, Serbs, and Roma also make up significant shares of the population.
Religions
Religious affiliation according to the 2002 and 2021 Macedonian censuses:
References
Statistical regions of North Macedonia | Northeastern Statistical Region | [
"Mathematics"
] | 152 | [
"Statistical regions of North Macedonia",
"Statistical concepts",
"Statistical regions"
] |
15,881,178 | https://en.wikipedia.org/wiki/Displacement%20chromatography | Displacement chromatography is a chromatography technique in which a sample is placed onto the head of the column and is then displaced by a solute that is more strongly sorbed than the components of the original mixture. The result is that the components are resolved into consecutive "rectangular" zones of highly concentrated pure substances rather than solvent-separated "peaks". It is primarily a preparative technique; higher product concentration, higher purity, and increased throughput may be obtained compared to other modes of chromatography.
Discovery
The advent of displacement chromatography can be attributed to Arne Tiselius, who in 1943 first classified the modes of chromatography as frontal, elution, and displacement. Displacement chromatography found a variety of applications including isolation of transuranic elements and biochemical entities.
The technique was redeveloped by Csaba Horváth, who employed modern high-pressure columns and equipment. It has since found many applications, particularly in the realm of biological macromolecule purification.
Principle
The basic principle of displacement chromatography is that there is only a finite number of binding sites for solutes on the matrix (the stationary phase), and if a site is occupied by one molecule, it is unavailable to others. As in any chromatography, equilibrium is established between molecules of a given kind bound to the matrix and those of the same kind free in solution. Because the number of binding sites is finite, when the concentration of molecules free in solution is large relative to the dissociation constant for the sites, those sites will mostly be filled. This results in downward curvature in the plot of bound versus free solute, in the simplest case giving a Langmuir isotherm. A molecule with a high affinity for the matrix (the displacer) competes more effectively for binding sites, leaving the mobile phase enriched in the lower-affinity solute. Flow of mobile phase through the column preferentially carries off the lower-affinity solute, and thus at high concentration the higher-affinity solute will eventually displace all molecules with lesser affinities.
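The competition for a finite number of sites can be sketched with a multicomponent (competitive) Langmuir isotherm, q_i = q_max·a_i·c_i / (1 + Σ_j a_j·c_j). The capacities and affinity constants below are arbitrary illustrative assumptions, not data for any real column:

```python
def competitive_langmuir(c, a, qmax=1.0):
    """Bound concentration q_i of each solute competing for a finite
    number of sites: q_i = qmax * a_i * c_i / (1 + sum_j a_j * c_j)."""
    denom = 1.0 + sum(ai * ci for ai, ci in zip(a, c))
    return [qmax * ai * ci / denom for ai, ci in zip(a, c)]

# Two solutes alone on the column (lower vs higher affinity)...
print(competitive_langmuir(c=[1.0, 1.0], a=[2.0, 5.0]))
# ...and the same solutes once a high-affinity displacer is present:
# the displacer claims most of the sites, pushing both solutes back
# into the mobile phase, where flow carries them downstream.
print(competitive_langmuir(c=[1.0, 1.0, 10.0], a=[2.0, 5.0, 50.0]))
```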
Mode of operation
Loading
At the beginning of the run, a mixture of solutes to be separated is applied to the column, under conditions selected to promote high retention. The higher-affinity solutes are preferentially retained near the head of the column, with the lower-affinity solutes moving farther downstream. The fastest moving component begins to form a pure zone downstream. The other components also begin to form zones, but the continued supply of the mixed feed at head of the column prevents full resolution.
Displacement
After the entire sample is loaded, the feed is switched to the displacer, chosen to have higher affinity than any sample component. The displacer forms a sharp-edged zone at the head of the column, pushing the other components downstream. Each sample component now acts as a displacer for the lower-affinity solutes, and the solutes sort themselves out into a series of contiguous bands (a "displacement train"), all moving downstream at the rate set by the displacer. The size and loading of the column are chosen to let this sorting process reach completion before the components reach the bottom of the column. The solutes appear at the bottom of the column as a series of contiguous zones, each consisting of one purified component, with the concentration within each individual zone effectively uniform.
Regeneration
After the last solute has been eluted, it is necessary to strip the displacer from the column. Since the displacer was chosen for high affinity, this can pose a challenge. On reverse-phase materials, a wash with a high percentage of organic solvent may suffice. Large pH shifts are also often employed. One effective strategy is to remove the displacer by chemical reaction; for instance if hydrogen ion was used as displacer it can be removed by reaction with hydroxide, or a polyvalent metal ion can be removed by reaction with a chelating agent. For some matrices, reactive groups on the stationary phase can be titrated to temporarily eliminate the binding sites, for instance weak-acid ion exchangers or chelating resins can be converted to the protonated form. For gel-type ion exchangers, selectivity reversal at very high ionic strength can also provide a solution. Sometimes the displacer is specifically designed with a titratable functional group to shift its affinity. After the displacer is washed out, the column is washed as needed to restore it to its initial state for the next run.
Comparison with elution chromatography
Common fundamentals
In any form of chromatography, the rate at which the solute moves down the column is a direct reflection of the percentage of time the solute spends in the mobile phase. To achieve separation in either elution or displacement chromatography, there must be appreciable differences in the affinity of the respective solutes for the stationary phase. Both methods rely on movement down the column to amplify the effect of small differences in distribution between the two phases. Distribution between the mobile and stationary phases is described by the binding isotherm, a plot of solute bound to (or partitioned into) the stationary phase as a function of concentration in the mobile phase. The isotherm is often linear, or approximately so, at low concentrations, but commonly curves (concave-downward) at higher concentrations as the stationary phase becomes saturated.
Characteristics of elution mode
In elution mode, solutes are applied to the column as narrow bands and, at low concentration, move down the column as approximately Gaussian peaks. These peaks continue to broaden as they travel, in proportion to the square root of the distance traveled. For two substances to be resolved, they must migrate down the column at sufficiently different rates to overcome the effects of band spreading. Operating at high concentration, where the isotherm is curved, is disadvantageous in elution chromatography because the rate of travel then depends on concentration, causing the peaks to spread and distort.
Retention in elution chromatography is usually controlled by adjusting the composition of the mobile phase (in terms of solvent composition, pH, ionic strength, and so forth) according to the type of stationary phase employed and the particular solutes to be separated. The mobile phase components generally have lower affinity for the stationary phase than do the solutes being separated, but are present at higher concentration and achieve their effects due to mass action. Resolution in elution chromatography is generally better when peaks are strongly retained, but conditions that give good resolution of early peaks lead to long run-times and excessive broadening of later peaks unless gradient elution is employed. Gradient equipment adds complexity and expense, particularly at large scale.
Advantages and disadvantages of displacement mode
In contrast to elution chromatography, solutes separated in displacement mode form sharp-edged zones rather than spreading peaks. Zone boundaries in displacement chromatography are self-sharpening: if a molecule for some reason gets ahead of its band, it enters a zone in which it is more strongly retained, and will then run more slowly until its zone catches up. Furthermore, because displacement chromatography takes advantage of the non-linearity of the isotherms, loadings are deliberately high; more material can be separated on a given column, in a given time, with the purified components recovered at significantly higher concentrations. Retention conditions can still be adjusted, but the displacer controls the migration rate of the solutes. The displacer is selected to have higher affinity for the stationary phase than does any of the solutes being separated, and its concentration is set to approach saturation of the stationary phase and to give the desired migration rate of the concentration wave. High-retention conditions can be employed without gradient operation, because the displacer ensures removal of all solutes of interest in the designed run time.
Because of the concentrating effect of loading the column under high-retention conditions, displacement chromatography is well suited to purify components from dilute feed streams. However, it is also possible to concentrate material from a dilute stream at the head of a chromatographic column and then switch conditions to elute the adsorbed material in conventional isocratic or gradient modes. Therefore, this approach is not unique to displacement chromatography, although the higher loading capacity and less dilution allow greater concentration in displacement mode.
A disadvantage of displacement chromatography is that non-idealities always give rise to an overlap zone between each pair of components; this mixed zone must be collected separately for recycle or discard to preserve the purity of the separated materials. The strategy of adding spacer molecules to form zones between the components (sometimes termed "carrier displacement chromatography") has been investigated and can be useful when suitable, readily removable spacers are found. Another disadvantage is that the raw chromatogram, for instance a plot of absorbance or refractive index vs elution volume, can be difficult to interpret for contiguous zones, especially if the displacement train is not fully developed. Documentation and troubleshooting may require additional chemical analysis to establish the distribution of a given component. Another disadvantage is that the time required for regeneration limits throughput.
According to John C. Ford's article in the Encyclopedia of Chromatography, theoretical studies indicate that at least for some systems, optimized overloaded elution chromatography offers higher throughput than displacement chromatography, though limited experimental tests suggest that displacement chromatography is superior (at least before consideration of regeneration time).
Applications
Historically, displacement chromatography was applied to preparative separations of amino acids and rare earth elements and has also been investigated for isotope separation.
Proteins
The chromatographic purification of proteins from complex mixtures can be quite challenging, particularly when the mixtures contain similarly retained proteins or when it is desired to enrich trace components in the feed. Further, column loading is often limited when high resolutions are required using traditional modes of chromatography (e.g. linear gradient, isocratic chromatography). In these cases, displacement chromatography is an efficient technique for the purification of proteins from complex mixtures at high column loadings in a variety of applications.
An important advance in the state of the art of displacement chromatography was the development of low molecular mass displacers for protein purification in ion exchange systems. This research was significant in that it represented a major departure from the conventional wisdom that large polyelectrolyte polymers are required to displace proteins in ion exchange systems.
Low molecular mass displacers have significant operational advantages as compared to large polyelectrolyte displacers. For example, if there is any overlap between the displacer and the protein of interest, these low molecular mass materials can be readily separated from the purified protein during post-displacement processing using standard size-based purification methods (e.g. size exclusion chromatography, ultrafiltration). In addition, the salt-dependent adsorption behavior of these low MW displacers greatly facilitates column regeneration. These displacers have been employed for a wide variety of high resolution separations in ion exchange systems. In addition, the utility of displacement chromatography for the purification of recombinant growth factors, antigenic vaccine proteins and antisense oligonucleotides has also been demonstrated. There are several examples in which displacement chromatography has been applied to the purification of proteins using ion exchange, hydrophobic interaction, as well as reversed-phase chromatography.
Displacement chromatography is well suited for obtaining mg quantities of purified proteins from complex mixtures using standard analytical chromatography columns at the bench scale. It is also particularly well suited for enriching trace components in the feed. Displacement chromatography can be readily carried out using a variety of resin systems including, ion exchange, HIC and RPLC.
Two-dimensional chromatography
Two-dimensional chromatography represents the most thorough and rigorous approach to evaluation of the proteome. While previously accepted approaches have utilized elution mode chromatographic approaches such as cation exchange to reversed phase HPLC, yields are typically very low requiring analytical sensitivities in the picomolar to femtomolar range. As displacement chromatography offers the advantage of concentration of trace components, two dimensional chromatography utilizing displacement rather than elution mode in the upstream chromatography step represents a potentially powerful tool for analysis of trace components, modifications, and identification of minor expressed components of the proteome.
Notes
References
Chromatography | Displacement chromatography | [
"Chemistry"
] | 2,653 | [
"Chromatography",
"Separation processes"
] |
15,881,412 | https://en.wikipedia.org/wiki/Ellrod%20index | In meteorology the Ellrod index is a technique for forecasting clear-air turbulence (CAT). It is calculated based on the product of horizontal deformation and vertical wind shear derived from numerical model forecast winds aloft.
The deformation predictors are calculated from the horizontal wind components u and v as follows.

Shearing deformation:

$DSH = \frac{\partial v}{\partial x} + \frac{\partial u}{\partial y}$

Stretching deformation:

$DST = \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y}$

Total deformation:

$DEF = \sqrt{DSH^{2} + DST^{2}}$

Convergence:

$CVG = -\left(\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y}\right)$

Vertical wind shear (the magnitude of the vector wind change across a layer of depth $\Delta z$):

$VWS = \frac{\sqrt{(\Delta u)^{2} + (\Delta v)^{2}}}{\Delta z}$

The resulting index is given by:

$TI_{1} = VWS \times DEF$

or, with the convergence term included,

$TI_{2} = VWS \times (DEF + CVG)$
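A minimal sketch of how the first form of the index (TI1 = VWS × DEF) might be computed from gridded wind components on two analysis levels is shown below; the grid spacing, layer depth, and array layout (axis 0 = y, axis 1 = x) are assumptions for illustration, not part of any operational implementation.

```python
import numpy as np

def ellrod_ti1(u_low, v_low, u_high, v_high, dx, dy, dz):
    """TI1 = vertical wind shear * total deformation (units: s^-2).

    u/v_low and u/v_high are 2-D wind components (m/s) on two levels
    separated by dz metres; dx, dy are grid spacings in metres.
    """
    # Horizontal derivatives of the layer-mean wind (centered differences).
    u = 0.5 * (u_low + u_high)
    v = 0.5 * (v_low + v_high)
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)

    dsh = dv_dx + du_dy               # shearing deformation
    dst = du_dx - dv_dy               # stretching deformation
    deformation = np.hypot(dsh, dst)  # total deformation

    # Vertical wind shear from the vector wind difference across the layer.
    vws = np.hypot(u_high - u_low, v_high - v_low) / dz

    return vws * deformation
```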
To correspond to clear-air turbulence pilot reports, the following table can be used:
See also
Aviation Weather Center
External links
Aviation Weather Center Ellrod Forecast
Eponymous indices
Meteorological indices
Turbulence | Ellrod index | [
"Chemistry"
] | 132 | [
"Turbulence",
"Fluid dynamics"
] |
15,881,980 | https://en.wikipedia.org/wiki/Vardar%20Statistical%20Region | The Vardar Statistical Region () is one of eight statistical regions of North Macedonia. Vardar, located in the central part of North Macedonia, borders Greece to the south. Internally, it borders the Pelagonia, Southwestern, Skopje, Southeastern, and Eastern. The Vardar Statistical Region is named after the Vardar River, which runs through the region.
Municipalities
Vardar statistical region is divided into 9 municipalities:
Čaška
Demir Kapija
Gradsko
Kavadarci
Lozovo
Negotino
Rosoman
Sveti Nikole
Veles
Geography
The Vardar Statistical Region is bisected by the Vardar River and is bounded to the south by Greece. The region is flatter than most of the rest of the country.
Demographics
Population
The current population of the Vardar Statistical Region is 154,535 citizens, according to the last population census in 2002, making it the least populous of the eight statistical regions.
Ethnicities
The largest ethnic group in the region is the Macedonians.
See also
Vardar
References
Statistical regions of North Macedonia | Vardar Statistical Region | [
"Mathematics"
] | 214 | [
"Statistical regions of North Macedonia",
"Statistical concepts",
"Statistical regions"
] |
11,693,892 | https://en.wikipedia.org/wiki/Uromyces%20junci | Uromyces junci is a fungus species and plant pathogen which causes rust on various plants including (Rushes) Juncus species.
It appears as a whitish peridium with a pale yellow mass of spores. It can be found on Pulicaria dysenterica, Juncus articulatus, Juncus bufonius, Juncus effusus, Juncus inflexus and Juncus subnodulosus.
It is mainly found in Europe, North America, New Zealand and parts of South America.
In 1994, it was found in Japan.
References
junci
Fungal plant pathogens and diseases
Fungi described in 1854
Fungus species | Uromyces junci | [
"Biology"
] | 133 | [
"Fungi",
"Fungus species"
] |