Dataset columns: id (int64, 39 to 79M); url (string, lengths 31 to 227); text (string, lengths 6 to 334k); source (string, lengths 1 to 150); categories (list, lengths 1 to 6); token_count (int64, 3 to 71.8k); subcategories (list, lengths 0 to 30).
346,721
https://en.wikipedia.org/wiki/FairPlay
FairPlay is a family of digital rights management (DRM) technologies developed by Apple Inc. for protecting videos, books and apps, and historically music. Music The initial version of FairPlay was created to protect music on the iTunes Store, and is the only version of FairPlay that is no longer actively used. Technical details FairPlay is built into the MP4 multimedia file format: FairPlay-protected files are regular MP4 container files whose AAC audio layer is encrypted using the AES algorithm. The master key required to decrypt the audio layer is also stored, in encrypted form, in the MP4 container file. The key required to decrypt the master key is called the "user key". When a user registers a new computer with iTunes, the device requests authorization from Apple's servers, thereby gaining a user key. Upon an attempt to play a file, the user key is used to decrypt the master key stored within the file; if this succeeds, playback is allowed. FairPlay allows music to be synchronized to an unlimited number of iPods and tracks to be burned to an unlimited number of CDs, though a given playlist can only be burned seven times without being modified (a limitation which can be circumvented by changing a song's placement). Playback is limited to five computers which were authorized through iTunes; a computer can be deauthorized and another authorized in its place. Before April 2004, the limits were ten playlist burns and three computers; Apple reduced the playlist limit to seven due to demands from record labels. Lawsuit In January 2005, an iTunes customer filed a lawsuit against Apple, alleging that the company broke antitrust laws by using FairPlay with iTunes in such a way that purchased music would work only with the company's own music player, the iPod, freezing out competitors. In March 2011, Bloomberg reported that Apple's then-CEO Steve Jobs would be required to provide testimony through a deposition. In May 2012, the case was converted into a class action lawsuit. Around the same time, the main antitrust allegation was changed to cover the belief that Apple had deliberately updated the iTunes software with security patches in a way that prevented synchronization compatibility with competing music stores. All iPod owners who had purchased their device between September 12, 2006, and March 31, 2009, were included in the class action lawsuit, unless they opted out. In December 2014, Apple went to trial against the claims raised, with the opposing plaintiff lawyers seeking $350 million in damages for nearly eight million affected customers. A few weeks later, the case was closed, with the jury deciding in Apple's favor and citing a then-new version of iTunes as being a "genuine product improvement". Circumvention/removal The restrictions imposed by FairPlay, mainly its limited device compatibility, have sparked criticism, with a lawsuit alleging antitrust violation that was eventually closed in Apple's favor, and various successful efforts to remove the DRM protection from files, with Apple continually updating its software to counteract such projects. After the introduction of the FairPlay system, multiple parties have attempted, and succeeded, to circumvent or remove the encryption of FairPlay-protected files. In October 2006, Jon Lech Johansen announced he had reverse engineered FairPlay and would start to license the technology to companies wanting their media to play on Apple's devices. 
Various media publications have written about DRM removal software, though Apple has continually made efforts to update its software to counteract these options, resulting in upgraded DRM systems and discontinued DRM removal software. RealNetworks and Harmony technology In July 2004, RealNetworks introduced its Harmony technology. The Harmony technology was built into the company's RealPlayer and allowed users of the RealPlayer Music Store to play their songs on the iPod. In a press release, RealNetworks argued that Harmony was a boon to consumers that "frees" them "from the limitation of being locked into a specific portable device when they buy digital music." In response, Apple issued a statement: We are stunned that RealNetworks has adopted the tactics and ethics of a hacker to break into the iPod, and we are investigating the implications of their actions under the DMCA and other laws. RealNetworks launched an Internet petition titled "Hey Apple! Don't break my iPod", encouraging iPod users to sign up to support Real's action. The petition backfired, with comments criticizing Real's tactics, though some commentators also supported it. At the end of 2004, Apple updated its software in a way that broke the Harmony technology, prompting RealNetworks to promise a then-upcoming fix. In August 2005, an SEC filing by RealNetworks disclosed that continued use of the Harmony technology put the company at considerable risk because of the possibility of a lawsuit from Apple, which would be expensive to defend against even if the court agreed that the technology was legal. Additionally, the possibility that Apple could change its technology to purposefully "break" Harmony's function raised the possibility that Real's business could be harmed. Hymn Hymn (which stands for Hear Your Music aNywhere) was an open-source tool that allowed users to remove the FairPlay DRM from music bought from the iTunes Store. It was later supplanted by QTFairUse6. The Hymn project shut down after a cease and desist from Apple. Steve Jobs' "Thoughts on Music" open letter On February 6, 2007, Steve Jobs, then-CEO of Apple, published an open letter titled "Thoughts on Music" on the Apple website, calling on the "big four" record labels to sell their music without DRM technology. According to the letter, Apple did not want to use DRM, but was forced to by the four major music labels, with whom Apple has license agreements for iTunes sales of music. Jobs' main points were: DRM has never been, and will never be, perfect. Hackers will always find a method to break DRM. DRM restrictions only hurt people using music legally. Illegal users aren't affected by DRM. The restrictions of DRM encourage users to obtain unrestricted music, which is usually only possible via illegal methods, thus bypassing iTunes and its revenues. The vast majority of music is sold without DRM via CDs, which have proven commercial success. Reactions The open letter caused mixed industry reactions, though Apple signed a deal with a major record label the following month to offer iTunes customers a purchase option for a higher-quality, DRM-free version of the label's tracks. Bloomberg highlighted several viewpoints. David Pakman, President of non-DRM music retailer eMusic, agreed with Jobs, stating that "consumers prefer a world where the media they purchase is playable on any device, regardless of its manufacturer, and is not burdened by arbitrary usage restrictions. 
DRM only serves to restrict consumer choice, prevents a larger digital music market from emerging, and often makes consumers unwitting accomplices to the ambitions of technology companies". Mike Bebel, CEO of music subscription service Ruckus, explained his view that the letter was an effort to shift focus, saying that "This is a way for Steve Jobs to take the heat off the fact that he won't open up his proprietary DRM. ... The labels have every right to protect their content, and I don't see it as a vow of good partnership to turn the tables on the labels and tell them they should just get rid of all DRM... He is trying to spin the controversy." An anonymous music label executive said that "it's ironic that the guy who has the most successful example of DRM at every step of the process, the one where people bought boatloads of music last Christmas, is suddenly changing his tune". In an article from The New York Times, Ted Cohen, managing partner at TAG Strategic, commented that the change could be "a clear win for the consumer electronics device world, but a potential disaster for the content companies". The Recording Industry Association of America put particular emphasis on Jobs' self-rejected idea of licensing the FairPlay technology to other companies, saying that such licensing would be "a welcome breakthrough and would be a real victory for fans, artists and labels". iTunes Store DRM changes In April 2007, Apple and the record label EMI announced that the iTunes Store would begin offering, as an additional, higher-quality purchase option, tracks from EMI's catalog encoded as 256 kbit/s AAC without FairPlay or any other DRM. In January 2009, Apple announced that the entire iTunes Store music catalog would become available in the higher-quality, DRM-free format, after reaching agreements with all the major record labels as well as "thousands of independent labels". Apple Music, Apple's subscription-based music streaming service launched on June 30, 2015, uses FairPlay DRM. FairPlay Streaming FairPlay Streaming (FPS) protects video transferred over HTTP Live Streaming (HLS) on iOS devices, in Apple TV, and in Safari on macOS. The content provider's server first delivers video to the client application encrypted with the content key using the AES cipher. The application then requests a session key from the device's FairPlay module. The session key is a randomly generated nonce which is RSA-encrypted with the provider's public key and delivered to the provider's server. The provider's server encrypts the content key using the session key and delivers it to the FairPlay module, which decrypts it and uses it to decrypt the content for playback. On iOS and Apple TV, session key handling and content decryption are done in the kernel, while on macOS they are done using Safari's FairPlay Content Decryption Module. Books Apps Apps downloaded from the App Store are protected and code signed using a variant of FairPlay DRM for apps. FairPlay creates a public/private key pair when a device is registered with an iCloud account; app encryption keys are encrypted using the "public" key (which is kept on Apple's servers) so that they can be decrypted on the device using the "private" key. Problems In July 2012, an issue with the creation of FairPlay-protected apps caused binaries to become corrupt and stop working. A flaw allowing a form of man-in-the-middle attack can be used to install malware when an iOS device is connected to a computer. 
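The FairPlay Streaming key exchange described above can be sketched in a few lines of Python. This is a minimal illustration of the flow only: Apple's actual message formats, padding schemes, and key-derivation details are not public, so the cipher choices (RSA-OAEP, AES-GCM) and key sizes below are assumptions, and the third-party cryptography library stands in for both the FairPlay module and the provider's server.

```python
# Minimal sketch of the FairPlay Streaming key exchange described above.
# Cipher choices and key sizes are illustrative assumptions, not Apple's spec.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Content provider's long-term RSA key pair; the device's FairPlay module
# would hold the public half.
provider_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Provider side: the content key that encrypted the HLS video segments.
content_key = AESGCM.generate_key(bit_length=128)

# Device side: the FairPlay module generates a random session key (nonce)
# and RSA-encrypts it with the provider's public key.
session_key = os.urandom(16)
wrapped_session_key = provider_key.public_key().encrypt(session_key, oaep)

# Provider side: recover the session key, then wrap the content key with it.
recovered_session_key = provider_key.decrypt(wrapped_session_key, oaep)
nonce = os.urandom(12)
wrapped_content_key = AESGCM(recovered_session_key).encrypt(
    nonce, content_key, None)

# Device side: the FairPlay module unwraps the content key and can now
# decrypt the media for playback.
unwrapped = AESGCM(session_key).decrypt(nonce, wrapped_content_key, None)
assert unwrapped == content_key
```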
References External links FairPlay Streaming QuickTime Audio software Digital rights management systems ITunes Digital rights management for macOS
FairPlay
[ "Engineering" ]
2,223
[ "Audio engineering", "Audio software" ]
346,769
https://en.wikipedia.org/wiki/Order%20of%20approximation
In science, engineering, and other quantitative disciplines, order of approximation refers to formal or informal expressions for how accurate an approximation is. Usage in science and engineering In formal expressions, the ordinal number used before the word order refers to the highest power in the series expansion used in the approximation. The expressions a zeroth-order approximation, a first-order approximation, a second-order approximation, and so forth are used as fixed phrases. The expression a zero-order approximation is also common. Cardinal numerals are occasionally used in expressions like an order-zero approximation, an order-one approximation, etc. The omission of the word order leads to phrases that have less formal meaning. Phrases like first approximation or to a first approximation may refer to a roughly approximate value of a quantity. The phrase to a zeroth approximation indicates a wild guess. The expression order of approximation is sometimes informally used to mean the number of significant figures, in increasing order of accuracy, or to the order of magnitude. However, this may be confusing, as these informal expressions do not directly refer to the order of derivatives. The choice of series expansion depends on the scientific method used to investigate a phenomenon. The expression order of approximation is expected to indicate progressively more refined approximations of a function in a specified interval. The choice of order of approximation depends on the research purpose. One may wish to simplify a known analytic expression to devise a new application or, on the contrary, try to fit a curve to data points. A higher order of approximation is not always more useful than a lower one. For example, if a quantity is constant within the whole interval, approximating it with a second-order Taylor series will not increase the accuracy. In the case of a smooth function, the nth-order approximation is a polynomial of degree n, which is obtained by truncating the Taylor series to this degree. The formal usage of order of approximation corresponds to the omission of some terms of the series used in the expansion. This affects accuracy. The error usually varies within the interval. Thus the terms zeroth, first, second, etc. used above do not directly give information about percent error or significant figures. For example, in the Taylor series expansion of the exponential function, e^x = 1 + x + x^2/2! + x^3/3! + ..., the zeroth-order term is 1, the first-order term is x, the second-order term is x^2/2, and so forth. If |x| < 1, each higher-order term is smaller than the previous one. If x ≪ 1, then the first-order approximation, e^x ≈ 1 + x, is often sufficient. But at x = 1 the first-order term, x = 1, is not smaller than the zeroth-order term, 1. And at x = 2 even the second-order term, x^2/2 = 2, is greater than the zeroth-order term. Zeroth-order Zeroth-order approximation is the term scientists use for a first rough answer. Many simplifying assumptions are made, and when a number is needed, an order-of-magnitude answer (or zero significant figures) is often given. For example, "the town has a few thousand residents", when it has 3,914 people in actuality. This is also sometimes referred to as an order-of-magnitude approximation. The zero of "zeroth-order" represents the fact that even the only number given, "a few", is itself loosely defined. A zeroth-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be constant, or a flat line with no slope: a polynomial of degree 0. 
For example, x = [0, 1, 2], y = [3, 3, 5], y ≈ f(x) = 3.67 could be – if data point accuracy were reported – an approximate fit to the data, obtained by simply averaging the x values and the y values. However, data points represent results of measurements and they do differ from points in Euclidean geometry. Thus quoting an average value containing three significant digits in the output with just one significant digit in the input data could be recognized as an example of false precision. With the implied accuracy of the data points of ±0.5, the zeroth-order approximation could at best yield the result for y of ~3.7 ± 2.0 in the interval of x from −0.5 to 2.5, considering the standard deviation. If the data points are reported as x = [0.00, 1.00, 2.00], y = [3.00, 3.00, 5.00], the zeroth-order approximation again results in y ≈ f(x) = 3.67, but the accuracy of the result now justifies an attempt to derive a multiplicative function for that average. One should be careful though, because the multiplicative function will be defined for the whole interval. If only three data points are available, one has no knowledge about the rest of the interval, which may be a large part of it. This means that y could have another component which equals 0 at the ends and in the middle of the interval. A number of functions having this property are known, for example y = sin πx. A Taylor series is useful and helps predict an analytic solution, but the approximation alone does not provide conclusive evidence. First-order First-order approximation is the term scientists use for a slightly better answer. Some simplifying assumptions are made, and when a number is needed, an answer with only one significant figure is often given ("the town has 4,000, or four thousand, residents"). In the case of a first-order approximation, at least one number given is exact. In the zeroth-order example above, the quantity "a few" was given, but in the first-order example, the number "4" is given. A first-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be a linear approximation, a straight line with a slope: a polynomial of degree 1. For example, y ≈ f(x) = x + 2.67 is an approximate fit to the same data. In this example there is a zeroth-order approximation that is the same as the first-order one, but the method of getting there is different; i.e. a wild stab in the dark at a relationship happened to be as good as an "educated guess". Second-order Second-order approximation is the term scientists use for a decent-quality answer. Few simplifying assumptions are made, and when a number is needed, an answer with two or more significant figures ("the town has 3,900, or thirty-nine hundred, residents") is generally given. As in the examples above, the term "2nd order" refers to the number of exact numerals given for the imprecise quantity. In this case, "3" and "9" are given as the two successive levels of precision, instead of simply the "4" from the first order, or "a few" from the zeroth order found in the examples above. A second-order approximation of a function (that is, mathematically determining a formula to fit multiple data points) will be a quadratic polynomial: geometrically, a parabola, a polynomial of degree 2. For example, y ≈ f(x) = x^2 − x + 3 is an approximate fit to the data. In this case, with only three data points, a parabola is an exact fit based on the data provided. However, the data points for most of the interval are not available, which advises caution (see "zeroth order"). 
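The constant, linear, and quadratic fits quoted in these sections can be checked numerically. The following short sketch uses NumPy's least-squares polynomial fitting on the example data; it is an illustration added here, not part of the original article.

```python
# Least-squares fits of degree 0, 1, and 2 to the example data used above.
import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([3.0, 3.0, 5.0])

for degree in (0, 1, 2):
    coeffs = np.polyfit(x, y, degree)   # coefficients, highest power first
    print(degree, np.round(coeffs, 2))

# Printed coefficients (rounded):
#   degree 0: [3.67]              -> y ~ 3.67, the average of the y values
#   degree 1: [1.00, 2.67]        -> y ~ x + 2.67
#   degree 2: [1.00, -1.00, 3.00] -> y ~ x^2 - x + 3, exact at all three points
```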
Higher-order While higher-order approximations exist and are crucial to a better understanding and description of reality, they are not typically referred to by number. Continuing the above, a third-order approximation would be required to perfectly fit four data points, and so on. See polynomial interpolation. Colloquial usage These terms are also used colloquially by scientists and engineers to describe phenomena that can be neglected as not significant (e.g. "Of course the rotation of the Earth affects our experiment, but it's such a high-order effect that we wouldn't be able to measure it." or "At these velocities, relativity is a fourth-order effect that we only worry about at the annual calibration.") In this usage, the ordinality of the approximation is not exact, but is used to emphasize its insignificance; the higher the number used, the less important the effect. The terminology, in this context, represents a high level of precision required to account for an effect which is inferred to be very small when compared to the overall subject matter. The higher the order, the more precision is required to measure the effect, and therefore the smallness of the effect in comparison to the overall measurement. See also Linearization Perturbation theory Taylor series Chapman–Enskog method Big O notation Order of accuracy References Perturbation theory Numerical analysis
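Continuing the sketch above, the Higher-order section's claim that four data points need a third-order polynomial for a perfect fit can be verified directly. The fourth data point (3, 4) below is a made-up value added only for illustration.

```python
# Degree-3 interpolation through four points (the fourth point is invented).
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([3.0, 3.0, 5.0, 4.0])

coeffs = np.polyfit(x, y, 3)                  # four coefficients for degree 3
assert np.allclose(np.polyval(coeffs, x), y)  # reproduces every point exactly
print(np.round(coeffs, 2))
```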
Order of approximation
[ "Physics", "Mathematics" ]
1,738
[ "Computational mathematics", "Quantum mechanics", "Mathematical relations", "Numerical analysis", "Approximations", "Perturbation theory" ]
346,781
https://en.wikipedia.org/wiki/Modeling%20language
A modeling language is any artificial language that can be used to express data, information or knowledge or systems in a structure that is defined by a consistent set of rules. The rules are used for interpretation of the meaning of components in the structure. Overview A modeling language can be graphical or textual. Graphical modeling languages use a diagram technique with named symbols that represent concepts, lines that connect the symbols and represent relationships, and various other graphical notation to represent constraints. Textual modeling languages may use standardized keywords accompanied by parameters or natural language terms and phrases to make computer-interpretable expressions. An example of a graphical modeling language and a corresponding textual modeling language is EXPRESS. Not all modeling languages are executable, and for those that are, the use of them doesn't necessarily mean that programmers are no longer required. On the contrary, executable modeling languages are intended to amplify the productivity of skilled programmers, so that they can address more challenging problems, such as parallel computing and distributed systems. A large number of modeling languages appear in the literature. Types of modeling languages Graphical types Examples of graphical modeling languages in the field of computer science, project management and systems engineering: Behavior Trees are a formal, graphical modeling language used primarily in systems and software engineering. They are commonly used to unambiguously represent the hundreds or even thousands of natural language requirements that are typically used to express the stakeholder needs for a large-scale software-integrated system. Business Process Modeling Notation (BPMN, and the XML form BPML) is an example of a process modeling language. C-K theory includes a modeling language for design processes. DRAKON is a general-purpose algorithmic modeling language for specifying software-intensive systems, a schematic representation of an algorithm or a stepwise process, and a family of programming languages. EXPRESS and EXPRESS-G (ISO 10303-11) is an international standard general-purpose data modeling language. Extended Enterprise Modeling Language (EEML) is commonly used for business process modeling across a number of layers. Flowchart is a schematic representation of an algorithm or a stepwise process. Fundamental Modeling Concepts (FMC) is a modeling language for software-intensive systems. IDEF is a family of modeling languages, which include IDEF0 for functional modeling, IDEF1X for information modeling, IDEF3 for business process modeling, IDEF4 for Object-Oriented Design and IDEF5 for modeling ontologies. Jackson Structured Programming (JSP) is a method for structured programming based on correspondences between data stream structure and program structure. LePUS3 is an object-oriented visual Design Description Language and a formal specification language that is suitable primarily for modeling large object-oriented (Java, C++, C#) programs and design patterns. Lifecycle Modeling Language is an open-standard language for systems engineering that supports the full system lifecycle: conceptual, utilization, support and retirement stages. Object-Role Modeling (ORM) in the field of software engineering is a method for conceptual modeling, and can be used as a tool for information and rules analysis. Petri nets use variations on exactly one diagramming technique and topology, namely the bipartite graph. 
The simplicity of its basic user interface easily enabled extensive tool support over the years, particularly in the areas of model checking, graphically oriented simulation, and software verification. Southbeach Notation is a visual modeling language used to describe situations in terms of agents that are considered useful or harmful from the modeler's perspective. The notation shows how the agents interact with each other and whether this interaction improves or worsens the situation. Specification and Description Language (SDL) is a specification language targeted at the unambiguous specification and description of the behavior of reactive and distributed systems. SysML is a domain-specific modeling language for systems engineering that is defined as a UML profile (customization). Unified Modeling Language (UML) is a general-purpose modeling language that is an industry standard for specifying software-intensive systems. UML 2.0, the current version, supports thirteen different diagram techniques and has widespread tool support. FLINT is a language which allows a high-level description of normative systems. Service-oriented modeling framework (SOMF) is a holistic language for designing enterprise and application level architecture models in the space of enterprise architecture, virtualization, service-oriented architecture (SOA), cloud computing, and more. Architecture description language (ADL) is a language used to describe and represent the systems architecture of a system. Architecture Analysis & Design Language (AADL) is a modeling language that supports early and repeated analyses of a system's architecture with respect to performance-critical properties through an extendable notation, a tool framework, and precisely defined semantics. Examples of graphical modeling languages in other fields of science: EAST-ADL is a domain-specific modeling language dedicated to automotive system design. Energy Systems Language (ESL) is a language that aims to model ecological energetics and global economics. IEC 61499 defines a domain-specific modeling language dedicated to distributed industrial process measurement and control systems. Textual types Information models can also be expressed in formalized natural languages, such as Gellish. Gellish has natural language variants such as Gellish Formal English and Gellish Formal Dutch (Gellish Formeel Nederlands), etc. Gellish Formal English is an information representation language or semantic modeling language that is defined in the Gellish English Dictionary-Taxonomy, which has the form of a Taxonomy-Ontology (similarly for Dutch). Gellish Formal English is not only suitable to express knowledge, requirements and dictionaries, taxonomies and ontologies, but also information about individual things. All that information is expressed in one language and therefore it can all be integrated, independent of whether it is stored in central, distributed or federated databases. Information models in Gellish Formal English consist of collections of Gellish Formal English expressions that use natural language terms and formalized phrases. 
For example, a geographic information model might consist of a number of Gellish Formal English expressions, such as: - the Eiffel tower <is located in> Paris - Paris <is classified as a> city whereas information requirements and knowledge can be expressed, for example, as follows: - tower <shall be located in a> geographical area - city <is a kind of> geographical area Such Gellish Formal English expressions use names of concepts (such as "city") and phrases that represent relation types (such as <is located in> and <is classified as a>) that should be selected from the Gellish English Dictionary-Taxonomy (or one's own domain dictionary). The Gellish English Dictionary-Taxonomy enables the creation of semantically rich information models, because the dictionary contains more than 600 standard relation types and contains definitions of more than 40,000 concepts. An information model in Gellish can express facts or make statements, queries and answers. More specific types In the field of computer science, more specific types of modeling languages have recently emerged. Algebraic Algebraic modeling languages (AML) are high-level programming languages for describing and solving high-complexity problems for large-scale mathematical computation (i.e. large-scale optimization type problems). One particular advantage of AMLs like AIMMS, AMPL, GAMS, Gekko, Mosel, OPL, MiniZinc, and OptimJ is the similarity of their syntax to the mathematical notation of optimization problems. This allows for a very concise and readable definition of problems in the domain of optimization, which is supported by certain language elements like sets, indices, algebraic expressions, powerful sparse index and data handling variables, and constraints with arbitrary names. The algebraic formulation of a model does not contain any hints how to process it. Behavioral Behavioral languages are designed to describe the observable behavior of complex systems consisting of components that execute concurrently. These languages focus on the description of key concepts such as concurrency, nondeterminism, synchronization, and communication. The semantic foundations of behavioral languages are process calculus or process algebra. Discipline-specific A discipline-specific modeling (DspM) language is focused on deliverables affiliated with a specific software development life cycle stage. Therefore, such a language offers a distinct vocabulary, syntax, and notation for each stage, such as discovery, analysis, design, architecture, construction, etc. For example, for the analysis phase of a project, the modeler employs specific analysis notation to deliver an analysis proposition diagram. During the design phase, however, logical design notation is used to depict the relationship between software entities. In addition, discipline-specific modeling language best practices do not preclude practitioners from combining the various notations in a single diagram. Domain-specific Domain-specific modeling (DSM) is a software engineering methodology for designing and developing systems, most often IT systems such as computer software. It involves the systematic use of a graphical domain-specific language (DSL) to represent the various facets of a system. DSM languages tend to support higher-level abstractions than general-purpose modeling languages, so they require less effort and fewer low-level details to specify a given system. 
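The Gellish expressions quoted in the Textual types section above can be pictured as subject-relation-object triples. The following toy sketch represents them in Python; the allowed-relations check is a stand-in for the real Gellish Dictionary-Taxonomy (which, per the article, defines over 600 standard relation types), and is included purely for illustration.

```python
# Toy representation of Gellish-style expressions as (left, relation, right)
# triples. Real Gellish validates relation phrases against its
# Dictionary-Taxonomy; this small set is a stand-in for the example.
ALLOWED_RELATIONS = {
    "is located in", "is classified as a",
    "shall be located in a", "is a kind of",
}

facts = [
    ("the Eiffel tower", "is located in", "Paris"),
    ("Paris", "is classified as a", "city"),
]
knowledge = [
    ("tower", "shall be located in a", "geographical area"),
    ("city", "is a kind of", "geographical area"),
]

for left, relation, right in facts + knowledge:
    assert relation in ALLOWED_RELATIONS, f"unknown relation type: {relation}"
    print(f"{left} <{relation}> {right}")
```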
Framework-specific A framework-specific modeling language (FSML) is a kind of domain-specific modeling language which is designed for an object-oriented application framework. FSMLs define framework-provided abstractions as FSML concepts and decompose the abstractions into features. The features represent implementation steps or choices. An FSML concept can be configured by selecting features and providing values for features. Such a concept configuration represents how the concept should be implemented in the code. In other words, concept configuration describes how the framework should be completed in order to create the implementation of the concept. Information and knowledge modeling Linked data and ontology engineering require 'host languages' to represent entities and the relations between them, constraints between the properties of entities and relations, and metadata attributes. JSON-LD and RDF are two major (and semantically almost equivalent) languages in this context, primarily because they support statement reification and contextualisation, which are essential properties to support the higher-order logic needed to reason about models. Model transformation is a common example of such reasoning. Object-oriented Object modeling languages are modeling languages based on a standardized set of symbols and ways of arranging them to model (part of) an object-oriented software design or system design. Some organizations use them extensively in combination with a software development methodology to progress from initial specification to an implementation plan and to communicate that plan to an entire team of developers and stakeholders. Because a modeling language is visual and at a higher level of abstraction than code, using models encourages the generation of a shared vision that may prevent problems of differing interpretation later in development. Often software modeling tools are used to construct these models, which may then be capable of automatic translation to code. Virtual reality Virtual Reality Modeling Language (VRML), before 1995 known as the Virtual Reality Markup Language, is a standard file format for representing 3-dimensional (3D) interactive vector graphics, designed particularly with the World Wide Web in mind. Others Architecture Description Language Face Modeling Language Generative Modelling Language Java Modeling Language Promela Rebeca Modeling Language Service Modeling Language Web Services Modeling Language X3D Applications Various kinds of modeling languages are applied in different disciplines, including computer science, information management, business process modeling, software engineering, and systems engineering. Modeling languages can be used to specify system requirements, structures and behaviors. Modeling languages are intended to be used to precisely specify systems so that stakeholders (e.g., customers, operators, analysts, designers) can better understand the system being modeled. The more mature modeling languages are precise, consistent and executable. Informal diagramming techniques applied with drawing tools are expected to produce useful pictorial representations of system requirements, structures and behaviors, which can be useful for communication, design, and problem solving but cannot be used programmatically. Executable modeling languages applied with proper tool support, however, are expected to automate system verification and validation, simulation and code generation from the same representations. 
Quality A review of modelling languages is essential to be able to assess which languages are appropriate for different modelling settings. In the term settings we include the stakeholders, the domain and the knowledge connected with them. Assessing language quality is a means to achieve better models. Framework for evaluation Here language quality is stated in accordance with the SEQUAL framework for quality of models developed by Krogstie, Sindre and Lindland (2003), since this is a framework that connects language quality to a framework for general model quality. Five areas are used in this framework to describe language quality, and these are supposed to express both the conceptual as well as the visual notation of the language. We will not go into a thorough explanation of the underlying quality framework of models but concentrate on the areas used to explain the language quality framework. Domain appropriateness The framework states the ability to represent the domain as domain appropriateness. The term appropriateness can be a bit vague, but in this particular context it means able to express. Ideally, the language should only be able to express things that are in the domain, but be powerful enough to include everything that is in the domain. This requirement might seem a bit strict, but the aim is to get a visually expressed model which includes everything relevant to the domain and excludes everything not appropriate for the domain. To achieve this, the language has to make a good distinction of which notations and syntaxes are advantageous to present. Participant appropriateness To evaluate participant appropriateness we try to identify how well the language expresses the knowledge held by the stakeholders. This involves challenges, since a stakeholder's knowledge is subjective. The knowledge of the stakeholder is both tacit and explicit, and both types of knowledge are of dynamic character. In this framework only the explicit type of knowledge is taken into account. The language should to a large extent express all the explicit knowledge of the stakeholders relevant to the domain. Modeller appropriateness The last paragraph stated that knowledge of the stakeholders should be presented in a good way. In addition, it is imperative that the language be able to express all possible explicit knowledge of the stakeholders. No knowledge should be left unexpressed due to lacks in the language. Comprehensibility appropriateness Comprehensibility appropriateness makes sure that the social actors understand the model due to a consistent use of the language. To achieve this, the framework includes a set of criteria. The general importance that these express is that the language should be flexible, easy to organize and easy to distinguish different parts of the language internally as well as from other languages. In addition, the language should be as simple as possible, and each symbol in the language should have a unique representation. This is also connected to the structure of the development requirements. Tool appropriateness To ensure that the domain actually modelled is usable for analysis and further processing, the language has to ensure that it is possible to reason in an automatic way. To achieve this, it has to include formal syntax and semantics. Another advantage of formalizing is the ability to discover errors at an early stage. The language best fitted for the technical actors is not always the same as the language best fitted for the social actors. 
Organizational appropriateness The language used is appropriate for the organizational context, e.g. the language is standardized within the organization, or it is supported by tools that are chosen as standard in the organization. See also Model-based testing (MBT) Model-driven engineering (MDE) References Further reading John Krogstie (2003). "Evaluating UML using a generic quality framework". SINTEF Telecom and Informatics and IDI, NTNU, Norway. Krogstie and Sølvberg (2003). Information Systems Engineering: Conceptual Modeling in a Quality Perspective. Institute of Computer and Information Sciences. Anna Gunhild Nysetvold and John Krogstie (2005). "Assessing business processing modeling languages using a generic quality framework". Institute of Computer and Information Sciences. External links Fundamental Modeling Concepts Software Modeling Languages Portal BIP - Incremental Component-based Construction of Real-time Systems Gellish Formal English Specification languages
Modeling language
[ "Engineering" ]
3,297
[ "Software engineering", "Specification languages" ]
346,814
https://en.wikipedia.org/wiki/Tetrahydrogestrinone
Tetrahydrogestrinone (THG), known by the nickname The Clear, is a synthetic and orally active anabolic–androgenic steroid (AAS) which was never marketed for medical use. It was developed by Patrick Arnold and was used by a number of high-profile athletes such as Barry Bonds and Dwain Chambers. Non-medical uses THG was developed completely in secret by Patrick Arnold as a designer drug, on the basis that doping testers would be unlikely to detect a totally new compound. Arnold developed a chemical similar to two obscure steroids marketed by BALCO, norbolethone and desoxymethyltestosterone, which had been reported in scientific literature but never entered mass production, and the banned anabolic steroids trenbolone and gestrinone, the latter of which was used to synthesize it. In 2003, whistleblower Trevor Graham passed a spent syringe containing a small amount of the drug to the United States Anti-Doping Agency. This was then transferred to the research group of pharmacologist Don Catlin, who identified the drug using mass spectrometry techniques and gave it its present name. THG has never been fully tested for safety and has never entered legitimate medical use, although some studies have been made of its properties. A synthesis was devised to ensure a source of material for comparison and it was scheduled by the Food and Drug Administration (FDA) in 2005. Concerns have also been raised about its potential use in animals such as in horse-racing. Side effects Side effects from prolonged use are likely to include infertility in both men and women, as well as other steroid side effects such as acne and hirsutism. Unlike most other anabolic steroids, THG also binds with high affinity to the glucocorticoid receptor, and while this effect may cause additional weight loss, it is also likely to cause additional side effects such as immunosuppression that are not seen with most other steroids. Pharmacology Pharmacodynamics THG is a highly potent agonist of the androgen and progesterone receptors, around 10 times more potent than the comparison drugs nandrolone or trenbolone, but with no estrogenic activity. It has been found to bind to the androgen receptor with similar affinity to dihydrotestosterone and produces growth of muscle tissue. According to Patrick Arnold, due to the drug's potency, he never had to supply significant quantities to BALCO, because "just a couple of drops under the tongue" were a sufficient dose. When THG reaches the nucleus of a cell, it binds to the androgen receptor at the ligand-binding pocket. Here it changes the expression of a variety of genes, turning on several anabolic and androgenic functions. It is the ligand's structure which determines the number of interactions that can take place with the human androgen receptor ligand-binding domain. Even minor modifications in the ligand's structure have a great impact on the strength of the interactions this ligand has with the androgen receptor. THG, possessing a high affinity, establishes more van der Waals contacts with the receptor than with many other steroids. It is this higher affinity and specific geometry of THG which makes these interactions with the androgen receptor so strong, resulting in THG's potency. Chemistry THG, also known as 17α-ethyl-18-methyl-δ9,11-19-nortestosterone or as 17α-ethyl-18-methylestra-4,9,11-trien-17β-ol-3-one, is a synthetic estrane steroid and a 17α-alkylated derivative of nandrolone (19-nortestosterone). 
It is a modification of gestrinone (17α-ethynyl-18-methyl-19-nor-δ9,11-testosterone) in which the ethynyl group has been hydrogenated into an ethyl group, thereby converting the steroid from a norethisterone (17α-ethynyl-19-nortestosterone) derivative with weak AR activity into a norethandrolone (17α-ethyl-19-nortestosterone) derivative with powerful AR activity. THG is closely related to RU-2309 (the 17α-methyl variant), trenbolone (δ9,11-19-nortestosterone), metribolone (17α-methyl-δ9,11-19-nortestosterone), and norbolethone (17α-ethyl-18-methyl-19-nortestosterone). History For a time, THG was considered the drug of choice for safe and "invisible" world record breaking in athletics, being used by several high-profile gold medal winners such as the sprinter Marion Jones, who ended her athletic career in 2007 after admitting to using THG prior to the 2000 Sydney Olympics, where she had won three gold medals. It has also been used by formerly banned British athlete Dwain Chambers, Major League Baseball left fielder Barry Bonds, and Major League Baseball first baseman Jason Giambi. THG was developed by Patrick Arnold for the Bay Area Laboratory Co-operative (BALCO), which claimed to be a nutritional supplement company. The company manufactured the drug by palladium–charcoal catalyzed hydrogenation of gestrinone, a substance used in gynecology for treatment of endometriosis (Australian Medicines Handbook 2011). In 2003, U.S. sprint coach Trevor Graham delivered a syringe containing traces of THG to the United States Anti-Doping Agency (USADA). This helped Don Catlin, MD, the founder and then-director of the UCLA Olympic Analytical Lab, to identify and develop a test for THG, the second reported designer anabolic steroid. References External links The identity of the whistle-blowing coach "This Is Very Clever Chemistry" from The Washington Post, December 4, 2004 Alcohols Anabolic–androgenic steroids Designer drugs Estranes Glucocorticoids Hepatotoxins Ketones Progestogens World Anti-Doping Agency prohibited substances
Tetrahydrogestrinone
[ "Chemistry" ]
1,306
[ "Ketones", "Functional groups" ]
346,883
https://en.wikipedia.org/wiki/IEEE%20802.1X
IEEE 802.1X is an IEEE Standard for port-based network access control (PNAC). It is part of the IEEE 802.1 group of networking protocols. It provides an authentication mechanism to devices wishing to attach to a LAN or WLAN. The standard directly addresses an attack technique called hardware addition, in which an attacker posing as a guest, customer or staff member smuggles a hacking device into the building and plugs it into the network, gaining full access. A notable example of the issue occurred in 2005, when a machine attached to Walmart's network was used to hack thousands of its servers. IEEE 802.1X defines the encapsulation of the Extensible Authentication Protocol (EAP) over wired IEEE 802 networks and over 802.11 wireless networks, which is known as "EAP over LAN" or EAPOL. EAPOL was originally specified for IEEE 802.3 Ethernet, IEEE 802.5 Token Ring, and FDDI (ANSI X3T9.5/X3T12 and ISO 9314) in 802.1X-2001, but was extended to suit other IEEE 802 LAN technologies such as IEEE 802.11 wireless in 802.1X-2004. EAPOL was also modified for use with IEEE 802.1AE ("MACsec") and IEEE 802.1AR (Secure Device Identity, DevID) in 802.1X-2010 to support service identification and optional point-to-point encryption over the internal LAN segment. 802.1X is part of the logical link control (LLC) sublayer of the 802 reference model. Overview 802.1X authentication involves three parties: a supplicant, an authenticator, and an authentication server. The supplicant is a client device (such as a laptop) that wishes to attach to the LAN/WLAN. The term 'supplicant' is also used interchangeably to refer to the software running on the client that provides credentials to the authenticator. The authenticator is a network device that provides a data link between the client and the network and can allow or block network traffic between the two, such as an Ethernet switch or wireless access point; the authentication server is typically a trusted server that can receive and respond to requests for network access, and can tell the authenticator whether the connection is to be allowed, as well as various settings that should apply to that client's connection. Authentication servers typically run software supporting the RADIUS and EAP protocols. In some cases, the authentication server software may be running on the authenticator hardware. The authenticator acts like a security guard to a protected network. The supplicant (i.e., client device) is not allowed access through the authenticator to the protected side of the network until the supplicant's identity has been validated and authorized. With 802.1X port-based authentication, the supplicant must initially provide the required credentials to the authenticator - these will have been specified in advance by the network administrator and could include a user name/password or a permitted digital certificate. The authenticator forwards these credentials to the authentication server to decide whether access is to be granted. If the authentication server determines the credentials are valid, it informs the authenticator, which in turn allows the supplicant (client device) to access resources located on the protected side of the network. Protocol operation EAPOL operates over the data link layer, and in the Ethernet II framing protocol has an EtherType value of 0x888E. Port entities 802.1X-2001 defines two logical port entities for an authenticated port: the "controlled port" and the "uncontrolled port". 
The controlled port is manipulated by the 802.1X PAE (Port Access Entity) to allow (in the authorized state) or prevent (in the unauthorized state) network traffic ingress and egress to/from the controlled port. The uncontrolled port is used by the 802.1X PAE to transmit and receive EAPOL frames. 802.1X-2004 defines the equivalent port entities for the supplicant, so a supplicant implementing 802.1X-2004 may prevent higher-level protocols from being used if it is not satisfied that authentication has successfully completed. This is particularly useful when an EAP method providing mutual authentication is used, as the supplicant can prevent data leakage when connected to an unauthorized network. Typical authentication progression The typical authentication procedure consists of: Initialization On detection of a new supplicant, the port on the switch (authenticator) is enabled and set to the "unauthorized" state. In this state, only 802.1X traffic is allowed; other traffic, such as the Internet Protocol (and with that TCP and UDP), is dropped. Initiation To initiate authentication the authenticator will periodically transmit EAP-Request Identity frames to a special Layer 2 MAC address (01:80:C2:00:00:03, the PAE group address) on the local network segment. The supplicant listens at this address, and on receipt of the EAP-Request Identity frame, it responds with an EAP-Response Identity frame containing an identifier for the supplicant such as a User ID. The authenticator then encapsulates this Identity response in a RADIUS Access-Request packet and forwards it on to the authentication server. The supplicant may also initiate or restart authentication by sending an EAPOL-Start frame to the authenticator, which will then reply with an EAP-Request Identity frame. Negotiation (technically EAP negotiation) The authentication server sends a reply (encapsulated in a RADIUS Access-Challenge packet) to the authenticator, containing an EAP Request specifying the EAP Method (the type of EAP-based authentication it wishes the supplicant to perform). The authenticator encapsulates the EAP Request in an EAPOL frame and transmits it to the supplicant. At this point, the supplicant can start using the requested EAP Method, or do a NAK ("Negative Acknowledgement") and respond with the EAP Methods it is willing to perform. Authentication If the authentication server and supplicant agree on an EAP Method, EAP Requests and Responses are sent between the supplicant and the authentication server (translated by the authenticator) until the authentication server responds with either an EAP-Success message (encapsulated in a RADIUS Access-Accept packet), or an EAP-Failure message (encapsulated in a RADIUS Access-Reject packet). If authentication is successful, the authenticator sets the port to the "authorized" state and normal traffic is allowed. If it is unsuccessful, the port remains in the "unauthorized" state. When the supplicant logs off, it sends an EAPOL-Logoff message to the authenticator, and the authenticator then sets the port to the "unauthorized" state, once again blocking all non-EAP traffic. Implementations An open-source project named Open1X produces a client, Xsupplicant. This client is currently available for both Linux and Windows. The main drawbacks of the Open1X client are that it does not provide comprehensible and extensive user documentation and that most Linux vendors do not provide a package for it. The more general wpa_supplicant can be used for 802.11 wireless networks and wired networks. 
Both support a very wide range of EAP types. The iPhone and iPod Touch have supported 802.1X since the release of iOS 2.0. Android has supported 802.1X since the release of 1.6 Donut. ChromeOS has supported 802.1X since mid-2011. macOS has offered native support since 10.3. Avenda Systems provides a supplicant for Windows, Linux and macOS. They also have a plugin for the Microsoft NAP framework. Avenda also offers health checking agents. Windows Windows defaults to not responding to 802.1X authentication requests for 20 minutes after a failed authentication. This can cause significant disruption to clients. The block period can be configured using the BlockTime DWORD value (entered in minutes) under the HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\dot3svc registry key (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\wlansvc for wireless networks). A hotfix is required for Windows XP SP3 and Windows Vista SP2 to make the period configurable. Wildcard server certificates are not supported by EAPHost, the Windows component that provides EAP support in the operating system. The implication of this is that when using a commercial certification authority, individual certificates must be purchased. Windows XP Windows XP has major issues with its handling of IP address changes resulting from user-based 802.1X authentication that changes the VLAN, and thus the subnet, of clients. Microsoft has stated that it will not backport the SSO feature from Vista that resolves these issues. If users are not logging in with roaming profiles, a hotfix must be downloaded and installed if authenticating via PEAP with PEAP-MSCHAPv2. Windows Vista Windows Vista-based computers that are connected via an IP phone may not authenticate as expected and, as a result, the client can be placed into the wrong VLAN. A hotfix is available to correct this. Windows 7 Windows 7-based computers that are connected via an IP phone may not authenticate as expected and, consequently, the client can be placed into the wrong VLAN. A hotfix is available to correct this. Windows 7 also does not respond to 802.1X authentication requests after initial 802.1X authentication fails. This can cause significant disruption to clients. A hotfix is available to correct this. Windows PE For most enterprises deploying and rolling out operating systems remotely, it is relevant that Windows PE does not have native support for 802.1X. However, support can be added to WinPE 2.1 and WinPE 3.0 through hotfixes that are available from Microsoft. Although full documentation is not yet available, preliminary documentation for the use of these hotfixes is available via a Microsoft blog. Linux Most Linux distributions support 802.1X via wpa_supplicant and desktop integration like NetworkManager. Apple devices As of iOS 17 and macOS 14, Apple devices support connecting to 802.1X networks using EAP-TLS with TLS 1.3 (EAP-TLS 1.3). Additionally, devices running iOS/iPadOS/tvOS 17 or later support wired 802.1X networks. Federations eduroam (the international roaming service) mandates the use of 802.1X authentication when providing network access to guests visiting from other eduroam-enabled institutions. BT (British Telecom, PLC) employs identity federation for authentication in services delivered to a wide variety of industries and governments. Proprietary extensions MAB (MAC Authentication Bypass) Not all devices support 802.1X authentication. Examples include network printers, Ethernet-based electronics like environmental sensors, cameras, and wireless phones. 
For those devices to be used in a protected network environment, alternative mechanisms must be provided to authenticate them. One option would be to disable 802.1X on that port, but that leaves the port unprotected and open for abuse. Another, slightly more reliable, option is MAB. When MAB is configured on a port, that port will first try to check whether the connected device is 802.1X compliant, and if no reaction is received from the connected device, it will try to authenticate with the AAA server using the connected device's MAC address as username and password. The network administrator then must make provisions on the RADIUS server to authenticate those MAC addresses, either by adding them as regular users or by implementing additional logic to resolve them in a network inventory database. Many managed Ethernet switches offer options for this. Vulnerabilities in 802.1X-2001 and 802.1X-2004 Shared media In the summer of 2005, Microsoft's Steve Riley posted an article (based on the original research of Microsoft MVP Svyatoslav Pidgorny) detailing a serious vulnerability in the 802.1X protocol involving a man-in-the-middle attack. In summary, the flaw stems from the fact that 802.1X authenticates only at the beginning of the connection; after that authentication, it is possible for an attacker to use the authenticated port if they have the ability to physically insert themselves (perhaps using a workgroup hub) between the authenticated computer and the port. Riley suggests that for wired networks the use of IPsec, or a combination of IPsec and 802.1X, would be more secure. EAPOL-Logoff frames transmitted by the 802.1X supplicant are sent in the clear and contain no data derived from the credential exchange that initially authenticated the client. They are therefore trivially easy to spoof on shared media and can be used as part of a targeted DoS on both wired and wireless LANs. In an EAPOL-Logoff attack, a malicious third party with access to the medium the authenticator is attached to repeatedly sends forged EAPOL-Logoff frames from the target device's MAC address. The authenticator (believing that the targeted device wishes to end its authentication session) closes the target's authentication session, blocking traffic ingressing from the target and denying it access to the network. The 802.1X-2010 specification, which began as 802.1af, addresses vulnerabilities in previous 802.1X specifications by using MACsec IEEE 802.1AE to encrypt data between logical ports (running on top of a physical port) and IEEE 802.1AR (Secure Device Identity / DevID) authenticated devices. As a stopgap, until these enhancements are widely implemented, some vendors have extended the 802.1X-2001 and 802.1X-2004 protocol, allowing multiple concurrent authentication sessions to occur on a single port. While this prevents traffic from devices with unauthenticated MAC addresses ingressing on an 802.1X-authenticated port, it will not stop a malicious device snooping on traffic from an authenticated device, and provides no protection against MAC spoofing or EAPOL-Logoff attacks. Alternatives The IETF-backed alternative is the Protocol for Carrying Authentication for Network Access (PANA), which also carries EAP, although it works at layer 3, using UDP, thus not being tied to the 802 infrastructure. 
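To make the Protocol operation section above concrete, the following Python sketch encodes the EAPOL frame types mentioned in this article at the byte level. The field layouts (EtherType 0x888E, the PAE group MAC address, and the EAPOL and EAP headers) follow IEEE 802.1X and RFC 3748; the supplicant MAC address and identity are invented for the example, and actually transmitting such frames (which requires raw-socket privileges) is omitted.

```python
# Byte-level sketch of the EAPOL frames from the "Protocol operation"
# section: EtherType 0x888E, destination = the PAE group MAC address.
import struct

PAE_GROUP_ADDR = bytes.fromhex("0180C2000003")   # 01:80:C2:00:00:03
ETHERTYPE_EAPOL = 0x888E
EAPOL_EAP_PACKET, EAPOL_START, EAPOL_LOGOFF = 0, 1, 2
EAP_RESPONSE, EAP_TYPE_IDENTITY = 2, 1

def eapol(packet_type: int, body: bytes = b"", version: int = 1) -> bytes:
    """EAPOL header: version (1 byte), type (1 byte), body length (2 bytes)."""
    return struct.pack("!BBH", version, packet_type, len(body)) + body

def eap_identity_response(identifier: int, identity: bytes) -> bytes:
    """EAP packet: code, identifier, total length, type, then the identity."""
    length = 5 + len(identity)
    return struct.pack("!BBHB", EAP_RESPONSE, identifier, length,
                       EAP_TYPE_IDENTITY) + identity

def ethernet_frame(src_mac: bytes, payload: bytes) -> bytes:
    """Ethernet II frame addressed to the PAE group address."""
    return PAE_GROUP_ADDR + src_mac + struct.pack("!H", ETHERTYPE_EAPOL) + payload

src = bytes.fromhex("020000000001")   # example (locally administered) MAC

start = ethernet_frame(src, eapol(EAPOL_START))           # begin authentication
response = ethernet_frame(
    src, eapol(EAPOL_EAP_PACKET, eap_identity_response(1, b"alice")))
logoff = ethernet_frame(src, eapol(EAPOL_LOGOFF))         # end the session

print(start.hex(), response.hex(), logoff.hex(), sep="\n")
```

The unauthenticated EAPOL-Logoff frame built on the last line is exactly four bytes of EAPOL payload, which illustrates why the Shared media section calls such frames trivially easy to spoof.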
See also AEGIS SecureConnect IEEE 802.11i-2004 References External links IEEE page on 802.1X GetIEEE802 Download 802.1X-2020 GetIEEE802 Download 802.1X-2010 GetIEEE802 Download 802.1X-2004 GetIEEE802 Download 802.1X-2001 Ultimate wireless security guide: Self-signed certificates for your RADIUS server WIRE1x Wired Networking with 802.1X Authentication on Microsoft TechNet IEEE 802.1X Networking standards Computer access control protocols Computer network security
IEEE 802.1X
[ "Technology", "Engineering" ]
3,128
[ "Cybersecurity engineering", "Computer standards", "Computer networks engineering", "Computer network security", "Networking standards" ]
346,906
https://en.wikipedia.org/wiki/Gauss%20gun
The Gauss gun (often called a Gauss rifle or Gauss cannon) is a device that uses permanent magnets and the physics of Newton's cradle to accelerate a projectile. Gauss guns are distinct from and predate coil guns, although many works of science fiction (and occasionally educators) have confused the two. The typical use of the Gauss rifle is to demonstrate the effects of energy and momentum transfer; however, self-assembling microbots based on the principle have been proposed for tissue penetration. Mechanism In its frequent incarnation as a physics demonstration, a Gauss gun usually consists of a series of ferromagnetic balls on a nonmagnetic track. On the track is a permanent magnet with a ball, the projectile, stuck to the front of it. Between the projectile and the magnet is a spacer, usually consisting of one or more additional balls. Yet another ball, the trigger ball, is released from behind the magnet. It is attracted to and accelerates toward the magnet. When it strikes the back of the magnet, it transfers its momentum to the projectile ball, which is knocked off the front of the stack, as in a Newton's cradle. Because the spacer kept it far away from the magnet, the projectile loses less energy escaping from the magnet's influence than the trigger ball gained approaching it, so it leaves the stack with a higher velocity than the trigger ball entered with. Once the ball is launched, the trigger ball must be pried off the back of the magnet before the gun can be used again. This is where the energy to shoot the gun ultimately comes from. Multi-stage Gauss guns are also possible, with the projectile of each stage becoming the trigger for the next, carrying its energy forward so that each stage contributes energy to the final projectile. See also Newton's cradle List of science demonstrations Galilean cannon References Sources External links A YouTube video explaining the Gauss gun Science demonstrations Physics education
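The energy and momentum bookkeeping described in the mechanism section can be made concrete with a few lines of Python. This is a toy sketch: the mass, release speed, and the two magnetic work terms are invented illustrative values, and an ideal, lossless cradle-style transfer between equal-mass balls is assumed.

# Single-stage Gauss gun energy bookkeeping (all numbers assumed).
m = 0.008            # mass of each ball, kg
v_release = 0.2      # trigger ball speed when released, m/s
W_approach = 0.010   # work the magnet does on the approaching trigger ball, J
W_escape = 0.002     # work the projectile spends escaping the magnet, J
                     # (smaller than W_approach because the spacer holds the
                     # projectile farther from the magnet)

ke_impact = 0.5 * m * v_release**2 + W_approach  # trigger KE at the moment of impact
ke_exit = ke_impact - W_escape                   # KE carried away by the projectile
v_exit = (2 * ke_exit / m) ** 0.5
print(f"released at {v_release} m/s, projectile exits at {v_exit:.2f} m/s")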
Gauss gun
[ "Physics" ]
396
[ "Applied and interdisciplinary physics", "Physics education" ]
346,925
https://en.wikipedia.org/wiki/Regular%20Language%20description%20for%20XML
REgular LAnguage description for XML (RELAX) is a specification for describing XML-based languages. A description written in RELAX is called a RELAX grammar. RELAX Core was approved as ISO/IEC Technical Report 22250-1 in 2002 (ISO/IEC TR 22250-1:2002). It was developed by ISO/IEC JTC 1/SC 34 (ISO/IEC Joint Technical Committee 1, Subcommittee 34 - Document description and processing languages). RELAX was designed by Murata Makoto. In 2001, the XML schema language RELAX NG was created by unifying RELAX Core with James Clark's TREX. It was published as ISO/IEC 19757-2 in 2003. See also RELAX NG Document Schema Definition Languages References External links RELAX home page ISO/IEC TR 22250-1:2002 - Information technology -- Document description and processing languages -- Regular Language Description for XML (RELAX) -- Part 1: RELAX Core Computer-related introductions in 2000 Data modeling languages ISO/IEC standards XML-based standards
Regular Language description for XML
[ "Technology" ]
215
[ "Computer standards", "XML-based standards" ]
346,952
https://en.wikipedia.org/wiki/Glycyrrhizin
Glycyrrhizin (glycyrrhizic acid or glycyrrhizinic acid) is the chief sweet-tasting constituent of Glycyrrhiza glabra (liquorice) root. Structurally, it is a saponin used as an emulsifier and gel-forming agent in foodstuffs and cosmetics. Its aglycone is enoxolone. Pharmacokinetics After oral ingestion, glycyrrhizin is hydrolysed to 18β-glycyrrhetinic acid (enoxolone) by intestinal bacteria. After absorption from the gut, 18β-glycyrrhetinic acid is metabolised to 3β-monoglucuronyl-18β-glycyrrhetinic acid in the liver. This metabolite circulates in the bloodstream; consequently, the oral bioavailability of glycyrrhizin itself is poor. Most of it is eliminated by bile and only a minor part (0.31–0.67%) by urine. After oral ingestion of 600 mg of glycyrrhizin, the metabolite appeared in urine after 1.5 to 14 hours. Maximal concentrations (0.49 to 2.69 mg/L) were achieved after 1.5 to 39 hours, and the metabolite can be detected in the urine after 2 to 4 days. Flavouring properties Glycyrrhizin is obtained as an extract from licorice root after maceration and boiling in water. Licorice extract (glycyrrhizin) is sold in the United States as a liquid, paste, or spray-dried powder. In specified amounts, it is approved for use as a flavor and aroma in manufactured foods, beverages, candies, dietary supplements, and seasonings. It is 30 to 50 times as sweet as sucrose (table sugar). Adverse effects The most widely reported side effect of glycyrrhizin use via consumption of black liquorice is a reduction of blood potassium levels, which can affect body fluid balance and the function of nerves. Chronic consumption of black licorice, even in moderate amounts, is associated with an increase in blood pressure, may cause irregular heart rhythm, and may have adverse interactions with prescription drugs. In extreme cases, death can occur as a result of excess consumption. See also 11α-Hydroxyprogesterone Glycyrrhetinic acid List of unusual deaths in the 21st century References External links 11β-Hydroxysteroid dehydrogenase inhibitors Triterpene glycosides Sugar substitutes Saponins
Glycyrrhizin
[ "Chemistry" ]
545
[ "Biomolecules by chemical classification", "Natural products", "Saponins" ]
346,955
https://en.wikipedia.org/wiki/List%20of%20number%20theory%20topics
This is a list of topics in number theory. See also: List of recreational number theory topics Topics in cryptography Divisibility Composite number Highly composite number Even and odd numbers Parity Divisor, aliquot part Greatest common divisor Least common multiple Euclidean algorithm Coprime Euclid's lemma Bézout's identity, Bézout's lemma Extended Euclidean algorithm Table of divisors Prime number, prime power Bonse's inequality Prime factor Table of prime factors Formula for primes Factorization RSA number Fundamental theorem of arithmetic Square-free Square-free integer Square-free polynomial Square number Power of two Integer-valued polynomial Fractions Rational number Unit fraction Irreducible fraction = in lowest terms Dyadic fraction Recurring decimal Cyclic number Farey sequence Ford circle Stern–Brocot tree Dedekind sum Egyptian fraction Modular arithmetic Montgomery reduction Modular exponentiation Linear congruence theorem Method of successive substitution Chinese remainder theorem Fermat's little theorem Proofs of Fermat's little theorem Fermat quotient Euler's totient function Noncototient Nontotient Euler's theorem Wilson's theorem Primitive root modulo n Multiplicative order Discrete logarithm Quadratic residue Euler's criterion Legendre symbol Gauss's lemma (number theory) Congruence of squares Luhn formula Mod n cryptanalysis Arithmetic functions Multiplicative function Additive function Dirichlet convolution Erdős–Kac theorem Möbius function Möbius inversion formula Divisor function Liouville function Partition function (number theory) Integer partition Bell numbers Landau's function Pentagonal number theorem Bell series Lambert series Analytic number theory: additive problems Twin prime Brun's constant Cousin prime Prime triplet Prime quadruplet Sexy prime Sophie Germain prime Cunningham chain Goldbach's conjecture Goldbach's weak conjecture Second Hardy–Littlewood conjecture Hardy–Littlewood circle method Schinzel's hypothesis H Bateman–Horn conjecture Waring's problem Brahmagupta–Fibonacci identity Euler's four-square identity Lagrange's four-square theorem Taxicab number Generalized taxicab number Cabtaxi number Schnirelmann density Sumset Landau–Ramanujan constant Sierpinski number Seventeen or Bust Niven's constant Algebraic number theory See list of algebraic number theory topics Quadratic forms Unimodular lattice Fermat's theorem on sums of two squares Proofs of Fermat's theorem on sums of two squares L-functions Riemann zeta function Basel problem on ζ(2) Hurwitz zeta function Bernoulli number Agoh–Giuga conjecture Von Staudt–Clausen theorem Dirichlet series Euler product Prime number theorem Prime-counting function Meissel–Lehmer algorithm Offset logarithmic integral Legendre's constant Skewes' number Bertrand's postulate Proof of Bertrand's postulate Proof that the sum of the reciprocals of the primes diverges Cramér's conjecture Riemann hypothesis Critical line theorem Hilbert–Pólya conjecture Generalized Riemann hypothesis Mertens function, Mertens conjecture, Meissel–Mertens constant De Bruijn–Newman constant Dirichlet character Dirichlet L-series Siegel zero Dirichlet's theorem on arithmetic progressions Linnik's theorem Elliott–Halberstam conjecture Functional equation (L-function) Chebotarev's density theorem Local zeta function Weil conjectures Modular form modular group Congruence subgroup Hecke operator Cusp form Eisenstein series Modular curve Ramanujan–Petersson conjecture Birch and Swinnerton-Dyer conjecture Automorphic form Selberg trace formula 
Artin conjecture Sato–Tate conjecture Langlands program modularity theorem Diophantine equations Pythagorean triple Pell's equation Elliptic curve Nagell–Lutz theorem Mordell–Weil theorem Mazur's torsion theorem Congruent number Arithmetic of abelian varieties Elliptic divisibility sequences Mordell curve Fermat's Last Theorem Mordell conjecture Euler's sum of powers conjecture abc Conjecture Catalan's conjecture Pillai's conjecture Hasse principle Diophantine set Matiyasevich's theorem Hundred Fowls Problem 1729 Diophantine approximation Davenport–Schmidt theorem Irrational number Square root of two Quadratic irrational Integer square root Algebraic number Pisot–Vijayaraghavan number Salem number Transcendental number e (mathematical constant) pi, list of topics related to pi Squaring the circle Proof that e is irrational Lindemann–Weierstrass theorem Hilbert's seventh problem Gelfond–Schneider theorem Erdős–Borwein constant Liouville number Irrationality measure Simple continued fraction Mathematical constant (sorted by continued fraction representation) Khinchin's constant Lévy's constant Lochs' theorem Gauss–Kuzmin–Wirsing operator Minkowski's question mark function Generalized continued fraction Kronecker's theorem Thue–Siegel–Roth theorem Prouhet–Thue–Morse constant Gelfond–Schneider constant Equidistribution mod 1 Beatty's theorem Littlewood conjecture Discrepancy function Low-discrepancy sequence Illustration of a low-discrepancy sequence Constructions of low-discrepancy sequences Halton sequences Geometry of numbers Minkowski's theorem Pick's theorem Mahler's compactness theorem Mahler measure Effective results in number theory Mahler's theorem Sieve methods Brun sieve Function field sieve General number field sieve Large sieve Larger sieve Quadratic sieve Selberg sieve Sieve of Atkin Sieve of Eratosthenes Sieve of Sundaram Turán sieve Named primes Chen prime Cullen prime Fermat prime Sophie Germain prime, safe prime Mersenne prime New Mersenne conjecture Great Internet Mersenne Prime Search Newman–Shanks–Williams prime Primorial prime Wagstaff prime Wall–Sun–Sun prime Wieferich prime Wilson prime Wolstenholme prime Woodall prime Prime pages Combinatorial number theory Covering system Small set (combinatorics) Erdős–Ginzburg–Ziv theorem Polynomial method Van der Waerden's theorem Szemerédi's theorem Collatz conjecture Gilbreath's conjecture Erdős–Graham conjecture Znám's problem Computational number theory Note: Computational number theory is also known as algorithmic number theory. 
Residue number system Cunningham project Quadratic residuosity problem Primality tests Prime factorization algorithm Trial division Sieve of Eratosthenes Probabilistic algorithm Fermat primality test Pseudoprime Carmichael number Euler pseudoprime Euler–Jacobi pseudoprime Fibonacci pseudoprime Probable prime Baillie–PSW primality test Miller–Rabin primality test Lucas–Lehmer primality test Lucas–Lehmer test for Mersenne numbers AKS primality test Integer factorization Pollard's p − 1 algorithm Pollard's rho algorithm Lenstra elliptic curve factorization Quadratic sieve Special number field sieve General number field sieve Shor's algorithm RSA Factoring Challenge Pseudo-random numbers Pseudorandom number generator Pseudorandomness Cryptographically secure pseudo-random number generator Middle-square method Blum Blum Shub ACORN ISAAC Lagged Fibonacci generator Linear congruential generator Mersenne twister Linear-feedback shift register Shrinking generator Stream cipher see also List of random number generators. Arithmetic dynamics Aliquot sequence and Aliquot sum dynamics Abundant number Almost perfect number Amicable number Betrothed numbers Deficient number Quasiperfect number Perfect number Sociable number Collatz conjecture Digit sum dynamics Additive persistence Digital root Digit product dynamics Multiplicative digital root Multiplicative persistence Lychrel number Perfect digital invariant Happy number History Disquisitiones Arithmeticae "On the Number of Primes Less Than a Given Magnitude" Vorlesungen über Zahlentheorie Prime Obsession Number theory Number theory
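Several of the algorithms named above, particularly in the computational number theory section, are short enough to state directly. As one illustrative instance, here is a minimal Python sketch of the sieve of Eratosthenes; the cutoff of 30 is an arbitrary example value.

def primes_up_to(n):
    """Sieve of Eratosthenes: return all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            # p is prime; cross out its multiples starting at p*p.
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]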
List of number theory topics
[ "Mathematics" ]
1,664
[ "Discrete mathematics", "Number theory" ]
346,992
https://en.wikipedia.org/wiki/Dimension%20theorem%20for%20vector%20spaces
In mathematics, the dimension theorem for vector spaces states that all bases of a vector space have equally many elements. This number of elements may be finite or infinite (in the latter case, it is a cardinal number), and defines the dimension of the vector space. Formally, the dimension theorem for vector spaces states that: given a vector space V, any two bases of V have the same cardinality. As a basis is a generating set that is linearly independent, the dimension theorem is a consequence of the following theorem, which is also useful: in a vector space V, if L is a linearly independent set and G is a generating set, then the cardinality of L is not larger than the cardinality of G. In particular, if V is finitely generated, then all its bases are finite and have the same number of elements. While the proof of the existence of a basis for any vector space in the general case requires Zorn's lemma and is in fact equivalent to the axiom of choice, the uniqueness of the cardinality of the basis requires only the ultrafilter lemma, which is strictly weaker (the proof given below, however, assumes trichotomy, i.e., that all cardinal numbers are comparable, a statement which is also equivalent to the axiom of choice). The theorem can be generalized to arbitrary R-modules for rings R having invariant basis number. In the finitely generated case the proof uses only elementary arguments of algebra, and does not require the axiom of choice nor its weaker variants. Proof Let V be a vector space, L be a linearly independent set of elements of V, and G be a generating set. One has to prove that the cardinality of L is not larger than that of G. If G is finite, this results from the Steinitz exchange lemma. (Indeed, the Steinitz exchange lemma implies every finite subset of L has cardinality not larger than that of G, hence L is finite with cardinality not larger than that of G.) If G is finite, a proof based on matrix theory is also possible. Assume that G is infinite. If L is finite, there is nothing to prove. Thus, we may assume that L is also infinite. Let us suppose that the cardinality of L is larger than that of G. We have to prove that this leads to a contradiction. By Zorn's lemma, every linearly independent set is contained in a maximal linearly independent set M. This maximality implies that M spans V and is therefore a basis (the maximality implies that every element of V is linearly dependent on the elements of M, and therefore is a linear combination of elements of M). As the cardinality of M is greater than or equal to the cardinality of L, one may replace L with M, that is, one may suppose, without loss of generality, that L is a basis. Thus, every g in G can be written as a finite sum g = Σ_{l ∈ E_g} c_l l, where E_g is a finite subset of L. As G is infinite, the union of the sets E_g over all g in G has the same cardinality as G. Therefore this union has cardinality smaller than that of L. So there is some l_0 in L which does not appear in any E_g. The corresponding l_0 can be expressed as a finite linear combination of elements of G, which in turn can be expressed as a finite linear combination of elements of L not involving l_0. Hence l_0 is linearly dependent on the other elements of L, which provides the desired contradiction. Kernel extension theorem for vector spaces This application of the dimension theorem is sometimes itself called the dimension theorem. Let T : U → V be a linear transformation. Then dim(U) = dim(range(T)) + dim(ker(T)), that is, the dimension of U is equal to the dimension of the transformation's range plus the dimension of the kernel. See rank–nullity theorem for a fuller discussion. Notes References Theorems in abstract algebra Theorems in linear algebra Articles containing proofs
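The kernel extension (rank–nullity) statement can be checked numerically for a concrete linear map. The sketch below uses NumPy and SciPy on an assumed example matrix; it illustrates the statement rather than proving it.

import numpy as np
from scipy.linalg import null_space

# T : R^3 -> R^2 given by a rank-1 matrix (an assumed example).
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])
dim_domain = A.shape[1]
dim_range = np.linalg.matrix_rank(A)         # dimension of the image of T
dim_kernel = null_space(A).shape[1]          # size of an orthonormal basis of ker(T)
print(dim_domain == dim_range + dim_kernel)  # True: 3 == 1 + 2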
Dimension theorem for vector spaces
[ "Mathematics" ]
687
[ "Theorems in algebra", "Theorems in linear algebra", "Articles containing proofs", "Theorems in abstract algebra" ]
347,002
https://en.wikipedia.org/wiki/Audio%20equipment
Audio equipment refers to devices that reproduce, record, or process sound. This includes microphones, radio receivers, AV receivers, CD players, tape recorders, amplifiers, mixing consoles, effects units, headphones, and speakers. Audio equipment is widely used in many different settings, such as concerts, bars, meeting rooms and the home, wherever there is a need to reproduce, record, or amplify sound. Electronic circuits considered a part of audio electronics may also be designed to achieve certain signal processing operations, in order to make particular alterations to the signal while it is in electrical form. Audio signals can be created synthetically through the generation of electric signals from electronic devices. Audio electronics were traditionally designed with analog electric circuit techniques until advances in digital technologies were developed. Moreover, digital signals can be manipulated by computer software in much the same way as by audio electronic devices, because both operate on the same digital representation of the signal. Both analog and digital design formats are still used today, and the use of one or the other largely depends on the application. See also Sound recording and reproduction Sound system (disambiguation) References Further reading Sontheimer, R. (1998). Designing audio circuits. Netherlands: Elektor International Media. Audio electronics Consumer electronics
Audio equipment
[ "Engineering" ]
252
[ "Audio electronics", "Audio engineering" ]
347,005
https://en.wikipedia.org/wiki/RELAX%20NG
In computing, RELAX NG (REgular LAnguage for XML Next Generation) is a schema language for XML—a RELAX NG schema specifies a pattern for the structure and content of an XML document. A RELAX NG schema is itself an XML document, but RELAX NG also offers a popular compact, non-XML syntax. Compared to other XML schema languages, RELAX NG is considered relatively simple. It was defined by a committee specification of the OASIS RELAX NG technical committee in 2001 and 2002, based on Murata Makoto's RELAX and James Clark's TREX, and also by part two of the international standard ISO/IEC 19757: Document Schema Definition Languages (DSDL). ISO/IEC 19757-2 was developed by ISO/IEC JTC 1/SC 34 and published in its first version in 2003. Schema examples Suppose we want to define an extremely simple XML markup scheme for a book: a book is defined as a sequence of one or more pages; each page contains text only. A sample XML document instance might be: <book> <page>This is page one.</page> <page>This is page two.</page> </book> XML syntax A RELAX NG schema can be written in a nested structure by defining a root element that contains further element definitions, which may themselves contain embedded definitions. A schema for our book in this style, using the full XML syntax, would be written: <element name="book" xmlns="http://relaxng.org/ns/structure/1.0"> <oneOrMore> <element name="page"> <text/> </element> </oneOrMore> </element> A nested structure becomes unwieldy with many sublevels and cannot define recursive elements, so most complex RELAX NG schemas use references to named pattern definitions located separately in the schema. Here, a "flattened schema" defines precisely the same book markup as the previous example: <grammar xmlns="http://relaxng.org/ns/structure/1.0"> <start> <element name="book"> <oneOrMore> <ref name="page"/> </oneOrMore> </element> </start> <define name="page"> <element name="page"> <text/> </element> </define> </grammar> Compact syntax The RELAX NG compact syntax is a non-XML format inspired by extended Backus–Naur form and regular expressions, designed so that it can be unambiguously translated to its XML counterpart, and back again, with a one-to-one correspondence in structure and meaning, in much the same way that Simple Outline XML (SOX) relates to XML. It shares many features with the syntax of DTDs. Here is the compact form of the above schema: element book { element page { text }+ } With named patterns, this can be flattened to: start = element book { page+ } page = element page { text } A compact RELAX NG parser will treat these two as the same pattern. Comparison with W3C XML Schema Although the RELAX NG specification was developed at roughly the same time as the W3C XML Schema specification, the latter was arguably better known and more widely implemented in both open-source and proprietary XML parsers and editors when it became a W3C Recommendation in 2001. Since then, however, RELAX NG support has increasingly found its way into XML software, and its acceptance has been aided by its adoption as a primary schema for popular document-centric markup languages such as DocBook, the TEI Guidelines, OpenDocument, and EPUB. RELAX NG shares with W3C XML Schema many features that set both apart from traditional DTDs: data typing, regular expression support, namespace support, and the ability to reference complex definitions. Filename extensions By informal convention, RELAX NG schemas in the regular syntax are typically named with the filename extension ".rng".
For schemas in the compact syntax, the extension ".rnc" is used. Determinism RELAX NG schemas are not necessarily "deterministic" or "unambiguous". Converting RELAX NG to DTD RELAX NG schemas can be converted to DTDs with the Trang tool. Note that Trang is unable to convert the OASIS DITA 1.3 schema to DTDs, failing with messages like: sorry, combining definitions with combine="choice" is not supported See also XML schemas DTD (Document Type Definition) Document Structure Description XML Schema (W3C) Schematron ODD (One Document Does it all) SXML References External links RELAX NG home page "The Design of RELAX NG" by James Clark RELAX NG tutorial for the XML syntax RELAX NG tutorial for the compact syntax Design patterns for structuring XML documents RELAX NG Book by Eric van der Vlist, released under the GNU Free Documentation License RELAX NG Reference by ZVON RELAX NG Java community projects at java.net Sun Multi-Schema Validator (MSV) open-source Java XML toolkit RELAX NG Compact Syntax validator open-source C program XSD to RELAX NG Converter Web-based converter https://github.com/relaxng/jing-trang Computer-related introductions in 2001 Data modeling languages ISO/IEC standards XML XML-based standards
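As a concrete illustration of putting a RELAX NG schema to work, the following Python sketch validates book documents against the XML-syntax schema from the examples above. The schema and instances come from this article; the use of lxml's RelaxNG class is simply one convenient validator among several.

from lxml import etree

schema_doc = etree.fromstring(
    '<element name="book" xmlns="http://relaxng.org/ns/structure/1.0">'
    '<oneOrMore><element name="page"><text/></element></oneOrMore>'
    '</element>')
validator = etree.RelaxNG(schema_doc)

good = etree.fromstring('<book><page>This is page one.</page></book>')
bad = etree.fromstring('<book/>')  # no pages: violates oneOrMore
print(validator.validate(good))    # True
print(validator.validate(bad))     # False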
RELAX NG
[ "Technology" ]
1,157
[ "Computer standards", "XML-based standards" ]
347,027
https://en.wikipedia.org/wiki/Amorphous%20metal
An amorphous metal (also known as metallic glass, glassy metal, or shiny metal) is a solid metallic material, usually an alloy, with a disordered atomic-scale structure. Most metals are crystalline in their solid state, which means they have a highly ordered arrangement of atoms. Amorphous metals are non-crystalline and have a glass-like structure. But unlike common glasses, such as window glass, which are typically electrical insulators, amorphous metals have good electrical conductivity and can show metallic luster. Amorphous metals can be produced in several ways, including extremely rapid cooling, physical vapor deposition, solid-state reaction, ion irradiation, and mechanical alloying. Small batches of amorphous metals have been produced through a variety of quick-cooling methods, such as amorphous metal ribbons produced by sputtering molten metal onto a spinning metal disk (melt spinning). The rapid cooling (millions of degrees Celsius per second) is too fast for crystals to form, and the material is "locked" into a glassy state. Alloys with cooling rates low enough to allow formation of an amorphous structure in thick layers (over ) have been produced; these are known as bulk metallic glasses. Batches of amorphous steel with three times the strength of conventional steel alloys have been produced. New techniques such as 3D printing, also characterised by high cooling rates, are an active research topic. History The first reported metallic glass was Au75Si25, produced at Caltech by Klement, Willens, and Duwez in 1960. This and other early glass-forming alloys had to be rapidly cooled (on the order of one megakelvin per second, 10^6 K/s) to avoid crystallization. An important consequence of this was that metallic glasses could be produced in only a few forms (typically ribbons, foils, or wires) in which one dimension was small, so that heat could be extracted quickly enough to achieve the required cooling rate. As a result, metallic glass specimens (with a few exceptions) were limited to thicknesses of less than one hundred microns. In 1969, an alloy of 77.5% palladium, 6% copper, and 16.5% silicon was found to have a critical cooling rate between 100 and 1000 K/s. In 1976, Liebermann and Graham developed a method of manufacturing thin ribbons of amorphous metal on a supercooled fast-spinning wheel. This was an alloy of iron, nickel, and boron. The material, known as Metglas, was commercialized in the early 1980s and came to be used for low-loss power distribution transformers (amorphous metal transformer). Metglas-2605 is composed of 80% iron and 20% boron, has a Curie temperature of and a room-temperature saturation magnetization of 1.56 teslas. In the early 1980s, glassy ingots with a diameter of were produced with an alloy of 55% palladium, 22.5% lead, and 22.5% antimony, by surface etching followed by heating–cooling cycles. Using boron oxide flux, the achievable thickness increased to one centimeter. In 1982, a study on amorphous metal structural relaxation indicated a relationship between the specific heat and temperature of (Fe0.5Ni0.5)83P17. As the material was heated, the two properties displayed a negative relationship starting at 375 K, due to the change in relaxed amorphous states. When the material was annealed for periods from 1 to 48 hours, the properties instead displayed a positive relationship starting at 475 K for all annealing periods, since the annealing-induced structure disappears at that temperature.
In this study, amorphous alloys demonstrated a glass transition and a supercooled liquid region. Between 1988 and 1992, further studies found more glass-forming alloys with a glass transition and a supercooled liquid region. From those studies, bulk glass alloys were made of La, Mg, and Zr, and these alloys demonstrated plasticity even with ribbon thicknesses from 20 μm to 50 μm. The plasticity was a stark contrast to past amorphous metals, which became brittle at those thicknesses. In 1988, alloys of lanthanum, aluminium, and copper were revealed to be glass-forming. Al-based metallic glasses containing scandium exhibited a record tensile mechanical strength of about . Bulk amorphous alloys of several millimeters in thickness were rare, although Pd-based amorphous alloys had been formed into rods with a diameter by quenching, and spheres with a diameter were formed by repeated flux melting with B2O3 and quenching. New techniques were found in 1990, producing alloys that form glasses at cooling rates as low as one kelvin per second. These cooling rates can be achieved by simple casting into metallic molds. These alloys can be cast into parts several centimeters thick while retaining an amorphous structure. The best glass-forming alloys were based on zirconium and palladium, but alloys based on iron, titanium, copper, magnesium, and other metals are known. The process exploited a phenomenon called "confusion". Such alloys contain many elements (often four or more) such that upon cooling sufficiently quickly, constituent atoms cannot achieve an equilibrium crystalline state before their mobility is lost. In this way, the random disordered state of the atoms is "locked in". In 1992, the commercial amorphous alloy, Vitreloy 1 (41.2% Zr, 13.8% Ti, 12.5% Cu, 10% Ni, and 22.5% Be), was developed at Caltech as part of Department of Energy and NASA research into new aerospace materials. By 2000, research at Tohoku University and Caltech yielded multicomponent alloys based on lanthanum, magnesium, zirconium, palladium, iron, copper, and titanium, with critical cooling rates between 1 K/s and 100 K/s, comparable to oxide glasses. In 2004, bulk amorphous steel was successfully produced by a group at Oak Ridge National Laboratory, which refers to its product as "glassy steel", and another at the University of Virginia, named "DARVA-Glass 101". The product is non-magnetic at room temperature and significantly stronger than conventional steel. In 2018, a team at SLAC National Accelerator Laboratory, the National Institute of Standards and Technology (NIST) and Northwestern University reported the use of artificial intelligence to predict and evaluate samples of 20,000 different likely metallic glass alloys in a year. Properties Amorphous metal is usually an alloy rather than a pure metal. The alloys contain atoms of significantly different sizes, leading to low free volume (and therefore up to orders of magnitude higher viscosity than other metals and alloys) in the molten state. The viscosity prevents the atoms from moving enough to form an ordered lattice. The material displays low shrinkage during cooling, and resistance to plastic deformation. The absence of grain boundaries, the weak spots of crystalline materials, leads to better wear resistance and less corrosion. Amorphous metals, while technically glasses, are much tougher and less brittle than oxide glasses and ceramics.
Amorphous metals are either non-ferromagnetic, if they are composed of Ln, Mg, Zr, Ti, Pd, Ca, Cu, Pt and Au, or ferromagnetic, if they are composed of Fe, Co, and Ni. Thermal conductivity is lower than in crystalline metals. Because the formation of an amorphous structure relies on fast cooling, the achievable thickness of amorphous structures is limited. To form an amorphous structure despite slower cooling, the alloy has to be made of three or more components, leading to complex crystal units with higher potential energy and lower odds of formation. The atomic radii of the components have to be significantly different (by over 12%), to achieve high packing density and low free volume. The combination of components should have negative mixing heat, inhibiting crystal nucleation and prolonging the time the molten metal stays in a supercooled state. As temperatures change, the electrical resistivity of amorphous metals behaves very differently from that of regular metals. While resistivity in crystalline metals generally increases with temperature, following Matthiessen's rule, resistivity in many amorphous metals decreases with increasing temperature. This effect can be observed in amorphous metals of high resistivities between 150 and 300 microohm-centimeters. In these metals, the scattering events causing the resistivity of the metal are not statistically independent, thus explaining the breakdown of Matthiessen's rule. The fact that the thermal change of the resistivity in amorphous metals can be negative over a large range of temperatures and correlated to their absolute resistivity values was identified by Mooij in 1973; it has become known as Mooij's rule. Alloys of boron, silicon, phosphorus, and other glass formers with magnetic metals (iron, cobalt, nickel) have high magnetic susceptibility, with low coercivity and high electrical resistance. Usually the electrical conductivity of a metallic glass is of the same low order of magnitude as that of a molten metal just above the melting point. The high resistance leads to low losses by eddy currents when subjected to alternating magnetic fields, a property useful for, e.g., transformer magnetic cores. Their low coercivity also contributes to low loss. Buckel and Hilsch discovered the superconductivity of amorphous metal thin films experimentally in the early 1950s. For certain metallic elements the superconducting critical temperature Tc can be higher in the amorphous state (e.g. upon alloying) than in the crystalline state, and in several cases Tc increases upon increasing the structural disorder. This behavior can be explained by the effect of structural disorder on electron-phonon coupling. Amorphous metals have higher tensile yield strengths and higher elastic strain limits than polycrystalline metal alloys, but their ductilities and fatigue strengths are lower. Amorphous alloys have a variety of potentially useful properties. In particular, they tend to be stronger than crystalline alloys of similar chemical composition, and they can sustain larger reversible ("elastic") deformations than crystalline alloys. Amorphous metals derive their strength directly from their non-crystalline structure, which does not have the defects (such as dislocations) that limit the strength of crystalline materials. Vitreloy is an amorphous metal with a tensile strength almost double that of high-grade titanium. However, metallic glasses at room temperature are not ductile and tend to fail suddenly and without warning when loaded in tension, which limits their applicability in reliability-critical applications.
Metal matrix composites consisting of a metallic glass matrix containing dendritic particles or fibers of a ductile crystalline metal are an alternative. Perhaps the most useful property of bulk amorphous alloys is that they are true glasses, which means that they soften and flow upon heating. This allows for easy processing, such as by injection molding, in much the same way as polymers. As a result, amorphous alloys have been commercialized for use in sports equipment, medical devices, and as cases for electronic equipment. Thin films of amorphous metals can be deposited as protective coatings via high-velocity oxygen fuel spraying. Applications Commercial The most important application exploits the magnetic properties of some ferromagnetic metallic glasses. The low magnetization loss is used in high-efficiency transformers at line frequency and in some higher-frequency transformers. Amorphous steel is very brittle, which makes it difficult to punch into motor laminations. Electronic article surveillance (such as passive ID tags) often uses metallic glasses because of these magnetic properties. Ti-based metallic glass, when made into thin pipes, has a high tensile strength of , an elastic elongation of 2% and high corrosion resistance. A Ti–Zr–Cu–Ni–Sn metallic glass was used to improve the sensitivity of a Coriolis flow meter. This flow meter is about 28-53 times more sensitive than conventional meters and can be applied in the fossil-fuel, chemical, environmental, semiconductor and medical science industries. Zr-Al-Ni-Cu-based metallic glass can be shaped into pressure sensors for the automobile and other industries. Such sensors are smaller, more sensitive, and possess greater pressure endurance than conventional stainless-steel sensors. Additionally, this alloy was used to make what was at the time the world's smallest geared motor. Potential Amorphous metals exhibit unique softening behavior above their glass transition, and this softening has been increasingly explored for thermoplastic forming of metallic glasses. Such a low softening temperature supports simple methods for making nanoparticle composites (e.g. with carbon nanotubes) and bulk metallic glasses. It has been shown that metallic glasses can be patterned on extremely small length scales, as small as 10 nm. This may solve problems of nanoimprint lithography, where expensive nano-molds made of silicon break easily. Nano-molds made from metallic glasses are easy to fabricate and more durable than silicon molds. The superior electronic, thermal and mechanical properties of bulk metallic glasses compared to polymers make them a good option for developing nanocomposites for electronic applications such as field electron emission devices. Ti40Cu36Pd14Zr10 is believed to be noncarcinogenic, is about three times stronger than titanium, and its elastic modulus nearly matches that of bone. It has a high wear resistance and does not produce abrasion powder. The alloy does not undergo shrinkage on solidification. A surface structure that is biologically attachable can be generated by surface modification using laser pulses, allowing better joining with bone. Laser powder bed fusion (LPBF) has been used to process Zr-based bulk metallic glass (BMG) for biomedical applications. Zr-based BMGs show good biocompatibility, supporting osteoblastic cell growth similar to that on Ti-6Al-4V alloy. The favorable response, coupled with the ability to tailor surface properties through SLM, highlights the promise of SLM Zr-based BMGs like AMLOY-ZR01 for orthopaedic implant applications.
However, their degradation under inflammatory conditions requires further investigation. Mg60Zn35Ca5 is under investigation as a biomaterial for implantation into bones as screws, pins, or plates, to fix fractures. Unlike traditional steel or titanium, this material dissolves in organisms at a rate of roughly 1 millimeter per month and is replaced with bone tissue. This speed can be adjusted by varying the zinc content. Bulk metallic glasses seem to exhibit superior properties. SAM2X5-630 is claimed to have the highest recorded elastic limit for any steel alloy, essentially the highest threshold at which a material can withstand an impact without deforming permanently. The alloy can withstand pressure and stress of up to without permanent deformation. This is the highest impact resistance of any bulk metallic glass ever recorded. This makes it an attractive option for armour material and other applications that require high stress tolerance. Additive manufacturing One challenge when synthesising a metallic glass is that the techniques often only produce very small samples, due to the need for high cooling rates. 3D-printing methods have been suggested as a way to create larger bulk samples. Selective laser melting (SLM) is one example of an additive manufacturing method that has been used to make iron-based metallic glasses. Laser foil printing (LFP) is another method, where foils of the amorphous metals are stacked and welded together, layer by layer. Modeling and theory Bulk metallic glasses have been modeled using atomic-scale simulations (within the density functional theory framework) in a similar manner to high entropy alloys. This has allowed predictions to be made about their behavior, stability and many more properties. As such, new bulk metallic glass systems can be tested and tailored for a specific purpose (e.g. bone replacement or aero-engine component) without as much empirical searching of the phase space or experimental trial and error. Ab-initio molecular dynamics (MD) simulation confirmed that the atomic surface structure of a Ni-Nb metallic glass observed by scanning tunneling microscopy is a kind of spectroscopy. At negative applied bias it visualizes only one sort of atoms (Ni), owing to the structure of the electronic density of states calculated using ab-initio MD simulation. One common way to try to understand the electronic properties of amorphous metals is by comparing them to liquid metals, which are similarly disordered, and for which established theoretical frameworks exist. For simple amorphous metals, good estimations can be reached by semi-classical modelling of the movement of individual electrons using the Boltzmann equation and approximating the scattering potential as the superposition of the electronic potential of each nucleus in the surrounding metal. To simplify the calculations, the electronic potentials of the atomic nuclei can be truncated to give a muffin-tin pseudopotential. In this theory, there are two main effects that govern the change of resistivity with increasing temperatures. Both are based on the induction of vibrations of the atomic nuclei of the metal as temperatures increase. One is that the atomic structure gets increasingly smeared out as the exact positions of the atomic nuclei get less and less well defined. The other is the introduction of phonons. While the smearing out generally decreases the resistivity of the metal, the introduction of phonons generally adds scattering sites and therefore increases resistivity.
Together, they can explain the anomalous decrease of resistivity in amorphous metals, as the first part outweighs the second. In contrast to regular crystalline metals, the phonon contribution in an amorphous metal does not get frozen out at low temperatures. Due to the lack of a defined crystal structure, there are always some phonon wavelengths that can be excited. While this semi-classical approach holds well for many amorphous metals, it generally breaks down under more extreme conditions. At very low temperatures, the quantum nature of the electrons leads to long-range interference effects of the electrons with each other in what is called "weak localization effects". In very strongly disordered metals, impurities in the atomic structure can induce bound electronic states in what is called "Anderson localization", effectively binding the electrons and inhibiting their movement. See also Bioabsorbable metallic glass Glass-ceramic-to-metal seals Liquidmetal Materials science Structure of liquids and glasses Amorphous brazing foil References Further reading External links Liquidmetal Design Guide "Metallic glass: a drop of the hard stuff" at New Scientist Glass-Like Metal Performs Better Under Stress Physical Review Focus, June 9, 2005 "Overview of metallic glasses" New Computational Method Developed By Carnegie Mellon University Physicist Could Speed Design and Testing of Metallic Glass (2004) (the alloy database developed by Marek Mihalkovic, Michael Widom, and others) New tungsten-tantalum-copper amorphous alloy developed at the Korea Advanced Institute of Science and Technology Digital Chosunilbo (English Edition) : Daily News in English About Korea Amorphous Metals in Electric-Power Distribution Applications Amorphous and Nanocrystalline Soft Magnets Metallic glasses and their composites, Materials Research Forum LLC, Millersville, PA, USA, (2018), p. 336 Alloys Metallurgy Glass
Amorphous metal
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,975
[ "Glass", "Metallurgy", "Unsolved problems in physics", "Materials science", "Homogeneous chemical mixtures", "Amorphous metals", "Alloys", "Chemical mixtures", "nan", "Amorphous solids" ]
347,049
https://en.wikipedia.org/wiki/Valuation%20%28algebra%29
In algebra (in particular in algebraic geometry or algebraic number theory), a valuation is a function on a field that provides a measure of the size or multiplicity of elements of the field. It generalizes to commutative algebra the notion of size inherent in consideration of the degree of a pole or multiplicity of a zero in complex analysis, the degree of divisibility of a number by a prime number in number theory, and the geometrical concept of contact between two algebraic or analytic varieties in algebraic geometry. A field with a valuation on it is called a valued field. Definition One starts with the following objects: a field K and its multiplicative group K×, and an abelian totally ordered group (Γ, +, ≥). The ordering and group law on Γ are extended to the set Γ ∪ {∞} by the rules ∞ ≥ α for all α ∈ Γ, and ∞ + α = α + ∞ = ∞ for all α ∈ Γ. Then a valuation of K is any map v : K → Γ ∪ {∞} that satisfies the following properties for all a, b in K: v(a) = ∞ if and only if a = 0; v(ab) = v(a) + v(b); v(a + b) ≥ min(v(a), v(b)), with equality if v(a) ≠ v(b). A valuation v is trivial if v(a) = 0 for all a in K×, otherwise it is non-trivial. The second property asserts that any valuation is a group homomorphism on K×. The third property is a version of the triangle inequality on metric spaces adapted to an arbitrary Γ (see Multiplicative notation below). For valuations used in geometric applications, the first property implies that any non-empty germ of an analytic variety near a point contains that point. The valuation can be interpreted as the order of the leading-order term. The third property then corresponds to the order of a sum being the order of the larger term, unless the two terms have the same order, in which case they may cancel and the sum may have larger order. For many applications, Γ is an additive subgroup of the real numbers, in which case ∞ can be interpreted as +∞ in the extended real numbers; note that min(a, +∞) = a for any real number a, and thus +∞ is the unit under the binary operation of minimum. The real numbers (extended by +∞) with the operations of minimum and addition form a semiring, called the min tropical semiring, and a valuation v is almost a semiring homomorphism from K to the tropical semiring, except that the homomorphism property can fail when two elements with the same valuation are added together. Multiplicative notation and absolute values The concept was developed by Emil Artin in his book Geometric Algebra, writing the group in multiplicative notation as (Γ, ·, ≥): instead of ∞, we adjoin a formal symbol O to Γ, with the ordering and group law extended by the rules O ≤ α for all α ∈ Γ, and O · α = α · O = O for all α ∈ Γ. Then a valuation of K is any map |·| : K → Γ ∪ {O} satisfying the following properties for all a, b ∈ K: |a| = O if and only if a = 0; |ab| = |a| · |b|; |a + b| ≤ max(|a|, |b|), with equality if |a| ≠ |b|. (Note that the directions of the inequalities are reversed from those in the additive notation.) If Γ is a subgroup of the positive real numbers under multiplication, the last condition is the ultrametric inequality, a stronger form of the triangle inequality |a + b| ≤ |a| + |b|, and |·| is an absolute value. In this case, we may pass to the additive notation by taking v(a) = −log |a|, whose value group is a subgroup of the real numbers under addition. Each valuation on K defines a corresponding linear preorder: a ≼ b if and only if |a| ≤ |b|. Conversely, given a "≼" satisfying the required properties, we can define a valuation whose value group consists of the equivalence classes {b : b ≼ a and a ≼ b}, with multiplication and ordering based on K and ≼. Terminology In this article, we use the terms defined above, in the additive notation.
However, some authors use alternative terms: our "valuation" (satisfying the ultrametric inequality) is called an "exponential valuation" or "non-Archimedean absolute value" or "ultrametric absolute value"; our "absolute value" (satisfying the triangle inequality) is called a "valuation" or an "Archimedean absolute value". Associated objects There are several objects defined from a given valuation v : K → Γ ∪ {∞}: the value group or valuation group Γv = v(K×), a subgroup of Γ (though v is usually surjective, so that Γv = Γ); the valuation ring Rv, the set of a ∈ K with v(a) ≥ 0; the prime ideal mv, the set of a ∈ K with v(a) > 0 (it is in fact a maximal ideal of Rv); the residue field kv = Rv/mv; and the place of K associated to v, the class of v under the equivalence defined below. Basic properties Equivalence of valuations Two valuations v1 and v2 of K with valuation groups Γ1 and Γ2, respectively, are said to be equivalent if there is an order-preserving group isomorphism φ : Γ1 → Γ2 such that v2(a) = φ(v1(a)) for all a in K×. This is an equivalence relation. Two valuations of K are equivalent if and only if they have the same valuation ring. An equivalence class of valuations of a field is called a place. Ostrowski's theorem gives a complete classification of places of the field of rational numbers Q: these are precisely the equivalence classes of valuations for the p-adic completions of Q. Extension of valuations Let v be a valuation of K and let L be a field extension of K. An extension of v (to L) is a valuation w of L such that the restriction of w to K is v. The set of all such extensions is studied in the ramification theory of valuations. Let L/K be a finite extension and let w be an extension of v to L. The index of Γv in Γw, e(w/v) = [Γw : Γv], is called the reduced ramification index of w over v. It satisfies e(w/v) ≤ [L : K] (the degree of the extension L/K). The relative degree of w over v is defined to be f(w/v) = [Rw/mw : Rv/mv] (the degree of the extension of residue fields). It is also less than or equal to the degree of L/K. When L/K is separable, the ramification index of w over v is defined to be e(w/v)·p^i, where p^i is the inseparable degree of the extension Rw/mw over Rv/mv. Complete valued fields When the ordered abelian group Γ is the additive group of the integers, the associated valuation is equivalent to an absolute value, and hence induces a metric on the field K. If K is complete with respect to this metric, then it is called a complete valued field. If K is not complete, one can use the valuation to construct its completion, as in the examples below, and different valuations can define different completion fields. In general, a valuation induces a uniform structure on K, and K is called a complete valued field if it is complete as a uniform space. There is a related property known as spherical completeness: it is equivalent to completeness if the value group is the integers, but stronger in general. Examples p-adic valuation The most basic example is the p-adic valuation νp associated to a prime integer p, on the rational numbers Q, with valuation ring Z(p), the localization of Z at the prime ideal (p). The valuation group is the additive integers Z. For an integer a, the valuation νp(a) measures the divisibility of a by powers of p: νp(a) is the largest non-negative integer k such that p^k divides a; and for a fraction, νp(a/b) = νp(a) − νp(b). Writing this multiplicatively yields the p-adic absolute value, which conventionally has 1/p as base, so |a|_p = (1/p)^νp(a). The completion of Q with respect to νp is the field of p-adic numbers. Order of vanishing Let K = F(x), the rational functions on the affine line X = F^1, and take a point a ∈ X.
For a polynomial f(x) = c_k (x − a)^k + c_{k+1} (x − a)^{k+1} + ⋯ with c_k ≠ 0, define va(f) = k, the order of vanishing at x = a; and va(f/g) = va(f) − va(g). Then the valuation ring R consists of rational functions with no pole at x = a, and the completion is the formal Laurent series ring F((x−a)). This can be generalized to the field of Puiseux series K{{t}} (fractional powers), the Levi-Civita field (its Cauchy completion), and the field of Hahn series, with the valuation in all cases returning the smallest exponent of t appearing in the series. π-adic valuation Generalizing the previous examples, let R be a principal ideal domain, K be its field of fractions, and π be an irreducible element of R. Since every principal ideal domain is a unique factorization domain, every non-zero element a of K can be written (essentially) uniquely as a = π^(ea) · p1^(e1) ⋯ pn^(en), where the e's are integers (possibly negative) and the pi are irreducible elements of R that are not associates of π. In particular, the integer ea is uniquely determined by a. The π-adic valuation of K is then given by vπ(0) = ∞ and vπ(a) = ea. If π' is another irreducible element of R such that (π') = (π) (that is, they generate the same ideal in R), then the π-adic valuation and the π'-adic valuation are equal. Thus, the π-adic valuation can be called the P-adic valuation, where P = (π). P-adic valuation on a Dedekind domain The previous example can be generalized to Dedekind domains. Let R be a Dedekind domain, K its field of fractions, and let P be a non-zero prime ideal of R. Then, the localization of R at P, denoted RP, is a principal ideal domain whose field of fractions is K. The construction of the previous section applied to the prime ideal P·RP of RP yields the P-adic valuation of K. Vector spaces over valuation fields Suppose that Γ ∪ {0} is the set of non-negative real numbers under multiplication. Then we say that the valuation is non-discrete if its range (the valuation group) is infinite (and hence has an accumulation point at 0). Suppose that X is a vector space over K and that A and B are subsets of X. Then we say that A absorbs B if there exists an α ∈ K such that λ ∈ K and |λ| ≥ |α| implies that B ⊆ λA. A is called radial or absorbing if A absorbs every finite subset of X. Radial subsets of X are invariant under finite intersection. Also, A is called circled if λ ∈ K and |λ| ≤ 1 implies λA ⊆ A. The set of circled subsets of X is invariant under arbitrary intersections. The circled hull of A is the intersection of all circled subsets of X containing A. Suppose that X and Y are vector spaces over a non-discrete valuation field K, let A ⊆ X, B ⊆ Y, and let f : X → Y be a linear map. If B is circled or radial, then so is f−1(B). If A is circled, then so is f(A); and if A is radial, f(A) will be radial under the additional condition that f is surjective. See also Discrete valuation Euclidean valuation Field norm Absolute value (algebra) Notes References External links Algebraic geometry Field (mathematics)
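The p-adic valuation from the examples above is easy to compute for rationals. The following Python sketch (the helper names are invented for illustration) implements νp and the corresponding p-adic absolute value.

from fractions import Fraction

def nu_p(n, p):
    """Largest k such that p**k divides the non-zero integer n."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def valuation(q, p):
    """p-adic valuation of a rational q, with v(0) taken as infinity."""
    q = Fraction(q)
    if q == 0:
        return float("inf")
    return nu_p(q.numerator, p) - nu_p(q.denominator, p)

print(valuation(Fraction(5, 8), 2))  # -3, since 5/8 = 2**(-3) * 5
print(2.0 ** -valuation(12, 2))      # 0.25, the 2-adic absolute value |12|_2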
Valuation (algebra)
[ "Mathematics" ]
2,315
[ "Fields of abstract algebra", "Algebraic geometry" ]
347,066
https://en.wikipedia.org/wiki/Document%20Schema%20Definition%20Languages
Document Schema Definition Languages (DSDL) is a framework within which multiple validation tasks of different types can be applied to an XML document in order to achieve more complete validation results than just the application of a single technology. It is specified as a multi-part ISO/IEC Standard, ISO/IEC 19757. It was developed by ISO/IEC JTC 1/SC 34 (ISO/IEC Joint Technical Committee 1, Subcommittee 34 - Document description and processing languages). DSDL defines a modular set of specifications for describing the document structures, data types, and data relationships in structured information resources. Part 2: Regular-grammar-based validation – RELAX NG Part 3: Rule-based validation – Schematron Part 4: Namespace-based Validation Dispatching Language (NVDL) Part 5: Extensible Datatypes Part 7: Character Repertoire Description Language (CREPDL) Part 8: Document Semantics Renaming Language (DSRL) Part 9: Namespace and datatype declaration in Document Type Definitions (DTDs) (Datatype- and namespace-aware DTDs) Part 11: Schema Association See also RELAX NG Schematron DTD NVDL W3C Schema References External links Home page for DSDL (archived from the original on 2016-01-22) ISO/IEC 19757-2:2003 - Information technology -- Document Schema Definition Language (DSDL) -- Part 2: Regular-grammar-based validation -- RELAX NG Data modeling languages ISO/IEC standards XML XML-based standards
Document Schema Definition Languages
[ "Technology" ]
314
[ "Computer standards", "XML-based standards" ]
347,105
https://en.wikipedia.org/wiki/Mother%20Teresa
Mary Teresa Bojaxhiu (born Anjezë Gonxhe Bojaxhiu; 26 August 1910 – 5 September 1997), better known as Mother Teresa or Saint Mother Teresa, was an Albanian-Indian Catholic nun, the founder of the Missionaries of Charity, and a Catholic saint. Born in Skopje, then part of the Ottoman Empire, she was raised in a devoutly Catholic family. At the age of 18, she moved to Ireland to join the Sisters of Loreto and later to India, where she lived most of her life and carried out her missionary work. On 4 September 2016, she was canonised by the Catholic Church as Saint Teresa of Calcutta. The anniversary of her death, 5 September, is now observed as her feast day. Mother Teresa founded the Missionaries of Charity, a religious congregation that was initially dedicated to serving "the poorest of the poor" in the slums of Calcutta. Over the decades, the congregation grew to operate in over 133 countries, with more than 4,500 nuns managing homes for those dying from HIV/AIDS, leprosy, and tuberculosis, as well as running soup kitchens, dispensaries, mobile clinics, orphanages, and schools. Members of the order take vows of chastity, poverty, and obedience and also profess a fourth vow: to give "wholehearted free service to the poorest of the poor." Mother Teresa received several honours, including the 1962 Ramon Magsaysay Peace Prize and the 1979 Nobel Peace Prize. Her life and work have inspired books, documentaries, and films. Her authorized biography, written by Navin Chawla, was published in 1992, and on 6 September 2017, she was named a co-patron of the Roman Catholic Archdiocese of Calcutta alongside St Francis Xavier. However, she was also a controversial figure, drawing criticism for her staunch opposition to abortion, divorce and contraception, as well as the poor conditions and lack of medical care or pain relief in her houses for the dying. Biography Early life and family Mother Teresa's given name was Anjezë Gonxhe (or Gonxha) Bojaxhiu (Anjezë is a cognate of Agnes; Gonxhe means "flower bud" in Albanian). She was born on 26 August 1910 into a Kosovar Albanian family in Skopje, Ottoman Empire (now the capital of North Macedonia). She was baptised in Skopje the day after her birth. She later considered 27 August, the day she was baptised, her "true birthday". She was the youngest child of Nikollë and Dranafile Bojaxhiu (Bernai). Her father, who was involved in Albanian-community politics in Ottoman Macedonia, was probably poisoned, an act attributed to Serbian agents, after he had visited Belgrade for a political meeting in 1919, when she was eight years old. He was born in Prizren (today in Kosovo); however, his family was from Mirdita (present-day Albania). Her mother may have been from a village near Gjakova, believed by her offspring to be Bishtazhin. According to a biography by Joan Graff Clucas, in her early years Anjezë became fascinated by stories of the lives of missionaries and their service in Bengal; by age 12, she was convinced that she should commit herself to religious life. Her resolve strengthened on 15 August 1928 as she prayed at the shrine of the Black Madonna of Vitina-Letnice, where she often went on pilgrimages. Anjezë left home in 1928 at age 18 to join the Sisters of Loreto at Loreto Abbey in Rathfarnham, Ireland, to learn English with the intent of becoming a missionary; English was the language of instruction of the Sisters of Loreto in India. She saw neither her mother nor her sister again. Her family lived in Skopje until 1934, when they moved to Tirana.
During communist leader Enver Hoxha's rule, she was considered a dangerous agent of the Vatican. Despite her repeated requests, and despite appeals made by many countries on her behalf, she was never permitted to see her mother and sister. Both of them died during Hoxha's rule, and Anjezë herself was only able to visit Albania five years after the communist regime collapsed. Dom Lush Gjergji, in his book "Our Mother Teresa", describes one of her trips to the embassy, where she was in tears as she left the building, saying: "Dear God, I can understand and accept that I should suffer, but it is so hard to understand and accept why my mother has to suffer. In her old age she has no other wish than to see us one last time." She arrived in India in 1929 and began her novitiate in Darjeeling, in the lower Himalayas, where she learned Bengali and taught at St. Teresa's School near her convent. She took her first religious vows on 24 May 1931. She chose to be named after Thérèse de Lisieux, the patron saint of missionaries; because a nun in the convent had already chosen that name, she opted for its Spanish spelling, Teresa. Teresa took her solemn vows on 14 May 1937 while she was a teacher at the Loreto convent school in Entally, eastern Calcutta, taking the style of 'Mother' as part of Loreto custom. She served there for nearly twenty years and was appointed its headmistress in 1944. Although Mother Teresa enjoyed teaching at the school, she was increasingly disturbed by the poverty surrounding her in Calcutta. The Bengal famine of 1943 brought misery and death to the city, and the August 1946 Direct Action Day began a period of Muslim-Hindu violence. In 1946, during a visit to Darjeeling by train, Mother Teresa felt that she heard the call of her inner conscience to serve the poor of India for Jesus. She asked for and received permission to leave the school. In 1950, she founded the Missionaries of Charity, choosing a white sari with two blue borders as the order's habit. Missionaries of Charity On 10 September 1946, Teresa experienced what she later described as "the call within the call" when she travelled by train to the Loreto convent in Darjeeling from Calcutta for her annual retreat: "I was to leave the convent and help the poor while living among them. It was an order. To fail would have been to break the faith." Joseph Langford, MC, founder of her congregation of priests, the Missionaries of Charity Fathers, later wrote, "Though no one knew it at the time, Sister Teresa had just become Mother Teresa". She began missionary work with the poor in 1948, replacing her traditional Loreto habit with a simple, white cotton sari with a blue border. Mother Teresa adopted Indian citizenship, spent several months in Patna to receive basic medical training at Holy Family Hospital and ventured into the slums. She founded a school in Motijhil, Calcutta, before she began tending to the poor and hungry. At the beginning of 1949, Mother Teresa was joined in her effort by a group of young women, and she laid the foundation for a new religious community helping the "poorest among the poor". Her efforts quickly caught the attention of Indian officials, including the prime minister. Mother Teresa wrote in her diary that her first year was fraught with difficulty.
With no income, she begged for food and supplies and experienced doubt, loneliness and the temptation to return to the comfort of convent life during these early months. On 7 October 1950, Mother Teresa received Vatican permission for the diocesan congregation, which would become the Missionaries of Charity. In her words, it would care for "the hungry, the naked, the homeless, the crippled, the blind, the lepers, all those people who feel unwanted, unloved, uncared for throughout society, people that have become a burden to the society and are shunned by everyone". In 1952, Mother Teresa opened her first hospice with help from Calcutta officials. She converted an abandoned Hindu temple into a home for the dying, free for the poor, naming it Kalighat, the Home of the Pure Heart (Nirmal Hriday). Those brought to the home received medical attention and the opportunity to die with dignity in accordance with their faith: Muslims were to read the Quran, Hindus received water from the Ganges, and Catholics received extreme unction. "A beautiful death", Mother Teresa said, "is for people who lived like animals to die like angels—loved and wanted." She opened a hospice for those with leprosy, calling it Shanti Nagar (City of Peace). The Missionaries of Charity established leprosy-outreach clinics throughout Calcutta, providing medication, dressings and food. The Missionaries of Charity took in an increasing number of homeless children; in 1955, Mother Teresa opened Nirmala Shishu Bhavan, the Children's Home of the Immaculate Heart, as a haven for orphans and homeless youth. The congregation began to attract recruits and donations, and by the 1960s it had opened hospices, orphanages and leper houses throughout India. Mother Teresa then expanded the congregation abroad, opening a house in Venezuela in 1965 with five sisters. Houses followed in Italy (Rome), Tanzania and Austria in 1968, and, during the 1970s, the congregation opened houses and foundations in the United States and dozens of countries in Asia, Africa and Europe. The Missionaries of Charity Brothers was founded in 1963, and a contemplative branch of the Sisters followed in 1976. Lay Catholics and non-Catholics were enrolled in the Co-Workers of Mother Teresa, the Sick and Suffering Co-Workers, and the Lay Missionaries of Charity. Responding to requests by many priests, in 1981 Mother Teresa founded the Corpus Christi Movement for Priests, and with Joseph Langford she founded the Missionaries of Charity Fathers in 1984 to combine the vocational aims of the Missionaries of Charity with the resources of the priesthood. By 1997, the 13-member Calcutta congregation had grown to more than 4,000 sisters who managed orphanages, AIDS hospices and charity centres worldwide, caring for refugees, the blind, the disabled, the aged, alcoholics, the poor and homeless, and victims of floods, epidemics and famine. By 2007, the Missionaries of Charity numbered about 450 brothers and 5,000 sisters worldwide, operating 600 missions, schools and shelters in 120 countries. International charity Mother Teresa said, "By blood, I am Albanian. By citizenship, an Indian. By faith, I am a Catholic nun. As to my calling, I belong to the world. As to my heart, I belong entirely to the Heart of Jesus." Fluent in five languages – Bengali, Albanian, Serbian, English and Hindi – she made occasional trips outside India for humanitarian reasons. These included a 1971 visit, with four of her sisters, to Troubles-era Belfast.
Her suggestion that the conditions she had found justified an ongoing mission was the cause of some embarrassment. Reportedly under pressure from senior clergy, who believed "the missionary traffic should be in the other direction", and despite local welcome and support, she and her sisters abruptly left the city in 1973. At the height of the Siege of Beirut in 1982, Mother Teresa rescued 37 children trapped in a front-line hospital by brokering a temporary cease-fire between the Israeli army and Palestinian guerrillas. Accompanied by Red Cross workers, she travelled through the war zone to the hospital to evacuate the young patients. When Eastern Europe experienced increased openness in the late 1980s, Mother Teresa expanded her efforts to Communist countries which had previously rejected the Missionaries of Charity. She began dozens of projects, undeterred by criticism of her stands against abortion and divorce: "No matter who says what, you should accept it with a smile and do your own work". She visited Armenia after the 1988 earthquake and met with Soviet Premier Nikolai Ryzhkov. Mother Teresa travelled to assist the hungry in Ethiopia, radiation victims at Chernobyl and earthquake victims in Armenia. In 1991 she returned to Albania for the first time, opening a Missionaries of Charity Brothers home in Tirana. By 1996, the Missionaries of Charity operated 517 missions in over 100 countries. The number of sisters in the Missionaries of Charity had grown from twelve to thousands, serving the "poorest of the poor" in 450 centres worldwide. The first Missionaries of Charity home in the United States was established in the South Bronx area of New York City, and by 1984 the congregation operated 19 establishments throughout the country. Declining health and death Mother Teresa had a heart attack in Rome in 1983 while she was visiting Pope John Paul II. Following a second heart attack in 1989, she received a pacemaker. In 1991, after a bout of pneumonia in Mexico, she had additional heart problems. Although Mother Teresa offered to resign as head of the Missionaries of Charity, in a secret ballot the sisters of the congregation voted for her to stay, and she agreed to continue. In April 1996, Mother Teresa fell, breaking her collarbone, and four months later she had malaria and heart failure. Although she underwent heart surgery, her health was clearly declining. Henry Sebastian D'Souza, the Archbishop of Calcutta, said that when she was first hospitalised with cardiac problems he ordered a priest to perform an exorcism (with her permission) because he thought she might be under attack by the devil. On 13 March 1997, Mother Teresa resigned as head of the Missionaries of Charity. She died on 5 September of that year. Reactions Mother Teresa lay in repose in an open casket in St Thomas, Calcutta, for a week before her funeral. She received a state funeral from the Indian government in gratitude for her service to the poor of all religions in the country. Cardinal Secretary of State Angelo Sodano, the Pope's representative, delivered the homily at the service. Mother Teresa's death was mourned in the secular and religious communities. Prime Minister of Pakistan Nawaz Sharif called her "a rare and unique individual who lived long for higher purposes. Her life-long devotion to the care of the poor, the sick, and the disadvantaged was one of the highest examples of service to our humanity." According to former U.N. Secretary-General Javier Pérez de Cuéllar, "She is the United Nations. She is peace in the world."
Recognition and reception India The Indian government issued Mother Teresa a diplomatic passport under the name Mary Teresa Bojaxhiu. She received the Padma Shri in 1962 and the Jawaharlal Nehru Award for International Understanding in 1969. She later received other Indian awards, including the Bharat Ratna (India's highest civilian award) in 1980. Mother Teresa's official biography, by Navin Chawla, was published in 1992. In Calcutta, she is worshipped as a deity by some Hindus. To commemorate the 100th anniversary of her birth, the government of India issued a special ₹5 coin (the amount of money Mother Teresa had when she arrived in India) on 28 August 2010. President Pratibha Patil said, "Clad in a white sari with a blue border, she and the sisters of Missionaries of Charity became a symbol of hope to many—namely, the aged, the destitute, the unemployed, the diseased, the terminally ill, and those abandoned by their families." Indian views of Mother Teresa are not uniformly favourable. Aroup Chatterjee, a physician born and raised in Calcutta who was an activist in the city's slums for years around 1980 before moving to the UK, said that he "never even saw any nuns in those slums". His research, involving more than 100 interviews with volunteers, nuns and others familiar with the Missionaries of Charity, was described in a 2003 book critical of Mother Teresa. Chatterjee criticised her for promoting a "cult of suffering" and a distorted, negative image of Calcutta, exaggerating the work done by her mission and misusing the funds and privileges at her disposal. According to him, some of the hygiene problems he had criticised (such as the reuse of needles) improved after Mother Teresa's death in 1997. Bikash Ranjan Bhattacharya, mayor of Calcutta from 2005 to 2010, said that "she had no significant impact on the poor of this city", glorified illness instead of treating it and misrepresented the city: "No doubt there was poverty in Calcutta, but it was never a city of lepers and beggars, as Mother Teresa presented it." On the Hindu right, the Bharatiya Janata Party clashed with Mother Teresa over the Christian Dalits but praised her in death and sent a representative to her funeral. The Vishwa Hindu Parishad, however, opposed the government's decision to grant her a state funeral. Its secretary, Giriraj Kishore, said that "her first duty was to the Church and social service was incidental", accusing her of favouring Christians and conducting "secret baptisms" of the dying. In a front-page tribute, the Indian fortnightly Frontline dismissed the charges as "patently false" and said that they had "made no impact on the public perception of her work, especially in Calcutta". Praising her "selfless caring", energy and bravery, the author of the tribute nonetheless criticised Teresa's public campaign against abortion and her claim to be non-political. In February 2015 Mohan Bhagwat, leader of the Hindu right-wing organisation Rashtriya Swayamsevak Sangh, said that Mother Teresa's objective was "to convert the person, who was being served, into a Christian". Former RSS spokesperson M. G. Vaidhya supported Bhagwat's assessment, and the organisation accused the media of "distorting facts about Bhagwat's remarks". Trinamool Congress MP Derek O'Brien, CPI leader Atul Anjan and Delhi chief minister Arvind Kejriwal protested Bhagwat's statement. In 1991, the Senate of Serampore College (University), the country's first modern university, awarded her an honorary doctorate during the registrarship of D. S. Satyaranjan.
Elsewhere Mother Teresa received the Ramon Magsaysay Award for Peace and International Understanding, given for work in South or East Asia, in 1962. According to its citation, "The Board of Trustees recognises her merciful cognisance of the abject poor of a foreign land, in whose service she has led a new congregation". By the early 1970s, Mother Teresa was an international celebrity. She had been catapulted to fame by Malcolm Muggeridge's 1969 BBC documentary, Something Beautiful for God, and by his 1971 book of the same name. Muggeridge was undergoing a spiritual journey of his own at the time. During filming, the crew thought that footage shot in poor lighting, particularly at the Home for the Dying, was unlikely to be usable, since they had been working with new, untested photographic film. In England, the footage was found to be extremely well lit, and Muggeridge called it a miracle of "divine light" from Teresa. Other crew members said that it was due to a new type of ultra-sensitive Kodak film. Muggeridge later converted to Catholicism. Around this time, the Catholic world began to honour Mother Teresa publicly. Pope Paul VI gave her the inaugural Pope John XXIII Peace Prize in 1971, commending her work with the poor, her display of Christian charity and her efforts for peace. She received the Pacem in Terris Award in 1976. After her death, Teresa progressed rapidly on the road to sainthood. She was honoured by governments and civilian organisations and appointed an honorary Companion of the Order of Australia in 1982 "for service to the community of Australia and humanity at large". The United Kingdom and the United States each bestowed a number of awards, culminating in the Order of Merit in 1983 and honorary citizenship of the United States on 16 November 1996. Mother Teresa's Albanian homeland gave her the Golden Honour of the Nation in 1994, but her acceptance of this and of the Haitian Legion of Honour was controversial. Mother Teresa was criticised for implicitly supporting the Duvaliers and corrupt businessmen such as Charles Keating and Robert Maxwell; she wrote to the judge at Keating's trial requesting clemency for him. Universities in India and the West granted her honorary degrees. Other civilian awards included the Balzan Prize for promoting humanity, peace and brotherhood among peoples (1978) and the Albert Schweitzer International Prize (1975). In April 1976, Mother Teresa visited the University of Scranton in northeastern Pennsylvania, where she received the La Storta Medal for Human Service from university president William J. Byron. She challenged an audience of 4,500 to "know poor people in your own home and local neighbourhood" by feeding others or simply spreading joy and love. Mother Teresa continued: "The poor will help us grow in sanctity, for they are Christ in the guise of distress". In August 1987, Mother Teresa received an honorary doctor of social science degree from the university in recognition of her service and her ministry to help the destitute and sick. She spoke to over 4,000 students and members of the Diocese of Scranton about her service to the "poorest of the poor", telling them to "do small things with great love". During her lifetime, Mother Teresa was among the top 10 women in Gallup's annual most admired man and woman poll 18 times, finishing first several times in the 1980s and 1990s. In 1999 she headed Gallup's List of Most Widely Admired People of the 20th Century, out-polling all other volunteered answers by a wide margin.
She was first in all major demographic categories except the very young. Nobel Peace Prize In 1979, Mother Teresa received the Nobel Peace Prize "for work undertaken in the struggle to overcome poverty and distress, which also constitutes a threat to peace". She refused the conventional ceremonial banquet for laureates, asking that its $192,000 cost be given to the poor in India and saying that earthly rewards were important only if they helped her to help the world's needy. When Mother Teresa received the prize, she was asked, "What can we do to promote world peace?" She answered, "Go home and love your family." Building on this theme in her Nobel lecture, she said: "Around the world, not only in the poor countries, but I found the poverty of the West so much more difficult to remove. When I pick up a person from the street, hungry, I give him a plate of rice, a piece of bread, I have satisfied. I have removed that hunger. But a person that is shut out, that feels unwanted, unloved, terrified, the person that has been thrown out from society – that poverty is so hurtable and so much, and I find that very difficult." Social and political views Opposition to abortion Mother Teresa singled out abortion as "the greatest destroyer of peace today. Because if a mother can kill her own child – what is left for me to kill you and you kill me – there is nothing between." Barbara Smoker of the secular humanist magazine The Freethinker criticised Mother Teresa after the Peace Prize award, saying that her promotion of Catholic moral teachings on abortion and contraception diverted funds from effective methods to solve India's problems. At the Fourth World Conference on Women in Beijing, Mother Teresa said: "Yet we can destroy this gift of motherhood, especially by the evil of abortion, but also by thinking that other things like jobs or positions are more important than loving." Abortion-rights groups have also criticised Mother Teresa's stance against abortion and contraception. Conversion practices Navin B. Chawla points out that Mother Teresa never intended to build hospitals, but to provide a place where those who had been refused admittance "could at least die being comforted and with some dignity." He also counters critics of Mother Teresa by stating that her periodic hospitalisations were instigated by staff members against her wishes, and he disputes the claim that she conducted unethical conversions: "Those who are quick to criticise Mother Teresa and her mission are unable or unwilling to do anything to help with their own hands." Similarly, Sister Mary Prema Pierick, the former Superior General of the Missionaries of Charity, also stated that Mother Teresa's homes were never intended to be a substitute for hospitals, but rather "homes for those not accepted in the hospital... But if they need hospital care, then we have to take them to the hospital, and we do that." Sister Pierick also contested the claims that Mother Teresa deliberately cultivated suffering, and affirmed that her order's goal was to alleviate suffering. Fr Des Wilson, who had hosted her in Belfast in 1971, argued that "Mother Theresa was content to pick up the sad pieces left by a vicious political and economic system", and he noted that hers was a fate very different from that of Archbishop Óscar Romero of El Salvador. While she got the Nobel Prize, "Romero, who attacked the causes of misery as well as picking up the pieces, was shot in the head".
Defence of contentious priests In 1994, Mother Teresa argued that the sexual abuse allegations against Jesuit priest Donald McGuire were untrue. When he was convicted of sexually molesting multiple children in 2006, Mother Teresa's defence of him was criticised. Inadequate care and alleged cruelty According to a paper by Canadian academics Serge Larivée, Geneviève Chénard and Carole Sénéchal, Mother Teresa's clinics received millions of dollars in donations but lacked medical care, systematic diagnosis, necessary nutrition, and sufficient analgesics for those in pain; in the opinion of the three academics, "Mother Teresa believed the sick must suffer like Christ on the cross". They said that the additional money might have transformed the health of the city's poor by creating advanced palliative care facilities. One of Mother Teresa's most outspoken critics was English journalist Christopher Hitchens, who wrote in a 2003 article: "This returns us to the medieval corruption of the church, which sold indulgences to the rich while preaching hellfire and continence to the poor. [Mother Teresa] was not a friend of the poor. She was a friend of poverty. She said that suffering was a gift from God. She spent her life opposing the only known cure for poverty, which is the empowerment of women and the emancipation of them from a livestock version of compulsory reproduction." He accused her of hypocrisy for choosing advanced treatment for her heart condition. Hitchens said that "her intention was not to help people", and that she lied to donors about how their contributions were used. "It was by talking to her that I discovered, and she assured me, that she wasn't working to alleviate poverty", he said. "She was working to expand the number of Catholics. She said, 'I'm not a social worker. I don't do it for this reason. I do it for Christ. I do it for the church.'" Spiritual life Analysing her deeds and achievements, Pope John Paul II said: "Where did Mother Teresa find the strength and perseverance to place herself completely at the service of others? She found it in prayer and in the silent contemplation of Jesus Christ, his Holy Face, his Sacred Heart." Privately, Mother Teresa experienced doubts and struggles in her religious beliefs which lasted nearly 50 years, until the end of her life, and she expressed grave doubts about God's existence and pain over her lack of faith. Other saints (including Teresa's namesake Thérèse of Lisieux, who called it a "night of nothingness") had similar experiences of spiritual dryness. According to James Langford, these doubts were typical and would not be an impediment to canonisation. After ten years of doubt, Mother Teresa described a brief period of renewed faith. After Pope Pius XII's death in 1958, she was praying for him at a requiem Mass when she was relieved of "the long darkness: that strange suffering." Five weeks later her spiritual dryness returned. Mother Teresa wrote many letters to her confessors and superiors over a 66-year period, most notably to Calcutta Archbishop Ferdinand Perier and Jesuit priest Celeste van Exem (her spiritual advisor since the formation of the Missionaries of Charity). She requested that her letters be destroyed, concerned that "people will think more of me – less of Jesus." The correspondence was nevertheless compiled in Mother Teresa: Come Be My Light. Mother Teresa wrote to spiritual confidant Michael van der Peet, "Jesus has a very special love for you.
[But] as for me, the silence and the emptiness is so great, that I look and do not see – listen and do not hear – the tongue moves [in prayer] but does not speak. [...] I want you to pray for me – that I let Him have [a] free hand." In Deus caritas est (his first encyclical), Pope Benedict XVI mentioned Mother Teresa three times and used her life to clarify one of the encyclical's main points: "In the example of Blessed Teresa of Calcutta we have a clear illustration of the fact that time devoted to God in prayer not only does not detract from effective and loving service to our neighbour but is in fact the inexhaustible source of that service." She wrote, "It is only by mental prayer and spiritual reading that we can cultivate the gift of prayer." Although her order was not connected with the Franciscan orders, Mother Teresa admired Francis of Assisi and was influenced by Franciscan spirituality. The Sisters of Charity recite the prayer of Saint Francis every morning at Mass during the thanksgiving after Communion, and their emphasis on ministry and many of their vows are similar. Francis emphasised poverty, chastity, obedience and submission to Christ. He devoted much of his life to serving the poor, particularly lepers. Canonization Miracle and beatification After Mother Teresa's death in 1997, the Holy See began the process of beatification (the second of three steps towards canonisation), and Brian Kolodiejchuk was appointed postulator by the Diocese of Calcutta. Although he said, "We didn't have to prove that she was perfect or never made a mistake", he had to prove that Mother Teresa's virtue was heroic. Kolodiejchuk submitted 76 documents, totalling 35,000 pages, which were based on interviews with 113 witnesses who were asked to answer 263 questions. The process of canonisation requires the documentation of a miracle resulting from the intercession of the prospective saint. In 2002 the Vatican recognised as a miracle the healing of a tumour in the abdomen of Monica Besra, an Indian woman, after the application of a locket containing Teresa's picture. According to Besra, a beam of light emanated from the picture and her cancerous tumour was cured; her husband and some of her medical staff, however, said that conventional medical treatment eradicated the tumour. Ranjan Mustafi, who told The New York Times he had treated Besra, said that the cyst was caused by tuberculosis: "It was not a miracle ... She took medicines for nine months to one year." According to Besra's husband, "My wife was cured by the doctors and not by any miracle [...] This miracle is a hoax." Besra said that her medical records, including sonograms, prescriptions and physicians' notes, were confiscated by Sister Betta of the Missionaries of Charity. According to Time, calls to Sister Betta and the office of Sister Nirmala (Teresa's successor as head of the order) produced no comment. Officials at Balurghat Hospital, where Besra sought medical treatment, said that they were pressured by the order to call her cure miraculous. In February 2000, former West Bengal health minister Partho De ordered a review of Besra's medical records at the Department of Health in Calcutta. According to De, given her lengthy treatment there was nothing unusual about her illness and cure. He said that he had refused to give the Vatican the name of a doctor who would certify that Monica Besra's healing was a miracle. During Mother Teresa's beatification and canonisation, the Vatican studied published and unpublished criticism of her life and work.
Christopher Hitchens and Chatterjee (author of The Final Verdict, a book critical of Mother Teresa) spoke to the tribunal; according to Vatican officials, the allegations raised were investigated by the Congregation for the Causes of Saints. The group found no obstacle to Mother Teresa's canonisation and issued its nihil obstat on 21 April 1999. Because of the attacks on her, some Catholic writers called her a sign of contradiction. Mother Teresa was beatified on 19 October 2003 and was thereafter known by Catholics as "Blessed". Canonization On 17 December 2015, the Vatican Press Office confirmed that Pope Francis had recognised a second miracle attributed to Mother Teresa: the healing, in 2008, of a Brazilian man with multiple brain tumours. The miracle first came to the attention of the postulation (the officials managing the cause) during the events of World Youth Day 2013, when the pope was in Brazil that July. A subsequent investigation took place in Brazil from 19 to 26 June 2015; its findings were then transferred to the Congregation for the Causes of Saints, which issued a decree recognising the investigation as complete. Pope Francis canonised her at a ceremony on 4 September 2016 in St. Peter's Square in Vatican City. Tens of thousands of people witnessed the ceremony, including 15 government delegations and 1,500 homeless people from across Italy. It was televised live on the Vatican channel and streamed online; Skopje, Mother Teresa's hometown, announced a week-long celebration of her canonisation. In India, a special Mass was celebrated by the Missionaries of Charity in Calcutta. Co-Patron of Calcutta Archdiocese On 4 September 2017, during a celebration honouring the first anniversary of her canonisation, Sister Mary Prema Pierick, Superior-General of the Missionaries of Charity, announced that Mother Teresa would be made the co-patron of the Calcutta Archdiocese during a Mass in the Cathedral of the Most Holy Rosary on 6 September 2017. On 5 September 2017, Archbishop Thomas D'Souza, who serves as head of the Roman Catholic Archdiocese of Calcutta, confirmed that Mother Teresa would be named co-patron of the Calcutta Diocese, alongside Francis Xavier. On 6 September 2017, about 500 people attended the Mass at the cathedral, where Dominique Gomes, the local Vicar General, read the decree instituting her as the second patron saint of the archdiocese. The ceremony was also presided over by D'Souza and the Vatican's ambassador to India, Giambattista Diquattro, who led the Mass and inaugurated, in the church, a bronze statue of Mother Teresa carrying a child. The Catholic Church had declared St. Francis Xavier the first patron saint of Calcutta in 1986. Legacy and depictions in popular culture At the time of her death, the Missionaries of Charity had over 4,000 sisters and an associated brotherhood of 300 members, operating 610 missions in 123 countries. These included hospices and homes for people with HIV/AIDS, leprosy and tuberculosis, soup kitchens, children's and family counselling programmes, orphanages and schools. The Missionaries of Charity were aided by co-workers numbering over one million by the 1990s. Commemorations Mother Teresa has been commemorated by museums and named the patroness of a number of churches. She has had buildings, roads and complexes named after her, including Albania's international airport. Mother Teresa Day, 5 September, is a public holiday in Albania. In 2009, the Memorial House of Mother Teresa was opened in her hometown of Skopje, North Macedonia.
The Cathedral of Blessed Mother Teresa in Pristina, Kosovo, is named in her honour. The demolition of a historic high school building to make way for the new construction initially sparked controversy in the local community, but the high school was later relocated to a new, more spacious campus. Consecrated on 5 September 2017, it became the first cathedral in Mother Teresa's honour and the second extant cathedral in Kosovo. Mother Teresa Women's University, in Kodaikanal, was established in 1984 as a public university by the government of Tamil Nadu. The Mother Teresa Postgraduate and Research Institute of Health Sciences, in Pondicherry, was established in 1999 by the government of Puducherry. The charitable organisation Sevalaya runs the Mother Teresa Girls Home, providing poor and orphaned girls near the underserved village of Kasuva in Tamil Nadu with free food, clothing, shelter and education. A number of tributes by Mother Teresa's biographer, Navin Chawla, have appeared in Indian newspapers and magazines. Indian Railways introduced the "Mother Express", a new train named after Mother Teresa, on 26 August 2010 to commemorate the centenary of her birth. The Tamil Nadu government organised centenary celebrations honouring Mother Teresa on 4 December 2010 in Chennai, headed by chief minister M Karunanidhi. Beginning on 5 September 2013, the anniversary of her death has been designated the International Day of Charity by the United Nations General Assembly. In 2012, Mother Teresa was ranked number 5 in Outlook India's poll of the Greatest Indian. Ave Maria University in Ave Maria, Florida, is home to the Mother Teresa Museum. Film and literature Documentaries and books Mother Teresa is the subject of a 1969 documentary film and a 1971 book, Something Beautiful for God, by Malcolm Muggeridge. The film has been credited with drawing the Western world's attention to Mother Teresa. Christopher Hitchens' 1994 documentary, Hell's Angel, argues that Mother Teresa urged the poor to accept their fate, while the rich are portrayed as favoured by God. It was the precursor of Hitchens' essay The Missionary Position: Mother Teresa in Theory and Practice. Mother of The Century (2001) and Mother Teresa (2002) are short documentary films about the life and work of Mother Teresa among the poor of India, directed by Amar Kumar Bhattacharya and produced by the Films Division of the Government of India. Mother Teresa: No Greater Love (2022) is a documentary film featuring unusual access to the order's institutional archives and showing how her vision of serving Christ among the poor is carried on by the Missionaries of Charity. Films and television Mother Teresa appeared in Bible Ki Kahaniyan, an Indian Christian television series based on the Bible which aired on DD National during the early 1990s. She introduced some of the episodes, underlining the importance of the Bible's message. Geraldine Chaplin played Mother Teresa in Mother Teresa: In the Name of God's Poor, which received a 1997 Art Film Festival award. She was played by Olivia Hussey in a 2003 Italian television miniseries, Mother Teresa of Calcutta; re-released in 2007, it received a CAMIE award. Mother Teresa was played by Juliet Stevenson in the 2014 film The Letters, which was based on her letters to Vatican priest Celeste van Exem. Mother Teresa, played by Cara Francis (the FantasyGrandma), rap-battled Sigmund Freud in Epic Rap Battles of History, a comedy rap YouTube series created by Nice Peter and Epic Lloyd.
The rap was released on YouTube on 22 September 2019. In the 2020 animated film Soul, Mother Teresa briefly appears as one of 22's past mentors. Mother Teresa & Me (or Kavita & Teresa), a 2022 film by Indian-Swiss director Kamal Musale, showcases her work among the poor and needy of Calcutta and the legacy and inspiration she has left behind. She was portrayed by Jacqueline Fritschi-Cornaz in the film. Theatre Teresa, la Obra en Musical is a 2004 Argentine musical based on the life of Mother Teresa. See also Abdul Sattar Edhi Albanians List of Albanians List of female Nobel laureates The Greatest Indian Roman Catholicism in Albania Roman Catholicism in Kosovo Roman Catholicism in North Macedonia Notes References Sources Chawla, Navin. Mother Teresa: The Authorized Biography. First published by Sinclair-Stevenson, UK (1992); introduction and first three chapters of fourteen (without pictures); a critical examination of Agnes Bojaxhiu's life and work. Since translated into 14 languages in India and abroad. Indian language editions include Hindi, Bengali, Gujarati, Malayalam, Tamil, Telugu, and Kannada. The foreign language editions include French, German, Dutch, Spanish, Italian, Polish, Japanese, and Thai. In both Indian and foreign languages, there have been multiple editions. The bulk of royalty income goes to charity. Dwivedi, Brijal. Mother Teresa: Woman of the Century. Muntaykkal, T. T. Blessed Mother Teresa: Her Journey to Your Heart. Scott, David. A Revolution of Love: The Meaning of Mother Teresa. Chicago: Loyola Press, 2005. Sebba, Anne. Mother Teresa: Beyond the Image. New York: Doubleday, 1997. Slavicek, Louise. Mother Teresa. New York: Infobase Publishing, 2007. Spink, Kathryn. Mother Teresa: A Complete Authorized Biography. New York: HarperCollins, 1997. Teresa, Mother, et al. Mother Teresa: In My Own Words. Gramercy Books, 1997. Teresa, Mother. Mother Teresa: Come Be My Light: The Private Writings of the "Saint of Calcutta", edited with commentary by Brian Kolodiejchuk. New York: Doubleday, 2007. Teresa, Mother. Where There Is Love, There Is God, edited and with an introduction by Brian Kolodiejchuk. New York: Doubleday, 2010. Williams, Paul. Mother Teresa. Indianapolis: Alpha Books, 2002. Wüllenweber, Walter. "Nehmen ist seliger denn geben. Mutter Teresa – wo sind ihre Millionen?" ["Taking is more blessed than giving. Mother Teresa – where are her millions?"]. Stern (illustrated German weekly), 10 September 1998. English translation.
External links Mother Teresa memorial with gallery Mother Teresa at Missionaries of Charity Fathers Mother Teresa contrasts 1910 births 1997 deaths 20th-century Albanian women 20th-century Indian Roman Catholic nuns Albanian people of Kosovan descent Albanian Roman Catholic religious sisters and nuns Albanian Roman Catholic saints Anti-abortion activists Anti-contraception activists Beatifications by Pope John Paul II Catholic pacifists Canonizations by Pope Francis Christian female saints of the Late Modern era Congressional Gold Medal recipients Deified women Female Roman Catholic missionaries Founders of Catholic religious communities Honorary companions of the Order of Australia Honorary members of the Order of Merit Indian Nobel laureates Indian pacifists Indian people of Albanian descent Indian people of Kosovan descent Indian people of Macedonian descent Indian philanthropists Indian Roman Catholic saints Indian women philanthropists Nobel Peace Prize laureates People from Darjeeling People from Kolkata People from Skopje Naturalised citizens of India Presidential Medal of Freedom recipients Ramon Magsaysay Award winners Recipients of the Bharat Ratna Recipients of the Padma Shri in social work Roman Catholic missionaries in India Social workers from West Bengal Superiors general Templeton Prize laureates Venerated Catholics by Pope John Paul II Women Nobel laureates Immigrants to India Yugoslav emigrants
Mother Teresa
[ "Technology" ]
8,804
[ "Women Nobel laureates", "Women in science and technology" ]
347,113
https://en.wikipedia.org/wiki/Tiger%20team
A tiger team is a team of specialists assembled to work on a specific goal, or to solve a particular problem. Origin of the term A 1964 paper entitled Program Management in Design and Development used the term tiger team, defining it as "a team of undomesticated and uninhibited technical specialists, selected for their experience, energy, and imagination, and assigned to track down relentlessly every possible source of failure in a spacecraft subsystem or simulation". Walter C. Williams gave this definition in response to the question "How best can advancements in reliability/maintainability state-of-the-art be attained and used with compressed schedules?" Williams was an engineer at the Manned Spacecraft Center and part of the Edwards Air Force Base National Advisory Committee for Aeronautics. The paper consists of anecdotes and answers to questions from a panel on improving issues in program management concerning testing and quality assurance in aerospace vehicle development and production. The panel consisted of Williams; Col. J. R. Dempsey of General Dynamics; Lt. Gen. W. A. Davis of the Ballistic Systems Division, Norton Air Force Base; and A. S. Crossfield of North American Aviation. Examples A tiger team was crucial to the Apollo 13 crewed lunar mission in 1970. During the mission, an oxygen tank in the Apollo 13 Service Module exploded. A team of specialists was formed to address the resulting problems and bring the astronauts back to Earth safely, led by NASA Flight and Mission Operations Director Gene Kranz. Kranz and the members of his "White Team", later designated the "Tiger Team", received the Presidential Medal of Freedom for their efforts in the Apollo 13 mission. In security work, a tiger team is a group that tests an organization's ability to protect its assets by attempting to defeat its physical or information security. In this context, the tiger team is often a permanent team, as security is typically an ongoing priority. For example, one implementation of an information security tiger team approach divides the team into two co-operating groups: one for vulnerability research, which finds and researches the technical aspects of a vulnerability, and one for vulnerability management, which manages communication and feedback between the team and the organization, as well as ensuring each discovered vulnerability is tracked throughout its life-cycle and ultimately resolved (see the sketch below). An initiative involving tiger teams was implemented by the United States Department of Energy (DOE) under then-Secretary James D. Watkins. From 1989 through 1992, the DOE formed tiger teams to assess 35 DOE facilities for compliance with environment, safety, and health requirements. Beginning in October 1991, smaller tiger teams were formed to perform more detailed follow-up assessments focused on the most pressing issues. The NASA Engineering and Safety Center (NESC) puts together "tiger teams" of engineers and scientists from multiple NASA centers to assist in solving complex problems when requested by a project or program. See also Penetration test Red team References Hacking (computer security) Software testing Emergency management Aerospace engineering Biological engineering Problem solving
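The life-cycle tracking performed by the vulnerability-management group described above can be made concrete with a minimal sketch. The following Python snippet is purely illustrative: all names (Vulnerability, Status, advance) are invented for this example and are not drawn from any specific tiger-team tool or standard.

```python
# Hypothetical sketch of vulnerability life-cycle tracking, as performed by
# the "vulnerability management" group described above. All identifiers here
# are invented for illustration.
from dataclasses import dataclass, field
from enum import Enum, auto

class Status(Enum):
    DISCOVERED = auto()       # found by the vulnerability-research group
    REPORTED = auto()         # communicated to the organization
    IN_REMEDIATION = auto()   # fix in progress
    RESOLVED = auto()         # verified fixed; end of the life-cycle

@dataclass
class Vulnerability:
    ident: str
    description: str
    status: Status = Status.DISCOVERED
    history: list = field(default_factory=list)

    def advance(self, new_status: Status, note: str = "") -> None:
        """Record every transition so no finding is silently dropped."""
        self.history.append((self.status, new_status, note))
        self.status = new_status

# Usage: the management group moves each finding through its life-cycle.
vuln = Vulnerability("TT-001", "default credentials on a badge-reader admin page")
vuln.advance(Status.REPORTED, "sent to facilities IT")
vuln.advance(Status.IN_REMEDIATION, "patch scheduled")
vuln.advance(Status.RESOLVED, "re-test confirmed the fix")
assert vuln.status is Status.RESOLVED
```

The design point is the history list: keeping an explicit record of every state transition is one simple way to satisfy the requirement that each vulnerability be tracked until it is ultimately resolved.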
Tiger team
[ "Engineering", "Biology" ]
607
[ "Software engineering", "Biological engineering", "Software testing", "Aerospace engineering" ]
347,136
https://en.wikipedia.org/wiki/List%20of%20TCP%20and%20UDP%20port%20numbers
This is a list of TCP and UDP port numbers used by protocols for operation of network applications. The Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) only need one port for bidirectional traffic. TCP usually uses port numbers that match the services of the corresponding UDP implementations, if they exist, and vice versa. The Internet Assigned Numbers Authority (IANA) is responsible for maintaining the official assignments of port numbers for specific uses. However, many unofficial uses of both well-known and registered port numbers occur in practice. Similarly, many of the official assignments refer to protocols that were never or are no longer in common use. This article lists port numbers and their associated protocols that have experienced significant uptake. Table legend Well-known ports The port numbers in the range from 0 to 1023 (0 to 2^10 − 1) are the well-known ports or system ports. They are used by system processes that provide widely used types of network services. On Unix-like operating systems, a process must execute with superuser privileges to be able to bind a network socket to an IP address using one of the well-known ports. Registered ports The range of port numbers from 1024 to 49151 (2^10 to 2^15 + 2^14 − 1) are the registered ports. They are assigned by IANA for a specific service upon application by a requesting entity. On most systems, registered ports can be used without superuser privileges. Dynamic, private or ephemeral ports The range 49152–65535 (2^15 + 2^14 to 2^16 − 1), comprising 16,384 ports, contains the dynamic or private ports that cannot be registered with IANA. This range is used for private or customized services, for temporary purposes, and for automatic allocation of ephemeral ports; the three ranges are summarised in the sketch below. Note See also Comparison of file transfer protocols Internet protocol suite Port (computer networking) List of IP numbers Lists of network protocols References and notes Further reading External links Computing-related lists Internet-related lists Lists of network protocols Transmission Control Protocol
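As a concrete illustration of the three IANA ranges described above, here is a minimal Python sketch. The function name classify_port is invented for this example; it is not part of any standard library.

```python
# Minimal sketch (illustrative only): classify a port number into the three
# IANA ranges described above. Port numbers are 16-bit unsigned integers.
def classify_port(port: int) -> str:
    """Return the IANA range that a TCP/UDP port number falls into."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit values: 0-65535")
    if port <= 1023:    # 0 to 2**10 - 1
        return "well-known (system) port"
    if port <= 49151:   # 2**10 to 2**15 + 2**14 - 1
        return "registered port"
    return "dynamic/private (ephemeral) port"  # 2**15 + 2**14 to 2**16 - 1

if __name__ == "__main__":
    for p in (22, 8080, 51234):
        print(p, "->", classify_port(p))
```

Only the first branch corresponds to ports that, on Unix-like systems, require superuser privileges to bind, as noted in the article text.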
List of TCP and UDP port numbers
[ "Technology" ]
413
[ "Computing-related lists", "Lists of network protocols", "Internet-related lists" ]
347,161
https://en.wikipedia.org/wiki/Bullroarer
The bullroarer, rhombus, or turndun, is an ancient ritual musical instrument and a device historically used for communicating over great distances. It consists of a piece of wood attached to a string which, when swung in a large circle, produces a roaring vibration sound. It dates to the Paleolithic period; examples dating from 18,000 BC have been found in Ukraine. Anthropologist Michael Boyd, a bullroarer expert, documents a number found in Europe, Asia, Africa, the Americas, and Australia. In Ancient Greece it was a sacred instrument used in the Dionysian Mysteries and is still used in rituals worldwide. It was a prominent musical technology among the Australian Aboriginal people, used in ceremonies and to communicate with different people groups across the continent. Many different cultures believe that the sounds they make have the power to ward off evil influences. Design, use, and sound A bullroarer consists of a weighted airfoil (a rectangular thin slat of wood about 15 cm (6 in) to 60 cm (24 in) long and about 1.25 cm (0.5 in) to 5 cm (2 in) wide) attached to a long cord. Typically, the wood slat is trimmed down to a sharp edge, and serrations along the length of the wooden slat may or may not be used, depending on the cultural traditions of the region in question. The cord is given a slight initial twist, and the roarer is then swung in a large circle in a horizontal plane, or in a smaller circle in a vertical plane. The aerodynamics of the roarer will keep it spinning about its axis even after the initial twist has unwound. The cord winds fully first in one direction and then the other, alternating. It makes a characteristic roaring vibrato sound, with notable sound modulations arising from the rotation of the roarer along its longitudinal axis and from the choice of a shorter or longer length of cord. By modifying the expansiveness of its circuit and the speed given it, and by changing the plane in which the bullroarer is whirled from horizontal to vertical or vice versa, the modulation of the sound produced can be controlled, making the coding of information possible. The low-frequency component of the sound travels extremely long distances, clearly audible over many miles on a quiet night. In culture Various cultures have used bullroarers as musical, ritual, and religious instruments and long-range communication devices for at least 19,000 years. Documented North American Indigenous names for the instrument include the Navajo tsin ndi'ni' ("groaning stick"; Young, R. & Morgan, W., An Analytical Lexicon of Navajo, University of New Mexico Press, 1992, p. 461), the Apache tzi-ditindi ("sounding wood"), and the Gros Ventre nakaantan ("making cold"). This instrument has been used by numerous early and traditional cultures in both the northern and southern hemispheres, but in the popular consciousness it is perhaps best known for its use by Australian Aborigines (it is from one of their languages that the name turndun comes). Henry Cowell wrote a composition for two violins, viola, two celli, and two bullroarers. A bullroarer featured in the Kate Bush Before The Dawn concerts in London in 2014. Australian Aboriginal culture Bullroarers have been used in initiation ceremonies and in burials to ward off evil spirits and bad tidings. Bullroarers are considered secret men's business by all or almost all Aboriginal tribal groups, and hence forbidden for women, children, non-initiated men, or outsiders to even hear.
Fison and Howitt documented this secrecy in "Kamilaroi and Kurnai" (page 198). Anyone caught breaching it was to be punished by death. They are used in men's initiation ceremonies, and the sound they produce is considered in some indigenous cultures to represent the sound of the Rainbow Serpent. In the cultures of southeastern Australia, the sound of the bullroarer is the voice of Daramulan, and a successful bullroarer can be made only if it has been cut from a tree containing his spirit. The bullroarer can also be used as a tool in Aboriginal art. Bullroarers have sometimes been referred to as "wife-callers" by Indigenous Australians. A bullroarer is used by Paul Hogan in the 1988 film Crocodile Dundee II. John Antill included one in the orchestration of his ballet Corroboree (1946). The Australian band Midnight Oil included a recording of an imitation bullroarer on their album Diesel and Dust (1987) at the beginning of the song "Bullroarer". In an interview, the band's drummer Rob Hirst stated: "it's a sacred instrument... only initiated men are supposed to hear those sounds. So we didn't use a real bullroarer as that would have been cultural imperialism. Instead we used an imitation bullroarer that school kids in Australia use. It is a ruler with a piece of rope wrapped around it." Ancient Greece In Ancient Greece, bullroarers were especially used in the ceremonies of the cult of Cybele. A bullroarer was known as a rhombos (literally meaning "whirling" or "rumbling"), both to describe its sonic character and its typical shape, the rhombus. (Rhombos also sometimes referred to the rhoptron, a buzzing drum.) Great Britain and Ireland In Great Britain and Ireland, the bullroarer, under a number of different names and styles, is used chiefly for amusement, although formerly it may have been used for ceremonial purposes. In parts of Scotland it was known as a "thunder-spell" and was thought to protect against being struck by lightning. In the Elizabeth Goudge novel Gentian Hill (1949), set in Devon in the early 19th century, a bullroarer figures as a toy cherished by Sol, an elderly farm labourer, who uses it occasionally to express strong emotion; however, the sound it makes is perceived as being both eerie and unlucky by two other characters, who have an uneasy sense that ominous spirits of the air ("Them") are being invoked by its whirring whistle. Scandinavia Scandinavian Stone Age cultures used the bullroarer. In 1991, the archaeologists Hein B. Bjerck and Martinius Hauglid found a 6.4 cm-long piece of slate that turned out to be a 5,000-year-old bullroarer (called a brummer in Scandinavia). It was found in Tuv in northern Norway, a place that was inhabited in the Stone Age. Mali The Dogon use bullroarers to announce the beginning of ceremonies conducted during the Sigui festival, held every sixty years over a seven-year period. The sound has been identified as the voice of an ancestor from whom all Dogon are descended. Māori culture (New Zealand) The pūrerehua is a traditional Māori bullroarer. Its name comes from the Māori word for moth. Made from wood, stone or bone and attached to a long string, the instruments were traditionally used for healing or making rain. Native North American Almost all the native tribes in North America used bullroarers in religious and healing ceremonies and as toys. There are many styles.
North Alaskan Inupiat bullroarers, known as imigluktaaq or imigluktaun, are described as toy noise-makers of bone or wood and braided sinew (wolf-scare). Banks Island Eskimos were still using bullroarers in 1963, when a 59-year-old woman named Susie, armed only with three seal hooks whirled as bullroarers and her own voice, scared off four polar bears. Aleut, Eskimo and Inuit peoples used bullroarers occasionally as children's toys or musical instruments, but preferred drums and rattles. Pomo The inland Pomo tribes of California used bullroarers as a central part of the xalimatoto or Thunder ceremony. Four male tribe members, accompanied by a drummer, would spin bullroarers made from cottonwood, imitating the sound of a thunderstorm. Native South American Shamans of the Amazon basin, for example in Tupi, Kamayurá and Bororo cultures, used bullroarers as musical instruments for rituals. In Tupian languages, the bullroarer is known as hori hori. See also Buzzer (whirligig) References Other sources Franciscan Fathers. An Ethnologic Dictionary of the Navaho Language. Saint Michaels, Arizona: Navajo Indian Mission (1910). Lang, A. "Bull-roarer", in J. Hastings, Encyclopedia of Religion and Ethics II, pp. 889–890 (1908–1927). Kroeber, A. L. "Ethnology of the Gros Ventre", Anthropological Papers of the American Museum of Natural History, pp. 145–283. New York: Published by Order of the Trustees (1908). Powell, J. W. (Director). Ninth Annual Report of the Bureau of Ethnology to the Secretary of the Smithsonian Institution 1887–88. Washington, D.C.: Government Printing Office (1892). Hart, Mickey. Planet Drum: A Celebration of Percussion and Rhythm, pp. 154–155. New York: HarperCollins (1991). Battaglia, R. "Sopravvivenze del rombo nelle Province Venete (con 7 illustrazioni)" ["Survivals of the bullroarer in the Venetian provinces (with 7 illustrations)"], Studi e Materiali di Storia delle Religioni 1 (1925), pp. 190–217. External links Rotating and whirling aerophones Australian Aboriginal bushcraft History of telecommunications Australian Aboriginal music Australian musical instruments Sacred musical instruments Anthropology of religion Magic (supernatural) Folklore Religious objects Objects believed to protect from evil Amulets Talismans Toy instruments and noisemakers
Bullroarer
[ "Physics" ]
2,093
[ "Magic items", "Religious objects", "Physical objects", "Matter" ]
347,322
https://en.wikipedia.org/wiki/Holotype
A holotype (Latin: holotypus) is a single physical example (or illustration) of an organism used when the species (or lower-ranked taxon) was formally described. It is either the single such physical example (or illustration) or one of several examples, but explicitly designated as the holotype. Under the International Code of Zoological Nomenclature (ICZN), a holotype is one of several kinds of name-bearing types. In the International Code of Nomenclature for algae, fungi, and plants (ICN) and the ICZN, the definitions of types are similar in intent but not identical in terminology or underlying concept. For example, the holotype for the butterfly Plebejus idas longinus is a preserved specimen of that subspecies, held by the Museum of Comparative Zoology at Harvard University. In botany and mycology, an isotype is a duplicate of the holotype, generally pieces from the same individual plant or samples from the same genetic individual. A holotype is not necessarily "typical" of that taxon, although ideally it is. Sometimes just a fragment of an organism is the holotype, particularly in the case of a fossil. For example, the holotype of Pelorosaurus humerocristatus (Duriatitan), a large herbivorous dinosaur from the early Cretaceous period, is a fossil leg bone stored at the Natural History Museum in London. Even if a better specimen is subsequently found, the holotype is not superseded. Replacements for holotypes Under the ICN, an additional and clarifying type may be designated an epitype under article 9.8, where the original material is demonstrably ambiguous or insufficient. A conserved type (ICN article 14.3) is sometimes used to correct a problem with a name which has been misapplied; this specimen replaces the original holotype. In the absence of a holotype, another type may be selected out of a range of different kinds of type, depending on the case: a lectotype or a neotype. For example, in both the ICN and the ICZN a neotype is a type that was later appointed in the absence of the original holotype. Additionally, under the ICZN the commission is empowered to replace a holotype with a neotype when the holotype turns out to lack important diagnostic features needed to distinguish the species from its close relatives. For example, the crocodile-like archosaurian reptile Parasuchus hislopi Lydekker, 1885 was described based on a premaxillary rostrum (part of the snout), but this is no longer sufficient to distinguish Parasuchus from its close relatives. This made the name Parasuchus hislopi a nomen dubium. Indian-American paleontologist Sankar Chatterjee proposed that a new type specimen, a complete skeleton, be designated. The International Commission on Zoological Nomenclature considered the case and agreed to replace the original type specimen with the proposed neotype. The procedures for the designation of a new type specimen when the original is lost come into play for some recent, high-profile species descriptions in which the specimen designated as the holotype was a living individual that was allowed to remain in the wild (e.g. a new species of capuchin monkey, genus Cebus, the bee fly species Marleyimyia xylocopae, or the Arunachal macaque Macaca munzala). In such a case, there is no actual type specimen available for study, and the possibility exists that, should there be any perceived ambiguity in the identity of the species, subsequent authors can invoke various clauses in the ICZN Code that allow for the designation of a neotype.
Article 75.3.7 of the ICZN requires that the designation of a neotype must be accompanied by "a statement that the neotype is, or immediately upon publication has become, the property of a recognized scientific or educational institution, cited by name, that maintains a research collection, with proper facilities for preserving name-bearing types, and that makes them accessible for study", but there is no such requirement for a holotype. See also Allotype (zoology) Genetypes—genetic sequence data from type specimens Paratype Type (biology) Type species References External links BOA Photographs of type specimens of Neotropical Rhopalocera. Zoological nomenclature Botanical nomenclature
Holotype
[ "Biology" ]
916
[ "Botanical nomenclature", "Zoological nomenclature", "Botanical terminology", "Biological nomenclature" ]
347,352
https://en.wikipedia.org/wiki/Muir%20Woods%20National%20Monument
Muir Woods National Monument is a United States National Monument managed by the National Park Service and named after naturalist John Muir. It is located on Mount Tamalpais near the Pacific coast in southwestern Marin County, California. The Monument is part of the Golden Gate National Recreation Area and lies north of San Francisco. It protects an old-growth coast redwood (Sequoia sempervirens) forest, one of a few such stands remaining in the San Francisco Bay Area. Geography Ecosystem The Muir Woods National Monument is an old-growth coastal redwood forest. Due to its proximity to the Pacific Ocean, the forest is regularly shrouded in a coastal marine layer fog, contributing to a wet environment that encourages vigorous plant growth. The fog is also vital for the growth of the redwoods, as they use moisture from the fog during drought seasons, particularly during dry summers. Climate The monument remains cool and moist year round, with daytime temperatures averaging between 40 and 70 degrees Fahrenheit (4 and 21 °C). Rainfall is heavy during the winter, and summers are almost completely dry with the exception of fog drip caused by the fog passing through the trees. Annual precipitation in the park ranges from 39.4 inches (1,000 mm) in the lower valley to 47.2 inches (1,200 mm) higher up on the mountain slopes. Soils and bedrock The redwoods grow on brown humus-rich loam which may be gravelly, stony or somewhat sandy. This soil has been assigned to the Centissima series, which is always found on sloping ground. It is well drained, moderately deep, and slightly to moderately acidic. It has developed from a mélange in the Franciscan Formation. More open areas of the park have shallow gravelly loam of the Barnabe series, or deep hard loam of the Cronkhite series. History One hundred fifty million years ago, ancestors of redwood and sequoia trees grew throughout the United States. Today, the Sequoia sempervirens can be found only in a narrow, cool coastal belt from Monterey County, California, in the south to Oregon in the north. Before the logging industry came to California, there were an estimated 2 million acres (8,000 km2) of old growth forest containing redwoods growing in a narrow strip along the coast. By the early 20th century, most of these forests had been cut down. Just north of the San Francisco Bay, one valley named Redwood Canyon remained uncut, mainly due to its relative inaccessibility. This was noticed by William Kent, a rising California politician who would soon be elected to the U.S. Congress. He and his wife, Elizabeth Thacher Kent, purchased land from the Tamalpais Land and Water Company for $45,000 in 1905 with the goal of protecting the redwoods and the mountain above them. The deal was facilitated by banker Lovell White and his activist wife, Laura Lyon White. In 1907, a water company in nearby Sausalito planned to dam Redwood Creek, thereby flooding the valley. When Kent objected to the plan, the water company threatened to use eminent domain and took him to court to attempt to force the project to move ahead. Kent sidestepped the water company's plot by donating the redwood forest to the federal government, thus bypassing the local courts. Muir Woods became a national monument on January 9, 1908, before the National Park Service existed, when President Theodore Roosevelt proclaimed it under the Antiquities Act. Before the Kent family bought it, the area had been called "Redwood Canyon". 
The family bought the area to protect and preserve it and worked to get President Roosevelt to declare it a monument. In legislation written to protect Muir Woods, it was described as "of extraordinary scientific interest and importance because of the primeval character of the forest in which it is located, and of the character, age and size of the trees". Once declared a national monument, Muir Woods was immediately protected and placed under the care of the United States Government. The Antiquities Act was the first law of its kind to provide protection for natural resources. The original suggested name of the monument was the Kent Monument, but Kent insisted the monument be named after naturalist John Muir, whose environmental campaigns helped to establish the National Park system, and President Roosevelt agreed with this proposition. Kent and Muir had become friends over shared views of wilderness preservation, but Kent's later support for the flooding of Hetch Hetchy caused Muir to end their friendship. In December 1928, the Kent Memorial was erected at the Kent Tree in Fern Canyon. This tree—a Douglas fir, not a redwood—was said to be Kent's favorite. Due to its height and location on a slope, the tree leaned towards the valley for more than 100 years. Storms in the El Niño years of 1981 and 1982 caused the tree to tilt even more and took out the top of the tree. During the winter of 2002–03, many storms brought high winds to Muir Woods, causing the tree to lean so much that a fissure developed in January 2003. This fissure grew larger as the tree slowly leaned more and more, forcing the closure of some trails. On March 18, 2003, at around 8:28 pm, the tree fell, damaging several other trees nearby. The closed trails have since been reconfigured and reopened. In 1937, the Golden Gate Bridge was completed and park attendance tripled, reaching over 180,000. Muir Woods is one of the major tourist attractions of the San Francisco Bay Area, with 776,000 visitors in 2005. President Franklin Delano Roosevelt died on April 12, 1945, shortly before he was to have opened the United Nations Conference on International Organization, for which delegates from 50 countries met in San Francisco to draft and sign the United Nations Charter. On May 19, the delegates held a commemorative ceremony in tribute to his memory in Muir Woods' Cathedral Grove, where a dedication plaque was placed in his honor. The monument was listed on the National Register of Historic Places on January 9, 2008. Biology Flora The main attraction of Muir Woods is the coast redwood (Sequoia sempervirens). The trees are known for their height and are related to the giant sequoia of the Sierra Nevada. The tallest redwoods in Muir Woods fall short of the species' maximum height. The trees grow from a seed no bigger than that of a tomato. Most of the redwoods in the monument are between 500 and 800 years old. The oldest is at least 1,200 years old. Other tree species grow in the understory of the redwood groves. Three of the most common are the California bay laurel, the bigleaf maple and the tanoak. Each of these species has developed a unique adaptation to the low level of dappled sunlight that reaches them through the redwoods overhead. The California bay laurel has a strong root system that allows the tree to lean towards openings in the canopy. The bigleaf maple, true to its name, has developed the largest leaf of any maple species. These large leaves allow it to capture more of the forest's dim light. 
The tanoak has a unique internal leaf structure that enables it to make effective use of the light that filters through the canopy. Fish Redwood Creek provides a critical spawning and rearing habitat for coho or silver salmon (Oncorhynchus kisutch) and steelhead (Oncorhynchus mykiss). Steelhead are listed as a threatened species (2011) in the Central California Coast distinct population segment. Coho salmon are listed as endangered in their evolutionarily significant unit (2011). The creek is near the southernmost limit of coho habitat and the fish have never been stocked, so they have distinctive DNA. The Redwood Creek salmon belong to the Central California Coast coho, which were listed as a federally threatened species in October 1996 and reclassified as federally endangered in June 2005. Coho migrate from the ocean back to freshwater for a single chance at reproduction, generally after two years in the ocean. The spawning migrations begin after heavy late fall or winter rains breach the sandbar at Muir Beach, allowing the fish to move upstream (usually in December and January). No salmon were seen in the 2007–2008 winter run, nor the 2008–2009 winter run. Evidence points to exhaustion of smolts oversummering in the creek, due to a loss of large woody debris and deep pools where young salmon can rest. Starting in 2009, the National Park Service began restoring Muir Beach to create a functional, self-sustaining ecosystem and improve visitor access. The intervention was almost too late, since the coho has only a three-year life span. But as of January 2010, for the first time in three years, an estimated 45 coho swam up Redwood Creek to spawn, creating 23 redds, or clusters of eggs. In 2011, 11 live adult coho and 1 coho carcass were observed, along with three redds, a modest increase over the 2007–2008 spawning season. Statewide, the coho population is at 1% of its levels in the 1940s, and the fish have vanished from 90% of the streams they formerly visited. The Watershed Alliance of Marin reported that no salmon returned to spawn in 2014, prompting concerns that the fish may now be extirpated from the creek. Birds Muir Woods is home to over 50 species of birds. This relatively low number is due to the lack of insects: the tannin in the trees repels insects, and the volume of flowers and fruits produced by plants below the canopy is limited by the shade of the redwoods. It is occasionally possible to see northern spotted owls or pileated woodpeckers in the forest. While decreasing in numbers elsewhere, the spotted owls appear to be thriving in the monument and other evergreen forests in the area. A National Park Service project monitoring the owls is ongoing within the monument. The project has found that adult owls are finding mates and raising young to adulthood, and that the young are having new broods of their own. Mammals The monument is home to a variety of mammals ranging in size from the small, four-inch-long American shrew mole to the much larger black-tailed subspecies of mule deer, Odocoileus hemionus columbianus. The majority of the monument's mammals are either nocturnal or are burrowing animals that live underground or in the forest floor's dense plant litter. Most commonly seen are Sonoma chipmunks and western gray squirrels. Bears historically roamed the area but were largely driven out by habitat destruction. In 2003 a male black bear was spotted wandering in various areas of Marin County, including Muir Woods. 
There are 11 species of bats that call the monument home, often using hollows burned into the redwoods by past fires as maternity colonies. In November 2010, sea otters (Enhydra lutris) were spotted swimming in the new stream channel constructed in the lagoon area of Redwood Creek. Recreation Muir Woods, part of the Golden Gate National Recreation Area, is a park which caters to pedestrians, as vehicle parking is allowed only at the very entrance. Hiking trails vary in difficulty and distance. Picnicking, camping and pets are not permitted. As of 2015, the park sees up to 6,000 visitors per day during peak times (April to October, Thanksgiving weekend, and Christmas through New Year's), more than 80% of whom arrive by car, and most of the rest by tour bus or shuttle bus. Currently, parking is extremely limited and lots often fill early in the day. The county and the National Park Service introduced a reservation system in early 2018 which restricts the number of vehicles allowed to enter and park in Muir Woods every day. Residents of neighboring Mill Valley had protested against earlier plans to set up an additional parking lot, and together with a group named "Mount Tam Task Force" sued to prevent the building of a shuttle bus station. Facilities Parking and shuttle Reservations, made in advance, have been required for all vehicles and shuttle riders since 2018; a parking reservation is $8.50 per vehicle, while a spot on the shuttle is $3.25 per person. Marin Transit operates a shuttle on all weekends and holidays and during select peak weekdays, providing service to Muir Woods from Sausalito, Marin City, or Mill Valley (Route 66); the National Park Service recommends that visitors use the shuttle when it is operating to avoid difficulties in finding parking. In addition, an entrance fee of $15.00 per person is charged to gain entry to Muir Woods. The shuttle service and park are open every day of the year, including holidays. The park opens at 8:00 am and closes at sunset. Lodging and camping There are no camping or lodging facilities in Muir Woods. The monument is a day-use area only. There are camping facilities in the adjacent Mount Tamalpais State Park. Comfort facilities Restrooms are located in the entrance plaza and cafe. The Muir Woods Trading Company cafe and gift shop offers deli food items and souvenirs; the cafe also has a permanent display of historic photographs. The main trail (paved and boardwalk) through Muir Woods is a loop. A loop from the Visitor Center, through Founders Grove, to Bridge 2 and back is ADA accessible. Interpretive facilities The Visitor Center, located in the entrance plaza, features permanent and changing exhibits on redwood ecology and conservation, as well as a store selling books and gift items. Activities Hiking and biking The paved/boardwalk main trail begins at the entrance plaza and travels into the old growth redwood forest alongside Redwood Creek. Other unpaved walking trails extend from the main trail to connect with Mt. Tamalpais State Park trails outside of the monument boundaries. Bicycles are only allowed on designated fire roads. Athletic events The annual Dipsea Race, a footrace which goes between Mill Valley and Stinson Beach, passes through Muir Woods on the second Sunday in June. The Double Dipsea, held later in June, and the Quad Dipsea, in November, follow the same course. 
Ranger-led activities Rangers and volunteers present 15-minute interpretive talks and guided one-hour tours when staffing permits. Program topics include redwood ecology and conservation, the impact of climate change, and the history of Muir Woods. Longer hikes or other special programs are offered several times per month and require a reservation. Weddings and special events Weddings, commercial filming, and special events are allowed in the monument only with a proper permit. Impact of tourism Positive In 2018, more than 17.5 million people visited Golden Gate National Recreation Area, Muir Woods National Monument, and Fort Point National Historic Site, and spent $1.2 billion in communities near the parks. The protections given to these areas by the federal government helped to establish them as natural tourist destinations, creating attractions that brought positive externalities in the form of increased business to the surrounding communities. According to the National Park Service, the spending and cash flow brought to the area through these visitors created 12,658 new local jobs and had a net benefit of $1.6 billion in additional revenue. A peer-reviewed visitor spending analysis was conducted by economists Catherine Thomas and Egan Cornachione of the U.S. Geological Survey and Lynne Koontz of the National Park Service. The report shows $20.2 billion of direct spending by more than 318 million park visitors in communities within 60 miles of a national park. This spending supported 329,000 jobs nationally; 268,000 of those jobs are found in these gateway communities. The cumulative benefit to the U.S. economy was $40.1 billion. Negative The popularity of Muir Woods as a tourist destination has created a great deal of congestion and delay on the two-lane California State Highway 1. As a result, many residents living near Muir Woods who are affected by the increased traffic on the route to the monument have voiced concerns to the National Park Service. In response, policies were designed to create a parking reservation system, detailed road-shoulder parking limits, and enhanced parking enforcement in order to combat the negative externalities caused by traffic to Muir Woods. However, according to the Mount Tam Task Force, a group created to address these traffic concerns, the policies have proved ineffective. The introduction of a website where all parking and shuttle reservations can be made has helped to reduce congestion to an extent. Despite visitor numbers trending downward at Muir Woods, parking and congestion remain a problem for tourists and locals alike. Surrounding ecosystems According to the National Parks Conservation Association, protecting the wildlife and environment of Muir Woods, which receives certain protections under the Antiquities Act, requires caring for the ecosystems of surrounding areas outside park boundaries that affect the wildlife and ecosystems within the park. As a result, the park is involved in restoration and conservation efforts in Redwood Creek at the Banducci Flower Farm site, which is managed by the Golden Gate National Recreation Area, and at Big Lagoon, which is outside the monument at Muir Beach, to improve ecosystem health and salmonid habitat. These efforts typically require cooperation among the National Park Service; federal, state, and local governments; and private landowners. 
The restoration efforts in surrounding areas outside of Muir Woods have helped to protect and restore the habitats of wildlife and fish such as the coho salmon and the northern spotted owl. The National Park Service and the Golden Gate National Parks Conservancy have worked together to restore the last half mile of Redwood Creek before it enters the Pacific Ocean. Before restoration efforts, the mouth of Redwood Creek functioned poorly in conveying water and sediment from a nine-square-mile watershed to the ocean. The National Park Service claims that for over 100 years, agriculture, logging, and road-building increased the erosion and degradation of the creek. As a result, local species of coho salmon and steelhead trout were threatened. Moreover, due to the poor state of Redwood Creek, even moderate storms would cause flooding, inundating residents' properties and leaving local roads impassable. Although Redwood Creek lies technically within the protected lands of Muir Woods, its poor state was having negative effects on wildlife and on the ability to travel to Muir Woods. The efforts to restore Redwood Creek illustrate how government protection of Muir Woods leads to positive benefits for the surrounding area. Music American keyboardist and composer George Duke composed the Muir Woods Suite in 1993. The Suite, a major orchestral piece, was premiered and recorded live at the Montreux Jazz Festival. Japanese composer Toru Takemitsu composed a three-movement solo classical guitar piece called "In the Woods" in 1995. The third movement of the piece is called "Muir Woods" and was inspired by this forest. In fiction Characters played by James Stewart and Kim Novak visit the Muir Woods National Monument in Alfred Hitchcock's 1958 film Vertigo; however, the scene was actually shot in Big Basin Redwoods State Park. The monument was a setting in Rise of the Planet of the Apes (2011), Dawn of the Planet of the Apes (2014) and the first act of War for the Planet of the Apes (2017), though all three films were in fact filmed in British Columbia. Jack Kerouac discusses hiking through Muir Woods in his 1958 novel The Dharma Bums. It appears as "Muirahara Woods" in the eponymous episode of Big Hero 6: The Series, which is set in an alternate history in which San Francisco came under strong Japanese influence after the 1906 earthquake. See also List of national monuments of the United States References External links NPS page for Muir Woods (includes "Plan your visit") Web Soil Survey (select Marin County) Muir Woods Guide National Park Service national monuments in California Mount Tamalpais Parks in Marin County, California Coast redwood groves Forests of California Golden Gate National Recreation Area History of Marin County, California National Register of Historic Places in Marin County, California Old-growth forests 1908 establishments in California Protected areas established in 1908
Muir Woods National Monument
[ "Biology" ]
4,064
[ "Old-growth forests", "Ecosystems" ]
347,428
https://en.wikipedia.org/wiki/Elsbett%20engine
The Elsbett, or Elko (short for "Elsbett Konstruktion"), engine is an 89-horsepower (66 kW) direct-injection diesel engine invented by Ludwig Elsbett. It is designed to run on pure plant oil (PPO). Elsbett AG, the current manufacturer, is based in Thalmässing, Bavaria. The design limits the loss of energy as heat through a variety of technologies: the fuel charge is injected in such a manner as to "blend perfectly with the air" and combust within a central core of hot air, without contacting the chamber walls. The engine also uses no water cooling; instead, oil serves as the sole coolant. References External links Elsbett-museum website Piston engines Diesel engines Flexible-fuel vehicles
Elsbett engine
[ "Technology" ]
177
[ "Piston engines", "Engines" ]
347,473
https://en.wikipedia.org/wiki/Imino%20acid
In organic chemistry, an imino acid is any molecule that contains both imine (>C=NH) and carboxyl (-C(=O)-OH) functional groups. Imino acids are structurally related to amino acids, which have an amino group instead of an imine group (a difference of a single versus a double bond between nitrogen and carbon). The simplest example is dehydroglycine. D-Amino acid oxidase is an enzyme that is able to convert amino acids into imino acids. The direct biosynthetic precursor to the amino acid proline is also an imino acid, (S)-Δ1-pyrroline-5-carboxylate (P5C). Related terminology Secondary amino acids, i.e. amino acids containing a secondary amine group, are sometimes called imino acids, though this usage is obsolescent. The only proteinogenic amino acid of this type is proline, although the related non-proteinogenic amino acids hydroxyproline and pipecolic acid have often been included in studies of this class of compounds. The term imino acid is also an obsolete term for imidic acids, structures containing the -C(=NH)-OH group, and should not be used for them. References External links Carboxylic acids
Imino acid
[ "Chemistry" ]
266
[ "Carboxylic acids", "Functional groups" ]
347,560
https://en.wikipedia.org/wiki/Inheritance%20%28genetic%20algorithm%29
In genetic algorithms, inheritance is the ability of modeled objects to mate, mutate (similar to biological mutation), and propagate their problem-solving genes to the next generation, in order to produce an evolved solution to a particular problem. The selection of objects that will be inherited from in each successive generation is determined by a fitness function, which varies depending upon the problem being addressed. The traits of these objects are passed on through chromosomes by a means similar to biological reproduction. These chromosomes are generally represented by a series of genes, which in turn are usually represented using binary numbers. This propagation of traits between generations is similar to the inheritance of traits between generations of biological organisms. This process can also be viewed as a form of reinforcement learning, because the evolution of the objects is driven by the passing of traits from successful objects; this acts as a reward for their success, thereby promoting beneficial traits. Process Once a new generation is ready to be created, all of the individuals that have been successful and have been chosen for reproduction are randomly paired together. Then the traits of these individuals are passed on through a combination of crossover and mutation. This process follows these basic steps: Pair off successful objects for mating. Randomly determine a crossover point for each pair. Swap the genes after the crossover point in each pair. Randomly determine whether any genes are mutated in the child objects. After following these steps, two child objects will be produced for every pair of parent objects used. Then, after determining the success of the objects in the new generation, this process can be repeated using whichever new objects were most successful. This is usually repeated until either a desired generation is reached or an object that meets a minimum desired result from the fitness function is found. While crossover and mutation are the common genetic operators used in inheritance, there are also other operators such as regrouping and colonization-extinction. Example Assume these two strings of bits represent the traits being passed on by two parent objects: Object 1: 1100011010110001 Object 2: 1001100110011001 Now, consider that the crossover point is randomly positioned after the fifth bit: Object 1: 11000 | 11010110001 Object 2: 10011 | 00110011001 During crossover, the two objects swap all of the bits after the crossover point, leading to: Object 1: 11000 | 00110011001 Object 2: 10011 | 11010110001 Finally, mutation is simulated by randomly flipping zero or more bits in each child object. Assuming the tenth bit of object 1 is mutated, and the second and seventh bits are mutated for object 2, the final children produced by this inheritance would be: Object 1: 1100000111011001 Object 2: 1101110010110001 See also Artificial intelligence Bioinformatics Speciation (genetic algorithm) References External links BoxCar 2D An interactive example of the use of a genetic algorithm to construct 2-dimensional cars. Genetic algorithms
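To make the crossover and mutation walkthrough above concrete, the following is a minimal Python sketch of single-point crossover with per-bit mutation. It is an illustrative toy, not code from any particular genetic algorithm library; the function names and the 1% mutation rate are hypothetical choices.

import random

def crossover(parent1: str, parent2: str, point: int) -> tuple[str, str]:
    # Swap all bits after the crossover point, as in the example above.
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

def mutate(bits: str, rate: float = 0.01) -> str:
    # Flip each bit independently with probability `rate` (hypothetical rate).
    return "".join(b if random.random() > rate else str(1 - int(b)) for b in bits)

# Reproducing the article's example, with the crossover point after the fifth bit.
# These asserts check the crossover output before mutation is applied.
p1 = "1100011010110001"
p2 = "1001100110011001"
c1, c2 = crossover(p1, p2, 5)
assert c1 == "1100000110011001"
assert c2 == "1001111010110001"

Repeated over a whole population, with fitness-based selection between rounds, these two operators implement the inheritance process described above.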
Inheritance (genetic algorithm)
[ "Biology" ]
607
[ "Genetics techniques", "Genetic algorithms" ]
347,574
https://en.wikipedia.org/wiki/Jonathan%20Zenneck
Jonathan Adolf Wilhelm Zenneck (15 April 1871 – 8 April 1959) was a German physicist and electrical engineer. Zenneck improved the cathode-ray tube by adding a second deflection structure at right angles to the first, which allowed two-dimensional viewing of a waveform. This two-dimensional display is fundamental to the oscilloscope. Early years Zenneck was born in Ruppertshofen, Württemberg. In 1885, Zenneck entered the Evangelical-Theological Seminary in Maulbronn. In 1887, while in a Blaubeuren seminary, Zenneck learned Latin, Greek, French, and Hebrew. In 1889, Zenneck enrolled in the University of Tübingen. At the Tübingen Seminary, he studied mathematics and natural sciences. In 1894, Zenneck took the state examination in mathematics and natural sciences and the examination for his doctor's degree. His dissertation, supervised by Theodor Eimer, was on grass snake embryos. In 1894, Zenneck conducted zoological research at the Natural History Museum, London. Between 1894 and 1895, he served in the military. Middle years In 1895, Zenneck left zoology and turned to the new field of radio science. He became an assistant to Ferdinand Braun and a lecturer at the "Physikalisches Institut" in Strasbourg, Alsace. Nikola Tesla's lectures introduced him to the wireless sciences. In 1899, Zenneck started propagation studies of wireless telegraphy, first over land, but then became more interested in the larger ranges that were reached over sea. In 1900 he started ship-to-coast experiments in the North Sea near Cuxhaven, Germany, and in 1902 he conducted tests of directional antennas. In 1905, Zenneck left Strasbourg when he was appointed assistant professor at the Danzig Technische Hochschule, and in 1906 he became professor of experimental physics at the Braunschweig Technische Hochschule. Also in 1906, Zenneck wrote "Electromagnetic Oscillations and Wireless Telegraphy", the then-standard textbook on the subject. In 1909, he joined Badische Anilin und Sodafabrik in Ludwigshafen to experiment with electrical discharges in air to produce bound nitrogen as fertilizer. In 1913, he became director of the newly created Physics Institute of the Technische Hochschule München. Zenneck analyzed solutions to Maxwell's equations that are localized around an interface between a conducting medium and a non-conducting medium. In these solutions, the electric field strength decays exponentially in each medium as distance from the interface increases. These waves are sometimes called Zenneck waves. Zenneck analyzed plane wave solutions having this property; he also analyzed solutions with cylindrical symmetry having it. Later years Around the start of World War I, Zenneck served on the front lines as a captain in the German navy. However, in 1914, the German government sent him and Karl Ferdinand Braun to the United States as technical advisors in a patent case involving Telefunken. The US Marconi Company sued Telefunken for patent infringement, a case spurred by the British government in an attempt to shut down transatlantic wireless telegraphy between the US and Germany. The case stalled and eventually became moot when the United States entered the war and declared Zenneck a prisoner of war. He was released only in 1920, when he could finally take over the professorship of experimental physics at the Technische Hochschule München. In that time he resumed propagation studies, now with shortwaves, and was the first in Germany to study the ionosphere with vertical sounding, at his station at Kochel, Bavaria. 
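A schematic of the planar surface-wave solution described above, written in modern notation, may help orient the reader. This is a generic sketch rather than a formula taken from Zenneck's 1907 paper; the symbols k_x, kappa_1 and kappa_2 stand in for the complex propagation constants.

% Schematic Zenneck surface wave on the interface z = 0,
% propagating along x and bound to the surface in both media:
E(x,z,t) \propto
\begin{cases}
e^{-\kappa_1 z}\, e^{i(k_x x - \omega t)}, & z > 0 \ \text{(non-conducting medium)} \\
e^{+\kappa_2 z}\, e^{i(k_x x - \omega t)}, & z < 0 \ \text{(conducting medium)}
\end{cases}
\qquad \operatorname{Re}\kappa_1,\ \operatorname{Re}\kappa_2 > 0

Because the real parts of kappa_1 and kappa_2 are positive, the field strength falls off exponentially with distance from the interface in each medium, as described above; a complex k_x additionally attenuates the wave as it runs along the surface.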
From the 1930s, Zenneck directed the Deutsches Museum in Munich, and rebuilt it after World War II. Zenneck was awarded the 1928 IRE Medal of Honor for his achievements in basic research on radio technology, and for fostering academic and technical talent he received the Siemens Ring in 1956. See also Kugelbake Spread spectrum Surface plasmon Ionosonde Patents Bibliography Articles Jonathan Zenneck, "Über die Fortpflanzung ebener elektromagnetischer Wellen längs einer ebenen Leiterfläche und ihre Beziehung zur drahtlosen Telegraphie" ("On the propagation of plane electromagnetic waves along a planar conductor surface and its relation to wireless telegraphy"), Ann. Physik [4] 23, 846 (1907). Books Electromagnetic oscillations and wireless telegraphy (Gr., Elektromagnetische Schwingungen und drahtlose Telegraphie). F. Enke, 1905. High-frequency engineering and electroacoustics (Gr., Hochfrequenztechnik und Elektroakustik). Volume 1. Academic publishing company Geest & Portig, 1908. Wireless telegraphy. McGraw-Hill Book Company, inc., 1915. References Citations General information Jonathan Zenneck (1871–1959) Physik Departments an der Technischen Universität München, "Booklet". Chapter 11. History (PDF) External links 1871 births 1959 deaths German electrical engineers Academic staff of the Technical University of Munich Presidents of the Technical University of Munich IEEE Medal of Honor recipients Werner von Siemens Ring laureates Commanders Crosses of the Order of Merit of the Federal Republic of Germany 19th-century German engineers 20th-century German engineers 20th-century German physicists Telecommunications engineers University of Tübingen alumni Engineers from Baden-Württemberg People from Ostalbkreis Microwave engineers
Jonathan Zenneck
[ "Engineering" ]
1,115
[ "Telecommunications engineering", "Telecommunications engineers" ]
347,694
https://en.wikipedia.org/wiki/List%20of%20algebraic%20topology%20topics
This is a list of algebraic topology topics. Homology (mathematics) Simplex Simplicial complex Polytope Triangulation Barycentric subdivision Simplicial approximation theorem Abstract simplicial complex Simplicial set Simplicial category Chain (algebraic topology) Betti number Euler characteristic Genus Riemann–Hurwitz formula Singular homology Cellular homology Relative homology Mayer–Vietoris sequence Excision theorem Universal coefficient theorem Cohomology List of cohomology theories Cocycle class Cup product Cohomology ring De Rham cohomology Čech cohomology Alexander–Spanier cohomology Intersection cohomology Lusternik–Schnirelmann category Poincaré duality Fundamental class Applications Jordan curve theorem Brouwer fixed point theorem Invariance of domain Lefschetz fixed-point theorem Hairy ball theorem Degree of a continuous mapping Borsuk–Ulam theorem Ham sandwich theorem Homology sphere Homotopy theory Homotopy Path (topology) Fundamental group Homotopy group Seifert–van Kampen theorem Pointed space Winding number Simply connected Universal cover Monodromy Homotopy lifting property Mapping cylinder Mapping cone (topology) Wedge sum Smash product Adjunction space Cohomotopy Cohomotopy group Brown's representability theorem Eilenberg–MacLane space Fibre bundle Möbius strip Line bundle Canonical line bundle Vector bundle Associated bundle Fibration Hopf bundle Classifying space Cofibration Homotopy groups of spheres Plus construction Whitehead theorem Weak equivalence Hurewicz theorem H-space Further developments Künneth theorem De Rham cohomology Obstruction theory Characteristic class Chern class Chern–Simons form Pontryagin class Pontryagin number Stiefel–Whitney class Poincaré conjecture Cohomology operation Steenrod algebra Bott periodicity theorem K-theory Topological K-theory Adams operation Algebraic K-theory Whitehead torsion Twisted K-theory Cobordism Thom space Suspension functor Stable homotopy theory Spectrum (homotopy theory) Morava K-theory Hodge conjecture Weil conjectures Directed algebraic topology Applied topology Example: DE-9IM Homological algebra Chain complex Commutative diagram Exact sequence Five lemma Short five lemma Snake lemma Splitting lemma Extension problem Spectral sequence Abelian category Group cohomology Sheaf Sheaf cohomology Grothendieck topology Derived category History Combinatorial topology See also Glossary of algebraic topology topology glossary List of topology topics List of general topology topics List of geometric topology topics Publications in topology Topological property Mathematics-related lists Outlines of mathematics and logic Outlines
List of algebraic topology topics
[ "Mathematics" ]
533
[ "Fields of abstract algebra", "Topology", "nan", "Algebraic topology" ]
347,726
https://en.wikipedia.org/wiki/Schematron
Schematron is a rule-based validation language for making assertions about the presence or absence of patterns in XML trees. It is a structural schema language expressed in XML, using a small number of elements and the XPath language. In many implementations, the Schematron XML is processed into XSLT code for deployment anywhere that XSLT can be used. Schematron is capable of expressing constraints in ways that other XML schema languages like XML Schema and DTD cannot. For example, it can require that the content of an element be controlled by one of its siblings. It can also request or require that the root element, regardless of what element that is, have specific attributes. Schematron can also specify required relationships between multiple XML files. Constraints and content rules may be associated with "plain-English" (or any language) validation error messages, allowing translation of numeric Schematron error codes into meaningful user error messages; users of Schematron define all the error messages themselves. The current ISO recommendation is Information technology, Document Schema Definition Languages (DSDL), Part 3: Rule-based validation, Schematron (ISO/IEC 19757-3:2020). Uses Constraints are specified in Schematron using an XPath-based language that can be deployed as XSLT code, making it practical for applications such as the following: Adjunct to structural validation By testing for co-occurrence constraints, non-regular constraints, and inter-document constraints, Schematron can extend the validations that can be expressed in languages such as DTDs, RELAX NG or XML Schema. Lightweight business rules engine Schematron is not a comprehensive Rete-based rules engine, but it can be used to express rules about complex structures within an XML document. XML editor syntax highlighting rules Some XML editors use Schematron rules to conditionally highlight XML files for errors, though not all XML editors support Schematron. Versions Schematron was invented by Rick Jelliffe while at Academia Sinica Computing Centre, Taiwan. He described Schematron as "a feather duster to reach the parts other schema languages cannot reach". The most common versions of Schematron are: Schematron 1.0 (1999) Schematron 1.3 (2000): This version used the namespace http://xml.ascc.net/schematron/. It was supported by an XSLT implementation with a plug-in architecture. Schematron 1.5 (2001): This version was widely implemented and can still be found. Schematron 1.6 (2002): This version was the base of ISO Schematron and was obsoleted by it. ISO Schematron (2006): This version regularizes several features, and provides an XML output format, Schematron Validation Report Language (SVRL). It uses the new namespace http://purl.oclc.org/dsdl/schematron. ISO Schematron (2010) ISO Schematron (2016): This version added support for XSLT 2. ISO Schematron (2020): This version added support for XSLT 3. Schematron as an ISO Standard Schematron has been standardized by the ISO as Information technology, Document Schema Definition Languages (DSDL), Part 3: Rule-based validation, Schematron (ISO/IEC 19757-3:2020). This standard is currently not listed on the ISO Publicly Available Specifications list. Paper versions may be purchased from ISO or national standards bodies. Schemas that use ISO/IEC FDIS 19757-3 should use the following namespace: http://purl.oclc.org/dsdl/schematron Sample rule Schematron rules can be created using a standard XML editor or XForms application. 
The following is a sample schema: <schema xmlns="http://purl.oclc.org/dsdl/schematron"> <pattern> <title>Date rules</title> <rule context="Contract"> <assert test="ContractDate &lt; current-date()">ContractDate should be in the past because future contracts are not allowed.</assert> </rule> </pattern> </schema> This rule checks that the Contract element's date is before the current date; an assert fires its message when its test evaluates to false, so the test must hold for past dates. If the assertion fails, validation fails and the error message in the body of the assert element is returned to the user. (Note that the current-date() function requires an XPath 2.0 query binding.) Implementation Schematron schemas are suitable for use in XML Pipelines, thereby allowing workflow process designers to build and maintain rules using XML manipulation tools. The W3C's XProc pipelining language, for example, has native support for Schematron schema processing through its "validate-with-schematron" step. Since Schematron schemas can be transformed into XSLT stylesheets, these can themselves be used in XML Pipelines which support XSLT transformation. An Apache Ant task can be used to convert Schematron rules into XSLT files. There are also native Schematron implementations, such as QuiXSchematron, a Java implementation from Innovimax/INRIA that also supports streaming. See also XML Schema Language comparison - Comparison to other XML Schema languages. Service Modeling Language - Service Modeling Language uses Schematron. Document Schema Definition Languages References External links Academia Sinica Computing Centre's Schematron Home Page A book on Schematron (in German) Schematron online tutorial and reference Data modeling languages ISO/IEC standards XML XML-based programming languages XML-based standards
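As a concrete illustration of deploying a Schematron schema from ordinary code, the Python lxml library ships an ISO Schematron implementation that compiles a schema to XSLT internally. The following is a minimal sketch; the file name and the instance document are hypothetical, and because lxml's implementation is built on XSLT 1.0, a rule using the XPath 2.0 function current-date(), as in the sample above, would need a different processor.

from lxml import etree, isoschematron

# Load a Schematron rule file (hypothetical name).
schema_doc = etree.parse("contract-rules.sch")
schematron = isoschematron.Schematron(schema_doc, store_report=True)

# Validate a hypothetical instance document.
doc = etree.fromstring("<Contract><ContractDate>2031-01-01</ContractDate></Contract>")
if not schematron.validate(doc):
    # validation_report holds the SVRL output defined by ISO Schematron.
    print(schematron.validation_report)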
Schematron
[ "Technology" ]
1,177
[ "Computer standards", "XML-based standards" ]
347,832
https://en.wikipedia.org/wiki/Fifth%20Generation%20Computer%20Systems
The Fifth Generation Computer Systems (FGCS) project was a 10-year initiative launched in 1982 by Japan's Ministry of International Trade and Industry (MITI) to develop computers based on massively parallel computing and logic programming. The project aimed to create an "epoch-making computer" with supercomputer-like performance and to establish a platform for future advancements in artificial intelligence. Although FGCS was ahead of its time, its ambitious goals ultimately led to commercial failure. However, on a theoretical level, the project significantly contributed to the development of concurrent logic programming. The term "fifth generation" was chosen to emphasize the system's advanced nature. In the history of computing hardware, there had been four prior "generations" of computers: the first generation utilized vacuum tubes; the second, transistors and diodes; the third, integrated circuits; and the fourth, microprocessors. While earlier generations focused on increasing the number of logic elements within a single CPU, it was widely believed at the time that the fifth generation would achieve enhanced performance through the use of massive numbers of CPUs. Background From the late 1960s until the early 1970s, there was much talk about "generations" of computer hardware, then usually organized into three generations. First generation: Thermionic vacuum tubes. Mid-1940s. IBM pioneered the arrangement of vacuum tubes in pluggable modules. The IBM 650 was a first-generation computer. Second generation: Transistors. 1956. The era of miniaturization begins. Transistors are much smaller than vacuum tubes, draw less power, and generate less heat. Discrete transistors are soldered to circuit boards, with interconnections accomplished by stencil-screened conductive patterns on the reverse side. The IBM 7090 was a second-generation computer. Third generation: Integrated circuits (silicon chips containing multiple transistors). 1964. A pioneering example is the ACPX module used in the IBM 360/91, which, by stacking layers of silicon over a ceramic substrate, accommodated over 20 transistors per chip; the chips could be packed together onto a circuit board to achieve unprecedented logic densities. The IBM 360/91 was a hybrid second- and third-generation computer. Omitted from this taxonomy is the "zeroth-generation" computer based on metal gears (such as the IBM 407) or mechanical relays (such as the Mark I), and the post-third-generation computers based on Very Large Scale Integrated (VLSI) circuits. There was also a parallel set of generations for software: First generation: Machine language. Second generation: Low-level programming languages such as assembly language. Third generation: Structured high-level programming languages such as C, COBOL and FORTRAN. Fourth generation: "Non-procedural" high-level programming languages (such as object-oriented languages). Throughout these multiple generations up to the 1970s, Japan built computers following U.S. and British leads. In the mid-1970s, the Ministry of International Trade and Industry stopped following Western leads and started looking into the future of computing on a small scale. They asked the Japan Information Processing Development Center (JIPDEC) to indicate a number of future directions, and in 1979 offered a three-year contract to carry out more in-depth studies along with industry and academia. It was during this period that the term "fifth-generation computer" started to be used. 
Prior to the 1970s, MITI guidance had successes such as an improved steel industry, the creation of the oil supertanker, the automotive industry, consumer electronics, and computer memory. MITI decided that the future was going to be information technology. However, the Japanese language, particularly in its written form, presented and still presents obstacles for computers. As a result of these hurdles, MITI held a conference to seek assistance from experts. The primary fields for investigation from this initial project were: Inference computer technologies for knowledge processing Computer technologies to process large-scale data bases and knowledge bases High-performance workstations Distributed functional computer technologies Super-computers for scientific calculation Project launch The aim was to build parallel computers for artificial intelligence applications using concurrent logic programming. The project imagined an "epoch-making" computer with supercomputer-like performance running on top of large databases (as opposed to a traditional filesystem), using a logic programming language to define and access the data through massively parallel computing. They envisioned building a prototype machine with performance between 100M and 1G LIPS, where a LIPS is a logical inference per second. At the time, typical workstation machines were capable of about 100k LIPS. They proposed to build this machine over a ten-year period: 3 years for initial R&D, 4 years for building various subsystems, and a final 3 years to complete a working prototype system. In 1982 the government decided to go ahead with the project, and established the Institute for New Generation Computer Technology (ICOT) through joint investment with various Japanese computer companies. After the project ended, MITI would consider an investment in a new "sixth generation" project. Ehud Shapiro captured the rationale and motivations driving this project: "As part of Japan's effort to become a leader in the computer industry, the Institute for New Generation Computer Technology has launched a revolutionary ten-year plan for the development of large computer systems which will be applicable to knowledge information processing systems. These Fifth Generation computers will be built around the concepts of logic programming. In order to refute the accusation that Japan exploits knowledge from abroad without contributing any of its own, this project will stimulate original research and will make its results available to the international research community." Logic programming The target defined by the FGCS project was to develop "Knowledge Information Processing systems" (roughly meaning, applied artificial intelligence). The chosen tool to implement this goal was logic programming. The logic programming approach was characterized by Maarten Van Emden, one of its founders, as: The use of logic to express information in a computer. The use of logic to present problems to a computer. The use of logical inference to solve these problems. More technically, it can be summed up in two equations: Program = Set of axioms. Computation = Proof of a statement from axioms. The axioms typically used are universal axioms of a restricted form, called Horn clauses or definite clauses. The statement proved in a computation is an existential statement. The proof is constructive, and provides values for the existentially quantified variables: these values constitute the output of the computation. 
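To ground the two equations above, here is a small Python sketch of the "computation = proof" idea: a forward-chaining prover over propositional Horn clauses. It is a toy illustration of the paradigm, not FGCS code, and it omits the variables, unification, and concurrency that KL1 had to handle; the atom names are hypothetical.

# Each Horn clause is (head, [body atoms]); facts have an empty body.
program = [
    ("ancestor_ab", []),  # fact (axiom)
    ("ancestor_bc", []),  # fact (axiom)
    ("ancestor_ac", ["ancestor_ab", "ancestor_bc"]),  # rule instance
]

def prove(program, goal):
    """Forward-chain until no new atoms are derivable: computation as proof."""
    known = set()
    changed = True
    while changed:
        changed = False
        for head, body in program:
            if head not in known and all(atom in known for atom in body):
                known.add(head)
                changed = True
    return goal in known

print(prove(program, "ancestor_ac"))  # True: the goal follows from the axioms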
Logic programming was thought of as something that unified various gradients of computer science (software engineering, databases, computer architecture and artificial intelligence). It seemed that logic programming was a key missing connection between knowledge engineering and parallel computer architectures. Results After having influenced the consumer electronics field during the 1970s and the automotive world during the 1980s, the Japanese had developed a strong reputation. The launch of the FGCS project spread the belief that parallel computing was the future of all performance gains, producing a wave of apprehension in the computer field. Soon parallel projects were set up in the US as the Strategic Computing Initiative and the Microelectronics and Computer Technology Corporation (MCC), in the UK as Alvey, and in Europe as the European Strategic Program on Research in Information Technology (ESPRIT), as well as the European Computer-Industry Research Centre (ECRC) in Munich, a collaboration between ICL in Britain, Bull in France, and Siemens in Germany. The project ran from 1982 to 1994, spending a little less than ¥57 billion (about US$320 million) in total. After the FGCS project, MITI stopped funding large-scale computer research projects, and the research momentum developed by the FGCS project dissipated. However, MITI/ICOT embarked on a neural-net project in the 1990s, which some called the Sixth Generation Project, with a similar level of funding. Per-year spending was less than 1% of the entire R&D expenditure of the electronics and communications equipment industry. For example, the project's highest expenditure year was ¥7.2 billion, in 1991; by comparison, IBM alone spent US$1.5 billion (¥370 billion) in 1982, while the industry as a whole spent ¥2,150 billion in 1990. Concurrent logic programming In 1982, during a visit to ICOT, Ehud Shapiro invented Concurrent Prolog, a novel programming language that integrated logic programming and concurrent programming. Concurrent Prolog is a process-oriented language, which embodies dataflow synchronization and guarded-command indeterminacy as its basic control mechanisms. Shapiro described the language in a report marked as ICOT Technical Report 003, which presented a Concurrent Prolog interpreter written in Prolog. Shapiro's work on Concurrent Prolog inspired a change in the direction of the FGCS from focusing on a parallel implementation of Prolog to focusing on concurrent logic programming as the software foundation for the project. It also inspired the concurrent logic programming language Guarded Horn Clauses (GHC) by Ueda, which was the basis of KL1, the programming language that was finally designed and implemented by the FGCS project as its core programming language. The FGCS project and its findings contributed greatly to the development of the concurrent logic programming field. The project produced a new generation of promising Japanese researchers. Commercial failure Five running Parallel Inference Machines (PIM) were eventually produced: PIM/m, PIM/p, PIM/i, PIM/k, PIM/c. The project also produced applications to run on these systems, such as the parallel database management system Kappa, the legal reasoning system HELIC-II, and the automated theorem prover MGTP, as well as bioinformatics applications. The FGCS project did not meet with commercial success, for reasons similar to those that doomed the Lisp machine companies and Thinking Machines. 
The highly parallel computer architecture was eventually surpassed in speed by less specialized hardware (for example, Sun workstations and Intel x86 machines). A primary problem was the choice of concurrent logic programming as the bridge between the parallel computer architecture and the use of logic as a knowledge representation and problem solving language for AI applications. This never happened cleanly; a number of languages were developed, all with their own limitations. In particular, the committed-choice feature of concurrent constraint logic programming interfered with the logical semantics of the languages: the project found that the benefits of logic programming were largely negated by committed choice. Another problem was that existing CPU performance quickly overcame the barriers that experts anticipated in the 1980s, and the value of parallel computing dropped to the point where it was for some time used only in niche situations. Although a number of workstations of increasing capacity were designed and built over the project's lifespan, they generally found themselves soon outperformed by "off the shelf" units available commercially. The project also failed to incorporate outside innovations. During its lifespan, GUIs became mainstream in computers; the internet enabled locally stored databases to become distributed; and even simple research projects provided better real-world results in data mining. The FGCS workstations had no appeal in a market where general purpose systems could replace and outperform them. This parallels the Lisp machine market, where rule-based systems such as CLIPS could run on general-purpose computers, making expensive Lisp machines unnecessary. Ahead of its time In summary, the Fifth-Generation project was revolutionary, and accomplished some basic research that anticipated future research directions. Many papers and patents were published. MITI established a committee which assessed the performance of the FGCS project as having made major contributions in computing, in particular eliminating bottlenecks in parallel processing software and the realization of intelligent interactive processing based on large knowledge bases. However, the committee was strongly biased toward justifying the project, so this assessment overstates the actual results. Many of the themes seen in the Fifth-Generation project are now being re-interpreted in current technologies, as the hardware limitations foreseen in the 1980s were finally reached in the 2000s. When clock speeds of CPUs began to move into the 3–5 GHz range, CPU power dissipation and other problems became more important. The ability of industry to produce ever-faster single-CPU systems (linked to Moore's Law about the periodic doubling of transistor counts) began to be threatened. In the early 21st century, many flavors of parallel computing began to proliferate, including multi-core architectures at the low end and massively parallel processing at the high end. Ordinary consumer machines and game consoles began to have parallel processors like the Intel Core, AMD K10, and Cell. Graphics card companies like Nvidia and AMD began introducing large parallel systems like CUDA and OpenCL. These new technologies do not, however, cite FGCS research; it is not clear that FGCS was leveraged to facilitate these developments in any significant way, and no significant impact of FGCS on the computing industry has been demonstrated. 
External links FGCS Museum - contains a large archive of nearly all of the output of the FGCS project, including technical reports, technical memoranda, hardware specifications, and software. References 第五世代コンピュータ・プロジェクト 最終評価報告書 [Fifth Generation Computer Project Final Evaluation Report] (March 30, 1993) Classes of computers History of artificial intelligence MITI projects Parallel computing Research projects Supercomputing in Japan
Fifth Generation Computer Systems
[ "Technology" ]
2,765
[ "Classes of computers", "Computers", "Computer systems" ]
347,836
https://en.wikipedia.org/wiki/List%20of%20polynomial%20topics
This is a list of polynomial topics, by Wikipedia page. See also trigonometric polynomial, list of algebraic geometry topics. Terminology Degree: The maximum exponents among the monomials. Factor: An expression being multiplied. Linear factor: A factor of degree one. Coefficient: An expression multiplying one of the monomials of the polynomial. Root (or zero) of a polynomial: Given a polynomial p(x), the x values that satisfy p(x) = 0 are called roots (or zeroes) of the polynomial p. Graphing End behaviour – Concavity – Orientation – Tangency point – Inflection point – Point where concavity changes. Basics Polynomial Coefficient Monomial Polynomial long division Synthetic division Polynomial factorization Rational function Partial fraction Partial fraction decomposition over R Vieta's formulas Integer-valued polynomial Algebraic equation Factor theorem Polynomial remainder theorem Elementary abstract algebra See also Theory of equations below. Polynomial ring Greatest common divisior of two polynomials Symmetric function Homogeneous polynomial Polynomial SOS (sum of squares) Theory of equations Polynomial family Quadratic function Cubic function Quartic function Quintic function Sextic function Septic function Octic function Completing the square Abel–Ruffini theorem Bring radical Binomial theorem Blossom (functional) Root of a function nth root (radical) Surd Square root Methods of computing square roots Cube root Root of unity Constructible number Complex conjugate root theorem Algebraic element Horner scheme Rational root theorem Gauss's lemma (polynomial) Irreducible polynomial Eisenstein's criterion Primitive polynomial Fundamental theorem of algebra Hurwitz polynomial Polynomial transformation Tschirnhaus transformation Galois theory Discriminant of a polynomial Resultant Elimination theory Gröbner basis Regular chain Triangular decomposition Sturm's theorem Descartes' rule of signs Carlitz–Wan conjecture Polynomial decomposition, factorization under functional composition Calculus with polynomials Delta operator Bernstein–Sato polynomial Polynomial interpolation Lagrange polynomial Runge's phenomenon Spline (mathematics) Weierstrass approximation theorem Bernstein polynomial Linear algebra Characteristic polynomial Minimal polynomial Invariant polynomial Named polynomials and polynomial sequences Abel polynomials Actuarial polynomials Additive polynomials All one polynomials Appell sequence Askey–Wilson polynomials Bell polynomials Bernoulli polynomials Bernstein polynomial Bessel polynomials Binomial type Brahmagupta polynomials Caloric polynomial Charlier polynomials Chebyshev polynomials Chihara–Ismail polynomials Cyclotomic polynomials Dickson polynomial Ehrhart polynomial Exponential polynomials Favard's theorem Fibonacci polynomials Gegenbauer polynomials Hahn polynomials Hall–Littlewood polynomials Heat polynomial — see caloric polynomial Heckman–Opdam polynomials Hermite polynomials Hurwitz polynomial Jack function Jacobi polynomials Koornwinder polynomials Kostka polynomial Kravchuk polynomials Laguerre polynomials Laurent polynomial Linearised polynomial Littlewood polynomial Legendre polynomials Associated Legendre polynomials Spherical harmonic Lucas polynomials Macdonald polynomials Meixner polynomials Necklace polynomial Newton polynomial Orthogonal polynomials Orthogonal polynomials on the unit circle Permutation polynomial Racah polynomials Rogers polynomials Rogers–Szegő polynomials Rook polynomial Schur polynomials Shapiro polynomials Sheffer sequence Spread 
polynomials Tricomi–Carlitz polynomials Touchard polynomials Wilkinson's polynomial Wilson polynomials Zernike polynomials Pseudo-Zernike polynomials Knot polynomials Alexander polynomial HOMFLY polynomial Jones polynomial Algorithms Karatsuba multiplication Lenstra–Lenstra–Lovász lattice basis reduction algorithm (for polynomial factorization) Lindsey–Fox algorithm Schönhage–Strassen algorithm Other Polynomial mapping Mathematics-related lists List
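One entry above, the Horner scheme, is simple enough to illustrate concretely. A minimal Python sketch (the function name is mine, not from any listed source):

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x by Horner's scheme.

    coeffs lists coefficients from the highest degree down,
    so [2, -6, 2, -1] means 2x^3 - 6x^2 + 2x - 1.
    """
    result = 0
    for c in coeffs:
        result = result * x + c  # fold in one coefficient per step
    return result

# 2x^3 - 6x^2 + 2x - 1 at x = 3: 2*27 - 6*9 + 6 - 1 = 5
print(horner([2, -6, 2, -1], 3))  # 5
```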
List of polynomial topics
[ "Mathematics" ]
710
[ "Polynomials", "Algebra" ]
347,838
https://en.wikipedia.org/wiki/Nuclear%20medicine
Nuclear medicine (nuclear radiology, nucleology) is a medical specialty involving the application of radioactive substances in the diagnosis and treatment of disease. Nuclear imaging is, in a sense, radiology done inside out, because it records radiation emitted from within the body rather than radiation that is transmitted through the body from external sources like X-ray generators. In addition, nuclear medicine scans differ from radiology, as the emphasis is not on imaging anatomy, but on function. For this reason, it is called a physiological imaging modality. Single photon emission computed tomography (SPECT) and positron emission tomography (PET) scans are the two most common imaging modalities in nuclear medicine. Diagnostic medical imaging Diagnostic In nuclear medicine imaging, radiopharmaceuticals are taken internally, for example, through inhalation, intravenously, or orally. Then, external detectors (gamma cameras) capture and form images from the radiation emitted by the radiopharmaceuticals. This process is unlike a diagnostic X-ray, where external radiation is passed through the body to form an image. There are several techniques of diagnostic nuclear medicine. 2D: Scintigraphy ("scint") is the use of internal radionuclides to create two-dimensional images. 3D: SPECT is a 3D tomographic technique that uses gamma camera data from many projections and can be reconstructed in different planes. Positron emission tomography (PET) uses coincidence detection to image functional processes. Nuclear medicine tests differ from most other imaging modalities in that nuclear medicine scans primarily show the physiological function of the system being investigated as opposed to traditional anatomical imaging such as CT or MRI. Nuclear medicine imaging studies are generally more organ-, tissue- or disease-specific (e.g., lung scan, heart scan, bone scan, brain scan, tumor, infection, Parkinson's disease, etc.) than those in conventional radiology imaging, which focus on a particular section of the body (e.g., chest X-ray, abdomen/pelvis CT scan, head CT scan, etc.). In addition, there are nuclear medicine studies that allow imaging of the whole body based on certain cellular receptors or functions. Examples are whole body PET scans or PET/CT scans, gallium scans, indium white blood cell scans, MIBG and octreotide scans. While the ability of nuclear medicine to image disease processes from differences in metabolism is unsurpassed, it is not unique. Certain techniques such as fMRI image tissues (particularly cerebral tissues) by blood flow and thus show metabolism. Also, contrast-enhancement techniques in both CT and MRI show regions of tissue that are handling pharmaceuticals differently, due to an inflammatory process. Diagnostic tests in nuclear medicine exploit the way that the body handles substances differently when there is disease or pathology present. The radionuclide introduced into the body is often chemically bound to a complex that acts characteristically within the body; this is commonly known as a tracer. In the presence of disease, a tracer will often be distributed around the body and/or processed differently. For example, the ligand methylene-diphosphonate (MDP) can be preferentially taken up by bone. By chemically attaching technetium-99m to MDP, radioactivity can be transported and attached to bone via the hydroxyapatite for imaging. Any increased physiological function, such as due to a fracture in the bone, will usually mean increased concentration of the tracer. 
This often results in the appearance of a "hot spot", which is a focal increase in radiotracer accumulation or a general increase in radiotracer accumulation throughout the physiological system. Some disease processes result in the exclusion of a tracer, resulting in the appearance of a "cold spot". Many tracer complexes have been developed to image or treat many different organs, glands, and physiological processes. Hybrid scanning techniques In some centers, the nuclear medicine scans can be superimposed, using software or hybrid cameras, on images from modalities such as CT or MRI to highlight the part of the body in which the radiopharmaceutical is concentrated. This practice is often referred to as image fusion or co-registration, for example SPECT/CT and PET/CT. The fusion imaging technique in nuclear medicine provides information about the anatomy and function, which would otherwise be unavailable or would require a more invasive procedure or surgery. Practical concerns in nuclear imaging Although the risks of low-level radiation exposures are not well understood, a cautious approach has been universally adopted that all human radiation exposures should be kept As Low As Reasonably Practicable, "ALARP". (Originally, this was known as "As Low As Reasonably Achievable" (ALARA), but this has changed in modern draftings of the legislation to add more emphasis on the "Reasonably" and less on the "Achievable".) Working with the ALARP principle, before a patient is exposed for a nuclear medicine examination, the benefit of the examination must be identified. This needs to take into account the particular circumstances of the patient in question, where appropriate. For instance, if a patient is unlikely to be able to tolerate a sufficient amount of the procedure to achieve a diagnosis, then it would be inappropriate to proceed with injecting the patient with the radioactive tracer. When the benefit does justify the procedure, then the radiation exposure (the amount of radiation given to the patient) should also be kept "ALARP". This means that the images produced in nuclear medicine should never be better than required for confident diagnosis. Giving larger radiation exposures can reduce the noise in an image and make it more photographically appealing, but if the clinical question can be answered without this level of detail, then this is inappropriate. As a result, the radiation dose from nuclear medicine imaging varies greatly depending on the type of study. The effective radiation dose can be lower than, comparable to, or far greater than the general day-to-day environmental annual background radiation dose. Likewise, it can also be less than, in the range of, or higher than the radiation dose from an abdomen/pelvis CT scan. Some nuclear medicine procedures require special patient preparation before the study to obtain the most accurate result. Pre-imaging preparations may include dietary preparation or the withholding of certain medications. Patients are encouraged to consult with the nuclear medicine department prior to a scan. Analysis The result of the nuclear medicine imaging process is a dataset comprising one or more images. In multi-image datasets the array of images may represent a time sequence (i.e. cine or movie) often called a "dynamic" dataset, a cardiac gated time sequence, or a spatial sequence where the gamma-camera is moved relative to the patient. 
SPECT (single photon emission computed tomography) is the process by which images acquired from a rotating gamma-camera are reconstructed to produce an image of a "slice" through the patient at a particular position. A collection of parallel slices forms a slice-stack, a three-dimensional representation of the distribution of radionuclide in the patient. The nuclear medicine computer may require millions of lines of source code to provide quantitative analysis packages for each of the specific imaging techniques available in nuclear medicine. Time sequences can be further analysed using kinetic models such as multi-compartment models or a Patlak plot. Interventional nuclear medicine Radionuclide therapy can be used to treat conditions such as hyperthyroidism, thyroid cancer, skin cancer and blood disorders. In nuclear medicine therapy, the radiation treatment dose is administered internally (e.g., via intravenous or oral routes) or externally, directly over the area to be treated, in the form of a compound (e.g., in the case of skin cancer). The radiopharmaceuticals used in nuclear medicine therapy emit ionizing radiation that travels only a short distance, thereby minimizing unwanted side effects and damage to noninvolved organs or nearby structures. Most nuclear medicine therapies can be performed as outpatient procedures since there are few side effects from the treatment and the radiation exposure to the general public can be kept within a safe limit. In some centers the nuclear medicine department may also use implanted capsules of isotopes (brachytherapy) to treat cancer. History The history of nuclear medicine contains contributions from scientists across different disciplines in physics, chemistry, engineering, and medicine. The multidisciplinary nature of nuclear medicine makes it difficult for medical historians to determine the birthdate of nuclear medicine. This can probably be best placed between the discovery of artificial radioactivity in 1934 and the production of radionuclides by Oak Ridge National Laboratory for medicine-related use, in 1946. The origins of this medical idea date back as far as the mid-1920s in Freiburg, Germany, when George de Hevesy performed experiments with radionuclides administered to rats, thus displaying metabolic pathways of these substances and establishing the tracer principle. Possibly, the genesis of this medical field took place in 1936, when John Lawrence, known as "the father of nuclear medicine", took a leave of absence from his faculty position at Yale Medical School, to visit his brother Ernest Lawrence at his new radiation laboratory (now known as the Lawrence Berkeley National Laboratory) in Berkeley, California. Later on, John Lawrence made the first application in patients of an artificial radionuclide when he used phosphorus-32 to treat leukemia. Many historians consider the discovery of artificially produced radionuclides by Frédéric Joliot-Curie and Irène Joliot-Curie in 1934 as the most significant milestone in nuclear medicine. In February 1934, they reported the first artificial production of radioactive material in the journal Nature, after discovering radioactivity in aluminum foil that was irradiated with a polonium preparation. Their work built upon earlier discoveries by Wilhelm Conrad Röntgen for X-rays, Henri Becquerel for radioactive uranium salts, and Marie Curie (mother of Irène Joliot-Curie) for radioactive thorium and polonium and for coining the term "radioactivity". Taro Takemi studied the application of nuclear physics to medicine in the 1930s. 
The history of nuclear medicine would not be complete without mentioning these early pioneers. Nuclear medicine gained public recognition as a potential specialty when, on May 11, 1946, an article in the Journal of the American Medical Association (JAMA) by Massachusetts General Hospital's Dr. Saul Hertz and Massachusetts Institute of Technology's Dr. Arthur Roberts, describing the successful use of radioactive iodine (RAI) to treat Graves' disease, was published. Additionally, Sam Seidlin brought further development to the field, describing a successful treatment of a patient with thyroid cancer metastases using radioiodine (I-131). These articles are considered by many historians as the most important articles ever published in nuclear medicine. Although the earliest use of I-131 was devoted to therapy of thyroid cancer, its use was later expanded to include imaging of the thyroid gland, quantification of thyroid function, and therapy for hyperthyroidism. Among the many radionuclides that were discovered for medical use, none was as important as the discovery and development of technetium-99m. It was first discovered in 1937 by C. Perrier and E. Segrè as an artificial element to fill space number 43 in the periodic table. The development of a generator system to produce technetium-99m in the 1960s made it practical for medical use. Today, technetium-99m is the most utilized element in nuclear medicine and is employed in a wide variety of nuclear medicine imaging studies. Widespread clinical use of nuclear medicine began in the early 1950s, as knowledge expanded about radionuclides, detection of radioactivity, and using certain radionuclides to trace biochemical processes. Pioneering works by Benedict Cassen in developing the first rectilinear scanner and Hal O. Anger's scintillation camera (Anger camera) broadened the young discipline of nuclear medicine into a full-fledged medical imaging specialty. By the early 1960s, in southern Scandinavia, Niels A. Lassen, David H. Ingvar, and Erik Skinhøj developed techniques that provided the first blood flow maps of the brain, which initially involved xenon-133 inhalation; an intra-arterial equivalent was developed soon after, enabling measurement of the local distribution of cerebral activity for patients with neuropsychiatric disorders such as schizophrenia. Later versions would have 254 scintillators so a two-dimensional image could be produced on a color monitor. It allowed them to construct images reflecting brain activation from speaking, reading, visual or auditory perception and voluntary movement. The technique was also used to investigate, e.g., imagined sequential movements, mental calculation and mental spatial navigation. By the 1970s most organs of the body could be visualized using nuclear medicine procedures. In 1971, the American Medical Association officially recognized nuclear medicine as a medical specialty. In 1972, the American Board of Nuclear Medicine was established, and in 1974, the American Osteopathic Board of Nuclear Medicine was established, cementing nuclear medicine as a stand-alone medical specialty. In the 1980s, radiopharmaceuticals were designed for use in diagnosis of heart disease. The development of single photon emission computed tomography (SPECT), around the same time, led to three-dimensional reconstruction of the heart and establishment of the field of nuclear cardiology. More recent developments in nuclear medicine include the invention of the first positron emission tomography scanner (PET). 
The concept of emission and transmission tomography, later developed into single photon emission computed tomography (SPECT), was introduced by David E. Kuhl and Roy Edwards in the late 1950s. Their work led to the design and construction of several tomographic instruments at the University of Pennsylvania. Tomographic imaging techniques were further developed at the Washington University School of Medicine. These innovations led to fusion imaging with SPECT and CT by Bruce Hasegawa from University of California, San Francisco (UCSF), and the first PET/CT prototype by D. W. Townsend from University of Pittsburgh in 1998. PET and PET/CT imaging experienced slower growth in their early years owing to the cost of the modality and the requirement for an on-site or nearby cyclotron. However, an administrative decision to approve medical reimbursement of limited PET and PET/CT applications in oncology has led to phenomenal growth and widespread acceptance over the last few years, which was also facilitated by the establishment of 18F-labelled tracers for standard procedures, allowing work at sites without a cyclotron. PET/CT imaging is now an integral part of oncology for diagnosis, staging and treatment monitoring. A fully integrated MRI/PET scanner has been on the market since early 2011. Sources of radionuclides 99mTc is normally supplied to hospitals through a radionuclide generator containing the parent radionuclide molybdenum-99. 99Mo is typically obtained as a fission product of 235U in nuclear reactors; however, global supply shortages have led to the exploration of other methods of production. About a third of the world's supply, and most of Europe's supply, of medical isotopes is produced at the Petten nuclear reactor in the Netherlands. Another third of the world's supply, and most of North America's supply, was produced at the Chalk River Laboratories in Chalk River, Ontario, Canada until its permanent shutdown in 2018. The most commonly used radioisotope in PET, 18F, is not produced in a nuclear reactor, but rather in a circular accelerator called a cyclotron. The cyclotron is used to accelerate protons to bombard the stable heavy isotope of oxygen 18O. The 18O constitutes about 0.20% of ordinary oxygen (mostly oxygen-16), from which it is extracted. The 18F is then typically used to make FDG. A typical nuclear medicine study involves administration of a radionuclide into the body by intravenous injection in liquid or aggregate form, ingestion while combined with food, inhalation as a gas or aerosol, or rarely, injection of a radionuclide that has undergone micro-encapsulation. Some studies require the labeling of a patient's own blood cells with a radionuclide (leukocyte scintigraphy and red blood cell scintigraphy). Most diagnostic radionuclides emit gamma rays either directly from their decay or indirectly through electron–positron annihilation, while the cell-damaging properties of beta particles are used in therapeutic applications. Refined radionuclides for use in nuclear medicine are derived from fission or fusion processes in nuclear reactors, which produce radionuclides with longer half-lives, or cyclotrons, which produce radionuclides with shorter half-lives, or take advantage of natural decay processes in dedicated generators, i.e. molybdenum/technetium or strontium/rubidium (a short numerical decay sketch appears below, after the discussion of radiation dose). The most commonly used intravenous radionuclides are technetium-99m, iodine-123, iodine-131, thallium-201, gallium-67, fluorine-18 fluorodeoxyglucose, and indium-111 labeled leukocytes. 
The most commonly used gaseous/aerosol radionuclides are xenon-133, krypton-81m, and (aerosolised) technetium-99m. Policies and procedures Radiation dose A patient undergoing a nuclear medicine procedure will receive a radiation dose. Under present international guidelines it is assumed that any radiation dose, however small, presents a risk. The radiation dose delivered to a patient in a nuclear medicine investigation, though unproven, is generally accepted to present a very small risk of inducing cancer. In this respect it is similar to the risk from X-ray investigations except that the dose is delivered internally rather than from an external source such as an X-ray machine, and dosage amounts are typically significantly higher than those of X-rays. The radiation dose from a nuclear medicine investigation is expressed as an effective dose with units of sieverts (usually given in millisieverts, mSv). The effective dose resulting from an investigation is influenced by the amount of radioactivity administered in megabecquerels (MBq), the physical properties of the radiopharmaceutical used, its distribution in the body and its rate of clearance from the body. Effective doses can range from 6 μSv (0.006 mSv) for a 3 MBq chromium-51 EDTA measurement of glomerular filtration rate to 11.2 mSv (11,200 μSv) for an 80 MBq thallium-201 myocardial imaging procedure. The common bone scan with 600 MBq of technetium-99m MDP has an effective dose of approximately 2.9 mSv (2,900 μSv). Formerly, units of measurement were: the curie (Ci), equal to 3.7 × 10¹⁰ Bq and originally defined as the activity of 1.0 gram of radium (Ra-226); the rad (radiation absorbed dose), now replaced by the gray; and the rem (Röntgen equivalent man), now replaced by the sievert. The rad and rem are essentially equivalent for almost all nuclear medicine procedures, and only alpha radiation will produce a higher rem or sievert value, due to its much higher Relative Biological Effectiveness (RBE). Alpha emitters are nowadays rarely used in nuclear medicine, but were used extensively before the advent of nuclear reactor and accelerator produced radionuclides. The concepts involved in radiation exposure to humans are covered by the field of Health Physics; the development and practice of safe and effective nuclear medicinal techniques is a key focus of Medical Physics. Regulatory frameworks and guidelines Different countries around the world maintain regulatory frameworks that are responsible for the management and use of radionuclides in different medical settings. For example, in the US, the Nuclear Regulatory Commission (NRC) and the Food and Drug Administration (FDA) have guidelines in place for hospitals to follow. Procedures that do not involve radioactive materials, such as X-ray imaging, are not regulated by the NRC and are instead regulated by the individual states. International organizations, such as the International Atomic Energy Agency (IAEA), have regularly published different articles and guidelines for best practices in nuclear medicine as well as reporting on emerging technologies in nuclear medicine. Other factors that are considered in nuclear medicine include a patient's medical history as well as post-treatment management. Groups like the International Commission on Radiological Protection have published information on how to manage the release of patients from a hospital with unsealed radionuclides. 
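Two of the quantitative relationships above lend themselves to short sketches. First, the half-life arithmetic behind generators and cyclotron logistics: the function below is a generic exponential-decay estimate, and the half-life figures (99mTc about 6.0 h, 18F about 110 min) are nominal textbook values rather than numbers from this article.

```python
import math

def remaining_activity(a0_mbq, half_life_h, elapsed_h):
    """Exponential decay: A(t) = A0 * exp(-ln(2) * t / T_half)."""
    return a0_mbq * math.exp(-math.log(2) * elapsed_h / half_life_h)

# 99mTc (T1/2 ~ 6.0 h): an 800 MBq elution decays during a 2 h delay.
print(round(remaining_activity(800, 6.0, 2.0)))       # ~635 MBq

# 18F (T1/2 ~ 110 min): why PET sites benefit from a nearby cyclotron.
print(round(remaining_activity(800, 110 / 60, 3.0)))  # ~257 MBq
```

Second, the effective-dose figures quoted above are consistent with a simple linear model: effective dose = administered activity times a radiopharmaceutical-specific coefficient (mSv/MBq). The coefficients below are back-calculated from the article's own examples; the names are mine.

```python
# Dose coefficients (mSv per MBq) implied by the worked examples above.
DOSE_COEFF = {
    "Cr-51 EDTA": 0.006 / 3,            # 6 uSv from 3 MBq
    "Tl-201 (myocardial)": 11.2 / 80,   # 11.2 mSv from 80 MBq
    "Tc-99m MDP (bone)": 2.9 / 600,     # ~2.9 mSv from 600 MBq
}

def effective_dose_msv(agent, activity_mbq):
    """Linear estimate: dose scales with administered activity."""
    return DOSE_COEFF[agent] * activity_mbq

print(f"{effective_dose_msv('Tc-99m MDP (bone)', 600):.1f} mSv")  # 2.9 mSv
```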
See also Human subject research List of Nuclear Medicine Societies Nuclear medicine physician Nuclear pharmacy Nuclear technology Radiographer References Further reading External links Solving the Medical Isotope Crisis Hearing before the Subcommittee on Energy and Environment of the Committee on Energy and Commerce, House of Representatives, One Hundred Eleventh Congress, First Session, September 9, 2009 Radiology Medicinal radiochemistry
Nuclear medicine
[ "Chemistry" ]
4,364
[ "Medicinal radiochemistry", "Medicinal chemistry" ]
14,661,792
https://en.wikipedia.org/wiki/Nicotinate-nucleotide%E2%80%94dimethylbenzimidazole%20phosphoribosyltransferase
In enzymology, a nicotinate-nucleotide-dimethylbenzimidazole phosphoribosyltransferase (EC 2.4.2.21) is an enzyme that catalyzes the chemical reaction: beta-nicotinate D-ribonucleotide + 5,6-dimethylbenzimidazole ⇌ nicotinate + alpha-ribazole 5'-phosphate. Thus, the two substrates of this enzyme are beta-nicotinate D-ribonucleotide and 5,6-dimethylbenzimidazole, whereas its two products are nicotinate and alpha-ribazole 5'-phosphate. This enzyme belongs to the family of glycosyltransferases, specifically the pentosyltransferases. The systematic name of this enzyme class is nicotinate-nucleotide:5,6-dimethylbenzimidazole phospho-D-ribosyltransferase. Other names in common use include CobT, nicotinate mononucleotide-dimethylbenzimidazole phosphoribosyltransferase, nicotinate ribonucleotide:benzimidazole (adenine) phosphoribosyltransferase, nicotinate-nucleotide:dimethylbenzimidazole phospho-D-ribosyltransferase, and nicotinate mononucleotide (NaMN):5,6-dimethylbenzimidazole phosphoribosyltransferase. This enzyme is part of the biosynthetic pathway to cobalamin (vitamin B12) in bacteria. Function This enzyme plays a central role in the synthesis of alpha-ribazole-5'-phosphate, an intermediate for the lower ligand of cobalamin. It is one of the enzymes of the anaerobic pathway of cobalamin biosynthesis, and one of the four proteins (CobU, CobT, CobC, and CobS) involved in the synthesis of the lower ligand and the assembly of the nucleotide loop. Biosynthesis of cobalamin Vitamin B12 (cobalamin) is used as a cofactor in a number of enzyme-catalysed reactions in bacteria, archaea and eukaryotes. The biosynthetic pathway to adenosylcobalamin from its five-carbon precursor, 5-aminolaevulinic acid, can be divided into three sections: (1) the biosynthesis of uroporphyrinogen III from 5-aminolaevulinic acid; (2) the conversion of uroporphyrinogen III into the ring-contracted, deacylated intermediate precorrin 6 or cobalt-precorrin 6; and (3) the transformation of this intermediate to form adenosylcobalamin. Cobalamin is synthesised by bacteria and archaea via two alternative routes that differ primarily in the steps of section 2 that lead to the contraction of the macrocycle and excision of the extruded carbon atom (and its attached methyl group). One pathway (exemplified by Pseudomonas denitrificans) incorporates molecular oxygen into the macrocycle as a prerequisite to ring contraction, and has consequently been termed the aerobic pathway. The alternative, anaerobic, route (exemplified by Salmonella typhimurium) takes advantage of a chelated cobalt ion, in the absence of oxygen, to set the stage for ring contraction. Structural studies As of late 2007, 28 structures had been solved for this class of enzymes. References Further reading Protein domains EC 2.4.2 Enzymes of known structure
Nicotinate-nucleotide—dimethylbenzimidazole phosphoribosyltransferase
[ "Biology" ]
819
[ "Protein domains", "Protein classification" ]
14,661,877
https://en.wikipedia.org/wiki/Cytochrome%20c%20oxidase%20subunit%202
Cytochrome c oxidase II is a protein in eukaryotes that is encoded by the MT-CO2 gene. Cytochrome c oxidase subunit II, abbreviated COXII, COX2, COII, or MT-CO2, is the second subunit of cytochrome c oxidase. It is also one of the three mitochondrial DNA (mtDNA) encoded subunits (MT-CO1, MT-CO2, MT-CO3) of respiratory complex IV. Structure In humans, the MT-CO2 gene is located on the p arm of mitochondrial DNA at position 12 and it spans 683 base pairs. The MT-CO2 gene produces a 25.6 kDa protein composed of 227 amino acids. MT-CO2 is a subunit of the enzyme cytochrome c oxidase (Complex IV), an oligomeric enzymatic complex of the mitochondrial respiratory chain involved in the transfer of electrons from cytochrome c to oxygen. In eukaryotes this enzyme complex is located in the mitochondrial inner membrane; in aerobic prokaryotes it is found in the plasma membrane. The enzyme complex ranges from 3–4 subunits in prokaryotes to up to 13 polypeptides in mammals. The N-terminal domain of cytochrome c oxidase contains two transmembrane alpha-helices. The structure of MT-CO2 is known to contain one redox center, a binuclear copper A center (CuA). The CuA site involves conserved cysteines at amino acid positions 196 and 200 and a conserved histidine at position 204. Several bacterial MT-CO2 have a C-terminal extension that contains a covalently bound haem c. Function The MT-CO2 gene encodes the second subunit of cytochrome c oxidase (complex IV), a component of the mitochondrial respiratory chain that catalyzes the reduction of oxygen to water. MT-CO2 is one of the three subunits which are responsible for the formation of the functional core of the cytochrome c oxidase. MT-CO2 plays an essential role in the transfer of electrons from cytochrome c to the bimetallic center of the catalytic subunit 1 by utilizing its binuclear copper A center. It contains two adjacent transmembrane regions in its N-terminus, and the major part of the protein is exposed to the periplasmic space (in prokaryotes) or the mitochondrial intermembrane space (in eukaryotes). MT-CO2 provides the substrate-binding site and contains the binuclear copper A center, probably the primary acceptor in cytochrome c oxidase. Clinical significance Mitochondrial complex IV deficiency Variants of MT-CO2 have been associated with mitochondrial complex IV deficiency, a deficiency in an enzyme complex of the mitochondrial respiratory chain that catalyzes the oxidation of cytochrome c utilizing molecular oxygen. The deficiency is characterized by heterogeneous phenotypes ranging from isolated myopathy to severe multisystem disease affecting several tissues and organs. Other clinical manifestations include hypertrophic cardiomyopathy, hepatomegaly and liver dysfunction, hypotonia, muscle weakness, exercise intolerance, developmental disability, delayed motor development and mental retardation. Mutations of MT-CO2 are also known to cause Leigh's disease, which may be caused by an abnormality or deficiency of cytochrome oxidase. A wide range of symptoms have been found in patients with pathogenic mutations in the MT-CO2 gene and mitochondrial complex IV deficiency. A deletion mutation of a single nucleotide (7630delT) in the gene has been found to cause symptoms of reversible aphasia, right hemiparesis, hemianopsia, exercise intolerance, progressive mental impairment, and short stature. 
Furthermore, a patient with a nonsense mutation (7896G>A) in the gene presented with phenotypes such as short stature, low weight, microcephaly, skin abnormalities, severe hypotonia, and normal reflexes. A novel heteroplasmic mutation (7587T>C) that altered the initiation codon of the MT-CO2 gene has been associated with clinical manifestations such as progressive gait ataxia, cognitive impairment, bilateral optic atrophy, pigmentary retinopathy, a decrease in color vision, and mild distal-muscle wasting. Others Juvenile myopathy, encephalopathy, lactic acidosis, and stroke have also been associated with mutations in the MT-CO2 gene. Interactions MT-CO2 is known to interact with cytochrome c: a ring of lysines around the carboxyl-containing heme edge of cytochrome c docks onto acidic residues of MT-CO2, including glutamate 129, aspartate 132, and glutamate 19. References Further reading Protein domains Protein families Transmembrane proteins Human mitochondrial genes
Cytochrome c oxidase subunit 2
[ "Biology" ]
1,041
[ "Protein families", "Protein domains", "Protein classification" ]
14,662,101
https://en.wikipedia.org/wiki/Biositemap
A Biositemap is a way for a biomedical research institution of organisation to show how biological information is distributed throughout their Information Technology systems and networks. This information may be shared with other organisations and researchers. The Biositemap enables web browsers, crawlers and robots to easily access and process the information to use in other systems, media and computational formats. Biositemaps protocols provide clues for the Biositemap web harvesters, allowing them to find resources and content across the whole interlink of the Biositemap system. This means that human or machine users can access any relevant information on any topic across all organisations throughout the Biositemap system and bring it to their own systems for assimilation or analysis. File framework The information is normally stored in a biositemap.rdf or biositemap.xml file which contains lists of information about the data, software, tools material and services provided or held by that organisation. Information is presented in metafields and can be created online through sites such as the biositemaps online editor. The information is a blend of sitemaps and RSS feeds and is created using the Information Model (IM) and Biomedical Resource Ontology (BRO). The IM is responsible for defining the data held in the metafields and the BRO controls the terminology of the data held in the resource_type field. The BRO is critical in aiding the interactivity of both the other organisations and third parties to search and refine those searches. Data formats The Biositemaps Protocol allows scientists, engineers, centers and institutions engaged in modeling, software tool development and analysis of biomedical and informatics data to broadcast and disseminate to the world the information about their latest computational biology resources (data, software tools and web services). The biositemap concept is based on ideas from Efficient, Automated Web Resource Harvesting and Crawler-friendly Web Servers, and it integrates the features of sitemaps and RSS feeds into a decentralized mechanism for computational biologists and bio-informaticians to openly broadcast and retrieve meta-data about biomedical resources. These site, institution, or investigator specific biositemap descriptions are published in RDF format online and are searched, parsed, monitored and interpreted by web search engines, web applications specific to biositemaps and ontologies, and other applications interested in discovering updated or novel resources for bioinformatics and biomedical research investigations. The biositemap mechanism separates the providers of biomedical resources (investigators or institutions) from the consumers of resource content (researchers, clinicians, news media, funding agencies, educational and research initiatives). A Biositemap is an RDF file that lists the biomedical and bioinformatics resources for a specific research group or consortium. It allows developers of biomedical resources to describe the functionality and usability of each of their software tools, databases or web-services. Biositemaps supplement and do not replace the existing frameworks for dissemination of data, tools and services. Using a biositemap does not guarantee that resources will be included in search indexes nor does it influence the way that tools are ranked or perceived by the community. 
What the Biositemaps protocol will do is provide clues, information and directives to all Biositemap web harvesters that point to the existence and content of biomedical resources at different sites. Biositemap Information Model The Biositemap protocol relies on an extensible information model that includes specific properties that are commonly used and necessary for characterizing biomedical resources: Name Description URL Stage of development Organization Resource Ontology Label Keywords License Up-to-date documentation on the information model is available at the Biositemaps website. See also Information visualization ITools Resourceome Sitemaps References External links Biomedical Resource Ontology Biositemaps online editor Domain-specific knowledge representation languages Biological techniques and tools Bioinformatics
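To make the file framework concrete, here is a hypothetical sketch of how one Information Model entry might be serialized using Python's rdflib. The namespace URI, resource URL, and predicate spellings are illustrative assumptions, not the official BRO/IM schema; only the field names mirror the property list above.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

BIO = Namespace("http://example.org/biositemap#")  # hypothetical namespace

g = Graph()
g.bind("bio", BIO)

resource = URIRef("http://example.org/tools/segmenter")  # hypothetical tool
g.add((resource, RDF.type, BIO.Resource))
g.add((resource, BIO.name, Literal("Example Image Segmenter")))
g.add((resource, BIO.description, Literal("Segments cell images.")))
g.add((resource, BIO.stageOfDevelopment, Literal("beta")))
g.add((resource, BIO.organization, Literal("Example University")))
g.add((resource, BIO.resourceOntologyLabel, Literal("Image Analysis Software")))
g.add((resource, BIO.keywords, Literal("segmentation, microscopy")))
g.add((resource, BIO.license, Literal("BSD-3-Clause")))

# Biositemap descriptions are published in RDF; RDF/XML matches the
# biositemap.rdf convention described above.
print(g.serialize(format="xml"))
```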
Biositemap
[ "Engineering", "Biology" ]
798
[ "Bioinformatics", "Biological engineering", "nan" ]
14,662,238
https://en.wikipedia.org/wiki/Balance%20%28ability%29
Balance, in biomechanics, is the ability to maintain the line of gravity (vertical line from centre of mass) of a body within the base of support with minimal postural sway. Sway is the horizontal movement of the centre of gravity even when a person is standing still. A certain amount of sway is essential and inevitable due to small perturbations within the body (e.g., breathing, shifting body weight from one foot to the other or from forefoot to rearfoot) or from external triggers (e.g., visual distortions, floor translations). An increase in sway is not necessarily an indicator of dysfunctional balance so much as it is an indicator of decreased sensorimotor control. Maintaining balance Maintaining balance requires coordination of input from multiple sensory systems including the vestibular, somatosensory, and visual systems. Vestibular system: sense organs that regulate equilibrium (equilibrioception); directional information as it relates to head position (internal gravitational, linear, and angular acceleration) Somatosensory system: senses of proprioception and kinesthesia of joints; information from skin and joints (pressure and vibratory senses); spatial position and movement relative to the support surface; movement and position of different body parts relative to each other Visual system: Reference to verticality of body and head motion; spatial location relative to objects The senses must detect changes of spatial orientation with respect to the base of support, regardless of whether the body moves or the base is altered. There are environmental factors that can affect balance such as light conditions, floor surface changes, alcohol, drugs, and ear infection. Balance impairments There are balance impairments associated with aging. Age-related decline in the ability of the above systems to receive and integrate sensory information contributes to poor balance in older adults. As a result, the elderly are at an increased risk of falls. In fact, one in three adults aged 65 and over will fall each year. In the case of an individual standing quietly upright, the limit of stability is defined as the amount of postural sway at which balance is lost and corrective action is required. Body sway can occur in all planes of motion, which makes it a difficult ability to rehabilitate. There is strong evidence in research showing that deficits in postural balance are related to the control of medial-lateral stability and an increased risk of falling. To remain balanced, a person standing must be able to keep the vertical projection of their center of mass within their base of support, resulting in little medial-lateral or anterior-posterior sway. Ankle sprains are one of the most frequently occurring injuries among athletes and physically active people. The most common residual disability after an ankle sprain is instability along with body sway. Mechanical instability includes insufficient stabilizing structures and mobility that exceeds physiological limits. Functional instability involves recurrent sprains or a feeling of giving way of the ankle. Nearly 40% of patients with ankle sprains suffer from instability and an increase in body sway. Injury to the ankle causes a proprioceptive deficit and impaired postural control. Individuals with muscular weakness, occult instability, and decreased postural control are more susceptible to ankle injury than those with better postural control. Balance can be severely affected in individuals with neurological conditions. 
People who suffer a stroke or spinal cord injury, for example, can struggle with this ability. Impaired balance is strongly associated with future function and recovery after a stroke, and is the strongest predictor of falls. Another population where balance is severely affected is Parkinson's disease patients. A study by Nardone and Schieppati (2006) showed that balance problems in individuals with Parkinson's disease are related to a reduced limit of stability, an impaired production of anticipatory motor strategies, and abnormal calibration. Balance can also be negatively affected in a normal population through fatigue in the musculature surrounding the ankles, knees, and hips. Studies have found, however, that muscle fatigue around the hips (gluteals and lumbar extensors) and knees has a greater effect on postural stability (sway). It is thought that muscle fatigue leads to a decreased ability to contract with the correct amount of force or accuracy. As a result, proprioception and kinesthetic feedback from joints are altered so that conscious joint awareness may be negatively affected. Balance training Since balance is a key predictor of recovery and is required in many activities of daily living, it is often introduced into treatment plans by physiotherapists and occupational therapists when dealing with geriatrics, patients with neurological conditions, or others for whom balance training has been determined to be beneficial. Balance training in stroke patients has been supported in the literature. Methods commonly used and proven to be effective for this population include sitting or standing balance practice with various progressions including reaching, variations in base of support, use of tilt boards, gait training varying speed, and stair climbing exercises. Another method to improve balance is perturbation training, which is an external force applied to a person's center of mass in an attempt to move it from the base of support. The type of training should be determined by a physiotherapist and will depend on the nature and severity of the stroke, stage of recovery, and the patient's abilities and impairments after the stroke. Populations such as the elderly, children with neuromuscular diseases, and those with motor deficits such as chronic ankle instability have all been studied, and balance training has been shown to result in improvements in postural sway and improved "one-legged stance balance" in these groups. The effects of balance training can be measured by more varied means, but typical quantitative outcomes are centre of pressure (CoP), postural sway, and static/dynamic balance, which are measured by the subject's ability to maintain a set body position while undergoing some type of instability. Studies have suggested that higher levels of physical activity reduce morbidity and mortality and can reduce the risk of falls by 30% to 50%. Some types of exercise (gait, balance, co-ordination and functional tasks; strengthening exercise; 3D exercise and multiple exercise types) improve clinical balance outcomes in older people, and are seemingly safe. One study found aerobic exercise combined with resistance exercise to be effective in improving the ability to balance. There is still insufficient evidence supporting general physical activity, computerized balance programs or vibration plates. 
Functional balance assessments Functional tests of balance focus on maintenance of both static and dynamic balance, whether during a perturbation/change of center of mass or during quiet stance. Standardized tests of balance are available to allow allied health care professionals to assess an individual's postural control. Some functional balance tests that are available are: Romberg Test: used to determine proprioceptive contributions to upright balance. Subject remains in quiet standing while eyes are open. A more demanding variant, the sharpened Romberg test, requires subjects to stand with arms crossed, feet together, and eyes closed. This decreases the base of support, raises the subject's center of mass, and prevents them from using their arms to help balance. Functional Reach Test: measures the maximal distance one can reach forward beyond arm's length while maintaining feet planted in a standing position. Berg Balance Scale: measures static and dynamic balance abilities using functional tasks commonly performed in everyday life. One study reports that the Berg Balance Scale is the most commonly used assessment tool throughout stroke rehabilitation, and found it to be a sound measure of balance impairment in patients following a stroke. First published in 1989, the Berg Balance Scale is often treated as the gold-standard balance test and remains in widespread use. Performance-Oriented Mobility Assessment (POMA): measures both static and dynamic balance using tasks testing balance and gait. Timed Up and Go Test: measures dynamic balance and mobility. Balance Efficacy Scale: self-report measure that examines an individual's confidence while performing daily tasks with or without assistance. Star Excursion Test: a dynamic balance test that measures single stance maximal reach in multiple directions. Balance Evaluation Systems Test (BESTest): tests six unique balance control methods to create a specialized rehabilitation protocol by identifying specific balance deficits. The Mini-Balance Evaluation Systems Test (Mini-BESTest): a short form of the Balance Evaluation Systems Test that is used widely in both clinical practice and research. The test is used to assess balance impairments and includes 14 dynamic balance tasks, divided into four subcomponents: anticipatory postural adjustments, reactive postural control, sensory orientation and dynamic gait. The Mini-BESTest has been tested mainly in neurological diseases, but also in other conditions. A review of psychometric properties of the test supports its reliability, validity and responsiveness, and according to the review, it can be considered a standard balance measure. BESS: The BESS (Balance Error Scoring System) is a commonly used way to assess balance. It is known as a simple and affordable way to get an accurate assessment of balance, although the validity of the BESS protocol has been questioned. The BESS is often used in sports settings to assess the effects of mild to moderate head injury on one's postural stability. The BESS tests three separate stances (double leg, single leg, tandem) on two different surfaces (firm surface and medium density foam) for a total of six tests. Each test is 20 seconds long, with the entire time of the assessment approximately 5–7 minutes. The first stance is the double leg stance. 
The participant is instructed to stand on a firm surface with feet side by side with hands on hips and eyes closed. The second stance is the single leg stance. In this stance the participant is instructed to stand on their non-dominant foot on a firm surface with hands on hips and eyes closed. The third stance is the tandem stance. The participant stands heel to toe on a firm surface with hands on hips and eyes closed. The fourth, fifth, and sixth stances repeat, in order, stances one, two, and three, except that the participant performs these stances on a medium density foam surface. The BESS is scored by an examiner who looks for deviations from the proper stances. A deviation is noted when any of the following occurs in the participant during testing: opening the eyes, removing hands from the hips, stumbling forward or falling, lifting the forefoot or heel off the testing surface, abduction or flexion of the hip beyond 30 degrees, or remaining out of the proper testing position for more than 5 seconds. Concussion (or mild traumatic brain injury) has been associated with imbalance among sports participants and military personnel. Some of the standard balance tests may be too easy or time-consuming for application to these high-functioning groups. Expert recommendations have been gathered concerning balance assessments appropriate to military service-members. 
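As a rough illustration of the BESS scoring just described, here is a toy tally in Python. The error labels are my own shorthand for the deviations listed above, and the cap of ten errors per 20-second trial reflects common scoring practice rather than anything stated in the text.

```python
# Scorable deviations, per the BESS description above (names are mine).
ERRORS = {
    "eyes_opened", "hands_off_hips", "stumble_or_fall",
    "foot_lift", "hip_beyond_30_deg", "out_of_position_5s",
}

def trial_score(observed, cap=10):
    """Count scorable deviations in one 20-second trial (capped)."""
    return min(sum(1 for e in observed if e in ERRORS), cap)

# Six trials: double-leg, single-leg, tandem on firm, then the same on foam.
trials = [
    [], ["foot_lift", "hands_off_hips"], ["stumble_or_fall"],
    ["foot_lift"], ["foot_lift", "foot_lift", "eyes_opened"], [],
]
print("BESS total errors:", sum(trial_score(t) for t in trials))  # lower is better
```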
Balance testing has become a standard measure to help diagnose concussions in athletes, but due to the fact that athletes can be extremely fatigued has made it hard for clinicians to accurately determine how long the athletes need to rest before fatigue is gone, and they can measure balance to determine if the athlete is concussed. So far, researchers have only been able to estimate that athletes need anywhere from 8–20 minutes of rest before testing balance That can be a huge difference depending on the circumstances. Other factors influencing balance Age, gender, and height have all been shown to impact an individual's ability to balance and the assessment of that balance. Typically, older adults have more body sway with all testing conditions. Tests have shown that older adults demonstrate shorter functional reach and larger body sway path lengths. Height also influences body sway in that as height increases, functional reach typically decreases. However, this test is only a measure of anterior and posterior sway. This is done to create a repeatable and reliable clinical balance assessment tool. A 2011 Cochrane Review found that specific types of exercise (such as gait, balance, co-ordination and functional tasks; strengthening exercises; 3D exercises [e.g. Tai Chi] and combinations of these) can help improve balance in older adults. However, there was no or limited evidence on the effectiveness of general physical activities, such as walking and cycling, computer-based balance games and vibration plates. Voluntary control of balance While balance is mostly an automatic process, voluntary control is common. Active control usually takes place when a person is in a situation where balance is compromised. This can have the counter-intuitive effect of increasing postural sway during basic activities such as standing. One explanation for this effect is that conscious control results in over-correcting an instability and "may inadvertently disrupt relatively automatic control processes." While concentration on an external task "promotes the utilization of more automatic control processes." Balance and dual-tasking Supra-postural tasks are those activities that rely on postural control while completing another behavioral goal, such as walking or creating a text message while standing upright. Research has demonstrated that postural stability operates to permit the achievement of other activities. In other words, standing in a stable upright position is not at all beneficial if one falls as soon as any task is attempted. In a healthy individual, it is believed that postural control acts to minimize the amount of effort required (not necessarily to minimize sway), while successfully accomplishing the supra-postural task. Research has shown that spontaneous reductions in postural sway occur in response to the addition of a secondary goal. McNevin and Wulf (2002) found an increase in postural performance when directing an individual's attention externally compared to directing attention internally That is, focusing attention on the effects of one's movements rather than on the movement itself will boost performance. This results from the use of more automatic and reflexive control processes. When one is focused on their movements (internal focus), they may inadvertently interfere with these automatic processes, decreasing their performance. Externally focusing attention improves postural stability, despite increasing postural sway at times. 
It is believed that utilizing automatic control processes by focusing attention externally enhances both performance and learning. Adopting an external focus of attention subsequently improves the performance of supra-postural tasks, while increasing postural stability. References Further reading Biomechanics Physical fitness
Balance (ability)
[ "Physics" ]
3,356
[ "Biomechanics", "Mechanics" ]
14,662,471
https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282013%29
This page summarizes projects that propose to bring more than of new liquid fuel capacity to market, with the first production of fuel beginning in 2013. This is part of the Wikipedia summary of oil megaprojects. Detailed list of projects for 2013 Terminology Year startup: year of first oil; specific date if available Operator: company undertaking the project Area: onshore (LAND), offshore (OFF), offshore deep water (ODW), tar sands (TAR) Type: liquid category (i.e. natural gas liquids, natural gas condensate, crude oil) Grade: oil quality (light, medium, heavy, sour) or API gravity 2P resvs: 2P (proven + probable) oil reserves in giga barrels (Gb) GOR: ratio of produced gas to produced oil, commonly abbreviated GOR Peak year: year of the production plateau/peak Peak: maximum production expected (thousand barrels/day) Discovery: year of discovery Capital investment: expected capital cost; FID (Final Investment Decision); if no FID, then normally no project development contracts can be awarded. For many projects, a FEED stage (Front End Engineering Design) precedes the FID. Notes: comments about the project (footnotes) Ref.: sources References 2013 Oil fields Proposed energy projects Projects established in 2013 2013 in the environment 2013 in technology
Oil megaprojects (2013)
[ "Engineering" ]
285
[ "Oil megaprojects", "Megaprojects" ]
14,662,602
https://en.wikipedia.org/wiki/Carboxylesterase%20type%20B
Carboxylesterase, type B is a family of evolutionarily related proteins that belongs to the superfamily of proteins with the Alpha/beta hydrolase fold. Higher eukaryotes have many distinct esterases. The different types include those that act on carboxylic esters (EC 3.1.1.1). Carboxyl-esterases have been classified into three categories (A, B and C) on the basis of differential patterns of inhibition by organophosphates. The sequences of a number of type-B carboxylesterases indicate that the majority are evolutionarily related. As is the case for lipases and serine proteases, the catalytic apparatus of esterases involves three residues (catalytic triad): a serine, a glutamate or aspartate and a histidine. Subfamilies Neuroligin Cholinesterase Examples Human genes that encode proteins containing the carboxylesterase domain include: ACHE ARACHE BCHE CEL CES1 CES2 CES3 CES4 CES7 CES8 NLGN1 NLGN2 NLGN3 NLGN4X NLGN4Y TG See also Carboxylesterase References External links Carboxylesterases type-B in PROSITE Protein families Peripheral membrane proteins
Carboxylesterase type B
[ "Biology" ]
272
[ "Protein families", "Protein classification" ]
14,662,789
https://en.wikipedia.org/wiki/Gusums%20Bruk
Gusums Bruk was a foundry in Gusum, Sweden, specializing in brass production. The foundry commenced operations in 1653. Chandeliers and industrial products were its main products. Cannons, copper and brass wires, button pins, candlesticks, paper fabric for fabrication, safety pins, and zippers were also made. It filed for bankruptcy and shut down in 1988; its ruins have since been demolished and the area decontaminated. A former subsidiary is still in operation at another site in Gusum. References External links Jornmark (site about abandoned places) http://www.bondandbowery.com/item/13025 Foundries Industrial buildings in Sweden Metal companies of Sweden 1653 establishments in Sweden
Gusums Bruk
[ "Chemistry" ]
153
[ "Foundries", "Metallurgical facilities" ]
14,662,815
https://en.wikipedia.org/wiki/Double-negation%20translation
In proof theory, a discipline within mathematical logic, double-negation translation, sometimes called negative translation, is a general approach for embedding classical logic into intuitionistic logic. Typically it is done by translating formulas to formulas that are classically equivalent but intuitionistically inequivalent. Particular instances of double-negation translations include Glivenko's translation for propositional logic, and the Gödel–Gentzen translation and Kuroda's translation for first-order logic. Propositional logic The easiest double-negation translation to describe comes from Glivenko's theorem, proved by Valery Glivenko in 1929. It maps each classical formula φ to its double negation ¬¬φ. Results Glivenko's theorem states: If φ is a propositional formula, then φ is a classical tautology if and only if ¬¬φ is an intuitionistic tautology. Glivenko's theorem implies the more general statement: If T is a set of propositional formulas and φ a propositional formula, then T ⊢ φ in classical logic if and only if T ⊢ ¬¬φ in intuitionistic logic. In particular, a set of propositional formulas is intuitionistically consistent if and only if it is classically satisfiable. First-order logic The Gödel–Gentzen translation (named after Kurt Gödel and Gerhard Gentzen) associates with each formula φ in a first-order language another formula φN, which is defined inductively: If φ is atomic, then φN is ¬¬φ as above, but furthermore (φ ∨ θ)N is ¬(¬φN ∧ ¬θN) (∃x φ)N is ¬(∀x ¬φN) and otherwise (φ ∧ θ)N is φN ∧ θN (φ → θ)N is φN → θN (¬φ)N is ¬φN (∀x φ)N is ∀x φN This translation has the property that φN is classically equivalent to φ. Troelstra and Van Dalen (1988, Ch. 2, Sec. 3) give a characterization, due to Leivant, of those formulas that intuitionistically imply their own Gödel–Gentzen translation; this is not the case for all formulas. (This is related to the fact that propositions with additional double-negations can be stronger than their simpler variant. E.g., ¬¬φ → θ always implies φ → θ, but the schema in the other direction would imply double-negation elimination.) Equivalent variants Due to constructive equivalences, there are several alternative definitions of the translation. For example, a valid De Morgan's law allows one to rewrite a negated disjunction. One possibility can thus be described succinctly as follows: prefix "¬¬" to every atomic formula, and also to every disjunction and existential quantifier, (φ ∨ θ)N is ¬¬(φN ∨ θN) (∃x φ)N is ¬¬∃x φN Another procedure, known as Kuroda's translation, is to construct a translated φ by putting "¬¬" before the whole formula and after every universal quantifier. This procedure exactly reduces to the propositional translation whenever φ is propositional. Thirdly, one may instead prefix "¬¬" before every subformula of φ, as done by Kolmogorov. Such a translation is the logical counterpart to the call-by-name continuation-passing style translation of functional programming languages along the lines of the Curry–Howard correspondence between proofs and programs. The Gödel–Gentzen- and Kuroda-translated formulas of each φ are provably equivalent to one another, and this result holds already in minimal propositional logic. Further, in intuitionistic propositional logic, the Kuroda- and Kolmogorov-translated formulas are also equivalent. 
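A compact way to see the inductive definition is as a recursive function over a formula datatype. The sketch below is mine, not from the cited references; only the clauses mirror the Gödel–Gentzen definition above.

```python
from dataclasses import dataclass

# A toy first-order formula datatype; fields hold sub-formulas.
@dataclass
class Atom: name: str
@dataclass
class Not: sub: object
@dataclass
class And: left: object; right: object
@dataclass
class Or: left: object; right: object
@dataclass
class Imp: left: object; right: object
@dataclass
class Forall: var: str; body: object
@dataclass
class Exists: var: str; body: object

def gg(phi):
    """Gödel–Gentzen negative translation, clause by clause."""
    if isinstance(phi, Atom):
        return Not(Not(phi))                                     # atoms: ¬¬φ
    if isinstance(phi, Or):
        return Not(And(Not(gg(phi.left)), Not(gg(phi.right))))   # ¬(¬φN ∧ ¬θN)
    if isinstance(phi, Exists):
        return Not(Forall(phi.var, Not(gg(phi.body))))           # ¬∀x ¬φN
    if isinstance(phi, And):
        return And(gg(phi.left), gg(phi.right))                  # homomorphic
    if isinstance(phi, Imp):
        return Imp(gg(phi.left), gg(phi.right))
    if isinstance(phi, Not):
        return Not(gg(phi.sub))
    if isinstance(phi, Forall):
        return Forall(phi.var, gg(phi.body))
    raise TypeError(f"unknown formula: {phi!r}")

# Example: the excluded middle P ∨ ¬P translates to ¬(¬¬¬P ∧ ¬¬¬¬P).
print(gg(Or(Atom("P"), Not(Atom("P")))))
```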
The mere propositional mapping of φ to ¬¬φ does not extend to a sound translation of first-order logic, because the so-called double-negation shift is not a theorem of intuitionistic predicate logic; the negations in φN therefore have to be placed more carefully.
Results
Let TN consist of the double-negation translations of the formulas in T. The fundamental soundness theorem (Avigad and Feferman 1998, p. 342; Buss 1998, p. 66) states: If T is a set of axioms and φ is a formula, then T proves φ using classical logic if and only if TN proves φN using intuitionistic logic.
Arithmetic
The double-negation translation was used by Gödel (1933) to study the relationship between classical and intuitionistic theories of the natural numbers ("arithmetic"). He obtains the following result: If a formula φ is provable from the axioms of Peano arithmetic then φN is provable from the axioms of Heyting arithmetic.
This result shows that if Heyting arithmetic is consistent then so is Peano arithmetic. This is because a contradictory formula such as 0 = 1 is interpreted as ¬¬(0 = 1), which is still contradictory. Moreover, the proof of the relationship is entirely constructive, giving a way to transform a proof of φ in Peano arithmetic into a proof of φN in Heyting arithmetic. By combining the double-negation translation with the Friedman translation, it is in fact possible to prove that Peano arithmetic is Π⁰₂-conservative over Heyting arithmetic.
See also
Dialectica interpretation
Modal companion
References
J. Avigad and S. Feferman (1998), "Gödel's Functional ("Dialectica") Interpretation", Handbook of Proof Theory, S. Buss, ed., Elsevier.
S. Buss (1998), "Introduction to Proof Theory", Handbook of Proof Theory, S. Buss, ed., Elsevier.
G. Gentzen (1936), "Die Widerspruchsfreiheit der reinen Zahlentheorie", Mathematische Annalen, v. 112, pp. 493–565 (German). Reprinted in English translation as "The consistency of arithmetic" in The Collected Papers of Gerhard Gentzen, M. E. Szabo, ed.
V. Glivenko (1929), "Sur quelques points de la logique de M. Brouwer", Bull. Soc. Math. Belg., v. 15, pp. 183–188.
K. Gödel (1933), "Zur intuitionistischen Arithmetik und Zahlentheorie", Ergebnisse eines mathematischen Kolloquiums, v. 4, pp. 34–38 (German). Reprinted in English translation as "On intuitionistic arithmetic and number theory" in The Undecidable, M. Davis, ed., pp. 75–81.
A. N. Kolmogorov (1925), "O principe tertium non datur" (Russian). Reprinted in English translation as "On the principle of the excluded middle" in From Frege to Gödel, van Heijenoort, ed., pp. 414–447.
A. S. Troelstra (1977), "Aspects of Constructive Mathematics", Handbook of Mathematical Logic, J. Barwise, ed., North-Holland.
A. S. Troelstra and D. van Dalen (1988), Constructivism in Mathematics. An Introduction, volumes 121, 123 of Studies in Logic and the Foundations of Mathematics, North-Holland.
External links
"Intuitionistic logic", Stanford Encyclopedia of Philosophy.
Proof theory Intuitionism
Double-negation translation
[ "Mathematics" ]
1,597
[ "Mathematical logic", "Proof theory" ]
14,662,924
https://en.wikipedia.org/wiki/Tarry%20point
In geometry, the Tarry point for a triangle is a point of concurrency of the lines through the vertices of the triangle perpendicular to the corresponding sides of the triangle's first Brocard triangle. The Tarry point lies diametrically opposite the Steiner point on the circumcircle; that is, it is the other endpoint of the diameter of the circumcircle drawn through the Steiner point. The point is named for Gaston Tarry. See also Concurrent lines Notes Triangle centers
Tarry point
[ "Physics", "Mathematics" ]
83
[ "Point (geometry)", "Triangle centers", "Points defined for a triangle", "Geometric centers", "Symmetry" ]
14,662,962
https://en.wikipedia.org/wiki/ATP%20synthase%20subunit%20C
ATPase, subunit C of the Fo/Vo complex is the main transmembrane subunit of V-type, A-type and F-type ATP synthases. Subunit C (also called subunit 9, or proteolipid in F-ATPases, or the 16 kDa proteolipid in V-ATPases) is found in the Fo or Vo complex of F- and V-ATPases, respectively. The subunits form an oligomeric c-ring that makes up the Fo/Vo/Ao rotor; the actual number of subunits varies greatly among specific enzymes. ATPases (or ATP synthases) are membrane-bound enzyme complexes/ion transporters that combine ATP synthesis and/or hydrolysis with the transport of protons across a membrane. ATPases can harness the energy from a proton gradient, using the flux of ions across the membrane via the ATPase proton channel to drive the synthesis of ATP. Some ATPases work in reverse, using the energy from the hydrolysis of ATP to create a proton gradient. There are different types of ATPases, which can differ in function (ATP synthesis and/or hydrolysis), structure (F-, V- and A-ATPases contain rotary motors) and in the type of ions they transport. The F-ATPases (or F1Fo ATPases) and V-ATPases (or V1Vo ATPases) are each composed of two linked complexes: the F1 or V1 complex contains the catalytic core that synthesizes/hydrolyses ATP, and the Fo or Vo complex forms the membrane-spanning pore. The F- and V-ATPases all contain rotary motors, one that drives proton translocation across the membrane and one that drives ATP synthesis/hydrolysis. In F-ATPases, the flux of protons through the ATPase channel drives the rotation of the c-subunit ring, which in turn is coupled to the rotation of the F1 complex gamma subunit rotor due to the permanent binding between the gamma and epsilon subunits of F1 and the c-subunit ring of Fo. The sequential protonation and deprotonation of Asp61 of subunit C is coupled to the stepwise movement of the rotor. In V-ATPases, there are three proteolipid subunits (c, c′ and c′′) that form part of the proton-conducting pore, each containing a buried glutamic acid residue that is essential for proton transport. A recent study has also implicated the c-subunit as a critical component of the mitochondrial permeability transition pore. Subfamilies ATPase, Vo complex, proteolipid subunit C; ATPase, Fo complex, subunit C Human proteins containing this domain ATP5MC1; ATP5G2; ATP5G3; ATP6V0B; ATP6V0C See also Diarylquinoline References Protein domains Protein families Transmembrane proteins
ATP synthase subunit C
[ "Biology" ]
627
[ "Protein families", "Protein domains", "Protein classification" ]
14,663,527
https://en.wikipedia.org/wiki/Center%20of%20balance%20%28horse%29
In horsemanship, the center of balance of a horse is a position on the horse's back which correlates closely to the center of gravity of the horse itself. The term may also refer to the horse's center of gravity. For the best performance by the horse, as well as for better balance of the rider, the rider must be positioned over the center of balance of the horse. The location of the horse's center of balance depends on a combination of speed and degree of collection. For a standing or quietly walking horse, it is slightly behind the heart girth and below the withers. If a horse is moving at a trot or canter, the center of balance shifts slightly forward, and it moves even further forward when the horse is galloping or jumping. If a horse is highly collected, the center of balance will be farther back, regardless of gait, than if the horse is in an extended frame. For movements such as a rein back or the levade, the center of balance of horse and rider may be further back than at a standstill, due to the shift of weight and balance to the hindquarters of the horse. Accordingly, a saddle designed for a specific discipline will attempt to place a rider naturally at the most suitable position for the anticipated activity of the horse. For example, a "close contact" style of English saddle, designed for show jumping, places the rider's seat farther forward than does a dressage style English saddle. References Riding techniques and movements Balance (horse), Center of
Center of balance (horse)
[ "Physics", "Mathematics" ]
316
[ "Point (geometry)", "Geometric centers", "Symmetry" ]
14,663,808
https://en.wikipedia.org/wiki/Bone%20remodeling
In osteology, bone remodeling or bone metabolism is a lifelong process in which mature bone tissue is removed from the skeleton (a process called bone resorption) and new bone tissue is formed (a process called ossification or new bone formation). Recent research has identified a specialised subset of blood vessels, termed Type R endothelial cells, in the bone microenvironment. These blood vessels play a crucial role in adult bone remodelling by mediating interactions between bone-resorbing osteoclasts and bone-forming osteoblasts. Type R blood vessels are characterised by their association with post-arterial capillaries and exhibit unique remodelling properties crucial for bone homeostasis. The resorption and formation processes also control the reshaping or replacement of bone following injuries such as fractures, as well as the micro-damage that occurs during normal activity. Remodeling also responds to the functional demands of mechanical loading. In the first year of life, almost 100% of the skeleton is replaced. In adults, remodeling proceeds at about 10% per year. An imbalance in the regulation of bone remodeling's two sub-processes, bone resorption and bone formation, results in many metabolic bone diseases, such as osteoporosis. Physiology Bone homeostasis involves multiple but coordinated cellular and molecular events. Two main types of cells are responsible for bone metabolism: osteoblasts (which secrete new bone) and osteoclasts (which break bone down). The structure of bones, as well as an adequate supply of calcium, requires close cooperation between these two cell types and other cell populations present at the bone remodeling sites (e.g. immune cells). Bone metabolism relies on complex signaling pathways and control mechanisms to achieve proper rates of growth and differentiation. These controls include the action of several hormones, including parathyroid hormone (PTH), vitamin D, growth hormone, steroids, and calcitonin, as well as several bone marrow-derived membrane and soluble cytokines and growth factors (e.g. M-CSF, RANKL, VEGF and the IL-6 family). It is in this way that the body is able to maintain proper levels of calcium required for physiological processes. Thus bone remodeling is not just occasional "repair of bone damage" but rather an active, continual process that is always happening in a healthy body. Subsequent to appropriate signaling, osteoclasts move to resorb the surface of the bone, followed by deposition of bone by osteoblasts. Together, the cells that are responsible for bone remodeling are known as the basic multicellular unit (BMU), and the temporal duration (i.e. lifespan) of the BMU is referred to as the bone remodeling period. See also Biomineralization, the general class of forming and maintaining mineralized tissues Tissue remodeling Wolff's law References animal physiology bones
Bone remodeling
[ "Biology" ]
623
[ "Animals", "Animal physiology" ]
14,664,386
https://en.wikipedia.org/wiki/Force%20platform
Force platforms or force plates are measuring instruments that measure the ground reaction forces generated by a body standing on or moving across them, to quantify balance, gait and other parameters of biomechanics. The most common areas of application are medicine and sport. Operation The simplest force platform is a plate with a single pedestal, instrumented as a load cell. Better designs have a pair of plates, usually rectangular (although triangular designs can also work), one over the other, with load cells or triaxial force transducers between them at the corners. Like single-force platforms, dual-force platforms can be used to assess performance in double-leg tests and strength and power asymmetries in unilateral jump and isometric tests. However, they also provide an additional level of intelligence on neuromuscular status by evaluating the force distribution between limbs during double-limb tests, revealing critical information on strength asymmetries and compensatory strategies. The simplest force plates measure only the vertical component of the force at the geometric center of the platform. More advanced models measure the three-dimensional components of the single equivalent force applied to the surface and its point of application, usually called the centre of pressure (CoP), as well as the vertical moment of force. Cylindrical force plates have also been constructed for studying arboreal locomotion, including brachiation. Force platforms may be classified as single-pedestal or multi-pedestal and by the transducer (force and moment transducer) type: strain gauge, piezoelectric, capacitance gauge, piezoresistive, etc., each with its advantages and drawbacks. Single-pedestal models, sometimes called load cells, are suitable for forces that are applied over a small area. For studies of movements, such as gait analysis, force platforms with at least three pedestals (and usually four) are used to permit measurement of forces that migrate across the plate; for example, during walking, ground reaction forces start at the heel and finish near the big toe. Force platforms should be distinguished from pressure measuring systems that, although they too quantify centre of pressure, do not directly measure the applied force vector. Pressure measuring plates are useful for quantifying the pressure patterns under a foot over time but cannot quantify horizontal or shear components of the applied forces. The measurements from a force platform can be either studied in isolation, or combined with other data, such as limb kinematics, to understand the principles of locomotion. If an organism makes a standing jump from a force plate, the data from the plate alone is sufficient to calculate acceleration, work, power output, jump angle, and jump distance using basic physics (a sketch of this calculation appears below). Simultaneous video measurements of leg joint angles and force plate output can allow the determination of torque, work and power at each joint using a method called inverse dynamics. Recent developments in technology Advancements in technology have allowed force platforms to take on a new role within the kinetics field. The cost of traditional laboratory-grade force plates (usually in the thousands of dollars) has made them very impractical for the everyday clinician. However, Nintendo introduced the Wii Balance Board (WBB) (Nintendo, Kyoto, Japan) in 2007 and changed the structure of what a force plate can be.
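As a minimal sketch of the basic physics just described for a standing jump (an illustration, not any manufacturer's software), the following code estimates body mass, takeoff velocity and jump height from a sampled vertical ground-reaction-force trace via the impulse–momentum theorem. The sampling rate, the quiet-standing interval used to estimate body weight, and the takeoff threshold are assumptions chosen for the example:

```python
import numpy as np

G = 9.81  # gravitational acceleration (m/s^2)

def jump_metrics(fz, fs=1000.0, quiet_s=0.5, takeoff_frac=0.02):
    """Estimate mass, takeoff velocity and jump height from the vertical
    ground-reaction-force trace (fz, in newtons) of a standing jump.
    The trace is assumed to begin with quiet standing; fs is the sampling
    rate in Hz, and takeoff is taken as the first sample where the force
    drops below a small fraction of body weight.
    """
    fz = np.asarray(fz, dtype=float)
    dt = 1.0 / fs

    # Body weight (and hence mass) from the quiet-standing interval.
    bw = fz[: int(quiet_s * fs)].mean()
    mass = bw / G

    # First sample where the plate reads (near) zero: the athlete is airborne.
    takeoff = int(np.argmax(fz < takeoff_frac * bw))

    # Impulse-momentum theorem: net impulse up to takeoff = mass * velocity.
    net_impulse = np.trapz(fz[:takeoff] - bw, dx=dt)
    v_takeoff = net_impulse / mass

    # Projectile motion after takeoff: h = v^2 / (2 g).
    height = v_takeoff ** 2 / (2.0 * G)
    return {"mass_kg": mass, "v_takeoff_m_s": v_takeoff, "height_m": height}
```

With these formulas, an athlete leaving the plate at 2.4 m/s, for example, is assigned a jump height of 2.4² / (2 × 9.81) ≈ 0.29 m, regardless of the plate used to record the trace.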
By 2010, the WBB had been found to be a valid and reliable instrument for measuring weight distribution when directly compared to a "gold-standard" laboratory-grade force plate, while costing less than $100. Moreover, this has been verified in both healthy and clinical populations. This is possible due to the four force transducers found in the corners of the WBB. These studies are conducted using customized software, such as LabVIEW (National Instruments, Austin, TX, USA), which can be integrated with the board to measure the amount of body sway or the CoP path length during timed trials. The other benefit of a posturography system such as the WBB is that it is portable, so clinicians around the world are able to measure body sway quantitatively, instead of relying on the subjective clinical balance assessments currently in use. According to Digital Trends, Nintendo's Wii and the Wii U successor product have both been discontinued as of March 2016. This exemplifies one of the issues arising from the adoption of inexpensive off-the-shelf consumer products re-purposed for medical measurements. Further issues with such adoption arise from the regulatory and standards bodies around the world. Force platforms used for measuring a patient's balance and mobility performance are classified by the U.S. FDA (United States Food and Drug Administration) as Class I medical devices. As such they must be manufactured to certain quality standards as established by the ISO (International Organization for Standardization): ISO 9001 Quality Management Principles or ISO 13485 Medical Device Quality Management Systems. The European Union's MDD (Medical Device Directive) also classifies force platforms used for medical measurements as Class I medical devices and requires medical CE certification for importation and use in the European Union for such medical applications. A notable recent standard, ASTM F3109-16 Standard Test Method for Verification of Multi-Axis Force Measuring Platforms, presents a framework for manufacturers and users to verify the performance of force platforms across the extents of their working surface. Standards such as these are used by manufacturers of medical-grade force platforms to ensure that measurements made on a patient population are accurate, repeatable and reliable. In short, inexpensive consumer-grade entertainment components may be a poor choice for medical measurements given the lack of continuity of such products and their legal, regulatory and perhaps quality unsuitability for such applications. Use in sport Force plates are commonly used in sport to assess an athlete's force-producing capabilities, strength and imbalances. A practitioner can use a force plate to assess training needs and readiness to train, and also during the return-to-play process. Typical force plate assessments in sport include the countermovement jump (CMJ), squat jump (SJ), drop jump (DJ), countermovement rebound jump, and isometric mid-thigh pull (IMTP). Practitioners often have trouble understanding which metrics to track when using force plates. Dr. Jason Lake, a biomechanist at the University of Chichester, has created a system for easily selecting force plate metrics, called the 'ODSF' system. History Chronology •1976• Advanced Mechanical Technology, Inc. (AMTI) constructed the first commercially available strain gauge force plate for gait analysis at the biomechanics laboratory of the Boston Children's Hospital.
•2017• Hawkin Dynamics created the first wireless force platform and mobile app. See also Gait analysis Posturography References Physiological instruments Biomechanics
Force platform
[ "Physics", "Technology", "Engineering" ]
1,380
[ "Biomechanics", "Physiological instruments", "Mechanics", "Measuring instruments" ]
14,664,694
https://en.wikipedia.org/wiki/Compact%20Model%20Coalition
The Compact Model Coalition (formerly the Compact Model Council) is a working group in the electronic design automation (EDA) industry formed to choose, maintain and promote the use of standard semiconductor device models. Commercial and industrial analog simulators (such as SPICE) need to add device models as technology advances (see Moore's law) and earlier models become inaccurate. Before this group was formed, new transistor models were largely proprietary, which severely limited the choice of simulators that could be used. It was formed in August 1996 for the purpose of developing and standardizing the use and implementation of SPICE models and model interfaces. In May 2013, the Silicon Integration Initiative (Si2) and TechAmerica announced the transfer of the Compact Model Council to Si2 and a renaming to Compact Model Coalition. New models are submitted to the Coalition, where their technical merits are discussed, and potential standard models are then voted on. Some of the models supported by the Compact Model Coalition include: BSIM3, a MOSFET model from UC Berkeley (see BSIM). BSIM4, a more modern MOSFET model, also from UC Berkeley. PSP, another MOSFET model. PSP originally stood for Penn State-Philips, but one author moved to ASU, and Philips spun off their semiconductor group as NXP Semiconductors. PSP is now developed and supported at CEA-Leti. BSIMSOI, a model for silicon-on-insulator MOSFETs. L-UTSOI, a model for fully-depleted silicon-on-insulator MOSFETs, developed and supported by CEA-Leti. HICUM, or HIgh CUrrent Model, for bipolar transistors, from CEDIC, Dresden University of Technology, Germany, and UC San Diego, USA. MEXTRAM, a compact model for bipolar transistors that aims to support the design of bipolar transistor circuits at high frequencies in Si and SiGe based process technologies. MEXTRAM was originally developed at NXP Semiconductors and is now developed and supported at Auburn University. ASM-HEMT and MVSG, the newest standard models for gallium nitride (GaN) transistors. To address the increasing need for reliability (ageing) simulation, the CMC nominated the OMI interface as the new EDA-vendor-independent solution for ageing simulations. Technically, the interface is very close to the TMI2 interface developed by TSMC. The standardization will allow silicon foundries to develop a common set of ageing models that will work with all significant analog simulators. See also Electronic circuit simulation References External links Member list at CMC website Site map of CMC website including links to working groups Transistor modeling Electronic engineering
Compact Model Coalition
[ "Technology", "Engineering" ]
556
[ "Electrical engineering", "Electronic engineering", "Computer engineering" ]
14,664,777
https://en.wikipedia.org/wiki/Tracer%20use%20in%20the%20oil%20industry
Tracers are used in the oil industry in order to qualitatively or quantitatively gauge how fluid flows through the reservoir, as well as being a useful tool for estimating residual oil saturation. Tracers can be used in either interwell tests or single well tests. In interwell tests, the tracer is injected at one well along with the carrier fluid (water in a waterflood or gas in a gasflood) and detected at a producing well after some period of time, which can be anything from days to years. In single well tests, the tracer is injected into the formation from a well and then produced out the same well. The delay between a tracer that does not react with the formation (a conservative tracer) and one that does (a partitioning tracer) will give an indication of residual oil saturation, a piece of information that is difficult to acquire by other means. Tracers can be radioactive or chemical, gas or liquid and have been used extensively in the oil industry and hydrology for decades. References Petroleum engineering
Tracer use in the oil industry
[ "Chemistry", "Engineering" ]
218
[ "Petroleum", "Petroleum engineering", "Energy engineering", "Petroleum stubs" ]
14,664,887
https://en.wikipedia.org/wiki/Digital%20artifactual%20value
Digital artifactual value, a preservation term, is the intrinsic value of a digital object, rather than the informational content of the object. Though standards are lacking, born-digital objects and digital representations of physical objects may have a value attributed to them as artifacts. Intrinsic value in analog materials With respect to analog or non-digital materials, artifacts are determined to have singular research or archival value if they possess qualities and characteristics that make them the only acceptable form for long-term preservation. These qualities and characteristics are commonly referred to as the item's intrinsic value and form the basis upon which digital artifactual value is currently evaluated. Artifactual value based on this idea is predicated upon the artifact's originality, faithfulness, fixity, and stability. The intrinsic value of a particular object, as interpreted by archival professionals, largely determines the selection process for archives. The National Archives and Records Administration Committee on Intrinsic Value in "Intrinsic Value in Archival Material" classified an analog object as having intrinsic value if it possessed one or more of the following qualities: Physical form that may be the subject for study if the records provide meaningful documentation or significant examples of the form. Aesthetic or artistic quality. Unique or curious physical features. Age that provides a quality of uniqueness. Value for use in exhibits. Questionable authenticity, date, author, or other characteristic that is significant and ascertainable by physical examination. General and substantial public interest because of direct association with famous or historically significant people, places, things, issues or events. Significance as documentation of the establishment or continuing legal basis of an agency or institution. Significance as documentation of the formulation of policy at the highest executive levels when the policy has significance and broad effect throughout or beyond the agency or institution. Other archival professionals such as Lynn Westney have written that the characteristics of materials exhibiting intrinsic value include age, content, usage, particularities of creation, signatures, and attached seals. Westney and others have stated that paper-based artifacts can be thought to have evidentiary value, or significant contextual markings, insofar as the original manifestation of the artifact can attest to the originality, faithfulness or authenticity, fixity, and stability of the content. For other analog materials, properly articulating intrinsic value remains essential for determining artifactual value. Similar to paper-based objects in many respects, artifactual value for images typically takes into account artistic value, age, authorial prestige, significant provenance, and institutional priorities. Analog audio preservation is based upon similar factors, including the cultural value of the item, its historical uniqueness, the estimated longevity of the medium, the current condition of the item, and the state of playback equipment, among other things. Analog conventions in a digital realm The standard definition of artifactual value, as it has applied to analog or non-digital materials in the twentieth century, is based upon a set of conventions which do not ordinarily apply to digital objects in toto.
The Council on Library and Information Resources (CLIR) has stated that printed texts and other paper-based manuscripts, when considered as objects, are imbued with meaning distilled from a general set of understandings inherent to these conventions: The object is of a fixed and stable composition/form. Authorship and intellectual property are recognizable concepts. Duplication is possible. Fungibility of informational content (or, in other words, the ability to be replaced by another identical object). These conventions are important to consider because they help to describe the physical and even metaphysical relationship between a document's content and its physical manifestation. The underpinnings of this relationship are not identical and do not apply with the same degree of clarity to an immaterial digital realm. The idea of fixity with regard to printed materials, for example, is largely predicated on the notion that an object has been recorded on a relatively stable medium. The physical presence of a print text serves as proof of its authenticity as an object or artifact, as well as its scarcity and uniqueness in relation to other print materials. Variations in the chemical properties and storage conditions of print-based materials, as well as other cultural variables, certainly impact the fixity or stability of print materials, but there is little controversy about determining their fundamental existence or originality. However, uniqueness in the physical, paper-based sense does not translate to a digital realm in which immaterial objects are subject to theoretically infinite levels of reproduction and dissemination. Born-digital objects and digital surrogates may or may not look any different from each other on a server, and alterations can be made without explicit notice to the user. These alterations are normally called migration events: actions taken on the digital object that change the original object's composition. They can enact subtle but fundamental alterations to the original document, thereby compromising its existence as an original object. Furthermore, because the tools used to generate and access digital objects have historically evolved quite rapidly, issues of playback obsolescence, incompatibility, data loss, and broken pathways to information have changed traditional ideas of fixity and stability. Therefore, artifactual value in a digital realm requires a modified set of generalized standards for determining artifactual originality. Michael J. Giarlo and Ronald Jantz, only two of many, have posited a list of methods for establishing digital intrinsic value by way of careful metadata generation and records maintenance. In their report, a digital original possesses three key characteristics that distinguish it from identical copies. These include continuous verification and re-verification of the document's digital signature starting from the date of creation; retaining versions and recordings of all changes to the object in an audit trail; and having the archival master contain the creation date of the digital object. They also reported that originality in digital sources could be verified or produced by the following techniques: The digital object is given a date-time stamp that is automatically inserted into the METS-XML header upon creation. The date-time is inserted into archival metadata. Encapsulation. Digital signatures. The role of digital surrogates Digital surrogates are considered a utility for aiding in the preservation of certain artifacts and increasing access to them.
However, digital surrogates can have different utilities for objects depending on the nature of the original artifact and the condition the artifact is in. In 2001 the Council on Library and Information Resources (CLIR) published a report on the artifact in library collections. The CLIR states that the utility of the digital surrogate can be determined by dividing the original material (artifact) into two categories: artifacts that are rare and those that are not. Each of these categories can be further divided into artifacts that are frequently used and those that are not. Materials that are frequently used and not rare According to the CLIR, "it is not obvious that digital surrogates provide all the functionality, all the information, or all the aesthetic value of originals. Therefore, while it may be sensible to recommend that digital surrogates be used to reduce the cost and increase the availability of library holdings that circulate frequently, the decision to deaccession a physical object in library collections and replace it with a digital surrogate should be based on a careful assessment of the way in which library patrons use the original object or objects of its kind." Materials that are infrequently used and not rare Keeping the original is always the best solution for libraries and especially archives, but in the case of libraries where an artifact is not rare or is used infrequently, a barometer must be developed to help "balance functionality with actual use in order to help decide when digital surrogates that provide most of the functionality of originals are acceptable." Materials that are rare and frequently used A professional in the field of Library and Information Science (LIS) would almost certainly not argue that a digital surrogate could replace a rare object. However, in the case of a rare object that is falling into poor condition due to heavy use, a digital surrogate could be extremely useful in reducing the wear and, in the long run, aiding in preserving the artifact. A digital surrogate is not the ultimate end in preserving artifacts, but it can be a very useful partner in the process. Materials that are rare and infrequently used For materials that are rare and infrequently used, making a digital surrogate is often not viewed as a viable option, because digitization is so expensive. However, if the cost of housing the artifact becomes too burdensome, making a digital surrogate might become a viable option. In some cases a library might even contemplate deaccessioning the artifact once it is digitized. "Here again, libraries need to be aware of the actual or potential rarity of even those materials used infrequently today. Tomorrow, those may very well be the most valuable of artifacts, perhaps for users, or uses, that one could not predict today." Evidential and intrinsic value of digital surrogates Probably the biggest benefit expressed across all categories of digital surrogates is the increased access, and potential increase in use, due to the ease of retrieving the artifact. Even though the digital surrogate might seem like a suitable replacement, the possibility of contextual loss (evidential value) needs to be seriously thought out before the inception of a mass digital surrogate project. The digital surrogate can aid in preservation and helps increase access, but it can lose valuable evidential value. According to Lynn Westney, digital surrogates do not have intrinsic value to make up for potential loss of evidential value.
"The major risk posed by digital surrogates is the loss of evidential value due to the destruction of evidence as to the context and circumstances of their origin. Intrinsic value is lost when the testimony of the original is not completely preserved when converted to a different medium. It is based on features whose testimony is dependent on the form of the original and can therefore not be converted." Furthermore, Westney believes that with the increases in technology, and the availability to the public, it is very easy to manipulate and alter digital information and in turn losing the original authentic information perhaps permanently. It is harder to ensure the integrity of digital materials in this modern age. The problem of integrity must be considered when deciding to make digital surrogates or preserving born digital objects as integrity is key component to artifacts. Establishing value in the digital realm Digital integrity can be classified as having artifactual value; however, as stated in Going Digital, this qualification varies and changes because of the nature of the medium and market. "The original document made available electronically is not necessarily what the viewer receives because of the strong influence of the equipment and software that both the recorder and the viewer happen to be using. Complicating the matter even further, equipment and software are subject to significant change in the short term and over time." A digital born object or a document that has migrated into a digital format can have artifactual value as long as the original software is linked with the document or image. If the software is updated or enhanced, the originality or integrity of the document or image is altered. Digital integrity's constancy fluctuates with the advancement of computer technologies. Certain documents and the partnering software may stay consistent for several years before an update trumps the current edition. This is a common viewpoint within the profession and one that reoccurs in this report. The author's points stated in Going Digital is a proponent of the importance of congruity in software and content; however, in examining Preserving Digital Periodicals, the emphasis is on the text and especially in a specific format. "The core content of most periodicals is text. The text of a periodical or periodical article, however, can be created and maintained in a number of ways." The text and content of a digital document is the focus and importance whereas the presentational platform can be modified or changed completely. Since the platforms of digital records will be altered more than once throughout its course, the importance of the content becomes the artifactual value. Only time will tell as opinions and perceptions of artifactual value for digital content changes within the library profession. Differing notions of artifactual value Integrity of a digital surrogate is hindered even if the original is still available. "Only original documents contain intrinsic value although digital surrogates attempt to capture and convey it." The element of the originality is the key issue. When the original is gone and a replica remains, the document will not contain the same originality. Even though the replica is distorted the replica does represent the original which means value is associated with the document, but at a reduced rate. 
'The simple thing to do would be to provide the best reproduction possible within our means and hope for approval...the original will be distorted by the process. Librarians need to be aware of the implications of enhancing one informational element while obscuring another, whatever the profile of these changes and the technical reasons for them may be.' On this topic, the bulk of scholarly research suggests that digital records do not contain artifactual value; however, some views oppose that particular line of thinking. Sources discussing digital integrity and digital artifactual value differ, but most would agree that digital files can possess artifactual value, even if that time frame is short. Kenneth Thibodeau makes a good point that again echoes the comments made in Going Digital. "Preserving an object means keeping it intact, unchanged. Maintaining a digital object unchanged entails sustaining attributes and methods that bind the object to the technology originally used to create or capture it. Over time this binding will tend to raise greater and greater barriers to using state of the art technology to access to objects." Again, the idea is that digital artifactual value, like software, is ephemeral. The lifespan of an original digital record and its software is fleeting. Because there is money to be made in this field, computing professionals are striving to make updates and additions, and this competition is straining the scholarly consideration of digital work in the library profession. Lorna M. Hughes' comments in the book Digitizing Collections explain that, although a beneficial new medium, born-digital records have not been proven to be a valid artifactual source. Hughes' comments are important to note because, even with the many benefits digitized material brings, there is presently a lack of statistical judgment regarding digital artifactual value. The author's tone and statement seem to leave the opinion open, because viewpoints could very well change in the near future. The literature regarding digital archives, both on the best strategies for implementation and on how the records are then classified, indicates concerns about jumping forward toward the new digital medium. The digital medium is new, and professionals are hesitant because of the possible loss of information and the limited knowledge surrounding the topic. "The danger of provide a partial view of something that seems to be complete is just as alarming as that of decontextualization." From this point of view, it is clearly a demotion and an advisory to be careful. Another source, Networking for Digital Preservation, about a digital library in Germany, follows the same cautious guidelines. The attitude is more positive and suggests a tone of encouragement. Die Deutsche Bibliothek (DDB) is a library that has been digitizing information since 1998. Perhaps because the idea of digitization as a legitimate method of archiving is still new, examining the plan of the DDB sheds light on this notion. "Initially the thinking on future strategies was led by the idea of giving priority to safeguarding the content of a digital publication; however, keeping the 'original look and feel' of a document is now also considered to be an important aspect. Therefore, both approaches will be taken into account when drawing up preservation strategies."
This comment is important because it shares a different perspective on how the profession deals with digital material. As opposed to shutting down the opportunity to enhance the digital medium in the profession, as in the example above, the DDB is taking an active approach to working with digital material. The text of Mark Bide's publication "The ebook Revolution" supports this movement: "If ebooks become a significant medium for the consumption of backlist titles, it can only be a matter of time before market demand drives publishers in the direction of publishing their frontlist in ebook formats. 'Ebook-only' publishing may currently be largely confined to what many dismiss as an enhanced form of vanity publishing in the US; this is unlikely to remain the case for long...libraries cannot afford to ignore developments in ebook publishing." Thus, ebooks relate to artifactual value with a focus on economic value. Because there is a demand within the market for this medium, the pricing and importance increase. Bide's report is a fortuitous one. The marketplace has called for this medium regardless of the significance of its content. It is deemed important because its creation has established a new level within the digital medium that could develop into the norm by the end of this decade. The article "The Need to Archive Blog Content" discusses the advent of legitimate blogs that professional journalists and newspapers cite for genuine information. Because creators of blogs and social media have the ability to edit or delete postings, or the entire content, at will, the author has concerns about such editing: "but will anyone be able to see the actual blog entries? Will these primary source documents be available?" Here is a comment made in the article concerning the longevity of a news source and how it has both potential and danger in being totally unavailable once the information has passed its initial interest level: "Since the "Daily Nightly" blog is not archived anywhere other than the MSNBC site, there is no guarantee that future generations will have access to these posts, considered by traditional media to be insightful and helpful for understanding part of the news process." The author describes possible answers to archiving blogs; however, the points align themselves with artifactual value concerns within this medium. The premise is structured around truth, validity and authenticity, along with proper linking methods to accredited sources, so that this form of communication and information does not go undetected. "Trust is of fundamental importance in digital document management"; this is an important part of establishing value for this type of site. As more and more accredited news sources cite blogs and other traditionally transient sites, the term artifactual value will be linked to blogs and social media. Standards for establishing value Currently, there are no widely held standards for what constitutes artifactual value for digital objects. Nonetheless, professionals working in digital curation and preservation have made several attempts to establish guidelines for defining the authenticity and value of digital objects. Task Force on Archiving of Digital Information As early as 1995, the Task Force on Archiving of Digital Information made efforts to define the attributes of a digital object that distinguish it as a whole and singular work. These attributes include a digital object's content, fixity, reference, provenance, and context.
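Of the attributes just listed, fixity is the one most often operationalized in software. The sketch below, a minimal illustration rather than any institution's actual system, records a cryptographic checksum for a digital object at ingest and re-verifies it later; the file path and the shape of the metadata record are hypothetical:

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path, chunk_size=65536):
    """Stream a file from disk and return its SHA-256 digest as hex."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def ingest(path):
    """Create a minimal fixity record for a digital object at ingest time."""
    return {
        "path": path,  # hypothetical location of the archival master
        "sha256": sha256_of(path),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

def verify(record):
    """Re-compute the checksum and compare it with the stored value; a
    mismatch signals that the object's fixity has been compromised."""
    return sha256_of(record["path"]) == record["sha256"]

# Hypothetical usage; in practice each check would also be appended to
# an audit trail alongside the record itself.
record = ingest("masters/object-0001.tiff")
print(json.dumps(record, indent=2))
print("fixity intact:", verify(record))
```

A real repository would add identifiers, provenance events and periodic scheduled checks, but the record-then-reverify cycle shown here is the core of most fixity regimes.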
National Archives and Records Administration The United States National Archives and Records Administration (NARA) has also suggested several criteria for establishing value for digital objects, specifically for Web 2.0 records, in a 2010 report. The report, while not official NARA policy, suggests that the contextual value of a digital object, such as its functionality, layout, and metadata, contributes to the informational content of the object. According to the NARA report, changing or removing properties of a digital object, such as its appearance or format, may affect the digital object's artifactual value. Council on Library and Information Resources The Council on Library and Information Resources (CLIR) has also contributed guidelines for establishing a digital object's artifactual value. The 2001 CLIR report, "The Evidence in Hand: Report of the Task Force on the Artifact in Library Collections", states that successful preservation of a digital artifact is measured by the degree to which an object's "chief distinctions" are maintained over time. This includes attributes such as functionality, formatting, or whatever else is primarily important for a particular user community or use environment. In a 1998 publication, CLIR indicates that retaining an original digital object may not, in fact, mean retaining the original medium. Given rapid changes in technology, most media for digital objects quickly decay or become obsolete. Rather, the CLIR 1998 report suggests that preserving a digital artifact should mean retaining the "functionality, look, and feel" of the original object. Value in new media art New media art that is born-digital has intrinsic value that may exist only in digital form. Preserving the digital artifactual value of new media art is usually done with preservation strategies such as emulation and data migration. See also References Digital preservation Digital media
Digital artifactual value
[ "Technology" ]
4,215
[ "Multimedia", "Digital media" ]
14,664,896
https://en.wikipedia.org/wiki/Daily%20urban%20system
The daily urban system (DUS) refers to the area around a city in which daily commuting occurs. It is a means of defining an urban region by including the areas from which individuals commute. The daily urban system is a concept first introduced by the American geographer Berry, and then introduced into Europe by the British geographer Hall. Definition The daily urban system (DUS) mainly focuses on cities, where the majority of commuting flows take place. However, some case studies do look at the outskirts and suburban areas around the specific cities being studied. The daily commuting includes both work and leisure, contributing to high density in the daily urban system. That results in slower transportation: the speed of transportation is about 16 km/h in the central parts of cities, though speed may vary from place to place. Urban sprawl is a possible outcome of an expansion of the daily urban system. Therefore, it includes multiple local governments, economies, and demographics. Researchers who study the daily urban system (DUS) largely focus on transportation planning. Some are interested in micro-analysis of an urban area per se; some carry out macro-analysis of a region that is made up of several cities. The difference between an agglomeration of an urban area and the daily urban system is that an agglomeration is a multivariate means of combining townships, counties, and other defined areas. It looks at shared economic relationships and other factors. A daily urban system, on the other hand, only shows how far away the people who commute into a city are living. It shows how much sprawl has occurred, or how people are living far away from where they commute to every day due to differences in conditions between the regions. Cases Olomouc, Czechia A study in Olomouc researched the traveling distance to locations such as schools, hospitals, retail stores, and culture and sports centers of the region, and to the towns in the hinterland of Olomouc. The findings were different for the region and the towns. The towns showed a reduction in traveling distance for commercial services, such as pharmacies and banking services. Groceries and retail shops had also moved to large commercial centers, cutting down traveling distance, as people were able to shop at the malls without having to travel much. However, the study found few changes region-wise, especially in the health care and education sectors. Commutes to schools and health care institutions remained stable. Randstad area, Netherlands Researchers looked into the daily urban systems within the Randstad to determine whether the megalopolis can be considered a network city. Commuting within the Randstad remained loose despite the rising proportion of the population commuting to work over a decade (1992 to 2002). Many residents of the suburban areas were still commuting to work in the urban areas - the large daily urban system - every day after ten years, due to low job growth. The Randstad was not considered a network city because of its negative commuting balance, where a large proportion of residents had to commute to the large daily urban system for work. Paris, France Paris' central urban population is 2,125,246. Its agglomerated population is 9,644,507. That is a big difference: roughly 7 million people live outside of Paris proper, but are easily within the greater Parisian area. Paris's daily urban system has a population of 11,174,743.
That is another 1.5 million people living outside of what can (at the most generous) be called Paris, yet commuting there every day. Roughly 10% of the daily urban system's population lives far enough away that they cannot really be said to live in Paris at all, but they commute there daily. See also 15 minute city Commuter town Commuting zone Exurb Isochrone map Transit desert Travel to work area Urban Employment Area References Human geography Urban planning Urbanization Economic geography Transportation planning
Daily urban system
[ "Engineering", "Environmental_science" ]
808
[ "Urban planning", "Environmental social science", "Human geography", "Architecture" ]
14,664,948
https://en.wikipedia.org/wiki/Replication%20protein%20A
Replication protein A (RPA) is the major protein that binds to single-stranded DNA (ssDNA) in eukaryotic cells. In vitro, RPA shows a much higher affinity for ssDNA than for RNA or double-stranded DNA. RPA is required in replication, recombination and repair processes such as nucleotide excision repair and homologous recombination. It also plays roles in responding to damaged DNA. Structure RPA is a heterotrimer composed of the subunits RPA1 (RPA70; 70 kDa), RPA2 (RPA32; 32 kDa) and RPA3 (RPA14; 14 kDa). The three RPA subunits contain six OB-folds (oligonucleotide/oligosaccharide-binding folds), with DNA-binding domains (DBDs) designated A–F, that bind RPA to single-stranded DNA. DBDs A, B, C and F are located on RPA1, DBD D is located on RPA2, and DBD E is located on RPA3. DBDs C, D, and E make up the trimerization core of the protein, with flexible linker regions connecting them all together. Due to these flexible linker regions, RPA is considered highly flexible, and this supports the dynamic binding that RPA is able to achieve. Because of this dynamic binding, RPA is also capable of adopting different conformations that engage varied numbers of nucleotides. DBDs A, B, C and D are the sites that are involved in ssDNA binding. Protein-protein interactions between RPA and other proteins occur at the N-terminus of RPA1, specifically DBD F, along with the C-terminus of RPA2. Phosphorylation of RPA takes place at the N-terminus of RPA2. RPA shares many features with the CST complex heterotrimer, although RPA has a more uniform 1:1:1 stoichiometry. Functions During DNA replication, RPA prevents single-stranded DNA (ssDNA) from winding back on itself or from forming secondary structures. It also helps protect the ssDNA from being attacked by endonucleases. This keeps DNA unwound for the polymerase to replicate it. RPA also binds to ssDNA during the initial phase of homologous recombination, an important process in DNA repair and prophase I of meiosis. RPA has a key role in the maintenance of the recombination checkpoint during meiosis of the yeast Saccharomyces cerevisiae. RPA appears to act as a sensor of single-strand DNA for the activation of the meiotic DNA damage response. Hypersensitivity to DNA damaging agents can be caused by mutations in the RPA gene. Like its role in DNA replication, this binding keeps ssDNA from annealing to itself so that the resulting nucleoprotein filament can then be bound by Rad51 and its cofactors. RPA also binds to DNA during the nucleotide excision repair process. This binding stabilizes the repair complex during the repair process. A bacterial homolog is called single-strand binding protein (SSB). See also Single-stranded binding protein Replication protein A1 Replication protein A2 Replication protein A3 References Genetics
Replication protein A
[ "Biology" ]
716
[ "Molecular genetics", "Cellular processes", "DNA repair" ]
14,665,450
https://en.wikipedia.org/wiki/Neurotransmitter%20sodium%20symporter
A neurotransmitter sodium symporter (NSS) (TC# 2.A.22) is a type of neurotransmitter transporter that catalyzes the uptake of a variety of neurotransmitters, amino acids, osmolytes and related nitrogenous substances by a solute:Na+ symport mechanism. The NSS family is a member of the APC superfamily. Its constituents have been found in bacteria, archaea and eukaryotes. Function Neurotransmitter transport systems are responsible for the release, re-uptake and recycling of neurotransmitters at synapses. High-affinity transport proteins found in the plasma membrane of presynaptic nerve terminals and glial cells are responsible for the removal, from the extracellular space, of released transmitters, thereby terminating their actions. The majority of the transporters constitute an extensive family of homologous proteins that derive energy from the co-transport of Na+ and Cl−, in order to transport neurotransmitter molecules into the cell against their concentration gradient. Neurotransmitter sodium symporters (NSS) are targets for anti-depressants, psychostimulants and other drugs. Transport reaction The generalized transport reaction for the members of this family is: solute (out) + Na+ (out) → solute (in) + Na+ (in). Structure The family has a common structure of 12 presumed transmembrane helices and includes carriers for gamma-aminobutyric acid (GABA), noradrenaline/adrenaline, dopamine, serotonin, proline, glycine, choline, betaine, taurine and other small molecules. NSS carriers are structurally distinct from the second, more restricted family of plasma membrane transporters, which are responsible for excitatory amino acid transport (see TC# 2.A.23). The latter couple glutamate and aspartate uptake to the cotransport of Na+ and the counter-transport of K+, with no apparent dependence on Cl−. In addition, both of these transporter families are distinct from the vesicular neurotransmitter transporters. Sequence analysis of the Na+/Cl− neurotransmitter superfamily reveals that it can be divided into four subfamilies, these being transporters for monoamines, the amino acids proline and glycine, GABA, and a group of orphan transporters. Tavoulari et al. (2011) described conversion of the Cl−-independent prokaryotic tryptophan transporter TnaT (2.A.22.4.1) to a fully functional Cl−-dependent form by a single point mutation, D268S. Mutations in TnaT-D268S, in wild-type TnaT and in a serotonin transporter (SERT; 2.A.22.1.1) provided direct evidence for the involvement of each of the proposed residues in Cl− coordination. In both SERT and TnaT-D268S, Cl− and Na+ mutually increase each other's potency, consistent with electrostatic interaction through adjacent binding sites. Crystal structures Several crystal structures are available for members of the NSS family, including the dopamine transporter (2.A.22.1.7) and the amino acid (leucine):2 Na+ symporter LeuTAa (2.A.22.4.2). Subfamilies Several characterized proteins are classified within the NSS family and can be found in the Transporter Classification Database.
Betaine transporter Creatine transporter Dopamine neurotransmitter transporter Inebriated neurotransmitter transporter GABA neurotransmitter transporter GAT-1 GABA neurotransmitter transporter GAT-2 GABA neurotransmitter transporter GAT-3 Glycine neurotransmitter transporter, type 1 Noradrenaline neurotransmitter transporter Orphan neurotransmitter transporter Serotonin (5-HT) neurotransmitter transporter, N-terminal Taurine transporter Human proteins containing this domain SLC6A1, SLC6A2, SLC6A3, SLC6A4, SLC6A5, SLC6A6, SLC6A7, SLC6A8, SLC6A9, SLC6A11, SLC6A12, SLC6A13, SLC6A14, SLC6A15, SLC6A16, SLC6A17, SLC6A18, SLC6A19, SLC6A20 See also APC superfamily Membrane transport proteins References External links Transporter Classification Database (tcdb.org) for more detailed description of this family Protein domains Protein families Membrane proteins Transport proteins Integral membrane proteins Transmembrane proteins Transmembrane transporters
Neurotransmitter sodium symporter
[ "Biology" ]
1,117
[ "Protein families", "Protein domains", "Protein classification", "Membrane proteins" ]
14,665,670
https://en.wikipedia.org/wiki/MicrobesOnline
MicrobesOnline is a publicly and freely accessible website that hosts multiple comparative genomic tools for comparing microbial species at the genomic, transcriptomic and functional levels. MicrobesOnline was developed by the Virtual Institute for Microbial Stress and Survival, which is based at the Lawrence Berkeley National Laboratory in Berkeley, California. The site was launched in 2005, with regular updates until 2011. The main aim of MicrobesOnline is to provide an easy-to-use resource that integrates a wealth of data from multiple sources. This integrated platform facilitates studies in comparative genomics, metabolic pathway analysis, genome composition and functional genomics, as well as in protein domain and family data. It also provides tools to search or browse the database by genes, species, sequences, orthologous groups, gene ontology (GO) terms, pathway keywords, etc. Another of its main features is the Gene Cart, which allows users to keep a record of their genes of interest. One of the highlights of the database is the overall navigation accessibility and interconnection between the tools. Background The development of high-throughput methods for genome sequencing has brought about a wealth of data that requires sophisticated bioinformatics tools for analysis and interpretation. Numerous tools now exist to study genomic sequence data and extract information from different perspectives. However, the lack of unified nomenclature and standardised protocols across tools makes direct comparison of their results very difficult. Additionally, the user is forced to constantly switch between websites or software packages, adjusting the format of their data to fit individual requirements. MicrobesOnline was developed with the aim of integrating the capacities of different tools into a unified platform for easy comparison of analysis results, with a focus on prokaryote species and basal eukaryotes. Species included in the database MicrobesOnline hosts genomic, gene expression and fitness data for a wide range of microbial species. Genomic data is available for species covering 1752 bacteria, 94 archaea and 119 eukaryotes; in total, 3707 genomes are included, 2842 of which are marked as complete. Gene expression data is available for 113 species, and fitness data is available for 4 organisms. Functions and Site Architecture MicrobesOnline provides diverse tools for searching, analysing and integrating information related to bacterial genomes, with applications in four major areas: genetic information, functional genomics, comparative genomics and metabolic pathway studies. The homepage of MicrobesOnline is the portal for accessing its functions and includes six main sections: the top navigation elements, a genome selector, examples from the tutorial based on E. coli K-12, a link to the Genome-Linked Application for Metabolic Maps (GLAMM), website highlights and the “about MicrobesOnline” list. As an ongoing project, the authors of MicrobesOnline state that the tools for data analysis and the support for more data types will be expanded. Genetic information Information on microbial genes stored in MicrobesOnline includes sequences (genes, transcripts and proteins), genomic loci, gene annotations and some sequence statistics. This information can be accessed through three features displayed on the homepage of MicrobesOnline: sequence search and advanced search in the top navigation section, and the genome selector. 
For the sequence search tool, MicrobesOnline integrates BLAT, FastHMM and FastBLAST to search protein sequences, and uses MEGABLAST to search nucleotide sequences. It also provides a link to BLAST as an alternative way of searching sequences. The advanced search tool, on the other hand, enables a user to search genetic information by category, custom query, wild-card search and field-specific search, using the gene name, the description, the cluster of orthologous groups (COG) id, the GO term, the KEGG enzyme commission (EC) number, etc. as keywords. The “genomes selected” box of the genome selector lists genomes added from the favourite genome list on the left or found by keyword search. On the right side of the genome selector, four actions can be applied to the selected genomes: the “find genes” interface searches for a gene name in the selected genomes and displays results in the gene list view; the “info” button lists a brief summary of the selected genomes in the Summary View; the “GO” button opens a GO Browser called VertiGo, which tabulates the number of genes under different GO terms; finally, the “pathway” button initiates a pathway browser that illustrates the complete pathways of all organisms in the MicrobesOnline database. In addition, the genome names shown in the summary view lead to a single-genome data view that presents a wealth of information about the selected genome. In the gene list view, the links “G O D H S T B...” lead the user to a locus information tool, where detailed information such as operon & regulon, domains & families, sequences, annotations, etc. is shown. Gene carts An important feature for storing a user's work is the Gene Cart. Many web pages of MicrobesOnline displaying genetic information contain a link to add genes of interest to the session gene cart, which is available to all users. This is a temporary gene cart, and as such it loses its contents when the user closes the web browser. Genes in the session gene cart can be saved to the permanent gene cart, which is only available to registered users after logging in. Functional genomics One goal of MicrobesOnline is to store functional information on microbial genomes. Such information includes gene ontology and microarray-based gene expression profiles, which can be accessed through two interfaces called the GO Browser and the Expression Data Viewer, respectively. The GO Browser provides links to genes organised by gene ontology terms, and the Expression Data Viewer provides access to expression profiles and information on experimental conditions. Gene ontology hierarchy The GO Browser, also known as VertiGo, is used by MicrobesOnline to search and visualise the GO hierarchy, a controlled vocabulary that describes properties of gene products, covering cellular components, molecular functions and biological processes. The Genome Selector of the MicrobesOnline homepage provides a direct way to browse the GO hierarchy of the selected genomes, as well as a list of genes under a selected GO term, which can then be added to the session gene cart for further analysis. Gene expression information The Expression Data Viewer is an interface for searching and inspecting microarray-based gene expression experiments and expression profiles. 
It consists of several components: an experiment browser for finding specific experiments in selected genomes under selected experimental conditions, an expression experiment viewer providing details of each microarray experiment, a gene expression viewer showing a heat map of the expression levels of the selected gene and genes in the same operon, and finally, a profile search tool for searching gene expression profiles. The Expression Data Viewer can be accessed in three ways: via “Browse Functional Data” in the navigator bar, via “Gene Expression Data” on the homepage, and via the “Gene expression” list in the single-genome data view, where expression data are available. The single-genome data view can also show a protein-protein interaction browser that allows the inspection of interaction complexes and the download of expression data (e.g. Escherichia coli str. K-12 substr. MG1655). Furthermore, the user can launch a MultiExperiment Viewer (MeV) from the single-genome data view for analysing and visualising expression data. Comparative genomics MicrobesOnline stores gene homology and phylogeny information for comparative genomic studies, which can be accessed through two interfaces. The first is the Tree Browser, which draws a species tree or a gene tree for the selected gene and its gene neighbourhood. The second is the Orthology Browser, an extension of the Genome Browser, which displays the selected gene within the context of its gene neighbourhood aligned with orthologs in other selected genomes. Both browsers provide options to save a gene in the session gene cart for further analysis. Tree browser The tree browser can be accessed by searching for a gene with the Find Genes tool on the homepage using its VIMSS id (e.g. VIMSS15779). Once the gene context view has been accessed through the “Browse genomes by trees” option, a gene tree and a gene context diagram are displayed. In addition, the “View species tree” option opens a species tree view, which shows a species tree alongside the gene tree. The tree browser also enables users to choose both genes and genomes according to their similarity, and it can reveal horizontal gene transfers among genomes. Orthology browser The Orthology Browser displays orthologs of genomes compared to the query genome, chosen by selecting multiple genomes from the “Select Organism(s) to Display” box. The locus information can be viewed through the “view genes” option, and from there a gene can be added to the session gene cart, or its gene expression data (including the heatmap) can be downloaded. Alternatively, a gene context view appears when browsing genomes by trees. Metabolic pathway information The Pathway Browser lets users navigate the Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway maps, displaying the predicted presence or absence of enzymes for up to two selected genomes. The map of a particular pathway, and a comparison between two kinds of microbes, can be shown in the pathway browser. The enzyme commission number (e.g. 3.1.3.25) provides a link to the gene list view, which shows information on the selected enzyme and allows the user to add genes to the session gene cart. GLAMM is another tool for searching and visualising metabolic pathways in a unified web interface. It helps users to identify or construct novel, transgenic pathways. 
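MicrobesOnline's own search tools are driven through its web interface rather than a documented programmatic API, so the sketch below is only a rough analogue of the sequence searches described earlier: it shows the same submit-query-then-inspect-hits pattern using Biopython's client for NCBI BLAST. The toy query sequence is made up for illustration.

from Bio.Blast import NCBIWWW, NCBIXML

# Submit a remote nucleotide search (blastn against the nt database).
query = "ATGGCGATTGAAGAAGGTAAACTGGTAATCTGGATTAACGGCGATAAAGGC"  # toy sequence
result_handle = NCBIWWW.qblast("blastn", "nt", query)

# Parse the XML result and report the top hits with their E-values.
record = NCBIXML.read(result_handle)
for alignment in record.alignments[:5]:
    print(alignment.title[:60], alignment.hsps[0].expect)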
Bioinformatics MicrobesOnline has integrated numerous tools for analysing sequences, gene expression profiles and protein-protein interactions into an interface called the Bioinformatics Workbench, which is accessed via gene carts. Analyses currently supported include multiple sequence alignments, construction of phylogenetic trees, motif searches and scans, summaries of gene expression profiles and protein-protein interactions. In order to save computational resources, a user is allowed to run two concurrent jobs for at most four hours, and all results are saved temporarily until the session is terminated. Results can be shared with other users or groups via the resource access control tool. Supporting databases MicrobesOnline is built on the integration of data from an array of databases that manage different aspects of its capabilities. A comprehensive list is as follows: Sequence information: Non-redundant protein, gene and transcript sequences and annotations are extracted from RefSeq and UniProt. Taxonomic classification of species and sequences: NCBI Taxonomy is used to classify the species and sequences into phylogenetic groups and to build a phylogenetic tree. Identification of non-annotated proteins from sequences: CRITICA is used to find stretches of DNA sequence that code for proteins; both comparative genomics and annotation-independent methods are used. Identification of non-annotated genes from sequences: MicrobesOnline relies on Glimmer to automatically find genes in bacterial, archaeal and viral sequences. Classification of proteins: The classification of proteins by their conserved domains, families and superfamilies as determined by the PIRSF, Pfam, SMART and SUPERFAMILY repositories is included. Gene orthology information: The orthologous groups of genes across species are based on the COG database, which relies on protein sequence comparison for the detection of homology. Functional information of genes and proteins: The range of functional information provided is contributed by the following: GOA for Gene Ontology annotation of genes into functional categories, KEGG for metabolic, molecular and signaling pathways of genes, and PANTHER for information about molecular and functional pathways in the context of the relationships between protein families and their evolution. TIGRFAMs and Gene3D are referred to for structural information and annotation of proteins. Gene expression data: Both NCBI GEO and the Many Microbe Microarrays Database supply the gene expression data of MicrobesOnline. The datasets compiled by the Many Microbe Microarrays Database have the added advantage of being directly comparable, since only data generated by single-channel Affymetrix microarrays are accepted, and these are subsequently normalised. Detection of CRISPRs: CRISPRs are DNA loci involved in immunity against invasive sequences, in which short direct repeats are separated by spacer sequences. The databases generated by the CRT and PILER-CR algorithms are used to detect CRISPRs. Detection of tRNAs: The tRNAscan-SE database is used as a reference to identify tRNA sequences. Submission of data by users: Users can upload both genomes and expression files to MicrobesOnline and analyse them with the analysis tools offered, with the option of keeping the data private (in the case of unpublished data) or releasing it to the public. 
Microarray data should include a clear identification of the organisms, platforms, treatments and controls, experimental conditions, time points and normalization techniques used, as well as the expression data in either log-ratio or log-level format. Although draft genome sequences are accepted, they must comply with certain guidelines: (1) the assembled genome must have fewer than 100 scaffolds; (2) the FASTA file format should be used, with a unique label per contig; (3) gene predictions should preferably be present (in this case, accepted formats include GenBank, EMBL, tab-delimited and FASTA); (4) the name of the genome and the NCBI taxonomy ID should be provided. Updates MicrobesOnline was updated every 3 to 9 months from 2007 to 2011, with new features as well as new species data added in each release. However, there have been no new release notes since March 2011. Compatibility with other sites MicrobesOnline is compatible with other similar platforms of integrated microbe data, such as IMG and RegTransBase, given that standard gene identifiers are maintained throughout the database. MicrobesOnline in the realm of microbe analysis platforms There have been other efforts to create a unified platform for prokaryote analysis tools; however, most of them focus on one set of analysis types. A few examples of these focused databases include those with an emphasis on metabolic data analysis (Microme), comparative genomics (MBGD and the OMA Browser), regulons and transcription factors (RegPrecise) and comparative functional genomics (Pathline), among many others. However, notable efforts have been made by other teams to create comprehensive platforms that largely overlap with the capabilities of MicrobesOnline; MicroScope and the Integrated Microbial Genomes System (IMG) are examples of popular and recently updated databases. Extension of metagenome analysis: metaMicrobesOnline metaMicrobesOnline was compiled by the same developers as MicrobesOnline and constitutes an extension of MicrobesOnline's capabilities, focusing on the phylogenetic analysis of metagenomes. With a web interface similar to that of MicrobesOnline, the user can toggle between the two sites via the “switch to” link on the homepage. See also Integrated Microbial Genomes System (IMG) BLAST BLAT (bioinformatics) FASTA format Gene ontology Hidden Markov model (HMM) Homology (biology) KEGG Multiple Sequence Alignment External links MicrobesOnline home page IMG home page reference: Nucleic Acids Research, 2006, Vol. 34, Database issue D344-D348 Gene Ontology Consortium (GOC): an international collaboration on developing gene ontology and creating annotations of genetic functions. The Open Biological and Biomedical Ontologies: a community-based platform that integrates databases and tools of gene ontology. Kyoto Encyclopedia of Genes and Genomes (KEGG): a large collection of databases of genes, genomes, biological pathways, chemicals, drugs and diseases. NCBI COGs: resources for phylogenetic studies of proteins encoded in complete genomes. GLAMM: the Genome-Linked Application for Metabolic Maps, an interactive viewer for metabolic pathways and experiments. GLAMM Tutorial: a comprehensive guide to using GLAMM. MEGABLAST Search: NCBI's introduction to the MEGABLAST algorithm. MeV: MultiExperiment Viewer: a versatile tool for analysing microarray data. Microme: a platform that integrates a database, tools and browsers for studying bacterial metabolism. 
MBGD: Microbial Genome Database for Comparative Analysis Virtual Institute for Microbial Stress and Survival (VIMSS): the supporter of an integrated program to study how microbes respond to and survive environmental stresses. References Genome databases
MicrobesOnline
[ "Biology" ]
3,511
[ "Genome projects" ]
14,665,788
https://en.wikipedia.org/wiki/Major%20intrinsic%20proteins
Major intrinsic proteins comprise a large superfamily of transmembrane protein channels that are grouped together on the basis of homology. The MIP superfamily includes three subfamilies: aquaporins, aquaglyceroporins and S-aquaporins. The aquaporins (AQPs) are water selective. The aquaglyceroporins are permeable to water but also to other small uncharged molecules such as glycerol. The third subfamily, with poorly conserved amino acid sequences around the NPA boxes, includes the 'superaquaporins' (S-aquaporins). The phylogeny of insect MIP family channels has been published. Families There are two families that belong to the MIP superfamily. 1.A.8 - The Major Intrinsic Protein (MIP) Family 1.A.16 - The Formate-Nitrite Transporter (FNT) Family The Major Intrinsic Protein Family (TC# 1.A.8) The MIP family is large and diverse, possessing thousands of members that form transmembrane channels. These channel proteins function in transporting water, small carbohydrates (e.g., glycerol), urea, NH3, CO2, H2O2 and ions by energy-independent mechanisms. For example, the glycerol channel FPS1p of Saccharomyces cerevisiae mediates uptake of arsenite and antimonite. Ion permeability appears to occur through a pathway different from that used for water/glycerol transport and may involve a channel at the 4-subunit interface rather than the channels through the subunits. MIP family members are found ubiquitously in bacteria, archaea and eukaryotes. Phylogenetic clustering of the proteins primarily follows the phylum of the organisms of origin, but one or more clusters are observed for each phylogenetic kingdom (plants, animals, yeast, bacteria and archaea). MIPs are classified into five subfamilies in higher plants: plasma membrane (PIPs), tonoplast (TIPs), NOD26-like (NIPs), small basic (SIPs) and unclassified X (XIPs) intrinsic proteins. One of the plant clusters includes only tonoplast (TIP) proteins, while another includes plasma membrane (PIP) proteins. Major Intrinsic Protein The Major Intrinsic Protein (MIP) of the human lens of the eye (Aqp0), after which the MIP family was named, represents about 60% of the protein in the lens cell. In its native form, it is an aquaporin (AQP), but during lens development it becomes proteolytically truncated. The channel, which normally houses 6-9 water molecules, becomes constricted so that only three remain, trapped in a closed conformation. These truncated tetramers form intercellular adhesive junctions (head to head), yielding a crystalline array that mediates lens formation, with cells tightly packed as required to form a clear lens. Lipids crystallize with the protein. Ion channel activity has been shown for aquaporins 0, 1 and 6, Drosophila 'Big Brain' (bib) and plant Nodulin-26. Roles of aquaporins in human cancer have been reviewed, as have their folding pathways. AQPs may act as transmembrane osmosensors in red cells, secretory granules and microorganisms. MIP superfamily proteins and variations of their selectivity filters have been reviewed. Aquaporin The currently known aquaporins cluster loosely together, as do the known glycerol facilitators. MIP family proteins are believed to form aqueous pores that selectively allow passive transport of their solute(s) across the membrane with minimal apparent recognition. Aquaporins selectively transport water, and the aquaglyceroporins among them transport glycerol as well as water, while glycerol facilitators selectively transport glycerol but not water. Some aquaporins can transport NH3 and CO2. 
Glycerol facilitators function as solute-nonspecific channels and may transport glycerol, dihydroxyacetone, propanediol, urea and other small neutral molecules in physiologically important processes. Some members of the family, including the yeast Fps1 protein (TC# 1.A.8.5.1) and tobacco NtTIPa (TC# 1.A.8.10.2), may transport both water and small solutes. Examples A list of nearly 100 currently classified members of the MIP family can be found in the Transporter Classification Database. Some of the MIP family channels include: Mammalian major intrinsic protein (MIP). MIP is the major component of lens fibre gap junctions. Mammalian aquaporins. These proteins form water-specific channels that provide the plasma membranes of red cells, as well as kidney proximal and collecting tubules, with high permeability to water, thereby permitting water to move in the direction of an osmotic gradient. Soybean nodulin-26, a major component of the peribacteroid membrane induced during nodulation in legume roots after Rhizobium infection. Plant tonoplast intrinsic proteins (TIP). There are various isoforms of TIP: alpha (seed), gamma, Rt (root) and Wsi (water-stress induced). These proteins may allow the diffusion of water, amino acids and/or peptides from the tonoplast interior to the cytoplasm. Bacterial glycerol facilitator protein (gene glpF), which facilitates the movement of glycerol non-specifically across the cytoplasmic membrane. Salmonella typhimurium propanediol diffusion facilitator (gene pduF). Yeast FPS1, a glycerol uptake/efflux facilitator protein. Drosophila neurogenic protein 'big brain' (bib). This protein may mediate intercellular communication; it may function by allowing the transport of certain molecule(s), thereby sending a signal for an ectodermal cell to become an epidermoblast instead of a neuroblast. Yeast hypothetical protein YFL054c. A hypothetical protein from the pepX region of Lactococcus lactis. Structure MIP family channels consist of homotetramers (e.g., GlpF of E. coli, TC# 1.A.8.1.1; AqpZ of E. coli, TC# 1.A.8.3.1; and MIP or Aqp0 of Bos taurus, TC# 1.A.8.8.1). Each subunit spans the membrane six times as putative α-helices. The 6-TMS domain is believed to have arisen from a 3-spanner-encoding genetic element by a tandem, intragenic duplication event. The two halves of the proteins are therefore of opposite orientation in the membrane. Well-conserved regions between TMSs 2 and 3 and between TMSs 5 and 6 dip into the membrane, each loop forming a half TMS. A common amino acyl motif in these transporters is the asparagine–proline–alanine (NPA) motif. Aquaporins generally have the NPA motif in both halves, glycerol facilitators generally have an NPA motif in the first half and a DPA motif in the second half, and the super-aquaporins have poorly conserved NPA motifs in both halves. Glycerol Uptake Facilitator The crystal structure of the glycerol facilitator of E. coli (TC# 1.A.8.1.1) was solved at 2.2 Å resolution. Glycerol molecules form a single file within the channel and pass through a narrow selectivity filter. The two conserved D-P-A motifs in the loops between TMSs 2 and 3 and TMSs 5 and 6 form the interface between the two duplicated halves of each subunit. Thus each half of the protein forms 3.5 TMSs surrounding the channel. The structure explains why GlpF is selectively permeable to straight-chain carbohydrates, and why water and ions are largely excluded. 
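Because the NPA/DPA boxes are short, fixed patterns, locating them in a protein sequence is straightforward. A minimal Python sketch; the sequence shown is a made-up fragment for illustration, not a real MIP protein:

import re

# Find NPA-like motifs: NPA in aquaporins, DPA in the second half of
# glycerol facilitators, matched here with the character class [ND].
sequence = "MDKLLISAVGNPAVTLGGAALKKDPAGSLLTR"  # hypothetical fragment
for match in re.finditer(r"[ND]PA", sequence):
    print(match.group(), "motif at residue", match.start() + 1)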
Aquaporin-1 (AQP1) and the bacterial glycerol facilitator GlpF can transport O2, CO2, NH3, glycerol, urea and water to varying degrees. For small solutes passing through AQP1, there is an anti-correlation between permeability and solute hydrophobicity. AQP1 is thus a selective filter for small polar solutes, whereas GlpF is highly permeable to small solutes and less permeable to larger solutes. Aquaporin-1 The structure of aquaporin-1 (Aqp1) from the human red blood cell has been solved by electron crystallography to 3.8 Å resolution. The aqueous pathway is lined with conserved hydrophobic residues that permit rapid water transport. Water selectivity is due to a constriction of the inner pore diameter to about 3 Å over the span of a single residue, superficially similar to that in the glycerol facilitator of E. coli. Several more recently resolved crystal structures are available in the RCSB Protein Data Bank. Aquaporin-Z The structure of AqpZ, a homotetramer (tAqpZ) of four water-conducting channels that facilitate rapid water movement across the plasma membrane of E. coli, has been solved to 3.2 Å resolution. All channel-lining residues in the four monomeric channels are oriented in nearly identical positions except at the narrowest channel constriction, where the side chain of a conserved Arg-189 adopts two distinct orientations. In one of the four monomers, the guanidino group of Arg-189 points toward the periplasmic vestibule, opening up the constriction to accommodate the binding of a water molecule through a tridentate H-bond. In the other three monomers, the Arg-189 guanidino group bends over to form an H-bond with the carbonyl oxygen of Thr-183, occluding the channel. The tAqpZ structure therefore captures two distinct Arg-189 conformations that control water permeation through the channel. Alternating between the two Arg-189 conformations disrupts the continuous flow of water, thus regulating the open probability of the water pore. Further, the difference in Arg-189 displacements is correlated with a strong electron density found between the first transmembrane helices of two open channels, suggesting that the observed Arg-189 conformations are stabilized by asymmetrical subunit interactions in tAqpZ. PIP1 and PIP2 The 3-D structures of the open and closed forms of the plant aquaporins PIP1 and PIP2 have been solved. In the closed conformation, loop D caps the channel from the cytoplasm and thereby occludes the pore. In the open conformation, loop D is displaced up to 16 Å, and this movement opens a hydrophobic gate blocking the channel entrance from the cytoplasm. These results reveal a molecular gating mechanism that appears conserved throughout all plant plasma membrane aquaporins; in plants it regulates water intake/export in response to water availability and cytoplasmic pH during anoxia. Human proteins containing this domain AQP1, AQP2, AQP3, AQP4, AQP5, AQP6, AQP7, AQP8, AQP9, AQP10, MIP See also MIPModDB MIP (gene) Aquaporins Integral membrane protein Transporter Classification Database Protein Superfamily Protein family References Protein domains Protein families Transmembrane proteins
Major intrinsic proteins
[ "Biology" ]
2,562
[ "Protein families", "Protein domains", "Protein classification" ]
14,666,804
https://en.wikipedia.org/wiki/Pentatricopeptide%20repeat
The pentatricopeptide repeat (PPR) is a 35-amino acid sequence motif. Pentatricopeptide-repeat-containing proteins are a family of proteins commonly found in the plant kingdom. They are distinguished by the presence of tandem degenerate PPR motifs and by the relative lack of introns in the genes coding for them. Approximately 450 such proteins have been identified in the Arabidopsis genome, and another 477 in the rice genome. Despite the large size of the protein family, genetic data suggest that there is little or no redundancy of function between the PPR proteins in Arabidopsis. The function of PPR proteins is currently under dispute. Many of those in Arabidopsis have been shown to act, often essentially, in mitochondria and other organelles, and they are possibly involved in RNA editing. However, many trans-acting proteins are required for this editing to occur, and research continues to identify which proteins are needed. The structure of the PPR motif has been resolved. It folds into a helix-turn-helix structure similar to those found in the tetratricopeptide repeat. Several repeats of the motif form a ring around a single-stranded RNA molecule in a sequence-specific way reminiscent of TAL effectors. Examples Human genes encoding proteins containing this repeat include: DENND4A, DENND4B, DENND4C LRPPRC PTCD1, PTCD2, PTCD3 MRPS27 References Amino acid motifs
Pentatricopeptide repeat
[ "Chemistry" ]
309
[ "Molecular biology stubs", "Molecular biology" ]
14,667,005
https://en.wikipedia.org/wiki/Indoor%20residual%20spraying
Indoor residual spraying or IRS is the process of spraying the inside of dwellings with an insecticide to kill mosquitoes that spread malaria. A dilute solution of insecticide is sprayed on the inside walls of certain types of dwellings—those with walls made from porous materials such as mud or wood, but not plaster as in city dwellings. Mosquitoes are killed or repelled by the spray, preventing the transmission of the disease. In 2008, 44 countries employed IRS as a malaria control strategy. Several pesticides have historically been used for IRS, the first and best known being DDT. World Health Organization recommendations The World Health Organization (WHO) recommends IRS as one of three primary means of malaria control, the others being the use of insecticide-treated bednets (ITNs) and prompt treatment of confirmed cases with artemisinin-based combination therapies (ACTs). While the WHO had previously recommended IRS only in areas of sporadic malaria transmission, in 2006 it began recommending IRS in areas of endemic, stable transmission as well. For IRS to be effective: There must be a high percentage of sprayable surfaces within each dwelling. The vector (mosquitoes) must feed or rest indoors. The targeted vectors must be susceptible (i.e. not resistant) to the insecticide being sprayed. The WHO further states that "insecticide susceptibility and vector behaviour; safety for humans and the environment; and efficacy and cost-effectiveness" are factors that must be considered when selecting an insecticide for IRS. Approved insecticides Currently, the WHO has approved 13 different insecticides for IRS. Cost effectiveness and efficacy According to a 2010 Cochrane review, IRS is an effective strategy for reducing malaria incidence. It is about as effective as using insecticide-treated nets (ITNs), though ITNs may be more effective at reducing morbidity in some situations. Few studies have directly compared the cost effectiveness of IRS with that of other methods of malaria control. A study from 2008 assessed the cost effectiveness of seven African anti-malaria campaigns: two IRS campaigns and five insecticide-treated bednet (ITN) distribution campaigns. The authors found that on a cost-per-child-death-averted basis, all were about the same, but the ITN campaigns were slightly more cost-effective. With regard to the cost effectiveness of the various pesticides relative to one another for IRS, historically DDT has been considered the most cost-effective, mainly because it lasts longer than alternatives and dwellings can therefore be sprayed less frequently. But actual studies on cost effectiveness are lacking, and none have taken into account the adverse health and environmental effects of DDT or its alternatives. The United Nations Environment Programme (UNEP) concluded in 2008 that "IRS with DDT remains affordable and effective in many situations but, with regard to the direct costs, the relative advantage of DDT vis-à-vis alternative insecticides seems to be diminishing. The contextual evidence base on cost-effectiveness needs strengthening, and the external costs of DDT use vis-à-vis alternative insecticides require a careful assessment." Residents' opposition to IRS For IRS to be effective, at least 80% of premises (houses and animal shelters) in an area must be sprayed, and if enough residents refuse spraying, the effectiveness of the whole program can be jeopardized. Many residents resist spraying of DDT in particular. 
This is due to a variety of factors, including its smell and the stains it leaves on the walls. While the stain makes it easier to check whether a room has been sprayed, it causes some villagers to resist the spraying of their homes or to resurface the wall, which eliminates the residual insecticidal effect. Pyrethroid insecticides are reportedly more acceptable since they do not leave visible residues on the walls. In addition, DDT is not suitable for this type of spraying on Western-style plastered or painted walls, only on traditional dwellings with unpainted walls made of mud, sticks, dung, thatch, clay or cement. As rural areas of South Africa become more prosperous, there is a shift towards Western-style housing, leaving fewer homes suitable for DDT spraying and necessitating the use of alternative insecticides. Other villagers object to DDT spraying because it does not kill cockroaches or bedbugs; rather, it excites such pests, making them more active, so that the use of another insecticide is often additionally required. Pyrethroids such as deltamethrin and lambda-cyhalothrin, on the other hand, are more acceptable to residents because they kill these nuisance insects as well as mosquitoes. DDT has also been known to kill beneficial insects, such as wasps that kill the caterpillars that would otherwise destroy thatched roofs. As a result, Mozambique's chief of infectious disease control, Avertino Barreto, says that resistance to DDT spraying is "homegrown", not due to "pressure from environmentalists". "They only want us to use DDT on poor, rural black people," he says. "So whoever suggests DDT use, I say, 'Fine, I'll start spraying in your house first.'" Use of DDT As discussed above, DDT is one of several insecticides currently approved by the WHO for use in malaria control. The following table shows recent per-country use of DDT for IRS. Unless otherwise noted, data for 2003–2007 is from the 2008 Stockholm Convention/UNEP monograph on the current status of DDT, 2008 data is from the WHO's World Malaria Report 2009, and 2009 data is from the 2010 report of the Stockholm Convention's DDT expert group. The World Malaria Report 2009 does not report the amount of DDT used in each country, only whether it is used or not. Accordingly, countries are listed as using 0 or "some" DDT. Use statistics for 2009–2011 are available from a report of the Stockholm Convention's DDT expert group References External links MALARIA VECTOR CONTROL AND PERSONAL PROTECTION, WHO Technical Report Series No. 936. 2006. Pesticides Malaria
Indoor residual spraying
[ "Biology", "Environmental_science" ]
1,257
[ "Biocides", "Toxicology", "Pesticides" ]
14,667,031
https://en.wikipedia.org/wiki/HD%20141937
HD 141937 is a star in the southern zodiac constellation of Libra, positioned a couple of degrees to the north of Lambda Librae. It is a yellow-hued star with an apparent visual magnitude of 7.25, which means it is too faint to be seen with the naked eye. This object is located at a distance of 108.9 light years from the Sun based on parallax, but it is drifting closer with a radial velocity of −2.2 km/s. It has an absolute magnitude of 4.71. This is a G-type main-sequence star with a stellar classification of G1V. It is a solar-type star with slightly higher mass and radius than the Sun, and its metallicity is higher than solar. It is an estimated 3.8 billion years old and is spinning with a projected rotational velocity of 6 km/s. The star is radiating 1.2 times the luminosity of the Sun from its photosphere at an effective temperature of 5,890 K. The star has a substellar companion (HD 141937 b), announced in April 2001 by the European Southern Observatory. It has a minimum mass of 9.7 Jupiter masses. In 2020, the inclination of the orbit was measured, revealing its true mass to be 27.4 Jupiter masses, which makes it a brown dwarf. A 653-day orbit places it 1.5 times as far from the star as Earth is from the Sun, with a high orbital eccentricity of 0.41. See also HD 142022 HD 142415 List of extrasolar planets References G-type main-sequence stars Brown dwarfs Libra (constellation) Durchmusterung objects 141937 077740
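The radial-velocity method yields only the minimum mass m sin i; an astrometric measurement of the orbital inclination i converts this into a true mass. A quick consistency check with the figures quoted above (a sketch, not the published calculation):

import math

m_min = 9.7    # minimum mass (m sin i), in Jupiter masses
m_true = 27.4  # true mass after the inclination measurement, in Jupiter masses

# m_true = m_min / sin(i), so the implied inclination is:
inclination = math.degrees(math.asin(m_min / m_true))
print("implied orbital inclination: about", round(inclination), "degrees")  # ~21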
HD 141937
[ "Astronomy" ]
353
[ "Libra (constellation)", "Constellations" ]
14,667,240
https://en.wikipedia.org/wiki/HD%20142415
HD 142415 is a single star in the southern constellation of Norma, positioned next to the southern constellation border with Triangulum Australe and less than a degree to the west of NGC 6025. With an apparent visual magnitude of 7.33, it is too faint to be visible to the naked eye. The distance to this star is 116 light years from the Sun based on parallax, but it is drifting closer with a radial velocity of −12 km/s. It is a candidate member of the NGC 1901 open cluster of stars. This is an ordinary G-type main-sequence star with a stellar classification of G1V. It has been identified as a solar twin by Datson et al. (2012), which means its physical properties are very similar to those of the Sun. It has 10% more mass than the Sun but only a 3% larger radius. The star is estimated to be 1.6 billion years old and is spinning with a projected rotational velocity of 4.2 km/s. It is radiating 1.16 times the luminosity of the Sun from its photosphere at an effective temperature of 5,869 K. The star is currently known to have one planet, designated HD 142415 b. This was detected via the radial velocity method and announced in 2004. The orbital period is just over a year, which made a determination of the orbital eccentricity more difficult due to undersampling over part of the orbit, in combination with jitter. The authors chose to pin the eccentricity value at 0.5, although solutions in the range 0.2–0.8 would be equally plausible. See also HD 141937 HD 142022 List of extrasolar planets References G-type main-sequence stars Planetary systems with one confirmed planet Norma (constellation) Durchmusterung objects 142415 078169
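Distances like the one quoted above follow directly from the trigonometric parallax via d(pc) = 1/p(arcsec). A minimal sketch; the parallax value used below is a hypothetical input chosen to reproduce roughly the quoted distance, not the catalogue figure:

# Convert a parallax to a distance: d [parsec] = 1 / p [arcsec].
parallax_mas = 28.1                  # hypothetical parallax in milliarcseconds
distance_pc = 1.0 / (parallax_mas / 1000.0)
distance_ly = distance_pc * 3.26156  # light years per parsec
print(round(distance_pc, 1), "pc =", round(distance_ly), "ly")  # about 116 ly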
HD 142415
[ "Astronomy" ]
383
[ "Norma (constellation)", "Constellations" ]
14,667,365
https://en.wikipedia.org/wiki/Scotophobin
Scotophobin is a peptide discovered by neuroscientist Georges Ungar in 1965 and reported in 1968. The results of Ungar and his collaborators seemed to show that scotophobin induces fear of the dark in various mammals and fish. It was discovered in the brain of laboratory rats conditioned to have a fear of darkness. Moreover, it was claimed that its injection could transfer fear to unconditioned rats. It was the core argument for the memory transfer hypothesis: that memories are stored molecularly in the brain. Chemical memory transfer was a subject of conferences and books. According to current knowledge, scotophobin cannot have the effect attributed to it. The history of scotophobin is covered in the 2006 book Scotophobin: Darkness at the Dawn of the Search for Memory Molecules, a personal account by Louis Neal Irwin, who participated in this research. Experimental setup In his main work, Ungar made rats choose to enter either a lighted box or a dark box. Rats are normally nocturnal animals, but upon entering the dark box they were given an electric shock, so the rats were quickly trained to enter the lighted box. After prolonged training, an extract was prepared from their brains and injected into mice, which were tested in the same lighted/dark setup. By measuring the time spent by the mice in the boxes, it was found that mice injected with an extract from the trained rats could be distinguished from those injected with extract from the untrained rats. References Further reading B. Setlow, "Georges Ungar and memory transfer", 2009, "This paper reviews Ungar's work on memory transfer (and in particular on the scotophobin molecule), with an analysis of its successes and failures." Obsolete scientific theories History of chemistry Neuroscience of memory Cognitive science Peptides Molecular neuroscience
Scotophobin
[ "Chemistry" ]
381
[ "Biomolecules by chemical classification", "Molecular neuroscience", "Peptides", "Molecular biology" ]
14,667,413
https://en.wikipedia.org/wiki/Glycoside%20hydrolase%20family%201
Glycoside hydrolase family 1 is a family of glycoside hydrolases. Glycoside hydrolases are a widespread group of enzymes that hydrolyse the glycosidic bond between two or more carbohydrates, or between a carbohydrate and a non-carbohydrate moiety. A classification system for glycoside hydrolases, based on sequence similarity, has led to the definition of >100 different families. This classification is available on the CAZy web site, and is also discussed at CAZypedia, an online encyclopedia of carbohydrate-active enzymes. Glycoside hydrolase family 1 CAZY GH_1 comprises enzymes with a number of known activities: beta-glucosidase; beta-galactosidase; 6-phospho-beta-galactosidase; 6-phospho-beta-glucosidase; lactase-phlorizin hydrolase, lactase; beta-mannosidase; and myrosinase. Subfamilies 6-phospho-beta-galactosidase Human proteins containing this domain GBA3; KL; KLB; LCT; LCTL External links GH1 in CAZypedia References Peripheral membrane proteins EC 3.2.1 Glycoside hydrolase families Protein families
Glycoside hydrolase family 1
[ "Biology" ]
314
[ "Protein families", "Protein classification" ]
14,667,665
https://en.wikipedia.org/wiki/HD%20142022
HD 142022 is a binary star system located in the southernmost constellation of Octans. It is too faint to be visible to the naked eye, having an apparent visual magnitude of 7.70. The distance to this system has been determined from parallax measurements, and it is drifting closer to the Sun with a radial velocity of −10 km/s. The primary, designated component A, is an old, Population I G-type star with a stellar classification of G9IV-V, showing a spectrum with mixed traits of a main-sequence and a subgiant star. It is an estimated 7.6 billion years old and is spinning with a projected rotational velocity of 2 km/s. The star has a similar mass and dimensions to the Sun, but a 55% higher metallicity. It is radiating 89% of the luminosity of the Sun from its photosphere at an effective temperature of 5516 K. The magnitude 11.19 companion has the designation LTT 6384 and appears gravitationally bound to the primary. The pair have a measured angular separation corresponding to a wide projected physical separation, and the semimajor axis of their orbit has also been estimated. The secondary is a red dwarf star with a stellar classification of M1V. The primary star has a single known planetary companion, HD 142022 Ab, discovered in 2005. In 2023, the inclination and true mass of HD 142022 Ab were determined via astrometry. See also HD 141937 HD 142415 List of extrasolar planets References G-type main-sequence stars M-type main-sequence stars Planetary systems with one confirmed planet Binary stars Octans CD-83 00202 9536 142022 079242
HD 142022
[ "Astronomy" ]
351
[ "Octans", "Constellations" ]
14,667,733
https://en.wikipedia.org/wiki/HD%20149143
HD 149143, also called Rosalíadecastro, is a star with a close-orbiting exoplanet in the constellation Ophiuchus. Its apparent visual magnitude is 7.89 (a binocular object) and its absolute magnitude is 3.87. The system is located at a distance of 239 light years from the Sun based on parallax measurements, and it is drifting further away with a radial velocity of 12 km/s. On December 17, 2019, as part of the IAU's NameExoWorlds project, the star HD 149143 was given the name Rosalíadecastro in honour of the Galician poet Rosalía de Castro, a significant figure of Galician culture and a prominent writer in Galician, and also in Spanish, whose work often referenced the night and celestial objects. The exoplanet companion was named Riosar in honour of the Sar River in Galicia, which features in much of Rosalía de Castro's literary work. This is a slightly evolved star with a stellar classification of G0 that is overluminous for a high-metallicity G-type dwarf. It has 1.1 times the mass of the Sun and 1.3 times the Sun's radius. The star has an estimated age of around 7.6 billion years and is spinning with a projected rotational velocity of 3.9 km/s. It is radiating 2.3 times the luminosity of the Sun from its photosphere at an effective temperature of 6,213 K. Planetary system HD 149143 b, the planet that orbits HD 149143, was discovered by the N2K Consortium during their search for short-period gas giant planets around metal-rich stars. The planet was independently discovered by the Elodie metallicity-biased search for transiting hot Jupiters. See also HD 109749 HD 150706 List of proper names of stars Lists of exoplanets References G-type main-sequence stars Planetary systems with one confirmed planet Planetary transit variables Ophiuchus Durchmusterung objects 149143 081022 Rosalíadecastro
HD 149143
[ "Astronomy" ]
441
[ "Ophiuchus", "Constellations" ]
14,667,812
https://en.wikipedia.org/wiki/HD%20178911
HD 178911 B: metallicity [Fe/H] = 0.23; projected rotational velocity v sin i = 4.6 km/s. HD 178911 is a triple star system with an exoplanetary companion in the northern constellation of Lyra. With a combined apparent visual magnitude of 6.70, it is a challenge to view with the naked eye. The system is located at a distance of approximately 161 light years from the Sun based on parallax measurements, but is drifting closer with a radial velocity of −38 km/s. Stellar system A companion star, designated component B, was first reported by F. G. W. Struve in 1823. As of 2019, the pair have a measured angular separation along a position angle of 263°. Component B shares a common motion through space with the primary, and thus the two form a wide binary. This secondary is a magnitude 7.88 G-type main-sequence star with a stellar classification of G5V. The physical properties of this star are similar to those of the Sun, although it has a higher metallicity. In 1985, the primary was determined to be a spectroscopic binary pair using the CHARA speckle interferometry program. Designated components Aa and Ab, these have a measured orbital period and an eccentricity (ovalness) of 0.6. They are of magnitude 6.89 and 8.96. Based on a combined class of G5V for the pair, they have derived main-sequence stellar classifications of G1V and K1V, respectively. C. D. Farrington and associates (2014) found dynamic masses for the components of 0.80 and 0.62 solar masses, respectively. However, based on the classes, the expected masses should be around 1.0 and 0.8 solar masses. Manuel Andrade (2019) derived higher dynamic masses of 1.20 and 0.94 solar masses. An additional companion, HD 178911 C, is a chance optical alignment and is not part of the system. Planetary system In 2001 an extrasolar planet was discovered in orbit around HD 178911 B. See also List of extrasolar planets References External links G-type main-sequence stars K-type main-sequence stars Triple star systems Planetary systems with one confirmed planet Lyra Durchmusterung objects 178911 094076 7272
HD 178911
[ "Astronomy" ]
545
[ "Lyra", "Constellations" ]
14,667,955
https://en.wikipedia.org/wiki/HD%20183263
HD 183263 is a star with a pair of orbiting exoplanets located in the equatorial constellation of Aquila. It has an apparent visual magnitude of 7.86, which is too faint to be visible to the naked eye. The distance to this system is 178 light years based on parallax measurements, but it is drifting closer with a heliocentric radial velocity of −50 km/s. Judging from its motion through space, this star is predicted to make a close approach to the Sun in around 952,000 years, at which point it will be faintly visible to the naked eye. This is an older star with a spectrum matching a stellar classification of G2 IV, indicating it is about to leave the main sequence after exhausting the supply of hydrogen at its core. It will then evolve into a red giant before ending as a white dwarf. This star has an absolute magnitude (its apparent magnitude at 10 pc) of 4.16, compared to the Sun's 4.83, which indicates the star is more luminous than the Sun, and therefore hotter by about 100 K. At the age of 8.1 billion years, the magnetic activity in its chromosphere is quiet, and it is spinning slowly with a rotation period of 32 days. Planetary system The star has two known super-jovian exoplanets in orbit around it. Exoplanet b was discovered in 2005, while exoplanet c was discovered in 2008. A 2022 study estimated the true mass of HD 183263 c via astrometry, although the estimate is poorly constrained. See also List of multiplanetary systems List of exoplanetary host stars References External links G-type subgiants Planetary systems with two confirmed planets Aquila (constellation) Durchmusterung objects 183263 095740
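The luminosity comparison in the paragraph above follows from the difference in absolute magnitudes: L/L_sun = 10^(0.4 × (M_sun − M_star)). A quick check with the quoted values:

# Luminosity ratio from absolute magnitudes.
M_star = 4.16  # HD 183263
M_sun = 4.83   # the Sun

ratio = 10 ** (0.4 * (M_sun - M_star))
print("L/L_sun =", round(ratio, 2))  # about 1.85, i.e. nearly twice as luminous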
HD 183263
[ "Astronomy" ]
373
[ "Aquila (constellation)", "Constellations" ]
14,668,187
https://en.wikipedia.org/wiki/SecY%20protein
The SecY protein is the main transmembrane subunit of the bacterial Sec export pathway and of a protein-secreting ATPase complex, also known as the SecYEG translocon. Homologs of the SecYEG complex are found in eukaryotes and in archaea, where the subunit is known as Sec61α. Secretion of some proteins carrying a signal peptide across the inner membrane in Gram-negative bacteria occurs via the preprotein translocase pathway. Proteins are produced in the cytoplasm as precursors and require a chaperone subunit to direct them to the translocase component within the membrane. From there, the mature proteins are either targeted to the outer membrane or remain as periplasmic proteins. The translocase protein subunits are encoded on the bacterial chromosome. The translocase pathway comprises 7 proteins: a chaperone (SecB), an ATPase (SecA), an integral membrane complex (SecY, SecE and SecG), and two additional membrane proteins that promote the release of the mature peptide into the periplasm (SecD and SecF). The chaperone protein SecB is a highly acidic homotetrameric protein that exists as a "dimer of dimers" in the bacterial cytoplasm. SecB maintains preproteins in an unfolded state after translation and targets them to the peripheral membrane protein ATPase SecA for secretion. Cytoplasmic regions 2 and 3, and TM domains 1, 2, 4, 5, 7 and 10, are well conserved: the conserved cytoplasmic regions are believed to interact with cytoplasmic secretion factors, while the TM domains may participate in protein export. SecY is also encoded in the chloroplast genome of some algae, where it could be involved in a prokaryotic-like protein export system across the two membranes of the chloroplast endoplasmic reticulum (CER), which is present in chromophyte and cryptophyte algae. Subfamilies SecY-related translocase Human proteins containing this domain SEC61A1; SEC61A2 See also Sec61 Translocon Protein targeting Bacterial secretion system References Further reading Protein targeting Protein domains Protein families Transmembrane proteins Secretion
SecY protein
[ "Biology" ]
495
[ "Protein targeting", "Protein classification", "Protein domains", "Cellular processes", "Protein families" ]
14,668,195
https://en.wikipedia.org/wiki/Gel%20point
In polymer chemistry, the gel point is an abrupt change in the viscosity of a solution containing polymerizable components. At the gel point, a solution undergoes gelation, as reflected in a loss of fluidity. After the monomer/polymer solution has passed the gel point, internal stress builds up in the gel phase, which can lead to volume shrinkage. Gelation is characteristic of polymerizations that include crosslinkers that can form 2- or 3-dimensional networks. For example, the condensation of a dicarboxylic acid and a triol will give rise to a gel, whereas the same dicarboxylic acid and a diol will not. The gel is often a small percentage of the mixture, even though it greatly influences the properties of the bulk. Mathematical definition An infinite polymer network appears at the gel point. Assuming that it is possible to measure the extent of reaction, p, defined as the fraction of monomers that appear in cross-links, the gel point can be determined. For the crosslinking of chains with degree of polymerization N, the critical extent of reaction at which a gel forms is given by: p_c ≈ 1/N. For example, a polymer with N ≈ 200 is able to reach the gel point with only 0.5% of the monomers reacting. This shows the ease with which polymers are able to form infinite networks. For a condensation of two monomers of functionalities f and g mixed with a stoichiometric ratio r of their reactive groups, the critical extent of reaction for gelation can be determined from these properties of the monomer mixture via the Flory–Stockmayer expression: p_c = 1/[r(f − 1)(g − 1)]^(1/2). See also Pour point Cold filter plugging point Petroleum References Further reading Polymer physics Chemical properties
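A numerical check of the two expressions above, as a minimal sketch; the second calculation assumes the standard Flory–Stockmayer form with the triol/dicarboxylic-acid example from the introduction:

import math

# Crosslinking of long chains: p_c ~ 1/N.
N = 200
print("chain crosslinking: p_c =", 1 / N)  # 0.005, i.e. 0.5% of monomers

# Flory-Stockmayer: p_c = 1/sqrt(r*(f-1)*(g-1)) for monomers of
# functionality f and g mixed with stoichiometric ratio r.
f, g, r = 3, 2, 1.0  # e.g. a triol with a dicarboxylic acid, stoichiometric
p_c = 1 / math.sqrt(r * (f - 1) * (g - 1))
print("Flory-Stockmayer:  p_c =", round(p_c, 3))  # about 0.707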
Gel point
[ "Chemistry", "Materials_science" ]
310
[ "Polymer physics", "Polymer chemistry", "nan" ]
14,668,445
https://en.wikipedia.org/wiki/Animal%20testing%20regulations
Animal testing regulations are guidelines that permit and control the use of non-human animals for scientific experimentation. They vary greatly around the world, but most governments aim to control the number of times individual animals may be used; the overall numbers used; and the degree of pain that may be inflicted without anesthetic. Europe Experiments on vertebrate animals in the European Union have been subject, since January 1, 2013, to Directive 2010/63/EU on the protection of animals used for scientific purposes, which was finalized in November 2010 and updated and replaced Directive 86/609/EEC on the protection of animals used for experimental and other scientific purposes, adopted in 1986. Member countries varied considerably in the manner in which they chose to implement Directive 86/609/EEC: compare, for example, legislation from Sweden, the Netherlands and Germany. With a 2004 amendment to the Cosmetics Directive, animal testing of cosmetic products was forbidden in the EU, and animal testing of cosmetic ingredients has been prohibited since March 2009. The amendment also prohibited, from 11 March 2009, the marketing of cosmetic products containing ingredients that have been tested on animals. The amendment does not prohibit companies from using animal testing to fulfill regulatory requirements in other countries. France In France, legislation (principally the decree of October 19, 1980) requires an institutional and a project license before testing on vertebrates is carried out. An institution must submit details of its facilities and the reason for the experiments, after which a five-year license may be granted following an inspection of the premises. The project licensee must be trained and educated to an appropriate level. Personal licenses are not required for individuals working under the supervision of a project license holder. These regulations do not apply to research using invertebrates. United Kingdom The types of institutions conducting animal research in the UK in 2015 were: universities (47.7%); commercial organizations (25.1%); government departments and other public bodies (13.8%); non-profit organizations (12.4%); National Health Service hospitals (0.7%); and public health laboratories (0.2%). The Animals (Scientific Procedures) Act 1986 requires experiments to be regulated by three licences: a project licence for the scientist in charge of the project, which details the numbers and types of animals to be used, the experiments to be performed and their purpose; a certificate for the institution to ensure it has adequate facilities and staff; and a personal licence for each scientist or technician who carries out any procedure. In deciding whether to grant a licence, the Home Office refers to the Act's cost-benefit analysis, which is defined as "the likely adverse effects on the animals concerned against the benefit likely to accrue as a result of the programme to be specified in the licence" (Section 5(4)). A licence should not be granted if there is a "reasonably practicable method not entailing the use of protected animals" (Section 5(5) (a)). The experiments must use "the minimum number of animals, involve animals with the lowest degree of neurophysiological sensitivity, cause the least pain, suffering, distress, or lasting harm, and [be the] most likely to produce satisfactory results" (Section 5(5) (b)). 
During a 2002 House of Lords select committee inquiry into animal testing in the UK, witnesses stated that the UK has the tightest regulatory system in the world, and is the only country to require a cost-benefit assessment of every licence application. There are 29 qualified inspectors covering 230 establishments, which are visited on average 11–12 times a year in both announced and unannounced inspections. As a result of the transposition of Directive 2010/63/EU, changes were made to the way research is reviewed and approved in the UK. All licensed establishments must have an Animal Welfare and Ethical Review Body (commonly referred to as an AWERB) which considers and monitors project applications for the site. The assessment of severity has also changed under the amendments to the Animals (Scientific Procedures) Act (1986). Working examples of severity bands are provided by the European Commission Expert Working Group. The assessment of severity must also be conducted retrospectively, which results in severity being assigned on the basis of the actual suffering experienced by the animals, rather than what is presumed during study design. This in turn leads to more accurate prospective assignment of severity bands. Germany The German Animal Welfare Act, 1972, is designed to enforce the utilitarian principle that there must be good reason for one to cause an animal harm, and identifies that it is the responsibility of human beings to protect the lives and well-being of their fellow creatures. The Animal Welfare Act is supplemented by the Animal Protection Laboratory Animal Regulations, 2013, and the European Directive 2010/63/EU. All animal research facilities must be inspected at least every three years, with facilities conducting primate research being inspected at least once per year. Asia Japan Animal experimentation in Japan is regulated by several documents - the Law for the Humane Treatment and Management of Animals, 2005, the Standards Relating to the Care and Management, and Alleviation of Pain and Distress of Experimental Animals, 2006, and guidelines by various ministries and organizations. The law states that causing distress to animals is not allowed without due cause (Article 2), and that when conducting animal experiments, methods that reduce the pain and distress of the animals as much as possible shall be used. It also states that consideration shall be given to the appropriate use of animals, for example by reducing the number of animals used when possible (Article 41). The Standards state that usage of animals for scientific purposes is necessary. They include regulations for the refinement of experiments, in order to reduce the pain and distress of the experimental animals, and consideration for replacing animal experiments with alternatives or reducing the number of animals used. MEXT (the Ministry of Education, Culture, Sports, Science and Technology) and MHLW (the Ministry of Health, Labor and Welfare) established guidelines named "Basic policies on animal experimentation" as quasi-regulations on June 1, 2006. The SCJ (Science Council of Japan) formulated more detailed guidelines, also in 2006, to be used when institutions formulate their local regulations. 
The SCJ's guidelines state that the director of each research institution bears the responsibility for animal experiments conducted at their facilities, that animal experiments are indispensable, and that each institution should formulate voluntary in-house regulations for the proper scientific conduct of animal experiments based on the guidelines. They also state that each institution should form an in-house review committee in order to inspect the experiments at that institution, from the standpoint of scientific rationale, with consideration to the Law and Standards mentioned above. However, the ALIVE Foundation conducted a survey of Japanese universities and research facilities in 2011, and concluded that: "There appears to be little consciousness about the use of animals in experiments. Although there is an official guideline that should be followed, national universities are not complying with the guideline (in particular, in choosing particular kinds of animal, self-assessment and care/management of animals)." United States In the United States, animal testing on vertebrates is primarily regulated by the Animal Welfare Act of 1966 (AWA) and the Animal Welfare Regulations, which are enforced by the Animal Care division of the Animal and Plant Health Inspection Service (APHIS) of the United States Department of Agriculture (USDA). The AWA contains provisions to ensure that individuals of covered species used in research receive a certain standard of care and treatment, provided that the standard of care and treatment does not interfere with "the design, outlines, or guidelines of actual research or experimentation." Currently, the AWA only protects mammals. In 2002, the Farm Security Act of 2002, the fifth amendment to the AWA, specifically excluded purpose-bred birds, rats, and mice (as opposed to wild-captured mice, rats, and birds) from its regulations. Even though most animals used in research are mice, rats, and fish, over a million other research animals per year are covered by the Animal Welfare Act and Animal Welfare Regulations. The AWA requires each institution using covered species to maintain an Institutional Animal Care and Use Committee (IACUC), which is responsible for local compliance with the Act. In addition, the IACUC reviews and approves each animal use protocol, a written description, submitted by the researchers, of all procedures to be done with laboratory animals. Researchers must consult with a veterinarian for each procedure that may cause more than momentary pain or distress to the animals. In addition, a written justification for these procedures, as well as documentation of a search for alternatives to these procedures, must be included with the protocol. The IACUC must review and approve these protocols at least annually. The IACUC also inspects all the animal facilities, including satellite facilities, every 6 months. As a part of this semi-annual inspection the committee also reviews the entire animal care and use program, and submits a "semi-annual report" to the Institutional Official. The Guide (enforced by OLAW) also has requirements for IACUC responsibilities and program reviews. Animal care and use in research in the United States are largely controlled by Institutional Animal Care and Use Committees. The following information is based on IACUC activity in the United States more than 15 years ago. In addition, the purpose of an IACUC is not to provide "consistent" oversight across studies or institutions. 
Each institution has its own culture, priorities, and interpretations. A study conducted in 2001 by Psychology Professor Scott Plous of Wesleyan University that evaluated the reliability of IACUCs found little consistency between decisions made by IACUCs at different institutions; a Wesleyan University press release summarized part of the findings. In response to the Plous study, a rebuttal letter to Science was written by animal researchers, animal care staff, and members of professional research societies. Institutions are also subject to unannounced annual inspections from USDA APHIS veterinarian inspectors. There are about 70 inspectors monitoring around 1100 research institutions. The inspectors also conduct pre-licensing checks for sites that do not engage in animal research or transportation, of which more than 4000 exist (e.g. dog kennels). Another regulatory instrument is the Office of Laboratory Animal Welfare (OLAW), which is an office within the US National Institutes of Health. OLAW oversees all animal studies funded by the Public Health Service (including NIH). The Health Research Extension Act of 1985 directed the NIH to write the Public Health Service (PHS) Policy on Humane Care and Use of Laboratory Animals. This Policy applies to any individual scientist or institution in receipt of federal funds and requires each institution to have an IACUC, among other stipulations. OLAW enforces the recommendations in the Guide for the Care and Use of Laboratory Animals (Eighth Edition), published by the Institute for Laboratory Animal Research, which covers all vertebrate species, including rodents, birds, fish, amphibians, and reptiles. This means that IACUCs oversee the use of all vertebrate species in research at facilities receiving federal funds, even if the species are not covered by the AWA. OLAW does not carry out scheduled inspections, but requires that "As a condition of receipt of PHS support for research involving laboratory animals, awardee institutions must provide a written Animal Welfare Assurance of Compliance (Assurance) to OLAW describing the means they will employ to comply with the PHS Policy." OLAW conducts inspections only when there is a suspected or alleged violation that cannot be resolved through written correspondence. Accreditation from the Association for Assessment and Accreditation of Laboratory Animal Care International (AAALAC), a non-governmental, nonprofit association, is regarded by the industry as the "gold standard" of accreditation. Accreditation is maintained through a prearranged AAALAC site visit and program evaluation hosted by the member institution once every three years. Accreditation is intended to ensure compliance with the standards in the Guide for the Care and Use of Laboratory Animals, as well as any other national or local laws on animal welfare. Canada The Canadian Council on Animal Care (CCAC) is set up to act in the interests of the people of Canada to ensure, through programs of education, assessment and guidelines development, that the use of animals, where necessary, for research, teaching and testing employs optimal physical and psychological care according to acceptable scientific standards, and to promote an increased level of knowledge, awareness and sensitivity to relevant ethical principles. 
At the inaugural meeting on January 30, 1968, the CCAC adopted the following statement of objective: "to develop guiding principles for the care of experimental animals in Canada, and to work for their effective application". The federal government does not have jurisdiction to pass laws that involve experiments on animals; the provinces have jurisdiction in that area. The federal government, however, is involved in three areas: the criminal law power, the health power, and the spending power. The Criminal Code of Canada Sections 446 and 447 of the Criminal Code protect animals from cruelty, abuse and neglect. This section of the Criminal Code has been under review for several years. The Health of Animals Act The Health of Animals Act (1990) and its regulations are aimed primarily at protecting Canadian livestock from a variety of infectious diseases that would threaten both the health of animals and people, and Canadian trade in livestock with other countries. This act is used both to deal with named disease outbreaks in Canada, and to prevent the entry of unacceptable diseases that do not exist in Canada. The Spending Power The other mechanism through which the federal government has lent its support to the humane treatment of animals is not, strictly speaking, legislative in nature, but in many respects it is one of the most powerful instruments available to the federal government for setting national standards. The federal government's power to provide for grants subject to conditions imposed on the recipients, be they provincial governments or individual or corporate recipients, may take a variety of different forms. One form is that of the conditional federal grant or contract. This manifestation of the federal power is what currently underpins the imposition of CCAC standards on facilities receiving funding from the Canadian Institutes of Health Research and the Natural Sciences and Engineering Research Council. Where the government itself awards a contract to an academic or non-academic institution, clause A9015C of the Public Works Standard Acquisition Clauses and Conditions Manual imposes conditions related to the care and use of experimental animals in public works and government services. All of the provinces in Canada have passed laws that pertain to animal welfare, but only certain provinces have enacted legislation specific to animals used in research: Alberta, Manitoba, Saskatchewan, Ontario, New Brunswick, Nova Scotia, and Prince Edward Island. Alberta In 2006, the Alberta Animal Protection Act was revised and proclaimed. Previously in Alberta, only academic institutions were subject to provincial regulations referencing CCAC standards, as these standards were referenced exclusively in the Alberta Universities Act. In 2005, the Universities Act and two other laws were examined by the Alberta Agriculture, Food and Rural Development Ministry (AAFRD), in hopes of combining them and updating their content. Article 2(1) of the Animal Protection Regulations was revised by the CCAC and AAFRD and now states that "a person who owns or has custody, care or control of an animal for research activities must comply with the following Canadian Council on Animal Care documents", and lists all 22 CCAC standards, including the CCAC Guide to the Care and Use of Experimental Animals and the various guidelines and policies published by the CCAC. 
Prince Edward Island In Prince Edward Island, the Animal Protection Regulations made under the Animal Health and Protection Act state that the rules controlling the care of animals used for medical or scientific research can be found in Volumes 1 and 2 of the Guide to the Care and Use of Experimental Animals published by the CCAC. Manitoba In the province of Manitoba, according to the Animal Care Act, a person is not allowed to cause suffering to an animal. The use of animals for research and teaching is acceptable as long as it follows the rules set out in the Act. All institutions that use animals for research and teaching purposes must comply with the system put in place by the CCAC; failing that, any harm done to an animal in a research or teaching program will be regarded as an offense under the Act. Ontario All of the research facilities in Ontario must be registered and licensed under the Animals for Research Act. Among the provisions of the Animals for Research Act, one should note the duty to establish an animal care committee, the responsibilities and powers of which are similar to those required under the CCAC system, and the requirement for any operator of a research facility to submit to the person designated by the Minister of Agriculture, Food and Rural Affairs a report respecting the animals used for research in the facility. Regulation 24 governs the housing and care of the animals. Regulation 25 controls the conditions for transportation of the animals that are used or going to be used by a research facility. Australia In Australia, Animal Ethics Committees (AECs) determine whether a proposed use of animals is valid. AECs must follow the Code in order to ensure the wellbeing of the animals used for research. The Code emphasizes the responsibilities of investigators, teachers and institutions using animals to: ensure that the use of animals is justified, taking into consideration the scientific or educational benefits and the potential effects on the welfare of the animals; ensure that the welfare of animals is always considered; promote the development and use of techniques that replace the use of animals in scientific and teaching activities; minimise the number of animals used in projects; and refine methods and procedures to avoid pain or distress in animals used in scientific and teaching activities. Scientific and teaching activities using animals may be performed only when they are essential: to obtain and establish significant information relevant to the understanding of humans and/or animals; for the maintenance and improvement of human and/or animal health and welfare; for the improvement of animal management or production; to obtain and establish significant information relevant to the understanding, maintenance or improvement of the natural environment; or for the achievement of educational objectives. Researchers can conduct their studies only once the AEC has approved the validity of the use of the animals and determined that the educational or scientific gain outweighs the possible effects on the welfare of the animals. The researchers must submit a written proposal to an AEC stating what is to be accomplished, a justification for the study, and how the ethics and wellbeing of the animals used will be addressed, reflecting the 3Rs. 
New Zealand New Zealand's Animal Welfare Act 1999 requires owners and people in charge of animals to ensure the physical, health and behavioural needs of animals are met, and that pain and distress are alleviated. In New Zealand, as in many countries, laboratory animals (mainly rodents) and farm animals (mainly cattle and sheep) are used in research, testing and teaching – commonly referred to as RTT. Animal use in RTT is strictly controlled under the Animal Welfare Act 1999 and organisations using animals must follow an approved code of ethical conduct. This sets out the policies and procedures that need to be adopted and followed by the organisation and its animal ethics committee. Every project must be approved and monitored by an animal ethics committee. These committees must have three external members: a nominee of an approved animal welfare organisation (such as the SPCA), a nominee of the New Zealand Veterinary Association, and a lay person to represent the public interest (nominated by a local government body). Code holders and their animal ethics committees are independently reviewed (by MPI-accredited reviewers) at least once every five years. All code holders have to submit annual animal use statistics on the number of animals used in research, testing or teaching, and its impact on them, from little or none to severe. The Ministry for Primary Industries (MPI) administers the Act and leads animal welfare policy and practice in New Zealand. The National Animal Ethics Advisory Committee (NAEAC) was established under the Animal Welfare Act to provide independent advice to the Minister for Primary Industries about: ethical and animal welfare issues relating to the use of animals in research, testing and teaching; recommendations on restrictions on the use of non-human hominids; advice to Animal Ethics Committees; and the development and review of codes of ethical conduct. Brazil The federal law for the scientific use of animals was passed in 2008. The law established the National Council for the Control of Animal Experimentation (CONCEA) and required that institutions create an ethics committee on the use of animals. In 2009, Decree 6899/2009 defined CONCEA as the governing and advisory body, under the Ministry of Science and Technology, to authorize accreditation to registered institutions and to license those institutions to use animals in research. The same decree also states that an electronic database be developed to allow breeding and research facilities to register in order to apply for CONCEA accreditation. Brazil also reinforces the 3Rs. See also Animal–industrial complex Animal rights Cruelty to animals Federation of European Laboratory Animal Science Associations Notes Animal rights Animal testing Regulation Statutory law
Animal testing regulations
[ "Chemistry" ]
4,313
[ "Animal testing" ]
14,668,747
https://en.wikipedia.org/wiki/Chrysler%20Air-Raid%20Siren
The Chrysler Air Raid Siren is an outdoor warning siren produced during the Cold War era that has an output of 138 dB(C) at 100 feet. It was known as the Chrysler Bell Victory Siren during its first generation; the Cold War era itself spanned the years between the end of World War II and the fall of the Berlin Wall. It is reputed to be the loudest air raid siren ever produced in the US. History Built during the Cold War era from 1952 to 1957 by Chrysler, its power plant was a newly designed FirePower Hemi V8 engine. Each siren was built atop a quarter section of a Dodge truck chassis rail and carried six horns. The siren has an output of 138 dB(C) (30,000 watts) and could be heard from a great distance. In 1952, the cost of a Chrysler Air Raid Siren was $5,500 (equivalent to $65,076 as of May 2024). The United States government helped buy sirens for selected state and county law enforcement agencies. In Los Angeles County, six were placed around key locations of populated areas, and another ten were sold to other government agencies in the state of California. These "Big Red Whistles" (as they were nicknamed) only saw testing use. Some were located so remotely that they deteriorated due to lack of maintenance. The main purpose of the siren was to warn the public in the event of a nuclear attack by the Soviet Union during the Cold War. The operator's job was to start the engine and bring it up to operating speed, then to pull and release the transmission handle to start the wailing signal generation. The Chrysler Air Raid Siren produced the loudest sound ever achieved by an air raid siren. Today Some sirens are still located above buildings and watchtowers. Many are rusted, and in some cases, the salvage value is less than the cost to remove them. A majority have been moved to museums, and some have been restored to fully functioning condition. In Seattle's Phinney Ridge neighborhood, a decommissioned air-raid siren remains standing as a local landmark. Since 2014, the air raid tower has been decorated as a Holiday GloCone annually from Thanksgiving to New Year's. Cities with Chrysler Sirens References Civil defense Warning systems Chrysler Aerophones Sirens
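As a rough illustration of what the quoted 138 dB(C) at 100 feet implies at longer ranges, the short sketch below applies the ideal free-field inverse-square law (a drop of about 6 dB per doubling of distance). This is a back-of-the-envelope estimate only: it ignores atmospheric absorption, terrain, and weather, which strongly affect real-world audibility.

```python
# Estimate the siren's sound level at a distance, assuming ideal free-field
# inverse-square spreading from a point source: L(d) = L_ref - 20*log10(d/d_ref).
# Atmospheric absorption and terrain are ignored, so the figures are illustrative.
import math

def level_db(distance_ft, ref_level_db=138.0, ref_distance_ft=100.0):
    return ref_level_db - 20.0 * math.log10(distance_ft / ref_distance_ft)

for d in (100, 200, 1000, 5280, 52800):  # 100 ft out to 10 miles
    print(f"{d:>6} ft: {level_db(d):5.1f} dB(C)")
```

Under these idealized assumptions the level one mile out is still above 100 dB(C), consistent with the siren's reputation for long-range audibility.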
Chrysler Air-Raid Siren
[ "Technology", "Engineering" ]
471
[ "Warning systems", "Safety engineering", "Measuring instruments" ]
14,668,772
https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282014%29
This page summarizes projects that propose to bring more than of new liquid fuel capacity to market with the first production of fuel beginning in 2014. This is part of the Wikipedia summary of Oil Megaprojects. Quick links to other years Detailed list of projects for 2014 Terminology Year Startup: year of first oil. put specific date if available. Operator: company undertaking the project. Area: onshore (LAND), offshore (OFF), offshore deep water (ODW), tar sands (TAR). Type: liquid category (i.e. Natural Gas Liquids, Natural gas condensate, Crude oil) Grade: oil quality (light, medium, heavy, sour) or API gravity 2P resvs: 2P (proven + probable) oil reserves in giga barrels (Gb). GOR: The ratio of produced gas to produced oil, commonly abbreviated GOR. Peak year: year of the production plateau/peak. Peak: maximum production expected (thousand barrels/day). Discovery: year of discovery. Capital investment: expected capital cost; FID (Final Investment Decision) - If no FID, then normally no project development contracts can be awarded. For many projects, a FEED stage (Front End Engineering Design) precedes the FID. Notes: comments about the project (footnotes). Ref: list of sources. References 2014 Oil fields Proposed energy projects Projects established in 2014 2014 in the environment 2014 in technology
Oil megaprojects (2014)
[ "Engineering" ]
297
[ "Oil megaprojects", "Megaprojects" ]
14,668,793
https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282015%29
This page summarizes projects that propose to bring more than of new liquid fuel capacity to market with the first production of fuel beginning in 2015. This is part of the Wikipedia summary of Oil Megaprojects. Quick links to other years Detailed list of projects for 2015 Terminology Year Startup: year of first oil. put specific date if available. Operator: company undertaking the project. Area: onshore (LAND), offshore (OFF), offshore deep water (ODW), tar sands (TAR). Type: liquid category (i.e. Natural Gas Liquids, Natural gas condensate, Crude oil) Grade: oil quality (light, medium, heavy, sour) or API gravity 2P resvs: 2P (proven + probable) oil reserves in giga barrels (Gb). GOR: The ratio of produced gas to produced oil, commonly abbreviated GOR. Peak year: year of the production plateau/peak. Peak: maximum production expected (thousand barrels/day). Discovery: year of discovery. Capital investment: expected capital cost; FID (Final Investment Decision) - If no FID, then normally no project development contracts can be awarded. For many projects, a FEED stage (Front End Engineering Design) precedes the FID. Notes: comments about the project (footnotes). Ref: list of sources. References 2015 Oil fields Proposed energy projects Projects established in 2015 2015 in the environment 2015 in technology
Oil megaprojects (2015)
[ "Engineering" ]
297
[ "Oil megaprojects", "Megaprojects" ]
14,668,818
https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282016%29
This page summarizes projects that propose to bring more than of new liquid fuel capacity to market with the first production of fuel beginning in 2016. This is part of the Wikipedia summary of Oil Megaprojects. Quick links to other years Detailed list of projects for 2016 Terminology Year Startup: year of first oil. put specific date if available. Operator: company undertaking the project. Area: onshore (LAND), offshore (OFF), offshore deep water (ODW), tar sands (TAR). Type: liquid category (i.e. Natural Gas Liquids, Natural gas condensate, Crude oil) Grade: oil quality (light, medium, heavy, sour) or API gravity 2P resvs: 2P (proven + probable) oil reserves in giga barrels (Gb). GOR: The ratio of produced gas to produced oil, commonly abbreviated GOR. Peak year: year of the production plateau/peak. Peak: maximum production expected (thousand barrels/day). Discovery: year of discovery. Capital investment: expected capital cost; FID (Final Investment Decision) - If no FID, then normally no project development contracts can be awarded. For many projects, a FEED stage (Front End Engineering Design) precedes the FID. Notes: comments about the project (footnotes). Ref: list of sources. References 2016 Oil fields Proposed energy projects Projects established in 2016 2016 in the environment 2016 in technology
Oil megaprojects (2016)
[ "Engineering" ]
297
[ "Oil megaprojects", "Megaprojects" ]
14,668,841
https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282017%29
This page summarizes projects that propose to bring more than of new liquid fuel capacity to market with the first production of fuel beginning in 2017. This is part of the Wikipedia summary of Oil Megaprojects. Quick links to other years Detailed list of projects for 2017 References 2017 Oil fields Proposed energy projects Projects established in 2017 2017 in the environment 2017 in technology
Oil megaprojects (2017)
[ "Engineering" ]
74
[ "Oil megaprojects", "Megaprojects" ]
14,668,894
https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282018%29
This page summarizes projects that propose to bring more than of new liquid fuel capacity to market with the first production of fuel beginning in 2018. This is part of the Wikipedia summary of Oil Megaprojects. Quick links to other years Detailed list of projects for 2018 References Oil megaprojects Oil fields Proposed energy projects 2018 in technology
Oil megaprojects (2018)
[ "Engineering" ]
70
[ "Oil megaprojects", "Megaprojects" ]
14,668,917
https://en.wikipedia.org/wiki/Oil%20megaprojects%20%282019%29
This page summarizes projects that propose to bring more than of new liquid fuel capacity to market with the first production of fuel beginning in 2019. This is part of the Wikipedia summary of oil megaprojects. Quick links to other years Detailed list of projects for 2019 References Oil megaprojects Oil fields Proposed energy projects 2019 in technology
Oil megaprojects (2019)
[ "Engineering" ]
88
[ "Oil megaprojects", "Megaprojects" ]
14,669,190
https://en.wikipedia.org/wiki/Spectrin%20repeat
Spectrin repeats are found in several proteins involved in cytoskeletal structure. These include spectrin, alpha-actinin, dystrophin and, more recently, the plakin family. The spectrin repeat forms a three-helix bundle that conforms to the rules of the heptad repeat. Spectrin repeats give rise to linear proteins. This, however, may be due to sample bias, in that linear and rigid structures are more amenable to crystallization. There are hints, however, that some proteins harbouring spectrin repeats may also be flexible, most likely due to specifically evolved functional purposes. Human proteins containing this domain ACTN1; ACTN2; ACTN3; ACTN4; AKAP6; SYNE3; CATX-15; DMD; DRP2; DST; KALRN; MACF1; MCF2L; SPTA1; SPTAN1; SPTB; SPTBN1; SPTBN2; SPTBN4; SPTBN5; SYNE1; SYNE2; TRIO; UTRN; References Further reading Peripheral membrane proteins Protein domains Protein superfamilies
Spectrin repeat
[ "Biology" ]
247
[ "Protein superfamilies", "Protein domains", "Protein classification" ]
14,669,196
https://en.wikipedia.org/wiki/Barren%20vegetation
Barren vegetation describes an area of land where plant growth may be sparse or stunted and biodiversity limited. Environmental conditions such as toxic or infertile soil, high winds, coastal salt-spray, and climate are often key factors in poor plant growth and development. Barren vegetation can be categorized depending on the climate, geology, and geographic location of a specific area. Pine barrens, coastal barrens, and serpentine barrens are some of the more distinct ecoregions for barren vegetation and are the most commonly researched by scientists. Often referred to as "heathlands", barrens can be excellent environments for unique biological diversity and taxonomic compositions. Serpentine Barrens Biological diversity Serpentine barren habitats include grasslands, chaparral, and woodlands, as well as some areas that are very sparsely vegetated. Areas of sparse vegetation are often characterized by annual and perennial herbaceous plant species. The flora of the serpentines is recognized globally for its high level of biological diversity, which includes over 1600 taxa of plants occurring in serpentine areas of the eastern U.S., with as many as 2000 taxa considered to be endemic to serpentine-rich soils. Geology Serpentine barrens are distinct due to the serpentine-rich soil produced by the hydration, weathering, and metamorphic transformation of ultramafic igneous bedrock. Serpentine barrens are often characterized as high-stress environments with low water and nutrient availability. These areas are often depleted in basic nutrients such as nitrogen and phosphorus. The soil is often shallow and can be toxic due to high concentrations of heavy metals such as nickel, cobalt and chromium. As a result of the harsh conditions and unique edaphic properties presented by serpentine barrens, these environments support stress-tolerant plant communities characterized by distinct and locally defined plant species. Pine barrens The Pine Barrens comprise 550,000 hectares of a heavily forested area of coastal plain and are home to at least 850 species of plant life, including many which are endangered or threatened. The Pine Barrens are primarily formed on unconsolidated, acidic, medium-to-coarse grained sands and gravel. The mature soils are considered to be true podzols and are siliceous and highly permeable. The low moisture-holding capacity and poor nutrient status of the soil result in low vegetation growth rates throughout much of the Pine Barrens. Coastal barrens Coastal barrens are characterized by short vegetation, sparse tree cover, exposed bedrock, and bog pockets. Often, coastal barrens exhibit stressful climatic conditions and are subject to consistently windy conditions and salt-spray. Coastal barrens typically host low-growing shrub communities with sparse tree cover and are often dominated by ericaceous species such as the black huckleberry (Gaylussacia baccata) and the lowbush blueberry (Vaccinium angustifolium). The coastal barrens of Atlantic Canada host a variety of taxonomic groups such as macrolichens, mosses, and vascular plants. Studies have recorded 173 different species in various coastal barren regions of the province of Nova Scotia. This number included 105 vascular plants, 41 macrolichens, and 27 moss species, with six provincially rare vascular species that were found predominantly in nearshore areas that contained high levels of substrate salt and nutrients, variable substrate depth, and short vegetation. 
In Sydney, Australia, the coastal area is mostly dominated by mallee or stunted forms of eucalyptus trees, and scrubby vegetation such as Allocasuarina distyla, Angophora hispida, Banksia ericifolia and Grevillea oleoides, among other species, typically on an exposed coastal sandstone plateau with infertile, shallow, fairly damp soils. Unique to New South Wales, such vegetation is found from Gosford to Royal National Park, with southern outliers at Barren Grounds and Jervis Bay. Climate zones Although barren lands are generally located in areas associated with arid, semi-arid, polar and tundra climates, they can also be found extensively in milder, temperate, and/or humid climates, such as the following. The Buck Creek Serpentine Barrens in North Carolina received around 1770 mm of precipitation in the last ten years. The Nottingham Serpentine Barrens are very humid and have an average temperature of 11 degrees Celsius; average precipitation there is 1200 mm, spread evenly throughout the year. Another region of barren vegetation is located in the Appalachian Mountains: at the low elevations of the northern part of this mountain chain, the annual precipitation is a little lower than on the high peaks of the southern Appalachians, and precipitation falls mostly as rain rather than snow, occurring mostly in the summer. The ecoregion known as the Pine Barrens is spread across much of the northeastern United States, primarily in the state of New Jersey. Along the Atlantic coast of Nova Scotia and the northeastern United States, there are patches of unforested coastal barrens spread throughout areas that contain exposed bedrock and/or little soil cover within a forested landscape. More extensive barrens can be found in much of Newfoundland and Labrador and further north in mainland Canada. In 1819, during the early European colonization of Australia, Sydney's landform was described by English explorer William Wentworth as "extremely barren, being a poor hungry sand, thickly studded with rocks". Moreover, the Adelaide Plains, a region with a Mediterranean climate, were described as "barren" by early settlers. Calcareous glades, which occur on dolomite and limestone in humid climates, are sometimes described as barrens. Anthropogenic relationships Anthropogenic interactions have helped change and drive vegetation in the eastern US over the years; that is, human actions play a role in what type of vegetation will grow in some locations. These actions include fires and fire suppression, grazing, logging, and agricultural clearing. Research and anecdotal evidence suggest that vegetation structure and composition in the eastern serpentine barrens may also have been influenced by local disturbance regimes associated with these events, as well as mining. Savannahs and barrens are ecosystems that are rare in North America. This is due in part to human impacts such as agriculture, urbanization, and the alteration of natural fire regimes. Over the past 50 years, the area of savannah-like openings and pine woodland has been continuously reduced, a tendency opposite to that of hardwood forests. These changes in vegetation structure and composition are caused in part by anthropogenic changes in the fire regime. 
Following the burning of vegetation, inorganic nutrients are released into the ecosystem through the combustion of plant biomass. This release of nutrients is thought to be one reason for a subsequent increase in plant productivity. Global distribution and geography Regions of the Earth's surface where soils dominate the ecosystem with little to no plant cover are often referred to as "barren". These are areas like deserts, polar regions, areas of high elevation, and zones of glacier retreat. Barren zones situated in mountain ranges are often called the "subnival zone", and are found at elevations between the upper limit of the vegetation zone and the lower limit of the ice-covered zone. Subnival zones in places like the Rockies, Andes, and Himalayas have increased greatly in extent in the past few years due to the retreat of high-elevation glaciers and ice caps. One area for study is the Nottingham Serpentine Barrens, which covers 200 ha in southern Chester County, Pennsylvania, on the Pennsylvania-Maryland border. The typical serpentine barren is either a prairie or savannah grassland. The soils at this location are a section of the Neshaminy-Chrome-Conowingo association. These soils are deep and are derived from the serpentine bedrock; the series is well-drained and moderately sloping. These locations have been under heavy erosion, and the depth to the parent bedrock is within 15–75 cm. The soils also have low permeability, which limits the water available to plants and makes it difficult for them to collect moisture. Mean elevation and the elevation range limits of both vegetation zones and individual species shift with increasing latitude. For example, on high-elevation outcrops in the southern Appalachians, composition gradients are a function of elevation, potential solar radiation, a geographic gradient that corresponds to broad geological differences (mafic rocks to the northwest vs. felsic rocks in the southwest direction), and surficial geomorphology (bedrock surfaces that are less fractured in the southeast). See also Alvar Sclerophyll References External links Ecoregions Ecosystems
Barren vegetation
[ "Biology" ]
1,777
[ "Symbiosis", "Ecosystems" ]
14,669,433
https://en.wikipedia.org/wiki/CyberCIEGE
CyberCIEGE is a serious game designed to teach network security concepts. Its development was sponsored by the U.S. Navy, and it is used as a training tool by agencies of the U.S. government, universities and community colleges. CyberCIEGE covers a broad range of cybersecurity topics. Players purchase and configure computers and network devices to keep demanding users happy (e.g., by providing Internet access), all while protecting assets from a variety of attacks. The game includes a number of different scenarios, some of which focus on basic training and awareness, others on more advanced network security concepts. A "Scenario Development Kit" is available for creating and customizing scenarios. Network security components include configurable firewalls, VPN gateways, VPN clients, link encryptors and authentication servers. Workstations and servers include access control lists (ACLs) and may be configured with operating systems that enforce label-based mandatory access control policies. Players can deploy Public Key Infrastructure (PKI)-based cryptography to protect email, web traffic and VPNs. The game also includes identity management devices such as biometric scanners and card readers to control access to workstations and physical areas. The CyberCIEGE game engine consumes a "scenario development language" that describes each scenario in terms of users (and their goals), assets (and their values), the initial state of the scenario in terms of pre-existing components, and the conditions and triggers that provide flow to the scenario. The game engine is defined with enough fidelity to host scenarios ranging from e-mail attachment awareness to cyber warfare. Game play CyberCIEGE scenarios place the player into situations in which the player must make information assurance decisions. The interactive simulation illustrates potential consequences of player choices in terms of attacks on information assets and disruptions to authorized user access to assets. The game employs hyperbole as a means of engaging students in the scenario, and thus the simulation is not intended to always identify the actual consequences of specific choices. The game confronts the student with problems, conflicts and questions that should be considered when developing and implementing a security policy. The game is designed as a "construction and management simulation" set in a three-dimensional virtual world. Players build networks and observe virtual users and their thoughts. Each scenario is divided into multiple phases, and each phase includes one or more objectives the player must achieve prior to moving on to the next phase. Players view the status of the virtual users' success in achieving goals (i.e., accessing enterprise assets via computers and networks). Unproductive users express unhappy thoughts in comic-book-style speech bubbles and bang on their keyboards. Players see the consequences of attacks as lost money, pop-up messages, video clips and burning computers. Game Engine CyberCIEGE includes a sophisticated attack engine that assesses network topologies, component configurations, physical security, user training and procedural security settings. The attack engine weighs resultant vulnerabilities against the attacker motives to compromise assets on the network, and this motive may vary by asset. Thus, some assets might be defended via a firewall, while other assets might require an air gap or high assurance protection mechanisms. 
Attack types include Trojan horses, viruses, trap doors, denial of service, insiders (i.e., bribed users who lack background checks), un-patched flaws and physical attacks. The attack engine is coupled with an economy engine that measures the virtual user’s ability to achieve goals (i.e., read or write assets) using computers and networks. This combination supports scenarios that illustrate real-world trade-offs such as the use of air-gaps versus the risks of cross-domain solutions when accessing assets on both classified and unclassified networks. The game engine includes a defined set of assessable conditions and resultant triggers that allow the scenario designer to provide players with feedback, (e.g., bubble speech from characters, screen tickers, pop-up messages, etc.), and to transition the game to new phases. CyberCIEGE Fidelity The fidelity of the game engine is intended to be high enough for players to make meaningful choices with respect to deploying network security countermeasures, but not be so high as to engulf the player with administrative minutiae. CyberCIEGE illustrates abstract functions of technical protection mechanisms and configuration-related vulnerabilities. For example, an attack might occur because a particular firewall port is left open and a specific software service is not patched. CyberCIEGE has been designed to provide a fairly consistent level of abstraction among the various network and computer components and technical countermeasures. This can be seen by considering several CyberCIEGE game components. CyberCIEGE firewalls include network filters that let players block traffic over selected application “ports” (e.g., Telnet). Players can configure these filters for different network interfaces and different traffic directions. This lets players see the consequences of leaving ports open (e.g., attacks). And this allows players to experience the need to open some ports (e.g., one of the characters might be unable to achieve a goal unless the filter is configured to allow SSH traffic). CyberCIEGE includes VPN gateways and computer based VPN mechanisms that players configure to identify the characteristics of the protection (e.g., encryption, authentication or neither) provided to network traffic, depending on its source and destination. This allows CyberCIEGE to illustrate risks associated with providing unprotected Internet access to the same workstation that has a VPN tunnel into the corporate network. Other network components (e.g., workstations) include configuration choices related to the type of component. CyberCIEGE lets players select consequential password policies and other procedural and configuration settings. References External links CyberCIEGE official site Computer network security Serious games 2004 video games
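To make the per-interface, per-direction port filtering described above concrete, here is a minimal, hypothetical sketch in Python. It is not CyberCIEGE's actual scenario development language or engine code, and the Rule fields and function names are invented for illustration; it only models the kind of filter decision players configure in the game.

```python
# Hypothetical model of a CyberCIEGE-style firewall filter: rules are keyed by
# interface, traffic direction, and application port, with a default-deny fallback.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    interface: str   # e.g. "wan" or "lan"
    direction: str   # "in" or "out"
    port: int        # application port, e.g. 22 for SSH, 23 for Telnet
    allow: bool

def permitted(rules, interface, direction, port, default=False):
    """First matching rule wins; unmatched traffic falls back to default-deny."""
    for r in rules:
        if (r.interface, r.direction, r.port) == (interface, direction, port):
            return r.allow
    return default

rules = [
    Rule("wan", "in", 22, True),    # open SSH so a user goal can succeed
    Rule("wan", "in", 23, False),   # keep Telnet closed
]
print(permitted(rules, "wan", "in", 22))  # True
print(permitted(rules, "wan", "in", 80))  # False: default deny
```

The trade-off the game teaches falls out directly: every allow rule that unblocks a user goal also widens the surface the attack engine can probe.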
CyberCIEGE
[ "Engineering" ]
1,218
[ "Cybersecurity engineering", "Computer networks engineering", "Computer network security" ]
14,669,487
https://en.wikipedia.org/wiki/Productive%20nanosystems
In 2007, productive nanosystems were defined as functional nanoscale systems that make atomically-specified structures and devices under programmatic control, i.e., performing atomically precise manufacturing. As of 2015, such devices were only hypothetical, and productive nanosystems represented a more advanced approach among several to perform atomically precise manufacturing. A workshop on Integrated Nanosystems for Atomically Precise Manufacturing was held by the Department of Energy in 2015. Present-day technologies are limited in various ways. Large atomically precise structures (that is, virtually defect-free) do not exist. Complex 3D nanoscale structures exist in the form of folded linear molecules such as DNA origami and proteins. As of 2018, it was also possible to build very small atomically precise structures using scanning probe microscopy to construct molecules such as FeCO and triangulene, or to perform hydrogen depassivation lithography. But it is not yet possible to combine components in a systematic way to build larger, more complex systems. Principles of physics and examples from nature both suggest that it will be possible to extend atomically precise fabrication to more complex products of larger size, involving a wider range of materials. An example of progress in this direction is Christian Schafmeister's work on bis-peptides. Stages of progress in nanotechnology In 2005, Mihail Roco, one of the architects of the USA's National Nanotechnology Initiative, proposed four stages of nanotechnology that seem to parallel the technical progress of the Industrial Revolution, of which productive nanosystems is the most advanced. 1. Passive nanostructures - nanoparticles and nanotubes that provide added strength, electrical and thermal conductivity, toughness, hydrophilic/phobic and/or other properties that emerge from their nanoscale structure. 2. Active nanodevices - nanostructures that change states in order to transform energy, information, and/or to perform useful functions. There is some debate about whether or not state-of-the-art integrated circuits qualify here, since they operate despite emergent nanoscale properties, not because of them. Therefore, the argument goes, they don't qualify as "novel" nanoscale properties, even though the devices themselves are between one and a hundred nanometers. 3. Complex nanomachines - the assembly of different nanodevices into a nanosystem to accomplish a complex function. Some would argue that Zettl's machines fit in this category; others argue that modern microprocessors and FPGAs also fit. 4. Systems of nanosystems/Productive nanosystems - these will be complex nanosystems that produce atomically precise parts for other nanosystems, not necessarily using novel nanoscale-emergent properties, but well-understood fundamentals of manufacturing. Because of the discrete (i.e. atomic) nature of matter and the possibility of exponential growth, this stage is seen as the basis of another industrial revolution. There are currently many different approaches to building productive nanosystems, including top-down approaches like patterned atomic layer epitaxy and diamondoid mechanosynthesis, as well as bottom-up approaches like DNA origami and bis-peptide synthesis. A fifth step, info/bio/nano convergence, was added later by Roco. This is the convergence of the three most revolutionary technologies, since every living thing is made up of atoms and information. 
See also Clanking replicator Ribosome Synthetic biology References Nanotechnology
Productive nanosystems
[ "Materials_science", "Engineering" ]
725
[ "Nanotechnology", "Materials science" ]
14,669,563
https://en.wikipedia.org/wiki/Bid-to-cover%20ratio
The bid-to-cover ratio is a ratio used to measure the demand for a particular security during offerings and auctions. In general, it is used for shares, bonds, and other securities. It may be computed in two ways: either the number of bids received divided by the number of bids accepted, or the value of bids received divided by the value of bids accepted. The higher the ratio, the higher the demand. A ratio above 2.0 indicates a successful auction with aggressive bids. A lower reading indicates weak demand, and the auction is said to have a long tail (a wide spread between the average and the high yield). Example For example, suppose debt managers are seeking to raise $10 billion in ten-year notes with a 5.130% coupon, and, in aggregate, they have received seven bids from lenders as follows: Bid 1 for $1.00 billion at 5.115%; Bid 2 for $2.50 billion at 5.120%; Bid 3 for $3.50 billion at 5.125%; Bid 4 for $4.50 billion at 5.130%; Bid 5 for $3.75 billion at 5.135%; Bid 6 for $2.75 billion at 5.140%; Bid 7 for $1.50 billion at 5.145%. The total of all bids received is $19.5 billion, and the value of bids accepted would be $10 billion, leading to a bid-to-cover ratio of 1.95 (calculated by the value method). Since the managers are interested in raising the cheapest debt possible, bids 1, 2, and 3 will be covered in full ($7 billion). Bid 4 will be partially covered ($3 billion out of $4.5 billion). Bids 5, 6, and 7 will be rejected. The final coupon will be fixed at 5.130% (the rate of the last bid accepted) for all the bids covered. See also Dutch auction Overallotment option References External links How do treasury auctions work? Auction theory Financial ratios
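To reproduce the arithmetic, the sketch below implements the value-based bid-to-cover calculation and the cheapest-first allocation from the example above. The function name and structure are illustrative, not a standard library API.

```python
# Value-based bid-to-cover: total value bid divided by total value accepted,
# allocating to the lowest-rate (cheapest) bids first.
def bid_to_cover(bids, target):
    """bids: list of (amount, rate); returns (ratio, allocations, final coupon)."""
    total = sum(amount for amount, _ in bids)
    ratio = total / target
    allocations, remaining, coupon = [], target, None
    for amount, rate in sorted(bids, key=lambda b: b[1]):  # cheapest debt first
        take = min(amount, remaining)
        if take > 0:
            allocations.append((take, rate))
            coupon = rate  # rate of the last (possibly partial) bid accepted
            remaining -= take
    return ratio, allocations, coupon

bids = [(1.00, 5.115), (2.50, 5.120), (3.50, 5.125), (4.50, 5.130),
        (3.75, 5.135), (2.75, 5.140), (1.50, 5.145)]  # $bn, rate %
ratio, alloc, coupon = bid_to_cover(bids, 10.0)
print(ratio)   # 1.95
print(coupon)  # 5.13: bids 1-3 filled, bid 4 partially filled
```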
Bid-to-cover ratio
[ "Mathematics" ]
418
[ "Metrics", "Quantity", "Financial ratios", "Auction theory", "Game theory" ]
14,669,633
https://en.wikipedia.org/wiki/Alpha/beta%20hydrolase%20superfamily
The alpha/beta hydrolase superfamily is a superfamily of hydrolytic enzymes of widely differing phylogenetic origin and catalytic function that share a common fold. The core of each enzyme is an alpha/beta-sheet (rather than a barrel), containing 8 beta strands connected by 6 alpha helices. The enzymes are believed to have diverged from a common ancestor, retaining little obvious sequence similarity, but preserving the arrangement of the catalytic residues. All have a catalytic triad, the elements of which are borne on loops, which are the best-conserved structural features of the fold. The alpha/beta hydrolase fold includes proteases, lipases, peroxidases, esterases, epoxide hydrolases and dehalogenases. Database The ESTHER database provides a large collection of information about this superfamily of proteins. Subfamilies 3-oxoadipate enol-lactonase Human proteins containing this domain ABHD10; ABHD11; ABHD12; ABHD12B; ABHD13; ABHD2; ABHD3; ABHD4; ABHD5; ABHD6; ABHD7; ABHD8; ABHD9; BAT5; BPHL; C20orf135; EPHX1; EPHX2; FAM108B1; LIPA; LIPF; LIPJ; LIPK; LIPM; LIPN; LYPLAL1; MEST; MGLL; PPME1; SERHL; SERHL2; SPG21; CES1; CES2; C4orf29 See also Ecdysteroid-phosphate phosphatase - structure of a steroid phosphate phosphotase Serine hydrolase - an enzyme family that is composed largely of proteins with alpha-beta hydrolase folds External links The ESTHER database References Protein domains Peripheral membrane proteins Hydrolases Protein superfamilies
Alpha/beta hydrolase superfamily
[ "Biology" ]
392
[ "Protein superfamilies", "Protein domains", "Protein classification" ]
14,669,738
https://en.wikipedia.org/wiki/Snyder-Middleswarth%20Natural%20Area
Snyder-Middleswarth Natural Area is a 500 acre (202 ha) National Natural Landmark within Bald Eagle State Forest in Spring Township, Snyder County, Pennsylvania in the United States. It is named for two Pennsylvania politicians from Snyder County: Simon Snyder and Ner Alexander Middleswarth. It was formerly a Pennsylvania state park, the only one in Snyder County, but lost its state park status in the mid-1990s. Name Snyder-Middleswarth Natural Area is named for two Pennsylvania politicians from Snyder County: Simon Snyder and Ner Alexander Middleswarth. Snyder County is also named for Simon Snyder. Snyder (1759 – 1819) was a three-time Speaker of the Pennsylvania House of Representatives and the third governor of Pennsylvania. He was elected to the United States Senate, but died before he could take office. As of 2007 he remains the only Pennsylvania governor from Snyder County. Middleswarth (1783 – 1865) was twice Speaker of the Pennsylvania House, and served in the Pennsylvania State Senate and the United States House of Representatives. The United States Geological Survey Geographic Names Information System (GNIS) lists the name as "Snyder Middleswarth Natural Area". As of 2023, the hyphen is used by the Pennsylvania Department of Conservation and Natural Resources, as well as the National Park Service in its entry for the National Natural Landmark. Location Snyder-Middleswarth Natural Area is in Spring Township in western Snyder County, about 5 miles (8 km) west of Troxelville on Swift Run Road. It is 23 miles (37 km) southwest of Lewisburg and 31 miles (50 km) southeast of State College. The natural area is in the Ridge-and-Valley Appalachians, in a narrow east-west valley between Jacks Mountain to the south and Buck and Penns Creek Mountains to the north. Swift Run, a tributary of Middle Creek, flows east through the area. The Rock Springs Picnic Area is at the eastern end of the preserve, with the Snyder-Middleswarth Picnic Area west of this, in about the center of the tract, just where Swift Run Road leaves Swift Run. Tall Timbers Natural Area is the western border, while Bald Eagle State Forest lands surround Snyder-Middleswarth Natural Area in all other directions. History In the 19th and early 20th centuries, almost all of Pennsylvania's forests were clear cut, with only a few isolated tracts of virgin forest surviving. The land that became Snyder-Middleswarth Natural Area was purchased by the state in 1902, as part of a larger 14,000 acre (56.66 km²) parcel. On April 12, 1921 the governor signed the law creating "Snyder-Middleswarth State Forest Park", making it Pennsylvania's ninth state park. By 1923 the park had a telephone and some structures, and in 1937 the state named it a "Forest Monument" as an "area of botanical or historic interest". Early in the park's history a fire tower was built just west of it, but this was eventually abandoned and only the foundations remained by 1992. Snyder-Middleswarth was still a "State Forest Park" on the official 1965 Pennsylvania Department of Highways Snyder County map. In November 1967, the park was named a National Natural Landmark, as an "outstanding example of a relict forest composed predominantly of hemlock, birch, and pine, with scattered oaks". In 1980, an airplane carrying copies of the New York Times crashed with one fatality. The crash site is on the summit of Thick Mountain, on the southern edge of the park. 
By 1981, both the Snyder-Middleswarth and Tall Timbers Natural Areas had been established, the former as part of the state park and the latter as part of Bald Eagle State Forest. While both areas are on Swift Run, Tall Timbers is old second-growth forest. Snyder-Middleswarth's virgin forest is thought to have survived at least in part due to its location and the difficulty of transporting the cut timber, although the fact that many of the trees were brittle hemlock may also have preserved them. Despite being Snyder County's only state park and a National Natural Landmark, Snyder-Middleswarth lost its status as a state park sometime between 1992 and 1996, becoming just a Natural Area within the state forest system. Sources differ as to the size of the former Snyder-Middleswarth State Park. As of December 2007, at least ten years after the park ceased to exist, the DCNR webpage "State Parks near the Bald Eagle State Forest" still lists Snyder-Middleswarth State Park, and gives its size as 425 acres (172 ha). However, Thwaites (1992) wrote that the park was only the 8 acre (3.2 ha) picnic area, but distinguished it from the "much larger Snyder Middleswarth National Natural Landmark" (without giving its exact size). According to the DCNR, as of 2007 Snyder-Middleswarth Natural Area is 500 acres (202 ha), of which 250 acres (101 ha) is virgin forest. The tallest trees at Snyder-Middleswarth are more than 150 feet (46 m) tall and measure more than 40 inches (102 cm) diameter at breast height. As measured by its growth rings, one fallen tree was found to be 347 years old. The adjoining Tall Timbers Natural Area is 660 acres (267 ha), and has a "second growth forest of oak, white pine, hemlock, and hard pine". References Old-growth forests National Natural Landmarks in Pennsylvania Protected areas established in 1921 Protected areas of Snyder County, Pennsylvania
Snyder-Middleswarth Natural Area
[ "Biology" ]
1,131
[ "Old-growth forests", "Ecosystems" ]
14,669,989
https://en.wikipedia.org/wiki/Viola%E2%80%93Jones%20object%20detection%20framework
The Viola–Jones object detection framework is a machine learning object detection framework proposed in 2001 by Paul Viola and Michael Jones. It was motivated primarily by the problem of face detection, although it can be adapted to the detection of other object classes. In short, it consists of a sequence of classifiers, each a single perceptron built from several binary masks (Haar features). To detect faces in an image, a sliding window is moved over the image, and the classifiers are applied to each window in sequence. If at any point a classifier outputs "no face detected", the window is considered to contain no face. Otherwise, if all classifiers output "face detected", the window is considered to contain a face. The algorithm is efficient for its time, able to detect faces in 384 by 288 pixel images at 15 frames per second on a conventional 700 MHz Intel Pentium III. It is also robust, achieving high precision and recall. While it has lower accuracy than more modern methods such as convolutional neural networks, its efficiency and compact size (only around 50k parameters, compared to millions of parameters for a typical CNN such as DeepFace) mean it is still used in cases with limited computational power. For example, in the original paper, the authors reported that this face detector could run on the Compaq iPAQ at 2 fps (a device with a low-power StrongARM processor without floating-point hardware). Problem description Face detection is a binary classification problem combined with a localization problem: given a picture, decide whether it contains faces, and construct bounding boxes for the faces. To make the task more manageable, the Viola–Jones algorithm only detects full view (no occlusion), frontal (no head-turning), upright (no rotation), well-lit, full-sized (occupying most of the frame) faces in fixed-resolution images. The restrictions are less severe than they appear, since the picture can be normalized to bring it closer to these requirements: any image can be scaled to a fixed resolution; for a general picture with a face of unknown size and orientation, one can perform blob detection to discover potential faces, then scale and rotate them into the upright, full-sized position; the brightness of the image can be corrected by white balancing; and the bounding boxes can be found by sliding a window across the entire picture and marking down every window that contains a face. This would generally detect the same face multiple times, for which duplication-removal methods, such as non-maximum suppression, can be used. The "frontal" requirement is non-negotiable, as there is no simple transformation of the image that can turn a face from a side view into a frontal view. However, one can train multiple Viola–Jones classifiers, one for each angle: one for the frontal view, one for the 3/4 view, one for the profile view, and a few more for the angles in between. At run time, all these classifiers can then be executed in parallel to detect faces at different view angles. The "full-view" requirement is also non-negotiable, and cannot simply be dealt with by training more Viola–Jones classifiers, since there are too many possible ways to occlude a face. Components of the framework A full presentation of the algorithm is given in the references. Consider an image of fixed resolution (the original paper uses 24×24 pixels). The task is to make a binary decision: whether it is a photo of a standardized face (frontal, well-lit, etc.) or not.
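The detection loop just described can be sketched directly. The following is a minimal illustration, not the original implementation: `classifiers` is a hypothetical list of trained stage functions returning 0 or 1 for a window, and scanning over multiple window scales is omitted.

```python
import numpy as np

def detect_faces(image, classifiers, window=24, stride=1):
    """Slide a fixed-size window over a grayscale image and run the cascade."""
    h, w = image.shape
    detections = []
    for y in range(0, h - window + 1, stride):
        for x in range(0, w - window + 1, stride):
            patch = image[y:y + window, x:x + window]
            # Early exit: the first stage returning 0 rejects the window,
            # so little time is spent on the many negative windows.
            if all(stage(patch) == 1 for stage in classifiers):
                detections.append((x, y, window, window))
    return detections
```

The generator inside `all` short-circuits, so a window is abandoned at the first rejecting stage; this early exit is what makes the cascade cheap on the overwhelmingly negative windows.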
Viola–Jones is essentially a boosted feature learning algorithm, trained by running a modified AdaBoost algorithm on Haar feature classifiers to find a sequence of classifiers f₁, f₂, ..., f_K. Haar feature classifiers are crude, but allow very fast computation, and the modified AdaBoost constructs a strong classifier out of many weak ones. At run time, a given image x is tested on f₁(x), f₂(x), ... sequentially. If at any point fᵢ(x) = 0, the algorithm immediately returns "no face detected". If all classifiers return 1, then the algorithm returns "face detected". For this reason, the Viola–Jones classifier is also called a "Haar cascade classifier". Haar feature classifiers Consider a perceptron defined by two variables: a weight matrix w and a bias b. It takes in an image x of fixed resolution and returns 1 if the weighted pixel sum w·x + b is non-negative, and 0 otherwise. A Haar feature classifier is a perceptron with a very special kind of w that makes it extremely cheap to calculate. Namely, if we write out the matrix w, we find that its entries take only three possible values (−1, 0, +1), and if we color the matrix with white on +1, black on −1, and transparent on 0, the matrix is in one of five possible rectangular patterns. Each pattern must also be symmetric to x-reflection and y-reflection (ignoring the color change), so for example, for the horizontal white–black feature, the two rectangles must be of the same width. For the vertical white–black–white feature, the white rectangles must be of the same height, but there is no restriction on the black rectangle's height. Rationale for Haar features The Haar features used in the Viola–Jones algorithm are a subset of the more general Haar basis functions, which have been used previously in the realm of image-based object detection. While crude compared to alternatives such as steerable filters, Haar features are sufficiently complex to match features of typical human faces. For example: The eye region is darker than the upper cheeks. The nose bridge region is brighter than the eyes. Composition of properties forming matchable facial features: Location and size: eyes, mouth, bridge of nose. Value: oriented gradients of pixel intensities. Further, the design of Haar features allows the feature value to be computed using only a constant number of additions and subtractions, regardless of the size of the rectangular features, using the summed-area table. Learning and using a Viola–Jones classifier Choose a resolution for the images to be classified; the original paper recommended 24×24 pixels. Learning Collect a training set, with some images containing faces, and others not containing faces. Perform the modified AdaBoost training on the set of all Haar feature classifiers of the chosen resolution, until a desired level of precision and recall is reached. The modified AdaBoost algorithm outputs a sequence of Haar feature classifiers f₁, f₂, ..., f_K; its details are given below. Using To use a Viola–Jones classifier on an image x, compute f₁(x), f₂(x), ... sequentially. If at any point fᵢ(x) = 0, the algorithm immediately returns "no face detected". If all classifiers return 1, then the algorithm returns "face detected". Learning algorithm The speed with which features may be evaluated does not adequately compensate for their number, however. For example, in a standard 24×24 pixel sub-window, there are a total of 162,336 possible features, and it would be prohibitively expensive to evaluate them all when testing an image. Thus, the object detection framework employs a variant of the learning algorithm AdaBoost to both select the best features and to train classifiers that use them.
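The constant-cost evaluation mentioned above comes from the summed-area table. A minimal sketch, with one illustrative two-rectangle pattern standing in for the full feature set:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = sum of img[:y, :x]."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum over the rectangle [x, x+w) x [y, y+h), in four table lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def horizontal_edge_feature(ii, x, y, w, h):
    """Two-rectangle feature: left (white) half minus right (black) half.
    The halves have equal width, matching the symmetry constraint above."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)
```

Because `rect_sum` costs four array lookups regardless of rectangle size, a feature at any scale is evaluated in constant time once the table is built.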
The AdaBoost variant constructs a "strong" classifier as a linear combination of weighted simple "weak" classifiers. Each weak classifier is a threshold function based on a feature fⱼ: hⱼ(x) = −sⱼ if fⱼ(x) < θⱼ, and sⱼ otherwise. The threshold value θⱼ and the polarity sⱼ ∈ {−1, +1} are determined in the training, as are the coefficients αⱼ of the linear combination. Here a simplified version of the learning algorithm is reported: Input: Set of N positive and negative training images with their labels (xⁱ, yⁱ). If image i is a face, yⁱ = 1; if not, yⁱ = −1. Initialization: assign a weight w₁ⁱ = 1/N to each image i. For each feature fⱼ with j = 1, ..., M: Renormalize the weights such that they sum to one. Apply the feature to each image in the training set, then find the optimal threshold and polarity that minimize the weighted classification error; that is, choose θⱼ and sⱼ minimizing Σᵢ wⱼⁱ εⱼⁱ, where εⱼⁱ = 0 if yⁱ = hⱼ(xⁱ) and 1 otherwise. Assign a weight αⱼ to hⱼ that is inversely proportional to its error rate; in this way the best classifiers are considered more. The weights for the next iteration, wⱼ₊₁ⁱ, are reduced for the images that were correctly classified. Set the final classifier to h(x) = sgn(Σⱼ αⱼ hⱼ(x)). Cascade architecture On average only 0.01% of all sub-windows are positive (faces), yet equal computation time is spent on all sub-windows; most time should therefore be spent only on potentially positive sub-windows. A simple 2-feature classifier can achieve an almost 100% detection rate with a 50% false positive rate. That classifier can act as the first layer of a series, to filter out most negative windows. A second layer with 10 features can tackle the "harder" negative windows which survived the first layer, and so on. A cascade of gradually more complex classifiers achieves even better detection rates. The evaluation of the strong classifiers generated by the learning process can be done quickly, but it isn't fast enough to run in real time. For this reason, the strong classifiers are arranged in a cascade in order of complexity, where each successive classifier is trained only on those selected samples which pass through the preceding classifiers. If at any stage in the cascade a classifier rejects the sub-window under inspection, no further processing is performed, and the search continues with the next sub-window. The cascade therefore has the form of a degenerate tree. In the case of faces, the first classifier in the cascade – called the attentional operator – uses only two features to achieve a false negative rate of approximately 0% and a false positive rate of 40%. The effect of this single classifier is to reduce by roughly half the number of times the entire cascade is evaluated. In cascading, each stage consists of a strong classifier, so all the features are grouped into several stages, where each stage has a certain number of features. The job of each stage is to determine whether a given sub-window is definitely not a face or may be a face. A given sub-window is immediately discarded as not a face if it fails any of the stages. A simple framework for cascade training is given below:
f = the maximum acceptable false positive rate per layer
d = the minimum acceptable detection rate per layer
Ftarget = target overall false positive rate
P = set of positive examples
N = set of negative examples
F(0) = 1.0; D(0) = 1.0; i = 0
while F(i) > Ftarget
    increase i
    n(i) = 0; F(i) = F(i-1)
    while F(i) > f × F(i-1)
        increase n(i)
        use P and N to train a classifier with n(i) features using AdaBoost
        evaluate the current cascaded classifier on the validation set to determine F(i) and D(i)
        decrease the threshold for the ith classifier (i.e. how many weak classifiers need to accept for the strong classifier to accept) until the current cascaded classifier has a detection rate of at least d × D(i-1) (this also affects F(i))
    N = ∅
    if F(i) > Ftarget then evaluate the current cascaded detector on the set of non-face images and put any false detections into the set N
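The threshold-and-polarity search inside one round of the simplified algorithm above can be sketched as follows. This is an illustrative exhaustive scan; a practical implementation sorts the feature values and sweeps them once, and the round's coefficient is then set from the returned error (in standard AdaBoost, α = ½ ln((1 − ε)/ε)).

```python
import numpy as np

def best_stump(feature_values, labels, weights):
    """Exhaustive threshold/polarity search for one Haar feature.

    feature_values: (n,) array of the feature applied to each training image
    labels:         (n,) array of +1 (face) / -1 (non-face)
    weights:        (n,) normalized boosting sample weights
    Returns (weighted_error, threshold, polarity).
    """
    best = (np.inf, None, None)
    for theta in np.unique(feature_values):
        for polarity in (+1, -1):
            # Predict -1 on the side of the threshold selected by the polarity.
            pred = np.where(polarity * feature_values < polarity * theta, -1, +1)
            err = weights[pred != labels].sum()
            if err < best[0]:
                best = (err, theta, polarity)
    return best
```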
The cascade architecture has interesting implications for the performance of the individual classifiers. Because the activation of each classifier depends entirely on the behavior of its predecessor, the false positive rate for an entire cascade is the product of the per-stage false positive rates, F = ∏ᵢ fᵢ. Similarly, the detection rate is D = ∏ᵢ dᵢ. Thus, to match the false positive rates typically achieved by other detectors, each classifier can get away with having surprisingly poor performance. For example, for a 32-stage cascade to achieve a false positive rate of 10⁻⁶, each classifier need only achieve a false positive rate of about 65%. At the same time, however, each classifier needs to be exceptionally capable if it is to achieve adequate detection rates. For example, to achieve a detection rate of about 90%, each classifier in the aforementioned cascade needs to achieve a detection rate of approximately 99.7%. Using Viola–Jones for object tracking In videos of moving objects, one need not apply object detection to each frame. Instead, one can use tracking algorithms like the KLT algorithm to detect salient features within the detection bounding boxes and track their movement between frames. Not only does this improve tracking speed by removing the need to re-detect objects in each frame, but it improves the robustness as well, as the salient features are more resilient than the Viola–Jones detection framework to rotation and photometric changes. References External links Slides Presenting the Framework Information Regarding Haar Basis Functions Open-source tool for image mining An improved algorithm on Viola-Jones object detector Citations of the Viola–Jones algorithm in Google Scholar Adaboost Explanation from ppt by Qing Chen, Discovery Labs, University of Ottawa and a video lecture by Ramsri Goutham. Implementations MATLAB: , OpenCV: implemented as cvHaarDetectObjects(). Haar Cascade Detection in OpenCV Cascade Classifier Training in OpenCV Object recognition and categorization Facial recognition Articles with example pseudocode Gesture recognition Computer vision
Viola–Jones object detection framework
[ "Engineering" ]
2,688
[ "Artificial intelligence engineering", "Packaging machinery", "Computer vision" ]
14,670,096
https://en.wikipedia.org/wiki/Regulator%20of%20G%20protein%20signaling
Regulators of G protein signaling (RGS) are protein structural domains, or the proteins that contain these domains, that function to activate the GTPase activity of heterotrimeric G-protein α-subunits. RGS proteins are multi-functional, GTPase-accelerating proteins that promote GTP hydrolysis by the α-subunit of heterotrimeric G proteins, thereby inactivating the G protein and rapidly switching off G protein-coupled receptor signaling pathways. Upon activation by receptors, G proteins exchange GDP for GTP, are released from the receptor, and dissociate into a free, active GTP-bound α-subunit and βγ-dimer, both of which activate downstream effectors. The response is terminated upon GTP hydrolysis by the α-subunit, which can then re-bind the βγ-dimer and the receptor. RGS proteins markedly reduce the lifespan of GTP-bound α-subunits by stabilising the G protein transition state. Whereas receptors stimulate GTP binding, RGS proteins stimulate GTP hydrolysis. RGS proteins have been conserved in evolution. The first to be identified was Sst2 ("SuperSensiTivity to pheromone") in yeast (Saccharomyces cerevisiae). All RGS proteins contain an RGS-box (or RGS domain), which is required for activity. Some small RGS proteins such as RGS1 and RGS4 are little more than an RGS domain, while others also contain additional domains that confer further functionality. RGS domains in the G protein-coupled receptor kinases (GRKs) are able to bind to Gq family α-subunits, but do not accelerate their GTP hydrolysis. Instead, GRKs appear to reduce Gq signaling by sequestering the active α-subunits away from effectors such as phospholipase C-β. Plants have RGS proteins but do not have canonical G protein-coupled receptors. Thus G proteins and GTPase-accelerating proteins appear to have evolved before any known G protein activator. RGS domains can be found within the same protein in combination with a variety of other domains, including: DEP for membrane targeting, PDZ for binding to GPCRs, PTB for phosphotyrosine binding, RBD for Ras binding, GoLoco for guanine nucleotide inhibitor activity, PX for phosphoinositide binding, PXA which is associated with PX, PH for phosphatidylinositol binding, and GGL (G protein gamma subunit-like) for binding G protein beta subunits. Those RGS proteins that contain GGL domains can interact with G protein beta subunits to form novel dimers that prevent G protein gamma subunit binding and G protein alpha subunit association, thereby preventing heterotrimer formation. Examples Human proteins containing this domain include: AXIN1, AXIN2 GRK1, GRK2, GRK3, GRK4, GRK5, GRK6, GRK7 RGS1, RGS2, RGS3, RGS4, RGS5, RGS6, RGS7, RGS8, RGS9, RGS10, RGS11, RGS12, RGS13, RGS14, RGS16, RGS17, RGS18, RGS19, RGS20, RGS21 SNX13 See also GTP-binding protein regulators: GEF GAP References Further reading External links RGS domain entry in PROSITE G proteins Protein domains Peripheral membrane proteins
Regulator of G protein signaling
[ "Chemistry", "Biology" ]
772
[ "G proteins", "Protein domains", "Protein classification", "Signal transduction" ]
14,670,825
https://en.wikipedia.org/wiki/Mark%E2%80%93Houwink%20equation
The Mark–Houwink equation, also known as the Mark–Houwink–Sakurada equation or the Kuhn–Mark–Houwink–Sakurada equation or the Landau–Kuhn–Mark–Houwink–Sakurada equation or the Mark–Chrystian equation, gives a relation between intrinsic viscosity [η] and molecular weight M: [η] = K·M^a. From this equation the molecular weight of a polymer can be determined from data on the intrinsic viscosity and vice versa. The values of the Mark–Houwink parameters, K and a, depend on the particular polymer–solvent system as well as temperature. For solvents, a value of a = 0.5 is indicative of a theta solvent. A value of a = 0.8 is typical for good solvents. For most flexible polymers, 0.5 ≤ a ≤ 0.8. For semi-flexible polymers, a ≥ 0.8. For polymers with an absolutely rigid rod shape, such as Tobacco mosaic virus, a = 2.0. It is named after Herman F. Mark and Roelof Houwink. Applications The Mark–Houwink equation is used in size-exclusion chromatography (SEC) to construct the so-called universal calibration curve, which can be used to determine the molecular weight of a polymer A using a calibration done with polymer B. In SEC, molecules are separated based on hydrodynamic volume, i.e. the size of the coil a given polymer forms in solution. The hydrodynamic volume, however, cannot simply be related to molecular weight (compare comb-like polystyrene vs. linear polystyrene). This means that the molecular weight associated with a given retention volume is substance-specific, and that in order to determine the molecular weight of a given polymer, a molecular-weight size marker of the same substance must be available. However, the product of the intrinsic viscosity and the molecular weight, [η]M, is proportional to the hydrodynamic volume and therefore independent of substance. It follows that [η]₁M₁ = [η]₂M₂ is true at any given retention volume. Substitution of [η] using the Mark–Houwink equation gives K₁M₁^(1+a₁) = K₂M₂^(1+a₂), which can be used to relate the molecular weight of any two polymers using their Mark–Houwink constants (i.e. it is "universally" applicable for calibration). For example, if narrow molar mass distribution standards are available for polystyrene, these can be used to construct a calibration curve (typically log M vs. retention volume) in e.g. toluene at 40 °C. This calibration can then be used to determine the "polystyrene equivalent" molecular weight of a polyethylene sample if the Mark–Houwink parameters for both substances are known in this solvent at this temperature. References Polymer chemistry
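The conversion between "polymer B equivalent" and true molecular weights described above is a one-line computation. In the sketch below, the K and a values are illustrative placeholders, not recommended constants for any real polymer–solvent pair:

```python
# Universal calibration: [eta]1 * M1 = [eta]2 * M2 at equal retention volume,
# combined with [eta] = K * M**a, gives K1*M1**(1+a1) = K2*M2**(1+a2).

def universal_calibration(M1, K1, a1, K2, a2):
    """Convert the molecular weight of polymer 1 (the calibration standard)
    into the equivalent molecular weight of polymer 2."""
    return ((K1 / K2) * M1 ** (1.0 + a1)) ** (1.0 / (1.0 + a2))

M_ps = 1.0e5  # polystyrene-equivalent molecular weight from the calibration
# K and a below are illustrative placeholders only.
M_sample = universal_calibration(M_ps, K1=1.2e-4, a1=0.71, K2=4.0e-4, a2=0.73)
```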
Mark–Houwink equation
[ "Chemistry", "Materials_science", "Engineering" ]
538
[ "Materials science", "Polymer chemistry" ]
14,670,996
https://en.wikipedia.org/wiki/Time%E2%80%93temperature%20superposition
The time–temperature superposition principle is a concept in polymer physics and in the physics of glass-forming liquids. This superposition principle is used to determine temperature-dependent mechanical properties of linear viscoelastic materials from known properties at a reference temperature. The elastic moduli of typical amorphous polymers increase with loading rate but decrease when the temperature is increased. Curves of the instantaneous modulus as a function of time do not change shape as the temperature is changed but appear only to shift left or right. This implies that a master curve at a given temperature can be used as the reference to predict curves at various temperatures by applying a shift operation. The time–temperature superposition principle of linear viscoelasticity is based on the above observation. The application of the principle typically involves the following steps: experimental determination of frequency-dependent curves of isothermal viscoelastic mechanical properties at several temperatures and for a small range of frequencies; computation of a translation factor to correlate these properties for the temperature and frequency range; experimental determination of a master curve showing the effect of frequency for a wide range of frequencies; and application of the translation factor to determine temperature-dependent moduli over the whole range of frequencies in the master curve. The translation factor is often computed using an empirical relation first established by Malcolm L. Williams, Robert F. Landel and John D. Ferry (also called the Williams–Landel–Ferry or WLF model). An alternative model based on the Arrhenius equation is also used. The WLF model is related to macroscopic motion of the bulk material, while the Arrhenius model considers local motion of polymer chains. Some materials, polymers in particular, show a strong dependence of viscoelastic properties on the temperature at which they are measured. A plot of the elastic modulus of a noncrystallizing crosslinked polymer against the measurement temperature gives a curve that can be divided up into distinct regions of physical behavior. At very low temperatures, the polymer behaves like a glass and exhibits a high modulus. As the temperature increases, the polymer undergoes a transition from a hard "glassy" state to a soft "rubbery" state in which the modulus can be several orders of magnitude lower than it was in the glassy state. The transition from glassy to rubbery behavior is continuous, and the transition zone is often referred to as the leathery zone. The onset temperature of the transition zone, moving from glassy to rubbery, is known as the glass transition temperature, or Tg. In the 1940s Andrews and Tobolsky showed that there was a simple relationship between temperature and time for the mechanical response of a polymer. Modulus measurements are made by stretching or compressing a sample at a prescribed rate of deformation. For polymers, changing the rate of deformation will cause the curve described above to be shifted along the temperature axis. Increasing the rate of deformation will shift the curve to higher temperatures, so that the transition from a glassy to a rubbery state will happen at higher temperatures. It has been shown experimentally that the elastic modulus (E) of a polymer is influenced by the load and the response time.
Time–temperature superposition implies that the response-time function of the elastic modulus at a certain temperature resembles the shape of the same functions of adjacent temperatures. Curves of E vs. log(response time) at one temperature can be shifted to overlap with adjacent curves, as long as the data sets did not suffer from ageing effects during the test time (see Williams–Landel–Ferry equation). The Deborah number is closely related to the concept of time–temperature superposition. Physical principle Consider a viscoelastic body that is subjected to dynamic loading. If the excitation frequency is low enough, the viscous behavior is paramount and all polymer chains have the time to respond to the applied load within a time period. In contrast, at higher frequencies, the chains do not have the time to fully respond, and the resulting artificial viscosity results in an increase in the macroscopic modulus. Moreover, at constant frequency, an increase in temperature results in a reduction of the modulus due to an increase in free volume and chain movement. Time–temperature superposition is a procedure that has become important in the field of polymers for observing the dependence of the viscosity of a polymeric fluid on temperature. Rheology or viscosity can often be a strong indicator of the molecular structure and molecular mobility. Time–temperature superposition avoids the inefficiency of measuring a polymer's behavior over long periods of time at a specified temperature by utilizing the fact that at higher temperatures and shorter times the polymer will behave the same, provided there are no phase transitions. Time-temperature superposition Consider the relaxation modulus E at two temperatures T and T0 such that T > T0. At constant strain, the stress relaxes faster at the higher temperature. The principle of time-temperature superposition states that the change in temperature from T to T0 is equivalent to multiplying the time scale by a constant factor aT which is only a function of the two temperatures T and T0. In other words, E(t, T) = E(t/aT, T0). The quantity aT is called the horizontal translation factor or the shift factor and has the properties: aT = 1 at T = T0; aT < 1 for T > T0; and aT > 1 for T < T0. The superposition principle for complex dynamic moduli (G* = G' + i G'') at a fixed frequency ω is obtained similarly: G*(ω, T) = G*(aT ω, T0). A decrease in temperature increases the time characteristics while frequency characteristics decrease. Relationship between shift factor and intrinsic viscosities For a polymer in solution or "molten" state the following relationship can be used to determine the shift factor: aT = ηT / ηT0, where ηT0 is the viscosity (non-Newtonian) during continuous flow at temperature T0 and ηT is the viscosity at temperature T. The time–temperature shift factor can also be described in terms of the activation energy (Ea). By plotting the natural logarithm of the shift factor aT versus the reciprocal of temperature (in K), the slope of the curve can be interpreted as Ea/k, where k is the Boltzmann constant = 8.617×10−5 eV/K and the activation energy is expressed in terms of eV.
Williams, Landel and Ferry proposed the following relationship for aT in terms of (T − T0): log aT = −C1(T − T0) / (C2 + (T − T0)), where log is the decadic logarithm and C1 and C2 are positive constants that depend on the material and the reference temperature. This relationship holds only in the approximate temperature range [Tg, Tg + 100 °C]. To determine the constants, the factor aT is calculated for each component M′ and M″ of the complex measured modulus M*. A good correlation between the two shift factors gives the values of the coefficients C1 and C2 that characterize the material. If T0 = Tg: log aT = −Cg1(T − Tg) / (Cg2 + (T − Tg)), where Cg1 and Cg2 are the coefficients of the WLF model when the reference temperature is the glass transition temperature. The coefficients C1 and C2 depend on the reference temperature. If the reference temperature is changed from T0 to T0′, the new coefficients are given by C1′ = C1C2 / (C2 + T0′ − T0) and C2′ = C2 + T0′ − T0. In particular, to transform the constants from those obtained at the glass transition temperature to a reference temperature T0: C1 = Cg1Cg2 / (Cg2 + T0 − Tg) and C2 = Cg2 + T0 − Tg. These same authors have proposed that the "universal constants" Cg1 and Cg2 for a given polymer system be collected in a table. These constants are approximately the same for a large number of polymers and can be written Cg1 ≈ 15 and Cg2 ≈ 50 K. Experimentally observed values deviate from the values in the table. These orders of magnitude are useful and are a good indicator of the quality of a relationship that has been computed from experimental data. Construction of master curves The principle of time-temperature superposition requires the assumption of thermorheologically simple behavior (all curves have the same characteristic time variation law with temperature). From an initial spectral window [ω1, ω2] and a series of isotherms in this window, we can calculate the master curves of a material, which extend over a broader frequency range. An arbitrary temperature T0 is taken as a reference for setting the frequency scale (the curve at that temperature undergoes no shift). In the frequency range [ω1, ω2], if the temperature increases from T0, the complex modulus E′(ω) decreases. This amounts to exploring a part of the master curve corresponding to frequencies lower than ω1 while maintaining the temperature at T0. Conversely, lowering the temperature corresponds to the exploration of the part of the curve corresponding to high frequencies. For a reference temperature T0, shifts of the modulus curves have the amplitude log(aT). In the area of glass transition, aT is described by a homographic function of the temperature. The viscoelastic behavior is well modeled and allows extrapolation beyond the field of experimental frequencies, which typically ranges from 0.01 to 100 Hz. Shift factor using Arrhenius law The WLF model can be developed from Doolittle's concept of free volume and the thermal expansion coefficient, which has a discontinuity when going below Tg for these types of materials; this can be seen as a phase shift toward more of a solid state (the glassy region). The WLF model is inaccurate when the material is in the solid state, and an Arrhenius equation can be used instead. The shift factor is then defined by log aT = (Ea / (2.303 R))(1/T − 1/T0), where Ea is the activation energy, R is the universal gas constant, T0 is both the glass transition temperature and the reference temperature in kelvin (however, it is possible to use another reference as well), T is the variable temperature in kelvin, and 2.303 is the conversion factor between the natural and decadic logarithms. This Arrhenius law, below the glass transition temperature, applies to secondary transitions (relaxations) called β-transitions.
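Both shift-factor models can be evaluated directly. A minimal numerical sketch, with illustrative constants and temperatures (the WLF defaults are the approximate "universal" values quoted above, taking Tg as the reference):

```python
import numpy as np

R = 8.314  # universal gas constant, J/(mol K)

def log10_aT_wlf(T, T0, C1=15.0, C2=50.0):
    """WLF shift factor; C1 and C2 default to the approximate 'universal'
    values quoted above, which assume the reference temperature is Tg."""
    return -C1 * (T - T0) / (C2 + (T - T0))

def log10_aT_arrhenius(T, T0, Ea):
    """Arrhenius shift factor for temperatures below Tg (Ea in J/mol)."""
    return (Ea / (2.303 * R)) * (1.0 / T - 1.0 / T0)

# Shift an isotherm measured at T onto the master curve at T0:
T0, T = 373.0, 393.0                            # illustrative temperatures, K
times = np.logspace(-2, 2, 5)                   # experimental times, s
reduced = times / 10.0 ** log10_aT_wlf(T, T0)   # reduced times at T0
```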
Limitations For the superposition principle to apply, the sample must be homogeneous, isotropic and amorphous. The material must be linear viscoelastic under the deformations of interest, i.e., the deformation must be expressed as a linear function of the stress, by applying very small strains, e.g. 0.01%. To apply the WLF relationship, such a sample should be studied in the approximate temperature range [Tg, Tg + 100 °C], where α-transitions (relaxations) are observed. The study to determine aT and the coefficients C1 and C2 requires extensive dynamic testing at a number of scanning frequencies and temperatures, which represents at least a hundred measurement points. References See also Viscoelasticity Temperature dependence of liquid viscosity Williams-Landel-Ferry equation Polymer physics Glass physics Rubber properties
Time–temperature superposition
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,265
[ "Polymer physics", "Glass engineering and science", "Glass physics", "Condensed matter physics", "Polymer chemistry" ]
14,671,319
https://en.wikipedia.org/wiki/Topological%20indistinguishability
In topology, two points of a topological space X are topologically indistinguishable if they have exactly the same neighborhoods. That is, if x and y are points in X, and Nx is the set of all neighborhoods that contain x, and Ny is the set of all neighborhoods that contain y, then x and y are "topologically indistinguishable" if and only if Nx = Ny. (See Hausdorff's axiomatic neighborhood systems.) Intuitively, two points are topologically indistinguishable if the topology of X is unable to discern between the points. Two points of X are topologically distinguishable if they are not topologically indistinguishable. This means there is an open set containing precisely one of the two points (equivalently, there is a closed set containing precisely one of the two points). This open set can then be used to distinguish between the two points. A T0 space is a topological space in which every pair of distinct points is topologically distinguishable. This is the weakest of the separation axioms. Topological indistinguishability defines an equivalence relation on any topological space X. If x and y are points of X we write x ≡ y for "x and y are topologically indistinguishable". The equivalence class of x will be denoted by [x]. Examples By definition, any two distinct points in a T0 space are topologically distinguishable. On the other hand, regularity and normality do not imply T0, so we can find nontrivial examples of topologically indistinguishable points in regular or normal topological spaces. In fact, almost all of the examples given below are completely regular. In an indiscrete space, any two points are topologically indistinguishable. In a pseudometric space, two points are topologically indistinguishable if and only if the distance between them is zero. In a seminormed vector space, x ≡ y if and only if ‖x − y‖ = 0. For example, let L2(R) be the space of all measurable functions from R to R which are square integrable (see Lp space). Then two functions f and g in L2(R) are topologically indistinguishable if and only if they are equal almost everywhere. In a topological group, x ≡ y if and only if x−1y ∈ cl{e} where cl{e} is the closure of the trivial subgroup. The equivalence classes are just the cosets of cl{e} (which is always a normal subgroup). Uniform spaces generalize both pseudometric spaces and topological groups. In a uniform space, x ≡ y if and only if the pair (x, y) belongs to every entourage. The intersection of all the entourages is an equivalence relation on X which is just that of topological indistinguishability. Let X have the initial topology with respect to a family of functions {fα}. Then two points x and y in X will be topologically indistinguishable if the family does not separate them (i.e. fα(x) = fα(y) for all α). Given any equivalence relation on a set X there is a topology on X for which the notion of topological indistinguishability agrees with the given equivalence relation. One can simply take the equivalence classes as a base for the topology. This is called the partition topology on X. Specialization preorder The topological indistinguishability relation on a space X can be recovered from a natural preorder on X called the specialization preorder. For points x and y in X this preorder is defined by x ≤ y if and only if x ∈ cl{y} where cl{y} denotes the closure of {y}. Equivalently, x ≤ y if the neighborhood system of x, denoted Nx, is contained in the neighborhood system of y: x ≤ y if and only if Nx ⊂ Ny.
It is easy to see that this relation on X is reflexive and transitive and so defines a preorder. In general, however, this preorder will not be antisymmetric. Indeed, the equivalence relation determined by ≤ is precisely that of topological indistinguishability: x ≡ y if and only if x ≤ y and y ≤ x. A topological space is said to be symmetric (or R0) if the specialization preorder is symmetric (i.e. x ≤ y implies y ≤ x). In this case, the relations ≤ and ≡ are identical. Topological indistinguishability is better behaved in these spaces and easier to understand. Note that this class of spaces includes all regular and completely regular spaces. Properties Equivalent conditions There are several equivalent ways of determining when two points are topologically indistinguishable. Let X be a topological space and let x and y be points of X. Denote the respective closures of x and y by cl{x} and cl{y}, and the respective neighborhood systems by Nx and Ny. Then the following statements are equivalent: x ≡ y; for each open set U in X, U contains either both x and y or neither of them; Nx = Ny; x ∈ cl{y} and y ∈ cl{x}; cl{x} = cl{y}; x ∈ ∩Ny and y ∈ ∩Nx; ∩Nx = ∩Ny; x ∈ cl{y} and x ∈ ∩Ny; x belongs to every open set and every closed set containing y; a net or filter converges to x if and only if it converges to y. These conditions can be simplified in the case where X is a symmetric space. For these spaces (in particular, for regular spaces), the following statements are equivalent: x ≡ y; for each open set U, if x ∈ U then y ∈ U; Nx ⊂ Ny; x ∈ cl{y}; x ∈ ∩Ny; x belongs to every closed set containing y; x belongs to every open set containing y; every net or filter that converges to x converges to y. Equivalence classes To discuss the equivalence class of x, it is convenient to first define the upper and lower sets of x. These are both defined with respect to the specialization preorder discussed above. The lower set of x is just the closure of {x}: ↓x = cl{x}, while the upper set of x is the intersection of the neighborhood system at x: ↑x = ∩Nx. The equivalence class of x is then given by the intersection [x] = ↓x ∩ ↑x. Since ↓x is the intersection of all the closed sets containing x and ↑x is the intersection of all the open sets containing x, the equivalence class [x] is the intersection of all the open sets and closed sets containing x. Both cl{x} and ∩Nx will contain the equivalence class [x]. In general, both sets will contain additional points as well. In symmetric spaces (in particular, in regular spaces) however, the three sets coincide: [x] = cl{x} = ∩Nx. In general, the equivalence classes [x] will be closed if and only if the space is symmetric. Continuous functions Let f : X → Y be a continuous function. Then for any x and y in X, x ≡ y implies f(x) ≡ f(y). The converse is generally false (there are quotients of T0 spaces which are trivial). The converse will hold if X has the initial topology induced by f. More generally, if X has the initial topology induced by a family of maps {fα}, then x ≡ y if and only if fα(x) ≡ fα(y) for all α. It follows that two elements in a product space are topologically indistinguishable if and only if each of their components are topologically indistinguishable. Kolmogorov quotient Since topological indistinguishability is an equivalence relation on any topological space X, we can form the quotient space KX = X/≡. The space KX is called the Kolmogorov quotient or T0 identification of X. The space KX is, in fact, T0 (i.e. all points are topologically distinguishable).
Moreover, by the characteristic property of the quotient map, any continuous map f : X → Y from X to a T0 space factors through the quotient map q : X → KX. Although the quotient map q is generally not a homeomorphism (since it is not generally injective), it does induce a bijection between the topology on X and the topology on KX. Intuitively, the Kolmogorov quotient does not alter the topology of a space; it just reduces the point set until points become topologically distinguishable. See also References General topology Separation axioms
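As a worked instance of the quotient construction, consider the pseudometric example mentioned earlier; the Kolmogorov quotient is then the usual metric identification. A sketch:

```latex
% Pseudometric space (X, d):  x \equiv y \iff d(x, y) = 0.
% On KX = X/{\equiv} define \bar d([x], [y]) := d(x, y); this is well defined
% because, whenever x' \equiv x and y' \equiv y, the triangle inequality gives
d(x', y') \le d(x', x) + d(x, y) + d(y, y') = d(x, y),
% and the symmetric estimate gives d(x, y) \le d(x', y'), so the two agree.
% Hence (KX, \bar d) is a genuine metric space, and it is T0 (indeed T2).
```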
Topological indistinguishability
[ "Mathematics" ]
1,814
[ "General topology", "Topology" ]
14,672,486
https://en.wikipedia.org/wiki/Software%20testing%20outsourcing
Software testing outsourcing is software testing carried out by an independent company or a group of people not directly involved in the process of software development. Software testing is an essential phase of software development. However, it is often viewed as a non-core activity for most organizations. Outsourcing enables an organization to concentrate on its core development activities while external software testing experts handle the independent validation work. This offers many business benefits, which include independent assessment leading to enhanced delivery confidence, reduced time to market, lower infrastructure investment, predictable software quality, de-risking of deadlines, and increased time to focus on development. Software testing outsourcing can come in different forms: Full outsourcing, insourcing or remote insourcing of the entire test process (strategy, planning, execution, and closure), often referred to as a managed testing service or dedicated testing teams. Provision of additional resources for major projects. One-off tests, often related to load, stress or performance testing. Beta user acceptance testing, utilizing specialist focus groups coordinated by an external organization. Software testing outsourcing is utilized when a company does not have the resources or capabilities in-house to address testing needs. Outsourcing can be given to organizations with expertise in many areas, including testing software for web, mobile, printing, or even fax performance. Testing companies can provide outsourcing services located in the home country of the business or at many other onshore or offshore sites. A testing partner could mean someone in the same city or another city across the country. It could also mean onshore but rurally sourced. Near-shore options are located in the same time zone but in cheaper markets such as Mexico, while offshore testing usually takes place in more distant locations such as the Caribbean, Ukraine, and India. Onshore testing – software testing companies based in the home country; for US businesses this typically includes Canada. Offshore testing – software testing companies in a country other than the home country. Near-shore – software testing companies located outside of the home country but in the same or a similar time zone. Offshore software testing is considered more suitable when pricing is a key factor and when the task is simple enough for less experienced staff with limited direction. Offshore is also a more common choice when tight coordination is possible and the time zone overlap is not an impediment. If the testing is more complicated and requires focused coordination and frequent interfacing with internal teams, onshore services will be more critical. Security and cultural alignment are also important factors that are most often satisfied by an onshore partner. Pros of onshore software testing outsourcing: On-hand information: fluid and first-hand information from throughout the process. Face-to-face communication: enables on-time detection of emerging issues and efficient problem-solving. Effective communication: with no time and distance gap or cultural differences, there are almost no misunderstandings within teams. Time-effectiveness: a real-time work model with no time zone delays ensures efficiencies. Enhanced time to market: based on all of the above, speed to market is improved. Pros of offshore software testing outsourcing: Best choice for long-term projects: the results are typical but not proven.
Low costs: the cost of IT projects can be lower when outsourced to countries with low labor costs. Round-the-clock support: typically, offshore testing companies offer 24/7 support services. Fast scalability: access to a large pool of resources capable of fast test activation. Hybrid: offshore software testing outsourcing in execution with onshore oversight Some companies offer an onshore, local project lead to oversee an offshore outsourced team. Advantages of the onsite-offshore outsourced testing model If used right, this model can ensure that work proceeds on a project around the clock. Direct client interaction helps in better communication and also improves the business relationship. Cost-effective – offshore teams cost less than setting up the entire QA team onsite. Consider time zone differences and manage expectations accordingly. Top established global outsourcing cities According to Tholons Global Services - Top 50, in 2009, the top established and emerging global outsourcing cities in the testing function were: Bengaluru, India; Cebu City, Philippines; Shanghai, China; Beijing, China; Kraków, Poland; Ho Chi Minh City, Vietnam. Vietnam outsourcing Vietnam has become a major player in software outsourcing. Ho Chi Minh City's ability to meet clients' needs in scale and capacity, its maturing business environment, the country's stability in political and labor conditions, its increasing number of English speakers and its high service-level maturity make it attractive to foreign interests. Vietnam's software industry has maintained an annual growth rate of 30-50% during the past 10 years. From 2002 to 2013, revenue of the software industry increased to nearly US$3 billion and the hardware industry increased to US$36.8 billion. Many Vietnamese enterprises have been granted international certificates (CMM) for their software development. According to the Global Services Location Index 2017 by A.T. Kearney, Vietnam ranks sixth in the global software outsourcing market. Vietnam's position in this year's index reflects its growing popularity for business process outsourcing (BPO). Its BPO industry earned US$2 billion in 2015 and has grown annually by 20-25% in the past decade. Argentina outsourcing Argentina's software industry has experienced exponential growth in the last decade, positioning itself as one of the strategic economic activities in the country. As Argentina is just one hour ahead of North America's east coast, communication takes place in real time. Argentina's Internet culture and industry is among the most developed: Facebook penetration in Argentina ranks 3rd worldwide, and the country has the highest penetration of smartphones in Latin America (24%). Perhaps one of the most surprising facts is that the percentage that the internet contributes to Argentina's Gross National Product (2.2%) ranks 10th in the world. India outsourcing India's software outsourcing industry plays a critical role in the country's economy, contributing significantly to its Gross Domestic Product (GDP). As of 2021, the Information Technology and Business Process Management (IT-BPM) sector in India accounted for approximately 8% of the country's GDP. The sector also employed over 4.5 million people, making it one of the largest private-sector employers in the country. India's outsourcing industry is dominated by software services, which contributed about US$150 billion to the country's export revenues, according to the Economic Times. References Software testing Offshoring Outsourcing
Software testing outsourcing
[ "Engineering" ]
1,355
[ "Software engineering", "Software testing" ]
14,672,508
https://en.wikipedia.org/wiki/Hexachlorocyclohexa-2%2C5-dien-1-one
Hexachlorocyclohexa-2,5-dien-1-one, sometimes informally called hexachlorophenol (HCP), is an organochlorine compound. It can be prepared from phenol. Despite the informal name, the compound is not a phenol but a ketone. The informal name is derived from its method of preparation, which includes phenol as a reagent. Preparation HCP is normally produced by chlorination of phenol with chlorine in the presence of a metal chloride catalyst, such as ferric chloride. It can also be produced by alkaline hydrolysis of polychlorinated benzenes at high temperature and pressure, by conversion of diazonium salts of chlorinated anilines, or by chlorination of phenolsulfonic acids and benzenesulfonic acids followed by removal of the sulfonic acid group. The hydrolysis of HCP gives chloranil. References See also Pentachlorophenol Hexachlorobenzene Disinfectants Fungicides Organochlorides Ketones
Hexachlorocyclohexa-2,5-dien-1-one
[ "Chemistry", "Biology" ]
239
[ "Ketones", "Fungicides", "Biocides", "Functional groups" ]
14,673,055
https://en.wikipedia.org/wiki/Receptor%20editing
Receptor editing is a process that occurs during the maturation of B cells, which are part of the adaptive immune system. This process forms part of central tolerance and attempts to change the specificity of the antigen receptor of self-reactive immature B cells, in order to rescue them from programmed cell death, called apoptosis. It is thought that 20-50% of all peripheral naive B cells have undergone receptor editing, making it the most common method of removing self-reactive B cells. During maturation in the bone marrow, B cells are tested for interaction with self antigens, which is called negative selection. If the maturing B cells strongly interact with these self antigens, they undergo death by apoptosis. Negative selection is important to avoid the production of B cells that could cause autoimmune diseases. They can avoid apoptosis by modifying the sequence of light chain V and J genes (components of the antigen receptor) so that the receptor has a different specificity and may no longer recognize self antigens. This process of changing the specificity of the immature B cell receptor is called receptor editing. References Kleinfield R, Hardy RR, Tarlinton D (1986). "Recombination between an expressed immunoglobulin heavy-chain gene and a germline variable gene segment in a Ly1+ B-cell lymphoma". Nature 322 (6082): 843–6. Immunology
Receptor editing
[ "Biology" ]
300
[ "Immunology" ]
14,673,465
https://en.wikipedia.org/wiki/Xerocomellus%20chrysenteron
Xerocomellus chrysenteron, formerly known as Boletus chrysenteron or Xerocomus chrysenteron, is a small, edible, wild mushroom in the family Boletaceae. These mushrooms have tubes and pores instead of gills beneath their caps. It is commonly known as the red cracking bolete. Taxonomy This mushroom was first described and named as Boletus communis in 1789 by the French botanist Jean Baptiste François Pierre Bulliard. Two years later, in 1791, it was given the specific epithet chrysenteron by the same author, the species name coming from the Ancient Greek words khrysos "gold" and enteron "innards". In 1888, Lucien Quélet placed it in the new genus Xerocomus, retaining the chrysenteron epithet. This binomial was generally accepted until 1985, when Marcel Bon decided to resurrect the former specific epithet communis, which resulted in the binomial Xerocomus communis. While it recently resided back in the genus Boletus, as B. chrysenteron Bull., recent phylogenetic analysis supports its placement as the type species of the new genus Xerocomellus, described by Šutara in 2008. Description Young specimens often have a dark, dry surface and tomentose caps. When fully expanded, the brownish cap ranges from in diameter, with very little substance and thin flesh that turns a blue color when slightly cut or bruised. The caps become convex with maturity and flatten in old age. Cracks in the mature cap reveal a thin layer of light red flesh below the skin. The stems, 1 to 2 cm in diameter and 4 to 10 cm tall, have no ring and are mostly bright yellow, with the lower part covered in coral-red fibrils; they maintain a constant elliptical to fusiform cross-section along their length. The cream-colored stem flesh turns blue when cut. The species has large, yellow, angular pores, and produces an olive brown spore print. The fruit bodies of X. chrysenteron are prone to infestation by the bolete eater (Hypomyces chrysospermus). Distribution and habitat Xerocomellus chrysenteron grows singly or in small groups in hardwood/conifer woods from early summer to mid-winter. It is mycorrhizal with hardwood trees, often beech on well-drained soils. It is frequent in parts of the northern temperate zones. The species has been recorded in Taiwan. It has been introduced to New Zealand, where it grows in groups under introduced deciduous trees. This species may not be as common as once thought, having been often mistaken for the recently recognised B. cisalpinus Simonini, Ladurner & Peintner. Edibility Xerocomellus chrysenteron is considered edible but not desirable, due to its bland flavor and soft texture. The pores are recommended to be removed immediately after the mushrooms are picked, as they rapidly decay. Young fungi are palatable and suitable for drying, but they become slimy when cooked; mature specimens are rather tasteless and decay quickly. Similar species Xerocomellus chrysenteron cannot be identified with certainty without the aid of a microscope, as many intermediate forms occur between it and other taxa, in particular some forms of Boletus pruinatus and Hortiboletus rubellus. B. porosporus is also similar to this species, but it is easily separated on account of the whitish under layer and truncate (chopped-off) spores. This species is also easily confused with B. cisalpinus, B. declivitatum, B. dryophilus, B. mirabilis, B. truncatus, and B. zelleri. The caps are similar to Imleria badia, the bay bolete. See also List of North American boletes References External links Xerocomus chrysenteron About: Xerocomus chrysenteron (Bull.)
Quél. Boletaceae Fungi of Asia Fungi of Europe Fungi of New Zealand Fungi of North America Edible fungi Fungi described in 1789 Taxa named by Jean Baptiste François Pierre Bulliard Fungus species
Xerocomellus chrysenteron
[ "Biology" ]
870
[ "Fungi", "Fungus species" ]
13,524,569
https://en.wikipedia.org/wiki/The%20Ferns%20of%20Great%20Britain%20and%20Ireland
The Ferns of Great Britain and Ireland was a book published in 1855 that featured 51 plates of nature printing by Henry Bradbury. Description The text was a scientific description of all the varieties of ferns found in the British Isles. The author of this work was the botanist Thomas Moore; the editor was John Lindley. The book was released at a time of so-called "pteridomania" in Britain. Along with William Grosart Johnstone and Alexander Croall's Nature-Printed British Sea-Weeds (London, 1859–1860), the book featured Bradbury's innovative nature printing process. The publisher of the work was Bradbury and Evans. Bradbury patented the process after seeing the invention of Alois Auer, though the identity of its inventor grew to be a subject of debate. The technique was briefly in vogue, but did not persist in printing. Bradbury, along with Auer, believed the technique to be an enormous advance in printing. However, the plants and other subjects that could be successfully printed in this way were few. Ferns were one of the few plants with a form that could be replicated, the shape of the fronds being largely two-dimensional. In this work the ferns, a plant highly suited to the process, were impressed upon soft lead plates. These were electroplated to become the printing plate; the details of the fronds and stem were hand-coloured at this stage. The resulting image was in two colours and provided a highly detailed and realistic depiction of the species. See also Pteridomania List of Irish botanical illustrators List of Irish plant collectors References External links Botanicus: "The Ferns of Great Britain and Ireland" — online edition. Florae (publication) Flora of Great Britain Flora of Ireland Botany in Europe Botanical art British books 1855 books
The Ferns of Great Britain and Ireland
[ "Biology" ]
373
[ "Flora", "Florae (publication)" ]
13,524,576
https://en.wikipedia.org/wiki/Mu%20Piscium
Mu Piscium (μ Piscium) is a solitary, orange-hued star in the zodiac constellation of Pisces. It is visible to the naked eye with an apparent visual magnitude of 4.84. Based upon an annual parallax shift of 10.73 mas as seen from Earth, it is located about 304 light years from the Sun. Given this distance, it has a relatively high proper motion, advancing 296 mas per year across the sky. This is an evolved K-type giant star with a stellar classification of K4 III. It has an estimated 1.25 times the mass of the Sun and, at the age of 5.6 billion years, has expanded to about 37 times the Sun's radius. From this enlarged photosphere, it is radiating 186 times the Sun's luminosity at an effective temperature of 4,126 K. It has a magnitude 12.02 visual companion at an angular separation of 209.30 arc seconds along a position angle of 298°, as of 2012. Naming In Chinese, 外屏 (Wài Píng), meaning Outer Fence, refers to an asterism consisting of μ Piscium, δ Piscium, ε Piscium, ζ Piscium, ν Piscium, ξ Piscium and α Piscium. Consequently, the Chinese name for μ Piscium itself is 外屏四 (Wài Píng sì, the Fourth Star of Outer Fence). References K-type giants Pisces (constellation) Piscium, Mu Durchmusterung objects Piscium, 098 009138 007007 0434
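The quoted distance follows from the parallax by the standard relation d(pc) = 1/p(arcsec); a one-line check:

```python
# Distance from annual parallax: d[pc] = 1 / p[arcsec]; 1 pc = 3.2616 ly.
parallax_mas = 10.73
distance_pc = 1000.0 / parallax_mas   # about 93.2 parsecs
distance_ly = distance_pc * 3.2616    # about 304 light years, as quoted
```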
Mu Piscium
[ "Astronomy" ]
320
[ "Pisces (constellation)", "Constellations" ]
13,525,027
https://en.wikipedia.org/wiki/Dual%20norm
In functional analysis, the dual norm is a measure of size for a continuous linear function defined on a normed vector space. Definition Let X be a normed vector space with norm ‖·‖ and let X* denote its continuous dual space. The dual norm of a continuous linear functional f belonging to X* is the non-negative real number defined by any of the following equivalent formulas: ‖f‖ = sup{|f(x)| : ‖x‖ ≤ 1} = sup{|f(x)| : ‖x‖ < 1} = inf{c ∈ [0, ∞) : |f(x)| ≤ c‖x‖ for all x ∈ X} = sup{|f(x)| : ‖x‖ = 1} = sup{|f(x)| / ‖x‖ : x ≠ 0}, where sup and inf denote the supremum and infimum, respectively. The constant 0 map is the origin of the vector space X* and it always has norm 0. If X = {0} then the only linear functional on X is the constant 0 map and moreover, the sets in the last two rows will both be empty and consequently, their supremums will equal −∞ instead of the correct value of 0. Importantly, a linear function f is not, in general, guaranteed to achieve its norm on the closed unit ball {x : ‖x‖ ≤ 1}, meaning that there might not exist any vector u of norm ‖u‖ ≤ 1 such that ‖f‖ = |f(u)| (if such a vector does exist and if f ≠ 0, then u would necessarily have unit norm ‖u‖ = 1). R.C. James proved James's theorem in 1964, which states that a Banach space is reflexive if and only if every bounded linear function achieves its norm on the closed unit ball. It follows, in particular, that every non-reflexive Banach space has some bounded linear functional that does not achieve its norm on the closed unit ball. However, the Bishop–Phelps theorem guarantees that the set of bounded linear functionals that achieve their norm on the unit sphere of a Banach space is a norm-dense subset of the continuous dual space. The map f ↦ ‖f‖ defines a norm on X*. (See Theorems 1 and 2 below.) The dual norm is a special case of the operator norm defined for each (bounded) linear map between normed vector spaces. Since the ground field of X (ℝ or ℂ) is complete, X* is a Banach space. The topology on X* induced by ‖·‖ turns out to be stronger than the weak-* topology on X*. The double dual of a normed linear space The double dual (or second dual) X** of X is the dual of the normed vector space X*. There is a natural map φ : X → X**. Indeed, for each w* in X* define φ(v)(w*) := w*(v). The map φ is linear, injective, and distance preserving. In particular, if X is complete (i.e. a Banach space), then φ is an isometry onto a closed subspace of X**. In general, the map φ is not surjective. For example, if X is the Banach space consisting of bounded functions on the real line with the supremum norm, then the map φ is not surjective. (See ba space.) If φ is surjective, then X is said to be a reflexive Banach space. If 1 < p < ∞, then the space Lp is a reflexive Banach space. Examples Dual norm for matrices The Frobenius norm, defined by ‖A‖_F = (Σᵢ Σⱼ |aᵢⱼ|²)^(1/2), is self-dual, i.e., its dual norm is ‖·‖_F. The spectral norm, a special case of the induced norm when p = 2, is defined by the maximum singular value of a matrix, ‖A‖₂ = σ_max(A), and has the nuclear norm as its dual norm, which is defined by ‖B‖_* = Σᵢ σᵢ(B) for any matrix B, where σᵢ(B) denote the singular values. If p, q ∈ [1, ∞] satisfy 1/p + 1/q = 1, the Schatten p-norm on matrices is dual to the Schatten q-norm. Finite-dimensional spaces Let ‖·‖ be a norm on ℝⁿ. The associated dual norm, denoted ‖·‖*, is defined as ‖z‖* = sup{zᵀx : ‖x‖ ≤ 1}. (This can be shown to be a norm.) The dual norm can be interpreted as the operator norm of zᵀ, interpreted as a 1×n matrix, with the norm ‖·‖ on ℝⁿ, and the absolute value on ℝ: ‖z‖* = sup{|zᵀx| : ‖x‖ ≤ 1}. From the definition of dual norm we have the inequality zᵀx ≤ ‖x‖ ‖z‖*, which holds for all x and z. The dual of the dual norm is the original norm: we have ‖x‖** = ‖x‖ for all x. (This need not hold in infinite-dimensional vector spaces.) The dual of the Euclidean norm is the Euclidean norm, since sup{zᵀx : ‖x‖₂ ≤ 1} = ‖z‖₂. (This follows from the Cauchy–Schwarz inequality; for nonzero z, the value of x that maximises zᵀx over ‖x‖₂ ≤ 1 is z/‖z‖₂.) The dual of the ℓ∞-norm is the ℓ1-norm, and the dual of the ℓ1-norm is the ℓ∞-norm.
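As a short worked instance of these definitions (notation as above), the dual of the ℓ1-norm can be computed directly:

```latex
\|z\|_{*} \;=\; \sup_{\|x\|_{1}\le 1} z^{\mathsf T}x
          \;\le\; \Big(\max_i |z_i|\Big)\,\sup_{\|x\|_{1}\le 1}\sum_i |x_i|
          \;=\; \|z\|_{\infty},
\qquad\text{with equality at } x = \operatorname{sign}(z_k)\, e_k,\;
k \in \operatorname*{arg\,max}_i |z_i| .
```

This pairing of the ℓ1- and ℓ∞-norms is the boundary case of the Hölder duality discussed next.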
More generally, Hölder's inequality shows that the dual of the $\ell_p$-norm is the $\ell_q$-norm, where $q$ satisfies $1/p + 1/q = 1,$ that is, $q = p/(p-1).$ As another example, consider the $\ell_2$- or spectral norm on $\mathbb{R}^{m \times n}.$ The associated dual norm is $\|Z\|_{2*} = \sum_i \sigma_i(Z),$ which turns out to be the sum of the singular values $\sigma_i(Z).$ This norm is sometimes called the nuclear norm. Lp and ℓp spaces For $p \in [1, \infty],$ the $p$-norm (also called $\ell_p$-norm) of a vector $x = (x_n)_n$ is $\|x\|_p = \left( \sum_n |x_n|^p \right)^{1/p}.$ If $p, q \in [1, \infty]$ satisfy $1/p + 1/q = 1,$ then the $\ell_p$ and $\ell_q$ norms are dual to each other, and the same is true of the $L^p(\mu)$ and $L^q(\mu)$ norms, where $(X, \Sigma, \mu)$ is some measure space. In particular the Euclidean norm is self-dual, since $p = q = 2.$ For the quadratic norm $\sqrt{x^\top Q x},$ the dual norm is $\sqrt{z^\top Q^{-1} z},$ with $Q$ positive definite. For $p = 2,$ the $\ell_2$-norm is even induced by a canonical inner product $\langle \cdot, \cdot \rangle,$ meaning that $\|x\|_2 = \sqrt{\langle x, x \rangle}$ for all vectors $x.$ This inner product can be expressed in terms of the norm by using the polarization identity. On $\ell^2,$ this is the Euclidean inner product, defined by $\langle (x_n)_n, (y_n)_n \rangle_{\ell^2} = \sum_n x_n \overline{y_n},$ while for the space $L^2(X, \mu)$ associated with a measure space $(X, \Sigma, \mu),$ which consists of all square-integrable functions, this inner product is $\langle f, g \rangle_{L^2} = \int_X f(x) \overline{g(x)} \, \mathrm{d}\mu(x).$ The norms of the continuous dual spaces of $\ell^2$ and $L^2$ satisfy the polarization identity, and so these dual norms can be used to define inner products. With this inner product, this dual space is also a Hilbert space. Properties Given normed vector spaces $X$ and $Y,$ let $B(X, Y)$ be the collection of all bounded linear mappings (or operators) of $X$ into $Y.$ Then $B(X, Y)$ can be given a canonical norm, $\|f\| = \sup\{\|f(x)\| : x \in X, \|x\| \leq 1\}.$ A subset of a normed space is bounded if and only if it lies in some multiple of the unit sphere; thus $\|f\| < \infty$ for every $f \in B(X, Y).$ If $\alpha$ is a scalar, then $(\alpha f)(x) = \alpha \cdot f(x),$ so that $\|\alpha f\| = |\alpha| \, \|f\|.$ The triangle inequality in $Y$ shows that $\|(f_1 + f_2)(x)\| \leq \|f_1(x)\| + \|f_2(x)\| \leq \|f_1\| + \|f_2\|$ for every $x \in X$ satisfying $\|x\| \leq 1.$ This fact together with the definition of $\|\cdot\|$ implies the triangle inequality $\|f_1 + f_2\| \leq \|f_1\| + \|f_2\|.$ Since $\{\|f(x)\| : \|x\| \leq 1\}$ is a non-empty set of non-negative real numbers, $\|f\|$ is a non-negative real number. If $f \neq 0,$ then $f(x_0) \neq 0$ for some $x_0 \in X,$ which implies that $\|f(x_0)\| > 0$ and consequently $\|f\| > 0.$ This shows that $(B(X, Y), \|\cdot\|)$ is a normed space. Assume now that $Y$ is complete; we will show that $(B(X, Y), \|\cdot\|)$ is complete. Let $f_1, f_2, \ldots$ be a Cauchy sequence in $B(X, Y),$ so by definition $\|f_n - f_m\| \to 0$ as $n, m \to \infty.$ This fact together with the relation $\|f_n(x) - f_m(x)\| = \|(f_n - f_m)(x)\| \leq \|f_n - f_m\| \, \|x\|$ implies that $(f_n(x))$ is a Cauchy sequence in $Y$ for every $x \in X.$ It follows that for every $x \in X$ the limit $\lim_n f_n(x)$ exists in $Y,$ and so we will denote this (necessarily unique) limit by $f(x),$ that is: $f(x) = \lim_n f_n(x).$ It can be shown that $f$ is linear. If $\varepsilon > 0,$ then $\|f_n - f_m\| \leq \varepsilon$ for all sufficiently large integers $n$ and $m.$ It follows that $\|f(x) - f_m(x)\| \leq \varepsilon \|x\|$ for all sufficiently large $m.$ Hence $\|f(x)\| \leq (\|f_m\| + \varepsilon) \|x\|,$ so that $f \in B(X, Y)$ and $\|f - f_m\| \leq \varepsilon.$ This shows that $f_m \to f$ in the norm topology of $B(X, Y).$ This establishes the completeness of $B(X, Y).$ When $Y$ is the scalar field (i.e. $Y = \mathbb{C}$ or $Y = \mathbb{R}$), $B(X, Y)$ is the dual space $X^*$ of $X.$ Let $B = \{x \in X : \|x\| \leq 1\}$ denote the closed unit ball of a normed space $X,$ and let $B^* = \{x^* \in X^* : \|x^*\| \leq 1\}$ denote the closed unit ball of $X^*.$ When $Y$ is the scalar field then $B(X, Y) = X^*,$ so part (a) is a corollary of Theorem 1. Fix $x \in X.$ By the Hahn–Banach theorem there exists $y^* \in B^*$ such that $y^*(x) = \|x\|,$ but $|x^*(x)| \leq \|x\|$ for every $x^* \in B^*.$ (b) follows from the above. Since the open unit ball $U$ of $X$ is dense in $B,$ the definition of $\|x^*\|$ shows that $x^* \in B^*$ if and only if $|x^*(x)| \leq 1$ for every $x \in U.$ The proof for (c) now follows directly. As usual, let $d(x, y) = \|x - y\|$ denote the canonical metric induced by the norm on $X,$ and denote the distance from a point $x$ to the subset $S \subseteq X$ by $d(x, S) = \inf_{s \in S} d(x, s).$ If $f$ is a bounded linear functional on a normed space $X,$ then for every vector $x \in X,$ $|f(x)| = \|f\| \, d(x, \ker f),$ where $\ker f = \{k \in X : f(k) = 0\}$ denotes the kernel of $f.$ See also Notes References External links Notes on the proximal mapping by Lieven Vandenberghe Functional analysis Linear algebra Mathematical optimization Linear functionals
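The finite-dimensional dualities above are easy to check numerically. The following sketch is an added illustration, not part of the article; the use of NumPy and all variable names are assumptions. It verifies that the supremum defining the dual of the $\ell_1$-norm is attained at a signed standard basis vector and equals the $\ell_\infty$-norm:

```python
# Illustrative check that the dual of the l1-norm on R^n is the
# l-infinity norm: ||z||_* = sup{ z.x : ||x||_1 <= 1 } = max_i |z_i|.
import numpy as np

rng = np.random.default_rng(seed=0)
z = rng.normal(size=6)

# The l1 unit ball is a cross-polytope; a linear functional attains its
# supremum at an extreme point, i.e. at a signed standard basis vector.
extreme_points = [s * e for e in np.eye(6) for s in (1.0, -1.0)]
dual_norm = max(float(z @ x) for x in extreme_points)

assert np.isclose(dual_norm, np.max(np.abs(z)))
```

The same recipe, maximizing over the unit ball of the primal norm, reproduces the self-duality of the Euclidean norm and the $\ell_p$/$\ell_q$ pairing for other exponents.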
Dual norm
[ "Mathematics" ]
1,432
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Mathematical objects", "Mathematical relations", "Linear algebra", "Algebra", "Mathematical optimization" ]
13,525,100
https://en.wikipedia.org/wiki/Enterprise%20test%20software
Enterprise test software (ETS) is a type of software that electronics manufacturers and other manufacturers use to standardize product testing enterprise-wide, rather than simply in the test engineering department. It is designed to integrate and synchronize test systems with other enterprise functions such as research and development (R&D), new product introduction (NPI), manufacturing, and supply chain, overseeing the collaborative test processes between engineers and managers in their respective departments. Details Like most enterprise software subcategories, ETS represents an evolution away from custom-made, in-house software development by original equipment manufacturers (OEMs). It typically replaces a cumbersome, unsophisticated test management infrastructure that manufacturers have to redesign for every new product launch. Some large companies, such as Alcatel, Cisco, and Nortel, develop ETS systems internally to standardize and accelerate their test engineering activities, while others such as Harris Corporation and Freescale Semiconductor choose commercial off-the-shelf ETS options for advantages that include test data management and report generation. This need results from the extensive characterization efforts associated with IC design, characterization, validation, and verification. ETS accelerates design improvements through test system management and version control. ETS supports test system development and can be interconnected with manufacturing execution systems (MES), enterprise resource planning (ERP), and product lifecycle management (PLM) software packages to eliminate double data entry and enable real-time information sharing throughout all company departments. Enterprise-wide test applications ETS covers five major enterprise-wide test applications. Test and automation—By using ETS in conjunction with virtual instrumentation programming tools, design and test engineers avoid custom software programming unrelated to device characterization, and can thereby accelerate test system development. Product data management—In this application, the engineer collects, stores, aggregates, distributes, and shares product and test data in a central database. One significant advantage of ETS is reported to be product traceability. It is designed to make performance reporting easier, and to increase the chances that products are built correctly from the first prototype, reducing design iterations. Process control—The engineer manages the flow of activities between test, repair, and assembly stations, and enforces or suggests how a set of work instructions is to be applied. This application is designed to orchestrate product test strategies from validation to delivery. Multi-site collaboration—The purpose of this application is to avoid time zone difficulties and error-prone updates of new product and test system releases by sharing information with other departments and remote contract manufacturers electronically. It is designed to offer real-time visibility over global test operations. Test asset management—This application is intended to help design and test engineers track product configurations, tolerances, test sequences, and software being used by each test system, as well as distribute new updates electronically in real time. Engineers can update their test systems automatically, in sync with their new releases.
See also Commercial off-the-shelf Electronic test equipment Enterprise Data Management Enterprise resource planning Process control Test automation Software testing tools Electronic engineering Electronics manufacturing Hardware testing Quality control
Enterprise test software
[ "Technology", "Engineering" ]
703
[ "Electrical engineering", "Electronic engineering", "Electronics manufacturing", "Computer engineering" ]
13,525,570
https://en.wikipedia.org/wiki/Oracle%20WebCenter
Oracle WebCenter is Oracle's portfolio of user engagement software products built on top of the JSF-based Oracle Application Development Framework. There are three main products that make up the WebCenter portfolio, and they can be purchased together as a suite or individually: Oracle WebCenter Content (includes WebCenter Imaging) Oracle WebCenter Sites Oracle WebCenter Portal Each of these products is in a separate but connected market. WebCenter Content competes in the Enterprise Content Management market. WebCenter Sites competes in the Web Experience Management market, and WebCenter Portal competes in the self-service portal and content delivery market space. Different combinations of these products are frequently used together, so Oracle has bundled them together within the same WebCenter product family. Oracle WebCenter contains a set of components for building rich web applications, portals, and team collaboration and social sites. Oracle WebCenter is targeted at enterprise and larger accounts that have significant content management requirements and the need to deliver that information with internal or external portals, customer-facing websites or within integrated business applications. Oracle has made a particular effort to integrate WebCenter into its leading business applications such as E-Business Suite, PeopleSoft and JD Edwards so that content can be centrally managed in one location and shared across multiple applications. For the development community and advanced business users, WebCenter provides a development environment that includes WebCenter Framework and WebCenter Services, along with an out-of-the-box application for team collaboration and enterprise social networking. According to Oracle, this is the strategic portal product, eventually replacing Oracle Portal as well as the portal products acquired from BEA. Versions WebCenter 12c (12.2.1.4) released Oct 2019 WebCenter 12c (12.2.1.3) released Aug 2017 WebCenter 12c (12.2.1) released Oct 2016 WebCenter 11gR1 PS8 (11.1.1.9.0) released May 2014 WebCenter 11gR1 PS7 (11.1.1.8.0) released Aug 2013 WebCenter 11gR1 PS6 (11.1.1.7.0) released Apr 2013 WebCenter 11gR1 PS5 (11.1.1.6.0) released Feb 2012 WebCenter 11gR1 PS4 (11.1.1.5.0) released May 2011 WebCenter 11gR1 PS3 (11.1.1.4.0) released Jan 2011 WebCenter 11gR1 PS2 (11.1.1.3.0) released Apr 2010 WebCenter 11gR1 PS1 (11.1.1.2.0) released Nov 2009 WebCenter 11gR1 (11.1.1.1.0) released July 2, 2009 WebCenter 10g (10.1.3.2.0) released January 2007 Cost The product costs $70,000 per CPU for the WebCenter Services, and $125,000 per CPU for WebCenter Suite. In a production installation, users can expect to deploy at least 4 CPUs as a base system, with likely additional CPUs for development and testing. WebCenter includes embedded US licenses of Oracle Secure Enterprise Search, Oracle Universal Content Management, and Oracle BPEL Process Manager. In addition, WebCenter needs a database to store information: any supported and licensed database such as Oracle database, MS SQL Server or IBM Db2 will work. WebCenter product stack There are three major products in the WebCenter product stack. The base WebCenter Framework allows a user to embed portlets, ADF Taskflows and Pages, content, and customizable components in an Oracle ADF application. All Framework pieces are integrated into the Oracle JDeveloper IDE, providing access to these resources. WebCenter Services are a set of independently deployable collaboration services.
It incorporates Web 2.0 components such as content, collaboration, and communication services; the full list is provided below. WebCenter Services includes Oracle ADF user interface components (called Taskflows) that can be embedded directly into ADF applications. In addition, APIs can be utilized to create custom UIs and to integrate some of these services into non-ADF applications. Finally, WebCenter Spaces is a closed source application built on WebCenter Framework and Services that offers a prebuilt project collaboration solution. It can be compared with solutions like Microsoft SharePoint and Atlassian Confluence. There are limited mechanisms to extend this application. Note that there is a product called WebCenter Interaction which is not built on the core WebCenter stack; it is the former Plumtree portal product. Also, all Oracle portal products at Oracle are included in the WebCenter Suite, which is an umbrella of products. Products can be included in the suite regardless of whether they are built on the ADF-based WebCenter Framework. WebCenter furthermore comprises several editions, among others WebCenter Suite Plus, WebCenter Portal, WebCenter Content, WebCenter Sites, WebCenter Sites Satellite Server (a distributed caching mechanism which stores and assembles "pagelets," or elements of output), WebCenter Universal Content Management. Seven WebCenter Adapters and one WCE Management are available. WebCenter services capabilities Social Networking Services - Enables users to maximize productivity through collaboration. People Connection – Enables users to assemble their business networks like LinkedIn. Discussions Provides the ability to create and participate in threaded discussions. This is an embedded version of Forums provided by Jive Software. Announcements Enables users to post, personalize, and manage announcements. Instant Messaging and Presence (IMP) Provides the ability to observe the online presence status of other authenticated users (whether online, offline, busy, or idle) and to contact them. Blog Enables blogging functionality within the context of an application. Wiki Self-service, community-oriented content publishing and sharing. Shared Services - Provides features for both social networking and personal productivity. Documents Provides content management and storage capabilities, including content upload, file and folder creation and management, file check out, versioning, and so on. WebCenter Portal includes a restricted-use license of Oracle's enterprise content management product called WebCenter Content (formerly known as Universal Content Management). Links Provides the ability to view, access, and associate related information; for example, you can link to a solution document from a discussion thread. Lists Enables users to create, publish, and manage lists. (Available only in WebCenter Spaces). Page Provides the ability to create and manage pages at run time. Tags Provides the ability to assign one or more personally relevant keywords to a given page or document. This feature is similar to the del.icio.us website. Events Provides group calendars, which users can use to schedule meetings, appointments, and any other type of team get-together. This feature requires deployment of a separate calendaring server, which may be Oracle Beehive or Microsoft Exchange (Available only in WebCenter Spaces). Personal Productivity Services Focuses on the requirements of an individual, rather than a group.
Mail Provides integration with IMAP and SMTP mail servers to enable users to perform simple mail functions such as viewing, reading, creating, and deleting messages, creating messages with attachments, and replying to or forwarding existing messages. Notes Provides the ability to "jot down" and retain quick bits of personally relevant information (Available only in WebCenter Spaces). Recent Activities Provides a summary view of recent changes to documents, discussions, and announcements. RSS Provides the ability to publish content from WebCenter Web 2.0 Services as news feeds in RSS 2.0 and Atom 1.0 formats. Search Provides the ability to search tags, services, an application, or an entire site. This makes use of a license-limited version of Oracle's Secure Enterprise Search (SES) product. Worklist Provides a personal, at-a-glance view of business processes that require attention. These can include a request for document review and other types of business processes that come directly from enterprise applications. Official and de facto standards support WebCenter Framework supports the following standards: J2EE 1.4 and above (Java EE) JSR 168 and JSR 286 WSRP 1.0 and 2.0 JCR 1.0 JSF JSR 116 Release of WebCenter 11g R1 Patch Set 5 (PS5) On 22 February 2012 Oracle released WebCenter 11g Release 1 Patch Set 5. It includes many bug fixes in addition to several new enhancements. This patch set is mainly targeted at releasing customer bug fixes. Release of WebCenter 11g R1 Patch Set 3 (PS3) In January 2011 Oracle released WebCenter 11g Release 1 Patch Set 3. As the converged portal platform, this is a major new release with many features integrated from previously acquired portal products, including a greatly improved and flexible portal framework, improved GUI, personalization server, brand new navigation model, support for hierarchical pages and spaces, JSR 286, improved performance, and more. WebCenter Framework and Services lack support for some notable technologies: Internet Explorer 6.0 and the Eclipse IDE; however, Oracle JDeveloper is provided as part of the suite of tools. Notes External links WebCenter Official Home WebCenter Content WebCenter Sites WebCenter Portal WebCenter Imaging Oracle WebCenter Official Blog Oracle Application Server Oracle WebCenter page on the Oracle Wiki Oracle software Java platform Portal software Middleware Content management systems
Oracle WebCenter
[ "Technology", "Engineering" ]
1,989
[ "Computing platforms", "IT infrastructure", "Software engineering", "Middleware", "Java platform" ]
13,525,690
https://en.wikipedia.org/wiki/Intention%20economy
The intention economy is an approach to viewing markets and economies focusing on buyers as a scarce commodity. Customers' intention to buy drives the production of goods to meet their specific needs. It is also the title of Doc Searls' book The Intention Economy: When Customers Take Charge, published in May 2012. Concept Doc Searls coined the term in an article for Linux Journal. He wrote: "The Intention Economy grows around buyers, not sellers. It leverages the simple fact that buyers are the first source of money, and that they come ready-made. You don't need advertising to make them." Despite the advancement of the internet, businesses are still seller oriented. Even successful businesses like Google still have the point of view of the sellers, with nearly all of their revenue coming from advertising. Searls describes the current condition as a series of silos. The only option a buyer has is merely moving from silo to silo. Nothing has fundamentally changed. Some sites have characteristics similar to an intention economy. For example, the flight-booking service Priceline.com, which lets users name their price for an airline ticket, still functions like a "silo." In an intention economy, a site like Priceline might serve as an intermediary with the airline, coordinating new flight dates and times that correspond to the buyer's intentions. Companies need to be able to respond to a customer's precise needs. "Mass customization, in a lot of areas it is no longer inherently necessary that I get the exact same thing as a million other people. A computer manufacturer can be geared for assembling a computer just for me, to my specifications. A travel agency can construct a travel plan particularly for me." Examples Searls gives an example of an intention economy scenario: "A car rental customer should be able to say to the car rental market, 'I'll be skiing in Park City from March 20–25. I want to rent a 4-wheel drive SUV. I belong to Avis Wizard, Budget FastBreak and Hertz 1 Club. I don't want to pay up front for gas or get any insurance. What can any of you companies do for me?' — and have the sellers compete for the buyer's business." Trendwatching.com describes two problems with intention economy sites. "...Most of these 'information brokers' focus on only one product/category. Many of them also work (too) closely with a limited set of suppliers. Sites that seem to act like intention economy sites are not. For example, Priceline which lets customers name their own price and then matches it with the (pre-set) minimum prices that airlines, hotels and rental car companies have provided Priceline.com with this space remains wide open for intention-brokers who can handle a variety of intentions per customer, and genuinely operate on behalf of those customers." Priceline ended its Name Your Own Price model for flights in 2016, for car rentals in 2018, and for hotels in 2020. Trendwatching in 2007 listed examples of intention economy sites then online: Igglo lets potential buyers bid on houses that aren't on the market (unclear if still doing so in 2025) Zillow lets home owners name their "Make Me Move" price before putting their house on the market.
Eventful allowed users to collectively persuade performers to come to their town (no longer doing it in 2025) SellaBand allowed users to collectively sponsor and help manage a band for a cut of the revenue (last done in 2014) Kleemi allowed members to create lists of intentions that friends and vendors can comment on, review, or make offers on (no evidence of a functioning website since 2011) Infinite Buyer allows registered consumers to make their offer on seller listings (abandoned in 2019) Intently.co allows users to request any service anywhere. Current operational services: Angi (formerly Angie's List) - founded in 1995, Angi is a US-based online service that allows people to openly put out projects for bid to local contractors. Reactions With the emergence of artificial general intelligence and its increasing adoption in consumer information spaces, some have expressed more pessimism about the intention economy, suggesting that it "will test democratic norms by subjecting users to clandestine modes of subverting, redirecting, and intervening on commodified signals of intent." References External links Economic systems E-commerce
Intention economy
[ "Technology" ]
900
[ "Information technology", "E-commerce" ]
13,525,871
https://en.wikipedia.org/wiki/Epsilon%20Piscium
Epsilon Piscium (Epsilon Psc, ε Piscium, ε Psc) is the Bayer designation for a star in the constellation Pisces. It is a yellow-orange star of the G9 III or K0 III spectral type. This is a giant star, slightly cooler in surface temperature, yet brighter and larger than the Sun. It is a suspected occultation double, with both stars having the same magnitude, separated by 0.25 arcsecond. Naming In Chinese, 外屏 (Wài Píng), meaning Outer Fence, refers to an asterism consisting of ε Piscium, δ Piscium, ζ Piscium, μ Piscium, ν Piscium, ξ Piscium and α Piscium. Consequently, the Chinese name for ε Piscium itself is 外屏二 (Wài Píng èr), the Second Star of Outer Fence. In Japanese, 悠翔星 (Haruto-boshi), meaning "Soaring Forever Star," refers to the Japanese description of ε Piscium. Planetary system In 2021, a gas giant planetary candidate was detected by the radial velocity method. References K-type giants Pisces (constellation) Piscium, Epsilon BD+07 153 Piscium, 071 004906 006186 0294 Hypothetical planetary systems
Epsilon Piscium
[ "Astronomy" ]
265
[ "Pisces (constellation)", "Constellations" ]
13,526,782
https://en.wikipedia.org/wiki/SaltMod
SaltMod is a computer program for the prediction of the salinity of soil moisture, groundwater and drainage water, the depth of the watertable, and the drain discharge in irrigated agricultural lands, using different (geo)hydrologic conditions, varying water management options, including the use of ground water for irrigation, and several cropping rotation schedules. The water management options include irrigation, drainage, and the use of subsurface drainage water from pipe drains, ditches or wells for irrigation. Soil salinity models The majority of the computer models available for water and solute transport in the soil (e.g. Swatre, DrainMod) are based on the Richards differential equation for the movement of water in unsaturated soil, in combination with a differential salinity dispersion equation. The models require input of soil characteristics like the relation between unsaturated soil moisture content, water tension, hydraulic conductivity and dispersivity. These relations vary to a great extent from place to place and are not easy to measure. The models use short time steps and need at least a daily database of hydrological phenomena. Altogether this makes model application to a fairly large project the job of a team of specialists with ample facilities. Simplified salinity model: SaltMod Literature references (chronological) to case studies after 2000: Older examples of application can be found in: Salinity in the Nile Delta Integration of irrigation and drainage management Rationale There is a need for a computer program that is easier to operate and that requires a simpler data structure than most currently available models. Therefore, the SaltMod program was designed keeping in mind a relative simplicity of operation, to facilitate the use by field technicians, engineers and project planners instead of specialized geo-hydrologists. It aims at using input data that are generally available, or that can be estimated with reasonable accuracy, or that can be measured with relative ease. Although the calculations are done numerically and have to be repeated many times, the final results can be checked by hand using the formulas in the manual. SaltMod's objective is to predict the long-term hydro-salinity in terms of general trends, not to arrive at exact predictions of how, for example, the situation would be on the first of April in ten years from now. Further, SaltMod gives the option of the re-use of drainage and well water (e.g. for irrigation) and it can account for farmers' responses to waterlogging, soil salinity, water scarcity and over-pumping from the aquifer. Also it offers the possibility to introduce subsurface drainage systems at varying depths and with varying capacity so that they can be optimized. Other features of SaltMod are found in the next section. Principles Seasonal approach The SaltMod computation method is based on seasonal water balances of agricultural lands. Four seasons in one year can be distinguished, e.g. dry, wet, cold, hot, irrigation or fallow seasons. The number of seasons (Ns) can be chosen between a minimum of one and a maximum of four. The larger the number of seasons becomes, the larger is the number of input data required. The duration of each season (Ts) is given in number of months (0 < Ts < 12).
Day-to-day water balances are not considered for several reasons: daily inputs would require much information, which may not be readily available; the method is especially developed to predict long-term, not day-to-day, trends, and predictions for the future are more reliably made on a seasonal (long-term) than on a daily (short-term) basis, due to the high variability of short-term data; even though the precision of the predictions for the future may still not be very high, a lot is gained when the trend is sufficiently clear; for example, it need not be a major constraint to design appropriate soil salinity control measures when a certain salinity level, predicted by SaltMod to occur after 20 years, will in reality occur after 15 or 25 years. Hydrological data The method uses seasonal water balance components as input data. These are related to the surface hydrology (like rainfall, evaporation, irrigation, use of drain and well water for irrigation, runoff), and the aquifer hydrology (like upward seepage, natural drainage, pumping from wells). The other water balance components (like downward percolation, upward capillary rise, subsurface drainage) are given as output. The quantity of drainage water, as an output, is determined by two drainage intensity factors for drainage above and below drain level respectively (to be given with the input data), a drainage reduction factor (to simulate a limited operation of the drainage system), and the height of the water table, resulting from the computed water balance. Variation of the drainage intensity factors and the drainage reduction factor gives the opportunity to simulate the effect of different drainage options. Agricultural data The input data on irrigation, evaporation, and surface runoff are to be specified per season for three kinds of agricultural practices, which can be chosen at the discretion of the user: A: irrigated land with crops of group A B: irrigated land with crops of group B U: non-irrigated land with rainfed crops or fallow land The groups, expressed in fractions of the total area, may consist of combinations of crops or just of a single kind of crop. For example, as the A type crops one may specify the lightly irrigated cultures, and as the B type the more heavily irrigated ones, such as sugarcane and rice. But one can also take A as rice and B as sugarcane, or perhaps trees and orchards. The A, B and/or U crops can be taken differently in different seasons, e.g. A=wheat+barley in winter and A=maize in summer while B=vegetables in winter and B=cotton in summer. Un-irrigated land can be specified in two ways: (1) as U=1−A−B and (2) as A and/or B with zero irrigation. A combination can also be made. Further, a specification must be given of the seasonal rotation of the different land uses over the total area, e.g. full rotation, no rotation at all, or incomplete rotation. This is specified with a rotation index. The rotations are taken over the seasons within the year. To obtain rotations over the years it is advisable to introduce annual input changes. When a fraction A1, B1 and/or U1 in the first season differs from the fractions A2, B2 and/or U2 in the second season, because the irrigation regimes in the seasons differ, the program will detect that a certain rotation occurs. If one wishes to avoid this, one may specify the same fractions in all seasons (A2=A1, B2=B1, U2=U1), but the crops and irrigation quantities may have to be adjusted in proportion. Cropping rotation schedules vary widely in different parts of the world.
Creative combinations of area fractions, rotation indexes, irrigation quantities and annual input changes can accommodate many types of agricultural practices. Variation of the area fractions and/or the rotational schedule gives the opportunity to simulate the effect of different agricultural practices on the water and salt balance. Soil strata SaltMod accepts four different reservoirs, three of which are in the soil profile: a surface reservoir, an upper (shallow) soil reservoir or root zone, an intermediate soil reservoir or transition zone, and a deep reservoir or aquifer. The upper soil reservoir is defined by the soil depth from which water can evaporate or be taken up by plant roots. It can be equal to the rootzone. The root zone can be saturated, unsaturated, or partly saturated, depending on the water balance. All water movements in this zone are vertical, either upward or downward, depending on the water balance. (In a future version of SaltMod, the upper soil reservoir may be divided into two equal parts to detect the trend in the vertical salinity distribution.) The transition zone can also be saturated, unsaturated or partly saturated. All flows in this zone are vertical, except the flow to subsurface drains. If a horizontal subsurface drainage system is present, this must be placed in the transition zone, which is then divided into two parts: an upper transition zone (above drain level) and a lower transition zone (below drain level). If one wishes to distinguish an upper and lower part of the transition zone in the absence of a subsurface drainage system, one may specify in the input data a drainage system with zero intensity. The aquifer has mainly horizontal flow. Pumped wells, if present, receive their water from the aquifer only. Water balances The water balances are calculated for each reservoir separately, as shown in the article Hydrology (agriculture). The excess water leaving one reservoir is converted into incoming water for the next reservoir. The three soil reservoirs can be assigned different thicknesses and storage coefficients, to be given as input data. In a particular situation, the transition zone or the aquifer need not be present. Then, it must be given a minimum thickness of 0.1 m. The depth of the water table, calculated from the water balances, is assumed to be the same for the whole area. If this assumption is not acceptable, the area must be divided into separate units. Under certain conditions, the height of the water table influences the water balance components. For example, a rise of the water table towards the soil surface may lead to an increase of evaporation, surface runoff, and subsurface drainage, or a decrease of percolation losses from canals. This, in turn, leads to a change of the water balance, which again influences the height of the water table, etc. This chain of reactions is one of the reasons why SaltMod has been developed into a computer program. It takes a number of repeated calculations (iterations) to find the correct equilibrium of the water balance, which would be a tedious job if done by hand. Other reasons are that a computer program facilitates the computations for different water management options over long periods of time (with the aim to simulate their long-term effects) and for trial runs with varying parameters. Drains, wells, and re-use The sub-surface drainage can be accomplished through drains or pumped wells. The subsurface drains are characterized by drain depth and a drainage capacity factor.
The drains are located in the transition zone. The subsurface drainage facility can be applied to natural or artificial drainage systems. The functioning of an artificial drainage system can be regulated through a drainage control factor. When no drainage system is present, installing drains with zero capacity offers the opportunity to obtain separate water and salt balances for an upper and lower part of the transition zone. The pumped wells are located in the aquifer. Their functioning is characterized by the well discharge. The drain and well water can be used for irrigation through a re-use factor. This may affect the salt balance and the irrigation efficiency or sufficiency. Salt balances The salt balances are calculated for each reservoir separately. They are based on their water balances, using the salt concentrations of the incoming and outgoing water. Some concentrations must be given as input data, like the initial salt concentrations of the water in the different soil reservoirs, of the irrigation water and of the incoming ground water in the aquifer. The concentrations are expressed in terms of electric conductivity (EC in dS/m). When the concentrations are known in terms of g salt/L water, the rule of thumb 1 g/L -> 1.7 dS/m can be used. Usually, salt concentrations of the soil are expressed in ECe, the electric conductivity of an extract of a saturated soil paste (saturation extract). In SaltMod, the salt concentration is expressed as the EC of the soil moisture when saturated under field conditions. As a rule, one can use the conversion rate EC : ECe = 2 : 1. Salt concentrations of outgoing water (either from one reservoir into the other or by subsurface drainage) are computed on the basis of salt balances, using different leaching or salt mixing efficiencies to be given with the input data. The effects of different leaching efficiencies can be simulated by varying their input value. If drain or well water is used for irrigation, the method computes the salt concentration of the mixed irrigation water in the course of the time and the subsequent effect on the soil and ground water salinities, which again influences the salt concentration of the drain and well water. By varying the fraction of used drain or well water (to be given in the input data), the long-term effect of different fractions can be simulated. The dissolution of solid soil minerals or the chemical precipitation of poorly soluble salts is not included in the computation method, but to some extent it can be accounted for through the input data, e.g. by increasing or decreasing the salt concentration of the irrigation water or of the incoming water in the aquifer. Farmers' responses If required, farmers' responses to water logging and soil salinity can be automatically accounted for. The method can gradually decrease: (1) the amount of irrigation water applied when the water table becomes shallower; (2) the fraction of irrigated land when the available irrigation water is scarce; (3) the fraction of irrigated land when the soil salinity increases; for this purpose, the salinity is given a stochastic interpretation. Response (1) is different for ponded (submerged) rice (paddy) and "dry foot" crops. The responses influence the water and salt balances, which, in their turn, slow down the process of water logging and salinization. Ultimately an equilibrium situation will be brought about. The user can also introduce farmers' responses by manually changing the relevant input data.
Perhaps it will be useful first to study the automatic farmers' responses and their effect and thereafter decide what the farmers' responses will be in the view of the user. Annual input changes The program may run with fixed input data for the number of years determined by the user. This option can be used to predict future developments based on long-term average input values, e.g. rainfall, as it will be difficult to assess the future values of the input data year by year. The program also offers the possibility to follow historic records with annually changing input values (e.g. rainfall, irrigation, agricultural practices), in which case the calculations must be made year by year. If this possibility is chosen, the program creates transfer files by which the final conditions of the previous year (e.g. water table and salinity) are automatically used as the initial conditions for the subsequent period. This facility makes it possible to use various generated rainfall sequences drawn randomly from a known rainfall probability distribution and obtain a stochastic prediction of the resulting output parameters. If the computations are made with annual changes, not all input parameters can be changed, notably the thickness of the soil reservoirs and their total porosities, as changing these would cause illogical shifts in the water and salt balances. Output data The output of SaltMod is given for each season of any year during any number of years, as specified with the input data. The output data comprise hydrological and salinity aspects. The data are filed in the form of tables that can be inspected directly or further analyzed with spreadsheet programs. As the soil salinity is very variable from place to place, SaltMod includes frequency distributions in the output; such distributions can be plotted with the CumFreq program. The program offers the possibility to develop a multitude of relations between varied input data, resulting outputs and time. However, as it is not possible to foresee all different uses that may be made, the program offers only a limited number of standard graphics. The program is designed to make use of spreadsheet programs for the detailed output analysis, in which the relations between various input and output variables can be established according to the scenario developed by the user. Although the computations need many iterations, all the end results can be checked by hand using the equations presented in the manual. See also Spragg Bags References External links The model can be freely downloaded; the manual is freely available, also as a PDF file. Soil chemistry Soil physics Environmental chemistry Environmental soil science Agricultural soil science Hydrology models Irrigation Drainage Land management Land reclamation Scientific simulation software
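To make the seasonal bookkeeping described above concrete, the following toy sketch iterates a salt balance for a single root-zone reservoir. It is an added illustration, not SaltMod code: the function name, parameter values, and the one-reservoir simplification are all assumptions, though the leaching-efficiency idea and the EC conventions follow the text.

```python
# Toy seasonal salt balance for one root-zone reservoir, in the spirit
# of the SaltMod description above (illustrative only, not SaltMod).
def season_step(ec, irrigation, ec_irr, percolation,
                depth=1.0, porosity=0.4, leaching_eff=0.8):
    """Return the new EC (dS/m) of the soil moisture after one season.
    Water terms are in metres of water per season over the area."""
    water = depth * porosity              # stored soil water, m
    salt = ec * water                     # salt stock, (dS/m)*m
    salt += irrigation * ec_irr           # salt imported by irrigation
    # Percolating water leaves at a concentration governed by the
    # leaching (mixing) efficiency, as in the article's salt balances.
    salt -= percolation * leaching_eff * (salt / water)
    return salt / water

ec = 1.7  # initial EC; rule of thumb from the text: 1 g/L -> 1.7 dS/m
for _ in range(30):
    ec = season_step(ec, irrigation=0.6, ec_irr=1.0, percolation=0.2)
print(round(ec, 2))  # settles at an equilibrium, as the text describes
```

Repeating the step with different leaching efficiencies or re-use fractions mimics, in miniature, the scenario comparisons the program is designed for.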
SaltMod
[ "Physics", "Chemistry", "Biology", "Environmental_science" ]
3,498
[ "Hydrology", "Applied and interdisciplinary physics", "Biological models", "Environmental chemistry", "Soil physics", "Soil chemistry", "Hydrology models", "nan", "Environmental soil science", "Environmental modelling" ]
13,526,813
https://en.wikipedia.org/wiki/Fermat%27s%20Last%20Theorem%20in%20fiction
The problem in number theory known as "Fermat's Last Theorem" has repeatedly received attention in fiction and popular culture. It was proved by Andrew Wiles in 1994. Prose fiction The theorem plays a key role in the 1948 mystery novel Murder by Mathematics by Hector Hawton. Arthur Porges' short story "The Devil and Simon Flagg" features a mathematician who bargains with the Devil that the latter cannot produce a proof of Fermat's Last Theorem within twenty-four hours. The devil is not successful and is last seen beginning a collaboration with the hero. The story was first published in 1954 in The Magazine of Fantasy and Science Fiction. In Douglas Hofstadter's 1979 book Gödel, Escher, Bach, the statement, "I have discovered a truly remarkable proof of this theorem which this margin is too small to contain" is repeatedly rephrased and satirized, including a pun on "fermata". In Robert Forward's 1984/1985 science fiction novel Rocheworld, Fermat's Last Theorem is unproved far enough into the future for interstellar explorers to describe it to one of the mathematically inclined natives of another star system, who finds a proof. In the 2003 book The Oxford Murders by Guillermo Martinez, Wiles's announcement in Cambridge of his proof of Fermat's Last Theorem forms a peripheral part of the action. In Stieg Larsson's 2006 book The Girl Who Played with Fire, the main character Lisbeth Salander is mesmerized by the theorem. Fields medalist Timothy Gowers criticized Larsson's portrayal of the theorem as muddled and confused. In Jasper Fforde's 2007 book First Among Sequels, 9-year-old Tuesday Next, seeing the equation on the sixth-form math classroom's chalkboard and thinking it homework, finds a simple counterexample. Arthur C. Clarke and Frederik Pohl's 2008 novel The Last Theorem tells of the rise to fame and world prominence of a young Sri Lankan mathematician who devises an elegant proof of the theorem. Television "The Royale", an episode (first aired 27 March 1989) of Star Trek: The Next Generation, begins with Picard attempting to solve the puzzle in his ready room; he remarks to Riker that the theorem had remained unproven for 800 years. The captain ends the episode with the line "Like Fermat's theorem, it is a puzzle we may never solve." Wiles' proof was released five years after the episode aired. The theorem was again mentioned in a subsequent Star Trek: Deep Space Nine episode called "Facets" in June 1995, in which Jadzia Dax comments that one of her previous hosts, Tobin Dax, had "the most original approach to the proof since Wiles over 300 years ago". A sum, proved impossible by the theorem, appears in the 1995 episode of The Simpsons, "Treehouse of Horror VI". In the three-dimensional world in "Homer3", the equation $1782^{12} + 1841^{12} = 1922^{12}$ is visible, just as the dimension begins to collapse. The joke is that the twelfth root of the sum does evaluate to 1922 due to rounding errors when entered into most handheld calculators. A second "counterexample" appeared in the 1998 episode, "The Wizard of Evergreen Terrace": $3987^{12} + 4365^{12} = 4472^{12}$, again forming a near-miss that appears true when evaluated on a handheld calculator. In the Doctor Who 2010 episode "The Eleventh Hour", the Doctor transmits a proof of Fermat's Last Theorem by typing it in just a few seconds on a laptop, to prove his genius to a collection of world leaders discussing the latest threat to the human race. Films Fermat's equation appears in the 2000 film Bedazzled with Elizabeth Hurley and Brendan Fraser.
Hurley plays the devil who, in one of her many forms, appears as a school teacher who assigns Fermat's Last Theorem as a homework problem. In the 2008 film adaptation of The Oxford Murders, Fermat's Last Theorem became "Bormat's". Theater In Tom Stoppard's 1993 play Arcadia, Septimus Hodge poses the problem of proving Fermat's Last Theorem to the precocious Thomasina Coverly (who is perhaps a mathematical prodigy), in an attempt to keep her busy. Thomasina responds that Fermat had no proof and claimed otherwise in order to torment later generations. Shortly after Arcadia opened in London, Andrew Wiles announced his proof of Fermat's Last Theorem, a coincidence of timing that resulted in news stories about the proof quoting Stoppard. Fermat's Last Tango is a 2000 stage musical by Joanne Sydney Lessner and Joshua Rosenblum. Protagonist "Daniel Keane" is a fictionalized Andrew Wiles. The characters include Fermat, Pythagoras, Euclid, Newton, and Gauss, the singing, dancing mathematicians of "the aftermath". References Fiction about science Fermat's Last Theorem
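The Simpsons near-misses mentioned in the Television section are easy to verify. The following check is an added illustration (not from the article); it uses exact integer arithmetic, which a 10-digit calculator cannot:

```python
# Verify that the two Simpsons equations are near-misses, not actual
# counterexamples to Fermat's Last Theorem.
for a, b, c in [(1782, 1841, 1922), (3987, 4365, 4472)]:
    lhs = a ** 12 + b ** 12      # exact integer arithmetic
    print(lhs == c ** 12,        # False for both triples
          lhs ** (1 / 12))       # float 12th root, which rounds to c
```

The first equation even fails on parity grounds: the left side is odd (even plus odd), while $1922^{12}$ is even.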
Fermat's Last Theorem in fiction
[ "Mathematics" ]
1,038
[ "Theorems in number theory", "Fermat's Last Theorem" ]
13,526,853
https://en.wikipedia.org/wiki/Column%20wave
The column wave is a 16th-century stage machine created to mimic movement of the ocean. Developed by Nicola Sabbatini, the machine was an effective way to give the appearance of a wave-filled sea. It was used to great effect through the following centuries. The machine was documented in Pratica di fabricar scene e macchine ne' teatri as the third method of showing a sea. The column wave was built by attaching slightly bent bars through cylinders made of wood and burlap. The burlap was painted blue and black (with hints of silver for the whitecaps). These tubes were attached to cranks that, when turned, made the stretched burlap quiver while the disks created a flowing motion. Combining several of these in a row gave the audience a more realistic sea than had been seen on stage before. References Sabbatini, N. Pratica di fabricar scene e macchine ne' teatri, Ravenna, 1638. https://web.archive.org/web/20010104221100/http://www.acs.appstate.edu/orgs/spectacle/index.html Scenic design
Column wave
[ "Engineering" ]
247
[ "Scenic design", "Design" ]
13,527,273
https://en.wikipedia.org/wiki/Upsilon%20Piscium
Upsilon Piscium is a solitary, white-hued star in the zodiac constellation of Pisces. It is faintly visible to the naked eye, having an apparent visual magnitude of +4.75. Based upon an annual parallax shift of about 10.6 mas as seen from Earth, it is located about 308 light years from the Sun. The star is drifting further away with a heliocentric radial velocity of +6 km/s. This is an ordinary A-type main sequence star with a stellar classification of A3 V. It is 461 million years old – about 98% of the way through its main sequence lifetime – and is spinning with a projected rotational velocity of 91 km/s. The star has 2.8 times the mass of the Sun, about 2.2 times the Sun's radius, and is radiating 117 times the Sun's luminosity from its photosphere at an effective temperature of 9183 K. Naming υ Piscium is the Bayer designation for this star, which is Latinized as Upsilon Piscium. It has the Flamsteed designation 90 Piscium. In Chinese, 奎宿 (Kuí Sù), meaning Legs (asterism), refers to an asterism composed of υ Piscium, η Andromedae, 65 Piscium, ζ Andromedae, ε Andromedae, δ Andromedae, π Andromedae, ν Andromedae, μ Andromedae, β Andromedae, σ Piscium, τ Piscium, 91 Piscium, φ Piscium, χ Piscium and ψ¹ Piscium. Consequently, the Chinese name for υ Piscium itself is 奎宿十三 (Kuí Sù shísān), the Thirteenth Star of Legs. References A-type main-sequence stars Pisces (constellation) Piscium, Upsilon Durchmusterung objects Piscium, 090 007964 006193 0383
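As a worked check (added here; the parallax figure above is inferred from the quoted 308-light-year distance rather than taken from the article), the standard parallax–distance relation gives:

```latex
d = \frac{1\,\text{pc}}{p\,[\text{arcsec}]}
  = \frac{1}{0.0106}\,\text{pc} \approx 94\,\text{pc}
  \approx 94 \times 3.26\,\text{ly} \approx 308\,\text{ly}
```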
Upsilon Piscium
[ "Astronomy" ]
404
[ "Pisces (constellation)", "Constellations" ]
13,527,566
https://en.wikipedia.org/wiki/Heine%27s%20identity
In mathematical analysis, Heine's identity, named after Heinrich Eduard Heine, is a Fourier expansion of a reciprocal square root which Heine presented as $\frac{1}{\sqrt{z - \cos\psi}} = \frac{\sqrt{2}}{\pi} \sum_{m=-\infty}^{\infty} Q_{m-\frac{1}{2}}(z) e^{im\psi},$ where $Q_{m-\frac{1}{2}}$ is a Legendre function of the second kind, which has degree, $m - \frac{1}{2},$ a half-integer, and argument, $z,$ real and greater than one. This expression can be generalized for arbitrary half-integer powers as follows: $(z - \cos\psi)^{n-\frac{1}{2}} = \sqrt{\frac{2}{\pi}} \frac{(z^2 - 1)^{\frac{n}{2}}}{\Gamma(\frac{1}{2} - n)} \sum_{m=-\infty}^{\infty} \frac{\Gamma(m - n + \frac{1}{2})}{\Gamma(m + n + \frac{1}{2})} Q_{m-\frac{1}{2}}^{n}(z) e^{im\psi},$ where $\Gamma$ is the Gamma function. References Special functions Mathematical identities
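As a consistency check (added here, and relying on the formulas as reconstructed above), setting $n = 0$ in the generalized expansion recovers the original identity, since $\Gamma(\tfrac{1}{2}) = \sqrt{\pi}$ and the ratio of Gamma functions reduces to 1:

```latex
(z - \cos\psi)^{-\frac{1}{2}}
  = \sqrt{\frac{2}{\pi}} \, \frac{1}{\sqrt{\pi}}
    \sum_{m=-\infty}^{\infty} Q_{m-\frac{1}{2}}(z)\, e^{im\psi}
  = \frac{\sqrt{2}}{\pi} \sum_{m=-\infty}^{\infty} Q_{m-\frac{1}{2}}(z)\, e^{im\psi}.
```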
Heine's identity
[ "Mathematics" ]
90
[ "Special functions", "Combinatorics", "Mathematical problems", "Mathematical identities", "Mathematical theorems", "Algebra" ]
13,527,766
https://en.wikipedia.org/wiki/Conservation%20officer
A conservation officer is a law enforcement officer who protects wildlife and the environment. A conservation officer may also be referred to as an environmental technician/technologist, game warden, park ranger, forest watcher, forest guard, forester, gamekeeper, investigator, wilderness officer, wildlife officer, or wildlife trooper. History Conservation officers can be traced back to the Middle Ages (see gamekeeper). Conservation law enforcement goes back to King Canute, who enacted a forest law that made unauthorized hunting punishable by death. In 1861, Archdeacon Charles Thorp arranged the purchase of some of the Farne Islands off the north-east coast of England and the employment of a warden to protect threatened seabird species. The modern history of the office is linked to that of the conservation movement and has varied greatly across the world. History in New York State Conservation officers in New York State are known as "environmental conservation officers", or ECOs. The position was created in the late nineteenth century. Originally, they were known as "game protectors". The first game protectors recorded comprised a group of eight men authorized to arrest anyone who killed wildlife on protected land. Their job was to protect game and catch poachers. They also chose to protect streams from pollution. In 1960, their title was changed to "conservation officers", then in 1970, they were renamed "environmental conservation officers", after the Conservation Department and the State Health Department merged to become the "Department of Environmental Conservation". At the same time, the role's status was changed, giving ECOs more legal power than they had previously had. Education Conservation officers generally have a degree in areas specific to criminal justice, fish and wildlife management, recreation management, wildlife resources, or a science major related to these. Most start out their careers as a trainee under the supervision of an experienced conservation officer. After graduation and completion of the trainee program, many go on to law enforcement training to become a peace officer. In America, conservation officers must also take and pass the state civil service exam for ECOs. The Western Conservation Law Enforcement Academy is the academy from which all officers employed in western Canada (Yukon, British Columbia, Alberta, Saskatchewan and Manitoba) must graduate in order to be appointed as officers in their respective jurisdictions. The program is 6 months long, with about 2 of those months spent in on-the-job training with a direct supervisor. Training includes dress and deportment, investigations, firearm handling, use of force, swiftwater rescue, off-road vehicle use, search warrant application and execution, and much more. Recognizing the wardens' roles As noted at the North American Game Warden Museum, confronting armed poachers in rural and even remote locations can be lonely, dangerous and even fatal work for game wardens. Recognition of the ultimate sacrifice of these officers at this museum is considered to be important, concomitant to recognition at the National Law Enforcement Officers Memorial. Officers are exposed to other risks beyond being killed by hunters, trappers and armed fishermen. Motor vehicle, boating, snowmobile and airplane accidents, animal attacks, drowning, and hypothermia are other risks they face while on duty. In North America game wardens are typically employees of state or provincial governments. 26 of the 50 U.S.
states have government departments entitled Department of Natural Resources or a similar title. These departments typically patrol state or provincial parks and public lands and waterways dedicated to hunting and fishing, and also enforce state or provincial game and environmental laws on private property. In some states such as Maryland, Massachusetts, and Connecticut, conservation officers serve in the role of marine law enforcement as well, responsible for the enforcement of local, state, and federal boating laws along with search and rescue and homeland security. Game wardens/conservation officers are front and center in keeping out (or in check) invasive species. In an increasingly interconnected and globalized world, their concerns are much more comprehensive than local enforcement. While conservation officers enforce wildlife, hunting, and game laws, they have transitioned to aiding other law enforcement agencies with drug enforcement, serving warrants, and, at times, homeland security efforts. They also enforce broader conservation laws and treaties, such as the Endangered Species Act and the Migratory Bird Treaty Act of 1918 in the United States, or the Wild Animal and Plant Protection and Regulation of International and Interprovincial Trade Act in Canada, which implements the Convention on International Trade in Endangered Species of Wild Fauna and Flora. As necessary, they will work in tandem with appropriate national or federal agencies, such as the U.S. Fish and Wildlife Service or Environment Canada. Conservation officers by region Australia Australian Capital Territory Environment, Planning and Sustainable Development Directorate Northern Territory Department of Environment, Parks and Water Security Department of Primary Industries (New South Wales) Queensland Department of Environment and Science South Australia Department for Environment and Water Tasmania Department of Natural Resources and Environment Victoria Department of Energy, Environment and Climate Action Western Australia Conservation and Parks Commission Western Australia Department of Biodiversity, Conservation and Attractions Canada British Columbia Conservation Officer Service Ontario Conservation Officers Prince Edward Island Conservation Officers Protection de la faune du Québec (Québec fish and wildlife services) Manitoba conservation officers Alberta fish and wildlife services New Brunswick conservation officers Saskatchewan Conservation Officer Service Yukon department of fish and wildlife services North West territories fish and game Nunavut wildlife protection officers Canadian Wildlife and environmental protection officer (Canadian game officers) Department of Fisheries and Oceans Canada officers.
Canadian Park wardens British Columbia Park ranger services NCC conservation officers United States Federal: United States Forest Service United States Fish and Wildlife Service State: India Andaman and Nicobar Department of Wildlife and Forests Andhra Pradesh Forest Department Assam Department of Environment and Forests Arunachal Pradesh Department of Environment and Forests Bihar Department of Environment, Forests and Climate Change Chandigarh Department of Forests and Wildlife Chhattisgarh Forest and Climate Change Department Goa Forest Department Gujarat Forest Department Haryana Forest Department Jammu and Kashmir Forest Department Kerala Forest and Wildlife Department Ladakh Department of Forests, Ecology and Environment Madhya Pradesh Forest Department Maharashtra Forest Department Meghalaya Forests and Environment Department Nagaland Department of Environment, Forests and Climate Change Punjab Department of Forest and Wildlife Preservation Sikkim Department of Forests and Wildlife Uttarakhand Forest Department Uttar Pradesh Department of Environment, Forests and Climate Change Tamil Nadu Forest Department Telangana Forest Department West Bengal Forest Department Spain Nature Protection Service from the Civil Guard Notable game wardens Guy Bradley Dave Jackson Paul Kroegel See also Pennsylvania DCNR rangers North American Game Warden Museum Park ranger Trooper Gamekeeper References Bibliography External links Association of Fish & Wildlife Agencies. North American Wildlife Enforcement Officers Association. Wildlife conservation Law enforcement occupations
Conservation officer
[ "Biology" ]
1,331
[ "Wildlife conservation", "Biodiversity" ]