25,150,322
https://en.wikipedia.org/wiki/Orthogonal%20diagonalization
In linear algebra, an orthogonal diagonalization of a normal matrix (e.g. a symmetric matrix) is a diagonalization by means of an orthogonal change of coordinates. The following is an orthogonal diagonalization algorithm that diagonalizes a quadratic form q(x) on $\mathbb{R}^n$ by means of an orthogonal change of coordinates X = PY. Step 1: find the symmetric matrix A which represents q and find its characteristic polynomial $\Delta(t)$. Step 2: find the eigenvalues of A, which are the roots of $\Delta(t)$. Step 3: for each eigenvalue of A from step 2, find an orthogonal basis of its eigenspace; since A is symmetric, eigenvectors belonging to distinct eigenvalues are automatically orthogonal to each other. Step 4: normalize all eigenvectors in step 3, which then together form an orthonormal basis of $\mathbb{R}^n$. Step 5: let P be the matrix whose columns are the normalized eigenvectors in step 4. Then X = PY is the required orthogonal change of coordinates, and the diagonal entries of $P^{\mathsf{T}}AP$ will be the eigenvalues which correspond to the columns of P. References Maxime Bôcher (with E.P.R. DuVal) (1907) Introduction to Higher Algebra, § 45 Reduction of a quadratic form to a sum of squares via HathiTrust Linear algebra
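A minimal sketch of the algorithm in Python with NumPy (an illustration, not part of Bôcher's treatment): for a symmetric matrix, numpy.linalg.eigh returns an orthonormal eigenbasis directly, so steps 2–4 collapse into one call.

```python
import numpy as np

# Quadratic form q(x) = 2*x1^2 + 2*x2^2 + 2*x1*x2, represented by a symmetric matrix A.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Steps 2-4: eigh returns eigenvalues and an orthonormal basis of eigenvectors
# (the columns of P) for a symmetric matrix.
eigenvalues, P = np.linalg.eigh(A)

# Step 5: P^T A P is diagonal, with the eigenvalues on the diagonal.
D = P.T @ A @ P
assert np.allclose(D, np.diag(eigenvalues))
assert np.allclose(P.T @ P, np.eye(2))  # P is orthogonal

print(eigenvalues)  # [1. 3.]
```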
Orthogonal diagonalization
[ "Mathematics" ]
252
[ "Linear algebra", "Algebra" ]
25,153,936
https://en.wikipedia.org/wiki/Performance-based%20building%20design
Performance-Based Building Design is an approach to the design of buildings of any complexity, from single-detached homes up to and including high-rise apartment and office buildings. A building constructed in this way is required to meet certain measurable or predictable performance requirements, such as energy efficiency or seismic load capacity, without a specific prescribed method by which to attain those requirements. This is in contrast to traditional prescriptive building codes, which mandate specific construction practices, such as stud size and the distance between studs in wooden frame construction. Such an approach provides the freedom to develop tools and methods to evaluate the entire life cycle of the building process, from business dealings to procurement, through construction, and on to the evaluation of results.

Background

One of the first implementations of performance-based building design requirements was in Hammurabi's Code (c. 1795 to 1750 BC), where it is stated that "a house should not collapse and kill anybody". This concept is also described in Vitruvius's "De architectura libri decem" ("The Ten Books of Architecture") in the first century BC. In modern times, the first definition of performance-based building design was introduced in 1965 in France by Blachère with the Agrément system. Despite this, the building process remained relatively conventional for the next 50 years, based solely on experience and on codes and regulations prescribed by law, which stifled innovation and change. The prescriptive approach is a technical procedure based on past experience which consists of comparing the proposed design with standardized codes, so no simulation or verification tools are needed for the design and building process. A new approach began to emerge during the second half of the 20th century, when many local building markets began to show that they needed greater flexibility in procurement procedures, both to facilitate the exchange of building goods between countries and to improve the speed of procedures and innovation in the building process. This innovative approach to the procurement, design, contracting, management and maintenance of buildings was performance-based building design (PBBD). The clearest definition of the performance-based building approach was given in 1982 by the CIB W60 Commission in Report No. 64, where Gibson stated that "first and foremost, the performance approach is [...] the practice of thinking and working in terms of ends rather than means. [...] It is concerned with what a building or building product is required to do, and not with prescribing how it is to be constructed". Many research establishments have studied the implementation of PBBD during the last fifty years, and a majority of areas of building design remain open to innovation. During 1998–2001, the CIB Board and Programme Committee initiated the Proactive Programme on Performance-Based Building in order to put the technical developments of performance-based building into practice. This programme was followed by the establishment of the Performance-Based Building (PeBBu) Thematic Network, which ran from October 2001 to October 2005, funded by the European Commission (EC) Fifth Framework Programme. The PeBBu Network had a broad and varied programme and set of activities, and produced many papers to aid the implementation of this vision.
PeBBu Thematic Network

The PeBBu Thematic Network was managed by the CIB General Secretariat (CIB: International Council for Research and Innovation in Building and Construction), particularly by the CIB Development Foundation (CIBdf). The PeBBu Network started working in 2001 and completed its work in 2005. In the PeBBu Network, 73 organisations, including CIBdf (coordinating contractor), BBRI (Belgium), VTT (Finland), CSTB (France), EGM (Netherlands), TNO (Netherlands) and BRE (UK), cooperated on this project, bringing people together to share their work, information and knowledge. The objectives of the Network were to stimulate and facilitate the international dissemination and implementation of Performance Based Building in the building and construction sector, maximising the contribution of the international research and development community. The results of the PeBBu Thematic Network are described and explained in 26 final reports, which included three reports with an overall PBB scope, a multitude of research reports from the PeBBu Domains, User Platforms and Regional Platforms, a final management report, and four practice reports providing practical support for the actual application of the PBB concept in the building and construction sector.

PBB: Conceptual framework

A conceptual framework for implementing a PBB market was identified while reviewing various viewpoints during the compilation of the 2nd International State of the Art Report for the PeBBu Thematic Network (Becker and Foliente 2005). The building facility is a multi-component system with a generally very long life cycle. The system's design agenda as a whole, and the more specific design objectives of its parts, originate from relevant user requirements. These requirements evolve into a comprehensive set of Performance Requirements that should be established by a large number of stakeholders (the users, entrepreneur/owner, regulatory framework, design team, and manufacturers). The main steps in a Performance Based Building Design process are: (1) identifying and formulating the relevant User Requirements; (2) transforming the User Requirements identified into Performance Requirements and quantitative performance criteria; and (3) using reliable design and evaluation tools to assess whether proposed solutions meet the stated criteria at a satisfactory level.

Performance concept

In a performance-based approach, the focus of all decisions is on the required performance-in-use and on the evaluation and testing of the building asset. Performance Based Building (PBB) is focused first on the performance required in use for the business processes and the needs of the users, and then on the evaluation and verification of the resulting building assets. The performance approach can be used whether the process concerns existing or new assets. It is applicable to the procurement of constructed assets and to any phase of the whole-life-cycle building process, such as strategic planning, asset management, briefing/programming, design and construction, operation and maintenance, management and use, renovations and alterations, and codes, regulations and standards. It includes many topics and criteria, which can be categorized as physical, functional, environmental, financial, economic, psychological, social, facilities-related, and more. These criteria relate to the individual project, according to its context and situation.
Two key characteristics of performance concept

The performance concept is based on two key characteristics: the use of two languages, one for the clients'/users' requirements and the other for the supply of the performance, and the need for validation and verification of results against performance targets.

Two languages

The performance concept requires two languages: the language of demand requirements and the language of the required performance, which should have the capability to fulfill the demand. It is important to recognize that these languages are different. Szigeti and Davis (Performance Based Building: Conceptual Framework, 2005) explain that "the dialog between client and supplier can be described as two halves of a "hamburger bun", with the statement of the requirement in functional or performance language (FC - functional concept) matched to a solution (SC - solution concept) in more technical language, and the matching, verification / validation that needs to occur in between". In a recent paper, Ang, Groosman, and Scholten (2005) explain that the functional concept represents the set of unquantified objectives and scopes to be satisfied by the supply solutions, related to performance requirements. The solution concept represents a technical realization that satisfies at least the required performance. A design decision is the development of a solution concept.

Assessing result – match and compare

Building performance evaluation is the process of systematically comparing and matching the performance in use of building assets against explicitly documented or implicit criteria for their expected performance. Matching and comparing demand and supply is essential in the PBB approach. It can be done using a validation method, by measurement, calculation, or testing. Tools and methods are used to permit some form of measurement or testing of the requirements, and the related measurement of the capability of assets to perform. There are many types of in-depth specialized technical evaluations and audits. These validations generally require time, a major effort by the customer group, and a high level of funding. Normally, the most valuable methods and tools are comprehensive scans which are performance based and include metrics that can easily be measured without lab-type instruments. Evaluations and reviews are an integral part of asset and portfolio management, design, construction, and commissioning. Evaluations can be used for different purposes, depending on the requirements being considered. For example, they could be used in support of funding decisions; they could include a condition assessment to ensure that the level of degradation or obsolescence is known; or they could include an assessment of utilization, or an assessment of the capability of the resulting product to perform the expected functional requirements. Such evaluations can be used at any time during the life cycle of the asset. PBB evaluations should be done routinely; in reality, they are often done only as part of commissioning or shortly thereafter, or when there is a problem. There are two different kinds of performance verification. Performance evaluations rate the physical asset according to a set of existing criteria and indicators of capability, and match the results against the required levels of performance. Occupant satisfaction surveys record the perceptions of the users, usually through a scale of satisfaction measurements. The two types of evaluation complement each other.
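To make the match-and-compare step concrete, here is a minimal, hypothetical sketch in Python. The requirement names, band limits, and measured values are invented for illustration and do not come from the PeBBu reports; they simply show measured performance being matched against required bands, with the gap reported when a requirement is missed.

```python
# Hedged sketch of the PBB "match and compare" step. All data are invented.
requirements = {
    # requirement: (lower limit, upper limit), in the stated units
    "indoor_temperature_C": (20.0, 24.0),
    "acoustic_insulation_dB": (45.0, 60.0),
    "energy_use_kWh_per_m2_year": (0.0, 90.0),
}

measured = {
    "indoor_temperature_C": 22.5,
    "acoustic_insulation_dB": 43.0,
    "energy_use_kWh_per_m2_year": 85.0,
}

def evaluate(requirements, measured):
    """Compare each measured value against its required band and
    report the gap (zero when the requirement is met)."""
    report = {}
    for name, (low, high) in requirements.items():
        value = measured[name]
        if low <= value <= high:
            gap = 0.0
        else:
            gap = value - high if value > high else value - low
        report[name] = {"value": value, "band": (low, high), "gap": gap}
    return report

for name, result in evaluate(requirements, measured).items():
    status = "met" if result["gap"] == 0.0 else f"gap {result['gap']:+.1f}"
    print(f"{name}: {result['value']} in {result['band']} -> {status}")
```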
Tools

Innovative decision-support methodologies are emerging in the building sector. There are some tools explicitly based on the demand and supply concepts, and others that employ standardized performance metrics which for the first time link facility condition to the functional requirements of organizations and their customers. Projects can be planned, prioritized, and budgeted using a multi-criteria approach that is transparent, comprehensive and auditable. One of the methodologies that can be used is a gap analysis based on calibrated scales that measure both the levels of requirements and the capability of the asset, whether it is already in use, being designed, or offered for purchase or lease. This methodology is an ASTM and American National Standards Institute (ANSI) standard and is currently being considered as an ISO standard. It is particularly useful when the information about the "gap", if any, can be presented in support of funding decisions and actions. There are a large number of verification methodologies (e.g. POEs, CRE-FM), and all of these need to refer back to explicit statements of requirements to be able to compare with expected performance. To evaluate the result of a building asset against the expected performance requirements, it is necessary to establish the tools used during the process. These tools are the reference for the whole-life-cycle building process, so organizations use key performance indicators (KPIs) to prove that they are meeting the targets that have been set by senior management. At the same time, performance measurement (PM) becomes central to managing organizations, their operations and logistic support. These methodologies include the feedback loop that links a facility in use to the requirements and capabilities that are compared and matched whenever decisions are needed.

Performance approach and prescriptive approach

A prescriptive approach describes the way a building asset must be constructed, rather than the end result of the building process, and is concerned with the type and quality of materials used, the method of construction, and the workmanship. This type of approach is strictly mandated by a combination of law, codes, standards, and regulations, and is based on past experience and consolidated know-how. The content of prescriptive codes and standards is usually a response to an accident causing injury or death that demands a remedy to avoid a repeat, to some hazardous situation, or to some recognized social need. In many countries, in both the public and private sectors, research is taking place into a different set of codes, methods and tools based on performance criteria to complement the traditional prescriptive codes. In the 1970s, this search produced the "Nordic Model" (NKB 1978), which became the reference model for subsequent performance-based codes. This model links easily to one of the key characteristics of the performance approach: the dialog between the why, the what and the how. Using a performance-based approach does not preclude the use of prescriptive specifications. Although the benefits of adopting a PBBD approach are significant, it is recognized that employing a performance-based approach at any stage in the building process is more complex and expensive than using the simpler prescriptive route, so the application of this approach should not be regarded as an end in itself.
When simple buildings are concerned or well-proven technologies are used, prescriptive codes are more effective, efficient, faster, or less costly, so prescriptive specifications will continue to be useful in many situations. For complex projects, on the other hand, use of the performance-based route is indispensable at every stage, in particular during the design and evaluation phases. It is not likely that a facility will be planned, procured, delivered, maintained, used and renovated using solely performance-based documents at each step of the way, down the supply chain, to the procurement of products and materials, because there is not yet enough experience with the Performance Based Building approach. At the same time, the prescriptive approach can stifle change and innovation, so the best way to set up the building process is to blend the two approaches.

Statements of Requirements (SoR)

Statements of Requirements represent a reference for the whole-life-cycle management of facilities; they are the core of the conceptual framework that emerged from the PeBBu Thematic Network, and they constitute the key to implementation of PBB in the construction sector. An SoR is a document prepared by clients, or a set of verbal statements communicated to suppliers, based on the users' functional needs. These user requirements are converted into performance requirements, which can be explicit or implicit. Such a document should include information about what is essential to the client. SoRs will take different forms depending on the kind of client and what is being procured, at what phase of the life cycle, or where in the supply chain a document is being used. SoRs should be dynamic, not static, and should include more and more detail as projects proceed. This document should be prepared at different levels of granularity; how detailed the documentation is at each stage depends on the complexity of the project and on the procurement route chosen for the project. SoRs represent a very important part of a continuous process of communication between clients (demand) and their project team (supply); they will be updated and managed using computerized tools and will contain all requirements throughout the life of the facility. This process is called "briefing" in UK and Commonwealth English, and "programming" in American English. An SoR is normally prepared for any project, whether it is a PBB project or not. Assembling such a document usually leads to a more appropriate match between the needs of clients and users and the constructed assets. Statements of Requirements have to be very carefully stated so that it is easy to verify that a proposed solution can meet those requirements. High-level statements of requirements need to be paired with indicators of capability so that design solutions can be evaluated before they are built, in order to avoid mistakes. In SoRs it is important to take into account design aspects such as flexibility indicators, because constructed assets need to change during their life cycle; uses and activities can change very rapidly, so it is essential to test different ways that the spaces might be used in order to anticipate changes. SoRs, as understood in ISO 9000, include not only what the client requires and is prepared to pay for, but also the process and indicators that will provide the means to verify and validate that the product or service delivered meets those stated requirements.
As part of the worldwide movement to implement a PBB approach and to develop tools that will make it easier to shift to PBB, the International Alliance for Interoperability (IAI) set up projects to map the processes that are part of whole-life-cycle management, such as "Portfolio and Asset Management: Performance" (PAMPeR) and "Early Design" (ED). The IAI efforts are complemented by many other efforts to create standards for the information to be captured and analyzed to verify performance-in-use.

Performance requirements (PR)

Performance requirements translate user requirements into more precise, quantitative, measurable and technical terms, usually for a specific purpose. The supply team prepares a document that includes objectives and goals, performance requirements, and criteria. It is important to include "indicators of performance" so that results can be measured against explicit requirements, whether qualitative or quantitative. Performance indicators need to be easily understood by the users and the evaluators. To validate the indicators and verify that the required performance-in-use has been achieved, it is necessary to use appropriate methods and tools. Levels of performance requirements can be stated as part of the preparation of SoRs, as part of project programs, or as part of requests for proposals and procurement contracts. It is preferable to adopt a flexible approach to the expression and comparison of performance levels, so that required and achieved performance can be expressed not as single values but as bands between upper and lower limits. In consequence, in performance terms the criteria can be expressed as graduated scales, divided into broad bands.

Performance based codes

In the building and construction industry, until 25–30 years ago, prescriptive codes, regulations and standards made innovation and change difficult and costly to implement, and created technical restrictions to trade. These concerns have been the major drivers towards the use of a performance-based approach to codes, regulations and standards. Performance-based building regulations have been implemented or are being developed in many countries, but they have not yet reached their full potential. In part, this can be attributed to the fact that the overall regulatory system has not yet been fully addressed, and gaps exist in several key areas. Bringing the regulatory and non-regulatory models together is probably the best way to work. This is shown in the "Total Performance System Models" diagram (Meacham et al. 2002), which maps the flow of decision making from society and business objectives to construction solutions. The difference between the regulatory and non-regulatory parts of the Total Performance System Models is that the former is mandated by codes and regulations based on the law, while the other functional requirements, included in Statements of Requirements, are an integral part of what the client requires and is willing to pay for.

Consequences relating to procedure

For procurements in the public sector and for publicly traded corporations, it is important that the decisions and choices are transparent and explicit, regardless of the specific procurement route. All procurement processes can be either prescriptive or performance based. Design-build, public-private partnerships (PPP), private finance initiative (PFI) and similar procurement procedures are particularly suited to a strong Performance Based Building application.
If the expected performance is not stated explicitly and verifiably, these procurement methods are likely to be more subject to disappointment and legal problems. To obtain the benefits of these procurement approaches, it is essential to organize the services of the supply chain so as to obtain innovative, less costly, or better solutions by shifting decisions about "how" to the integrated team.

References

Regulatory:
ISO 6240:1980, Performance standards in building – Contents and presentation
ISO 6241:1984, Performance standards in building – Principles for their preparation and factors to be considered
ISO 6242-1:1992, Building construction – Expression of user's requirements – Part 1: Thermal requirements
ISO 6242-2:1992, Building construction – Expression of user's requirements – Part 2: Air purity requirements
ISO 6242-3:1992, Building construction – Expression of user's requirements – Part 3: Acoustical requirements
ISO 6243:1997, Climatic data for building design – Proposed systems of symbols
ISO 7162:1992, Performance standards in building – Contents and format of standards for evaluation of performance
ISO 19208:2016, Framework for specifying performance in buildings
ISO 9836:1992, Performance standards in building – Definition and calculation of area and space indicators
ISO 9000:2000, Quality management systems – Fundamentals and vocabulary
ISO 9001:2000, Quality management systems – Requirements
CEN (2002). EN 12152:2002, Curtain Walling – Air Permeability – Performance Requirements and Classification. CEN, European Committee for Standardization, Brussels.
CEN (2002–2007). Structural Eurocodes (EN 1990 – Eurocode: Basis of structural design; EN 1991 – Eurocode 1: Actions on structures; EN 1992 – Eurocode 2: Design of concrete structures; EN 1993 – Eurocode 3: Design of steel structures; EN 1994 – Eurocode 4: Design of composite steel and concrete structures; EN 1995 – Eurocode 5: Design of timber structures; EN 1996 – Eurocode 6: Design of masonry structures; EN 1997 – Eurocode 7: Geotechnical design; EN 1998 – Eurocode 8: Design of structures for earthquake resistance; EN 1999 – Eurocode 9: Design of aluminium structures). CEN, European Committee for Standardization, Brussels.
CEN (2004). EN 13779:2004 – Ventilation for Non-residential Buildings – Performance Requirements for Ventilation and Room-Conditioning Systems. CEN, European Committee for Standardization, Brussels.
UNI 8290-1:1981 + A122:1983, Residential building. Building elements. Classification and terminology
UNI 8290-2:1983, Residential building. Building elements. Analysis of requirements
UNI 8290-3:1987, Residential building. Building elements. Agents list
UNI 8289:1981, Building. Functional requirements of final users. Classification
UNI 10838:1999, Building. Terminology for users, performances, quality and building process
See also
Evidence-based design
Feedback loop
Post-occupancy evaluation

References
BAKENS W., PeBBu Finalized, CIB News Article, January 2006
BECKER R., Fundamentals of Performance-Based Building Design, Faculty of Civil and Environmental Engineering, Technion – Israel Institute of Technology, Haifa, November 2008
FOLIENTE G., HUOVILA P., ANG G., SPEKKINK D., BAKENS W., Performance Based Building R&D Roadmap, PeBBu Final Report, CIBdf, Rotterdam, 2005
SZIGETI F., The PeBBuCo Study: Compendium of Performance Based (PB) Statements of Requirements (SoR), International Center for Facilities (ICF), Ottawa, 2005
SZIGETI F., DAVIS G., Performance Based Building: Conceptual Framework, PeBBu Final Report, CIBdf, Rotterdam, October 2005

Further reading
BECKER R., FOLIENTE G., Performance Based International State of the Art, PeBBu 2nd International SotA Report, CIBdf, Rotterdam, 2005
BLACHERE G., General consideration of standards, agrément and the assessment of fitness for use, paper presented at the 3rd CIB Congress on Towards Industrialised Building, Copenhagen, Denmark, 1965
BLACHERE G., Building Principles, Commission of the European Communities, Industrial Processes, Building and Civil Engineering, Directorate General, Internal Market and Industrial Affairs, EUR 11320 EN, 1987
GIBSON E.J., Working with the Performance Approach in Building, CIB Report Publication n.64, Rotterdam, 1982
GROSS J.G., Developments in the application of the performance concept in building, Proceedings of the 3rd symposium of CIB-ASTM-ISO-RILEM, National Building Research Institute, Israel, 1996

External links
BRE – Building Research Establishment
CIB – International Council for Research and Innovation in Building and Construction
CSTB – Centre Scientifique et Technique du Bâtiment
IAI – International Alliance for Interoperability

Building engineering Methodology
Performance-based building design
[ "Engineering" ]
4,837
[ "Building engineering", "Civil engineering", "Architecture" ]
25,154,546
https://en.wikipedia.org/wiki/Mechanical%20filter
A mechanical filter is a signal processing filter usually used in place of an electronic filter at radio frequencies. Its purpose is the same as that of a normal electronic filter: to pass a range of signal frequencies, but to block others. The filter acts on mechanical vibrations which are the analogue of the electrical signal. At the input and output of the filter, transducers convert the electrical signal into, and then back from, these mechanical vibrations. The components of a mechanical filter are all directly analogous to the various elements found in electrical circuits. The mechanical elements obey mathematical functions which are identical to their corresponding electrical elements. This makes it possible to apply electrical network analysis and filter design methods to mechanical filters. Electrical theory has developed a large library of mathematical forms that produce useful filter frequency responses, and the mechanical filter designer is able to make direct use of these. It is only necessary to set the mechanical components to appropriate values to produce a filter with an identical response to the electrical counterpart. Steel alloys and iron–nickel alloys are common materials for mechanical filter components; nickel is sometimes used for the input and output couplings. Resonators in the filter made from these materials need to be machined to precisely adjust their resonance frequency before final assembly. While the meaning of mechanical filter in this article is one that is used in an electromechanical role, it is possible to use a mechanical design to filter mechanical vibrations or sound waves (which are also essentially mechanical) directly. For example, filtering of audio frequency response in the design of loudspeaker cabinets can be achieved with mechanical components. In the electrical application, in addition to mechanical components which correspond to their electrical counterparts, transducers are needed to convert between the mechanical and electrical domains. A representative selection of the wide variety of component forms and topologies for mechanical filters is presented in this article. The theory of mechanical filters was first applied to improving the mechanical parts of phonographs in the 1920s. By the 1950s mechanical filters were being manufactured as self-contained components for applications in radio transmitters and high-end receivers. The high "quality factor", Q, that mechanical resonators can attain, far higher than that of an all-electrical LC circuit, made possible the construction of mechanical filters with excellent selectivity. Good selectivity, being important in radio receivers, made such filters highly attractive. Contemporary researchers are working on microelectromechanical filters, the mechanical devices corresponding to electronic integrated circuits.

Elements

The elements of a passive linear electrical network consist of inductors, capacitors and resistors which have the properties of inductance, elastance (inverse capacitance) and resistance, respectively. The mechanical counterparts of these properties are, respectively, mass, stiffness and damping. In most electronic filter designs, only inductor and capacitor elements are used in the body of the filter (although the filter may be terminated with resistors at the input and output). Resistances are not present in a theoretical filter composed of ideal components and only arise in practical designs as unwanted parasitic elements.
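The correspondence just listed (mass ↔ inductance, stiffness ↔ elastance, damping ↔ resistance; tabulated below) can be illustrated with a short Python sketch. The component values here are invented for illustration; the point is that a mass-spring-damper has exactly the impedance of a series RLC circuit under the impedance analogy.

```python
import numpy as np

# Illustrative values (invented): a mass-spring-damper resonator.
M = 1e-3    # mass, kg            -> inductance L = M (henries)
S = 1e4     # stiffness, N/m      -> elastance 1/C = S (i.e. C = 1/S farads)
D = 0.05    # damping, N*s/m      -> resistance R = D (ohms)

def mechanical_impedance(omega):
    """Z(jw) = D + jwM + S/(jw): identical in form to a series RLC circuit."""
    jw = 1j * omega
    return D + jw * M + S / jw

# The resonance where the mass and stiffness terms cancel,
# just as wL cancels 1/(wC) in the electrical analogue:
f0 = np.sqrt(S / M) / (2 * np.pi)
print(f"resonance ~ {f0:.1f} Hz")
print(abs(mechanical_impedance(2 * np.pi * f0)))  # ~= D at resonance
```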
Just as in the electrical case, a mechanical filter would ideally consist only of components with the properties of mass and stiffness, but in reality some damping is present as well. The mechanical counterparts of voltage and electric current in this type of analysis are, respectively, force (F) and velocity (v), and represent the signal waveforms. From this, a mechanical impedance can be defined in terms of the imaginary angular frequency, jω, which entirely follows the electrical analogy:

Mechanical element | Formula | Mechanical impedance | Electrical counterpart
Stiffness, S | $F = S\int v\,dt$ | $Z = \frac{S}{j\omega}$ | Elastance, 1/C (inverse capacitance)
Mass, M | $F = M\frac{dv}{dt}$ | $Z = j\omega M$ | Inductance, L
Damping, D | $F = Dv$ | $Z = D$ | Resistance, R

The scheme presented in the table is known as the impedance analogy. Circuit diagrams produced using this analogy match the electrical impedance of the mechanical system seen by the electrical circuit, making it intuitive from an electrical engineering standpoint. There is also the mobility analogy, in which force corresponds to current and velocity corresponds to voltage. This has equally valid results but requires using the reciprocals of the electrical counterparts listed above: mass then corresponds to capacitance, stiffness to inverse inductance, and damping to conductance, G, where G is electrical conductance (the reciprocal of resistance, if there is no reactance). Equivalent circuits produced by this scheme are similar, but are the dual impedance forms whereby series elements become parallel, capacitors become inductors, and so on. Circuit diagrams using the mobility analogy more closely match the mechanical arrangement of the circuit, making it more intuitive from a mechanical engineering standpoint. In addition to their application to electromechanical systems, these analogies are widely used to aid analysis in acoustics. Any mechanical component will unavoidably possess both mass and stiffness. This translates in electrical terms to an LC circuit, that is, a circuit consisting of an inductor and a capacitor; hence mechanical components are resonators and are often used as such. It is still possible to represent inductors and capacitors as individual lumped elements in a mechanical implementation by minimising (but never quite eliminating) the unwanted property. Capacitors may be made of thin, long rods: the mass is minimised and the compliance is maximised. Inductors, on the other hand, may be made of short, wide pieces which maximise the mass in comparison to the compliance of the piece. Mechanical parts act as a transmission line for mechanical vibrations. If the wavelength is short in comparison to the part, then a lumped-element model as described above is no longer adequate and a distributed-element model must be used instead. The mechanical distributed elements are entirely analogous to electrical distributed elements, and the mechanical filter designer can use the methods of electrical distributed-element filter design.

History

Harmonic telegraph

Mechanical filter design was developed by applying the discoveries made in electrical filter theory to mechanics. However, a very early example (1870s) of acoustic filtering was the "harmonic telegraph", which arose precisely because electrical resonance was poorly understood but mechanical resonance (in particular, acoustic resonance) was very familiar to engineers. This situation was not to last for long; electrical resonance had been known to science for some time before this, and it was not long before engineers started to produce all-electric designs for filters.
In its time, though, the harmonic telegraph was of some importance. The idea was to combine several telegraph signals on one telegraph line by what would now be called frequency-division multiplexing, thus saving enormously on line installation costs. The key of each operator activated a vibrating electromechanical reed which converted this vibration into an electrical signal. Filtering at the receiving operator was achieved by a similar reed tuned to precisely the same frequency, which would only vibrate and produce a sound from transmissions by the operator with the identical tuning. Versions of the harmonic telegraph were developed by Elisha Gray, Alexander Graham Bell, Ernest Mercadier and others. Its ability to act as a sound transducer to and from the electrical domain was to inspire the invention of the telephone.

Mechanical equivalent circuits

Once the basics of electrical network analysis began to be established, it was not long before the ideas of complex impedance and filter design theories were carried over into mechanics by analogy. Kennelly, who was also responsible for introducing complex impedance, and Webster were the first to extend the concept of impedance into mechanical systems in 1920. Mechanical admittance and the associated mobility analogy came much later and are due to Firestone in 1932. It was not enough to just develop a mechanical analogy. This could be applied to problems that were entirely in the mechanical domain, but for mechanical filters with an electrical application it is necessary to include the transducer in the analogy as well. Poincaré (1907) was the first to describe a transducer as a pair of linear algebraic equations relating electrical variables (voltage and current) to mechanical variables (force and velocity). These equations can be expressed as a matrix relationship in much the same way as the z-parameters of a two-port network in electrical theory, to which this is entirely analogous:

$$\begin{pmatrix} V \\ F \end{pmatrix} = \begin{pmatrix} z_{11} & z_{12} \\ z_{21} & z_{22} \end{pmatrix} \begin{pmatrix} I \\ v \end{pmatrix}$$

where V and I represent the voltage and current respectively on the electrical side of the transducer. Wegel, in 1921, was the first to express these equations in terms of mechanical impedance as well as electrical impedance. The element $z_{22}$ is the open-circuit mechanical impedance, that is, the impedance presented by the mechanical side of the transducer when no current is entering the electrical side. The element $z_{11}$, conversely, is the clamped electrical impedance, that is, the impedance presented to the electrical side when the mechanical side is clamped and prevented from moving (velocity is zero). The remaining two elements, $z_{21}$ and $z_{12}$, describe the transducer forward and reverse transfer functions respectively. Once these ideas were in place, engineers were able to extend electrical theory into the mechanical domain and analyse an electromechanical system as a unified whole.

Sound reproduction

An early application of these new theoretical tools was in phonographic sound reproduction. A recurring problem with early phonograph designs was that mechanical resonances in the pickup and sound transmission mechanism caused excessively large peaks and troughs in the frequency response, resulting in poor sound quality. In 1923, Harrison of the Western Electric Company filed a patent for a phonograph in which the mechanical design was entirely represented as an electrical circuit.
The horn of the phonograph is represented as a transmission line, and is a resistive load for the rest of the circuit, while all the mechanical and acoustic parts, from the pickup needle through to the horn, are translated into lumped components according to the impedance analogy. The circuit arrived at is a ladder topology of series resonant circuits coupled by shunt capacitors. This can be viewed as a bandpass filter circuit. Harrison designed the component values of this filter to have a specific passband corresponding to the desired audio passband (in this case 100 Hz to 6 kHz) and a flat response. Translating these electrical element values back into mechanical quantities provided specifications for the mechanical components in terms of mass and stiffness, which in turn could be translated into physical dimensions for their manufacture. The resulting phonograph has a flat frequency response in its passband and is free of the resonances previously experienced. Shortly after this, Harrison filed another patent using the same methodology on telephone transmit and receive transducers. Harrison used Campbell's image filter theory, which was the most advanced filter theory available at the time. In this theory, filter design is viewed essentially as an impedance matching problem. More advanced filter theory was brought to bear on this problem by Norton in 1929 at Bell Labs. Norton followed the same general approach, though he later described to Darlington the filter he designed as being "maximally flat". Norton's mechanical design predates the paper by Butterworth, who is usually credited as the first to describe the electronic maximally flat filter. The equations Norton gives for his filter correspond to a singly terminated Butterworth filter, that is, one driven by an ideal voltage source with no impedance, whereas the form more usually given in texts is for the doubly terminated filter with resistors at both ends, making it hard to recognise the design for what it is. Another unusual feature of Norton's filter design arises from the series capacitor, which represents the stiffness of the diaphragm. This is the only series capacitor in Norton's representation, and without it, the filter could be analysed as a low-pass prototype. Norton moves the capacitor out of the body of the filter to the input at the expense of introducing a transformer into the equivalent circuit (Norton's figure 4). Norton has used here the "turning round the L" impedance transform to achieve this. The definitive description of the subject from this period is Maxfield and Harrison's 1926 paper. There, they describe not only how mechanical bandpass filters can be applied to sound reproduction systems, but also apply the same principles to recording systems and describe a much improved disc cutting head.

Volume production

Modern mechanical filters for intermediate frequency (IF) applications were first investigated by Robert Adler of Zenith Electronics, who built a 455 kHz filter in 1946. The idea was taken up by Collins Radio Company, who started the first volume production of mechanical filters from the 1950s onwards. These were originally designed for telephone frequency-division multiplex applications where there is commercial advantage in using high-quality filters. Precision and steepness of the transition band lead to a reduced width of guard band, which in turn leads to the ability to squeeze more telephone channels into the same cable. This same feature is useful in radio transmitters for much the same reason.
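Since the passage above turns on the idea of a "maximally flat" (Butterworth) response, here is a small sketch of that magnitude function. The order and cutoff are arbitrary illustration values, not Norton's actual design data.

```python
import numpy as np

def butterworth_magnitude(f, f_c, n):
    """|H(f)| of an n-th order Butterworth low-pass: maximally flat in the
    passband, rolling off monotonically above the cutoff f_c."""
    return 1.0 / np.sqrt(1.0 + (f / f_c) ** (2 * n))

f_c = 6e3  # cutoff, Hz (illustrative, echoing the 6 kHz audio passband edge)
for f in (100.0, 3e3, 6e3, 12e3, 24e3):
    print(f"{f:8.0f} Hz: |H| = {butterworth_magnitude(f, f_c, n=5):.4f}")
# At f = f_c the response is always 1/sqrt(2) (-3 dB), regardless of order.
```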
Mechanical filters quickly also found popularity in the VHF/UHF radio IF stages of the high-end radio sets (military, marine, amateur radio and the like) manufactured by Collins. They were favoured in the radio application because they could achieve much higher Q-factors than the equivalent LC filter. High Q allows filters to be designed which have high selectivity, important for distinguishing adjacent radio channels in receivers. They also had an advantage in stability over both LC filters and monolithic crystal filters. The most popular design for radio applications was torsional resonators, because radio IF typically lies in the 100 to 500 kHz band.

Transducers

Both magnetostrictive and piezoelectric transducers are used in mechanical filters. Piezoelectric transducers are favoured in recent designs, since the piezoelectric material can also be used as one of the resonators of the filter, thus reducing the number of components and thereby saving space. They also avoid the susceptibility to extraneous magnetic fields of the magnetostrictive type of transducer.

Magnetostrictive

A magnetostrictive material is one which changes shape when a magnetic field is applied. In reverse, it produces a magnetic field when distorted. The magnetostrictive transducer requires a coil of conducting wire around the magnetostrictive material. The coil either induces a magnetic field in the transducer and sets it in motion, or else picks up an induced current from the motion of the transducer at the filter output. It is also usually necessary to have a small magnet to bias the magnetostrictive material into its operating range. It is possible to dispense with the magnets if the biasing is taken care of on the electronic side by providing a d.c. current superimposed on the signal, but this approach would detract from the generality of the filter design. The usual magnetostrictive materials used for the transducer are either ferrite or compressed powdered iron. Mechanical filter designs often have the resonators coupled with steel or nickel-iron wires, but on some designs, especially older ones, nickel wire may be used for the input and output rods. This is because it is possible to wind the transducer coil directly on to a nickel coupling wire, since nickel is slightly magnetostrictive. However, it is not strongly so, and coupling to the electrical circuit is weak. This scheme also has the disadvantage of eddy currents, a problem that is avoided if ferrites are used instead of nickel. The coil of the transducer adds some inductance on the electrical side of the filter. It is common practice to add a capacitor in parallel with the coil so that an additional resonator is formed which can be incorporated into the filter design. While this will not improve performance to the extent that an additional mechanical resonator would, there is some benefit and the coil has to be there in any case.

Piezoelectric

A piezoelectric material is one which changes shape when an electric field is applied. In reverse, it produces an electric field when it is distorted. A piezoelectric transducer, in essence, is made simply by plating electrodes on to the piezoelectric material. Early piezoelectric materials used in transducers, such as barium titanate, had poor temperature stability. This precluded the transducer from functioning as one of the resonators; it had to be a separate component. This problem was solved with the introduction of lead zirconate titanate (abbreviated PZT), which is stable enough to be used as a resonator.
Another common piezoelectric material is quartz, which has also been used in mechanical filters. However, ceramic materials such as PZT are preferred for their greater electromechanical coupling coefficient. One type of piezoelectric transducer is the Langevin type, named after a transducer used by Paul Langevin in early sonar research. This is good for longitudinal modes of vibration. It can also be used on resonators with other modes of vibration if the motion can be mechanically converted into a longitudinal motion. The transducer consists of a layer of piezoelectric material sandwiched transversally into a coupling rod or resonator. Another kind of piezoelectric transducer has the piezoelectric material sandwiched in longitudinally, usually into the resonator itself. This kind is good for torsional vibration modes and is called a torsional transducer. When miniaturized using thin-film manufacturing methods, piezoelectric resonators are called thin-film bulk acoustic resonators (FBARs).

Resonators

It is possible to achieve an extremely high Q with mechanical resonators. Mechanical resonators typically have a Q of 10,000 or so, and 25,000 can be achieved in torsional resonators using a particular nickel-iron alloy. This is an unreasonably high figure to achieve with LC circuits, whose Q is limited by the resistance of the inductor coils. Early designs in the 1940s and 1950s started by using steel as a resonator material. This has given way to nickel-iron alloys, primarily to maximise the Q, since this is often the primary appeal of mechanical filters rather than price. Some of the metals that have been used for mechanical filter resonators, and their Q values, are shown in the table. Piezoelectric crystals are also sometimes used in mechanical filter designs. This is especially true for resonators that are also acting as transducers for inputs and outputs. One advantage that mechanical filters have over LC electrical filters is that they can be made very stable. The resonance frequency can be made so stable that it varies only 1.5 parts per billion (ppb) from the specified value over the operating temperature range, and its average drift with time can be as low as 4 ppb per day. This stability with temperature is another reason for using nickel-iron as the resonator material. Variations with temperature in the resonance frequency (and other features of the frequency function) are directly related to variations in the Young's modulus, which is a measure of the stiffness of the material. Materials are therefore sought that have a small temperature coefficient of Young's modulus. In general, Young's modulus has a negative temperature coefficient (materials become less stiff with increasing temperature), but additions of small amounts of certain other elements in the alloy can produce a material with a temperature coefficient that changes sign from negative through zero to positive with temperature. Such a material will have a zero temperature coefficient of resonance frequency around a particular temperature. It is possible to adjust the point of zero temperature coefficient to a desired position by heat treatment of the alloy.
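A small sketch of the relationship just described, with all numbers invented for illustration: because the resonance frequency of a stiffness-controlled resonator scales as the square root of Young's modulus, a fractional change dE/E produces roughly half that fractional change in frequency.

```python
# Invented illustration values: fractional temperature coefficient of
# Young's modulus for a hypothetical alloy, per kelvin.
alpha_E = -2e-5        # dE/E per K (typical metals are negative)
delta_T = 50.0         # temperature excursion, K

# f is proportional to sqrt(E), so df/f = (1/2) * dE/E for small changes.
df_over_f = 0.5 * alpha_E * delta_T
print(f"fractional frequency shift: {df_over_f:.2e}")   # -5.00e-04

# A 455 kHz resonator would then drift by about:
print(f"drift at 455 kHz: {455e3 * df_over_f:.1f} Hz")  # ~ -227.5 Hz
```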
Resonator modes

It is usually possible for a mechanical part to vibrate in a number of different modes; however, the design will be based on a particular vibrational mode, and the designer will take steps to try to restrict the resonance to this mode. As well as the straightforward longitudinal mode, some others which are used include the flexural mode, torsional mode, radial mode and drumhead mode. Modes are numbered according to the number of half-wavelengths in the vibration. Some modes exhibit vibrations in more than one direction (such as the drumhead mode, which has two), and consequently the mode number consists of more than one number. When the vibration is in one of the higher modes, there will be multiple nodes on the resonator where there is no motion. For some types of resonator, this can provide a convenient place to make a mechanical attachment for structural support. Wires attached at nodes will have no effect on the vibration of the resonator or the overall filter response. In figure 5, some possible anchor points are shown as wires attached at the nodes.

Circuit designs

There are a great many combinations of resonators and transducers that can be used to construct a mechanical filter. A selection of some of these is shown in the diagrams. Figure 6 shows a filter using disc flexural resonators and magnetostrictive transducers. The transducer drives the centre of the first resonator, causing it to vibrate. The edges of the disc move in antiphase to the centre when the driving signal is at, or close to, resonance, and the signal is transmitted through the connecting rods to the next resonator. When the driving signal is not close to resonance, there is little movement at the edges, and the filter rejects (does not pass) the signal. Figure 7 shows a similar idea involving longitudinal resonators connected together in a chain by connecting rods. In this diagram, the filter is driven by piezoelectric transducers. It could equally well have used magnetostrictive transducers. Figure 8 shows a filter using torsional resonators. In this diagram, the input has a torsional piezoelectric transducer and the output has a magnetostrictive transducer. This would be quite unusual in a real design, as both input and output usually have the same type of transducer. The magnetostrictive transducer is only shown here to demonstrate how longitudinal vibrations may be converted to torsional vibrations and vice versa. Figure 9 shows a filter using drumhead mode resonators. The edges of the discs are fixed to the casing of the filter (not shown in the diagram) so the vibration of the disc is in the same modes as the membrane of a drum. Collins calls this type of filter a disc wire filter. The various types of resonator are all particularly suited to different frequency bands. Overall, mechanical filters with lumped elements of all kinds can cover frequencies from about 5 to 700 kHz, although mechanical filters down as low as a few kilohertz (kHz) are rare. The lower part of this range, below 100 kHz, is best covered with bar flexural resonators. The upper part is better done with torsional resonators. Drumhead disc resonators are in the middle, covering the range from around 100 to 300 kHz. The frequency response behaviour of all mechanical filters can be expressed as an equivalent electrical circuit using the impedance analogy described above. An example of this is shown in figure 8b, which is the equivalent circuit of the mechanical filter of figure 8a. Elements on the electrical side, such as the inductance of the magnetostrictive transducer, are omitted but would be taken into account in a complete design. The series resonant circuits on the circuit diagram represent the torsional resonators, and the shunt capacitors represent the coupling wires.
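A hedged sketch of how such an equivalent circuit can be evaluated numerically (all component values invented; a real design would derive them from the mechanical dimensions): the series-resonator/shunt-capacitor ladder is cascaded as ABCD matrices and the voltage transfer computed between resistive terminations.

```python
import numpy as np

def series(Z):
    """ABCD matrix of a series impedance."""
    return np.array([[1.0, Z], [0.0, 1.0]], dtype=complex)

def shunt(Y):
    """ABCD matrix of a shunt admittance."""
    return np.array([[1.0, 0.0], [Y, 1.0]], dtype=complex)

def response(f, L, C, Cc, Rs=600.0, RL=600.0, n=3):
    """|H(f)| of n series LC resonators coupled by shunt capacitors Cc."""
    w = 2j * np.pi * f                             # w holds j*omega
    M = np.eye(2, dtype=complex)
    for k in range(n):
        M = M @ series(w * L + 1.0 / (w * C))      # one resonator
        if k < n - 1:
            M = M @ shunt(w * Cc)                  # one coupling element
    A, B, Cm, D = M.ravel()
    # Standard ABCD voltage transfer with source and load resistances.
    return abs(RL / (A * RL + B + Cm * Rs * RL + D * Rs))

# Invented values giving resonance near 455 kHz: f0 = 1/(2*pi*sqrt(L*C)).
L, C, Cc = 10e-3, 12.24e-12, 1e-9
for f in (440e3, 455e3, 470e3):
    print(f"{f/1e3:.0f} kHz: |H| = {response(f, L, C, Cc):.3f}")
```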
The component values of the electrical equivalent circuit can be adjusted, more or less at will, by modifying the dimensions of the mechanical components. In this way, all the theoretical tools of electrical analysis and filter design can be brought to bear on the mechanical design. Any filter realisable in electrical theory can, in principle, also be realised as a mechanical filter. In particular, the popular finite-order approximations to an ideal filter response, the Butterworth and Chebyshev filters, can both readily be realised. As with the electrical counterpart, the more elements that are used, the closer the approximation approaches the ideal; however, for practical reasons the number of resonators does not normally exceed eight.

Semi-lumped designs

Frequencies of the order of megahertz (MHz) are above the usual range for mechanical filters. The components start to become very small, or alternatively the components are large compared to the signal wavelength. The lumped-element model described above starts to break down and the components must be considered as distributed elements. The frequency at which the transition from lumped to distributed modelling takes place is much lower for mechanical filters than it is for their electrical counterparts. This is because mechanical vibrations travel at the speed of sound for the material the component is composed of. For solid components, this is many times (×15 for nickel-iron) the speed of sound in air (343 m/s), but still considerably less than the speed of electromagnetic waves (approx. 3×10^8 m/s in vacuum). Consequently, mechanical wavelengths are much shorter than electrical wavelengths for the same frequency. Advantage can be taken of these effects by deliberately designing components to be distributed elements, and the components and methods used in electrical distributed-element filters can be brought to bear. The equivalents of stubs and impedance transformers are both achievable. Designs which use a mixture of lumped and distributed elements are referred to as semi-lumped. An example of such a design is shown in figure 10a. The resonators are disc flexural resonators similar to those shown in figure 6, except that these are energised from an edge, leading to vibration in the fundamental flexural mode with a node in the centre, whereas the figure 6 design is energised in the centre, leading to vibration in the second flexural mode at resonance. The resonators are mechanically attached to the housing by pivots at right angles to the coupling wires. The pivots are to ensure free turning of the resonator and minimise losses. The resonators are treated as lumped elements; however, the coupling wires are made exactly one half-wavelength long and are equivalent to a half-wavelength open-circuit stub in the electrical equivalent circuit. For a narrow-band filter, a stub of this sort has the approximate equivalent circuit of a shunt-connected parallel tuned circuit, as shown in figure 10b. Consequently, the connecting wires are being used in this design to add additional resonators into the circuit, and the result will have a better response than one with just the lumped resonators and short couplings. For even higher frequencies, microelectromechanical methods can be used, as described below.
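To put numbers on the wavelength comparison above (a sketch using the article's ×15 figure; the 455 kHz IF is taken from the history section as a representative frequency):

```python
# Mechanical vs electrical wavelength at a representative IF of 455 kHz.
f = 455e3                 # Hz
v_air = 343.0             # speed of sound in air, m/s
v_metal = 15 * v_air      # ~x15 for nickel-iron, per the text
c = 3e8                   # speed of electromagnetic waves, m/s

lambda_mech = v_metal / f
lambda_elec = c / f
print(f"mechanical wavelength: {lambda_mech * 1e3:.1f} mm")   # ~11.3 mm
print(f"half-wave coupling wire: {lambda_mech / 2 * 1e3:.1f} mm")
print(f"electrical wavelength: {lambda_elec:.0f} m")          # ~659 m
```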
Bridging wires

Bridging wires are rods that couple together resonators that are not adjacent. They can be used to produce poles of attenuation in the stopband, which has the benefit of increasing the stopband rejection. When the pole is placed near the passband edge, it also has the benefit of increasing roll-off and narrowing the transition band. The typical effects of some of these on filter frequency response are shown in figure 11. Bridging across a single resonator (figure 11b) can produce a pole of attenuation in the high stopband. Bridging across two resonators (figure 11c) can produce a pole of attenuation in both the high and the low stopband. Using multiple bridges (figure 11d) will result in multiple poles of attenuation. In this way, the attenuation of the stopbands can be deepened over a broad frequency range. The method of coupling between non-adjacent resonators is not limited to mechanical filters. It can be applied to other filter formats, and the general term for this class is cross-coupled filter. For instance, channels can be cut between cavity resonators, mutual inductance can be used with discrete-component filters, and feedback paths can be used with active analogue or digital filters. Nor was the method first discovered in the field of mechanical filters; the earliest description is in a 1948 patent for filters using microwave cavity resonators. However, mechanical filter designers were the first (1960s) to develop practical filters of this kind, and the method became a particular feature of mechanical filters.

Microelectromechanical filters

A new technology emerging in mechanical filtering is microelectromechanical systems (MEMS). MEMS are very small micromachines with component sizes measured in micrometres (μm), but not as small as nanomachines. These filters can be designed to operate at much higher frequencies than can be achieved with traditional mechanical filters. These systems are mostly fabricated from silicon (Si), silicon nitride (Si3N4), or polymers. A common component used for radio frequency filtering (and MEMS applications generally) is the cantilever resonator. Cantilevers are simple mechanical components to manufacture by much the same methods used by the semiconductor industry: masking, photolithography and etching, with a final undercutting etch to separate the cantilever from the substrate. The technology has great promise since cantilevers can be produced in large numbers on a single substrate, much as large numbers of transistors are currently contained on a single silicon chip. The resonator shown in figure 12 is around 120 μm in length. Experimental complete filters with an operating frequency of 30 GHz have been produced using cantilever varactors as the resonator elements. The size of this filter is around 4×3.5 mm. Cantilever resonators are typically applied at frequencies below 200 MHz, but other structures, such as micro-machined cavities, can be used in the microwave bands. Extremely high-Q resonators can be made with this technology; flexural mode resonators with a Q in excess of 80,000 at 8 MHz are reported.

Adjustment

The precision applications in which mechanical filters are used require that the resonators are accurately adjusted to the specified resonance frequency. This is known as trimming and usually involves a mechanical machining process. In most filter designs, this can be difficult to do once the resonators have been assembled into the complete filter, so the resonators are trimmed before assembly. Trimming is done in at least two stages, coarse and fine, with each stage bringing the resonance frequency closer to the specified value. Most trimming methods involve removing material from the resonator, which will increase the resonance frequency.
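A minimal sketch of why removing material raises the frequency (illustrative numbers only), using the lumped resonance relation f0 = (1/2π)·√(S/M) from the impedance analogy earlier in the article:

```python
import numpy as np

def f0(S, M):
    """Resonance frequency of a lumped stiffness-mass resonator, Hz."""
    return np.sqrt(S / M) / (2 * np.pi)

S = 8.2e9          # stiffness, N/m (invented)
M = 1e-3           # mass, kg (invented)
print(f"before trim: {f0(S, M)/1e3:.2f} kHz")

# Grinding away 1% of the mass (stiffness assumed unchanged for simplicity)
# raises the resonance frequency by about 0.5%.
print(f"after trim:  {f0(S, 0.99 * M)/1e3:.2f} kHz")
```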
The target frequency for a coarse trimming stage consequently needs to be set below the final frequency, since the tolerances of the process could otherwise result in a frequency higher than the following fine trimming stage could adjust for. The coarsest method of trimming is grinding of the main resonating surface of the resonator; this is the least accurate of the trimming processes. Better control can be achieved by grinding the edge of the resonator instead of the main surface. This has a less dramatic effect and consequently better accuracy. Processes that can be used for fine trimming, in order of increasing accuracy, are sandblasting, drilling, and laser ablation. Laser trimming achieves the finest accuracy of these processes. Trimming by hand, rather than machine, was used on some early production components but would now normally only be encountered during product development. Methods available include sanding and filing. It is also possible to add material to the resonator by hand, thus reducing the resonance frequency. One such method is to add solder, but this is not suitable for production use since the solder will tend to reduce the high Q of the resonator. In the case of MEMS filters, it is not possible to trim the resonators outside of the filter because of the integrated nature of the device construction. However, trimming is still a requirement in many MEMS applications. Laser ablation can be used for this, but material deposition methods are available as well as material removal. These methods include laser- or ion-beam-induced deposition. See also Ceramic resonator Surface acoustic wave Crystal oscillator Reed receiver Footnotes References Bibliography Further reading Analog circuits Electromechanical engineering Electronic design Linear filters Mechanics Signal processing filter Sound recording technology
Mechanical filter
[ "Physics", "Chemistry", "Technology", "Engineering" ]
6,613
[ "Sound recording technology", "Electronic design", "Analog circuits", "Filters", "Electronic engineering", "Mechanics", "Mechanical engineering by discipline", "Mechanical engineering", "Electromechanical engineering", "Electrical engineering", "Recording devices", "Design", "Signal processi...
25,154,610
https://en.wikipedia.org/wiki/Sorption%20calorimetry
The method of sorption calorimetry is designed for studies of hydration of complex organic and biological materials. It has been applied to studies of surfactants, lipids, DNA, nanomaterials and other substances. A sorption calorimetric experiment is performed in an isothermal regime, but different temperatures can be studied in separate experiments. In a sorption calorimetric experiment, a two-chamber calorimetric cell is inserted into a double-twin microcalorimeter. Water evaporates, diffuses through the tube connecting the two chambers of the calorimetric cell, and is absorbed by the studied substance. The amount of evaporated water is calculated from the thermal power registered in the vaporisation chamber, and from the same data the activity of water in the sample can also be calculated. From the thermal powers registered in the two chambers one can calculate the partial molar enthalpy of mixing of water. During the sorption experiment the water content in the sample increases until it reaches a value high enough to make the process of diffusion of water vapor between the chambers very slow. Then the sorption experiment can be stopped. For studies of hydration at very high relative humidities, a special modification of the method of sorption calorimetry – the desorption calorimetric method – was developed. A desorption experiment starts with a fully hydrated sample which is placed in the sample chamber (the top chamber in the figure). In the bottom chamber a salt solution is injected. During the desorption experiment the sample is slowly dehydrated and the salt solution takes up the water evaporated from the sample. See also Isothermal microcalorimetry Isothermal titration calorimetry Pressure perturbation calorimetry References External links Vitaly Kocherbitov: Sorption calorimetry (Malmö University) Calorimetry Heat transfer
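A minimal sketch of one plausible reading of the evaporation calculation: if the vaporisation chamber loses heat only through evaporation of water, the molar evaporation rate follows from the registered thermal power divided by the molar enthalpy of vaporization. The constants and signal values below are illustrative assumptions, not the instrument's actual software or data.

```python
# Minimal sketch (assumptions, not the actual instrument software): convert
# the vaporisation-chamber thermal power into a molar evaporation rate.

DVAP_H_WATER = 44.0e3   # molar enthalpy of vaporization of water, J/mol (~25 C)

def evaporation_rate_mol_per_s(thermal_power_w: float) -> float:
    return thermal_power_w / DVAP_H_WATER

# Hypothetical readings: thermal power sampled once per second
powers_w = [2.0e-3, 1.8e-3, 1.5e-3]          # W, illustrative values
total_mol = sum(evaporation_rate_mol_per_s(p) * 1.0 for p in powers_w)
print(f"water taken up by the sample: {total_mol*1e6:.2f} micromol")
```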
Sorption calorimetry
[ "Physics", "Chemistry" ]
391
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Thermodynamics" ]
37,840,569
https://en.wikipedia.org/wiki/Electrochemical%20fatigue%20crack%20sensor
An Electrochemical Fatigue Crack Sensor (EFCS) is a type of low-cost electrochemical nondestructive dynamic testing method used primarily in the aerospace and transportation infrastructure industries. The method is used to locate surface-breaking and slightly subsurface defects in all metallic materials. In bridge structures, EFCS is used at known fatigue-susceptible areas, such as sharp-angled coped beams, stringer-to-beam attachments, and the toe of welds. This dynamic testing can be a form of short-term or long-term monitoring, as long as the structure is undergoing dynamic cyclic loading. History In 1992, Dr. Campbell Laird and Dr. Yuanfeng Li invented the EFS™. The EFS™ relies on a patented electrical test method, which monitors the current flow at the surface of a metal while it is being mechanically flexed. The output current resembles a heart's EKG pattern and can be interpreted to indicate the degree of fatigue as well as the presence of cracks in their earliest stages of development. The technology behind EFS was devised by researchers from the U.S. Air Force and the University of Pennsylvania for use in the aerospace industry. The original research was aimed at developing a technology for detecting problem cracks in airframes and engines. Since that time, additional research and development has resulted in the adaptation of the EFS system for steel bridge inspection. Principles The Electrochemical Fatigue Sensor (EFS) is a nondestructive dynamic crack-inspection technology, similar in concept to a medical EKG, which is used to determine if actively growing fatigue cracks are present. An EFS sensor is first applied to the fatigue-sensitive location on the bridge or metal structure, and then is injected with an electrolyte, at which point a small voltage is applied. The system subsequently monitors changes in the current response that result from the exposure of fresh steel during crack propagation. The EFS system consists of an electrolyte, a sensor array, and a modified potentiostat called the potentiostat data link (PDL) for applying a constant polarizing voltage between the bridge and sensor, as well as data collection and analysis software. The current response from the sensor array, which consists of a crack measurement sensor and a reference sensor, is collected, analyzed and compared by the system software. Data is presented in both the time domain and the frequency domain. An algorithm, specifically written for this system, automatically indicates the level of fatigue crack activity at the inspection location. EFS can detect cracks in the field as small as 0.01 inches in an actual structure (too small to be seen with the unaided eye). Materials The original research for the EFS was aimed at developing a technology for detecting problem cracks in airframes and engines. Grade 5, also known as Ti6Al4V, Ti-6Al-4V, or Ti 6-4, is the most commonly used titanium alloy in the aerospace industry, found in parts such as internal combustion engine connecting rods. It features a chemical composition of 6% aluminium, 4% vanadium, 0.25% (maximum) iron, 0.2% (maximum) oxygen, and the remainder titanium. Notably, Grade 5 is considerably stronger than commercially pure titanium, while sharing the same stiffness and thermal properties (excluding thermal conductivity, which is approximately 60% lower in Grade 5 Ti than in CP Ti). One of its notable advantages is that it is heat treatable.
This grade exhibits an exceptional combination of strength, corrosion resistance, weldability, and fabricability. It typically finds application at temperatures up to 400 degrees Celsius. (Grade 5 has a density of approximately 4420 kg/m3, Young's modulus of 110 GPa, and tensile strength of 1000 MPa. By comparison, annealed type 316 stainless steel has a density of 8000 kg/m3, modulus of 193 GPa, and tensile strength of only 570 MPa, and tempered 6061 aluminum alloy has a density of 2700 kg/m3, modulus of 69 GPa, and tensile strength of 310 MPa.) EFS detects growing cracks in steel, aluminum, titanium alloys, and other metals. Inspection steps Below are the main steps of using Electrochemical Fatigue Sensors on a bridge: 1. Identification of Critical Areas: To use the EFS on bridges, inspectors first identify the vulnerable parts of a bridge. These could be the areas most susceptible to wear and tear, such as sharp-angled coped beams, stringer-to-beam attachments, or the toe of welds. It could also be locations where bridge owners already suspect a crack. 2. Installation of Sensors: The area to be monitored should be clean and free of any loose material. (The paint does not have to be totally removed as in other sensor installations.) The inspectors wire up the areas with sensors, which are similar to the peel-and-stick versions used for an EKG reading. The sensor array consists of a crack measurement sensor and a reference sensor. 3. Apply a Constant Current: The sensors are injected with an electrolyte liquid, which enables a constant electric current to be maintained between the sensors and the bridge. 4. Monitoring: The system monitors changes in the current response that result from the exposure of fresh steel during crack propagation. 5. Interpretation of Data: The current response from the sensor array indicates quickly and clearly whether a growing crack exists at the inspection location. And because the device is operated while the bridge is in use, it can determine how the cracks change as the structure flexes under stress. Data is presented in both the time domain and the frequency domain. An algorithm, specifically written for this system, automatically indicates the level of fatigue crack activity at the inspection location. The system can detect cracks in the field as small as 0.01 inches in an actual structure. See also Nondestructive testing References External links Demonstration Of The Electrochemical Fatigue Sensor System At The Transportation Technology Center Facility Inspection of Fatigue Cracks on a CN Bridge Using the Electrochemical Fatigue Sensor Electrochemical Fatigue Sensor Demonstration on the Steel Bridge at Fast Nondestructive testing Sensors
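As a rough illustration of the time/frequency-domain comparison described in step 5, the sketch below synthesizes hypothetical sensor currents and compares their harmonic content. This is not the proprietary EFS algorithm; the signal values, frequencies, and the decision ratio are all invented for illustration.

```python
import numpy as np

# Hypothetical sketch (not the proprietary EFS algorithm): compare the current
# response of the crack-measurement sensor against the reference sensor in the
# frequency domain.  A growing crack exposes fresh steel on each load cycle,
# which could show up as extra current at harmonics of the loading frequency.

fs = 100.0                           # sample rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
f_load = 2.0                         # traffic loading frequency, Hz (assumed)

reference = 1.0 + 0.05 * np.sin(2 * np.pi * f_load * t)          # baseline
crack = reference + 0.04 * np.sin(2 * np.pi * 2 * f_load * t)    # extra harmonic

def band_amplitude(signal, freq, fs):
    """Amplitude of the spectral component nearest `freq`."""
    spec = np.abs(np.fft.rfft(signal - signal.mean())) / len(signal) * 2
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

ratio = band_amplitude(crack, 2 * f_load, fs) / band_amplitude(reference, f_load, fs)
print(f"2nd-harmonic / fundamental ratio: {ratio:.2f}  (elevated -> active crack?)")
```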
Electrochemical fatigue crack sensor
[ "Materials_science", "Technology", "Engineering" ]
1,253
[ "Nondestructive testing", "Materials testing", "Sensors", "Measuring instruments" ]
37,841,887
https://en.wikipedia.org/wiki/Elastic%20mechanisms%20in%20animals
Elastic mechanisms in animals are very important in the movement of vertebrate animals. The muscles that control vertebrate locomotion are affiliated with springy tissues, such as tendons, which lie within the muscles and connective tissue. A spring can be a mechanism for different actions involved in hopping, running and walking, and can serve other diverse functions such as metabolic energy conservation, attenuation of muscle power production, and amplification of muscle power production. When a body is running, walking or hopping, it uses springs as a way to store energy, which indicates that elastic mechanisms have a great influence on its dynamics. When a force is applied to a spring it bends and stores energy in the form of elastic strain energy, and when it recoils after the force has been released, this energy is released as well. Elastic proteins provide the property of elasticity, which gives the spring the ability to bend reversibly without the loss of energy and the ability to bend to large strains with small force. Elastic proteins also combine high resilience with low stiffness, which aids the storage and return of elastic strain energy. While running, tendons are able to reduce the metabolic rate of muscle activity by reducing the volume of muscle that must be active to produce force. The timing of muscle activation is very important for utilizing the mechanical and energetic benefits of tendon elasticity. Power attenuation by the use of the tendons can give the muscle-tendon system the ability to absorb energy at a rate beyond the muscle's maximum capacity to absorb energy. Power amplification mechanisms are able to work because the spring and the muscles have different intrinsic power limits. Muscles in a skeletal system can be limited in their maximum power production. Power amplification by the use of the tendons allows the muscle-tendon system to produce power beyond the muscle's own capacity. The mechanical functions of tendons have a structural basis and are not subject to the muscle's limits on power production. Elastic mechanisms for metabolic energy conservation From previous experimental studies on large animals, it was noted that during active locomotion mammals save much of the energy they would otherwise need for running by means of elastic structures in their legs. Measurements of the rates of oxygen consumption of various animals as they walked, ran or hopped revealed that at high speeds animals seem to save more than half the metabolic energy they would otherwise need for locomotion. A notable example is jumping in kangaroos. When hopping at slow speeds, their use of energy increases linearly, but at high speeds, kangaroos can move as cheaply (from an energetic perspective) as if they were moving at slower speeds. Research into the anatomy of large mammals such as kangaroos, and of large ungulates such as deer and gazelle, suggests strongly that some sort of elastic mechanism is important for this energetic saving. A combination of careful experiments with anatomical data (e.g. tendon dimensions), mechanical measurements (e.g. force plate recordings) and mathematical calculations revealed that a significant fraction of the work done with each step could be provided by the spring-like action of tendons, rather than by muscle work. When the animal's foot contacts the surface of the ground during high-speed locomotion, the tendon or ligament is pressed tightly together, storing elastic energy much like a compressed spring.
As the foot leaves the ground, the pressure on the compressed tendons and ligaments is released, and elastic recoil from these spring-like structures provides additional force to propel the animal, thus resulting in energetic savings. Simple calculations based on kangaroo hopping and the forces involved show how storage of elastic strain energy can save twenty to thirty percent of the metabolic energy required for hopping. Measurements of oxygen consumption, together with fluctuations of kinetic and gravitational potential energy, indicate elastic savings of at least fifty-four percent at high speeds. It is important to consider that the metabolic benefits of elastic structures are probably most apparent for larger animals, rather than small organisms such as insects. This results from the simple fact that larger animals can exert much higher forces on their tendons and ligaments during movement, compared to small animals. Elastic mechanisms for power attenuation In eccentric contractions, elastic tendons have the ability to operate as power attenuators. Tendons exhibit power attenuation that allows muscle-tendon systems to absorb energy at a rate exceeding the muscle's own maximum capacity for absorbing energy. In comparison, power amplification by tendons allows for a greater output of power than the capacity of their respective muscle. This elastic mechanism can lead to the following reductions in lengthening muscles: peak power input, lengthening velocity, and force. Muscle damage has been correlated with these factors. However, the shuttling of energy through tendons before it is absorbed by muscles has been shown to provide a protective mechanism against that damage. However, large accumulations of elastic energy storage over time may negatively affect the timing of recoil. This results in power attenuation. Though muscles produce and absorb mechanical power, tendons still have an integral role in the dissipation of mechanical energy. This action is essential for activities like deceleration, when landing from a jump, or downhill running. In 1991, R.I. Griffiths combined experiments on isolated muscle-tendon preparations with in vivo studies showing that muscles can remain nearly isometric during muscle-tendon unit lengthening: rapid stretches applied to muscle-tendon units are absorbed by the stretch of the tendons. Experimenters explain this phenomenon by the idea that muscles are susceptible to damage when actively lengthened, and this behaviour acts as a mechanical buffer against it. In addition, in vivo experiments have found that the elastic mechanism protects musculoskeletal structures beyond the sarcomere. Because of this, forces developed in active muscles ultimately determine the forces on structures such as bones, joints, and ligaments. Similarly, tendons are unable to entirely insulate muscles from dynamic extension. Tendons affect muscles when muscles lengthen, which affects the peak forces experienced due to energy-absorbing actions in the muscle-tendon unit. Active lengthening of muscle fibers results in both an accumulation and a loss of energy. Even though the energy briefly stored in stretched elastic elements is also released, there is an overall net gain. This shows that muscle fibers are effective both in power production and in energy absorption for the body or for individual body segments with muscle-tendon units.
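To make the energy bookkeeping concrete, the sketch below estimates the elastic strain energy a tendon can store and return during one hop. All numbers are hypothetical order-of-magnitude values chosen for illustration, not measurements from the studies cited above.

```python
# Illustrative sketch with hypothetical numbers: elastic strain energy stored
# in a tendon loaded in tension is roughly
#   E = (1/2) * stress * strain * volume,
# and the fraction of hop work recovered by recoil indicates the available
# metabolic saving.

stress = 40e6        # peak tendon stress, Pa (assumed)
modulus = 1.2e9      # tendon Young's modulus, Pa (typical order of magnitude)
volume = 2.0e-5      # tendon volume, m^3 (assumed)
resilience = 0.93    # fraction of stored energy returned on recoil (typical)

strain = stress / modulus
stored_j = 0.5 * stress * strain * volume
returned_j = resilience * stored_j

work_per_hop_j = 40.0    # assumed total mechanical work of one hop, J
print(f"strain: {strain*100:.1f}%  stored: {stored_j:.1f} J  "
      f"returned: {returned_j:.1f} J "
      f"({returned_j / work_per_hop_j * 100:.0f}% of hop work)")
```

With these assumed values the recoil returns roughly a third of the hop's work, consistent with the twenty-to-thirty percent savings quoted above.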
Elastic mechanisms as power amplifiers Tendons, connective tissues, and molecular structures within a skeletal system can act as power amplifiers by storing energy gradually and releasing it rapidly. This amplification process is possible because spring-like tendons are not limited by the same rate limits imposed upon muscles by their intrinsic enzymatic processes. The process of amplification begins when a muscle contracts steadily, storing elastic strain energy in the tendon. Once the energy is completely stored, the tendon releases it in a much shorter time span than was required to create it within the muscle. The tendon actually returns slightly less energy than the work done by the contracting muscle, but because power is equivalent to work over time, the considerably shorter release time increases the power significantly. This phenomenon has been observed in numerous vertebrate behaviors, one of the most notable being jumping. Observed in kangaroos, bush babies, birds, frogs, and various species of antelope, jumping relies on this system because the action is inherently limited in the time that is available to produce power once the body has begun to accelerate. Once the body loses contact with the ground there is no way for the organism to continue to produce force. Substantial improvements in acceleration resulting from these mechanisms have been observed in jumping fleas, accelerating turkeys, the striking of mantis shrimp, and the running of horses, whose biceps brachii power output is amplified fifty times by the catapult-like behavior of the tendon. Feeding mechanisms also benefit from spring-like power amplifiers within the skeletal system. The depressor mandibulae of toads relies on this mechanism to produce catapult-like tongue projection. More dramatically, the ballistic tongue projection utilized by chameleons and some salamanders uses elastic mechanisms to produce mass-specific power outputs more than five times higher than those reported for most fast muscles. In chameleons, it is significant to note that the retractor muscles utilized in prey capture decreased in power output by a factor of six over a 20 °C temperature range, while the tongue projection mechanism, which utilizes elastic energy storage, decreased a mere 50%, demonstrating that these elastic mechanisms do not simply amplify the power output, but also extend the temperature range in which power outputs may be amplified. References Elasticity (physics) Muscular system Skeletal muscle Animal physiology
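The arithmetic of power amplification is simple enough to sketch directly: the same work delivered over a shorter time is a proportionally higher power. The numbers below are hypothetical, chosen so the amplification matches the fifty-fold figure quoted for the horse.

```python
# Minimal sketch (hypothetical numbers): power amplification by a tendon
# "catapult".  A muscle loads the tendon slowly; the tendon releases the same
# work over a much shorter time, so peak power output is multiplied by roughly
# the ratio of loading time to release time.

work_j = 5.0          # work stored by slow muscle contraction, J (assumed)
t_load_s = 0.50       # time taken to load the tendon, s (assumed)
t_release_s = 0.01    # time taken for elastic recoil, s (assumed)

p_muscle = work_j / t_load_s      # average power the muscle had to produce
p_recoil = work_j / t_release_s   # average power delivered on recoil

print(f"muscle power: {p_muscle:.0f} W, recoil power: {p_recoil:.0f} W, "
      f"amplification: {p_recoil / p_muscle:.0f}x")   # 50x with these values
```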
Elastic mechanisms in animals
[ "Physics", "Materials_science", "Biology" ]
1,785
[ "Physical phenomena", "Animals", "Animal physiology", "Elasticity (physics)", "Deformation (mechanics)", "Physical properties" ]
37,844,522
https://en.wikipedia.org/wiki/Gene%20therapy%20for%20osteoarthritis
Gene therapy for osteoarthritis is the application of gene therapy to treat osteoarthritis (OA). Unlike pharmacological treatments, which are administered locally or systemically as a series of interventions, gene therapy aims to establish a sustained therapeutic effect after a single, local injection. The main risk factors for osteoarthritis are age and body mass index; as such, OA is predominantly considered a disease of aging. As the body ages, catabolic factors begin to predominate over anabolic factors, resulting in a reduction of extracellular matrix gene expression and reduced cellularity in articular cartilage. Catabolism eventually predominates over anabolism to such an extent that severe cartilage erosions and bone marrow lesions / remodeling manifest in clinical osteoarthritis. Joint inflammation is also a key mechanism in OA, and a number of pro-inflammatory cytokines, particularly IL-1, have been implicated in the pathophysiology, human genetics, and animal models of the disease. In addition, osteoarthritis has a number of heritable factors, and there may be additional genetic risk factors for the disease. Gene augmentation, gene replacement, and novel transgene gene therapy strategies for the potential medical management of osteoarthritis are under preliminary research to define pathological mechanisms and possible treatments for this chronic disease. While viral vector gene therapies predominate, both viral and non-viral vectors have been developed as a means to deliver therapeutic genes. Other gene augmentation approaches Other approaches have involved additional anabolic and anti-catabolic factors. As the body ages, catabolic factors begin to predominate over anabolic factors. In osteoarthritis, catabolic factors promote the degradation of articular cartilage and decrease the total cell content of cartilage. While the body is young, anabolic factors are able to replace the lost cartilage and cartilage-producing cells; however, this ability appears to decrease with age. Gene augmentation approaches, such as the delivery of FGF18 and PRG4, aim to augment the natural anabolic processes within the joint, to delay the progression of cartilage degeneration. Anabolic factors appear to be successful in clinical studies when delivered in the form of repeat protein injections; however, due to the pharmacokinetics of articular joints, these approaches require up to 12 injections per year in bilateral osteoarthritis, and may need to be sustained indefinitely to prevent reversal of cartilage gains. Gene augmentation approaches aim to replicate the success of anabolic protein therapies by delivering the genetic instructions for these factors in the form of single-injection treatments. Gene replacement approaches Passed from parents to children, genes are the building blocks of inheritance. They contain instructions for making proteins. If genes do not produce the right proteins correctly, a child can develop a genetic disorder. Gene therapy is a molecular method aiming to replace defective or absent genes, or to counteract genes undergoing overexpression. For this purpose, genes may be inserted into delivery vectors and administered to target cells to augment or replace defective genetic material. The most common form of gene therapy involves inserting a normal gene to replace an abnormal gene. Other approaches include repairing an abnormal gene and altering the degree to which a gene is turned on or off.
Two basic methodologies are utilized to transfer vectors into target tissues: ex vivo gene transfer and in vivo gene transfer. One type of gene therapy, in which the gene transfer takes place outside the patient's body, is called ex vivo gene therapy; it is also often referred to as cell therapy (or genetically modified cell therapy). This method of gene therapy is more complicated, since the cells first have to be harvested from the patient in an invasive procedure. The harvested cells also need to be manipulated in a sterile manner, and care must be taken not to damage the cells or their genetic material. Alternative approaches allow for the use of allogeneic stem cells, which were not originally harvested from the patient undergoing treatment. Such approaches need to rely on "cloaking" technology to ensure that the cells are not eliminated from the body once detected as foreign. This "cloaking" often requires the use of additional genetic manipulation, such as the insertion of a CD47 gene to express a "don't eat me" signal on the surface of the cells to make them hypoimmune. A major challenge with the use of cell therapy for osteoarthritis is the nature of articular joints, which experience significant shear, leading to rapid loss of transplanted cells. Genetically modified cell therapies for the treatment of osteoarthritis are currently strictly investigational, and their safety and effectiveness claims have not been reviewed by the FDA. Significance and causes of osteoarthritis Primary osteoarthritis (OA) is a degenerative joint disease which is the Western world's leading cause of pain and disability. It is characterized by the progressive loss of normal structure and function of articular cartilage, the smooth tissue covering the ends of the moving bones. This chronic disease affects not only the articular cartilage but also the subchondral bone, the synovium, and periarticular tissues. Individuals with OA can experience severe pain and limited motion, and the disease often tends to progress as the body ages. OA is mostly the result of natural aging of the joint due to biochemical changes in the cartilage extracellular matrix. While age and BMI are the main risk factors for osteoarthritis, contributors such as joint trauma, mechanical overloading of joints, or joint instability can accelerate or exacerbate the condition. OA caused by secondary factors such as joint injury or damage to the subchondral bone is referred to as secondary osteoarthritis. Since the degeneration of cartilage is not naturally reversible, it will continue to progress, eventually resulting in the need for joint replacement as a potential terminal intervention. Due to the prevalence of OA, the repair and regeneration of articular cartilage has become a dominant area of research. The growing number of people suffering from osteoarthritis and the potential of some gene therapy approaches attract a great deal of attention to the development of genetic medicines for the treatment of this chronic disease. Vectors for osteoarthritis gene delivery Various vectors have been developed to carry therapeutic genes to cells. There are two broad categories of gene delivery vectors: viral vectors, which use viruses as the genetic carriers, and non-viral agents, such as polymers, lipid nanoparticles, and liposomes. Viral vectors Viral vectors are the most widely used gene delivery method, as they have evolved to do this job with a high degree of efficiency and specificity.
When using viral vectors for gene delivery, researchers aim to remove all of the virus's undesired genes and replace them with at least one therapeutic gene. The combination of their evolutionary origin and broad use makes viral vectors highly effective at delivering genetic cargo to cells, and significantly reduces the risks associated with using this delivery method. When administered systemically, or in high doses, viral vectors may induce an inflammatory response, which can cause minor side effects such as edema or serious ones like multisystem organ failure. It may also be difficult to administer gene therapy repeatedly due to the immune system's enhanced response to viruses. However, viral vectors delivered locally to the joint appear to be well contained within the joint area and are very well tolerated based on preclinical and early clinical studies. Furthermore, the durability of therapeutic transgene expression appears to be such that a single-injection therapy may be sufficient to reverse the progression of the disease. The most commonly used viral vectors today are adeno-associated viruses (AAVs); since AAVs do not appear to cause any disease in humans, have low immunogenicity, and are non-replicating, they have proven to be safe and effective in a number of indications. Adenoviruses have also been investigated in the clinic for the treatment of osteoarthritis; however, since adenoviruses are highly immunogenic, their most successful application has been in the delivery of adenoviral vector vaccines. Non-viral vectors Non-viral methods involve complexing therapeutic DNA with various macromolecules including cationic lipids and liposomes, polymers, polyamines and polyethylenimine, and nanoparticles. FuGene 6 and modified cationic liposomes are two non-viral gene delivery methods that have so far been utilized for gene delivery to cartilage. FuGene 6 is a non-liposomal lipid formulation which has proved to be successful in transfecting a variety of cell lines (cancerous cells used for in vitro research). Liposomes have been shown to be a potential candidate for gene delivery; in this approach cationic liposomes are made to facilitate interaction with cell membranes in order to deliver nucleic acids. Non-viral vectors may have the capacity to deliver a large amount of therapeutic genes repeatedly and may be lower-cost to produce at large scale. Another advantage of non-viral delivery methods is that they do not elicit a memory immune response and may be administered several times. In spite of these advantages, non-viral vectors have not yet replaced viral vectors, due to relatively low efficiency, toxicity of the individual formulation components, and short-term transgene expression. As a result, while a number of viral vectors have successfully been used in several clinical studies, non-viral vectors for intra-articular delivery have thus far only been investigated preclinically. Target cells The cells targeted for the treatment of osteoarthritis are chondrocytes, synoviocytes, and their progenitors. Since the joint capsule is relatively well contained, intra-articular injections are highly successful at delivering the gene therapy locally to the target cell types. Treatment of osteoarthritis may be successful via: Stimulation of anabolic pathways to rebuild the matrix or chondrocyte content of cartilage. This approach may result in reversal of the disease (examples include FGF18). Inhibition of catabolic pathways to prevent further degeneration of cartilage.
This approach may result in slowing of the disease progression, but not reversal (examples include IL-1Ra). Replacing the damaged cells or tissues with new cells, with or without a matrix. This approach may result in reversal of the disease pathology, but has thus far only been successful for the treatment of focal cartilage lesions (examples include MACI and Hyalofast). Relieving the pathological or symptomatic complications, such as pain or the formation of osteophytes (examples include steroids and viscosupplements). Thus far, the most promising therapies appear to be those focused on promoting cartilage anabolism. Specifically, only the chondro-anabolic FGF18 therapy, which uses sprifermin, the recombinant protein analog of FGF18, has been able to demonstrate an ability to increase cartilage thickness in a dose-dependent manner, arrest progression to joint replacement, and reduce pain and clinically meaningful symptom progression. Based on this success, FGF18 is also being investigated as a gene therapy for the treatment of OA. While several anti-inflammatory or anti-catabolic approaches have been reported in preclinical studies, none of the clinical studies to date have produced any evidence of efficacy in modifying disease progression (e.g. IGF-I/IL-1RA, steroids). Some anti-inflammatory treatments have actually been demonstrated to promote cartilage degeneration with long-term use. Gene defects leading to osteoarthritis While osteoarthritis is mainly a disease of aging, it has some degree of heritability. Epidemiological studies have shown that a genetic component may be an important risk factor in OA. The insulin-like growth factor I gene (IGF-1), transforming growth factor β, cartilage oligomeric matrix protein, bone morphogenetic protein, and other anabolic genes are among the candidate genes for OA. Genetic changes in OA can lead to defects of a structural protein such as collagen, or to changes in the metabolism of bone and cartilage. OA is rarely considered a simple disorder following Mendelian inheritance, being predominantly a multifactorial disease. However, in the field of OA gene therapy, research has focused on gene transfer as a delivery system for therapeutic gene products, rather than on counteracting genetic abnormalities or polymorphisms. Genes which contribute to protecting and restoring the matrix of articular cartilage are attracting the most attention. These genes are listed in Table 1. Among all the candidates listed below, only FGF18 has been successful at a protein level in initial clinical studies. Other candidates, such as proteins that block the actions of interleukin-1 (IL-1) (interleukin-1 receptor antagonists / IL-1Ra), have been evaluated as both protein and gene therapy injections and were either abandoned (as in the case of the protein) or did not report any efficacy in disease modification (as in the case of the gene therapies). Osteoarthritis targets Interleukin-1 Preclinical studies suggest that the pro-inflammatory cytokine interleukin-1 (IL-1) is a contributor to joint pain, cartilage loss, and inflammation. Although prior approaches with recombinant proteins have shown mixed results, gene therapy remains a promising avenue for IL-1 inhibition. A therapeutic gene with the potential to counteract the effect of interleukin-1, the interleukin-1 receptor antagonist (IL-1Ra), is currently being evaluated in early clinical trials with several delivery vectors including AAV and adenovirus.
IL-1Ra, the natural antagonist of IL-1, is a protein that binds non-productively to the cell-surface interleukin-1 receptor, blocking the activity of IL-1 via that receptor. A number of studies in dogs, rabbits, and horses suggested that local IL-1Ra gene therapy is safe and effective in animal models of OA; however, none of these findings have translated to clinical efficacy, despite both the protein and gene therapy being evaluated in multiple clinical trials. FGF18 Another gene therapy approach uses FGF-18 as a potential anabolic agent. A prior clinical trial using sprifermin (the FGF-18 protein, rather than gene therapy) showed that sprifermin was able to increase cartilage thickness in a dose-dependent manner in placebo-controlled, randomized clinical studies. The trial also demonstrated the potential of FGF18 to arrest progression to joint replacement over the study period. Finally, FGF18 was able to reduce pain (WOMAC) and clinically meaningful symptom progression, in both the full trial population and the high-risk subgroup. Based on these highly promising clinical results, FGF18 is being investigated as a gene therapy for the treatment of osteoarthritis. Strategies In the context of OA, the most attractive intra-articular sites for gene transfer are the synovium and the articular cartilage. Most experimental progress has been made with gene transfer to a convenient intra-articular tissue, such as the hyaline cartilage or the synovium, tissues amenable to genetic modification by a variety of vectors, using both in vivo and ex vivo protocols. Gene transfer to cartilage Chondrocytes are non-dividing cells (with the exception of chondrocyte progenitors) embedded in a network of collagens and proteoglycans; however, research suggests that genes can be transferred to chondrocytes within normal or arthritic cartilage by intra-articular injection of AAVs or liposomes containing Sendai virus (HVJ-liposomes). Since chondrocytes are considered resident cells of the joint, with lower turnover rates than synoviocytes, gene delivery strategies targeting chondrocytes may provide a higher degree of durability. The most efficient methods of gene transfer to cartilage have involved in vivo strategies delivering AAVs directly to joints via intra-articular injection. Of the AAV serotypes studied, AAV2 appears to be particularly effective at transducing chondrocytes and synoviocytes, whereas AAV serotype 2.5 has shown efficient delivery to human cartilage explants and to horse joints in vivo. Some currently evaluated strategies for gene delivery to chondrocytes include FGF18, PRG4, and IL-1Ra. Gene transfer to synovium The major purpose of gene delivery here is to alter the lining of the joint in a way that enables it to serve as an endogenous source of therapeutic molecules that can diffuse into and influence the metabolism of adjacent tissues such as cartilage. Genes may be delivered to the synovium in animal models of RA and OA by direct, in vivo injection of vector, or by indirect, ex vivo methods involving autologous synovial cells, skin fibroblasts, or other cell types such as mesenchymal stem cells. Synoviocytes, the predominant cell type in the synovium, are closely related to fibroblasts and have relatively high turnover rates (compared to, for example, chondrocytes). As such, gene therapy treatment of the synovium is likely to be challenged by low durability.
Also, since osteoarthritis is a disease of cartilage tissues, treating the synovium is an indirect approach and may be complicated by a lack of therapeutic activity. However, gene therapy administered into the intra-articular space is likely to deliver the therapeutic gene to both cartilage and synovial tissues; the preference for tissue type may be further modified by selecting a specific delivery vector. Some delivery vectors and their advantages and limitations are listed in Table 2. The indirect ex vivo approach involves harvest of synovium, cartilage, or bone marrow cells; isolation and culture of the harvested cells; in vitro transduction with the therapeutic gene of interest; and injection of the engineered cells into the joint. Safety One important issue related to human gene therapy is safety, particularly for the gene therapy of a debilitating but non-fatal disease such as OA. The main concern is the high immunogenicity of certain viral vectors, such as adenoviruses, which may further exacerbate the pathology. Because retroviral vectors permanently integrate into the chromosomes of the cells they infect, there is always a chance of integration into a tumor suppressor gene or an oncogene, leading to oncogenic transformation of the cell. As a result, the most advanced therapies focus on the use of non-integrating vectors, low doses, and intra-articular (rather than systemic) delivery. All approaches involving genetic modification are currently only investigational, not approved by the FDA, EMA, or any other regulator; as such, their safety and efficacy statements have not been reviewed or approved by regulatory agencies and the treatments are not approved for commercial use. See also Disease Modifying Osteoarthritis Drug Vectors in gene therapy Protein therapy Adeno-associated virus Gene therapy for epilepsy Management of Parkinson's disease References Arthritis Medical genetics Medical treatments Gene therapy
Gene therapy for osteoarthritis
[ "Engineering", "Biology" ]
4,090
[ "Gene therapy", "Genetic engineering" ]
37,847,650
https://en.wikipedia.org/wiki/Cypherpunks%20%28book%29
Cypherpunks: Freedom and the Future of the Internet is a 2012 book by Julian Assange, in discussion with Internet activists and cypherpunks Jacob Appelbaum, Andy Müller-Maguhn and Jérémie Zimmermann. Its primary topic is society's relationship with information security. In the book, the authors warn that the Internet has become a tool of the police state, and that the world is inadvertently heading toward a form of totalitarianism. They promote the use of cryptography to protect against state surveillance. In the introduction, Assange says that the book is "not a manifesto [...] [but] a warning", a point he expanded on in an interview with Guardian journalist Decca Aitkenhead. Assange later wrote in The Guardian that "strong cryptography is a vital tool in fighting state oppression", saying that this was the message of his book, Cypherpunks. Cypherpunks is published by OR Books. It is primarily a transcript of World Tomorrow episode eight, a two-part interview between Assange, Jacob Appelbaum, Andy Müller-Maguhn, and Jérémie Zimmermann. In the foreword, Assange said, "the Internet, our greatest tool for emancipation, has been transformed into the most dangerous facilitator of totalitarianism we have ever seen". See also Cypherpunk Computer and network surveillance Secrecy References External links 2012 non-fiction books Works by Julian Assange OR Books books Computer security books Cryptography Internet privacy Works about privacy Cypherpunks
Cypherpunks (book)
[ "Mathematics", "Engineering" ]
317
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
31,194,124
https://en.wikipedia.org/wiki/5-cell%20honeycomb
In four-dimensional Euclidean geometry, the 4-simplex honeycomb, 5-cell honeycomb or pentachoric-dispentachoric honeycomb is a space-filling tessellation honeycomb. It is composed of 5-cell and rectified 5-cell facets in a ratio of 1:1. Structure Cells of the vertex figure are ten tetrahedra and twenty triangular prisms, corresponding to the ten 5-cells and twenty rectified 5-cells that meet at each vertex. All the vertices lie in parallel realms in which they form alternated cubic honeycombs, the tetrahedra being either tops of the rectified 5-cell or the bases of the 5-cell, and the octahedra being the bottoms of the rectified 5-cell. Alternate names Cyclopentachoric tetracomb Pentachoric-dispentachoric tetracomb Projection by folding The 5-cell honeycomb can be projected into the 2-dimensional square tiling by a geometric folding operation that maps two pairs of mirrors into each other, sharing the same vertex arrangement. Two different aperiodic tilings with 5-fold symmetry can be obtained by projecting two-dimensional slices of the honeycomb: the Penrose tiling composed of rhombi, and the Tübingen triangle tiling composed of isosceles triangles. A4 lattice The vertex arrangement of the 5-cell honeycomb is called the A4 lattice, or 4-simplex lattice. The 20 vertices of its vertex figure, the runcinated 5-cell, represent the 20 roots of the Coxeter group. It is the 4-dimensional case of a simplectic honeycomb. The A4* lattice is the union of five A4 lattices, and is the dual to the omnitruncated 5-cell honeycomb; the Voronoi cell of this lattice is therefore an omnitruncated 5-cell. Related polytopes and honeycombs The tops of the 5-cells in this honeycomb adjoin the bases of the 5-cells, and vice versa, in adjacent laminae (or layers); but alternating laminae may be inverted so that the tops of the rectified 5-cells adjoin the tops of the rectified 5-cells and the bases of the 5-cells adjoin the bases of other 5-cells. This inversion results in another non-Wythoffian uniform convex honeycomb. Octahedral prisms and tetrahedral prisms may be inserted in between alternated laminae as well, resulting in two more non-Wythoffian elongated uniform honeycombs. Rectified 5-cell honeycomb The rectified 4-simplex honeycomb or rectified 5-cell honeycomb is a space-filling tessellation honeycomb. Alternate names small cyclorhombated pentachoric tetracomb small prismatodispentachoric tetracomb Cyclotruncated 5-cell honeycomb The cyclotruncated 4-simplex honeycomb or cyclotruncated 5-cell honeycomb is a space-filling tessellation honeycomb. It can also be seen as a birectified 5-cell honeycomb. It is composed of 5-cell, truncated 5-cell, and bitruncated 5-cell facets in a ratio of 2:2:1. Its vertex figure is a tetrahedral antiprism, with 2 regular tetrahedra, 8 triangular pyramids, and 6 tetragonal disphenoids as cells, defining 2 5-cell, 8 truncated 5-cell, and 6 bitruncated 5-cell facets around a vertex. It can be constructed as five sets of parallel hyperplanes that divide space into two half-spaces. The 3-space hyperplanes contain quarter cubic honeycombs as a collection of facets. Alternate names Cyclotruncated pentachoric tetracomb Small truncated-pentachoric tetracomb Truncated 5-cell honeycomb The truncated 4-simplex honeycomb or truncated 5-cell honeycomb is a space-filling tessellation honeycomb. It can also be called a cyclocantitruncated 5-cell honeycomb.
Alternate names Great cyclorhombated pentachoric tetracomb Great truncated-pentachoric tetracomb Cantellated 5-cell honeycomb The cantellated 4-simplex honeycomb or cantellated 5-cell honeycomb is a space-filling tessellation honeycomb. It can also be called a cycloruncitruncated 5-cell honeycomb. Alternate names Cycloprismatorhombated pentachoric tetracomb Great prismatodispentachoric tetracomb Bitruncated 5-cell honeycomb The bitruncated 4-simplex honeycomb or bitruncated 5-cell honeycomb is a space-filling tessellation honeycomb. It can also be called a cycloruncicantitruncated 5-cell honeycomb. Alternate names Great cycloprismated pentachoric tetracomb Grand prismatodispentachoric tetracomb Omnitruncated 5-cell honeycomb The omnitruncated 4-simplex honeycomb or omnitruncated 5-cell honeycomb is a space-filling tessellation honeycomb. It can also be seen as a cyclosteriruncicantitruncated 5-cell honeycomb. It is composed entirely of omnitruncated 5-cell (omnitruncated 4-simplex) facets. Coxeter calls this Hinton's honeycomb after C. H. Hinton, who described it in his book The Fourth Dimension in 1906. The facets of all omnitruncated simplectic honeycombs are called permutohedra and can be positioned in (n+1)-space with integral coordinates, as permutations of the whole numbers (0,1,..,n). Alternate names Omnitruncated cyclopentachoric tetracomb Great-prismatodecachoric tetracomb A4* lattice The A4* lattice is the union of five A4 lattices, and is the dual to the omnitruncated 5-cell honeycomb; the Voronoi cell of this lattice is therefore an omnitruncated 5-cell. Alternated form This honeycomb can be alternated, creating omnisnub 5-cells with irregular 5-cells created at the deleted vertices. Although it is not uniform, the 5-cells have a symmetry of order 10. See also Regular and uniform honeycombs in 4-space: Tesseractic honeycomb 16-cell honeycomb 24-cell honeycomb Truncated 24-cell honeycomb Snub 24-cell honeycomb Notes References Norman Johnson, Uniform Polytopes, Manuscript (1991) Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995 (Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380–407, MR 2,10] (1.9 Uniform space-fillings) (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Model 134, x3o3o3o3o3*a - cypit - O134, x3x3x3x3x3*a - otcypit - 135, x3x3x3o3o3*a - gocyropit - O137, x3x3o3x3o3*a - cypropit - O138, x3x3x3x3o3*a - gocypapit - O139, x3x3x3x3x3*a - otcypit - 140 Affine Coxeter group Wa(A4), Quaternions, and Decagonal Quasicrystals, Mehmet Koca, Nazife O. Koca, Ramazan Koc (2013) Honeycombs (geometry) 5-polytopes
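The coordinate description of the permutohedron facets lends itself to a quick computational check. The sketch below is illustrative only; it enumerates the vertices of one facet and does not construct the honeycomb itself.

```python
from itertools import permutations

# Quick sketch of the coordinate description quoted above: the vertices of the
# permutohedron facet of the omnitruncated n-simplex honeycomb are the
# permutations of (0, 1, ..., n), embedded in (n+1)-dimensional space.

n = 4  # the omnitruncated 4-simplex (5-cell) case
verts = sorted(permutations(range(n + 1)))
print(f"{len(verts)} vertices")        # 5! = 120 for the omnitruncated 5-cell
print(verts[:3], "...")

# All vertices lie in the hyperplane x0 + ... + x4 = n(n+1)/2, confirming the
# facet is an n-dimensional polytope sitting in (n+1)-space.
assert all(sum(v) == n * (n + 1) // 2 for v in verts)
```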
5-cell honeycomb
[ "Physics", "Chemistry", "Materials_science" ]
1,823
[ "Tessellation", "Crystallography", "Honeycombs (geometry)", "Symmetry" ]
32,186,226
https://en.wikipedia.org/wiki/Line%20group
A line group is a mathematical way of describing symmetries associated with moving along a line. These symmetries include repeating along that line, making that line a one-dimensional lattice. However, line groups may exist in more than one dimension, and they may involve those dimensions in their isometries or symmetry transformations. One constructs a line group by taking a point group in the full dimensions of the space, and then adding translations or offsets along the line to each of the point group's elements, in the fashion of constructing a space group. These offsets include the repeats, and a fraction of the repeat, one fraction for each element. For convenience, the fractions are scaled to the size of the repeat; they are thus within the line's unit cell segment. One-dimensional There are 2 one-dimensional line groups. They are the infinite limits of the discrete two-dimensional point groups Cn and Dn. Two-dimensional There are 7 frieze groups, which involve reflections along the line, reflections perpendicular to the line, and 180° rotations in the two dimensions. Three-dimensional There are 13 infinite families of three-dimensional line groups, derived from the 7 infinite families of axial three-dimensional point groups. As with space groups in general, line groups with the same point group can have different patterns of offsets. Each of the families is based on a group of rotations around the axis with order n. The groups are listed in Hermann-Mauguin notation, and for the point groups, Schönflies notation. There appears to be no comparable notation for the line groups. These groups can also be interpreted as patterns of wallpaper groups wrapped around a cylinder n times and infinitely repeating along the cylinder's axis, much like the three-dimensional point groups and the frieze groups. The offset types are: none (offsets along the axis include no offsets around it, to within repeats of the unit cell around the axis); helical offset with helicity q (for a unit offset along the axis, there is an offset of q around it, so a point that has repeated offsets will trace out a helix); and zigzag offset (a helical offset of 1/2 relative to the unit cell around the axis). Note that the wallpaper groups pm, pg, cm, and pmg appear twice. Each appearance has a different orientation relative to the line-group axis: reflection parallel (h) or perpendicular (v). The other groups have no such orientation: p1, p2, pmm, pgg, cmm. If the point group is constrained to be a crystallographic point group, a symmetry of some three-dimensional lattice, then the resulting line group is called a rod group. There are 75 rod groups. The Coxeter notation is based on the rectangular wallpaper groups, with the vertical axis wrapped into a cylinder of symmetry order n or 2n. Going to the continuum limit, with n to ∞, the possible point groups become C∞, C∞h, C∞v, D∞, and D∞h, and the line groups have the appropriate possible offsets, with the exception of zigzag. Helical symmetry The groups Cn(q) and Dn(q) express the symmetries of helical objects. Cn(q) is for n helices oriented in the same direction, while Dn(q) is for n unoriented helices and 2n helices with alternating orientations. Reversing the sign of q creates a mirror image, reversing the helices' chirality or handedness. Nucleic acids, DNA and RNA, are well known for their helical symmetry. Nucleic acids have a well-defined direction, giving single strands C1(q).
Double strands have opposite directions and are on opposite sides of the helix axis, giving them D1(q). See also Point group Space group One-dimensional symmetry group Frieze group Rod group References Euclidean symmetries Discrete groups
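As a rough illustration of helical symmetry, the sketch below generates the orbit of a point under operations of a Cn(q)-type line group, combining an n-fold rotation about the axis with helical translations. The exact scaling convention for q varies between references, so this is one interpretation chosen for illustration, with arbitrary parameters.

```python
import math

# Illustrative sketch: orbit of a point under a Cn(q)-style line group -- an
# n-fold rotation about the z-axis, plus, for each unit translation along the
# axis, an extra rotation of q steps of 2*pi/n (the helical offset).

def cn_q_orbit(x, y, z, n=6, q=1, repeats=3):
    """Images of (x, y, z) under rotations and helical translations."""
    pts = []
    r, phi0 = math.hypot(x, y), math.atan2(y, x)
    for t in range(repeats):              # translations along the axis
        for j in range(n):                # pure rotations about the axis
            phi = phi0 + 2 * math.pi * (j + q * t) / n
            pts.append((r * math.cos(phi), r * math.sin(phi), z + t))
    return pts

for p in cn_q_orbit(1.0, 0.0, 0.0, n=4, q=1, repeats=2):
    print(f"({p[0]:+.3f}, {p[1]:+.3f}, {p[2]:.1f})")
```

Tracking a single image across successive translations traces out the helix described above; reversing the sign of q produces the mirror-image helix.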
Line group
[ "Physics", "Mathematics" ]
837
[ "Functions and mappings", "Euclidean symmetries", "Mathematical objects", "Mathematical relations", "Symmetry" ]
32,187,203
https://en.wikipedia.org/wiki/Layer%20group
In mathematics, a layer group is a three-dimensional extension of a wallpaper group, with reflections in the third dimension. It is a space group with a two-dimensional lattice, meaning that it is symmetric over repeats in the two lattice directions. The symmetry group at each lattice point is an axial crystallographic point group with the main axis being perpendicular to the lattice plane. The 80 layer groups may be organized by crystal system or lattice type, and by their point groups. See also Point group Crystallographic point group Space group Rod group Frieze group Wallpaper group References External links Bilbao Crystallographic Server, under "Subperiodic Groups: Layer, Rod and Frieze Groups" Nomenclature, Symbols and Classification of the Subperiodic Groups, V. Kopsky and D. B. Litvin CVM 1.1: Vibrating Wallpaper by Frank Farris. He constructs layer groups from wallpaper groups using negating isometries. Euclidean symmetries Discrete groups
Layer group
[ "Physics", "Mathematics" ]
206
[ "Functions and mappings", "Euclidean symmetries", "Mathematical objects", "Mathematical relations", "Symmetry" ]
32,195,081
https://en.wikipedia.org/wiki/Q-exponential%20distribution
The q-exponential distribution is a probability distribution arising from the maximization of the Tsallis entropy under appropriate constraints, including constraining the domain to be positive. It is one example of a Tsallis distribution. The q-exponential is a generalization of the exponential distribution in the same way that Tsallis entropy is a generalization of standard Boltzmann–Gibbs entropy or Shannon entropy. The exponential distribution is recovered as q → 1. The functional form was originally proposed by the statisticians George Box and David Cox in 1964, and is known as the reverse Box–Cox transformation for a particular case of the power transform in statistics. Characterization Probability density function The q-exponential distribution has the probability density function f(x) = (2 − q) λ e_q(−λx), where e_q(x) = [1 + (1 − q)x]^(1/(1−q)) is the q-exponential if q ≠ 1. When q = 1, e_q(x) is just exp(x). Derivation In a similar procedure to how the exponential distribution can be derived (using the standard Boltzmann–Gibbs entropy or Shannon entropy and constraining the domain of the variable to be positive), the q-exponential distribution can be derived from a maximization of the Tsallis entropy subject to the appropriate constraints. Relationship to other distributions The q-exponential is a special case of the generalized Pareto distribution, with location μ = 0, shape ξ = (q − 1)/(2 − q), and scale σ = 1/(λ(2 − q)). The q-exponential is the generalization of the Lomax distribution (Pareto Type II), as it extends this distribution to the cases of finite support; the Lomax parameters are shape α = (2 − q)/(q − 1) and scale λ_Lomax = 1/(λ(q − 1)). As the Lomax distribution is a shifted version of the Pareto distribution, the q-exponential is a shifted reparameterized generalization of the Pareto. When q > 1, the q-exponential is equivalent to the Pareto shifted to have support starting at zero. Generating random deviates Random deviates can be drawn using inverse transform sampling. Given a variable U that is uniformly distributed on the interval (0,1), X = −q′ ln_{q′}(U) / λ is q-exponentially distributed, where ln_{q′} is the q-logarithm and q′ = 1/(2 − q). Applications Being a power transform, it is a common technique in statistics for stabilizing the variance, making the data more normal-distribution-like and improving the validity of measures of association such as the Pearson correlation between variables. It has been found to be an accurate model for train delays. It is also found in atomic physics and quantum optics, for example in processes of molecular condensate creation via transition through a Feshbach resonance. See also Constantino Tsallis Tsallis statistics Tsallis entropy Tsallis distribution q-copula q-Gaussian Notes Further reading Juniper, J. (2007) "The Tsallis Distribution and Generalised Entropy: Prospects for Future Research into Decision-Making under Uncertainty", Centre of Full Employment and Equity, The University of Newcastle, Australia External links Tsallis Statistics, Statistical Mechanics for Non-extensive Systems and Long-Range Interactions Statistical mechanics Continuous distributions Probability distributions with non-finite variance
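A minimal sketch of the inverse-transform sampler just described, with a sample-mean check against the mean 1/(λ(3 − 2q)), which is finite for q < 3/2. The parameter values are arbitrary.

```python
import math, random

# Minimal sketch implementing the inverse-transform sampler described above:
#   X = -q' * ln_{q'}(U) / lam,  with  q' = 1 / (2 - q),
# where ln_q is the q-logarithm,
#   ln_q(x) = (x**(1-q) - 1) / (1-q)   (ln_q -> ln as q -> 1).

def q_log(x: float, q: float) -> float:
    if abs(q - 1.0) < 1e-12:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_exponential_deviate(q: float, lam: float) -> float:
    qp = 1.0 / (2.0 - q)                   # q', valid for q < 2
    u = 1.0 - random.random()              # uniform on (0, 1], avoids u == 0
    return -qp * q_log(u, qp) / lam

random.seed(0)
q, lam = 1.2, 0.5
sample = [q_exponential_deviate(q, lam) for _ in range(200_000)]
print(f"sample mean: {sum(sample) / len(sample):.3f}")
print(f"theoretical mean 1/(lam*(3-2q)) = {1 / (lam * (3 - 2 * q)):.3f}")
```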
Q-exponential distribution
[ "Physics" ]
572
[ "Statistical mechanics" ]
32,195,422
https://en.wikipedia.org/wiki/STEP%20Study
The STEP Study was a Phase IIb clinical trial intended to study the efficacy of an experimental HIV vaccine based on a human adenovirus 5 (HAdV-5) vector. The study was conducted in North and South America, the Caribbean, and Australia. A related study (the "Phambili trial") using the same experimental vaccine was conducted simultaneously in South Africa. These trials were co-sponsored by Merck, the HIV Vaccine Trials Network (HVTN), and the National Institute of Allergy and Infectious Diseases (NIAID), and had an Oversight Committee consisting of representatives from these three organizations. In South Africa the trial was overseen by the South African AIDS Vaccine Initiative. These trials were terminated before their scheduled conclusion, when the Data Safety Monitoring Board determined that the vaccine was not preventing HIV infection, and was possibly enhancing susceptibility to HIV infection in some of the study participants. Design The study was a multicenter, double-blinded, randomized, placebo-controlled phase II proof-of-concept trial which involved administering an experimental vaccine (the MRKAd5 HIV-1 Gag/Pol/Nef trivalent vaccine) to nearly 3,000 healthy HIV-negative (uninfected) volunteers. Enrollment began in North and South America, the Caribbean and Australia in December 2004, and was completed in March 2007. Enrollment in the South African arm of the trial began in early 2007 and ended in September 2007. Candidates for enrollment into the study were men and women identified as at high risk of acquiring HIV infection but who were currently HIV-negative. The vaccine contained three separate replication-defective vectors based on Human Adenovirus C serotype 5 (HAdV-5). Each of the three vectors expressed a single gene encoding a protein from the HIV virus: gag, pol, or nef. It was hoped that the adenovirus vectors would carry these HIV-1 genes into the cell, and that this would result in the development of a cell-mediated immune response that would confer a degree of immunity to the HIV virus. Findings 24 of the 741 men in the vaccine group and 21 of the 762 men in the placebo group had tested HIV-positive. The protocol anticipated that the group which had received the vaccine would have an infection rate lower than or equal to that of the control group, but this was not seen. In fact, certain groups of vaccine recipients were seen to have a higher risk of HIV infection than the placebo group. While almost everyone enrolled in the STEP study had received the full course of the vaccine when the vaccination cessation was announced, no one in Phambili, the African trial, had been entirely vaccinated. Response On September 21, 2007, sponsors of the STEP study announced that further vaccination would cease and that vaccination in the Phambili Trial would be paused pending review. On October 23, 2007, the sponsors announced that the Phambili Trial would stop further immunizations. By November 2007, all participants were unblinded when researchers informed them whether they had received the vaccine or placebo. Alan Aderem of Seattle Biomed stated that "the experimental inoculation... actually increased the chances that some people would later acquire HIV." In May 2012, The New York Times reported that a study confirmed that the vaccine given to volunteers in the STEP Study made them more likely, not less, to become infected with HIV.
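The raw infection-rate arithmetic behind the interim figures quoted above can be sketched directly. This is illustrative only; the trial's formal analysis used survival methods and subgroup adjustments rather than this simple ratio.

```python
# Simple arithmetic on the interim figures quoted above (illustrative only;
# the trial's formal statistical analysis was considerably more involved).

vaccine_pos, vaccine_n = 24, 741
placebo_pos, placebo_n = 21, 762

rate_v = vaccine_pos / vaccine_n
rate_p = placebo_pos / placebo_n
print(f"vaccine group:  {rate_v:.2%} infected")
print(f"placebo group:  {rate_p:.2%} infected")
print(f"crude relative risk (vaccine/placebo): {rate_v / rate_p:.2f}")
```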
References External links information for trial enrollment summary of results question and answer about trial end layman's interpretation of trial results HIV vaccine research Clinical trials related to HIV
STEP Study
[ "Chemistry" ]
744
[ "HIV vaccine research", "Drug discovery" ]
32,196,401
https://en.wikipedia.org/wiki/Integer%20broom%20topology
In general topology, a branch of mathematics, the integer broom topology is an example of a topology on the so-called integer broom space X. Definition of the integer broom space The integer broom space X is a subset of the plane R2. Assume that the plane is parametrised by polar coordinates. The integer broom contains the origin and the points (n, θ) such that n is a non-negative integer and θ ∈ {1/k : k ∈ Z+}, where Z+ is the set of positive integers. Geometrically, the space consists of a collection of convergent sequences. For a fixed n, we have a sequence of points, lying on the circle with centre (0, 0) and radius n, that converges to the point (n, 0). Definition of the integer broom topology We define the topology on X by means of a product topology. In polar coordinates the integer broom space is given by X = U × V, where U = {n ∈ Z : n ≥ 0} and V = {0} ∪ {1/k : k ∈ Z+}; let us write X = U × V for simplicity. The integer broom topology on X is the product topology induced by giving U the right order topology, and V the subspace topology from R. Properties The integer broom space, together with the integer broom topology, is a compact topological space. It is a T0 space, but it is neither a T1 space nor a Hausdorff space. The space is path connected, but it is neither locally connected nor arc connected. See also Comb space Infinite broom List of topologies References General topology Topological spaces
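A compact restatement of the construction, as a LaTeX sketch (the notation U, V follows the prose above; the display itself is added here for clarity, not taken from the article):

```latex
% Integer broom space and its topology; notation as in the surrounding text.
\[
  X = U \times V, \qquad
  U = \{\, n \in \mathbb{Z} : n \ge 0 \,\}, \qquad
  V = \{0\} \cup \left\{ \tfrac{1}{k} : k \in \mathbb{Z}^{+} \right\},
\]
\[
  \tau_X = (\text{right order topology on } U) \times (\text{subspace topology on } V \subset \mathbb{R}).
\]
```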
Integer broom topology
[ "Mathematics" ]
291
[ "General topology", "Mathematical structures", "Space (mathematics)", "Topological spaces", "Topology" ]
32,197,567
https://en.wikipedia.org/wiki/Hecke%20algebra%20of%20a%20pair
In mathematics, the Hecke algebra of a pair (G, K) of locally compact or reductive Lie groups is an algebra of measures under convolution. It can also be defined for a pair (g, K) of a maximal compact subgroup K of a Lie group with Lie algebra g, in which case the Hecke algebra is an algebra with an approximate identity, whose approximately unital modules are the same as K-finite representations of the pairs (g, K). The Hecke algebra of a pair is a generalization of the classical Hecke algebra studied by Erich Hecke, which corresponds to the case (GL2(Q), GL2(Z)). Locally compact groups Let (G, K) be a pair consisting of a unimodular locally compact topological group G and a closed subgroup K of G. Then the space of bi-K-invariant continuous functions of compact support, Cc(K\G/K), can be endowed with a structure of an associative algebra under the operation of convolution. This algebra is often denoted H(G//K) and called the Hecke algebra of the pair (G, K). Properties If (G, K) is a Gelfand pair then the Hecke algebra turns out to be commutative. Reductive Lie groups and Lie algebras In 1979, Daniel Flath gave a similar construction for general reductive Lie groups G. The Hecke algebra of a pair (g, K) of a Lie algebra g with Lie group G and maximal compact subgroup K is the algebra of K-finite distributions on G with support in K, with the product given by convolution. Examples Finite groups When G is a finite group and K is any subgroup of G, the Hecke algebra is spanned by the double cosets of K\G/K. SL(n) over a p-adic field For the special linear group over the p-adic numbers, G = SLn(Qp) and K = SLn(Zp), the representations of the corresponding commutative Hecke ring were studied by Ian G. Macdonald. GL(2) over the rationals For the general linear group over the rational numbers, G = GL2(Q) and K = GL2(Z), the Hecke algebra of the pair (G, K) is the classical Hecke algebra, which is the commutative ring of Hecke operators in the theory of modular forms. Iwahori The case leading to the Iwahori–Hecke algebra of a finite Weyl group is when G is the finite Chevalley group over a finite field with pk elements, and B is its Borel subgroup. Iwahori showed that the Hecke ring H(G//B) is obtained from the generic Hecke algebra Hq of the Weyl group W of G by specializing the indeterminate q of the latter algebra to pk, the cardinality of the finite field. George Lusztig remarked in 1984 that it might be more appropriate to call these Iwahori algebras, though the name Hecke ring given by Iwahori himself had by then become established. Iwahori and Matsumoto (1965) considered the case when G is a group of points of a reductive algebraic group over a non-archimedean local field F, such as Qp, and K is what is now called an Iwahori subgroup of G. The resulting Hecke ring is isomorphic to the Hecke algebra of the affine Weyl group of G, or the affine Hecke algebra, where the indeterminate q has been specialized to the cardinality of the residue field of F. Notes References Representation theory
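The convolution product that makes Cc(K\G/K) an associative algebra can be written down explicitly. The following LaTeX display is a standard formula stated here as a sketch (it assumes, as above, that G is unimodular with Haar measure dh):

```latex
% Convolution of bi-K-invariant, compactly supported continuous functions.
\[
  (f_1 * f_2)(g) = \int_{G} f_1(h)\, f_2(h^{-1}g)\, \mathrm{d}h,
  \qquad f_1, f_2 \in C_c(K \backslash G / K).
\]
```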
Hecke algebra of a pair
[ "Mathematics" ]
756
[ "Representation theory", "Fields of abstract algebra" ]
40,601,541
https://en.wikipedia.org/wiki/Bateman%20equation
In nuclear physics, the Bateman equation is a mathematical model describing abundances and activities in a decay chain as a function of time, based on the decay rates and initial abundances. The model was formulated by Ernest Rutherford in 1905 and the analytical solution was provided by Harry Bateman in 1910. If, at time t, there are N_i(t) atoms of isotope i that decays into isotope i+1 at the rate λ_i, the amounts of isotopes in the k-step decay chain evolve as dN_1/dt = −λ_1 N_1 and dN_i/dt = λ_{i−1} N_{i−1} − λ_i N_i for i = 2, …, k (this can be adapted to handle decay branches). While this can be solved explicitly for i = 2, the formulas quickly become cumbersome for longer chains. The Bateman equation is a classical master equation where the transition rates are only allowed from one species (i) to the next (i+1) but never in the reverse sense (i+1 to i is forbidden). Bateman found a general explicit formula for the amounts by taking the Laplace transform of the variables: N_n(t) = N_1(0) (∏_{i=1}^{n−1} λ_i) Σ_{i=1}^{n} e^{−λ_i t} / ∏_{j=1, j≠i}^{n} (λ_j − λ_i) (it can also be expanded with source terms, if more atoms of isotope i are provided externally at a constant rate). While the Bateman formula can be implemented in a computer code, if λ_i ≈ λ_j for some isotope pair, catastrophic cancellation can lead to computational errors. Therefore, other methods such as numerical integration or the matrix exponential method are also in use. For example, for the simple case of a chain of three isotopes the Bateman formula reduces to N_3(t) = N_1(0) λ_1 λ_2 [e^{−λ_1 t}/((λ_2 − λ_1)(λ_3 − λ_1)) + e^{−λ_2 t}/((λ_1 − λ_2)(λ_3 − λ_2)) + e^{−λ_3 t}/((λ_1 − λ_3)(λ_2 − λ_3))], which gives the corresponding activity of isotope 3 by substituting A_3 = λ_3 N_3. See also Harry Bateman List of equations in nuclear and particle physics Transient equilibrium Secular equilibrium Pharmacokinetics, loose applicability References Nuclear history of the United Kingdom Ordinary differential equations Radioactivity
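As a concrete illustration of the explicit solution, and of the cancellation issue noted above, here is a minimal Python sketch (the function name and the example decay constants are illustrative, not from the article):

```python
import math

def bateman(t, lambdas, n1_0=1.0):
    """Amount of the last isotope in a linear decay chain at time t.

    Implements N_n(t) = N_1(0) * prod_{i<n} lambda_i
               * sum_i exp(-lambda_i * t) / prod_{j != i} (lambda_j - lambda_i).
    If two decay constants are nearly equal, the terms become large and
    cancel (catastrophic cancellation); numerical integration or a matrix
    exponential is preferable in that regime, as the article notes.
    """
    n = len(lambdas)
    prefactor = n1_0 * math.prod(lambdas[:-1])
    total = 0.0
    for i in range(n):
        denom = math.prod(lambdas[j] - lambdas[i] for j in range(n) if j != i)
        total += math.exp(-lambdas[i] * t) / denom
    return prefactor * total

# Three-isotope chain with hypothetical decay constants (1/s):
print(bateman(10.0, [0.5, 0.2, 0.05]))
```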
Bateman equation
[ "Physics", "Chemistry" ]
342
[ "Radioactivity", "Nuclear physics" ]
40,601,687
https://en.wikipedia.org/wiki/Border%20barrier
A border barrier, border fence or border wall is a separation barrier that runs along or near an international border. Such barriers are typically constructed for border control purposes such as curbing illegal immigration, human trafficking, and smuggling. Some such barriers are constructed for defence or security reasons. In cases of a disputed or unclear border, erecting a barrier can serve as a de facto unilateral consolidation of a territorial claim that can supersede formal delimitation. A border barrier does not usually indicate the location of the actual border, and is usually constructed unilaterally by a country, without the agreement or cooperation of the other country. Examples of border walls include the ancient Great Wall of China, a series of walls separating China from nomadic empires to the north. The construction of border barriers increased in the early 2000s; half of all the border barriers built since the end of World War II in 1945 were erected after 2000. List of current barriers Border barriers in history Antiquity Antonine Wall (begun in AD 142 by the Roman province in Britain) Anastasian Wall (built from AD 469, west of Istanbul, Turkey) Great Wall of China (parts were built as early as the 7th century BC by the Qi dynasty in China) Great Wall of Gorgan (built in the 5th or 6th century AD) Hadrian's Wall (begun in AD 122) Madukkarai Wall (may have been built as early as the 1st century AD in India) Southern Great Wall, the southern counterpart of the Great Wall, erected to protect and divide the Chinese from the "southern barbarians" called Miao (meaning barbaric and nomadic) Middle Ages Cheolli Jangseong The fortifications, castles and border walls between the Emirate of Granada and the Crown of Castile, later Spain, between 1238 and 2 January 1492, which then continued as the internal border of the Kingdom of Granada within the Crown of Castile, then Spain, from 2 January 1492 to 29 September 1833. A large part of the fortifications, castles and walls is currently in good condition. Asilah: King Afonso V of Portugal built walls on the outskirts of the city of Asilah, serving as a border between the Kingdom of Portugal and the Marinid dynasty (1471–1472), the Saadi Sultanate (1472–1550), and the Wattasid dynasty (1577–1589); near the sea there are still cannons next to a Portuguese square tower. The wall is currently in good condition. Macau: the Ming dynasty (in present-day China) built a wall around Macau in 1570. Early modern period Great Hedge of India, built by the British in 1803 The border forts between the Captaincy General of Chile, later Chile, and the Mapuche territory, which served as the border between the two from 1598 to 1883; it was delimited mainly by the Biobío river, and its route passed through the Province of Biobío, the Region of Biobío and the Region of Araucanía. Zanja de Alsina, built in the 1870s along the southern frontier of Argentina The points, forts and border barriers between French Cochinchina and the Nguyễn dynasty, which functioned between 1862 and 1887. Defunct barriers in modern times The posts and barriers between the Kiautschou Bay Leased Territory and the Qing dynasty, later China, between 1898 and 1914, which later served as the border between the Empire of Japan and China between November 7, 1914, and December 10, 1922, and were reoccupied in the Second Sino-Japanese War between 1937 and 1945. 
The posts and border barriers between the Leased Territory of Guangzhouwan and the Qing dynasty, later China, which operated between 1898 and 1945. The Czechoslovak border fortifications and fortified defensive lines built between 1935 and 1938 as a defensive countermeasure against the rising threat of Nazi Germany. The Maginot Line, built between 1929 and 1938 by France on the French–German border as a defensive structure. The border posts and barriers between the Empire of Japan (through Karafuto Prefecture) and the Russian Empire, later the Russian SFSR, later the USSR (through Sakhalin Oblast), which operated between 5 September 1907 and 25 August 1945. The border posts, forts and walls between Russian Dalian (Russian Empire) and the Qing dynasty between 27 March 1898 and 5 September 1905, and between the Kwantung Leased Territory (Empire of Japan) and the Qing dynasty (5 September 1905 – 10 October 1911), then the Republic of China (10 October 1911 – 1932), then Manchukuo (1932 – 14 August 1945), and finally the USSR during the Soviet occupation of Manchuria between August 14, 1945, and May 3, 1946, when the territory was returned to the Republic of China. The posts, forts and border barriers that operated between British Burma (1824–1858, 1947–1948), later the British Raj through the province of British Burma, and French Indochina, the Kingdom of Siam and the Qing dynasty, later China. The frontier posts, forts and barriers that operated between the colony and overseas territory of French India and the Mughal Empire, later the Maratha Empire, later Company rule in India, later the British Raj, later India, which operated between 1664 and 1 November 1954. The posts, forts and border barriers between Finland and the former Soviet Union, on the Hanko peninsula (1940–1941) and on the Porkkala peninsula (1945–1956). The frontier posts, forts and barriers that operated between the colony and overseas province of Portuguese India and the Mughal Empire, later the Bijapur Sultanate, later the Maratha Empire, later Company rule in India, later the British Raj, later India, which functioned between 15 August 1505 and 19 December 1961. The coastal posts, forts and walls in Dutch New Guinea (27 December 1949 – 1 October 1962), then under the United Nations Provisional Executive Authority (1 October 1962 – 1 May 1963), which served to ward off the United States of Indonesia (27 December 1949 – 17 August 1950) and then Indonesia (17 August 1950 – 1 May 1963). The frontier posts, forts and barriers that operated between the colony and overseas province of Portuguese Timor (independent East Timor between November 1975 and 7 December 1975) and the Dutch East Indies (15 August 1702 – 27 December 1949), later the United States of Indonesia (27 December 1949 – 17 August 1950), later Indonesia (17 August 1950 – 7 December 1975), which functioned between 15 August 1702 and 7 December 1975; the border continues to function but crossing is currently free. The border posts, forts and barriers between the Panama Canal Zone (United States) and Panama, in operation from November 18, 1903, until their partial removal on October 1, 1979, and their complete removal on July 1, 1999. The former Soviet Union had a security barrier (see С-175 "curtain") along its entire border from Norway and Finland to North Korea and China. The barrier also existed along the Soviet Union's direct borders with its allies, e.g. Poland. In Europe, the long direct border between the Soviet Union and Finland/Norway was of particular importance during the Cold War. 
Along the Finnish border, however, the barrier was less heavily guarded, since Finland agreed to send back any Soviet citizens who escaped across it. The fence was located a few kilometres from the border, and still partly remains. Russian law still forbids crossing the border outside of a border station. Iron Curtain in Europe and Asia: apart from the direct border between the former Soviet Union and Norway/Finland, this former barrier includes: Berlin Wall Inner German border Vietnamese Demilitarized Zone German–Czech border Inner Yemen border Other Danevirke Gates of Alexander Götavirke Limes Germanicus Limes Saxoniae Offa's Dyke Willow Palisade Zasechnaya cherta See also Buffer zone Canada–United States border Canal Citadel Defense line Peace lines Demilitarised zone Defensive walls List of cities with defensive walls List of fortifications List of walls Second Amendment sanctuary Fences in Rio de Janeiro References External links Borders Fences Fortifications by type Types of wall Human migration Physical security Political geography
Border barrier
[ "Physics", "Engineering" ]
1,624
[ "Structural engineering", "Separation barriers", "Types of wall", "Space", "Spacetime", "Borders", "Border barriers" ]
40,607,155
https://en.wikipedia.org/wiki/BCFW%20recursion
The Britto–Cachazo–Feng–Witten recursion relations are a set of on-shell recursion relations in quantum field theory. They are named for their creators, Ruth Britto, Freddy Cachazo, Bo Feng and Edward Witten. The BCFW recursion method is a way of calculating scattering amplitudes. This technique is widely used in analytic calculations due to the relative conciseness of the resulting expressions, when compared to the more traditional methods. The principal property of the BCFW recursion is that at every stage of the calculation it involves exclusively real (on-shell) particles, as opposed to the virtual (off-shell) particles that propagate inside conventional Feynman diagrams. See also MHV amplitudes References Quantum field theory Scattering theory
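Schematically, the recursion expresses an n-point tree amplitude as a sum over factorization channels of products of lower-point amplitudes evaluated at complex-shifted momenta. The LaTeX display below is a standard schematic form of the relation, added here for orientation rather than taken from the article:

```latex
% Schematic BCFW recursion: hatted amplitudes are evaluated at the shifted
% (complex) momenta z_i that put the intermediate propagator P_i on shell.
\[
  A_n = \sum_{\text{channels } i}
        \hat{A}_{L}(z_i)\,\frac{1}{P_i^{2}}\,\hat{A}_{R}(z_i)
\]
```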
BCFW recursion
[ "Physics", "Chemistry" ]
167
[ "Quantum field theory", "Scattering theory", "Scattering stubs", "Quantum mechanics", "Scattering" ]
42,025,871
https://en.wikipedia.org/wiki/Dimetcote
Dimetcote is commonly used for steel corrosion resistance. It is generally reliable under humid or corrosive conditions. Because of this, Dimetcote is widely used in ships, power generation facilities, and marine, oil, and offshore structures. History The Dimetcote patent was approved in 1948 by the U.S. Patent Office. The owner of the patent is PPG Industries. Dimetcote, which was created to protect metal surfaces, can be coloured by mixing it with other paints. Types There are several kinds of Dimetcote, designed for different working environments. Dimetcote 21-5 Dimetcote 3a Dimetcote 9H Dimetcote 9 Dimetcote 11 Dimetcote 302H Dimetcote 4 Use Marine Dimetcote is popular in the marine industry. The inorganic zinc coating of Dimetcote can protect metal components from moisture. Construction Dimetcote is widely used to protect construction steel from corrosion. Application equipment Dimetcote should be applied with specific spray equipment. Here is a list of several suitable items of equipment typically used by manufacturers. Airless spray Airless spray equipment used for Dimetcote should have a fluid tip with an orifice no smaller than 0.019 inch (0.48 mm), and a pump ratio of at least 28:1. Some standard airless sprays such as Spee-Flo, Graco, Nordson-Bede, and DeVilbiss meet these requirements. Conventional spray Some industrial-level sprays (with teflon or leather needle packing, a variable speed agitator in the pressure pot, and separate air and fluid pressure regulators) can also be used for Dimetcote. Mixer A powerful mixer is required for Dimetcote. To meet the high pressure requirement, the mixer should be powered by an explosion-proof electric motor or an air motor. Workers can attain optimal spray characteristics by adjusting the tip size or pressure of the spray. See also Corrosion PPG Industries References External links Corrosion PPG Industries
Dimetcote
[ "Chemistry", "Materials_science" ]
405
[ "Metallurgy", "Corrosion", "Electrochemistry", "Electrochemistry stubs", "Materials degradation", "Physical chemistry stubs" ]
42,027,929
https://en.wikipedia.org/wiki/Nokia%20Lumia%20Icon
The Nokia Lumia Icon (originally known as the Lumia 929) is a high-end smartphone developed by Nokia that runs Microsoft's Windows Phone 8 operating system. It was announced on February 12, 2014, and released on Verizon Wireless in the United States on February 20, 2014. It is currently exclusive to Verizon and the U.S. market; its international counterpart is the Nokia Lumia 930. On February 11, 2015, Verizon released the Windows Phone 8.1 operating system and Lumia Denim firmware update for the Icon. On June 23, 2016, Verizon released the Windows 10 Mobile operating system update for the Icon. Primary features The primary features of the Lumia Icon are: 5-inch 1920x1080 AMOLED 441 PPI touchscreen display Qualcomm Snapdragon 800 processor 2GB of LPDDR3 RAM 20 MP PureView camera with Carl Zeiss optics and pixel oversampling Optical image stabilization 2160p (4K UHD) video recording at 30fps Quad microphones with noise reduction Wireless AC Wi-Fi 4G LTE support Microsoft Cortana voice assistant with "Hey Cortana" voice activation (with the Lumia Denim update) Availability The phone was released for sale exclusively through Verizon in the United States for $199.99 with a 2-year contract or $549.99 with no contract. The Lumia Icon has almost identical internal specifications to the larger Nokia Lumia 1520, the primary difference being its smaller 5-inch screen compared with the Lumia 1520's 6 inches. The Nokia Lumia 930, released in April 2014, is nearly identical to the Icon in both appearance and specifications. However, the 930 uses GSM radios and comes with Windows Phone 8.1 and the Cyan firmware, and is the worldwide variant of the Icon. While the 930 has since been updated to Denim (which contains the Windows Phone 8.1 Update), Verizon previously faced criticism for not releasing the Cyan update for the Icon. Now that Verizon Wireless has updated the Icon directly to Denim, skipping Cyan, the OS and firmware distinctions have largely been eliminated. Naming While in development, the Nokia Lumia Icon was known by its model number. Early development screenshots and prototype accessories referred to the phone as the Lumia 929. This was in keeping with Nokia's previous branding practice of assigning a corresponding number to the place where the phone would sit in Nokia's lineup, with higher numbers indicating higher-end models and lower numbers indicating lower-end products. Upon release, the phone kept the model number 929, but was the first Lumia to utilize a name other than its model number for branding. Reception The Lumia Icon received fairly positive reviews, with some reviewers calling it the best Windows Phone released, praising the phone's camera quality, display, and overall speed, but criticizing its being locked to one carrier and the camera's slow transition time between taking photographs. Reviewers were split on the design of the phone, with some praising its metal build quality as solid and premium, and others criticizing it for being too utilitarian and conservative. Brad Molen of Engadget called the Lumia Icon "the solid high-end Windows Phone that we've wanted for a long time. It has an amazing display, great performance and solid imaging capability, but its exclusivity to Verizon will severely limit its appeal." and Mark Hachman of PCWorld said "If you’re an app fiend, you’d still be better off buying an iPhone or Android phone, which dependably receive third-party apps. But the Icon and Lumia 1520 are clearly the best Windows Phones on the market. 
Deciding between them simply depends on which size you prefer." Christina Bonnington of Wired said that the best Windows Phone ever still disappoints, citing poor call quality as one of the drawbacks, but praised the solid build quality, the inclusion of wireless charging, and the powerful processor. See also Microsoft Lumia Nokia Lumia 1520 Nokia Lumia 930 References Microsoft Lumia Nokia smartphones Mobile phones introduced in 2014 Discontinued flagship smartphones Windows Phone devices PureView
Nokia Lumia Icon
[ "Technology" ]
876
[ "Discontinued flagship smartphones", "Flagship smartphones" ]
49,545,883
https://en.wikipedia.org/wiki/Synthetic%20immunology
Synthetic immunology is the rational design and construction of synthetic systems that perform complex immunological functions. Such functions include using specific cell markers to target cells for destruction and/or interfering with immune reactions. US Food and Drug Administration (FDA)-approved immune system modulators include anti-inflammatory and immunosuppressive agents, vaccines, therapeutic antibodies and Toll-like receptor (TLR) agonists. History The discipline emerged after 2010 following the development of genome editing technology including TALENs and CRISPR. In 2015, one project created T cells that became active only in the presence of a specific drug, allowing them to be turned on and off in situ. Another example is a T cell that targets only cells that display two separate markers. In 2016, John Lin, head of Pfizer's San Francisco biotech unit, stated, “the immune system will be the most convenient vehicle for [engineered human cells], because they can move and migrate and play such important roles.” Advances in systems biology support high-dimensional quantitative analysis of immune responses. Techniques include viral gene delivery, inducible gene expression, RNA-guided genome editing, and site-specific recombinases for applications related to biotechnology and cellular immunotherapy. Types Immunity-modulating organisms Researchers are exploring the creation of 'smart' organisms such as bacteriophages and bacteria that can perform complex immunological tasks. Such strategies could produce organisms that perform multistep immune functions such as presenting antigen to and co-stimulating helper T cells in a specific manner, or providing integrated signals to B cells to induce affinity maturation and isotype switching during antibody production. Such engineered organisms have the potential to be as safe and as inexpensive as probiotics, yet precise in carrying out targeted interventions. Antibody-recruiting small molecules Antibody therapeutics and other 'biologics' have proven to be effective in treating diseases from rheumatoid arthritis to cancer. However, such agents can cause unwanted anaphylactic or inflammatory reactions, are administered by injection, and are expensive. Small molecules, in contrast, are generally inexpensive to produce, orally bioavailable, and rarely allergenic. Synthetic antibody-recruiting small molecules have been created that redirect natural antibodies to pathogens for destruction. Transdifferentiated cells Deletion of a single transcription factor enables mature B cells to transform into T cells via dedifferentiation and redifferentiation. Technologies that can control cell fate include strategies to induce pluripotent stem cell formation and using small molecules to induce stem cells to differentiate into specific cell types. Dedifferentiation could be used to turn autoimmune cells into inactive progenitors or to suppress rejection of transplanted organs. In 2016 researchers transdifferentiated fibroblasts into induced neural stem cells. The team mixed the cells into an FDA-approved surgical glue that provided a physical support matrix. They administered the result to mice. Survival times increased from 160 to 220 percent, depending on the type of tumor. Vaccines Therapeutic vaccines treat and immunize patients already infected with a given disease. Provenge is an adoptive cell-transfer therapy in which a patient's antigen-presenting cells are primed to target autologous prostate cancer tissue. 
Advances in chemical biology include synthetic molecules that modulate B cell activation, the synthesis of structurally complex carbohydrate tumor antigens and adjuvants, immunogenic chemotherapeutic agents, and chemically homogeneous synthetic vaccines. See also References External links Synthetic biology Immunology
Synthetic immunology
[ "Engineering", "Biology" ]
721
[ "Synthetic biology", "Biological engineering", "Bioinformatics", "Immunology", "Molecular genetics" ]
49,549,493
https://en.wikipedia.org/wiki/Pepsi%20Spire
Pepsi Spire is a touch-screen soda fountain introduced by PepsiCo in 2014. The Spire's main competitor is the Coca-Cola Freestyle. Currently, Spire is available to retailers in two models, 2.0 and 5.0. It was designed by the Japanese machinery company Mitsubishi Heavy Industries. History The Spire was first unveiled at the National Restaurant Association trade show in May 2014. Choices The Spire has up to eight flavor options, depending on retailer selection: Cherry, Vanilla, Strawberry, Lemon, Raspberry, Lime, Grape and Peach. Pepsi also markets their major brands for the machine, including: Pepsi Diet Pepsi Pepsi Zero Sugar (Pepsi Max outside the United States and Canada) Sierra Mist/Starry (7 Up outside the United States) Crush (Tango outside of the United States and Canada) Gatorade G2 Fruit Punch Dole Kiwi Cocktail Tropicana Juices & Lemonade (Brisk Lemonade outside the United States) Mountain Dew Diet Mountain Dew Dr Pepper Diet Dr Pepper Mug Root Beer Diet Mug Root Beer Schweppes Brisk Iced Tea brands Manzanita Sol SoBe Lifewater Crush, Gatorade, Dole Kiwi Cocktail, all Dr Pepper brands, and Schweppes are not available at Pepsi Spire soda fountains in the United States. Locations Countries that have Pepsi Spire soda fountains include the United States, Canada, Switzerland, Ireland and Iran. There are 3,148 locations in the United States, 430 in Canada, 1 in Switzerland, and 1 in Iran. It is served in many Subway locations in various countries (in the United States, most locations under contract with Coca-Cola do not use it). It is also at Shippensburg University of Pennsylvania, Muhlenberg College, University of Wisconsin–Oshkosh, Northern Arizona University, DePaul University, New Jersey Institute of Technology, Virginia Commonwealth University, and Hersheypark. See also Coca-Cola Freestyle References External links Commercial machines PepsiCo Soft drinks Vending machines 2010s in food
Pepsi Spire
[ "Physics", "Technology", "Engineering" ]
407
[ "Machines", "Commercial machines", "Vending machines", "Automation", "Physical systems" ]
49,552,891
https://en.wikipedia.org/wiki/Wax%20emulsion
Wax emulsions are stable mixtures of one or more waxes in water. Waxes and water are normally immiscible but can be brought together stably by the use of surfactants and a carefully designed preparation process. Strictly speaking, a wax emulsion should be called a wax dispersion, since the wax is solid at room temperature. However, because the preparation takes place above the melting point of the wax, the actual process is called emulsification, hence the name wax emulsion. In practice, the term wax dispersion is used for solvent-based systems. A wide range of emulsions based on different waxes and blends thereof are available, depending on the final application. Waxes that are found in wax emulsions can be of natural or synthetic origin. Common non-fossil natural waxes are carnauba wax, beeswax, candelilla wax and rice bran wax. Paraffin, microcrystalline and montan wax are the most used fossil natural waxes found in emulsions. Synthetic waxes that are used include (oxidised) LDPE and HDPE, maleic anhydride grafted polypropylene and Fischer–Tropsch waxes. A range of different emulsifiers or surfactants are used to emulsify waxes. These can be anionic, cationic or non-ionic in nature. The most common, however, are fatty alcohol ethoxylates as non-ionic surfactants, due to their superb stability against hard water, pH shock and electrolytes. Some applications demand different emulsifier systems, for example anionic surfactants for better hydrophobicity or cationic surfactants for better adhesion to certain materials like textile fibers. Applications Wax emulsions are widely used in a variety of technical applications like printing inks and lacquers, leather and textiles, paper, wood, metal, polishes, glass fiber sizing, and glass bottle protection, among other things. The most important properties that can be improved by the addition of wax emulsions are matting and gloss, hydrophobicity, soft touch, abrasion and rub resistance, scratch resistance, release, corrosion protection and anti-blocking. Emulsions based on natural waxes are used for coating fruits and candies and for crop protection. Synthetic wax based emulsions are often used in food packaging. Wax emulsions based on beeswax, carnauba wax and paraffin wax are used in creams and ointments. The emergence of soybean waxes with varying properties and melt points has led to the use of vegetable wax emulsions in applications such as paper coatings, paint and ink additives, and even wet sizing for pulp and paper applications. These wax emulsions can be formulated to deliver some of the same properties that petroleum-based wax emulsions deliver, but offer the advantages of being green products with more consistent availability. References Colloids Waxes
Wax emulsion
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
621
[ "Materials science stubs", "Materials science", "Colloids", "Materials", "Chemical mixtures", "Condensed matter physics", "Matter", "Waxes" ]
50,677,472
https://en.wikipedia.org/wiki/Karlsruhe%20Accurate%20Arithmetic
Karlsruhe Accurate Arithmetic (KAA), or the Karlsruhe Accurate Arithmetic Approach (KAAA), augments conventional floating-point arithmetic with new operations that compute scalar products with a single rounding error, giving well-defined error behaviour. The foundations for KAA were developed at the University of Karlsruhe starting in the late 1960s. See also Ulrich W. Kulisch References Further reading Computer arithmetic Numerical analysis
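KAA itself specifies dedicated hardware or library operations, but the idea of a correctly rounded accumulation can be sketched in ordinary Python: math.fsum returns the correctly rounded sum of its inputs, so a dot product accumulated with it suffers no intermediate summation error. Note the caveat in the comments — each individual product is still rounded once, so this only approximates a true single-rounding scalar product; the exact-rational version shows the single-rounding ideal. Both helper names are invented for illustration, not part of KAA:

```python
import math
from fractions import Fraction

def dot_fsum(xs, ys):
    """Dot product whose summation is correctly rounded (via math.fsum).

    Each product x*y is still rounded individually, so this only
    approximates KAA's scalar product with a single rounding error.
    """
    return math.fsum(x * y for x, y in zip(xs, ys))

def dot_exact(xs, ys):
    """Reference: exact rational arithmetic with one final rounding."""
    total = sum(Fraction(x) * Fraction(y) for x, y in zip(xs, ys))
    return float(total)  # the single rounding to the nearest double

xs = [1e16, 1.0, -1e16]
ys = [1.0, 1.0, 1.0]
print(sum(x * y for x, y in zip(xs, ys)))   # naive sum: prints 0.0 (cancellation)
print(dot_fsum(xs, ys), dot_exact(xs, ys))  # both print 1.0 here
```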
Karlsruhe Accurate Arithmetic
[ "Mathematics", "Technology" ]
78
[ "Computational mathematics", "Computer arithmetic", "Arithmetic", "Mathematical relations", "Numerical analysis", "Computing stubs", "Approximations" ]
43,492,426
https://en.wikipedia.org/wiki/Toroidal%20Fusion%20Core%20Experiment
The Toroidal Fusion Core Experiment (TFCX) was a US design study for a tokamak fusion experiment in the mid 1980s. It was intended to achieve ignition using long burns of over 100 seconds. It could have used superconducting coils to create a 10 tesla magnetic field. Designed for a confinement product nτ ~ 3×10^20 and ion energies of ~10–20 keV, it was never built. References Tokamaks
Toroidal Fusion Core Experiment
[ "Physics" ]
90
[ "Plasma physics stubs", "Nuclear and atomic physics stubs", "Plasma physics", "Nuclear physics" ]
43,492,779
https://en.wikipedia.org/wiki/Golden%20Gate%20Cloning
Golden Gate Cloning or Golden Gate assembly is a molecular cloning method that allows a researcher to simultaneously and directionally assemble multiple DNA fragments into a single piece using Type IIS restriction enzymes and T4 DNA ligase. This assembly is performed in vitro. The most commonly used Type IIS enzymes include BsaI, BsmBI, and BbsI. Unlike standard Type II restriction enzymes like EcoRI and BamHI, these enzymes cut DNA outside of their recognition sites and can therefore create non-palindromic overhangs. Since a four-base overhang admits 4^4 = 256 potential sequences, multiple fragments of DNA can be assembled by using combinations of overhang sequences. In practice, this means that Golden Gate Cloning is typically scarless. Additionally, because the final product does not have a Type IIS restriction enzyme recognition site, the correctly ligated product cannot be cut again by the restriction enzyme, meaning the reaction is essentially irreversible. This has multiple benefits. The first is that digestion and ligation of the DNA fragments can be done in a single reaction, in contrast to conventional cloning methods where these reactions are separate. The second is higher efficiency, because the end product cannot be cut again by the restriction enzyme. A typical thermal cycler protocol oscillates between 37 °C (optimal for restriction enzymes) and 16 °C (optimal for ligases) many times. While this technique can be used for a single insert, researchers have used Golden Gate Cloning to assemble many pieces of DNA simultaneously. Seamless cloning Scar sequences are common in multiple-segment DNA assembly. In the multisegment assembly method Gateway, segments are added into the donor with additional ATT sequences, which overlap in those added segments, and this results in the segments being separated by the ATT sequences. In BioBrick assembly, an eight-nucleotide scar sequence, which codes for a tyrosine and a stop codon, is left between every segment added into the plasmid. Golden Gate assembly uses Type IIS restriction enzymes cutting outside their recognition sequences. Moreover, the same Type IIS restriction enzyme can generate many distinct overhangs on the inserts and the vector; for instance, BsaI can create any of 256 four-base-pair overhangs. If the overhangs are carefully designed, the segments are ligated without scar sequences between them, and the final construct can be quasi-scarless, where the restriction enzyme sites remain on both sides of the insert. As additional segments can be inserted into the vectors without scars within an open reading frame, Golden Gate is widely used in protein engineering. Plasmid design Although Golden Gate Cloning speeds up multisegment cloning, careful design of donor and recipient plasmids is required. Scientists at New England Biolabs have successfully demonstrated the assembly of 35 fragments via a single-tube Golden Gate Assembly reaction. Critical to this method of assembly, the vector backbone of the destination plasmid and all the assembly fragments are flanked by Type IIS restriction enzyme recognition sites, as this subtype of restriction enzymes cuts downstream from their recognition sites. After cutting, each assembly-active piece of DNA has unique overhangs that anneal to the next fragment of DNA in the planned assembly and become ligated, building the assembly. 
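The overhang combinatorics described above can be illustrated with a small Python sketch. The helper names are invented for illustration, and real overhang design also weighs empirical ligation-fidelity data that this toy check ignores:

```python
from itertools import product

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

def all_overhangs(length=4):
    """All 4**length possible overhangs (256 for four-base overhangs)."""
    return ["".join(p) for p in product("ACGT", repeat=length)]

def is_palindromic(seq):
    """Palindromic overhangs can ligate to themselves -- avoided in designs."""
    return seq == revcomp(seq)

def compatible_set(overhangs):
    """Naive compatibility check for a candidate junction set:
    no palindromes, no duplicates, and no overhang that is the
    reverse complement of another (which would cross-ligate)."""
    seen = set()
    for oh in overhangs:
        if is_palindromic(oh) or oh in seen or revcomp(oh) in seen:
            return False
        seen.add(oh)
    return True

print(len(all_overhangs()))                      # 256
print(compatible_set(["AATG", "GCTT", "TTAC"]))  # True: usable junction set
print(compatible_set(["AATG", "CATT"]))          # False: CATT = revcomp(AATG)
```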
While it is also possible for an overhang to anneal back to its original complementary overhang associated with the upstream recognition site and become ligated, re-forming the original sequence, this re-formed product remains susceptible to further cutting throughout the assembly reaction. Cloning standards Restriction enzyme DNA assembly has cloning standards to minimize changes in cloning efficiency and in the function of the plasmid, which can be caused by incompatibility of the restriction sites on the insert with those on the vector. Golden Gate assembly's cloning standards have two tiers. First-tier Golden Gate assembly builds a single-gene construct by adding genetic elements such as a promoter, open reading frames, and terminators. Second-tier Golden Gate assembly then combines several constructs made in the first tier into a multigene construct. To achieve second-tier assembly, the modular cloning (MoClo) system and the GoldenBraid2.0 standard are used. MoClo system Modular Cloning, or MoClo, is an assembly method introduced in 2011 by Ernst Weber et al., whereby, using Type IIS restriction sites, the user can ligate at least six DNA parts together into a backbone in a one-pot reaction. It is a method based on Golden Gate assembly, where Type IIS restriction enzymes cleave to one side outside of their recognition site, allowing those restriction sites to be removed from the design. This helps prevent excess base pairs, or scars, from forming between DNA parts. However, in order for parts to ligate together properly, MoClo utilizes a set of 4-base-pair fusion sites, which remain between parts after ligation, forming 4-base-pair scars between DNA parts in the final DNA sequence following ligation of two or more parts. MoClo utilizes a parallel approach, where all constructs from tier one (level 0 modules) have restriction sites for BpiI on both sides of the inserts. The vector (also known as the "destination vector"), where genes will be added, has an outward-facing BsaI restriction site with a drop-out screening cassette. LacZ is a common screening cassette, which is replaced by the multigene construct on the destination vector. Each tier-one construct and the vector have different overhangs, each complementary to the overhang of the next segment, and this determines the layout of the final multigene construct. Golden Gate Cloning usually starts with level 0 modules. However, if the level 0 module is too large, cloning will start from level -1 fragments, which have to be sequenced, to help clone the large construct. If starting from level -1 fragments, the level 0 modules do not need to be sequenced again, whereas if starting from level 0 modules, the modules must be sequenced. Level 0 modules Level 0 modules are the base of the MoClo system; they contain genetic elements like a promoter, a 5' untranslated region (UTR), a coding sequence, and a terminator. For the purpose of Golden Gate Cloning, the internal sequences of level 0 modules should not contain Type IIS restriction enzyme sites for BsaI, BpiI, or Esp3I, while being surrounded by two BsaI restriction sites in inverted orientation. Level 0 modules without flanking Type IIS restriction sites can have the BsaI sites added during the process of Golden Gate Cloning. If a level 0 module contains any unwanted restriction site, it can be mutated in silico by removing one nucleotide from the Type IIS restriction site. 
In this process, one needs to make sure that the introduced mutation will not affect the genetic function encoded by the sequence of interest. A silent mutation in the coding sequence is preferred, for it neither changes the protein sequence nor the function of the gene of interest. Level -1 fragments Level -1 fragments are used to help clone large level 0 modules. To clone level -1 fragments, blunt-end cloning with restriction ligation can be used. The vector used in cloning level -1 fragments cannot contain the Type IIS restriction site BpiI that is used for the following assembly step. Moreover, the vector should also have a different selection marker from the destination vector in the next assembly step; for example, if spectinomycin resistance is used in level 0 modules, level -1 fragments should have another antibiotic resistance such as ampicillin. Level 1 constructs The level 1 destination vector determines the position and orientation of each gene in the final construct. There are fourteen available level 1 vectors, which differ only in the sequence of the flanking fusion sites while being identical in the internal fusion sites. Hence, all vectors can assemble the same level 0 parts. As all level 1 vectors are binary plasmids, they are used for Agrobacterium-mediated transient expression in plants. Level 2 constructs Level 2 vectors have two inverted BpiI sites for the insertion of level 1 modules. The upstream fusion site is compatible with a gene cloned in a level 1 vector while the downstream fusion site has a universal sequence. Each cloning step allows 2–6 genes to be inserted in the same vector. Adding more genes in one cloning step is not recommended, as this would result in incorrect constructs. On one hand, a step can introduce more restriction sites into the construct, where this open construct allows additional genes to be added. On the other hand, a step can also eliminate restriction sites, where this closed construct stops the further addition of genes. Therefore, constructs of more than six genes need successive cloning steps, which require end-linkers containing BsaI or BsmBI internal restriction sites and blue or purple markers. Each cloning step needs to alternate the restriction site and the marker. Furthermore, two restriction enzymes are needed: BpiI is used for releasing level 1 modules from level 1 constructs, and BsaI/BsmBI for digesting and opening the recipient level 2-n plasmid. When screening, the correct colonies should alternate between blue and purple at every cloning step, but if a "closed" end-linker is used, the colonies will be white. Level M constructs Level M vectors are similar to level 2 vectors, but have a BsaI site located upstream of the two inverted BpiI sites. When one or several genes are cloned in a level M vector, a second BsaI site is added at the end of the construct via a level M end-linker. This allows a fragment containing all assembled genes to be excised from the vector and subcloned in the next level of cloning (level P). Level P constructs Level P vectors are similar to level M constructs except that the BpiI sites are replaced by BsaI sites and the BsaI sites are replaced by BpiI sites. Several level M constructs with compatible fusion sites can be subcloned into a level P vector in one step. Theoretically, as many as 36 genes can be assembled in one construct using 6 parallel level M reactions (each assembling 6 genes per level M construct) followed by one final level P reaction. 
In practice, fewer genes are usually assembled, as most cloning projects do not require so many. The structure of level M and P vectors is designed in such a way that genes cloned in level P constructs can be further assembled in level M vectors. Repeated cloning in level M and P vectors forms a loop that can be repeated indefinitely to assemble progressively larger constructs. GoldenBraid In standard Golden Gate Cloning, the restriction sites from the previous tier's construct cannot be reused. To add more genes to the construct, restriction sites of a different Type IIS restriction enzyme need to be added to the destination vector. This can be done using either level 2 vectors or levels M and P. A variant version of levels M and P is also provided by GoldenBraid. GoldenBraid overcomes the problem of designing numerous destination vectors by having a double loop, the "braid," which allows binary assembly of multiple constructs. There are two levels of destination plasmids, level α and level Ω. Each level of plasmids can be used as entry plasmids for the other level repeatedly, because both levels of plasmids have different Type IIS restriction sites in inverted orientation. For counterselection, the two levels of plasmids differ in their antibiotic resistance markers. Golden mutagenesis The Golden Gate Cloning principle can also be applied to perform mutagenesis, termed Golden Mutagenesis. The technology is easy to implement, as a web tool is available for primer design (https://msbi.ipb-halle.de/GoldenMutagenesisWeb/) and the vectors are deposited at Addgene (http://www.addgene.org/browse/article/28196591/). Name The name Golden Gate assembly comes from a proposal by Yuri Gleba. It refers on the one hand to the Gateway technology, and on the other hand pictures the higher precision as a bridge seamlessly connecting the streets of two shores. One of the best-known such bridges is the Golden Gate Bridge in San Francisco. References Molecular biology
Golden Gate Cloning
[ "Chemistry", "Biology" ]
2,570
[ "Biochemistry", "Molecular biology" ]
43,495,360
https://en.wikipedia.org/wiki/Unified%20Diagnostic%20Services
Unified Diagnostic Services (UDS) is a diagnostic communication protocol used in electronic control units (ECUs) within automotive electronics, which is specified in ISO 14229-1. It is derived from ISO 14230-3 (KWP2000) and the now obsolete ISO 15765-3 (Diagnostic Communication over Controller Area Network (DoCAN)). 'Unified' in this context means that it is an international and not a company-specific standard. By now this communication protocol is used in all new ECUs made by Tier 1 suppliers of original equipment manufacturers (OEMs), and is incorporated into other standards, such as AUTOSAR. The ECUs in modern vehicles control nearly all functions, including electronic fuel injection (EFI), engine control, the transmission, the anti-lock braking system, door locks, braking, window operation, and more. Diagnostic tools are able to contact all ECUs installed in a vehicle which have UDS services enabled. In contrast to the CAN bus protocol, which only uses the first and second layers of the OSI model, UDS utilizes the fifth and seventh layers of the OSI model. The Service ID (SID) and the parameters associated with the services are contained in the payload of a message frame. Modern vehicles have a diagnostic interface for off-board diagnostics, which makes it possible to connect a computer (client) or diagnostics tool, referred to as the tester, to the communication system of the vehicle. Thus, UDS requests can be sent to the controllers, which must provide a response (this may be positive or negative). This makes it possible to interrogate the fault memory of the individual control units, to update them with new firmware, to have low-level interaction with their hardware (e.g. to turn a specific output on or off), or to make use of special functions (referred to as routines) to attempt to understand the environment and operating conditions of an ECU, in order to be able to diagnose faulty or otherwise undesirable behavior. UDS uses the ISO-TP transport layer (ISO 15765-2). The United States standard OBD-II also uses ISO-TP. Since OBD-II uses service numbers 0x01–0x0A, UDS uses service numbers starting at 0x10 in order to avoid overlap. Services SID (Service Identifier) Negative response codes A negative response from an ECU contains the SID 0x7F and two payload bytes: the request's SID and an error code. These codes can be found in freely available software (for example, BusMaster) as well as in the ISO standard itself. See also On-board diagnostics, general article about diagnostic services in vehicles OBD-II PIDs, about the US standard References External links Unified Diagnostic Services - ISO 14229 (poster by softing.com) PCAN-UDS 2.x API description Automotive technologies Embedded systems
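The negative-response layout just described is easy to show in code. Here is a minimal Python sketch; the helper names are invented, the NRC names listed are a few well-known codes from ISO 14229-1, and transport framing (ISO-TP) is assumed to be handled elsewhere:

```python
NEGATIVE_RESPONSE_SID = 0x7F

# A few well-known negative response codes (NRCs) from ISO 14229-1.
NRC_NAMES = {
    0x11: "serviceNotSupported",
    0x13: "incorrectMessageLengthOrInvalidFormat",
    0x22: "conditionsNotCorrect",
    0x31: "requestOutOfRange",
    0x33: "securityAccessDenied",
    0x78: "requestCorrectlyReceived-ResponsePending",
}

def negative_response(request_sid, nrc):
    """Build the 3-byte negative response: 0x7F, echoed request SID, error code."""
    return bytes([NEGATIVE_RESPONSE_SID, request_sid, nrc])

def describe(payload):
    """Classify a UDS response payload."""
    if payload[0] == NEGATIVE_RESPONSE_SID:
        sid, nrc = payload[1], payload[2]
        return f"negative response to SID 0x{sid:02X}: {NRC_NAMES.get(nrc, hex(nrc))}"
    # A positive response echoes the request SID plus 0x40.
    return f"positive response to SID 0x{payload[0] - 0x40:02X}"

print(describe(negative_response(0x10, 0x22)))  # DiagnosticSessionControl refused
print(describe(bytes([0x50, 0x03])))            # positive reply to SID 0x10
```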
Unified Diagnostic Services
[ "Technology", "Engineering" ]
598
[ "Embedded systems", "Computer science", "Computer engineering", "Computer systems" ]
43,496,222
https://en.wikipedia.org/wiki/Deuterated%20drug
A deuterated drug is a small-molecule medicinal product in which one or more of the hydrogen atoms in the drug molecule have been replaced by deuterium, hydrogen's heavier stable isotope. Because of the kinetic isotope effect, deuterium-containing drugs may have significantly lower rates of metabolism, and hence a longer half-life. Mode of action Hydrogen is a chemical element with an atomic number of 1. It has one proton and one electron. Deuterium is the heavier naturally occurring stable isotope of hydrogen. Deuterium was discovered by Harold Urey in 1931, for which he received the Nobel Prize in 1934. The deuterium isotope effect has become an important tool in the elucidation of the mechanism of chemical reactions. Deuterium contains one proton, one electron, and a neutron, effectively doubling the mass of the isotope relative to hydrogen without changing its chemical properties significantly. However, the C–D bond is slightly shorter, and it has reduced electronic polarizability and less hyperconjugative stabilization of adjacent bonds, including developing an anti-bonding orbital as part of the newly formed bond. This can potentially result in weaker van der Waals stabilization, and can produce other changes in properties that are difficult to predict, including changes in the intramolecular volume and the transition state volume. Substituting deuterium for hydrogen yields deuterated compounds that are similar in size and shape to hydrogen-based compounds. History The concept of replacing hydrogen with deuterium is an example of bioisosterism, whereby similar biological effects to a known drug are produced in an analog designed to confer superior properties. The first US patent for deuterated molecules was granted in the 1970s. Since then, patents on deuterated drugs have become more common. The applications of the deuterium isotope effect have increased over time, and it is now applied extensively in mechanistic studies of the metabolism of drugs as well as other studies focused on pharmacokinetics (PK), efficacy, tolerability, bioavailability, and safety. The introduction of deuterated drug candidates that began in the 1970s evolved from earlier work with deuterated metabolites. However, it took more than 40 years for the first deuterated drug, Austedo (deutetrabenazine), to be approved by the FDA. Numerous publications have discussed the advantages and disadvantages of deuterated drugs. A number of publications have discussed aspects of the intellectual property of deuterated versions of drugs. Examples Deutetrabenazine is a deuterated version of tetrabenazine. It was developed by Auspex, which was acquired by Teva in 2015, and approved by the FDA in 2017 as a treatment for chorea associated with Huntington's disease; it has a longer half-life than the non-deuterated form of tetrabenazine, which had been approved earlier for the same use. Deucravacitinib is a deuterated JAK inhibitor (specifically, a TYK2 inhibitor) approved for the treatment of plaque psoriasis. Concert Pharmaceuticals focuses on deuterated drugs for various conditions. Concert was acquired by Sun Pharma in March 2023. The company Retrotope discovered and has been developing a deuterated fatty acid, RT001, as a treatment for neurodegenerative diseases such as Friedreich's ataxia and infantile neuroaxonal dystrophy. Their premise is that fatty acids in cell membranes are a source of reactive oxygen species and that deuterated versions will be less prone to generating them. 
Poxel SA, a French clinical-stage biopharmaceutical company focused on therapies for rare metabolic diseases, is developing PXL065 to target nonalcoholic steatohepatitis (NASH). The company acquired PXL065 (the deuterium-stabilized (R)-enantiomer of pioglitazone) and a portfolio of deuterated thiazolidinediones (TZDs) from DeuteRx, LLC, in 2018, and published positive results from the Phase 2 trial in March 2023. See also Reinforced lipids References Further reading Heavy drugs gaining momentum. Deuterated compounds Drugs by structure Chemicals in medicine
Deuterated drug
[ "Chemistry" ]
883
[ "Chemicals in medicine", "Medicinal chemistry" ]
45,313,942
https://en.wikipedia.org/wiki/PAN%20domain
PAN domains have significant functional versatility, fulfilling diverse biological roles by mediating protein-protein and protein-carbohydrate interactions. These domains contain a hairpin loop-like structure, similar to that found in knottins but with a different pattern of disulfide bonds. It has been shown that the N-terminal domains of members of the plasminogen/hepatocyte growth factor family, the apple domains of the plasma prekallikrein/coagulation factor XI family, and domains of various nematode proteins belong to the same module superfamily, the PAN module. The PAN domain contains a conserved core of three disulfide bridges. In some members of the family there is an additional fourth disulfide bridge that links the N- and C-termini of the domain. The apple domain, as well as other examples of the PAN domain, consists of 7 β-strands that fold into a curved antiparallel sheet cradling an α-helix. Two disulfide bonds lock the helix onto the central β4 and β5 strands, whereas a third connects the N- and C-termini of the domain. In the apple domain, the β4-β5 loop and β5-β6 crossover loop generate a small pocket on the opposite side of the sheet from the α-helix. In native plasminogen the PAN domain is associated with five kringle domains. The interactions between the PAN domain and the kringles play a critical role in stabilising the quaternary complex of native plasminogen. References Protein domains
PAN domain
[ "Chemistry", "Biology" ]
330
[ "Biochemistry stubs", "Protein stubs", "Protein domains", "Protein classification" ]
45,318,796
https://en.wikipedia.org/wiki/Russian%20Geometric%20Kernel
Russian Geometric Kernel (also known as RGK) is a proprietary geometric modeling kernel developed by several Russian software companies, most notably Top Systems and LEDAS, and supervised by STANKIN (State Technology University). It was written in C++. History The kernel was developed in 2011–2013 under the supervision of “Stankin” Moscow State Technical University within the framework of the project for “Developing Licensed Home 3D-Kernel”, funded by the Ministry of Industry and Trade of the Russian Federation. The kernel was reportedly completed by 2013; no further news on it was available as of the end of 2016. Architecture RGK represents models using boundary representation (B-rep), but other representations are used when necessary. For instance, to optimize the speed of the kernel's functions, and to ensure precise storage and computation of the model, canonical objects and NURBS curves and surfaces are used. To solve tasks associated with complex operations (such as hole-covering surfaces, N-sided patches, and blending surfaces in complex cases), special types of curves and surfaces are used by the kernel. Low-level and high-level operations Kernel functions can also be grouped under another criterion: low-level and high-level ones. The low-level operations include constructing curves and surfaces (canonical objects, NURBS, offset curves and surfaces, and so on), projecting points and curves on surfaces, intersecting and extending curves and surfaces, modifying topology (including Euler operations), and so on. Low-level operations enable application developers to modify kernel data in a most flexible manner, practically operating in manual mode. High-level operations include operations that are standard for body generation, and Boolean operations on bodies (union, subtract, and intersect). These can be used with solid and surface bodies, and with combinations of the two. Platforms The geometric kernel supports 32- and 64-bit architectures, and Windows and Linux platforms. It can be compiled with any C++ compiler that implements the features of the C++11 standard. References External links Official RGK web page Computer-aided design software Computer-aided engineering software 3D graphics software Computer-aided design
Russian Geometric Kernel
[ "Engineering" ]
443
[ "Computer-aided design", "Design engineering" ]
45,320,987
https://en.wikipedia.org/wiki/Furstenberg%20boundary
In potential theory, a discipline within applied mathematics, the Furstenberg boundary is a notion of boundary associated with a group. It is named for Harry Furstenberg, who introduced it in a series of papers beginning in 1963 (in the case of semisimple Lie groups). The Furstenberg boundary, roughly speaking, is a universal moduli space for the Poisson integral, expressing a harmonic function on a group in terms of its boundary values. Motivation A model for the Furstenberg boundary is the hyperbolic disc. The classical Poisson formula for a bounded harmonic function on the disc has the form u(z) = ∫ P(z, θ) û(θ) dm(θ), where P is the Poisson kernel and û is the boundary function. Any function f on the disc determines a function on the group of Möbius transformations of the disc by setting f̃(g) = f(g(0)). Then the Poisson formula has the form f̃(g) = ∫ f̂(g·θ) dm(θ), where m is the Haar measure on the boundary. This function is then harmonic in the sense that it satisfies the mean-value property with respect to a measure on the Möbius group induced from the usual Lebesgue measure of the disc, suitably normalized. The association of a bounded harmonic function to an (essentially) bounded function on the boundary is one-to-one. Construction for semi-simple groups In general, let G be a semi-simple Lie group and μ a probability measure on G that is absolutely continuous. A function f on G is μ-harmonic if it satisfies the mean value property with respect to the measure μ: f(g) = ∫_G f(gh) dμ(h). There is then a compact space Π, with a G-action and a measure ν, such that any bounded harmonic function on G is given by f(g) = ∫_Π f̂(g·p) dν(p) for some bounded function f̂ on Π. The space Π and measure ν depend on the measure μ (and so, what precisely constitutes a harmonic function). However, it turns out that although there are many possibilities for the measure ν (which always depends genuinely on μ), there are only a finite number of spaces Π (up to isomorphism): these are homogeneous spaces of G that are quotients of G by some parabolic subgroup, which can be described completely in terms of root data and a given Iwasawa decomposition. Moreover, there is a maximal such space, with quotient maps going down to all of the other spaces, that is called the Furstenberg boundary. References Potential theory
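The two integral identities reconstructed above can be stated compactly in LaTeX (the notation for the boundary functions is chosen here for clarity):

```latex
% mu-harmonicity and the Poisson-type boundary representation.
\[
  f(g) = \int_{G} f(gh)\,\mathrm{d}\mu(h)
  \qquad (\mu\text{-harmonic mean-value property}),
\]
\[
  f(g) = \int_{\Pi} \hat{f}(g \cdot p)\,\mathrm{d}\nu(p)
  \qquad (\text{representation over the boundary } \Pi).
\]
```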
Furstenberg boundary
[ "Mathematics" ]
467
[ "Mathematical objects", "Functions and mappings", "Mathematical relations", "Potential theory" ]
46,647,638
https://en.wikipedia.org/wiki/Pax%20Atomica
Pax Atomica (Latin for “Atomic Peace”) is one of the terms that has sometimes been used to describe the period of severe tensions without a major military conflict between the United States and the Soviet Union during the Cold War. The term is also at times used to describe the entire post-World War II, post-atomic-bomb era. In the phrase's narrower application, applying only to the Cold War era, the phrase refers to the argument that the stability between the two superpowers was caused by each side's large nuclear arsenal, which led to a state of Mutually Assured Destruction (MAD). That is, had one of the superpowers launched a nuclear attack, the other would have responded in kind. This threatened the complete destruction of both countries and probably the entire northern hemisphere. John Lewis Gaddis has described the period as the Long Peace. In the phrase's broader application, applying to the entire post-World War II era, the phrase refers to the argument that the possession of nuclear arms by several of the world's larger powers has tended to prevent the outbreak of full-scale warfare between any of these powers, likewise owing to the probability of MAD. The phrase Pax Atomica is derived from the more popular term Pax Romana, which describes the period of stability under Roman hegemony during the Roman Empire. See also Deterrence theory Nuclear arms race Balance of power (international relations) Nuclear peace References Military terminology of the United States Nuclear warfare Atomica
Pax Atomica
[ "Chemistry" ]
312
[ "Radioactivity", "Nuclear warfare" ]
46,647,836
https://en.wikipedia.org/wiki/Orange%20carotenoid%20protein
Orange carotenoid protein (OCP) is a water-soluble protein which plays a role in photoprotection in diverse cyanobacteria. It is the only photoactive protein known to use a carotenoid as the photoresponsive chromophore. The protein consists of two domains, with a single keto-carotenoid molecule non-covalently bound between the two domains. It is a very efficient quencher of excitation energy absorbed by the primary light-harvesting antenna complexes of cyanobacteria, the phycobilisomes. The quenching is induced by blue-green light. It is also capable of preventing oxidative damage by directly scavenging singlet oxygen (1O2). History OCP was first described in 1981 by Holt and Krogmann, who isolated it from the unicellular cyanobacterium Arthrospira maxima, although its function would remain obscure until 2006. The crystal structure of the OCP was reported in 2003. At the same time the protein was shown to be an effective quencher of singlet oxygen and was suggested to be involved in photoprotection or carotenoid transport. In 2000, it was demonstrated that cyanobacteria could perform photoprotective fluorescence quenching independent of lipid phase transitions, differential transmembrane pH, and inhibitors. The action spectrum for this quenching process suggested the involvement of carotenoids, and the specific involvement of the OCP was later demonstrated by Kirilovsky and coworkers in 2006. In 2008, OCP was shown to require photoactivation by strong blue-green light for its photoprotective quenching function. Photoactivation is accompanied by a pronounced color change, from orange to red, which had been previously observed by Kerfeld et al. in the initial structural studies. In 2015, researchers in Berkeley used a combination of biophysical methods to show that the visible color change is the consequence of a 12 Å translocation of the carotenoid. Physiological significance For a long time, cyanobacteria were considered incapable of performing non-photochemical quenching (NPQ) as a photoprotective mechanism, relying instead on a mechanism of energy redistribution between the two photosynthetic reaction centers, PSII and PSI, known as "state transitions". OCP is found in a majority of cyanobacterial genomes, with remarkable conservation of its amino acid sequence, implying evolutionary constraints to preserve an important function. Mutant cells engineered to lack OCP photobleach under high light and become photoinhibited more rapidly under fluctuating light. Under nutrient stress conditions, which are expected to be the norm in marine environments, photoprotective mechanisms such as OCP become important even at lower irradiances. This protein is not found in chloroplasts, and appears to be specific to cyanobacteria. Function Photoactivity Upon illumination with blue-green light, OCP switches from an orange form (OCPO) to a red form (OCPR). The reversion of OCPR to OCPO is light-independent and occurs slowly in darkness. OCPO is considered the dark, stable form of the protein, and does not contribute to phycobilisome quenching. OCPR is considered to be essential for induction of the photoprotection mechanism. The photoconversion from the orange to the red form has a poor light efficiency (very low quantum yield), which helps ensure that the protein's photoprotective role functions only under high-light conditions; otherwise, the dissipative NPQ process could unproductively divert light energy away from photosynthesis under light-limiting conditions.
Energy quenching As evidenced by a decreased fluorescence, OCP in its red form is capable of dissipating absorbed light energy from the phycobilisome antenna complex. According to Rakhimberdieva and coworkers, about 30–40% of the energy absorbed by phycobilisomes does not reach the reaction centers when the carotenoid-induced NPQ is active. The exact mechanism and quenching site, in both the carotenoid and the phycobilisome, still remain uncertain. The linker polypeptide ApcE in the allophycocyanin (APC) core of the phycobilisomes is known to be important, but is not the site of quenching. Several lines of evidence suggest that it is the 660 nm fluorescence emission band of the APC core which is quenched by OCPR. The temperature dependence of the rate of fluorescence quenching is similar to that of soluble protein folding, supporting the hypothesis that OCPO slightly unfolds when it converts to OCPR. Singlet oxygen quenching As first shown in 2003, the auxiliary function of carotenoids as quenchers of singlet oxygen also contributes to the photoprotective role of OCP; this has been demonstrated under strong orange-red light, conditions under which OCP cannot be photoactivated for its energy-quenching role. This is significant because all oxygenic phototrophs have a particular risk of oxidative damage initiated by singlet oxygen (1O2), which is produced when their own light-harvesting pigments act as photosensitizers. Structure 3D structure The three-dimensional protein structure of OCP (in the OCPO form) was solved in 2003, before its photoprotective role had been defined. The 35 kDa protein contains two structural domains: an all-α-helical N-terminal domain (NTD) consisting of two interleaved 4-helix bundles, and a mixed α/β C-terminal domain (CTD). The two domains are connected by an extended linker. In OCPO, the carotenoid spans both domains, which are tightly associated in this form of the protein. In 2013, Kerfeld and co-workers showed that the NTD is the effector (quencher) domain of the protein while the CTD plays a regulatory role. Protein–protein interactions The OCP participates in key protein–protein interactions that are critical to its photoprotective function. The activated OCPR form binds to allophycocyanin in the core of the phycobilisome and initiates the OCP-dependent photoprotective quenching mechanism. Another protein, the fluorescence recovery protein (FRP), interacts with the CTD in OCPR and catalyzes the reaction which reverts it back to the OCPO form. Because OCPO cannot bind to the phycobilisome antenna, FRP can effectively detach OCP from the antenna and restore full light-harvesting capacity. Evolution The primary structure (amino acid sequence) is highly conserved among OCP sequences, and the full-length protein is usually co-located on the chromosome with a second open reading frame that was later characterized as the FRP. Often, biosynthetic genes for ketocarotenoid synthesis (e.g., CrtW) are nearby. These conserved functional linkages underscore the evolutionary importance of the OCP style of photoprotection for many cyanobacteria. The first structure determination of the OCP coincided with the beginning of the genome sequencing era, and it was already apparent in 2003 that there is also a variety of evolutionarily related genes which encode proteins with only one of the two domains present in OCP. The N-terminal domain (NTD), "Carot_N", is found only in cyanobacteria, but exhibits a considerable amount of gene duplication.
The C-terminal domain (CTD), however, is homologous with the widespread NTF2 superfamily, which shares a protein fold with its namesake, nuclear transport factor 2, as well as around 20 other subfamilies of proteins with functions as diverse as limonene-1,2-epoxide hydrolase, SnoaL polyketide cyclase, and delta-5-3-ketosteroid isomerase (KSI). Most, if not all, of the members of the NTF2 superfamily form oligomers, often using the surface of their beta sheet to interact with another monomer or other protein. Bioinformatic analyses carried out over the past 15 years have resulted in the identification of new groups of carotenoid proteins: in addition to new families of the OCP, there are HCPs and CCPs, which correspond to the NTD and CTD of the OCP, respectively. Based on the primary structure, the HCPs can be subdivided into at least nine evolutionarily distinct clades, each of which binds carotenoid. The CCPs resolve into two major groups, and these proteins also bind carotenoid. These data, together with the ability to devolve OCP into its two component domains while retaining function, have led to a reconstruction of the evolution of the OCP. Applications Its water-solubility, together with its status as the only known photoactive protein containing a carotenoid, makes the OCP a valuable model for studying solution-state energetic and photophysical properties of carotenoids, which are a diverse class of molecules found across all domains of life. Moreover, carotenoids are widely investigated for their properties as anti-oxidants, and thus the protein may serve as a template for delivery of carotenoids for therapeutic purposes in human medicine. Because of its high efficiency of fluorescence quenching, coupled to its low quantum yield of photoactivation by specific wavelengths of light, OCP has ideal properties as a photoswitch and has been proposed as a novel system for developing optogenetics technologies; it may have other applications in optofluidics and biophotonics. See also Photoprotection Xanthophylls Biological pigments Orange carotenoid N-terminal domain Phycobilisome Fluorescence recovery protein Photosynthetic state transition Ketocarotenoids References Antioxidants Cyanobacteria proteins Photosynthesis Carotenoids Photochemistry
Orange carotenoid protein
[ "Chemistry", "Biology" ]
2,156
[ "Biomarkers", "Photosynthesis", "Carotenoids", "nan", "Biochemistry" ]
46,655,391
https://en.wikipedia.org/wiki/Prism%20spectrometer
A prism spectrometer is an optical spectrometer which uses a dispersive prism as its dispersive element. The prism refracts light into its different colors (wavelengths). The dispersion occurs because the angle of refraction is dependent on the refractive index of the prism's material, which in turn is slightly dependent on the wavelength of light that is traveling through it. Theory Light is emitted from a source such as a vapor lamp. A slit selects a thin strip of light which passes through the collimator, where it is made parallel. The collimated light then passes through the prism, in which it is refracted twice (once when entering and once when leaving). Because of the nature of a dispersive element, the angle by which light is refracted depends on its wavelength. This leads to a spectrum of thin lines of light, each being observable at a different angle. A lens or telescope is then used to form images of the original slit, with images formed using different wavelengths of light at different positions. If a real image is formed, it can be recorded on film or an image sensor, making the device a spectrograph. Replacing the prism with a diffraction grating would result in a grating spectrometer. Optical gratings are less expensive, provide much higher resolution, and are easier to calibrate, due to their linear diffraction dependency; a prism's refraction angle, by contrast, varies nonlinearly with wavelength. On the other hand, gratings have significant intensity losses. Usage Spectroscopy A prism spectrometer may be used to determine the composition of a material from its emitted spectral lines. Measurement of refractive indices A prism spectrometer may be used to measure the refractive index of a material if the wavelengths of the light used are known. The calibration of a prism spectrometer is carried out with known spectral lines from vapor lamps or laser light. External links The prism spectrometer Physics Laboratory Guide, Durham University The Prism Spectrometer Spectrometer, Refractive Index of the material of a prism Virtual Laboratory, Amrita University Refractometers Spectrometers Prisms (optics)
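A refractive-index measurement with a prism spectrometer is usually done at minimum deviation, where n = sin((A + D)/2) / sin(A/2) for apex angle A and minimum deviation D. The sketch below applies this standard formula and adds a simple Cauchy dispersion model n(λ) = B + C/λ²; the Cauchy coefficients are rough textbook values for crown glass, assumed here purely for illustration.

```python
import math

def refractive_index(apex_deg, min_dev_deg):
    """n = sin((A + D)/2) / sin(A/2), valid at minimum deviation."""
    A = math.radians(apex_deg)
    D = math.radians(min_dev_deg)
    return math.sin((A + D) / 2) / math.sin(A / 2)

def cauchy_n(wavelength_um, B=1.5046, C=0.00420):
    """Cauchy approximation n = B + C / lambda^2 (lambda in micrometres);
    B and C are rough values for a crown glass, used only as an example."""
    return B + C / wavelength_um ** 2

# A 60-degree prism showing a minimum deviation of 38.9 degrees:
print(round(refractive_index(60.0, 38.9), 4))  # ~1.5196, i.e. crown glass

# Dispersion: blue light (486 nm) is refracted more than red (656 nm)
print(round(cauchy_n(0.486), 4), round(cauchy_n(0.656), 4))
```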
Prism spectrometer
[ "Physics", "Chemistry", "Technology", "Engineering" ]
447
[ "Refractometers", "Spectrum (physical sciences)", "Measuring instruments", "Spectrometers", "Spectroscopy" ]
29,882,763
https://en.wikipedia.org/wiki/K%20factor%20%28crude%20oil%20refining%29
The K factor or characterization factor is defined from the Rankine boiling temperature (°R = 1.8·Tb[K]) and the specific gravity ρ relative to water at 60 °F: K(UOP) = (Tb[°R])^(1/3) / ρ. The K factor is a systematic way of classifying a crude oil according to its paraffinic, naphthenic, intermediate or aromatic nature. Values of 12.5 or higher indicate a crude oil of predominantly paraffinic constituents, while values of 10 or lower indicate a crude of more aromatic nature. K(UOP) is also referred to as the UOP K factor or simply UOPK. See also Crude oil assay References External links Pipe fitting friction calculation Pipe Friction Loss Calculations Oil refining Separation processes
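As a quick numerical illustration of the definition, the snippet below computes K(UOP) from a boiling point in kelvins and a specific gravity; the sample values are invented solely to show how the paraffinic and aromatic cutoffs work.

```python
def uop_k(boiling_point_kelvin, specific_gravity_60F):
    """K(UOP) = (boiling point in degrees Rankine)**(1/3) / specific gravity."""
    rankine = 1.8 * boiling_point_kelvin
    return rankine ** (1.0 / 3.0) / specific_gravity_60F

# Illustrative (made-up) cuts with the same boiling point:
print(round(uop_k(560.0, 0.78), 2))  # ~12.86 -> predominantly paraffinic
print(round(uop_k(560.0, 1.02), 2))  # ~9.83  -> more aromatic
```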
K factor (crude oil refining)
[ "Chemistry" ]
144
[ "Separation processes", "Petroleum stubs", "Petroleum technology", "Petroleum", "Oil refining", "nan", "Chemical process stubs" ]
29,888,055
https://en.wikipedia.org/wiki/Terminal%20Productivity%20Executive
Terminal Productivity Executive (TPX) is a multiple session manager for IBM mainframes. It allows connected users to access resources with a single sign-on. It holds several sessions concurrently, allowing a person to switch among them via the single connection on their physical terminal or terminal emulator application (e.g., telnet). For each session, TPX uses a virtual terminal; users can use it, for example, to switch between ISPF and SDSF under the Time Sharing Option. TPX is presently a product of CA Technologies; it was originally developed by Morgan Stanley and later acquired by Duquesne Systems. TPX is primarily used on z/OS, but a version also exists for z/VM. References IBM mainframe software
Terminal Productivity Executive
[ "Technology" ]
154
[ "Computing stubs", "Software stubs" ]
33,800,100
https://en.wikipedia.org/wiki/Hilbert%E2%80%93Burch%20theorem
In mathematics, the Hilbert–Burch theorem describes the structure of some free resolutions of a quotient of a local or graded ring in the case that the quotient has projective dimension 2. Hilbert proved a version of this theorem for polynomial rings, and Burch proved a more general version. Several other authors later rediscovered and published variations of this theorem. Eisenbud (1995) gives a statement and proof. Statement If R is a local ring with an ideal I and 0 \to R^m \xrightarrow{f} R^n \to R \to R/I \to 0 is a free resolution of the R-module R/I, then m = n − 1 and the ideal I is aJ, where a is a regular element of R and J, a depth-2 ideal, is the first Fitting ideal of I, i.e., the ideal generated by the determinants of the minors of size m of the matrix of f. References Commutative algebra Theorems in algebra
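A standard small example (folklore, not taken from the references above) illustrates the statement, with R the polynomial ring k[x, y] localized at (x, y) and I = (x², xy, y²):

```latex
0 \longrightarrow R^{2}
  \xrightarrow{\;f\;} R^{3}
  \longrightarrow R
  \longrightarrow R/I
  \longrightarrow 0,
\qquad
f=\begin{pmatrix} y & 0\\ -x & y\\ 0 & -x \end{pmatrix}.
% Here n = 3, m = 2 = n - 1, and the three 2x2 minors of f
% (deleting one row at a time) are x^2, -xy, y^2, so the first
% Fitting ideal is J = (x^2, xy, y^2) = I, with a = 1.
```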
Hilbert–Burch theorem
[ "Mathematics" ]
173
[ "Theorems in algebra", "Commutative algebra", "Fields of abstract algebra", "Mathematical problems", "Mathematical theorems", "Algebra" ]
35,471,835
https://en.wikipedia.org/wiki/Chung%E2%80%93Fuchs%20theorem
In mathematics, the Chung–Fuchs theorem, named after Chung Kai-lai and Wolfgang Heinrich Johannes Fuchs, states that for a particle undergoing a zero-mean random walk in m dimensions, it is certain to come back infinitely often to any neighborhood of the origin on a one-dimensional line (m = 1) or two-dimensional plane (m = 2), but in three or more dimensional spaces it will leave to infinity. Specifically, if the position of the particle is described by the vector Z_n = X_1 + \dots + X_n, where the X_i are independent m-dimensional vectors with a given multivariate distribution, then if m = 1 and E(X_i) = 0, or if m = 2, E(X_i) = 0 and E(|X_i|^2) < \infty, the following holds: for every \varepsilon > 0, P(|Z_n| < \varepsilon \text{ for infinitely many } n) = 1. However, for m \geq 3, P(\lim_{n\to\infty} |Z_n| = \infty) = 1. References Chung, K.L. and Fuchs, W.H.J. (1951). "On the distribution of values of sums of random variables". Mem. Amer. Math. Soc. No. 6, 12 pp. Eponymous theorems of physics
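A quick Monte Carlo experiment makes the dimension dependence visible. The sketch below uses simple unit coordinate steps and a finite time horizon as a stand-in for 'infinitely often'; both choices are simplifying assumptions for illustration only.

```python
import random

def fraction_returning(dim, n_walks=200, n_steps=20000, eps=1.0):
    """Fraction of zero-mean walks that re-enter the eps-ball around
    the origin at least once after the first step."""
    returns = 0
    for _ in range(n_walks):
        pos = [0] * dim
        for step in range(n_steps):
            axis = random.randrange(dim)
            pos[axis] += random.choice((-1, 1))
            if step > 0 and sum(c * c for c in pos) <= eps * eps:
                returns += 1
                break
    return returns / n_walks

for m in (1, 2, 3):
    print(m, fraction_returning(m))
# Typically close to 1.0 for m = 1 and m = 2 and visibly below 1 for
# m = 3, matching recurrence in low dimensions and transience for m >= 3.
```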
Chung–Fuchs theorem
[ "Physics" ]
189
[ "Eponymous theorems of physics", "Equations of physics", "Physics theorems" ]
35,474,809
https://en.wikipedia.org/wiki/Roles%20of%20chemical%20elements
This table is designed to show the role(s) performed by each chemical element, in nature and in technology. Z = Atomic number; Sym. = Symbol; Per. = Period; Gr. = Group. See also Abundance of the chemical elements Dietary mineral External links The Role of Elements in Life Processes | Mineral Information Institute Periodic Table of the Chemical Elements and Dietary Minerals What Chemical Elements Are Found Within The Human Body? - Science - Questions & Answers Digging for rare earths: The mines where iPhones are born | Apple - CNET News, September 26, 2012
Roles of chemical elements
[ "Physics" ]
112
[ "Chemical elements", "Atoms", "Matter" ]
35,475,501
https://en.wikipedia.org/wiki/Birks%27%20law
Birks' law (named after British physicist John B. Birks) is an empirical formula for the light yield per path length as a function of the energy loss per path length for a particle traversing a scintillator, and gives a relation that is not linear at high loss rates. Overview The relation is: \frac{dL}{dx} = S\,\frac{dE/dx}{1 + kB\,(dE/dx)}, where L is the light yield, S is the scintillation efficiency, dE/dx is the specific energy loss of the particle per path length, k is the probability of quenching, and B is a constant of proportionality linking the local density of ionized molecules at a point along the particle's path to the specific energy loss; "Since k and B appear only as a product, they act as one parameter, kB, called Birks' coefficient, which has units of distance per energy. Its value depends on the scintillating material." kB is 0.126 mm/MeV for polystyrene-based scintillators and 1.26–2.07 × 10−2 g MeV−1 cm−2 for polyvinyltoluene-based scintillators. Birks speculated that the loss of linearity is due to recombination and quenching effects between the excited molecules and the surrounding substrate. Birks' law has mostly been tested for organic scintillators. Its applicability to inorganic scintillators is debated. A good discussion can be found in Particle Detectors at Accelerators: Organic scintillators. A compilation of Birks' constant for various materials can be found in Semi-empirical calculation of quenching factors for ions in scintillators. A more complete theory of scintillation saturation, that gives Birks' law when only unimolecular de-excitation is included, can be found in a paper by Blanc, Cambou, and De Laford. References Empirical laws Eponymous laws of physics Particle detectors
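The saturation behavior is easy to evaluate numerically. The sketch below uses the polystyrene value of kB quoted above and an arbitrary scintillation efficiency S = 1, so the outputs are in arbitrary light units; dE/dx must then be supplied in MeV/mm.

```python
def birks_light_yield_per_length(dE_dx, S=1.0, kB=0.126):
    """Birks' law: dL/dx = S * (dE/dx) / (1 + kB * dE/dx).
    With kB in mm/MeV (polystyrene value), dE/dx is in MeV/mm."""
    return S * dE_dx / (1.0 + kB * dE_dx)

# Nearly linear at low ionization density, strongly quenched at high:
for dedx in (0.2, 2.0, 20.0):  # MeV/mm
    print(dedx, round(birks_light_yield_per_length(dedx), 3))
# 0.2 -> ~0.195; 2.0 -> ~1.597; 20.0 -> ~5.682 (far below the linear 20)
```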
Birks' law
[ "Physics", "Technology", "Engineering" ]
402
[ "Particle physics stubs", "Particle detectors", "Particle physics", "Measuring instruments" ]
35,477,085
https://en.wikipedia.org/wiki/Belinski%E2%80%93Zakharov%20transform
The Belinski–Zakharov (inverse) transform is a nonlinear transformation that generates new exact solutions of the vacuum Einstein field equation. It was developed by Vladimir Belinski and Vladimir Zakharov in 1978. The Belinski–Zakharov transform is a generalization of the inverse scattering transform. The solutions produced by this transform are called gravitational solitons (gravisolitons). Despite the term 'soliton' being used to describe gravitational solitons, their behavior is very different from other (classical) solitons. In particular, gravitational solitons do not preserve their amplitude and shape in time, and as of June 2012 their general interpretation remained unknown. What is known, however, is that most black holes (and particularly the Schwarzschild metric and the Kerr metric) are special cases of gravitational solitons. Introduction The Belinski–Zakharov transform works for spacetime intervals of the form ds^2 = f(-dt^2 + dz^2) + g_{ab}\,dx^a dx^b, where we use the Einstein summation convention for a, b = 1, 2. It is assumed that both the function f and the matrix g = (g_{ab}) depend on the coordinates t and z only. Despite being a specific form of the spacetime interval that depends only on two variables, it includes a great number of interesting solutions as special cases, such as the Schwarzschild metric, the Kerr metric, the Einstein–Rosen metric, and many others. In this case, Einstein's vacuum equation decomposes into two sets of equations for the matrix g and the function f. Using light-cone coordinates \zeta = (z + t)/2 and \eta = (z - t)/2, the first equation for the matrix g is (\alpha g_{,\zeta} g^{-1})_{,\eta} + (\alpha g_{,\eta} g^{-1})_{,\zeta} = 0, where \alpha is the square root of the determinant of g, namely \det g = \alpha^2. The second set of equations determines f in terms of g: (\ln f)_{,\zeta} = (\ln \alpha)_{,\zeta\zeta}/(\ln \alpha)_{,\zeta} + (\alpha/4\alpha_{,\zeta})\,\mathrm{tr}(g_{,\zeta} g^{-1} g_{,\zeta} g^{-1}), together with the analogous equation in \eta. Taking the trace of the matrix equation for g reveals that \alpha in fact satisfies the wave equation \alpha_{,\zeta\eta} = 0. Lax pair Consider the linear operators D_1 and D_2 defined by D_1 = \partial_\zeta + \frac{2\alpha_{,\zeta}\lambda}{\lambda - \alpha}\,\partial_\lambda and D_2 = \partial_\eta - \frac{2\alpha_{,\eta}\lambda}{\lambda + \alpha}\,\partial_\lambda, where \lambda is an auxiliary complex spectral parameter. A simple computation shows that, since \alpha satisfies the wave equation, [D_1, D_2] = 0. This pair of commuting operators is the Lax pair. The gist behind the inverse scattering transform is rewriting the nonlinear Einstein equation as an overdetermined linear system of equations for a new matrix function \psi = \psi(\lambda, \zeta, \eta). Consider the Belinski–Zakharov equations: D_1\psi = \frac{A}{\lambda - \alpha}\psi and D_2\psi = \frac{B}{\lambda + \alpha}\psi, where A = -\alpha g_{,\zeta} g^{-1} and B = \alpha g_{,\eta} g^{-1}. By operating on the left-hand side of the first equation with D_2 and on the left-hand side of the second equation with D_1 and subtracting the results, the left-hand side vanishes as a result of the commutativity of D_1 and D_2. As for the right-hand side, a short computation shows that it vanishes as well precisely when g satisfies the nonlinear matrix Einstein equation. This means that the overdetermined linear Belinski–Zakharov equations are solvable simultaneously exactly when g solves the nonlinear matrix equation. One can easily restore g from the matrix-valued function \psi(\lambda, \zeta, \eta) by a simple limiting process. Taking the limit \lambda \to 0 in the Belinski–Zakharov equations and multiplying by \psi^{-1} from the right gives g_{,\zeta} g^{-1} = \psi_{,\zeta}(0, \zeta, \eta)\,\psi^{-1}(0, \zeta, \eta) (and similarly in \eta). Thus a solution g of the nonlinear equation is obtained from a solution of the linear Belinski–Zakharov equation by a simple evaluation: g(\zeta, \eta) = \psi(0, \zeta, \eta). References Exact solutions in general relativity
Belinski–Zakharov transform
[ "Mathematics" ]
608
[ "Exact solutions in general relativity", "Mathematical objects", "Equations" ]
35,480,371
https://en.wikipedia.org/wiki/Phylogeny%20%28psychoanalysis%29
Phylogeny in psychoanalysis is the study of the whole family or species of an organism in order to better understand its pre-history. According to Sigmund Freud, it might have an unconscious influence on a patient. After the possibilities of ontogeny, the development of the whole organism viewed in the light of occurrences during the course of its life, have been exhausted, phylogeny might shed more light on the pre-history of an organism. The term phylogeny derives from the Greek terms phyle (φυλή) and phylon (φῦλον), denoting "tribe" and "race", and the term genetikos (γενετικός), denoting "relative to birth", from genesis (γένεσις), "origin" or "birth". Phylogenetics is the study of evolutionary relatedness among groups of organisms (e.g. species, populations). In biology this relatedness is discovered through molecular sequencing data and morphological data matrices (phylogenetics), while in psychoanalysis it is discovered by analysis of the memories of the patient and the patient's relatives. References See also Ontogeny (psychoanalysis) Ontogeny Phylogenetics Phylogenetics Evolutionary biology Psychoanalysis
Phylogeny (psychoanalysis)
[ "Biology" ]
265
[ "Evolutionary biology", "Phylogenetics", "Bioinformatics", "Taxonomy (biology)" ]
36,394,774
https://en.wikipedia.org/wiki/K%20factor%20%28traffic%20engineering%29
In transportation engineering, the K factor is defined as the proportion of annual average daily traffic occurring in an hour. This factor is used for designing and analyzing the flow of traffic on highways. K factors must be calculated at a continuous count station, usually an "automatic traffic recorder", for a year before being determined. Usually this number is the proportion of annual average daily traffic (AADT) occurring at the 30th-highest hour of traffic volume from the year's worth of data. This 30th-highest hour of traffic is also known as "K30" or the "Design Hour Factor". This factor improves traffic forecasting, which in turn improves the ability of designers and engineers to plan for efficiency and serve the needs of this particular set of traffic. Such forecasting includes the selection of pavement and inclusion of different geometric aspects of highway design, as well as the effects of lane closures and necessity of traffic lights. Engineers have reached a consensus on identifying K30 as a reasonable peak of activity, before high outliers of traffic volume become determinative of overall patterns. The K factor has three general characteristics: K generally decreases as AADT increases. K generally decreases as development density increases. K is generally highest near recreational facilities, next highest in rural and suburban areas, and lowest in urban areas. Another notable proportion is K100 (the proportion of AADT occurring during the 100th highest hour of the design year). This proportion is also known as the Planning Analysis Hour Factor. Calculation The calculation for the K factor is given by the formula: K = DHV / AADT, in which DHV is the "Design Hourly Volume", the 30th highest hourly traffic volume (in both directions) in the year in which data was collected, in vehicles per hour. DHV could also be the 50th or 100th highest hourly traffic volume (in both directions) in the year in which data was collected, in vehicles per hour; but one would need to note this by saying that it is the K50 or K100 factor. Usage The use of the K30 standard is mandated for the Highway Performance Monitoring System's comparisons of congestion. The K factor also helps calculate the peak-to-daily ratio of traffic. K30 helps maintain a healthy volume-to-capacity ratio. K50 and K100 will sometimes be seen; these use the 50th or 100th highest hourly traffic volume instead of the 30th when calculating the K factor. References Transportation engineering
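Determining K30 or K100 from a year of counts is a simple rank statistic, as the sketch below shows. The hourly volumes here are synthetic random numbers standing in for real automatic-traffic-recorder data, so the printed K values are illustrative only.

```python
import random

def design_hour_factor(hourly_volumes, aadt, rank=30):
    """K = DHV / AADT, where DHV is the rank-th highest hourly volume
    of the year (rank=30 gives K30, rank=100 gives K100)."""
    dhv = sorted(hourly_volumes, reverse=True)[rank - 1]
    return dhv / aadt

random.seed(1)
year = [random.randint(200, 2400) for _ in range(8760)]  # fake hourly counts
aadt = sum(year) / 365.0  # annual average daily traffic, vehicles per day

print(round(design_hour_factor(year, aadt, rank=30), 3))   # K30
print(round(design_hour_factor(year, aadt, rank=100), 3))  # K100
```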
K factor (traffic engineering)
[ "Physics", "Engineering" ]
503
[ "Transport stubs", "Industrial engineering", "Physical systems", "Transport", "Transportation engineering", "Civil engineering" ]
36,395,362
https://en.wikipedia.org/wiki/Micronized%20rubber%20powder
Micronized rubber powder (MRP) is classified as fine, dry, powdered elastomeric crumb rubber in which a significant proportion of particles are smaller than 100 μm and which is free of foreign particulates (metal, fiber, etc.). MRP particle size distributions typically range from 180 μm to 10 μm. Narrower distributions can be achieved depending on the classification technology. MRP source materials MRP is typically made from vulcanized elastomeric material, most often from end-of-life tire material, but can also be produced from post-industrial nitrile rubber, ethylene propylene diene monomer (EPDM), butyl and natural rubber compounds. Characteristics MRP is a free-flowing, black rubber powder that disperses into a multitude of systems and applications. Due to its micron size, MRP can be incorporated into multiple polymers, and provides a smooth surface appearance on finished products. In some cases, in order to improve compatibility with host materials, the MRP is given a chemical treatment to activate, or “make functional”, the surface of the powder particles. This is referred to as functionalized MRP or FMRP. MRP represents an evolution over previous post-manufactured rubber technologies. The most basic rubber processing technology converts end-of-life tire and post-industrial rubber material into rubber chips that are typically one inch or larger in size. These chips are then used in tire-derived fuel and civil engineering projects. A second-generation processing technology converts end-of-life tire and rubber material into crumb rubber, also known as ground tire rubber (GTR). GTR typically comprises chips between one inch and 30 mesh in size, with the associated fiber and steel mostly removed. This material is used in asphalt, as garden mulch and in playgrounds. MRP is a micron-size material that is produced in various sizes, from 80 mesh down to 300 mesh. MRP is virtually metal- and fiber-free, enabling its use in a wide range of advanced products. Applications MRP is used as a compound extender to offset the use of natural rubber and synthetic polymers as well as act as a process aid in material production. In some cases, MRP can reduce formulation costs, because it replaces commodity-priced rubber- and oil-based feedstocks. According to some estimates, MRP offers up to 50 percent cost savings over virgin raw materials. MRP also can improve the sustainability, and in some cases the performance, of the compounds in which it is used. For example, the smaller particle sizes of MRP are known to increase the impact strength of certain plastic compositions. However, in all applications the particle size and loading levels depend on the target application. Due to its size and composition, MRP can be incorporated into more advanced and higher-value applications than crumb rubber. Industries incorporating MRP into their products include tire, automotive, construction, industrial components and consumer products. It is also used as an additive in tires, plastics, asphalt, coatings, and sealants. MRP can also be incorporated into prime or recycled grade polypropylene (PP), high-density polyethylene (HDPE) and nylons. Additionally, the incorporation of MRPs in thermoplastic elastomers (TPE) and thermoplastic vulcanizates (TPV) makes it a feasible ingredient for automotive and building and construction applications. Currently, the leading producer of MRP is Lehigh Technologies, which utilizes a cryogenic turbo mill process with more than 100 million pounds of annual production capacity.
MRP produced by Lehigh has set high benchmarks for performance in a range of applications with customers and third-party research institutions, including several studies on increased asphalt performance. Lehigh claims more than 250 million tires on the road today have been made using its MRP. There is an applicable American Society for Testing and Materials (ASTM) specification [ASTM D5603-01 (2008)] for the classification of rubber powder, including MRP. Safety Numerous U.S. and European studies have found that crumb rubber and MRP meet standards for human health and safety. Recently, an EPA study found that crumb rubber in field turfs and playgrounds contained concentrations of materials below harmful levels. References Rubber Recycling Powders
Micronized rubber powder
[ "Physics" ]
887
[ "Materials", "Powders", "Matter" ]
36,402,111
https://en.wikipedia.org/wiki/International%20Conference%20on%20Radiation%20Effects%20in%20Insulators
Radiation Effects in Insulators (REI) is a long-running international conference series dedicated to basic and applied scientific research relating to radiation effects in insulators and non-metallic materials. It is held every second year in locations around the world. The REI conference has a long history. Since the first conference was held in 1981, REI has been the international forum to present and discuss the latest achievements in the field of insulating-materials modification through different kinds of radiation (ions, electrons, neutrons, etc.). The conference regularly attracts about 200 attendees. Topics covered The REI conference covers a wide range of topics, including the following. Atomistic and Collective Processes of Radiation Effects Fundamental knowledge on atomistic and electronic defect production and stability Irradiation-induced microstructural evolution and material modifications Fundamentals, theory and computer simulations Advances in defect and material characterization Radiation response of nanomaterials Swift heavy ion irradiations Neutron irradiations Laser-solid interactions Electron-solid interactions Irradiated Materials Simple and complex oxides Carbides and nitrides Polymers Ionic crystals Semiconductor and scintillator materials Glasses and silica Carbon-based materials Nanocomposites and nanostructured materials Applications Nuclear materials: fission, fusion and waste forms Functional nanocomposites Photonic, bio-medicine and sensing materials Micro- and nano-patterning Materials processing with swift heavy ions and cluster beams Proceedings The proceedings of REI-1 (1981) and REI-3 (1985) were published in the peer-reviewed journal Radiation Effects, renamed Radiation Effects and Defects in Solids in 1989. The proceedings of REI-1 are found in volume 64 [issues 1-4] and volume 65 [issues 1-4] of this journal. The proceedings of REI-3 are found in volume 97 [issues 3-4], volume 98 [issues 1-4], and volume 99 [issues 1-4] of this journal. The proceedings of REI-2 (1983) and every REI conference since REI-4 (1987) have been published in the peer-reviewed Elsevier journal Nuclear Instruments and Methods in Physics Research B. These REI proceedings can be found in the following volumes of this journal: REI-2 (vol. 1), REI-4 (vol. 32), REI-5 (vol. 46), REI-6 (vol. 65), REI-7 (vol. 91), REI-8 (vol. 116), REI-9 (vol. 141), REI-10 (vol. 166-167), REI-11 (vol. 191), REI-12 (vol. 218), REI-13 (vol. 250), REI-14 (vol. 266), REI-15 (vol. 268), REI-16 (vol. 286), REI-17 (vol. 326), REI-18 (vol. 379) and REI-19 (vol. 435). The proceedings of REI-20 and following conferences are reported in special issue collections of Nuclear Instruments and Methods in Physics Research B. REI conferences held REI conferences have been held biennially from REI-1 in 1981 up to 2023. The chairmen for the conferences prior to 2009 are taken from the lists of proceedings editors. References Physics conferences Radiation effects
International Conference on Radiation Effects in Insulators
[ "Physics", "Materials_science", "Engineering" ]
671
[ "Physical phenomena", "Materials science", "Radiation", "Condensed matter physics", "Radiation effects" ]
36,402,267
https://en.wikipedia.org/wiki/C23H31N3O
The molecular formula C23H31N3O (molar mass: 365.51 g/mol, exact mass: 365.2467 u) may refer to: APINACA, or AKB48 BU-LAD, or 6-butyl-6-nor-lysergic acid diethylamide Etomethazene Molecular formulas
C23H31N3O
[ "Physics", "Chemistry" ]
91
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
39,299,080
https://en.wikipedia.org/wiki/Lie-admissible%20algebra
In algebra, a Lie-admissible algebra, introduced by A. Adrian Albert (1948), is a (possibly non-associative) algebra that becomes a Lie algebra under the bracket [a, b] = ab − ba. Examples include associative algebras, Lie algebras, and Okubo algebras. See also Malcev-admissible algebra Jordan-admissible algebra References Non-associative algebra
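That every associative algebra is Lie-admissible is a short standard computation, reproduced here only for illustration:

```latex
% In an associative algebra put [a,b] = ab - ba. Expanding,
[[a,b],c] = abc - bac - cab + cba,\quad
[[b,c],a] = bca - cba - abc + acb,\quad
[[c,a],b] = cab - acb - bca + bac.
% Summing the three lines, all twelve monomials cancel in pairs, so
[[a,b],c]+[[b,c],a]+[[c,a],b]=0,
% which is the Jacobi identity; antisymmetry is immediate.
```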
Lie-admissible algebra
[ "Mathematics" ]
87
[ "Non-associative algebra", "Mathematical structures", "Algebraic structures" ]
39,299,327
https://en.wikipedia.org/wiki/Malcev-admissible%20algebra
In algebra, a Malcev-admissible algebra, introduced by Hyo Chul Myung (1986), is a (possibly non-associative) algebra that becomes a Malcev algebra under the bracket [a, b] = ab − ba. Examples include alternative algebras, Malcev algebras and Lie-admissible algebras. See also Jordan-admissible algebra References Non-associative algebra
Malcev-admissible algebra
[ "Mathematics" ]
83
[ "Non-associative algebra", "Mathematical structures", "Algebraic structures" ]
39,299,698
https://en.wikipedia.org/wiki/Noncommutative%20Jordan%20algebra
In algebra, a noncommutative Jordan algebra is an algebra, usually over a field of characteristic not 2, such that the four operations of left and right multiplication by x and x² all commute with each other. Examples include associative algebras and Jordan algebras. Over fields of characteristic not 2, noncommutative Jordan algebras are the same as flexible Jordan-admissible algebras, where a Jordan-admissible algebra – introduced by A. Adrian Albert (1948) and named after Pascual Jordan – is a (possibly non-associative) algebra that becomes a Jordan algebra under the product a ∘ b = ab + ba. See also Malcev-admissible algebra Lie-admissible algebra References Non-associative algebra
Noncommutative Jordan algebra
[ "Mathematics" ]
154
[ "Non-associative algebra", "Mathematical structures", "Algebraic structures" ]
39,300,801
https://en.wikipedia.org/wiki/Sexual%20selection%20in%20mammals
Sexual selection in mammals is a process whose study began with Charles Darwin's observations concerning sexual selection, including sexual selection in humans and in other mammals; it comprises male–male competition and mate choice, which mold the development of future phenotypes in a population of a given species. Elephant seals A good example of intrasexual selection, in which males fight for dominance over a harem of females, is the elephant seal – large, oceangoing mammals of the genus Mirounga. There are two species: the northern (M. angustirostris) and southern elephant seal (M. leonina) – the largest carnivoran living today. Both species show extreme sexual dimorphism, possibly the largest of any mammal, with southern males typically five to six times heavier than the females; bulls also greatly exceed females in length. The record-sized bull, shot in Possession Bay, South Georgia, on February 28, 1913, remains the largest specimen on record. Males arrive in the colonies before the females and fight for control of harems. Large body size confers advantages in fighting. The agonistic behaviour of the bulls gives rise to a dominance hierarchy, with access to harems and breeding activity being determined by rank. The dominant bulls or "harem masters" establish harems of several dozen females. The least successful males have no harems, but may try to copulate with a harem male's females when the dominant male is not looking. A dominant male must stay in his territory to defend it, which can mean months without eating, living on his store of blubber. Some males have stayed ashore for more than three months without food. Two fighting males use their weight and canine teeth against each other. The outcome is rarely fatal, and the defeated bull will flee; however, bulls suffer severe tears and cuts. Males commonly vocalize with a coughing roar that serves in both individual recognition and size assessment. Conflicts between high-ranking males are more often resolved with posturing and vocalizing than with physical contact. In the case of intrasexual selection, adorned males may gain a reproductive advantage without the intervention of female preference. This advantage will be conferred by weapons used in the process of resolving disputes, such as those over territorial rights. The use of sexual ornamentation as a signaling device to create a dominance hierarchy among males, also known as a pecking order, allows struggle to proceed without excessive injury or fatality. It is predominantly when two opposing males are so closely matched, as would be found in males not having established themselves in a dominance hierarchy, that asymmetries cannot be found and the confrontation escalates to a point where the asymmetries must be proved by aggressive use of ornamentation. How often males will physically engage each other, and in what manner, can best be understood by applying game theory developed for biology, most notably by John Maynard Smith. An uncertain example: the giraffe The evolutionary origins of the giraffe's (Giraffa camelopardalis) long neck are controversial. The long-accepted "competing browsers hypothesis", originally put forth by Charles Darwin, has been put into question.
Originally, scientists believed that the elongation of the giraffe's neck was a result of natural selection acting on foraging behaviour: it was supposed that longer necks enabled favoured individuals to gather food inaccessible to other animals. But even though the giraffe's overall height is about 6 meters, it still typically feeds at about 2 meters above the ground. Moreover, the giraffe's kudu, impala, and steenbok competitors do not feed above 2 meters and prefer feeding at shoulder level, rather than at the maximum height they could reach. An alternative explanation for the origin of long necks in giraffes is sexual selection. Male giraffes often "neck" with other males to exhibit dominance. Among the criteria that need to be satisfied for the exaggerated neck to be classified as a result of sexual selection: the characteristic should be more exaggerated in one of the sexes; it must be used to indicate dominance; it should have no direct survival benefits; it should cost the organism in terms of survival or other factors (e.g., energetics); and positive allometry should be observed. Evolutionary history, however, shows that increased neck length is not correlated with increases in other parts of the body, which would be expected from foraging selection, so sexual selection may be a more satisfactory explanation. Studies have failed to resolve the causes involved: perhaps the neck is a result of both forces, or of others. Precopulatory mechanisms Precopulatory mechanisms determine who will father an offspring prior to copulation. Male–male competition is the biggest precopulatory mechanism in mammals, and sexual dimorphism is an easily seen result of it in many species. Male–male competition Male–male competition for copulation with the opposite sex is often seen in mammals. African elephants exhibit strong male–male competition. Elephants grow continuously throughout their lifetime. As males grow older, they also experience increasingly long periods of musth, a state of violent sexual excitement; most reproductive success accrues to males in musth, as it helps them win fights. A fight between a male in musth and one not in musth can result in the death of the latter. Species with intense male–male competition are known to exhibit the most size dimorphism. For example, female American black bears (Ursus americanus) are 20–40% smaller than males. Male mammals can compete for harems as well, with elephant seals competing fiercely for them. As mammals reach sexual maturity, secondary sexual characteristics arise. In elephant seals, adult males have a proboscis, which is used to project loud noises frequently heard during the mating season. Elephant seals with a bigger proboscis emit lower sounds than males with a smaller proboscis and tend to be the bigger males in a colony. Mate-guarding is an important factor in male–male competition to ensure fertilization of an offspring; when successful, it allows the male to watch over and court the female. It especially prevents sperm competition from occurring, helping to ensure reproductive success. This process can be triggered when the female sends a post-coital signal to the male to keep guard. Mating plugs are a form of mate-guarding that has been shown to involve precopulatory female choice. Copulatory plugs are commonly acellular and thought to be made of proteins from the seminal vesicles. DNA taken from copulatory plugs shows that females avoid mating with close relatives. Callings During the breeding season, mammals will call out to the opposite sex.
Bigger male koalas let out a different sound than smaller koalas. The bigger males, which are routinely sought out by females, are called sires. Females choose sires because of indirect benefits that their offspring could inherit, such as larger bodies. Non-sires and females do not vary in their body mass, and a female can reject a male by screaming at or hitting him. Male–male competition is rarely exhibited in koalas. Acoustic signaling is a type of call that can be used from a significant distance, encoding an organism's location, condition and identity. Sac-winged bats display acoustic signaling, which is often interpreted as songs. When females hear these songs, called 'whistles', they call back to the males to breed with a screech of their own. This action is termed 'calling of the sexes'. Red deer and spotted hyenas, along with other mammals, also perform acoustic signaling. Testosterone Testosterone is a driving factor in achieving fertilizing success. Bighorn sheep rams display large curved horns, in contrast to the small horns displayed on the females of the species. The bigger the horns, the more testosterone was found in the male. This is important because social rank has a positive correlation with the length of the horns, and higher social rank leads to tending a group of females for copulation. Testosterone also appears at higher levels in polygynous species than in monogamous species. Polyandry Polyandrous females have two or more mating partners while they are in heat. Females are more likely to find a new mate when their current male had a high number of paternities the year before or when their current male was old. This is presumed to benefit offspring by giving them more genetic diversity. Sex-role reversal Sex-role reversal is a change in the mating behavior of a particular species away from the ordinary pattern. Sex-role reversal is strongly associated with sexual dimorphism. Female–female competition is a common departure among animals with otherwise conventional sex roles. Females invest in choosing the best possible mate because they have a larger part in bringing up their offspring than males (gestation and lactation). Gestation and lactation are energy-consuming, which means competition among females for resources is high. Female–female competition is observed as a means of gaining access to better mates. Meerkat females acquire dominant status because resources for female reproduction are scarce. Dominant females in this species are heavier and win competitions over other females. Postcopulatory mechanisms Copulating with the opposite sex does not ensure fertilization of an egg. Postcopulatory mechanisms include sperm competition and cryptic female choice. Sperm competition Sperm competition involves male gametes trying to fertilize eggs first. As a result of sperm competition, some males in a given species can develop bigger testes and seminal vesicles. Larger midpiece areas in the sperm that contain mitochondria are also observed. Larger testes and bigger midpieces in sperm are seen in males that mate with multiple partners. A female that has been with multiple partners will most likely give birth to an offspring fathered by the male that produced the most, or faster, sperm. It was found that primates and rodents with longer flagella fathered more offspring. The length of the baculum is also influenced by sperm competition in some mammal species.
Cryptic female choice Cryptic female choice is a postcopulatory mechanism that cannot be observed directly because it takes place inside a female's body. It enables a female to have some control over who fathers her offspring even after insemination. In some species, females may choose to mate with more than one male to prevent infanticide or harassment. Infanticide can be prevented by confusing the males in a given colony: if the female mates with multiple males, then the males will not know for sure who fathered the offspring. Infanticide can also be prevented by choosing a male that will protect her and the offspring. Sexual harassment may be avoided if females give in to males and copulate on demand. References Sexual selection Mammalian sexuality
Sexual selection in mammals
[ "Biology" ]
2,214
[ "Evolutionary processes", "Behavior", "Sexual selection", "Mating" ]
39,301,133
https://en.wikipedia.org/wiki/Asbestos%20insulating%20board
Asbestos insulating board (AIB), also known by the trade names Asbestolux and Turnabestos, is an asbestos-containing board formerly used in construction for its fire resistance and insulating properties. These boards were commonly used in the United Kingdom from the 1950s until production ended in 1980. AIB is 16-35% asbestos, typically a blend of amosite and chrysotile, though crocidolite was also used in early boards. AIB is softer, more porous and less dense than asbestos cement. This, and the fact it typically contains a greater proportion of asbestos than the 10-15% of asbestos cement, makes AIB far more friable and thus at greater risk of releasing asbestos fibres if boards are damaged or removed. The inhalation of loose asbestos fibres is linked to various health conditions affecting the lungs, including asbestosis, lung cancer and malignant mesothelioma. References Building insulation materials Asbestos
Asbestos insulating board
[ "Physics", "Environmental_science" ]
198
[ "Toxicology", "Materials stubs", "Materials", "Asbestos", "Matter" ]
39,301,550
https://en.wikipedia.org/wiki/Kosmos%201484
Kosmos 1484 (meaning Cosmos 1484), also known as Resurs-OE No.3-2, was a Soviet prototype Earth imaging satellite, launched in 1983 as part of the Resurs programme. It was a prototype of the Meteor-derived Resurs-O1 spacecraft, which paved the way for the first Resurs-O1 to fly in October 1985. Kosmos 1484 was launched at 05:30:37 UTC on July 24, 1983. A Vostok-2M carrier rocket was used to place the satellite into low Earth orbit. The launch was conducted from Site 31/6 at the Baikonur Cosmodrome. Following the successful launch, the satellite was assigned its Kosmos designation, and was also given the International Designator 1983-075A and the Satellite Catalog Number 14207. Following the completion of its mission, Kosmos 1484 remained in orbit for several years as a derelict satellite (space debris). It suffered a fragmentation event - possibly due to a battery explosion - on October 18, 1993; however, the spacecraft remained relatively intact. Its orbit decayed and the main component reentered Earth's atmosphere on January 28, 2013. The American Meteor Society reported that its re-entry fireball was witnessed over the eastern United States, with sightings from New York state to Georgia. Most of the rest of Kosmos 1484 has also decayed, but as of 2023 at least one fragment (1983-075BG) remains in orbit. See also List of Kosmos satellites (1251–1500) References Spacecraft which reentered in 2013 Kosmos satellites Spacecraft launched in 1983 Spacecraft that broke apart in space
Kosmos 1484
[ "Technology" ]
357
[ "Space debris", "Spacecraft that broke apart in space" ]
39,302,352
https://en.wikipedia.org/wiki/Edwards%20equation
The Edwards equation in organic chemistry is a two-parameter equation for correlating nucleophilic reactivity, as defined by relative rate constants, with the basicity of the nucleophile (relative to protons) and its polarizability. This equation was first developed by John O. Edwards in 1954 and later revised based on additional work in 1956. The general idea is that most nucleophiles are also good bases, because the concentration of negatively charged electron density that defines a nucleophile will strongly attract positively charged protons, which is the definition of a base according to Brønsted–Lowry acid-base theory. Additionally, highly polarizable nucleophiles will have greater nucleophilic character than suggested by their basicity, because their electron density can be shifted with relative ease to concentrate in one area. History Prior to Edwards developing his equation, other scientists were also working to define nucleophilicity quantitatively. Brønsted and Pederson first discovered the relationship between basicity, with respect to protons, and nucleophilicity in 1924: \log k_b = \beta_N \log K_b + C, where k_b is the rate constant for nitramide decomposition by a base (B), K_b is the base's ionization constant, and \beta_N is a parameter of the equation. Swain and Scott later tried to define a more specific and quantitative relationship by correlating nucleophilic data with a single-parameter equation derived in 1953: \log(k/k_0) = s\,n. This equation relates the rate constant k of a reaction, normalized to that of a standard reaction with water as the nucleophile (k_0), to a nucleophilic constant n for a given nucleophile and a substrate constant s that depends on the sensitivity of a substrate to nucleophilic attack (defined as 1 for methyl bromide). This equation was modeled after the Hammett equation. However, both the Swain–Scott equation and the Brønsted relationship make the rather inaccurate assumption that all nucleophiles have the same reactivity with respect to a specific reaction site. There are several different categories of nucleophiles with different attacking atoms (e.g. oxygen, carbon, nitrogen), and each of these atoms has different nucleophilic characteristics. The Edwards equation attempts to account for this additional parameter by introducing a polarizability term. Edwards equations The first generation of the Edwards equation was \log(k/k_0) = \alpha E_n + \beta H, where k and k_0 are the rate constants for a nucleophile and a standard (H2O). H is a measure of the basicity of the nucleophile relative to protons, as defined by the equation: H = pK_a + 1.74, where the pK_a is that of the conjugate acid of the nucleophile and the constant 1.74 is the correction for the pK_a of H3O+. E_n is the term Edwards introduced to account for the polarizability of the nucleophile. It is related to the oxidation potential (E^0) of the reaction (oxidative dimerization of the nucleophile) by the equation: E_n = E^0 + 2.60, where 2.60 is the correction for the oxidative dimerization of water, obtained from a least-squares correlation of data in Edwards' first paper on the subject. \alpha and \beta are then substrate parameters that express the sensitivity of the substrate to the polarizability and basicity factors, respectively. However, because some \beta's appeared to be negative as defined by the first generation of the Edwards equation, which theoretically should not occur, Edwards adjusted his equation. The term E_n was determined to have some dependence on the basicity relative to protons (H), due to some factors that affect basicity also influencing the electrochemical properties of the nucleophile.
To account for this, E_n was redefined in terms of basicity (H) and polarizability, given as the molar refractivity R_N: E_n = aP + bH, where P = \log(R_N/R_{H2O}). The values of a and b, obtained by the method of least squares, are 3.60 and 0.0624 respectively. With this new definition of E_n, the Edwards equation can be rearranged: \log(k/k_0) = AP + BH, where A = \alpha a and B = \beta + \alpha b. However, because the second generation of the equation was also the final one, the equation is sometimes written as \log(k/k_0) = \alpha E_n + \beta H, especially since it was republished in that form in a later paper of Edwards', leading to confusion over which parameters are being defined. Significance A later paper by Edwards and Pearson, following research done by Jencks and Carriuolo in 1960, led to the discovery of an additional factor in nucleophilic reactivity, which Edwards and Pearson called the alpha effect, where nucleophiles with a lone pair of electrons on an atom adjacent to the nucleophilic center have enhanced reactivity. The alpha effect, basicity, and polarizability are still accepted as the main factors in determining nucleophilic reactivity. As such, the Edwards equation is applied in a qualitative sense much more frequently than in a quantitative one. In studying nucleophilic reactions, Edwards and Pearson noticed that for certain classes of nucleophiles most of the contribution of nucleophilic character originated from their basicity, resulting in large β values. For other nucleophiles, most of the nucleophilic character came from their high polarizability, with little contribution from basicity, resulting in large α values. This observation led Pearson to develop his hard–soft acid–base theory, which is arguably the most important contribution that the Edwards equation has made to the current understanding of organic and inorganic chemistry. Nucleophiles, or bases, that were polarizable, with large α values, were categorized as "soft", and nucleophiles that were non-polarizable, with large β and small α values, were categorized as "hard". The Edwards equation parameters have since been used to help categorize acids and bases as hard or soft, due to the approach's simplicity. See also Free-energy relationship Brønsted catalysis equation Bell–Evans–Polanyi principle References Physical organic chemistry Equations
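As a numerical illustration of the first-generation equation, the sketch below evaluates log(k/k0) for a hypothetical nucleophile and substrate; the values of E0, pKa, α and β are invented for the example and are not measured data.

```python
def edwards_log_rate(E0, pKa, alpha, beta):
    """First-generation Edwards equation:
    log(k/k0) = alpha*En + beta*H, with En = E0 + 2.60 and H = pKa + 1.74."""
    En = E0 + 2.60
    H = pKa + 1.74
    return alpha * En + beta * H

# Hypothetical inputs: E0 = -0.5 V, pKa(conjugate acid) = 9.0,
# substrate sensitivities alpha = 2.0 and beta = 0.05 (all illustrative).
print(round(edwards_log_rate(-0.5, 9.0, alpha=2.0, beta=0.05), 2))
# -> 4.74, i.e. roughly 10**4.7 times faster than the water reference
```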
Edwards equation
[ "Chemistry", "Mathematics" ]
1,278
[ "Equations", "Mathematical objects", "Physical organic chemistry" ]
48,309,405
https://en.wikipedia.org/wiki/TNP-ATP
TNP-ATP is a fluorescent molecule that is able to determine whether a protein binds to ATP, and the constants associated with that binding. It is primarily used in fluorescence spectroscopy, but is also very useful as an acceptor molecule in FRET, and as a fluorescent probe in fluorescence microscopy and X-ray crystallography. Constituent parts TNP refers to the chemical compound 2,4,6-trinitrophenol, also known as picric acid. It is a primary constituent of many unexploded landmines, and is a cousin to TNT, but less stable. It is recognized as an environmental contaminant and is toxic to many organisms. It is still commonly used in the manufacturing of fireworks, explosives, and rocket fuels, as well as in the leather, pharmaceutical, and dye industries. ATP is an essential mediator of life. It is used to overcome unfavorable energy barriers to initiate and fuel chemical reactions. It is also used to drive biological machinery and to regulate a number of processes via protein phosphorylation. However, the proteins that bind ATP for both regulation and enzymatic reactions are very diverse (many are yet undiscovered), and for many proteins the relationship to ATP in terms of number of binding sites, binding constants, and dissociation constants remains unclear. TNP-ATP Conjugating TNP to ATP renders this nucleotide triphosphate fluorescent and colored whilst allowing it to retain its biological activity. TNP-ATP is thus a fluorescent analog of ATP. This conjugation is very useful in providing information about interactions between ATP and an ATP-binding protein, because TNP-ATP interacts with proteins and enzymes as a substitute for its parent nucleotide, and has a strong binding affinity for most systems that require ATP. TNP-ATP is excited at wavelengths of 408 and 470 nm, and fluoresces in the 530–560 nm range. This is a very useful excitation range because it is far from where proteins or nucleotides absorb. When TNP-ATP is in water or other aqueous solutions, this emission is very weak. However, once TNP-ATP binds to a protein, there is a dramatic increase in fluorescence intensity. This property enables researchers to study various proteins' binding interactions with ATP: the enhanced fluorescence shows whether a protein binds ATP. When TNP-ATP in water is excited at 410 nm, it shows a single fluorescence maximum at 561 nm. This maximum shifts as the fluid's viscosity changes. For example, in N,N-dimethylformamide, instead of having its maximum at 561 nm as in water, the maximum is instead at 533 nm. Binding to a protein will also change the wavelength of maximal emission, as well as the fluorescence intensity. For example, binding to the chemotaxis protein CheA produces a severalfold enhancement of fluorescence intensity and a blue shift in the wavelength of maximal emission. Using this TNP nucleotide analog has in many instances been shown to be superior to traditional radionucleotide-labelling techniques. The health concerns and the cost associated with the use of radioactive isotopes make TNP-ATP an attractive alternative. The first fluorescent ribose-modified ATP was 2′,3′-O-(2,4,6-trinitrocyclohexadienylidene)adenosine 5′-triphosphate (TNP-ATP), introduced in 1973 by Hiratsuka and Uchida. TNP-ATP was originally synthesized to investigate the ATP binding site of myosin ATPase. Reports of TNP-ATP's success in the investigation of this motor protein extended its use to other proteins and enzymes.
TNP-ATP has now been used as a spectroscopic probe for numerous proteins suspected to have ATP interactions. These include several protein kinases, ATPases, myosin, and other nucleotide-binding proteins. Over the past twenty years, there have been hundreds of papers describing TNP-ATP's use and applications. Many applications involving this fluorescently labeled nucleotide have helped to clarify structure-function relationships of many ATP-requiring proteins and enzymes. There has also been a growing number of papers that use TNP-ATP as a means of assessing the ATP-binding capacity of various mutant proteins. Preparation Preparing TNP-ATP is a one-step synthesis that is relatively safe and easy. The ribose moiety of ATP can be trinitrophenylated by 2,4,6-trinitrobenzene-1-sulfonate (TNBS). The resulting compound assumes a bright orange color and has visible absorption characteristics, as is characteristic of a Meisenheimer spiro-complex linkage. For the exact method of preparation, refer to T. Hiratsuka's and K. Uchida's paper "Preparation and Properties of 2′(or 3′)-O-(2,4,6-trinitrophenyl) Adenosine 5′-triphosphate, an Analog of Adenosine Triphosphate", found in the reference section. To revert TNP-ATP back to its constituent parts, in other words to hydrolyze TNP-ATP to give equimolar amounts of picric acid (TNP) and ATP, TNP-ATP should be treated with 1 M HCl at 100 °C for 1.5 hours. If TNP-ATP is instead acidified under mild conditions, the dioxolane ring attached to the 2′-oxygen opens, leaving a 3′-O-TNP derivative as the only product. Storage TNP-ATP should be stored at −20 °C, in the dark, and used under minimal lighting conditions. When in solution, TNP-ATP has a shelf life of about 30 days. pKa and isosbestic point When absorbance was measured against wavelength at various pH values, the changes at 408 nm and 470 nm yielded a sigmoidal curve with a midpoint at 5.1. This indicated that the absorbance at these two wavelengths depends upon the ionization of the chromophoric portion of TNP-ATP and is unaffected by ionization of ATP. Although this ionization constant of 5.1 is not in the physiological range, it has been shown that the absorbance of TNP-ATP is sensitive enough to detect changes due to slight shifts in neutral pH. Spectroscopic superposition indicated TNP-ATP's isosbestic point to be 339 nm. Constants and calculations At low concentrations of TNP-ATP (≤1 μM), fluorescence intensity is proportional to the concentration of TNP-ATP added. However, at concentrations exceeding 1 μM, inner-filter effects make this relationship nonlinear. To correct for this, researchers must determine the ratio of the predicted theoretical fluorescence intensity (assuming linearity) to the observed fluorescence intensity and then apply this correction factor. In most cases, however, researchers will try to keep the concentration of TNP-ATP below 1 μM. To determine binding affinities, TNP-ATP is added to a solution and then titrated with protein. This produces a saturation curve from which the binding affinity can be determined. The number of binding sites may also be determined from this saturation curve by looking for sudden changes in slope. One can also titrate a fixed amount of protein with increasing additions of TNP-ATP to obtain a saturation curve; this, however, is complicated by the inner-filter effects that will need to be corrected for.
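As an illustration of the titration just described, the sketch below fits a single-site saturation curve with SciPy. The data points, parameter names, and starting guesses are hypothetical; a real analysis would keep TNP-ATP at or below 1 μM (or apply the inner-filter correction) as noted above.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(P, Kd, rfu_free, rfu_bound):
    """Observed fluorescence as protein [P] titrates a fixed,
    sub-micromolar amount of TNP-ATP (single-site binding)."""
    theta = P / (P + Kd)                       # fraction of TNP-ATP bound
    return rfu_free + (rfu_bound - rfu_free) * theta

# Hypothetical titration data: protein concentration (uM) vs fluorescence (RFU)
P = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
rfu = np.array([12.0, 21.0, 48.0, 85.0, 118.0, 132.0])

(Kd, rfu_free, rfu_bound), _ = curve_fit(langmuir, P, rfu, p0=[1.0, 5.0, 140.0])
print(f"Kd = {Kd:.2f} uM")   # reported TNP-ATP values typically fall near 0.3-50 uM
```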
To determine dissociation constants, TNP-ATP can be competed off of a protein with ATP. The value of the dissociation constant Kd for single-site binding can then be obtained by applying the Langmuir equation for a curve fit: RFUobs = RFUfree + (RFUbound − RFUfree)·[protein]/([protein] + Kd), where RFU is relative fluorescence units, RFUobs is the fluorescence observed, RFUfree is the fluorescence of free TNP-ATP, and RFUbound is the fluorescence of TNP-ATP when completely bound to a protein. To measure an ATP competitor, one can add competitor to pre-incubated samples of protein:TNP-ATP. The fraction of TNP-ATP bound to the protein can be calculated via θ = RFUobs/RFUmax, where θ is that fraction, and RFUmax is the value of fluorescence intensity at saturation, meaning when 100% of TNP-ATP is bound. The dissociation constants for TNP-ATP and the competitor can then be calculated through the corresponding competitive-binding relation. For reasons not yet fully understood, TNP-ATP typically binds the ATP binding sites of proteins and enzymes roughly one to three times as tightly as regular ATP. The dissociation constants are usually around 0.3–50 μM. Other uses In addition to using TNP-ATP to determine whether a protein binds ATP, its binding affinity and dissociation constants, and its number of binding sites, TNP-ATP can also be used in ligand binding studies. To do this, titrations of the protein are added to TNP-ATP. Then, ligand is added to displace the bound analog, which is measured as a decrease in fluorescence. One can also titrate protein with TNP-ATP in the presence and absence of varying concentrations of the ligand of interest. Either experiment allows the binding affinity of the ligand to the protein to be measured. TNP-ATP is also a valuable fluorescence acceptor. This is because, as with any good acceptor, TNP-ATP absorbs over a wide wavelength range that corresponds to the range of emission of common FRET donors. Thus, TNP-ATP can be used to look at the conformational changes that proteins undergo. For example, in Na+/K+-ATPase, the distance between the active site and Cys457 was shown to change from 25 angstroms to 28 angstroms on going from the Na+ conformation to the K+ conformation. In addition to fluorescence spectroscopy, TNP-ATP is very useful in fluorescence microscopy. This is because it greatly increases the sensitivity of the observations when bound to proteins: the enhanced fluorescence greatly reduces the problem of background fluorescence. This is especially true under epifluorescent illumination (illumination and detection are on the same side of the specimen). TNP-ATP has also been used in X-ray crystallography because it can be used to determine binding constants of crystallized substrates. This technique also reveals the structure of proteins in the presence or absence of TNP-ATP, which may or may not correspond to the structure of proteins when they bind ATP. References Biophysics Biochemistry Cell biology Physiology Nucleotides Purines Spectroscopy Fluorescence
TNP-ATP
[ "Physics", "Chemistry", "Biology" ]
2,286
[ "Luminescence", "Fluorescence", "Applied and interdisciplinary physics", "Spectrum (physical sciences)", "Molecular physics", "Physiology", "Cell biology", "Instrumental analysis", "Biophysics", "nan", "Biochemistry", "Spectroscopy" ]
48,313,506
https://en.wikipedia.org/wiki/Flow%20cups
Flow cups are designed to accurately measure the viscosity of paints, inks, varnishes and similar products. The process of flow through an orifice can often be used as a relative measurement and classification of viscosity. The measured kinematic viscosity is generally expressed in seconds of flow time, which can be converted into centistokes (cSt) using conversion tables or a viscosity calculator. Flow cups are manufactured from high-grade aluminium alloy with stainless steel orifices (where indicated). Flow cups are available with a range of UKAS / ISO 17025 certified standard oils to confirm that the flow cup is measuring within specification. See also Flow measurement Viscometer Zahn cup Ford cup References Fluid dynamics Viscosity meters
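Where a conversion formula (rather than a lookup table) is available for a given cup, it is typically a simple function of efflux time. The sketch below shows the general shape of such a conversion; A and B are placeholders, not coefficients for any real cup, and real values must be taken from the governing standard (e.g. ISO 2431) or the cup's calibration certificate.

```python
def efflux_time_to_cst(t, A, B):
    """Kinematic viscosity (cSt) from efflux time t (seconds).

    Cup standards commonly publish a relation of the form
    nu = A*t - B/t, valid only within the cup's stated flow-time
    range. A and B here are hypothetical placeholders.
    """
    return A * t - B / t

# Illustrative only, with made-up coefficients:
print(efflux_time_to_cst(t=60.0, A=4.0, B=400.0))
```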
Flow cups
[ "Chemistry", "Technology", "Engineering" ]
152
[ "Viscosity meters", "Chemical engineering", "Measuring instruments", "Piping", "Fluid dynamics" ]
26,578,849
https://en.wikipedia.org/wiki/MultiDark
MultiDark (MULTImessenger Approach for DARK Matter Detection) is a Spanish project whose stated goal is to contribute to the identification and detection of dark matter. History The project groups together many researchers in the Spanish community with a special interest in dark matter. It began on 17 December 2009 and was funded for five years. The project is supported by Consolider-Ingenio, a programme of the Ministry of Economy and Finance. Goals To analyse in detail the most plausible candidates for dark matter. To investigate how they form the dark halos that are believed to surround galaxies. To contribute to the development of experiments to detect dark matter. References Further reading Invited talk at the 36th COSPAR Scientific Assembly, Beijing, China, 16–23 July 2006 External links Multimessenger Approach for Dark Matter Detection. Spanish Project of the Consolider-Ingenio 2010 Programme Experiments for dark matter search
MultiDark
[ "Physics" ]
189
[ "Dark matter", "Experiments for dark matter search", "Unsolved problems in physics" ]
26,580,361
https://en.wikipedia.org/wiki/Digital%20materialization
Digital materialization (DM) can loosely be defined as two-way direct communication or conversion between matter and information that enables people to exactly describe, monitor, manipulate and create any arbitrary real object. DM is a general paradigm alongside a specified framework that is suitable for computer processing and includes: holistic, coherent, volumetric modeling systems; symbolic languages able to handle infinite degrees of freedom and detail in a compact format; and the direct interaction with and/or fabrication of any object at any spatial resolution without the need for "lossy" or intermediate formats. DM systems possess the following attributes: realistic - correct spatial mapping of matter to information exact - exact language and/or methods for input from and output to matter infinite - ability to operate at any scale and define infinite detail symbolic - accessible to individuals for design, creation and modification Such an approach can be applied not only to tangible objects but also to the conversion of things such as light and sound to/from information and matter. Systems to digitally materialize light and sound already largely exist (e.g. photo editing, audio mixing) and have been quite effective, but the representation, control and creation of tangible matter is poorly supported by computational and digital systems. Commonplace computer-aided design and manufacturing systems currently represent real objects as "2.5-dimensional" shells. In contrast, DM proposes a deeper understanding and more sophisticated manipulation of matter by directly using rigorous mathematics as complete volumetric descriptions of real objects. By utilizing technologies such as Function representation (FRep), it becomes possible to compactly describe and understand the surface and internal structures or properties of an object at infinite resolution. Models can thus accurately represent matter across all scales, capturing the complexity and quality of natural and real objects, which makes them ideally suited for digital fabrication and other kinds of real-world interactions. DM goes beyond the previous limitations of static, dissociated languages and simple human-made objects, proposing systems that are heterogeneous and interact directly and more naturally with the complex world. Digital and computer-based languages and processes, unlike their analogue counterparts, can computationally and spatially describe and control matter in an exact, constructive and accessible manner. However, this requires approaches that can handle the complexity of natural objects and materials. See also Function representation Constructive Solid Geometry Isosurface Solid modeling 3D printing Additive manufacturing Rapid prototyping Molecular assembler RepRap References External links Digital Materialization Group Computer Aided Design's eXtended Dimensions The Self Fab House People who have Digital Materialization as a research interest Engineering concepts Digital manufacturing Computer-aided design 3D computer graphics Synthetic environment
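As a concrete illustration of the FRep idea mentioned above, the toy sketch below defines solids as real-valued functions (f ≥ 0 inside the solid) and combines them with min/max, the simplest of the R-function set operations. This is a minimal model written for this article, not the API of any particular FRep system.

```python
import numpy as np

# FRep convention used here: f(p) >= 0 inside the solid, f(p) < 0 outside.
def sphere(center, r):
    return lambda p: r**2 - np.sum((p - center)**2)

def box(lo, hi):
    # Intersection of six half-spaces, expressed with a single min().
    return lambda p: np.min(np.concatenate([p - lo, hi - p]))

# Simplest R-function-style set operations:
def union(f, g):        return lambda p: max(f(p), g(p))
def intersection(f, g): return lambda p: min(f(p), g(p))
def subtract(f, g):     return lambda p: min(f(p), -g(p))

# A unit sphere with a box-shaped notch cut out; the defining function is
# exact and evaluable at any point, hence at any spatial resolution:
shape = subtract(sphere(np.array([0., 0., 0.]), 1.0),
                 box(np.array([0., -2., -2.]), np.array([2., 2., 2.])))
print(shape(np.array([-0.5, 0., 0.])) >= 0)  # True: inside the remaining half
```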
Digital materialization
[ "Technology", "Engineering" ]
533
[ "Computer-aided design", "Design engineering", "nan", "Industrial computing", "Digital manufacturing" ]
28,391,726
https://en.wikipedia.org/wiki/Einstein%20group
Albert Einstein, in searching for the transformation group for his unified field theory, wrote: "Every attempt to establish a unified field theory must start, in my opinion, from a group of transformations which is no less general than that of the continuous transformations of the four coordinates. For we should hardly be successful in looking for the subsequent enlargement of the group for a theory based on a narrower group." The Poincaré group The Poincaré group, the transformation group of special relativity, is orthogonal: the inverse of a transformation equals its transpose, which introduces discrete reflections. This, in turn, violates Einstein's dictum for a group "no less general than that of the continuous transformations of the four coordinates". Specifically, any pair of Euler angles θk and −θk are not independent, nor are any pair of boosts vk/c and −vk/c. The available parameters are thus reduced from the 16 components ∂xμ/∂xν needed to express all transformations in a curved spacetime, per the general principle of relativity, to the 10 of the Poincaré group. The Einstein group Mendel Sachs, in the 1960s, found the transformation group that Einstein had sought, the "Einstein" group. The Einstein group can be obtained by factorizing the squared spacetime invariant interval ds2 = gμν dxμ dxν into a quaternion-valued form and its conjugate, ds ds*, where ds = qμ(x) dxμ and qμ(x) is a four-vector of Hermitian quaternions. Note that the Einstein group approaches—but never reaches—the Poincaré group as the flat spacetime (special relativity) limit is approached. References Lie groups Theory of relativity Symmetry
Einstein group
[ "Physics", "Mathematics" ]
367
[ "Lie groups", "Mathematical structures", "Algebraic structures", "Geometry", "Theory of relativity", "Symmetry" ]
37,850,414
https://en.wikipedia.org/wiki/Thermoresponsive%20polymers%20in%20chromatography
Thermoresponsive polymers can be used as a stationary phase in liquid chromatography. Here, the polarity of the stationary phase can be varied by temperature changes, altering the separating power without changing the column or solvent composition. Thermally related benefits of gas chromatography can now be applied to classes of compounds that are restricted to liquid chromatography due to their thermolability. In place of solvent gradient elution, thermoresponsive polymers allow the use of temperature gradients under purely aqueous isocratic conditions. The versatility of the system is controlled not only through changing temperature, but through the addition of modifying moieties that allow for a choice of enhanced hydrophobic interaction, or by introducing the prospect of electrostatic interaction. These developments have already introduced major improvements to the fields of hydrophobic interaction chromatography, size exclusion chromatography, ion exchange chromatography, and affinity chromatography separations as well as pseudo-solid phase extractions ("pseudo" because of phase transitions). Hydrophobic interaction chromatography Gel permeation chromatography The research that appears to have sparked a wave of modified applications was a gel permeation chromatography technique of fixing poly(isopropyl acrylate) (PIPA) strands to glass beads and separating a mixture of dextrans, developed by Gewehr et al. They found that between the temperatures of 25–32 °C, the elution time of dextrans of different molecular weights exhibited a dependence on the temperature. Dextrans of the highest molecular weight eluted first, since the PIPA chains exhibit hydrophilicity at temperatures below the LCST. As the temperature of the elution increased, when the chains behave in a more hydrophobic manner, the elution times increased for each of the analytes over the given range. The trend generally applies over the entire temperature range, but there is a flattening of the curve before 25 °C and after 32 °C (the approximate LCST for this experiment). It is important to note that above the LCST, the PIPA acts as a typical nonpolar stationary phase of the kind used in reversed-phase chromatography. There are also instances of the elution times increasing below 15 °C, which most likely can be attributed to the effects of lower temperatures on mass transfer playing a more significant role in retention than the stationary phase behavior. This study showed that the resolution could essentially be tuned by adjusting the operating temperature. The scope of this study was limited to isothermal conditions and attaching polymer chains to glass beads. The results, however, were satisfying enough to inspire other investigations and modifications to create a more versatile stationary phase for the advancement of chromatography. Enhancing hydrophobic interaction Okano's group expanded on their success by using different modifiers to enhance hydrophobicity through the attachment of butyl methacrylate (BMA), a hydrophobic comonomer. For simplicity, the resultant polymer has been labeled IBc (isopropylacrylamide butyl methacrylate copolymer). The polymers were synthesized using radical telomerization with varying BMA content. Where pure PNIPAAm was unable to resolve hydrophobic steroids at any temperature, IBc-grafted silica stationary phases were able to resolve steroid peaks, with increasingly retarded retention times in correlation with both increased BMA content and increased temperature.
They went on to develop a method to separate phenylthiohydantoin (PTH)-amino acids using their IBc stationary phase, with a stronger emphasis on implementing environmentally friendly conditions using a purely aqueous phase in HPLC. Another group separated catechins using PNIPAAm. Modifying the LCST for improved experimental parameters Since the separation of biological molecules such as proteins is better served by isocratic elution with an aqueous solvent, the stationary phase is the natural place to tune the resolution of HPLC analyses of analytes that may be sensitive to organic solvents. Kanazawa et al. recognized the possibility of changing the LCST parameter through the addition of different moieties. Kanazawa's group investigated the reversible changes of PNIPAAm once modified with a carboxyl end group. It was suggested that the modification leads to faster changes in conformation due to the restrictions introduced by the carboxyl group. They attached the carboxyl-terminated PNIPAAm chains to (aminopropyl)silica and used it as packing material for HPLC analysis of steroids. The separation took place under isocratic conditions using pure water as the mobile phase, with the temperature controlled using a water bath. They were able to shift the LCST from 32 °C to 20 °C by making the solution 1 M in NaCl. Of the five steroids and benzene, only testosterone could be resolved from the other peaks below the LCST (5 °C, LCST = 20 °C in 1 M NaCl). Above the LCST (25 °C, LCST = 20 °C in 1 M NaCl), all of the peaks are well resolved, and retention time increases with temperature up to 50 °C. Size exclusion chromatography Prior to these studies, HPLC analyses were tuned by modifying the mobile and stationary phases only. Gradient elution for HPLC merely meant changing the ratio of solvents to improve column efficiency, and this requires the use of sophisticated solvent pumping mechanisms along with extra steps and precautions in the chromatographic analysis. Encouraged by the prospect of using temperature gradient elutions for HPLC analyses, Hosoya et al. sought to make surface modification of HPLC stationary phases more accessible. Their study utilizes graft-type copolymerization of PNIPAAm onto macroporous polymeric materials. The in-situ preparation compared the use of cyclohexanol and toluene as porogens in the preparation of the modified polystyrene seeds. Reversed-phase size-exclusion chromatography (SEC) revealed the pore size and pore size distribution of the particles and their dependence on temperature. Cyclohexanol acted as a successful porogen, showing a dependence of pore size on temperature. The use of toluene as a porogen gave results that were similar to unmodified macroporous particles. This indicates that PNIPAAm can be successfully grafted onto the surface and within the pores of macroporous materials. The application of this preparatory technique gives rise to tunable pore sizes. Temperature gradient elutions can be used to improve column efficiency through the changing of pore size in SEC. The mechanism of the change in pore size is simple: the pores are smaller below the LCST due to the elongated chains of PNIPAAm within the pores; as temperature increases to and above the LCST, the chains retract into a globular formation, increasing the pore size. Ion-exchange chromatography Modification has also been extended past hydrophobic and hydrophilic attachments; charged compounds have also been introduced to TRPs. Kobayashi et al.
had previously performed successful modifications to separate bioactive ionic compounds, and built on that success to improve the separation efficiency of bioactive compounds. Common methods of separating angiotensin peptides had involved reversed-phase high-performance liquid chromatography (RP-HPLC) and cation-exchange chromatography. RP-HPLC requires the use of organic solvents, which is disfavored, and current trends are moving away from it. Hydrophobic interaction chromatography requires high-concentration salt elutions and eluent cleaning to remove the salt. To address the shortcomings of the previous methods, Kobayashi's group grafted acrylic acid (anionic acrylate under neutral conditions) and tert-butylacrylamide (hydrophobic) monomers onto PNIPAAm, resulting in PNIPAAm-co-AAc-co-tBAAm (IAtB), grafted onto silica beads as a stationary phase medium. The reason for incorporating both ionic and hydrophobic compounds is multifaceted. The ionic compound improves interactivity with ionic species, but raises the LCST significantly. The hydrophobic addition counteracts the raised LCST and lowers it to a more standard value, but also interacts with the hydrophobic surfaces of biological compounds. This resulted in successful and resolved elution of angiotensin peptides. Additionally, they were able to tune the retention factor for the analytes through isocratic temperature gradient elution. Ideal elutions occurred at 35 °C, but decreasing the temperature to 10 °C or raising it to 50 °C caused faster elutions either way. This is a strong indication that electrostatic and hydrophobic interactions can be similarly affected by changes in temperature. The major advantages of this study include stationary phase versatility and maintaining the bioactivity of the analytes. Ayano et al. modified PNIPAAm with cationic N,N-dimethylaminopropylacrylamide (DMAPAAm) and hydrophobic BMA and grafted it onto silica beads to form IDB. They used pH changes to adjust the LCST. The effect of pH on the LCST is as follows: from a plateau value between pH 4.5 and pH 6.0, the LCST decreased up to pH 9 and below pH 4.5. This can be interpreted as requiring slightly basic or moderately acidic conditions, as the pH 4.5–6.0 region holds a maximum value of the LCST, an unfavorable condition. They used these properties to separate several non-steroidal anti-inflammatory drugs (NSAIDs). The analysis of the acidic drugs (salicylic acid (SA), BA, MS, and As) was performed below pH 4.5. MS is hydrophobic, and only its retention time was affected by an increase in temperature on the column without a terminally modified anion-exchanger (IB column). However, with an anion-exchanger present, dissociated acidic drugs were retained longer at temperatures below the LCST, and shorter at temperatures above the LCST. When the IDB column was compared to recently established PNIPAAm columns, electrostatic forces showed remarkably higher retention of charged compounds than the hydrophilic predecessor. A single stationary phase can accomplish pharmaceutical separations based on hydrophobic interactions, hydrophilic interactions, and electrostatic interactions merely by adjusting the temperature (while adjusting pH to tweak the LCST). Affinity chromatography Selective enzyme and antibody separation can be achieved with the use of specific end groups that conjugate with the specific compounds.
This results in the formation of a polymer-enzyme conjugate which can be reversibly precipitated and dissolved by changing the temperature. Chen and Hoffman used an N-hydroxysuccinimide (NHS) ester functional end group on NIPAAm to conjugate selectively with β-D-glucosidase. They found that the conjugated enzyme could be repeatedly precipitated and dissolved in solution and still maintain sufficient enzymatic activity. In a study published in 1998, Hoshino et al. prepared a TRP with a maltose ligand, evaluated it with concanavalin A (Con A), and attempted to separate and purify α-glucosidase, a thermolabile compound. Since the goal is to selectively isolate a thermolabile enzyme, a TRP with a small LCST value is desired. To fit this condition, the selected TRP was poly(N-acryloylpiperidine)-cysteamine (pAP), which has an LCST of 4 °C. The terminally bound maltose moiety maintains affinity for both analytes; thus the modified TRP, pAPM, met the critical conditions of external temperature requirements and affinity for both target analytes. The solubility properties changed from soluble at 4 °C to insoluble at 8 °C. Several reagents with higher binding affinities to Con A than maltose were tested for the recovery of Con A by desorption. These reagents were α-D-glucopyranoside, D-mannose, methyl α-D-mannopyranoside, and glucose. α-D-mannopyranoside was the most effective for desorbing Con A from pAPM, at virtually 100% after 1 hour. As a control, pAPM was used to bind Con A from a crude extract, which picked up several impurities but still managed to recover 80% of Con A. This exemplifies the need for selective moieties, maltose not being among them. Finally, the application of pAPM was tested by attempting to separate α-glucosidase from yeast extract under low-temperature conditions. In conclusion, the pAPM was found to recover 68% of the α-glucosidase activity, with maltose as the selected desorption reagent. Another interesting development for AC involved antibody separation using another TRP-ligand combination. Anastase-Ravion et al. attached a dextran derivative to the classic PNIPAAm to produce poly(NIPAAm)-DD, and used this stationary phase to separate polyclonal antibodies from subcutaneous rabbit serum. In this study, the dextran derivative of choice was carboxymethyl dextran benzylamide sulfonate/sulfate, and when bound to the TRP it was labeled poly(NIPAAm)-CMDBS. The LCST for the poly(NIPAAm)-CMDBS was raised from 32 °C to 33 °C. To test the success of the affinity binding, the antibodies were eluted with glycine buffer (adjusted to pH 2.6 with HCl). Promising results were obtained in 2003 in a study that merged the newer developments in affinity chromatography with microfluidic devices. Upon the development of microfluidic technology, coupling it with affinity chromatography meant modifying channel surfaces, packing coated beads, or packing with coated porous material, none of which allows for replenishing the column. This produces limitations that prevent the packing material from being changed or the column from being regenerated. The approach they took to address those challenges was to incorporate TRP particles as a reversibly immobilized stationary phase. What separates this development from other AC methods is that the beads on which the modified TRP is attached can reversibly adhere to the inner surfaces of the microfluidic channels.
The formulation of the smart bead matrix is a little complex, but in general PNIPAAm is modified twice, first with NHS, then with polyethylene glycol-biotin (PEG-b), resulting in PEG-b/pNIPAAm beads. The inner surface of the microfluidic channels is composed of polyethylene terephthalate, to which the PEG-b/pNIPAAm beads reversibly bind above the LCST. When the sample solution is passed through the channels, the target analyte binds to the biotin ligand. The temperature can then be brought below the LCST to dissociate the beads and remove them from the inner channels. This allows for a system adept at being reloaded with stationary phase under mild conditions. They successfully separated and eluted streptavidin. Further application of these procedures allows for portable AC columns which can be packed on site and used for local or clinical analytical separations of complex biological fluids. References Chromatography
Thermoresponsive polymers in chromatography
[ "Chemistry" ]
3,291
[ "Chromatography", "Separation processes" ]
37,850,900
https://en.wikipedia.org/wiki/C11H15NO2S
The molecular formula C11H15NO2S (molar mass: 225.31 g/mol, exact mass: 225.0823 u) may refer to: Ethiofencarb Methiocarb Molecular formulas
C11H15NO2S
[ "Physics", "Chemistry" ]
65
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
37,853,202
https://en.wikipedia.org/wiki/Strong%20subadditivity%20of%20quantum%20entropy
In quantum information theory, strong subadditivity of quantum entropy (SSA) is the relation among the von Neumann entropies of various quantum subsystems of a larger quantum system consisting of three subsystems (or of one quantum system with three degrees of freedom). It is a basic theorem in modern quantum information theory. It was conjectured by D. W. Robinson and D. Ruelle in 1966 and O. E. Lanford III and D. W. Robinson in 1968, and proved in 1973 by E. H. Lieb and M. B. Ruskai, building on results obtained by Lieb in his proof of the Wigner-Yanase-Dyson conjecture. The classical version of SSA was long known and appreciated in classical probability theory and information theory. The proof of this relation in the classical case is quite easy, but the quantum case is difficult because of the non-commutativity of the reduced density matrices describing the quantum subsystems. Some useful references here include: "Quantum Computation and Quantum Information", "Quantum Entropy and Its Use", and Trace Inequalities and Quantum Entropy: An Introductory Course. Definitions We use the following notation throughout: A Hilbert space is denoted by ℋ, and ℬ(ℋ) denotes the bounded linear operators on ℋ. Tensor products are denoted by superscripts, e.g., ℋ^12 = ℋ^1 ⊗ ℋ^2. The trace is denoted by Tr. Density matrix A density matrix is a Hermitian, positive semi-definite matrix of trace one. It allows for the description of a quantum system in a mixed state. Density matrices on a tensor product are denoted by superscripts, e.g., ρ^12 is a density matrix on ℋ^12. Entropy The von Neumann quantum entropy of a density matrix ρ is S(ρ) = −Tr(ρ log ρ). Relative entropy Umegaki's quantum relative entropy of two density matrices ρ and σ is S(ρ||σ) = Tr[ρ(log ρ − log σ)]. Joint concavity A function f of two variables is said to be jointly concave if for any 0 ≤ λ ≤ 1 the following holds: f(λA1 + (1−λ)A2, λB1 + (1−λ)B2) ≥ λ f(A1, B1) + (1−λ) f(A2, B2). Subadditivity of entropy Ordinary subadditivity concerns only two spaces and a density matrix ρ^12. It states that S(ρ^12) ≤ S(ρ^1) + S(ρ^2). This inequality is true, of course, in classical probability theory, but the latter also contains the theorem that the conditional entropies S(ρ^12) − S(ρ^1) and S(ρ^12) − S(ρ^2) are both non-negative. In the quantum case, however, both can be negative, e.g. S(ρ^12) can be zero while S(ρ^1) = S(ρ^2) > 0. Nevertheless, the subadditivity upper bound on S(ρ^12) continues to hold. The closest thing one has to S(ρ^12) − S(ρ^1) ≥ 0 is the Araki–Lieb triangle inequality S(ρ^12) ≥ |S(ρ^1) − S(ρ^2)|, which is derived from subadditivity by a mathematical technique known as purification. Strong subadditivity (SSA) Suppose that the Hilbert space of the system is a tensor product of three spaces: ℋ = ℋ^1 ⊗ ℋ^2 ⊗ ℋ^3. Physically, these three spaces can be interpreted as the space of three different systems, or else as three parts or three degrees of freedom of one physical system. Given a density matrix ρ^123 on ℋ, we define a density matrix ρ^12 on ℋ^1 ⊗ ℋ^2 as a partial trace: ρ^12 = Tr_{ℋ^3}(ρ^123). Similarly, we can define the density matrices ρ^23, ρ^13, ρ^1, ρ^2, ρ^3. Statement For any tri-partite state ρ^123 the following holds: S(ρ^123) + S(ρ^2) ≤ S(ρ^12) + S(ρ^23), where S(ρ^12) = −Tr(ρ^12 log ρ^12), for example. Equivalently, the statement can be recast in terms of conditional entropies to show that for a tripartite state ρ^123, S(3|1,2) ≤ S(3|2), where S(3|2) = S(ρ^23) − S(ρ^2). This can also be restated in terms of quantum mutual information: I(1;2,3) ≥ I(1;2). These statements run parallel to classical intuition, except that quantum conditional entropies can be negative, and quantum mutual informations can exceed the classical bound of the marginal entropy. The strong subadditivity inequality was improved in the following way by Carlen and Lieb: S(ρ^12) + S(ρ^23) − S(ρ^123) − S(ρ^2) ≥ 2 max{0, S(ρ^1) − S(ρ^12), S(ρ^3) − S(ρ^23)}, with the optimal constant 2. J. Kiefer proved a peripherally related convexity result in 1959, which is a corollary of an operator Schwarz inequality proved by E. H. Lieb and M. B. Ruskai.
However, these results are comparatively simple, and the proofs do not use the results of Lieb's 1973 paper on convex and concave trace functionals. It was this paper that provided the mathematical basis of the proof of SSA by Lieb and Ruskai. The extension from a Hilbert space setting to a von Neumann algebra setting, where states are not given by density matrices, was done by Narnhofer and Thirring. The theorem can also be obtained by proving numerous equivalent statements, some of which are summarized below. Wigner–Yanase–Dyson conjecture E. P. Wigner and M. M. Yanase proposed a different definition of entropy, which was generalized by Freeman Dyson. The Wigner–Yanase–Dyson p-skew information The Wigner–Yanase–Dyson p-skew information of a density matrix ρ with respect to an operator K is I_p(ρ, K) = (1/2) Tr([ρ^p, K][ρ^{1−p}, K*]), where [A, B] = AB − BA is a commutator, K* is the adjoint of K, and 0 ≤ p ≤ 1 is fixed. Concavity of p-skew information It was conjectured by E. P. Wigner and M. M. Yanase that p-skew information is concave as a function of a density matrix ρ for a fixed p. Since the term −(1/2) Tr(ρ(KK* + K*K)) is concave (it is linear), the conjecture reduces to the problem of concavity of Tr(ρ^p K ρ^{1−p} K*). This conjecture (for all 0 ≤ p ≤ 1) implies SSA, and was proved for p = 1/2 by Wigner and Yanase, and for all 0 ≤ p ≤ 1 by Lieb, in the following more general form: The function (A, B) ↦ Tr(K* A^p K B^{1−p}) of two matrix variables is jointly concave in A and B when A, B ≥ 0 and 0 ≤ p ≤ 1. This theorem is an essential part of the proof of SSA in Lieb and Ruskai's paper. In their paper E. P. Wigner and M. M. Yanase also conjectured the subadditivity of p-skew information for p = 1/2, which was disproved by Hansen by giving a counterexample. First two statements equivalent to SSA It was pointed out that the first statement below is equivalent to SSA, and A. Uhlmann showed the equivalence between the second statement below and SSA. S(ρ^13) + S(ρ^23) ≥ S(ρ^1) + S(ρ^2). Note that the conditional entropies S(1|3) and S(2|3) do not have to be both non-negative. The map ρ^12 ↦ S(ρ^1) − S(ρ^12) is convex. Both of these statements were also proved directly. Joint convexity of relative entropy As noted by Lindblad and Uhlmann, if, in the concavity theorem above, one takes K = 1, A = ρ and B = σ, and differentiates in p at p = 0, one obtains the joint convexity of relative entropy: i.e., if ρ = Σ_k λ_k ρ_k and σ = Σ_k λ_k σ_k, then S(ρ||σ) ≤ Σ_k λ_k S(ρ_k||σ_k), where λ_k ≥ 0 and Σ_k λ_k = 1. Monotonicity of quantum relative entropy The relative entropy decreases monotonically under completely positive trace preserving (CPTP) operations N on density matrices: S(N(ρ)||N(σ)) ≤ S(ρ||σ). This inequality is called Monotonicity of quantum relative entropy. Owing to the Stinespring factorization theorem, this inequality is a consequence of a particular choice of the CPTP map - a partial trace map described below. The most important and basic class of CPTP maps is a partial trace operation T: ℬ(ℋ^12) → ℬ(ℋ^1), given by T(ρ^12) = Tr_{ℋ^2}(ρ^12). Then S(ρ^1||σ^1) ≤ S(ρ^12||σ^12), which is called Monotonicity of quantum relative entropy under partial trace. To see how this follows from the joint convexity of relative entropy, observe that the partial trace can be written in Uhlmann's representation as ρ^1 ⊗ (1/d2)𝟙 = (1/N) Σ_{j=1}^{N} (1 ⊗ U_j) ρ^12 (1 ⊗ U_j*) for some finite N and some collection of unitary matrices U_j on ℋ^2 (alternatively, integrate over the Haar measure). Since the trace (and hence the relative entropy) is unitarily invariant, the monotonicity under partial trace now follows from joint convexity. This theorem is due to Lindblad and Uhlmann, whose proof is the one given here. SSA is obtained from monotonicity under partial trace with ρ replaced by ρ^123 and σ replaced by ρ^1 ⊗ ρ^23. Take the partial trace over ℋ^3. Then monotonicity becomes S(ρ^12||ρ^1 ⊗ ρ^2) ≤ S(ρ^123||ρ^1 ⊗ ρ^23). Therefore, S(ρ^1) + S(ρ^2) − S(ρ^12) ≤ S(ρ^1) + S(ρ^23) − S(ρ^123), which is SSA. Thus, the monotonicity of quantum relative entropy (which follows from joint convexity) implies SSA. Relationship among inequalities All of the above important inequalities are equivalent to each other, and can also be proved directly.
The following are equivalent: Monotonicity of quantum relative entropy (MONO); Monotonicity of quantum relative entropy under partial trace (MPT); Strong subadditivity (SSA); Joint convexity of quantum relative entropy (JC). The following implications show the equivalence between these inequalities. MONO ⟹ MPT: follows since MPT is a particular case of MONO; MPT ⟹ MONO: was shown by Lindblad, using a representation of stochastic maps as a partial trace over an auxiliary system; MPT ⟹ SSA: follows by taking a particular choice of tri-partite states in MPT, described in the section above, "Monotonicity of quantum relative entropy"; SSA ⟹ MPT: by choosing ρ^123 to be block diagonal, one can show that SSA implies that the map ρ^12 ↦ S(ρ^1) − S(ρ^12) is convex. It was then observed that this convexity yields MPT; MPT ⟹ JC: as mentioned above, by choosing ρ^12 (and similarly, σ^12) to be a block diagonal matrix with blocks λ_k ρ_k (and λ_k σ_k), the partial trace is a sum over blocks so that ρ^1 = Σ_k λ_k ρ_k, so from MPT one can obtain JC; JC ⟹ SSA: using the 'purification process', Araki and Lieb observed that one could obtain new useful inequalities from the known ones. By purifying ρ^123 to a pure state ρ^1234 it can be shown that SSA is equivalent to S(ρ^4) + S(ρ^2) ≤ S(ρ^12) + S(ρ^14). Moreover, since ρ^1234 is pure, S(ρ^123) = S(ρ^4) and S(ρ^23) = S(ρ^14), so the two inequalities transform into each other. Since the extreme points of the convex set of density matrices are pure states, SSA follows from JC; see the references for a discussion. The case of equality Equality in monotonicity of quantum relative entropy inequality D. Petz showed that the only case of equality in the monotonicity relation is to have a proper "recovery" channel: For all states ρ and σ on a Hilbert space ℋ and all quantum operations N, S(N(ρ)||N(σ)) = S(ρ||σ) if and only if there exists a quantum operation R such that R(N(ρ)) = ρ and R(N(σ)) = σ. Moreover, R can be given explicitly by the formula R(ω) = σ^{1/2} N*(N(σ)^{−1/2} ω N(σ)^{−1/2}) σ^{1/2}, where N* is the adjoint map of N. D. Petz also gave another condition for when equality holds in Monotonicity of quantum relative entropy: the first statement below. Differentiating it at t = 0 we have the second condition. Moreover, M. B. Ruskai gave another proof of the second statement. For all states ρ and σ on ℋ and all quantum operations N, S(N(ρ)||N(σ)) = S(ρ||σ) if and only if the following equivalent conditions are satisfied: N(σ)^{it} N(ρ) N(σ)^{−it} = N(σ^{it} ρ σ^{−it}) for all real t; log ρ − log σ = N*(log N(ρ) − log N(σ)), where N* is the adjoint map of N. Equality in strong subadditivity inequality P. Hayden, R. Jozsa, D. Petz and A. Winter described the states for which the equality holds in SSA. A state ρ^123 on a Hilbert space ℋ^1 ⊗ ℋ^2 ⊗ ℋ^3 satisfies strong subadditivity with equality if and only if there is a decomposition of the second system as ℋ^2 = ⊕_j ℋ^{2L_j} ⊗ ℋ^{2R_j} into a direct sum of tensor products, such that ρ^123 = ⊕_j q_j ρ_j^{12L} ⊗ ρ_j^{2R3}, with states ρ_j^{12L} on ℋ^1 ⊗ ℋ^{2L_j} and ρ_j^{2R3} on ℋ^{2R_j} ⊗ ℋ^3, and a probability distribution {q_j}. Carlen-Lieb Extension E. H. Lieb and E. A. Carlen have found an explicit error term in the SSA inequality, namely S(ρ^12) + S(ρ^23) − S(ρ^123) − S(ρ^2) ≥ 2 max{0, S(ρ^1) − S(ρ^12), S(ρ^3) − S(ρ^23)}. If S(ρ^1) ≤ S(ρ^12) and S(ρ^3) ≤ S(ρ^23), as is always the case for the classical Shannon entropy, this inequality has nothing to say. For the quantum entropy, on the other hand, it is quite possible that the conditional entropies satisfy S(ρ^12) − S(ρ^1) < 0 or S(ρ^23) − S(ρ^3) < 0 (but never both!). Then, in this "highly quantum" regime, this inequality provides additional information. The constant 2 is optimal, in the sense that for any constant larger than 2, one can find a state for which the inequality is violated with that constant. Operator extension of strong subadditivity In his paper I. Kim studied an operator extension of strong subadditivity, proving the following inequality: For a tri-partite state (density matrix) ρ^123 on ℋ^1 ⊗ ℋ^2 ⊗ ℋ^3, Tr_{12}[ρ^123 (log ρ^12 + log ρ^23 − log ρ^2 − log ρ^123)] ≥ 0, as an operator on ℋ^3. The proof of this inequality is based on Effros's theorem, for which particular functions and operators are chosen to derive the inequality above. M. B.
Ruskai describes this work in detail and discusses how to prove a large class of new matrix inequalities in the tri-partite and bi-partite cases by taking a partial trace over all but one of the spaces. Extensions of strong subadditivity in terms of recoverability A significant strengthening of strong subadditivity was proved in 2014, and was subsequently improved. In 2017, it was shown that the recovery channel can be taken to be the original Petz recovery map. These improvements of strong subadditivity have physical interpretations in terms of recoverability, meaning that if the conditional mutual information I(A;B|E) of a tripartite quantum state ρ^ABE is nearly equal to zero, then it is possible to perform a recovery channel R (from system E to AE) such that R(ρ^BE) ≈ ρ^ABE. These results thus generalize the exact equality conditions mentioned above. See also Von Neumann entropy Conditional quantum entropy Quantum mutual information Kullback–Leibler divergence References Quantum mechanical entropy Quantum mechanics
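Because SSA is ultimately a statement about eigenvalues of reduced density matrices, it is easy to spot-check numerically. The minimal sketch below (plain NumPy; all helper names are ad hoc, not from any library) draws a random three-qubit state and verifies S(ρ^123) + S(ρ^2) ≤ S(ρ^12) + S(ρ^23). A check like this cannot prove the theorem, but it is a quick way to test the partial-trace and entropy conventions used above.

```python
import numpy as np

def random_density_matrix(d, rng):
    """A random full-rank density matrix of dimension d."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = g @ g.conj().T                 # Hermitian, positive semi-definite
    return m / np.trace(m).real        # normalize to unit trace

def entropy(rho):
    """Von Neumann entropy S(rho) = -Tr(rho log rho), natural log."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                   # 0 log 0 = 0 by convention
    return float(-(w * np.log(w)).sum())

def partial_trace(rho, dims, keep):
    """Reduced state on the subsystems listed in `keep` (0-indexed, ascending)."""
    n = len(dims)
    rho = rho.reshape(dims + dims)     # axes: (i1..in, j1..jn)
    removed = 0
    for sys in sorted(set(range(n)) - set(keep), reverse=True):
        rho = np.trace(rho, axis1=sys, axis2=sys + n - removed)
        removed += 1
    d = int(np.prod([dims[k] for k in keep]))
    return rho.reshape(d, d)

rng = np.random.default_rng(0)
dims = [2, 2, 2]                       # three qubits: systems 1, 2, 3
rho123 = random_density_matrix(8, rng)

S2   = entropy(partial_trace(rho123, dims, [1]))
S12  = entropy(partial_trace(rho123, dims, [0, 1]))
S23  = entropy(partial_trace(rho123, dims, [1, 2]))
S123 = entropy(rho123)

# SSA: S(rho^123) + S(rho^2) <= S(rho^12) + S(rho^23)
print(S123 + S2 <= S12 + S23 + 1e-10)  # True for every state
```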
Strong subadditivity of quantum entropy
[ "Physics" ]
2,526
[ "Quantum mechanical entropy", "Entropy", "Physical quantities" ]
37,855,063
https://en.wikipedia.org/wiki/Drug%20nomenclature
Drug nomenclature is the systematic naming of drugs, especially pharmaceutical drugs. In the majority of circumstances, drugs have three types of names: chemical names, the most important of which is the IUPAC name; generic or nonproprietary names, the most important of which are international nonproprietary names (INNs); and trade names, which are brand names. Under the INN system, generic names for drugs are constructed out of affixes and stems that classify the drugs into useful categories while keeping related names distinguishable. A marketed drug might also have a company code or compound code. Legal regulation Drug names are often subject to legal regulation, including approval for new drugs (to avoid confusion with existing drugs) and on packaging to establish clear rules about adulterants and fraudulent or misleading labeling. A national formulary is often designated to define drug names (and purity standards) for regulatory purposes. The legally approved names in various countries include: Australian Approved Name Brazilian Nonproprietary Name British Approved Name Dénomination Commune Française (France) Denominazione Comune Italiana (Italy, generic name) Japanese Accepted Name United States Adopted Name The World Health Organization administers the international nonproprietary name list. A company or person developing a drug can apply for a generic (nonproprietary) name through their national formulary or directly to the WHO INN Programme. In order to minimize confusion, many of the national naming bodies have policies of maintaining harmony between national nonproprietary names and INNs. The European Union has mandated this harmonization for all member states. In the United States, the developer applies to the United States Adopted Names (USAN) Council, and a USAN negotiator applies for the INN on the developer's behalf. Chemical names The chemical names are the scientific names, based on the molecular structure of the drug. There are various systems of chemical nomenclature and thus various chemical names for any one substance. The most important is the IUPAC name. Chemical names are typically very long and too complex to be commonly used in referring to a drug in speech or in prose documents. For example, "1-(isopropylamino)-3-(1-naphthyloxy)propan-2-ol" is a chemical name for propranolol. Sometimes, a company that is developing a drug might give the drug a company code, which is used to identify the drug while it is in development. For example, CDP870 was UCB's company code for certolizumab pegol; UCB later chose "Cimzia" as its trade name. Many of these codes, although not all, have prefixes that correspond to the company name. Nonproprietary (generic) names Generic names are used for a variety of reasons. They provide a clear and unique identifier for active chemical substances, appearing on all drug labels, advertising, and other information about the substance. Relatedly, they help maintain clear differentiation between proprietary and nonproprietary aspects of reality, which people trying to sell proprietary things have an incentive to obfuscate; they help people compare apples to apples. They are used in scientific descriptions of the chemical, in discussions of the chemical in the scientific literature and in descriptions of clinical trials. Generic names usually indicate via their stems what drug class the drug belongs to. For example, one can tell that aciclovir is an antiviral drug because its name ends in the -vir suffix.
History The earliest roots of standardization of generic names for drugs began with city pharmacopoeias, such as the London, Edinburgh, Dublin, Hamburg, and Berlin Pharmacopoeias. The fundamental advances in chemistry during the 19th century made that era the first time in which what we now call chemical nomenclature, a huge profusion of names based on atoms, functional groups, and molecules, was necessary or conceivable. In the second half of the 19th century and the early 20th, city pharmacopoeias were unified into national pharmacopoeias (such as the British Pharmacopoeia, United States Pharmacopeia, Pharmacopoeia Germanica (PhG or PG), Italian Pharmacopeia, and Japanese Pharmacopoeia) and national formularies (such as the British National Formulary, the Australian Pharmaceutical Formulary, and the National Formulary of India). International pharmacopeias, such as the European Pharmacopoeia and the International Pharmacopoeia of the World Health Organization (WHO), have been the next level. In 1953 the WHO created the International Nonproprietary Name (INN) system, which issues INNs in various languages, including Latin, English, French, Spanish, Russian, Chinese, and Arabic. Several countries also have national-level systems for creating generic drug names, including the British Approved Name (BAN) system, the Australian Approved Name (AAN) system, the United States Adopted Name (USAN) system (which is mostly the same as the United States Pharmacopeia (USP) system), and the Japanese Accepted Name (JAN) system. At least several of these national-level Approved Name/Adopted Name/Accepted Name systems were not created until the 1960s, after the INN system already existed. In the 21st century, increasing globalization is encouraging maximal rationalization for new generic names for drugs, and there is an increasing expectation that new USANs, BANs, and JANs will not differ from new INNs without special justification. During the first half of the 20th century, generic names for drugs were often coined by contracting the chemical names into fewer syllables. Such contraction was partially, informally, locally standardized, but it was not universally consistent. In the second half of the 20th century, the nomenclatural systems moved away from such contraction toward the present system of stems and affixes that show chemical relationships. Biopharmaceuticals have posed a challenge in nonproprietary naming because unlike smaller molecules made with total synthesis or semisynthesis, there is less assurance of complete fungibility between products from different manufacturers. Just as wine may vary by strain of yeast and year of grape harvest, so each product can be subtly different because living organisms are an integral part of production. The WHO MedNet community continually works to augment its system for biopharmaceuticals to ensure continued fulfillment of the goals served by having nonproprietary names. In recent years the development of the Biological Qualifier system has been an example. The prefixes and interfixes have no pharmacological significance and are used to separate the drug from others in the same class. Suffixes or stems may be found in the middle or more often the end of the drug name, and normally suggest the action of the drug. Generic names often have suffixes that define what class the drug is. List of stems and affixes More comprehensive lists can be found in Appendix VII of the USP Dictionary or in the WHO INN stembook. {| class="wikitable sortable" border="1" |- ! 
scope="col" | Stem ! scope="col" | Drug class ! scope="col" class="unsortable" | Example |- | -vir || Antiviral drug || aciclovir, oseltamivir |- | -cillin || Penicillin-derived antibiotics || penicillin, carbenicillin, oxacillin |- | cef- || Cephem-type antibiotics || cefazolin |- | -mab || Monoclonal antibodies || trastuzumab, ipilimumab |- | -ximab || Chimeric antibody, in which the design of the therapeutic antibody incorporates parts of multiple different antibodies, for example, in the case of infliximab, variable (binding) regions from a mouse anti-TNF antibody and constant regions from human antibodies (to reduce the likelihood of the patient developing their own antibodies against the therapeutic antibody) || infliximab |- | -zumab || humanized antibody || natalizumab, bevacizumab |- | -anib || Angiogenesis inhibitors || pazopanib, vandetanib |- | -ciclib || Cyclin-dependent kinase 4/CDK6 inhibitors || palbociclib, ribociclib |- | -degib || hedgehog signaling pathway inhibitors || vismodegib, sonidegib |- | -denib || IDH1 and IDH2 inhibitors || enasidenib, ivosidenib |- | -lisib || Phosphatidylinositol 3-kinase inhibitors || alpelisib, buparlisib |- | -parib || PARP inhibitor || olaparib, veliparib |- | -rafenib || BRAF inhibitors || sorafenib, vemurafenib |- | -tinib || Tyrosine-kinase inhibitors || erlotinib, crizotinib |- | -zomib || proteasome inhibitors || bortezomib, carfilzomib |- | -vastatin || HMG-CoA reductase inhibitor || atorvastatin |- | -prazole || Proton-pump inhibitor || omeprazole |- | -lukast || Leukotriene receptor antagonists || zafirlukast, montelukast |- | -grel- || Platelet aggregation inhibitor || clopidogrel, ticagrelor |- | -axine || Dopamine and serotonin–norepinephrine reuptake inhibitor || venlafaxine |- | -olol || Beta-blockers || metoprolol, atenolol |- | -oxetine || Antidepressant related to fluoxetine || duloxetine, reboxetine |- | -sartan || Angiotensin receptor antagonists || losartan, valsartan |- | -pril || Angiotensin converting enzyme inhibitor || captopril, lisinopril |- | -oxacin || Quinolone-derived antibiotics || levofloxacin, moxifloxacin |- | -barb- || Barbiturates || phenobarbital, secobarbital |- | -xaban || Direct Xa inhibitor || apixaban, rivaroxaban |- | -afil || Inhibitor of PDE5 with vasodilator action || sildenafil, tadalafil |- | -prost- || Prostaglandin analogue || latanoprost, unoprostone |- | -ine || Alkaloids and organic bases|| atropine, quinine |- | -tide || Peptides and glycopeptides || nesiritide, octreotide |- | -vec || Gene therapy vectors || Alipogene tiparvovec |- | -ast || Anti-asthmatic || zafirlukast, seratrodast |- | -caine || local anesthetic ||benzocaine |- | -dipine || Calcium channel blocker derived from dihydropyridine || amlodipine, nifedipine, felodipine |- | -tidine || H2 receptor antagonist || cimetidine, ranitidine, famotidine |- | -setron || 5-HT3 antagonist || ondansetron, granisetron, palonosetron |- |[[Mycinamicin III 3-O-methyltransferase|-mycin]] |Antibiotic produced by Streptomyces strains |vancomycin, streptomycin, Neomycin |} Example breakdown of a drug name If the name of the drug solanezumab were to be broken down, it would be divided into two parts like this: solane-zumab. -Zumab is the suffix for humanized monoclonal antibody. Monoclonal antibodies by definition contain only a single antibody clone and have binding specificity for one particular epitope. 
In the case of solanezumab, the antibody is designed to bind to the amyloid-β peptides that make up protein plaques on the neurons of people with Alzheimer's disease. See also Time release technology > List of abbreviations for formulation suffixes. Combination drug products For combination drug products—those with two or more drugs combined into a single dosage form—single nonproprietary names beginning with "co-" exist in both British Approved Name (BAN) form and in a formerly maintained USP name called the pharmacy equivalent name (PEN). Otherwise the two names are simply both given, joined by hyphens or slashes. For example, suspensions combining trimethoprim and sulfamethoxazole are called either trimethoprim/sulfamethoxazole or co-trimoxazole. Similarly, co-codamol is codeine-paracetamol (acetaminophen), and co-triamterzide is triamterene-hydrochlorothiazide. The USP ceased maintaining PENs, but the similar "co"-prefixed BANs are still current. Pronunciation Most commonly, a nonproprietary drug name has one widely agreed pronunciation in each language. For example, doxorubicin is pronounced consistently in English. Trade names almost always have one accepted pronunciation, because the sponsoring company that coined the name has an intended pronunciation for it. However, it is also common for a nonproprietary drug name to have two pronunciation variants, or sometimes three. For example, paracetamol has two common pronunciations, and one medical dictionary records a third. Some of the variation comes from the fact that some stems and affixes have pronunciation variants. For example, the aforementioned third (and least common) pronunciation for paracetamol reflects an alternative treatment of the acet affix (more than one pronunciation of acetyl is accepted). The World Health Organization does not give suggested pronunciations for its INNs, but familiarity with the typical sounds and spellings of the stems and affixes often points to the widely accepted pronunciation of any given INN. For example, the pronunciation of abciximab is predictable, because the sound of the -ciximab ending is familiar from other INNs. The United States Pharmacopeia gives suggested pronunciations for most USANs in its USP Dictionary, which is published in annual editions. Medical dictionaries give pronunciations of many drugs that are both commonly used and have been commercially available for a decade or more, although many newer drugs or less common drugs are not entered. Pharmacists also have access to pronunciations from various clinical decision support systems such as Lexicomp. Drug brands For drugs that make it all the way through development, testing, and regulatory acceptance, the pharmaceutical company then gives the drug a trade name, which is a standard term in the pharmaceutical industry for a brand name or trademark name. For example, Lipitor is Pfizer's trade name for atorvastatin, a cholesterol-lowering medication. Many drugs have multiple trade names, reflecting marketing in different countries, manufacture by different companies, or both. Thus the trade names for atorvastatin include not only Lipitor (in the U.S.) but also Atocor (in India). Publication policies for nonproprietary and proprietary names In the scientific literature, there is a set of strong conventions for drug nomenclature regarding the letter case and placement of nonproprietary and proprietary names, as follows: Nonproprietary names begin in lowercase; trade names begin with a capital.
Unbiased mentions of a drug place the nonproprietary name first and follow it with the trade name in parentheses, if relevant (for example, "doxorubicin (Adriamycin)"). This pattern is important for the scientific literature, where conflict of interest is disclosed or avoided. The authors reporting on a study are not endorsing any particular brand of drug. They will often state which brand was used, for methodologic validity (fully disclosing all details that might possibly affect reproducibility), but they do so in a way that makes clear the absence of endorsement. For example, the 2015 American Society of Hematology (ASH) publication policies say, "Non-proprietary (generic/scientific) names should be used and should be lowercase." ... "[T]he first letter of the name of a proprietary drug should be capitalized." ... "If necessary, you may include a proprietary name in parentheses directly following the generic name after its first mention." Valid exceptions to the general pattern occur when a nonproprietary name starts a sentence (and thus takes a capital), when a proprietary name has intercapping (for example, GoLYTELY, MiraLAX), or when tall-man letters are used within nonproprietary names to prevent confusion of similar names (for example, predniSONE versus predniSOLONE). See also Drug class Drug development Generic brand Pharmaceutical code Regulation of therapeutic goods List of pharmaceutical compound number prefixes References Pharmaceutical industry Pharmaceuticals policy Pharmacological classification systems
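The letter-case conventions above can be encoded directly. A minimal sketch (the function and the tall-man list are hypothetical helpers for illustration; only the predniSONE/predniSOLONE spellings come from the text above):

TALL_MAN = {"prednisone": "predniSONE", "prednisolone": "predniSOLONE"}

def format_mention(nonproprietary, trade=None):
    """Render a drug mention as 'generic (Trade)' per the journal conventions."""
    generic = TALL_MAN.get(nonproprietary.lower(), nonproprietary.lower())
    if trade:
        # Capitalize only the first letter; intercapped trade names such as
        # GoLYTELY already carry their own casing and pass through unchanged.
        trade = trade[0].upper() + trade[1:]
        return f"{generic} ({trade})"
    return generic

print(format_mention("Doxorubicin", "Adriamycin"))  # doxorubicin (Adriamycin)
print(format_mention("Prednisone"))                 # predniSONE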
Drug nomenclature
[ "Chemistry", "Biology" ]
3,740
[ "Pharmacological classification systems", "Pharmacology", "Pharmaceutical industry", "Life sciences industry" ]
37,856,584
https://en.wikipedia.org/wiki/Spray%20pond
A spray pond is a reservoir in which warmed water (e.g. from a power plant) is cooled before reuse by spraying the warm water with nozzles into the cooler air. Cooling takes place by exchange of heat with the ambient air, involving both conductive heat transfer between the water droplets and the surrounding air and evaporative cooling (which provides by far the greatest portion, typically 85 to 90%, of the total cooling). The primary purpose of spray pond design is thus to ensure an adequate degree of contact between the hot injection water and the ambient air, so as to facilitate the process of heat transfer. The spray pond is the predecessor to the natural draft cooling tower, which is much more efficient and takes up less space but has a much higher construction cost. A spray pond requires between 25 and 50 times the area of a cooling tower. However, some spray ponds are still in use today. Spray nozzles The height of each spray nozzle above the surface of the pond should be between 1.5 m and 2.0 m. The spray nozzles themselves should be chosen so as to provide the desired spray pattern diameter at the pond surface, while yielding a maximum spray height of 2.5 m or more above the nozzle. This will provide an adequate contact time between the air and water and should be achievable with a delivery pressure of between 50 and 75 kPa across the nozzles. The performance of a spray pond depends to a large degree on the effectiveness of the spray nozzles which are installed. Ideally, the chosen nozzles should provide a fine, evenly distributed spray in conical form, be capable of passing small particles of suspended matter without blocking, and be readily dismantled for cleaning. Typical droplet sizes achieved by spray pond nozzles vary between 3 mm and 6 mm. While smaller droplets would provide better cooling performance because of their increased surface-to-volume ratios, generating them would require an excessive pressure drop across the nozzles and could lead to increased wind-drift losses from the pond. Pond size Specific spray pond surface areas tend to range between 1.2 and 1.7 m² per m³/h of water to be cooled. The width chosen for a drift channel around the active zone of the pond (containing the sprays) is dependent on a number of factors, including the prevailing wind strength, the average size of the spray droplets produced by the nozzles, and the presence of any nearby structures which may be sensitive to fogging or water drift, such as roads, houses, etc. Drift channel widths between 3 and 4 m are typically recommended. In order to be most effective in terms of heat transfer, spray ponds should always be oriented with their longer sides at right angles to the direction of the prevailing wind. Additionally, spray ponds should be made as long and narrow as possible (i.e. with a width-to-length ratio as low as possible), so as to decrease the path length which the ambient air must travel across the pond. The depth of a spray pond has very little influence on its thermal performance. However, the pond should contain sufficient water to fill all flumes, seal wells and pump suctions during plant startup. Typically, spray pond depths of between 0.9 m and 1.5 m are recommended in the literature, with a depth of 0.9 m being most common. Additionally, sufficient additional volume above the normal operating level should be provided within the spray pond to accept all water drainage from these flumes, seal wells and pump suctions when the plant is stopped. 
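These rules of thumb translate into a quick first-pass sizing estimate. A minimal sketch using the figures quoted above; the flow rate and the 1:4 width-to-length ratio are illustrative assumptions, not recommendations from the cited sources:

import math

def pond_surface_area(flow_m3_per_h, specific_area=1.5):
    """Active spray area in m^2, using 1.2-1.7 m^2 per (m^3/h) of cooled water."""
    if not 1.2 <= specific_area <= 1.7:
        raise ValueError("specific area outside the quoted 1.2-1.7 range")
    return flow_m3_per_h * specific_area

flow = 2000.0                    # m^3/h of warm water to be cooled (example value)
area = pond_surface_area(flow)   # -> 3000 m^2 of active spray area

# A long, narrow layout (low width-to-length ratio) is preferred; assume 1:4.
width = math.sqrt(area / 4.0)
print(f"Active spray area: {area:.0f} m^2")
print(f"Suggested active zone: {width:.0f} m x {4 * width:.0f} m, plus a 3-4 m drift channel")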
Drift and evaporative losses from spray ponds of conventional design range between 3 and 5%. Thermal performance The thermal efficiency of a spray pond may be calculated based on its approach to the saturation (wet bulb) temperature of the air: (TH - TC) / (TH - TW), where the subscripts H and C refer to the temperatures of the hot and cold water streams, while the subscript W refers to the wet bulb temperature of the air. Typically, spray ponds achieve thermal efficiencies of between 50% and 70%. Further details of performance estimation may be found in the engineering literature. References Thermodynamics, an engineering approach, 7th edition, Yunus A. Cengel and Michael A. Boles Injection Water Cooling - Spray Ponds Heating, ventilation, and air conditioning Evaporators Cooling technology
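As a worked example of the efficiency formula above, with illustrative temperatures rather than data from a real installation:

def spray_pond_efficiency(t_hot, t_cold, t_wet_bulb):
    """(TH - TC) / (TH - TW): achieved cooling range over the maximum possible."""
    if t_hot <= t_wet_bulb:
        raise ValueError("hot water must be warmer than the air's wet bulb temperature")
    return (t_hot - t_cold) / (t_hot - t_wet_bulb)

eff = spray_pond_efficiency(t_hot=40.0, t_cold=29.0, t_wet_bulb=22.0)  # degrees C
print(f"Thermal efficiency: {eff:.0%}")  # 61%, inside the typical 50-70% band

The water cannot be cooled below the wet bulb temperature of the air, which is why the denominator represents the maximum possible cooling range.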
Spray pond
[ "Chemistry", "Engineering" ]
907
[ "Chemical equipment", "Distillation", "Evaporators" ]
37,857,996
https://en.wikipedia.org/wiki/V529%20Andromedae
V529 Andromedae, also known as HD 8801, is a variable star in the constellation of Andromeda. It has a 13th magnitude visual companion star 15" away, which is just a distant star on the same line of sight. It is also an Am star with a spectral classification Am(kA5/hF1/mF2), meaning that it has the calcium K line of a star with spectral type A5, the Balmer series of an F1 star, and metallic lines of an F2 star. Variability The variable brightness of V529 Andromedae was first detected in the Hipparcos satellite data. It was classified as an "unsolved variable" (meaning it could not be placed into any specific variable star category) in the Hipparcos catalog released in 1997. The star's variability was confirmed in a study published by Gregory W. Henry and Francis C. Fekel in 2005, and the star was given its variable star designation in 2011. V529 Andromedae was the first star known to combine Gamma Doradus and Delta Scuti type pulsations. Nine different pulsation frequencies have been observed, and three of them could arise from a previously unknown stellar pulsation mode. Companion V529 Andromedae has a 13th magnitude companion about 15" away. It is a far more distant star than V529 Andromedae, only coincidentally aligned in the sky. References Andromeda (constellation) A-type main-sequence stars Andromedae, V529 Durchmusterung objects Am stars Delta Scuti variables Gamma Doradus variables
V529 Andromedae
[ "Astronomy" ]
349
[ "Andromeda (constellation)", "Constellations" ]
37,858,390
https://en.wikipedia.org/wiki/Boundary%20conditions%20in%20computational%20fluid%20dynamics
Almost every computational fluid dynamics problem is defined under the limits of initial and boundary conditions. When constructing a staggered grid, it is common to implement boundary conditions by adding an extra node across the physical boundary. The nodes just outside the inlet of the system are used to assign the inlet conditions, and the physical boundaries can coincide with the scalar control volume boundaries. This makes it possible to introduce the boundary conditions and achieve discrete equations for nodes near the boundaries with small modifications. The most common boundary conditions used in computational fluid dynamics are: intake conditions, symmetry conditions, physical boundary conditions, cyclic conditions, pressure conditions, and exit conditions. Intake boundary conditions Consider the case of an inlet perpendicular to the x direction. For the first u, v, φ-cell, all links to neighboring nodes are active, so there is no need for any modification of the discretised equations. At one of the inlet nodes the absolute pressure is fixed, and the pressure correction is set to zero at that node. Generally, computational fluid dynamics codes estimate k and ε with approximate formulas based on a turbulence intensity between 1 and 6% and a length scale. Symmetry boundary condition At a symmetry boundary there is no flow across the boundary: the normal velocities are set to zero and the scalar flux across the boundary is zero. In such situations, the values of properties just adjacent to the solution domain are taken as the values at the nearest node just inside the domain. Physical boundary conditions Consider the situation of a solid wall parallel to the x-direction. Assumptions made and relations considered: the near-wall flow is considered laminar and the velocity varies linearly with distance from the wall; the no-slip condition applies, u = v = 0; "wall functions" are applied instead of resolving the near-wall region with mesh points. Turbulent flow: u+ = (1/κ) ln(E y+) in the log-law region of a turbulent boundary layer (in wall units, with von Kármán constant κ and wall roughness parameter E). Laminar flow: u+ = y+. Important points for applying wall functions: the velocity is constant along lines parallel to the wall and varies only in the direction normal to the wall; no pressure gradients in the flow direction; high Reynolds number; no chemical reactions at the wall. Cyclic boundary condition The flux of flow leaving the outlet cyclic boundary is set equal to the flux entering the inlet cyclic boundary. The values of each variable at the nodes upstream and downstream of the inlet plane are set equal to the values at the nodes upstream and downstream of the outlet plane. Pressure boundary condition These conditions are used when the exact details of the flow distribution are unknown but the boundary values of pressure are known, for example: external flows around objects, internal flows with multiple outlets, buoyancy-driven flows, free surface flows, etc. The pressure corrections are taken as zero at the nodes. Exit boundary conditions Consider the case of an outlet perpendicular to the x-direction. In fully developed flow no changes occur in the flow direction, so the gradients of all variables except pressure are zero in the flow direction. The equations are solved for cells up to NI-1; outside the domain, the values of flow variables are determined by extrapolation from the interior, assuming zero gradients at the outlet plane. The outlet plane velocities are then adjusted by a continuity correction so that the total mass flow leaving the domain matches the inflow. A minimal numerical sketch of this ghost-node treatment is given after the references below. References An introduction to computational fluid dynamics by Versteeg, PEARSON. Computational fluid dynamics
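The sketch below illustrates the extra-node (ghost cell) idea in one dimension; it is illustrative code only, not taken from Versteeg's text or any production solver. A fixed-value wall condition is imposed on the left boundary and a zero-gradient (outlet/symmetry-style) condition on the right, so the interior update stencil needs no special cases:

import numpy as np

n = 10                     # interior cells
phi = np.zeros(n + 2)      # indices 1..n are physical; 0 and n+1 are ghost nodes
phi[1:n + 1] = np.linspace(1.0, 0.0, n)   # arbitrary initial interior field

def apply_bcs(phi, wall_value=1.0):
    # Fixed-value (Dirichlet) wall on the left: the ghost node mirrors the
    # first interior node so the boundary-face average equals wall_value.
    phi[0] = 2.0 * wall_value - phi[1]
    # Zero-gradient on the right: copy the nearest interior node, i.e. the
    # "extrapolation from the interior assuming zero gradient" used at outlets.
    phi[-1] = phi[-2]
    return phi

def step(phi, alpha=0.4):
    # One explicit diffusion step; the same stencil applies to every interior
    # cell because the ghost nodes already encode the boundary conditions.
    phi = apply_bcs(phi.copy())
    phi[1:-1] += alpha * (phi[2:] - 2.0 * phi[1:-1] + phi[:-2])
    return phi

for _ in range(200):
    phi = step(phi)
print(phi[1:-1])   # interior field relaxes toward the wall value of 1.0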
Boundary conditions in computational fluid dynamics
[ "Physics", "Chemistry" ]
623
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
40,612,303
https://en.wikipedia.org/wiki/IBeacon
iBeacon is a protocol developed by Apple and introduced at the Apple Worldwide Developers Conference in 2013. Various vendors have since made iBeacon-compatible hardware transmitters – typically called beacons – a class of Bluetooth Low Energy (BLE) devices that broadcast their identifier to nearby portable electronic devices. The technology enables smartphones, tablets and other devices to perform actions when in proximity to an iBeacon. iBeacon is based on Bluetooth low energy proximity sensing by transmitting a universally unique identifier picked up by a compatible app or operating system. The identifier and several bytes sent with it can be used to determine the device's physical location, track customers, or trigger a location-based action on the device such as a check-in on social media or a push notification. iBeacon can also be used with an application as an indoor positioning system, which helps smartphones determine their approximate location or context. With the help of an iBeacon, a smartphone's software can approximately find its relative location to an iBeacon in a store. Brick and mortar retail stores use the beacons for mobile commerce, offering customers special deals through mobile marketing, and can enable mobile payments through point of sale systems. Another application is distributing messages at a specific Point of Interest, for example a store, a bus stop, a room or a more specific location like a piece of furniture or a vending machine. This is similar to previously used geopush technology based on GPS, but with a much reduced impact on battery life and better precision. iBeacon differs from some other location-based technologies as the broadcasting device (beacon) is only a 1-way transmitter to the receiving smartphone or receiving device, and necessitates a specific app installed on the device to interact with the beacons. This ensures that only the installed app (not the iBeacon transmitter) can track users as they walk around the transmitters. iBeacon compatible transmitters come in a variety of form factors, including small coin cell devices, USB sticks, and generic Bluetooth 4.0 capable USB dongles. Functions An iBeacon deployment consists of one or more iBeacon devices that transmit their own unique identification number to the local area. Software on a receiving device may then look up the iBeacon and perform various functions, such as notifying the user. Receiving devices can also connect to the iBeacons to retrieve values from iBeacon's GATT (generic attribute profile) service. iBeacons do not push notifications to receiving devices (other than their own identity). However, mobile software can use signals received from iBeacons to trigger their own push notifications. Region monitoring Region monitoring (limited to 20 regions on iOS) can function in the background (of the listening device) and has different delegates to notify the listening app (and user) of entry/exit in the region - even if the app is in the background or the phone is locked. Region monitoring also allows for a small window in which iOS gives a closed app an opportunity to react to the entry of a region. Ranging As opposed to monitoring, which enables users to detect movement in-and-out of range of the beacons, ranging provides a list of beacons detected in a given region, along with the estimated distance from the user's device to each beacon. 
Ranging works only in the foreground but will return (to the listening device) an array (unlimited) of all iBeacons found along with their properties (UUID, etc.). An iOS device receiving an iBeacon transmission can approximate the distance from the iBeacon. The distance (between transmitting iBeacon and receiving device) is categorized into 3 distinct ranges: Immediate: Within a few centimeters Near: Within a couple of meters Far: Greater than 10 meters away An iBeacon broadcast has the ability to approximate when a user has entered, exited, or lingered in a region. Depending on a customer's proximity to a beacon, they are able to receive different levels of interaction at each of these three ranges. The maximum range of an iBeacon transmission will depend on the location and placement, obstructions in the environment and where the device is being stored (e.g. in a leather handbag or with a thick case). Standard beacons have an approximate range of 70 meters. Long range beacons can reach up to 450 meters. Settings The frequency of the iBeacon transmission depends on the configuration of the iBeacon and can be altered using device specific methods. Both the rate and the transmit power have an effect on the iBeacon battery life. iBeacons come with predefined settings and several of them can be changed by the developer, including the rate, the transmit power, and the Major and Minor values. The Major and Minor values are settings which can be used to connect to specific iBeacons or to work with more than one iBeacon at the same time. Typically, multiple iBeacons deployed at a venue will have the same UUID, and use the major and minor pairs to segment and distinguish subspaces within the venue. For example, the Major values of all the iBeacons in a specific store can be set to the same value and the Minor value can be used to identify a specific iBeacon within the store. Power consumption The Bluetooth LE protocol is significantly more power efficient than Bluetooth Classic. Several chipset makers, including Texas Instruments and Nordic Semiconductor, now supply chipsets optimized for iBeacon use. Power consumption depends on the iBeacon configuration parameters of advertising interval and transmit power. A study on 16 different iBeacon vendors reports that battery life can range between 1–24 months. Apple's recommended setting of a 100 ms advertising interval with a coin cell battery provides for 1–3 months of life, which increases to 2–3 years as the advertising interval is increased to 900 ms. Battery consumption of the phones is a factor that must be taken into account when deploying beacon-enabled apps. A recent report has shown that older phones tend to draw more battery in the vicinity of iBeacons, while newer phones can be more efficient in the same environment. In addition to the time spent by the phone scanning, the number of scans and the number of beacons in the vicinity are also significant factors for battery drain, as pointed out by the Aislelabs report. In a follow-up report, Aislelabs found a drastic improvement in battery consumption for the iPhone 5s and iPhone 5c versus the older model iPhone 4s. At 10 surrounding iBeacons, an iPhone 4s can consume up to 11% of battery per hour whereas an iPhone 5s consumes a little less than 5% battery per hour. An energy efficient iBeacon application needs to consider these aspects in order to strike a good balance between app responsiveness and battery consumption. 
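Applications commonly turn the beacon's calibrated measured power (see the byte map later in this article) and the received RSSI into an explicit distance estimate with the log-distance path-loss model. This is a widespread approximation rather than Apple's published algorithm; the calibration constant, environment exponent and bucket thresholds below are illustrative assumptions:

def estimate_distance(rssi_dbm, measured_power_dbm=-59.0, n=2.0):
    """Approximate distance in meters; n is ~2 in free space, larger indoors."""
    return 10.0 ** ((measured_power_dbm - rssi_dbm) / (10.0 * n))

def proximity_bucket(distance_m):
    # Rough thresholds mirroring the immediate/near/far ranges described above.
    if distance_m < 0.5:
        return "immediate"
    if distance_m < 10.0:
        return "near"
    return "far"

for rssi in (-55.0, -70.0, -85.0):
    d = estimate_distance(rssi)
    print(f"RSSI {rssi:.0f} dBm -> ~{d:.1f} m ({proximity_bucket(d)})")

In practice a single RSSI reading is noisy, which is one reason Apple exposes only the coarse immediate/near/far buckets rather than a raw distance.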
History and developments In mid-2013 Apple introduced iBeacons and experts wrote about how it is designed to help the retail industry by simplifying payments and enabling on-site offers. On December 6, 2013, Apple activated iBeacons across its 254 US retail stores. McDonald's has used the devices to give special offers to consumers in its fast-food stores. As of May 2014, different hardware iBeacons could be purchased for as little as $5 per device to more than $30 per device. Each of these different iBeacons has varying default settings for its default transmit power and iBeacon advertisement frequency. Some hardware iBeacons advertise at frequencies as low as 1 Hz while others can be as high as 10 Hz. iBeacon technology is still in its infancy. One well-reported software quirk exists on 4.2 and 4.3 Android systems whereby the system's Bluetooth stack crashes when presented with many iBeacons. This was reportedly fixed in Android 4.4.4. Technical details Bluetooth low energy devices can operate in an advertisement mode to notify nearby devices of their presence. In the simplest form, an iBeacon is a Bluetooth low energy device emitting advertisements following a strict format, that being an Apple-defined iBeacon prefix, followed by a variable UUID, and a major, minor pair. An example iBeacon advertisement frame could look like: fb0b57a2-8228-44cd-913a-94a122ba1206 Major 1 Minor 2 where fb0b57a2-8228-44cd-913a-94a122ba1206 is the UUID. Since iBeacon advertising is just an application of the general Bluetooth Low Energy advertisement, the above iBeacon can be emitted by issuing the following commands on Linux to a supported Bluetooth 4 Low Energy device on a modern kernel: (Set LE Advertising Parameters) hcitool -i hci0 cmd 0x08 0x0006 a0 00 a0 00 03 00 00 00 00 00 00 00 00 07 00 (in this command, the first "a0 00" is the minimum advertisement interval and the second "a0 00" is the maximum advertisement interval, both 16-bit little-endian values in units of 0.625 ms) (Set LE Advertisement Data) hcitool -i hci0 cmd 0x08 0x0008 1E 02 01 06 1A FF 4C 00 02 15 FB 0B 57 A2 82 28 44 CD 91 3A 94 A1 22 BA 12 06 00 01 00 02 D1 00 (here "1E" is the number of total ADV bytes, which cannot be more than 1F since 31 bytes is the maximum BLE advertisement length, and "02 01 06 1A FF 4C 00 02 15" is Apple's iBeacon advertising prefix) (LE Advertisement Enable) hcitool -i hci0 cmd 0x08 0x000a 01 For the retransmission interval setting (the first of the above commands) to take effect again, the transmission must first be stopped with: (LE Advertisement Disable) hcitool -i hci0 cmd 0x08 0x000a 00 Devices running the Android operating system prior to version 4.3 can only receive iBeacon advertisements but cannot emit iBeacon advertisements. Android 5.0 ("Lollipop") added support for both central and peripheral modes. 
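The payload set by the commands above can be decoded in a few lines. The following sketch parses the example advertisement (the leading total-length byte 1E and the trailing padding byte from the hcitool line are omitted), following the byte layout detailed in the next section:

import struct
import uuid

adv_hex = (
    "02 01 06 1A FF 4C 00 02 15"                        # flags + Apple iBeacon prefix
    " FB 0B 57 A2 82 28 44 CD 91 3A 94 A1 22 BA 12 06"  # 16-byte proximity UUID
    " 00 01 00 02 D1"                                   # major, minor, measured TX power
)
adv = bytes.fromhex(adv_hex.replace(" ", ""))

assert adv[4] == 0xFF and adv[5:9] == b"\x4c\x00\x02\x15", "not an Apple iBeacon frame"
proximity_uuid = uuid.UUID(bytes=adv[9:25])
major, minor = struct.unpack(">HH", adv[25:29])   # 16-bit big-endian user values
(tx_power,) = struct.unpack("b", adv[29:30])      # signed dBm measured at 1 m

print(proximity_uuid)   # fb0b57a2-8228-44cd-913a-94a122ba1206
print(major, minor)     # 1 2
print(tx_power)         # -47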
BLE advertisement packet structure byte map Byte 0-2: Standard BLE Flags (Not necessary but standard) Byte 0: Length: 0x02 Byte 1: Type: 0x01 (Flags) Byte 2: Value: 0x06 (Typical Flags 0b00000110) (LE General Discoverable Mode, BR/EDR Not Supported) Byte 3-29: Apple Defined iBeacon Data Byte 3: Length: 0x1a (Of the following section) Byte 4: Type: 0xff (Custom Manufacturer Data) Byte 5-6: Manufacturer ID: 0x4c00 (Apple's Bluetooth SIG registered company code, 16-bit Little Endian) Byte 7: SubType: 0x02 (Apple's iBeacon type of Custom Manufacturer Data) Byte 8: SubType Length: 0x15 (Of the rest of the iBeacon data; UUID + Major + Minor + TXPower) Byte 9-24: Proximity UUID (Random or Public/Registered UUID of the specific beacon) Byte 25-26: Major (User-Defined value) Byte 27-28: Minor (User-Defined value) Byte 29: Measured Power (8-bit signed value, ranging from -128 to 127; use two's complement to convert if necessary; units: measured transmission power in dBm at 1 meter from the beacon) (Set by the user, not dynamic; can be used in conjunction with the received RSSI at a receiver to calculate a rough distance to the beacon) Android iBeacon Support Unlike iOS, Android does not have native iBeacon support. Due to this, to use iBeacon on Android, a developer either has to use an existing library or create code that parses BLE packets to find iBeacon advertisements. BLE support was introduced in Android Jelly Bean with major bug fixes in Android KitKat. Stability improvements and additional BLE features have been progressively added thereafter, with a major stability improvement in version 6.0.1 of Android Marshmallow that prevents inter-app connection leaking. Spoofing By design, the iBeacon advertisement frame is plainly visible. This leaves the door open for interested parties to capture, copy and reproduce the iBeacon advertisement frames at different physical locations. This can be done simply by issuing the right sequence of commands to compatible Bluetooth 4.0 USB dongles. Successful spoofing of Apple store iBeacons was reported in February 2014. This is not a security flaw in the iBeacon per se, but application developers must keep this in mind when designing their applications with iBeacons. PayPal has taken a more robust approach, where the iBeacon is purely the start of a complex security negotiation (Challenge–response authentication). This is not likely to be hacked, nor is it likely that it would be disrupted by copies of beacons. Listening for iBeacons can be achieved using the following commands with a modern Linux distribution: hcitool -i hci0 lescan --passive --duplicates D6:EE:D4:16:ED:FC (unknown) F6:BE:90:32:3C:5E (unknown) ... On another terminal, launch the protocol dump program: hcidump -R -i hci0 > 04 3E 2A 02 01 00 01 FC ED 16 D4 EE D6 1E 02 01 06 1A FF 4C 00 02 15 B9 40 7F 30 F5 F8 46 6E AF F9 25 55 6B 57 FE 6D ED FC D4 16 B6 B4 ... See Bluetooth Core Spec. Volume 4, Part E, 7.7.65.2: LE Meta Event::LE Advertising Report Sub-Event, for details on the hcidump output. The MAC address of the iBeacon along with its iBeacon payload is clearly identifiable. The sequence of commands in the technical details above can then be used to reproduce the iBeacon frame. Compatible devices iOS devices with Bluetooth 4.0+ (iPhone 4s and later, iPad (3rd generation) and later, iPad Mini (1st generation) and later, and iPod Touch (5th generation) and later) Macintosh computers with OS X Mavericks (10.9) or later and Bluetooth 4.0 Android Devices with Bluetooth 4.0+ and Android OS 4.3+ (e.g. 
Samsung Galaxy S7/J1 mini Prime, Samsung Galaxy Note 2/3, HTC One, Google/LG Nexus 7 (2013)/Nexus 4/Nexus 5, OnePlus One, LG G3) Windows Phone devices with Bluetooth 4.0+ and the Lumia Cyan update or above (reports suggest support is not included with Windows Phone 8.1). Comparable technologies Even though the NFC environment is very different and has many non-overlapping applications, NFC is often compared with iBeacon. The NFC range is up to 20 cm (7.87 inches) but the optimum range is less than 4 cm (1.57 inches). iBeacons have a significantly higher range. Not all phones carry NFC chips. Apple's first iPhone model containing NFC chips was the iPhone 6, introduced September 2014, but most modern phones have had Bluetooth 4.0 or later capability for several years prior to this. See also AirTag Eddystone Electric beacon Pseudolite Nearables Types of beacons Proximity Marketing Mobile location analytics References External links Automatic identification and data capture Radio-frequency identification Radio navigation Ubiquitous computing Wireless locating Indoor positioning system Bluetooth Geopositioning
IBeacon
[ "Technology", "Engineering" ]
3,426
[ "Radio electronics", "Wireless locating", "Wireless networking", "Indoor positioning system", "Data", "Automatic identification and data capture", "Radio-frequency identification", "Bluetooth" ]
40,615,979
https://en.wikipedia.org/wiki/Voice%20Mate
Voice Mate, formerly called Quick Voice and later Q Voice, is an intelligent personal assistant and knowledge navigator which is only available as a built-in application for various LG smartphones. The application uses a natural language user interface to answer questions, make recommendations, and perform actions by delegating requests to a set of Web services. It is based on the Maluuba personal assistant. Some of the capabilities of Voice Mate include making appointments, opening apps, setting alarms, updating social network websites, such as Facebook or Twitter and navigation. Voice Mate also offers efficient multitasking as well as automatic activation features, for example when the car engine is started. Devices LG Optimus Vu LG Optimus LTE II LG Optimus L3 LG Optimus L5 LG Optimus L5 II LG Optimus L7 LG Optimus L9 LG Optimus L9 II LG Optimus Vu II LG Optimus F3 LG Optimus F5 LG Optimus F6 LG Optimus F7 LG Optimus G LG Optimus G Pro LG G2 LG G Pro 2 LG G Pad 8.3 LG Vu 3 LG G Flex LG Volt LG G3 LG G4 References Natural language processing software Virtual assistants LG Electronics
Voice Mate
[ "Technology" ]
273
[ "Mobile software stubs", "Mobile technology stubs" ]
49,557,021
https://en.wikipedia.org/wiki/Jason%20X.-J.%20Yuan
Jason X.-J. Yuan (born 1963) is an American physician-scientist whose research interests center on pulmonary vascular pathobiology and pulmonary hypertension. His current research is primarily focused on the pathogenic mechanisms of pulmonary vascular diseases and right heart failure. Biography and career He was born in 1963 in Xintian County, Hunan Province, China. Yuan completed his medical training at the Suzhou Medical College in 1983, and received his doctoral degree from the Chinese Academy of Medical Sciences and Peking Union Medical College in 1993. He completed his postdoctoral fellowship at the University of Maryland School of Medicine (1988-1991). Yuan began his academic career as a Research Assistant Professor of Medicine at the University of Maryland School of Medicine (1993-1998) where he established a translational research project using lung tissues and cells isolated from patients with idiopathic pulmonary arterial hypertension to study pathogenic mechanisms of the disease. He received a Parker B. Francis Fellowship from the Francis Families Foundation in 1994 and a Giles F. Filley Memorial Award for Excellence in Respiratory Physiology and Medicine from the American Physiological Society in 1995 for his translational research work. He was also the winner of the 1995 Cournand and Comroe Young Investigator Award of the American Heart Association. In 1998, Yuan obtained an Established Investigator Award from the American Heart Association for his pioneering work in identifying novel therapeutic approaches for pulmonary vascular disease. He was recognized as one of the highly promising young investigators in the translational research field of pulmonary vascular disease and right heart failure. In 1999, Yuan moved to the University of California, San Diego and became a Professor in 2003. His research interests then extended to the pathogenic and therapeutic mechanisms of chronic thromboembolic pulmonary hypertension, the functional role of ion channels in stem cell proliferation and differentiation, and pharmacogenetics associated with idiopathic and associated pulmonary arterial hypertension. While at the University of California, San Diego, he was the Vice Chair for Research of the Department of Medicine (2007-2010) and Associate Director for Research Training in the Division of Pulmonary and Critical Care Medicine (2003-2010). In July 2010, Yuan moved to the University of Illinois at Chicago to assume a position of Program Director in the newly established Institute for Personalized Respiratory Medicine (2010-2014). He was also Vice Chair for Scholarly Activities of the Department of Medicine at the College of Medicine and Director of the Program in Pulmonary Vascular Disease and Right Heart Dysfunction at the Center for Cardiovascular Research in the University of Illinois at Chicago. In May, 2010, he was appointed Associate Vice President for Translational Health Sciences of the University of Arizona. At the same time, he became the founding Chief of the Division of Translational and Regenerative Medicine at the Department of Medicine of the College of Medicine. His pulmonary vascular disease research has advanced the field's understanding of the pathogenic roles of membrane receptors and ion channels and provides a new research direction for developing therapeutic approaches for the disease. He has been continuously funded by the NIH since 1993 when he received his FIRST award. 
Yuan is a Fellow of the American Heart Association, the American Association for the Advancement of Science, and the American Physiological Society. He is also a Guggenheim Fellow. He is an elected Member of the American Society for Clinical Investigation and the Association of American Physicians. Yuan has served on many advisory committees and editorial boards, including Chair of the Respiratory Integrative Biology and Translational Research study section of the National Institutes of Health (NIH), and Chair of the Pulmonary Circulation Assembly of the American Thoracic Society. He is currently a regular member of the Vascular Cell and Molecular Biology study section of the NIH, Editor-in-Chief of the journal Pulmonary Circulation, and Associate Editor of the American Journal of Physiology - Cell Physiology. He is the leading editor of a comprehensive reference book in the field of pulmonary circulation, Textbook of Pulmonary Vascular Disease (Springer, New York, NY, 2011), and an editor or co-editor of the following books: Hypoxic Pulmonary Vasoconstriction: Cellular and Molecular Mechanisms (Kluwer Academic Publishers, Boston, MA, 2004); Ion Channels in the Pulmonary Vasculature (Taylor & Francis Group, Boca Raton, FL, 2005); Membrane Receptors, Channels, and Transporters in Pulmonary Circulation (Humana Press-Springer, New York, NY, 2010); Advances in the Management of Pulmonary Arterial Hypertension (Future Medicine, London, UK, 2013); and Lung Stem Cells in the Epithelium and Vasculature (Humana Press/Springer, 2015). He is a co-author (with Kim Barrett, Susan Barman and Heddwen Brooks) of Ganong's Review of Medical Physiology (26th edition) (McGraw Hill Education/Chicago, 2019). Honors Yuan has received over 20 recognitions and awards: 1993 Dr. C.W. Dunker Award, Society of Chinese Biophysicists in America; Molecular Kinetics Award, Molecular Kinetics, Inc. 1994 Parker B. Francis Fellowship, The Francis Families Foundation 1995 Giles F. Filley Memorial Award for Excellence in Respiratory Physiology and Medicine, The American Physiological Society 1995 Research Career Enhancement Award, The American Physiological Society 1995 Winner, Cournand and Comroe Young Investigator Award, American Heart Association 1996 Best Abstract Award, American Heart Association 1998 Established Investigator Award, American Heart Association 1998 Harold Lamport Award, The American Physiological Society 2000- Fellow, The American Physiological Society 2003- Fellow, American Heart Association 2004 Nominating Committee Chair Award, American Thoracic Society 2005 Mentor Recognition Award, University of California, San Diego 2006 Planning Committee Chair Award, American Thoracic Society 2007- Elected Member, The American Society for Clinical Investigation (ASCI) 2007 Program Committee Chair Award, American Thoracic Society 2007 Star Reviewer for 2006, The American Physiological Society (American Journal of Physiology. 
Lung Cellular and Molecular Physiology) 2007- Fellow, American Association for the Advancement of Science (AAAS) 2008 Guggenheim Fellowship Award (Guggenheim Fellow), John Simon Guggenheim Memorial Foundation, New York, NY 2008 Program Committee Chair Award, American Thoracic Society 2008 The UK-US Stem Cell Collaboration Development Award, UK Science & Innovation, Foreign and Commonwealth Office, British Consulate General, San Francisco, CA 2011 University Scholar Award, University of Illinois at Chicago 2011 Estelle Grover Lecturer, Grover Conference, American Thoracic Society (Lost Valley Ranch in Sedalia, CO; September 8, 2011) 2012 PVRI Achievement Award, Pulmonary Vascular Research Institute (Cape Town, South Africa, February 9, 2012) 2013- Elected Member, Association of American Physicians (AAP) 2013 Research Administration Volunteer Recognition Award, American Heart Association 2017 Kenneth D. Bloch Memorial Lecturer, American Heart Association 2008 Robert M. Berne Distinguished Lectureship, The American Physiological Society External links Jason X.-J. Yuan c.v. Journal articles Google Scholar citations References Living people University of Maryland, Baltimore faculty University of Illinois Chicago faculty 1963 births Translational medicine University of California, San Diego faculty University of Arizona faculty Members of the American Society for Clinical Investigation
Jason X.-J. Yuan
[ "Biology" ]
1,446
[ "Translational medicine" ]
49,557,260
https://en.wikipedia.org/wiki/Droplet%20countercurrent%20chromatography
Droplet countercurrent chromatography (DCCC or DCC) was introduced in 1970 by Tanimura, Pisano, Ito, and Bowman. DCCC is considered to be a form of liquid-liquid separation, a family which also includes countercurrent distribution and countercurrent chromatography, and it employs a liquid stationary phase held in a collection of vertical glass columns connected in series. The mobile phase passes through the columns in the form of droplets. The DCCC apparatus may be run with the lower phase stationary and the upper phase being introduced to the bottom of each column, or with the upper phase stationary and the lower phase being introduced from the top of the column. In both cases, gravity is allowed to act on the two immiscible liquids of different densities to form the signature droplets that rise or descend through the column. The mobile phase is pumped at a rate that will allow droplets to form that maximize the mass transfer of a compound between the upper and lower phases. Compounds that are more soluble in the upper phase will travel quickly through the column, while compounds that are more soluble in the stationary phase will linger. Separation occurs because different compounds distribute differently, in a ratio called the partition coefficient, between the two phases. The biphasic solvent system must be carefully formulated so that it will perform appropriately in the DCCC column. The solvent system must form two phases without excess emulsification in order to form droplets. The densities of the two phases must also be sufficiently different so that the phases will move past each other in the column. Many DCCC solvent systems contain both chloroform and water. The solvent system used in the seminal publication was made from chloroform, acetic acid, and aqueous 0.1 M hydrochloric acid. Many subsequent solvent systems were made with chloroform, methanol, and water, which is sometimes represented as a ChMWat solvent system. Solvent systems formulated with n-butanol, water and a modifier such as acetic acid, pyridine or n-propanol have also enjoyed some success in DCCC. In some cases, non-aqueous biphasic solvent systems such as acetonitrile and methanol have been utilized. The main difference between DCCC and other types of countercurrent chromatography techniques is that there is no vigorous mixing of phases to enhance the mass transfer of compounds that allows them to distribute between the two phases. In 1951 Kies and Davis described an apparatus similar to the DCCC. They created a series of open tubes that were arranged in a cascade to either drip a more dense phase through a less dense stationary phase or, conversely, introduce a less dense phase into the bottom of the tube to dribble through the more dense phase. In 1954, a fractionation column was introduced by Kepes that resembled a CCC column divided into chambers with perforated plastic disks. Similar DCCC-type instruments have been created by A. E. Kostanyan and collaborators, which employ vertical columns that are divided into partitions with porous disks. Once the columns are filled with stationary phase, the mobile phase is pumped through, not continuously but in pulses. The solvent motion created by a pulsed pumping action creates the mixing and settling that is common to almost all forms of countercurrent chromatography. Applications DCCC has been employed to separate a wide variety of phytochemicals from their crude extracts. 
The long list of natural product separations includes: saponins, alkaloids, senna glycosides, monosaccharides, triterpene glycosides, flavone glycosides, xanthones, iridoid glycosides, vitamin B12, lignans, imbricatolic acid, gallic acid, carotenoids, and triterpenoids. DCCC instruments have been commercially manufactured and distributed by Büchi and Tokyo Rikakikai (Eyela). References Chromatography
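To make the role of the partition coefficient concrete, the sketch below uses the classical liquid-liquid chromatography retention relation V_R = V_M + K · V_S (retention volume equals the mobile phase volume plus the partition coefficient times the stationary phase volume). The volumes and K values are invented for illustration, not measurements from any DCCC instrument:

# K is the stationary/mobile concentration ratio; a small K means the compound
# prefers the mobile phase and elutes early, as described above.
V_M = 40.0    # mL of mobile phase held in the column train (hypothetical)
V_S = 160.0   # mL of stationary phase retained in the columns (hypothetical)

partition_coefficients = {"compound A": 0.2, "compound B": 1.0, "compound C": 4.0}

for name, k in sorted(partition_coefficients.items(), key=lambda item: item[1]):
    v_r = V_M + k * V_S
    print(f"{name}: K = {k:.1f} -> elutes after about {v_r:.0f} mL of mobile phase")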
Droplet countercurrent chromatography
[ "Chemistry" ]
845
[ "Chromatography", "Separation processes" ]
49,575,179
https://en.wikipedia.org/wiki/Noroxymorphone
Noroxymorphone is an opioid which is a metabolite of both oxymorphone and oxycodone and is manufactured specifically as an intermediate in the production of narcotic antagonists such as naltrexone and others. It is a potent agonist of the μ-opioid receptor, but is poorly able to cross the blood–brain barrier into the central nervous system, and for this reason has only minimal analgesic activity. In the United States, noroxymorphone is controlled as a Schedule II narcotic controlled substance with an ACSCN of 9637, and in 2014 the DEA set annual aggregate manufacturing quotas of 17,500 kg for conversion and 1,262.5 kg for sale. In other countries, it may be similarly controlled, controlled at a lower level, or regulated in another way. See also Oxymorphone hydrazone Oxymorphol - a metabolite of oxymorphone and an intermediate in the creation of hydromorphone Hydromorphone Oxycodone Norbuprenorphine Norbinaltorphimine References 4,5-Epoxymorphinans Semisynthetic opioids German inventions Hydroxyarenes Ketones Tertiary alcohols Ethers Mu-opioid receptor agonists Euphoriants
Noroxymorphone
[ "Chemistry" ]
280
[ "Organic compounds", "Ketones", "Functional groups", "Ethers" ]
49,576,192
https://en.wikipedia.org/wiki/Gyrolite
Gyrolite, NaCa16(Si23Al)O60(OH)8·14H2O, is a rare silicate mineral (basic sodium calcium silicate hydrate: N-C-S-H, in cement chemist notation) belonging to the class of phyllosilicates. Gyrolite is also often associated with zeolites. It is most commonly found as spherical or radial formations in hydrothermally altered basalt and basaltic tuffs. These formations can be glassy, dull or fibrous in appearance. Gyrolite is also known as centrallasite, glimmer zeolite or gurolite. Discovery and natural occurrence It was first described in 1851 for an occurrence at The Storr on the Isle of Skye, Scotland and is named from the ancient Greek word for circle, guros (γῦρος), based on the round form in which it is commonly found. Minerals associated with gyrolite include apophyllite, okenite and many of the other zeolites. Gyrolite is found in Scotland, Ireland, Italy, the Faroe Islands, Greenland, India, Japan, the USA, Canada and various other localities. Occurrence in hardened cement paste and concrete Gyrolite is also mentioned as a rare calcium silicate hydrate (C-S-H) phase in cement chemistry textbooks with a simplified formulation: Ca8(Si4O10)3(OH)4 · ~6 H2O, which is consistent with the general formulation given above, but does not consider the isomorphic substitution of one silicon atom by one aluminium atom along with one sodium atom in its crystal lattice. Gyrolite may form at higher temperature in oilwell cement muds containing ground granulated blast furnace slags (GGBFS) activated by alkali. It could also form in CEM III cement-based concrete exposed to alkali-silica reaction (ASR) at elevated temperature. Hydrothermal synthesis Gyrolite can be synthesized in the laboratory, or industrially, by hydrothermal reaction in the temperature range 150–250 °C by reacting CaO and amorphous SiO2, or quartz, in saturated steam, whether in the presence of CaSO4 salts or not. At temperatures lower than 150 °C, the reaction rate is very slow. At temperatures above 250 °C, gyrolite recrystallizes into 1.13 nm tobermorite and xonotlite. Gyrolite is also one of the rare phases detected in situ, along with pectolite, by synchrotron X-ray diffraction during hydrothermal synthesis of cement. Synthetic gyrolite also has a large specific surface area and could find industrial application as an oil absorber. Gyrolite globular rosettes resemble those of shlykovite, a new natural crystalline C-S-H mineral characterized in 2010, and also those of mountainite and rhodesite, other crystalline ASR products of the same family. See also List of minerals References Further reading Anderson Thomas (1851) Description and analysis of gyrolite, a new mineral species. In: The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, Vol. 1, 111–113. Fleischer M. (1959) New mineral names. In: American Mineralogist, Vol. 44, 464–470 (p. 7: Centrallasite = Gyrolite). Calcium minerals Sodium minerals Aluminium minerals Cement Concrete Phyllosilicates Triclinic minerals Minerals in space group 2
Gyrolite
[ "Chemistry", "Engineering" ]
750
[ "Structural engineering", "Concrete", "Hydrate minerals", "Hydrates" ]
52,142,704
https://en.wikipedia.org/wiki/Polygenic%20score
In genetics, a polygenic score (PGS) is a number that summarizes the estimated effect of many genetic variants on an individual's phenotype. The PGS is also called the polygenic index (PGI) or genome-wide score; in the context of disease risk, it is called a polygenic risk score (PRS or PR score) or genetic risk score. The score reflects an individual's estimated genetic predisposition for a given trait and can be used as a predictor for that trait. It gives an estimate of how likely an individual is to have a given trait based only on genetics, without taking environmental factors into account; and it is typically calculated as a weighted sum of trait-associated alleles. Recent progress in genetics has developed polygenic predictors of complex human traits, including risk for many important complex diseases that are typically affected by many genetic variants, each of which confers a small effect on overall risk. In a polygenic risk predictor the lifetime (or age-range) risk for the disease is a numerical function captured by the score, which depends on the states of thousands of individual genetic variants (i.e., single-nucleotide polymorphisms, or SNPs). Polygenic scores are widely used in animal breeding and plant breeding due to their efficacy in improving livestock breeding and crops. In humans, polygenic scores are typically generated from genome-wide association study (GWAS) data. They are an active area of research spanning topics such as learning algorithms for genomic prediction; new predictor training; validation testing of predictors; and clinical application of PRS. In 2018, the American Heart Association named polygenic risk scores as one of the major breakthroughs in research in heart disease and stroke. Background DNA in living organisms is the molecular genetic code for life. Although polygenic risk scores from studies in humans have gained the most attention, the basic idea was first introduced for selective plant and animal breeding. Similar to the latter-day approaches of constructing a polygenic risk score, an individual's (animal or plant) breeding value was calculated as a combination of several single-nucleotide polymorphisms (SNPs) weighted by their individual effects on a trait. Human DNA contains about 3 billion bases. The human genome can be broadly separated into coding and non-coding sequences, where the coding genome encodes instructions for genes, including some of the sequence that codes for proteins. Genome-wide association studies enable mapping phenotypes to the variations in nucleotide bases in human populations. Improvements in methodology and studies with large cohorts have enabled the mapping of many traits (some of which are diseases) to the human genome. Learning which variations influence which specific traits, and how strongly they do so, is the key step in constructing polygenic scores in humans. The methods were first considered for humans after the year 2000, and specifically by a proposal in 2007 that such scores could be used in human genetics to identify individuals at high risk for disease. The concept was successfully applied in 2009 by researchers who organized a genome-wide association study (GWAS) regarding schizophrenia with the objective of constructing scores of risk propensity. That study was the first to use the term polygenic score for a prediction drawn from a linear combination of single-nucleotide polymorphism (SNP) genotypes, which was able to explain 3% of the variance in schizophrenia. 
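The "linear combination of SNP genotypes" just described is simple arithmetic once per-SNP weights are available. A minimal sketch with synthetic data; real scores use estimated GWAS effect sizes over thousands to millions of SNPs:

import numpy as np

# Genotypes are coded as 0, 1 or 2 copies of the effect allele at each SNP;
# the weights play the role of per-allele effect sizes (e.g. GWAS betas).
rng = np.random.default_rng(0)
n_individuals, n_snps = 5, 8
genotypes = rng.integers(0, 3, size=(n_individuals, n_snps))
weights = rng.normal(0.0, 0.1, size=n_snps)

scores = genotypes @ weights   # each score is sum_j weight_j * allele_count_ij

# Scores are usually interpreted relative to a reference distribution,
# e.g. standardized to z-scores or reported as percentiles.
z_scores = (scores - scores.mean()) / scores.std()
for i, (s, z) in enumerate(zip(scores, z_scores)):
    print(f"individual {i}: PGS = {s:+.3f} (z = {z:+.2f})")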
Calculation with genome-wide association study A PRS is constructed from the estimated effect sizes derived from a genome-wide association study (GWAS). In a GWAS, single-nucleotide polymorphisms (SNPs) are tested for association between cases and controls. The results from a GWAS estimate the strength of the association at each SNP, i.e., the effect size at the SNP, as well as a p-value for statistical significance. A typical score is then calculated by adding the number of risk-modifying alleles across a large number of SNPs, where the number of alleles for each SNP is multiplied by the weight for the SNP. In mathematical form, the estimated polygenic score for individual i is obtained as the weighted sum across the m included SNPs, PGS_i = β_1 x_i1 + β_2 x_i2 + ... + β_m x_im, where x_ij is the number of risk-increasing alleles (0, 1 or 2) carried by individual i at SNP j and β_j is the estimated effect size (weight) of that SNP. This idea can be generalized to the study of any trait, and is an example of the more general mathematical technique of regression analysis. Key considerations Methods for generating polygenic scores in humans are an active area of research. Two key considerations in developing polygenic scores are which SNPs to include and how many. The simplest, the so-called "pruning and thresholding" method, sets weights equal to the coefficient estimates from a regression of the trait on each genetic variant. The included SNPs may be selected using an algorithm that attempts to ensure that each marker is approximately independent. Independence of each SNP is important for the score's predictive accuracy. SNPs that are physically close to each other are more likely to be in linkage disequilibrium, meaning they typically are inherited together and therefore do not provide independent predictive power. This selection step is referred to as 'pruning'. The 'thresholding' refers to using only SNPs that meet a specific p-value threshold. Penalized regression can also be used to construct polygenic scores. From prior information, penalized regression assigns probabilities to: 1) how many genetic variants are expected to affect a trait, and 2) the distribution of their effect sizes. These methods in effect "penalize" the large coefficients in a regression model and shrink them conservatively. One popular tool for this approach is "PRS-CS". Another is to use certain Bayesian methods, first proposed in 2001, that directly incorporate genetic features of a given trait as well as genomic features like linkage disequilibrium. (One Bayesian method uses "linkage disequilibrium prediction" or LDpred.) More approaches for developing polygenic risk scores continue to be described. For example, by incorporating effect sizes from populations of different ancestry, the predictive ability of scores can be improved. Incorporating knowledge of the functional roles of specific genomic regions can improve the utility of scores. Studies have examined the performance of these methods on standardized datasets. Application to humans As the number of genome-wide association studies has exploded, along with rapid advances in methods for calculating polygenic scores, the most obvious application of such scores is in clinical settings for disease prediction or risk stratification. It is important not to over- or under-state the value of polygenic scores. A key advantage of quantifying the polygenic contribution for each individual is that the genetic liability does not change over an individual's lifespan. However, while a disease may have strong genetic contributions, the risk arising from one's genetics has to be interpreted in the context of environmental factors. 
For example, even if an individual has a high genetic risk for alcoholism, that risk is lessened if that individual is never exposed to alcohol. Predictive performance in humans For humans, while most polygenic scores are not predictive enough to diagnose disease, they could be used in addition to other covariates (such as age, BMI, smoking status) to improve estimates of disease susceptibility. However, even if a polygenic score might not make reliable diagnostic predictions across an entire population, it may still make very accurate predictions for outliers at extreme high or low risk. The clinical utility may therefore still be large even if average measures of prediction performance are moderate. Although issues such as poorer predictive performance in individuals of non-European ancestry limit widespread use, several authors have noted that some causal variants are shared between Europeans and other groups across different continents for some conditions but not others, for example BMI and type 2 diabetes in African populations as well as schizophrenia in Chinese populations. Other researchers recognize that polygenic under-prediction in non-European populations should galvanize new GWAS that prioritize greater genetic diversity in order to maximize the potential health benefits brought about by predictive polygenic scores. Significant scientific efforts are being made to this end. Embryo genetic screening is common, with millions of embryos biopsied and tested each year worldwide. Genotyping methods have been developed so that the embryo genotype can be determined to high precision. Testing for aneuploidy and monogenic diseases has increasingly become established over decades, whereas tests for polygenic diseases have begun to be employed more recently, having been first used in embryo selection in 2019. The use of polygenic scores for embryo selection has been criticised due to alleged ethical and safety issues as well as limited practical utility. However, trait-specific evaluations claiming the contrary have been put forth and ethical arguments for PGS-based embryo selection have also been made. The topic continues to be an active area of research not only within genomics but also within clinical applications and ethics. As of 2019, polygenic scores for well over a hundred phenotypes have been developed from genome-wide association statistics. These include scores that can be categorized as anthropometric, behavioural, cardiovascular, non-cancer illness, psychiatric/neurological, and response to treatment/medication. Examples of disease prediction performance When predicting disease risk, a PGS gives a continuous score that estimates the risk of having or getting the disease, within some pre-defined time span. A common metric for evaluating such continuous estimates of yes/no questions (see Binary classification) is the area under the ROC curve (AUC). Some example results of PGS performance, as measured in AUC (0 ≤ AUC ≤ 1, where a larger number implies better prediction), include: In 2018, AUC ≈ 0.64 for coronary disease using ~120,000 British individuals. In 2019, AUC ≈ 0.63 for breast cancer, developed from ~95,000 case subjects and ~75,000 controls of European ancestry. In 2019, AUC ≈ 0.71 for hypothyroidism for ~24,000 case subjects and ~463,000 controls of European ancestry. In 2020, AUC ≈ 0.71 for schizophrenia, using 90 cohorts including ~67,000 case subjects and ~94,000 controls with ~80% of European ancestry and ~20% of East Asian ancestry. 
Note that these results use purely genetic information as input; including additional information such as age and sex often greatly improves the predictions. The coronary disease predictor and the hypothyroidism predictor above achieve AUCs of ~0.80 and ~0.78, respectively, when also including age and sex. Importance of sample size The performance of a polygenic predictor is highly dependent on the size of the dataset that is available for analysis and ML training. Recent scientific progress in prediction power relies heavily on the creation and expansion of large biobanks containing data for both genotypes and phenotypes of very many individuals. As of 2021, there exist several biobanks with hundreds of thousands of samples, i.e., data entries with both genetic and trait information for each individual (see for instance the incomplete list of biobanks). With the use of these growing biobanks, data from many thousands of individuals are used to detect the relevant variants for a specific trait. Exactly how many are required depends very much on the trait in question. Typically, increasing levels of prediction are observed until a plateau phase where the performance levels off and does not change much when increasing the sample size even further. This is the limit of how accurate a polygenic predictor that only uses genetic information can be, and it is set by the heritability of the specific trait. The sample size required to reach this performance level for a certain trait is determined by the complexity of the underlying genetic architecture and the distribution of genetic variance in the sampled population. This sample size dependence has been illustrated for hypothyroidism, hypertension and type 2 diabetes. Note again that current methods to construct polygenic predictors are sensitive to the ancestries present in the data. As of 2021, most available data have been primarily of populations with European ancestry, which is the reason why PGS generally perform better within this ancestry. The construction of more diverse biobanks with successful recruitment from all ancestries is required to rectify this skewed access to and benefit from PGS-based medicine. Clinical utility and current usage A landmark study examining the role of polygenic risk scores in cardiovascular disease invigorated interest in the clinical potential of polygenic scores. This study demonstrated that individuals with the highest polygenic risk scores (top 1%) had a lifetime cardiovascular risk >10%, which was comparable to those with rare genetic variants. This comparison is important because clinical practice can be influenced by knowing which individuals have this rare genetic cause of cardiovascular disease. Since this study, polygenic risk scores have shown promise for disease prediction across other traits. Polygenic risk scores have been studied heavily in obesity, coronary artery disease, diabetes, breast cancer, prostate cancer, Alzheimer's disease and psychiatric diseases. As of January 2021, providing PRS directly to individuals was undergoing research trials in health systems around the world, but was not yet offered as standard of care. Most use is therefore through consumer genetic testing, where a number of private companies report PRS for a number of diseases and traits. Consumers download their genotype (genetic variant) data and upload them into online PRS calculators, e.g. Scripps Health, Impute.me or Color Genomics. 
The most frequently reported motivation for individuals to seek out PRS reports is general curiosity (98.2%), and reactions are generally mixed, with misinterpretations common. It is speculated that personal use of PRS could contribute to treatment choices, but that more data is needed. As of 2020, a more typical situation was that clinicians were faced with individuals bringing commercially derived disease-specific PRS in the expectation that the clinician would interpret them, something that may create extra burdens for the clinical care system. Challenges and risks in clinical contexts At a fundamental level, the use of polygenic scores in clinical contexts will face technical issues similar to those of existing tools. For example, if a tool is not validated in a diverse population, then it may exacerbate disparities through unequal efficacy across populations. This is especially important in genetics where, as of 2018, the majority of studies to date have been done in Europeans. Other challenges that can arise include how precisely the polygenic risk score can be calculated and how precise it needs to be for clinical utility. Even if a polygenic score is accurately calculated and calibrated for a population, its interpretation must be approached with caution. First, it is important to realize that polygenic traits are different from monogenic traits; the latter stem from a single genetic locus and can be detected more accurately. Genetic tests are often difficult to interpret and require genetic counseling. Currently, polygenic-score results are being shared with clinicians. Since monogenic genetic testing is far more mature than polygenic scores, we can look there to approximate the clinical impact of polygenic scores. While some studies have found negative effects of returning monogenic genetic results to patients, the majority of studies have found that negative consequences are minor. Benefits in humans Unlike many other clinical laboratory or imaging methods, an individual's germ-line genetic risk can be calculated at birth for a variety of diseases after sequencing their DNA once. Thus, polygenic scores may ultimately be a cost-effective measure that can be informative for clinical management. Moreover, the polygenic risk score may be informative across an individual's lifespan, helping to quantify the lifelong genetic risk for certain diseases. For many diseases, having a strong genetic risk can result in an earlier onset of presentation (e.g. familial hypercholesterolemia). Recognizing an increased genetic burden earlier can allow clinicians to intervene earlier and avoid delayed diagnoses. Polygenic scores can be combined with traditional risk factors to increase clinical utility. For example, polygenic risk scores may help improve the diagnosis of diseases. This is especially evident in distinguishing type 1 from type 2 diabetes. Likewise, a polygenic-risk-score-based approach may reduce invasive diagnostic procedures, as demonstrated in celiac disease. Polygenic scores may also empower individuals to alter their lifestyles to reduce risk for diseases. While there is some evidence for behavior modification as a result of knowing one's genetic predisposition, more work is required to evaluate risk-modifying behaviors across a variety of different disease states. Population-level screening is another use case for polygenic scores. The goal of population-level screening is to identify patients at high risk for a disease who would benefit from an existing treatment.
Polygenic scores can identify a subset of the population at high risk that could benefit from screening. Several clinical studies are under way in breast cancer, and heart disease is another area that could benefit from a polygenic-score-based screening program. Non-predictive applications A variety of applications exist for polygenic scores. In humans, polygenic scores were originally computed in an effort to predict the prevalence and etiology of complex, heritable diseases, which are typically affected by many genetic variants that individually confer a small effect on overall risk. Additionally, a polygenic score can be used in several different ways: as a lower bound to test whether heritability estimates may be biased; as a measure of genetic overlap of traits (genetic correlation), which might indicate e.g. shared genetic bases for groups of mental disorders; as a means to assess group differences in a trait such as height, or to examine changes in a trait over time due to natural selection indicative of a soft selective sweep (as e.g. for intelligence, where the changes in frequency would be too small to detect on each individual hit but not on the overall polygenic score); in Mendelian randomization (assuming no pleiotropy with relevant traits); to detect and control for the presence of genetic confounds in outcomes (e.g. the correlation of schizophrenia with poverty); or to investigate gene–environment interactions and correlations. Polygenic scores also have useful statistical properties in (genomic) association testing, for instance to account for outcome-specific background effects and/or improve statistical power. Applications in non-human species The benefit of polygenic scores is that they can be used to predict future phenotypes in crops, farm animals, and humans alike. Although the same basic concepts underlie these areas of prediction, they face different challenges that require different methodologies. The ability to produce very large family sizes in nonhuman species, accompanied by deliberate selection, leads to a smaller effective population size, higher degrees of linkage disequilibrium among individuals, and a higher average genetic relatedness among individuals within a population. For example, members of plant and animal breeds that humans have effectively created, such as modern maize or domestic cattle, are all technically "related". In human genomic prediction, by contrast, unrelated individuals in large populations are selected to estimate the effects of common SNPs. Because of the smaller effective population size in livestock, the mean coefficient of relationship between any two individuals is likely high, and common SNPs will tag causal variants at greater physical distances than in humans; this is the major reason for the lower SNP-based heritability estimates for humans compared to livestock. In both cases, however, sample size is key to maximizing the accuracy of genomic prediction. While modern genomic prediction scoring in humans is generally referred to as a "polygenic score" (PGS) or a "polygenic risk score" (PRS), in livestock the more common term is "genomic estimated breeding value", or GEBV (similar to the more familiar "EBV", but with genotypic data). Conceptually, a GEBV is the same as a PGS: a linear function of genetic variants that are each weighted by the apparent effect of the variant. Despite this, polygenic prediction in livestock is useful for a fundamentally different reason than in humans.
In humans, a PRS is used for the prediction of individual phenotype, while in livestock a GEBV is typically used to predict the offspring's average value of a phenotype of interest in terms of the genetic material it inherited from a parent. In this way, a GEBV can be understood as the expected average trait value of the offspring of an individual or of a pair of animals. GEBVs are also typically communicated in the units of the trait of interest. For example, the expected increase in milk production of the offspring of a specific parent compared to the offspring from a reference population is a typical way a GEBV is used in dairy cow breeding and selection. Notes A. The preprint lists the AUC for the pure PRS, while the published version of the paper only lists the AUC for the PGS combined with age, sex and genotyping-array information. References Further reading Francesca Forzano, Olga Antonova, Angus Clarke, Guido de Wert, Sabine Hentze, Yalda Jamshidi, Yves Moreau, Markus Perola, Inga Prokopenko, Andrew Read, Alexandre Reymond, Vigdis Stefansdottir, Carla van El (2022). "The use of polygenic risk scores in pre-implantation genetic testing: an unproven, unethical practice." European Journal of Human Genetics 30, pages 493–495. External links Polygenic Risk Scores Polygenic Score Atlas Polygenic Score (PGS) Catalog Animal breeding Plant breeding Regression analysis Genetics studies Statistical genetics Personalized medicine
Polygenic score
[ "Chemistry" ]
4,471
[ "Plant breeding", "Molecular biology" ]
52,144,005
https://en.wikipedia.org/wiki/Jelly-falls
Jelly-falls are marine carbon cycling events whereby gelatinous zooplankton, primarily cnidarians, sink to the seafloor and enhance carbon and nitrogen fluxes via rapidly sinking particulate organic matter. These events provide nutrition to benthic megafauna and bacteria. Jelly-falls have been implicated as a major "gelatinous pathway" for the sequestration of labile biogenic carbon through the biological pump. These events are common in protected areas with high levels of primary production and water quality suitable to support cnidarian species. These areas include estuaries, and several studies have been conducted in the fjords of Norway. Initiation Jelly-falls are primarily made up of the decaying corpses of Cnidaria and Thaliacea (Pyrosomida, Doliolida, and Salpida). Several circumstances can trigger the death of gelatinous organisms and cause them to sink: high levels of primary production that can clog the organisms' feeding apparatuses, sudden temperature changes, exhaustion of food at the end of an old bloom, damage to the jellies' bodies by predators, and parasitism. In general, however, jelly-falls are linked to jelly blooms and primary production, with over 75% of the jelly-falls in subpolar and temperate regions occurring after spring blooms, and over 25% of the jelly-falls in the tropics occurring after upwelling events. As global climates shift towards warmer and more acidic oceans, conditions not favored by less resilient species, jelly populations are likely to grow. Eutrophic areas and dead zones can become jelly hot spots with substantial blooms. As the climate changes and ocean waters warm, jelly blooms become more prolific and the transport of jelly-carbon to the lower ocean increases. With a possible slowing of the classic biological pump, the transport of carbon and nutrients to the deep sea through jelly-falls may become increasingly important to the deep ocean. Decomposition The decomposition process starts after death and can proceed in the water column as the gelatinous organisms are sinking. Decay happens faster in the tropics than in temperate and subpolar waters as a result of warmer temperatures. In the tropics, a jelly-fall may take less than 2 days to decay in warmer surface water, but as many as 25 days at depths below 1000 m. However, lone gelatinous organisms may spend less time on the sea floor, as one study found that jellies could be decomposed by scavengers in the Norwegian deep sea in under two and a half hours. Decomposition of jelly-falls is largely aided by these kinds of scavengers. In general, echinoderms, such as sea stars, have emerged as the primary consumers of jelly-falls, followed by crustaceans and fish. However, which scavengers find their way to jelly-falls depends strongly on the particular ecosystem. For example, in an experiment in the Norwegian deep sea, hagfish were the first scavengers to find the traps of decaying jellies, followed by squat lobsters, and finally decapod shrimp. Photographs taken off the coast of Norway of natural jelly-falls also revealed caridean shrimp feeding on jelly carcasses. As jelly populations increase and blooms become more common, given favorable conditions and a lack of other filter feeders in the area to consume plankton, the carbon pumps of environments with jellies present will be supplied increasingly by jelly-falls.
This could lead to habitats with established biological pumps falling out of equilibrium, as the presence of jellies would change the food web as well as the amount of carbon deposited into the sediment. Finally, decomposition is aided by the microbial community. In a case study on the Black Sea, the number of bacteria increased in the presence of jelly-falls, and the bacteria were shown to preferentially use nitrogen released from decaying jelly carcasses while mostly leaving carbon. In a study conducted by Andrew Sweetman in 2016, it was discovered, using core samples of the sediment in Norwegian fjords, that the presence of jelly-falls significantly impacted the biogeochemical processes of these benthic communities. Bacteria consume jelly carcasses rapidly, removing opportunities for bottom-feeding macrofauna to acquire sustenance, which has impacts traveling up the trophic levels. In addition, with the exclusion of scavengers, jelly-falls develop a white layer of bacteria over the decaying carcasses and emit a black residue over the surrounding area, which derives from sulfide. This high level of microbial activity requires a great deal of oxygen, which can lead zones around jelly-falls to become hypoxic and inhospitable to larger scavengers. Research challenges Researching jelly-falls relies on direct observational data such as video, photography, or benthic trawls. A complication with trawling for jelly-falls is that the gelatinous carcasses easily fall apart; as a result, opportunistic photography, videography, and chemical analysis have been the primary methods of monitoring. This means that jelly-falls are not always observed in the time period in which they exist. Because jelly-falls can be fully processed and degraded by scavengers within a number of hours, and because some jelly-falls will not sink below 500 m in tropical and subtropical waters, the importance and prevalence of jelly-falls may be underestimated. See also Biological pump Jellyfish Pyrosoma atlanticum Whale fall Deep sea community Dead zone References Aquatic ecology Biological oceanography Chemical oceanography Biogeochemistry
Jelly-falls
[ "Chemistry", "Biology", "Environmental_science" ]
1,169
[ "Environmental chemistry", "Chemical oceanography", "Biogeochemistry", "Ecosystems", "Aquatic ecology" ]
31,213,900
https://en.wikipedia.org/wiki/Cavity%20perturbation%20theory
In mathematics and electronics, cavity perturbation theory describes methods for deriving perturbation formulae for performance changes of a cavity resonator. These performance changes are assumed to be caused either by the introduction of a small foreign object into the cavity or by a small deformation of its boundary. Various mathematical methods can be used to study the characteristics of cavities, which are important in the field of microwave systems, and more generally in the field of electromagnetism. There are many industrial applications for cavity resonators, including microwave ovens, microwave communication systems, and remote imaging systems using electromagnetic waves. How a resonant cavity performs can affect the amount of energy required to make it resonate, or the relative stability or instability of the system. Introduction When a resonant cavity is perturbed, e.g. by introducing a foreign object with distinct material properties into the cavity or when the shape of the cavity is changed slightly, the electromagnetic fields inside the cavity change accordingly. This means that all the resonant modes (i.e. the quasinormal modes) of the unperturbed cavity change slightly. Analytically predicting how the perturbation changes the optical response is a classical problem in electromagnetics, with important implications spanning from the radio-frequency domain to present-day nano-optics. The underlying assumption of cavity perturbation theory is that the electromagnetic fields inside the cavity after the change differ by a very small amount from the fields before the change. Then Maxwell's equations for the original and perturbed cavities can be used to derive analytical expressions for the resulting resonant frequency shift and linewidth change (or Q factor change) by referring only to the original unperturbed mode (not the perturbed one). General theory It is convenient to denote cavity frequencies with a complex number $\tilde\omega = \omega - i\gamma$, where $\omega$ is the angular resonant frequency and $\gamma$ is the inverse of the mode lifetime. Cavity perturbation theory was initially proposed by Bethe and Schwinger in optics, and by Waldron in the radio-frequency domain. These initial approaches rely on formulae that consider stored energy,

$$\frac{\tilde\omega - \tilde\omega_0}{\tilde\omega_0} \approx -\frac{\int \Delta\varepsilon\,|\mathbf{E}_0|^2 \,\mathrm{d}V}{\int \left(\varepsilon\,|\mathbf{E}_0|^2 + \mu\,|\mathbf{H}_0|^2\right)\mathrm{d}V}, \qquad (1)$$

where $\tilde\omega$ and $\tilde\omega_0$ are the complex frequencies of the perturbed and unperturbed cavity modes, and $\mathbf{E}_0$ and $\mathbf{H}_0$ are the electromagnetic fields of the unperturbed mode (permeability change is not considered for simplicity). Expression (1) relies on stored-energy considerations. The latter are intuitive, since common sense dictates that the maximum change in resonant frequency occurs when the perturbation is placed at the intensity maximum of the cavity mode. However, energy considerations in electromagnetism are only valid for Hermitian systems, for which energy is conserved. For cavities, energy is conserved only in the limit of very small leakage (infinite Q's), so that Expression (1) is only valid in this limit. For instance, it is apparent that Expression (1) predicts a change of the Q factor (through $\mathrm{Im}\,\tilde\omega$) only if $\Delta\varepsilon$ is complex, i.e. only if the perturber is absorbent. Clearly this is not the case, and it is well known that a dielectric perturbation may either increase or decrease the Q factor. The problem stems from the fact that a cavity is an open non-Hermitian system with leakage and absorption. The theory of non-Hermitian electromagnetic systems abandons energy, i.e. $\mathbf{E}\cdot\mathbf{E}^*$ products, and rather focuses on $\mathbf{E}\cdot\mathbf{E}$ products, which are complex quantities, the imaginary part being related to the leakage.
To emphasize the difference between the normal modes of Hermitian systems and the resonance modes of leaky systems, the resonance modes are often referred to as quasinormal modes. In this framework, the frequency shift and the Q change are predicted by the analogue of Expression (1) with the conjugated products replaced by unconjugated ones,

$$\frac{\tilde\omega - \tilde\omega_0}{\tilde\omega_0} \approx -\frac{\int \Delta\varepsilon\,\tilde{\mathbf{E}}\cdot\tilde{\mathbf{E}}\,\mathrm{d}V}{\int \left(\varepsilon\,\tilde{\mathbf{E}}\cdot\tilde{\mathbf{E}} - \mu\,\tilde{\mathbf{H}}\cdot\tilde{\mathbf{H}}\right)\mathrm{d}V}. \qquad (2)$$

The accuracy of the seminal equation (2) has been verified in a variety of complicated geometries. For low-Q cavities, such as plasmonic nanoresonators that are used for sensing, equation (2) has been shown to predict both the shift and the broadening of the resonance with high accuracy, whereas equation (1) inaccurately predicts both. For high-Q photonic cavities, such as photonic crystal cavities or microrings, experiments have evidenced that equation (2) accurately predicts both the shift and the Q change, whereas equation (1) accurately predicts the shift only. The following sections are written with $\mathbf{E}\cdot\mathbf{E}^*$ products; however, they should be understood with the $\mathbf{E}\cdot\mathbf{E}$ products of quasinormal-mode theory. Material perturbation When a material within a cavity is changed (permittivity and/or permeability), a corresponding change in resonant frequency can be approximated as

$$\frac{\omega - \omega_0}{\omega_0} \approx -\frac{\int_V \left(\Delta\varepsilon\,|\mathbf{E}_0|^2 + \Delta\mu\,|\mathbf{H}_0|^2\right)\mathrm{d}V}{\int_V \left(\varepsilon\,|\mathbf{E}_0|^2 + \mu\,|\mathbf{H}_0|^2\right)\mathrm{d}V}, \qquad (3)$$

where $\omega$ is the angular resonant frequency of the perturbed cavity, $\omega_0$ is the resonant frequency of the original cavity, $\mathbf{E}_0$ and $\mathbf{H}_0$ represent the original electric and magnetic fields respectively, $\mu$ and $\varepsilon$ are the original permeability and permittivity respectively, while $\Delta\mu$ and $\Delta\varepsilon$ are the changes in the original permeability and permittivity introduced by the material change. Expression (3) can be rewritten in terms of stored energies as

$$\frac{\omega - \omega_0}{\omega_0} \approx -\frac{1}{W}\int_V \left(\frac{\Delta\varepsilon}{\varepsilon}\,w_e + \frac{\Delta\mu}{\mu}\,w_m\right)\mathrm{d}V, \qquad (4)$$

where W is the total energy stored in the original cavity and $w_e$ and $w_m$ are the electric and magnetic energy densities respectively. Shape perturbation When the general shape of a resonant cavity is changed, a corresponding change in resonant frequency can be approximated as

$$\frac{\omega - \omega_0}{\omega_0} \approx \frac{\int_{\Delta V} \left(\mu\,|\mathbf{H}_0|^2 - \varepsilon\,|\mathbf{E}_0|^2\right)\mathrm{d}V}{\int_V \left(\varepsilon\,|\mathbf{E}_0|^2 + \mu\,|\mathbf{H}_0|^2\right)\mathrm{d}V}. \qquad (5)$$

Expression (5) for the change in resonant frequency can additionally be written in terms of time-average stored energies as

$$\frac{\omega - \omega_0}{\omega_0} \approx \frac{\Delta \overline{W}_m - \Delta \overline{W}_e}{\overline{W}_m + \overline{W}_e}, \qquad (6)$$

where $\Delta \overline{W}_m$ and $\Delta \overline{W}_e$ represent the time-average magnetic and electric energies contained in $\Delta V$. This expression can also be written in terms of energy densities as

$$\frac{\omega - \omega_0}{\omega_0} \approx \frac{\left(\overline{w}_m - \overline{w}_e\right)\Delta V}{W}. \qquad (7)$$

Considerable accuracy improvements in the predictive force of Equation (5) can be gained by incorporating local field corrections, which simply result from the interface conditions for electromagnetic fields that are different for the displacement-field and electric-field vectors at the shape boundaries. Applications Microwave measurement techniques based on cavity perturbation theory are generally used to determine the dielectric and magnetic parameters of materials and of various circuit components such as dielectric resonators. Since ex-ante knowledge of the resonant frequency, the resonant frequency shift and the electromagnetic fields is necessary in order to extrapolate material properties, these measurement techniques generally make use of standard resonant cavities where resonant frequencies and electromagnetic fields are well known. Two examples of such standard resonant cavities are rectangular and circular waveguide cavities and coaxial-cable resonators. Cavity perturbation measurement techniques for material characterization are used in many fields ranging from physics and materials science to medicine and biology. Examples rectangular waveguide cavity For a rectangular waveguide cavity, the field distribution of the dominant mode is well known. Ideally, the material to be measured is introduced into the cavity at the position of maximum electric or magnetic field.
When the material is introduced at the position of maximum electric field, the contribution of the magnetic field to the perturbed frequency shift is very small and can be ignored. In this case, we can use perturbation theory to derive expressions for the real and imaginary components of the complex material permittivity as

$$\varepsilon_r' = 1 + \frac{f_c - f_s}{2 f_s}\,\frac{V_c}{V_s}, \qquad \varepsilon_r'' = \frac{V_c}{4 V_s}\left(\frac{1}{Q_s} - \frac{1}{Q_c}\right),$$

where $f_c$ and $f_s$ represent the resonant frequencies of the original cavity and the perturbed cavity respectively, $V_c$ and $V_s$ represent the volumes of the original cavity and the material sample respectively, and $Q_c$ and $Q_s$ represent the quality factors of the original and perturbed cavities respectively. Once the complex permittivity of the material is known, we can easily calculate its effective conductivity and dielectric loss tangent as

$$\sigma = 2\pi f\,\varepsilon_0\,\varepsilon_r'', \qquad \tan\delta = \frac{\varepsilon_r''}{\varepsilon_r'},$$

where f is the frequency of interest and $\varepsilon_0$ is the free-space permittivity. Similarly, if the material is introduced into the cavity at the position of maximum magnetic field, then the contribution of the electric field to the perturbed frequency shift is very small and can be ignored; in this case, perturbation theory yields analogous expressions for the complex material permeability, written in terms of the guide wavelength $\lambda_g$. References Electromagnetism Electrical engineering Microwave technology Radio electronics
Cavity perturbation theory
[ "Physics", "Engineering" ]
1,592
[ "Radio electronics", "Electromagnetism", "Physical phenomena", "Fundamental interactions", "Electrical engineering" ]
31,214,917
https://en.wikipedia.org/wiki/Ex-Rad
Ex-Rad (or Ex-RAD; recilisib sodium (INN, USAN); development code ON 01210.Na) is an experimental drug being developed by Onconova Therapeutics and the U.S. Department of Defense. It is being studied as a radiation protection agent. Chemically, it is the sodium salt of 4-carboxystyryl-4-chlorobenzylsulfone. Clinical trials The results of two Phase I clinical studies in healthy human volunteers indicate that subcutaneously injected Ex-Rad is safe and well tolerated, with "no evidence of systemic side effects". A study in mice demonstrated the efficacy of Ex-Rad by increasing the survival rate of mice exposed to typically lethal whole-body irradiation. The study tested oral and parenteral administration of Ex-Rad for both pre- and post-exposure radiomitigation. Research on Ex-Rad has involved collaboration with the Armed Forces Radiobiology Research Institute (AFRRI), the Department of Biochemistry and Molecular & Cellular Biology at Georgetown University, Long Island University's Arnold & Marie Schwartz College of Pharmacy, and the Department of Oncological Sciences at the Mt. Sinai School of Medicine. Mechanism of action Onconova reports that Ex-Rad protects cells exposed to radiation against DNA damage, and that the drug's mechanism of action does not involve scavenging free radicals or arresting the cell cycle. Instead, they claim it employs a "novel mechanism" involving "intracellular signaling, damage sensing, and DNA repair pathways". Ex-RAD is a chlorobenzylsulfone derivative that works after free radicals have damaged DNA. Onconova CEO Ramesh Kumar believes this is a better approach than trying to scavenge free radicals. “Free radicals are very short-lived, and so the window of opportunity to give a drug is very narrow,” he says. In cell and animal models, Ex-RAD protects hematopoietic and gastrointestinal tissues from radiation injury when given either before or after exposure. See also CBLB502 (Entolimod), a compound being studied for its ability to suppress apoptotic cell death in hematopoietic and gastrointestinal cells. Amifostine (WR2721), the first selective-target and broad-spectrum radioprotector, upregulates DNA repair Filgrastim (Neupogen), a hematopoietic countermeasure of acute radiation syndrome (ARS) Pegfilgrastim (Neulasta), longer acting than its parent, filgrastim Sargramostim (leukine), similar in use to filgrastim N-Acetylcysteine, protects against DNA damage, suggested to be comparable to amifostine Thrombomodulin Activated protein C Chelation therapy, a countermeasure for treating internal radio-isotope contamination DTPA, a chelation agent used to eliminate actinides that have been ingested, one of three U.S. Food and Drug Administration (FDA) radioprotectants stockpiled Prussian blue/radiogardase, a chelation agent to treat radio-cesium and thallium consumption, one of the three FDA radioprotectants stockpiled Potassium iodide, a prophylactic drug recommended before entering radioiodine environments, one of the three FDA radioprotectants stockpiled Kojic acid Hyaluronan Petkau effect References Radiation health effects Radiobiology Drugs with unknown mechanisms of action Experimental drugs 4-Chlorophenyl compounds Sulfones Benzoates
Ex-Rad
[ "Chemistry", "Materials_science", "Biology" ]
768
[ "Radiation health effects", "Radiobiology", "Functional groups", "Sulfones", "Radiation effects", "Radioactivity" ]
25,159,096
https://en.wikipedia.org/wiki/University%20of%20Dayton%20Research%20Institute
The University of Dayton Research Institute is the professional research arm of the University of Dayton in Dayton, Ohio. UD is ranked first among all colleges in the nation for sponsored materials research, according to statistics released by the National Science Foundation. In Ohio, UD is ranked first among nonprofit institutions for research sponsored by the Department of Defense. Facts and information The University of Dayton Research Institute (UDRI) employs 830 full-time research, technical and administrative staff. In fiscal year 2023, UDRI performed 95% of sponsored research at the University, largely contributing to UD's total research volume of $238.6 million. UDRI is nationally recognized for its research in materials, structures, sensors and autonomous systems, energy and sustainment technologies. Established as the research arm of the University of Dayton in 1956, UDRI broke the $3 billion mark in cumulative sponsored research in 2022. References External links University of Dayton Research Institute Research institutes in Ohio Nanotechnology institutions Energy research institutes Materials science institutes Companies based in Dayton, Ohio University of Dayton Economy of Dayton, Ohio Research institutes established in 1956 1956 establishments in Ohio
University of Dayton Research Institute
[ "Materials_science", "Engineering" ]
225
[ "Nanotechnology institutions", "Energy research institutes", "Materials science organizations", "Materials science institutes", "Nanotechnology", "Energy organizations" ]
25,162,957
https://en.wikipedia.org/wiki/Transposons%20as%20a%20genetic%20tool
Transposons are semi-parasitic DNA sequences which can replicate and spread through the host's genome. They can be harnessed as a genetic tool for the analysis of gene and protein function. The use of transposons is well-developed in Drosophila (in which P elements are most commonly used) and in Thale cress (Arabidopsis thaliana) and bacteria such as Escherichia coli (E. coli). Currently, transposons can be used in genetic research and recombinant genetic engineering for insertional mutagenesis. In insertional mutagenesis, transposons function as vectors to help remove and integrate genetic sequences. Given their relatively simple design and inherent ability to move DNA sequences, transposons are well suited to transducing genetic material, making them ideal genetic tools. Signature-Tagged Mutagenesis Signature-tagged mutagenesis (also known as STM) is a technique focused on using transposable element insertion to determine the phenotype of a locus in an organism's genome. While genetic sequencing techniques can determine the genotype of a genome, they cannot determine the function or phenotypic expression of gene sequences. STM can bypass this issue by mutating a locus, causing it to form a new phenotype; by comparing the observed phenotypic expressions of the mutated and unaltered locus, one can deduce the phenotypic expression of the locus. In STM, specially tagged transposons are inserted into an organism, such as a bacterium, and randomly integrated into the host genome. In theory, the modified mutant organism should express the altered gene, thus altering the phenotype. If a new phenotype is observed, the genome is sequenced and searched for tagged transposons. If the site of transposon integration is found, then the locus may be responsible for expressing the observed phenotype. There have been many studies conducted using transposon-based STM, most notably with the P elements in Drosophila. P elements are transposons originally described in the Drosophila melanogaster genome, capable of being artificially synthesized or spread to other Drosophila species through horizontal transfer. In experimental trials, artificially created P elements and transposase genes are inserted into the genomes of Drosophila embryos. Subsequently, embryos that exhibit mutations have their genomes sequenced and compared, thus revealing the loci that have been affected by insertion and the roles of those loci. Insertional Inactivation Insertional inactivation focuses on suppressing the expression of a gene by disrupting its sequence with an insertion. When additional nucleotides are inserted near or into a locus, the locus can suffer a frameshift mutation that could prevent it from being properly expressed as a polypeptide chain. Transposon-based insertional inactivation is being considered for medical research, from the suppression of antibiotic resistance in bacteria to the treatment of genetic diseases. In the treatment of genetic diseases, the insertion of a transposon into a deleterious gene locus of an organism's genome would disrupt the locus sequence, truncating any harmful proteins formed and rendering them non-functional. Alternatively, insertional inactivation could be used to suppress genes that confer antibiotic resistance in bacteria. Sleeping Beauty While transposons have been used successfully in plants and invertebrate subjects through insertional mutagenesis and insertional inactivation, the usage of transposons in vertebrates has been limited due to a lack of transposons specific to vertebrates.
Nearly all transposons compatible with and present within vertebrate genomes are inactive and are often relegated to "junk" DNA. However, it is possible to identify dormant transposons and artificially recreate them as active agents. Genetic researchers Zsuzsanna Izsvák and Zoltán Ivics discovered a fish transposon sequence that, despite being dormant for 15 million years, could be resurrected as a vector for introducing foreign genes into vertebrate genomes, including those of humans. This transposon, called Sleeping Beauty, was described in 1997 and could be artificially reactivated into a functioning transposon. Sleeping Beauty can also be viable in gene therapy procedures by helping introduce beneficial transgenes into host genomes. Belcher et al. tested this notion by using Sleeping Beauty transposons to help insert sequences into mice with sickle cell anemia so that they could produce the enzymes needed to counteract their anemia. Belcher et al. began their experiment by constructing a genetic sequence consisting of the Hmox-1 transposable element and transposase from Sleeping Beauty. This sequence was then inserted into a plasmid and introduced into the cells of the mice. The transposase from Sleeping Beauty helped insert the Hmox-1 transposon into the mouse genome, allowing the production of the enzyme heme oxygenase-1 (HO-1). The mice that received the insertion showed a fivefold increase in the expression of HO-1, which in turn reduced blood vessel blockage from sickle-cell anemia. The publication of the experiment in 2010 showed that transposons can be useful in gene therapy. P Elements as a tool (Drosophila) Naturally occurring P elements contain: coding sequence for the enzyme transposase; recognition sequences for transposase action. Transposase is an enzyme which regulates and catalyzes the excision of a P element from the host DNA, cutting at two recognition sites, and then reinserts the P element randomly. It is this random-insertion process, which can interfere with existing genes or carry an additional gene, that can be exploited for genetic research. To use this process as a useful and controllable genetic tool, the two parts of the P element must be separated to prevent uncontrolled transposition. The normal genetic tools are therefore: DNA coding for transposase (or occasionally simply transposase) with no transposase recognition sequences, so it cannot insert; and a "P Plasmid". P Plasmids always contain: a Drosophila reporter gene, often a red-eye marker (the product of the white gene); transposase recognition sequences; and may contain: a gene of interest; an E. coli reporter gene (often some kind of antibiotic resistance); an origin of replication and other associated plasmid 'housekeeping' sequences. Methods of usage (Drosophila) (Forward genetics methods) There are two main ways to utilise these tools: Fly Transformation and Insertional Mutagenesis, each described below. Fly Transformation (hoping for insertion in non-coding regions) Microinject the posterior end of an early-stage (pre-cellularization) embryo with DNA coding for transposase and a plasmid with the reporter gene, gene of interest and transposase recognition sequences. Random transposition occurs, inserting the gene of interest and reporter gene. Grow flies and cross to remove genetic variation between the cells of the organism. (Only some of the cells of the organism will have been transformed. By breeding, only the genotype of the gametes is passed on, removing this variation.)
Look for flies expressing the reporter gene. These carry the inserted gene of interest, so they can be investigated to determine the phenotype due to the gene of interest. It is important to note that the inserted gene may have damaged the function of one of the host's genes. Several lines of flies are required so that comparison can take place and ensure that no additional genes have been knocked out. Insertional Mutagenesis (hoping for insertion in coding region) Microinject the embryo with DNA coding for transposase and a plasmid with the reporter gene and transposase recognition sequences (and often the E. coli reporter gene and origin of replication, etc.). Random transposition occurs, inserting the reporter gene randomly. The insertion tends to occur near actively transcribed genes, as this is where the chromatin structure is loosest, so the DNA is most accessible. Grow flies and cross to remove genetic variation between the cells of the organism (see above). Look for flies expressing the reporter gene. These have experienced a successful transposition, so they can be investigated to determine the phenotype due to mutation of existing genes. Possible mutations: Insertion in a translated region => hybrid protein/truncated protein. Usually causes loss of protein function, although more complex effects are seen. Insertion in an intron => altered splicing pattern/splicing failure. Usually results in protein truncation or the production of inactive mis-spliced products, although more complex effects are common. Insertion in the 5' untranslated region (the sequence that will become the mRNA 5' UTR) => truncation of transcript. Usually results in failure of the mRNA to contain a 5' cap, leading to less efficient translation. Insertion in promoter => reduction/complete loss of expression. Always results in greatly reduced protein production levels. This is the most useful type of insertion for analysis due to the simplicity of the situation. Insertion between promoter and upstream enhancers => loss of enhancer function/hijack of enhancer function for reporter gene.† Generally reduces the level of protein specificity to cell type, although complex effects are often seen. † Enhancer trapping The hijack of an enhancer from another gene allows the analysis of the function of that enhancer. This, especially if the reporter gene is for a fluorescent protein, can be used to help map expression of the mutated gene through the organism, and is a very powerful tool. Other usage of P Elements (Drosophila) (Reverse genetics method) Secondary mobilisation If there is an old P element near the gene of interest (with a broken transposase), you can remobilise it by microinjection of the embryo with DNA coding for transposase or with transposase itself. The P element will often transpose to within a few kilobases of the original location, hopefully affecting your gene of interest as for 'Insertional Mutagenesis'. Analysis of Mutagenesis Products (Drosophila) Once the function of the mutated protein has been determined, it is possible to sequence/purify/clone the regions flanking the insertion by the following methods: Inverse PCR Isolate the fly genome. Undergo a light digest (using an enzyme [enzyme 1] known NOT to cut in the reporter gene), giving fragments of a few kilobases, a few with the insertion and its flanking DNA. Self ligate the digest (low DNA concentration to ensure self ligation), giving a selection of circular DNA fragments, a few with the insertion and its flanking DNA.
Cut the plasmids at some point in the reporter gene (with an enzyme [enzyme 2] known to cut very rarely in genomic DNA, but known to cut in the reporter gene). Using primers for the reporter gene sections, the DNA can be amplified for sequencing. The process of cutting, self ligation and re-cutting allows the amplification of the flanking regions of DNA without knowing the sequence. The point at which the ligation occurred can be seen by identifying the cut site of [enzyme 1]. Plasmid Rescue (E. coli Transformation) Isolate the fly genome. Undergo a light digest (using an enzyme [enzyme 1] known to cut at the boundary between the reporter gene and the E. coli reporter gene and plasmid sequences), giving fragments of a few kilobases, a few with the E. coli reporter, the plasmid sequences and its flanking DNA. Self ligate the digest (low DNA concentration to ensure self ligation), giving a selection of circular DNA fragments, a few with the E. coli reporter, the plasmid sequences and its flanking DNA. Insert the plasmids into E. coli cells (e.g. by electroporation). Screen plasmids for the E. coli reporter gene. Only successful inserts of plasmids with the plasmid 'housekeeping' sequences will express this gene. The gene can then be cloned for further analysis. Transposable Element Application Other organisms The genomes of other organisms can be analysed in a similar way, although with different transposable elements. The recent discovery of the 'mariner transposon' (from the reconstruction of the original sequence from many 'dead' versions in the human genome) has allowed many new experiments; mariner has well-conserved homologues across a wide range of species and is a very versatile tool. References Further reading Mobile genetic elements
Transposons as a genetic tool
[ "Biology" ]
2,643
[ "Molecular genetics", "Mobile genetic elements" ]
25,163,388
https://en.wikipedia.org/wiki/Ho%C5%99ava%E2%80%93Lifshitz%20gravity
Hořava–Lifshitz gravity (or Hořava gravity) is a theory of quantum gravity proposed by Petr Hořava in 2009. It addresses the problem of the differing concepts of time in quantum field theory and general relativity by treating the quantum concept as the more fundamental one, so that space and time are not equivalent (anisotropic) at high energies. The relativistic concept of time with its Lorentz invariance emerges at large distances. The theory relies on the theory of foliations to produce its causal structure. It is related to topologically massive gravity and the Cotton tensor. It is a possible UV completion of general relativity. Also, the speed of light goes to infinity at high energies. The novelty of this approach, compared to previous approaches to quantum gravity such as loop quantum gravity, is that it uses concepts from condensed matter physics such as quantum critical phenomena. Hořava's initial formulation was found to have side-effects, such as predicting very different results for a spherical Sun compared to a slightly non-spherical Sun, so others have modified the theory. Inconsistencies remain, though progress has been made on the theory. Nevertheless, observations of gravitational waves emitted by the neutron-star merger GW170817 contravene predictions made by this model of gravity. Some have revised the theory to account for this. The detailed balance condition Hořava originally required the theory to satisfy a detailed balance condition, which considerably reduces the number of terms in the action. See also Superfluid vacuum theory Causal dynamical triangulation Problem of time References External links Zeeya Merali, "Splitting Time from Space—New Quantum Theory Topples Einstein's Spacetime". Scientific American, December, 2009 Quantum gravity Theories of gravity
Hořava–Lifshitz gravity
[ "Physics" ]
360
[ "Theoretical physics", "Unsolved problems in physics", "Theory of relativity", "Quantum gravity", "Relativity stubs", "Theories of gravity", "Physics beyond the Standard Model" ]
25,164,088
https://en.wikipedia.org/wiki/Kleptothermy
In biology, kleptothermy is any form of thermoregulation by which an animal shares in the metabolic thermogenesis of another animal. It may or may not be reciprocal, and it occurs in both endotherms and ectotherms. One of its forms is huddling. However, kleptothermy can happen between different species that share the same habitat, and can also happen in pre-hatching life, where embryos are able to detect thermal changes in the environment. This process requires two major conditions: the thermal heterogeneity created by the presence of a warm organism in a cool environment, and the use of that heterogeneity by another animal to maintain body temperatures at higher (and more stable) levels than would be possible elsewhere in the local area. The purpose of this behaviour is to enable these groups to increase their thermal inertia, retard heat loss and/or reduce the per capita metabolic expenditure needed to maintain stable body temperatures. Kleptothermy is seen in cases where ectotherms regulate their own temperatures and exploit the high and constant body temperatures exhibited by endothermic species. In this case, the endotherms involved are not only mammals and birds; they can be termites that maintain high and constant temperatures within their mounds, where they provide thermal regimes that are exploited by a wide array of lizards, snakes and crocodilians. However, many cases of kleptothermy involve ectotherms sheltering inside the burrows used by endotherms to help maintain a high constant body temperature. Huddling Huddling confers higher and more constant body temperatures than solitary resting. Some species of ectotherms, including lizards and snakes such as boa constrictors and tiger snakes, increase their effective mass by clustering tightly together. It is also widespread amongst gregarious endotherms such as bats and birds (such as the mousebird and emperor penguin), where it allows the sharing of body heat, particularly among juveniles. In white-backed mousebirds (Colius colius), individuals maintain rest-phase body temperatures above 32 °C despite air temperatures as low as -3.4 °C. This rest-phase body temperature was synchronized among individuals that cluster. Sometimes, kleptothermy is not reciprocal and might be accurately described as heat-stealing. For example, some male Canadian red-sided garter snakes engage in female mimicry in which they produce fake pheromones after emerging from hibernation. This causes rival males to cover them in a mistaken attempt to mate, and so transfer heat to them. In turn, those males that mimic females become rapidly revitalized after hibernation (which depends upon raising their body temperature), giving them an advantage in their own attempts to mate. On the other hand, huddling allows emperor penguins (Aptenodytes forsteri) to save energy, maintain a high body temperature and sustain their breeding fast during the Antarctic winter. This huddling behaviour raises the ambient temperature that these penguins are exposed to above 0 °C (at average external temperatures of -17 °C). As a consequence of tight huddles, ambient temperatures can be above 20 °C and can increase up to 37.5 °C, close to the birds' body temperature. Therefore, this complex social behaviour gives all breeders equal and normal access to an environment which allows them to save energy and successfully incubate their eggs during the Antarctic winter.
Habitat sharing Many ectotherms exploit the heat produced by endotherms by sharing their nests and burrows. For example, mammal burrows are used by geckos, and seabird burrows by Australian tiger snakes and New Zealand tuatara. Termites create high and regulated temperatures in their mounds, and this is exploited by some species of lizards, snakes and crocodiles. Research has shown that such kleptothermy can be advantageous in cases such as the blue-lipped sea krait (Laticauda laticaudata), where these reptiles occupy a burrow of a pair of wedge-tailed shearwaters incubating their chick. This, in turn, raises its body temperature to , compared to when present in other habitats. Its body temperature is also observed to be more stable. On the other hand, burrows without birds did not provide this heat, being only . Another example is the fairy prion (Pachyptila turtur), which forms a close association with a medium-sized reptile, the tuatara (Sphenodon punctatus). These reptiles share the burrows made by the birds, and often stay when the birds are present, which helps maintain a higher body temperature. Research has shown that fairy prions enable tuatara to maintain a higher body temperature through the night for several months of the year, October to January (austral spring to summer). During the night, tuatara sharing a burrow with a bird gained the most thermal benefit, which helped maintain their body temperature for up to 15 hours the next day. Pre-hatching life Research done on embryos of Chinese softshell turtles (Pelodiscus sinensis) falsifies the assumption that behavioural thermoregulation is possible only for post-hatching stages of the reptile life history. Remarkably, even undeveloped and tiny embryos were able to detect thermal differentials within the egg and move to exploit that small-scale heterogeneity. Research has shown that this behaviour exhibited by reptile embryos may well enhance offspring fitness: movements of these embryos enabled them to maximize heat gain from their surroundings and thus increase their body temperatures. This in turn leads to variation in the embryonic development rate and the incubation period as well. This could benefit the embryos, since warmer incubation increases the developmental rate and therefore accelerates the hatching process. On the other hand, decreased incubation periods may also minimize the embryo's exposure to risks of nest predation or lethally extreme thermal conditions; embryos move to cooler regions of the egg during periods of dangerously high temperatures. In addition, embryonic thermoregulation could enhance hatchling fitness via modifications to a range of phenotypic traits: embryos with minimal temperature differences hatch at the same time, decreasing each individual's risk of predation. Therefore, the developmental rates of reptile embryos are not passive consequences of maternally enforced decisions about the temperatures that the embryo will experience before hatching. Instead, the embryo's behaviour and physiology combine, allowing the smallest embryos to control aspects of their own pre-hatching environment, showing that the embryo is not simply a work in progress, but a functioning organism with surprisingly sophisticated and effective behaviours. Evolution Ectotherms and endotherms have followed different evolutionary trajectories, with mammals and birds thermoregulating far more precisely than ectotherms.
A major benefit of precise thermoregulation is the ability to enhance performance through thermal specialization. Therefore, mammals and birds are assumed to have evolved relatively narrow performance breadths. Thus, the heterothermy of these endotherms would lead to losses of performance during certain periods, and genetic variation in thermosensitivity would therefore enable the evolution of thermal generalists in more heterothermic species. The physiologies of endotherms allow them to adapt within the constraints imposed by genetics, development, and physics. On the other hand, the mechanisms for thermoregulation did not evolve separately, but rather in connection with other functions. These mechanisms were more likely quantitative rather than qualitative, and they involved selection of appropriate habitats, changes in levels of locomotor activity, optimum energy liberation, and conservation of metabolic substrates. The evolution of endothermy is directly linked to the selection for high levels of activity sustained by aerobic metabolism. The evolution of the complex behaviour patterns among birds and mammals required the prior evolution of metabolic systems capable of supporting such activity. Endothermy in vertebrates evolved along separate, but parallel, lines from different groups of reptilian ancestors. The advantages of endothermy are manifested in the ability to occupy thermal environments that exclude many ectothermic vertebrates, a high degree of thermal independence from environmental temperature, high muscular power output and sustained levels of activity. Endothermy, however, is energetically very expensive and requires a great deal of food, compared with ectotherms, in order to support high metabolic rates. See also Rat king References Animal physiology Parasitism Heat transfer Thermoregulation
Kleptothermy
[ "Physics", "Chemistry", "Biology" ]
1,772
[ "Transport phenomena", "Physical phenomena", "Symbiosis", "Heat transfer", "Animals", "Animal physiology", "Parasitism", "Thermoregulation", "Thermodynamics", "Homeostasis" ]
25,164,285
https://en.wikipedia.org/wiki/Christoffel%E2%80%93Darboux%20formula
In mathematics, the Christoffel–Darboux formula or Christoffel–Darboux theorem is an identity for a sequence of orthogonal polynomials, introduced by Elwin Bruno Christoffel (1858) and Jean Gaston Darboux (1878). It states that

$$\sum_{j=0}^{n} \frac{f_j(x)\,f_j(y)}{h_j} = \frac{k_n}{h_n k_{n+1}}\,\frac{f_{n+1}(x)\,f_n(y) - f_n(x)\,f_{n+1}(y)}{x - y},$$

where $f_j(x)$ is the jth term of a set of orthogonal polynomials of squared norm $h_j$ and leading coefficient $k_j$. There is also a "confluent form" of this identity, obtained by taking the limit $y \to x$:

$$\sum_{j=0}^{n} \frac{f_j^2(x)}{h_j} = \frac{k_n}{h_n k_{n+1}}\left[f_{n+1}'(x)\,f_n(x) - f_n'(x)\,f_{n+1}(x)\right].$$

Proof Let $(p_n)_{n \ge 0}$ be a sequence of polynomials orthonormal with respect to a probability measure $\mu$, and define

$$a_n = \langle x p_n, p_n \rangle, \qquad b_n = \langle x p_n, p_{n-1} \rangle$$

(they are called the "Jacobi parameters"); then we have the three-term recurrence

$$x\,p_n(x) = b_{n+1}\,p_{n+1}(x) + a_n\,p_n(x) + b_n\,p_{n-1}(x).$$

Proof: By definition, $\langle x p_n, p_k \rangle = \langle p_n, x p_k \rangle$, so if $k \le n-2$, then $x p_k$ is a linear combination of $p_0, \dots, p_{n-1}$, and thus $\langle x p_n, p_k \rangle = 0$. So, to construct $p_{n+1}$, it suffices to perform the Gram–Schmidt process on $x p_n$ using $p_n$ and $p_{n-1}$, which yields the desired recurrence.

Proof of Christoffel–Darboux formula: Since both sides are unchanged by multiplying with a constant, we can scale each $f_n$ to $p_n$. The recurrence gives $(x - y)\,p_j(x)\,p_j(y) = b_{j+1}\left[p_{j+1}(x)\,p_j(y) - p_j(x)\,p_{j+1}(y)\right] - b_j\left[p_j(x)\,p_{j-1}(y) - p_{j-1}(x)\,p_j(y)\right]$. Now the Christoffel–Darboux formula is proved by induction, summing this identity over $j$ and using the three-term recurrence. Specific cases Hermite polynomials:

$$\sum_{j=0}^{n} \frac{H_j(x)\,H_j(y)}{2^j\,j!} = \frac{1}{2^{n+1}\,n!}\,\frac{H_{n+1}(x)\,H_n(y) - H_n(x)\,H_{n+1}(y)}{x - y}.$$

Associated Legendre polynomials: See also Turán's inequalities Sturm Chain References Orthogonal polynomials Functional analysis
Christoffel–Darboux formula
[ "Mathematics" ]
277
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Mathematical analysis stubs", "Mathematical objects", "Mathematical relations" ]
25,168,088
https://en.wikipedia.org/wiki/Johnson%27s%20figure%20of%20merit
Johnson's figure of merit is a measure of the suitability of a semiconductor material for high-frequency power transistor applications and requirements. More specifically, it is the product of the charge carrier saturation velocity in the material and the electric breakdown field under the same conditions, first proposed by Edward O. Johnson of RCA in 1965. Note that this figure of merit (FoM) is applicable to both field-effect transistors (FETs) and, with proper interpretation of the parameters, also to bipolar junction transistors (BJTs). Example materials JFM figures vary wildly between sources; see the external links. External links Gallium Nitride as an Electromechanical Material. R-Z. IEEE 2014. Table IV (p. 5) lists JFM (relative to Si): Si = 1, GaAs = 2.7, SiC = 20, InP = 0.33, GaN = 27.5; it also shows Vsat and Ebreakdown. Why diamond? gives very different figures (but no refs): Si = 1, GaAs = 11, GaN = 790, SiC = 410, diamond = 5800. References Semiconductors
Johnson's figure of merit
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
235
[ "Electrical resistance and conductance", "Materials science stubs", "Physical quantities", "Semiconductors", "Materials science", "Materials", "Electronic engineering", "Condensed matter physics", "Solid state engineering", "Matter" ]
25,169,288
https://en.wikipedia.org/wiki/Reservoir%20modeling
In the oil and gas industry, reservoir modeling involves the construction of a computer model of a petroleum reservoir, for the purposes of improving estimation of reserves and making decisions regarding the development of the field, predicting future production, placing additional wells and evaluating alternative reservoir management scenarios. A reservoir model represents the physical space of the reservoir by an array of discrete cells, delineated by a grid which may be regular or irregular. The array of cells is usually three-dimensional, although 1D and 2D models are sometimes used. Values for attributes such as porosity, permeability and water saturation are associated with each cell. The value of each attribute is implicitly deemed to apply uniformly throughout the volume of the reservoir represented by the cell. Types of reservoir model Reservoir models typically fall into two categories: Geological models are created by geologists and geophysicists and aim to provide a static description of the reservoir, prior to production. Reservoir simulation models are created by reservoir engineers and use finite difference methods to simulate the flow of fluids within the reservoir, over its production lifetime. Sometimes a single "shared earth model" is used for both purposes. More commonly, a geological model is constructed at a relatively high (fine) resolution. A coarser grid for the reservoir simulation model is constructed, with perhaps two orders of magnitude fewer cells. Effective values of attributes for the simulation model are then derived from the geological model by an upscaling process. Alternatively, if no geological model exists, the attribute values for a simulation model may be determined by a process of sampling geological maps. Uncertainty in the true values of the reservoir properties is sometimes investigated by constructing several different realizations of the sets of attribute values. The behaviour of the resulting simulation models can then indicate the associated level of economic uncertainty. The phrase "reservoir characterization" is sometimes used to refer to reservoir modeling activities up to the point when a simulation model is ready to simulate the flow of fluids. Commercially available software is used in the construction, simulation and analysis of the reservoir models. Seismic to simulation The processes required to construct reservoir models are described by the phrase Seismic to simulation. The process is successful if the model accurately reflects the original well logs, seismic data and production history. Reservoir models are constructed to gain a better understanding of the subsurface that leads to informed well placement, reserves estimation and production planning. Models are based on measurements taken in the field, including well logs, seismic surveys, and production history. Seismic to simulation enables the quantitative integration of all field data into an updateable reservoir model built by a team of geologists, geophysicists, and engineers. Key techniques used in the process include integrated petrophysics and rock physics to determine the range of lithotypes and rock properties, geostatistical inversion to determine a set of plausible seismic-derived rock property models at sufficient vertical resolution and heterogeneity for flow simulation, stratigraphic grid transfer to accurately move seismic-derived data to the geologic model, and flow simulation for model validation and ranking to determine the model that best fits all the data. 
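The upscaling step described above is often introduced with simple volume-averaging rules before moving to flow-based methods. The following is a minimal Python sketch with made-up grid dimensions and synthetic property values (nothing here comes from a real field study); it uses arithmetic averaging for porosity, a volumetric property, and geometric averaging as a common crude choice for permeability:

```python
# Minimal sketch of upscaling a fine geological grid to a coarse
# simulation grid. Grid sizes and property values are synthetic.
import numpy as np

rng = np.random.default_rng(42)

# Fine geological grid: 100 x 100 x 40 cells.
fine_poro = rng.uniform(0.05, 0.30, (100, 100, 40))
fine_perm = rng.lognormal(mean=3.0, sigma=1.0, size=(100, 100, 40))  # mD

def upscale(prop, factor=(10, 10, 4), method="arithmetic"):
    """Average blocks of fine cells into single coarse cells."""
    nx, ny, nz = (s // f for s, f in zip(prop.shape, factor))
    blocks = prop.reshape(nx, factor[0], ny, factor[1], nz, factor[2])
    if method == "arithmetic":      # suitable for volumetric properties
        return blocks.mean(axis=(1, 3, 5))
    if method == "geometric":       # a crude stand-in for flow-based upscaling
        return np.exp(np.log(blocks).mean(axis=(1, 3, 5)))
    raise ValueError(method)

coarse_poro = upscale(fine_poro)                      # 10 x 10 x 10 cells
coarse_perm = upscale(fine_perm, method="geometric")
print(coarse_poro.shape, coarse_perm.shape)
```

Real workflows replace these averages with single-phase flow simulations over each coarse block, since effective permeability depends on boundary conditions and connectivity, not just on the cell values.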
Rock physics and petrophysics The first step in seismic to simulation is establishing a relationship between key petrophysical rock properties and the elastic properties of the rock. This is required in order to find common ground between the well logs and seismic data. Well logs are measured in depth and provide high-resolution vertical data, but no insight into the inter-well space. Seismic data are measured in time and provide great lateral detail, but are quite limited in vertical resolution. When correlated, well logs and seismic data can be used to create a fine-scale 3D model of the subsurface. Insight into the rock properties comes from a combination of basic geologic understanding and well-bore measurements. Based on an understanding of how the area was formed over time, geologists can predict the types of rock likely to be present and how rapidly they vary spatially. Well log and core measurements provide samples to verify and fine-tune that understanding. Seismic data are used by petrophysicists to identify the tops of various lithotypes and the distribution of rock properties in the inter-well space using seismic inversion attributes such as impedance. Seismic surveys measure acoustic impedance contrasts between rock layers. As different geologic structures are encountered, the sound wave reflects and refracts as a function of the impedance contrast between the layers. Acoustic impedance varies by rock type and can therefore be correlated to rock properties using rock physics relationships between the inversion attributes and petrophysical properties such as porosity, lithology, water saturation, and permeability. Once well logs are properly conditioned and edited, a petrophysical rock model is generated that can be used to derive the effective elastic rock properties from fluid and mineral parameters as well as rock structure information. The model parameters are calibrated by comparing the synthetic response to the available elastic sonic logs. Calculations are performed following a number of rock physics algorithms, including Xu & White, Greenberg & Castagna, Gassmann, Gardner, modified upper and lower Hashin-Shtrikman, and Batzle & Wang. When the petrophysical rock model is complete, a statistical database is created to describe the rock types and their known properties such as porosity and permeability. Lithotypes are described, along with their distinct elastic properties. MCMC geostatistical inversion In the next step of seismic to simulation, seismic inversion techniques combine well and seismic data to produce multiple equally plausible 3D models of the elastic properties of the reservoir. Seismic data are transformed to elastic property logs at every trace. Deterministic inversion techniques are used to provide a good overall view of the porosity over the field, and serve as a quality control check. To obtain the greater detail needed for complex geology, additional stochastic inversion is then employed. Geostatistical inversion procedures detect and delineate thin reservoirs that are otherwise poorly defined. Markov chain Monte Carlo (MCMC) based geostatistical inversion addresses the vertical scaling problem by creating seismic-derived rock properties with vertical sampling compatible with geologic models. All field data are incorporated into the geostatistical inversion process through the use of probability distribution functions (PDFs). 
Each PDF describes a particular input dataset in geostatistical terms using histograms and variograms, which identify the odds of a given value at a specific place and the overall expected scale and texture based on geologic insight. Once constructed, the PDFs are combined using Bayesian inference, resulting in a posterior PDF that conforms to everything that is known about the field. A weighting system is used within the algorithm, making the process more objective. From the posterior PDF, realizations are generated using a Markov chain Monte Carlo algorithm. These realizations are statistically fair and produce models of high detail, accuracy and realism. Rock properties like porosity can be cosimulated from the elastic properties determined by the geostatistical inversion. This process is iterated until a best-fit model is identified. Inversion parameters are tuned by running the inversion many times with and without well data. Without the well data, the inversions are run in blind-well mode. These blind-well mode inversions test the reliability of the constrained inversion and remove potential bias. This statistical approach creates multiple, equiprobable models consistent with the seismic, wells, and geology. Geostatistical inversion simultaneously inverts for impedance and discrete property types, and other petrophysical properties such as porosity can then be jointly cosimulated. The output volumes are at a sample rate consistent with the reservoir model, so synthetics made from the finely sampled models compare directly with those made from well logs. Inversion properties are consistent with well log properties because the histograms used to generate the output rock properties from the inversion are based on well log values for those rock properties. Uncertainty is quantified by using random seeds to generate slightly differing realizations, particularly for areas of interest. This process improves the understanding of uncertainty and risk within the model. Stratigraphic grid transfer Following geostatistical inversion and in preparation for history matching and flow simulation, the static model is re-gridded and upscaled. The transfer simultaneously converts time to depth for the various properties and transfers them in 3D from the seismic grid to a corner-point grid. The relative locations of properties are preserved, ensuring that data points in the seismic grid arrive in the correct stratigraphic layer in the corner-point grid. The static model built from seismic is typically orthogonal, but flow simulators expect corner-point grids. The corner-point grid consists of cells that are usually much coarser in the horizontal direction, and each corner of a cell may be placed arbitrarily so as to follow the major features in the grid. Converting directly from orthogonal to corner point can cause problems such as discontinuities in fluid flow. An intermediate stratigraphic grid ensures that important structures are not misrepresented in the transfer. The stratigraphic grid has the same number of cells as the orthogonal seismic grid, but its boundaries are defined by stratigraphic surfaces and the cells follow the stratigraphic organization. This is a stratigraphic representation of the seismic data using the seismic interpretation to define the layers. The stratigraphic grid model is then mapped to the corner-point grid by adjusting the zones. Using the porosity and permeability models and a saturation height function, initial saturation models are built. 
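As a sketch of the saturation-height step just mentioned: the function below is a generic power-law saturation-height relation, not the specific function used in any particular study; the parameter names and default values are illustrative assumptions.

```python
import numpy as np

def saturation_height(h, sw_irr=0.15, h_entry=2.0, b=0.5):
    """Illustrative power-law saturation-height function.

    h: height above the free-water level (m) for each cell.
    sw_irr: irreducible water saturation; h_entry: capillary entry height (m);
    b: exponent shaping the transition zone.
    Below the entry height the rock is treated as fully water saturated.
    """
    h = np.asarray(h, dtype=float)
    sw = np.ones_like(h)
    above = h > h_entry
    sw[above] = sw_irr + (1.0 - sw_irr) * (h[above] / h_entry) ** (-b)
    return sw

heights = np.array([0.5, 2.0, 10.0, 50.0])
print(saturation_height(heights))  # Sw decreases with height above the contact
```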
If volumetric calculations identify problems in the model, changes are made in the petrophysical model without causing the model to stray from the original input data. For example, sealing faults are added for greater compartmentalization. Model validation and ranking In the last step of seismic to simulation, flow simulation continues the integration process by bringing in the production history. This provides a further validation of the static model against history. A representative set of the model realizations from the geostatistical inversion are history matched against production data. If the properties in the model are realistic, simulated well bottom hole pressure behavior should match historical (measured) well bottom hole pressure. Production flow rates and other engineering data should also match. Based on the quality of the match, some models are eliminated. After the initial history match process, dynamic well parameters are adjusted as needed for each of the remaining models to improve the match. The final model represents the best match to original field measurements and production data and is then used in drilling decisions and production planning. See also Extraction of petroleum Petroleum engineering Computer simulation Reservoir simulator Rise in Core References Further reading "Building Highly Detailed, Realistic 3D Numerical Models of Rock and Reservoir Properties: Rigorous Incorporation of All Data Reduces Uncertainty", Fugro-Jason White Paper, 2008. Contreras, A., Torres-Verdin, C., "AVA sensitivity analysis and inversion of 3D pre-stack seismic data to delineate a mixed carbonate-siliciclas tic reservoir in the Barinas-Apure Basin, Venezuela". Contreras, A., Torres-Verdin, C., Kvien, K., Fasnacht, T., Chesters, W., "AVA Stochastic Inversion of Pre-Stack Seismic Data and Well Logs for 3D Reservoir Modeling", EAGE 2005. Pyrcz, M.J. and Deutsch, C. Geostatistical Reservoir Modeling, New York: Oxford University Press, 2014, 448 pages. Jarvis, K., Folkers, A., Saussus, D., "Reservoir compartment prediction of the Simpson field from the geostatistical inversion of AVO seismic data", ASEG 2007. Leggett, M., Chesters, W., "Joint AVO Inversion with Geostatistical Simulation", CSEG National Convention, 2005. Sams, M., Saussus, D., "Comparison of uncertainty estimates from deterministic and geostatistical inversion", SEG Annual Conference, 2008. Soni, S., Littmann, W., Timko, D., Karkooti, H., Karimi, S., Kazemshiroodi, S. "An Integrated Case Study from Seismic to Simulation through Geostatistical Inversion", SPE 118178. Stephen, K., MacBeth, C. "Reducing Reservoir Prediction Uncertainty by Updating a Stochastic Model Using Seismic History Matching", SPE Reservoir Evaluation & Engineering, December 2008. Zou, Y., Bentley, L., Lines, L. "Integration of reservoir simulation with time-lapse seismic modeling", 2004 CSEG National Convention. Economic geology Geostatistics Geophysics Geology software Petroleum geology Geologic modelling Seismology
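A minimal sketch of the validation-and-ranking step described above: realizations are scored by the root-mean-square mismatch between simulated and measured bottom-hole pressures, and the poorest matches are eliminated. The pressure series and realization names are invented for illustration.

```python
import numpy as np

def rms_misfit(simulated, observed):
    """Root-mean-square mismatch between simulated and measured pressures."""
    diff = np.asarray(simulated) - np.asarray(observed)
    return float(np.sqrt(np.mean(diff ** 2)))

def rank_realizations(realizations, observed, keep=2):
    """Rank model realizations by pressure misfit; keep only the best few."""
    scored = sorted(realizations.items(),
                    key=lambda kv: rms_misfit(kv[1], observed))
    return scored[:keep]

observed = [3500.0, 3420.0, 3370.0, 3310.0]   # measured BHP history (psi)
realizations = {                               # simulated BHP per realization
    "real_01": [3510.0, 3400.0, 3390.0, 3290.0],
    "real_02": [3600.0, 3550.0, 3500.0, 3450.0],
    "real_03": [3495.0, 3425.0, 3365.0, 3315.0],
}
for name, series in rank_realizations(realizations, observed):
    print(name, round(rms_misfit(series, observed), 1))
```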
Reservoir modeling
[ "Physics", "Chemistry" ]
2,618
[ "Petroleum", "Petroleum geology", "Applied and interdisciplinary physics", "Geophysics" ]
42,032,211
https://en.wikipedia.org/wiki/Deep%20hole%20drilling%20measurement%20technique
The deep hole drilling (DHD) measurement technique is a residual stress measurement technique used to measure locked-in and applied stresses in engineering materials and components. DHD is a semi-destructive mechanical strain relaxation (MSR) technique, which seeks to measure the distribution of stresses along the axis of a drilled reference hole. The process is unique in its ability to measure residual stresses at a microscopic level with a penetration of over , without total destruction of the original component. Deep hole drilling is considered deep in comparison to other hole drilling techniques such as centre hole drilling. Technique overview DHD involves drilling a hole through the thickness of the component, measuring the diameter of the hole, trepanning (cutting a circular slot around the hole) a core of material from around the hole and finally re-measuring the diameter of the hole. For engineering metals, the trepanning process is typically performed using electrical discharge machining (EDM) to minimise the introduction of further stresses during the cutting. The differences between the measured diameters before and after stress release enable the original residual stresses to be calculated using elasticity theory. An animated YouTube video explaining the DHD technique can be viewed here: YouTube: Deep Hole Drilling Technique. DHD procedure Firstly, reference bushes are attached to the front and back surfaces of the component at the measurement location, to minimise "bell-mouthing" and assist with aligning the data sets during analysis. A reference hole is then drilled through the component; in engineering metals, a gun-drill is typically used due to the smooth and straight hole profile it produces. After drilling, the diameter of the reference hole is measured at frequent intervals along the full length and circumference of the measurement and reference bushes with an air probe. This is a thin rod with pressurised air forced from its end via two small holes normal to the reference hole axis. As the air probe is moved through the hole, changes in hole diameter result in changes in pressure, which are detected with a calibrated transducer that converts the pressure change into a voltage. A cylinder (i.e. a core) of material containing the reference hole along its axis is then cut (trepanned) from the component using electro-discharge machining (EDM), in order to relax the stresses acting on the reference hole. Finally, the diameter of the reference hole is re-measured through the entire thickness of the cylinder and reference bushes, with the diameter measurements taken at the same locations as those measured prior to the trepanning. Incremental DHD technique (iDHD) If high-magnitude residual stresses (>60% of yield stress) are present in the component, then the DHD technique can be modified to account for plastic behaviour during the stress relief process. The risk of plastic deformation during stress relaxation is a problem in hole drilling techniques due to the approximately threefold stress concentration factor of holes, which effectively "amplifies" the stress relaxation and increases the chance of yielding. Therefore, for iDHD, the procedure is performed incrementally, with the core being cut (trepanned) in several steps of increasing depth and the diameter measurements being performed between each step. The analysis then incorporates this sequence of incremental distortions for calculating the high-magnitude residual stresses. 
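As a simplified illustration of how measured diametral distortions can be converted to in-plane stresses, the sketch below assumes the plane-stress relation for relief of remote stresses around a circular hole (the Kirsch solution), which is an idealisation of the full DHD analysis rather than the published DHD procedure itself; the angles, modulus and stress values are illustrative.

```python
import numpy as np

def dhd_plane_stress_fit(theta_deg, dd_over_d, E):
    """Least-squares fit of in-plane residual stresses from hole distortions.

    Assumes the plane-stress relation for relief of remote stresses
    around a circular hole:
        dd(theta)/d = -(1/E) * [sxx*(1+2cos2t) + syy*(1-2cos2t) + 4*txy*sin2t]
    theta_deg: measurement angles around the hole (degrees).
    dd_over_d: normalized diameter changes measured at those angles.
    E: Young's modulus (same units as the returned stresses).
    Returns (sxx, syy, txy).
    """
    t = np.radians(np.asarray(theta_deg, dtype=float))
    A = np.column_stack([1 + 2 * np.cos(2 * t),
                         1 - 2 * np.cos(2 * t),
                         4 * np.sin(2 * t)])
    b = -E * np.asarray(dd_over_d, dtype=float)
    stresses, *_ = np.linalg.lstsq(A, b, rcond=None)
    return stresses

# Synthetic check: generate distortions for known stresses, then recover them
E = 200e3  # MPa, representative of steel
sxx, syy, txy = 150.0, -40.0, 25.0
angles = np.arange(0, 180, 20)
t = np.radians(angles)
dd = -(sxx*(1 + 2*np.cos(2*t)) + syy*(1 - 2*np.cos(2*t)) + 4*txy*np.sin(2*t)) / E
print(np.round(dhd_plane_stress_fit(angles, dd, E), 1))  # -> [150. -40.  25.]
```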
Interpretation of the results The DHD method seeks to measure the distribution of stresses along the axis of the reference hole. The relationship between the original residual stresses acting on the reference hole and the measured changes in the hole diameter forms the basis of the analysis. The DHD technique uses an elastic analysis to convert the measured distortions of the reference hole into a residual stress profile. The accuracy of the results depends on sources of error in the measurement, but also on the elastic modulus of the material. A lower elastic modulus results in larger distortions for a given stress release, meaning a higher measurement resolution and thus a greater achievable accuracy. The DHD technique has a nominal accuracy of ±10 MPa for aluminium, ±30 MPa for steel and ±15 MPa for titanium. Appraisal of the DHD technique Advantages and disadvantages of DHD, relative to other residual stress measurement techniques, are listed below. Advantages Residual stresses can be measured at depths up to . Semi-destructive – enabling repeated residual stress measurements at many different stages in component life. The equipment required is portable enough for measurements to be performed on-site as well as in a laboratory. A through-thickness bi-axial residual stress distribution is measured (e.g. σxx, σyy and τxy), including stress gradients. σzz can be measured, but with extra difficulty and reduced accuracy. High-magnitude residual stresses can be measured with iDHD, i.e. plasticity can be accounted for. Applicable to both simple and complex component shapes. Applicable to a wide range of materials, both metallic and non-metallic. Insensitive to the grain structure of the component material. Counter-rotational drilling gives the best accuracy. The process is fast, relative to the quantity of information produced. The extracted cylinder of material provides a stress-free sample for further material tests and validations. Disadvantages Semi-invasive – the resultant hole might need to be re-filled or a mock-up be provided. Not applicable through components of less than thickness. Validation Several studies have been conducted to validate the DHD technique using samples with "known" stress states, by applying a defined load in the plastic range to create an internal stress state in a component, or by loading the component in the elastic range throughout the duration of the measurements. For example, a beam component was plastically bent to introduce a known residual stress profile. These residual stresses were then measured using multiple residual stress measurement techniques, including neutron diffraction, slitting, ring core, incremental centre hole drilling, deep hole drilling and incremental deep hole drilling, as well as modelled with finite element software to provide further numerical validation. The correlation between the results from the techniques is strong, with DHD and iDHD displaying the same trend and magnitudes as both the numerical simulation and the other experimental techniques. The results from this comparison are shown in the figure. See also Solid mechanics Residual stress References External links VEQTER Ltd - Deep Hole Drilling Mechanical engineering Civil engineering
Deep hole drilling measurement technique
[ "Physics", "Engineering" ]
1,279
[ "Construction", "Civil engineering", "Applied and interdisciplinary physics", "Mechanical engineering" ]
42,035,074
https://en.wikipedia.org/wiki/Cry1Ac
Cry1Ac protoxin is a crystal protein produced by the gram-positive bacterium, Bacillus thuringiensis (Bt) during sporulation. Cry1Ac is one of the delta endotoxins produced by this bacterium which act as insecticides. Because of this, the genes for these have been introduced into commercially important crops by genetic engineering (such as cotton and corn) in order to confer pest resistance on those plants. Transgenic Bt cotton initially expressed a single Bt gene, which codes for Cry1Ac. Subsequently, Bt cotton has added other delta endotoxins. Products such as Bt cotton, Bt brinjal and genetically modified maize have received attention due to a number of issues, including genetically modified food controversies, and the Séralini affair. Cry1Ac is also a mucosal adjuvant (an immune-response enhancer) for humans. It has been used in research to develop a vaccine against the amoeba Naegleria fowleri. This amoeba can invade and attack the human nervous system and brain, causing primary amoebic meningoencephalitis, which is nearly always fatal. See also Genetically modified organism References Bacterial toxins
Cry1Ac
[ "Chemistry" ]
246
[ "Biochemistry stubs", "Protein stubs" ]
42,035,548
https://en.wikipedia.org/wiki/Decentralized%20autonomous%20organization
A decentralized autonomous organization (DAO), sometimes called a decentralized autonomous corporation (DAC), is an organization managed in whole or in part by decentralized computer programs, with voting and finances handled through a decentralized ledger technology like a blockchain. In particular, processes run by the decentralized programs must be central, enduring, and distinctive to the identity of the organization for the organization to be a DAO. In general terms, DAOs are member-owned communities without centralized leadership. The precise legal status of this type of business organization is unclear. A well-known example, intended for venture capital funding, was The DAO, which amassed 3.6 million ether (ETH)—Ethereum's native cryptocurrency—then worth more than US$70 million in May 2016, and was hacked and drained of tens of millions of US dollars' worth of cryptocurrency weeks later. The hack was reversed in the following weeks, and the money restored, via a hard fork of the Ethereum blockchain. Most Ethereum miners and clients switched to the new fork while the original chain became Ethereum Classic. The governance of DAOs is subject to controversy. As these typically allocate and distribute tokens that grant voting rights, their accumulation may lead to concentration of power. Background Although the term may be traced back to the 1990s, it was not until 2013 that it became more widely adopted. Although some argue that Bitcoin was the first DAO, the term is only understood today as organizations deployed as smart contracts on top of an existing blockchain network. Decentralized autonomous organizations are typified by the use of decentralized technologies, such as blockchain technology, to provide a secure digital ledger to track digital interactions across the internet, hardened against forgery by trusted timestamping and dissemination of a distributed database. This approach eliminates the need to involve a mutually acceptable trusted third party in any decentralized digital interaction or cryptocurrency transaction. The costs of a blockchain-enabled transaction and of the associated data reporting may be substantially offset by the elimination of both the trusted third party and of the need for repetitive recording of contract exchanges in different records. For example, the blockchain data could, in principle and if regulatory structures permit it, replace public documents such as deeds and titles. In theory, a blockchain approach allows multiple cloud computing users to enter a loosely coupled peer-to-peer smart contract collaboration. Vitalik Buterin proposed that after a DAO is launched, it might be organized to run without human managerial interactivity, provided the smart contracts are supported by a Turing-complete platform. Ethereum, built on a blockchain and launched in 2015, has been described as meeting that Turing threshold, thus enabling such DAOs. Decentralized autonomous organizations aim to be open platforms through which individuals control their identities and their personal data. Governance DAO governance is coordinated using tokens or NFTs that grant voting powers. Admission to a DAO is limited to people who have a confirmed ownership of these governance tokens in a cryptocurrency wallet, and membership may be exchanged. Governance is conducted through a series of proposals that members vote on through the blockchain, and the possession of more governance tokens often translates to greater voting power. 
Contributions from members towards the organizational goals of a DAO can sometimes be tracked and internally compensated. Inactive holders of governance tokens can be a major obstacle for DAO governance, which has led to implementations allowing voting power to be delegated to other parties. Issues Social Tokens that grant voting powers are often not used to vote. Inactive or non-voting shareholders in DAOs often disrupt the organization's possible functionality. Another risk is the concentration of power in the case that individuals accumulate large amounts of tokens that grant voting power. Concentration of these tokens defeats the ambition to distribute governance power. In a study of decentralized finance DAOs, the distribution of tokens was shown to be highly concentrated among a small population of holders. Legal status, liability, and regulation The precise legal status of this type of business organization is generally unclear, and may vary by jurisdiction. On 1 July 2021, Wyoming became the first US state to recognize DAOs as a legal entity. American CryptoFed DAO became the first business entity so recognized. Some previous approaches to blockchain-based companies have been regarded by the U.S. Securities and Exchange Commission as illegal offers of unregistered securities. Although often of uncertain legal standing, a DAO may functionally be a corporation without legal status as a corporation: a general partnership. Known participants, or those at the interface between a DAO and regulated financial systems, may be targets of regulatory enforcement or civil actions if they are out of compliance with the law. Security A DAO's code is difficult to alter once the system is up and running, including bug fixes that would be otherwise trivial in centralized code. Corrections to a DAO require writing new code and agreement to migrate all the funds. Although the code is visible to all, it is hard to repair, thus leaving known security holes open to exploitation unless a moratorium is called to enable bug fixing. In 2016, a specific DAO, "The DAO", set a record for the largest crowdfunding campaign to date. Researchers pointed out multiple problems with The DAO's code. The DAO's operational procedure allowed investors to withdraw at will any money that had not yet been committed to a project; the funds could thus deplete quickly. Although safeguards aimed to prevent gaming of shareholders' votes to win investments, there were a "number of security vulnerabilities". These enabled an attempted large withdrawal of funds from The DAO to be initiated in mid-June 2016. On 20 July 2016, the Ethereum blockchain was forked to bail out the original contract. DAOs can be subject to coups or hostile takeovers that upend their voting structures, especially if the voting power is based upon the number of tokens one owns. An example of this occurred in 2022, when one individual collected enough tokens to give themselves voting control over Build Finance DAO, which they then used to drain the DAO of all its cryptocurrency. 
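A minimal sketch of token-weighted tallying, illustrating the quorum and concentration-of-power issues described above. The quorum rule, addresses and balances are illustrative assumptions, not any particular DAO's implementation.

```python
from collections import defaultdict

def tally_token_weighted_vote(balances, votes, quorum_fraction=0.5):
    """Token-weighted tally illustrating the concentration problem.

    balances: {address: governance-token balance}
    votes: {address: "yes" | "no"} -- non-voting holders count toward total
    supply but not toward turnout, so low participation can block quorum.
    """
    total_supply = sum(balances.values())
    weight = defaultdict(float)
    for addr, choice in votes.items():
        weight[choice] += balances.get(addr, 0.0)
    turnout = sum(weight.values())
    if turnout < quorum_fraction * total_supply:
        return "no quorum", dict(weight)
    return ("passes" if weight["yes"] > weight["no"] else "fails"), dict(weight)

# One large holder outweighs many small ones
balances = {"whale": 6_000, "a": 1_000, "b": 1_000, "c": 1_000, "d": 1_000}
votes = {"whale": "no", "a": "yes", "b": "yes", "c": "yes", "d": "yes"}
print(tally_token_weighted_vote(balances, votes))
# ('fails', ...): 6000 "no" beats 4000 "yes" despite a 4-to-1 headcount
```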
List of notable DAOs See also Cooperative Decentralized Finance Decentralized application Decentralized computing Distributed computing Incentive-centered design List of highest-funded crowdfunding projects Smart contract The Social Contract Notes References Further reading External links Application layer protocols Applications of cryptography Blockchains Computer law Computer networking Distributed computing Distributed data storage Network protocols Peer-to-peer computing Production economics Government by algorithm
Decentralized autonomous organization
[ "Technology", "Engineering" ]
1,371
[ "Computer networking", "Computer engineering", "Government by algorithm", "Computer law", "Automation", "Computer science", "Computing and society" ]
42,039,996
https://en.wikipedia.org/wiki/Transformer%20utilization%20factor
The transformer utilization factor (TUF) of a rectifier circuit is defined as the ratio of the DC power available at the load resistor to the AC rating of the secondary coil of a transformer. The rating of the transformer can be defined as the product of the r.m.s. voltage and r.m.s. current of the secondary winding, so that TUF = P_dc / (V_rms × I_rms). The transformer utilization factor for a half-wave rectifier is 2√2/π² ≈ 0.287 (often rounded to 0.3). References External links Engineering ratios Electric transformers
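A numerical sketch of the half-wave figure quoted above, assuming an ideal diode and a purely resistive load; the peak values are normalized for illustration.

```python
import numpy as np

# The secondary carries v = Vm*sin(wt), but current flows only during
# the positive half-cycle (ideal diode, resistive load).
Vm, Im = 1.0, 1.0                              # normalized peak values
wt = np.linspace(0.0, 2.0 * np.pi, 200_000, endpoint=False)
i = np.maximum(Im * np.sin(wt), 0.0)           # half-wave rectified current

V_dc = Vm / np.pi                              # average rectified voltage
I_dc = i.mean()                                # -> Im/pi
P_dc = V_dc * I_dc                             # DC power delivered to the load

V_rms = Vm / np.sqrt(2.0)                      # full sinusoid on the secondary
I_rms = np.sqrt((i ** 2).mean())               # -> Im/2
tuf = P_dc / (V_rms * I_rms)
print(round(tuf, 4))                           # ~0.2866 = 2*sqrt(2)/pi**2
```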
Transformer utilization factor
[ "Mathematics", "Engineering" ]
82
[ "Quantity", "Metrics", "Engineering ratios" ]
50,686,685
https://en.wikipedia.org/wiki/NgAgo
NgAgo is a single-stranded DNA (ssDNA)-guided Argonaute endonuclease, an acronym for Natronobacterium gregoryi Argonaute. NgAgo binds 5′ phosphorylated ssDNA of ~24 nucleotides (gDNA) to guide it to its target site and will make DNA double-strand breaks at the gDNA site. Like the CRISPR/Cas system, NgAgo was reported by Chunyu Han et al. to be suitable for genome editing, but this has not been replicated. In contrast to Cas9, the NgAgo–gDNA system does not require a protospacer adjacent motif (PAM). Role NgAgo was proposed to be useful for genome editing in May 2016 because of the system’s high accuracy and efficiency, which was said to minimize off-target effects. The specificity of the gDNA is essential, as cleavage efficiency is impaired by a single nucleotide mismatch between the guide and target molecules. Using 5’ phosphorylated ssDNAs as guide molecules reduces the possibility of cellular oligonucleotides misleading NgAgo. A guide molecule can only be attached to NgAgo during the expression of the protein. Once the guide is loaded, NgAgo cannot swap free floating ssDNA for its gDNA. Designing, synthesizing, and adjusting the concentration of ssDNAs is easier compared to systems using sgRNA. The required dosage of ssDNA is less than that of a sgRNA expression plasmid. Controversy Doubts about the technique were raised on gene editing forums as early as June and have persisted. There have been several allegations that this procedure is impossible to reproduce. Nature Biotechnology, which originally published the research, is investigating. In November 2016, a letter was published in Protein & Cell questioning the research and the lead author's claim that replication requires "superb experimental skill". The same month, Nature Biotechnology published a critical correspondence article by three groups and an accompanying expression of concern by the editors on the original article. The authors retracted the study in a statement published in Nature Biotechnology on 3 August 2017, citing the continued inability of the research community to replicate their results. In 2018, an investigation led by Han's university concluded that while Han's findings were flawed, he and his team did not intend to deceive the scientific community. In April 2019, a preprint article found that NgAgo does have the ability to edit genes and implied that previous results might have been difficult to reproduce due to difficulties related to purification of the active protein. References Bacterial enzymes Discovery and invention controversies DNA EC 3.1 Genome editing
NgAgo
[ "Engineering", "Biology" ]
546
[ "Genetics techniques", "Genetic engineering", "Genome editing" ]
50,691,950
https://en.wikipedia.org/wiki/Triangulation%20%28surveying%29
In surveying, triangulation is the process of determining the location of a point by measuring only angles to it from known points at either end of a fixed baseline by using trigonometry, rather than measuring distances to the point directly as in trilateration. The point can then be fixed as the third point of a triangle with one known side and two known angles. Triangulation can also refer to the accurate surveying of systems of very large triangles, called triangulation networks. This followed from the work of Willebrord Snell in 1615–17, who showed how a point could be located from the angles subtended from three known points, but measured at the new unknown point rather than the previously fixed points, a problem called resectioning. Surveying error is minimized if a mesh of triangles at the largest appropriate scale is established first. Points inside the triangles can all then be accurately located with reference to it. Such triangulation methods were used for accurate large-scale land surveying until the rise of global navigation satellite systems in the 1980s. Principle Triangulation may be used to find the position of the ship when the positions of A and B are known. An observer at A measures the angle α, while the observer at B measures β. The position of any vertex of a triangle can be calculated if the position of one side, and two angles, are known. The following formulae are strictly correct only for a flat surface. If the curvature of the Earth must be allowed for, then spherical trigonometry must be used. Calculation With ℓ being the distance between A and B, and d the perpendicular distance from the unknown point to the baseline, this gives: ℓ = d/tan α + d/tan β. Using the trigonometric identities tan α = sin α / cos α and sin(α + β) = sin α cos β + cos α sin β, this is equivalent to: ℓ = d (cos α/sin α + cos β/sin β) = d sin(α + β)/(sin α sin β); therefore: d = ℓ sin α sin β / sin(α + β). From this, it is easy to determine the distance of the unknown point from either observation point, its north/south and east/west offsets from the observation point, and finally its full coordinates. History Triangulation today is used for many purposes, including surveying, navigation, metrology, astrometry, binocular vision, model rocketry and gun direction of weapons. In the field, triangulation methods were apparently not used by the Roman specialist land surveyors, the agrimensores; but were introduced into medieval Spain through Arabic treatises on the astrolabe, such as that by Ibn al-Saffar (d. 1035). Abu Rayhan Biruni (d. 1048) also introduced triangulation techniques to measure the size of the Earth and the distances between various places. Simplified Roman techniques then seem to have co-existed with more sophisticated techniques used by professional surveyors. But it was rare for such methods to be translated into Latin (a manual on geometry, the eleventh century Geomatria incerti auctoris, is a rare exception), and such techniques appear to have percolated only slowly into the rest of Europe. Increased awareness and use of such techniques in Spain may be attested by the medieval Jacob's staff, used specifically for measuring angles, which dates from about 1300; and the appearance of accurately surveyed coastlines in the Portolan charts, the earliest of which that survives is dated 1296. Gemma Frisius On land, the cartographer Gemma Frisius proposed using triangulation to accurately position far-away places for map-making in his 1533 pamphlet Libellus de Locorum describendorum ratione (Booklet concerning a way of describing places), which he bound in as an appendix in a new edition of Peter Apian's best-selling 1524 Cosmographica. 
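A worked sketch of the calculation above: given the baseline length ℓ and the two measured angles, the function returns the position of the unknown point relative to observer A (along-baseline offset x and perpendicular distance d). The numbers are illustrative.

```python
import math

def triangulate(l, alpha_deg, beta_deg):
    """Locate a point from a baseline of length l and two measured angles.

    alpha and beta are the angles measured at the two ends of the baseline,
    as in the derivation above. Returns (x, d): the offset of the foot of
    the perpendicular from A along the baseline, and the perpendicular
    distance d = l*sin(a)*sin(b)/sin(a+b) from the baseline to the point.
    """
    a = math.radians(alpha_deg)
    b = math.radians(beta_deg)
    d = l * math.sin(a) * math.sin(b) / math.sin(a + b)
    x = d / math.tan(a)
    return x, d

# A 1000 m baseline; the ship subtends 50 degrees at A and 70 degrees at B
x, d = triangulate(1000.0, 50.0, 70.0)
print(round(x, 1), round(d, 1))  # position relative to observer A
```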
Frisius's proposal became very influential, and the technique spread across Germany, Austria and the Netherlands. The astronomer Tycho Brahe applied the method in Scandinavia, completing a detailed triangulation in 1579 of the island of Hven, where his observatory was based, with reference to key landmarks on both sides of the Øresund, producing an estate plan of the island in 1584. In England Frisius's method was included in the growing number of books on surveying which appeared from the middle of the century onwards, including William Cuningham's Cosmographical Glasse (1559), Valentine Leigh's Treatise of Measuring All Kinds of Lands (1562), William Bourne's Rules of Navigation (1571), Thomas Digges's Geometrical Practise named Pantometria (1571), and John Norden's Surveyor's Dialogue (1607). It has been suggested that Christopher Saxton may have used rough-and-ready triangulation to place features in his county maps of the 1570s; but others suppose that, having obtained rough bearings to features from key vantage points, he may have estimated the distances to them simply by guesswork. Willebrord Snell The modern systematic use of triangulation networks stems from the work of the Dutch mathematician Willebrord Snell, who in 1615 surveyed the distance from Alkmaar to Breda, approximately 72 miles (116 kilometres), using a chain of quadrangles containing 33 triangles in all. Snell underestimated the distance by 3.5%. The two towns were separated by one degree on the meridian, so from his measurement he was able to calculate a value for the circumference of the earth – a feat celebrated in the title of his book Eratosthenes Batavus (The Dutch Eratosthenes), published in 1617. Snell calculated how the planar formulae could be corrected to allow for the curvature of the earth. He also showed how to resect, or calculate, the position of a point inside a triangle using the angles cast between the vertices at the unknown point. These could be measured much more accurately than bearings of the vertices, which depended on a compass. This established the key idea of surveying a large-scale primary network of control points first, and then locating secondary subsidiary points later, within that primary network. Further developments Snell's methods were taken up by Jean Picard who in 1669–70 surveyed one degree of latitude along the Paris Meridian using a chain of thirteen triangles stretching north from Paris to the clocktower of Sourdon, near Amiens. Thanks to improvements in instruments and accuracy, Picard's is rated as the first reasonably accurate measurement of the radius of the earth. Over the next century this work was extended most notably by the Cassini family: between 1683 and 1718 Jean-Dominique Cassini and his son Jacques Cassini surveyed the whole of the Paris meridian from Dunkirk to Perpignan; and between 1733 and 1740 Jacques and his son César Cassini undertook the first triangulation of the whole country, including a re-surveying of the meridian arc, leading to the publication in 1745 of the first map of France constructed on rigorous principles. Triangulation methods were by now well established for local mapmaking, but it was only towards the end of the 18th century that other countries began to establish detailed triangulation network surveys to map whole countries. 
The Principal Triangulation of Great Britain was begun by the Ordnance Survey in 1783, though not completed until 1853; and the Great Trigonometric Survey of India, which ultimately named and mapped Mount Everest and the other Himalayan peaks, was begun in 1801. For the Napoleonic French state, the French triangulation was extended by Jean-Joseph Tranchot into the German Rhineland from 1801, subsequently completed after 1815 by the Prussian general Karl von Müffling. Meanwhile, the mathematician Carl Friedrich Gauss was entrusted from 1821 to 1825 with the triangulation of the kingdom of Hanover (), on which he applied the method of least squares to find the best fit solution for problems of large systems of simultaneous equations given more real-world measurements than unknowns. Today, large-scale triangulation networks for positioning have largely been superseded by the global navigation satellite systems established since the 1980s, but many of the control points for the earlier surveys still survive as valued historical features in the landscape, such as the concrete triangulation pillars set up for retriangulation of Great Britain (1936–1962), or the triangulation points set up for the Struve Geodetic Arc (1816–1855), now scheduled as a UNESCO World Heritage Site. See also Anglo-French Survey (1784–1790) Bilby tower Great Trigonometrical Survey Multilateration, where a point is calculated using the time-difference-of-arrival between other known points Parallax Resection (orientation) SOCET SET Spherical trigonometry Stellar triangulation Stereopsis Trig point References Further reading Bagrow, L. (1964) History of Cartography; revised and enlarged by R.A. Skelton. Harvard University Press. Crone, G.R. (1978 [1953]) Maps and their Makers: An Introduction to the History of Cartography (5th ed). Tooley, R.V. & Bricker, C. (1969) A History of Cartography: 2500 Years of Maps and Mapmakers Keay, J. (2000) The Great Arc: The Dramatic Tale of How India Was Mapped and Everest Was Named. London: Harper Collins. . Murdin, P. (2009) Full Meridian of Glory: Perilous Adventures in the Competition to Measure the Earth. Springer. . Angle Elementary geometry Euclidean geometry Surveying Geodetic surveys
Triangulation (surveying)
[ "Physics", "Mathematics", "Engineering" ]
1,937
[ "Geometric measurement", "Scalar physical quantities", "Physical quantities", "Elementary mathematics", "Elementary geometry", "Surveying", "Civil engineering", "Wikipedia categories named after physical quantities", "Angle" ]
36,408,395
https://en.wikipedia.org/wiki/Statistical%20manifold
In mathematics, a statistical manifold is a Riemannian manifold, each of whose points is a probability distribution. Statistical manifolds provide a setting for the field of information geometry. The Fisher information metric provides a metric on these manifolds. Following this definition, the log-likelihood function is a differentiable map and the score is an inclusion. Examples The family of all normal distributions can be thought of as a 2-dimensional parametric space parametrized by the expected value μ and the variance σ² > 0. Equipped with the Riemannian metric given by the Fisher information matrix, it is a statistical manifold with a geometry modeled on hyperbolic space. The manifold can be pictured by inferring the parametric equations via the Fisher information rather than starting from the likelihood function. A simple example of a statistical manifold, taken from physics, would be the canonical ensemble: it is a one-dimensional manifold, with the temperature T serving as the coordinate on the manifold. For any fixed temperature T, one has a probability space: so, for a gas of atoms, it would be the probability distribution of the velocities of the atoms. As one varies the temperature T, the probability distribution varies. Another simple example, taken from medicine, would be the probability distribution of patient outcomes, in response to the quantity of medicine administered. That is, for a fixed dose, some patients improve, and some do not: this is the base probability space. If the dosage is varied, then the probability of outcomes changes. Thus, the dosage is the coordinate on the manifold. To be a smooth manifold, one would have to measure outcomes in response to arbitrarily small changes in dosage; this is not a practically realizable example, unless one has a pre-existing mathematical model of dose-response where the dose can be arbitrarily varied. Definition Let X be an orientable manifold, and let μ be a measure on X. Equivalently, let (X, F, P) be a probability space on X, with sigma algebra F and probability P. The statistical manifold S(X) of X is defined as the space of all measures on X (with the sigma-algebra F held fixed). Note that this space is infinite-dimensional; it is commonly taken to be a Fréchet space. The points of S(X) are measures. Rather than dealing with an infinite-dimensional space S(X), it is common to work with a finite-dimensional submanifold, defined by considering a set of probability distributions parameterized by some smooth, continuously varying parameter θ. That is, one considers only those measures that are selected by the parameter. If the parameter θ is n-dimensional, then, in general, the submanifold will be as well. All finite-dimensional statistical manifolds can be understood in this way. See also Chentsov's theorem References Manifolds Information theory
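A small sketch of the normal-family example above, computing the Fisher information metric in (μ, σ) coordinates; the closed form used is the standard one for this family, a constant multiple of the hyperbolic (Poincaré half-plane) metric.

```python
import numpy as np

def fisher_metric_normal(mu, sigma):
    """Fisher information metric for N(mu, sigma^2) in (mu, sigma) coordinates.

    g = [[1/sigma^2, 0], [0, 2/sigma^2]], i.e. a multiple of the hyperbolic
    half-plane metric -- the "geometry modeled on hyperbolic space" above.
    (mu does not enter: the metric is invariant under shifts of the mean.)
    """
    return np.array([[1.0 / sigma**2, 0.0],
                     [0.0, 2.0 / sigma**2]])

def squared_length(dmu, dsigma, mu, sigma):
    """Infinitesimal squared length ds^2 = g_ij dtheta^i dtheta^j."""
    g = fisher_metric_normal(mu, sigma)
    dtheta = np.array([dmu, dsigma])
    return float(dtheta @ g @ dtheta)

# The same parameter step is "longer" at small sigma: sharp distributions
# are easier to tell apart than diffuse ones.
print(squared_length(0.1, 0.0, mu=0.0, sigma=0.5))  # 0.04
print(squared_length(0.1, 0.0, mu=0.0, sigma=2.0))  # 0.0025
```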
Statistical manifold
[ "Mathematics", "Technology", "Engineering" ]
587
[ "Telecommunications engineering", "Applied mathematics", "Space (mathematics)", "Topological spaces", "Computer science", "Topology", "Information theory", "Manifolds" ]
33,815,178
https://en.wikipedia.org/wiki/Metadata%20repository
A metadata repository is a database created to store metadata. Metadata is information about the structures that contain the actual data. Metadata is often said to be "data about data", but this is misleading. Data profiles are an example of actual "data about data". Metadata adds one layer of abstraction to this definition – it is data about the structures that contain data. Metadata may describe the structure of any data, of any subject, stored in any format. A well-designed metadata repository typically contains data far beyond simple definitions of the various data structures. Typical repositories store dozens to hundreds of separate pieces of information about each data structure. Comparing the metadata of two data items – one digital and one physical – clarifies what metadata is: First, digital: for data stored in a database, one may have a table called "Patient" with many columns, each containing data which describes a different attribute of each patient. One of these columns may be named "Patient_Last_Name". What is some of the metadata about the column that contains the actual surnames of patients in the database? We have already used two items: the name of the column that contains the data (Patient_Last_Name) and the name of the table that contains the column (Patient). Other metadata might include the maximum length of last name that may be entered, whether or not last name is required (can we have a patient without Patient_Last_Name?), and whether the database converts any surnames entered in lower case to upper case. Metadata of a security nature may show the restrictions which limit who may view these names. Second, physical: for data stored in a brick-and-mortar library, one has many volumes across various media, including books. Metadata about books would include ISBN, Binding_Type, Page_Count, Author, etc. Within Binding_Type, metadata would include the possible bindings, material, etc. This contextual information about business data includes its meaning and content, the policies that govern it, its technical attributes, the specifications that transform it, and the programs that manipulate it. Definition The metadata repository is responsible for physically storing and cataloging metadata. Data in a metadata repository should be generic, integrated, current, and historical: Generic: the meta model should store metadata in generic terms instead of in an application-specific way, so that if the database standard changes from one product to another, the physical meta model of the metadata repository does not need to change. Integrated: integration allows all business areas' metadata to be held in an integrated fashion, covering all domains and subject areas of the organization. Current and historical: the metadata repository should have accessible current and historical metadata. Metadata repositories used to be referred to as data dictionaries. As the need to use metadata for business intelligence has increased, so has the scope of the metadata repository. Early data dictionaries were the closest point of interaction between technology and business. 
Data dictionaries formed the whole of the metadata repository in its initial stages, but as the scope increased, business glossaries and their tags to a variety of status flags emerged on the business side, while the consumption of technology metadata, its lineage and its linkages made the repository a source of valuable reports that bring business and technology together, ease data management decisions and help assess the cost of changes. A metadata repository supports enterprise-wide data governance, data quality and master data management (including master data and reference data) and integrates this wealth of information with metadata from across the organization to provide a decision support system for data structures, even though it only reflects the structures consumed from the various source systems. Repository vs. registry A repository has additional functionality compared with a registry. A metadata repository not only stores metadata, as a metadata registry does, but also adds relationships between related metadata types. Metadata related in a flow from its point of entry into the organization up to the deliverables is considered the lineage of that data point. Metadata related across other metadata types is called linkage. By providing the relationships between all the metadata points across the organization, and maintaining their integrity with an architecture that handles change, a metadata repository provides the basic material for understanding the complete data flow, its definitions and its impact. Another important feature is version control, though this point of contrast is open to discussion; these definitions are still evolving and need refinement. The purpose of a registry is to define metadata elements and maintain them across the organization; data modeling and other data management teams refer to the registry and follow any changes made to it. A metadata repository, by contrast, sources metadata from the various metadata systems in the organization and reflects what is upstream: a repository never acts as an upstream source, while a registry is used as an upstream source for metadata changes. Reason for use A metadata repository brings the structure of all the organization's data containers into one integrated place. This opens up a wealth of information for making calculated business decisions. The repository uses one generic form of data model to integrate all the models, thus bringing all the applications and programs of the organization into one format. Applying business definitions and business processes on top of this brings business and technology closer, helping organizations make reliable roadmaps with definite goals. With one-stop information, the business has more control over changes and can perform impact analysis. Businesses usually spend much time and money making decisions based on discovery and research into the impact of changing, adding or removing data structures in the data management of the organization. With a structured and well-maintained repository, moving a product from ideation to delivery takes far less time (other variables being constant). 
To sum it up, a metadata repository provides: integration of metadata across the organization; relationships between the various metadata types; relationships between disparate systems; a business golden copy of definitions; version control of changes at the structure level; interaction with reference data; a linked view of master data; automatic synchronization with the authorized metadata source systems; more control over business decisions; validation of structures by overlapping the models; and discovery of discrepancies, gaps, lineage and metrics at the data structure level. Each database management system (DBMS) and database tool has its own language for the metadata components within it. Database applications already have their own repositories or registries that are expected to provide all of the necessary functionality to access the data stored within. Vendors do not want other companies to be capable of easily migrating data away from their products and into competitors' products, so they are proprietary in the way they handle metadata. CASE tools, DBMS dictionaries, ETL tools, data cleansing tools, OLAP tools, and data mining tools all handle and store metadata differently. Only a metadata repository can be designed to store the metadata components from all of these tools. Design Metadata repositories should store metadata in four classifications: ownership, descriptive characteristics, rules and policies, and physical characteristics. Ownership shows the data owner and the application owner. The descriptive characteristics define the names, types and lengths, and the definitions describing business data or business processes. Rules and policies define security, data cleanliness, timelines for data, and relationships. Physical characteristics define the origin or source, and the physical location. As with building a logical data model for creating a database, a logical meta model can help identify the metadata requirements for business data. The metadata repository may be centralized, decentralized, or distributed. A centralized design means that there is one database for the metadata repository that stores metadata for all applications business-wide. A centralized metadata repository has the same advantages and disadvantages as a centralized database: it is easier to manage because all the data is in one database, but bottlenecks may occur. A decentralized metadata repository stores metadata in multiple databases, separated by location and/or by departments of the business. This makes management of the repository more involved than for a centralized metadata repository, but the advantage is that the metadata can be broken down into individual departments. A distributed metadata repository uses a decentralized method, but unlike a decentralized metadata repository the metadata remains in its original application. An XML gateway is created that acts as a directory for accessing the metadata within each different application. The advantages and disadvantages of a distributed metadata repository mirror those of a distributed database. The design of the information model should include various layers of metadata types that can be overlapped to create an integrated view of the data. The various metadata types should be stitched together through related metadata elements in a top-down model linking to the business glossary. Layers of metadata: Business glossary: contains recursive relationships among business terms. Business tags: contain the various affiliations of each term. 
Data dictionary: contains information from data modeling tools defining the metadata elements, with their technical definitions provided by data or enterprise architecture. Conceptual data models. Logical data models. Physical data models. Database validation rules and data quality rules. ETL and business rules, and their relationships to attributes and entities. Reports. Source-to-target mapping artifacts (relationships). Reporting requirements (relationships). Business processes and their relationship to technology. People hierarchy and their relationships. Owner relationships. Entity-relationship/object-oriented Metadata repositories can be designed as either an entity-relationship model or an object-oriented design. See also References Data modeling Databases Metadata Metadata registry
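To make the repository-entry idea concrete, here is a minimal sketch of one record for the Patient_Last_Name example given earlier. The field names are illustrative and loosely follow the four classifications above (ownership, descriptive characteristics, rules and policies, physical characteristics); a real repository would store many more attributes per structure.

```python
from dataclasses import dataclass, field

@dataclass
class ColumnMetadata:
    """One illustrative repository entry for a database column."""
    table_name: str                    # descriptive characteristics
    column_name: str
    data_type: str
    max_length: int
    required: bool
    business_definition: str
    data_owner: str                    # ownership
    security_restriction: str          # rules and policies
    source_system: str                 # physical characteristics
    tags: list = field(default_factory=list)

entry = ColumnMetadata(
    table_name="Patient",
    column_name="Patient_Last_Name",
    data_type="VARCHAR",
    max_length=50,
    required=True,
    business_definition="Legal surname of the patient.",
    data_owner="Patient Records Department",
    security_restriction="Viewable by clinical staff only",
    source_system="Hospital admissions system",
    tags=["PII", "clinical"],
)
print(entry.table_name, entry.column_name, entry.max_length)
```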
Metadata repository
[ "Technology", "Engineering" ]
1,861
[ "Data modeling", "Data engineering", "Metadata", "Data" ]
32,207,005
https://en.wikipedia.org/wiki/Corrosion%20loop
Corrosion loops are systematized analysis "loops" used during risk-based inspection (RBI) analysis. The terms "RBI corrosion loops" and "RBI corrosion circuits" are generic terms for the systematization of piping systems into usable and understandable parts associated with corrosion. Systematized piping loops or circuits are used in RBI analysis to assess the likelihood and consequence of failure. Other systematizations may also prove useful, e.g. by inspection, consequence, materials of construction or chemistry. The system (or subsystems) may be used to identify pressure and temperature, the consequent failure mechanism and the possible failure rate. Loops may be based upon construction drawings, process flow diagrams or piping and instrumentation diagrams as required. Each loop or circuit may be identified using a unique code, with a description covering the process, degradation mode, material, cladding, corrosion allowance and specifications. Such a system model comes under the general heading of system analysis; the terms analysis and synthesis come from Greek, where they mean respectively "to take apart" and "to put together" (see also systems theory). The exact definition of the systematized risk analysis "loop" is left to the reader and the requirements of the system analysis, but to ensure consistency and that the expected results are produced, it should be defined before the loops are constructed. It is suggested that a "true" corrosion loop should be a grouping where the degradation mechanism is likely to be the same, i.e. similar material of construction, process fluid (similar stream properties), temperature (roughly, or at least within the damage mechanism's susceptibility thresholds), pressure (if the damage mechanisms of concern depend upon pressure), and velocity (if the damage mechanism of concern depends upon velocity). By defining the boundary limits of damage-susceptible areas in this way, the susceptibility of any part is similar to that of the whole. References API 570 (Nov. 2009), definition and notes re "piping circuits", Piping Inspection Code: In-service Inspection, Rating, Repair, and Alteration of Piping Systems, Third Edition. 4th European-American Workshop on Reliability of NDE – Tools and Methodologies for Pipework Inspection Data Analysis, Peter van de Camp, Fred Hoeve, Sieger Terpstra, Shell Global Solutions International, Amsterdam, the Netherlands (http://www.ndt.net/article/reliability2009/Inhalt/we2a4.pdf) Risk Management Application on Refinery Pipeline Inspection, Ren-Rong Chang (Inspection Engineer, CPC Shell Lubricants Co., Ltd.), Jin-Jhy Jeng (Inspection Engineer, Chinese Petroleum Corp.) and Shang Lai Lee (Head of Inspection, Kaohsiung Plant, Chinese Petroleum Corp.) (www.aposho.org/conference/img/csiii-b4.doc) Risk Based Management, ASME Seminar, by Chow NgaiMun M.Sc., CEng, FIMMM, AWS CWI, ASNT/ACCP/EN 473 Level III (UT, RT, MT, PT), PCN Level II TOFD, API 653, 510, 570, 580 and 571, Singapore Welding Society (1st Vice President), Engineering Manager (Shell Chemicals Seraya Pte Ltd). 
(http://www.psig.sg/Seminar/S2011-2.pdf) Risk Based Inspection Application on Refinery and Processing Piping, Ren-Rong Chang, Chi-Min Shu and Ming-Kuen Chang (Department of Safety, Health and Environmental Engineering, National Yunlin University of Science and Technology, 123, University Road, Section 3, Touliu, Yunlin, Taiwan 640, ROC) and Kung-Nan Lin (Department of Marine Engineering, National Taiwan Ocean University, 2, Pei-Ning Road, Keelung, Taiwan 20224, ROC) (www.iitk.ac.in/che/jpg/papersb/full%20papers/S%20-%2084%20.doc) Maintenance Corrosion
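As an illustration of the grouping criteria suggested above, here is a minimal sketch that groups piping segments into candidate corrosion loops by material, process fluid and temperature band. The tags, fluids and band width are illustrative assumptions; a real analysis would also account for pressure, velocity and the damage-mechanism susceptibility thresholds discussed above.

```python
from collections import defaultdict

def build_corrosion_loops(segments, temp_band=50.0):
    """Group piping segments into candidate corrosion loops.

    Segments sharing material, process fluid and a temperature band are
    grouped together, on the premise that their degradation mechanism is
    likely to be the same within each group.
    """
    loops = defaultdict(list)
    for seg in segments:
        band = int(seg["temp_C"] // temp_band)       # e.g. 0-50 C, 50-100 C
        key = (seg["material"], seg["fluid"], band)
        loops[key].append(seg["tag"])
    return dict(loops)

segments = [
    {"tag": "P-101", "material": "CS",    "fluid": "sour water", "temp_C": 40},
    {"tag": "P-102", "material": "CS",    "fluid": "sour water", "temp_C": 45},
    {"tag": "P-201", "material": "SS316", "fluid": "sour water", "temp_C": 45},
    {"tag": "P-301", "material": "CS",    "fluid": "sour water", "temp_C": 120},
]
for loop, tags in build_corrosion_loops(segments).items():
    print(loop, tags)
# P-101 and P-102 share a loop; different material or temperature band
# places P-201 and P-301 in separate loops.
```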
Corrosion loop
[ "Chemistry", "Materials_science", "Engineering" ]
855
[ "Metallurgy", "Corrosion", "Electrochemistry", "Mechanical engineering", "Maintenance", "Materials degradation" ]
32,210,654
https://en.wikipedia.org/wiki/Reference%20dimension
A reference dimension is a dimension on an engineering drawing provided for information only. Reference dimensions are provided for a variety of reasons and are often an accumulation of other dimensions that are defined elsewhere (e.g. on the drawing or other related documentation). These dimensions may also be used for convenience to identify a single dimension that is specified elsewhere (e.g. on a different drawing sheet). Reference dimensions are not intended to be used directly to define the geometry of an object. Reference dimensions do not normally govern manufacturing operations (such as machining) in any way and, therefore, do not typically include a dimensional tolerance (though a tolerance may be provided if such information is deemed helpful). Consequently, reference dimensions are also not subject to dimensional inspection under normal circumstances. Reference dimensions are commonly used in CAD software along with constraints that usually denote the opposite: mandatory dimensions to be precisely followed. Notation In computer-aided design (CAD), parentheses are commonly used to denote reference dimensions. REF Prior to the use of modern CAD software, reference dimensions were traditionally indicated on a drawing by the abbreviation "REF" written adjacent to the dimension (typically to the right of or underneath the dimension). However, standard ASME Y14.5 has changed the way references are marked, and the abbreviation "REF" has been replaced with the use of parentheses around the dimension. As an example, a distance of 1500 millimeters might be denoted by (1500) instead of 1500. This implementation has been followed in modern CAD software, which uses parentheses as the default denotation method whenever reference dimensions are "automatically" created by the software. The method for identifying a reference dimension (or reference data) on drawings is to enclose the dimension (or data) within parentheses. See also Engineering drawing abbreviations and symbols Geometric dimensioning and tolerancing ASME Y14.5 References External links Y14.5 Dimensioning and Tolerancing, 2018, ASME Technical drawing
Reference dimension
[ "Engineering" ]
392
[ "Design engineering", "Technical drawing", "Civil engineering" ]
43,499,978
https://en.wikipedia.org/wiki/Mount%20Polley%20mine
Mount Polley mine is a Canadian gold and copper mine located in British Columbia near the towns of Williams Lake and Likely. It consists of two open-pit sites with an underground mining component and is owned and operated by the Mount Polley Mining Corporation, a wholly owned subsidiary of Imperial Metals. In 2013, the mine produced copper together with 45,823 ounces of gold and 123,999 ounces of silver. The mill commenced operations in 1997 and was closed and placed on care and maintenance in 2019. The company owns property near Quesnel Lake and Polley Lake, where it holds mining leases and mineral claims. Mineral concentrate is delivered by truck to the Port of Vancouver. As of January 2020, Mount Polley's Proven and Probable Reserves were 53.8 million tonnes of ore grading 0.34% copper, 0.30 grams per tonne gold and 0.9 grams per tonne silver, equating to 400 million pounds of copper, 517,000 troy ounces of gold and 1.55 million troy ounces of silver. Mount Polley Mining Corporation reopened the mine in July 2022 and estimates that the reopening created 300 local jobs. Mining operations When operating, the Mount Polley mine moves 80,000–90,000 tonnes of material per day, of which about 20,000 tonnes is ore. Mount Polley does not require highly skilled labour for operations and hires and trains workers from the local communities of Big Lake and Horsefly and from as far away as Quesnel and Williams Lake. Most workers come from communities near the mine. Minerals Mount Polley determines what qualifies as ore and what qualifies as waste using drilling and blasting. Ore is then sorted according to blast ball assays. High-value sulfide ore is hauled to a crusher for processing at the on-site plant. Chalcopyrite and bornite are the main copper-bearing minerals of value at the Mount Polley mine. Processing During operation, the Mount Polley mine processes 20,000 tonnes of ore per day. The ore is sent for crushing, size reduction, and froth flotation. Gold and copper During assaying, the values of copper and gold in the ore are determined and a monetary value per tonne is placed on the material. When this value exceeds a particular threshold, workers start processing the material. The material is then processed through a mill, where the minerals are floated. In particular, the copper and gold minerals both float and are then concentrated. This process, called upgrading, creates a concentrated material that is approximately 23% copper. Gold is also captured in the concentrate. Transportation Mount Polley ships the material, concentrated by flotation, by truck to Vancouver, from where it is sent overseas to buyers who smelt and refine it. Staff During operation, Mount Polley runs four shifts. There is a day shift and a night shift, each running twelve hours. Around 370 workers work these shifts seven days on and then get seven days off. About 50 additional staff include administrators, supervisors, warehouse operators, engineers, geologists, assayers, technical personnel, and human resources. Care and maintenance shutdown and reopening On 31 May 2019, Mount Polley mine was put on care and maintenance status. Remediation work in the areas affected by the 2014 breach was the focus of Mount Polley's staff from 2014 to 2019. While the mine's closure affected mining operations, it did not impact the ongoing environmental monitoring and remediation programs.
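The contained-metal figures in the January 2020 reserve statement above follow from the grade and tonnage by standard unit conversions. As a quick sanity check (our illustrative calculation, not from the source):

```python
# Convert the stated reserve tonnage and grades into contained metal.
TONNES = 53.8e6          # proven and probable reserves, tonnes of ore
CU_GRADE = 0.0034        # 0.34 % copper
AU_GPT = 0.30            # grams of gold per tonne
AG_GPT = 0.9             # grams of silver per tonne

LB_PER_TONNE = 2204.62   # pounds per metric tonne
G_PER_TROY_OZ = 31.1035  # grams per troy ounce

cu_lb = TONNES * CU_GRADE * LB_PER_TONNE   # contained copper, pounds
au_oz = TONNES * AU_GPT / G_PER_TROY_OZ    # contained gold, troy ounces
ag_oz = TONNES * AG_GPT / G_PER_TROY_OZ    # contained silver, troy ounces

print(f"Cu {cu_lb / 1e6:.0f} Mlb, Au {au_oz / 1e3:.0f} koz, Ag {ag_oz / 1e6:.2f} Moz")
# Cu 403 Mlb, Au 519 koz, Ag 1.56 Moz
```

These round to the quoted 400 million pounds of copper, 517,000 troy ounces of gold and 1.55 million troy ounces of silver; the small differences reflect rounding in the published figures.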
Care and maintenance at the Mount Polley mine included management of the water treatment facilities and the operation of site water management systems, including pumps and ditches. Mount Polley monitors the stability of the tailings storage facility on a regular basis. Mount Polley employed 15 staff at the site while it operated in care and maintenance status. The company reopened mine operations in July 2022. Geology The Mount Polley mineralization is classified as an alkalic porphyry copper-gold deposit. The deposits are located in the Quesnel trough, a Mesozoic volcanic arc in the Canadian segment of the North American Cordillera. Precious-metal mineralization in the two Mount Polley deposits occurred in the felsic stock during the Jurassic-Triassic period. The copper-gold mineralization occurs within crackle and inclusion breccias. History Pre-development Placer mining, the mining of stream beds for minerals, had been common practice in the area since the mid-nineteenth century. The Mount Polley ore deposit was discovered subsequent to an airborne magnetometer study completed by the Canadian government in 1964, which detected a significant magnetic reading in the region of Polley Mountain. Investigating further, Karl Springer discovered an alkalic porphyry deposit there the same year. Quintana Resources prospected the area in 1976, discovering numerous copper float boulders, but let their claim to the property lapse in 1978. In 1980, E&B Exploration optioned the property from Highland Crow, a subsidiary of Teck. Through the early 1980s, the potential for gold mining on the site was explored due to the rising global price of the commodity. The first feasibility study for the site was completed in 1991 and the first permits for developing the deposit were approved the same year. Financing from Imperial Metals, however, was not yet in place. Mine opening In 1997, the Mount Polley mine opened with the Cariboo pit being the first site developed. The tailings storage facility was also constructed the same year. In 2010, the underground portion of the mine was built and operations expanded. During a mine closure between 2002 and 2005, a new location called the "White pit" was discovered in the northeast region of the site. The White pit is located 1.5 km northeast of the Cariboo and Bell pits and revealed the richest deposit on the site (10 million tonnes of 0.9% copper). The White pit's distance from the Cariboo pit necessitated new permitting. Subsequent to the discovery of the White pit, after additional underground mining, another site was discovered and developed called "Martel". Mount Polley tailings pond breach The Mount Polley mine's tailings facility experienced a dam breach and tailings spill that began 4 August 2014. The four square kilometre tailings pond spilled an estimated 25 billion litres of contaminated materials into Polley Lake, Hazeltine Creek, Quesnel Lake, and Cariboo River, a source of drinking water and major spawning grounds for sockeye salmon. Quesnel Lake is claimed to be the deepest fjord lake in the world and the third deepest lake in North America, and its outflow is a major tributary of the Fraser River. According to Mount Polley mine records filed with Environment Canada in 2013, there were "326 tonnes of nickel, over 400 tonnes of arsenic, 177 tonnes of lead and 18,400 tonnes of copper and its compounds placed in the tailings pond" in 2012.
By 8 August the tailings pond had been emptied of the majority of its "process water", from which the crushed rock solids, or "tailings", gradually settle out. The slurry of tailings and process water carried felled trees, mud and debris, wore away the banks of Hazeltine Creek, which flows out of Polley Lake, and continued into the nearby Quesnel Lake. The spill emptied the tailings pond and caused the level of Polley Lake to rise. Effects of spill Early reactions to the tailings spill expressed grave concern, but no fines or charges against Imperial Metals have been assessed. On August 6, two days after the breach, the British Columbia Ministry of Environment issued a Pollution Abatement Order to Mount Polley Mining Corporation. The company submitted an action plan covering the Preliminary Environmental Impact Assessment and environmental monitoring. The company was required to, and did, report weekly on the implementation of action plan measures. A local state of emergency in nearby communities was initially declared in the interest of public safety, with widespread water restrictions implemented and local equitable water distribution set up as a precautionary measure. Days later, some water use restrictions were removed for non-local residents, leaving a boil-water advisory, and narrowing the "Do Not Use" order to Polley Lake, Hazeltine Creek, and the area around the shoreline sediment deposit where Hazeltine Creek runs into Quesnel Lake. Initial tests at five testing sites of the second water test run indicated zinc levels above chronic exposure limits for aquatic life, although rainbow trout toxicity test results from water collected at Quesnel Lake near the mouth of Hazeltine Creek on August 5 and 6 showed the water was not toxic to rainbow trout. Some tourism businesses in the adjacent surrounding areas remained open. Because the affected water system is salmon-bearing, there was a temporary closure of part of the Chinook salmon fishery by Fisheries and Oceans Canada. Complaints were filed with B.C.'s privacy commissioner regarding the release of environmental assessments and dam inspection reports after journalists found a report from 2010 and assessments from 1992 and 1997 in the public domain; the B.C. government has withheld subsequent reports. Following the event, some First Nations activists held protests and set up blockades. Several local landowners and business operators affected by the spill have launched legal challenges to seek compensation for damages. Investigation and cause of spill On 18 August 2014, the British Columbia government ordered an independent engineering investigation into the pond breach and a third-party review of all 2014 dam safety inspections for every permitted mine's tailings pond in the province. The report found that the tailings dam collapsed because of its construction on underlying earth containing a layer of glacial till, which had been unaccounted for by the company's original engineering contractor. The report investigated whether the piezometers measuring the water pressure on the dam had been located correctly, as the last readings, taken 2 August 2014, did not show any changes in the water pressure. In 2010, Mount Polley Mining Corporation's (MPMC) engineering firm reported a crack in the earthen dam while working to raise it, as well as broken piezometers, which were later fixed. In 2018, three engineers who worked on the tailings storage facility were charged by their professional association with negligence or unprofessional conduct.
Clean-up efforts Clean-up efforts have led to a reconstructed Hazeltine Creek, although the contaminated slurry that made its way into Polley Lake and Quesnel Lake remains in the waterways. A drinking water ban was lifted within weeks of the spill, and regular water testing is being conducted by the B.C. government, the Mount Polley mine, the University of Northern British Columbia and local residents. Other remediation and reconstruction efforts have included investigation of impacts to human health and safety and to affected ecosystems while removing the spilled tailings, reconstructing creek shorelines, installing fish habitats, and replanting trees and other local vegetation. Investigation by the remediation team showed elevated levels of selenium, arsenic and other metals. Water management and treatment The long-term water management plan for the Mount Polley mine site was approved by an independent statutory decision-maker from the Ministry of Environment and was expected to be fully in place by fall 2017, replacing the short-term water management plan that had been in place since 30 November 2015. Mount Polley Mining Corporation submitted its formal permit amendment application, which included the long-term water management plan and supporting Technical Assessment Report, in October 2016. The documents were subject to extensive public consultation, including First Nations and local communities. The application also underwent a full technical review from the Cariboo Mine Development Review Committee (CMDRC), which includes representatives from provincial and federal agencies, First Nations, local governments (City of Williams Lake and Cariboo Regional District), and the community of Likely. The Mount Polley Mining Corporation (MPMC) treats mine site water with water treatment plant technology by Veolia prior to release into Quesnel Lake. The water is monitored for turbidity at 15-second intervals and water quality is assessed at Quesnel Lake as part of MPMC's Comprehensive Environmental Monitoring Plan. About 15,000 cubic metres of site water is discharged into Quesnel Lake per day, which is below the 29,000-cubic-metre threshold allowed under the mining corporation's permit. The water at Quesnel Lake, Quesnel River, Polley Lake, and Hazeltine Creek is regularly monitored by the Ministry of Environment. Remediation timeline The Mount Polley Mining Corporation has invested more than $70 million into remediation efforts since the dam breach in 2014. No government funding has been spent on the clean-up or repair work at the site. The restoration and remediation strategy was carried out in four stages: impact reduction, post-breach environmental assessment, long-term health and environmental assessment, and implementation of work focused on remediation to prevent environmental and health impacts and to improve the condition of the areas affected by the breach. 2014 In August, MPMC submitted an interim erosion plan and a sediment control plan to mitigate ongoing erosion and sediment transport downstream, to control further flow from the tailings area, and to improve the quality of water flowing into Quesnel Lake. In the beginning of September 2014, a berm to prevent further spread of tailings was nearing completion, and laid-off workers (about 40 of the mine's approximately 300) demanded that the mine reopen. A spokesman at the Ministry of Mines said operations would require permits and approvals and could only go ahead after a rigorous review.
The primary phase of the restoration and remediation strategy implemented work to reduce the environmental effect on Quesnel Lake. 2015 In June 2015, the Post-Event Environmental Impact Assessments Report was published as part of the second phase of the strategy. The report was submitted by Golder Associates to the Mount Polley Mining Corporation to determine the physical, biological, and chemical implications 6–8 months after the dam breach. The report detailed steps taken by the MPMC to stabilize the tailings storage facility by creating two rock berms inside the facility, to provide safe access to Hazeltine Creek by reducing the elevation of Polley Lake behind the point of the blockage caused by the discharge of tailings effluent, and to stop inputs from the tailings storage facility. Specialist environmental scientists and engineers were hired to study the impact of the spill from the tailings dam. This team studied where tailings effluent was deposited on land and in surrounding water environments, in particular how the bottom of Quesnel Lake was affected and how the structures of Hazeltine and Edney creeks had changed. Chemical studies examined soil, water and sediment changes, while biological studies focused on the effects on aquatic plant and animal life, in particular those at the sediment layer. Biological assessment also studied soil-dependent biota in the areas surrounding Quesnel Lake and Polley Lake. The Assessments Report determined nine areas requiring ongoing monitoring to determine localized strategies for remediation efforts in each location. These areas included the tailings storage facility, the Polley plug (a blockage area between the tailings effluent and Polley Lake), Polley Lake, upper Hazeltine Creek, Hazeltine Canyon, lower Hazeltine Creek, the mouth of Edney Creek, Quesnel Lake, and Quesnel River. The report concluded that Polley Lake, Hazeltine Creek and a small portion of Quesnel Lake were physically affected by the tailings dam breach. Chemical testing determined the tailings mixture to be relatively inert, though a higher concentration of copper was found in the effluent than before the breach. Biological testing found that the copper contained within lake sediment and water was not toxic to aquatic life. Soil testing determined copper levels higher than the provincial standards for the protection of invertebrates and plants, but far lower than the provincial standards for the protection of human health. Deep water analysis found copper to be at levels below the Provincial Water Quality Guideline. Despite the copper present in the tailings, the report determined that it was unlikely to be released, and adverse effects were therefore deemed unlikely. The restoration of the shoreline of Hazeltine Creek began, to create a stable water flow and to begin the restoration of fish and associated wildlife habitats. This was preceded by floodplain grading and the determination of the physical land characteristics of the areas surrounding the shoreline. A flow study to determine an ideal range and the annual mean for natural habitats was completed before construction of rock weirs and habitat features. Planting on the floodplain, to continue over subsequent years, had also begun. Repairs to the mouth of lower Edney Creek were completed, connecting the waterway to Quesnel Lake. By the spring of 2015, remediation work had installed a new fish habitat at lower Edney Creek.
Successful spawning of interior coho, kokanee and sockeye salmon was achieved. By May, a new channel for Hazeltine Creek was completed. On 13 July 2015, Interior Health, the regional public health authority, declared all water restrictions lifted and determined water sourced from Polley Lake and Hazeltine Creek safe for consumption and recreation from a health perspective. A review of the water, sediment and fish toxicology samples from the Ministry of the Environment determined no known risks to human health. 2016 A detailed site investigation was completed by Golder Associates in January 2016, as mandated by the Pollution Abatement Order issued by the British Columbia Ministry of Environment. This work was part of investigation and remediation work ongoing at the Mount Polley site. The detailed site investigation was completed to produce a Human Health Risk Assessment (HHRA) report and an Ecological Risk Assessment for the affected site. The Post-Event Environmental Impact Assessment Report update was completed in June 2015 by Golder Associates. Remediation work was conducted in tandem with investigative work done by Golder. The Human Health Risk Assessment (HHRA) was completed by Golder Associates as part of that company's work toward implementing the Mount Polley remediation strategy. The report detailed current recreational and commercial uses of Polley Lake, Quesnel Lake, and Quesnel River and their environs, including fisheries, swimming, boating, kayaking, canoeing, waterskiing, snowmobiling, and ice fishing. The report also noted the use of Quesnel Lake as a source of drinking water for nearby residences. As such, the report investigated the effect of the dam breach on human health, in particular on subsistence land users, Quesnel Lake residents, and recreational land users. The HHRA report found that soil, surface water, and the air did not contain contaminants of concern at levels exceeding contaminated-site regulations. The sediment layer exceeded the regulatory standard for lead, while vegetation had copper and vanadium present, and aluminum, copper, and vanadium were present in the fish. The HHRA report concluded that the risks were low to subsistence land users, recreational land users, loggers, and workers on site. Further, the human health risks associated with the tailings storage facility embankment breach were considered to be "very low". Groundwater did contain metals and other parameters that exceeded drinking-water standards, including iron, manganese, arsenic, molybdenum, and sulfate. However, no wells that supply groundwater exist in the Hazeltine Corridor. 2017 The HHRA report was published in May. The Ecological Risk Assessment (ERA) report was published in December and detailed the work done by Golder to understand the ecological significance of the tailings dam breach of 2014. The ERA report was completed as a component of the MPMC's remediation strategy to help inform rehabilitation work in affected areas. The ERA considered levels of metal contaminants in the soil, water, and sediment. Terrestrial and aquatic risk assessments were completed as part of the investigative work of the report. The report found excess concentrations of copper and vanadium in the soil; however, it was determined that the tailings were not acid generating and were unlikely to leach metals. The ERA investigated the cause of some tree death post-breach and attributed it to a root-smothering effect of the tailings effluent in the forested region.
It was determined that tailings decreased soil aeration, creating a poor environment for the soil biota that support tree growth and survival. The food chain of local wildlife was modeled to determine if copper and vanadium exceeded standards. The cumulative dose according to these models was determined to be below a conservative threshold for most wildlife species. The report concluded that the risk associated with copper and vanadium contamination was low. The bioavailability of the metals was likewise determined to be low. As part of the aquatic risk assessment, copper and arsenic were investigated in the sediment, while copper was the contaminant of potential concern investigated in the water of Polley Lake, Hazeltine Creek, Quesnel Lake, and Quesnel River. It was determined that copper levels decreased below the accepted guideline through 2015 in both lakes and Quesnel River, but not in Hazeltine Creek, which was the site of active remediation and restructuring. The plants, water-column invertebrates, and fish in Polley Lake and Quesnel Lake are not expected to face long-term effects of the 2014 breach, according to the ERA report. Likewise, risk to fish-consuming wildlife was also determined to be low. Copper was likewise considered to pose little risk to the deep benthic ecosystem as a limiting factor in the recovery of organisms at the sediment layer. The ERA concluded that ecological risks associated with metals released by the dam breach and tailings spill are low. 2018 Over 6 kilometres of new fish spawning and rearing habitats were installed in upper to middle Hazeltine Creek. The successful spawning of rainbow trout was later observed in 2018 and 2019 in upper Hazeltine Creek. 2019 The Remediation Plan was prepared by Golder Associates for the MPMC and was submitted to the British Columbia Ministry of Environment & Climate Change Strategy. This was the final requirement of the Pollution Abatement Order, which was lifted on 12 September 2019. The Mount Polley Review Panel determined that the environmental effect of the dam breach and tailings spill was primarily physical disruption by the effluents rather than chemical contamination. MPMC turned its remediation focus to restoring the physical state of the affected sites. The remediation efforts include ongoing planting of trees and shrubs that are native to the local ecosystem in the riparian and upland areas along Hazeltine Creek. The Mount Polley remediation efforts have replanted 600,000 trees and shrubs to date. The risk from chemical contamination on the site was determined to be low to very low in the relevant terrestrial and aquatic environments. Remediation efforts also repaired 400 metres of shoreline at Quesnel Lake and installed new fish habitats at that site. New wetlands were also installed at the site next to the tailings pond failure. Government monitoring, impact, and inspection In 2010, provincial government funding was cut for resource management. Preceding the dam breach, Mount Polley was inspected in 2013, but not in 2011 or 2012. Bill Bennett, Minister of Energy and Mines, said "there is no evidence that the government's missed inspections were related to the failure of the dam [in 2014]".
At a community meeting on 5 August 2014, the president of Imperial Metals stated "we regularly perform toxicity tests and we know this water is not toxic to rainbow trout." Water, sediment, and fish in Polley and Quesnel Lake are monitored by British Columbia government staff at the Ministry of Environment. Fish sampling in the months immediately following the tailings spill revealed elevated levels of selenium that exceeded guidelines for human consumption, though elevated levels of arsenic and copper were not considered a threat to human health. These levels were similar to levels found in 2013, before the tailings breach, and considered likely due to local geology. Sediment testing near the tailings spill revealed elevated concentrations of copper, iron, manganese, arsenic, silver, selenium and vanadium. Yet the government said tests in May 2014, prior to the tailings release, had shown elevated levels of the same elements. By 2016, Ministry of Environment testing determined zero exceedances of its guideline levels for contaminants for both aquatic life and drinking water in Quesnel Lake. In the years after the tailings spill, the extent of the impact of the event has been largely determined. The British Columbia Ministry of Environment provides ongoing water monitoring of pH, conductivity, turbidity, total suspended solids, total dissolved solids, total organic carbon, hardness, alkalinity, nutrients, general ions, and total and dissolved metals at the site. Imperial Metals history Imperial Metals & Power Ltd was incorporated in British Columbia in December 1959. The company owns the Mount Polley open pit copper mine and gold mine, the Huckleberry open pit copper mine near Houston, British Columbia, and the Ruddock Creek zinc/lead project, near Kamloops, British Columbia. In 2019, Imperial Metals sold its 70% stake in the Red Chris copper/gold mine to Newcrest for $804 million, retaining a 30% interest in the mine. Imperial Metals temporarily suspended operations of the Mount Polley mine in 2019 due to declining copper prices. Environmental remediation work continues at the site. The mine's closure affects 250 workers and is the second cessation of work due to global copper prices. The first such closure occurred in 2001 and lasted until 2005. See also List of copper mines List of copper mines in Canada List of gold mines in Canada Gibraltar Mine New Afton mine Coleman Mine Highland Valley Copper mine Canadian Malartic Mine LaRonde mine References External links Web site of the review panel Mount Polley project site Government of British Columbia information site Cariboo Regional District Environmental testing Environment of British Columbia Geography of the Cariboo Copper mines in British Columbia Gold mines in British Columbia Silver mining in Canada Mining in British Columbia Economy of British Columbia Cleaning and the environment Natural resource management Mining and the environment Environmental engineering Water pollution in Canada Disasters in British Columbia
Mount Polley mine
[ "Chemistry", "Engineering" ]
5,372
[ "Reliability engineering", "Chemical engineering", "Civil engineering", "Environmental engineering", "Environmental testing" ]
43,506,577
https://en.wikipedia.org/wiki/Calcium-rich%20supernova
In astronomy, a calcium-rich supernova (or calcium-rich transient, Ca-rich SN) is a subclass of supernovae that, in contrast to more well-known traditional supernova classes, are fainter and produce unusually large amounts of calcium. Since their luminosity is located in a gap between that of novae and other supernovae, they are also referred to as "gap" transients. Only around 15 events have been classified as calcium-rich supernovae (as of August 2017); a combination of their intrinsic rarity and low luminosity makes new discoveries and their subsequent study difficult. This makes calcium-rich supernovae one of the most mysterious supernova subclasses currently known. Origins and classification A peculiar group of supernovae that were unusually rich in calcium was identified by Alexei Filippenko and collaborators. Although they appeared somewhat similar to Type Ib and Ic supernovae, their spectra were dominated by calcium, without other signatures often seen in Type Ib and Ic supernovae, and the term calcium-rich was coined to describe them. Subsequent discoveries led to the classification of empirically similar supernovae. They share characteristics such as quickly rising and fading light curves that peak in luminosity between novae and supernovae, and spectra that are dominated by calcium 2–3 months after the initial explosion. Explosion mechanism The exact nature of the stellar systems, and of their subsequent explosions, that give rise to calcium-rich supernovae is unknown. Despite appearing similar to Type Ib supernovae, it was noted that a different explosion mechanism was likely to be responsible for calcium-rich supernovae. Since a large proportion of the galaxies from which they are thought to originate are early-type galaxies, and thus composed of old stellar populations, they are unlikely to contain many young, massive stars that give rise to Type Ib supernovae. Supernova explosions in old stellar populations generally involve a white dwarf, since these are old systems that can undergo thermonuclear explosion under the right circumstances, as is the case for Type Ia supernovae. However, because calcium-rich supernovae are much less luminous and fade more quickly than normal Type Ia supernovae, it is unlikely that the same mechanism is at play for both. Another peculiarity of calcium-rich supernovae is that they appear to explode far away from galaxies, even reaching intergalactic space. Searches for faint dwarf galaxies at their locations have indicated that they are exploding in very low density environments, unlike other supernova types. There are several theories that attempt to explain this behaviour. Binary systems of high-velocity stars, such as two white dwarfs or a white dwarf and a neutron star, that have been ejected from their galaxy either due to a neutron star kick or interaction with the supermassive black hole in their galaxy could produce explosions when they eventually merge (due to gravitational wave radiation) that would preferentially occur far from galaxies. Alternatively they have been suggested to be due to stars that reside in the intracluster medium within large galaxy groups or clusters, having been expelled from their galaxy during mergers or interactions. The explosion would then be caused by the detonation of a low mass white dwarf during a merging event as part of a binary system, or the detonation of a helium shell on a white dwarf.
A calcium-rich supernova event expels several tenths of a solar mass in material at thousands of kilometres per second and reaches a peak luminosity equal to around 100–200 million times that of the Sun. Despite being comparatively rare and diminutive, calcium-rich supernovae are thought to make a significant contribution to the production of calcium in the Universe. List References External links List of all known Type Ca-rich supernovae at The Open Supernova Catalog Supernovae
Calcium-rich supernova
[ "Chemistry", "Astronomy" ]
790
[ "Supernovae", "Astronomical events", "Explosions" ]
43,507,260
https://en.wikipedia.org/wiki/Quantifier%20%28logic%29
In logic, a quantifier is an operator that specifies how many individuals in the domain of discourse satisfy an open formula. For instance, the universal quantifier ∀ in the first-order formula ∀x P(x) expresses that everything in the domain satisfies the property denoted by P. On the other hand, the existential quantifier ∃ in the formula ∃x P(x) expresses that there exists something in the domain which satisfies that property. A formula where a quantifier takes widest scope is called a quantified formula. A quantified formula must contain a bound variable and a subformula specifying a property of the referent of that variable. The most commonly used quantifiers are ∀ and ∃. These quantifiers are standardly defined as duals; in classical logic, they are interdefinable using negation. They can also be used to define more complex quantifiers, as in the formula ¬∃x P(x), which expresses that nothing has the property P. Other quantifiers are only definable within second-order logic or higher-order logics. Quantifiers have been generalized beginning with the work of Mostowski and Lindström. In a first-order logic statement, quantifications of the same type (either universal quantifications or existential quantifications) can be exchanged without changing the meaning of the statement, while the exchange of quantifications of different types changes the meaning. As an example, the only difference in the definition of uniform continuity and (ordinary) continuity is the order of quantifications. First-order quantifiers approximate the meanings of some natural language quantifiers such as "some" and "all". However, many natural language quantifiers can only be analyzed in terms of generalized quantifiers. Relations to logical conjunction and disjunction For a finite domain of discourse D = {a1, ..., an}, the universally quantified formula ∀x∈D P(x) is equivalent to the logical conjunction P(a1) ∧ ... ∧ P(an). Dually, the existentially quantified formula ∃x∈D P(x) is equivalent to the logical disjunction P(a1) ∨ ... ∨ P(an). For example, if B = {0, 1} is the set of binary digits, the formula ∀x∈B (x = x·x) abbreviates (0 = 0·0) ∧ (1 = 1·1), which evaluates to true. Infinite domain of discourse Consider the following statement (using dot notation for multiplication): 1 · 2 = 1 + 1, and 2 · 2 = 2 + 2, and 3 · 2 = 3 + 3, ..., and 100 · 2 = 100 + 100, and ..., etc. This has the appearance of an infinite conjunction of propositions. From the point of view of formal languages, this is immediately a problem, since syntax rules are expected to generate finite words. The example above is fortunate in that there is a procedure to generate all the conjuncts. However, if an assertion were to be made about every irrational number, there would be no way to enumerate all the conjuncts, since irrationals cannot be enumerated. A succinct, equivalent formulation which avoids these problems uses universal quantification: For each natural number n, n · 2 = n + n. A similar analysis applies to the disjunction, 1 is equal to 5 + 5, or 2 is equal to 5 + 5, or 3 is equal to 5 + 5, ..., or 100 is equal to 5 + 5, or ..., etc. which can be rephrased using existential quantification: For some natural number n, n is equal to 5 + 5.
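These finite-domain equivalences are exactly what the all and any functions compute in many programming languages. A small illustrative sketch (ours, not part of the article) in Python:

```python
# Illustrative sketch: over the finite domain B = {0, 1}, the universal
# quantifier is an iterated conjunction and the existential quantifier an
# iterated disjunction.
B = {0, 1}

forall_B = all(x == x * x for x in B)    # forall x in B (x = x*x)
exists_B = any(x == x * x for x in B)    # exists x in B (x = x*x)
print(forall_B, exists_B)                # True True

# Over an infinite domain the conjunction cannot be written out, but an
# existential claim can still be verified by finding a single witness:
from itertools import count
print(any(n == 5 + 5 for n in count(1))) # True -- stops at the witness n = 10
```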
Algebraic approaches to quantification It is possible to devise abstract algebras whose models include formal languages with quantification, but progress has been slow and interest in such algebra has been limited. Three approaches have been devised to date: Relation algebra, invented by Augustus De Morgan, and developed by Charles Sanders Peirce, Ernst Schröder, Alfred Tarski, and Tarski's students. Relation algebra cannot represent any formula with quantifiers nested more than three deep. Surprisingly, the models of relation algebra include the axiomatic set theory ZFC and Peano arithmetic; Cylindric algebra, devised by Alfred Tarski, Leon Henkin, and others; The polyadic algebra of Paul Halmos. Notation The two most common quantifiers are the universal quantifier and the existential quantifier. The traditional symbol for the universal quantifier is "∀", a rotated letter "A", which stands for "for all" or "all". The corresponding symbol for the existential quantifier is "∃", a rotated letter "E", which stands for "there exists" or "exists". An example of translating a quantified statement in a natural language such as English would be as follows. Given the statement, "Each of Peter's friends either likes to dance or likes to go to the beach (or both)", key aspects can be identified and rewritten using symbols including quantifiers. So, let X be the set of all Peter's friends, P(x) the predicate "x likes to dance", and Q(x) the predicate "x likes to go to the beach". Then the above sentence can be written in formal notation as ∀x∈X (P(x) ∨ Q(x)), which is read, "for every x that is a member of X, P applies to x or Q applies to x". Some other quantified expressions are constructed as follows: ∃x P and ∀x P, for a formula P. These two expressions (using the definitions above) are read as "there exists a friend of Peter who likes to dance" and "all friends of Peter like to dance", respectively. Variant notations include, for set X and set members x: (∃x) P, ∃x · P, ∃x : P, and ∃x∈X P. All of these variations also apply to universal quantification. Other variations for the universal quantifier are (∀x) P and (x) P. Some versions of the notation explicitly mention the range of quantification. The range of quantification must always be specified; for a given mathematical theory, this can be done in several ways: Assume a fixed domain of discourse for every quantification, as is done in Zermelo–Fraenkel set theory; Fix several domains of discourse in advance and require that each variable have a declared domain, which is the type of that variable (this is analogous to the situation in statically typed computer programming languages, where variables have declared types); Mention explicitly the range of quantification, perhaps using a symbol for the set of all objects in that domain (or the type of the objects in that domain). One can use any variable as a quantified variable in place of any other, under certain restrictions in which variable capture does not occur. Even if the notation uses typed variables, variables of that type may be used. Informally or in natural language, the "∀x" or "∃x" might appear after or in the middle of P(x). Formally, however, the phrase that introduces the dummy variable is placed in front. Mathematical formulas mix symbolic expressions for quantifiers with natural language quantifiers such as: For every natural number x, ...; There exists an x such that ...; For at least one x, .... Keywords for uniqueness quantification include: For exactly one natural number x, ...; There is one and only one x such that .... Further, x may be replaced by a pronoun. For example: For every natural number, its product with 2 equals its sum with itself; Some natural number is prime. Order of quantifiers (nesting) The order of quantifiers is critical to meaning, as is illustrated by the following two propositions: For every natural number n, there exists a natural number s such that s = n². This is clearly true; it just asserts that every natural number has a square. The meaning of the assertion in which the order of quantifiers is reversed is different: There exists a natural number s such that for every natural number n, s = n². This is clearly false; it asserts that there is a single natural number s that is the square of every natural number. This is because the syntax directs that any variable cannot be a function of subsequently introduced variables.
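Over a finite initial segment of the natural numbers, the two propositions can be checked mechanically; the nesting of all and any mirrors the nesting of ∀ and ∃. A small sketch (ours, not from the article; the bound of 100 is an arbitrary choice):

```python
# Illustrative sketch: the order of the quantifiers changes the truth value.
N = range(100)
S = range(100 * 100)   # large enough to contain every square of n in N

# forall n exists s (s = n^2): true, since every n has a square.
forall_exists = all(any(s == n ** 2 for s in S) for n in N)

# exists s forall n (s = n^2): false, no single s is the square of every n.
exists_forall = any(all(s == n ** 2 for n in N) for s in S)

print(forall_exists, exists_forall)   # True False
```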
A less trivial example from mathematical analysis regards the concepts of uniform and pointwise continuity, whose definitions differ only by an exchange in the positions of two quantifiers. A function f from R to R is called pointwise continuous if ∀ε > 0 ∀x ∈ R ∃δ > 0 ∀h ∈ R (|h| < δ → |f(x + h) − f(x)| < ε), and uniformly continuous if ∀ε > 0 ∃δ > 0 ∀x ∈ R ∀h ∈ R (|h| < δ → |f(x + h) − f(x)| < ε). In the former case, the particular value chosen for δ can be a function of both ε and x, the variables that precede it. In the latter case, δ can be a function only of ε (i.e., it has to be chosen independent of x). For example, f(x) = x² satisfies pointwise, but not uniform, continuity (its slope is unbounded). In contrast, interchanging the two initial universal quantifiers in the definition of pointwise continuity does not change the meaning. As a general rule, swapping two adjacent universal quantifiers with the same scope (or swapping two adjacent existential quantifiers with the same scope) doesn't change the meaning of the formula, but swapping an existential quantifier and an adjacent universal quantifier may change its meaning. The maximum depth of nesting of quantifiers in a formula is called its "quantifier rank". Equivalent expressions If D is a domain of x and P(x) is a predicate dependent on object variable x, then the universal proposition can be expressed as ∀x∈D P(x). This notation is known as restricted or relativized or bounded quantification. Equivalently one can write ∀x (x∈D → P(x)). The existential proposition can be expressed with bounded quantification as ∃x∈D P(x), or equivalently ∃x (x∈D ∧ P(x)). Together with negation, only one of either the universal or existential quantifier is needed to perform both tasks: ¬∀x P(x) is equivalent to ∃x ¬P(x), which shows that to disprove a "for all x" proposition, one needs no more than to find an x for which the predicate is false. Similarly, ¬∃x P(x) is equivalent to ∀x ¬P(x): to disprove a "there exists an x" proposition, one needs to show that the predicate is false for all x. In classical logic, every formula is logically equivalent to a formula in prenex normal form, that is, a string of quantifiers and bound variables followed by a quantifier-free formula. Quantifier elimination Range of quantification Every quantification involves one specific variable and a domain of discourse or range of quantification of that variable. The range of quantification specifies the set of values that the variable takes. In the examples above, the range of quantification is the set of natural numbers. Specification of the range of quantification allows us to express the difference between, say, asserting that a predicate holds for some natural number or for some real number. Expository conventions often reserve some variable names such as "n" for natural numbers, and "x" for real numbers, although relying exclusively on naming conventions cannot work in general, since ranges of variables can change in the course of a mathematical argument. A universally quantified formula over an empty range (like ∀x∈∅ P(x)) is always vacuously true. Conversely, an existentially quantified formula over an empty range (like ∃x∈∅ P(x)) is always false. A more natural way to restrict the domain of discourse uses guarded quantification. For example, the guarded quantification For some natural number n, n is even and n is prime means For some even number n, n is prime.
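The negation dualities stated under Equivalent expressions can likewise be confirmed over any finite domain. A minimal sketch (ours, with an arbitrary domain and example predicate):

```python
# Illustrative sketch: not-forall P == exists not-P,
# and not-exists P == forall not-P, checked over a finite domain.
D = range(10)

def P(x):
    return x % 2 == 0   # an arbitrary example predicate

assert (not all(P(x) for x in D)) == any(not P(x) for x in D)
assert (not any(P(x) for x in D)) == all(not P(x) for x in D)
print("dualities hold on D")
```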
In some mathematical theories, a single domain of discourse fixed in advance is assumed. For example, in Zermelo–Fraenkel set theory, variables range over all sets. In this case, guarded quantifiers can be used to mimic a smaller range of quantification. Thus in the example above, to express For every natural number n, n·2 = n + n in Zermelo–Fraenkel set theory, one would write For every n, if n belongs to N, then n·2 = n + n, where N is the set of all natural numbers. Formal semantics Mathematical semantics is the application of mathematics to study the meaning of expressions in a formal language. It has three elements: a mathematical specification of a class of objects via syntax, a mathematical specification of various semantic domains and the relation between the two, which is usually expressed as a function from syntactic objects to semantic ones. This article only addresses the issue of how quantifier elements are interpreted. The syntax of a formula can be given by a syntax tree. A quantifier has a scope, and an occurrence of a variable x is free if it is not within the scope of a quantification for that variable. Thus in (∀x ∃y B(y, x)) ∨ C(y, x) the occurrence of both x and y in C(y, x) is free, while the occurrence of x and y in B(y, x) is bound (i.e. non-free). An interpretation for first-order predicate calculus assumes as given a domain of individuals X. A formula A whose free variables are x1, ..., xn is interpreted as a Boolean-valued function F(v1, ..., vn) of n arguments, where each argument ranges over the domain X. Boolean-valued means that the function assumes one of the values T (interpreted as truth) or F (interpreted as falsehood). The interpretation of the formula ∀xn A is the function G of n−1 arguments such that G(v1, ..., vn−1) = T if and only if F(v1, ..., vn−1, w) = T for every w in X. If F(v1, ..., vn−1, w) = F for at least one value of w, then G(v1, ..., vn−1) = F. Similarly the interpretation of the formula ∃xn A is the function H of n−1 arguments such that H(v1, ..., vn−1) = T if and only if F(v1, ..., vn−1, w) = T for at least one w and H(v1, ..., vn−1) = F otherwise. The semantics for uniqueness quantification requires first-order predicate calculus with equality. This means there is given a distinguished two-placed predicate "="; the semantics is also modified accordingly so that "=" is always interpreted as the two-place equality relation on X. The interpretation of ∃!xn A then is the function of n−1 arguments, which is the logical and of the interpretations of ∃xn A and of ∀y ∀z ((A(x1, ..., xn−1, y) ∧ A(x1, ..., xn−1, z)) → y = z). Each kind of quantification defines a corresponding closure operator on the set of formulas, by adding, for each free variable x, a quantifier to bind x. For example, the existential closure of the open formula n > 2 ∧ xⁿ + yⁿ = zⁿ is the closed formula ∃n ∃x ∃y ∃z (n > 2 ∧ xⁿ + yⁿ = zⁿ); the latter formula, when interpreted over the positive integers, is known to be false by Fermat's Last Theorem. As another example, equational axioms, like x + y = y + x, are usually meant to denote their universal closure, like ∀x ∀y (x + y = y + x) to express commutativity. Paucal, multal and other degree quantifiers None of the quantifiers previously discussed apply to a quantification such as There are many integers n < 100, such that n is divisible by 2 or 3 or 5.
One possible interpretation mechanism can be obtained as follows: Suppose that in addition to a semantic domain X, we have given a probability measure P defined on X and cutoff numbers 0 < a ≤ b ≤ 1. If A is a formula with free variables x1, ..., xn whose interpretation is the function F of variables v1, ..., vn, then the interpretation of the multal quantification ∃many xn A is the function of v1, ..., vn−1 which is T if and only if P({w : F(v1, ..., vn−1, w) = T}) ≥ b and F otherwise. Similarly, the interpretation of the paucal quantification ∃few xn A is the function of v1, ..., vn−1 which is F if and only if P({w : F(v1, ..., vn−1, w) = T}) > a and T otherwise. Other quantifiers A few other quantifiers have been proposed over time. In particular, the solution quantifier, noted § (section sign) and read "those". For example, (§n∈N) [n² ≤ 4] = {0, 1, 2} is read "those n in N such that n² ≤ 4 are in {0,1,2}". The same construct is expressible in set-builder notation as {n ∈ N : n² ≤ 4} = {0, 1, 2}. Contrary to the other quantifiers, § yields a set rather than a formula. Some other quantifiers sometimes used in mathematics include: There are infinitely many elements such that...; For all but finitely many elements... (sometimes expressed as "for almost all elements..."); There are uncountably many elements such that...; For all but countably many elements...; For all elements in a set of positive measure...; For all elements except those in a set of measure zero... History Term logic, also called Aristotelian logic, treats quantification in a manner that is closer to natural language, and also less suited to formal analysis. Term logic treated All, Some and No in the 4th century BC, in an account also touching on the alethic modalities. In 1827, George Bentham published his Outline of a New System of Logic: With a Critical Examination of Dr. Whately's Elements of Logic, describing the principle of the quantifier, but the book was not widely circulated. William Hamilton claimed to have coined the terms "quantify" and "quantification", most likely in his Edinburgh lectures c. 1840. Augustus De Morgan confirmed this in 1847, but modern usage began with De Morgan in 1862 where he makes statements such as "We are to take in both all and some-not-all as quantifiers". Gottlob Frege, in his 1879 Begriffsschrift, was the first to employ a quantifier to bind a variable ranging over a domain of discourse and appearing in predicates. He would universally quantify a variable (or relation) by writing the variable over a dimple in an otherwise straight line appearing in his diagrammatic formulas. Frege did not devise an explicit notation for existential quantification, instead employing his equivalent of ~∀x~, or contraposition. Frege's treatment of quantification went largely unremarked until Bertrand Russell's 1903 Principles of Mathematics. In work that culminated in Peirce (1885), Charles Sanders Peirce and his student Oscar Howard Mitchell independently invented universal and existential quantifiers, and bound variables. Peirce and Mitchell wrote Πx and Σx where we now write ∀x and ∃x. Peirce's notation can be found in the writings of Ernst Schröder, Leopold Loewenheim, Thoralf Skolem, and Polish logicians into the 1950s. Most notably, it is the notation of Kurt Gödel's landmark 1930 paper on the completeness of first-order logic, and 1931 paper on the incompleteness of Peano arithmetic. Peirce's approach to quantification also influenced William Ernest Johnson and Giuseppe Peano, who invented yet another notation, namely (x) for the universal quantification of x and (in 1897) ∃x for the existential quantification of x.
Hence for decades, the canonical notation in philosophy and mathematical logic was (x)P to express "all individuals in the domain of discourse have the property P," and "(∃x)P" for "there exists at least one individual in the domain of discourse having the property P." Peano, who was much better known than Peirce, in effect diffused the latter's thinking throughout Europe. Peano's notation was adopted by the Principia Mathematica of Whitehead and Russell, Quine, and Alonzo Church. In 1935, Gentzen introduced the ∀ symbol, by analogy with Peano's ∃ symbol. ∀ did not become canonical until the 1960s. Around 1895, Peirce began developing his existential graphs, whose variables can be seen as tacitly quantified. Whether the shallowest instance of a variable is even or odd determines whether that variable's quantification is universal or existential. (Shallowness is the contrary of depth, which is determined by the nesting of negations.) Peirce's graphical logic has attracted some attention in recent years from those researching heterogeneous reasoning and diagrammatic inference. See also Absolute generality Almost all Branching quantifier Conditional quantifier Counting quantification Eventually (mathematics) Generalized quantifier — a higher-order property used as standard semantics of quantified noun phrases Lindström quantifier — a generalized polyadic quantifier Quantifier shift References Bibliography Barwise, Jon; and Etchemendy, John, 2000. Language Proof and Logic. CSLI (University of Chicago Press) and New York: Seven Bridges Press. A gentle introduction to first-order logic by two first-rate logicians. Frege, Gottlob, 1879. Begriffsschrift. Translated in Jean van Heijenoort, 1967. From Frege to Gödel: A Source Book on Mathematical Logic, 1879-1931. Harvard University Press. The first appearance of quantification. Hilbert, David; and Ackermann, Wilhelm, 1950 (1928). Principles of Mathematical Logic. Chelsea. Translation of Grundzüge der theoretischen Logik. Springer-Verlag. The 1928 first edition is the first time quantification was consciously employed in the now-standard manner, namely as binding variables ranging over some fixed domain of discourse. This is the defining aspect of first-order logic. Peirce, C. S., 1885, "On the Algebra of Logic: A Contribution to the Philosophy of Notation," American Journal of Mathematics, Vol. 7, pp. 180–202. Reprinted in Kloesel, N. et al., eds., 1993. Writings of C. S. Peirce, Vol. 5. Indiana University Press. The first appearance of quantification in anything like its present form. Reichenbach, Hans, 1975 (1947). Elements of Symbolic Logic, Dover Publications. The quantifiers are discussed in chapters §18 "Binding of variables" through §30 "Derivations from Synthetic Premises". Westerståhl, Dag, 2001, "Quantifiers," in Goble, Lou, ed., The Blackwell Guide to Philosophical Logic. Blackwell. Wiese, Heike, 2003. Numbers, language, and the human mind. Cambridge University Press. External links Stanford Encyclopedia of Philosophy: Shapiro, Stewart (2000). "Classical Logic" (covers syntax, model theory, and metatheory for first order logic in the natural deduction style.) Westerståhl, Dag (2005). "Generalized quantifiers" Peters, Stanley; Westerståhl, Dag (2002). "Quantifiers" Logic Predicate logic Quantifier (logic) Philosophical logic Semantics
Quantifier (logic)
[ "Mathematics" ]
4,873
[ "Basic concepts in set theory", "Predicate logic", "Quantifier (logic)", "Mathematical logic" ]
43,507,788
https://en.wikipedia.org/wiki/Neuromanagement
Neuromanagement uses cognitive neuroscience, among other life science fields, and technology to analyze economic and managerial issues. It focuses on exploring human brain activity and mental processes when people are faced with typical problems of economics and management. This research provides insight into human decision-making and other general social behavior. The main research areas include decision neuroscience, neuroeconomics, neuromarketing, neuro-industrial engineering, and neuro-information systems. Neuromanagement was first proposed in 2006 by Professor Qingguo Ma, the director of the Neuromanagement Laboratory of Zhejiang University. Decision neuroscience Decision neuroscience consists of the following aspects: the neural basis of decision-making and behavioral preferences; brain activity during the decision-making process; decision-making models incorporating the characteristics of brain activity; and the neural basis of game theory and game modeling. Neuromarketing Neuromarketing derives from research on neural features of consumer behavior. Using neural activity to interpret consumer behavior provides insight into the neural mechanisms underlying different consumer decision-making behavior. Marketing experts can then determine what will encourage different consumers to make a purchase and produce appropriate marketing strategies, covering general marketing, branding, and how these relate to customer loyalty. Neuromarketing research is mainly composed of neuro-consumer behavior, neuro-marketing strategy and neuro-advertising. Neuro-industrial-engineering The concept of neuro-industrial engineering was conceived by Qingguo Ma. It is an interdisciplinary subject of cognitive neuroscience and industrial engineering. Neuro-IE studies human cognition and uses advanced neuroscience and biofeedback technology to measure physiological responses and acquire data for further analysis, providing insight into people's mental states free from conscious, subjective control. These data, capturing workers' neural activity together with their physiological and psychological states during production, are then applied in operations management to improve processes for workers. Neuro-Information-Systems (NeuroIS) The concept of NeuroIS was formally proposed at the 2007 International Conference on Information Systems (ICIS). The proposal discussed four major opportunities for the application of cognitive theory, methods and techniques to information system issues, particularly in technology adoption and application, e-commerce and group decision support systems. Subsequently, NeuroIS studies have been published in MIS Quarterly and Information Systems Research, as well as presented at ICIS and the Americas Conference on Information Systems. Neuro-entrepreneurship Neuro-entrepreneurship incorporates the internal characteristics of entrepreneurs to study the neural basis of innovation. More important, neuroentrepreneurship focuses on what is often called the "entrepreneurial mindset" by looking at "what lies beneath" more surface-level entrepreneurial thinking such as intentions. Neuroentrepreneurship thus offers new insights into experiential learning, which is essential in entrepreneurship education.
Neuromanagement Lab The Neuromanagement Lab was established in 2006 as one of the first laboratories in China specializing in research elucidating the micro-mechanisms of management activities, in an interdisciplinary field integrating management science, economics, and cognitive neuroscience. The lab is located at Zhejiang University and is equipped with standard cognitive neuroscience and behavioral experiment rooms. The lab has recently focused on research in interdisciplinary fields including decision neuroscience, neuroeconomics, neuromanagement, neuro-industrial engineering, neuromarketing, and social neuroscience. The lab has undertaken over 30 national, ministerial or provincial-level projects since its establishment. The project on neuromarketing and the project on decision-making (both sponsored by the National Social Science Foundation of China) were the first projects approved in their academic fields. Moreover, the lab has hosted approximately 10 high-level academic symposia, including the International Conference on Neuromanagement and Neuroeconomics. References See also Consumer neuroscience Behavioral economics Neuroeconomics Cognitive psychology
Neuromanagement
[ "Biology" ]
778
[ "Behavior", "Behavioral economics", "Behavioural sciences", "Behaviorism", "Cognitive psychology" ]
43,507,880
https://en.wikipedia.org/wiki/Nanospray%20desorption%20electrospray%20ionization
Nanospray desorption electrospray ionization (nano-DESI) is an ambient pressure ionization technique used in mass spectrometry (MS) for chemical analysis of organic molecules. In this technique, analytes are desorbed into a liquid bridge formed between two capillaries and the sampling surface. Unlike desorption electrospray ionization (DESI), from which nano-DESI is derived, nano-DESI makes use of a secondary capillary, which improves the sampling efficiency.

Principle of operation
The typical nano-DESI probe setup consists of two fused silica capillaries: a primary capillary, which supplies solvent and maintains the liquid bridge, and a secondary capillary, which transports the dissolved analyte to the mass spectrometer. High voltage (several kV) is applied between the inlet of the mass spectrometer and the primary capillary, creating a self-aspirating nanospray. The liquid bridge is maintained by continuous flow of the solvent, and the contact area between the solvent bridge and the sample surface can be controlled by changing the solvent flow rate, varying the diameter of the capillaries used, and regulating the distance between the sample and the nano-DESI probe. In this way, the spatial resolution in mass spectrometry imaging applications can be improved, with typical resolutions ranging between 100 and 150 μm.

Applications
Nano-DESI has been applied for localized analysis of complex molecules and imaging of tissue sections, microbial communities, and environmental samples.

References

Mass spectrometry
Spatial analysis
Nanospray desorption electrospray ionization
[ "Physics", "Chemistry" ]
319
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Spatial analysis", "Space", "Mass spectrometry", "Spacetime", "Matter" ]
43,510,390
https://en.wikipedia.org/wiki/Murnaghan%E2%80%93Nakayama%20rule
In group theory, a branch of mathematics, the Murnaghan–Nakayama rule, named after Francis Murnaghan and Tadashi Nakayama, is a combinatorial method to compute irreducible character values of a symmetric group. There are several generalizations of this rule beyond the representation theory of symmetric groups, but they are not covered here.

The irreducible characters of a group are of interest to mathematicians because they concisely summarize important information about the group, such as the dimensions of the vector spaces in which the elements of the group can be represented by linear transformations that "mix" all the dimensions. For many groups, calculating irreducible character values is very difficult; the existence of simple formulas is the exception rather than the rule.

The Murnaghan–Nakayama rule is a combinatorial rule for computing symmetric group character values $\chi^\lambda_\rho$ using a particular kind of Young tableaux. Here λ and ρ are both integer partitions of some integer n, the order of the symmetric group under consideration. The partition λ specifies the irreducible character, while the partition ρ specifies the conjugacy class on whose group elements the character is evaluated to produce the character value. The partitions are represented as weakly decreasing tuples; for example, two of the partitions of 8 are (5,2,1) and (3,3,1,1). There are two versions of the Murnaghan–Nakayama rule, one non-recursive and one recursive.

Non-recursive version
Theorem:
$$\chi^\lambda_\rho \;=\; \sum_{T \in \mathrm{BST}(\lambda,\rho)} (-1)^{\mathrm{ht}(T)},$$
where the sum is taken over the set BST(λ,ρ) of all border-strip tableaux of shape λ and type ρ. That is, each tableau T is a tableau such that
the k-th row of T has $\lambda_k$ boxes,
the boxes of T are filled with integers, with the integer i appearing $\rho_i$ times,
the integers in every row and column are weakly increasing, and
the set of squares filled with the integer i forms a border strip, that is, a connected skew shape containing no 2×2 square.
The height ht(T) is the sum of the heights of the border strips in T. The height of a border strip is one less than the number of rows it touches.

It follows from this theorem that the character values of a symmetric group are integers.

For some combinations of λ and ρ there are no border-strip tableaux. In this case there are no terms in the sum, and therefore the character value is zero.

Example
Consider the calculation of one of the character values for the symmetric group of order 8, when λ is the partition (5,2,1) and ρ is the partition (3,3,1,1). The shape partition λ specifies that the tableau must have three rows, the first having 5 boxes, the second having 2 boxes, and the third having 1 box. The type partition ρ specifies that the tableau must be filled with three 1's, three 2's, one 3, and one 4. There are six such border-strip tableaux. If we call these $T_1, T_2, \dots, T_6$, then (in a suitable ordering of the tableaux) their heights are 1, 1, 1, 2, 2, and 3, and the character value is therefore
$$\chi^{(5,2,1)}_{(3,3,1,1)} = (-1)^1 + (-1)^1 + (-1)^1 + (-1)^2 + (-1)^2 + (-1)^3 = -2.$$

Recursive version
Theorem:
$$\chi^\lambda_\rho \;=\; \sum_{\xi \in \mathrm{BS}(\lambda,\rho_1)} (-1)^{\mathrm{ht}(\xi)}\, \chi^{\lambda\setminus\xi}_{\rho\setminus\rho_1},$$
where the sum is taken over the set BS(λ,ρ1) of border strips within the Young diagram of shape λ that have ρ1 boxes and whose removal leaves a valid Young diagram. The notation $\lambda\setminus\xi$ represents the partition that results from removing the border strip ξ from λ, and $\rho\setminus\rho_1$ represents the partition that results from removing the first element ρ1 from ρ.

Note that the right-hand side is a sum of characters for symmetric groups that have smaller order than that of the symmetric group we started with on the left-hand side.
In other words, this version of the Murnaghan–Nakayama rule expresses a character of the symmetric group $S_n$ in terms of the characters of smaller symmetric groups $S_k$ with k < n. Applying the rule recursively produces a tree of character value evaluations for smaller and smaller partitions. Each branch stops for one of two reasons: either there are no border strips of the required length within the reduced shape, so the sum on the right is zero, or a border strip occupying the entire reduced shape is removed, leaving a Young diagram with no boxes. At that point we are evaluating $\chi^{()}_{()}$, where both λ and ρ are the empty partition (), and the rule requires that this terminal case be defined as having character value
$$\chi^{()}_{()} = 1.$$

This recursive version of the Murnaghan–Nakayama rule is especially efficient for computer calculation when one computes character tables for $S_k$ for increasing values of k and stores all of the previously computed character tables. (An illustrative implementation is sketched after the reference list below.)

Example
We will again compute the character value with λ = (5,2,1) and ρ = (3,3,1,1). To begin, consider the Young diagram with shape λ. Since the first part of ρ is 3, look for border strips that consist of 3 boxes. There are two possibilities: the first border strip has height 0, and removing it produces the reduced shape (2,2,1); the second has height 1, and removing it produces the reduced shape (5). Therefore, one has
$$\chi^{(5,2,1)}_{(3,3,1,1)} = \chi^{(2,2,1)}_{(3,1,1)} - \chi^{(5)}_{(3,1,1)},$$
expressing a character value of $S_8$ in terms of two character values of $S_5$. Applying the rule again to both terms, one finds
$$\chi^{(2,2,1)}_{(3,1,1)} = -\chi^{(2)}_{(1,1)} \qquad\text{and}\qquad \chi^{(5)}_{(3,1,1)} = \chi^{(2)}_{(1,1)},$$
reducing to a character value of $S_2$. Applying the rule again gives
$$\chi^{(2)}_{(1,1)} = \chi^{(1)}_{(1)},$$
reducing to the only character value of $S_1$. A final application produces the terminal character:
$$\chi^{(1)}_{(1)} = \chi^{()}_{()} = 1.$$
Working backwards from this known character, the result is
$$\chi^{(5,2,1)}_{(3,3,1,1)} = -1 - 1 = -2,$$
as before.

References

Combinatorics
Representation theory of finite groups
Symmetry
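The following is a minimal Python sketch of the recursive rule above — an illustration under our own conventions, not code taken from any of the references. It encodes a partition by its beta-numbers (first-column hook lengths): removing a border strip of length k then corresponds to lowering one beta-number by k, and the height of the removed strip equals the number of beta-numbers jumped over. The function name and the encoding are choices of this sketch.

from functools import lru_cache

@lru_cache(maxsize=None)
def murnaghan_nakayama(lam, rho):
    """chi^lam_rho for the symmetric group, via the recursive
    Murnaghan-Nakayama rule; lam and rho are partitions given as
    weakly decreasing tuples of positive integers."""
    if not rho:                          # terminal case: chi^()_() = 1
        return 1 if not lam else 0
    k, rest = rho[0], rho[1:]
    n = len(lam)
    # Beta-numbers: a strictly decreasing encoding of the partition.
    beta = [lam[i] + (n - 1 - i) for i in range(n)]
    bset = set(beta)
    total = 0
    for i, b in enumerate(beta):
        nb = b - k
        if nb < 0 or nb in bset:         # no border strip of length k here
            continue
        # Height of the removed strip = beta-numbers strictly between nb and b.
        height = sum(1 for c in beta if nb < c < b)
        new_beta = sorted(beta[:i] + [nb] + beta[i + 1:], reverse=True)
        # Decode the new beta-numbers back into a partition, dropping zeros.
        parts = [new_beta[j] - (n - 1 - j) for j in range(n)]
        new_lam = tuple(p for p in parts if p > 0)
        total += (-1) ** height * murnaghan_nakayama(new_lam, rest)
    return total

print(murnaghan_nakayama((5, 2, 1), (3, 3, 1, 1)))   # -2, matching the example

The beta-number encoding avoids searching the Young diagram for border strips directly, and the memoization via lru_cache gives exactly the reuse of previously computed character values that the text describes.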
Murnaghan–Nakayama rule
[ "Physics", "Mathematics" ]
1,176
[ "Discrete mathematics", "Geometry", "Symmetry", "Combinatorics" ]
46,664,026
https://en.wikipedia.org/wiki/Modified%20pressure
Some systems in fluid dynamics involve a fluid subject to conservative body forces. Since a conservative body force is the gradient of a potential function, it has the same effect as a gradient in fluid pressure, and it is often convenient to define a modified pressure equal to the true fluid pressure plus the potential. Examples of conservative body forces include gravity and the centrifugal force in a rotating reference frame. (A short worked sketch of the definition follows the reference list below.)

See also
Reduced gravity

References

Fluid dynamics
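The following LaTeX sketch shows how the definition is used in the concrete case of gravity. The symbols (u for velocity, p for the true pressure, \tilde{p} for the modified pressure, and ρgz for the potential) are illustrative choices of this sketch, not notation fixed by the article:

% Constant-density momentum balance with gravity as the body force;
% gravity is conservative: g = -grad(g z).
\rho\,\frac{D\mathbf{u}}{Dt}
  = -\nabla p + \rho\,\mathbf{g} + \mu\,\nabla^{2}\mathbf{u},
\qquad
\mathbf{g} = -\nabla(gz).
% Defining the modified pressure \tilde{p} = p + \rho g z (rho constant)
% absorbs the body force into the pressure-gradient term:
\rho\,\frac{D\mathbf{u}}{Dt}
  = -\nabla\tilde{p} + \mu\,\nabla^{2}\mathbf{u}.

The velocity field solves the same equation either way; only the pressure variable is reinterpreted, which is why a conservative body force can simply be folded into the pressure.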
Modified pressure
[ "Chemistry", "Engineering" ]
88
[ "Piping", "Chemical engineering", "Fluid dynamics stubs", "Fluid dynamics" ]
35,487,235
https://en.wikipedia.org/wiki/High-frequency%20impact%20treatment
High-frequency impact treatment, or the HiFIT method, is the treatment of welded steel structures at the weld transition to increase their fatigue strength.

Features
The durability and life of dynamically loaded, welded steel structures are in many cases determined by the welds, in particular the weld transitions. Through selective treatment of the transitions (grinding (abrasive cutting), abrasive blasting, hammering, etc.), the durability of many designs increases significantly. Hammering methods have proven to be particularly effective and were extensively studied and developed within the joint project REFRESH. The HiFIT process (high-frequency impact treatment, also called HFMI, high-frequency mechanical impact) is such a hammering method: it is universally applicable, requires only low-tech equipment, and still offers high reproducibility and the possibility of quality control.

Operation
The HiFIT hammer operates with a hardened pin, tipped with a ball of diameter D = 3 mm, resting on the workpiece. The pin is hammered against the weld toe with adjustable intensity at around 180–300 Hz. Local mechanical deformations occur in the form of a treatment track: the weld toe is deformed plastically. The induced compressive residual stress prevents cracking in the track and crack propagation at the surface.

Evidence
The International Institute of Welding (IIW) published the guideline "Recommendations for the HFMI Treatment" in October 2016. It presents an overview of higher-frequency hammer (HFMI) devices, gives recommendations for the correct application of the method together with quantitative measurements for quality assurance, and provides the basis for assessing HFMI-improved welded joints under all known stress calculation concepts. In numerous experiments at various institutes and universities, an 80 to 100 percent increase in fatigue strength and a 5- to 15-fold increase in weld life have been demonstrated. The most extensive research project, running from 2006 to 2009, was "REFRESH – life extension of existing and new welded steel structures" (P702). In this research project, the HiFIT device was developed and made ready for production. The report is available in book form from FOSTA (Forschungsvereinigung Stahlanwendung e.V.) and contains detailed scientific verifications and validations.

Steps in the HiFIT method
The HiFIT method can be applied to both existing and new steel structures.

Preliminary steps
For a targeted treatment, the weld transitions must be visible and accessible. In existing structures, the transition is typically prepared for surface treatment: the parts must be free of loose rust and old paint, and prior sandblasting may be necessary. The device operates with a compressed air supply of 6–8 bar.

Steps
The HiFIT device is placed manually on the weld transition to be treated and is guided along it during treatment.

Result
Through local deformation, the weld toe is plastically deformed and work-hardened. The depth of the treatment track should be between 0.2 and 0.35 mm. The undercut at the weld toe is no longer recognizable.

Process safety
The treated region is examined by visual inspection. The treatment depth can be checked with a special gauge, and a digital display of the operating pressure allows the user to control the entire process.
Economic importance of HiFIT

Lifetime extension
When applied to existing constructions, the lifetime can be extended considerably. If no macroscopically visible cracks are present, HiFIT is a very suitable remediation tool. With timely remediation of existing structures, there is practically no difference from the life of newly treated welds, giving the potential to use existing constructions far beyond their planned lifetime. The HiFIT method is used very efficiently, for example, on highway bridges of steel hollow box-section design, which can be treated while remaining in service; the costs of such remediation are low compared to conventional methods. In the commercial vehicle industry and other industries, highly stressed welds on existing and new structures are successfully treated with HiFIT to extend their lifetime.

Increasing the transferable load level
In the case of new constructions, and for some existing structures, the load level for treated welds can be increased: for the same lifetime as before, treated welds can transfer 1.6 times the load. For cranes, for example, this has the very positive effect of a larger lifting capacity, so the efficiency of the crane increases with each stroke. (A rough consistency check of these load and lifetime factors is sketched at the end of this article.)

Lightweight construction
If the HiFIT process is taken into account during development, a structure carrying the same load over the same lifetime can be deliberately slimmed down. Extensive experimental investigations of structural details and FEM-supported design methods have shown high efficiency with conventional S235 and S355J2 steels, with fine-grained steels such as S460N and S690QL, and even with higher-strength steels. The achievable material saving already makes HiFIT economically viable in most applications. Considering the additional benefit of the weight advantage, the achievable payload of vehicles, for example, can be increased.

See also
Welding
Ultrasonic impact treatment
Shot peening
Autofrettage, which produces compressive residual stresses in pressure vessels.

Publications
The book REFRESH – life extension of existing and new welded steel structures can be ordered from FOSTA – Research Association for Steel Application, Düsseldorf, Germany.
Stahlbau, September 2009, volume 78, ISSN 0038-9145, A6449.
IIW Recommendations for the HFMI Treatment for Improving the Fatigue Strength of Welded Joints. Authors: Gary B. Marquis, Zuheir Barsoum, https://www.springer.com/de/book/9789811025037
DASt-Recommendation 026 – Weld assessment for fatigue-stressed constructions using high-frequency impact hammer treatments, Stahlbau Verlags- und Service GmbH, https://shop.deutscherstahlbau.de/de/dast-richtlinie-026

References

External links
Website of the Institute for Metal and Lightweight Construction: Master's thesis: Einsatz von Hochfrequenzhämmerverfahren zur Steigerung der Ermüdungsfestigkeit von geschweißten Stahlkonstruktionen
Website of the University of Stuttgart: Research project Anwendung hochfrequenter Hämmerverfahren im Stahlwasserbau

Welding
Pneumatic tools
Hand-held power tools
Power tools
Metallurgical processes
Metalworking
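As a rough consistency check of the factors quoted above (1.6 times the load for equal life; 5- to 15-fold life for equal load), one can use the Basquin relation N · S^m = const. Taking an S-N slope of m = 5 — an assumption of this sketch, the value the IIW recommendation applies to HFMI-improved welds, but not stated in this article — a 1.6-fold increase in transferable stress corresponds to a 1.6^5 ≈ 10.5-fold increase in life, inside the quoted range. A minimal Python sketch:

# Back-of-envelope check using the Basquin relation N * S**m = const.
# ASSUMPTION: S-N slope m = 5 (the slope applied to HFMI-improved welds
# in the IIW recommendation); this article itself does not state a slope.
stress_factor = 1.6               # increase in transferable stress range
m = 5                             # assumed S-N curve slope
life_factor = stress_factor ** m  # life increase at the original stress level
print(f"life factor ~ {life_factor:.1f}x")  # ~10.5x, within the quoted 5-15x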
High-frequency impact treatment
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,339
[ "Power tools", "Welding", "Physical quantities", "Metallurgy", "Metallurgical processes", "Power (physics)", "Mechanical engineering" ]