Integrated enterprise modeling (IEM) is an enterprise modeling method used for capturing and reengineering processes in manufacturing enterprises as well as in the public sector and among service providers. In integrated enterprise modeling, different aspects such as functions and data are described in one model. Furthermore, the method supports analyses of business processes independently of the existing organizational structure. Integrated enterprise modeling was developed at the Fraunhofer Institute for Production Systems and Design Technology (German: IPK) in Berlin, Germany. [ 1 ] The IEM method uses an object-oriented approach and adapts it to the description of an enterprise. The core of the method is an application-oriented division of all elements of an enterprise into the generic object classes "product", "resource" and "order". The object class "product" represents all objects whose production and sale are the aim of the enterprise under consideration, as well as all objects that flow into the end product. Raw materials, intermediate products, components and end products, as well as services and the data describing them, are included. The object class "order" describes all types of commissioning in the enterprise. The objects of the class "order" represent the information that is relevant from the point of view of planning, control, and supervision of the enterprise processes: what is to be executed, when, on which objects, under whose responsibility and with which resources. The IEM class "resource" contains all the actors required in the enterprise for the execution or support of activities. Among other things, these are employees, business partners, all kinds of documents, as well as information systems and operating supplies. The classes "product", "order", and "resource" can be refined and specified step by step. In this way it is possible to represent both industry-typical and enterprise-specific product, order and resource subclasses. Structures (e.g. parts lists or organisation charts) can be represented as relational features of the classes with the help of is-part-of and consists-of relations between different subclasses. The activities necessary for the production of products and the provision of services can be described as follows: an activity is the purposeful change of objects. The goal orientation of the activities implies an explicit or implicit planning and control. The execution of the activities is carried out by suitable actors. From these considerations, the definitions for the following constructs can be derived: All modeled data of the enterprise under consideration are recorded in the model core of an Integrated Enterprise Modeling (IEM) model in two main views: All relevant objects of an enterprise, their properties and relations are shown in the "information model", which consists of the class trees of the object classes "product", "order" and "resource". The "business process model" represents enterprise processes and their relations to each other; activities are shown in their interaction with the objects. The structuring of the enterprise processes in Integrated Enterprise Modeling (IEM) is achieved through hierarchical subdivision by means of decomposition. Decomposition means the breakdown of a system into subsystems, each of which contains components that belong together logically. Process modeling is thus a partitioning of processes into threads.
Every thread describes a self-contained task. The decomposition of individual processes can be continued until the threads are manageable, i.e. appropriately small. They should, however, not become too fine-grained, because a large number of detailed processes increases the complexity of a business process model. A process modeler therefore has to strike a balance between the complexity of the model and the level of detail with which the enterprise processes are described. A model depth of at most three to four decomposition levels (model levels) is generally recommended. On each model level, business process flows are represented with the aid of graphical combination elements. There are five basic types of combinations between activities: The modeling procedure for representing business processes in IEM covers the following steps: The system delimitation is the basis of efficient modeling. Starting from a problem statement, the part of the real system to be represented is selected and interfaces to its environment are defined. In addition, the level of detail of the model is determined, i.e. the depth of the hierarchical decomposition relations in the "business process model" view. The delimited real system is then transferred into an abstract model with the help of the IEM method, i.e. the two main views "information model" and "business process model" are constructed. The "information model" is created by specifying the object classes to be modeled for "product", "order" and "resource", together with their class structures as well as descriptive and relational features. The "business process model" is formed by identifying and describing functions and activities and combining them into processes. As a general rule, the "information model" is constructed first, for which the modeler can draw on available reference class structures. Reference classes that do not correspond to the real system, or that were not found to be relevant during system delimitation, are deleted; missing relevant classes are inserted. Once the object base is fixed, the activities and functions are attached to the objects according to the "generic activity model" and linked into business processes with the help of combination elements. The result is a model that can be analysed and changed as required. It often happens that new relevant object classes are identified during the construction of the "business process model", so that the class trees are completed incrementally. The construction of the two views is therefore an iterative process. Afterwards, weak points and improvement potentials can be identified in the course of the model evaluation. This can lead to model changes whose realization should remove the weak points and exploit the improvement potentials in the real system. The software tool MO²GO (method for object-oriented business process optimization) supports the modeling process based on integrated enterprise modeling (IEM). Various analyses of a given model are available, for example for the planning and implementation of information systems. The MO²GO system is easily extensible and enables a fast modeling approach. The currently used MO²GO system consists of the following components: The IEM business process models contain a great deal of information that can not only be used by system analysts but can also be helpful to employees in their daily work.
To make this model information available to the staff and to let employees participate in the results of the modeling, a special tool was developed at the Fraunhofer IPK: a web-based process assistant whose contents are generated automatically from the IEM business process model of the enterprise. The process assistant provides all users with the information of the business process model in an HTML-based form via the enterprise intranet. Using it requires no special method or tool knowledge beyond basic computer and Internet experience. The process assistant has been developed so that employees can find answers to their questions quickly and precisely. To turn the business process model into an informative process assistant, certain modeling rules must be followed. This means, for example, that the individual actions must be stored together with their descriptions, that the responsibility of the organisational units must be indicated explicitly, and that the paths to the documents must be entered in the class tree. Meeting these conditions means additional effort during modeling, but once they are met, all employees can "surf" through an informative enterprise documentation on the intranet with the help of the process assistant. Depending on their preferences and prior methodological knowledge, they can choose between a graphical view and a text-based description. The graphical view is provided by the MO²GO Viewer, a viewer tool for MO²GO models. The process assistant and the MO²GO Viewer are connected so that the graphical representation of the process under consideration can be accessed context-sensitively from the process assistant. Users can call up all templates, specifications and documents for the working sequence online, both from the process assistant and from the MO²GO Viewer. The process assistant can therefore be used not only for tracing the modeling results but also in daily business, for training new employees as well as for executing process steps. To improve usability in the daily routine, the process assistant can be flexibly adapted to the needs of the users. This customization can concern both the layout and the main content emphases of the process assistant. Knowledge is used in organisations as a resource to render services for customers. Services are delivered through actions that are described as processes or business processes. Analysing and improving how knowledge is handled presupposes a common understanding of this context. An explicit description of the processes is therefore required, because they represent the context for the respective knowledge contents. Process modeling is a powerful instrument for the design and implementation of process-oriented knowledge management. The method of business process-oriented knowledge management (GPO-KM) developed at the Fraunhofer IPK draws on the "integrated enterprise modeling" (IEM) method. It makes it possible to represent, describe, analyse and design organisational processes. The IEM features few object classes and can be learned easily and quickly. Furthermore, the object orientation of the IEM opens up the possibility of representing knowledge as an object class.
For the knowledge-oriented modeling of business processes according to the IEM method, the relevant knowledge contents have to be specified in terms of knowledge domains and knowledge carriers and represented as resources in the business process model. In further applications, IEM is used to create models across organisations (e.g. companies) to achieve a common understanding between the involved stakeholders and to derive services (create software and define the ASP). In this context the object-oriented basis of IEM has been used to create common semantics across the individual company models and to achieve compliant enterprise models (predefined classes – terminology, model templates, etc.). The reason is that the terminology used within a model has to be understandable independently of the modeling language; see also SDDEM.
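As an illustration of the generic object classes and the consists-of relations described at the beginning of this article, a minimal sketch might look like the following. The class and attribute names are hypothetical and are not part of the IEM specification or the MO²GO tool; it only mirrors the structure described above.

```python
# Minimal illustrative sketch of the IEM generic classes and their
# consists-of / is-part-of relations. Not part of the IEM specification
# or of any IEM tool; names are hypothetical.
from dataclasses import dataclass, field
from typing import List


@dataclass
class EnterpriseObject:
    """Common base for the generic IEM object classes."""
    name: str
    parts: List["EnterpriseObject"] = field(default_factory=list)  # consists-of relation

    def add_part(self, part: "EnterpriseObject") -> None:
        """Record that this object consists of the given sub-object."""
        self.parts.append(part)


class Product(EnterpriseObject):
    """Objects whose production and sale are the aim of the enterprise."""


class Order(EnterpriseObject):
    """Planning, control and supervision information: what, when, on which
    objects, under whose responsibility, with which resources."""


class Resource(EnterpriseObject):
    """Actors needed to execute or support activities: employees, partners,
    documents, information systems, operating supplies."""


if __name__ == "__main__":
    # A tiny, hypothetical parts list expressed via consists-of relations.
    bicycle = Product("bicycle")
    frame = Product("frame")
    wheel = Product("wheel")
    bicycle.add_part(frame)
    bicycle.add_part(wheel)
    assembler = Resource("assembly line worker")
    print([p.name for p in bicycle.parts], assembler.name)
```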
https://en.wikipedia.org/wiki/Integrated_enterprise_modeling
Integrated fluidic circuits (IFC) are a type of integrated circuit utilizing fluidics and the traditional microelectronics found in an integrated circuit. [ 1 ] One company that produces these circuits for use in biology is Standard Biotools. [ 2 ]
https://en.wikipedia.org/wiki/Integrated_fluidic_circuit
An integrated gasification combined cycle ( IGCC ) is a technology using a high pressure gasifier to turn coal and other carbon-based fuels into pressurized synthesis gas. This enables removal of impurities from the fuel prior to generating electricity, reducing emissions of sulfur dioxide, particulates, mercury, and in some cases carbon dioxide. Some of these impurities, such as sulfur, can be turned into re-usable byproducts through the Claus process. With additional process equipment, carbon monoxide can be converted to carbon dioxide via the water-gas shift reaction, enabling it to be sequestered and increasing gasification efficiency. Excess heat from the primary combustion and syngas-fired generation is then passed to a steam cycle, producing additional electricity. This process results in improved thermodynamic efficiency compared to conventional pulverized coal combustion. Coal can be found in abundance in the USA and many other countries and its price has remained relatively constant in recent years. Of the traditional hydrocarbon fuels - oil, coal, and natural gas - coal is used as a feedstock for 40% of global electricity generation. Fossil fuel consumption and its contribution to large-scale CO2 emissions is becoming a pressing issue because of the adverse effects of climate change. In particular, coal emits more CO2 per BTU than oil or natural gas and is responsible for 43% of CO2 emissions from fuel combustion. Thus, the lower emissions that IGCC technology allows through gasification and pre-combustion carbon capture are discussed as a way of addressing the aforementioned concerns. [ 1 ] Below is a schematic flow diagram of an IGCC plant: The gasification process can produce syngas from a wide variety of carbon-containing feedstocks, such as high-sulfur coal, heavy petroleum residues, and biomass. The plant is called integrated because (1) the syngas produced in the gasification section is used as fuel for the gas turbine in the combined cycle and (2) the steam produced by the syngas coolers in the gasification section is used by the steam turbine in the combined cycle. In this example the syngas produced is used as fuel in a gas turbine which produces electrical power. In a normal combined cycle, so-called "waste heat" from the gas turbine exhaust is used in a Heat Recovery Steam Generator (HRSG) to make steam for the steam turbine cycle. An IGCC plant improves the overall process efficiency by adding the higher-temperature steam produced by the gasification process to the steam turbine cycle. This steam is then used in steam turbines to produce additional electrical power. IGCC plants are advantageous in comparison to conventional coal power plants due to their high thermal efficiency, low non-carbon greenhouse gas emissions, and capability to process low-grade coal. The disadvantages include higher capital and maintenance costs, and the amount of CO2 released without pre-combustion capture. [ 2 ] A major drawback of using coal as a fuel source is the emission of carbon dioxide and pollutants, including sulfur dioxide, nitrogen oxide, mercury, and particulates. Almost all coal-fired power plants use pulverized coal combustion, which grinds the coal to increase the surface area, burns it to make steam, and runs the steam through a turbine to generate electricity. Pulverized coal plants can only capture carbon dioxide after combustion, when it is diluted and harder to separate.
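For reference, the water-gas shift reaction mentioned above is what converts the carbon monoxide in the syngas into the concentrated CO2 stream that can be captured before combustion. The reaction and its heat of reaction are standard textbook chemistry, not figures taken from this article:

CO + H2O ⇌ CO2 + H2,  ΔH ≈ −41 kJ/mol (mildly exothermic)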
In comparison, gasification in IGCC allows for separation and capture of the concentrated and pressurized carbon dioxide before combustion. Syngas cleanup includes filters to remove bulk particulates, scrubbing to remove fine particulates, and solid adsorbents for mercury removal. Additionally, hydrogen gas is used as fuel, which produces no pollutants when combusted. [ 4 ] IGCC also consumes less water than traditional pulverized coal plants. In a pulverized coal plant, coal is burned to produce steam, which is then used to create electricity using a steam turbine. The steam exhaust must then be condensed with cooling water, and water is lost by evaporation. In IGCC, water consumption is reduced by combustion in a gas turbine, which uses the generated heat to expand air and drive the turbine. Steam is only used to capture the heat from the combustion turbine exhaust for use in a secondary steam turbine. Currently, the major drawback is the high capital cost compared to other forms of power production. The DOE Clean Coal Demonstration Project [ 5 ] helped construct three IGCC plants: Edwardsport Power Station in Edwardsport, Indiana, Polk Power Station in Tampa, Florida (online 1996), and Pinon Pine in Reno, Nevada. In the Reno demonstration project, researchers found that then-current IGCC technology would not work more than 300 feet (91 m) above sea level. [ 6 ] The DOE report in reference 3, however, makes no mention of any altitude effect, and most of the problems were associated with the solid waste extraction system. The Polk Power Station is currently operating, following resolution of demonstration start-up problems, but the Piñon Pine project encountered significant problems and was abandoned. The US DOE's Clean Coal Power Initiative (CCPI Phase 2) selected the Kemper Project as one of two projects to demonstrate the feasibility of low-emission coal-fired power plants. Mississippi Power began construction on the Kemper Project in Kemper County, Mississippi, in 2010 and was poised to begin operation in 2016, though there have been many delays. [ 7 ] In March, the projected date was further pushed back from early 2016 to August 31, 2016, adding $110 million to the total and putting the project three years behind schedule. The electrical plant is a flagship carbon capture and storage (CCS) project that burns lignite coal and utilizes pre-combustion IGCC technology with a projected 65% emission capture rate. [ 8 ] The first generation of IGCC plants polluted less than contemporary coal-based technology, but also polluted water; for example, the Wabash Gasification Facility, located in Vigo County, Indiana, was out of compliance with its water permit during 1998–2001 [ 9 ] because it emitted arsenic, selenium and cyanide. Wabash operated commercially until 2016, and was being converted to a low-carbon hydrogen and ammonia facility as of 2025. [ 10 ] [ 11 ] IGCC is now touted as capture ready and could potentially be used to capture and store carbon dioxide. [ 12 ] [ 13 ] (See FutureGen .) Poland's Kędzierzyn was to host a Zero-Emission Power & Chemical Plant combining coal gasification technology with carbon capture and storage (CCS); the installation had been planned, but there has been no information about it since 2009. Other IGCC plants operating around the world are the Alexander (formerly Buggenum) plant in the Netherlands, Puertollano in Spain, and JGC in Japan.
The Texas Clean Energy project planned to build a 400 MW IGCC facility that would incorporate carbon capture, utilization and storage (CCUS) technology. The project would have been the first coal power plant in the United States to combine IGCC and 90% carbon capture and storage. The sponsor, Summit Power, filed for bankruptcy in 2017. [ 14 ] There are several advantages and disadvantages when compared to conventional post-combustion carbon capture and its variants. [ 15 ] A key issue in implementing IGCC is its high capital cost, which prevents it from competing with other power plant technologies. Currently, ordinary pulverized coal plants are the lowest-cost power plant option. The advantage of IGCC would come from the ease of retrofitting existing power plants, which could offset the high capital cost. In a 2007 model, IGCC with CCS is the lowest-cost system in all cases. This model compared estimations of levelized cost of electricity, showing IGCC with CCS to cost 71.9 $US2005/MWh, pulverized coal with CCS to cost 88 $US2005/MWh, and natural gas combined cycle with CCS to cost 80.6 $US2005/MWh. The levelized cost of electricity was noticeably sensitive to the price of natural gas and the inclusion of carbon storage and transport costs. [ 16 ] The potential benefit of retrofitting has so far not offset the cost of IGCC with carbon capture technology. A 2013 report by the U.S. Energy Information Administration shows that the overnight cost of IGCC with CCS has increased 19% since 2010. Amongst the three power plant types, pulverized coal with CCS has an overnight capital cost of $5,227 (2012 dollars)/kW, IGCC with CCS has an overnight capital cost of $6,599 (2012 dollars)/kW, and natural gas combined cycle with CCS has an overnight capital cost of $2,095 (2012 dollars)/kW. Pulverized coal and NGCC costs did not change significantly since 2010. The report further relates that the 19% increase in IGCC cost is due to recent information from IGCC projects that have gone over budget and cost more than expected. [ 17 ] Recent testimony in regulatory proceedings shows the cost of IGCC to be twice that predicted by Goodell, from $96 to $104/MWh. [ 18 ] [ 19 ] That is before the addition of carbon capture and sequestration (sequestration has been a mature technology at commercial scale for the past ten years at both Weyburn in Canada, for enhanced oil recovery, and Sleipner in the North Sea); capture at a 90% rate is expected to add approximately $30/MWh. [ 20 ] Wabash was down repeatedly for long stretches due to gasifier problems. Subsequent projects, such as Excelsior's Mesaba Project, have a third gasifier and train built in. The Polk County IGCC has had design problems. First, the project was initially shut down because of corrosion in the slurry pipeline that fed slurried coal from the rail cars into the gasifier; a new coating for the pipe was developed. Second, the thermocouple had to be replaced in less than two years, an indication that the gasifier had problems with a variety of feedstocks, from bituminous to sub-bituminous coal. The gasifier was designed to also handle lower-rank lignites. Third, the gasifier suffered unplanned downtime because of refractory liner problems, which were expensive to repair. The gasifier was originally designed in Italy to be half the size of what was built at Polk. Newer ceramic materials may assist in improving gasifier performance and longevity.
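As a quick sanity check on the cost figures quoted earlier in this section, the comparison can be expressed in a few lines of code. The numbers are taken directly from the cited 2007 model and 2013 EIA report; the script itself is only an illustrative calculation, not part of either source.

```python
# Back-of-the-envelope comparison of the cost figures quoted in the text.
# LCOE values are from the cited 2007 model ($2005/MWh); overnight capital
# costs are from the 2013 EIA report ($2012/kW). Illustrative only.

lcoe_2007_usd_per_mwh = {          # levelized cost of electricity, with CCS
    "IGCC + CCS": 71.9,
    "Pulverized coal + CCS": 88.0,
    "NGCC + CCS": 80.6,
}

overnight_2013_usd_per_kw = {      # overnight capital cost, with CCS
    "IGCC + CCS": 6599,
    "Pulverized coal + CCS": 5227,
    "NGCC + CCS": 2095,
}

cheapest_lcoe = min(lcoe_2007_usd_per_mwh, key=lcoe_2007_usd_per_mwh.get)
print(f"Lowest levelized cost in the 2007 model: {cheapest_lcoe}")

for name, capex in overnight_2013_usd_per_kw.items():
    ratio = capex / overnight_2013_usd_per_kw["NGCC + CCS"]
    print(f"{name}: ${capex}/kW, {ratio:.1f}x the NGCC+CCS overnight cost")
```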
Understanding the operating problems of the current IGCC plants is necessary to improve the design of the IGCC plant of the future. (Polk IGCC Power Plant, https://web.archive.org/web/20151228085513/http://www.clean-energy.us/projects/polk_florida.html ; Keim, K., 2009, IGCC: A Project on Sustainability Management Systems for Plant Re-Design and Re-Image, an unpublished paper from Harvard University.) General Electric is currently designing an IGCC model plant that should introduce greater reliability. GE's model features advanced turbines optimized for the coal syngas. Eastman's industrial gasification plant in Kingsport, TN uses a GE Energy solid-fed gasifier. Eastman, a Fortune 500 company, built the facility in 1983 without any state or federal subsidies and turns a profit. [ 21 ] [ 22 ] There are several refinery-based IGCC plants in Europe that have demonstrated good availability (90-95%) after initial shakedown periods. Several factors help this performance: Another IGCC success story was the 250 MW Buggenum plant in the Netherlands, which was commissioned in 1994, closed in 2013, [ 23 ] and had good availability. This coal-based IGCC plant was originally designed to use up to 30% biomass as a supplemental feedstock. The owner, NUON, was paid an incentive fee by the government to use the biomass. NUON has constructed a 1,311 MW IGCC plant in the Netherlands, comprising three 437 MW CCGT units. The Nuon Magnum IGCC power plant was commissioned in 2011, and was officially opened in June 2013. Mitsubishi Heavy Industries was awarded the contract to construct the power plant. [ 24 ] Following a deal with environmental organizations, NUON was prohibited from using the Magnum plant to burn coal and biomass until 2020. Because of high gas prices in the Netherlands, two of the three units are currently offline, whilst the third unit sees only low usage levels. The relatively low 59% efficiency of the Magnum plant means that more efficient CCGT plants (such as the Hemweg 9 plant) are preferred to provide (backup) power. A new generation of IGCC-based coal-fired power plants has been proposed, although none is yet under construction. Projects are being developed by AEP, Duke Energy, and Southern Company in the US, and in Europe by ZAK/PKE, Centrica (UK), E.ON and RWE (both Germany) and NUON (Netherlands). In Minnesota, the state's Department of Commerce analysis found IGCC to have the highest cost, with an emissions profile not significantly better than pulverized coal. In Delaware, the Delmarva and state consultant analysis had essentially the same results. The high cost of IGCC is the biggest obstacle to its integration in the power market; however, most energy executives recognize that carbon regulation is coming soon. Bills requiring carbon reduction are being proposed again in both the House and the Senate, and with the Democratic majority it seems likely that with the next President there will be a greater push for carbon regulation. The Supreme Court decision requiring the EPA to regulate carbon (Commonwealth of Massachusetts et al. v. Environmental Protection Agency et al.)[20] also speaks to the likelihood of future carbon regulations coming sooner rather than later. With carbon capture, the cost of electricity from an IGCC plant would increase approximately 33%. For a natural gas combined cycle plant, the increase is approximately 46%. For a pulverized coal plant, the increase is approximately 57%.
[ 25 ] This potential for less expensive carbon capture makes IGCC an attractive choice for keeping low-cost coal an available fuel source in a carbon-constrained world. However, the industry needs a lot more experience to reduce the risk premium. IGCC with CCS requires some sort of mandate, a higher carbon market price, or a regulatory framework to properly incentivize the industry. [ 26 ] In Japan, electric power companies, in conjunction with Mitsubishi Heavy Industries, have been operating a 200 t/d IGCC pilot plant since the early 1990s. In September 2007, they started up a 250 MW demonstration plant in Nakoso. It uses air-blown (not oxygen-blown) gasification and runs on dry-feed coal only. It burns PRB coal with an unburned carbon content ratio of <0.1% and no detected leaching of trace elements. It employs not only F-type turbines but G-type as well. Next-generation IGCC plants with CO2 capture technology are expected to have higher thermal efficiency and to hold costs down because of simplified systems compared to conventional IGCC. The main feature is that instead of using oxygen and nitrogen to gasify coal, they use oxygen and CO2. The main advantage is that it is possible to improve the cold gas efficiency and to reduce the unburned carbon (char). The CO2 extracted from the gas turbine exhaust gas is utilized in this system. Using a closed gas turbine system capable of capturing the CO2 by direct compression and liquefaction obviates the need for a separation and capture system. [ 28 ] Pre-combustion CO2 removal is much easier than CO2 removal from flue gas in post-combustion capture due to the high concentration of CO2 after the water-gas shift reaction and the high pressure of the syngas. During pre-combustion in IGCC, the partial pressure of CO2 is nearly 1000 times higher than in post-combustion flue gas. [ 29 ] Due to the high concentration of CO2 pre-combustion, physical solvents, such as Selexol and Rectisol, are preferred over chemical solvents for the removal of CO2. Physical solvents work by absorbing the acid gases without the need for a chemical reaction, as in traditional amine-based solvents. The solvent can then be regenerated, and the CO2 desorbed, by reducing the pressure. The biggest obstacle with physical solvents is the need to cool the syngas before separation and then reheat it afterwards for combustion, consuming energy and decreasing overall plant efficiency. [ 29 ] National and international test codes are used to standardize the procedures and definitions used to test IGCC power plants. Selection of the test code to be used is an agreement between the purchaser and the manufacturer, and has some significance to the design of the plant and associated systems. In the United States, the American Society of Mechanical Engineers published the Performance Test Code for IGCC Power Generation Plants (PTC 47) in 2006, which provides procedures for the determination of quantity and quality of fuel gas by its flow rate, temperature, pressure, composition, heating value, and its content of contaminants. [ 30 ] In 2007, the New York State Attorney General's office demanded full disclosure of "financial risks from greenhouse gases" to the shareholders of electric power companies proposing the development of IGCC coal-fired power plants.
"Any one of the several new or likely regulatory initiatives for CO 2 emissions from power plants - including state carbon controls, EPA's regulations under the Clean Air Act, or the enactment of federal global warming legislation - would add a significant cost to carbon-intensive coal generation"; [ 31 ] U.S. Senator Hillary Clinton from New York has proposed that this full risk disclosure be required of all publicly traded power companies nationwide. [ 32 ] This honest disclosure has begun to reduce investor interest in all types of existing-technology coal-fired power plant development, including IGCC. Senator Harry Reid (Majority Leader of the 2007/2008 U.S. Senate) told the 2007 Clean Energy Summit that he will do everything he can to stop construction of proposed new IGCC coal-fired electric power plants in Nevada. Reid wants Nevada utility companies to invest in solar energy , wind energy and geothermal energy instead of coal technologies. Reid stated that global warming is a reality, and just one proposed coal-fired plant would contribute to it by burning seven million tons of coal a year. The long-term healthcare costs would be far too high, he claimed (no source attributed). "I'm going to do everything I can to stop these plants.", he said. "There is no clean coal technology . There is cleaner coal technology, but there is no clean coal technology." [ 33 ] One of the most efficient ways to treat the H 2 S gas from an IGCC plant is by converting it into sulphuric acid in a wet gas sulphuric acid process WSA process . However, the majority of the H 2 S treating plants utilize the modified Claus process, as the sulphur market infrastructure and the transportation costs of sulphuric acid versus sulphur are in favour of sulphur production.
https://en.wikipedia.org/wiki/Integrated_gasification_combined_cycle
Integrated geography (also referred to as integrative geography, [ 1 ] environmental geography or human–environment geography) is the field where the branches of human geography and physical geography overlap to describe and explain the spatial aspects of interactions between human individuals or societies and their natural environment, [ 2 ] these interactions being called coupled human–environment systems. Integrated geography requires an understanding of the dynamics of physical geography, as well as the ways in which human societies conceptualize the environment (human geography). Thus, to a certain degree, it may be seen as a successor of Physische Anthropogeographie (English: "physical anthropogeography")—a term coined by University of Vienna geographer Albrecht Penck in 1924 [ 3 ] —and geographical cultural or human ecology ( Harlan H. Barrows 1923). Integrated geography in the United States is principally influenced by the schools of Carl O. Sauer (Berkeley), whose perspective was rather historical, and Gilbert F. White (Chicago), who developed a more applied view. The links between human and physical geography were once more apparent than they are today. As human experience of the world is increasingly mediated by technology, the relationships between humans and the environment have often become obscured. Integrated geography therefore represents a critically important set of analytical tools for assessing the impact of human presence on the environment. This is done by measuring the result of human activity on natural landforms and cycles. [ 4 ] Methods by which this information is gained include remote sensing and geographic information systems. [ 5 ] Integrated geography helps us to consider the environment in terms of its relationship to people. With integrated geography we can analyze different social science and humanities perspectives and their use in understanding people–environment processes. [ 6 ] Hence, it is considered the third branch of geography, [ 7 ] the other branches being physical and human geography. [ 8 ]
https://en.wikipedia.org/wiki/Integrated_geography
Integrated information theory ( IIT ) proposes a mathematical model for the consciousness of a system. It comprises a framework ultimately intended to explain why some physical systems (such as human brains) are conscious, [ 1 ] and to be capable of providing a concrete inference about whether any physical system is conscious, to what degree, and what particular experience it has; why they feel the particular way they do in particular states (e.g. why our visual field appears extended when we gaze out at the night sky), [ 2 ] and what it would take for other physical systems to be conscious (Are other animals conscious? Might the whole universe be?). [ 3 ] According to IIT, a system's consciousness (what it is like subjectively) is conjectured to be identical to its causal properties (what it is like objectively). Therefore, it should be possible to account for the conscious experience of a physical system by unfolding its complete causal powers. [ 4 ] IIT was proposed by neuroscientist Giulio Tononi in 2004. [ 5 ] Despite significant interest, IIT remains controversial and has been widely criticized, including that it is unfalsifiable pseudoscience. [ 6 ] David Chalmers has argued that any attempt to explain consciousness in purely physical terms (i.e., to start with the laws of physics as they are currently formulated and derive the necessary and inevitable existence of consciousness) eventually runs into the so-called " hard problem ". Rather than try to start from physical principles and arrive at consciousness, IIT "starts with consciousness" (accepts the existence of our own consciousness as certain) and reasons about the properties that a postulated physical substrate would need to have in order to account for it. The ability to perform this jump from phenomenology to mechanism rests on IIT's assumption that if the formal properties of a conscious experience can be fully accounted for by an underlying physical system, then the properties of the physical system must be constrained by the properties of the experience. The limitations on the physical system for consciousness to exist are unknown and consciousness may exist on a spectrum, as implied by studies involving split-brain patients [ 7 ] and conscious patients with large amounts of brain matter missing. [ 8 ] Specifically, IIT moves from phenomenology to mechanism by attempting to identify the essential properties of conscious experience (dubbed " axioms ") and, from there, the essential properties of conscious physical systems (dubbed "postulates"). The calculation of even a modestly-sized system's Φ^Max is often computationally intractable, [ 9 ] so efforts have been made to develop heuristic or proxy measures of integrated information. For example, Masafumi Oizumi and colleagues have developed both Φ* [ 10 ] and geometric integrated information, Φ^G, [ 11 ] which are practical approximations for integrated information. These are related to proxy measures developed earlier by Anil Seth and Adam Barrett. [ 12 ] However, none of these proxy measures have a mathematically proven relationship to the actual Φ^Max value, which complicates the interpretation of analyses that use them. They can give qualitatively different results even for very small systems. [ 13 ] In 2021, Angus Leung and colleagues published a direct application of IIT's mathematical formalism to neural data.
[ 14 ] To circumvent the computational challenges associated with larger datasets, the authors focused on neuronal population activity in the fly. The study showed that Φ^Max can readily be computed for smaller sets of neural data. Moreover, matching IIT's predictions, Φ^Max was significantly decreased when the animals underwent general anesthesia. [ 14 ] A significant computational challenge in calculating integrated information is finding the minimum information partition of a neural system, which requires iterating through all possible network partitions. To solve this problem, Daniel Toker and Friedrich T. Sommer have shown that the spectral decomposition of the correlation matrix of a system's dynamics is a quick and robust proxy for the minimum information partition. [ 15 ] While the algorithm [ 9 ] [ 16 ] for assessing a system's Φ^Max and conceptual structure is relatively straightforward, its high time complexity makes it computationally intractable for many systems of interest. [ 9 ] Heuristics and approximations can sometimes be used to provide ballpark estimates of a complex system's integrated information, but precise calculations are often impossible. These computational challenges, combined with the already difficult task of reliably and accurately assessing consciousness under experimental conditions, make testing many of the theory's predictions difficult. Despite these challenges, researchers have attempted to use measures of information integration and differentiation to assess levels of consciousness in a variety of subjects. [ 17 ] [ 18 ] For instance, a recent study using a less computationally-intensive proxy for Φ^Max was able to reliably discriminate between varying levels of consciousness in wakeful, sleeping (dreaming vs. non-dreaming), anesthetized, and comatose (vegetative vs. minimally-conscious vs. locked-in) individuals. [ 19 ] IIT also makes several predictions which fit well with existing experimental evidence, and can be used to explain some counterintuitive findings in consciousness research. [ 20 ] For example, IIT can be used to explain why some brain regions, such as the cerebellum, do not appear to contribute to consciousness, despite their size and/or functional importance.
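The partition search described above can be illustrated on a toy system. The sketch below is only a schematic illustration, not the IIT formalism itself: it enumerates the bipartitions of a small set of elements and returns the one that minimizes a caller-supplied, stand-in "integration" score (the minimum information bipartition in this toy sense). The exponential growth in the number of partitions is what makes the exact search intractable for larger systems.

```python
# Toy illustration of searching for a minimum information partition (MIP)
# by brute force over bipartitions. This is NOT the IIT algorithm or a real
# phi measure: `integration_score` is a hypothetical stand-in supplied by
# the caller, and only bipartitions (not all partitions) are considered.
from itertools import combinations
from typing import Callable, FrozenSet, Tuple

Partition = Tuple[FrozenSet[int], FrozenSet[int]]


def bipartitions(elements: FrozenSet[int]):
    """Yield every way of splitting `elements` into two non-empty parts."""
    items = sorted(elements)
    n = len(items)
    for r in range(1, n // 2 + 1):
        for subset in combinations(items, r):
            part_a = frozenset(subset)
            part_b = elements - part_a
            # Avoid yielding mirror-image duplicates when the parts are equal in size.
            if r == n - r and min(part_b) < min(part_a):
                continue
            yield (part_a, part_b)


def minimum_information_bipartition(
    elements: FrozenSet[int],
    integration_score: Callable[[Partition], float],
) -> Tuple[Partition, float]:
    """Return the bipartition with the smallest integration score.

    The number of bipartitions grows exponentially with the number of
    elements, so exact searches are only feasible for small systems.
    """
    best = min(bipartitions(elements), key=integration_score)
    return best, integration_score(best)


if __name__ == "__main__":
    # Hypothetical 4-element system with a made-up score: partitions that
    # split elements 0 and 1 are penalised, mimicking strong coupling.
    system = frozenset({0, 1, 2, 3})

    def toy_score(partition: Partition) -> float:
        a, _ = partition
        return 1.0 if (0 in a) != (1 in a) else 0.1

    mip, score = minimum_information_bipartition(system, toy_score)
    print(mip, score)
```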
[ 24 ] Max Tegmark has tried to address the problem of the computational complexity behind the calculations. According to Tegmark, "the integration measure proposed by IIT is computationally infeasible to evaluate for large systems, growing super-exponentially with the system's information content." [ 25 ] As a result, Φ can only be approximated in general. However, different ways of approximating Φ provide radically different results. [ 26 ] Other works have shown that Φ can be computed in some large mean-field neural network models, although some assumptions of the theory have to be revised to capture phase transitions in these large systems. [ 27 ] [ 28 ] In 2019, the Templeton Foundation announced funding in excess of $6,000,000 to test opposing empirical predictions of IIT and a rival theory ( Global Neuronal Workspace Theory , GNWT). [ 29 ] [ 30 ] The originators of both theories signed off on experimental protocols and data analyses, as well as the exact conditions that would determine whether their championed theory correctly predicted the outcome or not. [ 31 ] [ 32 ] Initial results were revealed in June 2023. [ 33 ] None of GNWT's predictions passed the threshold agreed upon in pre-registration, while two out of three of IIT's predictions passed that threshold. [ 34 ] The final, peer-reviewed results were published in the 30 April 2025 issue of Nature. [ 35 ] In a March 2025 Nature Neuroscience commentary titled "Consciousness or pseudo-consciousness? A clash of two paradigms," proponents of IIT listed 16 peer-reviewed studies as empirical tests of the theory's core claims. [ 36 ] A commentary in the same issue by Alex Gomez-Marin and Anil Seth, titled "A science of consciousness beyond pseudo-science and pseudo-consciousness," argued that, despite current empirical limitations, IIT remains scientifically legitimate. [ 37 ] Influential philosopher John Searle has given a critique of the theory, saying "The theory implies panpsychism" and "The problem with panpsychism is not that it is false; it does not get up to the level of being false. It is strictly speaking meaningless because no clear notion has been given to the claim." [ 38 ] Searle's take has itself been criticized by other philosophers for misunderstanding and misrepresenting a theory that may actually be resonant with his own ideas. [ 39 ] Theoretical computer scientist Scott Aaronson has criticized IIT by demonstrating through its own formulation that an inactive series of logic gates, arranged in the correct way, would not only be conscious but be "unboundedly more conscious than humans are." [ 40 ] Tononi himself agrees with the assessment and argues that according to IIT, an even simpler arrangement of inactive logic gates, if large enough, would also be conscious. However, he further argues that this is a strength of IIT rather than a weakness, because that is exactly the sort of cytoarchitecture followed by large portions of the cerebral cortex, [ 41 ] [ 42 ] especially at the back of the brain, [ 2 ] which is the most likely neuroanatomical correlate of consciousness according to some reviews. [ 43 ] Philosopher Tim Bayne has criticized the axiomatic foundations of the theory. [ 44 ] He concludes that "the so-called 'axioms' that Tononi et al. appeal to fail to qualify as genuine axioms". IIT as a scientific theory of consciousness has been criticized in the scientific literature as only able to be "either false or unscientific" by its own definitions.
[ 45 ] IIT has also been denounced by other members of the consciousness field as requiring "an unscientific leap of faith". [ 46 ] The theory has also been derided for failing to answer the basic questions required of a theory of consciousness. Philosopher Adam Pautz says "As long as proponents of IIT do not address these questions, they have not put a clear theory on the table that can be evaluated as true or false." [ 47 ] Neuroscientist Michael Graziano, proponent of the competing attention schema theory, rejects IIT as pseudoscience. He claims IIT is a "magicalist theory" that has "no chance of scientific success or understanding". [ 48 ] Similarly, IIT has been criticized on the grounds that its claims are "not scientifically established or testable at the moment". [ 49 ] Neuroscientists Björn Merker and David Rudrauf and philosopher Kenneth Williford co-authored a paper criticizing IIT on several grounds. Firstly, because it has not been demonstrated that all systems which combine integration and differentiation in the formal IIT sense are in fact conscious, high levels of integration and differentiation of information may provide necessary conditions for consciousness, but those combinations of attributes do not amount to sufficient conditions for it. Secondly, the measure Φ reflects the efficiency of global information transfer rather than the level of consciousness, and the correlation of Φ with level of consciousness across different states of wakefulness (e.g. awake, dreaming and dreamless sleep, anesthesia, seizures and coma) actually reflects the level of efficient network interactions performed for cortical engagement. Hence Φ reflects network efficiency rather than consciousness, which would be one of the functions served by cortical network efficiency. [ 50 ] A letter published on 15 September 2023 in the preprint repository PsyArXiv and signed by 124 scholars asserted that until IIT is empirically testable, it should be labeled pseudoscience. [ 51 ] A number of researchers defended the theory in response. [ 6 ] Computer scientist Hector Zenil based his criticism of IIT, and of what he considers a similarly unscientific theory, assembly theory (AT), on the lack of correspondence between the methods and the theory in some IIT research papers, and on the accompanying media frenzy. [ 52 ] He criticized the shallowness and misleading nature of the media coverage, including that which appeared in journals such as Nature and Science. He also criticized the testing methods and evidence used by IIT proponents, noting that one test amounted to simply applying LZW compression to measure entropy rather than to indicate consciousness as proponents claimed. An anonymized public survey invited all authors of peer-reviewed papers published between 2013 and 2023 found by a query of Web of Science using "consciousness AND theor*". Of the 60 respondents, 8% "fully" agreed and 20% did "not at all" agree with the letter, with the remainder falling between these poles. [ 53 ] The 10 March 2025 Nature Neuroscience commentary "What Makes a Theory of Consciousness Unscientific?" was signed by many of the same writers as the letter. It asserts that "the core ideas of IIT lack empirical support and are metaphysical, and not scientific" and refers to "the core claims of IIT, which we argue are unscientific".
https://en.wikipedia.org/wiki/Integrated_information_theory
Integrated logistics support [ 1 ] ( ILS ) is an approach within systems engineering intended to lower a product's life cycle cost and reduce the demand for logistics by optimizing the maintenance system and easing product support. Although originally developed for military purposes, it is also widely used in commercial customer service organisations. [ 2 ] In general, ILS plans and directs the identification and development of logistics support and system requirements for military systems, with the goal of creating systems that last longer and require less support, thereby reducing costs and increasing return on investment. ILS therefore addresses these aspects of supportability not only during acquisition, but also throughout the operational life cycle of the system. The impact of ILS is often measured in terms of metrics such as reliability, availability, maintainability and testability (RAMT), and sometimes system safety (RAMS). ILS is the integrated planning and action of a number of disciplines in concert with one another to assure system availability. The planning of each element of ILS is ideally developed in coordination with the systems engineering effort and with the other elements. Tradeoffs may be required between elements in order to acquire a system that is affordable (lowest life cycle cost), operable, supportable, sustainable, transportable, and environmentally sound. In some cases, a deliberate process of logistics support analysis will be used to identify tasks within each logistics support element. The most widely accepted list of ILS activities includes: Decisions are documented in a life cycle sustainment plan (LCSP), a supportability strategy, or (most commonly) an integrated logistics support plan (ILSP). ILS planning activities coincide with development of the system acquisition strategy, and the program will be tailored accordingly. A properly executed ILS strategy will ensure that the requirements for each of the elements of ILS are properly planned, resourced, and implemented. These actions will enable the system to achieve the operational readiness levels required by the warfighter at the time of fielding and throughout the life cycle. [ 3 ] [ 4 ] ILS can also be used for civilian projects, as highlighted by the ASD/AIA ILS Guide. [ 5 ] It is considered common practice within some industries - primarily Defence - for ILS practitioners to take a leave of absence to undertake an ILS sabbatical, furthering their knowledge of the logistics engineering disciplines. ILS sabbaticals are normally taken in developing nations, allowing the practitioner an insight into sustainment practices in an environment of limited materiel resources. ILS is a technique introduced by the US Army to ensure that the supportability of an equipment item is considered during its design and development. The technique was adopted by the UK MoD in 1993 and made compulsory for the procurement of the majority of MOD equipment. The ILS management process facilitates specification, design, development, acquisition, test, fielding, and support of systems. Maintenance planning begins early in the acquisition process with development of the maintenance concept. It is conducted to evolve and establish requirements and tasks to be accomplished for achieving, restoring, and maintaining the operational capability for the life of the system. Maintenance planning also involves level of repair analysis (LORA) as a function of the system acquisition process.
Maintenance planning will: Supply support encompasses all management actions, procedures, and techniques used to determine requirements to: Support and test equipment includes all equipment, mobile and fixed, that is required to perform the support functions, except that equipment which is an integral part of the system. Support equipment categories include: This also encompasses planning and acquisition of logistic support for this equipment. Manpower and personnel involves identification and acquisition of personnel with skills and grades required to operate and maintain a system over its lifetime. Manpower requirements are developed and personnel assignments are made to meet support demands throughout the life cycle of the system. Manpower requirements are based on related ILS elements and other considerations. Human factors engineering (HFE) or behavioral research is frequently applied to ensure a good man-machine interface. Manpower requirements are predicated on accomplishing the logistics support mission in the most efficient and economical way. This element includes requirements during the planning and decision process to optimize numbers, skills, and positions. This area considers: Training and training devices support encompasses the processes, procedures, techniques, training devices, and equipment used to train personnel to operate and support a system. This element defines qualitative and quantitative requirements for the training of operating and support personnel throughout the life cycle of the system. It includes requirements for: Embedded training devices, features, and components are designed and built into a specific system to provide training or assistance in the use of the system. (One example of this is the HELP files of many software programs.) The design, development, delivery, installation, and logistic support of required embedded training features, mockups, simulators, and training aids are also included. Technical data and technical publications consist of scientific or technical information necessary to translate system requirements into discrete engineering and logistic support documentation. Technical data is used in the development of repair manuals, maintenance manuals, user manuals, and other documents that are used to operate or support the system. Technical data includes, but may not be limited to: Computer resources support includes the facilities, hardware, software, documentation, manpower, and personnel needed to operate and support computer systems and the software within those systems. Computer resources include both stand-alone and embedded systems. This element is usually planned, developed, implemented, and monitored by a Computer Resources Working Group (CRWG) or Computer Resources Integrated Product Team (CR-IPT) that documents the approach and tracks progress via a Computer Resources Life-Cycle Management Plan (CRLCMP). Developers will need to ensure that planning actions and strategies contained in the ILSP and CRLCMP are complementary and that computer resources support for the operational software, ATE software, and support software is available where and when needed. This element includes resources and procedures to ensure that all equipment and support items are preserved, packaged, packed, marked, handled, transported, and stored properly for short- and long-term requirements. It includes material-handling equipment and packaging, handling and storage requirements, and pre-positioning of material and parts.
It also includes preservation and packaging level requirements and storage requirements (for example, sensitive, proprietary, and controlled items). This element includes planning and programming the details associated with movement of the system in its shipping configuration to the ultimate destination via transportation modes and networks available and authorized for use. It further encompasses establishment of critical engineering design parameters and constraints (e.g., width, length, height, component and system rating, and weight) that must be considered during system development. Customs requirements, air shipping requirements, rail shipping requirements, container considerations, special movement precautions, mobility, and transportation asset impact of the shipping mode or the contract shipper must be carefully assessed. PHS&T planning must consider: The Facilities logistics element is composed of a variety of planning activities, all of which are directed toward ensuring that all required permanent or semi-permanent operating and support facilities (for instance, training, field and depot maintenance, storage, operational, and testing) are available concurrently with system fielding. Planning must be comprehensive and include the need for new construction as well as modifications to existing facilities. It also includes studies to define and establish impacts on life cycle cost, funding requirements, facility locations and improvements, space requirements, environmental impacts, duration or frequency of use, safety and health standards requirements, and security restrictions. Also included are any utility requirements, for both fixed and mobile facilities, with emphasis on limiting requirements of scarce or unique resources. Design interface is the relationship of logistics-related design parameters of the system to its projected or actual support resource requirements. These design parameters are expressed in operational terms rather than as inherent values and specifically relate to system requirements and support costs of the system. Programs such as "design for testability" and "design for discard" must be considered during system design. The basic requirements that need to be considered as part of design interface include: The references below cover many relevant standards and handbooks related to Integrated logistics support. The ASD/AIA Suite of S-Series ILS specifications
https://en.wikipedia.org/wiki/Integrated_logistics_support
An integrated database system can be used by small and large businesses as a means to incorporate IT in the manufacturing process. It updates, stores and records information, with a view to rapid retrieval. For example, it is capable of performing searches for a particular part that may be present in many different products.
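As an illustration of the kind of cross-product part search described above, a minimal in-memory sketch might look like the following. The data, product names and part numbers are hypothetical; a real system would query an actual database rather than a dictionary.

```python
# Minimal sketch of a cross-product part search in an integrated
# manufacturing database. The in-memory dictionary stands in for a real
# database; product names and part numbers are hypothetical.
from collections import defaultdict

# product -> list of part numbers used in that product (hypothetical data)
bill_of_materials = {
    "router-x1": ["PCB-100", "PSU-12V", "CASE-A"],
    "switch-s8": ["PCB-100", "PSU-12V", "CASE-B"],
    "camera-c2": ["PCB-200", "LENS-5MM", "CASE-A"],
}

# Build an inverted index: part number -> products that contain it.
part_index: dict[str, list[str]] = defaultdict(list)
for product, parts in bill_of_materials.items():
    for part in parts:
        part_index[part].append(product)

# Rapid retrieval: which products use part "PCB-100"?
print(part_index["PCB-100"])  # -> ['router-x1', 'switch-s8']
```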
https://en.wikipedia.org/wiki/Integrated_manufacturing_database
In the United States Department of Defense, the Integrated Master Plan (IMP) and the Integrated Master Schedule (IMS) are important program management tools that provide significant assistance in the planning and scheduling of work efforts in large and complex materiel acquisitions. [ 1 ] The IMP is an event-driven plan that documents the significant accomplishments necessary to complete the work and ties each accomplishment to a key program event. [ 2 ] The IMP is expanded to a time-based IMS to produce a networked and multi-layered schedule showing all detailed tasks required to accomplish the work effort contained in the IMP. The IMS flows directly from the IMP and supplements it with additional levels of detail; both then form the foundations to implement an Earned Value Management System. In civic planning or urban planning, Integrated Master Plan is used at the levels of city [ 4 ] development, [ 5 ] county, [ 6 ] and state or province to refer to a document integrating diverse aspects of a public works project. The primary purpose of the IMP—and the supporting detailed schedules of the IMS—is their use by the U.S. Government and Contractor acquisition team as the day-to-day tools for planning, executing, and tracking program technical, schedule, and cost status, including risk mitigation efforts. [ 7 ] The IMP provides a better structure than either the Work Breakdown Structure (WBS) or Organizational Breakdown Structure (OBS) for measuring actual integrated master schedule (IMS) progress. [ 8 ] The primary objective of the IMP is a single plan that establishes the program or project fundamentals. It provides a hierarchical, event-based plan that contains: Events; Significant accomplishments; Entry and exit criteria; however, it does not include any dates or durations. Using the IMP provides sufficient definition for program progress and completion tracking, as well as providing effective communication of the program/project content and the "What and How" of the program. The IMP is a collection of milestones (called "events") that form the process architecture of the program. This means the sequence of events must always result in a deliverable product or service. While delivering products or services is relatively straightforward in some instances (i.e., list the tasks to be done, arrange them in the proper sequence, and execute to this "plan"), in other cases, problems often arise: (i) the description of "complete" is often missing for intermediate activities; (ii) program partners, integration activities, and subcontractors all have unknown or possibly unknowable impacts on the program; and (iii) as products or services are delivered, the maturity of the program changes (e.g., quality and functionality expectations, as well as other attributes); the maturity provided by defining "complete" serves as an insurance policy against problems encountered later in the program. Often, it is easier to define the IMP by stating what it is not. The IMP is NOT BASED on calendar dates, and therefore it is not schedule oriented; each event is completed when its supporting accomplishments are completed, and this completion is evidenced by the satisfaction of the criteria supporting each of the accomplishments. Furthermore, many of the IMP events are fixed by customer-defined milestones (e.g., Preliminary or Critical Design Review, Production Delivery, etc.)
while intermediate events are defined by the Supplier (e.g., integration and test, software build releases, Test Readiness Review, etc.). The critical IMP attribute is its focus on events, as compared to effort- or task-focused planning. The event focus asks and answers the question "what does done look like?" rather than "what work has been done?" Certainly work must be done to complete a task, but a focus solely on the work hides the more important question of whether we are meeting our commitments. While meeting commitments is critical, it is important to first define the criteria used for judging whether the commitments are being met. This is where Significant Accomplishments (SA) and their Accomplishment Criteria (AC) become important. It is important to meet commitments, but recognizing when a commitment has been met is even more important. The IMP provides Program Traceability by expanding and complying with the program's Statement of Objectives (SOO), Technical Performance Requirements (TPRs), the Contract Work Breakdown Structure (CWBS), and the Contract Statement of Work (CSOW)—all of which are based on the Customer's WBS to form the basis of the IMS and all cost reporting. The IMP implements a measurable and trackable program structure to accomplish integrated product development, integrate the functional program activities, and incorporate functional, lower-level and subcontractor IMPs. The IMP provides a framework for independent evaluation of Program Maturity by allowing insight into the overall effort with a level of detail that is consistent with levied risk and complexity metrics. It uses the methodology of decomposing events into a logical series of accomplishments having measurable criteria to demonstrate the completion and/or quality of accomplishments. A Government customer tasks a Supplier to prepare and implement an IMP that is linked with the IMS and integrated with the EVMS. The IMP lists the contract requirements documents (e.g., Systems Requirements Document and Technical Requirements Document (i.e., the system specification or similar document)) as well as the IMP events corresponding to development and/or production activities required by the contract. The IMP should include significant accomplishments encompassing all steps necessary to satisfy all contract objectives and requirements, manage all significant risks, and facilitate Government insight for each event. Significant accomplishments shall be networked to show their logical relationships and that they flow logically from one to another. The IMP, IMS, and EVMS products will usually include the prime contractor, subcontractor, and major vendor activities and products. [ 9 ] When evaluating a proposed IMS, the user should focus on realistic task durations, predecessor/successor relationships, and identification of critical path tasks with viable risk mitigation and contingency plans. An IMS summarized at too high a level may obscure critical execution elements and contribute to failure of the EVMS to report progress. A high-level IMS may fail to show related risk management approaches being used, which can result in long-duration tasks and artificial linkages masking the true critical path. In general, the IMP is a top-down planning tool and the IMS is the bottom-up execution tool. The IMS is a scheduling tool for management control of program progression, not for cost collection purposes. [ 10 ] An IMS would seek general consistency and a standardized approach to project planning, scheduling and analysis.
It may use guides such as the PASEG Generally Accepted Schedule Principles (GASP) as guidance to improve execution and enable EVMS. [ 11 ] The IMP/IMS are related to the product-based Work Breakdown Structure (WBS) as defined in MIL-STD-881, by giving a second type of view on the effort, for different audiences or to provide a combination which gives better overall understanding. [ 12 ] Linkage between the IMP/IMS and WBS is done by referencing the WBS numbering whenever the PE (Program Event), SA (Significant Accomplishment), or AC (Accomplishment Criteria) involves a deliverable product. The IMP is often called out as a contract data deliverable on United States Department of Defense materiel acquisitions, as well as other U.S. Government procurements. Formats for these deliverables are covered in Data Item Descriptions (DIDs) that define the data content, format, and data usages. Recently, the DoD cancelled the DID (DI-MISC-81183A) that jointly addressed both the IMP and the IMS. [ 13 ] The replacement documents include DI-MGMT-81650 (Integrated Master Schedule), DI-MGMT-81334A (Contract Work Breakdown Structure) and DI-MGMT-81466 (Contract Performance Report). [ 14 ] [ 15 ] [ 16 ] In addition DFARS 252.242–7001 and 252.242–7002 provide guidance for integrating IMP/IMS with Earned Value Management .
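To make the event-driven structure described above more concrete, here is a minimal, hypothetical sketch (not an official DoD format) of the IMP hierarchy in Python: program Events contain Significant Accomplishments, each with Accomplishment Criteria, and an event is complete only when every criterion beneath it is satisfied. No dates or durations appear anywhere; the event and criterion names are invented for illustration.

```python
# Hypothetical sketch of the IMP hierarchy: Event -> Significant
# Accomplishments (SAs) -> Accomplishment Criteria (ACs). Completion is
# judged purely by satisfied criteria, never by calendar dates.
from dataclasses import dataclass, field

@dataclass
class Accomplishment:
    name: str
    criteria: dict[str, bool] = field(default_factory=dict)  # AC -> satisfied?

    def complete(self) -> bool:
        return bool(self.criteria) and all(self.criteria.values())

@dataclass
class Event:
    name: str
    accomplishments: list[Accomplishment] = field(default_factory=list)

    def complete(self) -> bool:
        return all(sa.complete() for sa in self.accomplishments)

pdr = Event("Preliminary Design Review", [
    Accomplishment("Requirements baselined",
                   {"All TPRs allocated": True, "Customer concurrence recorded": True}),
    Accomplishment("Preliminary design documented",
                   {"Design description released": False}),
])
print(pdr.complete())  # False until every accomplishment criterion is met
```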
https://en.wikipedia.org/wiki/Integrated_master_plan
Integrated modification methodology (IMM) is a procedure encompassing an open set of scientific techniques for morphologically analyzing the built environment in a multiscale manner and evaluating its performance in actual states or under specific design scenarios. The methodology is structured around a nonlinear phasing process aiming to deliver a systemic understanding of any given urban settlement, formulate the modification set-ups for improving its performance, and examine the modification strategies to transform that system. The basic assumption in IMM is the recognition of the built environment as a Complex Adaptive System. [ 1 ] IMM has been developed by IMMdesignlab, a research lab based at Politecnico di Milano in the Department of Architecture, Built Environment and Construction Engineering (DABC). IMM began in 2010 as an academic research project at Politecnico di Milano. That research criticized the analytical approach frequently used to study and evaluate the built environment by most of the sustainable development methods. By recognizing the built environment as a Complex Adaptive System (CAS), IMM is oriented towards holistic simulation rather than simplifying the complex mechanisms within cities through reductionism. In 2013, Massimo Tadi established the IMMdesignlab at the Department of Architecture, Built Environment and Construction Engineering (DABC) of the Politecnico di Milano. The purpose of this laboratory is to develop IMM through research and education. In 2015, Integrated Modification Methodology for the Sustainable Built Environment was approved as an academic course in the curriculum of Architectural Engineering, an international master's program at Politecnico di Milano. In its theoretical background, Integrated Modification Methodology treats contemporary urban development as a highly paradoxical context arising from the social and economic significance of cities on the one hand and their arguably negative environmental impacts on the other. Asserting the inevitability of urbanization, IMM declares that the only way for cities to overcome that paradox is to develop in profound integration with ecology. According to IMM, the fundamental prerequisite of ecologically sustainable development is to have a comprehensive systemic understanding of the built environment. IMM suggests that advances in construction techniques, building material quality and transportation technology alone have not solved the complex problems of urban life, simply because such improvements do not necessarily address systemic integration. The core argument of IMM is that the performance of the city is chiefly driven by the complex relationships between its subsystems rather than by the independent qualities of the urban elements. Thus, it aims at portraying the systemic structure of the built environment by introducing a logical framework for modeling the linkage between the city's static and dynamic elements. Integrated Modification Methodology is based on an iterative process involving the following four major phases: [ citation needed ] The first phase, Investigation, is a synthesis-based inquiry into the systemic structure of the urban form. It begins with Horizontal Investigation, in which the area under study is dismantled into its morphology-generator elements, namely Urban Built-ups, Urban Voids, Types of Uses, and Links.
It continues with Vertical Investigation, [ 2 ] a study of the integral relationships between the mentioned elements. The output of Vertical Investigation is a set of quantitative descriptions and qualitative illustrations of certain attributes named Key Categories. In a nutshell, they are types of emergence that show how elements come to self-organize or to synchronize their states into forming a new level of organization. Hence in IMM, Key Categories are the result of an emergence process of interaction between elementary parts (Urban Built-ups, Urban Voids, Types of Uses, and Links) to form a synergy able to add value to the combined organization. Key Categories are the products of the synergy between elementary parts: a new organization that emerges, not simply as an additive result of the properties of the elementary parts. IMM declares that the city's functioning is chiefly driven by the Key Categories; hence, they have the most fundamental role in understanding the architecture of the city as a Complex Adaptive System. The Investigation phase concludes with the Evaluation step, which is essentially an examination of the system's performance by referring to a list of verified indicators associated with ecological sustainability. The same indicators are later used in the CAS retrofitting process necessary for the final evaluation of the system performance, after the transformation design process has occurred. The Formulation phase identifies the most critical Key Category and the most critical urban element within the area, deduced from the Investigation phase. These critical attributes are interpreted as the catalysts of transformation and are used by the designer to set a contextual priority list of Design Ordering Principles. The third phase is the introduction of the modification/design scenarios to the project; it proceeds by examining them with the same procedure as the Investigation phase, repeated until the transformed context is predicted to perform acceptably. The fourth phase, Retrofitting and Optimization, tests the outcomes of the Modification phase; a local optimization using technical strategies (e.g. installing photovoltaic panels, designing green roofs, studying building orientations, etc.) is then initiated. [ 3 ]
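As a rough illustration of the iterative character of these four phases, the sketch below loops investigation, evaluation against sustainability indicators, and modification until a target performance level is predicted. The indicator names, scores, threshold and the "improvement" step are placeholders invented for this example; they are not part of IMM itself.

```python
# Toy loop mimicking the IMM cycle described above: evaluate the investigated
# state against indicators, apply a modification scenario, and re-evaluate
# until the transformed context is predicted to perform acceptably.
def evaluate(indicators: dict[str, float]) -> float:
    """Aggregate indicator scores (each 0..1) into one performance value."""
    return sum(indicators.values()) / len(indicators)

def modify(indicators: dict[str, float]) -> dict[str, float]:
    """Stand-in for a design scenario: nudge the weakest indicator upward."""
    weakest = min(indicators, key=indicators.get)
    improved = dict(indicators)
    improved[weakest] = min(1.0, improved[weakest] + 0.1)
    return improved

state = {"porosity": 0.4, "accessibility": 0.5, "diversity": 0.6}  # investigation output
while evaluate(state) < 0.7:   # evaluation step against a target threshold
    state = modify(state)      # formulation + modification phases
print(state, round(evaluate(state), 2))  # retrofitted state and its score
```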
https://en.wikipedia.org/wiki/Integrated_modification_methodology
Integrated modular avionics (IMA) is an airborne real-time computer network. This network consists of a number of computing modules capable of supporting numerous applications of differing criticality levels. In contrast to traditional federated architectures, the IMA concept proposes an integrated architecture with application software portable across an assembly of common hardware modules. An IMA architecture imposes multiple requirements on the underlying operating system. [ 1 ] It is believed that the IMA concept originated with the avionics design of the fourth-generation jet fighters. It has been in use in fighters such as the F-22, the F-35 and the Dassault Rafale since the beginning of the 1990s. Standardization efforts were ongoing at this time (see ASAAC or STANAG 4626), but no final documents were issued then. [ 2 ] IMA modularity simplifies the development process of avionics software: Communication between the modules can use an internal high-speed computer bus, or can share an external network, such as ARINC 429 or ARINC 664 (part 7). However, much complexity is added to the systems, which thus require novel design and verification approaches, since applications with different criticality levels share hardware and software resources such as CPU and network schedules, memory, inputs and outputs. Partitioning is generally used in order to help segregate mixed-criticality applications and thus ease the verification process. ARINC 650 and ARINC 651 provide general purpose hardware and software standards used in an IMA architecture. However, parts of the API involved in an IMA network have been standardized, such as: RTCA DO-178C and RTCA DO-254 form the basis for flight certification today, while DO-297 gives specific guidance for integrated modular avionics. ARINC 653 contributes by providing a framework that enables each software building block (called a partition) of the overall integrated modular avionics to be tested, validated, and qualified independently (up to a certain measure) by its supplier. [ 3 ] The FAA CAST-32A position paper provides information (not official guidance) for certification of multicore systems, but does not specifically address IMA with multicore. A research paper by VanderLeest and Matthews addresses implementation of IMA principles for multicore. [ 4 ] A number of aircraft types use avionics based on an IMA architecture.
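The partitioning idea can be illustrated with a small, purely conceptual sketch; it is not the ARINC 653 API. In a time-partitioned system each partition owns fixed windows of a repeating major frame, so an application at one criticality level cannot consume processor time reserved for another. The partition names and window lengths below are invented for illustration.

```python
# Conceptual time-partitioning sketch: a fixed cyclic schedule over a major
# frame decides which partition owns the CPU at any instant.
MAJOR_FRAME_MS = 100
SCHEDULE = [                  # (partition name, window length in ms)
    ("flight_control", 40),   # higher criticality
    ("navigation", 30),
    ("cabin_services", 30),   # lower criticality
]
assert sum(ms for _, ms in SCHEDULE) == MAJOR_FRAME_MS

def partition_at(t_ms: float) -> str:
    """Return which partition owns the CPU at time t_ms since power-up."""
    offset = t_ms % MAJOR_FRAME_MS
    for name, length in SCHEDULE:
        if offset < length:
            return name
        offset -= length
    return SCHEDULE[-1][0]  # unreachable when the schedule fills the frame

print(partition_at(0), partition_at(55), partition_at(172))
# flight_control navigation cabin_services
```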
https://en.wikipedia.org/wiki/Integrated_modular_avionics
The integrated nanoliter system is a measuring, separating, and mixing device that is able to measure fluids to the nanoliter, mix different fluids for a specific product, and separate a solution into simpler solutions. [ 1 ] All features of the integrated nanoliter system are specifically designed for controlling very small volumes of liquid (referred to as microfluidic solutions). The integrated nanoliter system's scalability depends on what type of processing method the system is based on (referred to as its technology platform), with each processing method having its advantages and disadvantages. Possible uses for the integrated nanoliter system are in controlling biological fluids (referred to as synthetic biology) and accurately detecting changes in cells for genetic purposes (such as single-cell gene expression analysis), where the smaller scale directly influences the result and accuracy. The integrated nanoliter system consists of microfabricated fluidic channels, heaters, temperature sensors, and fluorescence detectors. The microfabricated fluidic channels (basically very small pipes) act as the main transportation structures for any fluids as well as where reactions occur within the system. For the desired reactions to occur, the temperature needs to be adjusted. Therefore, heaters are attached to some microfabricated fluidic channels. To monitor and maintain the desired temperature, temperature sensors are crucial for successful and desired reactions. In order to accurately track the fluids before and after a reaction, fluorescence detectors are used for detecting the movements of the fluids within the system. For instance, when a specific fluid passes a certain point where it triggers or excites emission of light, the fluorescence detector is able to receive that emission and calculate the time it takes to reach that certain point. [ 1 ] There are three different technology platforms for the integrated nanoliter system's scalability. Therefore, the main processing method of the integrated nanoliter system varies with the type of technology platform it is using. The three technology platforms for scalability are electrokinetic manipulation, vesicle encapsulation, and mechanical valving. [ 2 ] The main processing method for controlling the fluid under this technology platform is capillary electrophoresis, which is an electrokinetic phenomenon. Capillary electrophoresis is an effective method for controlling fluids because the charged particles of the fluid are directed by the controllable electric field within the system. However, a disadvantage of the technique is that the method of controlling the fluid's particles heavily depends on the particles' original charges. Another disadvantage is the possibility of fluid "leaks" within the system. These "leaks" occur through diffusion, which is dependent on the size of the fluid's particles. [ 2 ] The main processing method for controlling the fluid under this technology platform is to confine the fluids of interest in carrier molecules, which are generally droplets of water, vesicles, or micelles. The carrier molecules (with the fluid within them) are controlled by individually directing each carrier molecule within the microfabricated fluidic channels. This method largely solves the problem of fluid "leaks", since confinement of the fluid in a carrier molecule does not depend on the size of the fluid's particles. However, a disadvantage of this technique is that it limits how complex the solution can be when using the system.
[ 2 ] The main processing method for controlling the fluid under this technology platform is the use of small mechanical valves . Mechanical valving is similar to a complex plumbing system because the microfabricated fluidic channels act as the plumbing pipes while the various controllable valves direct the fluid. Mechanical valving is also considered to be the most robust solution to the disadvantages of the electrokinetic manipulation and vesicle encapsulation, since the mechanical valves operate completely independent from the fluid's physical and chemical properties. Because the physical properties that make up the microfabricated fluidic channels and mechanical valves are difficult to process due to the system's extremely small scale, this technique has a disadvantage of creating an integrated nanoliter system with mechanical valving to the nanoliter scale. [ 2 ] A possible use of the integrated nanoliter system is in synthetic biology (controlling biological fluids). Since the integrated nanoliter system is generally made up of many controllable microfabricated fluidic networks , integrated nanoliter systems are an ideal environment for controlling biological fluids. A common process of synthetic biology that uses the integrated nanoliter system is processing complex reactions among biological fluids, which usually involves separating a biological solution into individual pure or simpler reagent solutions then mixing the individual solutions for the desired product . An advantage of using the integrated nanoliter system in synthetic biology includes the extremely small length of the microfluidic networks that result in fast diffusion rates. Another advantage is the fast mixing rates due to the combination of diffusion and advection ( chaotic mixing ). Compared to previous microfluidic systems, another advantage is the smaller necessary amount of reagent solutions for a single operation due to the integrated nanoliter system's microscopic scalability . Smaller necessary amounts of reagent solutions tend to lead to more operations that can be carried out with less delay from gathering or reproducing the necessary amounts of reagent solutions. [ 3 ] Another possible use of the integrated nanoliter system is in single-cell gene expression analysis. One benefit of using the integrated nanoliter system is its capability to detect the changes of a gene expression more accurately than the previous technique of microarray . The nanoliter system's microscopic scalability ( nanoliter to picoliter scale) allows it to analyze the gene expression at the single-cell level (around 1 picoliter ), while the microarray analyzes changes of the gene expression by averaging a large group of cells. Another convenient and important benefit is the integrated nanoliter system's capability of having all the necessary biological fluids in the system before operation by storing each biological fluid in a specific microfabricated fluidic network . The integrated nanoliter system is convenient because the biological fluids are all controlled by a computer compared to how previous systems required a manual loading of every biological fluid. The integrated nanoliter system is also important for the gene expression analysis because the analysis would not be undesirably influenced by contamination due to the "closed" system while in operation. [ 4 ]
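As a rough illustration of the fluorescence-detection timing mentioned above, the sketch below estimates how fast a fluid plug moves between two detection points in a channel. The detector spacing and timestamps are invented values, and a real system would involve calibration and signal processing that are not shown here.

```python
# Illustrative only: estimate plug velocity from two fluorescence detections.
def transit_velocity(upstream_s: float, downstream_s: float,
                     separation_um: float) -> float:
    """Mean plug velocity (um/s) between two detection points."""
    dt = downstream_s - upstream_s
    if dt <= 0:
        raise ValueError("downstream detection must occur after upstream detection")
    return separation_um / dt

# A plug detected 0.84 s apart by detectors 2100 um apart moves ~2500 um/s.
print(round(transit_velocity(12.30, 13.14, 2100.0)))  # 2500
```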
https://en.wikipedia.org/wiki/Integrated_nanoliter_system
In the petroleum industry, Integrated operations (IO) refers to the integration of people, disciplines, organizations, work processes and information and communication technology to make smarter decisions. In short, IO is collaboration with a focus on production. The most striking part of IO has been the use of always-on videoconference rooms between offshore platforms and land-based offices. This includes broadband connections for sharing of data and video surveillance of the platform. This has made it possible to move some personnel onshore and use the existing human resources more efficiently. Instead of having e.g. an expert in geology on duty at every platform, the expert may be stationed on land and be available for consultation for several offshore platforms. It is also possible for a team at an office in a different time zone to consult for the night shift of the platform, so that no land-based workers need to work at night. Splitting the team between land and sea demands new work processes, which together with ICT are the two main focus points for IO. Tools like videoconferencing and 3D visualization also create opportunities for new, more cross-disciplinary cooperation. For instance, a shared 3D visualization may be tailored to each member of the group, so that the geologist gets a visualization of the geological structures while the drilling engineer focuses on visualizing the well. Here, real-time measurements from the well are important, but the downhole bandwidth has previously been very restricted. Improvements in bandwidth, better measurement devices, better aggregation and visualization of this information and improved models that simulate the rock formations and wellbore currently all feed on each other. An important task where all these improvements play together is real-time production optimization. In the process industry in general, the term is used to describe the increased cooperation, independent of location, between operators, maintenance personnel, electricians, production management as well as business management and suppliers to provide a more streamlined plant operation. By deploying IO, the petroleum industry draws on lessons from the process industry. This can be seen in a larger focus on the whole production chain and management ideas imported from the production and process industry. A prominent idea in this regard is real-time optimization of the whole value chain, from long-term management of the oil reservoir, through capacity allocations in pipe networks and calculations of the net present value of the produced oil. Reviews of the application of Integrated Operations can be found in papers presented at the biannual Society of Petroleum Engineers Intelligent Energy conferences. [ 1 ] A focus on the whole production chain is also seen in debates about how to organize people in an IO organisation, with frequent calls for breaking down the information silos in the oil companies. A large oil company is typically organized in functional silos corresponding to disciplines such as drilling, production and reservoir management. This is regarded as inefficient by the IO movement, pointing out that the activities in any well or field by any of the silos will involve or affect all of the others. While some companies focus on their in-house management structure, others also emphasize the integration and coordination of outside suppliers and collaborators in offshore operations.
For instance, it is pointed out that the oil and gas industry is lagging behind other industries in terms of operational intelligence. [ 2 ] Ideas and theories that IO management and work processes build on will be familiar from operations research, knowledge management and continual improvement as well as information systems and business transformation. This is perhaps most evident in the repeated referral to "people, process and technology" [ 3 ] [ 4 ] [ 5 ] in IO discussions. [ 6 ] As bullet points, this mirrors many of the aforementioned fields. Since 2010, major mining companies have become implementers of Integrated Operations, most notably Rio Tinto, BHP Billiton and Codelco. [ 7 ] Common to most companies is that IO leads to cost savings, as fewer people are stationed offshore, and to increased efficiency. Lower costs, more efficient reservoir management and fewer mistakes during well drilling will in turn raise profits and make more oil fields economically viable. IO comes at a time when the oil industry is faced with more "brown fields", also referred to as "tail production", where the cost of extracting the oil will be higher than its market value unless major improvements in technology and work processes are made. It has been estimated that deployment of IO could produce 300 billion NOK of added value to the Norwegian continental shelf alone. [ 8 ] On a longer time-scale, onshore control and monitoring of oil production may become a necessity as new fields in deeper waters are based purely on unmanned sub-sea facilities. Moving jobs onshore has also been touted as a way to keep and make better use of an aging workforce, which is regarded as a challenge by western oil and gas companies. As the average age of the industry workforce is increasing, with many nearing retirement, IO is being leveraged for knowledge sharing and training of the younger workforce. More comfortable onshore jobs together with "high-tech" tools have also been promoted as a way to recruit young workers into an industry that is seen as "unsexy", "low-tech" and difficult to combine with a normal family life. The security aspect of reducing the offshore workforce has been raised. Will on-site experience be lost, and can familiarity with the platform and its processes be attained from an onshore office? The new working environment in any case demands changes to HSE routines. Some of the challenges also include clear role and responsibility definitions and clarifications between onshore and offshore personnel: who in a given situation has the authority to take decisions, the onshore or the offshore staff? The increased integration of the offshore facilities with the onshore office environment and outside collaborators also exposes work-critical ICT infrastructure to the internet and the hazards of everyday ICT. As for the efficiency aspect, some criticize the onshore-offshore collaboration for creating a more bureaucratic working environment. Both the exact terms and the content used to describe IO vary between companies. The oil company Shell has traditionally branded the term Smart Fields, [ 9 ] which was an extension of Smart Wells, a term that only referred to remote-controlled well valves. BP uses Field of the future [ 10 ] [ 11 ] to refer to its innovations in oil production. Chevron has i-field, Honeywell has Digital Suites for Oil and Gas (a set of software and services), and Schlumberger terms it Digital Energy.
[ 12 ] The latter term, understood as referring to oil and gas, is adopted in the title of the digital energy journal . [ 13 ] This term could have several meanings, as GE Digital Energy for instance, do not appear to use it in the IO sense. Other terms include e-Field , i-Field , Digital Oilfield , Intelligent Oilfield , Field of the future and Intelligent Energy . [ 14 ] Integrated operations has been the preferred term by Statoil , the Norwegian Oil Industry Association (OLF), a professional body and employer's association for oil and supplier companies [ 15 ] [ 16 ] and vendors such as ABB . [ 17 ] IO is also the preferred term for Petrobras. [ 18 ] Intelligent Energy is the dominant term in publications revolving around the biannual SPE Intelligent Energy conference, [ 19 ] which has been one of the major conferences for the IO movement, along with the annual IO Science and Practice conference [ 20 ] which obviously supports the IO term.
https://en.wikipedia.org/wiki/Integrated_operations
Integrated pest management (IPM) , also known as integrated pest control (IPC) integrates both chemical and non-chemical practices for economic control of pests . The UN's Food and Agriculture Organization defines IPM as "the careful consideration of all available pest control techniques and subsequent integration of appropriate measures that discourage the development of pest populations and keep pesticides and other interventions to levels that are economically justified and reduce or minimize risks to human health and the environment. IPM emphasizes the growth of a healthy crop with the least possible disruption to agro-ecosystems and encourages natural pest control mechanisms." [ 1 ] Entomologists and ecologists have urged the adoption of IPM pest control since the 1970s. [ 2 ] IPM is a safer pest control framework than reliance on the use of chemical pesticides, mitigating risks such as: insecticide-induced resurgence , pesticide resistance and (especially food) crop residues . [ 3 ] [ 4 ] [ 5 ] [ 6 ] Shortly after World War II, when synthetic insecticides were introduced, entomologists in California developed the concept of "supervised insect control". [ 7 ] Around the same time, entomologists in the US Cotton Belt were advocating a similar approach. Under this scheme, insect control was "supervised" by qualified entomologists and insecticide applications were based on conclusions reached from periodic monitoring of pest and natural-enemy populations. This was viewed as an alternative to calendar-based programs. Supervised control was based on knowledge of the ecology and analysis of projected trends in pest and natural-enemy populations. [ citation needed ] Supervised control formed much of the conceptual basis for the "integrated control" that University of California entomologists articulated in the 1950s. Integrated control sought to identify the best mix of chemical and biological controls for a given insect pest. Chemical insecticides were to be used in the manner least disruptive to biological control. The term "integrated" was thus synonymous with "compatible." Chemical controls were to be applied only after regular monitoring indicated that a pest population had reached a level that required treatment (the economic threshold ) to prevent the population from reaching a level at which economic losses would exceed the cost of the control measures (the economic injury level). [ citation needed ] IPM extended the concept of integrated control to all classes of pests and was expanded to include all tactics. Controls such as pesticides were to be applied as in integrated control, but these now had to be compatible with tactics for all classes of pests. Other tactics, such as host-plant resistance and cultural manipulations, became part of the IPM framework. IPM combined entomologists, plant pathologists , nematologists and weed scientists. In the United States, IPM was formulated into national policy in February 1972 as directed by President Richard Nixon . In 1979, President Jimmy Carter established an interagency IPM Coordinating Committee to ensure development and implementation of IPM practices. [ 8 ] Perry Adkisson and Ray F. Smith received the 1997 World Food Prize for encouraging the use of IPM. [ 9 ] IPM is used in agriculture , horticulture , forestry , human habitations, preventive conservation of cultural property and general pest control, including structural pest management, turf pest management and ornamental pest management. 
IPM practices help to prevent and slow the development of resistance, known as resistance management. [ 10 ] [ 11 ] [ 12 ] An American IPM system is designed around six basic components: [ 13 ] Although originally developed for agricultural pest management, [ 17 ] IPM programmes now encompass diseases, weeds and other pests that interfere with management objectives for sites such as residential and commercial structures, lawn and turf areas, and home and community gardens. Predictive models have proved to be suitable tools supporting the implementation of IPM programmes. [ 18 ] IPM is the selection and [ 18 ] use of pest control actions that will ensure favourable economic, ecological and social consequences [ 19 ] and is applicable to most agricultural, public health and amenity pest management situations. The IPM process starts with monitoring, which includes inspection and identification, followed by the establishment of economic injury levels. The economic injury levels set the economic threshold level. The economic injury level is the pest population level at which crop damage exceeds the cost of treating the pest. [ 20 ] This can also be an action threshold level for determining an unacceptable level that is not tied to economic injury. Action thresholds are more common in structural pest management and economic injury levels in classic agricultural pest management. An example of an action threshold is that one fly in a hospital operating room is not acceptable, but one fly in a pet kennel would be acceptable. Once a threshold has been crossed by the pest population, action steps need to be taken to reduce and control the pest. Integrated pest management employs a variety of actions including cultural controls such as physical barriers, biological controls such as adding and conserving natural predators and enemies of the pest, and finally chemical controls or pesticides. Reliance on knowledge, experience, observation and integration of multiple techniques makes IPM appropriate for organic farming (excluding synthetic pesticides). These may or may not include materials listed by the Organic Materials Review Institute (OMRI). [ 21 ] Although the pesticides and particularly insecticides used in organic farming and organic gardening are generally safer than synthetic pesticides, they are not always safer or more environmentally friendly than synthetic pesticides and can cause harm. [ 22 ] For conventional farms IPM can reduce human and environmental exposure to hazardous chemicals, and potentially lower overall costs. [ citation needed ] Risk assessment usually includes four issues: 1) characterization of biological control agents, 2) health risks, 3) environmental risks and 4) efficacy. [ 23 ] Mistaken identification of a pest may result in ineffective actions. E.g., plant damage due to over-watering could be mistaken for fungal infection, since many fungal and viral infections arise under moist conditions. Monitoring begins immediately, before the pest's activity becomes significant. Monitoring of agricultural pests includes tracking soil/planting media fertility and water quality. Overall plant health and resistance to pests are greatly influenced by pH, alkalinity, dissolved minerals and oxidation-reduction potential. Many diseases are waterborne, spread directly by irrigation water and indirectly by splashing. Once the pest is known, knowledge of its lifecycle provides the optimal intervention points. [ 24 ]
For example, weeds reproducing from last year's seed can be prevented with mulches and pre-emergent herbicide. [ citation needed ] Pest-tolerant crops such as soybeans may not warrant interventions unless the pests are numerous or rapidly increasing. Intervention is warranted if the expected cost of damage by the pest is more than the cost of control. Health hazards may require intervention that is not warranted by economic considerations. [ citation needed ] Specific sites may also have varying requirements. E.g., white clover may be acceptable on the sides of a tee box on a golf course, but unacceptable in the fairway where it could confuse the field of play. [ 25 ] Possible interventions include mechanical/physical, cultural, biological and chemical. Mechanical/physical controls include picking pests off plants, or using netting or other material to exclude pests such as birds from grapes or rodents from structures. Cultural controls include keeping an area free of conducive conditions by removing waste or diseased plants, flooding, sanding, and the use of disease-resistant crop varieties. [ 19 ] Biological controls are numerous. They include conservation or augmentation of natural predators and the sterile insect technique (SIT). [ 26 ] Augmentation, inoculative release and inundative release are different methods of biological control that affect the target pest in different ways. Augmentative control includes the periodic introduction of predators. [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ] With inundative release, predators are collected, mass-reared and periodically released in large numbers into the pest area. [ 32 ] [ 33 ] [ 34 ] This is used for an immediate reduction in host populations, generally for annual crops, but is not suitable for long-run use. [ 35 ] With inoculative release, a limited number of beneficial organisms are introduced at the start of the growing season. This strategy offers long-term control, as the organism's progeny affect pest populations throughout the season, and is common in orchards. [ 35 ] [ 36 ] With seasonal inoculative release, the beneficials are collected, mass-reared and released seasonally to maintain the beneficial population. This is commonly used in greenhouses. [ 36 ] In America and other western countries, inundative releases are predominant, while Asia and eastern Europe more commonly use inoculation and occasional introductions. [ 35 ] The sterile insect technique (SIT) is an area-wide IPM program that introduces sterile male pests into the pest population to trick females into (unsuccessful) breeding encounters, providing a form of birth control and reducing reproduction rates. [ 26 ] The biological controls mentioned above are only appropriate in extreme cases, because the introduction of new species, or the supplementation of naturally occurring species, can have detrimental ecosystem effects. Biological controls can be used to stop invasive species or pests, but they can become an introduction path for new pests. [ 37 ] Chemical controls include horticultural oils or the application of insecticides and herbicides. A green pest management IPM program uses pesticides derived from plants, such as botanicals, or other naturally occurring materials. Pesticides can be classified by their modes of action. Rotating among materials with diverse modes of action minimizes pest resistance. [ 19 ]
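The economic-threshold reasoning described earlier, treat only when the expected cost of pest damage exceeds the cost of control, can be sketched in a few lines. The damage model below (a fixed loss per pest per hectare, capped at the crop value) is a simplifying assumption made for illustration, not an agronomic recommendation.

```python
# Hedged sketch of the intervention decision: is expected damage > control cost?
def intervention_warranted(pests_per_m2: float,
                           loss_per_pest_usd_ha: float,
                           crop_value_usd_ha: float,
                           control_cost_usd_ha: float) -> bool:
    expected_damage = min(pests_per_m2 * loss_per_pest_usd_ha, crop_value_usd_ha)
    return expected_damage > control_cost_usd_ha

# 12 pests/m2 at ~$3/ha each -> $36/ha expected damage vs a $25/ha spray cost.
print(intervention_warranted(12, 3.0, 900.0, 25.0))  # True  -> treat
print(intervention_warranted(6, 3.0, 900.0, 25.0))   # False -> keep monitoring
```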
Evaluation is the process of assessing whether the intervention was effective, whether it produced unacceptable side effects, and whether to continue, revise or abandon the program. [ 38 ] The Green Revolution of the 1960s and '70s introduced sturdier plants that could support the heavier grain loads resulting from intensive fertilizer use. Pesticide imports by 11 Southeast Asian countries grew nearly sevenfold in value between 1990 and 2010, according to FAO statistics, with disastrous results. Rice farmers became accustomed to spraying soon after planting, triggered by signs of the leaf folder moth, which appears early in the growing season. It causes only superficial damage and doesn't reduce yields. In 1986, Indonesia banned 57 pesticides and completely stopped subsidizing their use. Progress was reversed in the 2000s, when growing production capacity, particularly in China, reduced prices. Rice production in Asia more than doubled. But it left farmers believing more is better—whether it's seed, fertilizer, or pesticides. [ 39 ] The brown planthopper, Nilaparvata lugens, the farmers' main target, has become increasingly resistant. Since 2008, outbreaks have devastated rice harvests throughout Asia, but not in the Mekong Delta. Reduced spraying allowed natural predators to neutralize planthoppers in Vietnam. In 2010 and 2011, massive planthopper outbreaks hit 400,000 hectares of Thai rice fields, causing losses of about $64 million. The Thai government is now pushing the "no spray in the first 40 days" approach. [ 39 ] By contrast, early spraying kills frogs, spiders, wasps and dragonflies that prey on the later-arriving and dangerous planthopper, and has produced resistant strains. Planthoppers now require pesticide doses 500 times greater than originally. Overuse indiscriminately kills beneficial insects and decimates bird and amphibian populations. Pesticides are suspected of harming human health and became a common means for rural Asians to commit suicide. [ 39 ] In 2001, 950 Vietnamese farmers tried IPM. In one plot, each farmer grew rice using their usual amounts of seed and fertilizer, applying pesticide as they chose. In a nearby plot, less seed and fertilizer were used and no pesticides were applied for 40 days after planting. Yields from the experimental plots were as good or better and costs were lower, generating 8% to 10% more net income. The experiment led to the "three reductions, three gains" campaign, claiming that cutting the use of seed, fertilizer and pesticide would boost yield, quality and income. The campaign was promoted through posters, leaflets, TV commercials and a 2004 radio soap opera that featured a rice farmer who gradually accepted the changes. It didn't hurt that a 2006 planthopper outbreak hit farmers using insecticides harder than those who didn't. Mekong Delta farmers cut insecticide spraying from five times per crop cycle to zero to one. [ citation needed ] The Plant Protection Center and the International Rice Research Institute (IRRI) have been encouraging farmers to grow flowers, okra, and beans on rice paddy banks, instead of stripping vegetation, as was typical. The plants attract bees and wasps that eat planthopper eggs, while the vegetables diversify farm incomes. [ 39 ] Agriculture companies offer bundles of pesticides with seeds and fertilizer, with incentives for volume purchases. A proposed law in Vietnam requires licensing pesticide dealers and government approval of advertisements to prevent exaggerated claims.
Insecticides that target other pests, such as Scirpophaga incertulas (stem borer), a moth whose larvae feed on rice plants, allegedly yield gains of 21% with proper use. [ 39 ]
https://en.wikipedia.org/wiki/Integrated_pest_management
An integrated product team ( IPT ) is a multidisciplinary group of people who are collectively responsible for delivering a defined product or process. [ 1 ] IPTs are used in complex development programs/projects for review and decision making . The emphasis of the IPT is on involvement of all stakeholders (users, customers, management, developers, contractors) in a collaborative forum. IPTs may be addressed at the program level, but there may also be Oversight IPTs (OIPTs), or Working-level IPTs (WIPTs). [ 2 ] IPTs are created most often as part of structured systems engineering methodologies, focusing attention on understanding the needs and desires of each stakeholder. IPTs were introduced to the U.S. Department of Defense in 1995 as part of "a fundamental change in the way the Department acquires goods and services". [ 3 ]
https://en.wikipedia.org/wiki/Integrated_product_team
Integrated project delivery (IPD) is a construction project delivery method that seeks the efficiency and involvement of all participants (people, systems, business structures and practices) through all phases of design, fabrication, and construction. [ 1 ] IPD combines ideas from integrated practice [ 2 ] and lean construction. The objectives of IPD are to increase productivity, reduce waste (waste being described as resources spent on activities that do not add value to the end product), avoid time overruns, enhance final product quality, and reduce conflicts between owners, architects and contractors during construction. [ 3 ] IPD emphasizes the use of technology to facilitate communication between the parties involved in the construction process. The construction industry has suffered from a productivity decline since the 1960s [ 4 ] [ 5 ] while all other non-farm industries have seen large boosts in productivity. Proponents of integrated project delivery argue that problems in contemporary construction, such as buildings that are behind schedule and over budget, are due to adverse relations between the owner, general contractor, and architect. Using ideas developed by Toyota in their Toyota Production System and advances in computer technology, [ 6 ] the new focus in IPD is the final value created for the owner. In essence, IPD sees all allocation of resources for any activity that does not add value to the end product (the finished building) as wasteful. [ 7 ] In practice, the IPD system is a process where all disciplines in a construction project work as one firm. The primary team members include the architect, key technical consultants, a general contractor and subcontractors. The growing use of building information modeling in the construction industry is allowing for easier sharing of information between project participants using IPD and is considered a tool to increase productivity throughout the construction process. [ 3 ] Unlike the design–build project delivery method, which typically places the contractor in the leading role on a building project, IPD represents a return to the "master builder" concept where the entire building team, including the owner, architect, general contractor, building engineers, fabricators, and subcontractors, works collaboratively throughout the construction process. One common way to further the goals of IPD is through a multi-party agreement among key participants. In a multi-party agreement (MPA), the primary project participants execute a single contract specifying their respective roles, rights, obligations, and liabilities. In effect, the multi-party agreement creates a temporary virtual, and in some instances formal, organization to realize a specific project. Because a single agreement is used, each party understands its role in relationship to the other participants. Compensation structures are often open-book, so each party's interests and contributions are similarly transparent. Multi-party agreements require trust, as compensation is tied to overall project success and individual success depends on the contributions of all team members. [ 8 ] Several common forms of multi-party agreement exist. The adoption of IPD as a standard for collaborative good practice on construction projects presents its own problems. As most construction projects involve disparate stakeholders, traditional IT solutions are not conducive to collaborative working.
Sharing files behind IT firewalls, large email attachment sizes and the inability to view all manner of file types without the native software all make IPD difficult. The need to overcome collaborative IT challenges has been one of the drivers behind the growth of online construction collaboration technology. Since 2000, a new generation of technology companies has evolved, using SaaS to facilitate IPD. This collaboration software streamlines the flow of documentation, communications and workflows, ensuring everyone is working from 'one version of the truth'. Collaboration software allows users from disparate locations to keep all communications, documents and drawings, forms and data, amongst other types of electronic file, in one place. Version control is assured and users are able to view and mark up files online without the need for native software. The technology also enables project confidence and mitigates risk thanks to inbuilt audit trails. A significant criticism of IPD is that the single-minded focus on efficiency is often associated with a lack of concern for employee safety and well-being. This has led to poor safety performance and increased stress levels among construction workers, as they strive to reach higher goals with fewer resources. [ 7 ] Job Order Contracting (JOC) is a form of integrated project delivery that specifically targets repair, renovation, and minor new construction. It has proven to be capable of delivering over 90% of projects on time, on budget, and to the satisfaction of the owner, contractors, and customer alike. [ 9 ]
https://en.wikipedia.org/wiki/Integrated_project_delivery
Integrated pulmonary index (IPI) is a patient pulmonary index which uses information from capnography and pulse oximetry to provide a single value that describes the patient's respiratory status. IPI is used by clinicians to quickly assess the patient's respiratory status to determine the need for additional clinical assessment or intervention. The IPI is a patient index which provides a simple indication, in real time, of the patient's overall ventilatory status as an integer ranging from 1 to 10. IPI integrates four major physiological parameters provided by a patient monitor, using this information along with an algorithm to produce the IPI score. The IPI score is not intended to replace current patient respiratory parameters, but to provide an additional integrated score or index of the patient's ventilation status to the caregiver. The IPI incorporates four patient parameters (end-tidal CO2 and respiratory rate measured by capnography, as well as pulse rate and blood oxygenation SpO2 as measured by pulse oximetry) into a single index value. [ 1 ] The IPI value on the patient monitor indicates the patient's ventilatory status, where a score of 10 is normal, indicating optimal pulmonary status, and a score of 1 or 2 requires immediate intervention. The IPI algorithm was developed based on the data from a group of medical experts (anesthesiologists, nurses, respiratory therapists, and physiologists) who evaluated cases with varying parameter values and assigned an IPI value to each predefined patient status. [ 2 ] A mathematical model was built using patient normal ranges for these parameters and the ratings given to various combinations of the parameters by these professionals. Fuzzy logic, a mathematical method which mimics human logical thinking, was used to develop the IPI model. Clinical validation studies indicate that the IPI value produced by the IPI algorithm accurately reflects the patient's ventilatory status. In studies on both adult and pediatric patients, in which experts' ratings of ventilatory status were collected along with IPI data, the IPI scores were found to be highly correlated with the experts' annotated ratings. [ 3 ] [ 4 ] Studies conducted to validate the index also concluded that the single numeric value of IPI along with the IPI trend may be valuable for promoting early awareness of changes in patient ventilatory status [ 5 ] and in simplifying the monitoring of patients in busy clinical environments. [ 6 ] IPI is a real-time patient value, updated every second and always available to the caregiver. An IPI trend graph also shows IPI scores over the previous hour (or another set time period), indicating if the IPI is remaining steady or trending up or down, thus reflecting changes in pulmonary status over time. For example, a changing IPI score indicates changes in the ventilatory status of the patient, such as the IPI improving after a stimulus is applied. IPI can promote early awareness of changes in a patient's ventilatory status. The caregiver can view the IPI trend, which indicates changes in IPI over time. A quick view of the IPI trend can show whether the IPI has changed over the previous minutes or hours, helping the clinician ascertain if the patient's overall ventilatory status is worsening, remaining steady, or improving. This information can help determine the next steps in patient care. Thus, IPI can simplify the monitoring of patients in clinical environments.
The caregiver can quickly and easily assess a patient's ventilatory status by following one number, the IPI, before checking the four parameters that make up this number. The four parameters continue to be displayed on the monitor screen. A significant change in the IPI is a “red flag” indicator, indicating that the clinician should review other monitored data and assess the patient. In the clinical environment, a quick check of the IPI value and IPI trend is a first indicator of pulmonary status of the patient and may be used to determine if further patient assessment is warranted. IPI can increase patient safety, by indicating the presence of slow-developing patient respiratory issues not easily identified with individual instantaneous data to the caregiver in real time. This enables timely decisions and interventions to reduce patient risk, improve outcomes and increase patient safety. Since normal values for the physiological parameters are different for different age categories, the IPI algorithm differs for different age groups (three pediatric age groups and adult). IPI is not available for neonatal and infant patients (up to the age of 1 year).
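The actual IPI algorithm is a proprietary fuzzy-logic model, so the following is only a toy illustration of the general idea: the four monitored parameters are collapsed into one score on a 1 to 10 scale by penalising departures from typical adult normal ranges. The ranges, penalties and cut-offs below are assumptions invented for this sketch and must not be used clinically.

```python
# Toy index only -- NOT the real IPI fuzzy-logic model. Ranges are assumed.
def toy_ipi(etco2_mmhg: float, resp_rate: float, spo2_pct: float, pulse: float) -> int:
    def penalty(value, low, high, severe_low, severe_high):
        if low <= value <= high:
            return 0          # within the assumed normal range
        if value < severe_low or value > severe_high:
            return 4          # severely abnormal
        return 2              # mildly abnormal

    score = 10
    score -= penalty(etco2_mmhg, 35, 45, 25, 60)   # end-tidal CO2 (mmHg)
    score -= penalty(resp_rate, 10, 20, 6, 30)     # respiratory rate (/min)
    score -= penalty(spo2_pct, 94, 100, 85, 100)   # oxygen saturation (%)
    score -= penalty(pulse, 60, 100, 40, 140)      # pulse rate (/min)
    return max(1, score)

print(toy_ipi(40, 14, 98, 72))  # 10: all parameters within the assumed ranges
print(toy_ipi(55, 7, 90, 48))   # low score would flag the need for assessment
```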
https://en.wikipedia.org/wiki/Integrated_pulmonary_index
An integrated standby instrument system (ISIS) is an electronic aircraft instrument. It is intended to serve as a backup in case of a failure of the standard glass cockpit instrumentation, allowing pilots to continue to receive key flight-related information. Prior to the use of ISIS, this role was performed by individual redundant mechanical instruments. Such systems have commonly been installed in various types of aircraft, ranging from airliners to helicopters and smaller general aviation aircraft. While it is common for new-built aircraft to be outfitted with ISIS, numerous operators have opted to have their fleets retrofitted with such apparatus as well. Typically it combines three instruments: an airspeed indicator, an altimeter and an attitude indicator. An ISIS is designed to combine the functions of separate equivalent mechanical instruments that had previously been included as backup in such cockpits, including the altimeter, airspeed indicator, and attitude indicator. Various aspects of ISIS are defined by its function of being a backup to conventional instrumentation. In accordance with this principle, it is designed to operate with a high level of availability and reliability, as well as being as independent as possible from the aircraft's primary instrumentation and sensors. It is commonplace for an ISIS to work in conjunction with provisions for auxiliary power (typically a battery unit), as well as harnessing embedded sensors for its readings wherever possible. [ 1 ] [ 2 ] When all onboard instrumentation is performing normally, the readings indicated by an ISIS are identical to those of the primary flight display. [ 3 ] Advantages presented by ISIS over traditional systems include improved safety, greater ease of operation, and reduced operating costs. [ 4 ] A number of aircraft have been produced with relatively sophisticated integrated standby systems which may include additional functions. For example, the Rockwell Collins Pro Line 21 flight deck, which is fitted to aircraft such as the Cessna Citation XLS+ business jet, features a standby navigation display and engine gauges. [ 5 ] [ 6 ] Thales Group produce their own ISIS, which is installed on the Airbus A320 narrow-body and Airbus A330 wide-body airliners amongst other aircraft; it allowed a single instrument to replace four standby instruments that had traditionally been used. [ 7 ] [ 8 ] Thales also produced an Integrated Electronic Standby Instrument (IESI) dedicated for use on helicopters; in excess of 6,000 such units have reportedly been sold as of July 2020. [ 9 ] Another such system is manufactured by L3Harris Technologies, intended for both helicopters and general aviation purposes. [ 3 ] [ 10 ] Additional companies specialising in avionics, such as GE Aviation, Smiths Group, and Meggitt, have also marketed ranges of standby instrumentation using both standalone and integrated (ISIS) principles. [ 11 ] [ 12 ] [ 13 ] Several companies have produced patentable innovations related to ISIS, including large aerospace players such as Airbus Group. [ 14 ] In addition to such technology being adopted upon new-build aircraft, several operators have opted to retrofit their existing aircraft fleets with current generation ISIS, such as UPS's Airbus A300 freighters. [ 15 ] Along these lines, Rockwell Collins developed a retrofit package for the Boeing 757 and Boeing 767 that incorporates ISIS.
[ 16 ] During the 2010s, the cost of performing such cockpit display retrofits reportedly dropped substantially. [ 17 ]
https://en.wikipedia.org/wiki/Integrated_standby_instrument_system
The integrated stress response is a cellular stress response conserved in eukaryotic cells that downregulates protein synthesis and upregulates specific genes in response to internal or environmental stresses. [ 1 ] The integrated stress response can be triggered within a cell due to either extrinsic or intrinsic conditions. Extrinsic factors include hypoxia , amino acid deprivation, glucose deprivation, viral infection and presence of oxidants . The main intrinsic factor is endoplasmic reticulum stress due to the accumulation of unfolded proteins . It has also been observed that the integrated stress response may trigger due to oncogene activation. The integrated stress response will either cause the expression of genes that fix the damage in the cell due to the stressful conditions, or it will cause a cascade of events leading to apoptosis , which occurs when the cell cannot be brought back into homeostasis . [ 1 ] Stress signals can cause protein kinases , known as EIF-2 kinases , to phosphorylate the α subunit of a protein complex called translation initiation factor 2 (eIF2), resulting in the gene ATF4 being turned on, which will further affect gene expression. [ 1 ] eIF2 consists of three subunits: eIF2α , eIF2β and eIF2γ . eIF2α contains two binding sites, one for phosphorylation and one for RNA binding. [ 1 ] The kinases work to phosphorylate serine 51 on the α subunit, which is a reversible action. [ 2 ] In a cell experiencing normal conditions, eIF2 aids in the initiation of mRNA translation and recognizing the AUG start codon. [ 1 ] However, once eIF2α is phosphorylated, the complex’s activity reduces, causing reduction in translation initiation and protein synthesis, while promoting expression of the ATF4 gene. [ 2 ] There are four known mammalian protein kinases that phosphorylate eIF2α, including PKR-like ER kinase (PERK, EIF2AK3), heme-regulated eIF2α kinase (HRI, EIF2AK1), general control non-depressible 2 (GCN2, EIF2AK4) and double stranded RNA dependent protein kinase (PKR, EIF2AK2). [ 1 ] [ 3 ] PERK (encoded in humans by the gene EIF2AK3 ) responds mainly to endoplasmic reticulum stress and has two modes of activation. [ 1 ] [ 2 ] This kinase has a unique luminal domain that plays a role in activation. The classical model of activation states that the luminal domain is normally bound to 78-kDa glucose-regulated protein ( GRP78 ). Once there is a buildup of unfolded proteins, GRP78 dissociates from the luminal domain. This causes PERK to dimerize, leading to autophosphorylation and activation. The activated PERK kinase will then phosphorylate eIF2α, causing a cascade of events. Thus, the activation of this kinase is dependent on the aggregation of unfolded proteins in the endoplasmic reticulum. PERK has also been observed to activate in response to activity of the proto-oncogene MYC . This activation causes ATF4 expression, resulting in tumorigenesis and cellular transformation . [ 1 ] HRI (encoded in humans by the gene EIF2AK1 ) also dimerizes in order to autophosphorylate and activate. This activation is dependent on the presence of heme . HRI has two domains that heme may bind to, including one on the N-terminus and one on the kinase insertion domain. The presence of heme causes a disulfide bond to form between the monomers of HRI, resulting in the structure of an inactive dimer. However, when heme is absent, HRI monomers form an active dimer through non-covalent interactions. Therefore, the activation of this kinase is dependent on heme deficiency. 
HRI activation can also occur due to other stressors such as heat shock, osmotic stress and proteasome inhibition. Activation of HRI in response to these stressors does not depend on heme, but rather relies on the help of two heat shock proteins ( HSP90 and HSP70 ). HRI is mainly found in the precursors of red blood cells, and has been observed to increase during erythropoiesis . [ 1 ] GCN2 (encoded in humans by the gene EIF2AK4 ) is activated as a result of amino acid deprivation. The mechanisms regarding this activation are still being researched; however, one mechanism has been studied in yeast. [ 1 ] It was observed that GCN2 binds to uncharged/deacylated tRNA which causes a conformational change, resulting in dimerization. [ 2 ] Dimerization then causes autophosphorylation and activation. [ 2 ] Other stressors have also been reported to activate GCN2. GCN2 activation was observed in glucose deprived tumor cells, although it was suggested that it was an indirect effect due to cells using amino acids as an alternate energy source. [ 1 ] In mouse embryonic fibroblast cells and human keratinocytes , GCN2 was activated due to UV light exposure. [ 4 ] [ 5 ] The pathways for this activation require further research, although multiple models have been proposed, including crosslinking between GCN2 and tRNA. [ 1 ] PKR (encoded in humans by the gene EIF2AK2 ) activation is mainly dependent on the presence of double-stranded RNA during a viral infection . dsRNA causes PKR to form dimers, resulting in autophosphorylation and activation. [ 1 ] Once activated, PKR will phosphorylate eIF2α which causes a cascade of events that result in viral and host protein synthesis being inhibited. Other stressors that cause the activation of PKR include oxidative stress , endoplasmic reticulum stress, growth factor deprivation and bacterial infection . Caspase activity early on in apoptosis has also been observed to trigger activation of PKR. However, these stressors differ in that they activate PKR without using dsRNA. [ 1 ] When a cell is subjected to stressful conditions, the ATF4 gene is expressed. [ 1 ] The ATF4 transcription factor has the ability to form dimers with many different proteins that influence gene expression and cell fate. ATF4 binds to C/EBP‐ATF response element (CARE) sequences which work together to increase the transcription of stress-responsive genes. However, when undergoing amino acid starvation, the sequences will act as amino acid response elements instead. [ 1 ] ATF4 will work together with other transcription factors, such as CHOP and ATF3 , by forming homodimers or heterodimers, resulting in numerous observed effects. [ 3 ] The proteins that ATF4 interacts with determines the outcome of the cell during the integrated stress response. [ 1 ] For example, ATF4 and ATF3 work to establish homeostasis inside of the cell following stressful conditions. [ 3 ] On the other hand, ATF4 and CHOP work together to induce cell death, as well as regulating amino acid biosynthesis, transport and metabolic processes. The presence of a leucine zipper domain ( bZIP ) allows ATF4 to work together with many other proteins, thus creating specific responses to different types of stressors. When a cell is undergoing the stress of hypoxia, ATF4 will interact with PHD1 and PHD3 to decrease its transcriptional activity. In addition, when a cell is undergoing amino acid starvation or endoplasmic reticulum stress, TRIP3 also interacts with ATF4 to decrease activity. 
[ 1 ] One result of ATF4 and stress-response protein expression is the induction of autophagy . [ 6 ] During this process, the cell forms autophagosomes , or double-membraned vesicles, that allow for transportation of material throughout the cell. [ 6 ] These autophagosomes can carry unneeded organelles and proteins, as well as damaged or harmful components, in an attempt by the cell to maintain homeostasis. [ 6 ] In order to terminate the integrated stress response, dephosphorylation of eIF2α is required. The protein phosphatase 1 complex (PP1) aids in the dephosphorylation of eIF2α. This complex contains a PP1 catalytic subunit as well as two regulatory subunits. The complex is regulated by two proteins: growth arrest and DNA damage‐inducible protein (GADD34), also known as PPP1R15A , and constitutive repressor of eIF2α phosphorylation (CReP), also known as PPP1R15B . CReP acts to keep levels of eIF2α phosphorylation low in cells under normal conditions. GADD34 is produced in response to ATF4 and works to increase dephosphorylation of eIF2α. The dephosphorylation of eIF2α results in the return of normal protein synthesis and cellular function. However, dephosphorylation of eIF2α can also facilitate the production of death-inducing proteins in cases where the cell is so severely damaged that normal functioning cannot be restored. [ 1 ] Mutations that affect the functioning of the integrated stress response may have debilitating effects on cells. For example, cells lacking the ATF4 gene are unable to elicit proper gene expression in response to stressors. This results in cells exhibiting issues with amino acid transport, glutathione biosynthesis and oxidative stress resistance. When a mutation inhibits the functioning of PERK, endogenous peroxides accumulate when the cell experiences endoplasmic reticulum stress. [ 1 ] In mice and humans lacking PERK, destruction of secretory cells undergoing high endoplasmic reticulum stress has been observed. [ 2 ]
https://en.wikipedia.org/wiki/Integrated_stress_response
An integrating ADC is a type of analog-to-digital converter that converts an unknown input voltage into a digital representation through the use of an integrator . In its basic implementation, the dual-slope converter, the unknown input voltage is applied to the input of the integrator and allowed to ramp for a fixed time period (the run-up period). Then a known reference voltage of opposite polarity is applied to the integrator and is allowed to ramp until the integrator output returns to zero (the run-down period). The input voltage is computed as a function of the reference voltage, the constant run-up time period, and the measured run-down time period. The run-down time measurement is usually made in units of the converter's clock, so longer integration times allow for higher resolutions. Likewise, the speed of the converter can be improved by sacrificing resolution. Converters of this type can achieve high resolution, but often do so at the expense of speed. For this reason, these converters are not found in audio or signal processing applications. Their use is typically limited to digital voltmeters and other instruments requiring highly accurate measurements. The basic integrating ADC circuit consists of an integrator, a switch to select between the voltage to be measured and the reference voltage, a timer that determines how long to integrate the unknown and measures how long the reference integration took, a comparator to detect zero crossing, and a controller. Depending on the implementation, a switch may also be present in parallel with the integrator capacitor to allow the integrator to be reset. Inputs to the controller include a clock (used to measure time) and the output of a comparator used to detect when the integrator's output reaches zero. The conversion takes place in two phases: the run-up phase, where the input to the integrator is the voltage to be measured, and the run-down phase, where the input to the integrator is a known reference voltage. During the run-up phase, the switch selects the measured voltage as the input to the integrator. The integrator is allowed to ramp for a fixed period of time to allow a charge to build on the integrator capacitor. During the run-down phase, the switch selects the reference voltage as the input to the integrator. The time that it takes for the integrator's output to return to zero is measured during this phase. In order for the reference voltage to ramp the integrator voltage down, the reference voltage needs to have a polarity opposite to that of the input voltage. In most cases, for positive input voltages, this means that the reference voltage will be negative. To handle both positive and negative input voltages, a positive and negative reference voltage is required. The selection of which reference to use during the run-down phase would be based on the polarity of the integrator output at the end of the run-up phase. 
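To make the two-phase operation concrete, the following is a minimal numerical sketch of a dual-slope conversion. It is not taken from any particular converter: the component values, clock rate and single-polarity handling (positive inputs with a negative reference) are illustrative assumptions, and the integrator and comparator are treated as ideal.

```python
# Minimal dual-slope ADC simulation (illustrative values, ideal components).
# Run-up: integrate the unknown input for a fixed number of clock ticks.
# Run-down: integrate the opposite-polarity reference until the output
# returns to zero, counting the clock ticks taken.

def dual_slope_convert(v_in, v_ref=-5.0, r=10e3, c=100e-9,
                       f_clock=1e6, runup_ticks=1000):
    dt = 1.0 / f_clock              # one clock period
    v_int = 0.0                     # ideal op-amp integrator output

    # Run-up phase: fixed duration, unknown input applied.
    for _ in range(runup_ticks):
        v_int -= v_in * dt / (r * c)

    # Run-down phase: reference applied until the output crosses zero.
    rundown_ticks = 0
    while v_int < 0.0:              # for a positive input the output sits below zero
        v_int -= v_ref * dt / (r * c)
        rundown_ticks += 1

    # The input follows from the tick ratio; R and C cancel out of the result.
    return -v_ref * rundown_ticks / runup_ticks

print(dual_slope_convert(3.217))    # ~3.22, quantised by the clock resolution
```

Doubling the run-up tick count (and hence the run-down count for the same input) halves the quantisation step, which is the resolution-versus-speed trade-off noted above.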
The basic equation for the output of the integrator (assuming a constant input) is: {\displaystyle V_{\text{out}}=V_{\text{initial}}-{\frac {V_{\text{in}}}{RC}}\,t} Assuming that the initial integrator voltage at the start of each conversion is zero and that the integrator voltage at the end of the run-down period will be zero, we have the following two equations that cover the integrator's output during the two phases of the conversion: {\displaystyle V=-{\frac {V_{\text{in}}\,t_{u}}{RC}}\quad {\text{and}}\quad 0=V-{\frac {V_{\text{ref}}\,t_{d}}{RC}}} The two equations can be combined and solved for V i n {\displaystyle V_{in}} , the unknown input voltage: {\displaystyle V_{\text{in}}=-V_{\text{ref}}\,{\frac {t_{d}}{t_{u}}}} From the equation, one of the benefits of the dual-slope integrating ADC becomes apparent: the measurement is independent of the values of the circuit elements (R and C). This does not mean, however, that the values of R and C are unimportant in the design of a dual-slope integrating ADC (as will be explained below). Note that in the graph, the voltage is shown as going up during the run-up phase and down during the run-down phase. In reality, because the integrator uses the op-amp in a negative feedback configuration, applying a positive V in {\displaystyle V_{\text{in}}} will cause the output of the integrator to go down . The terms up and down more accurately refer to the process of adding charge to the integrator capacitor during the run-up phase and removing charge during the run-down phase. The resolution of the dual-slope integrating ADC is determined primarily by the length of the run-down period and by the time measurement resolution (i.e., the frequency of the controller's clock). The required resolution of k {\displaystyle k} bits dictates the minimum length of the run-down period for a full-scale input (e.g. V in = − V ref {\displaystyle V_{\text{in}}=-V_{\text{ref}}} ): {\displaystyle t_{d}={\frac {2^{k}}{f_{\text{clock}}}}} During the measurement of a full-scale input, the slope of the integrator's output will be the same during the run-up and run-down phases. This also implies that the time of the run-up period and run-down period will be equal ( t u = t d {\displaystyle t_{u}=t_{d}} ) and that the total measurement time will be 2 t d {\displaystyle 2t_{d}} . Therefore, the total measurement time for a full-scale input is set by the desired resolution and the frequency of the controller's clock: {\displaystyle t_{\text{meas}}=2t_{d}={\frac {2^{k+1}}{f_{\text{clock}}}}} Typically the run-up time is chosen to be a multiple of the period of the mains frequency , to suppress mains hum. There are limits to the maximum resolution of the dual-slope integrating ADC. It is not possible to increase the resolution of the basic dual-slope ADC to arbitrarily high values by using longer measurement times or faster clocks; resolution is ultimately limited by the usable range of the integrating amplifier and by the accuracy with which the comparator can detect the zero crossing. The basic design of the dual-slope integrating ADC therefore has limitations in linearity, conversion speed and resolution. A number of modifications to the basic design have been made to overcome these to some degree. The run-up phase of the basic dual-slope design integrates the input voltage for a fixed period of time. That is, it allows an unknown amount of charge to build up on the integrator's capacitor. The run-down phase is then used to measure this unknown charge to determine the unknown voltage. For a full-scale input equal to the reference voltage, half of the measurement time is spent in the run-up phase. Reducing the amount of time spent in the run-up phase can reduce the total measurement time. A common implementation uses an input range twice as large as the reference voltage. A simple way to reduce the run-up time is to increase the rate at which charge accumulates on the integrator capacitor by reducing the size of the resistor used on the input. 
This still allows the same total amount of charge accumulation, but it does so over a smaller period of time. Using the same algorithm for the run-down phase results in the following equation for the calculation of the unknown input voltage ( V in {\displaystyle V_{\text{in}}} ): Note that this equation, unlike the equation for the basic dual-slope converter, has a dependence on the values of the integrator resistors. Or, more importantly, it has a dependence on the ratio of the two resistance values. This modification does nothing to improve the resolution of the converter (since it doesn't address either of the resolution limitations noted above). The purpose of the run-up phase is to add a charge proportional to the input voltage to the integrator to be later measured during the run-down phase. One method to improve the resolution of the converter is to artificially increase the range of the integrating amplifier during the run-up phase. One method to increase the integrator capacity is by periodically adding or subtracting known quantities of charge during the run-up phase in order to keep the integrator's output within the range of the integrator amplifier. The total accumulated charge is the charge introduced by the unknown input voltage plus the sum of the known charges that were added or subtracted. The circuit diagram shown to the right is an example of how multi-slope run-up could be implemented. During the run-up the unknown input voltage, V in {\displaystyle V_{\text{in}}} , is always applied to the integrator. Positive and negative reference voltages controlled by the two independent switches add and subtract charge as needed to keep the output of the integrator within its limits. The reference resistors, R p {\displaystyle R_{p}} and R n {\displaystyle R_{n}} are necessarily smaller than R i {\displaystyle R_{i}} to ensure that the references can overcome the charge introduced by the input. A comparator connected to the integrator's output is used to decide which reference voltage should be applied. This can be a relatively simple algorithm: if the integrator's output above the threshold, enable the positive reference (to cause the output to go down); if the integrator's output is below the threshold, enable the negative reference (to cause the output to go up). The controller keeps track of how often each switch is turned on in order to account for the additional charge placed onto (or removed from) the integrator capacitor as a result of the reference voltages. The charge added / subtracted during the multi-slope run-up form the coarse part of the result (e.g. the leading 3 digits). To the right is a graph of sample output from the integrator during such a multi-slope run-up. Each dashed vertical line represents a decision point by the controller where it samples the polarity of the output and chooses to apply either the positive or negative reference voltage to the input. Ideally, the output voltage of the integrator at the end of the run-up period can be represented by the following equation: where t Δ {\displaystyle t_{\Delta }} is the sampling period, N p {\displaystyle N_{p}} is the number of periods in which the positive reference is switched in, N n {\displaystyle N_{n}} is the number of periods in which the negative reference is switched in, and N {\displaystyle N} is the total number of periods in the run-up phase. The resolution obtained during the run-up is given by the number of periods of the run-up algorithm. 
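As a rough illustration of the charge-balancing idea, the sketch below simulates the simple threshold algorithm just described, with the unknown input always connected and one of the two references switched in at each decision point. All component values, the sampling rate and the zero threshold are assumptions chosen for demonstration, the integrator and switches are ideal, and refinements such as switching each reference a constant number of times are not modelled.

```python
# Illustrative sketch of the multi-slope run-up decision loop (ideal components).

def multislope_runup(v_in, v_ref=5.0, r_in=100e3, r_ref=10e3, c=100e-9,
                     f_sample=100e3, periods=1000):
    dt = 1.0 / f_sample
    v_int = 0.0
    n_pos = n_neg = 0                       # bookkeeping of reference usage

    for _ in range(periods):
        if v_int > 0.0:                     # comparator decision at each period
            ref_current = +v_ref / r_ref    # positive reference pulls the output down
            n_pos += 1
        else:
            ref_current = -v_ref / r_ref    # negative reference pushes the output up
            n_neg += 1
        # The unknown input is always applied; the chosen reference adds or
        # removes a known quantity of charge during the same period.
        v_int -= (v_in / r_in + ref_current) * dt / c

    # Coarse estimate from the counters (ideal run-up relation, residue ignored).
    estimate = (n_neg - n_pos) * v_ref * r_in / (r_ref * periods)
    return estimate, v_int                  # v_int is the leftover residue on the integrator

estimate, residue = multislope_runup(2.345)
print(estimate)    # roughly 2.3-2.4 for a 2.345 V input (coarse run-up estimate)
```

Using more decision periods makes the coarse estimate finer, in line with the statement above that the run-up resolution is given by the number of periods of the run-up algorithm.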
The multi-slope run-up comes with multiple advantages: While it is possible to continue the multi-slope run-up indefinitely, it is not possible to increase the resolution of the converter to arbitrarily high levels just by using a longer run-up time. Error is introduced into the multi-slope run-up through the action of the switches controlling the references, cross-coupling between the switches, unintended switch charge injection, mismatches in the references, and timing errors. [ 3 ] Some of this error can be reduced by careful operation of the switches. [ 4 ] [ 5 ] In particular, during the run-up period, each switch should be activated a constant number of times. The algorithm explained above does not do this and just toggles switches as needed to keep the integrator output within the limits. Activating each switch a constant number of times makes the error related to switching approximately constant. Any output offset that is a result of the switching error can be measured and then numerically subtracted from the result. The simple, single-slope run-down is slow. Typically, the run down time is measured in clock ticks, so to get four digit resolution, the rundown time may take as long as 10,000 clock cycles. A multi-slope run-down can speed the measurement up without sacrificing accuracy. By using 4 slope rates that are each a power of ten more gradual than the previous, four digit resolution can be achieved in roughly 40 clock ticks—a huge speed improvement. [ 6 ] The circuit shown to the right is an example of a multi-slope run-down circuit with four run-down slopes with each being ten times more gradual than the previous. The switches control which slope is selected. The switch containing R d / 1000 {\displaystyle R_{d}/1000} selects the steepest slope (i.e., will cause the integrator output to move toward zero the fastest). At the start of the run-down interval, the unknown input is removed from the circuit by opening the switch connected to V i n {\displaystyle V_{in}} and closing the R d / 1000 {\displaystyle R_{d}/1000} switch. Once the integrator's output reaches zero (and the run-down time measured), the R d / 1000 {\displaystyle R_{d}/1000} switch is opened and the next slope is selected by closing the R d / 100 {\displaystyle R_{d}/100} switch. This repeats until the final slope of R d {\displaystyle R_{d}} has reached zero. The combination of the run-down times for each of the slopes determines the value of the unknown input. In essence, each slope adds one digit of resolution to the result. The multi-slope run-down is often used in combination with a multi-slope run-up. The multi-slope run-up allows for a relatively small capacitor at the integrator and thus a relatively steep slope to start with and thus the option to actually use much more gradual slopes. It is possible to use a multi-slope rundown also with a simple run-up (like in the dual-slope ADC), but limited by the already relatively small slope for the initial phase and not much room for much smaller slopes. In the example circuit, the slope resistors differ by a factor of 10. This value, known as the base ( B {\displaystyle B} ), can be any value. As explained below, the choice of the base affects the speed of the converter and determines the number of slopes needed to achieve the desired resolution. The basis of this design is the assumption that there will always be overshoot when trying to find the zero crossing at the end of a run-down interval. 
This will be true due to the periodic sampling of the comparator based on the converter's clock. If we assume that the converter switches from one slope to the next in a single clock cycle (which may or may not be possible), the maximum amount of overshoot for a given slope would be the largest integrator output change in one clock period: To overcome this overshoot, the next slope would require no more than B {\displaystyle B} clock cycles, which helps to place a bound on the total time of the run-down. The time for the first-run down (using the steepest slope) is dependent on the unknown input (i.e., the amount of charge placed on the integrator capacitor during the run-up phase). At most, this will be: where T first {\displaystyle T_{\text{first}}} is the maximum number of clock periods for the first slope, V max {\displaystyle V_{\text{max}}} is the maximum integrator voltage at the start of the run-down phase, and R s 1 {\displaystyle R_{s1}} is the resistor used for the first slope. The remainder of the slopes have a limited duration based on the selected base, so the remaining time of the conversion (in converter clock periods) is: where N {\displaystyle N} is the number of slopes. Converting the measured time intervals during the multi-slope run-down into a measured voltage is similar to the charge-balancing method used in the multi-slope run-up enhancement. Each slope adds or subtracts known amounts of charge to/from the integrator capacitor. The run-up will have added some unknown amount of charge to the integrator. Then, during the run-down, the first slope subtracts a large amount of charge, the second slope adds a smaller amount of charge, etc. with each subsequent slope moving a smaller amount in the opposite direction of the previous slope with the goal of reaching closer and closer to zero. Each slope adds or subtracts a quantity of charge Q slope {\displaystyle Q_{\text{slope}}} proportional to the slope's resistor and the duration of the slope: T slope {\displaystyle T_{\text{slope}}} is necessarily an integer and will ideally be less than or equal to B {\displaystyle B} for the second and subsequent slopes. Using the circuit above as an example, the second slope, R d / 100 {\displaystyle R_{d}/100} , can contribute the following charge, Q s l o p e 2 {\displaystyle Q_{slope2}} , to the integrator: That is, B {\displaystyle B} possible values with the largest equal to the first slope's smallest step, or one (base 10) digit of resolution per slope. Generalizing this, we can represent the number of slopes, N {\displaystyle N} , in terms of the base and the required resolution, M {\displaystyle M} : Substituting this back into the equation representing the run-down time required for the second and subsequent slopes gives us this: Which, when evaluated, shows that the minimum run-down time can be achieved using a base of e . This base may be difficult to use both in terms of complexity in the calculation of the result and of finding an appropriate resistor network, so a base of 2 or 4 would be more common. When using run-up enhancements like the multi-slope run-up, where a portion of the converter's resolution is resolved during the run-up phase, it is possible to eliminate the run-down phase altogether by using a second type of analog-to-digital converter. [ 7 ] At the end of the run-up phase of a multi-slope run-up conversion, there will still be an unknown amount of charge remaining on the integrator's capacitor. 
Instead of using a traditional run-down phase to determine this unknown charge, the unknown voltage can be converted directly by a second converter and combined with the result from the run-up phase to determine the unknown input voltage. Assuming that multi-slope run-up as described above is being used, the unknown input voltage can be related to the multi-slope run-up counters, N p {\displaystyle N_{p}} and N n {\displaystyle N_{n}} , and the measured integrator output voltage, V o u t {\displaystyle V_{out}} using the following equation (derived from the multi-slope run-up output equation): This equation represents the theoretical calculation of the input voltage assuming ideal components. Since the equation depends on nearly all of the circuit's parameters, any variances in reference currents, the integrator capacitor, or other values will introduce errors in the result. A calibration factor is typically included in the C V o u t {\displaystyle CV_{out}} term to account for measured errors (or, as described in the referenced patent, to convert the residue ADC's output into the units of the run-up counters). Instead of being used to eliminate the run-down phase completely, the residue ADC can also be used to make the run-down phase more accurate than would otherwise be possible. [ 8 ] With a traditional run-down phase, the run-down time measurement period ends with the integrator output crossing through zero volts. There is a certain amount of error involved in detecting the zero crossing using a comparator (one of the short-comings of the basic dual-slope design as explained above). By using the residue ADC to rapidly sample the integrator output (synchronized with the converter controller's clock, for example), a voltage reading can be taken both immediately before and immediately after the zero crossing (as measured with a comparator). As the slope of the integrator voltage is constant during the run-down phase, the two voltage measurements can be used as inputs to an interpolation function that more accurately determines the time of the zero-crossing (i.e., with a much higher resolution than the controller's clock alone would allow). By combining some of these enhancements to the basic dual-slope design (namely multi-slope run-up and the residue ADC), it is possible to construct an integrating analog-to-digital converter that is capable of operating continuously without the need for a run-down interval. [ 9 ] Conceptually, the multi-slope run-up algorithm is allowed to operate continuously. To start a conversion, two things happen simultaneously: the residue ADC is used to measure the approximate charge currently on the integrator capacitor and the counters monitoring the multi-slope run-up are reset. At the end of a conversion period, another residue ADC reading is taken and the values of the multi-slope run-up counters are noted. The unknown input is calculated using a similar equation as used for the residue ADC, except that two output voltages are included ( V o u t 1 {\displaystyle V_{out1}} representing the measured integrator voltage at the start of the conversion, and V o u t 2 {\displaystyle V_{out2}} representing the measured integrator voltage at the end of the conversion. Such a continuously-integrating converter is very similar to a delta-sigma analog-to-digital converter . In most variants of the dual-slope integrating converter, the converter's performance is dependent on one or more of the circuit parameters. 
In the case of the basic design, the output of the converter is in terms of the reference voltage. In more advanced designs, there are also dependencies on one or more resistors used in the circuit or on the integrator capacitor being used. In all cases, even using expensive precision components there may be other effects that are not accounted for in the general dual-slope equations (dielectric effect on the capacitor or frequency or temperature dependencies on any of the components). Any of these variations result in error in the output of the converter. In the best case, this is simply gain and/or offset error. In the worst case, nonlinearity or nonmonotonicity could result. Some calibration can be performed internal to the converter (i.e., not requiring any special external input). This type of calibration would be performed every time the converter is turned on, periodically while the converter is running, or only when a special calibration mode is entered. Another type of calibration requires external inputs of known quantities (e.g., voltage standards or precision resistance references) and would typically be performed infrequently (every year for equipment used in normal conditions, more often when being used in metrology applications). Of these types of error, offset error is the simplest to correct (assuming that there is a constant offset over the entire range of the converter). This is often done internal to the converter itself by periodically taking measurements of the ground potential. Ideally, measuring the ground should always result in a zero output. Any non-zero output indicates the offset error in the converter. That is, if the measurement of ground resulted in an output of 0.001 volts, one can assume that all measurements will be offset by the same amount and can subtract 0.001 from all subsequent results. Gain error can similarly be measured and corrected internally (again assuming that there is a constant gain error over the entire output range). The voltage reference (or some voltage derived directly from the reference) can be used as the input to the converter. If the assumption is made that the voltage reference is accurate (to within the tolerances of the converter) or that the voltage reference has been externally calibrated against a voltage standard, any error in the measurement would be a gain error in the converter. If, for example, the measurement of a converter's 5 volt reference resulted in an output of 5.3 volts (after accounting for any offset error), a gain multiplier of 0.94 (5 / 5.3) can be applied to any subsequent measurement results.
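As a small illustration of the internal offset and gain corrections just described, the following sketch builds a correction function from two internal calibration readings. The specific numbers mirror the 0.001 volt ground reading and 5.3 volt reference reading used as examples above; the function and variable names are ours.

```python
# Applying internally measured offset and gain corrections (illustrative values).

def make_corrector(ground_reading, ref_reading, ref_nominal):
    """Build a correction function from two internal calibration measurements."""
    offset = ground_reading                        # measuring ground should read zero
    gain = ref_nominal / (ref_reading - offset)    # e.g. 5.0 / 5.3 ~= 0.94
    def correct(raw):
        return (raw - offset) * gain
    return correct

correct = make_corrector(ground_reading=0.001, ref_reading=5.301, ref_nominal=5.0)
print(correct(2.651))   # a raw reading corrected for both offset and gain error
```

This assumes, as the text does, that the offset and gain errors are constant over the converter's range; nonlinearity or nonmonotonicity cannot be removed this way.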
https://en.wikipedia.org/wiki/Integrating_ADC
Integration Driven Development (IDD) is an incremental approach to systems development where the contents of the increments are determined by the integration plan, rather than the opposite. The increments can be seen as defined system capability changes - "Deltas" (Taxén et al., 2011). The advantages of other incremental development models ( such as RUP and Scrum ), including short design cycles, early testing and the ability to manage late requirement changes, still apply; however, IDD adds pull to the concept and also has the advantage of optimizing the contents of each increment to allow early integration and testing. Pull, in this context, means that information is requested from the user when it is needed (or is planned to be integrated and tested), as opposed to being delivered whenever it happens to be ready. Development planning has to adjust to the optimal order of integration. System implementation is driven by what is going to be integrated and tested. System design, in turn, is driven by the planned implementation, and requirements work is driven by the planned system design steps. By doing so, artifacts are delivered just-in-time, enabling fast feedback. IDD is not used instead of other incremental models, but rather as an enhancement that makes those models more efficient. One obstacle when using IDD is creating the integration plan – the definition of what to develop and integrate at a given time. One approach that has proven successful is to use System Anatomies for the original planning and Integration Anatomies for re-planning and follow-up. Since all planning requires time and resources, IDD may be considered unnecessary for development where the complexity of the system and organization is low (i.e., small teams developing small systems).
https://en.wikipedia.org/wiki/Integration_Driven_Development
An integration appliance is a computer system specifically designed to lower the cost of integrating computer systems. Most integration appliances send or receive electronic messages from other computers that are exchanging electronic documents. Most integration appliances support XML messaging standards such as SOAP and Web services ; they are frequently referred to as XML appliances and perform functions that can be grouped together as XML-Enabled Networking . [ 1 ]
https://en.wikipedia.org/wiki/Integration_appliance
In calculus , and more generally in mathematical analysis , integration by parts or partial integration is a process that finds the integral of a product of functions in terms of the integral of the product of their derivative and antiderivative . It is frequently used to transform the antiderivative of a product of functions into an antiderivative for which a solution can be more easily found. The rule can be thought of as an integral version of the product rule of differentiation ; it is indeed derived using the product rule. The integration by parts formula states: ∫ a b u ( x ) v ′ ( x ) d x = [ u ( x ) v ( x ) ] a b − ∫ a b u ′ ( x ) v ( x ) d x = u ( b ) v ( b ) − u ( a ) v ( a ) − ∫ a b u ′ ( x ) v ( x ) d x . {\displaystyle {\begin{aligned}\int _{a}^{b}u(x)v'(x)\,dx&={\Big [}u(x)v(x){\Big ]}_{a}^{b}-\int _{a}^{b}u'(x)v(x)\,dx\\&=u(b)v(b)-u(a)v(a)-\int _{a}^{b}u'(x)v(x)\,dx.\end{aligned}}} Or, letting u = u ( x ) {\displaystyle u=u(x)} and d u = u ′ ( x ) d x {\displaystyle du=u'(x)\,dx} while v = v ( x ) {\displaystyle v=v(x)} and d v = v ′ ( x ) d x , {\displaystyle dv=v'(x)\,dx,} the formula can be written more compactly: ∫ u d v = u v − ∫ v d u . {\displaystyle \int u\,dv\ =\ uv-\int v\,du.} The former expression is written as a definite integral and the latter is written as an indefinite integral. Applying the appropriate limits to the latter expression should yield the former, but the latter is not necessarily equivalent to the former. Mathematician Brook Taylor discovered integration by parts, first publishing the idea in 1715. [ 1 ] [ 2 ] More general formulations of integration by parts exist for the Riemann–Stieltjes and Lebesgue–Stieltjes integrals . The discrete analogue for sequences is called summation by parts . The theorem can be derived as follows. For two continuously differentiable functions u ( x ) {\displaystyle u(x)} and v ( x ) {\displaystyle v(x)} , the product rule states: ( u ( x ) v ( x ) ) ′ = u ′ ( x ) v ( x ) + u ( x ) v ′ ( x ) . {\displaystyle {\Big (}u(x)v(x){\Big )}'=u'(x)v(x)+u(x)v'(x).} Integrating both sides with respect to x {\displaystyle x} , ∫ ( u ( x ) v ( x ) ) ′ d x = ∫ u ′ ( x ) v ( x ) d x + ∫ u ( x ) v ′ ( x ) d x , {\displaystyle \int {\Big (}u(x)v(x){\Big )}'\,dx=\int u'(x)v(x)\,dx+\int u(x)v'(x)\,dx,} and noting that an indefinite integral is an antiderivative gives u ( x ) v ( x ) = ∫ u ′ ( x ) v ( x ) d x + ∫ u ( x ) v ′ ( x ) d x , {\displaystyle u(x)v(x)=\int u'(x)v(x)\,dx+\int u(x)v'(x)\,dx,} where we neglect writing the constant of integration . This yields the formula for integration by parts : ∫ u ( x ) v ′ ( x ) d x = u ( x ) v ( x ) − ∫ u ′ ( x ) v ( x ) d x , {\displaystyle \int u(x)v'(x)\,dx=u(x)v(x)-\int u'(x)v(x)\,dx,} or in terms of the differentials d u = u ′ ( x ) d x {\displaystyle du=u'(x)\,dx} , d v = v ′ ( x ) d x , {\displaystyle dv=v'(x)\,dx,\quad } ∫ u ( x ) d v = u ( x ) v ( x ) − ∫ v ( x ) d u . {\displaystyle \int u(x)\,dv=u(x)v(x)-\int v(x)\,du.} This is to be understood as an equality of functions with an unspecified constant added to each side. Taking the difference of each side between two values x = a {\displaystyle x=a} and x = b {\displaystyle x=b} and applying the fundamental theorem of calculus gives the definite integral version: ∫ a b u ( x ) v ′ ( x ) d x = u ( b ) v ( b ) − u ( a ) v ( a ) − ∫ a b u ′ ( x ) v ( x ) d x . 
{\displaystyle \int _{a}^{b}u(x)v'(x)\,dx=u(b)v(b)-u(a)v(a)-\int _{a}^{b}u'(x)v(x)\,dx.} The original integral ∫ u v ′ d x {\displaystyle \int uv'\,dx} contains the derivative v' ; to apply the theorem, one must find v , the antiderivative of v' , then evaluate the resulting integral ∫ v u ′ d x . {\displaystyle \int vu'\,dx.} It is not necessary for u {\displaystyle u} and v {\displaystyle v} to be continuously differentiable. Integration by parts works if u {\displaystyle u} is absolutely continuous and the function designated v ′ {\displaystyle v'} is Lebesgue integrable (but not necessarily continuous). [ 3 ] (If v ′ {\displaystyle v'} has a point of discontinuity then its antiderivative v {\displaystyle v} may not have a derivative at that point.) If the interval of integration is not compact , then it is not necessary for u {\displaystyle u} to be absolutely continuous in the whole interval or for v ′ {\displaystyle v'} to be Lebesgue integrable in the interval, as a couple of examples (in which u {\displaystyle u} and v {\displaystyle v} are continuous and continuously differentiable) will show. For instance, if u ( x ) = e x / x 2 , v ′ ( x ) = e − x {\displaystyle u(x)=e^{x}/x^{2},\,v'(x)=e^{-x}} u {\displaystyle u} is not absolutely continuous on the interval [1, ∞) , but nevertheless: ∫ 1 ∞ u ( x ) v ′ ( x ) d x = [ u ( x ) v ( x ) ] 1 ∞ − ∫ 1 ∞ u ′ ( x ) v ( x ) d x {\displaystyle \int _{1}^{\infty }u(x)v'(x)\,dx={\Big [}u(x)v(x){\Big ]}_{1}^{\infty }-\int _{1}^{\infty }u'(x)v(x)\,dx} so long as [ u ( x ) v ( x ) ] 1 ∞ {\displaystyle \left[u(x)v(x)\right]_{1}^{\infty }} is taken to mean the limit of u ( L ) v ( L ) − u ( 1 ) v ( 1 ) {\displaystyle u(L)v(L)-u(1)v(1)} as L → ∞ {\displaystyle L\to \infty } and so long as the two terms on the right-hand side are finite. This is only true if we choose v ( x ) = − e − x . {\displaystyle v(x)=-e^{-x}.} Similarly, if u ( x ) = e − x , v ′ ( x ) = x − 1 sin ⁡ ( x ) {\displaystyle u(x)=e^{-x},\,v'(x)=x^{-1}\sin(x)} v ′ {\displaystyle v'} is not Lebesgue integrable on the interval [1, ∞) , but nevertheless ∫ 1 ∞ u ( x ) v ′ ( x ) d x = [ u ( x ) v ( x ) ] 1 ∞ − ∫ 1 ∞ u ′ ( x ) v ( x ) d x {\displaystyle \int _{1}^{\infty }u(x)v'(x)\,dx={\Big [}u(x)v(x){\Big ]}_{1}^{\infty }-\int _{1}^{\infty }u'(x)v(x)\,dx} with the same interpretation. One can also easily come up with similar examples in which u {\displaystyle u} and v {\displaystyle v} are not continuously differentiable. Further, if f ( x ) {\displaystyle f(x)} is a function of bounded variation on the segment [ a , b ] , {\displaystyle [a,b],} and φ ( x ) {\displaystyle \varphi (x)} is differentiable on [ a , b ] , {\displaystyle [a,b],} then ∫ a b f ( x ) φ ′ ( x ) d x = − ∫ − ∞ ∞ φ ~ ( x ) d ( χ ~ [ a , b ] ( x ) f ~ ( x ) ) , {\displaystyle \int _{a}^{b}f(x)\varphi '(x)\,dx=-\int _{-\infty }^{\infty }{\widetilde {\varphi }}(x)\,d({\widetilde {\chi }}_{[a,b]}(x){\widetilde {f}}(x)),} where d ( χ [ a , b ] ( x ) f ~ ( x ) ) {\displaystyle d(\chi _{[a,b]}(x){\widetilde {f}}(x))} denotes the signed measure corresponding to the function of bounded variation χ [ a , b ] ( x ) f ( x ) {\displaystyle \chi _{[a,b]}(x)f(x)} , and functions f ~ , φ ~ {\displaystyle {\widetilde {f}},{\widetilde {\varphi }}} are extensions of f , φ {\displaystyle f,\varphi } to R , {\displaystyle \mathbb {R} ,} which are respectively of bounded variation and differentiable. 
[ citation needed ] Integrating the product rule for three multiplied functions, u ( x ) {\displaystyle u(x)} , v ( x ) {\displaystyle v(x)} , w ( x ) {\displaystyle w(x)} , gives a similar result: ∫ a b u v d w = [ u v w ] a b − ∫ a b u w d v − ∫ a b v w d u . {\displaystyle \int _{a}^{b}uv\,dw\ =\ {\Big [}uvw{\Big ]}_{a}^{b}-\int _{a}^{b}uw\,dv-\int _{a}^{b}vw\,du.} In general, for n {\displaystyle n} factors ( ∏ i = 1 n u i ( x ) ) ′ = ∑ j = 1 n u j ′ ( x ) ∏ i ≠ j n u i ( x ) , {\displaystyle \left(\prod _{i=1}^{n}u_{i}(x)\right)'\ =\ \sum _{j=1}^{n}u_{j}'(x)\prod _{i\neq j}^{n}u_{i}(x),} which leads to [ ∏ i = 1 n u i ( x ) ] a b = ∑ j = 1 n ∫ a b u j ′ ( x ) ∏ i ≠ j n u i ( x ) . {\displaystyle \left[\prod _{i=1}^{n}u_{i}(x)\right]_{a}^{b}\ =\ \sum _{j=1}^{n}\int _{a}^{b}u_{j}'(x)\prod _{i\neq j}^{n}u_{i}(x).} Consider a parametric curve ( x , y ) = ( f ( t ) , g ( t ) ) {\displaystyle (x,y)=(f(t),g(t))} . Assuming that the curve is locally one-to-one and integrable , we can define x ( y ) = f ( g − 1 ( y ) ) y ( x ) = g ( f − 1 ( x ) ) {\displaystyle {\begin{aligned}x(y)&=f(g^{-1}(y))\\y(x)&=g(f^{-1}(x))\end{aligned}}} The area of the blue region is A 1 = ∫ y 1 y 2 x ( y ) d y {\displaystyle A_{1}=\int _{y_{1}}^{y_{2}}x(y)\,dy} Similarly, the area of the red region is A 2 = ∫ x 1 x 2 y ( x ) d x {\displaystyle A_{2}=\int _{x_{1}}^{x_{2}}y(x)\,dx} The total area A 1 + A 2 is equal to the area of the bigger rectangle, x 2 y 2 , minus the area of the smaller one, x 1 y 1 : ∫ y 1 y 2 x ( y ) d y ⏞ A 1 + ∫ x 1 x 2 y ( x ) d x ⏞ A 2 = x ⋅ y ( x ) | x 1 x 2 = y ⋅ x ( y ) | y 1 y 2 {\displaystyle \overbrace {\int _{y_{1}}^{y_{2}}x(y)\,dy} ^{A_{1}}+\overbrace {\int _{x_{1}}^{x_{2}}y(x)\,dx} ^{A_{2}}\ =\ {\biggl .}x\cdot y(x){\biggl |}_{x_{1}}^{x_{2}}\ =\ {\biggl .}y\cdot x(y){\biggl |}_{y_{1}}^{y_{2}}} Or, in terms of t , ∫ t 1 t 2 x ( t ) d y ( t ) + ∫ t 1 t 2 y ( t ) d x ( t ) = x ( t ) y ( t ) | t 1 t 2 {\displaystyle \int _{t_{1}}^{t_{2}}x(t)\,dy(t)+\int _{t_{1}}^{t_{2}}y(t)\,dx(t)\ =\ {\biggl .}x(t)y(t){\biggl |}_{t_{1}}^{t_{2}}} Or, in terms of indefinite integrals, this can be written as ∫ x d y + ∫ y d x = x y {\displaystyle \int x\,dy+\int y\,dx\ =\ xy} Rearranging: ∫ x d y = x y − ∫ y d x {\displaystyle \int x\,dy\ =\ xy-\int y\,dx} Thus integration by parts may be thought of as deriving the area of the blue region from the area of rectangles and that of the red region. This visualization also explains why integration by parts may help find the integral of an inverse function f −1 ( x ) when the integral of the function f ( x ) is known. Indeed, the functions x ( y ) and y ( x ) are inverses, and the integral ∫ x dy may be calculated as above from knowing the integral ∫ y dx . In particular, this explains use of integration by parts to integrate logarithm and inverse trigonometric functions . In fact, if f {\displaystyle f} is a differentiable one-to-one function on an interval, then integration by parts can be used to derive a formula for the integral of f − 1 {\displaystyle f^{-1}} in terms of the integral of f {\displaystyle f} . This is demonstrated in the article, Integral of inverse functions . Integration by parts is a heuristic rather than a purely mechanical process for solving integrals; given a single function to integrate, the typical strategy is to carefully separate this single function into a product of two functions u ( x ) v ( x ) such that the residual integral from the integration by parts formula is easier to evaluate than the single function. 
The following form is useful in illustrating the best strategy to take: ∫ u v d x = u ∫ v d x − ∫ ( u ′ ∫ v d x ) d x . {\displaystyle \int uv\,dx=u\int v\,dx-\int \left(u'\int v\,dx\right)\,dx.} On the right-hand side, u is differentiated and v is integrated; consequently it is useful to choose u as a function that simplifies when differentiated, or to choose v as a function that simplifies when integrated. As a simple example, consider: ∫ ln ⁡ ( x ) x 2 d x . {\displaystyle \int {\frac {\ln(x)}{x^{2}}}\,dx\,.} Since the derivative of ln( x ) is ⁠ 1 / x ⁠ , one makes (ln( x )) part u ; since the antiderivative of ⁠ 1 / x 2 ⁠ is − ⁠ 1 / x ⁠ , one makes ⁠ 1 / x 2 ⁠ part v . The formula now yields: ∫ ln ⁡ ( x ) x 2 d x = − ln ⁡ ( x ) x − ∫ ( 1 x ) ( − 1 x ) d x . {\displaystyle \int {\frac {\ln(x)}{x^{2}}}\,dx=-{\frac {\ln(x)}{x}}-\int {\biggl (}{\frac {1}{x}}{\biggr )}{\biggl (}-{\frac {1}{x}}{\biggr )}\,dx\,.} The antiderivative of − ⁠ 1 / x 2 ⁠ can be found with the power rule and is ⁠ 1 / x ⁠ . Alternatively, one may choose u and v such that the product u ′ (∫ v dx ) simplifies due to cancellation. For example, suppose one wishes to integrate: ∫ sec 2 ⁡ ( x ) ⋅ ln ⁡ ( | sin ⁡ ( x ) | ) d x . {\displaystyle \int \sec ^{2}(x)\cdot \ln {\Big (}{\bigl |}\sin(x){\bigr |}{\Big )}\,dx.} If we choose u ( x ) = ln(|sin( x )|) and v ( x ) = sec 2 x, then u differentiates to 1 tan ⁡ x {\displaystyle {\frac {1}{\tan x}}} using the chain rule and v integrates to tan x ; so the formula gives: ∫ sec 2 ⁡ ( x ) ⋅ ln ⁡ ( | sin ⁡ ( x ) | ) d x = tan ⁡ ( x ) ⋅ ln ⁡ ( | sin ⁡ ( x ) | ) − ∫ tan ⁡ ( x ) ⋅ 1 tan ⁡ ( x ) d x . {\displaystyle \int \sec ^{2}(x)\cdot \ln {\Big (}{\bigl |}\sin(x){\bigr |}{\Big )}\,dx=\tan(x)\cdot \ln {\Big (}{\bigl |}\sin(x){\bigr |}{\Big )}-\int \tan(x)\cdot {\frac {1}{\tan(x)}}\,dx\ .} The integrand simplifies to 1, so the antiderivative is x . Finding a simplifying combination frequently involves experimentation. In some applications, it may not be necessary to ensure that the integral produced by integration by parts has a simple form; for example, in numerical analysis , it may suffice that it has small magnitude and so contributes only a small error term. Some other special techniques are demonstrated in the examples below. In order to calculate I = ∫ x cos ⁡ ( x ) d x , {\displaystyle I=\int x\cos(x)\,dx\,,} let: u = x ⇒ d u = d x d v = cos ⁡ ( x ) d x ⇒ v = ∫ cos ⁡ ( x ) d x = sin ⁡ ( x ) {\displaystyle {\begin{alignedat}{3}u&=x\ &\Rightarrow \ &&du&=dx\\dv&=\cos(x)\,dx\ &\Rightarrow \ &&v&=\int \cos(x)\,dx=\sin(x)\end{alignedat}}} then: ∫ x cos ⁡ ( x ) d x = ∫ u d v = u ⋅ v − ∫ v d u = x sin ⁡ ( x ) − ∫ sin ⁡ ( x ) d x = x sin ⁡ ( x ) + cos ⁡ ( x ) + C , {\displaystyle {\begin{aligned}\int x\cos(x)\,dx&=\int u\ dv\\&=u\cdot v-\int v\,du\\&=x\sin(x)-\int \sin(x)\,dx\\&=x\sin(x)+\cos(x)+C,\end{aligned}}} where C is a constant of integration . For higher powers of x {\displaystyle x} in the form ∫ x n e x d x , ∫ x n sin ⁡ ( x ) d x , ∫ x n cos ⁡ ( x ) d x , {\displaystyle \int x^{n}e^{x}\,dx,\ \int x^{n}\sin(x)\,dx,\ \int x^{n}\cos(x)\,dx\,,} repeatedly using integration by parts can evaluate integrals such as these; each application of the theorem lowers the power of x {\displaystyle x} by one. An example commonly used to examine the workings of integration by parts is I = ∫ e x cos ⁡ ( x ) d x . {\displaystyle I=\int e^{x}\cos(x)\,dx.} Here, integration by parts is performed twice. 
First let u = cos ⁡ ( x ) ⇒ d u = − sin ⁡ ( x ) d x d v = e x d x ⇒ v = ∫ e x d x = e x {\displaystyle {\begin{alignedat}{3}u&=\cos(x)\ &\Rightarrow \ &&du&=-\sin(x)\,dx\\dv&=e^{x}\,dx\ &\Rightarrow \ &&v&=\int e^{x}\,dx=e^{x}\end{alignedat}}} then: ∫ e x cos ⁡ ( x ) d x = e x cos ⁡ ( x ) + ∫ e x sin ⁡ ( x ) d x . {\displaystyle \int e^{x}\cos(x)\,dx=e^{x}\cos(x)+\int e^{x}\sin(x)\,dx.} Now, to evaluate the remaining integral, we use integration by parts again, with: u = sin ⁡ ( x ) ⇒ d u = cos ⁡ ( x ) d x d v = e x d x ⇒ v = ∫ e x d x = e x . {\displaystyle {\begin{alignedat}{3}u&=\sin(x)\ &\Rightarrow \ &&du&=\cos(x)\,dx\\dv&=e^{x}\,dx\,&\Rightarrow \ &&v&=\int e^{x}\,dx=e^{x}.\end{alignedat}}} Then: ∫ e x sin ⁡ ( x ) d x = e x sin ⁡ ( x ) − ∫ e x cos ⁡ ( x ) d x . {\displaystyle \int e^{x}\sin(x)\,dx=e^{x}\sin(x)-\int e^{x}\cos(x)\,dx.} Putting these together, ∫ e x cos ⁡ ( x ) d x = e x cos ⁡ ( x ) + e x sin ⁡ ( x ) − ∫ e x cos ⁡ ( x ) d x . {\displaystyle \int e^{x}\cos(x)\,dx=e^{x}\cos(x)+e^{x}\sin(x)-\int e^{x}\cos(x)\,dx.} The same integral shows up on both sides of this equation. The integral can simply be added to both sides to get 2 ∫ e x cos ⁡ ( x ) d x = e x [ sin ⁡ ( x ) + cos ⁡ ( x ) ] + C , {\displaystyle 2\int e^{x}\cos(x)\,dx=e^{x}{\bigl [}\sin(x)+\cos(x){\bigr ]}+C,} which rearranges to ∫ e x cos ⁡ ( x ) d x = 1 2 e x [ sin ⁡ ( x ) + cos ⁡ ( x ) ] + C ′ {\displaystyle \int e^{x}\cos(x)\,dx={\frac {1}{2}}e^{x}{\bigl [}\sin(x)+\cos(x){\bigr ]}+C'} where again C {\displaystyle C} (and C ′ = C 2 {\displaystyle C'={\frac {C}{2}}} ) is a constant of integration . A similar method is used to find the integral of secant cubed . Two other well-known examples are when integration by parts is applied to a function expressed as a product of 1 and itself. This works if the derivative of the function is known, and the integral of this derivative times x {\displaystyle x} is also known. The first example is ∫ ln ⁡ ( x ) d x {\displaystyle \int \ln(x)dx} . We write this as: I = ∫ ln ⁡ ( x ) ⋅ 1 d x . {\displaystyle I=\int \ln(x)\cdot 1\,dx\,.} Let: u = ln ⁡ ( x ) ⇒ d u = d x x {\displaystyle u=\ln(x)\ \Rightarrow \ du={\frac {dx}{x}}} d v = d x ⇒ v = x {\displaystyle dv=dx\ \Rightarrow \ v=x} then: ∫ ln ⁡ ( x ) d x = x ln ⁡ ( x ) − ∫ x x d x = x ln ⁡ ( x ) − ∫ 1 d x = x ln ⁡ ( x ) − x + C {\displaystyle {\begin{aligned}\int \ln(x)\,dx&=x\ln(x)-\int {\frac {x}{x}}\,dx\\&=x\ln(x)-\int 1\,dx\\&=x\ln(x)-x+C\end{aligned}}} where C {\displaystyle C} is the constant of integration . The second example is the inverse tangent function arctan ⁡ ( x ) {\displaystyle \arctan(x)} : I = ∫ arctan ⁡ ( x ) d x . {\displaystyle I=\int \arctan(x)\,dx.} Rewrite this as ∫ arctan ⁡ ( x ) ⋅ 1 d x . {\displaystyle \int \arctan(x)\cdot 1\,dx.} Now let: u = arctan ⁡ ( x ) ⇒ d u = d x 1 + x 2 {\displaystyle u=\arctan(x)\ \Rightarrow \ du={\frac {dx}{1+x^{2}}}} d v = d x ⇒ v = x {\displaystyle dv=dx\ \Rightarrow \ v=x} then ∫ arctan ⁡ ( x ) d x = x arctan ⁡ ( x ) − ∫ x 1 + x 2 d x = x arctan ⁡ ( x ) − ln ⁡ ( 1 + x 2 ) 2 + C {\displaystyle {\begin{aligned}\int \arctan(x)\,dx&=x\arctan(x)-\int {\frac {x}{1+x^{2}}}\,dx\\[8pt]&=x\arctan(x)-{\frac {\ln(1+x^{2})}{2}}+C\end{aligned}}} using a combination of the inverse chain rule method and the natural logarithm integral condition . The LIATE rule is a rule of thumb for integration by parts. It involves choosing as u the function that comes first in the following list: [ 4 ] The function which is to be dv is whichever comes last in the list. 
The reason is that functions lower on the list generally have simpler antiderivatives than the functions above them. The rule is sometimes written as "DETAIL", where D stands for dv and the top of the list is the function chosen to be dv . An alternative to this rule is the ILATE rule, where inverse trigonometric functions come before logarithmic functions. To demonstrate the LIATE rule, consider the integral ∫ x ⋅ cos ⁡ ( x ) d x . {\displaystyle \int x\cdot \cos(x)\,dx.} Following the LIATE rule, u = x , and dv = cos( x ) dx , hence du = dx , and v = sin( x ), which makes the integral become x ⋅ sin ⁡ ( x ) − ∫ 1 sin ⁡ ( x ) d x , {\displaystyle x\cdot \sin(x)-\int 1\sin(x)\,dx,} which equals x ⋅ sin ⁡ ( x ) + cos ⁡ ( x ) + C . {\displaystyle x\cdot \sin(x)+\cos(x)+C.} In general, one tries to choose u and dv such that du is simpler than u and dv is easy to integrate. If instead cos( x ) was chosen as u , and x dx as dv , we would have the integral x 2 2 cos ⁡ ( x ) + ∫ x 2 2 sin ⁡ ( x ) d x , {\displaystyle {\frac {x^{2}}{2}}\cos(x)+\int {\frac {x^{2}}{2}}\sin(x)\,dx,} which, after recursive application of the integration by parts formula, would clearly result in an infinite recursion and lead nowhere. Although a useful rule of thumb, there are exceptions to the LIATE rule. A common alternative is to consider the rules in the "ILATE" order instead. Also, in some cases, polynomial terms need to be split in non-trivial ways. For example, to integrate ∫ x 3 e x 2 d x , {\displaystyle \int x^{3}e^{x^{2}}\,dx,} one would set u = x 2 , d v = x ⋅ e x 2 d x , {\displaystyle u=x^{2},\quad dv=x\cdot e^{x^{2}}\,dx,} so that d u = 2 x d x , v = e x 2 2 . {\displaystyle du=2x\,dx,\quad v={\frac {e^{x^{2}}}{2}}.} Then ∫ x 3 e x 2 d x = ∫ ( x 2 ) ( x e x 2 ) d x = ∫ u d v = u v − ∫ v d u = x 2 e x 2 2 − ∫ x e x 2 d x . {\displaystyle \int x^{3}e^{x^{2}}\,dx=\int \left(x^{2}\right)\left(xe^{x^{2}}\right)\,dx=\int u\,dv=uv-\int v\,du={\frac {x^{2}e^{x^{2}}}{2}}-\int xe^{x^{2}}\,dx.} Finally, this results in ∫ x 3 e x 2 d x = e x 2 ( x 2 − 1 ) 2 + C . {\displaystyle \int x^{3}e^{x^{2}}\,dx={\frac {e^{x^{2}}\left(x^{2}-1\right)}{2}}+C.} Integration by parts is often used as a tool to prove theorems in mathematical analysis . The Wallis infinite product for π {\displaystyle \pi } π 2 = ∏ n = 1 ∞ 4 n 2 4 n 2 − 1 = ∏ n = 1 ∞ ( 2 n 2 n − 1 ⋅ 2 n 2 n + 1 ) = ( 2 1 ⋅ 2 3 ) ⋅ ( 4 3 ⋅ 4 5 ) ⋅ ( 6 5 ⋅ 6 7 ) ⋅ ( 8 7 ⋅ 8 9 ) ⋅ ⋯ {\displaystyle {\begin{aligned}{\frac {\pi }{2}}&=\prod _{n=1}^{\infty }{\frac {4n^{2}}{4n^{2}-1}}=\prod _{n=1}^{\infty }\left({\frac {2n}{2n-1}}\cdot {\frac {2n}{2n+1}}\right)\\[6pt]&={\Big (}{\frac {2}{1}}\cdot {\frac {2}{3}}{\Big )}\cdot {\Big (}{\frac {4}{3}}\cdot {\frac {4}{5}}{\Big )}\cdot {\Big (}{\frac {6}{5}}\cdot {\frac {6}{7}}{\Big )}\cdot {\Big (}{\frac {8}{7}}\cdot {\frac {8}{9}}{\Big )}\cdot \;\cdots \end{aligned}}} may be derived using integration by parts . The gamma function is an example of a special function , defined as an improper integral for z > 0 {\displaystyle z>0} . Integration by parts illustrates it to be an extension of the factorial function: Γ ( z ) = ∫ 0 ∞ e − x x z − 1 d x = − ∫ 0 ∞ x z − 1 d ( e − x ) = − [ e − x x z − 1 ] 0 ∞ + ∫ 0 ∞ e − x d ( x z − 1 ) = 0 + ∫ 0 ∞ ( z − 1 ) x z − 2 e − x d x = ( z − 1 ) Γ ( z − 1 ) . 
{\displaystyle {\begin{aligned}\Gamma (z)&=\int _{0}^{\infty }e^{-x}x^{z-1}dx\\[6pt]&=-\int _{0}^{\infty }x^{z-1}\,d\left(e^{-x}\right)\\[6pt]&=-{\Biggl [}e^{-x}x^{z-1}{\Biggl ]}_{0}^{\infty }+\int _{0}^{\infty }e^{-x}d\left(x^{z-1}\right)\\[6pt]&=0+\int _{0}^{\infty }\left(z-1\right)x^{z-2}e^{-x}dx\\[6pt]&=(z-1)\Gamma (z-1).\end{aligned}}} Since Γ ( 1 ) = ∫ 0 ∞ e − x d x = 1 , {\displaystyle \Gamma (1)=\int _{0}^{\infty }e^{-x}\,dx=1,} when z {\displaystyle z} is a natural number, that is, z = n ∈ N {\displaystyle z=n\in \mathbb {N} } , applying this formula repeatedly gives the factorial : Γ ( n + 1 ) = n ! {\displaystyle \Gamma (n+1)=n!} Integration by parts is often used in harmonic analysis , particularly Fourier analysis , to show that quickly oscillating integrals with sufficiently smooth integrands decay quickly . The most common example of this is its use in showing that the decay of function's Fourier transform depends on the smoothness of that function, as described below. If f {\displaystyle f} is a k {\displaystyle k} -times continuously differentiable function and all derivatives up to the k {\displaystyle k} th one decay to zero at infinity, then its Fourier transform satisfies ( F f ( k ) ) ( ξ ) = ( 2 π i ξ ) k F f ( ξ ) , {\displaystyle ({\mathcal {F}}f^{(k)})(\xi )=(2\pi i\xi )^{k}{\mathcal {F}}f(\xi ),} where f ( k ) {\displaystyle f^{(k)}} is the k {\displaystyle k} th derivative of f {\displaystyle f} . (The exact constant on the right depends on the convention of the Fourier transform used .) This is proved by noting that d d y e − 2 π i y ξ = − 2 π i ξ e − 2 π i y ξ , {\displaystyle {\frac {d}{dy}}e^{-2\pi iy\xi }=-2\pi i\xi e^{-2\pi iy\xi },} so using integration by parts on the Fourier transform of the derivative we get ( F f ′ ) ( ξ ) = ∫ − ∞ ∞ e − 2 π i y ξ f ′ ( y ) d y = [ e − 2 π i y ξ f ( y ) ] − ∞ ∞ − ∫ − ∞ ∞ ( − 2 π i ξ e − 2 π i y ξ ) f ( y ) d y = 2 π i ξ ∫ − ∞ ∞ e − 2 π i y ξ f ( y ) d y = 2 π i ξ F f ( ξ ) . {\displaystyle {\begin{aligned}({\mathcal {F}}f')(\xi )&=\int _{-\infty }^{\infty }e^{-2\pi iy\xi }f'(y)\,dy\\&=\left[e^{-2\pi iy\xi }f(y)\right]_{-\infty }^{\infty }-\int _{-\infty }^{\infty }(-2\pi i\xi e^{-2\pi iy\xi })f(y)\,dy\\[5pt]&=2\pi i\xi \int _{-\infty }^{\infty }e^{-2\pi iy\xi }f(y)\,dy\\[5pt]&=2\pi i\xi {\mathcal {F}}f(\xi ).\end{aligned}}} Applying this inductively gives the result for general k {\displaystyle k} . A similar method can be used to find the Laplace transform of a derivative of a function. The above result tells us about the decay of the Fourier transform, since it follows that if f {\displaystyle f} and f ( k ) {\displaystyle f^{(k)}} are integrable then | F f ( ξ ) | ≤ I ( f ) 1 + | 2 π ξ | k , where I ( f ) = ∫ − ∞ ∞ ( | f ( y ) | + | f ( k ) ( y ) | ) d y . {\displaystyle \vert {\mathcal {F}}f(\xi )\vert \leq {\frac {I(f)}{1+\vert 2\pi \xi \vert ^{k}}},{\text{ where }}I(f)=\int _{-\infty }^{\infty }{\Bigl (}\vert f(y)\vert +\vert f^{(k)}(y)\vert {\Bigr )}\,dy.} In other words, if f {\displaystyle f} satisfies these conditions then its Fourier transform decays at infinity at least as quickly as 1/| ξ | k . In particular, if k ≥ 2 {\displaystyle k\geq 2} then the Fourier transform is integrable. The proof uses the fact, which is immediate from the definition of the Fourier transform , that | F f ( ξ ) | ≤ ∫ − ∞ ∞ | f ( y ) | d y . 
{\displaystyle \vert {\mathcal {F}}f(\xi )\vert \leq \int _{-\infty }^{\infty }\vert f(y)\vert \,dy.} Using the same idea on the equality stated at the start of this subsection gives | ( 2 π i ξ ) k F f ( ξ ) | ≤ ∫ − ∞ ∞ | f ( k ) ( y ) | d y . {\displaystyle \vert (2\pi i\xi )^{k}{\mathcal {F}}f(\xi )\vert \leq \int _{-\infty }^{\infty }\vert f^{(k)}(y)\vert \,dy.} Summing these two inequalities and then dividing by 1 + |2 π ξ k | gives the stated inequality. One use of integration by parts in operator theory is that it shows that the −∆ (where ∆ is the Laplace operator ) is a positive operator on L 2 {\displaystyle L^{2}} (see L p space ). If f {\displaystyle f} is smooth and compactly supported then, using integration by parts, we have ⟨ − Δ f , f ⟩ L 2 = − ∫ − ∞ ∞ f ″ ( x ) f ( x ) ¯ d x = − [ f ′ ( x ) f ( x ) ¯ ] − ∞ ∞ + ∫ − ∞ ∞ f ′ ( x ) f ′ ( x ) ¯ d x = ∫ − ∞ ∞ | f ′ ( x ) | 2 d x ≥ 0. {\displaystyle {\begin{aligned}\langle -\Delta f,f\rangle _{L^{2}}&=-\int _{-\infty }^{\infty }f''(x){\overline {f(x)}}\,dx\\[5pt]&=-\left[f'(x){\overline {f(x)}}\right]_{-\infty }^{\infty }+\int _{-\infty }^{\infty }f'(x){\overline {f'(x)}}\,dx\\[5pt]&=\int _{-\infty }^{\infty }\vert f'(x)\vert ^{2}\,dx\geq 0.\end{aligned}}} Considering a second derivative of v {\displaystyle v} in the integral on the LHS of the formula for partial integration suggests a repeated application to the integral on the RHS: ∫ u v ″ d x = u v ′ − ∫ u ′ v ′ d x = u v ′ − ( u ′ v − ∫ u ″ v d x ) . {\displaystyle \int uv''\,dx=uv'-\int u'v'\,dx=uv'-\left(u'v-\int u''v\,dx\right).} Extending this concept of repeated partial integration to derivatives of degree n leads to ∫ u ( 0 ) v ( n ) d x = u ( 0 ) v ( n − 1 ) − u ( 1 ) v ( n − 2 ) + u ( 2 ) v ( n − 3 ) − ⋯ + ( − 1 ) n − 1 u ( n − 1 ) v ( 0 ) + ( − 1 ) n ∫ u ( n ) v ( 0 ) d x . = ∑ k = 0 n − 1 ( − 1 ) k u ( k ) v ( n − 1 − k ) + ( − 1 ) n ∫ u ( n ) v ( 0 ) d x . {\displaystyle {\begin{aligned}\int u^{(0)}v^{(n)}\,dx&=u^{(0)}v^{(n-1)}-u^{(1)}v^{(n-2)}+u^{(2)}v^{(n-3)}-\cdots +(-1)^{n-1}u^{(n-1)}v^{(0)}+(-1)^{n}\int u^{(n)}v^{(0)}\,dx.\\[5pt]&=\sum _{k=0}^{n-1}(-1)^{k}u^{(k)}v^{(n-1-k)}+(-1)^{n}\int u^{(n)}v^{(0)}\,dx.\end{aligned}}} This concept may be useful when the successive integrals of v ( n ) {\displaystyle v^{(n)}} are readily available (e.g., plain exponentials or sine and cosine, as in Laplace or Fourier transforms ), and when the n th derivative of u {\displaystyle u} vanishes (e.g., as a polynomial function with degree ( n − 1 ) {\displaystyle (n-1)} ). The latter condition stops the repeating of partial integration, because the RHS-integral vanishes. In the course of the above repetition of partial integrations the integrals ∫ u ( 0 ) v ( n ) d x {\displaystyle \int u^{(0)}v^{(n)}\,dx\quad } and ∫ u ( ℓ ) v ( n − ℓ ) d x {\displaystyle \quad \int u^{(\ell )}v^{(n-\ell )}\,dx\quad } and ∫ u ( m ) v ( n − m ) d x for 1 ≤ m , ℓ ≤ n {\displaystyle \quad \int u^{(m)}v^{(n-m)}\,dx\quad {\text{ for }}1\leq m,\ell \leq n} get related. This may be interpreted as arbitrarily "shifting" derivatives between v {\displaystyle v} and u {\displaystyle u} within the integrand, and proves useful, too (see Rodrigues' formula ). The essential process of the above formula can be summarized in a table; the resulting method is called "tabular integration" [ 5 ] and was featured in the film Stand and Deliver (1988). [ 6 ] For example, consider the integral ∫ x 3 cos ⁡ x d x {\displaystyle \int x^{3}\cos x\,dx\quad } and take u ( 0 ) = x 3 , v ( n ) = cos ⁡ x . 
{\displaystyle \quad u^{(0)}=x^{3},\quad v^{(n)}=\cos x.} Begin to list in column A the function u ( 0 ) = x 3 {\displaystyle u^{(0)}=x^{3}} and its subsequent derivatives u ( i ) {\displaystyle u^{(i)}} until zero is reached. Then list in column B the function v ( n ) = cos ⁡ x {\displaystyle v^{(n)}=\cos x} and its subsequent integrals v ( n − i ) {\displaystyle v^{(n-i)}} until the size of column B is the same as that of column A . The result is as follows: The product of the entries in row i of columns A and B together with the respective sign give the relevant integrals in step i in the course of repeated integration by parts. Step i = 0 yields the original integral. For the complete result in step i > 0 the i th integral must be added to all the previous products ( 0 ≤ j < i ) of the j th entry of column A and the ( j + 1) st entry of column B (i.e., multiply the 1st entry of column A with the 2nd entry of column B, the 2nd entry of column A with the 3rd entry of column B, etc. ...) with the given j th sign. This process comes to a natural halt, when the product, which yields the integral, is zero ( i = 4 in the example). The complete result is the following (with the alternating signs in each term): ( + 1 ) ( x 3 ) ( sin ⁡ x ) ⏟ j = 0 + ( − 1 ) ( 3 x 2 ) ( − cos ⁡ x ) ⏟ j = 1 + ( + 1 ) ( 6 x ) ( − sin ⁡ x ) ⏟ j = 2 + ( − 1 ) ( 6 ) ( cos ⁡ x ) ⏟ j = 3 + ∫ ( + 1 ) ( 0 ) ( cos ⁡ x ) d x ⏟ i = 4 : → C . {\displaystyle \underbrace {(+1)(x^{3})(\sin x)} _{j=0}+\underbrace {(-1)(3x^{2})(-\cos x)} _{j=1}+\underbrace {(+1)(6x)(-\sin x)} _{j=2}+\underbrace {(-1)(6)(\cos x)} _{j=3}+\underbrace {\int (+1)(0)(\cos x)\,dx} _{i=4:\;\to \;C}.} This yields ∫ x 3 cos ⁡ x d x ⏟ step 0 = x 3 sin ⁡ x + 3 x 2 cos ⁡ x − 6 x sin ⁡ x − 6 cos ⁡ x + C . {\displaystyle \underbrace {\int x^{3}\cos x\,dx} _{\text{step 0}}=x^{3}\sin x+3x^{2}\cos x-6x\sin x-6\cos x+C.} The repeated partial integration also turns out useful, when in the course of respectively differentiating and integrating the functions u ( i ) {\displaystyle u^{(i)}} and v ( n − i ) {\displaystyle v^{(n-i)}} their product results in a multiple of the original integrand. In this case the repetition may also be terminated with this index i. This can happen, expectably, with exponentials and trigonometric functions. As an example consider ∫ e x cos ⁡ x d x . {\displaystyle \int e^{x}\cos x\,dx.} In this case the product of the terms in columns A and B with the appropriate sign for index i = 2 yields the negative of the original integrand (compare rows i = 0 and i = 2 ). ∫ e x cos ⁡ x d x ⏟ step 0 = ( + 1 ) ( e x ) ( sin ⁡ x ) ⏟ j = 0 + ( − 1 ) ( e x ) ( − cos ⁡ x ) ⏟ j = 1 + ∫ ( + 1 ) ( e x ) ( − cos ⁡ x ) d x ⏟ i = 2 . {\displaystyle \underbrace {\int e^{x}\cos x\,dx} _{\text{step 0}}=\underbrace {(+1)(e^{x})(\sin x)} _{j=0}+\underbrace {(-1)(e^{x})(-\cos x)} _{j=1}+\underbrace {\int (+1)(e^{x})(-\cos x)\,dx} _{i=2}.} Observing that the integral on the RHS can have its own constant of integration C ′ {\displaystyle C'} , and bringing the abstract integral to the other side, gives: 2 ∫ e x cos ⁡ x d x = e x sin ⁡ x + e x cos ⁡ x + C ′ , {\displaystyle 2\int e^{x}\cos x\,dx=e^{x}\sin x+e^{x}\cos x+C',} and finally: ∫ e x cos ⁡ x d x = 1 2 ( e x ( sin ⁡ x + cos ⁡ x ) ) + C , {\displaystyle \int e^{x}\cos x\,dx={\frac {1}{2}}\left(e^{x}(\sin x+\cos x)\right)+C,} where C = C ′ 2 {\displaystyle C={\frac {C'}{2}}} . 
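The tabular scheme above lends itself to direct mechanization. The following is a minimal Python sketch (assuming the sympy library is available) that builds column A by repeated differentiation and column B by repeated antidifferentiation, then assembles the alternating-sign products for ∫ x³ cos x dx; the function name tabular_integrate and the termination limit are illustrative choices, not part of any standard API.

```python
import sympy as sp

x = sp.symbols('x')

def tabular_integrate(u, dv, var, max_rows=20):
    """Repeated integration by parts in tabular form.

    Column A: u and its successive derivatives (until zero or max_rows).
    Column B: dv and its successive antiderivatives.
    Returns the sum of (-1)**j * A[j] * B[j+1], which is valid when u
    eventually differentiates to zero (e.g. u is a polynomial).
    """
    col_a = [u]
    col_b = [dv]
    while col_a[-1] != 0 and len(col_a) < max_rows:
        col_a.append(sp.diff(col_a[-1], var))
        col_b.append(sp.integrate(col_b[-1], var))
    # product of the j-th entry of column A with the (j+1)-st entry of column B
    terms = [(-1)**j * col_a[j] * col_b[j + 1] for j in range(len(col_a) - 1)]
    return sp.simplify(sum(terms))

result = tabular_integrate(x**3, sp.cos(x), x)
print(result)   # equals x**3*sin(x) + 3*x**2*cos(x) - 6*x*sin(x) - 6*cos(x)
print(sp.simplify(result - sp.integrate(x**3*sp.cos(x), x)))   # 0, agrees with direct integration
```

The printed difference of 0 confirms that the tabular result matches direct symbolic integration of the same integrand.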
Integration by parts can be extended to functions of several variables by applying a version of the fundamental theorem of calculus to an appropriate product rule. There are several such pairings possible in multivariate calculus, involving a scalar-valued function u and vector-valued function (vector field) V . [ 7 ] The product rule for divergence states: ∇ ⋅ ( u V ) = u ∇ ⋅ V + ∇ u ⋅ V . {\displaystyle \nabla \cdot (u\mathbf {V} )\ =\ u\,\nabla \cdot \mathbf {V} \ +\ \nabla u\cdot \mathbf {V} .} Suppose Ω {\displaystyle \Omega } is an open bounded subset of R n {\displaystyle \mathbb {R} ^{n}} with a piecewise smooth boundary Γ = ∂ Ω {\displaystyle \Gamma =\partial \Omega } . Integrating over Ω {\displaystyle \Omega } with respect to the standard volume form d Ω {\displaystyle d\Omega } , and applying the divergence theorem , gives: ∫ Γ u V ⋅ n ^ d Γ = ∫ Ω ∇ ⋅ ( u V ) d Ω = ∫ Ω u ∇ ⋅ V d Ω + ∫ Ω ∇ u ⋅ V d Ω , {\displaystyle \int _{\Gamma }u\mathbf {V} \cdot {\hat {\mathbf {n} }}\,d\Gamma \ =\ \int _{\Omega }\nabla \cdot (u\mathbf {V} )\,d\Omega \ =\ \int _{\Omega }u\,\nabla \cdot \mathbf {V} \,d\Omega \ +\ \int _{\Omega }\nabla u\cdot \mathbf {V} \,d\Omega ,} where n ^ {\displaystyle {\hat {\mathbf {n} }}} is the outward unit normal vector to the boundary, integrated with respect to its standard Riemannian volume form d Γ {\displaystyle d\Gamma } . Rearranging gives: ∫ Ω u ∇ ⋅ V d Ω = ∫ Γ u V ⋅ n ^ d Γ − ∫ Ω ∇ u ⋅ V d Ω , {\displaystyle \int _{\Omega }u\,\nabla \cdot \mathbf {V} \,d\Omega \ =\ \int _{\Gamma }u\mathbf {V} \cdot {\hat {\mathbf {n} }}\,d\Gamma -\int _{\Omega }\nabla u\cdot \mathbf {V} \,d\Omega ,} or in other words ∫ Ω u div ⁡ ( V ) d Ω = ∫ Γ u V ⋅ n ^ d Γ − ∫ Ω grad ⁡ ( u ) ⋅ V d Ω . {\displaystyle \int _{\Omega }u\,\operatorname {div} (\mathbf {V} )\,d\Omega \ =\ \int _{\Gamma }u\mathbf {V} \cdot {\hat {\mathbf {n} }}\,d\Gamma -\int _{\Omega }\operatorname {grad} (u)\cdot \mathbf {V} \,d\Omega .} The regularity requirements of the theorem can be relaxed. For instance, the boundary Γ = ∂ Ω {\displaystyle \Gamma =\partial \Omega } need only be Lipschitz continuous , and the functions u , v need only lie in the Sobolev space H 1 ( Ω ) {\displaystyle H^{1}(\Omega )} . Consider the continuously differentiable vector fields U = u 1 e 1 + ⋯ + u n e n {\displaystyle \mathbf {U} =u_{1}\mathbf {e} _{1}+\cdots +u_{n}\mathbf {e} _{n}} and v e 1 , … , v e n {\displaystyle v\mathbf {e} _{1},\ldots ,v\mathbf {e} _{n}} , where e i {\displaystyle \mathbf {e} _{i}} is the i -th standard basis vector for i = 1 , … , n {\displaystyle i=1,\ldots ,n} . Now apply the above integration by parts to each u i {\displaystyle u_{i}} times the vector field v e i {\displaystyle v\mathbf {e} _{i}} : ∫ Ω u i ∂ v ∂ x i d Ω = ∫ Γ u i v e i ⋅ n ^ d Γ − ∫ Ω ∂ u i ∂ x i v d Ω . {\displaystyle \int _{\Omega }u_{i}{\frac {\partial v}{\partial x_{i}}}\,d\Omega \ =\ \int _{\Gamma }u_{i}v\,\mathbf {e} _{i}\cdot {\hat {\mathbf {n} }}\,d\Gamma -\int _{\Omega }{\frac {\partial u_{i}}{\partial x_{i}}}v\,d\Omega .} Summing over i gives a new integration by parts formula: ∫ Ω U ⋅ ∇ v d Ω = ∫ Γ v U ⋅ n ^ d Γ − ∫ Ω v ∇ ⋅ U d Ω . 
{\displaystyle \int _{\Omega }\mathbf {U} \cdot \nabla v\,d\Omega \ =\ \int _{\Gamma }v\mathbf {U} \cdot {\hat {\mathbf {n} }}\,d\Gamma -\int _{\Omega }v\,\nabla \cdot \mathbf {U} \,d\Omega .} The case U = ∇ u {\displaystyle \mathbf {U} =\nabla u} , where u ∈ C 2 ( Ω ¯ ) {\displaystyle u\in C^{2}({\bar {\Omega }})} , is known as the first of Green's identities : ∫ Ω ∇ u ⋅ ∇ v d Ω = ∫ Γ v ∇ u ⋅ n ^ d Γ − ∫ Ω v ∇ 2 u d Ω . {\displaystyle \int _{\Omega }\nabla u\cdot \nabla v\,d\Omega \ =\ \int _{\Gamma }v\,\nabla u\cdot {\hat {\mathbf {n} }}\,d\Gamma -\int _{\Omega }v\,\nabla ^{2}u\,d\Omega .}
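As a concrete check of the multidimensional formula, the following Python sketch (assuming sympy) verifies Green's first identity symbolically on the unit square; the choices u = x² + y² and v = xy are purely illustrative, and the boundary integral is evaluated edge by edge using the outward normals of the square.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Illustrative test functions on the unit square Omega = [0,1] x [0,1]
u = x**2 + y**2
v = x*y

# Left-hand side: integral over Omega of grad(u) . grad(v)
lhs = sp.integrate(sp.integrate(
    sp.diff(u, x)*sp.diff(v, x) + sp.diff(u, y)*sp.diff(v, y),
    (x, 0, 1)), (y, 0, 1))

# Volume term: integral over Omega of v * Laplacian(u)
laplace_u = sp.diff(u, x, 2) + sp.diff(u, y, 2)
vol = sp.integrate(sp.integrate(v*laplace_u, (x, 0, 1)), (y, 0, 1))

# Boundary term: integral over the four edges of v * (grad(u) . n_hat)
bnd = (
    sp.integrate((v*sp.diff(u, x)).subs(x, 1), (y, 0, 1))      # right edge,  n = (+1, 0)
    + sp.integrate((-v*sp.diff(u, x)).subs(x, 0), (y, 0, 1))   # left edge,   n = (-1, 0)
    + sp.integrate((v*sp.diff(u, y)).subs(y, 1), (x, 0, 1))    # top edge,    n = (0, +1)
    + sp.integrate((-v*sp.diff(u, y)).subs(y, 0), (x, 0, 1))   # bottom edge, n = (0, -1)
)

print(lhs, bnd - vol)   # both sides of Green's first identity agree (here: 1 and 1)
```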
https://en.wikipedia.org/wiki/Integration_by_parts
In mathematics , an integration by parts operator is a linear operator used to formulate integration by parts formulae; the most interesting examples of integration by parts operators occur in infinite-dimensional settings and find uses in stochastic analysis and its applications. Let E be a Banach space such that both E and its continuous dual space E ∗ are separable spaces ; let μ be a Borel measure on E . Let S be any (fixed) subset of the class of functions defined on E . A linear operator A : S → L 2 ( E , μ ; R ) is said to be an integration by parts operator for μ if ∫ E D φ ( x ) h ( x ) d μ ( x ) = ∫ E φ ( x ) ( A h ) ( x ) d μ ( x ) {\displaystyle \int _{E}D\varphi (x)h(x)\,\mathrm {d} \mu (x)=\int _{E}\varphi (x)(Ah)(x)\,\mathrm {d} \mu (x)} for every C 1 function φ : E → R and all h ∈ S for which either side of the above equality makes sense. In the above, D φ ( x ) denotes the Fréchet derivative of φ at x .
https://en.wikipedia.org/wiki/Integration_by_parts_operator
In integral calculus, integration by reduction formulae is a method relying on recurrence relations . It is used when an expression containing an integer parameter , usually in the form of powers of elementary functions, or products of transcendental functions and polynomials of arbitrary degree , can't be integrated directly. But using other methods of integration a reduction formula can be set up to obtain the integral of the same or similar expression with a lower integer parameter, progressively simplifying the integral until it can be evaluated. [ 1 ] This method of integration is one of the earliest used. The reduction formula can be derived using any of the common methods of integration, like integration by substitution , integration by parts , integration by trigonometric substitution , integration by partial fractions , etc. The main idea is to express an integral involving an integer parameter (e.g. power) of a function, represented by I n , in terms of an integral that involves a lower value of the parameter (lower power) of that function, for example I n -1 or I n -2 . This makes the reduction formula a type of recurrence relation . In other words, the reduction formula expresses the integral in terms of where To compute the integral, we set n to its value and use the reduction formula to express it in terms of the ( n – 1) or ( n – 2) integral. The lower index integral can be used to calculate the higher index ones; the process is continued repeatedly until we reach a point where the function to be integrated can be computed, usually when its index is 0 or 1. Then we back-substitute the previous results until we have computed I n . [ 2 ] Below are examples of the procedure. Typically, integrals like can be evaluated by a reduction formula. Start by setting: Now re-write as: Integrating by this substitution: Now integrating by parts: solving for I n : so the reduction formula is: To supplement the example, the above can be used to evaluate the integral for (say) n = 5; Calculating lower indices: back-substituting: where C is a constant. Another typical example is: Start by setting: Integrating by substitution: Now integrating by parts: shifting indices back by 1 (so n + 1 → n , n → n – 1): solving for I n : so the reduction formula is: An alternative way in which the derivation could be done starts by substituting e a x {\displaystyle e^{ax}} . 
Integration by substitution: e a x d x = d ( e a x ) a , {\displaystyle e^{ax}\,{\text{d}}x={\frac {{\text{d}}(e^{ax})}{a}},\,\!} I n = 1 a ∫ x n d ( e a x ) , {\displaystyle I_{n}={\frac {1}{a}}\int x^{n}\,{\text{d}}(e^{ax}),\!} Now integrating by parts: ∫ x n d ( e a x ) = x n e a x − ∫ e a x d ( x n ) = x n e a x − n ∫ e a x x n − 1 d x , {\displaystyle {\begin{aligned}\int x^{n}\,{\text{d}}(e^{ax})&=x^{n}e^{ax}-\int e^{ax}\,{\text{d}}(x^{n})\\&=x^{n}e^{ax}-n\int e^{ax}x^{n-1}\,{\text{d}}x,\end{aligned}}\!} which gives the reduction formula when substituting back: I n = 1 a ( x n e a x − n I n − 1 ) , {\displaystyle I_{n}={\frac {1}{a}}\left(x^{n}e^{ax}-nI_{n-1}\right),\,\!} which is equivalent to: Another alternative way in which the derivation could be done by integrating by parts: Remember: which gives the reduction formula when substituting back: which is equivalent to: The following integrals [ 3 ] contain: I n = 2 ( p x + q ) n a x + b a ( 2 n + 1 ) + 2 n ( a q − b p ) a ( 2 n + 1 ) I n − 1 {\displaystyle I_{n}={\frac {2(px+q)^{n}{\sqrt {ax+b}}}{a(2n+1)}}+{\frac {2n(aq-bp)}{a(2n+1)}}I_{n-1}\,\!} I n = − a x + b ( n − 1 ) ( a q − b p ) ( p x + q ) n − 1 + a ( 2 n − 3 ) 2 ( n − 1 ) ( a q − b p ) I n − 1 {\displaystyle I_{n}=-{\frac {\sqrt {ax+b}}{(n-1)(aq-bp)(px+q)^{n-1}}}+{\frac {a(2n-3)}{2(n-1)(aq-bp)}}I_{n-1}\,\!} note that by the laws of indices : The following integrals [ 4 ] contain: J n = ∫ cos ⁡ a x x n d x {\displaystyle J_{n}=\int {\frac {\cos {ax}}{x^{n}}}\,{\text{d}}x\,\!} J n = − cos ⁡ a x ( n − 1 ) x n − 1 − a n − 1 I n − 1 {\displaystyle J_{n}=-{\frac {\cos {ax}}{(n-1)x^{n-1}}}-{\frac {a}{n-1}}I_{n-1}\,\!} the formulae can be combined to obtain separate equations in I n : J n − 1 = − cos ⁡ a x ( n − 2 ) x n − 2 − a n − 2 I n − 2 {\displaystyle J_{n-1}=-{\frac {\cos {ax}}{(n-2)x^{n-2}}}-{\frac {a}{n-2}}I_{n-2}\,\!} I n = − sin ⁡ a x ( n − 1 ) x n − 1 − a n − 1 [ cos ⁡ a x ( n − 2 ) x n − 2 + a n − 2 I n − 2 ] {\displaystyle I_{n}=-{\frac {\sin {ax}}{(n-1)x^{n-1}}}-{\frac {a}{n-1}}\left[{\frac {\cos {ax}}{(n-2)x^{n-2}}}+{\frac {a}{n-2}}I_{n-2}\right]\,\!} ∴ I n = − sin ⁡ a x ( n − 1 ) x n − 1 − a ( n − 1 ) ( n − 2 ) ( cos ⁡ a x x n − 2 + a I n − 2 ) {\displaystyle \therefore I_{n}=-{\frac {\sin {ax}}{(n-1)x^{n-1}}}-{\frac {a}{(n-1)(n-2)}}\left({\frac {\cos {ax}}{x^{n-2}}}+aI_{n-2}\right)\,\!} and J n : I n − 1 = − sin ⁡ a x ( n − 2 ) x n − 2 + a n − 2 J n − 2 {\displaystyle I_{n-1}=-{\frac {\sin {ax}}{(n-2)x^{n-2}}}+{\frac {a}{n-2}}J_{n-2}\,\!} J n = − cos ⁡ a x ( n − 1 ) x n − 1 − a n − 1 [ − sin ⁡ a x ( n − 2 ) x n − 2 + a n − 2 J n − 2 ] {\displaystyle J_{n}=-{\frac {\cos {ax}}{(n-1)x^{n-1}}}-{\frac {a}{n-1}}\left[-{\frac {\sin {ax}}{(n-2)x^{n-2}}}+{\frac {a}{n-2}}J_{n-2}\right]\,\!} ∴ J n = − cos ⁡ a x ( n − 1 ) x n − 1 − a ( n − 1 ) ( n − 2 ) ( − sin ⁡ a x x n − 2 + a J n − 2 ) {\displaystyle \therefore J_{n}=-{\frac {\cos {ax}}{(n-1)x^{n-1}}}-{\frac {a}{(n-1)(n-2)}}\left(-{\frac {\sin {ax}}{x^{n-2}}}+aJ_{n-2}\right)\,\!} n > 0 {\displaystyle n>0\,\!} n > 0 {\displaystyle n>0\,\!} n ≠ 1 {\displaystyle n\neq 1\,\!}
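To illustrate how such a reduction formula is used in practice, here is a short Python sketch (assuming sympy) that applies the recurrence I_n = (xⁿ e^{ax} − n I_{n−1})/a down to the base case I₀ = e^{ax}/a and compares each result with direct symbolic integration; the concrete value a = 2 and the recursion depth are arbitrary illustrative choices.

```python
import sympy as sp

x = sp.symbols('x')
a = sp.Rational(2)   # a concrete nonzero value chosen only for illustration

def I_n(n):
    """Antiderivative of x**n * exp(a*x) via the reduction formula
    I_n = (x**n * exp(a*x) - n*I_{n-1}) / a, with base case I_0 = exp(a*x)/a."""
    if n == 0:
        return sp.exp(a*x)/a
    return (x**n*sp.exp(a*x) - n*I_n(n - 1))/a

for n in range(4):
    direct = sp.integrate(x**n*sp.exp(a*x), x)
    # antiderivatives may in general differ by a constant; here the difference is exactly 0
    print(n, sp.simplify(I_n(n) - direct))
```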
https://en.wikipedia.org/wiki/Integration_by_reduction_formulae
In calculus , integration by substitution , also known as u -substitution , reverse chain rule or change of variables , [ 1 ] is a method for evaluating integrals and antiderivatives . It is the counterpart to the chain rule for differentiation , and can loosely be thought of as using the chain rule "backwards." This involves differential forms . Before stating the result rigorously , consider a simple case using indefinite integrals . Compute ∫ ( 2 x 3 + 1 ) 7 ( x 2 ) d x . {\textstyle \int (2x^{3}+1)^{7}(x^{2})\,dx.} [ 2 ] Set u = 2 x 3 + 1. {\displaystyle u=2x^{3}+1.} This means d u d x = 6 x 2 , {\textstyle {\frac {du}{dx}}=6x^{2},} or as a differential form , d u = 6 x 2 d x . {\textstyle du=6x^{2}\,dx.} Now: ∫ ( 2 x 3 + 1 ) 7 ( x 2 ) d x = 1 6 ∫ ( 2 x 3 + 1 ) 7 ⏟ u 7 ( 6 x 2 ) d x ⏟ d u = 1 6 ∫ u 7 d u = 1 6 ( 1 8 u 8 ) + C = 1 48 ( 2 x 3 + 1 ) 8 + C , {\displaystyle {\begin{aligned}\int (2x^{3}+1)^{7}(x^{2})\,dx&={\frac {1}{6}}\int \underbrace {(2x^{3}+1)^{7}} _{u^{7}}\underbrace {(6x^{2})\,dx} _{du}\\&={\frac {1}{6}}\int u^{7}\,du\\&={\frac {1}{6}}\left({\frac {1}{8}}u^{8}\right)+C\\&={\frac {1}{48}}(2x^{3}+1)^{8}+C,\end{aligned}}} where C {\displaystyle C} is an arbitrary constant of integration . This procedure is frequently used, but not all integrals are of a form that permits its use. In any event, the result should be verified by differentiating and comparing to the original integrand. d d x [ 1 48 ( 2 x 3 + 1 ) 8 + C ] = 1 6 ( 2 x 3 + 1 ) 7 ( 6 x 2 ) = ( 2 x 3 + 1 ) 7 ( x 2 ) . {\displaystyle {\frac {d}{dx}}\left[{\frac {1}{48}}(2x^{3}+1)^{8}+C\right]={\frac {1}{6}}(2x^{3}+1)^{7}(6x^{2})=(2x^{3}+1)^{7}(x^{2}).} For definite integrals, the limits of integration must also be adjusted, but the procedure is mostly the same. Let g : [ a , b ] → I {\displaystyle g:[a,b]\to I} be a differentiable function with a continuous derivative, where I ⊂ R {\displaystyle I\subset \mathbb {R} } is an interval . Suppose that f : I → R {\displaystyle f:I\to \mathbb {R} } is a continuous function . Then: [ 3 ] ∫ a b f ( g ( x ) ) ⋅ g ′ ( x ) d x = ∫ g ( a ) g ( b ) f ( u ) d u . {\displaystyle \int _{a}^{b}f(g(x))\cdot g'(x)\,dx=\int _{g(a)}^{g(b)}f(u)\ du.} In Leibniz notation, the substitution u = g ( x ) {\displaystyle u=g(x)} yields: d u d x = g ′ ( x ) . {\displaystyle {\frac {du}{dx}}=g'(x).} Working heuristically with infinitesimals yields the equation d u = g ′ ( x ) d x , {\displaystyle du=g'(x)\,dx,} which suggests the substitution formula above. (This equation may be put on a rigorous foundation by interpreting it as a statement about differential forms .) One may view the method of integration by substitution as a partial justification of Leibniz's notation for integrals and derivatives. The formula is used to transform one integral into another integral that is easier to compute. Thus, the formula can be read from left to right or from right to left in order to simplify a given integral. When used in the former manner, it is sometimes known as u -substitution or w -substitution in which a new variable is defined to be a function of the original variable found inside the composite function multiplied by the derivative of the inner function. The latter manner is commonly used in trigonometric substitution , replacing the original variable with a trigonometric function of a new variable and the original differential with the differential of the trigonometric function. Integration by substitution can be derived from the fundamental theorem of calculus as follows. 
Let f {\displaystyle f} and g {\displaystyle g} be two functions satisfying the above hypothesis that f {\displaystyle f} is continuous on I {\displaystyle I} and g ′ {\displaystyle g'} is integrable on the closed interval [ a , b ] {\displaystyle [a,b]} . Then the function f ( g ( x ) ) ⋅ g ′ ( x ) {\displaystyle f(g(x))\cdot g'(x)} is also integrable on [ a , b ] {\displaystyle [a,b]} . Hence the integrals ∫ a b f ( g ( x ) ) ⋅ g ′ ( x ) d x {\displaystyle \int _{a}^{b}f(g(x))\cdot g'(x)\ dx} and ∫ g ( a ) g ( b ) f ( u ) d u {\displaystyle \int _{g(a)}^{g(b)}f(u)\ du} in fact exist, and it remains to show that they are equal. Since f {\displaystyle f} is continuous, it has an antiderivative F {\displaystyle F} . The composite function F ∘ g {\displaystyle F\circ g} is then defined. Since g {\displaystyle g} is differentiable, combining the chain rule and the definition of an antiderivative gives: ( F ∘ g ) ′ ( x ) = F ′ ( g ( x ) ) ⋅ g ′ ( x ) = f ( g ( x ) ) ⋅ g ′ ( x ) . {\displaystyle (F\circ g)'(x)=F'(g(x))\cdot g'(x)=f(g(x))\cdot g'(x).} Applying the fundamental theorem of calculus twice gives: ∫ a b f ( g ( x ) ) ⋅ g ′ ( x ) d x = ∫ a b ( F ∘ g ) ′ ( x ) d x = ( F ∘ g ) ( b ) − ( F ∘ g ) ( a ) = F ( g ( b ) ) − F ( g ( a ) ) = ∫ g ( a ) g ( b ) f ( u ) d u , {\displaystyle {\begin{aligned}\int _{a}^{b}f(g(x))\cdot g'(x)\ dx&=\int _{a}^{b}(F\circ g)'(x)\ dx\\&=(F\circ g)(b)-(F\circ g)(a)\\&=F(g(b))-F(g(a))\\&=\int _{g(a)}^{g(b)}f(u)\,du,\end{aligned}}} which is the substitution rule. Substitution can be used to determine antiderivatives . One chooses a relation between x {\displaystyle x} and u , {\displaystyle u,} determines the corresponding relation between d x {\displaystyle dx} and d u {\displaystyle du} by differentiating, and performs the substitutions. An antiderivative for the substituted function can hopefully be determined; the original substitution between x {\displaystyle x} and u {\displaystyle u} is then undone. Consider the integral: ∫ x cos ⁡ ( x 2 + 1 ) d x . {\displaystyle \int x\cos(x^{2}+1)\ dx.} Make the substitution u = x 2 + 1 {\textstyle u=x^{2}+1} to obtain d u = 2 x d x , {\displaystyle du=2x\ dx,} meaning x d x = 1 2 d u . {\textstyle x\ dx={\frac {1}{2}}\ du.} Therefore: ∫ x cos ⁡ ( x 2 + 1 ) d x = 1 2 ∫ 2 x cos ⁡ ( x 2 + 1 ) d x = 1 2 ∫ cos ⁡ u d u = 1 2 sin ⁡ u + C = 1 2 sin ⁡ ( x 2 + 1 ) + C , {\displaystyle {\begin{aligned}\int x\cos(x^{2}+1)\,dx&={\frac {1}{2}}\int 2x\cos(x^{2}+1)\,dx\\[6pt]&={\frac {1}{2}}\int \cos u\,du\\[6pt]&={\frac {1}{2}}\sin u+C\\[6pt]&={\frac {1}{2}}\sin(x^{2}+1)+C,\end{aligned}}} where C {\displaystyle C} is an arbitrary constant of integration . The tangent function can be integrated using substitution by expressing it in terms of the sine and cosine: tan ⁡ x = sin ⁡ x cos ⁡ x {\displaystyle \tan x={\tfrac {\sin x}{\cos x}}} . Using the substitution u = cos ⁡ x {\displaystyle u=\cos x} gives d u = − sin ⁡ x d x {\displaystyle du=-\sin x\,dx} and ∫ tan ⁡ x d x = ∫ sin ⁡ x cos ⁡ x d x = ∫ − d u u = − ln ⁡ | u | + C = − ln ⁡ | cos ⁡ x | + C = ln ⁡ | sec ⁡ x | + C . 
{\displaystyle {\begin{aligned}\int \tan x\,dx&=\int {\frac {\sin x}{\cos x}}\,dx\\&=\int -{\frac {du}{u}}\\&=-\ln \left|u\right|+C\\&=-\ln \left|\cos x\right|+C\\&=\ln \left|\sec x\right|+C.\end{aligned}}} The cotangent function can be integrated similarly by expressing it as cot ⁡ x = cos ⁡ x sin ⁡ x {\displaystyle \cot x={\tfrac {\cos x}{\sin x}}} and using the substitution u = sin ⁡ x , d u = cos ⁡ x d x {\displaystyle u=\sin {x},du=\cos {x}\,dx} : ∫ cot ⁡ x d x = ∫ cos ⁡ x sin ⁡ x d x = ∫ d u u = ln ⁡ | u | + C = ln ⁡ | sin ⁡ x | + C . {\displaystyle {\begin{aligned}\int \cot x\,dx&=\int {\frac {\cos x}{\sin x}}\,dx\\&=\int {\frac {du}{u}}\\&=\ln \left|u\right|+C\\&=\ln \left|\sin x\right|+C.\end{aligned}}} When evaluating definite integrals by substitution, one may calculate the antiderivative fully first, then apply the boundary conditions. In that case, there is no need to transform the boundary terms. Alternatively, one may fully evaluate the indefinite integral ( see above ) first then apply the boundary conditions. This becomes especially handy when multiple substitutions are used. Consider the integral: ∫ 0 2 x x 2 + 1 d x . {\displaystyle \int _{0}^{2}{\frac {x}{\sqrt {x^{2}+1}}}dx.} Make the substitution u = x 2 + 1 {\textstyle u=x^{2}+1} to obtain d u = 2 x d x , {\displaystyle du=2x\ dx,} meaning x d x = 1 2 d u . {\textstyle x\ dx={\frac {1}{2}}\ du.} Therefore: ∫ x = 0 x = 2 x x 2 + 1 d x = 1 2 ∫ u = 1 u = 5 d u u = 1 2 ( 2 5 − 2 1 ) = 5 − 1. {\displaystyle {\begin{aligned}\int _{x=0}^{x=2}{\frac {x}{\sqrt {x^{2}+1}}}\ dx&={\frac {1}{2}}\int _{u=1}^{u=5}{\frac {du}{\sqrt {u}}}\\[6pt]&={\frac {1}{2}}\left(2{\sqrt {5}}-2{\sqrt {1}}\right)\\[6pt]&={\sqrt {5}}-1.\end{aligned}}} Since the lower limit x = 0 {\displaystyle x=0} was replaced with u = 1 , {\displaystyle u=1,} and the upper limit x = 2 {\displaystyle x=2} with 2 2 + 1 = 5 , {\displaystyle 2^{2}+1=5,} a transformation back into terms of x {\displaystyle x} was unnecessary. For the integral ∫ 0 1 1 − x 2 d x , {\displaystyle \int _{0}^{1}{\sqrt {1-x^{2}}}\,dx,} a variation of the above procedure is needed. The substitution x = sin ⁡ u {\displaystyle x=\sin u} implying d x = cos ⁡ u d u {\displaystyle dx=\cos u\,du} is useful because 1 − sin 2 ⁡ u = cos ⁡ u . {\textstyle {\sqrt {1-\sin ^{2}u}}=\cos u.} We thus have: ∫ 0 1 1 − x 2 d x = ∫ 0 π / 2 1 − sin 2 ⁡ u cos ⁡ u d u = ∫ 0 π / 2 cos 2 ⁡ u d u = [ u 2 + sin ⁡ ( 2 u ) 4 ] 0 π / 2 = π 4 + 0 = π 4 . {\displaystyle {\begin{aligned}\int _{0}^{1}{\sqrt {1-x^{2}}}\ dx&=\int _{0}^{\pi /2}{\sqrt {1-\sin ^{2}u}}\cos u\ du\\[6pt]&=\int _{0}^{\pi /2}\cos ^{2}u\ du\\[6pt]&=\left[{\frac {u}{2}}+{\frac {\sin(2u)}{4}}\right]_{0}^{\pi /2}\\[6pt]&={\frac {\pi }{4}}+0\\[6pt]&={\frac {\pi }{4}}.\end{aligned}}} The resulting integral can be computed using integration by parts or a double angle formula , 2 cos 2 ⁡ u = 1 + cos ⁡ ( 2 u ) , {\textstyle 2\cos ^{2}u=1+\cos(2u),} followed by one more substitution. One can also note that the function being integrated is the upper right quarter of a circle with a radius of one, and hence integrating the upper right quarter from zero to one is the geometric equivalent to the area of one quarter of the unit circle, or π 4 . {\displaystyle {\tfrac {\pi }{4}}.} One may also use substitution when integrating functions of several variables . 
Here, the substitution function ( v 1 ,..., v n ) = φ ( u 1 , ..., u n ) needs to be injective and continuously differentiable, and the differentials transform as: d v 1 ⋯ d v n = | det ( D φ ) ( u 1 , … , u n ) | d u 1 ⋯ d u n , {\displaystyle dv_{1}\cdots dv_{n}=\left|\det(D\varphi )(u_{1},\ldots ,u_{n})\right|\,du_{1}\cdots du_{n},} where det( Dφ )( u 1 , ..., u n ) denotes the determinant of the Jacobian matrix of partial derivatives of φ at the point ( u 1 , ..., u n ) . This formula expresses the fact that the absolute value of the determinant of a matrix equals the volume of the parallelotope spanned by its columns or rows. More precisely, the change of variables formula is stated in the next theorem: Theorem — Let U be an open set in R n and φ : U → R n an injective differentiable function with continuous partial derivatives, the Jacobian of which is nonzero for every x in U . Then for any real-valued, compactly supported, continuous function f , with support contained in φ ( U ) : ∫ φ ( U ) f ( v ) d v = ∫ U f ( φ ( u ) ) | det ( D φ ) ( u ) | d u . {\displaystyle \int _{\varphi (U)}f(\mathbf {v} )\,d\mathbf {v} =\int _{U}f(\varphi (\mathbf {u} ))\,\,\left|\!\det(D\varphi )(\mathbf {u} )\right|\,d\mathbf {u} .} The conditions on the theorem can be weakened in various ways. First, the requirement that φ be continuously differentiable can be replaced by the weaker assumption that φ be merely differentiable and have a continuous inverse. [ 4 ] This is guaranteed to hold if φ is continuously differentiable by the inverse function theorem . Alternatively, the requirement that det( Dφ ) ≠ 0 can be eliminated by applying Sard's theorem . [ 5 ] For Lebesgue measurable functions, the theorem can be stated in the following form: [ 6 ] Theorem — Let U be a measurable subset of R n and φ : U → R n an injective function , and suppose for every x in U there exists φ ′( x ) in R n , n such that φ ( y ) = φ ( x ) + φ′ ( x )( y − x ) + o (‖ y − x ‖) as y → x (here o is little- o notation ). Then φ ( U ) is measurable, and for any real-valued function f defined on φ ( U ) : ∫ φ ( U ) f ( v ) d v = ∫ U f ( φ ( u ) ) | det φ ′ ( u ) | d u {\displaystyle \int _{\varphi (U)}f(v)\,dv=\int _{U}f(\varphi (u))\,\,\left|\!\det \varphi '(u)\right|\,du} in the sense that if either integral exists (including the possibility of being properly infinite), then so does the other one, and they have the same value. Another very general version in measure theory is the following: [ 7 ] Theorem — Let X be a locally compact Hausdorff space equipped with a finite Radon measure μ , and let Y be a σ-compact Hausdorff space with a σ-finite Radon measure ρ . Let φ : X → Y be an absolutely continuous function (where the latter means that ρ ( φ ( E )) = 0 whenever μ ( E ) = 0 ). Then there exists a real-valued Borel measurable function w on X such that for every Lebesgue integrable function f : Y → R , the function ( f ∘ φ ) ⋅ w is Lebesgue integrable on X , and ∫ Y f ( y ) d ρ ( y ) = ∫ X ( f ∘ φ ) ( x ) w ( x ) d μ ( x ) . {\displaystyle \int _{Y}f(y)\,d\rho (y)=\int _{X}(f\circ \varphi )(x)\,w(x)\,d\mu (x).} Furthermore, it is possible to write w ( x ) = ( g ∘ φ ) ( x ) {\displaystyle w(x)=(g\circ \varphi )(x)} for some Borel measurable function g on Y . In geometric measure theory , integration by substitution is used with Lipschitz functions . A bi-Lipschitz function is a Lipschitz function φ : U → R n which is injective and whose inverse function φ −1 : φ ( U ) → U is also Lipschitz. 
By Rademacher's theorem , a bi-Lipschitz mapping is differentiable almost everywhere . In particular, the Jacobian determinant of a bi-Lipschitz mapping det Dφ is well-defined almost everywhere. The following result then holds: Theorem — Let U be an open subset of R n and φ : U → R n be a bi-Lipschitz mapping. Let f : φ ( U ) → R be measurable. Then ∫ φ ( U ) f ( x ) d x = ∫ U ( f ∘ φ ) ( x ) | det D φ ( x ) | d x {\displaystyle \int _{\varphi (U)}f(x)\,dx=\int _{U}(f\circ \varphi )(x)\,\,\left|\!\det D\varphi (x)\right|\,dx} in the sense that if either integral exists (or is properly infinite), then so does the other one, and they have the same value. The above theorem was first proposed by Euler when he developed the notion of double integrals in 1769. Although generalized to triple integrals by Lagrange in 1773, and used by Legendre , Laplace , and Gauss , and first generalized to n variables by Mikhail Ostrogradsky in 1836, it resisted a fully rigorous formal proof for a surprisingly long time, and was first satisfactorily resolved 125 years later, by Élie Cartan in a series of papers beginning in the mid-1890s. [ 8 ] [ 9 ] Substitution can be used to answer the following important question in probability: given a random variable X with probability density p X and another random variable Y such that Y = ϕ ( X ) for injective (one-to-one) ϕ , what is the probability density for Y ? It is easiest to answer this question by first answering a slightly different question: what is the probability that Y takes a value in some particular subset S ? Denote this probability P ( Y ∈ S ). Of course, if Y has probability density p Y , then the answer is: P ( Y ∈ S ) = ∫ S p Y ( y ) d y , {\displaystyle P(Y\in S)=\int _{S}p_{Y}(y)\,dy,} but this is not really useful because we do not know p Y ; it is what we are trying to find. We can make progress by considering the problem in the variable X . Y takes a value in S whenever X takes a value in ϕ − 1 ( S ) , {\textstyle \phi ^{-1}(S),} so: P ( Y ∈ S ) = P ( X ∈ ϕ − 1 ( S ) ) = ∫ ϕ − 1 ( S ) p X ( x ) d x . {\displaystyle P(Y\in S)=P(X\in \phi ^{-1}(S))=\int _{\phi ^{-1}(S)}p_{X}(x)\,dx.} Changing from variable x to y gives: P ( Y ∈ S ) = ∫ ϕ − 1 ( S ) p X ( x ) d x = ∫ S p X ( ϕ − 1 ( y ) ) | d ϕ − 1 d y | d y . {\displaystyle P(Y\in S)=\int _{\phi ^{-1}(S)}p_{X}(x)\,dx=\int _{S}p_{X}(\phi ^{-1}(y))\left|{\frac {d\phi ^{-1}}{dy}}\right|\,dy.} Combining this with our first equation gives: ∫ S p Y ( y ) d y = ∫ S p X ( ϕ − 1 ( y ) ) | d ϕ − 1 d y | d y , {\displaystyle \int _{S}p_{Y}(y)\,dy=\int _{S}p_{X}(\phi ^{-1}(y))\left|{\frac {d\phi ^{-1}}{dy}}\right|\,dy,} so: p Y ( y ) = p X ( ϕ − 1 ( y ) ) | d ϕ − 1 d y | . {\displaystyle p_{Y}(y)=p_{X}(\phi ^{-1}(y))\left|{\frac {d\phi ^{-1}}{dy}}\right|.} In the case where X and Y depend on several uncorrelated variables (i.e., p X = p X ( x 1 , … , x n ) {\textstyle p_{X}=p_{X}(x_{1},\ldots ,x_{n})} and y = ϕ ( x ) {\displaystyle y=\phi (x)} ), p Y {\displaystyle p_{Y}} can be found by substitution in several variables discussed above. The result is: p Y ( y ) = p X ( ϕ − 1 ( y ) ) | det D ϕ − 1 ( y ) | . {\displaystyle p_{Y}(y)=p_{X}(\phi ^{-1}(y))\left|\det D\phi ^{-1}(y)\right|.}
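The density-transformation formula at the end of this section can be applied mechanically. The following Python sketch (assuming sympy) uses the hypothetical choice of a standard normal X and the injective map φ(x) = eˣ, computes p_Y from p_X(φ⁻¹(y))·|dφ⁻¹/dy|, and checks numerically that the resulting density integrates to 1; none of these particular choices come from the text above.

```python
import sympy as sp

x = sp.symbols('x', real=True)
y = sp.symbols('y', positive=True)

# Illustrative example: X standard normal and Y = phi(X) with phi(x) = exp(x),
# which is injective, so the density-transformation formula applies.
p_X = sp.exp(-x**2/2)/sp.sqrt(2*sp.pi)
phi_inv = sp.log(y)                          # inverse of phi, defined for y > 0

# p_Y(y) = p_X(phi^{-1}(y)) * |d phi^{-1} / dy|
p_Y = sp.simplify(p_X.subs(x, phi_inv)*sp.Abs(sp.diff(phi_inv, y)))
print(p_Y)                                   # exp(-log(y)**2/2)/(y*sqrt(2*pi)), the log-normal density

# sanity check: the transformed density should integrate to 1 over the range of phi
print(sp.Integral(p_Y, (y, 0, sp.oo)).evalf())   # approximately 1.0
```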
https://en.wikipedia.org/wiki/Integration_by_substitution
An integration competency center (ICC), sometimes referred to as an integration center of excellence (COE), is a shared service function providing methodical data integration , system integration , or enterprise application integration within organizations, particularly large corporations and public sector institutions. Data integration allows companies to access their enterprise data and functions, fragmented across disparate systems , in order to create a combined, accurate, and consistent view of their core information as well as process assets and leverage them across the enterprise to drive business decisions and operations. System integration is the bringing together of component subsystems into a unified whole and ensuring that they function together effectively. Enterprise application integration enables efficient information exchanges and business process automation across separate computer applications in a cohesive fashion. Breaking down the acronym may enhance understanding. Integration refers to the objective of the ICC to take a holistic perspective and optimize certain qualities such as cost efficiency, organizational agility and effectiveness, operational risk , customer (internal or external) experience, etc. across multiple functional groups. Competency refers to the expertise, knowledge or capability that the ICC offers as services. Center means that the service is managed or coordinated from a common (central) point independent from the functional areas that it supports. Large organizations are usually sub-divided into functional areas such as marketing, sales, distribution, finance or human resources , to name just a few. These functional groups have separate operations, are vertically integrated and are sometimes referred to as "silos" or "stovepipes". From an organizational perspective, an ICC is a group of people with special skills who are centrally coordinated, and offer services to accomplish a mission that requires separate functional areas to work together. Key objectives of an ICC include: ICCs allow companies to: An ICC may be a temporary group in support of a program or a permanent part of the organization. Furthermore, ICCs can be established at various scales or levels, within divisions of a company, at the enterprise level, or across multiple companies in a supply chain. The term "integration competency center" and its acronym ICC was popularized by Roy Schulte of Gartner in a series of articles and conference presentations beginning in 2001 with The Integration Competency Center . [ 1 ] He picked up the term from one of his colleagues, Gary Long, who found some of his clients using it (they took the established term “competency center” and applied it to integration). Prior to that (from 1997 to 2001) Gartner had been referring to it as the central integration team . The concept itself (even before it was given a label) goes back to 1996 in one of Gartner's first reports on integration. [ citation needed ] A major milestone was the publication in 2005 of the first book on the topic: Integration Competency Center: An Implementation Methodology [ 1 ] by John G. Schmidt and David Lyle . The book introduced five ICC organizational models and explored the people, process and technology dimensions of ICCs. Several reviews of the book can be found at IT Toolbox and at Amazon . The concept of integration as a competency in the IT domain has now survived for over 10 years and appears to be picking up momentum and broad-based acceptance. 
These days ICCs are often called integration centers of excellence, SOA centers of excellence, data management centers of excellence and other variants. The most advanced ICCs are using Lean Integration practices to optimize end-to-end processes and to drive continuous improvements. Universities are also beginning to include integration topics in their MBA programs and computer science curricula. For example, The College of Information Sciences and Technology at Penn State University has established an Enterprise Informatics and Integration Center with the following mission: " The Enterprise Informatics and Integration Center (EI²) will actively engage industry, non-profit, and government agency leaders to address critical issues in enterprise processes, knowledge management, and decision making. " There are a number of ways an ICC can be organized and a wide range of responsibilities with which it can be chartered. The ICC book [ 1 ] introduced five ICC organizational models and explored the people, process and technology dimensions of ICCs. They include: The primary function of this ICC model is to document practices considered effective and widely applicable. It does not include a central support or development team to implement these standards across projects and typically does not manage metadata. To implement a best practices ICC, companies require a development environment that accommodates diverse teams and allows them to enhance and extend existing systems and processes. This team is often a subset of an existing enterprise architecture capability and usually consists of a small number of staff (1–5). A standard services ICC provides the same knowledge sharing as a best practices ICC but enforces technical consistency in software development and hardware selection. A standard services ICC focuses on processes such as standardizing and enforcing naming conventions, establishing metadata standards, instituting change management procedures, and providing training on standards. This type of ICC also evaluates emerging technologies, selects vendors, and oversees hardware and software systems. It is typically closely associated with the enterprise architecture team and may be larger than a best practices ICC. A shared services ICC provides a supported technical environment and services ranging from development support all the way through to a help desk for projects in production. This type of ICC is significantly more complex than a Best Practices or Standard Services model. It establishes processes for knowledge management, including product training, standards enforcement, technology benchmarking, and metadata management, and it facilitates impact analysis, software quality, and effective use of developer resources across projects. The organizational structure of a Shared Services ICC is sometimes referred to as a hybrid or federated model which often includes a small central coordinating team plus dotted-line reporting relationships with multiple distributed teams. A central services ICC controls integration across the enterprise. It carries out the same processes as the other models, but in addition usually has its own budget and a charge-back methodology. It also offers more support for development projects, providing management, development resources, data profiling , data quality, and unit testing. Because a central services ICC is more involved in development activities than the other models, it requires a production operator and a data integration developer. 
The staff in a central services ICC do not necessarily need to be located centrally and may be distributed geographically; the important distinction is that the staff have a solid-line reporting relationship to the ICC director. The size of these teams can vary and may be as large as 10%-15% of the IT staff in an organization. The self-service ICC represents the highest level of maturity in an organization. The ICC itself may be almost invisible in that its functions are so ingrained in the day-to-day systems development life cycle, and its operations so tightly integrated with the infrastructure, that it may require only a small central team to sustain itself. This ICC model both achieves highly efficient operation and provides an environment in which independent development and innovation can flourish. This is achieved by strict enforcement of a set of application integration standards through automated processes enabled by tools and systems. The ICC as a concept is fairly simple: it is the embodiment of IT management best practices to deliver shared services. However, because it is an organizational concept, it is far more challenging to implement in practice than to describe conceptually; every organization has its own culture, and an ICC initiative succeeds only with deliberate effort to adapt the ICC to that organization. Common challenges arise throughout the ICC establishment journey, and they are important to consider when embarking on an ICC investment, since the final stage of implementation is what matters most. Definitions of an ICC that exist only on paper and are never implemented in the organization have no real value for the enterprise.
https://en.wikipedia.org/wiki/Integration_competency_center
In integral calculus , Euler's formula for complex numbers may be used to evaluate integrals involving trigonometric functions . Using Euler's formula, any trigonometric function may be written in terms of complex exponential functions, namely e i x {\displaystyle e^{ix}} and e − i x {\displaystyle e^{-ix}} and then integrated. This technique is often simpler and faster than using trigonometric identities or integration by parts , and is sufficiently powerful to integrate any rational expression involving trigonometric functions. [ 1 ] Euler's formula states that [ 2 ] Substituting − x {\displaystyle -x} for x {\displaystyle x} gives the equation because cosine is an even function and sine is odd. These two equations can be solved for the sine and cosine to give Consider the integral The standard approach to this integral is to use a half-angle formula to simplify the integrand. We can use Euler's identity instead: At this point, it would be possible to change back to real numbers using the formula e 2 ix + e −2 ix = 2 cos 2 x . Alternatively, we can integrate the complex exponentials and not change back to trigonometric functions until the end: Consider the integral This integral would be extremely tedious to solve using trigonometric identities, but using Euler's identity makes it relatively painless: At this point we can either integrate directly, or we can first change the integrand to 2 cos 6 x − 4 cos 4 x + 2 cos 2 x and continue from there. Either method gives In addition to Euler's identity, it can be helpful to make judicious use of the real parts of complex expressions. For example, consider the integral Since cos x is the real part of e ix , we know that The integral on the right is easy to evaluate: Thus: In general, this technique may be used to evaluate any fractions involving trigonometric functions. For example, consider the integral Using Euler's identity, this integral becomes If we now make the substitution u = e i x {\displaystyle u=e^{ix}} , the result is the integral of a rational function : One may proceed using partial fraction decomposition .
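As a worked illustration of the technique, the following Python sketch (assuming sympy) rewrites cos x through the exponential identity cos x = (e^{ix} + e^{-ix})/2, integrates cos² x term by term in exponential form, and converts the result back to trigonometric form; cos² x is used here because the first worked integral in this section is of that type, and the final comparison with direct integration is only a consistency check.

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Rewrite cos(x) via Euler's formula: cos(x) = (exp(i*x) + exp(-i*x)) / 2
cos_exp = (sp.exp(sp.I*x) + sp.exp(-sp.I*x))/2

# Integrate cos(x)**2 in exponential form, term by term
integrand = sp.expand(cos_exp**2)            # exp(2*I*x)/4 + 1/2 + exp(-2*I*x)/4
antideriv = sp.integrate(integrand, x)

# Convert back to trigonometric form and compare with direct integration
back = sp.simplify(antideriv.rewrite(sp.cos))
direct = sp.integrate(sp.cos(x)**2, x)
print(back)                                  # x/2 + sin(2*x)/4, equivalently x/2 + sin(x)*cos(x)/2
print(sp.simplify(back - direct))            # 0
```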
https://en.wikipedia.org/wiki/Integration_using_Euler's_formula
In calculus , integration by parametric derivatives , also called parametric integration , [ 1 ] is a method which uses known integrals to integrate derived functions. It is often used in physics, and is similar to integration by substitution . By using the Leibniz integral rule with the upper and lower bounds fixed we get that d d t ( ∫ a b f ( x , t ) d x ) = ∫ a b ∂ ∂ t f ( x , t ) d x {\displaystyle {\frac {d}{dt}}\left(\int _{a}^{b}f(x,t)dx\right)=\int _{a}^{b}{\frac {\partial }{\partial t}}f(x,t)dx} It is also true for non-finite bounds. For example, suppose we want to find the integral ∫ 0 ∞ x 2 e − 3 x d x . {\displaystyle \int _{0}^{\infty }x^{2}e^{-3x}\,dx.} Since this is a product of two functions that are simple to integrate separately, repeated integration by parts is certainly one way to evaluate it. However, we may also evaluate this by starting with a simpler integral and an added parameter, which in this case is t = 3: ∫ 0 ∞ e − t x d x = 1 t . {\displaystyle \int _{0}^{\infty }e^{-tx}\,dx={\frac {1}{t}}.} This converges only for t > 0, which is true of the desired integral. Now that we know ∫ 0 ∞ e − t x d x = 1 t {\displaystyle \int _{0}^{\infty }e^{-tx}\,dx={\frac {1}{t}}} , we can differentiate both sides twice with respect to t (not x ) in order to add the factor of x 2 in the original integral: d 2 d t 2 ∫ 0 ∞ e − t x d x = ∫ 0 ∞ x 2 e − t x d x = 2 t 3 . {\displaystyle {\frac {d^{2}}{dt^{2}}}\int _{0}^{\infty }e^{-tx}\,dx=\int _{0}^{\infty }x^{2}e^{-tx}\,dx={\frac {2}{t^{3}}}.} This is the same form as the desired integral, where t = 3. Substituting that into the above equation gives the value: ∫ 0 ∞ x 2 e − 3 x d x = 2 27 . {\displaystyle \int _{0}^{\infty }x^{2}e^{-3x}\,dx={\frac {2}{27}}.} Starting with the integral ∫ − ∞ ∞ e − x 2 t d x = π t {\displaystyle \int _{-\infty }^{\infty }e^{-x^{2}t}dx={\frac {\sqrt {\pi }}{\sqrt {t}}}} , taking the derivative with respect to t on both sides yields d d t ∫ − ∞ ∞ e − x 2 t d x = d d t π t − ∫ − ∞ ∞ x 2 e − x 2 t = − π 2 t − 3 2 ∫ − ∞ ∞ x 2 e − x 2 t = π 2 t − 3 2 {\displaystyle {\begin{aligned}&{\frac {d}{dt}}\int _{-\infty }^{\infty }e^{-x^{2}t}dx={\frac {d}{dt}}{\frac {\sqrt {\pi }}{\sqrt {t}}}\\&-\int _{-\infty }^{\infty }x^{2}e^{-x^{2}t}=-{\frac {\sqrt {\pi }}{2}}t^{-{\frac {3}{2}}}\\&\int _{-\infty }^{\infty }x^{2}e^{-x^{2}t}={\frac {\sqrt {\pi }}{2}}t^{-{\frac {3}{2}}}\end{aligned}}} . In general, taking the n -th derivative with respect to t gives us ∫ − ∞ ∞ x 2 n e − x 2 t = ( 2 n − 1 ) ! ! π 2 n t − 2 n + 1 2 {\displaystyle \int _{-\infty }^{\infty }x^{2n}e^{-x^{2}t}={\frac {(2n-1)!!{\sqrt {\pi }}}{2^{n}}}t^{-{\frac {2n+1}{2}}}} . Using the classical formula ∫ x t d x = x t + 1 t + 1 {\displaystyle \int x^{t}dx={\frac {x^{t+1}}{t+1}}} (valid for t ≠ −1) and taking the derivative with respect to t we get ∫ ln ⁡ ( x ) x t = ln ⁡ ( x ) x t + 1 t + 1 − x t + 1 ( t + 1 ) 2 {\displaystyle \int \ln(x)x^{t}={\frac {\ln(x)x^{t+1}}{t+1}}-{\frac {x^{t+1}}{(t+1)^{2}}}} . The method can also be applied to sums, as exemplified below. Use the Weierstrass factorization of the sinh function: sinh ⁡ ( z ) z = ∏ n = 1 ∞ ( π 2 n 2 + z 2 π 2 n 2 ) {\displaystyle {\frac {\sinh(z)}{z}}=\prod _{n=1}^{\infty }\left({\frac {\pi ^{2}n^{2}+z^{2}}{\pi ^{2}n^{2}}}\right)} . Take the logarithm: ln ⁡ ( sinh ⁡ ( z ) ) − ln ⁡ ( z ) = ∑ n = 1 ∞ ln ⁡ ( π 2 n 2 + z 2 π 2 n 2 ) {\displaystyle \ln(\sinh(z))-\ln(z)=\sum _{n=1}^{\infty }\ln \left({\frac {\pi ^{2}n^{2}+z^{2}}{\pi ^{2}n^{2}}}\right)} . Differentiate with respect to z : coth ⁡ ( z ) − 1 z = ∑ n = 1 ∞ 2 z z 2 + π 2 n 2 {\displaystyle \coth(z)-{\frac {1}{z}}=\sum _{n=1}^{\infty }{\frac {2z}{z^{2}+\pi ^{2}n^{2}}}} . Let w = z π {\displaystyle w={\frac {z}{\pi }}} : π 2 coth ⁡ ( π w ) w − 1 2 w 2 = ∑ n = 1 ∞ 1 n 2 + w 2 {\displaystyle {\frac {\pi }{2}}\,{\frac {\coth(\pi w)}{w}}-{\frac {1}{2w^{2}}}=\sum _{n=1}^{\infty }{\frac {1}{n^{2}+w^{2}}}} .
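The exponential example above can be reproduced with a computer algebra system. The following Python sketch (assuming sympy) evaluates the simpler parametric integral, differentiates it twice with respect to t, and substitutes t = 3, recovering the same value 2/27 as a direct evaluation; the positivity assumptions on x and t mirror the convergence condition t > 0.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
t = sp.symbols('t', positive=True)

# The simpler parametric integral: for t > 0, the integral of exp(-t*x) over [0, oo) is 1/t
base = sp.integrate(sp.exp(-t*x), (x, 0, sp.oo))
print(base)                                   # 1/t

# Differentiating twice with respect to t brings down a factor of x**2 inside the integral,
# so d^2/dt^2 of the integral equals the integral of x**2 * exp(-t*x)
second = sp.diff(base, t, 2)
print(second)                                 # 2/t**3

# Evaluate at t = 3 and compare with direct computation of the target integral
print(second.subs(t, 3))                                     # 2/27
print(sp.integrate(x**2*sp.exp(-3*x), (x, 0, sp.oo)))        # 2/27
```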
https://en.wikipedia.org/wiki/Integration_using_parametric_derivatives
Integrative and conjugative elements ( ICEs ) are mobile genetic elements present in both Gram-positive and Gram-negative bacteria . In a donor cell, ICEs are located primarily on the chromosome , but have the ability to excise themselves from the genome and transfer to recipient cells via bacterial conjugation . Due to their physical association with chromosomes, identifying integrative and conjugative elements has proven challenging, but in silico analysis of bacterial genomes indicate these elements are widespread among many microorganisms. [ 1 ] [ 2 ] ICEs have been detected in Pseudomonadota (e.g., Pseudomonas spp., Aeromonas spp., E. coli , Haemophilus spp.), Actinomycetota and Bacillota . Among many other virulence determinants, ICEs may spread antibiotic and metal ion resistance genes across prokaryotic phyla. [ 1 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] In addition, ICE elements may also facilitate the mobilisation of other DNA modules such as genomic islands . [ 3 ] [ 7 ] Although ICEs exhibit various mechanisms promoting their integration, transfer and regulation, they share many common characteristics. ICEs comprise all mobile genetic elements with self-replication, integration, and conjugation abilities, including conjugative transposons, regardless of the particular conjugation and integration mechanism by which they act. Some immobile genomic pathogenicity islands are also believed to be defective ICEs that have lost their ability to conjugate. ICEs combine certain features of the following mobile genetic elements: [ 1 ] In contrast to plasmids and phages, integrative and conjugative elements cannot remain in an extrachromosomal form in the cytoplasm of bacterial cells and replicate only with the chromosome they reside in. ICEs possess the structure organized into three gene modules that are responsible for their integration with the chromosome, excision from the genome and conjugation, as well as regulatory genes. [ 1 ] [ 3 ] All integrative and conjugative elements encode integrases that are essential for controlling the excision, transfer and integration of an ICE. The representative example of ICE integrases is the integrase encoded by lambda phage. The transfer of an integrated ICE element from the donor to recipient bacterium must be preceded by its excision from the chromosome that is co-promoted by small DNA-binding proteins , the so-called recombination directionality factors. The dynamics of the integration and excision processes are specific to each integrative and conjugative element. [ 1 ]
https://en.wikipedia.org/wiki/Integrative_and_conjugative_element
Integrative bioinformatics is a discipline of bioinformatics that focuses on problems of data integration for the life sciences . With the rise of high-throughput (HTP) technologies in the life sciences, particularly in molecular biology , the amount of collected data has grown in an exponential fashion. Furthermore, the data are scattered over a plethora of both public and private repositories , and are stored using a large number of different formats . This situation makes searching these data and performing the analysis necessary for the extraction of new knowledge from the complete set of available data very difficult. Integrative bioinformatics attempts to tackle this problem by providing unified access to life science data. In the Semantic Web approach, data from multiple websites or databases is searched via metadata . Metadata is machine-readable code, which defines the contents of the page for the program so that the comparisons between the data and the search terms are more accurate. This serves to decrease the number of results that are irrelevant or unhelpful. Some meta-data exists as definitions called ontologies , which can be tagged by either users or programs; these serve to facilitate searches by using key terms or phrases to find and return the data. [ 1 ] Advantages of this approach include the general increased quality of the data returned in searches and with proper tagging, ontologies finding entries that may not explicitly state the search term but are still relevant. One disadvantage of this approach is that the results that are returned come in the format of the database of their origin and as such, direct comparisons may be difficult. Another problem is that the terms used in tagging and searching can sometimes be ambiguous and may cause confusion among the results. [ 2 ] In addition, the semantic web approach is still considered an emerging technology and is not in wide-scale use at this time. [ 3 ] One of the current applications of ontology-based search in the biomedical sciences is GoPubMed , which searches the PubMed database of scientific literature. [ 1 ] Another use of ontologies is within databases such as SwissProt , Ensembl and TrEMBL , which use this technology to search through the stores of human proteome-related data for tags related to the search term. [ 4 ] Some of the research in this field has focused on creating new and specific ontologies. [ 5 ] Other researchers have worked on verifying the results of existing ontologies. [ 2 ] In a specific example, the goal of Verschelde, et al. was the integration of several different ontology libraries into a larger one that contained more definitions of different subspecialties (medical, molecular biological, etc.) and was able to distinguish between ambiguous tags; the result was a data-warehouse like effect, with easy access to multiple databases through the use of ontologies. [ 4 ] In a separate project, Bertens, et al. constructed a lattice work of three ontologies (for anatomy and development of model organisms) on a novel framework ontology of generic organs. For example, results from a search of ‘heart’ in this ontology would return the heart plans for each of the vertebrate species whose ontologies were included. The stated goal of the project is to facilitate comparative and evolutionary studies. [ 6 ] In the data warehousing strategy, the data from different sources are extracted and integrated in a single database. 
For example, various 'omics' datasets may be integrated to provide biological insights into biological systems. Examples include data from genomics, transcriptomics, proteomics, interactomics, metabolomics. Ideally, changes in these sources are regularly synchronized to the integrated database. The data is presented to the users in a common format. Many programs aimed to aid in the creation of such warehouses are designed to be extremely versatile to allow for them to be implemented in diverse research projects. [ 7 ] One advantage of this approach is that data is available for analysis at a single site, using a uniform schema. Some disadvantages are that the datasets are often huge and difficult to keep up to date. Another problem with this method is that it is costly to compile such a warehouse. [ 8 ] Standardized formats for different types of data (ex: protein data) are now emerging due to the influence of groups like the Proteomics Standards Initiative (PSI). Some data warehousing projects even require the submission of data in one of these new formats. [ 9 ] Data mining uses statistical methods to search for patterns in existing data. This method generally returns many patterns, of which some are spurious and some are significant, but all of the patterns the program finds must be evaluated individually. Currently, some research is focused on incorporating existing data mining techniques with novel pattern analysis methods that reduce the need to spend time going over each pattern found by the initial program, but instead, return a few results with a high likelihood of relevance. [ 10 ] One drawback of this approach is that it does not integrate multiple databases, which means that comparisons across databases are not possible. The major advantage to this approach is that it allows for the generation of new hypotheses to test.
https://en.wikipedia.org/wiki/Integrative_bioinformatics
An integrator in measurement and control applications is an element whose output signal is the time integral of its input signal. It accumulates the input quantity over a defined time to produce a representative output. Integration is an important part of many engineering and scientific applications. Mechanical integrators are the oldest type [ 1 ] and are still used for metering water flow or electrical power. Electronic analogue integrators, which have generally displaced mechanical integrators, [ 1 ] are the basis of analog computers and charge amplifiers. Integration can also be performed by algorithms in digital computers. One simple kind of mechanical integrator is the disk-and-wheel integrator. [ 1 ] This functions by placing a wheel on and perpendicular to a spinning disk, held there by means of a freely spinning shaft parallel to the disk. [ 1 ] Because the speed at which a point on the disk moves is proportional to its distance from the center, the rate at which the wheel turns is proportional to the wheel's distance from the center of the disk. [ 1 ] Therefore, the number of turns made by the integrating wheel is equal to the definite integral of the integrating wheel's distance from the center, [ 1 ] which is in turn controlled by the motion of the shaft relative to the disk. A current integrator is an electronic device performing a time integration of an electric current , [ 2 ] thus measuring a total electric charge . In combination with time it can be used to determine the average current during an experiment. [ 2 ] Feeding current into a capacitor (initialized with zero volts) and monitoring the capacitor's voltage was used in nuclear physics experiments before 1953 to measure the number of ions received. [ 3 ] Such a simple circuit works because the capacitor's current–voltage relation, when written in integral form, states that a capacitor's final voltage equals its initial voltage plus the time integral of its current divided by its capacitance: V ( t ) = V ( t 0 ) + 1 C ∫ t 0 t I ( τ ) d τ {\displaystyle V(t)=V(t_{0})+{\frac {1}{C}}\int _{t_{0}}^{t}I(\tau )\,\mathrm {d} \tau } More sophisticated current integrator circuits build on this relation, such as the charge amplifier . A current integrator is also used to measure the electric charge on a Faraday cup in a residual gas analyzer to measure partial pressures of gases in a vacuum. Another application of current integration is in ion beam deposition , where the measured charge directly corresponds to the number of ions deposited on a substrate, assuming the charge state of the ions is known. The two current-carrying electrical leads must be connected to the ion source and the substrate, closing the electric circuit, which is in part formed by the ion beam. A voltage integrator is an electronic device performing a time integration of an electric voltage, thus measuring the total volt-second product. A first-order low-pass filter such as a resistor – capacitor circuit acts like a voltage integrator at high frequencies well above the filter's cutoff frequency . An ideal op amp integrator is a voltage integrator that works over all frequencies (limited by the op amp's gain–bandwidth product ) and provides gain. In practice, an ideal integrator needs to be modified with additional components to reduce the effect of error voltages; the modified circuit is referred to as a practical integrator.
Main description at: Op amp integrator § Practical circuit The gain of an integrator at low frequency can be limited to avoid the saturation problem, by shunting the feedback capacitor with a feedback resistor. This practical integrator acts as a low-pass filter with constant gain in its low frequency pass band. It only performs integration in high frequencies, not in low frequencies, so bandwidth for integrating is limited. Mechanical integrators were key elements in the mechanical differential analyser , used to solve practical physical problems. Mechanical integration mechanisms were also used in control systems such as regulating flows or temperature in industrial processes. Mechanisms such as the ball-and-disk integrator were used both for computation in differential analysers and as components of instruments such as naval gun directors , flow totalizers and others. A planimeter is a mechanical device used for calculating the definite integral of a curve given in graphical form, or more generally finding the area of a closed curve. An integraph is used to plot the indefinite integral of a function given in graphical form.
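To make the low-frequency behaviour described above concrete, the sketch below computes the constant pass-band gain and the cutoff frequency of a practical op-amp integrator whose feedback capacitor is shunted by a feedback resistor; the component values are arbitrary assumptions chosen only for illustration.

import math

# Practical op-amp integrator: input resistor R_in, feedback resistor R_f
# shunting the feedback capacitor C_f. Below the cutoff it behaves as an
# inverting amplifier with constant gain; above it, it integrates.
R_in = 10e3       # ohms (example)
R_f = 1e6         # ohms (example)
C_f = 100e-9      # farads (example)

dc_gain = R_f / R_in                        # constant gain in the low-frequency pass band
f_cutoff = 1.0 / (2 * math.pi * R_f * C_f)  # integration takes over above this frequency

print(f"low-frequency gain ~ {dc_gain:.0f}, cutoff ~ {f_cutoff:.2f} Hz")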
https://en.wikipedia.org/wiki/Integrator
Integrons are genetic mechanisms that allow bacteria to adapt and evolve rapidly through the stockpiling and expression of new genes. [ 1 ] These genes are embedded in a specific genetic structure called gene cassette (a term that is lately changing to integron cassette) that generally carries one promoterless open reading frame (ORF) together with a recombination site ( attC ). Integron cassettes are incorporated to the attI site of the integron platform by site-specific recombination reactions mediated by the integrase. Integrons were initially discovered on conjugative plasmids through their role in antibiotic resistance. [ 2 ] Indeed, these mobile integrons, as they are now known, can carry a variety of cassettes containing genes that are almost exclusively related to antibiotic resistance. Further studies have come to the conclusion that integrons are chromosomal elements, and that their mobilisation onto plasmids has been fostered by transposons and selected by the intensive use of antibiotics. The function of the majority of cassettes found in chromosomal integrons remains unknown. Cassette maintenance requires that they be integrated within a replicative element (chromosome, plasmids). The integrase encoded by the integron preferentially catalyses two types of recombination reaction: 1) attC x attC, which results in cassette excision, 2) attI x attC, which allows integration of the cassette at the attI site of the integron. Once inserted, the cassette is maintained during cell division. [ 3 ] Successive integrations of gene cassettes result in the formation of a series of cassettes. The cassette integrated last is then the one closest to the Pc promoter at the attI site. The IntI-catalysed mode of recombination involves structured single-stranded DNA and gives the attC site recognition mode unique characteristics. [ 4 ] The integration of gene cassettes within an integron also provides a Pc promoter that allows expression of all cassettes in the array, much like an operon. [ 3 ] The level of gene expression of a cassette is then a function of the number and nature of the cassettes that precede it. In 2009, Didier Mazel and his team showed that the expression of the IntI integrase was controlled by the bacterial SOS response, thus coupling this adaptive apparatus to the stress response in bacteria. [ 5 ] An integron is minimally composed of: [ 6 ] [ 7 ] Additionally, an integron will usually contain one or more gene cassettes that have been incorporated into it. The gene cassettes may encode genes for antibiotic resistance , [ 9 ] although most genes in integrons are uncharacterized. An attC sequence (also called 59-be) is a repeat that flanks cassettes and enables cassettes to be integrated at the attI site, excised and undergo horizontal gene transfer . Integrons may be found as part of mobile genetic elements such as plasmids and transposons . Integrons can also be found in chromosomes . The term super-integron was first applied in 1998 (but without definition) to the integron with a long cassette array on the small chromosome of Vibrio cholerae . [ 10 ] [ 11 ] The term has since been used for integrons of various cassette array lengths or for integrons on bacterial chromosomes (versus, for example, plasmids). Use of "super-integron" is now discouraged since its meaning is unclear. [ 10 ] In more modern usage, an integron located on a bacterial chromosome is termed a sedentary chromosomal integron , and one associated with transposons or plasmids is called a mobile integron . 
[ 12 ]
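The positional effect described earlier — each newly integrated cassette enters at the attI site, nearest the Pc promoter, pushing earlier cassettes further away — can be sketched as a simple list model. The cassette names below and the assumption that expression falls off smoothly with position are illustrative only, not data from the article.

# Toy model of an integron cassette array: new cassettes are inserted at the
# attI end of the array, i.e. closest to the Pc promoter.
array = []                      # cassettes ordered from Pc-proximal to Pc-distal

def integrate(cassette):
    """attI x attC recombination: the incoming cassette becomes Pc-proximal."""
    array.insert(0, cassette)

for gene in ["cassette_A", "cassette_B", "cassette_C"]:   # hypothetical cassettes
    integrate(gene)

# The cassette integrated last ("cassette_C") now sits next to Pc; cassettes
# further from the promoter are assumed to be expressed more weakly.
for position, cassette in enumerate(array):
    print(position, cassette, "relative expression ~", round(1 / (position + 1), 2))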
https://en.wikipedia.org/wiki/Integron
The integumentary system is the set of organs forming the outermost layer of an animal's body. It comprises the skin and its appendages, which act as a physical barrier between the external environment and the internal environment that it serves to protect and maintain the body of the animal. Mainly it is the body's outer skin. The integumentary system includes skin , hair , scales , feathers , hooves , claws , and nails . It has a variety of additional functions: it may serve to maintain water balance, protect the deeper tissues, excrete wastes, and regulate body temperature , and is the attachment site for sensory receptors which detect pain, sensation, pressure, and temperature. The skin is one of the largest organs of the body. In humans, it accounts for about 12 to 15 percent of total body weight and covers 1.5 to 2 m 2 of surface area. [ 1 ] The skin (integument) is a composite organ, made up of at least two major layers of tissue: the epidermis and the dermis . [ 2 ] The epidermis is the outermost layer, providing the initial barrier to the external environment. It is separated from the dermis by the basement membrane ( basal lamina and reticular lamina ). The epidermis contains melanocytes and gives color to the skin. The deepest layer of the epidermis also contains nerve endings . Beneath this, the dermis comprises two sections, the papillary and reticular layers, and contains connective tissues , blood vessels, glands, follicles, hair roots , sensory nerve endings, and muscular tissue. [ 3 ] Between the integument and the deep body musculature there is a transitional subcutaneous zone made up of very loose connective and adipose tissue , the hypodermis . Substantial collagen bundles anchor the dermis to the hypodermis in a way that permits most areas of the skin to move freely over the deeper tissue layers. [ 4 ] The epidermis is the strong, superficial layer that serves as the first line of protection against the outer environment. The human epidermis is composed of stratified squamous epithelial cells , which further break down into four to five layers: the stratum corneum , stratum granulosum , stratum spinosum and stratum basale . Where the skin is thicker, such as in the palms and soles, there is an extra layer of skin between the stratum corneum and the stratum granulosum, called the stratum lucidum . The epidermis is regenerated from the stem cells found in the basal layer that develop into the corneum. The epidermis itself is devoid of blood supply and draws its nutrition from its underlying dermis. [ 5 ] Its main functions are protection, absorption of nutrients, and homeostasis . In structure, it consists of a keratinized stratified squamous epithelium ; four types of cells: keratinocytes , melanocytes , Merkel cells , and Langerhans cells . The predominant cell keratinocyte , which produces keratin , a fibrous protein that aids in skin protection, is responsible for the formation of the epidermal water barrier by making and secreting lipids . [ 6 ] The majority of the skin on the human body is keratinized, with the exception of the lining of mucous membranes , such as the inside of the mouth. Non-keratinized cells allow water to "stay" atop the structure. The protein keratin stiffens epidermal tissue to form fingernails . Nails grow from a thin area called the nail matrix at an average of 1 mm per week. The lunula is the crescent-shape area at the base of the nail, lighter in color as it mixes with matrix cells. Only primates have nails. 
In other vertebrates, the keratinizing system at the terminus of each digit produces claws or hooves. [ 2 ] The epidermis of vertebrates is surrounded by two kinds of coverings, which are produced by the epidermis itself. In fish and aquatic amphibians , it is a thin mucus layer that is constantly being replaced. In terrestrial vertebrates, it is the stratum corneum (dead keratinized cells). The epidermis is, to some degree, glandular in all vertebrates, but more so in fish and amphibians . Multicellular epidermal glands penetrate the dermis, where they are surrounded by blood capillaries that provide nutrients and, in the case of endocrine glands, transport their products. [ 7 ] The dermis is the underlying connective tissue layer that supports the epidermis. It is composed of dense irregular connective tissue and areolar connective tissue such as a collagen with elastin arranged in a diffusely bundled and woven pattern. The dermis has two layers: the papillary dermis and the reticular layer. The papillary layer is the superficial layer that forms finger-like projections into the epidermis (dermal papillae), [ 5 ] and consists of highly vascularized, loose connective tissue. The reticular layer is the deep layer of the dermis and consists of the dense irregular connective tissue. These layers serve to give elasticity to the integument, allowing stretching and conferring flexibility, while also resisting distortions, wrinkling, and sagging. [ 3 ] The dermal layer provides a site for the endings of blood vessels and nerves. Many chromatophores are also stored in this layer, as are the bases of integumental structures such as hair , feathers , and glands . The hypodermis, otherwise known as the subcutaneous layer, is a layer beneath the skin. It invaginates into the dermis and is attached to the latter, immediately above it, by collagen and elastin fibers. It is essentially composed of a type of cell known as adipocytes, which are specialized in accumulating and storing fats. These cells are grouped together in lobules separated by connective tissue. The hypodermis acts as an energy reserve. The fats contained in the adipocytes can be put back into circulation, via the venous route, during intense effort or when there is a lack of energy-providing substances, and are then transformed into energy. The hypodermis participates, passively at least, in thermoregulation since fat is a heat insulator. The integumentary system has multiple roles in maintaining the body's equilibrium . All body systems work in an interconnected manner to maintain the internal conditions essential to the function of the body. The skin has an important job of protecting the body and acts as the body's first line of defense against infection, temperature change, and other challenges to homeostasis. [ 8 ] [ 9 ] Its main functions include: Small-bodied invertebrates of aquatic or continually moist habitats respire using the outer layer (integument). This gas exchange system, where gases simply diffuse into and out of the interstitial fluid , is called integumentary exchange . Possible diseases and injuries to the human integumentary system include:
https://en.wikipedia.org/wiki/Integumentary_system
Protein splicing is an intramolecular reaction of a particular protein in which an internal protein segment (called an intein ) is removed from a precursor protein with a ligation of the C-terminal and N-terminal external proteins (called exteins ) on both sides. The splicing junction of the precursor protein is mainly a cysteine or a serine , which are amino acids containing a nucleophilic side chain . The protein splicing reactions which are known now do not require exogenous cofactors or energy sources such as adenosine triphosphate (ATP) or guanosine triphosphate (GTP). Normally, splicing is associated only with pre-mRNA splicing . The precursor protein contains three segments—an N-extein followed by the intein followed by a C-extein . After splicing has taken place, the resulting protein contains the N-extein linked to the C-extein; this splicing product is also termed an extein. The first intein was discovered in 1988 through sequence comparison between the Neurospora crassa [ 1 ] and carrot [ 2 ] vacuolar ATPase (without intein) and the homologous gene in yeast (with intein) that was first described as a putative calcium ion transporter . [ 3 ] In 1990 Hirata et al. [ 4 ] demonstrated that the extra sequence in the yeast gene was transcribed into mRNA and removed itself from the host protein only after translation. Since then, inteins have been found in all three domains of life (eukaryotes, bacteria, and archaea) and in viruses . Protein splicing was unanticipated and its mechanisms were discovered by two groups (Anraku [ 5 ] and Stevens [ 6 ] ) in 1990. Both discovered an intein in the Saccharomyces cerevisiae VMA1 gene, which encodes a precursor of a vacuolar H + -ATPase enzyme. The amino acid sequence of the N- and C-terminal regions corresponded to that of vacuolar H + -ATPases from other organisms (about 70% of the sequence), while the amino acid sequence of the central region (about 30% of the total) corresponded to that of the yeast HO nuclease . Many genes have unrelated intein-coding segments inserted at different positions. For these and other reasons, inteins (or more properly, the gene segments coding for inteins) are sometimes called selfish genetic elements , but it may be more accurate to call them parasitic . According to the gene centered view of evolution, most genes are "selfish" only insofar as they compete with other genes or alleles but usually they fulfill a function for the organisms, whereas "parasitic genetic elements", at least initially, do not make a positive contribution to the fitness of the organism. [ 7 ] [ 8 ] As of December 2019, the UniProtKB database contains 188 entries manually annotated as inteins, ranging from just tens of amino acid residues to thousands. [ 9 ] The first intein was found encoded within the VMA gene of Saccharomyces cerevisiae . Inteins were later found in fungi ( ascomycetes , basidiomycetes , zygomycetes and chytrids ) and in diverse proteins as well. A protein distantly related to known intein-containing proteins, but closely related to metazoan hedgehog proteins , has been described as carrying an intein sequence in Glomeromycota . Many of the newly described inteins contain homing endonucleases and some of these are apparently active. [ 10 ] The abundance of inteins in fungi indicates lateral transfer of intein-containing genes. In eubacteria and archaea, there are 289 and 182 currently known inteins, respectively. Not surprisingly, as in fungi, most inteins in eubacteria and archaea are found inserted into proteins involved in nucleic acid metabolism.
[ 10 ] Inteins vary greatly, but many of the same intein-containing proteins are found in a number of species. For example, pre-mRNA processing factor 8 ( Prp8 ) protein, instrumental in the spliceosome , has seven different intein insertion sites across eukaryotic species. [ 11 ] Intein-containing Prp8 is most commonly found in fungi, but is also seen in Amoebozoa , Chlorophyta , Capsaspora , and Choanoflagellida . Many mycobacteria contain inteins within DnaB (bacterial replicative helicase), RecA (bacterial DNA recombinase), and SufB ( FeS cluster assembly protein). [ 12 ] [ 13 ] There is remarkable variety within the structure and number of DnaB inteins, both within the mycobacterium genus and beyond. Interestingly, intein-containing DnaB is also found in the chloroplasts of algae. [ 14 ] Intein-containing proteins found in archaea include RadA (RecA homolog), RFC, PolB, and RNR. [ 15 ] Many of the same intein-containing proteins (or their homologs) are found in two or even all three domains of life. Inteins are also seen in the proteomes encoded by bacteriophages and eukaryotic viruses. Viruses may have been involved as vectors of intein distribution across the wide variety of intein-containing organisms. [ 15 ] The process for class 1 inteins begins with an N-O or N-S shift when the side chain of the first residue (a serine , threonine , or cysteine ) of the intein portion of the precursor protein nucleophilically attacks the peptide bond of the residue immediately upstream (that is, the final residue of the N-extein) to form a linear ester (or thioester ) intermediate. A transesterification occurs when the side chain of the first residue of the C-extein attacks the newly formed (thio)ester to free the N-terminal end of the intein. This forms a branched intermediate in which the N-extein and C-extein are attached, albeit not through a peptide bond. The last residue of the intein is always an asparagine (Asn), and the amide nitrogen atom of this side chain cleaves apart the peptide bond between the intein and the C-extein, resulting in a free intein segment with a terminal cyclic imide . Finally, the free amino group of the C-extein now attacks the (thio)ester linking the N- and C-exteins together. An O-N or S-N shift produces a peptide bond and the functional, ligated protein. [ 16 ] Class 2 inteins have no nucleophilic first side chain, only an alanine. Instead, the reaction starts directly with a nucleophilic displacement, with the first residue of the C-extein attacking the peptide carbonyl of the final residue of the N-extein. The rest proceeds as usual, starting with Asn turning into a cyclic imide. [ 17 ] Class 3 inteins also have no nucleophilic first side chain, only an alanine, yet they have an internal noncontiguous "WCT" motif. The internal C (cysteine) residue attacks the peptide carbonyl of the final residue of the N-extein (nucleophilic displacement). Transesterification occurs when the first residue of the C-extein attacks the newly formed thioester. The rest proceeds as usual. [ 18 ] The mechanism for the splicing effect is a naturally occurring analogy to the technique for chemically generating medium-sized proteins called native chemical ligation . An intein is a segment of a protein that is able to excise itself and join the remaining portions (the exteins ) with a peptide bond during protein splicing. [ 19 ] Inteins have also been called protein introns , by analogy with (RNA) introns .
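As a purely schematic illustration of the net outcome of the splicing chemistry described above (not of its ester and thioester intermediates), the following Python sketch removes an intein segment from a precursor sequence string and ligates the flanking exteins; all sequences are invented placeholders, not real protein sequences.

# Schematic protein splicing: the intein excises itself and the N- and C-exteins
# are joined by a new peptide bond. Sequences here are arbitrary placeholders.
def splice(precursor, intein):
    """Return (ligated_extein, excised_intein) for a precursor containing one intein."""
    start = precursor.find(intein)
    if start == -1:
        raise ValueError("intein not found in precursor")
    n_extein = precursor[:start]
    c_extein = precursor[start + len(intein):]
    return n_extein + c_extein, intein

precursor = "MKTLLN" + "CFAKGTN" + "SGQERV"   # N-extein + intein + C-extein (made up)
ligated, excised = splice(precursor, "CFAKGTN")
print(ligated)   # "MKTLLNSGQERV" — the two exteins joined
print(excised)   # "CFAKGTN"      — the free intein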
The first part of an intein name is based on the scientific name of the organism in which it is found, and the second part is based on the name of the corresponding gene or extein. For example, the intein found in Thermoplasma acidophilum and associated with Vacuolar ATPase subunit A (VMA) is called "Tac VMA". Normally, as in this example, just three letters suffice to specify the organism, but there are variations. For example, additional letters may be added to indicate a strain. If more than one intein is encoded in the corresponding gene, the inteins are given a numerical suffix starting from 5 ′ to 3 ′ or in order of their identification (for example, "Msm dnaB-1"). The segment of the gene that encodes the intein is usually given the same name as the intein, but to avoid confusion the name of the intein proper is usually capitalized ( e.g. , Pfu RIR1-1), whereas the name of the corresponding gene segment is italicized ( e.g. , Pfu rir1-1 ). A different disambiguating convention is to place a lowercase "i" after the source protein name, e.g. "Msm DnaBi1". [ 20 ] Inteins can be classified on many criteria. Inteins can contain a homing endonuclease gene (HEG) domain in addition to the splicing domains. This domain is responsible for the spread of the intein by cleaving DNA at an intein-free allele on the homologous chromosome , triggering the DNA double-stranded break repair (DSBR) system, which then repairs the break, thus copying the intein-coding DNA into a previously intein-free site. [ 17 ] The HEG domain is not necessary for intein splicing, and so it can be lost, forming a minimal , or mini , intein . Several studies have demonstrated the modular nature of inteins by adding or removing HEG domains and determining the activity of the new construct. [ citation needed ] Sometimes, the intein of the precursor protein comes from two genes. In this case, the intein is said to be a split intein . For example, in cyanobacteria , DnaE , the catalytic subunit α of DNA polymerase III , is encoded by two separate genes, dnaE-n and dnaE-c . The dnaE-n product consists of an N-extein sequence followed by a 123-AA intein sequence, whereas the dnaE-c product consists of a 36-AA intein sequence followed by a C-extein sequence. [ 21 ] Inteins are very efficient at protein splicing, and they have accordingly found an important role in biotechnology . There are more than 200 inteins identified to date; sizes range from 100–800 AAs . Inteins have been engineered for particular applications such as protein semisynthesis [ 22 ] and the selective labeling of protein segments, which is useful for NMR studies of large proteins. [ 23 ] Pharmaceutical inhibition of intein excision may be a useful tool for drug development ; the protein that contains the intein will not carry out its normal function if the intein does not excise, since its structure will be disrupted. It has been suggested that inteins could prove useful for achieving allotopic expression of certain highly hydrophobic proteins normally encoded by the mitochondrial genome, for example in gene therapy . [ 24 ] The hydrophobicity of these proteins is an obstacle to their import into mitochondria. Therefore, the insertion of a non-hydrophobic intein may allow this import to proceed. Excision of the intein after import would then restore the protein to wild-type . Affinity tags have been widely used to purify recombinant proteins, as they allow the accumulation of recombinant protein with little impurities. 
However, the affinity tag must be removed by proteases in the final purification step. The extra proteolysis step raises the problems of protease specificity in removing affinity tags from recombinant protein, and the removal of the digestion product. This problem can be avoided by fusing an affinity tag to self-cleavable inteins in a controlled environment. The first generation of expression vectors of this kind used modified Saccharomyces cerevisiae VMA (Sce VMA) intein. Chong et al. [ 25 ] used a chitin binding domain (CBD) from Bacillus circulans as an affinity tag, and fused this tag with a modified Sce VMA intein. The modified intein undergoes a self-cleavage reaction at its N-terminal peptide linkage with 1,4-dithiothreitol (DTT), β-mercaptoethanol (β-ME), or cystine at low temperatures over a broad pH range. After expressing the recombinant protein, the cell homogenate is passed through the column containing chitin . This allows the CBD of the chimeric protein to bind to the column. Furthermore, when the temperature is lowered and the molecules described above pass through the column, the chimeric protein undergoes self-splicing and only the target protein is eluted. This novel technique eliminates the need for a proteolysis step, and modified Sce VMA stays in column attached to chitin through CBD. [ 25 ] Recently inteins have been used to purify proteins based on self aggregating peptides. Elastin-like polypeptides (ELPs) are a useful tool in biotechnology. Fused with target protein, they tend to form aggregates inside the cells. [ 26 ] This eliminates the chromatographic step needed in protein purification. The ELP tags have been used in the fusion protein of intein, so that the aggregates can be isolated without chromatography (by centrifugation) and then intein and tag can be cleaved in controlled manner to release the target protein into solution. This protein isolation can be done using continuous media flow, yielding high amounts of protein, making this process more economically efficient than conventional methods. [ 26 ] Another group of researchers used smaller self aggregating tags to isolate target protein. Small amphipathic peptides 18A and ELK16 (figure 5) were used to form self cleaving aggregating protein. [ 27 ] Over the last twenty years, there has been increasing interest in leveraging inteins for antimicrobial applications. [ 12 ] Intein splicing is found exclusively in unicellular organisms, with a particularly high abundance in pathogenic microorganisms. [ 28 ] Furthermore, inteins are commonly found within housekeeping proteins and/or proteins involved in the survival of the organism within a human host. Post-translational intein removal is necessary for the protein to properly fold and function. For example, Gaëlle Huet et al. demonstrated that in Mycobacterium tuberculosis , unspliced SufB prevents the formation of the SufBCD complex, a component of the SUF machinery. [ 29 ] As such, the inhibition of intein splicing may serve as a powerful platform for the development of antimicrobials. Current research on intein splicing inhibitors has focused on developing antimycobacterials ( M. tb. has three intein-containing proteins), as well as agents active against pathogenic fungi Cryptococcus and Aspergillus. [ 13 ] Cisplatin and similar platinum-containing compounds inhibit splicing of the M. tb. RecA intein through coordinating to catalytic residues. 
[ 30 ] Divalent cations, such as copper (II) and zinc (II) ions, function similarly to reversibly inhibit splicing. [ 12 ] However, neither of these approaches is currently suitable for an effective and safe antibiotic. The fungal Prp8 intein is also inhibited by divalent cations and cisplatin, which interfere with the catalytic Cys1 residue. [ 12 ] In 2021, Li et al. showed that small molecule inhibitors of Prp8 intein splicing were selective and effective at slowing the growth of C. neoformans and C. gattii , providing evidence for the antimicrobial potential of intein splicing inhibitors. [ 31 ]
https://en.wikipedia.org/wiki/Intein
Intel's Communication Streaming Architecture ( CSA ) was a mechanism used in the Intel Hub Architecture to increase the bandwidth available between a network card and the CPU. It consists of connecting the network controller directly to the Memory Controller Hub ( northbridge ), instead of to the I/O Controller Hub (southbridge) through the PCI bus, which was the common practice until that point. The technology was only used in Intel chipsets released in 2003, and was largely seen as a stop-gap measure to allow Gigabit Ethernet chips to run at full speed until the arrival of a faster expansion bus (it was also used to connect the wireless networking chips in Intel's Centrino mobile platform). To Intel's credit, CSA-connected Ethernet chips did show consistently higher transfer rates than comparable PCI cards. The following year, PCI Express replaced CSA as the method of connecting network chips in Intel's chipsets, and the technology was subsequently discontinued.
https://en.wikipedia.org/wiki/Intel_Communication_Streaming_Architecture
The Intel Compute Stick was a stick PC designed by Intel to be used in media center applications. According to Intel, it was designed to be smaller than conventional desktop or other small-form-factor PCs, while offering comparable performance. Its main connector, an HDMI 1.4 port, along with a compatible monitor (or TV) and Bluetooth -based keyboards and mice, allows it to be used for general computing tasks. [ 3 ] The small form factor device was launched in early 2015 using the Atom Z3735F power-efficient processor from Intel's Bay Trail family, a SoC family that is predominantly designed for use with tablets and 2-in-1 devices. The processor offers a 1.33 GHz base frequency and supports a maximum of 2 GB of RAM. [ 4 ] This is sufficient for home entertainment usage, light office productivity, thin clients, and digital signage applications. [ 5 ] In mid-2015 it was announced that second generation versions of the Compute Stick would feature advancements on the Bay Trail framework through application of Core M processors in the form factor. The new devices (released Q1 2016) allowed Intel to introduce additional processing power as well as 4 GB memory for "more intensive application and content creation" as well as "faster multi-tasking". [ 6 ] The Intel Compute Stick line was discontinued on July 7, 2021. [ 7 ]
https://en.wikipedia.org/wiki/Intel_Compute_Stick
Intel Display Power Saving Technology or Intel DPST is an Intel backlight control technology. Intel claims that the display consumes most of the power in mobile devices and that reducing the backlight brightness linearly reduces the energy footprint. Intel DPST aims to adaptively reduce backlight brightness while maintaining satisfactory visual performance. The Intel DPST subsystem analyzes the image to be displayed and uses a set of algorithms to change the chroma values of pixels while simultaneously reducing the backlight brightness, such that there is minimal perceived visual degradation. When the frame to be displayed differs considerably from the frame currently being displayed, a software interrupt is asserted and new pixel chroma values and backlight brightness values are calculated. The current version is Intel DPST 6.0. Intel claims that the current DPST version reduces backlight power by 70% for DVD workloads.
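A rough sketch of the kind of trade-off such a scheme makes — dimming the backlight while boosting pixel values so perceived brightness stays roughly constant — is given below. The scaling rule, the clamping behaviour, and the numbers are illustrative assumptions only, not Intel's actual algorithm.

# Illustrative backlight/pixel trade-off in the spirit of DPST: dim the backlight
# and compensate by scaling pixel values up, clamping at the maximum code value.
def compensate(pixels, backlight_factor):
    """Scale 8-bit pixel values to offset a reduced backlight level (0..1)."""
    gain = 1.0 / backlight_factor
    return [min(255, int(round(p * gain))) for p in pixels]

frame = [12, 64, 128, 200, 240]      # example 8-bit luma values
backlight = 0.7                      # backlight reduced to 70% of full brightness

print(compensate(frame, backlight))  # brighter pixel codes under a dimmer backlight
# Perceived brightness is roughly preserved except where values clip at 255,
# which is where visible degradation would appear.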
https://en.wikipedia.org/wiki/Intel_Display_Power_Saving_Technology
Intel PRO/Wireless is a series of wireless products developed by Intel . These products include wireless network adapters, access points, and routers that are designed to provide high-speed wireless connectivity for computers, laptops, and other devices. Intel PRO/Wireless products use various wireless technologies, including Wi-Fi (IEEE 802.11) and Bluetooth, to provide wireless connectivity. Intel PRO/Wireless network adapters allow devices to connect to wireless networks, while access points and routers create wireless networks that devices can connect to. Intel PRO/Wireless products are commonly used in homes, offices, and other settings where wireless connectivity is desired. They are known for their high performance and reliability, and are often used in business environments where a reliable wireless connection is critical. Intel PRO/Wireless products are also used in some public places, such as airports, hotels, and coffee shops, where they can be used to provide wireless access to the Internet for travelers and other patrons. After the release of the wireless products called Intel Pro/Wireless 2100, 2200BG/2225BG/2915ABG and 3945ABG in 2005, Intel was criticized for not granting free redistribution rights for the firmware necessary to be included in the operating systems for the wireless devices to operate. [ 1 ] As a result of this, Intel became a target of campaigns to allow free operating systems to include binary firmware on terms acceptable to the open-source community . Linspire - Linux creator Michael Robertson outlined the difficult position that Intel was in releasing to open source , as Intel did not want to upset their large customer Microsoft . [ 2 ] Theo de Raadt of OpenBSD also claimed that Intel is being "an Open Source fraud" after an Intel employee presented a distorted view of the situation on an open-source conference. [ 3 ] In spite of the negative attention Intel received as a result of the wireless dealings, the binary firmware still has not gained a license compatible with free software principles. The successor to the PRO/Wireless series is Intel Wireless WiFi Link.
https://en.wikipedia.org/wiki/Intel_PRO/Wireless
The Intel Play product line, developed and jointly marketed by Intel and Mattel , was a product line of consumer "toy" electronic devices. Besides the QX3 Computer Microscope, its toys included the Digital Movie Creator, the Computer Sound Morpher, and the Me2Cam. [ 1 ] [ 2 ] The line was launched in the fall of 1999. [ 3 ] The Intel Play product line was discontinued on March 29, 2002, when it was purchased by Tim Hall's holding company Prime Entertainment . Hall founded Digital Blue, [ 4 ] which continued the Intel Play product line under the Digital Blue brand. The "Play" logo of Intel Play became a staple of 2K Play in 2007. The QX3 Computer Microscope was a product in the Intel Play product line and was continued in the Digital Blue product line. An upgraded QX5 model was available. The QX3 is a small, semi-transparent blue electronic microscope that can connect to a computer via a USB connection . It has magnification levels of 10x, 60x, and 200x. The microscope comes with software which allows a computer to access the microscope and use it to either take pictures or record movies. The specimen can be lit either from underneath or from above by one of two incandescent bulbs (3.5V, 300mA). The specimen platform is adjustable to focus the image. The Vision CPiA (VV0670P001) is interfaced to a CIF CCD sensor, sampled at a resolution of 320x240 pixels. The QX5 Computer Microscope is a Digital Blue product that upgraded the QX3 with multiple improvements, including a 640x480 image capture device and a brighter light source. The Digital Movie Creator was a product in the Intel Play product line and was continued in the Digital Blue product line. Upgraded 2.0 and 3.0 versions were available. The Intel Play Digital Movie Creator featured an easy-to-use digital video camera and movie-making software package that allows children to use the PC to script and star in their own feature movies. At the time of its development and release in 2001, the goal of the Intel Play products was to extend the value and utility of powerful PCs, like ones based on the Intel® Pentium® 4 processor . [ 5 ] The Intel Play Computer Sound Morpher and Editing CD-ROM was another product of the Intel Play line intended for children of 6 years and older to record and subsequently edit sounds on a computer in "fun and surprising ways." It featured a variety of pre-recorded sounds to choose from, and audio filters such as "echo" and "ballpark." Users of the program could then share sounds through means such as e-mail. [ 6 ] The Intel Play Me2Cam Computer Video Camera was a digital video camera that plugs into a computer via a USB port connection. A CD-ROM that comes with the camera uses image processing to remove the background from the subject and overlay the subject onto an external background from the 5 games included. It also features the ability to use body movements to control actions in the games. [ 6 ]
https://en.wikipedia.org/wiki/Intel_Play
Intel Teach is a program established by Intel that aims to improve teacher effectiveness around the world by offering professional development courses and helping teachers integrate information and communications technology into their lessons. Teachers are also trained to promote their students' problem-solving , critical thinking, and collaboration skills. Intel Teach claims to be the largest private-sector program of its kind, training more than 15 million teachers in 70 countries who will in turn influence the learning of over 300 million students. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Intel_Teach
Intelectins are lectins (carbohydrate-binding proteins) expressed in humans and other chordates . Humans express two types of intelectins encoded by the ITLN1 and ITLN2 genes, respectively. [ 1 ] [ 2 ] Several intelectins bind microbe-specific carbohydrate residues. Therefore, intelectins have been proposed to function as immune lectins. [ 3 ] [ 4 ] Even though intelectins contain the fibrinogen-like domain found in the ficolin family of immune lectins, there is significant structural divergence. [ 5 ] Thus, intelectins may not function through the same lectin-complement pathway. Most intelectins are still poorly characterized and they may have diverse biological roles. Human intelectin-1 (hIntL-1) has also been shown to bind lactoferrin , [ 6 ] but the functional consequence has yet to be elucidated. Additionally, hIntL-1 is a major component of asthmatic mucus [ 7 ] and may be involved in insulin physiology as well. [ 8 ] The first intelectin was discovered in Xenopus laevis oocytes and is named XL35 or XCGL-1. [ 9 ] [ 10 ] [ 11 ] X. laevis oocytes also contain a closely related XCGL-2. [ 12 ] In addition, X. laevis embryos secrete Xenopus embryonic epidermal lectin into the environmental water, presumably to bind microbes. [ 13 ] [ 14 ] XSL-1 and XSL-2 are also expressed in X. laevis serum when stimulated with lipopolysaccharide. [ 15 ] Two additional intestinal intelectins have been discovered in X. laevis . [ 16 ] Humans have two intelectins: hIntL-1 (omentin) and hIntL-2. [ 17 ] Mice also have two intelectins: mIntL-1 and mIntL-2. [ 18 ] Several lines of evidence suggest that intelectins recognize microbes and may function as innate immune defense proteins. Tunicate intelectin is an opsonin for phagocytosis by hemocytes. [ 19 ] Amphioxus intelectin has been shown to agglutinate bacteria. [ 20 ] [ 21 ] In zebrafish and rainbow trout, intelectin expression is stimulated upon microbial exposure. [ 22 ] [ 23 ] [ 24 ] Mammals such as sheep and mice also upregulate intelectin expression upon parasitic infection. [ 25 ] [ 26 ] The increase in intelectin expression upon microbial exposure supports the hypothesis that intelectins play a role in the immune system. Although intelectins require calcium ions for function, the sequences bear no resemblance to C-type lectins . [ 3 ] In addition, merely around 50 amino acids (the fibrinogen-like domain) align with any known protein, specifically the ficolin family. [ 2 ] The first structural details of an intelectin come from the crystal structure of selenomethionine -labeled XEEL carbohydrate-recognition domain (Se-Met XEEL-CRD) solved by Se- SAD . [ 5 ] XEEL-CRD was expressed and Se-Met-labeled in High Five insect cells using a recombinant baculovirus . The fibrinogen-like fold is conserved despite amino acid sequence divergence. However, extensive insertions are present in intelectin compared to ficolins, thus making intelectin a distinct lectin structural class. [ 5 ] The Se-Met XEEL-CRD structure then enabled the structure solution by molecular replacement of D-glycerol 1-phosphate (GroP)-bound XEEL-CRD, [ 5 ] apo-human intelectin-1 (hIntL-1), [ 4 ] and galactofuranose-bound hIntL-1. [ 4 ] Each polypeptide chain of XEEL and hIntL-1 contains three bound calcium ions: two in the structural calcium site and one in the ligand binding site. [ 4 ] [ 5 ] The amino acid residues in the structural calcium site are conserved among intelectins, thus it is likely that most, if not all, intelectins have two structural calcium ions.
[ 5 ] In the ligand binding site of XEEL and hIntL-1, the exocyclic vicinal diol of the carbohydrate ligand directly coordinates to the calcium ion. [ 4 ] [ 5 ] There are large variations in the ligand binding site residues among intelectin homologs, suggesting that the intelectin family may have broad ligand specificities and biological functions. [ 5 ] As there are no intelectin numbering conventions across different organisms, one should not assume functional homology based on the intelectin number. For example, hIntL-1 has glutamic acid residues in the ligand binding site to coordinate a calcium ion, while zebrafish intelectin-1 is devoid of these acidic residues. [ 5 ] Zebrafish intelectin-2 ligand binding site residues are similar to those present in hIntL-1. hIntL-1 is a disulfide-linked trimer as shown by non-reducing SDS-PAGE [ 3 ] and X-ray crystallography. [ 4 ] Despite lacking the intermolecular disulfide bonds, XEEL-CRD is trimeric in solution. [ 5 ] The N-terminal peptide of the full-length XEEL is responsible for dimerizing the trimeric XEEL-CRD into a disulfide-linked hexameric full-length XEEL. [ 5 ] Therefore, the N-termini of intelectins are often responsible for forming disulfide-linked oligomers. In intelectin homologs where the N-terminal cysteines are absent, the CRD itself may still be capable of forming non-covalent oligomers in solution.
https://en.wikipedia.org/wiki/Intelectin
An intellectual property ( IP ) infringement is the infringement or violation of an intellectual property right . There are several types of intellectual property rights, such as copyrights , patents , trademarks , industrial designs , plant breeders rights and trade secrets . Therefore, an intellectual property infringement may for instance be one of the following: Techniques to detect (or deter) intellectual property infringement include: Designing around a patent can sometimes be a way to avoid infringing it. Companies or individuals who infringe on intellectual property rights produce counterfeit or pirated products and services. [ 1 ] An example of a counterfeit product is if a vendor were to place a well-known logo on a piece of clothing that said company did not produce. An example of a pirated product is if an individual were to distribute unauthorized copies of a DVD for a profit of their own. [ 1 ] In such circumstances, the law has the right to punish. Companies may seek out remedies themselves, however, "Criminal sanctions are often warranted to ensure sufficient punishment and deterrence of wrongful activity". [ 1 ]
https://en.wikipedia.org/wiki/Intellectual_property_infringement
Intellectualism is the mental perspective that emphasizes the use, development, and exercise of the intellect , and is identified with the life of the mind of the intellectual . [ 1 ] In the field of philosophy , the term intellectualism indicates one of two ways of critically thinking about the character of the world: (i) rationalism , which is knowledge derived solely from reason ; and (ii) empiricism , which is knowledge derived solely from sense experience. Each intellectual approach attempts to eliminate fallacies that ignore, mistake, or distort evidence about "what ought to be" instead of "what is" the character of the world. [ 2 ] The first historical figure who is usually called an "intellectualist" was the Greek philosopher Socrates (c. 470 – 399 BC), who taught that intellectualism allows that "one will do what is right or [what is] best, just as soon as one truly understands what is right or best"; that virtue is a matter of the intellect, because virtue and knowledge are related qualities that a person accrues, possesses, and improves by dedication to the use of reason . [ 3 ] Philosopher Dominic Scott refers to a "standard criticism" of Socrates' attitude to human nature : that he treats human nature as more rational than it really is. [ 4 ] Socrates's definition of moral intellectualism is a basis of the philosophy of Stoicism , wherein the consequences of that definition are called "Socratic paradoxes", such as "There is no weakness of will ", because a person either knowingly does evil or knowingly seeks to do evil (moral wrong); that anyone who does commit evil or seeks to commit evil does so involuntarily; and that virtue is knowledge, that there are few virtues, but that all virtues are one. The concepts of truth and knowledge in contemporary philosophy are unlike Socrates's concepts of truth, knowledge, and ethical conduct, and cannot be equated with modern, post–Cartesian conceptions of knowledge and rational intellectualism. [ 5 ] In that vein, by way of detailed study of history, Michel Foucault demonstrated that in classical antiquity (800 BC – AD 1000), "knowing the truth" was akin to "spiritual knowledge", which is integral to the principle of "caring for the self". In an effort to become a moral person the care for the self is realised through ascetic exercises meant to ensure that knowledge of truth was learned and integrated to the Self. Therefore, to understand truth meant possessing "intellectual knowledge" that integrated the self to the (universal) truth and to living an authentic life. Achieving that ethical state required continual care for the self, but also meant being someone who embodies truth, and so can readily practice the Classical -era rhetorical device of parrhesia : "to speak candidly, and to ask forgiveness for so speaking"; and, by extension, to practice the moral obligation to speak truth for the common good, even at personal risk. [ 6 ] Medieval theological intellectualism is a doctrine of divine action, wherein the faculty of intellect precedes, and is superior to, the faculty of the will ( voluntas intellectum sequitur ). As such, intellectualism is contrasted with voluntarism , which proposes the will as superior to the intellect, and to the emotions; hence, the stance that "according to intellectualism, choices of the Will result from that which the intellect recognizes as good; the will, itself, is determined. For voluntarism, by contrast, it is the Will which identifies which objects are good, and the Will, itself, is indetermined". 
[ 7 ] From that philosophical perspective and historical context, the Spanish Muslim polymath Averroës (1126–1198) in the 12th century, the English theologian Roger Bacon , [ 8 ] the Italian Christian theologian Thomas Aquinas (1225–1274), and the German Christian theologian Meister Eckhart (1260–1327) in the 13th century, are recognised intellectualists. [ 7 ] [ 9 ]
https://en.wikipedia.org/wiki/Intellectualism
Intelligence-based design is the purposeful manipulation of the built-environment to effectively engage humans in an essential manner through complex organized information. Intelligence-based theory evidences the conterminous relationship between mind and matter, i.e. the direct neurological evaluations of surface, structure, pattern, texture and form. Intelligence-based theory maintains that our sense of well-being is established through neuro-engagement with the physical world at the deepest level common to all people i.e. "Innate Intelligence." These precursory readings of the physical environment represent an evolved set of information processing skills that the human mind has developed over millennia through direct lived experience. This physiological engagement with the world operates in a more immediate sense than the summary events of applied meaning or intellectual speculation. It is through this direct neurological engagement that humans connect more fully with the world. Many of mankind's early religious associations with physical structures were informed by an intuitive understanding that structure and materials speak to our deeper self, i.e. the human spirit, the soul. Intelligence based theory reveals this effectual dimension of the built-environment and its relationship to human cognitive development, mental acuity, perceptual awareness, spirituality, and sense of well-being. It is within this realm that the mind's eye connects, or fails to connect, with the world outside. The degree of neuro-connectivity which occurs at these intervals serves to render the built-environment either intelligible or un-intelligible. The study and theory of this occurrence is known as "Intelligence-based design". Several distinct strands of design thinking, in parallel development, lead towards Intelligence-based design. Christopher Alexander contributed early on to the scientific approach to design, by proposing a theory of design in his book Notes on the Synthesis of Form . Those were the years when Artificial Intelligence was being developed by Herbert A. Simon , and Alexander was part of that movement. His later work A Pattern Language , although written for architects and urbanists, was picked up by the software community and used as a combinatorial and organizational rubric for software complexity, especially Design patterns (computer science) . Alexander's most recent work The Nature of Order continues by building up a framework for design that relies upon natural and biological structures. Entirely separate from this, E. O. Wilson introduced the Biophilia hypothesis to describe the affinity of humans to other living structures, and to conjecture our innate need for such a connection. This topic was later investigated by Stephen R. Kellert and others, and applied to the design of the artificial environment. The third and independent component of the theory is the recent developments in mobile robotics by Rodney Brooks , where a breakthrough occurred by largely dispensing with internal memory. The practical concept of "Intelligence without representation" otherwise known as the Subsumption architecture and Behavior-based robotics introduced by Brooks suggests a parallel with the way human beings interact with, and design their own environment. These notions are brought together in Intelligence-based design, which is a topic currently under investigation for design applications in both architecture and urbanism.
https://en.wikipedia.org/wiki/Intelligence-based_design
The Intelligence Advanced Research Projects Activity ( IARPA ) is an organization, within the Office of the Director of National Intelligence (ODNI), that is responsible for leading research to overcome difficult challenges facing the United States Intelligence Community . [ 1 ] IARPA characterizes its mission as follows: "To envision and lead high-risk, high-payoff research that delivers innovative technology for future overwhelming intelligence advantage." IARPA funds academic and industry research across a broad range of technical areas, including mathematics, computer science, physics, chemistry, biology, neuroscience, linguistics, political science, and cognitive psychology . Most IARPA research is unclassified and openly published. IARPA transfers successful research results and technologies to other government agencies. Notable IARPA investments include quantum computing , [ 2 ] superconducting computing , machine learning, and forecasting tournaments. IARPA characterizes its mission as "to envision and lead high-risk, high-payoff research that delivers innovative technology for future overwhelming intelligence advantage". In 1958, the first Advanced Research Projects Agency, or ARPA, was created in response to an unanticipated surprise—the Soviet Union 's successful launch of Sputnik on October 4, 1957. The ARPA model was designed to anticipate and pre-empt such technological surprises. As then-Secretary of Defense Neil McElroy said, "I want an agency that makes sure no important thing remains undone because it doesn't fit somebody's mission." The ARPA model has been characterized by ambitious technical goals, competitively awarded research led by term-limited staff, and independent testing and evaluation. Authorized by the ODNI in 2006, IARPA was modeled after DARPA but focused on national intelligence, rather than military, needs. The agency was formed from a consolidation of the National Security Agency 's Disruptive Technology Office , the National Geospatial-Intelligence Agency 's National Technology Alliance, and the Central Intelligence Agency 's Intelligence Technology Innovation Center. [ 3 ] IARPA operations began on October 1, 2007 with Lisa Porter as founding director. Its headquarters, a new building in M Square, the University of Maryland 's research park in Riverdale Park, Maryland , was dedicated in April 2009. [ 4 ] In 2010, IARPA's quantum computing research was named Science magazine's Breakthrough of the Year. [ 5 ] [ 6 ] In 2015, IARPA was named to lead foundational research and development for the National Strategic Computing Initiative . [ citation needed ] IARPA is also a part of other White House science and technology efforts, including the U.S. BRAIN Initiative , and the nanotechnology -inspired Grand Challenge for Future Computing. [ 7 ] [ 8 ] In 2013, The New York Times ' s op-ed columnist David Brooks called IARPA "one of the government's most creative agencies." [ 9 ] IARPA invests in multi-year research programs, in which academic and industry teams compete to solve a well-defined set of technical problems, regularly scored on a shared set of metrics and milestones. Each program is led by an IARPA Program Manager (PM) who is a term-limited Government employee. IARPA programs are meant to enable researchers to pursue ideas that are potentially disruptive to the status quo. Most IARPA research is unclassified and openly published. 
[ 10 ] Former director Jason Matheny has stated that the agency's goals of openness and external engagement serve to draw in expertise from academia and industry, or even individuals who "might be working in their basement on some data-science project and might have an idea for how to solve an important problem". [ 11 ] IARPA transfers successful research results and technologies to other government agencies. IARPA is known for its programs to fund research into anticipatory intelligence, using data science to make predictions about future events ranging from political elections to disease outbreaks to cyberattacks , some of which focus on open-source intelligence . [ 12 ] [ 13 ] [ 14 ] IARPA has pursued these objectives not only through traditional funding programs but also through tournaments [ 12 ] [ 13 ] and prizes. [ 11 ] Aggregative Contingent Estimation (ACE) is an example of one such program. [ 11 ] [ 13 ] Other projects involve the analysis of images or videos that lack metadata by directly analyzing the media's content itself. Examples given by IARPA include determining the location of an image by analyzing features such as the placement of trees or a mountain skyline, or determining whether a video is of a baseball game or a traffic jam. [ 11 ] Another program focuses on developing speech recognition tools that can transcribe arbitrary languages. [ 15 ] IARPA is also involved in high-performance computing and alternative computing methods. In 2015, IARPA was named one of two foundational research and development agencies in the National Strategic Computing Initiative , with the specific charge of finding "future computing paradigms offering an alternative to standard semiconductor computing technologies". [ citation needed ] One such approach is cryogenic superconducting computing , which seeks to use superconductors such as niobium , rather than semiconductors , to reduce the energy consumption of future exascale supercomputers . [ 11 ] [ 15 ] Several programs at IARPA focus on quantum computing [ 2 ] and neuroscience . [ 16 ] IARPA is a major funder of quantum computing research, due to its applications in quantum cryptography . As of 2009, IARPA was said to provide a large portion of quantum computing funding resources in the United States. [ 17 ] Quantum computing research funded by IARPA was named Science Magazine's Breakthrough of the Year in 2010, [ 5 ] [ 6 ] and physicist David Wineland was a winner of the 2012 Nobel Prize in Physics for quantum computing research funded by IARPA. [ 11 ] IARPA is also involved in neuromorphic computation efforts as part of the U.S. BRAIN Initiative and the National Nanotechnology Initiative 's Grand Challenge for Future Computing. IARPA's MICrONS project seeks to reverse engineer one cubic millimeter of brain tissue and use insights from its study to improve machine learning and artificial intelligence . [ 7 ] [ 8 ] Below are some of the past and current research programs of IARPA.
https://en.wikipedia.org/wiki/Intelligence_Advanced_Research_Projects_Activity
An intelligence engine is a type of enterprise information management that combines business rule management , predictive , and prescriptive analytics to form a unified information access platform that provides real-time intelligence through search technologies , dashboards and/or existing business infrastructure. Intelligence Engines are process and/or business problem specific, resulting in industry and/or function-specific marketing trademarks associated with them. They can be differentiated from enterprise resource planning (ERP) software in that intelligence engines include organization-level business rules and proactive decision management functionality. The first intelligence engine application appears to have been introduced in 2001 by Sonus Networks, Inc. in their patent US6961334 B1. [ 1 ] Applied to the field of telecommunications systems, the intelligence engine was composed of a database queried by a data distributor layer, received by a telephony management layer and acted upon by a facility management command & control layer. [ 1 ] This combined standalone business intelligence tools like a data warehouse , reporting and querying software and a decision support system . The concept was reinforced in 2002 in patent application US20030236689 A1 [ 2 ] which applied predictive quantitative models to data and used rules to correlate context data at different stages of the business process with business process outcomes to be presented to end users. [ 2 ] LogRhythm Inc. advanced the concept in 2010 by adding event managers to the end of the intelligence engine's process to determine reporting, remediation and other outcomes. [ 3 ] As a system that combines human intelligence, data inputs, automated decision-making and unified information access, intelligence engines are an advancement in business intelligence tools because they:
https://en.wikipedia.org/wiki/Intelligence_engine
Intelligent House Concept is a building automation system using a star-configured topology with wires to each device. It was originally made by LK, but is now owned by Schneider Electric and sold as "IHC Intelligent House Concept". The system is made up of a central controller and up to 8 input modules and 16 output modules. Each input module can have 16 digital (on/off) inputs and each output module 8 digital (on/off) outputs, resulting in a total of 128 inputs and 128 outputs per controller. The central controller has one point-to-point data communication wire connected to each module. The protocol between the central controller and the modules uses a 5 V pulse-width encoding, and the signal repeats continuously.
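The controller capacity arithmetic above (8 input modules of 16 inputs and 16 output modules of 8 outputs) can be checked with a small sketch; the flat addressing scheme shown is purely illustrative, since the article does not specify how IHC actually addresses channels.

```python
INPUT_MODULES = 8       # up to 8 input modules per controller
INPUTS_PER_MODULE = 16  # 16 digital (on/off) inputs each
OUTPUT_MODULES = 16     # up to 16 output modules per controller
OUTPUTS_PER_MODULE = 8  # 8 digital (on/off) outputs each

def total_points() -> tuple[int, int]:
    """Return (total inputs, total outputs) for a fully populated controller."""
    return (INPUT_MODULES * INPUTS_PER_MODULE, OUTPUT_MODULES * OUTPUTS_PER_MODULE)

def input_address(module: int, channel: int) -> int:
    """Map (module, channel) to a flat input index; an illustrative scheme only."""
    if not (0 <= module < INPUT_MODULES and 0 <= channel < INPUTS_PER_MODULE):
        raise ValueError("module or channel out of range")
    return module * INPUTS_PER_MODULE + channel

print(total_points())       # (128, 128), matching the figures in the article
print(input_address(3, 5))  # flat index 53
```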
https://en.wikipedia.org/wiki/Intelligent_Home_Control
Intelligent Peripheral Interface ( IPI ) was a server-centric storage interface used in the 1980s and early 1990s, standardized as ISO 9318 [ 1 ] and designed for mainframe computers from IBM, Control Data Corporation , and Unisys . It replaced Storage Module Device (SMD) as the hard disk drive interface for very large hard disks. [ 2 ] The idea behind IPI is that the disk drives themselves are as simple as possible, containing only the lowest-level control circuitry, while the IPI interface card encapsulates most of the disk control complexity. The IPI interface card, as a central point of control, is thus theoretically able to best coordinate accesses to the connected disks, as it "knows" more about the states of the connected disks than would, say, a SCSI interface. IPI supports cable lengths up to 125 metres (410 ft). [ 2 ] An IPI-2 bus can provide a data transfer rate in the vicinity of 6 MB/s. In practice, the theoretical advantages of IPI over SCSI were often not realized, as they only materialized when several disks were connected to the interface, which could then easily become a bandwidth bottleneck. IPI systems were often shipped by Sun Microsystems on original sun4 architecture servers, but the above limitation and reliability problems made them unpopular with customers, and the technology had largely disappeared by the second half of the 1990s.
https://en.wikipedia.org/wiki/Intelligent_Peripheral_Interface
Intelligent automation ( IA ), or intelligent process automation , is a software term that refers to a combination of artificial intelligence (AI) and robotic process automation (RPA). [ 1 ] Companies use intelligent automation to cut costs and streamline tasks by using artificial-intelligence-powered robotic software to handle repetitive tasks. [ 1 ] As it accumulates data, the system learns in an effort to improve its efficiency. [ 2 ] Intelligent automation applications include, but are not limited to, pattern analysis, data assembly, and classification. [ 2 ] The term is similar to hyperautomation , a concept identified by research group Gartner as being one of the top technology trends of 2020. [ 3 ] Intelligent automation applies the assembly line concept of breaking tasks into repetitive steps to improve business processes. [ 4 ] Rather than having humans do each step, intelligent automation can replace steps with an intelligent software robot or bot, improving efficiency. [ 5 ] The technology is used to process unstructured content. Common real-world applications include self-driving cars, self-checkouts at grocery stores, smart home assistants, and appliances. [ 6 ] Businesses can apply data and machine learning to build predictive analytics that react to consumer behavior changes, or to implement RPA to improve manufacturing floor operations. [ 6 ] For example, the technology has been used to automate the workflow behind distributing Covid-19 vaccines: data provided by hospital systems’ electronic health records can be processed to identify and educate patients, and to schedule vaccinations. [ 7 ] Intelligent automation can provide real-time insights on profitability and efficiency. However, in an April 2022 survey by Alchemmy , despite three quarters of businesses acknowledging the importance of artificial intelligence to their future development, just a quarter of business leaders (25%) considered intelligent automation a “game changer” in understanding current performance. 42% of CTOs saw a “shortage of talent” as the main obstacle to implementing intelligent automation in their business, while 36% of CEOs saw ‘upskilling and professional development of existing workforce’ as the most significant adoption barrier. [ 8 ] [ 9 ] IA is becoming increasingly accessible for firms of all sizes and is therefore expected to continue to grow rapidly in all industries. [ 10 ] This technology has the potential to change the workforce: as it advances, it will be able to perform increasingly complex and difficult tasks, [ 11 ] and it may expose certain workforce issues as well as change how tasks are allocated. [ 12 ] Commonly cited benefits include streamlined processes, improved customer service, and greater flexibility.
https://en.wikipedia.org/wiki/Intelligent_automation
An intelligent decision support system ( IDSS ) is a decision support system that makes extensive use of artificial intelligence (AI) techniques. Use of AI techniques in management information systems has a long history – indeed terms such as " Knowledge-based systems " (KBS) and "intelligent systems" have been used since the early 1980s to describe components of management systems, but the term "Intelligent decision support system" is thought to originate with Clyde Holsapple and Andrew Whinston [ 1 ] [ 2 ] in the late 1970s. Examples of specialized intelligent decision support systems include Flexible manufacturing systems (FMS), [ 3 ] intelligent marketing decision support systems [ 4 ] and medical diagnosis systems. [ 5 ] Ideally, an intelligent decision support system should behave like a human consultant: supporting decision makers by gathering and analysing evidence, identifying and diagnosing problems, proposing possible courses of action and evaluating such proposed actions. The aim of the AI techniques embedded in an intelligent decision support system is to enable these tasks to be performed by a computer, while emulating human capabilities as closely as possible. Many IDSS implementations are based on expert systems , [ 6 ] a well-established type of KBS that encodes knowledge and emulates the cognitive behaviours of human experts using predicate logic rules, and which has been shown to perform better than the original human experts in some circumstances. [ 7 ] [ 8 ] Expert systems emerged as practical applications in the 1980s [ 9 ] based on research in artificial intelligence performed during the late 1960s and early 1970s. [ 10 ] They typically combine knowledge of a particular application domain with an inference capability to enable the system to propose decisions or diagnoses. Accuracy and consistency can be comparable to (or even exceed) those of human experts when the decision parameters are well known (e.g. if a common disease is being diagnosed), but performance can be poor when novel or uncertain circumstances arise. Research in AI focused on enabling systems to respond to novelty and uncertainty in more flexible ways is starting to be used in IDSS. For example, intelligent agents [ 11 ] [ 12 ] that perform complex cognitive tasks without any need for human intervention have been used in a range of decision support applications. [ 13 ] Capabilities of these intelligent agents include knowledge sharing , machine learning , data mining , and automated inference . A range of AI techniques such as case based reasoning , rough sets [ 14 ] and fuzzy logic have also been used to enable decision support systems to perform better in uncertain conditions. A 2009 study proposed a multi-intelligence system named IILS, which combines several artificial intelligence techniques, to automate problem-solving processes within the logistics industry. The system involves integrating intelligence modules based on case-based reasoning, multi-agent systems, fuzzy logic, and artificial neural networks, aiming to offer advanced logistics solutions and to support well-informed, high-quality decisions that address a wide range of customer needs and challenges. [ 15 ]
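A toy forward-chaining rule evaluator, sketching very loosely how a knowledge-based IDSS can encode expert knowledge as if-then rules and derive recommendations from evidence. The rules and facts are invented for illustration and are not drawn from any system cited above.

```python
# Toy forward-chaining inference: each rule is a (antecedents, consequent) pair.
RULES = [
    ({"fever", "cough"}, "suspected_respiratory_infection"),
    ({"suspected_respiratory_infection", "shortness_of_breath"}, "recommend_chest_xray"),
    ({"machine_vibration_high", "temperature_high"}, "recommend_maintenance"),
]

def forward_chain(facts: set[str]) -> set[str]:
    """Repeatedly fire rules whose antecedents are satisfied until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in RULES:
            if antecedents <= derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived - facts  # return only the newly derived conclusions

print(forward_chain({"fever", "cough", "shortness_of_breath"}))
# -> {'suspected_respiratory_infection', 'recommend_chest_xray'}
```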
https://en.wikipedia.org/wiki/Intelligent_decision_support_system
Intelligent vehicular ad hoc networks ( InVANETs ) use WiFi IEEE 802.11p (the WAVE standard) for effective communication between vehicles with dynamic mobility. Measures such as media communication between vehicles can be enabled, as well as methods to track automotive vehicles. InVANET is not foreseen to replace current mobile ( cellular phone ) communication standards. "Older" designs within the IEEE 802.11 scope may refer just to IEEE 802.11b/g; more recent designs refer to the latest issues of IEEE 802.11p (WAVE, draft status). Due to inherent lag times, only the latter within the IEEE 802.11 scope is capable of coping with the typical dynamics of vehicle operation. Automotive vehicular information can be viewed on electronic maps using the Internet or specialized software. The advantage of a WiFi-based navigation system is that it can effectively locate a vehicle inside large campuses such as universities, airports, and tunnels. InVANET can be used as part of automotive electronics, which has to identify an optimal path for navigation with minimal traffic intensity. The system can also be used as a city guide to locate and identify landmarks in a new city. Communication capabilities in vehicles are the basis of an envisioned InVANET or intelligent transportation systems (ITS). Vehicles are enabled to communicate among themselves (vehicle-to-vehicle, V2V) and via roadside access points (vehicle-to-roadside, V2R), also called Road Side Units (RSUs). Vehicular communication is expected to contribute to safer and more efficient roads by providing timely information to drivers, and also to make travel more convenient. The integration of V2V and V2R communication is beneficial because V2R provides better service in sparse networks and for long-distance communication, whereas V2V enables direct communication over small to medium distances/areas and at locations where roadside access points are not available. Providing vehicle–vehicle and vehicle–roadside communication can considerably improve traffic safety and the comfort of driving and traveling. For communication in vehicular ad hoc networks, position-based routing has emerged as a promising candidate. For Internet access, Mobile IPv6 is a widely accepted solution to provide session continuity and reachability to the Internet for mobile nodes. While integrated solutions for the use of Mobile IPv6 in (non-vehicular) mobile ad hoc networks exist, a solution has been proposed that, built upon a Mobile IPv6 proxy-based architecture, selects the optimal communication mode (direct in-vehicle, vehicle–vehicle, or vehicle–roadside communication) and provides dynamic switching between the vehicle–vehicle and vehicle–roadside communication modes during a communication session when more than one communication mode is simultaneously available.
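A rough Python sketch of the mode-selection idea described above: prefer direct in-vehicle communication, fall back to V2V for short to medium ranges, and use a roadside unit (V2R) when the peer is out of direct range but infrastructure is reachable. The threshold, field names and fallback behaviour are assumptions for illustration, not part of IEEE 802.11p or of the cited proxy-based architecture.

```python
from dataclasses import dataclass

V2V_RANGE_M = 300.0  # assumed usable single-hop V2V range (illustrative)

@dataclass
class LinkState:
    same_vehicle: bool          # destination is inside the same vehicle
    distance_to_peer_m: float   # distance to the destination vehicle
    rsu_reachable: bool         # a roadside unit (RSU) is currently in range

def select_mode(link: LinkState) -> str:
    """Pick a communication mode, switching dynamically as the link state changes."""
    if link.same_vehicle:
        return "in-vehicle"
    if link.distance_to_peer_m <= V2V_RANGE_M:
        return "V2V"            # direct vehicle-to-vehicle for short/medium distances
    if link.rsu_reachable:
        return "V2R"            # roadside infrastructure for sparse or long-distance links
    return "store-and-forward"  # no path right now; buffer until one appears

print(select_mode(LinkState(False, 120.0, True)))    # V2V
print(select_mode(LinkState(False, 900.0, True)))    # V2R
print(select_mode(LinkState(False, 900.0, False)))   # store-and-forward
```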
https://en.wikipedia.org/wiki/Intelligent_vehicular_ad_hoc_network
Intelligiant is a water cannon invented by John Miscovich (1918 - August 22, 2014) in the post-WW2 years. His Intelligiant influenced the development of fire-fighting and hydraulic gold mining , with numerous other applications. Today it is known as a standard fire fighting cannon.
https://en.wikipedia.org/wiki/Intelligiant
Intensional logic is an approach to predicate logic that extends first-order logic , which has quantifiers that range over the individuals of a universe ( extensions ), by additional quantifiers that range over terms that may have such individuals as their value ( intensions ). The distinction between intensional and extensional entities is parallel to the distinction between sense and reference . Logic is the study of proof and deduction as manifested in language (abstracting from any underlying psychological or biological processes). [ 1 ] Logic is not a closed, completed science, and presumably, it will never stop developing: the logical analysis can penetrate into varying depths of the language [ 2 ] (sentences regarded as atomic, or splitting them to predicates applied to individual terms, or even revealing such fine logical structures like modal , temporal , dynamic , epistemic ones). In order to achieve its special goal, logic was forced to develop its own formal tools, most notably its own grammar, detached from simply making direct use of the underlying natural language. [ 3 ] Functors (also known as function words) belong to the most important categories in logical grammar (along with basic categories like sentence and individual name ): [ 4 ] a functor can be regarded as an "incomplete" expression with argument places to fill in. If we fill them in with appropriate subexpressions, then the resulting entirely completed expression can be regarded as a result, an output. [ 5 ] Thus, a functor acts like a function sign, [ 6 ] taking on input expressions, resulting in a new, output expression. [ 5 ] Semantics links expressions of language to the outside world. Also logical semantics has developed its own structure. Semantic values can be attributed to expressions in basic categories: the reference of an individual name (the "designated" object named by that) is called its extension ; and as for sentences, their truth value is their extension. [ 7 ] As for functors, some of them are simpler than others: extension can be attributed to them in a simple way. In case of a so-called extensional functor we can in a sense abstract from the "material" part of its inputs and output, and regard the functor as a function turning directly the extension of its input(s) into the extension of its output. Of course, it is assumed that we can do so at all: the extension of input expression(s) determines the extension of the resulting expression. Functors for which this assumption does not hold are called intensional . [ 8 ] Natural languages abound with intensional functors; [ 9 ] this can be illustrated by intensional statements . Extensional logic cannot reach inside such fine logical structures of the language, but stops at a coarser level. The attempts for such deep logical analysis have a long past: authors as early as Aristotle had already studied modal syllogisms . [ 10 ] Gottlob Frege developed a kind of two-dimensional semantics : for resolving questions like those of intensional statements , Frege introduced a distinction between two semantic values : sentences (and individual terms) have both an extension and an intension . [ 6 ] These semantic values can be interpreted, transferred also for functors (except for intensional functors, they have only intension). As mentioned, motivations for settling problems that belong today to intensional logic have a long past. As for attempts of formalizations, the development of calculi often preceded the finding of their corresponding formal semantics. 
Intensional logic is not alone in that: Gottlob Frege, too, accompanied his (extensional) calculus with detailed explanations of the semantical motivations, but the formal foundation of its semantics appeared only in the 20th century. Thus similar patterns sometimes repeated themselves in the history of the development of intensional logic as they had earlier for extensional logic. [ 11 ] There are some intensional logic systems that claim to fully analyze the common language: Modal logic is historically the earliest area in the study of intensional logic, originally motivated by formalizing "necessity" and "possibility" (more recently, this original motivation belongs to alethic logic , just one of the many branches of modal logic). [ 12 ] Modal logic can also be regarded as the simplest appearance of such studies: it extends extensional logic with just a few sentential functors: [ 13 ] these are intensional, and they are interpreted (in the metarules of semantics) as quantifying over possible worlds. For example, the Necessity operator (the 'box'), when applied to a sentence A, says that '"box A" is true in world i if and only if A is true in all worlds accessible from world i'. The corresponding Possibility operator (the 'diamond'), when applied to A, asserts that '"diamond A" is true in world i if and only if A is true in some world (at least one) accessible from world i'. The exact semantic content of these assertions therefore depends crucially on the nature of the accessibility relation. For example, is world i accessible from itself? The answer to this question characterizes the precise nature of the system, and many such systems exist, answering moral and temporal questions. (In a temporal system, the accessibility relation relates states or 'instants', and only the future is accessible from a given moment; the Necessity operator then corresponds to 'for all future moments' in this logic.) The operators are related to one another by dualities similar to those relating the existential and universal quantifiers [ 14 ] (for example by the analogous correspondents of De Morgan's laws ): something is necessary if and only if its negation is not possible, i.e. inconsistent. Syntactically, the operators are not quantifiers; they do not bind variables, [ 15 ] but govern whole sentences. This gives rise to the problem of referential opacity , i.e. the problem of quantifying over or 'into' modal contexts. The operators appear in the grammar as sentential functors, [ 14 ] and they are called modal operators . [ 15 ] As mentioned, precursors of modal logic include Aristotle . Medieval scholarly discussions accompanied its development, for example about de re versus de dicto modalities: stated in recent terms, in the de re modality the modal functor is applied to an open sentence , and the variable is bound by a quantifier whose scope includes the whole intensional subterm. [ 10 ] Modern modal logic began with the work of Clarence Irving Lewis , which was motivated by establishing the notion of strict implication . [ 16 ] The possible worlds approach enabled a more exact study of semantical questions. Exact formalization resulted in Kripke semantics (developed by Saul Kripke , Jaakko Hintikka , and Stig Kanger). [ 13 ] Already in 1951, Alonzo Church had developed an intensional calculus .
The semantical motivations were explained explicitly, though of course without the tools that we now use for establishing the semantics of modal logic in a formal way, because these had not yet been invented: [ 17 ] Church did not provide formal semantic definitions. [ 18 ] Later, the possible-worlds approach to semantics provided tools for a comprehensive study of intensional semantics. Richard Montague was able to preserve the most important advantages of Church's intensional calculus in his system. Unlike its forerunner, Montague grammar was built in a purely semantical way: a simpler treatment became possible, thanks to the new formal tools invented since Church's work. [ 17 ]
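A minimal sketch of the possible-worlds (Kripke) semantics for the box and diamond operators described above: a model consists of a set of worlds, an accessibility relation, and a valuation; box A holds at a world if and only if A holds at every accessible world, and diamond A if and only if A holds at some accessible world. The tuple encoding of formulas and the tiny example model are illustrative choices, not a standard notation.

```python
# Worlds, accessibility relation and valuation for a tiny Kripke model.
WORLDS = {"w1", "w2", "w3"}
ACCESS = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": set()}   # R(w, v) pairs
VALUATION = {"p": {"w2", "w3"}, "q": {"w2"}}               # worlds where each atom is true

# Formulas as nested tuples: "p", ("not", f), ("and", f, g), ("box", f), ("dia", f).
def holds(formula, world) -> bool:
    if isinstance(formula, str):                 # atomic proposition
        return world in VALUATION.get(formula, set())
    op = formula[0]
    if op == "not":
        return not holds(formula[1], world)
    if op == "and":
        return holds(formula[1], world) and holds(formula[2], world)
    if op == "box":                              # true iff true in ALL accessible worlds
        return all(holds(formula[1], v) for v in ACCESS[world])
    if op == "dia":                              # true iff true in SOME accessible world
        return any(holds(formula[1], v) for v in ACCESS[world])
    raise ValueError(f"unknown operator {op}")

print(holds(("box", "p"), "w1"))                 # True: p holds in w2 and w3
print(holds(("dia", "q"), "w1"))                 # True: q holds in w2
print(holds(("box", "q"), "w1"))                 # False: q fails in w3
# Duality: box p  is equivalent to  not dia not p
print(holds(("box", "p"), "w1") == holds(("not", ("dia", ("not", "p"))), "w1"))
```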
https://en.wikipedia.org/wiki/Intensional_logic
An intensity-duration-frequency curve ( IDF curve ) is a mathematical function that relates the intensity of an event (e.g. rainfall ) with its duration and frequency of occurrence. [ 1 ] Frequency is the inverse of the probability of occurrence. These curves are commonly used in hydrology for flood forecasting and in civil engineering for urban drainage design. However, IDF curves are also analysed in hydrometeorology because of the interest in the time concentration or time-structure of the rainfall , [ 2 ] [ 3 ] and it is also possible to define IDF curves for drought events. [ 4 ] [ 5 ] Additionally, applications of IDF curves to risk-based design are emerging outside of hydrometeorology; for example, some authors have developed IDF curves for food supply chain inflow shocks to US cities. [ 6 ] The IDF curves can take different mathematical expressions, theoretical or empirically fitted to observed event data. For each duration (e.g. 5, 10, 60, 120, 180 ... minutes), the empirical cumulative distribution function (ECDF) is obtained, and a determined frequency or return period is set. The empirical IDF curve is therefore given by the union of the points of equal frequency of occurrence and different duration and intensity. [ 7 ] Likewise, a theoretical or semi-empirical IDF curve is one whose mathematical expression is physically justified, but which presents parameters that must be estimated by empirical fits. There is a large number of empirical approaches that relate the intensity ($I$), the duration ($t$) and the return period ($p$) through fits to power laws involving parameters $a$, $c$ and $n$. In hydrometeorology , the simple power law (taking $c = 0$), $I = a\,t^{-n} = I_0\,(t/t_0)^{-n}$, is used as a measure of the time-structure of the rainfall, [ 2 ] where $I_0$ is defined as a reference intensity for a fixed duration $t_0$, i.e. $a = I_0 t_0^{n}$, and $n$ is a non-dimensional parameter known as the n -index . [ 2 ] [ 3 ] In a rainfall event, the equivalent of the IDF curve is called the Maximum Averaged Intensity (MAI) curve. [ 11 ] To obtain an IDF curve from a probability distribution $F(x)$, it is necessary to mathematically isolate the total amount or depth of the event $x$, which is directly related to the average intensity $I$ and the duration $t$ by the equation $x = It$; since the return period $p$ is defined as the inverse of $1 - F(x)$, the function $f(p)$ is found as the inverse of $F(x)$.
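A short numerical sketch of the simple power-law case above: with $c = 0$ and $a = I_0 t_0^{n}$, the intensity for a duration $t$ is $I(t) = I_0 (t/t_0)^{-n}$. The parameter values used below are invented for illustration, not fitted to any real rainfall record.

```python
def idf_intensity(t_minutes: float, i0: float, t0: float, n: float) -> float:
    """Simple power-law IDF curve with c = 0: I(t) = I0 * (t / t0) ** (-n).

    i0 is the reference intensity (e.g. mm/h) for the reference duration t0
    (in minutes); n is the dimensionless n-index describing the time structure.
    """
    return i0 * (t_minutes / t0) ** (-n)

# Illustrative (made-up) parameters: 60 mm/h for a 60-minute reference duration,
# with an n-index of 0.6.
for t in (5, 10, 30, 60, 120, 180):
    print(f"{t:4d} min -> {idf_intensity(t, i0=60.0, t0=60.0, n=0.6):6.1f} mm/h")
```

Shorter durations give higher intensities and longer durations lower ones, which is the characteristic decreasing shape of an IDF curve for a fixed return period.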
https://en.wikipedia.org/wiki/Intensity-duration-frequency_curve
In physics and many other areas of science and engineering the intensity or flux of radiant energy is the power transferred per unit area , where the area is measured on the plane perpendicular to the direction of propagation of the energy. [ a ] In the SI system, it has units watts per square metre (W/m²), or kg⋅s⁻³ in base units . Intensity is used most frequently with waves such as acoustic waves ( sound ), matter waves such as electrons in electron microscopes , and electromagnetic waves such as light or radio waves , in which case the average power transfer over one period of the wave is used. Intensity can be applied to other circumstances where energy is transferred. For example, one could calculate the intensity of the kinetic energy carried by drops of water from a garden sprinkler . The word "intensity" as used here is not synonymous with " strength ", " amplitude ", " magnitude ", or " level ", as it sometimes is in colloquial speech. Intensity can be found by taking the energy density (energy per unit volume) at a point in space and multiplying it by the velocity at which the energy is moving. The resulting vector has the units of power divided by area (i.e., surface power density ). The intensity of a wave is proportional to the square of its amplitude. For example, the intensity of an electromagnetic wave is proportional to the square of the wave's electric field amplitude. If a point source is radiating energy in all directions (producing a spherical wave ), and no energy is absorbed or scattered by the medium, then the intensity decreases in proportion to the distance from the object squared. This is an example of the inverse-square law . Applying the law of conservation of energy , if the net power emanating is constant, $P = \int \mathbf{I} \cdot d\mathbf{A}$, where $P$ is the net power radiated and $d\mathbf{A}$ is a vector element of the surface through which the intensity $\mathbf{I}$ passes. If one integrates a uniform intensity, $|I| = $ const., over a surface that is perpendicular to the intensity vector, for instance over a sphere centered around the point source, the equation becomes $P = |I| \cdot A_{\mathrm{surf}} = |I| \cdot 4\pi r^{2}$, where $A_{\mathrm{surf}}$ is the surface area of a sphere of radius $r$. Solving for $|I|$ gives $|I| = P / A_{\mathrm{surf}} = P / (4\pi r^{2})$. If the medium is damped, then the intensity drops off more quickly than the above equation suggests. Anything that can transmit energy can have an intensity associated with it. For a monochromatic propagating electromagnetic wave, such as a plane wave or a Gaussian beam , if $E$ is the complex amplitude of the electric field , then the time-averaged energy density of the wave, travelling in a non-magnetic material, is given by $\langle U \rangle = \frac{n^{2}\varepsilon_{0}}{2}|E|^{2}$, and the local intensity is obtained by multiplying this expression by the wave velocity $c/n$: $I = \frac{c\,n\,\varepsilon_{0}}{2}|E|^{2}$, where $n$ is the refractive index, $c$ is the speed of light in vacuum, and $\varepsilon_{0}$ is the vacuum permittivity. For non-monochromatic waves, the intensity contributions of different spectral components can simply be added. The treatment above does not hold for arbitrary electromagnetic fields. For example, an evanescent wave may have a finite electrical amplitude while not transferring any power. The intensity should then be defined as the magnitude of the Poynting vector .
[ 1 ] For electron beams , intensity is the probability of electrons reaching some particular position on a detector (e.g. a charge-coupled device [ 2 ] ) which is used to produce images that are interpreted in terms of both microstructure of inorganic or biological materials, as well as atomic scale structure. [ 3 ] The map of the intensity of scattered electrons or x-rays as a function of direction is also extensively used in crystallography . [ 3 ] [ 4 ] In photometry and radiometry intensity has a different meaning: it is the luminous or radiant power per unit solid angle . This can cause confusion in optics , where intensity can mean any of radiant intensity , luminous intensity or irradiance , depending on the background of the person using the term. Radiance is also sometimes called intensity , especially by astronomers and astrophysicists, and in heat transfer .
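Two of the relations above as a quick numerical sketch: the inverse-square intensity of an isotropic point source, $|I| = P/(4\pi r^{2})$, and the intensity of a monochromatic plane wave in a non-magnetic medium obtained from its electric-field amplitude, $I = (c\,n\,\varepsilon_{0}/2)|E|^{2}$. The input numbers are illustrative.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m
C = 299_792_458.0         # speed of light in vacuum, m/s

def point_source_intensity(power_w: float, r_m: float) -> float:
    """|I| = P / (4 pi r^2) for an isotropic source in a non-absorbing medium."""
    return power_w / (4.0 * math.pi * r_m ** 2)

def plane_wave_intensity(e_amplitude_v_per_m: float, n: float = 1.0) -> float:
    """I = (c * n * eps0 / 2) * |E|^2 for a monochromatic wave in a non-magnetic medium."""
    return 0.5 * C * n * EPS0 * e_amplitude_v_per_m ** 2

# A 100 W isotropic source observed at 2 m:
print(f"{point_source_intensity(100.0, 2.0):.2f} W/m^2")   # ~1.99 W/m^2
# A field amplitude of 1000 V/m in vacuum:
print(f"{plane_wave_intensity(1000.0):.1f} W/m^2")          # ~1327 W/m^2
```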
https://en.wikipedia.org/wiki/Intensity_(physics)
Intensity-fading MALDI is a term coined to rename an existing method originally reported in 1999 to indirectly study a Protein–protein interaction or other protein complex [ 1 ] and the same year applied to a biological mixture to study the antigenicity of the influenza virus. [ 2 ] It involves treating a protein and a potential binding partner with a site-specific endoproteinase with the binding sites identified by their reduced area (or intensity) in a MALDI mass spectrum compared to that of non-bound protein control. It was falsely reported as new and novel in a later application by a Spanish group. The true origins of the approach and a range of applications including those employing gel based separations, drug-protein interactions and the relative affinity of such interactions, are described in a review article. [ 3 ]
https://en.wikipedia.org/wiki/Intensity_fading_MALDI_mass_spectrometry
In economics , a cardinal utility expresses not only which of two outcomes is preferred, but also the intensity of preferences , i.e. how much better or worse one outcome is compared to another. [ 1 ] In consumer choice theory , economists originally attempted to replace cardinal utility with the apparently weaker concept of ordinal utility . Cardinal utility appears to impose the assumption that levels of absolute satisfaction exist , so magnitudes of increments to satisfaction can be compared across different situations. However, economists in the 1940s proved that under mild conditions, ordinal utilities imply cardinal utilities. This result is now known as the von Neumann–Morgenstern utility theorem ; many similar utility representation theorems exist in other contexts. In 1738, Daniel Bernoulli was the first to theorize about the marginal value of money. He assumed that the value of an additional amount is inversely proportional to the pecuniary possessions which a person already owns. Since Bernoulli tacitly assumed that an interpersonal measure for the utility reaction of different persons can be discovered, he was then inadvertently using an early conception of cardinality. [ 2 ] Bernoulli's imaginary logarithmic utility function and Gabriel Cramer's $U = W^{1/2}$ function were conceived at the time not for a theory of demand but to solve the St. Petersburg game . Bernoulli assumed that "a poor man generally obtains more utility than a rich man from an equal gain", [ 3 ] an approach that is more profound than the simple mathematical expectation of money as it involves a law of moral expectation . Early theorists of utility considered that it had physically quantifiable attributes. They thought that utility behaved like the magnitudes of distance or time, in which the simple use of a ruler or stopwatch resulted in a distinguishable measure. "Utils" was the name actually given to the units in a utility scale. In the Victorian era many aspects of life were succumbing to quantification. [ 4 ] The theory of utility soon began to be applied to moral-philosophy discussions. The essential idea in utilitarianism is to judge people's decisions by looking at their change in utils and measure whether they are better off. The main forerunner of the utilitarian principles since the end of the 18th century was Jeremy Bentham , who believed that utility could be measured by some complex introspective examination and that it should guide the design of social policies and laws. For Bentham a scale of pleasure has as a unit of intensity "the degree of intensity possessed by that pleasure which is the faintest of any that can be distinguished to be pleasure"; [ 5 ] he also stated that as these pleasures increase in intensity, higher and higher numbers could represent them. [ 5 ] In the 18th and 19th centuries utility's measurability received plenty of attention from European schools of political economy, most notably through the work of marginalists (e.g., William Stanley Jevons , [ 6 ] Léon Walras , Alfred Marshall ). However, none of them offered solid arguments to support the assumption of measurability. In Jevons's case, he added to the later editions of his work a note on the difficulty of estimating utility with accuracy. Walras, too, struggled for many years before he could even attempt to formalize the assumption of measurability.
[ 7 ] Marshall was ambiguous about the measurability of hedonism because he adhered to its psychological-hedonistic properties but he also argued that it was "unrealistical" to do so. [ 8 ] Supporters of cardinal utility theory in the 19th century suggested that market prices reflect utility, although they did not say much about their compatibility (i.e., prices being objective while utility is subjective). Accurately measuring subjective pleasure (or pain ) seemed awkward, as the thinkers of the time were surely aware. They renamed utility in imaginative ways such as subjective wealth , overall happiness , moral worth , psychic satisfaction , or ophélimité . During the second half of the 19th century many studies related to this fictional magnitude—utility—were conducted, but the conclusion was always the same: it proved impossible to definitively say whether a good is worth 50, 75, or 125 utils to a person, or to two different people. Moreover, the mere dependence of utility on notions of hedonism led academic circles to be skeptical of this theory. [ 9 ] Francis Edgeworth was also aware of the need to ground the theory of utility into the real world. He discussed the quantitative estimates that a person can make of his own pleasure or the pleasure of others, borrowing methods developed in psychology to study hedonic measurement: psychophysics . This field of psychology was built on work by Ernst H. Weber , but around the time of World War I, psychologists grew discouraged of it. [ 10 ] [ 11 ] In the late 19th century, Carl Menger and his followers from the Austrian school of economics undertook the first successful departure from measurable utility, in the clever form of a theory of ranked uses. Despite abandoning the thought of quantifiable utility (i.e. psychological satisfaction mapped into the set of real numbers) Menger managed to establish a body of hypothesis about decision-making, resting solely on a few axioms of ranked preferences over the possible uses of goods and services. His numerical examples are "illustrative of ordinal, not cardinal, relationships". [ 12 ] However, there are other interpretations of Carl Menger's work. Ivan Moscati and J. Huston McCulloch argue that Menger was a classical cardinalist, as his numerical examples are not merely illustrative but represent explicit arithmetic proportions of value between economic goods. [ 13 ] [ 14 ] Arithmetic proportions, sums, and multiplications are inherently cardinal and do not exist in an ordinal paradigm. Menger also explicitly states the following: "Only the satisfaction of our needs has direct and immediate significance to us. In each concrete instance, this significance is measured by the importance of the various satisfactions for our lives and well-being. We next attribute the exact quantitative magnitude of this importance to the specific goods on which we are conscious of being directly dependent for the satisfactions in question" [ 15 ] Around the turn of the 19th century neoclassical economists started to embrace alternative ways to deal with the measurability issue. By 1900, Pareto was hesitant about accurately measuring pleasure or pain because he thought that such a self-reported subjective magnitude lacked scientific validity. He wanted to find an alternative way to treat utility that did not rely on erratic perceptions of the senses. 
[ 16 ] Pareto's main contribution to ordinal utility was to assume that higher indifference curves have greater utility, but how much greater does not need to be specified to obtain the result of increasing marginal rates of substitution. The works and manuals of Vilfredo Pareto, Francis Edgeworth, Irving Fisher , and Eugene Slutsky departed from cardinal utility and served as pivots for others to continue the trend on ordinality. According to Viner, [ 17 ] these economic thinkers came up with a theory that explained the negative slopes of demand curves. Their method avoided the measurability of utility by constructing some abstract indifference curve map . During the first three decades of the 20th century, economists from Italy and Russia became familiar with the Paretian idea that utility does not need to be cardinal. According to Schultz, [ 18 ] by 1931 the idea of ordinal utility was not yet embraced by American economists. The breakthrough occurred when a theory of ordinal utility was put together by John Hicks and Roy Allen in 1934. [ 19 ] In fact, pages 54–55 of this paper contain the first ever use of the term "cardinal utility". [ 20 ] The first treatment of a class of utility functions preserved by affine transformations, though, was made in 1934 by Oskar Lange. [ 21 ] In 1944 Frank Knight argued extensively for cardinal utility. In the 1960s, Parducci studied human judgements of magnitudes and suggested a range-frequency theory. [ 22 ] Since the late 20th century, economists have had a renewed interest in the measurement issues of happiness . [ 23 ] [ 24 ] This field has been developing methods, surveys and indices to measure happiness. Several properties of cardinal utility functions can be derived using tools from measure theory and set theory . A utility function is considered to be measurable if the strength of preference or intensity of liking of a good or service is determined with precision by the use of some objective criteria. For example, suppose that eating an apple gives a person exactly half the pleasure of eating an orange. This would be a measurable utility if and only if the test employed for its direct measurement is based on an objective criterion that could let any external observer repeat the results accurately. [ 25 ] One hypothetical way to achieve this would be by the use of a hedonometer , which was the instrument suggested by Edgeworth to be capable of registering the height of pleasure experienced by people, diverging according to a law of errors. [ 10 ] Before the 1930s, the measurability of utility functions was erroneously labeled as cardinality by economists. A different meaning of cardinality was used by economists who followed the formulation of Hicks-Allen, where two cardinal utility functions are considered the same if they preserve preference orderings uniquely up to positive affine transformations . [ 26 ] [ 27 ] Around the end of the 1940s, some economists even rushed to argue that the von Neumann–Morgenstern axiomatization of expected utility had resurrected measurability. [ 16 ] The confusion between cardinality and measurability was not to be solved until the works of Armen Alchian , [ 28 ] William Baumol, [ 29 ] and John Chipman. [ 30 ] The title of Baumol's paper, "The cardinal utility which is ordinal", expressed well the semantic mess of the literature at the time. It is helpful to consider the same problem as it appears in the construction of scales of measurement in the natural sciences.
[ 31 ] In the case of temperature there are two degrees of freedom for its measurement – the choice of unit and the zero. Different temperature scales map its intensity in different ways. In the Celsius scale the zero is chosen to be the point where water freezes, and likewise, in cardinal utility theory one would be tempted to think that the choice of zero would correspond to a good or service that brings exactly 0 utils. However, this is not necessarily true. The mathematical index remains cardinal, even if the zero gets moved arbitrarily to another point, or if the choice of scale is changed, or if both the scale and the zero are changed. Every measurable entity maps into a cardinal function but not every cardinal function is the result of the mapping of a measurable entity. The point of this example is that (as with temperature) it is still possible to predict something about the combination of two values of some utility function, even if the utils get transformed into entirely different numbers, as long as it remains a linear transformation. Von Neumann and Morgenstern stated that the question of measurability of physical quantities was dynamic. For instance, temperature was originally a number only up to any monotone transformation, but the development of ideal-gas thermometry led to transformations in which the absolute zero and absolute unit were missing. Subsequent developments of thermodynamics even fixed the absolute zero so that the transformation system in thermodynamics consists only of the multiplication by constants. According to Von Neumann and Morgenstern (1944, p. 23), "For utility the situation seems to be of a similar nature [to temperature]". The following quote from Alchian served to clarify once and for all [ citation needed ] the real nature of utility functions: Can we assign a set of numbers (measures) to the various entities and predict that the entity with the largest assigned number (measure) will be chosen? If so, we could christen this measure "utility" and then assert that choices are made so as to maximize utility. It is an easy step to the statement that "you are maximizing your utility", which says no more than that your choice is predictable according to the size of some assigned numbers. For analytical convenience it is customary to postulate that an individual seeks to maximize something subject to some constraints. The thing  – or numerical measure of the "thing" – which he seeks to maximize is called "utility". Whether or not utility is of some kind of glow or warmth, or happiness, is here irrelevant; all that counts is that we can assign numbers to entities or conditions which a person can strive to realize. Then we say the individual seeks to maximize some function of those numbers. Unfortunately, the term "utility" has by now acquired so many connotations that it is difficult to realize that for present purposes utility has no more meaning than this. In 1955 Patrick Suppes and Muriel Winet solved the issue of the representability of preferences by a cardinal utility function and derived the set of axioms and primitive characteristics required for this utility index to work. [ 32 ] Suppose an agent is asked to rank his preferences of A relative to B and his preferences of B relative to C .
If he finds that he can state, for example, that his degree of preference of A to B exceeds his degree of preference of B to C , we could summarize this information by any triplet of numbers satisfying the inequalities $U_A > U_B > U_C$ and $U_A - U_B > U_B - U_C$. If A and B were sums of money, the agent could vary the sum of money represented by B until he could tell us that he found his degree of preference of A over the revised amount $B'$ equal to his degree of preference of $B'$ over C . If he finds such a $B'$, then the results of this last operation would be expressed by any triplet of numbers satisfying the relationships $U_A > U_{B'} > U_C$ and $U_A - U_{B'} = U_{B'} - U_C$. Any two triplets obeying these relationships must be related by a linear transformation; they represent utility indices differing only by scale and origin. In this case, "cardinality" means nothing more than being able to give consistent answers to these particular questions. This experiment does not require measurability of utility. Itzhak Gilboa gives a sound explanation of why measurability can never be attained solely by introspection : It might have happened to you that you were carrying a pile of papers, or clothes, and didn't notice that you dropped a few. The decrease in the total weight you were carrying was probably not large enough for you to notice. Two objects may be too close in terms of weight for us to notice the difference between them. This problem is common to perception in all our senses. If I ask whether two rods are of the same length or not, there are differences that will be too small for you to notice. The same would apply to your perception of sound (volume, pitch), light, temperature, and so forth... According to this view, those situations where a person just cannot tell the difference between A and B will lead to indifference not because of a consistency of preferences, but because of a misperception of the senses. Moreover, human senses adapt to a given level of stimulation and then register changes from that baseline. [ 34 ] Suppose a certain agent has a preference ordering over random outcomes (lotteries). If the agent can be queried about his preferences, it is possible to construct a cardinal utility function that represents these preferences. This is the core of the von Neumann–Morgenstern utility theorem . Among welfare economists of the utilitarian school it has been the general tendency to take satisfaction (in some cases, pleasure) as the unit of welfare. If the function of welfare economics is to contribute data which will serve the social philosopher or the statesman in the making of welfare judgments, this tendency leads, perhaps, to a hedonistic ethics. [ 35 ] Under this framework, actions (including production of goods and provision of services) are judged by their contributions to the subjective wealth of people. In other words, it provides a way of judging the "greatest good to the greatest number of persons". An act that reduces one person's utility by 75 utils while increasing two others' by 50 utils each has increased overall utility by 25 utils and is thus a positive contribution; one that costs the first person 125 utils while giving the same 50 each to two other people has resulted in a net loss of 25 utils. If a class of utility functions is cardinal, intrapersonal comparisons of utility differences are allowed.
If, in addition, some comparisons of utility are meaningful interpersonally, the linear transformations used to produce the class of utility functions must be restricted across people. An example is cardinal unit comparability. In that information environment, admissible transformations are increasing affine functions and, in addition, the scaling factor must be the same for everyone. This information assumption allows for interpersonal comparisons of utility differences, but utility levels cannot be compared interpersonally because the intercept of the affine transformations may differ across people. [ 36 ] This type of index involves choices under risk. In this case, A , B , and C are lotteries associated with outcomes. Unlike cardinal utility theory under certainty, in which the possibility of moving from preferences to quantified utility was almost trivial, here it is paramount to be able to map preferences into the set of real numbers, so that the operation of mathematical expectation can be executed. Once the mapping is done, the introduction of additional assumptions would result in a consistent behavior of people regarding fair bets. But fair bets are, by definition, the result of comparing a gamble with an expected value of zero to some other gamble. Although it is impossible to model attitudes toward risk if one doesn't quantify utility, the theory should not be interpreted as measuring strength of preference under certainty. [ 37 ] Suppose that certain outcomes are associated with three states of nature, so that $x_3$ is preferred over $x_2$, which in turn is preferred over $x_1$; this set of outcomes, $X$, can be assumed to be a calculable money-prize in a controlled game of chance, unique up to one positive proportionality factor depending on the currency unit. Let $L_1$ and $L_2$ be two lotteries, each assigning probabilities $p_1$, $p_2$, and $p_3$ to the outcomes $x_1$, $x_2$, and $x_3$ respectively, and assume that someone has a preference structure under risk in which $L_1$ is preferred over $L_2$. By modifying the values of $p_1$ and $p_3$ in $L_1$, eventually there will be some appropriate values ($L_1'$) for which she is found to be indifferent between it and $L_2$. Expected utility theory then requires the expected utilities of $L_1'$ and $L_2$ to be equal; in this example from Majumdar, [ 38 ] fixing the zero value of the utility index such that the utility of $x_1$ is 0, and choosing the scale so that the utility of $x_2$ equals 1, the indifference condition determines the utility of $x_3$. Models of utility with several periods, in which people discount future values of utility, need to employ cardinalities in order to have well-behaved utility functions. According to Paul Samuelson the maximization of the discounted sum of future utilities implies that a person can rank utility differences. [ 39 ] Some authors have commented on the misleading nature of the terms "cardinal utility" and "ordinal utility", as used in economic jargon: These terms, which seem to have been introduced by Hicks and Allen (1934), bear scant if any relation to the mathematicians' concept of ordinal and cardinal numbers; rather they are euphemisms for the concepts of order-homomorphism to the real numbers and group-homomorphism to the real numbers. There remain economists who believe that utility, if it cannot be measured, at least can be approximated somewhat to provide some form of measurement, similar to how prices, which have no uniform unit to provide an actual price level, could still be indexed to provide an "inflation rate" (which is actually a level of change in the prices of weighted indexed products).
These measures are not perfect but can act as a proxy for utility. Lancaster's [ 40 ] characteristics approach to consumer demand illustrates this point. The two types of utility function common in economics are usually contrasted as follows: an ordinal utility function is preserved by any increasing monotone transformation, so only the ranking of alternatives is meaningful, whereas a cardinal utility function is preserved only by positive affine transformations, so comparisons of utility differences are also meaningful.
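A minimal numerical sketch of the lottery-based (von Neumann–Morgenstern) construction described above: fixing $u(x_1) = 0$ and $u(x_2) = 1$, indifference between two lotteries over $\{x_1, x_2, x_3\}$ pins down $u(x_3)$ through the expected-utility condition. The probabilities below are invented for illustration and are not Majumdar's original numbers.

```python
def solve_u3(p1, p2, q1, q2):
    """Given indifference between lotteries L1' = (p1, p2, p3) and L2 = (q1, q2, q3)
    over outcomes (x1, x2, x3), with u(x1) = 0 and u(x2) = 1 fixed, solve
    p2 + p3*u3 = q2 + q3*u3 for u3, the cardinal utility of x3."""
    p3, q3 = 1 - p1 - p2, 1 - q1 - q2
    if p3 == q3:
        raise ValueError("lotteries put the same weight on x3; u3 is not identified")
    return (q2 - p2) / (p3 - q3)

def expected_utility(probs, utils):
    return sum(p * u for p, u in zip(probs, utils))

# Illustrative indifference point: L1' = (0.5, 0.0, 0.5) ~ L2 = (0.0, 0.8, 0.2)
u3 = solve_u3(0.5, 0.0, 0.0, 0.8)
utils = [0.0, 1.0, u3]
print(f"u(x3) = {u3:.3f}")  # > 1, consistent with x3 being preferred to x2
print(expected_utility([0.5, 0.0, 0.5], utils), expected_utility([0.0, 0.8, 0.2], utils))
```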
https://en.wikipedia.org/wiki/Intensity_of_preference
Physical or chemical properties of materials and systems can often be categorized as being either intensive or extensive , according to how the property changes when the size (or extent) of the system changes. The terms "intensive and extensive quantities" were introduced into physics by German mathematician Georg Helm in 1898, and by American physicist and chemist Richard C. Tolman in 1917. [ 1 ] [ 2 ] According to International Union of Pure and Applied Chemistry (IUPAC), an intensive property or intensive quantity is one whose magnitude is independent of the size of the system. [ 3 ] An intensive property is not necessarily homogeneously distributed in space; it can vary from place to place in a body of matter and radiation. Examples of intensive properties include temperature , T ; refractive index , n ; density , ρ ; and hardness , η . By contrast, an extensive property or extensive quantity is one whose magnitude is additive for subsystems. [ 4 ] Examples include mass , volume and Gibbs energy . [ 5 ] Not all properties of matter fall into these two categories. For example, the square root of the volume is neither intensive nor extensive. [ 1 ] If a system is doubled in size by juxtaposing a second identical system, the value of an intensive property equals the value for each subsystem and the value of an extensive property is twice the value for each subsystem. However the property √V is instead multiplied by √2 . The distinction between intensive and extensive properties has some theoretical uses. For example, in thermodynamics, the state of a simple compressible system is completely specified by two independent, intensive properties, along with one extensive property, such as mass. Other intensive properties are derived from those two intensive variables. An intensive property is a physical quantity whose value does not depend on the amount of substance which was measured. The most obvious intensive quantities are ratios of extensive quantities. In a homogeneous system divided into two halves, all its extensive properties, in particular its volume and its mass, are divided into two halves. All its intensive properties, such as the mass per volume (mass density) or volume per mass ( specific volume ), must remain the same in each half. The temperature of a system in thermal equilibrium is the same as the temperature of any part of it, so temperature is an intensive quantity. If the system is divided by a wall that is permeable to heat or to matter, the temperature of each subsystem is identical. Additionally, the boiling temperature of a substance is an intensive property. For example, the boiling temperature of water is 100 °C at a pressure of one atmosphere , regardless of the quantity of water remaining as liquid. Examples of intensive properties include: [ 5 ] [ 2 ] [ 1 ] See List of materials properties for a more exhaustive list specifically pertaining to materials. An extensive property is a physical quantity whose value is proportional to the size of the system it describes, [ 8 ] or to the quantity of matter in the system. For example, the mass of a sample is an extensive quantity; it depends on the amount of substance. The related intensive quantity is the density which is independent of the amount. The density of water is approximately 1g/mL whether you consider a drop of water or a swimming pool, but the mass is different in the two cases. 
Dividing one extensive property by another extensive property gives an intensive property—for example: mass (extensive) divided by volume (extensive) gives density (intensive). [ 9 ] Any extensive quantity E for a sample can be divided by the sample's volume, to become the "E density" for the sample; similarly, any extensive quantity "E" can be divided by the sample's mass, to become the sample's "specific E"; extensive quantities "E" which have been divided by the number of moles in their sample are referred to as "molar E". Examples of extensive properties include: [ 5 ] [ 2 ] [ 1 ] In thermodynamics, some extensive quantities measure amounts that are conserved in a thermodynamic process of transfer. They are transferred across a wall between two thermodynamic systems or subsystems. For example, species of matter may be transferred through a semipermeable membrane. Likewise, volume may be thought of as transferred in a process in which there is a motion of the wall between two systems, increasing the volume of one and decreasing that of the other by equal amounts. On the other hand, some extensive quantities measure amounts that are not conserved in a thermodynamic process of transfer between a system and its surroundings. In a thermodynamic process in which a quantity of energy is transferred from the surroundings into or out of a system as heat, a corresponding quantity of entropy in the system respectively increases or decreases, but, in general, not in the same amount as in the surroundings. Likewise, a change in the amount of electric polarization in a system is not necessarily matched by a corresponding change in electric polarization in the surroundings. In a thermodynamic system, transfers of extensive quantities are associated with changes in respective specific intensive quantities. For example, a volume transfer is associated with a change in pressure. An entropy change is associated with a temperature change. A change in the amount of electric polarization is associated with an electric field change. The transferred extensive quantities and their associated respective intensive quantities have dimensions that multiply to give the dimensions of energy. The two members of such respective specific pairs are mutually conjugate. Either one, but not both, of a conjugate pair may be set up as an independent state variable of a thermodynamic system. Conjugate setups are associated by Legendre transformations . The ratio of two extensive properties of the same object or system is an intensive property. For example, the ratio of an object's mass and volume, which are two extensive properties, is density, which is an intensive property. [ 10 ] More generally properties can be combined to give new properties, which may be called derived or composite properties. For example, the base quantities [ 11 ] mass and volume can be combined to give the derived quantity [ 12 ] density. These composite properties can sometimes also be classified as intensive or extensive. Suppose a composite property $F$ is a function of a set of intensive properties $\{a_i\}$ and a set of extensive properties $\{A_j\}$, which can be written as $F(\{a_i\}, \{A_j\})$. If the size of the system is changed by some scaling factor $\lambda$, only the extensive properties will change, since intensive properties are independent of the size of the system.
The scaled system, then, can be represented as $F(\{a_i\}, \{\lambda A_j\})$. Intensive properties are independent of the size of the system, so the property $F$ is an intensive property if, for all values of the scaling factor $\lambda$, $F(\{a_i\}, \{\lambda A_j\}) = F(\{a_i\}, \{A_j\})$. (This is equivalent to saying that intensive composite properties are homogeneous functions of degree 0 with respect to $\{A_j\}$.) It follows, for example, that the ratio of two extensive properties is an intensive property. To illustrate, consider a system having a certain mass $m$ and volume $V$. The density $\rho$ is equal to mass (extensive) divided by volume (extensive): $\rho = m/V$. If the system is scaled by the factor $\lambda$, then the mass and volume become $\lambda m$ and $\lambda V$, and the density becomes $\rho = \lambda m / (\lambda V)$; the two $\lambda$s cancel, so this could be written mathematically as $\rho(\lambda m, \lambda V) = \rho(m, V)$, which is analogous to the equation for $F$ above. The property $F$ is an extensive property if, for all $\lambda$, $F(\{a_i\}, \{\lambda A_j\}) = \lambda F(\{a_i\}, \{A_j\})$. (This is equivalent to saying that extensive composite properties are homogeneous functions of degree 1 with respect to $\{A_j\}$.) It follows from Euler's homogeneous function theorem that $F(\{a_i\}, \{A_j\}) = \sum_j A_j \left( \frac{\partial F}{\partial A_j} \right)$, where the partial derivative is taken with all parameters constant except $A_j$. [ 13 ] This last equation can be used to derive thermodynamic relations. A specific property is the intensive property obtained by dividing an extensive property of a system by its mass. For example, heat capacity is an extensive property of a system. Dividing heat capacity $C_p$ by the mass of the system gives the specific heat capacity $c_p$, which is an intensive property. When the extensive property is represented by an upper-case letter, the symbol for the corresponding intensive property is usually represented by a lower-case letter. Common examples are given in the table below. [ 5 ] If the amount of substance in moles can be determined, then each of these thermodynamic properties may be expressed on a molar basis, and their name may be qualified with the adjective molar , yielding terms such as molar volume, molar internal energy, molar enthalpy, and molar entropy. The symbol for molar quantities may be indicated by adding a subscript "m" to the corresponding extensive property. For example, molar enthalpy is $H_{\mathrm{m}}$. [ 5 ] Molar Gibbs free energy is commonly referred to as chemical potential , symbolized by $\mu$, particularly when discussing a partial molar Gibbs free energy $\mu_i$ for a component $i$ in a mixture. For the characterization of substances or reactions, tables usually report the molar properties referred to a standard state . In that case a superscript $^{\circ}$ is added to the symbol. Examples: The general validity of the division of physical properties into extensive and intensive kinds has been addressed in the course of science.
Redlich noted that, although physical properties and especially thermodynamic properties are most conveniently defined as either intensive or extensive, these two categories are not all-inclusive and some well-defined concepts like the square-root of a volume conform to neither definition. [ 1 ] Other systems, for which standard definitions do not provide a simple answer, are systems in which the subsystems interact when combined. Redlich pointed out that the assignment of some properties as intensive or extensive may depend on the way subsystems are arranged. For example, if two identical galvanic cells are connected in parallel , the voltage of the system is equal to the voltage of each cell, while the electric charge transferred (or the electric current ) is extensive. However, if the same cells are connected in series , the charge becomes intensive and the voltage extensive. [ 1 ] The IUPAC definitions do not consider such cases. [ 5 ] Some intensive properties do not apply at very small sizes. For example, viscosity is a macroscopic quantity and is not relevant for extremely small systems. Likewise, at a very small scale color is not independent of size, as shown by quantum dots , whose color depends on the size of the "dot".
https://en.wikipedia.org/wiki/Intensive_and_extensive_properties
An intentional radiator is any device that is deliberately designed to produce radio waves . Radio transmitters of all kinds, including the garage door opener, cordless telephone, cellular phone , wireless video sender, wireless microphone, and many others fall into this category. In the United States, intentional radiators are regulated under 47 CFR Part 15, Subpart C .
https://en.wikipedia.org/wiki/Intentional_radiator
The Inter-Agency Space Debris Coordination Committee ( IADC ), founded in 1993, is an inter-governmental forum whose aim is to co-ordinate efforts to deal with debris in orbit around the Earth. The primary purposes of the IADC are information exchange on space debris research activities, facilitating opportunities for joint research, and reviewing progress of ongoing activities. All of these are designed to support identification of space debris mitigation options. [ 1 ] In March 2020, the organization developed recommendations that each program or project establish and document a feasible Space Debris Mitigation Plan. The plan should include the following items: [ 2 ] Members of the IADC include: [ 1 ]
https://en.wikipedia.org/wiki/Inter-Agency_Space_Debris_Coordination_Committee
The Inter-American Biodiversity Information Network ( IABIN ) is a network dedicated to the adoption and promotion of ecoinformatics standards and protocols in all the countries of the Americas , thus facilitating the sound use of biological information for conservation and sustainable use of biodiversity. It is primarily an inter-governmental initiative but has strong participation from a wide range of non-governmental partners. The creation of IABIN in 1996 was mandated by the Heads of State at the Santa Cruz Summit of the Americas meeting in Bolivia . The Summit requested the Organization of American States (OAS) to act as the diplomatic host of the network. Partnerships with similar or related initiatives are a critical part of the network’s strategy, so that existing standards or protocols can be promoted and not reinvented. For example, the Global Biodiversity Information Facility (GBIF) is leading the world in the development of specimen data standards, which IABIN is promoting. Strong relationships are also being developed with national environmental information organizations which are often very active and better placed to promote national programs, such as the National Biological Information Infrastructure (NBII) in the United States or the Instituto Nacional de Biodiversidad (INBio) of Costa Rica . IABIN is a network in which the countries of the Americas as well as diverse governmental and civil society organizations participate. The highest governing body of the network is the IABIN Council, which meets approximately once a year. Each participating country can send a representative, their “Focal Point”, to the Council, which defines the strategies and policies of the network. In practice, decisions are made by consensus and include a strong participation of non-governmental actors such as major non-governmental organizations (NGOs). At present, 34 countries have designated IABIN Focal Points. Most countries have designated their Clearing House Mechanism National Focal Point as their IABIN Focal Point as well. The Focal Points in each country are responsible both for representing their country’s views in the adoption of IABIN decisions and policies and for promoting them in their country. Between Council meetings, in order to guide effectively the operations of IABIN, a smaller governance body has been created. The IABIN Executive Committee (IEC) comprises representatives of eight countries and two international governmental organization or non-governmental organization (IGO/NGO) members, currently GBIF (Global Biodiversity Information Facility) and TNC (The Nature Conservancy). The IEC members are elected for fixed terms at each Council meeting. The current members of the IEC are: The network has existed in name since 1996 and in its early years, several critical Council meetings were held (in Brasília, Brazil, and in Miami, USA) which defined the general structure and proposed functions of IABIN. In the initial years, however, no Secretariat existed and the network benefited only from a number of small ad hoc investments, primarily from the United States, the World Bank, and the OAS. In 2004, a major six-year investment financed by the Global Environment Facility (GEF) began (see below). Under this project, the network has developed its current foci of activities.
These are the adoption of ecoinformatics standards and protocols, development of a catalogue and search tools (being done in coordination with NBII), creation of partnerships, creation and maintenance of the Secretariat, data creation grants, the operation of the “Thematic Networks”, and the creation of information tools for decision-makers. The Thematic Networks, or TNs, are intended to lead the development of theme-specific standards and protocols and in the maintenance of hemisphere-wide networks of specialists and specialized institutions. In each case a Coordinating Institution has signed a memorandum of understanding with the IEC to lead the work of the TN. They are also tasked with development of search tools and linking of data in their thematic area with data of the other TNs. The TNs, with the coordinating institution in parentheses, are: Species and Specimens (INBio, Costa Rica), Ecosystems ( NatureServe , USA), Protected Areas ( UNEP-WCMC , UK), Invasive Species I3N Network ( United States Geological Survey , USA), and Pollinators ( CoEvolution Institute , USA). The IABIN web site provides detailed information on a variety of projects and funding sources that are supporting the network and that are now coming on-line. These include investments of the United States, the World Bank, and the Gordon E. and Betty I. Moore Foundation. However, for the period of 2004-2010, a large GEF project has played a particularly important role in jumpstarting the network and implementing its strategies and priorities. The source of the funds for the project is the Global Environment Facility (GEF) with the funds managed by the World Bank . The executing agency of the project, on behalf of the GEF-eligible countries of the Americas, is the OAS. The project includes US$6 million of grant funding from the GEF with almost $30 million of cofinancing provided by participating governments and other partners.
https://en.wikipedia.org/wiki/Inter-American_Biodiversity_Information_Network
The Inter-University Centre for Astronomy and Astrophysics ( IUCAA ) is an autonomous institution set up by the University Grants Commission of India to promote nucleation and growth of active groups in astronomy and astrophysics in Indian universities. IUCAA is located in the University of Pune campus next to the National Centre for Radio Astrophysics , which operates the Giant Metrewave Radio Telescope . IUCAA has a campus designed by Indian architect Charles Correa . [ 1 ] After the founding of the Giant Metrewave Radio Telescope (GMRT) by Prof. Govind Swarup , a common research facility for astronomy and astrophysics was proposed by Dr. Yash Pal of the planning commission. Working on this idea, astrophysicist Prof. Jayant Narlikar , along with Ajit Kembhavi and Naresh Dadhich, set up IUCAA within the Pune University campus in 1988. [ 1 ] [ 2 ] In 2002, IUCAA initiated a nationwide campaign to popularize astronomy and astrophysics in colleges and universities. IUCAA arranged visitor programs for universities in Nagpur ( Maharashtra ), Thiruvalla ( Kerala ), Siliguri ( West Bengal ) and others, along with a tie-up with Fergusson College , Pune . [ 3 ] In 2004, IUCAA set up the Muktangan Vidnyan Shodhika (Exploratorium), a science popularization initiative, with a grant from the Pu La Deshpande foundation. The center is open to all school students from Pune. [ 4 ] IUCAA was declared the nodal center for India to coordinate the year-long celebrations for the International Year of Astronomy. [ 5 ] IUCAA was headed for its first decade by Prof. Jayant Narlikar , followed by Prof. Naresh Dadhich and Prof. Ajit Kembhavi . From September 2015, the Director is Prof. Somak Raychaudhury . [ 6 ] Scientists at IUCAA carry out research in a wide range of areas in astronomy, astrophysics and physics. IUCAA has active research groups in fields like classical and quantum gravity , cosmology , gravitational waves , optical and radio astronomy , Solar System physics and instrumentation. [ 7 ] IUCAA, along with Persistent Systems , Pune, operates the Virtual Observatory project. The observatory provides users access to raw observational data along with advanced processing software designed by engineers at Persistent. [ 8 ] IUCAA also maintains Girawali Observatory, which is about 80 km from Pune city, off Pune-Nasik Road and near the historical Junnar town. In addition to catering to the needs of astronomers in general, this observatory is unique in setting aside a certain amount of time specifically for training as well as observational proposals arising from Indian universities. The telescope has a primary mirror of diameter 2 meters, f/3, and a secondary of 60 cm, f/10. The IUCAA Faint Object Spectrograph & Camera (IFOSC) is currently the main instrument available on the telescope's direct Cassegrain port. [ 9 ] IUCAA, along with the Raman Research Institute and the Indian Institute of Astrophysics , Bangalore , announced a proposal to take a ten percent stake in the Large Telescope Project , which would allow Indian astronomers access to major upcoming observatories such as the Giant Magellan Telescope (GMT), the Thirty Meter Telescope (TMT) and the European Extremely Large Telescope (EELT). [ 10 ] The SciPop initiative was set up by Prof. Jayant Narlikar along with N. C. Rana and Arvind Paranjpe. SciPop, based out of the Muktangan Vidnyan Shodhika building, provides educational facilities for school students, teachers and amateur astronomers .
[ 11 ] IUCAA organizes an open Science Day program every year on 28 February, in which members of the general public can visit the institute to take a look at ongoing research and contemporary work happening elsewhere in the world. IUCAA was one of the few Indian research institutes to start a science popularization program, and other organisations such as the Indian Institute of Science, Indian Institute of Astrophysics, and TIFR , Mumbai started similar public outreach programmes in the wake of its success. [ 4 ] Notable people associated with IUCAA: The logo of IUCAA is a symmetric 8-crossing Carrick mat knot, and a mirror image of that of the International Guild of Knot Tyers .
https://en.wikipedia.org/wiki/Inter-University_Centre_for_Astronomy_and_Astrophysics
In mobile telecommunications , inter-cell interference coordination ( ICIC ) techniques apply restrictions to the radio resource management (RRM) block, improving channel conditions for subsets of users that are severely impacted by interference and thus attaining higher spectral efficiency . This coordinated resource management can be achieved through fixed, adaptive or real-time coordination with the help of additional inter-cell signaling, in which the signaling rate can vary accordingly. In general, inter-cell signaling refers to the communication interface among neighboring cells and the received measurement reports from user equipment (UE). [ 1 ]
https://en.wikipedia.org/wiki/Inter-cell_interference_coordination
The inter-working function ( IWF ) is a method for interfacing a wireless telecommunication network with the public switched telephone network (PSTN). The IWF converts the data transmitted over the air interface into a format suitable for the PSTN. [ 1 ] The IWF contains both the hardware and software elements that provide the rate adaptation and protocol conversion between the PSTN and the wireless network. Some systems require more IWF capability than others, depending on the network which is being connected. The IWF also incorporates a " modem bank", which may be used when, for example, the GSM data terminal equipment (DTE) exchanges data with a land DTE connected via an analogue modem. The IWF provides the function to enable the GSM system to interface with the various forms of public and private data networks currently available. The basic features of the IWF are:
https://en.wikipedia.org/wiki/Inter-working_function
The International Society for Porous Media ( InterPore ) is a nonprofit independent scientific organization established in 2008. It aims to advance and disseminate knowledge for the understanding, description, and modeling of natural and industrial porous medium systems. It acts as a platform for researchers active in modeling of flow and transport in natural, biological, and technical porous media, such as soils, aquifers, oil and gas reservoirs, biological tissues, plants, fuel cells, wood, ceramics, concrete, textiles, paper, polymer composites, hygienic materials, food, foams, membranes, etc. In the course of 2006, researchers from the Department of Earth Sciences, Utrecht University and the Institute for Modelling Hydraulic and Environmental Systems, University of Stuttgart , under the leadership of Professor Rainer Helmig [ 1 ] and Professor Majid Hassanizadeh , respectively, developed a proposal for setting up a joint international graduate research program. The proposal was submitted to the German Research Foundation (DFG) and the Netherlands Organisation for Scientific Research (NWO), and was successfully funded. The research school started its activities on January 1, 2007, under the name NUPUS (Non-linearities and Upscaling in PoroUS Media). This project led to the idea of creating an international center for porous media wherein scientists from diverse disciplines who study porous media could exchange ideas and research activities. The European Society for Porous Media (Europore) was established in Spring 2008. By Summer 2008, the geographical scope was expanded beyond Europe and the name was changed to the International Society for Porous Media (InterPore). Bylaws were approved and the society was officially registered in Fall 2008. [ 2 ] InterPore Academy was established in 2020 to promote educational activities, mainly to serve industrial and/or younger researchers. The academy organizes short courses, webinars, and workshops. National chapters are country-wide activity groups of InterPore. They form platforms for bringing together porous media researchers from academia and industry of a given country or region. A variety of activities, such as porous media workshops, conferences, and short courses, are organized by national chapters. National chapters compile a list of porous media companies in their countries to be able to interact with institutions and industries. [ 3 ] [ 4 ] [ 5 ] As of 2021, InterPore has active national chapters in: InterPore has organized the International Conference on Porous Media annually since 2009. General themes include: fundamentals of porous media; computational challenges in porous media simulation; experimental studies and applications involving porous media. Previous conferences have been hosted by Fraunhofer ITWM in Kaiserslautern, Germany; Texas A&M University in College Station, Texas, USA; I2M-Dept TREFLE ( CNRS , ENSAM , University of Bordeaux ), France; Purdue University in West Lafayette, Indiana, USA; the Technical University of Prague , Czech Republic; the University of Wisconsin in Milwaukee, USA; the University of Padova , Italy; the University of Cincinnati , Ohio, USA; the Technical University of Delft in Rotterdam, Netherlands; Louisiana State University in New Orleans, USA; the Universitat Politecnica de Valencia , Spain; and two online conferences (2020 and 2021). [ 16 ] InterPore2022 is scheduled for May 30 - June 2, 2022, at Khalifa University in Abu Dhabi, UAE. [ 17 ]
https://en.wikipedia.org/wiki/InterPore
InterPro is a database of protein families , protein domains and functional sites in which identifiable features found in known proteins can be applied to new protein sequences [ 2 ] in order to functionally characterise them. [ 3 ] [ 4 ] The contents of InterPro consist of diagnostic signatures and the proteins that they significantly match. The signatures consist of models (simple types, such as regular expressions or more complex ones, such as Hidden Markov models ) which describe protein families, domains or sites. Unknown sequences are searched to create homology models. Each of the member databases of InterPro contributes towards a different niche, from very high-level, structure-based classifications ( SUPERFAMILY and CATH-Gene3D) through to quite specific sub-family classifications ( PRINTS and PANTHER ). InterPro's intention is to provide a one-stop-shop for protein classification, where all the signatures produced by the different member databases are placed into entries within the InterPro database. Signatures which represent equivalent domains, sites or families are put into the same entry and entries can also be related to one another. Additional information such as a description, consistent names and Gene Ontology (GO) terms are associated with each entry, where possible. InterPro contains three main entities: proteins, signatures (also referred to as "methods" or "models") and entries. The proteins in UniProtKB are also the central protein entities in InterPro. Information regarding which signatures significantly match these proteins are calculated as the sequences are released by UniProtKB and these results are made available to the public (see below). The matches of signatures to proteins are what determine how signatures are integrated together into InterPro entries: comparative overlap of matched protein sets and the location of the signatures' matches on the sequences are used as indicators of relatedness. Only signatures deemed to be of sufficient quality are integrated into InterPro. As of version 81.0 (released 21 August 2020) InterPro entries annotated 73.9% of residues found in UniProtKB with another 9.2% annotated by signatures that are pending integration. [ 5 ] InterPro also includes data for splice variants and the proteins contained in the UniParc and UniMES databases. The signatures from InterPro come from 13 "member databases", which are listed below. InterPro consists of seven types of data provided by different members of the consortium: InterPro entries can be further broken down into five types: The database is available for text- and sequence-based searches via a webserver, and for download via anonymous FTP. Like other EBI databases, it is in the public domain , since its content can be used "by any individual and for any purpose". [ 8 ] InterPro aims to release data to the public every 8 weeks, typically within a day of the UniProtKB release of the same proteins. InterPro provides an API for programmatic access to all InterPro entries and their related entries in Json format. [ 9 ] There are six main endpoints for the API corresponding to the different InterPro data types: entry, protein, structure, taxonomy, proteome and set. InterProScan is a software package that allows users to scan sequences against member database signatures. Users can use this signature scanning software to functionally characterize novel nucleotide or protein sequences. 
[ 10 ] InterProScan is frequently used in genome projects in order to obtain a "first-pass" characterisation of the genome of interest. [ 11 ] [ 12 ] As of December 2020, the public version of InterProScan (v5.x) uses a Java-based architecture. [ 13 ] The software package is currently only supported on a 64-bit Linux operating system. InterProScan, along with many other EMBL-EBI bioinformatics tools, can also be accessed programmatically using RESTful and SOAP Web Services APIs. [ 14 ]
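As a rough illustration of the programmatic access described above, the following Python sketch queries the public InterPro REST API for a single entry. The base URL reflects the EBI's public service, and the JSON field names used here (such as "metadata") are assumptions for illustration rather than a verbatim excerpt from this article.

```python
# Minimal sketch of querying the InterPro REST API (entry endpoint).
# Assumes the public base URL https://www.ebi.ac.uk/interpro/api and the
# 'requests' package; JSON field names below are illustrative assumptions.
import requests

BASE = "https://www.ebi.ac.uk/interpro/api"

def fetch_interpro_entry(accession: str) -> dict:
    """Fetch the JSON record for one InterPro entry, e.g. 'IPR000001'."""
    resp = requests.get(f"{BASE}/entry/interpro/{accession}", timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    entry = fetch_interpro_entry("IPR000001")
    meta = entry.get("metadata", {})
    print(meta.get("accession"), meta.get("name"), meta.get("type"))
```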
https://en.wikipedia.org/wiki/InterPro
The Interact Home Computer (also called The Interact Family Computer [ 1 ] [ 2 ] ) is a 1978 American home computer made by Interact Electronics, Inc. , of Ann Arbor, Michigan . [ 3 ] [ 4 ] [ 5 ] It was sold under the name "Interact Model One Home Computer". [ 6 ] The original Interact Model One computer was designed by Rick Barnich and Tim Anderson at 204 E. Washington in Ann Arbor, before moving to the Georgetown Mall on Packard St. in Ann Arbor. Interact Electronics Inc was a privately held company that was funded by Honigman, Miller, Schwartz and Cohn, a law firm out of Detroit . The President/Founder of Interact Electronics Inc was Ken Lochner, who was one of the original developers of the BASIC language based out of Dartmouth College . Ken had started Interact Electronics Inc after founding the successful computer time-sharing company Cyphernetics in Ann Arbor, which was purchased by ADP in 1975. The Interact Model One Home Computer debuted at the Consumer Electronics Show in Chicago in June 1978, at a price of US$499 (equivalent to $2,400 in 2024). Only a few thousand Interacts were sold before the company went bankrupt in late 1979. [ 6 ] Most were sold by the liquidator Protecto Enterprizes of Barrington, Illinois , through mail order sales. It was also sold at Highland Appliance in the Detroit area, Newman Computer Exchange in Ann Arbor, and Montgomery Ward in the Houston, Texas , area. The computer did not come with an operating system, but Microsoft BASIC V4.7 or EDU-BASIC (supplied with the computer) could be loaded from tape. [ 4 ] [ 7 ] [ 8 ] [ 6 ] Probably the most successful application available for the Interact was a program called "Message Center". [ 9 ] With it, a store could program a scrolling message which appeared on a TV screen (such as advertisements, or a welcome message to guests). Although it was mostly a game machine (with games such as Showdown, Blackjack and Chess ), [ 10 ] users could also create their own programs using the BASIC computer language. Customers began hooking up the Interact to control everything from lights in their house, doors, windows and smoke detectors, to a Chevrolet Corvette . Later on, the design was sold to a French company, Lambda Systems, and re-branded as the " Victor Lambda " for the French market. [ 11 ] [ 12 ]
https://en.wikipedia.org/wiki/Interact_Home_Computer
The interacting boson model (IBM) is a model in nuclear physics in which nucleons ( protons or neutrons ) pair up, essentially acting as a single particle with boson properties, with integral spin of either 2 (d-boson) or 0 (s-boson). They correspond to a quintuplet and a singlet, i.e. 6 states. It is sometimes known as the interacting boson approximation (IBA). [ 1 ] : 7 The IBM1/IBM-I model treats both types of nucleons the same and considers only pairs of nucleons coupled to total angular momentum 0 and 2, called, respectively, s and d bosons . The IBM2/IBM-II model treats protons and neutrons separately. Both models are restricted to nuclei with even numbers of protons and neutrons. [ 1 ] : 9 The model can be used to predict vibrational and rotational modes of non-spherical nuclei. [ 2 ] The model was invented by Akito Arima and Francesco Iachello in 1974 [ 1 ] : 6 while working at the Kernfysisch Versneller Instituut (KVI) in Groningen , Netherlands . KVI is now property of Universitair Medisch Centrum Groningen ( https://umcgresearch.org/ ).
https://en.wikipedia.org/wiki/Interacting_boson_model
The Interaction Design Foundation (IxDF) is an educational organization [ 1 ] which produces open access educational materials [ 2 ] [ 3 ] online with the stated goal of "democratizing education by making world-class educational materials free for anyone, anywhere." [ 4 ] [ 5 ] The platform also offers courses taught by industry experts and professors in user experience , psychology , user interface design , and more. [ 6 ] While not accredited, the curriculum and content are structured at the graduate level, targeting both industry and academia in the fields of interaction design , design thinking , user experience , information architecture , and user interface design . The centerpieces of Interaction-Design.org are its online design courses, its local chapters in more than 150 countries, and its peer-reviewed Encyclopedia of Human-Computer Interaction , which currently holds 40+ textbooks written by 100+ leading designers and professors as well as commentaries and HD video interviews shot around the world. [ 2 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] The platform features professional and academic textbooks, online courses, video lectures, local chapters in more than 150 countries, and a comprehensive bibliography of the most authoritative publications within the design of interactive technology. In June 2013, the Interaction Design Foundation launched a four-year, 35,000-mile bike tour, named "Share the Knowledge Tour", [ 13 ] to raise awareness of the rising cost of education, with weekly events on university campuses. [ 14 ] [ 15 ] Financial sponsors include the German software company SAP . Authors include Harvard professor Clayton Christensen , [ 16 ] [ 17 ] New York Times bestselling author Robert Spence , [ 18 ] who invented the "magnifying glass" visualization that is familiar to anyone with an iPhone or iMac, and Stu Card , [ 18 ] who performed the research that led to the computer mouse's commercial introduction by Xerox. The Executive Board currently includes Don Norman , Ken Friedman , Bill Buxton , Irene Au, Michael Arent, Daniel Rosenberg, Jonas Lowgren and Olof Schybergson.
https://en.wikipedia.org/wiki/Interaction_Design_Foundation
The Interaction Flow Modeling Language ( IFML ) is a standardized modeling language in the field of software engineering . IFML includes a set of graphic notations to create visual models of user interactions and front-end behavior in software systems . The Interaction Flow Modeling Language was developed in 2012 and 2013 under the lead of WebRatio and was inspired by the WebML notation, as well as by a few other experiences in the Web modeling field. It was adopted as a standard by the Object Management Group (OMG) in March 2013. [ 1 ] IFML supports the platform independent description of graphical user interfaces for applications accessed or deployed on such systems as desktop computers, laptop computers, PDAs, mobile phones, and tablets. The focus of the description is on the structure and behavior of the application as perceived by the end user. IFML describes user interactions and control behaviors of the front-end of applications belonging to the following domains: IFML does not cater to the specification of bi-dimensional and tri-dimensional computer-based graphics. IFML does not apply to the modeling of presentation issues (layout/look and feel) of an application front-end or to the design of business logic and data components. Although these aspects are not the focus of the language, IFML allows designers to reference external models or modeling artifacts regarding these aspects from within IFML models. The IFML specification [ 1 ] consists of: An IFML model consists of one or more view containers (possibly nested), for example, windows in traditional desktop applications or page templates in Web applications. A view container can contain view components , which denote the publication of static or dynamic content, or interface elements for data entry (such as input forms). A view component can have input and output parameters . A view container and a view component can be associated with events , which can represent users' interactions or system-generated occurrences, for example, an event for selecting one or more items from a list or for submitting inputs from a form. The effect of an event is represented by an interaction flow connection. The interaction flow expresses a change of state of the user interface. An event can also trigger an action , which is executed prior to updating the state of the user interface; for example, a delete or update operation on instances of a database. An input-output dependency between elements can be specified through parameter bindings associated with navigation flows or through data flows , which only describe data transfer. IFML also includes concepts for defining constraints, modularization, and context awareness (e.g., based on user profile, device, location) over modeling elements. IFML concepts can be extended with standard extension mechanisms based on stereotyping . The cost of front-end application development has increased with the emergence of an unprecedented range of devices, technological platforms, and communication channels, which are not accompanied by the advent of an adequate approach for creating a Platform Independent Model (PIM) that can be used for designing user interactions independently of the implementation platform. This causes front-end development to be a costly and inefficient process, where manual coding is the predominant development approach, reuse of design artifacts is low, and portability of applications across platforms remains difficult.
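To make the relationships among the core IFML concepts described above more concrete, here is a small, purely illustrative Python sketch of view containers, view components, events, actions, and navigation targets. The class and attribute names are invented for this sketch and are not part of the OMG metamodel or any IFML tool API.

```python
# Illustrative (non-normative) object model of core IFML concepts.
# Names are assumptions for this sketch; the real IFML metamodel is defined by the OMG spec.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Action:
    name: str                                       # e.g. a back-end operation such as "DeleteOrder"

@dataclass
class Event:
    name: str                                       # e.g. "RowSelected" or "FormSubmitted"
    triggers: Optional[Action] = None               # an event may trigger an action...
    navigates_to: Optional["ViewContainer"] = None  # ...and/or lead to another view container

@dataclass
class ViewComponent:
    name: str                                       # e.g. a list or an input form
    parameters: List[str] = field(default_factory=list)
    events: List[Event] = field(default_factory=list)

@dataclass
class ViewContainer:
    name: str                                       # e.g. a window or a page template
    components: List[ViewComponent] = field(default_factory=list)
    children: List["ViewContainer"] = field(default_factory=list)  # containers can be nested

# A toy model: a product list whose selection event navigates to a detail page.
detail_page = ViewContainer("ProductDetail", components=[ViewComponent("DetailForm")])
product_list = ViewComponent("ProductList",
                             events=[Event("ProductSelected", navigates_to=detail_page)])
home_page = ViewContainer("Home", components=[product_list])
print(home_page.components[0].events[0].navigates_to.name)  # ProductDetail
```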
IFML brings several benefits to the development of application front-ends: IFML is currently supported by WebRatio . A set of blog posts describing the standardization process has also been published. A new, open-source IFML editor based on Eclipse, EMF /GMF and the Graphiti API is under development. The tool will be released as an open-source Eclipse Project. The tool will include mappings from IFML abstract concepts to the platform-specific concepts of Java Swing, Microsoft WPF, and HTML. The modeling of the IFML diagrams for the UI part can be complemented with (executable) UML diagrams according to fUML specifications, combined with Alf scripts for the back-end business logic. A video preview of the tool features is also available. IFMLEdit.org is a web-based open-source IFML editor focused on education and agile development. It supports model editing, code generation and emulation. Currently it supports the generation of code for server-side NodeJS , client-side JavaScript and mobile applications via Cordova or Flutter . IFML was inspired by the WebML notation, invented at Politecnico di Milano by Stefano Ceri and Piero Fraternali, with a team of people including Roberto Acerbis, Aldo Bongio, Marco Brambilla, Sara Comai, Stefano Butti and Maristella Matera.
https://en.wikipedia.org/wiki/Interaction_Flow_Modeling_Language
Interaction cost can comprise the work, costs, and other expenses required to complete a task or interaction. This applies to several categories, including:
https://en.wikipedia.org/wiki/Interaction_cost
Interaction design , often abbreviated as IxD , is "the practice of designing interactive digital products, environments, systems, and services." [ 1 ] : xxvii, 30 While interaction design has an interest in form (similar to other design fields), its main area of focus rests on behavior. [ 1 ] : xxvii, 30 Rather than analyzing how things are, interaction design synthesizes and imagines things as they could be. This element of interaction design is what characterizes IxD as a design field, as opposed to a science or engineering field. [ 1 ] Interaction design borrows from a wide range of fields like psychology, human-computer interaction , information architecture , and user research to create designs that are tailored to the needs and preferences of users. This involves understanding the context in which the product will be used, identifying user goals and behaviors, and developing design solutions that are responsive to user needs and expectations. While disciplines such as software engineering have a heavy focus on designing for technical stakeholders, interaction design is focused on meeting the needs and optimizing the experience of users, within relevant technical or business constraints. [ 1 ] : xviii The term interaction design was coined by Bill Moggridge and Bill Verplank in the mid-1980s, [ 2 ] [ 3 ] but it took 10 years before the concept started to take hold. [ 1 ] : 31 To Verplank, it was an adaptation of the computer science term user interface design for the industrial design profession. [ 4 ] To Moggridge, it was an improvement over soft-face , which he had coined in 1984 to refer to the application of industrial design to products containing software. [ 5 ] The earliest programs in design for interactive technologies were the Visible Language Workshop, started by Muriel Cooper at MIT in 1975, and the Interactive Telecommunications Program founded at NYU in 1979 by Martin Elton and later headed by Red Burns. [ 6 ] The first academic program officially named "Interaction Design" was established at Carnegie Mellon University in 1994, as a Master of Design in Interaction Design. [ 7 ] At the outset, the program focused mainly on screen interfaces, before shifting to a greater emphasis on the "big picture" aspects of interaction—people, organizations, culture, service and system. In 1990, Gillian Crampton Smith founded the Computer-Related Design MA at the Royal College of Art (RCA) in London, which in 2005 was renamed Design Interactions, [ 8 ] headed by Anthony Dunne. [ 9 ] In 2001, Crampton Smith helped found the Interaction Design Institute Ivrea (IDII), a specialized institute in Olivetti's hometown in Northern Italy, dedicated solely to interaction design. In 2007, after IDII closed due to a lack of funding, some of the people originally involved with IDII set up the Copenhagen Institute of Interaction Design (CIID), in Denmark. After Ivrea, Crampton Smith and Philip Tabor added the Interaction Design (IxD) track in the Visual and Multimedia Communication at the University of Venice , Italy. In 1998, the Swedish Foundation for Strategic Research founded The Interactive Institute —a Swedish research institute in the field of interaction design. Goal-oriented design (or Goal-Directed design) "is concerned with satisfying the needs and desires of the users of a product or service." [ 1 ] : xxviii, 31 Alan Cooper argues in The Inmates Are Running the Asylum that we need a new approach to solving interactive software-based problems. 
[ 10 ] : 1 The problems with designing computer interfaces are fundamentally different from those that do not include software (e.g., hammers). Cooper introduces the concept of cognitive friction, which is when the interface of a design is complex and difficult to use, and behaves inconsistently and unexpectedly, possessing different modes. [ 10 ] : 22 Alternatively, interfaces can be designed to serve the needs of the service/product provider. User needs may be poorly served by this approach. Usability answers the question "can someone use this interface?". Jakob Nielsen describes usability as the quality attribute [ 11 ] that describes how usable the interface is. Shneiderman proposes principles for designing more usable interfaces called "Eight Golden Rules of Interface Design" , [ 12 ] which are well-known heuristics for creating usable systems. Personas are archetypes that describe the various goals and observed behaviour patterns among users. [ 13 ] A persona encapsulates critical behavioural data in a way that both designers and stakeholders can understand, remember, and relate to. [ 14 ] Personas use storytelling to engage users' social and emotional aspects, which helps designers to either visualize the best product behaviour or see why the recommended design is successful. [ 13 ] The cognitive dimensions framework [ 15 ] provides a vocabulary to evaluate and modify design solutions. Cognitive dimensions offer a lightweight approach to analysis of a design quality, rather than an in-depth, detailed description. They provide a common vocabulary for discussing notation, user interface or programming language design. Dimensions provide high-level descriptions of the interface and how the user interacts with it: examples include consistency , error-proneness , hard mental operations , viscosity and premature commitment . These concepts aid the creation of new designs from existing ones through design maneuvers that alter the design within a particular dimension. Designers must be aware of elements that influence user emotional responses. For instance, products must convey positive emotions while avoiding negative ones. [ 16 ] Other important aspects include motivational, learning, creative, social and persuasive influences. One method that can help convey such aspects is, for example, the use of dynamic icons, animations and sound to help communicate, creating a sense of interactivity. Interface aspects such as fonts, color palettes and graphical layouts can influence acceptance. Studies showed that affective aspects can affect perceptions of usability. [ 16 ] Emotion and pleasure theories exist to explain interface responses. These include Don Norman 's emotional design model, Patrick Jordan's pleasure model [ 17 ] and McCarthy and Wright's Technology as Experience framework. [ 18 ] The concept of dimensions of interaction design was introduced in Moggridge's book Designing Interactions. Crampton Smith wrote that interaction design draws on four existing design languages, 1D, 2D, 3D, 4D. [ 19 ] Kevin Silver later proposed a fifth dimension, behaviour. [ 20 ] This dimension defines interactions: words are the element that users interact with. Visual representations are the elements of an interface that the user perceives; these may include but are not limited to "typography, diagrams, icons, and other graphics". This dimension defines the objects or space "with which or within which users interact". The time during which the user interacts with the interface.
An example of this includes "content that changes over time such as sound, video or animation". Behavior defines how users respond to the interface. Users may react to the same interface in different ways. The Interaction Design Association [ 21 ] was created in 2003 to serve the community. The organization has over 80,000 members and more than 173 local groups. [ 22 ] IxDA hosts Interaction , [ 23 ] the annual interaction design conference, and the Interaction Awards. [ 24 ] The Interaction Awards ended in August 2024. [ 25 ]
https://en.wikipedia.org/wiki/Interaction_design
In physics , interaction energy is the contribution to the total energy that is caused by an interaction between the objects being considered. The interaction energy usually depends on the relative position of the objects. For example, $Q_1 Q_2/(4\pi \varepsilon_0 \Delta r)$ is the electrostatic interaction energy between two objects with charges $Q_1$ and $Q_2$ . A straightforward approach for evaluating the interaction energy is to calculate the difference between the objects' combined energy and all of their isolated energies. In the case of two objects, A and B , the interaction energy can be written as: [ 1 ] $\Delta E_{\text{int}} = E(A,B) - \left( E(A) + E(B) \right)$, where $E(A)$ and $E(B)$ are the energies of the isolated objects (monomers), and $E(A,B)$ the energy of their interacting assembly (dimer). For a larger system, consisting of N objects, this procedure can be generalized to provide a total many-body interaction energy: $\Delta E_{\text{int}} = E(A_1, A_2, \dots, A_N) - \sum_{i=1}^{N} E(A_i)$. By calculating the energies for monomers, dimers, trimers, etc., in an N-object system, a complete set of two-, three-, and up to N-body interaction energies can be derived. The supermolecular approach has an important disadvantage in that the final interaction energy is usually much smaller than the total energies from which it is calculated, and therefore contains a much larger relative uncertainty. In the case where energies are derived from quantum chemical calculations using finite atom-centered basis functions, basis set superposition errors can also contribute some degree of artificial stabilization.
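As an illustration of the supermolecular evaluation described above, the following Python sketch computes the electrostatic interaction energy between two groups of point charges as E(A,B) − (E(A) + E(B)). The charge and position values are invented for the example.

```python
# Supermolecular evaluation of an electrostatic interaction energy:
# Delta E_int = E(A,B) - (E(A) + E(B)) for two groups of point charges.
# Charges in coulombs, positions in metres; values are purely illustrative.
from itertools import combinations

K = 8.9875517923e9  # Coulomb constant, N*m^2/C^2

def coulomb_energy(charges):
    """Total pairwise electrostatic energy of a set of (q, (x, y, z)) point charges."""
    total = 0.0
    for (q1, r1), (q2, r2) in combinations(charges, 2):
        dist = sum((a - b) ** 2 for a, b in zip(r1, r2)) ** 0.5
        total += K * q1 * q2 / dist
    return total

group_a = [(1e-9, (0.0, 0.0, 0.0)), (-1e-9, (0.0, 0.0, 0.1))]   # a small "dipole" A
group_b = [(1e-9, (1.0, 0.0, 0.0)), (-1e-9, (1.0, 0.0, 0.1))]   # a small "dipole" B

e_a, e_b = coulomb_energy(group_a), coulomb_energy(group_b)
e_ab = coulomb_energy(group_a + group_b)       # combined assembly includes the cross terms
delta_e_int = e_ab - (e_a + e_b)               # interaction energy between A and B
print(delta_e_int)
```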
https://en.wikipedia.org/wiki/Interaction_energy
In probability theory and information theory , the interaction information is a generalization of the mutual information for more than two variables . There are many names for interaction information, including amount of information , [ 1 ] information correlation , [ 2 ] co-information , [ 3 ] and simply mutual information . [ 4 ] Interaction information expresses the amount of information (redundancy or synergy) bound up in a set of variables, beyond that which is present in any subset of those variables. Unlike the mutual information, the interaction information can be either positive or negative. These functions, their negativity and minima have a direct interpretation in algebraic topology . [ 5 ] The conditional mutual information can be used to inductively define the interaction information for any finite number of variables as follows: $I(X_1;\ldots;X_{n+1}) = I(X_1;\ldots;X_n) - I(X_1;\ldots;X_n \mid X_{n+1})$, where the conditional interaction information $I(X_1;\ldots;X_n \mid X_{n+1})$ is the expectation over $X_{n+1}$ of the interaction information computed from the conditional distribution. Some authors [ 6 ] define the interaction information differently, by swapping the two terms being subtracted in the preceding equation. This has the effect of reversing the sign for an odd number of variables. For three variables $\{X,Y,Z\}$ , the interaction information $I(X;Y;Z)$ is given by $I(X;Y;Z) = I(X;Y) - I(X;Y \mid Z)$, where $I(X;Y)$ is the mutual information between variables $X$ and $Y$ , and $I(X;Y \mid Z)$ is the conditional mutual information between variables $X$ and $Y$ given $Z$ . The interaction information is symmetric , so it does not matter which variable is conditioned on. This is easy to see when the interaction information is written in terms of entropy and joint entropy, as follows: $I(X;Y;Z) = \bigl(H(X) + H(Y) + H(Z)\bigr) - \bigl(H(X,Y) + H(X,Z) + H(Y,Z)\bigr) + H(X,Y,Z)$. In general, for the set of variables $\mathcal{V} = \{X_1, X_2, \ldots, X_n\}$ , the interaction information can be written in the following form (compare with the Kirkwood approximation ): $I(\mathcal{V}) = -\sum_{\mathcal{T} \subseteq \mathcal{V}} (-1)^{\left|\mathcal{T}\right|} H(\mathcal{T})$. For three variables, the interaction information measures the influence of a variable $Z$ on the amount of information shared between $X$ and $Y$ . Because the term $I(X;Y \mid Z)$ can be larger than $I(X;Y)$ , the interaction information can be negative as well as positive. This will happen, for example, when $X$ and $Y$ are independent but not conditionally independent given $Z$ . Positive interaction information indicates that variable $Z$ inhibits (i.e., accounts for or explains some of) the correlation between $X$ and $Y$ , whereas negative interaction information indicates that variable $Z$ facilitates or enhances the correlation. Interaction information is bounded. In the three variable case, it is bounded by [ 4 ] $-\min\{\, I(X;Y \mid Z),\, I(Y;Z \mid X),\, I(X;Z \mid Y) \,\} \leq I(X;Y;Z) \leq \min\{\, I(X;Y),\, I(Y;Z),\, I(X;Z) \,\}$. If three variables form a Markov chain $X \to Y \to Z$ , then $I(X;Z \mid Y) = 0$ , but $I(X;Z) \geq 0$ . Therefore $I(X;Y;Z) = I(X;Z) - I(X;Z \mid Y) = I(X;Z) \geq 0$. Positive interaction information seems much more natural than negative interaction information in the sense that such explanatory effects are typical of common-cause structures.
For example, clouds cause rain and also block the sun; therefore, the correlation between rain and darkness is partly accounted for by the presence of clouds, $I(\text{rain};\text{dark} \mid \text{cloud}) < I(\text{rain};\text{dark})$ . The result is positive interaction information $I(\text{rain};\text{dark};\text{cloud})$ . A car's engine can fail to start due to either a dead battery or a blocked fuel pump. Ordinarily, we assume that battery death and fuel pump blockage are independent events, $I(\text{blocked fuel};\text{dead battery}) = 0$ . But knowing that the car fails to start, if an inspection shows the battery to be in good health, we can conclude that the fuel pump must be blocked. Therefore $I(\text{blocked fuel};\text{dead battery} \mid \text{engine fails}) > 0$ , and the result is negative interaction information. The possible negativity of interaction information can be the source of some confusion. [ 3 ] Many authors have taken zero interaction information as a sign that three or more random variables do not interact, but this interpretation is wrong. [ 7 ] To see how difficult interpretation can be, consider a set of eight independent binary variables $\{X_1, X_2, X_3, X_4, X_5, X_6, X_7, X_8\}$ . Agglomerate these variables as follows: Because the $Y_i$ 's overlap each other (are redundant) on the three binary variables $\{X_5, X_6, X_7\}$ , we would expect the interaction information $I(Y_1;Y_2;Y_3)$ to equal $3$ bits, which it does. However, consider now the agglomerated variables These are the same variables as before with the addition of $Y_4 = \{X_7, X_8\}$ . However, $I(Y_1;Y_2;Y_3;Y_4)$ in this case is actually equal to $+1$ bit, indicating less redundancy. This is correct in a certain sense, but it remains difficult to interpret.
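The sign behaviour described above can be checked numerically from a joint distribution. The following Python sketch (illustrative, not from the source) computes I(X;Y;Z) via the entropy expansion for a redundant case and a synergistic (XOR) case.

```python
# Interaction information I(X;Y;Z) = I(X;Y) - I(X;Y|Z), computed from a joint pmf
# via the entropy expansion H(X)+H(Y)+H(Z) - H(XY)-H(XZ)-H(YZ) + H(XYZ).
from collections import defaultdict
from math import log2

def entropy(marginal):
    return -sum(p * log2(p) for p in marginal.values() if p > 0)

def marginalize(pmf, keep):
    out = defaultdict(float)
    for outcome, p in pmf.items():
        out[tuple(outcome[i] for i in keep)] += p
    return out

def interaction_information(pmf):
    """pmf maps (x, y, z) tuples to probabilities summing to 1."""
    h = lambda keep: entropy(marginalize(pmf, keep))
    return (h((0,)) + h((1,)) + h((2,))
            - h((0, 1)) - h((0, 2)) - h((1, 2))
            + h((0, 1, 2)))

# Redundant case: X = Y = Z, one fair bit copied three times -> +1 bit.
copy = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}
# Synergistic case: X, Y fair and independent, Z = X XOR Y -> -1 bit.
xor = {(x, y, x ^ y): 0.25 for x in (0, 1) for y in (0, 1)}

print(interaction_information(copy))  # ~ +1.0
print(interaction_information(xor))   # ~ -1.0
```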
https://en.wikipedia.org/wiki/Interaction_information
In quantum mechanics , the interaction picture (also known as the interaction representation or Dirac picture after Paul Dirac , who introduced it) [ 1 ] [ 2 ] is an intermediate representation between the Schrödinger picture and the Heisenberg picture . Whereas in the other two pictures either the state vector or the operators carry time dependence, in the interaction picture both carry part of the time dependence of observables . [ 3 ] The interaction picture is useful in dealing with changes to the wave functions and observables due to interactions. Most field-theoretical calculations [ 4 ] use the interaction representation because they construct the solution to the many-body Schrödinger equation as the solution to the free-particle problem plus some unknown interaction parts. Equations that include operators acting at different times, which hold in the interaction picture, don't necessarily hold in the Schrödinger or the Heisenberg picture. This is because time-dependent unitary transformations relate operators in one picture to the analogous operators in the others. The interaction picture is a special case of unitary transformation applied to the Hamiltonian and state vectors. Haag's theorem says that the interaction picture doesn't exist in the case of interacting quantum fields . Operators and state vectors in the interaction picture are related by a change of basis ( unitary transformation ) to those same operators and state vectors in the Schrödinger picture. To switch into the interaction picture, we divide the Schrödinger picture Hamiltonian into two parts: $H_{\text{S}} = H_{0,\text{S}} + H_{1,\text{S}}$. Any possible choice of parts will yield a valid interaction picture; but in order for the interaction picture to be useful in simplifying the analysis of a problem, the parts will typically be chosen so that $H_{0,\text{S}}$ is well understood and exactly solvable, while $H_{1,\text{S}}$ contains some harder-to-analyze perturbation to this system. If the Hamiltonian has explicit time-dependence (for example, if the quantum system interacts with an applied external electric field that varies in time), it will usually be advantageous to include the explicitly time-dependent terms with $H_{1,\text{S}}$ , leaving $H_{0,\text{S}}$ time-independent: $H_{\text{S}}(t) = H_{0,\text{S}} + H_{1,\text{S}}(t)$. We proceed assuming that this is the case. If there is a context in which it makes sense to have $H_{0,\text{S}}$ be time-dependent, then one can proceed by replacing $\mathrm{e}^{\pm \mathrm{i} H_{0,\text{S}} t/\hbar}$ by the corresponding time-evolution operator in the definitions below. Let $|\psi_{\text{S}}(t)\rangle = \mathrm{e}^{-\mathrm{i} H_{\text{S}} t/\hbar} |\psi(0)\rangle$ be the time-dependent state vector in the Schrödinger picture. A state vector in the interaction picture, $|\psi_{\text{I}}(t)\rangle$ , is defined with an additional time-dependent unitary transformation: [ 5 ] $|\psi_{\text{I}}(t)\rangle = \mathrm{e}^{\mathrm{i} H_{0,\text{S}} t/\hbar} |\psi_{\text{S}}(t)\rangle$. An operator in the interaction picture is defined as $A_{\text{I}}(t) = \mathrm{e}^{\mathrm{i} H_{0,\text{S}} t/\hbar} A_{\text{S}}(t)\, \mathrm{e}^{-\mathrm{i} H_{0,\text{S}} t/\hbar}$.
Note that $A_{\text{S}}(t)$ will typically not depend on $t$ and can be rewritten as just $A_{\text{S}}$ . It only depends on $t$ if the operator has "explicit time dependence", for example, due to its dependence on an applied external time-varying electric field. Another instance of explicit time dependence may occur when $A_{\text{S}}(t)$ is a density matrix (see below). For the operator $H_0$ itself, the interaction picture and Schrödinger picture coincide: $H_{0,\text{I}}(t) = \mathrm{e}^{\mathrm{i} H_{0,\text{S}} t/\hbar} H_{0,\text{S}}\, \mathrm{e}^{-\mathrm{i} H_{0,\text{S}} t/\hbar} = H_{0,\text{S}}$. This is easily seen through the fact that operators commute with differentiable functions of themselves. This particular operator then can be called $H_0$ without ambiguity. For the perturbation Hamiltonian $H_{1,\text{I}}$ , however, $H_{1,\text{I}}(t) = \mathrm{e}^{\mathrm{i} H_{0,\text{S}} t/\hbar} H_{1,\text{S}}\, \mathrm{e}^{-\mathrm{i} H_{0,\text{S}} t/\hbar}$, where the interaction-picture perturbation Hamiltonian becomes a time-dependent Hamiltonian, unless $[H_{1,\text{S}}, H_{0,\text{S}}] = 0$ . It is possible to obtain the interaction picture for a time-dependent Hamiltonian $H_{0,\text{S}}(t)$ as well, but the exponentials need to be replaced by the unitary propagator for the evolution generated by $H_{0,\text{S}}(t)$ , or more explicitly with a time-ordered exponential integral. The density matrix can be shown to transform to the interaction picture in the same way as any other operator. In particular, let $\rho_{\text{I}}$ and $\rho_{\text{S}}$ be the density matrices in the interaction picture and the Schrödinger picture respectively. If there is probability $p_n$ to be in the physical state $|\psi_n\rangle$ , then $\rho_{\text{I}}(t) = \sum_n p_n\, |\psi_{n,\text{I}}(t)\rangle \langle \psi_{n,\text{I}}(t)|$ . Transforming the Schrödinger equation into the interaction picture gives $\mathrm{i}\hbar\, \frac{\mathrm{d}}{\mathrm{d}t} |\psi_{\text{I}}(t)\rangle = H_{1,\text{I}}(t)\, |\psi_{\text{I}}(t)\rangle$ , which states that in the interaction picture, a quantum state is evolved by the interaction part of the Hamiltonian as expressed in the interaction picture. [ 6 ] A proof is given in Fetter and Walecka. [ 7 ] If the operator $A_{\text{S}}$ is time-independent (i.e., does not have "explicit time dependence"; see above), then the corresponding time evolution for $A_{\text{I}}(t)$ is given by $\mathrm{i}\hbar\, \frac{\mathrm{d}}{\mathrm{d}t} A_{\text{I}}(t) = \left[ A_{\text{I}}(t), H_0 \right]$ . In the interaction picture the operators evolve in time like the operators in the Heisenberg picture with the Hamiltonian $H' = H_0$ . The evolution of the density matrix in the interaction picture is $\mathrm{i}\hbar\, \frac{\mathrm{d}}{\mathrm{d}t} \rho_{\text{I}}(t) = \left[ H_{1,\text{I}}(t), \rho_{\text{I}}(t) \right]$ , in consistency with the Schrödinger equation in the interaction picture. For a general operator $A$ , the expectation value in the interaction picture is given by $\langle A \rangle = \langle \psi_{\text{I}}(t) |\, A_{\text{I}}(t)\, | \psi_{\text{I}}(t) \rangle$ . Using the density-matrix expression for the expectation value, we will get $\langle A \rangle = \operatorname{Tr}\bigl( \rho_{\text{I}}(t)\, A_{\text{I}}(t) \bigr)$ . The term interaction representation was invented by Schwinger. [ 8 ] [ 9 ] In this new mixed representation the state vector is no longer constant in general, but it is constant if there is no coupling between fields. The change of representation leads directly to the Tomonaga–Schwinger equation: [ 10 ] [ 9 ] $\mathrm{i}\hbar c\, \frac{\delta \Psi[\sigma]}{\delta \sigma(x)} = \mathcal{H}(x)\, \Psi[\sigma]$, where the Hamiltonian in this case is the QED interaction Hamiltonian, but it can also be a generic interaction, and $\sigma$ is a spacelike surface that is passing through the point $x$ . The derivative formally represents a variation over that surface given $x$ fixed. It is difficult to give a precise mathematical formal interpretation of this equation. [ 11 ] This approach is called the 'differential' and 'field' approach by Schwinger, as opposed to the 'integral' and 'particle' approach of the Feynman diagrams. [ 12 ] [ 13 ] The core idea is that if the interaction has a small coupling constant (i.e.
in the case of electromagnetism, of the order of the fine structure constant) successive perturbative terms will be powers of the coupling constant and therefore smaller. [ 14 ] The purpose of the interaction picture is to shunt all the time dependence due to $H_0$ onto the operators, thus allowing them to evolve freely, and leaving only $H_{1,\text{I}}$ to control the time-evolution of the state vectors. The interaction picture is convenient when considering the effect of a small interaction term, $H_{1,\text{S}}$ , being added to the Hamiltonian of a solved system, $H_{0,\text{S}}$ . By utilizing the interaction picture, one can use time-dependent perturbation theory to find the effect of $H_{1,\text{I}}$ , [ 15 ] : 355ff e.g., in the derivation of Fermi's golden rule , [ 15 ] : 359–363 or the Dyson series [ 15 ] : 355–357 in quantum field theory : in 1947, Shin'ichirō Tomonaga and Julian Schwinger appreciated that covariant perturbation theory could be formulated elegantly in the interaction picture, since field operators can evolve in time as free fields, even in the presence of interactions, now treated perturbatively in such a Dyson series. For a time-independent Hamiltonian $H_{\text{S}}$ , where $H_{0,\text{S}}$ is the free Hamiltonian, the three pictures can be compared directly: in the Schrödinger picture the state vector evolves under the full $H_{\text{S}}$ and the operators are constant, in the Heisenberg picture the operators evolve under the full $H_{\text{S}}$ and the state vector is constant, and in the interaction picture the state vector evolves under $H_{1,\text{I}}$ while the operators evolve under $H_{0,\text{S}}$ .
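As a concrete numerical illustration of the operator transformation defined above (not drawn from the source article), the following Python sketch builds the interaction-picture operator A_I(t) = e^{iH_0 t/ħ} A_S e^{−iH_0 t/ħ} for an illustrative two-level system, with ħ set to 1.

```python
# Numerically transform a Schrödinger-picture operator into the interaction picture:
# A_I(t) = exp(+i H0 t / hbar) @ A_S @ exp(-i H0 t / hbar), with hbar = 1 here.
import numpy as np
from scipy.linalg import expm

sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

H0 = 0.5 * sigma_z          # "free" part of the Hamiltonian (illustrative two-level system)
A_S = sigma_x               # a time-independent Schrödinger-picture operator

def interaction_picture(A, H0, t, hbar=1.0):
    U = expm(1j * H0 * t / hbar)          # exp(+i H0 t / hbar)
    return U @ A @ U.conj().T             # U is unitary, so U^dagger = exp(-i H0 t / hbar)

t = 2.0
A_I = interaction_picture(A_S, H0, t)
# For this choice the off-diagonal elements pick up phases e^{+it} and e^{-it}.
print(np.round(A_I, 3))
# H0 itself is unchanged by the transformation, as stated in the text:
print(np.allclose(interaction_picture(H0, H0, t), H0))   # True
```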
https://en.wikipedia.org/wiki/Interaction_picture
In information theory, Interactions of actors theory is a theory developed by Gordon Pask and Gerard de Zeeuw . It is a generalisation of Pask's earlier conversation theory : The chief distinction being that conversation theory focuses on analysing the specific features that allow a conversation to emerge between two participants, whereas interaction of actor's theory focuses on the broader domain of conversation in which conversations may appear, disappear, and reappear over time. [ 1 ] Interactions of actors theory was developed late in Pask's career. It is reminiscent of Freud's psychodynamics , Bateson's panpsychism (see "Mind and Nature: A Necessary Unity" 1980). Pask's nexus of analogy, dependence and mechanical spin produces the differences that are central to cybernetics . While working with clients in the last years of his life, Pask produced an axiomatic scheme [ 2 ] for his interactions of actors theory , less well-known than his conversation theory. Interactions of Actors, Theory and Some Applications , as the manuscript is entitled, is essentially a concurrent spin calculus applied to the living environment with strict topological constraints. [ 3 ] One of the most notable associates of Gordon Pask, Gerard de Zeeuw , was a key contributor to the development of interactions of actors theory. Interactions of actors theory is a process theory. [ 6 ] As a means to describe the interdisciplinary nature of his work, Pask would make analogies to physical theories in the classic positivist enterprises of the social sciences . Pask sought to apply the axiomatic properties of agreement or epistemological dependence to produce a "sharp-valued" social science with precision comparable to the results of the hard sciences. It was out of this inclination that he would develop his interactions of actors theory. Pask's concepts produce relations in all media and he regarded IA as a process theory . In his complementarity principle he stated "Processes produce products and all products (finite, bounded coherences) are produced by processes". [ 7 ] Most importantly Pask also had his exclusion principle . He proved that no two concepts or products could be the same because of their different histories. He called this the "No Doppelgangers" clause or edict. [ 6 ] Later he reflected "Time is incommensurable for Actors". [ 8 ] He saw these properties as necessary to produce differentiation and innovation or new coherences in physical nature and, indeed, minds. In 1995, Pask stated what he called his Last Theorem: "Like concepts repel and unlike concepts attract". For ease of application Pask stated the differences and similarities of descriptions (the products of processes) were context and perspective dependent. In the last three years of his life Pask presented models based on Knot theory knots which described minimal persisting concepts. He interpreted these as acting as computing elements which exert repulsive forces to interact and persist in filling the space. The knots, links and braids of his entailment mesh models of concepts, which could include tangle-like processes seeking "tail-eating" closure, Pask called "tapestries". His analysis proceeded with like seeming concepts repelling or unfolding but after a sufficient duration of interaction (he called this duration "faith") a pair of similar or like-seeming concepts will always produce a difference and thus an attraction. 
Amity (availability for interaction), respectability (observability), responsibility (able to respond to stimulus), unity (not uniformity) were necessary properties to produce agreement (or dependence) and agreement-to-disagree (or relative independence) when Actors interact. Concepts could be applied imperatively or permissively when a Petri (see Petri net ) condition for synchronous transfer of meaningful information occurred. Extending his physical analogy Pask associated the interactions of thought generation with radiation : "operations generating thoughts and penetrating conceptual boundaries within participants, excite the concepts bounded as oscillators, which, in ridding themselves of this surplus excitation, produce radiation" [ 9 ] In sum, IA supports the earlier kinematic conversation theory work where minimally two concurrent concepts were required to produce a non-trivial third. One distinction separated the similarity and difference of any pair in the minimum triple. However, his formal methods denied the competence of mathematics or digital serial and parallel processes to produce applicable descriptions because of their innate pathologies in locating the infinitesimals of dynamic equilibria ( Stafford Beer 's "Point of Calm"). He dismissed the digital computer as a kind of kinematic "magic lantern". He saw mechanical models as the future for the concurrent kinetic computers required to describe natural processes. He believed that this implied the need to extend quantum computing to emulate true field concurrency rather than the current von Neumann architecture . Reviewing IA [ 8 ] he said: Interaction of actors has no specific beginning or end. It goes on forever. Since it does so it has very peculiar properties. Whereas a conversation is mapped (due to a possibility of obtaining a vague kinematic, perhaps picture-frame image, of it, onto Newtonian time, precisely because it has a beginning and end), an interaction, in general, cannot be treated in this manner. Kinematics are inadequate to deal with life: we need kinetics. Even so as in the minimal case of a strict conversation we cannot construct the truth value , metaphor or analogy of A and B. The A, B differences are generalizations about a coalescence of concepts on the part of A and B; their commonality and coherence is the similarity. The difference (reiterated) is the differentiation of A and B (their agreements to disagree, their incoherences). Truth value in this case meaning the coherence between all of the interacting actors. He added: It is essential to postulate vectorial times (where components of the vectors are incommensurate) and furthermore times which interact with each other in the manner of Louis Kaufmann's knots and tangles. In experimental Epistemology Pask, the "philosopher mechanic", produced a tool kit to analyse the basis for knowledge and criticise the teaching and application of knowledge from all fields: the law, social and system sciences to mathematics, physics and biology. In establishing the vacuity of invariance Pask was challenged with the invariance of atomic number . "Ah", he said "the atomic hypothesis". He rejected this instead preferring the infinite nature of the productions of waves. Pask held that concurrence is a necessary condition for modelling brain functions and he remarked IA was meant to stand AI, Artificial Intelligence, on its head. Pask believed it was the job of cybernetics to compare and contrast. His IA theory showed how to do this. 
Heinz von Foerster called him a genius, [ 10 ] "Mr. Cybernetics", the "cybernetician's cybernetician". The Hewitt, Bishop and Steiger approach concerns sequential processing and inter-process communication in digital, serial, kinematic computers. It is a parallel or pseudo-concurrent theory, as is the theory of concurrency (see Concurrency ). In Pask's true field-concurrent theory, kinetic processes can interrupt (or, indeed, interact with) each other, simply reproducing or producing a new resultant force within a coherence (of concepts), but without buffering delays or priority. [ 11 ] "There are no Doppelgangers" is a fundamental theorem , edict or clause of cybernetics due to Pask, in support of his theories of learning and interaction in all media: conversation theory and interactions of actors theory. It accounts for physical differentiation and is Pask's exclusion principle . [ 12 ] It states that no two products of concurrent interaction can be the same because of their different dynamic contexts and perspectives. No Doppelgangers is necessary to account for the production, by interaction and intermodulation (cf. beats ), of different, evolving, persisting and coherent forms. Two proofs are presented, both due to Pask. Consider a pair of moving, dynamic participants A and B producing an interaction T. Their separation will vary during T. The duration of T observed from A will be different from the duration of T observed from B. [ 8 ] [ 13 ] Let T_s and T_f be the start and finish times for the transfer of meaningful information, with superscripts denoting the observing participant; then we can write: T_s^A ≠ T_f^B, T_s^B ≠ T_f^B, T_s^A ≠ T_s^B, T_f^A ≠ T_s^B, T_f^A ≠ T_s^A, T_f^A ≠ T_f^B; thus A ≠ B. Q.E.D. Pask remarked: [ 8 ] Conversation is defined as having a beginning and an end, and time is vectorial. The components of the vector are commensurable (in duration). On the other hand, actor interaction time is vectorial with components that are incommensurable. In the general case there is no well-defined beginning and interaction goes on indefinitely. As a result the time vector has incommensurable components. Both the quantity and quality differ. No Doppelgangers applies both in conversation theory 's kinematic domain (bounded by beginnings and ends), where times are commensurable, and in the eternal kinetic interactions-of-actors domain, where times are incommensurable. The second proof [ 6 ] is more reminiscent of R.D. Laing : [ 14 ] your concept of your concept is not my concept of your concept; a reproduced concept is not the same as the original concept. Pask defined concepts as persisting, countably infinite, recursively packed spin processes (like many-cored cable, or the skins of an onion) in any medium (stars, liquids, gases, solids, machines and, of course, brains) that produce relations. Here we prove A(T) ≠ B(T). D means "description of", and <Con_A(T), D_A(T)> reads "A's concept of T produces A's description of T", evoking Dirac notation (required for the production of the quanta of thought: the transfer of "set-theoretic tokens", as Pask puts it in 1996 [ 8 ] ).
or, in general also, in general and vice versa, or, in general terms given that for all Z and all T, the concepts and that AA = A(A) is not equal to BA = B(A) and vice versa, hence, there are no Doppelgangers. Q.E.D. Pask attached a piece of string to a bar [ 15 ] with three knots in it. Then he attached a piece of elastic to the bar with three knots in it. One observing actor, A, on the string would see the knotted intervals on the other actor as varying as the elastic was stretched and relaxed corresponding to the relative motion of B as seen from A. The knots correspond to the beginning of the experiment then the start and finish of the A/B interaction. Referring to the three intervals, where x, y, z, are the separation distances of the knots from the bar and each other, he noted x > y > z on the string for participant A does not imply x > z for participant B on the elastic. A change of separation between A and B producing Doppler shifts during interaction, recoil or the differences in relativistic proper time for A and B, would account for this for example. On occasion a second knotted string was tied to the bar representing coordinate time . To set in further context Pask won a prize from Old Dominion University for his complementarity principle : "All processes produce products and all products are produced by processes". This can be written: Ap(Con Z (T)) => D Z (T) where => means produces and Ap means the "application of", D means "description of" and Z is the concept mesh or coherence of which T is part. This can also be written Pask distinguishes Imperative (written &Ap or IM) from Permissive Application (written Ap) [ 16 ] where information is transferred in the Petri net manner, the token appearing as a hole in a torus producing a Klein bottle containing recursively packed concepts. [ 6 ] Pask's "hard" or "repulsive" [ 6 ] carapace was a condition he required for the persistence of concepts. He endorsed Nicholas Rescher 's coherence theory of truth approach where a set membership criterion of similarity also permitted differences amongst set or coherence members, but he insisted repulsive force was exerted at set and members' coherence boundaries. He said of G. Spencer Brown 's Laws of Form that distinctions must exert repulsive forces. This is not yet accepted by Spencer Brown and others. Without a repulsion, or Newtonian reaction at the boundary, sets, their members or interacting participants would diffuse away forming a "smudge"; Hilbertian marks on paper would not be preserved. Pask, the mechanical philosopher, wanted to apply these ideas to bring a new kind of rigour to cybernetic models. Some followers of Pask emphasise his late work, done in the closing chapter of his life, which is neither as clear nor as grounded as the prior decades of research and machine- and theory-building. This tends to skew the impression gleaned by researchers as to Pask's contribution or even his lucidity. [ citation needed ]
https://en.wikipedia.org/wiki/Interactions_of_actors_theory
The Interactive Compilation Interface ( ICI ) is a plugin system with a high-level compiler-independent and a low-level compiler-dependent API to transform production compilers into interactive research toolsets. It was developed by Grigori Fursin during the MILEPOST project . [ 2 ] [ 3 ] The ICI framework acts as a "middleware" interface between the compiler and user-definable plugins. It opens up and reuses the production-quality compiler infrastructure to enable program analysis and instrumentation, fine-grain program optimizations, and simple prototyping of new development and research ideas, while avoiding building new compilation tools from scratch. For example, it is used in MILEPOST GCC to automate compiler and architecture design and program optimization based on statistical analysis and machine learning, and to predict profitable optimizations that improve program execution time, code size and compilation time. ICI has been available in mainline GCC since version 4.5. [ 1 ] ICI was extended during the Google Summer of Code 2009 to enable fine-grain program optimizations including polyhedral transformations, function-level run-time adaptation and collective optimization.
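The plugin-middleware idea can be illustrated with a minimal, language-neutral sketch. The Python code below uses hypothetical names (EventBus, "pass_executed"); it is not the actual ICI or GCC plugin API, only an illustration of a layer that lets user-defined plugins observe events raised by a compiler's pass pipeline.

```python
# Minimal illustration of a compiler "plugin middleware" in the spirit of ICI.
# All names here (EventBus, "pass_executed") are hypothetical; they do not
# correspond to the real ICI or GCC plugin API.

class EventBus:
    """Dispatches compiler events to user-defined plugin callbacks."""
    def __init__(self):
        self._callbacks = {}

    def register(self, event, callback):
        self._callbacks.setdefault(event, []).append(callback)

    def fire(self, event, **data):
        for cb in self._callbacks.get(event, []):
            cb(**data)

bus = EventBus()

# A "plugin" that records which optimization passes ran on which function,
# the kind of data a MILEPOST-style machine-learning flow could later consume.
pass_log = []
bus.register("pass_executed",
             lambda function, pass_name: pass_log.append((function, pass_name)))

# The "compiler" side fires events as it runs its pass pipeline.
for pass_name in ["inline", "unroll-loops", "vectorize"]:
    bus.fire("pass_executed", function="main", pass_name=pass_name)

print(pass_log)  # [('main', 'inline'), ('main', 'unroll-loops'), ('main', 'vectorize')]
```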
https://en.wikipedia.org/wiki/Interactive_Compilation_Interface
The Inter@ctive Pager is a discontinued two-way pager released in 1996 by Research In Motion (later known for the BlackBerry line of smartphones) that allowed users to receive and send messages via the Mobitex wireless network. The US operator of Mobitex, RAM Mobile Data , introduced the Inter@ctive Pager service as RAMfirst Interactive Paging. The device was named '1997 Top Product' by the magazine Wireless for the Corporate User . It is also known as the RIM-900 . [ 1 ] The device is credited with introducing features such as peer-to-peer delivery, read receipts, sending faxes to phones and text-to-speech technology. In August 1998, BellSouth Wireless Data replaced the RIM-900 with the BlackBerry 950 and marketed the service as BellSouth Interactive Paging.
https://en.wikipedia.org/wiki/Interactive_Pager
Interactive Theorem Proving ( ITP ) is an annual international academic conference on the topic of automated theorem proving , proof assistants and related topics, ranging from theoretical foundations to implementation aspects and applications in program verification , security , and formalization of mathematics . ITP brings together the communities using many systems based on higher-order logic such as ACL2 , Coq , Mizar , HOL , Isabelle , Lean , NuPRL , PVS , and Twelf . Individual workshops or meetings devoted to individual systems are usually held concurrently with the conference. The inaugural meeting of ITP was held on 11–14 July 2010 in Edinburgh, Scotland, as part of the Federated Logic Conference. It is the extension of the Theorem Proving in Higher Order Logics ( TPHOLs ) conference series to the broad field of interactive theorem proving. TPHOLs meetings took place every year from 1988 until 2009. The first three were informal users' meetings for the HOL system and were the only ones without published papers. Since 1990 TPHOLs has published formal peer-reviewed proceedings, published by Springer 's Lecture Notes in Computer Science series. It has also entertained an increasingly wide field of interest.
https://en.wikipedia.org/wiki/Interactive_Theorem_Proving_(conference)
Interactive architecture refers to the branch of architecture which deals with buildings, structures, surfaces and spaces that are designed to change, adapt and reconfigure in real-time response to people (their activity, behaviour and movements), as well as the wider environment. This is usually achieved by embedding sensors, processors and effectors as a core part of a building's nature and functioning in such a way that the form, structure, mood or program of a space can be altered in real-time. Interactive architecture encompasses building automation but goes beyond it by including forms of interaction engagements and responses that may lie in pure communication purposes as well as in the emotive and artistic realm, thus entering the field of interactive art . [ 1 ] [ 2 ] It is also closely related to the field of Responsive architecture and the terms are sometimes used interchangeably, but the distinction is important for some. While now quite common (most large-scale new buildings are built around environmentally responsive technologies, sustainability systems and user-configurable environments) earlier notable examples of interactive architecture include: Early contributions to the ideas behind interactive architecture include New Babylon (Constant Nieuwenhuys) (a massive global city formed from "a series of linked transformable structures") and Cedric Price 's Fun Palace ("Designed as a flexible framework into which programmable spaces can be plugged, the structure has as its ultimate goal the possibility of change at the behest of its users"), [ 7 ] later given form in his Inter-Action Centre . [ 8 ] Nicholas Negroponte 's book Soft Architecture Machines (1975) proposed architecture machines "not simply used as aids in the design of buildings—they serve as buildings in themselves. Man will live in living, intelligent machines or cognitive physical environments that can immediately respond to his needs or wishes or whims". [ 9 ] He had earlier founded the Architecture Machine Group at MIT in 1968, creating the lab "as a test bed for interactive computers, sensors and programs that sought to change the manner in which computers and humans interacted with each other" [ 10 ] which later grew into MIT Media Lab . Other notable contributors to the conceptual development of the field include: Interactive architecture part of the Internet of things , a term first coined by Kevin Ashton of Procter & Gamble, later MIT's Auto-ID Center, in 1999, can include both interior and exterior elements. Within the interior, many technologies are competing to see who will emerge as the dominant communicative signal. 4GLTE LTE (telecommunication) being replaced eventually by 5G , is the obvious solution; however, visible light communication or Li-Fi , a term first introduced by Harald Haas during a 2011 TEDGlobal talk in Edinburgh, is gaining ground as research into this type of data transfer method increases. Interactive architecture and designing buildings with this technology embedded in it is essential in the development of smart cities. Another essential element in the development of a smart city is the landscape architecture . The space in-between buildings used by the public, or the public realm as it is more commonly termed. There are two levels of communication within the public realm and the difference between the two are commonly accepted as the differentiation between IoT and IoE. 
IoE, or the Internet of Everything, was a phrase first used by Cisco in an attempt to achieve polarity with competitors that had embraced the term IoT. In Cisco's definition, however, interaction with the human node is highlighted as one main difference between IoT and IoE. The two public-realm communication approaches that make such a space a smart space are IoT and IoE: whilst IoT concerns itself with communication between objects in order to make the design more efficient and interactive from an operational standpoint, IoE also incorporates communication between embedded objects and user devices. The applications include wayfinding , safety, anti-terrorism, targeted advertising , and general information such as the history of the space, or simply making the space more enjoyable.
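A deliberately minimal sketch of the sensor–processor–effector loop described earlier can make the idea concrete. The interfaces below (read_occupancy, set_lighting) are hypothetical stand-ins for building sensors and actuators; the sketch only illustrates that the state of a space is recomputed in real time from sensed activity, not any particular building system.

```python
import random
import time

# Hypothetical hardware interfaces: in a real installation these would talk to
# occupancy sensors and to lighting or facade actuators.
def read_occupancy() -> int:
    return random.randint(0, 40)            # number of people sensed in the space

def set_lighting(level: float) -> None:
    print(f"lighting level set to {level:.2f}")

# Processor: map sensed activity to a spatial response, then drive the effector.
def control_step() -> None:
    occupancy = read_occupancy()
    level = min(1.0, 0.2 + occupancy / 40)   # brighter as the space fills up
    set_lighting(level)

for _ in range(3):                           # in practice this loop runs continuously
    control_step()
    time.sleep(0.1)
```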
https://en.wikipedia.org/wiki/Interactive_architecture
Interactive Specialization is a theory of brain development proposed by the British developmental cognitive neuroscientist Mark Johnson , formerly head of the Centre for Brain and Cognitive Development [ 1 ] at Birkbeck, University of London , London and who is now Head of Psychology at the University of Cambridge. In his book Developmental Cognitive Neuroscience , [ 2 ] Johnson contrasts two views of development. According to the first, the maturational hypothesis, the relationship between structure and function (i.e. which parts of the brain perform a particular task) is static, and specific cognitive skills come “on-line” as the cortical circuitry intrinsic to a particular task matures. Johnson likens this to a "mosaic" view of development. According to the second, the Interactive Specialization (IS) [ 2 ] [ 3 ] hypothesis, development is not a unidirectional maturational process, but rather a set of complex, dynamic and back-propagated interactions between genetics, brain, body and environment. Development is not a simple question of a brain being built according to a pre-specified genetic blueprint - rather, the components of the brain are interacting with each other constantly - even prenatally, when patterns of spontaneous firing of cells in the eyes (before they have opened) transmit signals that appear to help develop the layered structure of the lateral geniculate nucleus . [ 4 ] The hypothesis has attracted increasing attention in recent years as a number of neuroimaging studies on younger children have provided data that appears to fit specific predictions made by Johnson's model [ 5 ] . [ 6 ] In 1996, Johnson co-authored (with Jeffrey Elman , Annette Karmiloff-Smith , Elizabeth Bates , Domenico Parisi, and Kim Plunkett), the book Rethinking Innateness [ 7 ] , which argues against a strong nativist (innate) view on development. Other key influences include Gilbert Gottlieb's theory of Probabilistic Epigenesis , [ 8 ] a framework that emphasizes the reciprocity and ubiquity of gene-environment interaction in the realization of all phenotypes, and work on developmental disorders by Annette Karmiloff-Smith .
https://en.wikipedia.org/wiki/Interactive_specialization
In molecular biology , an interactome is the whole set of molecular interactions in a particular cell . The term specifically refers to physical interactions among molecules (such as those among proteins, also known as protein–protein interactions , PPIs; or between small molecules and proteins [ 1 ] ) but can also describe sets of indirect interactions among genes ( genetic interactions ). The word "interactome" was originally coined in 1999 by a group of French scientists headed by Bernard Jacq. [ 3 ] Mathematically, interactomes are generally displayed as graphs . While interactomes may be described as biological networks , they should not be confused with other networks such as neural networks or food webs . Molecular interactions can occur between molecules belonging to different biochemical families (proteins, nucleic acids, lipids, carbohydrates, etc.) and also within a given family. Whenever such molecules are connected by physical interactions, they form molecular interaction networks that are generally classified by the nature of the compounds involved. Most commonly, interactome refers to protein–protein interaction (PPI) network (PIN) or subsets thereof. For instance, the Sirt-1 protein interactome and Sirt family second order interactome [ 4 ] [ 5 ] is the network involving Sirt-1 and its directly interacting proteins where as second order interactome illustrates interactions up to second order of neighbors (Neighbors of neighbors). Another extensively studied type of interactome is the protein–DNA interactome, also called a gene-regulatory network , a network formed by transcription factors, chromatin regulatory proteins, and their target genes. Even metabolic networks can be considered as molecular interaction networks: metabolites, i.e. chemical compounds in a cell, are converted into each other by enzymes , which have to bind their substrates physically. In fact, all interactome types are interconnected. For instance, protein interactomes contain many enzymes which in turn form biochemical networks. Similarly, gene regulatory networks overlap substantially with protein interaction networks and signaling networks. It has been suggested that the size of an organism's interactome correlates better than genome size with the biological complexity of the organism. [ 7 ] Although protein–protein interaction maps containing several thousand binary interactions are now available for several species, none of them is presently complete and the size of interactomes is still a matter of debate. The yeast interactome, i.e. all protein–protein interactions among proteins of Saccharomyces cerevisiae , has been estimated to contain between 10,000 and 30,000 interactions. A reasonable estimate may be on the order of 20,000 interactions. Larger estimates often include indirect or predicted interactions, often from affinity purification / mass spectrometry (AP/MS) studies. [ 6 ] Genes interact in the sense that they affect each other's function. For instance, a mutation may be harmless, but when it is combined with another mutation, the combination may turn out to be lethal. Such genes are said to "interact genetically". Genes that are connected in such a way form genetic interaction networks . Some of the goals of these networks are: develop a functional map of a cell's processes, drug target identification using chemoproteomics , and to predict the function of uncharacterized genes. 
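Negative and positive genetic interactions of the kind mapped in such networks are commonly quantified as the deviation of a double mutant's fitness from the value expected under a multiplicative model. This is a standard definition, stated here for orientation rather than taken from the article:

```latex
% Genetic interaction score under the multiplicative model (standard definition);
% W denotes fitness relative to wild type, a and b are two mutations.
\varepsilon_{ab} \;=\; W_{ab} \;-\; W_{a}\,W_{b}
% \varepsilon_{ab} < 0 : negative (aggravating) interaction, e.g. synthetic lethality
% \varepsilon_{ab} > 0 : positive (alleviating) interaction
```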
In 2010, the most "complete" gene interactome produced to date was compiled from about 5.4 million two-gene comparisons to describe "the interaction profiles for ~75% of all genes in the budding yeast ", with ~170,000 gene interactions. The genes were grouped based on similar function so as to build a functional map of the cell's processes. Using this method the study was able to predict known gene functions better than any other genome-scale data set as well as adding functional information for genes that hadn't been previously described. From this model genetic interactions can be observed at multiple scales which will assist in the study of concepts such as gene conservation. Some of the observations made from this study are that there were twice as many negative as positive interactions , negative interactions were more informative than positive interactions, and genes with more connections were more likely to result in lethality when disrupted. [ 8 ] Interactomics is a discipline at the intersection of bioinformatics and biology that deals with studying both the interactions and the consequences of those interactions between and among proteins , and other molecules within a cell . [ 9 ] Interactomics thus aims to compare such networks of interactions (i.e., interactomes) between and within species in order to find how the traits of such networks are either preserved or varied. Interactomics is an example of "top-down" systems biology , which takes an overhead view of a biosystem or organism. Large sets of genome-wide and proteomic data are collected, and correlations between different molecules are inferred. From the data new hypotheses are formulated about feedbacks between these molecules. These hypotheses can then be tested by new experiments. [ 10 ] The study of interactomes is called interactomics. The basic unit of a protein network is the protein–protein interaction (PPI). While there are numerous methods to study PPIs, there are relatively few that have been used on a large scale to map whole interactomes. The yeast two hybrid system (Y2H) is suited to explore the binary interactions among two proteins at a time. Affinity purification and subsequent mass spectrometry is suited to identify a protein complex. Both methods can be used in a high-throughput (HTP) fashion. Yeast two hybrid screens allow false positive interactions between proteins that are never expressed in the same time and place; affinity capture mass spectrometry does not have this drawback, and is the current gold standard. Yeast two-hybrid data better indicates non-specific tendencies towards sticky interactions rather while affinity capture mass spectrometry better indicates functional in vivo protein–protein interactions. [ 11 ] [ 12 ] Once an interactome has been created, there are numerous ways to analyze its properties. However, there are two important goals of such analyses. First, scientists try to elucidate the systems properties of interactomes, e.g. the topology of its interactions. Second, studies may focus on individual proteins and their role in the network. Such analyses are mainly carried out using bioinformatics methods and include the following, among many others: First, the coverage and quality of an interactome has to be evaluated. Interactomes are never complete, given the limitations of experimental methods. For instance, it has been estimated that typical Y2H screens detect only 25% or so of all interactions in an interactome. 
[ 13 ] The coverage of an interactome can be assessed by comparing it to benchmarks of well-known interactions that have been found and validated by independent assays. [ 14 ] Other methods filter out false positives calculating the similarity of known annotations of the proteins involved or define a likelihood of interaction using the subcellular localization of these proteins. [ 15 ] Using experimental data as a starting point, homology transfer is one way to predict interactomes. Here, PPIs from one organism are used to predict interactions among homologous proteins in another organism (" interologs "). However, this approach has certain limitations, primarily because the source data may not be reliable (e.g. contain false positives and false negatives). [ 17 ] In addition, proteins and their interactions change during evolution and thus may have been lost or gained. Nevertheless, numerous interactomes have been predicted, e.g. that of Bacillus licheniformis . [ 18 ] Some algorithms use experimental evidence on structural complexes, the atomic details of binding interfaces and produce detailed atomic models of protein–protein complexes [ 19 ] [ 20 ] as well as other protein–molecule interactions. [ 21 ] [ 22 ] Other algorithms use only sequence information, thereby creating unbiased complete networks of interaction with many mistakes. [ 23 ] Some methods use machine learning to distinguish how interacting protein pairs differ from non-interacting protein pairs in terms of pairwise features such as cellular colocalization, gene co-expression, how closely located on a DNA are the genes that encode the two proteins, and so on. [ 16 ] [ 24 ] Random Forest has been found to be most-effective machine learning method for protein interaction prediction. [ 25 ] Such methods have been applied for discovering protein interactions on human interactome, specifically the interactome of Membrane proteins [ 24 ] and the interactome of Schizophrenia-associated proteins. [ 16 ] Some efforts have been made to extract systematically interaction networks directly from the scientific literature. Such approaches range in terms of complexity from simple co-occurrence statistics of entities that are mentioned together in the same context (e.g. sentence) to sophisticated natural language processing and machine learning methods for detecting interaction relationships. [ 26 ] Protein interaction networks have been used to predict the function of proteins of unknown functions. [ 27 ] [ 28 ] This is usually based on the assumption that uncharacterized proteins have similar functions as their interacting proteins ( guilt by association ). For example, YbeB, a protein of unknown function was found to interact with ribosomal proteins and later shown to be involved in bacterial and eukaryotic (but not archaeal) translation . [ 29 ] Although such predictions may be based on single interactions, usually several interactions are found. Thus, the whole network of interactions can be used to predict protein functions, given that certain functions are usually enriched among the interactors. [ 27 ] The term hypothome has been used to denote an interactome wherein at least one of the genes or proteins is a hypothetical protein . [ 30 ] The topology of an interactome makes certain predictions how a network reacts to the perturbation (e.g. removal) of nodes (proteins) or edges (interactions). [ 31 ] Such perturbations can be caused by mutations of genes, and thus their proteins, and a network reaction can manifest as a disease . 
[ 32 ] A network analysis can identify drug targets and biomarkers of diseases. [ 33 ] Interaction networks can be analyzed using the tools of graph theory . Network properties include the degree distribution, clustering coefficients , betweenness centrality , and many others. The distribution of properties among the proteins of an interactome has revealed that the interactome networks often have scale-free topology [ 34 ] where functional modules within a network indicate specialized subnetworks. [ 35 ] Such modules can be functional, as in a signaling pathway , or structural, as in a protein complex. In fact, it is a formidable task to identify protein complexes in an interactome, given that a network on its own does not directly reveal the presence of a stable complex. Viral protein interactomes consist of interactions among viral or phage proteins. They were among the first interactome projects as their genomes are small and all proteins can be analyzed with limited resources. Viral interactomes are connected to their host interactomes, forming virus-host interaction networks. [ 36 ] Some published virus interactomes include Bacteriophage The lambda and VZV interactomes are not only relevant for the biology of these viruses but also for technical reasons: they were the first interactomes that were mapped with multiple Y2H vectors, proving an improved strategy to investigate interactomes more completely than previous attempts have shown. Human (mammalian) viruses Relatively few bacteria have been comprehensively studied for their protein–protein interactions. However, none of these interactomes are complete in the sense that they captured all interactions. In fact, it has been estimated that none of them covers more than 20% or 30% of all interactions, primarily because most of these studies have only employed a single method, all of which discover only a subset of interactions. [ 13 ] Among the published bacterial interactomes (including partial ones) are The E. coli and Mycoplasma interactomes have been analyzed using large-scale protein complex affinity purification and mass spectrometry (AP/MS), hence it is not easily possible to infer direct interactions. The others have used extensive yeast two-hybrid (Y2H) screens. The Mycobacterium tuberculosis interactome has been analyzed using a bacterial two-hybrid screen (B2H). Note that numerous additional interactomes have been predicted using computational methods (see section above). There have been several efforts to map eukaryotic interactomes through HTP methods. While no biological interactomes have been fully characterized, over 90% of proteins in Saccharomyces cerevisiae have been screened and their interactions characterized, making it the best-characterized interactome. [ 27 ] [ 58 ] [ 59 ] Species whose interactomes have been studied in some detail include Recently, the pathogen-host interactomes of Hepatitis C Virus/Human (2008), [ 62 ] Epstein Barr virus/Human (2008), Influenza virus/Human (2009) were delineated through HTP to identify essential molecular components for pathogens and for their host's immune system. [ 63 ] As described above, PPIs and thus whole interactomes can be predicted. While the reliability of these predictions is debatable, they are providing hypotheses that can be tested experimentally. Interactomes have been predicted for a number of species, e.g. Protein interaction networks can be analyzed with the same tool as other networks. In fact, they share many properties with biological or social networks . 
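Since interactomes are displayed and analysed as graphs, a small protein–protein interaction network can be represented with standard graph tooling. The sketch below uses the Python networkx library with invented protein names; it illustrates the representation only, not real interaction data.

```python
import networkx as nx

# Toy PPI network with invented protein names (not real interaction data).
ppi = nx.Graph()
ppi.add_edges_from([
    ("P1", "P2"), ("P1", "P3"), ("P1", "P4"),  # P1 acts as a hub
    ("P2", "P3"), ("P4", "P5"),
])

print(dict(ppi.degree()))          # number of interaction partners per protein
print(nx.clustering(ppi, "P1"))    # local clustering coefficient of the hub

# "Second order" neighbourhood of P1: neighbours plus neighbours of neighbours.
second_order = set(nx.single_source_shortest_path_length(ppi, "P1", cutoff=2))
print(second_order - {"P1"})
```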
Some of the main characteristics are as follows. The degree distribution describes the number of proteins that have a certain number of connections. Most protein interaction networks show a scale-free ( power law ) degree distribution where the connectivity distribution P(k) ~ k −γ with k being the degree. This relationship can also be seen as a straight line on a log-log plot since, the above equation is equal to log(P(k)) ~ —y•log(k). One characteristic of such distributions is that there are many proteins with few interactions and few proteins that have many interactions, the latter being called "hubs". Highly connected nodes (proteins) are called hubs. Han et al. [ 73 ] have coined the term " party hub " for hubs whose expression is correlated with its interaction partners. Party hubs also connect proteins within functional modules such as protein complexes. In contrast, " date hubs " do not exhibit such a correlation and appear to connect different functional modules. Party hubs are found predominantly in AP/MS data sets, whereas date hubs are found predominantly in binary interactome network maps. [ 74 ] Note that the validity of the date hub/party hub distinction was disputed. [ 75 ] [ 76 ] Party hubs generally consist of multi-interface proteins whereas date hubs are more frequently single-interaction interface proteins. [ 77 ] Consistent with a role for date-hubs in connecting different processes, in yeast the number of binary interactions of a given protein is correlated to the number of phenotypes observed for the corresponding mutant gene in different physiological conditions. [ 74 ] Nodes involved in the same biochemical process are highly interconnected. [ 33 ] The evolution of interactome complexity is delineated in a study published in Nature . [ 78 ] In this study it is first noted that the boundaries between prokaryotes , unicellular eukaryotes and multicellular eukaryotes are accompanied by orders-of-magnitude reductions in effective population size, with concurrent amplifications of the effects of random genetic drift . The resultant decline in the efficiency of selection seems to be sufficient to influence a wide range of attributes at the genomic level in a nonadaptive manner. The Nature study shows that the variation in the power of random genetic drift is also capable of influencing phylogenetic diversity at the subcellular and cellular levels. Thus, population size would have to be considered as a potential determinant of the mechanistic pathways underlying long-term phenotypic evolution. In the study it is further shown that a phylogenetically broad inverse relation exists between the power of drift and the structural integrity of protein subunits. Thus, the accumulation of mildly deleterious mutations in populations of small size induces secondary selection for protein–protein interactions that stabilize key gene functions, mitigating the structural degradation promoted by inefficient selection. By this means, the complex protein architectures and interactions essential to the genesis of phenotypic diversity may initially emerge by non-adaptive mechanisms. Kiemer and Cesareni [ 9 ] raise the following concerns with the state (circa 2007) of the field especially with the comparative interactomic: The experimental procedures associated with the field are error prone leading to "noisy results". This leads to 30% of all reported interactions being artifacts. In fact, two groups using the same techniques on the same organism found less than 30% interactions in common. 
However, some authors have argued that such non-reproducibility results from the extraordinary sensitivity of various methods to small experimental variation. For instance, identical conditions in Y2H assays result in very different interactions when different Y2H vectors are used. [ 13 ] Techniques may be biased, i.e. the technique determines which interactions are found. In fact, any method has built in biases, especially protein methods. Because every protein is different no method can capture the properties of each protein. For instance, most analytical methods that work fine with soluble proteins deal poorly with membrane proteins. This is also true for Y2H and AP/MS technologies. Interactomes are not nearly complete with perhaps the exception of S. cerevisiae. This is not really a criticism as any scientific area is "incomplete" initially until the methodologies have been improved. Interactomics in 2015 is where genome sequencing was in the late 1990s, given that only a few interactome datasets are available (see table above). While genomes are stable, interactomes may vary between tissues, cell types, and developmental stages. Again, this is not a criticism, but rather a description of the challenges in the field. It is difficult to match evolutionarily related proteins in distantly related species. While homologous DNA sequences can be found relatively easily, it is much more difficult to predict homologous interactions ("interologs") because the homologs of two interacting proteins do not need to interact. For instance, even within a proteome two proteins may interact but their paralogs may not. Each protein–protein interactome may represent only a partial sample of potential interactions, even when a supposedly definitive version is published in a scientific journal. Additional factors may have roles in protein interactions that have yet to be incorporated in interactomes. The binding strength of the various protein interactors, microenvironmental factors, sensitivity to various procedures, and the physiological state of the cell all impact protein–protein interactions, yet are usually not accounted for in interactome studies. [ 79 ]
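Returning to the network properties discussed above: the scale-free degree distribution P(k) ∝ k^(−γ) appears as a straight line of slope −γ on a log–log plot (the garbled in-text expression corresponds to log P(k) ≈ −γ·log k + const). A crude way to estimate γ is a linear fit in log–log space, sketched below on a synthetic graph; real studies use more careful estimators such as maximum likelihood.

```python
import numpy as np
import networkx as nx

# Synthetic scale-free-like network; a real study would load an experimental interactome.
G = nx.barabasi_albert_graph(n=2000, m=2, seed=0)

degrees = np.array([d for _, d in G.degree()])
ks, counts = np.unique(degrees, return_counts=True)
pk = counts / counts.sum()                       # empirical P(k)

# Crude estimate of gamma: slope of log P(k) vs log k (more robust methods exist).
slope, intercept = np.polyfit(np.log(ks), np.log(pk), 1)
print(f"estimated gamma ~ {-slope:.2f}")
```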
https://en.wikipedia.org/wiki/Interactome
An interactor is an entity that natural selection acts upon; the concept is commonly used in evolutionary biology . In Darwin's widely accepted account of evolution, a population often contains heritable variation in traits among individuals, and one form of a trait may be more beneficial than the others. Individuals carrying the beneficial form have a better chance of leaving offspring that are well adjusted to the environment. [ 1 ] The process by which the environment selects on the traits of organisms is called natural selection . On this view, natural selection acts on traits of individuals, which evolutionary biologists call interactors; put differently, an interactor is a part of an organism that natural selection acts upon. Terms often mentioned in the same context as interactors are replicators and vehicles. Replicators are things that pass on their entire structure through successive replications, such as genes. They are not the same as interactors: interactors are things that interact with their environment and that natural selection can act upon, and through this interaction with the environment they cause differential replication. Some things (for example genes) can, however, be both replicators and interactors. "Vehicle" is often used as a synonym for interactor, but the word suggests that vehicles "drive" natural selection, as if they could steer it in a specific direction; for that reason some authors (such as Hull ) prefer "interactor" to "vehicle" for the same concept. An example of an interactor is the shell colour of snails. A study on common garden snails showed how natural selection on an interactor works. The species is well suited to evolutionary research because its phenotype is easy to score and the phenotypic variation has a very straightforward genetic basis: variation is found in shell colour and banding, and both are regulated by a single gene . Shell colour varies between brown, pink and yellow, with brown dominant over pink and yellow; banding varies from unbanded to banded, with banded individuals differing from one another in the number of bands. One conclusion of the research was that in grasslands yellow individuals had a higher survival rate and were more abundant, meaning that natural selection acted on shell colour; shell colour is therefore the interactor in this example. The study also found that brown individuals were more abundant and survived better in woodlands than yellow individuals. Moreover, a specific form of natural selection called thermal selection showed shell colour interacting with the environment: yellow shells, which reflect heat better, were more abundant in warmer places.
David Hull, Science and Selection , 2001 ( http://assets.cambridge.org/97805216/43399/sample/9780521643399ws.pdf ). David Hull, Replication and Reproduction , 2001 ( https://plato.stanford.edu/entries/replication/ ). Adrian Surmacki, Agata Ożarowska-Nowicka & Zuzanna M. Rosin, "Color polymorphism in a land snail Cepaea nemoralis (Pulmonata: Helicidae) as viewed by potential avian predators", 2013 ( https://link.springer.com/content/pdf/10.1007/s00114-013-1049-y.pdf ). Charles Darwin, On the Origin of Species , 1859.
https://en.wikipedia.org/wiki/Interactor
The Interagency GPS Executive Board ( IGEB ) was an agency of the United States federal government that sought to integrate the needs and desires of various governmental agencies into formal Global Positioning System planning. GPS was administered by the Department of Defense but had grown to serve a wide variety of constituents. The majority of GPS uses are now non-military, so the board was instrumental in ensuring that the needs of non-military users were represented. In 2004, the IGEB was superseded by the National Executive Committee for Space-Based Positioning, Navigation and Timing (PNT), established by presidential order.
https://en.wikipedia.org/wiki/Interagency_GPS_Executive_Board
Interatomic Coulombic decay ( ICD ) [ 1 ] is a general, fundamental property of atoms and molecules that have neighbors. Interatomic (intermolecular) Coulombic decay is a very efficient interatomic (intermolecular) relaxation process of an electronically excited atom or molecule embedded in an environment. Without the environment the process cannot take place. Until now it has been mainly demonstrated for atomic and molecular clusters , independently of whether they are of van-der-Waals or hydrogen bonded type. The nature of the process can be depicted as follows: Consider a cluster with two subunits, A and B . Suppose an inner- valence electron is removed from subunit A . If the resulting (ionized) state is higher in energy than the double ionization threshold of subunit A then an intraatomic (intramolecular) process ( autoionization , in the case of core ionization Auger decay ) sets in. Even though the excitation is energetically not higher than the double ionization threshold of subunit A itself, it may be higher than the double ionization threshold of the cluster which is lowered due to charge separation. If this is the case, an interatomic (intermolecular) process sets in which is called ICD. During the ICD the excess energy of subunit A is used to remove (due to electronic correlation ) an outer-valence electron from subunit B . As a result, a doubly ionized cluster is formed with a single positive charge on A and B . Thus, charge separation in the final state is a fingerprint of ICD. As a consequence of the charge separation the cluster typically breaks apart via Coulomb explosion . ICD is characterized by its decay rate or the lifetime of the excited state . The decay rate depends on the interatomic (intermolecular) distance of A and B and its dependence allows to draw conclusions on the mechanism of ICD. [ 2 ] Particularly important is the determination of the kinetic energy spectrum of the electron emitted from subunit B which is denoted as ICD electron. [ 3 ] ICD electrons are often measured in ICD experiments. [ 4 ] [ 5 ] [ 6 ] Typically, ICD takes place on the femto second time scale, [ 7 ] [ 8 ] [ 9 ] many orders of magnitude faster than those of the competing photon emission and other relaxation processes. Very recently, ICD has been identified to be an additional source of low energy electrons in water [ 10 ] and water clusters . [ 11 ] [ 12 ] There, ICD is faster than the competing proton transfer that is usually the prominent pathway in the case of electronic excitation of water clusters. The response of condensed water to electronic excitations is of utmost importance for biological systems. For instance, it was shown in experiments that low energy electrons do affect constituents of DNA effectively. Furthermore, ICD was reported after core-electron excitations of hydroxide in dissolved water. [ 13 ] Interatomic (Intermolecular) processes do not only occur after ionization as described above. Independent of what kind of electronic excitation is at hand, an interatomic (intermolecular) process can set in if an atom or molecule is in a state energetically higher than the ionization threshold of other atoms or molecules in the neighborhood. The following ICD related processes, which were for convenience considered below for clusters, are known:
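A simple back-of-the-envelope energy balance, stated here as a common estimate rather than as a formula from the article, illustrates both why charge separation lowers the effective double-ionization threshold and where the ICD electron's kinetic energy comes from: treating the final A⁺ and B⁺ ions as point charges at distance R,

```latex
% Back-of-the-envelope ICD energy balance (A initially inner-valence ionized):
E_{\mathrm{kin}}(e^{-}_{\mathrm{ICD}}) \;\approx\;
  \mathrm{IP}_{\mathrm{inner}}(A) \;-\; \mathrm{IP}_{\mathrm{outer}}(A)
  \;-\; \mathrm{IP}_{\mathrm{outer}}(B) \;-\; \frac{e^{2}}{4\pi\varepsilon_{0}R}
% The last term is the Coulomb repulsion of the two final-state ions at distance R,
% which subsequently drives the Coulomb explosion of the cluster.
```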
https://en.wikipedia.org/wiki/Interatomic_Coulombic_decay
Interatomic potentials are mathematical functions to calculate the potential energy of a system of atoms with given positions in space. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Interatomic potentials are widely used as the physical basis of molecular mechanics and molecular dynamics simulations in computational chemistry , computational physics and computational materials science to explain and predict materials properties. Examples of quantitative properties and qualitative phenomena that are explored with interatomic potentials include lattice parameters, surface energies, interfacial energies, adsorption , cohesion , thermal expansion , and elastic and plastic material behavior, as well as chemical reactions . [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] Interatomic potentials can be written as a series expansion of functional terms that depend on the position of one, two, three, etc. atoms at a time. Then the total potential of the system V {\displaystyle \textstyle V_{\mathrm {} }} can be written as [ 3 ] Here V 1 {\displaystyle \textstyle V_{1}} is the one-body term, V 2 {\displaystyle \textstyle V_{2}} the two-body term, V 3 {\displaystyle \textstyle V_{3}} the three body term, N {\displaystyle \textstyle N} the number of atoms in the system, r → i {\displaystyle {\vec {r}}_{i}} the position of atom i {\displaystyle i} , etc. i {\displaystyle i} , j {\displaystyle j} and k {\displaystyle k} are indices that loop over atom positions. Note that in case the pair potential is given per atom pair, in the two-body term the potential should be multiplied by 1/2 as otherwise each bond is counted twice, and similarly the three-body term by 1/6. [ 3 ] Alternatively, the summation of the pair term can be restricted to cases i < j {\displaystyle \textstyle i<j} and similarly for the three-body term i < j < k {\displaystyle \textstyle i<j<k} , if the potential form is such that it is symmetric with respect to exchange of the j {\displaystyle j} and k {\displaystyle k} indices (this may not be the case for potentials for multielemental systems). The one-body term is only meaningful if the atoms are in an external field (e.g. an electric field). In the absence of external fields, the potential V {\displaystyle V} should not depend on the absolute position of atoms, but only on the relative positions. This means that the functional form can be rewritten as a function of interatomic distances r i j = | r → i − r → j | {\displaystyle \textstyle r_{ij}=|{\vec {r}}_{i}-{\vec {r}}_{j}|} and angles between the bonds (vectors to neighbours) θ i j k {\displaystyle \textstyle \theta _{ijk}} . Then, in the absence of external forces, the general form becomes In the three-body term V 3 {\displaystyle \textstyle V_{3}} the interatomic distance r j k {\displaystyle \textstyle r_{jk}} is not needed since the three terms r i j , r i k , θ i j k {\displaystyle \textstyle r_{ij},r_{ik},\theta _{ijk}} are sufficient to give the relative positions of three atoms i , j , k {\displaystyle i,j,k} in three-dimensional space. Any terms of order higher than 2 are also called many-body potentials . In some interatomic potentials the many-body interactions are embedded into the terms of a pair potential (see discussion on EAM-like and bond order potentials below). In principle the sums in the expressions run over all N {\displaystyle N} atoms. However, if the range of the interatomic potential is finite, i.e. 
the potentials V ( r ) ≡ 0 {\displaystyle \textstyle V(r)\equiv 0} above some cutoff distance r c u t {\displaystyle \textstyle r_{\mathrm {cut} }} , the summing can be restricted to atoms within the cutoff distance of each other. By also using a cellular method for finding the neighbours, [ 1 ] the MD algorithm can be an O(N) algorithm. Potentials with an infinite range can be summed up efficiently by Ewald summation and its further developments. The forces acting between atoms can be obtained by differentiation of the total energy with respect to atom positions. That is, to get the force on atom i {\displaystyle i} one should take the three-dimensional derivative (gradient) of the potential V tot {\displaystyle V_{\text{tot}}} with respect to the position of atom i {\displaystyle i} : For two-body potentials this gradient reduces, thanks to the symmetry with respect to i j {\displaystyle ij} in the potential form, to straightforward differentiation with respect to the interatomic distances r i j {\displaystyle \textstyle r_{ij}} . However, for many-body potentials (three-body, four-body, etc.) the differentiation becomes considerably more complex [ 12 ] [ 13 ] since the potential may not be any longer symmetric with respect to i j {\displaystyle ij} exchange. In other words, also the energy of atoms k {\displaystyle k} that are not direct neighbours of i {\displaystyle i} can depend on the position r → i {\displaystyle \textstyle {\vec {r}}_{i}} because of angular and other many-body terms, and hence contribute to the gradient ∇ r → k {\displaystyle \textstyle \nabla _{{\vec {r}}_{k}}} . Interatomic potentials come in many different varieties, with different physical motivations. Even for single well-known elements such as silicon, a wide variety of potentials quite different in functional form and motivation have been developed. [ 14 ] The true interatomic interactions are quantum mechanical in nature, and there is no known way in which the true interactions described by the Schrödinger equation or Dirac equation for all electrons and nuclei could be cast into an analytical functional form. Hence all analytical interatomic potentials are by necessity approximations . Over time interatomic potentials have largely grown more complex and more accurate, although this is not strictly true. [ 15 ] This has included both increased descriptions of physics, as well as added parameters. Until recently, all interatomic potentials could be described as "parametric", having been developed and optimized with a fixed number of (physical) terms and parameters. New research focuses instead on non-parametric potentials which can be systematically improvable by using complex local atomic neighbor descriptors and separate mappings to predict system properties, such that the total number of terms and parameters are flexible. [ 16 ] These non-parametric models can be significantly more accurate, but since they are not tied to physical forms and parameters, there are many potential issues surrounding extrapolation and uncertainties. The arguably simplest widely used interatomic interaction model is the Lennard-Jones potential [ 17 ] [ 18 ] [ 11 ] where ε {\displaystyle \textstyle \varepsilon } is the depth of the potential well and σ {\displaystyle \textstyle \sigma } is the distance at which the potential crosses zero. 
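The displayed formulas that accompany these descriptions in the original article are not present in this text. For reference, the standard forms matching the surrounding definitions (the many-body expansion, the force as a gradient, and the simple pair and many-body potentials discussed here and in the following paragraphs) are:

```latex
% Many-body expansion of the total potential energy (standard form):
V_{\mathrm{tot}} = \sum_{i}^{N} V_{1}(\vec r_{i})
  + \sum_{i<j}^{N} V_{2}(r_{ij})
  + \sum_{i<j<k}^{N} V_{3}(r_{ij}, r_{ik}, \theta_{ijk}) + \cdots

% Force on atom i as the negative gradient of the total energy:
\vec F_{i} = -\,\nabla_{\vec r_{i}} V_{\mathrm{tot}}

% Lennard-Jones pair potential (well depth \varepsilon, zero crossing \sigma):
V_{\mathrm{LJ}}(r) = 4\varepsilon\left[\left(\tfrac{\sigma}{r}\right)^{12}
  - \left(\tfrac{\sigma}{r}\right)^{6}\right]

% Morse pair potential as a sum of two exponentials
% (bond energy D_e, bond length r_e, width parameter a):
V_{\mathrm{Morse}}(r) = D_{e}\left[e^{-2a(r-r_{e})} - 2\,e^{-a(r-r_{e})}\right]

% Screened Coulomb potential (nuclear charges Z_1, Z_2, screening function \varphi,
% screening length a), with \varphi(x) \to 1 as x \to 0:
V(r) = \frac{Z_{1}Z_{2}\,e^{2}}{4\pi\varepsilon_{0}\,r}\,
       \varphi\!\left(\frac{r}{a}\right)

% EAM-like total energy: embedding function of a summed "electron density"
% plus a (usually repulsive) pair term:
V_{\mathrm{tot}} = \sum_{i}^{N} F_{i}\!\left(\sum_{j\neq i}\rho(r_{ij})\right)
  + \frac{1}{2}\sum_{i\neq j}^{N} V_{2}(r_{ij})
```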
The attractive term proportional to 1 / r 6 {\displaystyle \textstyle 1/r^{6}} in the potential comes from the scaling of van der Waals forces , while the 1 / r 12 {\displaystyle \textstyle 1/r^{12}} repulsive term is much more approximate (conveniently the square of the attractive term). [ 6 ] On its own, this potential is quantitatively accurate only for noble gases and has been extensively studied in the past decades, [ 19 ] but is also widely used for qualitative studies and in systems where dipole interactions are significant, particularly in chemistry force fields to describe intermolecular interactions - especially in fluids. [ 20 ] Another simple and widely used pair potential is the Morse potential , which consists simply of a sum of two exponentials. Here D e {\displaystyle \textstyle D_{e}} is the equilibrium bond energy and r e {\displaystyle \textstyle r_{e}} the bond distance. The Morse potential has been applied to studies of molecular vibrations and solids, [ 21 ] and also inspired the functional form of more accurate potentials such as the bond-order potentials. Ionic materials are often described by a sum of a short-range repulsive term, such as the Buckingham pair potential , and a long-range Coulomb potential giving the ionic interactions between the ions forming the material. The short-range term for ionic materials can also be of many-body character . [ 22 ] Pair potentials have some inherent limitations, such as the inability to describe all 3 elastic constants of cubic metals or correctly describe both cohesive energy and vacancy formation energy. [ 7 ] Therefore, quantitative molecular dynamics simulations are carried out with various of many-body potentials. For very short interatomic separations, important in radiation material science , the interactions can be described quite accurately with screened Coulomb potentials which have the general form Here, φ ( r ) → 1 {\displaystyle \varphi (r)\to 1} when r → 0 {\displaystyle r\to 0} . Z 1 {\displaystyle Z_{1}} and Z 2 {\displaystyle Z_{2}} are the charges of the interacting nuclei, and a {\displaystyle a} is the so-called screening parameter. A widely used popular screening function is the "Universal ZBL" one. [ 23 ] and more accurate ones can be obtained from all-electron quantum chemistry calculations [ 24 ] [ 25 ] In a comparative study of several quantum chemistry methods, it was shown that pair-specific "NLH" repulsive potentials with a simple three-exponential screening function are accurate to within ~2% above 30 eV, while the universal ZBL potential differs by ~5%–10% from the quantum chemical calculations above 100 eV. [ 25 ] In binary collision approximation simulations this kind of potential can be used to describe the nuclear stopping power . The Stillinger-Weber potential [ 26 ] is a potential that has a two-body and three-body terms of the standard form where the three-body term describes how the potential energy changes with bond bending. It was originally developed for pure Si, but has been extended to many other elements and compounds [ 27 ] [ 28 ] and also formed the basis for other Si potentials. [ 29 ] [ 30 ] Metals are very commonly described with what can be called "EAM-like" potentials, i.e. potentials that share the same functional form as the embedded atom model . 
In these potentials, the total potential energy is written as a sum of an embedding energy and a pair term, V_tot = Σ_i F_i ( Σ_{j≠i} ρ(r_ij) ) + (1/2) Σ_i Σ_{j≠i} V_2(r_ij), where F_i is a so-called embedding function (not to be confused with the force F_i) that is a function of the sum of the so-called electron density ρ(r_ij), and V_2 is a pair potential that usually is purely repulsive. In the original formulation [ 31 ] [ 32 ] the electron density function ρ(r_ij) was obtained from true atomic electron densities, and the embedding function was motivated from density-functional theory as the energy needed to 'embed' an atom into the electron density. [ 33 ] However, many other potentials used for metals share the same functional form but motivate the terms differently, e.g. based on tight-binding theory [ 34 ] [ 35 ] [ 36 ] or other motivations. [ 37 ] [ 38 ] [ 39 ] EAM-like potentials are usually implemented as numerical tables. A collection of tables is available at the interatomic potential repository at NIST. [1] Covalently bonded materials are often described by bond order potentials , sometimes also called Tersoff-like or Brenner-like potentials. [ 10 ] [ 40 ] [ 41 ] These have in general a form that resembles a pair potential, in which the repulsive and attractive parts are simple exponential functions similar to those in the Morse potential. However, the strength is modified by the environment of the atom i via the b_ijk term. If implemented without an explicit angular dependence, these potentials can be shown to be mathematically equivalent to some varieties of EAM-like potentials. [ 42 ] [ 43 ] Thanks to this equivalence, the bond-order potential formalism has also been implemented for many metal-covalent mixed materials. [ 43 ] [ 44 ] [ 45 ] [ 46 ] EAM potentials have also been extended to describe covalent bonding by adding angular-dependent terms to the electron density function ρ, in what is called the modified embedded atom method (MEAM). [ 47 ] [ 48 ] [ 49 ] A force field is the collection of parameters used to describe the physical interactions between atoms or physical units (up to ~10^8 of them) with a given energy expression. The term force field characterizes the collection of parameters for a given interatomic potential (energy function) and is often used within the computational chemistry community. [ 50 ] The force field parameters make the difference between good and poor models. Force fields are used for the simulation of metals, ceramics, molecules, chemistry, and biological systems, covering the entire periodic table and multiphase materials. Today's performance is among the best for solid-state materials, [ 51 ] [ 52 ] molecular fluids, [ 20 ] and biomacromolecules, [ 53 ] whereby biomacromolecules were the primary focus of force fields from the 1970s to the early 2000s. Force fields range from relatively simple and interpretable fixed-bond models (e.g. the Interface force field, [ 50 ] CHARMM , [ 54 ] and COMPASS) to explicitly reactive models with many adjustable fit parameters (e.g. ReaxFF ) and machine learning models. It should first be noted that non-parametric potentials are often referred to as "machine learning" potentials.
While the descriptor/mapping forms of non-parametric models are closely related to machine learning in general, and their complex nature makes machine-learning fitting optimizations almost necessary, the distinction is important in that parametric models can also be optimized using machine learning. Current research in interatomic potentials involves using systematically improvable, non-parametric mathematical forms and increasingly complex machine learning methods. The total energy is then written V_TOT = Σ_i^N E(q_i), where q_i is a mathematical representation of the atomic environment surrounding the atom i, known as the descriptor . [ 55 ] E is a machine-learning model that provides a prediction for the energy of atom i based on the descriptor output. An accurate machine-learning potential requires both a robust descriptor and a suitable machine learning framework. The simplest descriptor is the set of interatomic distances from atom i to its neighbours, yielding a machine-learned pair potential. However, more complex many-body descriptors are needed to produce highly accurate potentials. [ 55 ] It is also possible to use a linear combination of multiple descriptors with associated machine-learning models. [ 56 ] Potentials have been constructed using a variety of machine-learning methods, descriptors, and mappings, including neural networks , [ 57 ] Gaussian process regression , [ 58 ] [ 59 ] and linear regression . [ 60 ] [ 16 ] A non-parametric potential is most often trained to total energies, forces, and/or stresses obtained from quantum-level calculations, such as density functional theory , as with most modern potentials. However, the accuracy of a machine-learning potential can be converged to be comparable with the underlying quantum calculations, unlike analytical models. Hence, they are in general more accurate than traditional analytical potentials, but they are correspondingly less able to extrapolate. Further, owing to the complexity of the machine-learning model and the descriptors, they are computationally far more expensive than their analytical counterparts. Non-parametric, machine-learned potentials may also be combined with parametric, analytical potentials, for example to include known physics such as the screened Coulomb repulsion, [ 61 ] or to impose physical constraints on the predictions. [ 62 ] Since interatomic potentials are approximations, they all by necessity involve parameters that need to be adjusted to some reference values. In simple potentials such as the Lennard-Jones and Morse ones, the parameters are interpretable and can be set to match e.g. the equilibrium bond length and bond strength of a dimer molecule or the surface energy of a solid . [ 63 ] [ 64 ] The Lennard-Jones potential can typically describe the lattice parameters, surface energies, and approximate mechanical properties. [ 65 ] Many-body potentials often contain tens or even hundreds of adjustable parameters with limited interpretability and no compatibility with common interatomic potentials for bonded molecules. Such parameter sets can be fit to a larger set of experimental data, or to materials properties derived from less reliable data such as density-functional theory . [ 66 ] [ 67 ]
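As noted above, the parameters of simple pair potentials map directly onto physical quantities. The following hypothetical Python illustration (not taken from the article) sets Lennard-Jones parameters from a dimer's bond length and binding energy, using the fact that the Lennard-Jones minimum lies at r_min = 2^(1/6) σ with depth ε:

```python
def lj_parameters_from_dimer(r_min, well_depth):
    """Map dimer properties onto Lennard-Jones parameters.

    r_min: equilibrium dimer bond length; well_depth: dimer binding energy.
    The LJ minimum sits at r_min = 2**(1/6) * sigma with depth epsilon,
    so the mapping is a one-line inversion.
    """
    sigma = r_min / 2 ** (1 / 6)
    epsilon = well_depth
    return epsilon, sigma

# Hypothetical argon-like dimer: r_min ≈ 3.82 Å, well depth ≈ 0.0104 eV.
print(lj_parameters_from_dimer(3.82, 0.0104))
```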
For solids, a many-body potential can often describe the lattice constant of the equilibrium crystal structure, the cohesive energy , and linear elastic constants , as well as basic point defect properties of all the elements and stable compounds well, although deviations in surface energies often exceed 50%. [ 30 ] [ 43 ] [ 45 ] [ 46 ] [ 65 ] [ 50 ] [ 68 ] [ 69 ] [ 70 ] Non-parametric potentials in turn contain hundreds or even thousands of independent parameters to fit. For any but the simplest model forms, sophisticated optimization and machine learning methods are necessary to obtain useful potentials. The aim of most potential functions and fitting is to make the potential transferable , i.e. able to describe materials properties that are clearly different from those it was fitted to (for examples of potentials explicitly aiming for this, see e.g. [ 71 ] [ 72 ] [ 73 ] [ 74 ] [ 75 ] ). Key aspects here are the correct representation of chemical bonding, validation of structures and energies, as well as interpretability of all parameters. [ 51 ] Full transferability and interpretability is reached with the Interface force field (IFF). [ 50 ] As an example of partial transferability, a review of interatomic potentials of Si describes that the Stillinger-Weber and Tersoff III potentials for Si can describe several (but not all) materials properties they were not fitted to. [ 14 ] The NIST interatomic potential repository provides a collection of fitted interatomic potentials, either as fitted parameter values or as numerical tables of the potential functions. [ 76 ] The OpenKIM [ 77 ] project also provides a repository of fitted potentials, along with collections of validation tests and a software framework for promoting reproducibility in molecular simulations using interatomic potentials. Since the 1990s, machine learning programs have been employed to construct interatomic potentials, mapping atomic structures to their potential energies. These are generally referred to as 'machine learning potentials' (MLPs) [ 78 ] or as 'machine-learned interatomic potentials' (MLIPs). [ 79 ] Such machine learning potentials help fill the gap between highly accurate but computationally intensive simulations like density functional theory and computationally lighter, but much less precise, empirical potentials. Early neural networks showed promise, but their inability to systematically account for interatomic energy interactions limited their applications to smaller, low-dimensional systems, keeping them largely within the confines of academia. However, with continuous advancements in artificial intelligence technology, machine learning methods have become significantly more accurate, increasing the use of machine learning in the field. [ 80 ] [ 81 ] [ 79 ] Modern neural networks have revolutionized the construction of highly accurate and computationally light potentials by integrating theoretical understanding of materials science into their architectures and preprocessing. Almost all are local, accounting for all interactions between an atom and its neighbours within some cutoff radius. These neural networks usually take atomic coordinates as input and output potential energies. Atomic coordinates are sometimes transformed with atom-centered symmetry functions or pair symmetry functions before being fed into neural networks. Encoding symmetry has been pivotal in enhancing machine learning potentials by drastically constraining the neural networks' search space. [ 80 ] [ 82 ]
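To make the idea of an atom-centered symmetry function concrete, here is a minimal Python sketch of one radial, Behler-Parrinello-style function. The functional form and the parameter names (eta, r_s) follow a common convention in the literature rather than anything defined in this article, so the sketch should be read as an illustrative assumption, not as the method of any particular potential.

```python
import math

def cutoff(r, r_cut):
    """Smooth cutoff that goes to zero at r_cut (a common convention)."""
    return 0.5 * (math.cos(math.pi * r / r_cut) + 1.0) if r < r_cut else 0.0

def radial_symmetry_function(distances, eta=1.0, r_s=2.5, r_cut=6.0):
    """One radial descriptor value for a central atom.

    distances: distances from the central atom to its neighbours (Å).
    Returns a single rotation- and permutation-invariant number; a full
    descriptor would stack many such values with different eta and r_s.
    """
    return sum(math.exp(-eta * (r - r_s) ** 2) * cutoff(r, r_cut) for r in distances)

print(radial_symmetry_function([2.3, 2.5, 2.7, 4.1]))
```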
Conversely, message-passing neural networks (MPNNs), a form of graph neural networks, learn their own descriptors and symmetry encodings. They treat molecules as three-dimensional graphs and iteratively update each atom's feature vectors as information about neighboring atoms is processed through message functions and convolutions. These feature vectors are then used to directly predict the final potentials. In 2017, the first-ever MPNN model, a deep tensor neural network, was used to calculate the properties of small organic molecules. [ 83 ] [ 80 ] [ 84 ] Another class of machine-learned interatomic potential is the Gaussian approximation potential (GAP), [ 85 ] [ 86 ] [ 87 ] which combines compact descriptors of local atomic environments [ 88 ] with Gaussian process regression [ 89 ] to machine learn the potential energy surface of a given system. To date, the GAP framework has been used to successfully develop a number of MLIPs for various systems, including elemental systems such as carbon, [ 90 ] [ 91 ] silicon, [ 92 ] and tungsten, [ 93 ] as well as multicomponent systems such as Ge 2 Sb 2 Te 5 [ 94 ] and austenitic stainless steel , Fe 7 Cr 2 Ni. [ 95 ] Classical interatomic potentials often exceed the accuracy of simplified quantum mechanical methods such as density functional theory at a million times lower computational cost. [ 51 ] The use of interatomic potentials is recommended for the simulation of nanomaterials, biomacromolecules, and electrolytes from atoms up to millions of atoms at the 100 nm scale and beyond. As a limitation, electron densities and quantum processes at the local scale of hundreds of atoms are not included. When of interest, higher-level quantum chemistry methods can be used locally. [ 96 ] The robustness of a model at conditions other than those used in the fitting process is often measured in terms of the transferability of the potential.
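As a toy illustration of the non-parametric idea described above (a descriptor of interatomic distances mapped to an energy), the sketch below fits a machine-learned pair potential by linear regression on Gaussian basis functions of distance. Everything here, including the training data, is invented for illustration; real MLIPs use far richer descriptors and regression schemes than this.

```python
import numpy as np

def basis(r, centers, width=0.3):
    """Gaussian basis functions of a pair distance: a minimal 'descriptor'."""
    return np.exp(-((r[:, None] - centers[None, :]) / width) ** 2)

# Invented training data: pair distances and reference pair energies
# (in practice these would come from quantum-level calculations).
r_train = np.linspace(2.0, 5.0, 30)
e_train = 4 * 0.01 * ((3.4 / r_train) ** 12 - (3.4 / r_train) ** 6)  # toy reference

centers = np.linspace(2.0, 5.0, 12)
X = basis(r_train, centers)
weights, *_ = np.linalg.lstsq(X, e_train, rcond=None)  # linear regression fit

def ml_pair_energy(r):
    """Predicted pair energy at distance r (same toy units as the training data)."""
    return float((basis(np.array([r]), centers) @ weights)[0])

print(ml_pair_energy(3.8))
```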
https://en.wikipedia.org/wiki/Interatomic_potential
The interaural time difference (or ITD ), when concerning humans or animals, is the difference in arrival time of a sound between the two ears. It is important in the localization of sounds , as it provides a cue to the direction or angle of the sound source from the head. If a signal arrives at the head from one side, the signal has further to travel to reach the far ear than the near ear. This pathlength difference results in a time difference between the sound's arrivals at the ears, which is detected and aids the process of identifying the direction of the sound source. When a signal is produced in the horizontal plane, its angle in relation to the head is referred to as its azimuth , with 0 degrees (0°) azimuth being directly in front of the listener, 90° to the right, and 180° being directly behind. The duplex theory proposed by Lord Rayleigh (1907) provides an explanation for the ability of humans to localise sounds by time differences between the sounds reaching each ear (ITDs) and differences in sound level entering the ears (interaural level differences, ILDs). A remaining question is which of the two cues, ITD or ILD, is dominant in a given situation. The duplex theory states that ITDs are used to localise low-frequency sounds in particular, while ILDs are used in the localisation of high-frequency sound inputs. However, the frequency ranges for which the auditory system can use ITDs and ILDs significantly overlap, and most natural sounds will have both high- and low-frequency components, so that the auditory system will in most cases have to combine information from both ITDs and ILDs to judge the location of a sound source. [ 1 ] A consequence of this duplex system is that it is also possible to generate so-called "cue trading" or "time–intensity trading" stimuli on headphones, where ITDs pointing to the left are offset by ILDs pointing to the right, so the sound is perceived as coming from the midline. A limitation of the duplex theory is that it does not completely explain directional hearing, as no explanation is given for the ability to distinguish between a sound source directly in front and one directly behind. The theory also only relates to localising sounds in the horizontal plane around the head, and it does not take into account the use of the pinna in localisation (Gelfand, 2004). Experiments conducted by Woodworth (1938) tested the duplex theory by using a solid sphere to model the shape of the head and measuring the ITDs as a function of azimuth for different frequencies. The model used had a distance between the two ears of approximately 22–23 cm. Initial measurements found that there was a maximum time delay of approximately 660 μs when the sound source was placed directly at 90° azimuth to one ear. This time delay corresponds to the period of a sound input with a frequency of about 1500 Hz. The results concluded that when a sound has a frequency of less than 1500 Hz, its period is greater than this maximum time delay between the ears. Therefore, there is an unambiguous phase difference between the sound waves entering the ears, providing acoustic localisation cues. For a sound input with a frequency closer to 1500 Hz, the period of the sound wave is similar to the natural maximum time delay. Therefore, due to the size of the head and the distance between the ears, the phase difference becomes ambiguous, and localisation errors start to be made.
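The numbers quoted above (an ear separation of roughly 22–23 cm and a maximum delay of about 660 μs at 90° azimuth) are consistent with the simplest path-difference approximation, ITD ≈ (d/c)·sin θ, where d is the ear separation, c the speed of sound and θ the azimuth. The Python sketch below is only an illustrative check of that arithmetic, not a reproduction of Woodworth's spherical-head calculation, which accounts for diffraction around the head.

```python
import math

def itd_path_difference(azimuth_deg, ear_separation=0.225, speed_of_sound=343.0):
    """Crude ITD estimate (seconds) from the extra straight-line path d*sin(theta).

    ear_separation = 0.225 m matches the ~22-23 cm quoted above; a spherical-head
    model that includes diffraction around the head gives somewhat larger values.
    """
    return ear_separation * math.sin(math.radians(azimuth_deg)) / speed_of_sound

print(f"ITD at 90 deg: {itd_path_difference(90) * 1e6:.0f} microseconds")   # ~660
print(f"Period of a 1500 Hz tone: {1e6 / 1500:.0f} microseconds")           # ~667
```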
When a high-frequency sound input is used with a frequency greater than 1500 Hz, the wavelength is shorter than the distance between the ears, a head shadow is produced, and ILDs provide cues for the localisation of this sound. Feddersen et al. (1957) also conducted experiments measuring how ITDs change as the azimuth of the loudspeaker around the head is varied, at different frequencies. But unlike the Woodworth experiments, human subjects were used rather than a model of the head. The experimental results agreed with the conclusion made by Woodworth about ITDs. The experiments also concluded that there is no difference in ITDs when sounds are provided from directly in front or behind, at 0° and 180° azimuth. The explanation for this is that the sound is equidistant from both ears. Interaural time differences alter as the loudspeaker is moved around the head. The maximum ITD of 660 μs occurs when a sound source is positioned at 90° azimuth to one ear. Starting in 1948, the prevailing theory on interaural time differences centered on the idea that the medial superior olive differentially processes inputs from the ipsilateral and contralateral sides relative to the sound. This is accomplished through a discrepancy in the arrival time of excitatory inputs into the medial superior olive, based on differential conductance of the axons, which allows both inputs to ultimately converge at the same time onto neurons with complementary intrinsic properties. Franken et al. attempted to further elucidate the mechanisms underlying ITD processing in mammalian brains. [ 2 ] One experiment they performed was to isolate discrete inhibitory post-synaptic potentials and try to determine whether inhibitory inputs to the superior olive were allowing the faster excitatory input to delay firing until the two signals were synced. However, after blocking EPSPs with a glutamate receptor blocker, they determined that the size of the inhibitory inputs was too marginal to play a significant role in phase locking. This was verified when the experimenters blocked inhibitory input and still saw clear phase locking of the excitatory inputs in their absence. This led them to the theory that in-phase excitatory inputs are summated such that the brain can process sound localization by counting the number of action potentials that arise from various magnitudes of summated depolarization. Franken et al. also examined anatomical and functional patterns within the superior olive to clarify previous theories about the rostrocaudal axis serving as a source of tonotopy. Their results showed a significant correlation between tuning frequency and relative position along the dorsoventral axis, while they saw no distinguishable pattern of tuning frequency along the rostrocaudal axis. Lastly, they went on to further explore the driving forces behind the interaural time difference, specifically whether the process is simply the alignment of inputs that is processed by a coincidence detector, or whether the process is more complicated. Evidence from Franken et al. shows that the processing is affected by inputs that precede the binaural signal, which alter the functioning of voltage-gated sodium and potassium channels to shift the membrane potential of the neuron. Furthermore, the shift is dependent on the frequency tuning of each neuron, ultimately creating a more complex confluence and analysis of sound. These findings provide several pieces of evidence that contradict existing theories about binaural audition.
The auditory nerve fibres, known as the afferent nerve fibres, carry information from the organ of Corti to the brainstem and brain . Auditory afferent fibres consist of two types of fibres, called type I and type II fibres. Type I fibres innervate the base of one or two inner hair cells and type II fibres innervate the outer hair cells. Both leave the organ of Corti through an opening called the habenula perforata. The type I fibres are thicker than the type II fibres and may also differ in how they innervate the inner hair cells . Neurons with large calyceal endings ensure preservation of timing information throughout the ITD pathway. Next in the pathway is the cochlear nucleus , which receives mainly ipsilateral (that is, from the same side) afferent input. The cochlear nucleus has three distinct anatomical divisions, known as the antero-ventral cochlear nucleus (AVCN), postero-ventral cochlear nucleus (PVCN) and dorsal cochlear nucleus (DCN), and each has different neural innervations. The AVCN contains predominantly bushy cells , with one or two profusely branching dendrites ; it is thought that bushy cells may process the change in the spectral profile of complex stimuli. The AVCN also contains cells with more complex firing patterns than bushy cells, called multipolar cells ; these cells have several profusely branching dendrites and irregularly shaped cell bodies. Multipolar cells are sensitive to changes in acoustic stimuli, in particular the onset and offset of sounds, as well as changes in intensity and frequency. The axons of both cell types leave the AVCN as a large tract called the ventral acoustic stria , which forms part of the trapezoid body and travels to the superior olivary complex . A group of nuclei in the pons make up the superior olivary complex (SOC). This is the first stage in the auditory pathway to receive input from both cochleas, which is crucial for our ability to localise the sound source in the horizontal plane. The SOC receives input from the cochlear nuclei, primarily the ipsilateral and contralateral AVCN. Four nuclei make up the SOC, but only the medial superior olive (MSO) and the lateral superior olive (LSO) receive input from both cochlear nuclei. The MSO is made up of neurons which receive input from the low-frequency fibers of the left and right AVCN. The result of having input from both cochleas is an increase in the firing rate of the MSO units. The neurons in the MSO are sensitive to the difference in the arrival time of sound at each ear, also known as the interaural time difference (ITD). Research shows that if stimulation arrives at one ear before the other, many of the MSO units will have increased discharge rates. The axons from the MSO continue to higher parts of the pathway via the ipsilateral lateral lemniscus tract (Yost, 2000). The lateral lemniscus (LL) is the main auditory tract in the brainstem connecting the SOC to the inferior colliculus . The dorsal nucleus of the lateral lemniscus (DNLL) is a group of neurons separated by lemniscus fibres; these fibres are predominantly destined for the inferior colliculus (IC). In studies using an unanesthetized rabbit, the DNLL was shown to alter the sensitivity of the IC neurons and may alter the coding of interaural timing differences (ITDs) in the IC (Kuwada et al., 2005). The ventral nucleus of the lateral lemniscus (VNLL) is a chief source of input to the inferior colliculus.
Research using rabbits shows that the discharge patterns, frequency tuning and dynamic ranges of VNLL neurons supply the inferior colliculus with a variety of inputs, each enabling a different function in the analysis of sound (Batra & Fitzpatrick, 2001). In the inferior colliculus (IC) all the major ascending pathways from the olivary complex and the central nucleus converge. The IC is situated in the midbrain and consists of a group of nuclei, the largest of which is the central nucleus of the inferior colliculus (CNIC). The greater part of the ascending axons forming the lateral lemniscus terminate in the ipsilateral CNIC; however, a few follow the commissure of Probst and terminate on the contralateral CNIC. The axons of most of the CNIC cells form the brachium of the IC and leave the brainstem to travel to the ipsilateral thalamus . Cells in different parts of the IC tend to be either monaural, responding to input from one ear, or binaural, responding to bilateral stimulation. The spectral processing that occurs in the AVCN and the ability to process binaural stimuli, as seen in the SOC, are replicated in the IC. Lower centres of the IC extract different features of the acoustic signal such as frequencies, frequency bands, onsets, offsets, changes in intensity and localisation. The integration or synthesis of acoustic information is thought to start in the CNIC (Yost, 2000). A number of studies have looked into the effect of hearing loss on interaural time differences. In their review of localisation and lateralisation studies, Durlach, Thompson, and Colburn (1981), cited in Moore (1996), found a "clear trend for poor localization and lateralization in people with unilateral or asymmetrical cochlear damage". This is due to the difference in performance between the two ears. In support of this, they did not find significant localisation problems in individuals with symmetrical cochlear losses. In addition, studies have been conducted into the effect of hearing loss on the threshold for interaural time differences. The normal human threshold for detection of an ITD is a time difference of about 10 μs. Studies by Gabriel, Koehnke, & Colburn (1992), Häusler, Colburn, & Marr (1983) and Kinkel, Kollmeier, & Holube (1991) (cited by Moore, 1996) have shown that there can be great differences between individuals regarding binaural performance. It was found that unilateral or asymmetric hearing losses can increase the threshold of ITD detection in patients. This was also found to apply to individuals with symmetrical hearing losses when detecting ITDs in narrowband signals. However, ITD thresholds seem to be normal for those with symmetrical losses when listening to broadband sounds.
https://en.wikipedia.org/wiki/Interaural_time_difference
Interbasin transfer or transbasin diversion are (often hyphenated) terms used to describe man-made conveyance schemes which move water from one river basin where it is available, to another basin where water is less available or could be utilized better for human development. The purpose of such water resource engineering schemes can be to alleviate water shortages in the receiving basin, to generate electricity, or both. Rarely, as in the case of the Glory River which diverted water from the Tigris to the Euphrates River in modern Iraq , interbasin transfers have been undertaken for political purposes. While ancient water supply examples exist, the first modern developments were undertaken in the 19th century in Australia, India and the United States, feeding large cities such as Denver and Los Angeles. Since the 20th century many more similar projects have followed in other countries, including Israel and China, with contributions to the Green Revolution in India and hydropower development in Canada. Since conveyance of water between natural basins is described as both a subtraction at the source and as an addition at the destination, such projects may be controversial in some places and over time; they may also be seen as controversial due to their scale, costs and environmental or developmental impacts. In Texas , for example, a 2007 Texas Water Development Board report analyzed the costs and benefits of IBTs in Texas, concluding that while some are essential, barriers to IBT development include cost, resistance to new reservoir construction and environmental impacts. [ 1 ] Despite the costs and other concerns involved, IBTs play an essential role in the state's 50-year water planning horizon. Of 44 recommended ground and surface water conveyance and transfer projects included in the 2012 Texas State Water Plan, 15 would rely on IBTs. [ 1 ] While developed countries have often already exploited the most economical sites, with large benefits, many large-scale diversion/transfer schemes have been proposed in developing countries such as Brazil, African countries, India and China. These more modern transfers have been justified because of their potential economic and social benefits in more heavily populated areas, stemming from increased water demand for irrigation , industrial and municipal water supply , and renewable energy needs. These projects are also justified because of possible climate change and a concern over decreased water availability in the future; in that light, these projects thus tend to hedge against ensuing droughts and increasing demand. Projects conveying water between basins economically are often large and expensive, and involve major public and/or private infrastructure planning and coordination. In some cases where the desired flow is not provided by gravity alone, additional use of energy is required for pumping water to the destination. Projects of this type can also be complicated in legal terms, since water and riparian rights are affected; this is especially true if the basin of origin is a transnational river. Furthermore, these transfers can have significant environmental impacts on aquatic ecosystems at the source. In some cases water conservation measures at the destination can make such water transfers less immediately necessary to alleviate water scarcity , delay their need to be built, or reduce their initial size and cost.
There are dozens of large inter-basin transfers around the world, most of them concentrated in Australia, Canada, China, India and the United States. The oldest modern interbasin transfers date back to the late 19th century, although a far older example is the Roman gold mine at Las Médulas in Spain. Their primary purpose usually is either to alleviate water scarcity or to generate hydropower. The Central Arizona Project (CAP) in the USA is not an interbasin transfer per se , although it shares many characteristics with interbasin transfers as it transports large amounts of water over a long distance and a large difference in altitude. The CAP transfers water from the Colorado River to Central Arizona for both agriculture and municipal water supply to substitute for depleted groundwater . However, the water remains within the watershed of the Colorado River, though transferred into the Gila sub-basin . In Canada, sixteen interbasin transfers have been implemented for hydropower development. The most important is the James Bay Project from the Caniapiscau River and the Eastmain River into the La Grande River , built in the 1970s. The water flow was reduced by 90% at the mouth of the Eastmain River, by 45% where the Caniapiscau River flows into the Koksoak River , and by 35% at the mouth of the Koksoak River. The water flow of the La Grande River, on the other hand, was doubled, increasing from 1,700 m³/s to 3,400 m³/s (and from 500 m³/s to 5,000 m³/s in winter) at the mouth of the La Grande River. Other interbasin transfers include the Chicago Sanitary and Ship Canal in the US, which serves to divert polluted water from Lake Michigan , and the Eastern and Central Routes of the South–North Water Transfer Project in China, from the Yangtse River to the Yellow River and Beijing. Nearly all proposed interbasin transfers are in developing countries. The objective of most transfers is the alleviation of water scarcity in the receiving basin(s). Unlike in the case of existing transfers, there are very few proposed transfers whose objective is the generation of hydropower. One proposal would transfer water from the Ubangi River in Congo to the Chari River , which empties into Lake Chad . The plan was first proposed in the 1960s and again in the 1980s and 1990s by Nigerian engineer J. Umolu (ZCN Scheme) and Italian firm Bonifica (Transaqua Scheme). [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] In 1994, the Lake Chad Basin Commission (LCBC) proposed a similar project, and at a March 2008 summit, the Heads of State of the LCBC member countries committed to the diversion project. [ 15 ] In April 2008, the LCBC advertised a request for proposals for a World Bank-funded feasibility study. Another proposal would transfer water from the Ebro River in Spain to Barcelona in the northeast and to various cities on the Mediterranean coast to the southwest. Since rivers are home to a complex web of species and their interactions, the transfer of water from one basin to another can have a serious impact on the species living therein. [ 22 ]
https://en.wikipedia.org/wiki/Interbasin_transfer
Membrane fusion is a key biophysical process that is essential for the functioning of life itself. It is defined as the event where two lipid bilayers approach each other and then merge to form a single continuous structure. [ 1 ] In living beings, cells are bounded by an outer coat made of lipid bilayers, and fusion of these membranes takes place in events such as fertilization , embryogenesis and even infections by various types of bacteria and viruses . [ 2 ] It is therefore an extremely important event to study. From an evolutionary angle, fusion is an extremely controlled phenomenon. Random fusion can result in severe problems to the normal functioning of the human body. Fusion of biological membranes is mediated by proteins . Regardless of the complexity of the system, fusion essentially occurs due to the interplay of various interfacial forces, namely hydration repulsion, hydrophobic attraction and van der Waals forces . [ 3 ] Lipid bilayers are structures of lipid molecules consisting of a hydrophobic tail and a hydrophilic head group. Therefore, these structures experience all the characteristic interbilayer forces involved in that regime. Two hydrated bilayers experience strong repulsion as they approach each other. These forces have been measured using the surface forces apparatus (SFA), an instrument used for measuring forces between surfaces. This repulsion was first proposed by Langmuir and was thought to arise due to water molecules that hydrate the bilayers. Hydration repulsion can thus be defined as the work required in removing the water molecules around hydrophilic molecules (like lipid head groups) in the bilayer system. [ 4 ] As water molecules have an affinity towards hydrophilic head groups, they try to arrange themselves around the head groups of the lipid molecules, and it becomes very hard to separate this favorable combination. Experiments performed with the SFA have confirmed that this force decays exponentially with distance. [ 5 ] The potential V_R is given by [ 6 ] an exponentially decaying function of separation, where C_R (>0) is a measure of the hydration interaction energy for hydrophilic molecules of the given system, λ_R is a characteristic length scale of hydration repulsion and z is the distance of separation. In other words, it is over distances up to this length that molecules/surfaces fully experience this repulsion. Hydrophobic forces are the attractive entropic forces between any two hydrophobic groups in aqueous media, e.g. the forces between two long hydrocarbon chains in aqueous solutions. The magnitude of these forces depends on the hydrophobicity of the interacting groups as well as the distance separating them (they are found to decrease roughly exponentially with the distance). The physical origin of these forces is a debated issue, but they have been found to be long-ranged and are the strongest among all the physical interaction forces operating between biological surfaces and molecules. [ 7 ] Due to their long-range nature, they are responsible for rapid coagulation of hydrophobic particles in water and play important roles in various biological phenomena including folding and stabilization of macromolecules such as proteins and fusion of cell membranes. The potential V_A is given by [ 7 ] a similar exponential form, where C_A (<0) is a measure of the hydrophobic interaction energy for the given system, λ_A is a characteristic length scale of hydrophobic attraction and z is the distance of separation. The van der Waals forces arise due to dipole–dipole interactions (induced/permanent) between the molecules of the bilayers.
As molecules come closer, this attractive force arises due to the ordering of these dipoles, much as magnets align and attract each other as they approach. [ 7 ] This also implies that any surface would experience a van der Waals attraction. In bilayers, the form taken by the van der Waals interaction potential V_VDW [ 8 ] involves the Hamaker constant H, with D and z the bilayer thickness and the distance of separation respectively. For fusion to take place, the huge repulsive forces due to the strong hydration repulsion between hydrophilic lipid head groups have to be overcome. [ 7 ] However, it has been hard to exactly determine the connection between adhesion , fusion and interbilayer forces. The forces that promote cell adhesion are not the same as the ones that promote membrane fusion. Studies show that by creating a stress on the interacting bilayers, fusion can be achieved without disrupting the interbilayer interactions. It has also been suggested that membrane fusion takes place through a sequence of structural rearrangements that help to overcome the barrier that prevents fusion. [ 7 ] Thus, interbilayer fusion takes place through such a sequence of rearrangements. When two lipid bilayers approach each other, they experience weak van der Waals attractive forces and much stronger repulsive forces due to hydration repulsion. [ 9 ] These forces are normally dominant over the hydrophobic attractive forces between the membranes. Studies done on membrane bilayers using the surface forces apparatus (SFA) indicate that membrane fusion can occur instantaneously when two bilayers are still at a finite distance from each other, without them having to overcome the short-range repulsive force barrier. [ 7 ] This is attributed to molecular rearrangements that allow the membranes to bypass these forces. During fusion, the hydrophobic tails of a small patch of lipids on the cell membrane are exposed to the aqueous phase surrounding them. This results in very strong hydrophobic attractions (which dominate the repulsive force) between the exposed groups, leading to membrane fusion. [ 10 ] The attractive van der Waals forces play a negligible role in membrane fusion. Thus, fusion is a result of the hydrophobic attractions between internal hydrocarbon chain groups that are exposed to the normally inaccessible aqueous environment. Fusion is observed to start at points on the membranes where the membrane stresses are either the weakest or the strongest. [ 7 ] Interbilayer forces play a key role in mediating membrane fusion, which has extremely important biomedical applications. [ 11 ]
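The displayed formulas for V_R, V_A and V_VDW do not survive in the text above. For illustration only, the Python sketch below sums the three contributions using standard colloid-science forms consistent with the descriptions given (two exponentials plus a Hamaker-type term for two slabs of thickness D). The functional forms and every parameter value here are assumptions chosen as placeholders, not data from this article.

```python
import math

def interbilayer_potential(z, D=4e-9, C_R=1e-3, lam_R=0.3e-9,
                           C_A=-5e-4, lam_A=1.0e-9, H=7e-21):
    """Illustrative total interaction energy per unit area (J/m^2) at separation z (m).

    Assumed standard forms: exponential hydration repulsion C_R*exp(-z/lam_R),
    exponential hydrophobic attraction C_A*exp(-z/lam_A) with C_A < 0, and a
    van der Waals term for two slabs of thickness D with Hamaker constant H.
    All numbers are placeholders.
    """
    hydration = C_R * math.exp(-z / lam_R)
    hydrophobic = C_A * math.exp(-z / lam_A)
    vdw = -(H / (12 * math.pi)) * (1 / z**2 - 2 / (z + D)**2 + 1 / (z + 2 * D)**2)
    return hydration + hydrophobic + vdw

for z_nm in (1, 2, 3, 5):
    print(z_nm, "nm:", interbilayer_potential(z_nm * 1e-9))
```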
https://en.wikipedia.org/wiki/Interbilayer_forces_in_membrane_fusion
In biochemistry , intercalation is the insertion of molecules between the planar bases of deoxyribonucleic acid (DNA). This process is used as a method for analyzing DNA and it is also the basis of certain kinds of poisoning. There are several ways molecules (in this case, also known as ligands ) can interact with DNA. Ligands may interact with DNA by covalently binding , electrostatically binding, or intercalating. [ 1 ] Intercalation occurs when ligands of an appropriate size and chemical nature fit themselves in between base pairs of DNA. These ligands are mostly polycyclic, aromatic , and planar, and therefore often make good nucleic acid stains . Intensively studied DNA intercalators include berberine , ethidium bromide , proflavine , daunomycin , doxorubicin , and thalidomide . DNA intercalators are used in chemotherapeutic treatment to inhibit DNA replication in rapidly growing cancer cells. Examples include doxorubicin (adriamycin) and daunorubicin (both of which are used in treatment of Hodgkin's lymphoma ), and dactinomycin (used in Wilm's tumour , Ewing's Sarcoma , rhabdomyosarcoma ). Metallointercalators are complexes of a metal cation with polycyclic aromatic ligands. The most commonly used metal ion is ruthenium (II), because its complexes are very slow to decompose in the biological environment. Other metallic cations that have been used include rhodium (III) and iridium (III). Typical ligands attached to the metal ion are dipyridine and terpyridine whose planar structure is ideal for intercalation. [ 2 ] In order for an intercalator to fit between base pairs, the DNA must dynamically open a space between its base pairs by unwinding. The degree of unwinding varies depending on the intercalator; for example, ethidium cation (the ionic form of ethidium bromide found in aqueous solution) unwinds DNA by about 26°, whereas proflavine unwinds it by about 17°. This unwinding causes the base pairs to separate, or "rise", creating an opening of about 0.34 nm (3.4 Å). Similarly, in the case of the intercalation of Thiazole Orange derivatives, the distance between the base pairs increased significantly, from ca. 4.7 Å to ca, 6.9. [ 3 ] This unwinding induces local structural changes to the DNA strand, such as lengthening of the DNA strand or twisting of the base pairs. These structural modifications can lead to functional changes, often to the inhibition of transcription and replication and DNA repair processes, which makes intercalators potent mutagens . For this reason, DNA intercalators are often carcinogenic , such as the exo (but not the endo) 8,9 epoxide of aflatoxin B 1 and acridines such as proflavine or quinacrine . Intercalation as a mechanism of interaction between cationic, planar, polycyclic aromatic systems of the correct size (on the order of a base pair) was first proposed by Leonard Lerman in 1961. [ 4 ] [ 5 ] [ 6 ] One proposed mechanism of intercalation is as follows: In aqueous isotonic solution, the cationic intercalator is attracted electrostatically to the surface of the polyanionic DNA. The ligand displaces a sodium and/or magnesium cation present in the "condensation cloud" of such cations that surrounds DNA (to partially balance the sum of the negative charges carried by each phosphate oxygen), thus forming a weak electrostatic association with the outer surface of DNA. 
From this position, the ligand diffuses along the surface of the DNA and may slide into the hydrophobic environment found between two base pairs that may transiently "open" to form an intercalation site, allowing the ethidium to move away from the hydrophilic (aqueous) environment surrounding the DNA and into the intercalation site. The base pairs transiently form such openings due to energy absorbed during collisions with solvent molecules.
https://en.wikipedia.org/wiki/Intercalation_(biochemistry)
Intercalation is the reversible inclusion or insertion of a molecule (or ion) into materials with layered structures. Examples are found in graphite and transition metal dichalcogenides . [ 1 ] [ 2 ] One famous intercalation host is graphite , which intercalates potassium as a guest. [ 3 ] Intercalation expands the van der Waals gap between sheets, which requires energy . Usually this energy is supplied by charge transfer between the guest and the host solid, i.e., redox . Two potassium graphite compounds are KC 8 and KC 24 . Carbon fluorides (e.g., (CF) x and (C 4 F)) are prepared by reaction of fluorine with graphitic carbon. Their color is greyish, white, or yellow. The bond between the carbon and fluorine atoms is covalent, thus fluorine is not intercalated. Such materials have been considered as a cathode in various lithium batteries . Treating graphite with strong acids in the presence of oxidizing agents causes the graphite to oxidise. Graphite bisulfate, [C 24 ] + [HSO 4 ] − , is prepared by this approach using sulfuric acid and a little nitric acid or chromic acid . The analogous graphite perchlorate can be made similarly by reaction with perchloric acid . One of the largest and most diverse uses of the intercalation process by the early 2020s is in lithium-ion electrochemical energy storage , in the batteries used in many handheld electronic devices, mobility devices , electric vehicles , and utility-scale battery electric storage stations . By 2023, all commercial Li-ion cells use intercalation compounds as active materials, and most use them in both the cathode and anode within the battery physical structure. [ 4 ] In 2012, three researchers, Goodenough , Yazami and Yoshino , received the IEEE Medal for Environmental and Safety Technologies for developing the intercalated lithium-ion battery, and subsequently Goodenough, Whittingham , and Yoshino were awarded the 2019 Nobel Prize in Chemistry "for the development of lithium-ion batteries". [ 5 ] An extreme case of intercalation is the complete separation of the layers of the material. This process is called exfoliation. Typically aggressive conditions are required, involving highly polar solvents and aggressive reagents. [ 6 ] In biochemistry , intercalation is the insertion of molecules between the bases of DNA. This process is used as a method for analyzing DNA and it is also the basis of certain kinds of poisoning. Clathrates are chemical substances consisting of a lattice that traps or contains molecules. Usually, clathrate compounds are polymeric and completely envelop the guest molecule. Inclusion compounds are often molecules, whereas clathrates are typically polymeric. Intercalation compounds are not 3-dimensional, unlike clathrate compounds. [ 7 ] According to IUPAC , clathrates are "Inclusion compounds in which the guest molecule is in a cage formed by the host molecule or by a lattice of host molecules." [ 8 ]
https://en.wikipedia.org/wiki/Intercalation_(chemistry)
Intercast was a short-lived technology developed in 1996 by Intel for broadcasting information such as web pages and computer software along with a television channel. It required a compatible TV tuner card installed in a personal computer and a decoding program called Intel Intercast Viewer. The data for Intercast was embedded in the Vertical Blanking Interval (VBI) of the video signal carrying the Intercast-enabled program, at a maximum of 10.5 kilobytes per second in 10 of the 45 lines of the VBI. [ 1 ] [ 2 ] With Intercast, a computer user could watch the TV broadcast in one window of the Intercast Viewer while viewing HTML web pages in another window. Users could also download software transmitted via Intercast. Most often the web pages received were relevant to the television program being broadcast, such as extra information relating to the program, or extra news headlines and weather forecasts during a newscast. Intercast can be seen as a more modern version of teletext . The Intercast Viewer software was bundled with several TV tuner cards at the time, such as the Hauppauge Win-TV card. [ 3 ] Also at the time of Intercast's introduction, Compaq offered some models of computers with built-in TV tuners installed with the Intercast Viewer software. Upon its debut, Intercast was used by several TV networks, such as NBC , CNN , The Weather Channel , and MTV Networks . [ 4 ] On June 25, 1996, Intel and NBC announced an arrangement which enabled users to watch coverage of the 1996 Summer Olympics and other programming from NBC News . [ 5 ] Intel discontinued support for Intercast a couple of years later. NBC 's series Homicide: Life on the Street was one show that was Intercast-enabled.
https://en.wikipedia.org/wiki/Intercast
In molecular biology , intercellular adhesion molecules ( ICAMs ) and vascular cell adhesion molecule-1 (VCAM-1) are part of the immunoglobulin superfamily . They are important in inflammation , immune responses and in intracellular signalling events. [ 1 ] The ICAM family consists of five members, designated ICAM-1 to ICAM-5. They are known to bind to leucocyte integrins CD11 / CD18 such as LFA-1 and Macrophage-1 antigen , during inflammation and in immune responses. In addition, ICAMs may exist in soluble forms in human plasma , due to activation and proteolysis mechanisms at cell surfaces. Mammalian intercellular adhesion molecules include ICAM-1, ICAM-2, ICAM-3, ICAM-4 and ICAM-5.
https://en.wikipedia.org/wiki/Intercellular_adhesion_molecule
An intercellular cleft is a channel between two cells through which molecules may travel and in which gap junctions and tight junctions may be present. Most notably, intercellular clefts are often found between epithelial cells and the endothelium of blood vessels and lymphatic vessels , also helping to form the blood-nerve barrier surrounding nerves. Intercellular clefts are important for allowing the transportation of fluids and small solute matter through the endothelium. The dimensions of intercellular clefts vary throughout the body; however, cleft lengths have been determined for a series of capillaries. The average cleft length for capillaries is about 20 m per cm² of capillary wall. The depths of the intercellular clefts, measured from the luminal to the abluminal openings, vary among different types of capillaries, but the average is about 0.7 μm. The width of the intercellular clefts is about 20 nm outside the junctional region (i.e. in the larger part of the clefts). In intercellular clefts of capillaries, it has been calculated that the fractional area of the capillary wall occupied by the intercellular cleft is 20 m/cm² × 20 nm (length × width) = 0.004 (0.4%). This is the fractional area of the capillary wall exposed for free diffusion of small hydrophilic solutes and fluids 5 . The intercellular cleft is imperative for cell-cell communication. The cleft contains gap junctions , tight junctions , desmosomes , and adherens proteins, all of which help to propagate and/or regulate cell communication through signal transduction, surface receptors, or a chemogradient. In order for a molecule to be taken into the cell either by endocytosis , phagocytosis , or receptor-mediated endocytosis , often that molecule must first enter through the cleft. The intercellular cleft itself is a channel, but what flows through the channel (ions, fluid, and small molecules) and what proteins or junctions give order to the channel are critical for the life of the cells that border the intercellular cleft. Research at the cell level can deliver proteins, ions, or specific small molecules into the intercellular cleft as a means of injecting a cell. This method is especially useful in studying cell-to-cell propagation of infectious cytosolic protein aggregates. In one study, protein aggregates from yeast prions were released into a mammalian intercellular cleft and were taken up by the adjacent cell, as opposed to direct cell transfer. This process would be similar to the secretion and transmission of infectious particles through the synaptic cleft between cells of the immune system, as seen in retroviruses . Understanding the routes of intercellular protein aggregate transfer, particularly routes involving clefts, is imperative in understanding the progressive spreading of this infection 8 . Endothelial tight junctions are most commonly found in the intercellular cleft and provide for regulation of diffusion through the membranes. These links are most commonly found in the most apical aspect of the intercellular cleft. They prevent macromolecules from navigating the intercellular cleft and limit the lateral diffusion of intrinsic membrane proteins and lipids between the apical and basolateral cell surface domains. In the intercellular clefts of capillaries , tight junctions are the first structural barriers a neutrophil encounters as it penetrates the interendothelial cleft, or the gap linking the blood vessel lumen with the subendothelial space 2 .
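The fractional-area figure quoted above follows from simple unit conversion: 20 m of cleft length per cm² of wall, multiplied by a 20 nm cleft width, gives a dimensionless fraction. A short Python check of that arithmetic, using only the numbers already stated in the text:

```python
cleft_length_per_area = 20 / 1e-4  # 20 m of cleft per cm^2 of wall, expressed in m per m^2
cleft_width = 20e-9                # 20 nm cleft width, in m

fraction = cleft_length_per_area * cleft_width  # dimensionless area fraction
print(fraction)                    # ≈ 0.004, i.e. 0.4% of the capillary wall
```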
In capillary endothelium, plasma communicates with the interstitial fluid through the intercellular cleft. Blood plasma without the plasma proteins , red blood cells , and platelets passes through the intercellular cleft and into the capillary 7 . Most notably, intercellular clefts are described in capillary blood vessels. The three types of capillary blood vessels are continuous, fenestrated, and discontinuous, with continuous capillaries being the least porous of the three and discontinuous capillaries being extremely high in permeability. Continuous blood capillaries have the smallest intercellular clefts, while discontinuous blood capillaries have the largest intercellular clefts, commonly accompanied by gaps in the basement membrane 6 . Often, fluid is forced out of the capillaries through the intercellular clefts. Fluid is pushed out through the intercellular cleft at the arterial end of the capillary because that is where the pressure is highest. However, most of this fluid returns into the capillary at the venous end, creating capillary fluid dynamics. Two opposing forces achieve this balance, hydrostatic pressure and colloid osmotic pressure , with the intercellular clefts acting as fluid entrances and fluid exits 4 . In addition, the size of the intercellular clefts and pores in the capillary will influence this fluid exchange. The larger the intercellular cleft, the lesser the pressure and the more fluid will flow out through the cleft. This enlargement of the cleft is caused by contraction of capillary endothelial cells, often triggered by substances such as histamine and bradykinin . However, smaller intercellular clefts do not help this fluid exchange 3 . Along with fluid, electrolytes are also carried through this transport in the capillary blood vessels 4 . This mechanism of fluid, electrolyte, and also small solute exchange is especially important in renal glomerular capillaries 3 . Intercellular clefts also play a role in the formation of the blood-heart barrier (BHB). The intercellular cleft between endocardial endotheliocytes is 3 to 5 times deeper than the clefts between myocardial capillary endotheliocytes. Also, these clefts are often more tortuous and have one or two tight junctions and zonae adherens interacting with a circumferential actin filament band and several connecting proteins 7 . These tight junctions localize to the luminal side of the intercellular clefts, where the glycocalyx , which is important in cell–cell recognition and cell signaling , is more developed. The organization of the endocardial endothelium and the intercellular cleft helps to establish the blood-heart barrier by ensuring an active transendothelial physicochemical gradient of various ions 1 .
https://en.wikipedia.org/wiki/Intercellular_cleft
Intercellular communication (ICC) refers to the various ways and structures that biological cells use to communicate with each other directly or through their environment. Often the environment has been thought of as the extracellular spaces within an animal. More broadly, cells may also communicate with other animals, either of their own group or species, or other species in the wider ecosystem. Different types of cells use different proteins and mechanisms to communicate with one another using extracellular signalling molecules or electric fluctuations which could be likened to an intercellular ethernet. [ 2 ] Components of each type of intercellular communication may be involved in more than one type of communication, [ 2 ] making attempts at clearly separating the types of communication listed somewhat futile. Broadly speaking, intercellular communication may be categorized as being within a single animal or between an animal and other animals in the ecosystem in which it lives. In this article, intercellular communication has been further collated into various areas of research rather than by functional or structural characteristics. Single-celled organisms sense their environment to seek food and may send signals to other cells to behave symbiotically or reproduce. A classic example of this is the slime mold . The slime mold shows how intercellular communication with a small molecule (e.g., cyclic AMP ) allows a simple organism to form from an organized aggregation of single cells. [ 3 ] Research into cell signalling investigated a receptor specific to each signal or multiple receptors potentially being activated by a single signal. [ 4 ] It is not only the presence or absence of a signal that is important but also the strength. Using a chemical gradient to coordinate cell growth and differentiation continues to be important as multicellular animals and plants become more complex. This type of intercellular communication within an organism is commonly referred to as cell signalling . This type of intercellular communication is typified by a small signalling molecule diffusing through the spaces around cells, [ 5 ] often relying on a diffusion gradient forming part of the signalling response. Complex organisms may have molecules to hold the cells together which can also be involved in intercellular communication. Some binding molecules are termed the extracellular matrix and may involve longer molecules like cellulose for the cell wall in plants or collagen in animals. When the membranes of two animal cells are close, they may form special types of cell junctions, which come in three broad types: occluding junctions (such as tight junctions and septate junctions ), anchoring junctions (such as adherens junctions , desmosomes , focal adhesions , and hemidesmosomes ), and communicating junctions (such as gap junctions ). [ 6 ] The structures they form also form parts of complex protein signaling pathways. [ 7 ] In one respect, tight junctions play a generic role in cell signaling in that they may form a tight zip around cells, forming a barrier to stop even small, unwanted signalling molecules from getting between cells. [ 8 ] Without these junctions, signalling molecules may spread to another group of cells which are not requiring the signal or escape too quickly from where they are needed. Gap junctions allow neighboring cells to directly exchange small molecules. 
[ 9 ] Pannexins , connexins , and innexins are transmembrane proteins that are all named after the Latin term nexus , meaning to connect. They are grouped together because they all share a similar structure of four transmembrane domains crossing the cell membrane in a similar way, but they do not all share enough sequence homology to allow them to be considered directly related. [ 2 ] [ 10 ] Earlier investigations involving the connexins demonstrated cells forming a direct connection with each other using groups of connexins, but not connections with the cell exterior. As such, they were not considered at the time to participate in extracellular cell signalling. Later studies made it apparent that connexins could connect directly to the cell exterior, meaning they are a conduit for the release and uptake of signalling molecules from the environment external to the cell. [ 11 ] Furthermore, pannexins appear to do this to such an extent that they may rarely, if ever, participate in direct cell-to-cell coupling. [ 12 ] Many animals do not appear to have pannexins, innexins, or connexins, perhaps indicating that there may be other, similar proteins still to be discovered that serve to aid intercellular communication in these animals. [ 2 ] In fungi , pores crossing the cell walls that separate cellular compartments act as an ICC for the movement of molecules to neighboring compartments. [ 13 ] Most red algae have pores, called pit connections, in the cell septum that partitions a cell or filament. As a leftover of the mitotic division, the pore may be plugged up by the cell. There are also similar connections between neighboring cells or filaments that may allow the sharing of nutrients. [ 14 ] Cells of a different species may initiate and form a pit connection with the host algae. [ 15 ] Plant cells usually have thick cell walls which need to be crossed if neighboring cells are to communicate directly. Plasmodesmata form a pipe through the cell wall, creating an ICC. The pipe has another smaller membranous pipe concentric to it connecting the endoplasmic reticulum of the two cells via a tube called the desmotubule . The larger pipe also contains cytoskeletal and other elements. It is presumed that viruses use plasmodesmata as a route through the cell walls to spread through the plant. [ 16 ] Gap junctions can form intercellular links, effectively a tiny, direct, regulated "pipe" called a connexon pair between the cytoplasms of the two cells that form the junction. Six connexins make a connexon, and two connexons make a connexon pair, so twelve connexin proteins build each tiny ICC. This ICC allows two cells to communicate directly while being sealed from the outside world. [ 17 ] Cells may form one or thousands of these tiny ICCs between them and their other neighbors, potentially forming large networks of directly linked cells. The connexon pairs form ICCs that can transport water and many other molecules up to around 1000 atoms in size [ 18 ] and can be very rapidly signaled to turn on and off as required. These ICCs also communicate electrical signals that can be rapidly turned on and off. To add to their versatility, there is a range of these ICC types because there are over 20 different connexins with different properties that can combine with each other in a variety of ways. The variety of potential signaling combinations that results is enormous. A much-studied example of the electrical signalling abilities of gap junctions is the electrical synapse found between nerve cells.
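A minimal sketch of the channel arithmetic in the passage above: six connexins per connexon and two connexons per channel give twelve connexin proteins per ICC. The plaque size used in the example is an arbitrary illustration of the "thousands" of channels a cell pair may form.

# Composition of a gap-junction channel as described in the text.
CONNEXINS_PER_CONNEXON = 6     # six connexins form one connexon (hemichannel)
CONNEXONS_PER_CHANNEL = 2      # one connexon from each neighbouring cell

def connexins_per_channel():
    return CONNEXINS_PER_CONNEXON * CONNEXONS_PER_CHANNEL

def connexins_in_plaque(channel_count):
    """Total connexin proteins in a plaque of directly linked channels."""
    return channel_count * connexins_per_channel()

print(connexins_per_channel())       # 12 connexin proteins build each tiny ICC
print(connexins_in_plaque(1000))     # illustrative plaque of 1000 channels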
[ 19 ] [ 20 ] [ 21 ] In heart muscle, gap junctions function to coordinate the beating of the heart. Adding even further to their versatility, gap junctions can also form a direct connection to the exterior of a cell, paralleling the function of their protein cousins, the pannexins, which are explained elsewhere. Intercellular bridges are larger than gap junction ICCs, so they are able to allow the movement of not only small signaling molecules but also large DNA molecules or even whole cell organelles. They are maintained between two cells, allowing them to exchange cytoplasmic contents, and are frequently observed when cells need intimate communication, such as when they are reproducing. They are found in prokaryotes for exchanging DNA; in small organisms such as Pinnularia , Valonia ventricosa , Volvox and C. elegans ; [ 22 ] in mitosis generally ( cytokinesis ); [ 23 ] in Blepharisma for sexual reproduction; and during meiosis, including spermatocytogenesis and oogenesis in larger organisms, to synchronise the development of germ cells. Bridges have been shown to assist in cell migration. [ 24 ] Cytoplasmic bridges can also be used to attack another cell, as in the case of Vampirococcus . Cells that require a more permanent, extensive cytoplasmic linkage may fuse with each other to varying degrees, in many cases forming one large cell, or syncytium. This happens extensively during the development of skeletal muscle, forming large muscle fibers . Cell fusion was later confirmed in other tissues such as the eye lens. Though both tissues involve cell fibers, in the case of the eye lens the fusion is more limited in scope, resulting in a less extensively fused stratified syncytium. [ 25 ] Lipid membrane bound vesicles of a large range of sizes are found inside and outside of cells, containing a huge variety of things ranging from food to invading organisms, water to signaling molecules. The use of an electrical nerve impulse from a neuron at a neuromuscular junction to stimulate a muscle to contract is an example of very small [ 26 ] (about 0.05 μm) vesicles being directly involved in regulating intercellular communication. The neuron produces thousands of tiny vesicles, each containing thousands of signalling molecules. At rest, one vesicle is released close to the muscle every second or so. When activated by a nerve impulse, more than 100 vesicles are released at once, together delivering hundreds of thousands of signalling molecules and causing a significant contraction of the muscle fiber. All this happens in a small fraction of a second. Generally, small vesicles used to transport signalling molecules released from the cell are termed exosomes [ 27 ] [ 28 ] [ 29 ] or simply extracellular vesicles (EV), [ 30 ] and in addition to their importance to the organism they are also important for biosensors . [ 26 ] Extracellular vesicles can be released from malignant cancer cells. These extracellular vesicles have been shown to contain gap junction proteins over-expressed in the malignant cells; when the vesicles spread to non-cancerous cells they appear to enhance the spread of the malignancy. [ 31 ] Vesicles are also associated with the transport of materials outside of the cell to enable growth and repair of tissues in the extracellular matrix. [ 32 ] [ 33 ] In situations such as these they may be given special designations such as matrix vesicles (MV).
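The neuromuscular-junction figures above can be turned into a rough order-of-magnitude check. The count of molecules per vesicle below is an assumed illustrative value standing in for the text's "thousands"; the other figures follow the passage.

# Order-of-magnitude arithmetic for vesicle release at a neuromuscular junction.
molecules_per_vesicle = 5_000       # "thousands" per vesicle (illustrative value)
vesicles_at_rest_per_s = 1          # about one vesicle per second at rest
vesicles_per_impulse = 100          # more than 100 released when activated

print(vesicles_at_rest_per_s * molecules_per_vesicle)   # resting release per second
print(vesicles_per_impulse * molecules_per_vesicle)     # ~hundreds of thousands per impulse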
Examples of larger vesicles are in regulated secretory pathways in endocrine and exocrine tissues, [ 34 ] transcytosis [ 35 ] [ 36 ] and the vesiculo-vacuolar organelle (VVO) in endothelial and perhaps other cell types. [ 37 ] Another form of transfer of pieces of membrane around junctions is called trans-endocytosis. [ 38 ] Some large intercellular vesicles also appear to stay intact as they transport their contents from one part of a tissue to another and involve gap junction plaques. [ 39 ] When we think of intercellular communication, we often use our nervous system as a point of reference. Nerves, made up of many cells, are in vertebrates typically highly specialized in form and function, usually being most complex in the brain . They ensure rapid, precise, directional cell-to-cell communication over longer distances, for example from the brain to the hand. The nerve cells can be thought of as intermediaries, not so much communicating with each other as passing on messages from one neighboring cell to another. Being "accessory" cells that pass on the message, they require additional space and can consume a lot of energy within an organism. [ 40 ] Simpler organisms such as sponges and placozoans often have less food availability and so less energy to spare. Their nervous systems are less specialized, and the cells that are part of them are required to perform other functions as well. [ 41 ] When groups of nerve cells form, another type of intercellular communication, called ephaptic coupling, can arise. It was first quantified by Katz in 1940, [ 42 ] but it has been difficult to associate any one structure or "ephapse" with this form of communication. There are reductionist attempts to associate particular groups of nerve cells exhibiting ephaptic coupling with particular functions in the brain. [ 43 ] As yet there are no studies on the simplest neural systems, such as the polar bodies of ctenophores , to see if ephaptic coupling may explain some of their more complex behaviors. [ 41 ] The definition of biological communication is not simple. [ 44 ] In the field of cell biology, early research was at a cellular-to-organism level. How the individual cells in one organism could affect those in another was difficult to trace and not of primary concern. If intercellular communication includes one cell transmitting a signal to another to elicit a response, then intercellular communication is not restricted to the cells within a single organism. Over short distances, interkingdom communication in plants has been reported. [ 13 ] In-water reproduction often involves a vast synchronized release of gametes called spawning . [ 45 ] Over large distances, cells in one plant will communicate with cells in another plant of the same species, or of other species, by releasing signals into the air such as green leaf volatiles , which can, among other things, pre-warn neighbors of herbivores; in the case of ethylene gas, the signal triggers ripening in fruits. Intercellular signalling in plants can also happen below ground through the mycorrhizal network , which can link large areas of plants via fungal networks, allowing the redistribution of environmental resources. Looking at insect colonies such as bees and ants, we have discovered that the pheromones [ 46 ] released from one organism's cells to another organism's cells can coordinate colonies in a way reminiscent of slime molds . Cell-to-cell signalling using "pheromones" was also found in more complex animals. As complexity increases, so does the effect of signals.
"Pheromones" in more complex animals such as vertebrates are now more correctly referred to as "chemosignals" [ 47 ] [ 48 ] [ 49 ] including between species. [ 50 ] The idea that intercellular communication is so similar among cells within an organism as well as cells between different organisms, even prey, is demonstrated by vinnexin . [ 51 ] This protein is a modified form of an innexin protein found in a caterpillar. That is, the vinnexin is very similar to the caterpillar's own innexin, and could only have been derived from a non-viral innexin in some way that is unclear. The caterpillar innexin forms normal intercellular connections inside the caterpillar as part of the caterpillar's immune response to an egg implanted by a parasitic wasp. The innexin helps ensure the wasp egg is neutralized, saving the caterpillar from the parasite. So what does the vinnexin do and how? Evolution has led to a virus that communicates with the wasp in a way that evades the wasps antiviral responses, allowing the virus to live and replicate in the wasps ovaries. When the wasp injects its egg into the caterpillar host many virus from the wasp's ovary are also injected. The virus particles do not replicate in the caterpillar cells but rather communicate with the caterpillars genetic machinery to produce vinnexin protein. The vinnexin protein incorporates itself into the caterpillar's cells altering the communication in the caterpillar so the caterpillar goes on living but with an altered immune response. Vinnexins are able to mix with normal innexins to alter communication within the caterpillar and probably do. The altered communication within the caterpillar prevents the caterpillar's defenses rejecting the wasps egg. As a result, the wasp egg hatches, consumes the caterpillar and the virus from the wasp larva's mother, and repeats the cycle. It can be seen the virus and wasp are essential to each other and communicate well with each other to allow the virus to live and replicate, but only in a non-destructive way inside the wasp ovary. The virus is injected into a caterpillar by the wasp, but the virus does not replicate in the caterpillar, the virus only communicates with the caterpillar to modify it in a non-lethal way. The wasp larvae will then slowly eat the caterpillar without being stopped while communicating with the virus again to ensure that the wasp has a place in its ovary for it to again replicate. Connexins/innexins/vinnexins, once thought to only participate in providing a path for signaling molecules or electrical signals have now been shown to act as a signaling molecule itself.
https://en.wikipedia.org/wiki/Intercellular_communication
In astronomical navigation , the intercept method , also known as the Marcq St. Hilaire method , is a method of calculating an observer's position on Earth ( geopositioning ). It was originally called the azimuth intercept method because the process involves drawing a line which intercepts the azimuth line. This name was shortened to intercept method, and the intercept distance was shortened to 'intercept'. The method yields a line of position (LOP) on which the observer is situated. The intersection of two or more such lines defines the observer's position, called a "fix". Sights may be taken at short intervals, usually during hours of twilight, or they may be taken at an interval of an hour or more (as in observing the Sun during the day). In either case, the lines of position, if taken at different times, must be advanced or retired to correct for the movement of the ship during the interval between observations. If observations are taken at short intervals, a few minutes at most, the corrected lines of position by convention yield a "fix". If the lines of position must be advanced or retired by an hour or more, convention dictates that the result is referred to as a "running fix". The intercept method is based on the following principle. The actual distance from the observer to the geographical position ( GP ) of a celestial body (that is, the point where it is directly overhead) is "measured" using a sextant . The observer has already estimated his position by dead reckoning and calculated the distance from the estimated position to the body's GP; the difference between the "measured" and calculated distances is called the intercept. The zenith distance of a celestial body is equal to the angular distance of its GP from the observer's position. The rays of light from a celestial body are assumed to be parallel (unless the observer is looking at the Moon, which is too close for such a simplification). The angle at the centre of the Earth that the ray of light passing through the body's GP makes with the line running from the observer's zenith is the same as the zenith distance, because they are corresponding angles . In practice it is not necessary to use zenith distances, which are 90° minus altitude, as the calculations can be done using observed altitude and calculated altitude. Taking a sight using the intercept method consists of the following process: Suitable bodies for celestial sights are selected, often using a Rude Star Finder. Using a sextant , an altitude is obtained of the Sun, the Moon, a star or a planet. The name of the body and the precise time of the sight in UTC are recorded. Then the sextant is read and the altitude ( Hs ) of the body is recorded. Once all sights are taken and recorded, the navigator is ready to start the process of sight reduction and plotting. The first step in sight reduction is to correct the sextant altitude for various errors and corrections. The instrument may have an error, the IC or index correction (see the article on adjusting a sextant ). Refraction by the atmosphere is corrected for with the aid of a table or calculation, and the observer's height of eye above sea level results in a "dip" correction (as the observer's eye is raised, the horizon dips below the horizontal). If the Sun or Moon was observed, a semidiameter correction is also applied to find the centre of the object. The resulting value is the "observed altitude" ( Ho ).
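The Hs-to-Ho correction sequence just described can be sketched as follows. The dip and refraction formulas used here (dip of roughly 1.76 times the square root of the eye height in metres, in arcminutes, and Bennett's refraction approximation) are common approximations assumed for illustration; they are not given in the article, and the numbers in the example are arbitrary.

import math

def observed_altitude(hs_deg, index_corr_arcmin, eye_height_m,
                      semidiameter_arcmin=0.0):
    """Sketch of the Hs -> Ho correction sequence described above.
    Dip and refraction use standard approximations (assumptions, not from
    the article): dip ~ 1.76*sqrt(eye height in metres) arcminutes, and
    Bennett's refraction R ~ cot(h + 7.31/(h + 4.4)) arcminutes."""
    h = hs_deg + index_corr_arcmin / 60.0          # apply index correction (IC)
    h -= 1.76 * math.sqrt(eye_height_m) / 60.0     # dip of the horizon
    h -= (1.0 / math.tan(math.radians(h + 7.31 / (h + 4.4)))) / 60.0  # refraction
    h += semidiameter_arcmin / 60.0                # e.g. Sun lower-limb sight
    return h

# Example: Hs = 45° 12.0', IC = -2.0', eye height 3 m, Sun lower limb ~ +16'
print(observed_altitude(45 + 12/60, -2.0, 3.0, 16.0))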
Next, using an accurate clock, the observed celestial object's geographic position ( GP ) is looked up in an almanac. This is the point on the Earth's surface directly below it (where the object is at the zenith ). The latitude of the geographic position is called declination, and the longitude is usually called the hour angle . Next, the altitude and azimuth of the celestial body are computed for a selected position (assumed position or AP). This involves resolving a spherical triangle: given the three magnitudes local hour angle ( LHA ), the observed body's declination ( dec ), and the assumed latitude ( lat ), the altitude Hc and azimuth Zn must be computed. The local hour angle, LHA , is the difference between the AP longitude and the hour angle of the observed object. It is always measured in a westerly direction from the assumed position. The relevant formulas (derived using the spherical trigonometric identities) are sin(Hc) = sin(lat)·sin(dec) + cos(lat)·cos(dec)·cos(LHA) for the altitude and, for the azimuth angle Z, cos(Z) = ( sin(dec) − sin(lat)·sin(Hc) ) / ( cos(lat)·cos(Hc) ), or, alternatively, tan(Z) = sin(LHA) / ( cos(lat)·tan(dec) − sin(lat)·cos(LHA) ), where Z is converted to the true azimuth Zn according to the quadrant in which the body lies. These computations can be done easily using electronic calculators or computers, but traditionally there were methods which used logarithm or haversine tables. Some of these methods were H.O. 211 (Ageton), Davies, haversine , etc. The relevant haversine formula for the zenith distance ZD, the complement of Hc (ZD = 90° − Hc), is hav(ZD) = hav(lat − dec) + cos(lat)·cos(dec)·hav(LHA), where lat and dec are signed quantities (declination taken as negative when of contrary name to the latitude); the azimuth Zn then follows from the azimuth formula above. When using such tables or a computer or scientific calculator, the navigation triangle is solved directly, so any assumed position can be used. Often the dead reckoning DR position is used. This simplifies plotting and also reduces any slight error caused by plotting a segment of a circle as a straight line. With the use of celestial navigation for air navigation, faster methods were needed, and tables of precomputed triangles were developed. When using precomputed sight reduction tables, selection of the assumed position is one of the trickier steps for the fledgling navigator to master. Sight reduction tables provide solutions for navigation triangles of integral degree values, so when using precomputed sight reduction tables, such as H.O. 229, the assumed position must be selected to yield integer degree values for LHA (local hour angle) and latitude. West longitudes are subtracted and east longitudes are added to GHA to derive LHA , so APs must be selected accordingly. When using precomputed sight reduction tables, each observation and each body will require a different assumed position. Professional navigators are divided in usage between sight reduction tables on the one hand, and handheld computers or scientific calculators on the other. The methods are equally accurate; it is simply a matter of personal preference which method is used. An experienced navigator can reduce a sight from start to finish in about five minutes using nautical tables or a scientific calculator. The precise location of the assumed position has no great impact on the result, as long as it is reasonably close to the observer's actual position; an assumed position within 1 degree of arc of the observer's actual position is usually considered acceptable. The calculated altitude ( Hc ) is compared to the observed altitude ( Ho , the sextant altitude ( Hs ) corrected for various errors). The difference between Hc and Ho is called the "intercept" and is the observer's distance from the assumed position. The resulting line of position ( LOP ) is a small segment of the circle of equal altitude , and is represented by a straight line perpendicular to the azimuth of the celestial body.
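A minimal Python sketch of the altitude and azimuth computation above, using the sine and cosine formulas just given. The quadrant rule for converting the azimuth angle Z to a true azimuth Zn is stated here for an observer in north latitude and is an assumption of the sketch rather than text from the article.

import math

def sight_reduction(lat_deg, dec_deg, lha_deg):
    """Compute calculated altitude Hc and true azimuth Zn (both in degrees)
    from assumed latitude, declination and local hour angle.
    North latitude/declination positive; LHA measured westward, 0-360°."""
    lat = math.radians(lat_deg)
    dec = math.radians(dec_deg)
    lha = math.radians(lha_deg)

    # sin(Hc) = sin(lat)*sin(dec) + cos(lat)*cos(dec)*cos(LHA)
    sin_hc = math.sin(lat) * math.sin(dec) + math.cos(lat) * math.cos(dec) * math.cos(lha)
    hc = math.asin(sin_hc)

    # cos(Z) = (sin(dec) - sin(lat)*sin(Hc)) / (cos(lat)*cos(Hc))
    cos_z = (math.sin(dec) - math.sin(lat) * sin_hc) / (math.cos(lat) * math.cos(hc))
    z = math.degrees(math.acos(max(-1.0, min(1.0, cos_z))))

    # Quadrant rule for a north-latitude observer (assumption of this sketch):
    # LHA < 180° means the body is west of the observer, so Zn = 360° - Z.
    zn = 360.0 - z if lha_deg < 180.0 else z
    return math.degrees(hc), zn

# Example: assumed latitude 40° N, declination 20° N, LHA 30°
print(sight_reduction(40.0, 20.0, 30.0))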
When plotting the small segment of this circle on a chart it is drawn as a straight line; the resulting tiny errors are too small to be significant. Navigators use the memory aid "computed greater away": if Hc is greater than Ho , the observer is farther from the body's geographic position, and the intercept is measured from the AP away from the azimuth direction. If Hc is less than Ho , then the observer is closer to the body's geographic position, and the intercept is measured from the AP toward the azimuth direction. The last step in the process is to plot the lines of position ( LOP ) and determine the vessel's location. Each assumed position is plotted first. Best practice is then to advance or retire the assumed positions to correct for vessel motion during the interval between sights. Each LOP is then constructed from its associated AP by striking off the azimuth to the body, measuring the intercept toward or away from the azimuth, and constructing the perpendicular line of position. To obtain a fix (a position), this LOP must be crossed with another LOP, either from another sight or from elsewhere, e.g. a bearing of a point of land or the crossing of a depth contour such as the 200 metre depth line on a chart. Until the age of satellite navigation, ships usually took sights at dawn, during the forenoon, at noon (meridian transit of the Sun) and at dusk. The morning and evening sights were taken during twilight, while the horizon was visible and the stars, planets and/or Moon were visible, at least through the telescope of a sextant . Two observations are required to give a position accurate to within a mile under favourable conditions; three are always sufficient. A fix is called a running fix when one or more of the LOPs used to obtain it has been advanced or retired over time. In order to get a fix the LOPs must cross at an angle, the closer to 90° the better; this means the observations must have different azimuths. During the day, if only the Sun is visible, it is possible to get an LOP from the observation but not a fix, as another LOP is needed. What may be done is to take a first sight which yields one LOP and, some hours later, when the Sun's azimuth has changed substantially, take a second sight which yields a second LOP. Knowing the distance and course sailed in the interval, the first LOP can be advanced to its new position, and the intersection with the second LOP yields a running fix . Any sight can be advanced and used to obtain a running fix . It may be that, due to weather conditions, the navigator could only obtain a single sight at dawn. The resulting LOP can then be advanced when, later in the morning, a Sun observation becomes possible. The precision of a running fix depends on the error in distance and course, so, naturally, a running fix tends to be less precise than an unqualified fix, and the navigator must take into account his confidence in the exactitude of distance and course to estimate the resulting error in the running fix. Determining a fix by crossing LOPs, and advancing LOPs to get running fixes, are not specific to the intercept method and can be used with any sight reduction method or with LOPs obtained by any other method (bearings, etc.).
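A minimal sketch of the intercept calculation and the "computed greater away" rule described above; one arcminute of altitude difference is taken as one nautical mile, and the example values are arbitrary.

def intercept(ho_deg, hc_deg):
    """Return the intercept in nautical miles and its plotting direction.
    One arcminute of altitude difference corresponds to one nautical mile.
    'Computed greater away': if Hc > Ho the intercept is plotted from the
    AP away from the body's azimuth, otherwise toward it."""
    diff_arcmin = (ho_deg - hc_deg) * 60.0
    direction = "toward" if diff_arcmin > 0 else "away"
    return abs(diff_arcmin), direction

# Example: Ho = 57° 40.0', Hc = 57° 30.0'  ->  10.0 NM toward the azimuth
print(intercept(57 + 40/60, 57 + 30/60))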
https://en.wikipedia.org/wiki/Intercept_method
Interception [ 1 ] refers to precipitation that does not reach the soil, but is instead intercepted by the leaves and branches of plants and by the forest floor. It occurs in the canopy (i.e. canopy interception ) and in the forest floor or litter layer (i.e. forest floor interception [ 2 ] ). Because of evaporation , interception of liquid water generally leads to loss of that precipitation for the drainage basin , except in cases such as fog interception, but it can increase flood protection dramatically (Alila et al., 2009). [ 3 ] Intercepted snowfall does not result in any notable amount of evaporation, and most of the snow is blown off the tree by wind or melts. However, intercepted snow can more easily drift with the wind out of the watershed. Conifers have a greater interception capacity than hardwoods . Their needles give them more surface area for droplets to adhere to, and they bear foliage in spring and fall; interception therefore also depends on the type of vegetation in a wooded area. Mitscherlich in 1971 calculated the water storage potential, expressed as interception values, for different species and stand densities. A storm event might produce 50 – 100 mm of rainfall, and 4 mm might be the maximum intercepted in this way. Grah and Wilson in 1944 did sprinkling experiments in which they watered plants to see how much of the intercepted water is retained after watering stops. Trees such as Norway maple and small-leaved lime intercept approximately 38% of gross precipitation in a temperate climate. [ 4 ] Interception depends on the leaf area index and on the type of leaves. Interception may increase erosion or reduce it, depending on the throughfall effects.
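A crude sketch of the quantities quoted above. Capping interception at a fixed canopy storage value is an assumed simplification for illustration, not a model given in the article; the ~38% fraction and ~4 mm ceiling are the figures mentioned in the text.

def intercepted_rainfall(gross_mm, interception_fraction=0.38, storage_max_mm=4.0):
    """Crude estimate of canopy interception for one storm:
    a fixed fraction of gross rainfall, capped at the canopy's storage
    capacity (the article quotes ~38% for some broadleaves and ~4 mm
    as a typical per-storm maximum)."""
    return min(gross_mm * interception_fraction, storage_max_mm)

for gross in (5, 20, 50, 100):              # storm totals in mm
    print(gross, intercepted_rainfall(gross))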
https://en.wikipedia.org/wiki/Interception_(water)
In geotechnical engineering , an interceptor ditch is a small ditch or channel constructed to intercept and drain water to an area where it can be safely discharged. [ 1 ] Interceptor ditches are used for excavations of limited depth made in coarse-grained soils. They are constructed around the area to be dewatered, and sump pits are placed at suitable intervals so that centrifugal pumps can remove the collected water efficiently. [ 2 ] In fine sands and silts there may be sloughing , erosion or quick conditions ; for such soils the method is confined to depths of 1 to 2 m. Interceptor ditches are most economical for carrying away water which emerges on the slopes and near the bottom of the foundation pit. [ 3 ] The size of a ditch depends on the original ground slope, the runoff area, the type of soil and vegetation, and other factors related to runoff volume. [ 4 ] Like any other structure, interceptor ditches require inspection and maintenance after construction is complete.
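The article notes only that ditch size depends on slope, runoff area, soil and vegetation. As an illustration, the sketch below sizes a ditch using the rational method for peak runoff and an assumed non-erosive flow velocity; both the method and the numbers are assumptions, not taken from the article.

def peak_runoff_m3s(runoff_coeff, rainfall_mm_per_h, area_ha):
    """Rational-method estimate of the peak runoff to be intercepted:
    Q = C * i * A / 360, with i in mm/h and A in hectares, giving m^3/s.
    (A common sizing assumption; the article does not specify a method.)"""
    return runoff_coeff * rainfall_mm_per_h * area_ha / 360.0

def required_flow_area_m2(q_m3s, allowable_velocity_m_s=1.0):
    """Cross-sectional flow area needed at an assumed non-erosive velocity."""
    return q_m3s / allowable_velocity_m_s

q = peak_runoff_m3s(runoff_coeff=0.5, rainfall_mm_per_h=40.0, area_ha=2.0)
print(q, required_flow_area_m2(q))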
https://en.wikipedia.org/wiki/Interceptor_ditch
In the theory of formal languages , the interchange lemma states a necessary condition for a language to be context-free , just like the pumping lemma for context-free languages . It states that for every context-free language L there is a constant c > 0 such that for all n ≥ m ≥ 2 and for any collection R ⊂ L of words of length n there is a subset Z = {z_1, …, z_k} ⊂ R with k ≥ |R| / (c n²), and decompositions z_i = w_i x_i y_i such that each of |w_i|, |x_i|, |y_i| is independent of i, moreover m/2 < |x_i| ≤ m, and the words w_i x_j y_i are in L for every i and j. The first application of the interchange lemma was to show that the set of repetitive strings (i.e., strings of the form x y y z with |y| > 0) over an alphabet of three or more characters is not context-free.
https://en.wikipedia.org/wiki/Interchange_lemma
In mathematics , the study of interchange of limiting operations is one of the major concerns of mathematical analysis , in that two given limiting operations, say L and M , cannot be assumed to give the same result when applied in either order. One of the historical sources for this theory is the study of trigonometric series . [ 1 ] In symbols, the assumption LM = ML, where the left-hand side means that M is applied first, then L , and vice versa on the right-hand side , is not a valid equation between mathematical operators , under all circumstances and for all operands. An algebraist would say that the operations do not commute . The approach taken in analysis is somewhat different. Conclusions that assume limiting operations do 'commute' are called formal . The analyst tries to delineate conditions under which such conclusions are valid; in other words, mathematical rigour is established by the specification of some set of sufficient conditions for the formal analysis to hold. This approach justifies, for example, the notion of uniform convergence . [ 2 ] It is relatively rare for such sufficient conditions to be also necessary, so that a sharper piece of analysis may extend the domain of validity of formal results. Professionally speaking, therefore, analysts push the envelope of techniques, and expand the meaning of well-behaved for a given context. G. H. Hardy wrote that "The problem of deciding whether two given limit operations are commutative is one of the most important in mathematics". [ 3 ] An opinion apparently not in favour of the piece-wise approach, but of leaving analysis at the level of heuristic , was that of Richard Courant . Examples abound, one of the simplest being that for a double sequence a_{m,n} it is not necessarily the case that the operations of taking the limits as m → ∞ and as n → ∞ can be freely interchanged. [ 4 ] For example, take a_{m,n} = m / n, in which taking the limit first with respect to n gives 0, while taking it first with respect to m gives ∞. Many of the fundamental results of infinitesimal calculus also fall into this category: the symmetry of partial derivatives , differentiation under the integral sign , and Fubini's theorem deal with the interchange of differentiation and integration operators. One of the major reasons why the Lebesgue integral is used is that theorems exist, such as the dominated convergence theorem , that give sufficient conditions under which integration and limit operation can be interchanged. Necessary and sufficient conditions for this interchange were discovered by Federico Cafiero . [ 5 ]
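Writing out the two iterated limits for the example just given makes the failure of interchange explicit; the specific sequence m/n is used here as an illustration consistent with the limits stated in the text.

\lim_{m\to\infty}\lim_{n\to\infty}\frac{m}{n} \;=\; \lim_{m\to\infty} 0 \;=\; 0,
\qquad
\lim_{n\to\infty}\lim_{m\to\infty}\frac{m}{n} \;=\; \lim_{n\to\infty}\infty \;=\; \infty .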
https://en.wikipedia.org/wiki/Interchange_of_limiting_operations