The inflammatory reflex is a neural circuit that regulates the immune response to injury and invasion. All reflexes have an afferent and an efferent arc. The inflammatory reflex has a sensory afferent arc, which is activated by cytokines, and a motor or efferent arc, which transmits action potentials in the vagus nerve to suppress cytokine production. Increased signaling in the efferent arc inhibits inflammation and prevents organ damage. It has also been shown that the brain can use this circuit to regulate the immune response and to extend immunological memory. [ 1 ] [ 2 ] The molecular basis of cytokine-inhibiting signals requires the neurotransmitter acetylcholine and the alpha-7 nicotinic receptor expressed on cytokine-producing cells. [ 3 ] [ 4 ] The release of acetylcholine in the spleen suppresses the production of TNF and other cytokines that cause damaging inflammation. [ 5 ] Signaling in the efferent arc of the inflammatory reflex, termed the "cholinergic anti-inflammatory pathway," provides a regulatory check on the innate immune system's response to invasion and injury. The action potentials that arise in the vagus nerve are transmitted to the spleen, where a subset of specialized T cells is activated to secrete acetylcholine. The net effect of the reflex is to prevent the damage caused by excessive cytokine production. [ 6 ] Evidence from experimental disease models of arthritis, colitis, sepsis, hemorrhagic shock, and congestive heart failure indicates that electrical stimulation of the vagus nerve can prevent or reverse these diseases. [ 7 ] Some research suggests that it is possible to implant nerve stimulators to replace anti-inflammatory drugs that target cytokine activity (e.g. anti-TNF and anti-IL-1 antibodies). [ 8 ]
https://en.wikipedia.org/wiki/Inflammatory_reflex
An inflatable [ 1 ] is an object that can be inflated with a gas, usually air, but hydrogen, helium, and nitrogen are also used. One of several advantages of an inflatable is that it can be stored in a small space when not inflated, since inflatables depend on the presence of a gas to maintain their size and shape. Another key advantage is the function fulfilled per unit of mass compared with non-inflatable alternatives. Stadium cushions, impact guards, vehicle wheel inner tubes, emergency air bags, and inflatable space habitats employ the inflatable principle. Inflation occurs through several strategies: pumps, ram-air, blowing, and suction. Although the term inflatable can refer to any type of inflatable object, in boating the term is often used to refer specifically to inflatable boats. A distinction is made between high-pressure and low-pressure inflatables. In a high-pressure inflatable, structural limbs such as pillars and arches are built out of a tough, flexible material and then inflated at a relatively high pressure. These limbs hold up passive membranes. The space where the visitors or inhabitants stay is under normal atmospheric pressure. For example, airplane emergency rafts are high-pressure inflatable structures. Low-pressure inflatables, on the other hand, are slightly pressurized environments completely held up by internal pressure. In other words, the visitors or inhabitants experience a slightly higher than normal pressure. Low-pressure inflatables are usually built of lighter materials. Both types of inflatables (the low-pressure type more so) are somewhat susceptible to high winds. A balloon is an inflatable flexible bag filled with a gas, such as helium, hydrogen, nitrous oxide, oxygen, or air. Modern balloons can be made from materials such as latex rubber, polychloroprene, or a nylon fabric, while some early balloons were made of dried animal bladders [ citation needed ]. Latex rubber balloons may be used as inexpensive children's toys or decorations, while others are used for practical purposes such as meteorology, medical treatment, military defense, or transportation. A balloon's properties, including its low density and low cost, have led to a wide range of applications. The inventor of the natural latex rubber balloon (the most common balloon) was Michael Faraday in 1824, via experiments with air and various gases. [ 2 ] Inflatable castles and similar structures are temporary inflatable buildings and structures that are rented for functions, school and church festivals, and village fetes and are used for recreational purposes, mainly by children. The growth in popularity of moonwalks has led to an inflatable rental industry which includes inflatable slides, obstacle courses, games, and more. Inflatables are ideal for portable amusements because they are easy to transport and store. An inflatable boat is a lightweight boat constructed with its sides and bow made of flexible tubes containing pressurised gas. For smaller boats, the floor and hull beneath it are often flexible. On boats longer than 3 metres (10 feet), the floor often consists of three to five rigid plywood or aluminium sheets fixed between the tubes but not joined rigidly together. Often the transom is rigid, providing a location and structure for mounting an outboard motor. Some inflatable boats have been designed to be disassembled and packed into a small volume, so they can easily be stored and transported to water when needed.
When inflated, the boat is kept rigid crossways by a foldable, removable thwart. This feature allows such boats to be used as liferafts for larger boats or aircraft, and for travel or recreational purposes. Other terms for inflatable boats are "inflatable dinghy", "rubber dinghy", "inflatable", or "inflatable rescue boat". A tire (in American English and Canadian English) or tyre (in British English, New Zealand English, Australian English and others) is a ring-shaped covering that fits around a wheel rim to protect it and enable better vehicle performance by providing a flexible cushion that absorbs shock while keeping the wheel in close contact with the ground. The word itself may be derived from the word "tie," which refers to the outer steel ring part of a wooden cart wheel that ties the wood segments together (see Etymology below). The fundamental materials of modern tires are synthetic rubber, natural rubber, fabric and wire, along with other compound chemicals. They consist of a tread and a body. The tread provides traction while the body ensures support. Before rubber was invented, the first versions of tires were simply bands of metal fitted around wooden wheels to prevent wear and tear. Today, the vast majority of tires are pneumatic inflatable structures, comprising a doughnut-shaped body of cords and wires encased in rubber and generally filled with compressed air to form an inflatable cushion. Pneumatic tires are used on many types of vehicles, such as bicycles, motorcycles, cars, trucks, earthmovers, and aircraft. An air-supported (or air-inflated) structure is any permanent building that derives its structural integrity from the use of internal pressurized air to inflate a pliable material (i.e. structural fabric) envelope, so that air is the main support of the structure. It is usually dome-shaped, since this shape creates the greatest volume for the least amount of material. However, rectangular inflatables are also possible, such as the Airtecture Exhibition Hall constructed by Festo AG & Co. [ 3 ] The concept was popularized on a large scale by David H. Geiger with the United States pavilion at Expo '70 in Osaka, Japan in 1970. [ 4 ] To maintain structural integrity, the structure must be pressurized such that the internal pressure equals or exceeds any external pressure being applied to the structure (i.e. wind pressure). The structure does not have to be airtight to retain structural integrity; as long as the pressurization system that supplies internal pressure replaces any air leakage, the structure will remain stable. All access to the structure's interior must be equipped with two sets of doors or a revolving door (an airlock). Air-supported structures are secured by heavy weights on the ground, ground anchors, attachment to a foundation, or a combination of these. The original inflatable game was the Moonwalk (bounce house). Today there is a wide variety of inflatable games that come in all shapes and sizes. Many inflatable games put people in head-to-head competition with other people, such as the bungee run and gladiator joust. There are also several inflatable obstacle courses available. Because of their large size, most obstacle courses consist of two or more inflatables connected together. There are also several variations on sports games which are made portable thanks to inflatables.
A sports cage is an inflatable cage that holds up a backdrop resembling a sport (e.g., baseball, American football, soccer, golf) in which players throw, toss, hit, or kick a ball at a marked spot on the backdrop. The cage not only holds the backdrop but keeps balls from flying everywhere. Some sports cages come with a radar gun that measures the speed of the throw or kick. During the 2000s, inflatables replaced the plastic blow-molded yard decorations used as Christmas decorations at many U.S. homes, and are also now used as Halloween decorations and for other occasions as well. These are made of a synthetic fabric, of which different colors have been sewn together in various patterns. An electric blower constantly forces air into the figure, replacing air lost through its fabric and seams. They are internally lit by small C7 incandescent light bulbs (also used in nightlights), which are covered by translucent plastic snap-on globes that protect the fabric from the heat should they rest against it. Inflatables come in various sizes, commonly four feet (1.2 meters) tall (operated with a low-voltage DC power supply and a computer fan), and six or eight feet (1.8 to 2.4 meters) tall, running directly from AC mains electricity. Like inflatable rides, outdoor types are staked to the ground with guy wires (usually synthetic rope or flat straps) to keep them upright in the wind, though, being rather flimsy, they do not always stay upright. Heavy snow or rainwater which has accumulated may also prevent proper inflation. While these store compactly, there are disadvantages, including the large amount of electricity needed to keep them constantly inflated. While they can be turned off in the daytime, this leaves the figure deflated and subject to the rain and snow problem. Freezing rain, heavy snow, or high winds may also cause inflatables to collapse. Additionally, like a tent, they must be completely dry before being packed for storage, or mildew may be a problem (especially if kept in a basement). Decorative inflatables can be mended using duct tape or rip-stop patching tape. Since these materials are now available in colors, matching the patch to the inflatable is not difficult. Decorative inflatables are made in many popular characters, including Santa Claus and snowmen for Christmas, and ghosts and jack-o'-lanterns for Halloween. Several trademarked characters are also produced, including SpongeBob SquarePants, Winnie the Pooh, and Snoopy and Woodstock from Peanuts. There are also walk-through arches and "haunted houses" for children, and items for other holidays, like Uncle Sam for Independence Day and palm trees for backyard summer cookouts. Since 2005, there have also been inflatable snow globes which blow tiny styrofoam beads around on the inside, the blower's air jet picking them up and carrying them through a tube to the top, where they fall down inside the clear vinyl front. On others, mainly for Halloween, lightweight foam bats or ghosts spin around like confetti in what is called a "tornado globe". The figures inside both types are also inflatables. Since 2006, several of these have motion, which is driven by the air itself and the Venturi effect. The original was a merry-go-round (usually surrounded by clear vinyl for support); another, from 2007, is an airplane with a moving propeller. Ghosts may also have streamers which blow around where the air escapes.
Inflatables have been made by visual artists and displayed in prominent places in Australia, including on the water in Sydney Harbor and in the sky over the city of Canberra. Examples include Alphie the Alpha Turtle and Patricia Piccinini's The Skywhale. Airbeams, inflatable spars, [ 5 ] inflatable wings, [ 6 ] and tensairity-enhanced inflatable bladders provide a means to structure practical objects. [ 7 ] Inflatable ballute structures have been proposed for use during aerocapture, aerobraking and atmospheric entry of cubesat [ 8 ] and nanosat [ 9 ] satellites. The inflatable structures for these applications may take a variety of engineered shapes, including stacked toroidal, tension cone and isotensoid ballute form factors. [ 9 ] Inflatable space habitats have been proposed since the 1960s [ 10 ] and one expandable space station was planned for launch in 2015. [ 11 ] Typical examples of an inflatable include the inflatable movie screen, the inflatable boat, the balloon, the airship, the evacuation slide, furniture, kites, and numerous air-filled swimming pool toys. Air beams as structural elements are finding increasing applications. Smaller-scale inflatables (such as pool toys) generally consist of one or more "air chambers", which are hollow enclosures bound by a soft and flexible airtight material (such as vinyl), which a gas can enter or leave through valves (usually one on each air chamber). The design's dependence upon an enclosed pocket of gas leads to a need for a very durable surface material and/or ease of repair of tears and holes in the material, since a puncture or tear will result in the escape of the gas inside (a leak) and the deflation of the inflatable, which depends on the gas's pressure to hold its form. Detectable leaks can be caused by holes (from punctures or tears) in the material, the separation of seams, the wear of part of the valve, or an improperly closed valve. Even if an inflatable possesses no macroscopic leaks, the gas inside will usually diffuse out of the inflatable, albeit at a much slower rate, until equilibrium is reached with the pressure outside the inflatable. Many inflatables are made of material that does not stretch upon inflation; a notable exception to this is the balloon, whose rubber stretches greatly when inflated. The airship is usually inflated with helium, as it is lighter than air and, unlike the hydrogen used in airships such as the Hindenburg, does not burn. Inflatables are also used for the construction of specific sports pitches, military quick-assembly tents, camping tent air beams, and noise makers. Inflatable aircraft, including the Goodyear Inflatoplane, have been used. Inflation by dynamic ram-air provides wings for hang gliding and paragliding. Inflatables came very much into the public eye as architectural and domestic objects when synthetic materials became commonplace. [ 12 ] Iconic structures like the US Pavilion at the 1970 Osaka Expo by Davis and Brody [ 13 ] and Victor Lundy's travelling pavilion for the Atomic Energy Commission popularized the idea that inflatables can be a way to build large structures with very extended interior spans without pillars. These great hopes for inflatable structures would later be dashed by the many practical difficulties faced by inflatable buildings, such as climatization, safety, sensitivity to wind and fireproofing, which currently restrict their use to very specific circumstances.
The DVD Ant Farm has directions for making your own inflatables using plastic bags and an iron. The low technological barrier to building inflatables is further lowered by DIY instruction sets like the Inflatocookbook. [ 14 ] A patent was granted in Australia in 2001 for a "Manually portable and inflatable automobile" (Australian Patent Number 2001100029); however, no known practical form of this type of inflatable has yet been commercialised. [ 15 ] Large-scale low-pressure inflatables are often seen at festivals as decorations or inflatable games. These are made out of rip-stop nylon and have a constant flow of air from a blower inflating them. In some cases, an inflatable roof is added to an otherwise traditional structure: the biggest example in the world was the BC Place Stadium in Vancouver, British Columbia. Another example can be found in the Roman amphitheater of Nîmes. Many companies use inflatables in the shape of their product or service; they do this because no permission from a local council or authority is needed to display them, and they are easily moved from place to place. Inflatables have been used prominently in works of art by artists such as Paul Chan, [ 16 ] Martin Creed, [ 17 ] [ 18 ] John Jasperse, [ 19 ] Jeff Koons, [ 20 ] and Andy Warhol. [ 21 ]
https://en.wikipedia.org/wiki/Inflatable
Inflatable space structures are structures which use pressurized air to maintain shape and rigidity. The technological approach has been employed from the early days of the space program, with satellites such as Echo, to the impact attenuation system that enabled the successful landing of the Pathfinder lander and rover on Mars in 1997. Inflatable structures are also candidates for space structures, given their low weight and hence easy transportability. Inflatable space structures use pressurized air or gas to maintain shape and rigidity. Notable examples of terrestrial inflatable structures include inflatable boats and some military tents. [ 1 ] The airships of the twentieth century are examples of the concept applied in the aviation environment. [ 2 ] NASA has investigated inflatable, deployable structures since the early 1950s. Concepts include inflatable satellites, booms, and antennas. Inflatable heatshields, decelerators, and airbags can be used for entry, descent and landing applications. Inflatable habitats, airlocks, and space stations are possible for in-space living spaces and surface exploration missions. [ 3 ] The Echo 1 satellite, launched in 1960, was a large inflated satellite with a diameter of 30 meters, coated with reflective material that allowed radio signals to be bounced off its surface. The satellite was sent to orbit in a flat-folded configuration and inflated once in orbit. [ 4 ] The airbags used during the Mars Pathfinder descent and landing in 1997 are an example of the use of an inflatable system for impact attenuation. [ 3 ] Space Solar Power (SSP) solutions employing inflatable structures have been designed and qualified for space by NASA engineers. [ 5 ] NASA is testing a deployable heat shield solution in space as a secondary payload on the launch that will deliver the NASA JPSS-2 satellite in late 2022. The Low-Earth Orbit Flight Test of an Inflatable Decelerator (LOFTID) is designed to demonstrate aerobraking and re-entry from 18,000 miles per hour after separation from the launch vehicle adapter structure. [ 6 ] The space station concepts developed by Bigelow Aerospace are an example of an inflatable crewed orbital space habitat. [ 7 ]
https://en.wikipedia.org/wiki/Inflatable_space_structures
In mathematics, the inflation-restriction exact sequence is an exact sequence occurring in group cohomology and is a special case of the five-term exact sequence arising from the study of spectral sequences. Specifically, let $G$ be a group, $N$ a normal subgroup, and $A$ an abelian group which is equipped with an action of $G$, i.e., a homomorphism from $G$ to the automorphism group of $A$. The quotient group $G/N$ acts on the invariant subgroup $A^N$. Then the inflation-restriction exact sequence is: $0 \to H^1(G/N, A^N) \to H^1(G, A) \to H^1(N, A)^{G/N} \to H^2(G/N, A^N) \to H^2(G, A)$. In this sequence, there are maps: inflation $H^1(G/N, A^N) \to H^1(G, A)$, restriction $H^1(G, A) \to H^1(N, A)^{G/N}$, and transgression $H^1(N, A)^{G/N} \to H^2(G/N, A^N)$. The inflation and restriction are defined for general $n$: inflation $H^n(G/N, A^N) \to H^n(G, A)$ and restriction $H^n(G, A) \to H^n(N, A)^{G/N}$. The transgression $H^n(N, A)^{G/N} \to H^{n+1}(G/N, A^N)$ is defined for general $n$ only if $H^i(N, A)^{G/N} = 0$ for $i \le n - 1$. [ 1 ] The sequence for general $n$ may be deduced from the case $n = 1$ by dimension-shifting or from the Lyndon–Hochschild–Serre spectral sequence. [ 2 ]
https://en.wikipedia.org/wiki/Inflation-restriction_exact_sequence
In hydrology, discharge is the volumetric flow rate (volume per time, in units of m³/h or ft³/h) of a stream. It equals the product of average flow velocity (with dimension of length per time, in m/h or ft/h) and the cross-sectional area (in m² or ft²). [ 1 ] It includes any suspended solids (e.g. sediment), dissolved chemicals like CaCO₃(aq), or biologic material (e.g. diatoms) in addition to the water itself. Terms may vary between disciplines. For example, a fluvial hydrologist studying natural river systems may define discharge as streamflow, whereas an engineer operating a reservoir system may equate it with outflow, contrasted with inflow. A discharge is a measure of the quantity of any fluid flow over unit time. The quantity may be either volume or mass. Thus the water discharge of a tap (faucet) can be measured with a measuring jug and a stopwatch. Here the discharge might be 1 litre per 15 seconds, equivalent to 67 ml/second or 4 litres/minute. This is an average measure. For measuring the discharge of a river we need a different method, and the most common is the 'area-velocity' method. The area is the cross-sectional area across a river, and the average velocity across that section needs to be measured for a unit time, commonly a minute. Measurement of cross-sectional area and average velocity, although simple in concept, are frequently non-trivial to determine. The units that are typically used to express discharge in streams or rivers include m³/s (cubic meters per second), ft³/s (cubic feet per second or cfs) and/or acre-feet per day. [ 2 ] A commonly applied methodology for measuring, and estimating, the discharge of a river is based on a simplified form of the continuity equation. The equation implies that for any incompressible fluid, such as liquid water, the discharge ($Q$) is equal to the product of the stream's cross-sectional area ($A$) and its mean velocity ($\bar{u}$), and is written as $Q = A\,\bar{u}$, where $Q$ is the discharge (m³/s), $A$ is the cross-sectional area (m²), and $\bar{u}$ is the mean flow velocity (m/s). For example, the average discharge of the Rhine river in Europe is 2,200 cubic metres per second (78,000 cu ft/s) or 190,000,000 cubic metres (150,000 acre⋅ft) per day. Because of the difficulties of measurement, a stream gauge is often used at a fixed location on the stream or river. Empirically derived relationships between channel width (breadth) $b$, depth $h$, and velocity $u$ are power laws in the discharge, of the form $b \propto Q^{0.5}$, $h \propto Q^{0.4}$ and $u \propto Q^{0.1}$ (the exponents sum to one, since $Q = b\,h\,u$). [ 3 ] Parameter $Q$ here refers to a "dominant discharge" or "channel-forming discharge", which is typically the 1–2 year flood, though there is a large amount of scatter around this mean. This is the event that causes significant erosion and deposition and determines the channel morphology. A hydrograph is a graph showing the rate of flow (discharge) versus time past a specific point in a river, channel, or conduit carrying flow. The rate of flow is typically expressed in units of cubic meters per second (m³/s) or cubic feet per second (cfs). The catchment of a river above a certain location is determined by the surface area of all land which drains toward the river from above that point. The river's discharge at that location depends on the rainfall on the catchment or drainage area and the inflow or outflow of groundwater to or from the area, stream modifications such as dams and irrigation diversions, as well as evaporation and evapotranspiration from the area's land and plant surfaces.
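A minimal sketch of the area-velocity calculation described above follows; the cross-section numbers are hypothetical, chosen so the result matches the Rhine's average discharge quoted in the text.

```python
# Area-velocity method: Q = A * u_bar. The channel geometry below is a
# hypothetical illustration, not measured Rhine data.

def discharge(area_m2: float, mean_velocity_ms: float) -> float:
    """Volumetric discharge Q in cubic metres per second."""
    return area_m2 * mean_velocity_ms

q = discharge(area_m2=1100.0, mean_velocity_ms=2.0)
print(f"Q = {q:.0f} m^3/s")                    # Q = 2200 m^3/s
print(f"daily volume = {q * 86400:.2e} m^3")   # ~1.9e+08 m^3 per day
```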
In storm hydrology, an important consideration is the stream's discharge hydrograph, a record of how the discharge varies over time after a precipitation event. The stream rises to a peak flow after each precipitation event, then falls in a slow recession. Because the peak flow also corresponds to the maximum water level reached during the event, it is of interest in flood studies. Analysis of the relationship between precipitation intensity and duration and the response of the stream discharge is aided by the concept of the unit hydrograph, which represents the response of stream discharge over time to the application of a hypothetical "unit" amount and duration of rainfall (e.g., half an inch over one hour). The amount of precipitation correlates to the volume of water (depending on the area of the catchment) that subsequently flows out of the river. Using the unit hydrograph method, actual historical rainfalls can be modeled mathematically to confirm characteristics of historical floods, and hypothetical "design storms" can be created for comparison to observed stream responses. The relationship between the discharge in the stream at a given cross-section and the level of the stream is described by a rating curve. Average velocities and the cross-sectional area of the stream are measured for a given stream level. The velocity and the area give the discharge for that level. After measurements are made for several different levels, a rating table or rating curve may be developed. Once rated, the discharge in the stream may be determined by measuring the level and determining the corresponding discharge from the rating curve. If a continuous level-recording device is located at a rated cross-section, the stream's discharge may be continuously determined. Larger flows (higher discharges) can transport more sediment and larger particles downstream than smaller flows due to their greater force. Larger flows can also erode stream banks and damage public infrastructure. G. H. Dury and M. J. Bradshaw are two geographers who devised models showing the relationship between discharge and other variables in a river. The Bradshaw model described how pebble size and other variables change from source to mouth, while Dury considered the relationships between discharge and variables such as stream slope and friction. These follow from the ideas presented by Leopold, Wolman and Miller in Fluvial Processes in Geomorphology [ 5 ] and from work on land use affecting river discharge and bedload supply. [ 6 ] Inflow is the sum of processes within the hydrologic cycle that increase the water levels of bodies of water. [ 7 ] Most precipitation occurs directly over bodies of water such as the oceans, or on land as surface runoff. [ 8 ] A portion of runoff enters streams and rivers, and another portion soaks into the ground as groundwater seepage. [ 9 ] The rest soaks into the ground as infiltration, some of which infiltrates deep into the ground to replenish aquifers. [ 10 ]
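The level-to-discharge lookup behind the rating-curve procedure described above can be sketched with simple linear interpolation; the stage-discharge pairs below are invented for illustration, not real gauge data.

```python
import numpy as np

# Hypothetical rating table: stage (water level, m) versus measured
# discharge (m^3/s) at a rated cross-section.
stage_m = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
discharge_m3s = np.array([12.0, 40.0, 95.0, 180.0, 430.0])

def rated_discharge(level_m: float) -> float:
    """Linearly interpolate the rating curve at the given stage."""
    return float(np.interp(level_m, stage_m, discharge_m3s))

print(rated_discharge(1.8))  # discharge implied by a 1.8 m stage reading
```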
https://en.wikipedia.org/wiki/Inflow_(hydrology)
In engineering, an influence line graphs the variation of a function (such as the shear or moment felt in a structural member) at a specific point on a beam or truss caused by a unit load placed at any point along the structure. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] Common functions studied with influence lines include reactions (forces that the structure's supports must apply for the structure to remain static), shear, moment, and deflection (deformation). [ 6 ] Influence lines are important in designing beams and trusses used in bridges, crane rails, conveyor belts, floor girders, and other structures where loads will move along their span. [ 5 ] The influence lines show where a load will create the maximum effect for any of the functions studied. Influence lines are both scalar and additive. [ 5 ] This means that they can be used even when the load that will be applied is not a unit load or if there are multiple loads applied. To find the effect of any non-unit load on a structure, the ordinate results obtained by the influence line are multiplied by the magnitude of the actual load to be applied. The entire influence line can be scaled, or just the maximum and minimum effects experienced along the line. The scaled maximum and minimum are the critical magnitudes that must be designed for in the beam or truss. In cases where multiple loads may be in effect, influence lines for the individual loads may be added together to obtain the total effect the structure bears at a given point. When adding the influence lines together, it is necessary to include the appropriate offsets due to the spacing of loads across the structure. For example, if a truck load is applied to the structure and the rear axle, B, is three feet behind the front axle, A, then the effect of A at x feet along the structure must be added to the effect of B at (x – 3) feet along the structure, not the effect of B at x feet along the structure. Many loads are distributed rather than concentrated. Influence lines can be used with either concentrated or distributed loadings. For a concentrated (or point) load, a unit point load is moved along the structure. For a distributed load of a given width, a unit-distributed load of the same width is moved along the structure, noting that as the load nears the ends and moves off the structure only part of the total load is carried by the structure. The effect of the distributed unit load can also be obtained by integrating the point load's influence line over the corresponding length of the structure. Under the virtual displacement used in this approach, a determinate structure becomes a mechanism, whereas an indeterminate structure becomes merely determinate. [ 7 ] Influence lines are based on Betti's theorem. From there, consider two external force systems, $F_i^P$ and $F_i^Q$, each one associated with a displacement field whose displacements measured at the forces' points of application are represented by $d_i^P$ and $d_i^Q$. Consider that the $F_i^P$ system represents actual forces applied to the structure, which are in equilibrium. Consider that the $F_i^Q$ system is formed by a single force, $F^Q$.
The displacement field $d_i^Q$ associated with this force is defined by releasing the structural restraints acting on the point where $F^Q$ is applied and imposing a relative unit displacement that is kinematically admissible in the negative direction, represented as $d_1^Q = -1$. From Betti's theorem, we obtain the following result: $-F_1^P + \sum_{i=2}^{n} F_i^P d_i^Q = F^Q \times 0 \iff F_1^P = \sum_{i=2}^{n} F_i^P d_i^Q$. When designing a beam or truss, it is necessary to design for the scenarios causing the maximum expected reactions, shears, and moments within the structure members to ensure that no member fails during the life of the structure. When dealing with dead loads (loads that never move, such as the weight of the structure itself), this is relatively easy because the loads are easy to predict and plan for. For live loads (any load that moves during the life of the structure, such as furniture and people), it becomes much harder to predict where the loads will be or how concentrated or distributed they will be throughout the life of the structure. Influence lines graph the response of a beam or truss as a unit load travels across it. The influence line helps designers find where to place a live load in order to calculate the maximum resulting response for each of the following functions: reaction, shear, or moment. The designer can then scale the influence line by the greatest expected load to calculate the maximum response of each function for which the beam or truss must be designed. Influence lines can also be used to find the responses of other functions (such as deflection or axial force) to the applied unit load, but these uses of influence lines are less common. There are three methods used for constructing the influence line. The first is to tabulate the influence values for multiple points along the structure, then use those points to create the influence line. [ 5 ] The second is to determine the influence-line equations that apply to the structure, thereby solving for all points along the influence line in terms of x, where x is the number of feet from the start of the structure to the point where the unit load is applied. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] The third method is called Müller-Breslau's principle. It creates a qualitative influence line. [ 1 ] [ 2 ] [ 5 ] This influence line will still provide the designer with an accurate idea of where the unit load will produce the largest response of a function at the point being studied, but it cannot be used directly to calculate the magnitude of that response, whereas the influence lines produced by the first two methods can. To tabulate the influence values with respect to some point A on the structure, a unit load must be placed at various points along the structure. Statics is used to calculate the value of the function (reaction, shear, or moment) at point A. Typically an upwards reaction is seen as positive. Shear and moments are given positive or negative values according to the same conventions used for shear and moment diagrams. R. C.
Hibbeler states, in his book Structural Analysis, "All statically determinate beams will have influence lines that consist of straight line segments." [ 5 ] Therefore, it is possible to minimize the number of computations by recognizing the points that will cause a change in the slope of the influence line and only calculating the values at those points. The slope of the influence line can change at supports, mid-spans, and joints. An influence line for a given function, such as a reaction, axial force, shear force, or bending moment, is a graph that shows the variation of that function at any given point on a structure due to the application of a unit load at any point on the structure. An influence line for a function differs from a shear, axial, or bending moment diagram. Influence lines can be generated by independently applying a unit load at several points on a structure and determining the value of the function due to this load, i.e. the shear, axial force, or moment at the desired location. The calculated values for each function are then plotted where the load was applied and connected together to generate the influence line for the function. Once the influence values have been tabulated, the influence line for the function at point A can be drawn in terms of x. First, the tabulated values must be located. For the sections in between the tabulated points, interpolation is required. Therefore, straight lines may be drawn to connect the points. Once this is done, the influence line is complete. It is possible to create equations defining the influence line across the entire span of a structure. This is done by solving for the reaction, shear, or moment at point A caused by a unit load placed at x feet along the structure instead of at a specific distance. This method is similar to the tabulated-values method, but rather than obtaining a numeric solution, the outcome is an equation in terms of x. [ 5 ] It is important to understand where the slope of the influence line changes for this method, because the influence-line equation will change for each linear section of the influence line. Therefore, the complete equation is a piecewise linear function with a separate influence-line equation for each linear section of the influence line. [ 5 ] According to www.public.iastate.edu, "The Müller-Breslau Principle can be utilized to draw qualitative influence lines, which are directly proportional to the actual influence line." [ 2 ] Instead of moving a unit load along a beam, the Müller-Breslau Principle finds the deflected shape of the beam caused by first releasing the beam at the point being studied, and then applying the function (reaction, shear, or moment) being studied to that point. The principle states that the influence line of a function will have a scaled shape that is the same as the deflected shape of the beam when the beam is acted upon by the function. To understand how the beam deflects under the function, it is necessary to remove the beam's capacity to resist the function. Below are explanations of how to find the influence lines of a simply supported, rigid beam (such as the one displayed in Figure 1). The Müller-Breslau Principle can only produce qualitative influence lines. [ 2 ] [ 5 ] This means that engineers can use it to determine where to place a load to incur the maximum of a function, but the magnitude of that maximum cannot be calculated from the influence line. Instead, the engineer must use statics to solve for the function's value in that loading case.
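As an illustration of the tabulation and influence-line-equation methods just described, the sketch below computes ordinates for the left-support reaction and for the bending moment at a chosen section of a simply supported beam. The span and section location are arbitrary illustration values, not from the text; the piecewise-linear result agrees with Hibbeler's statement above.

```python
# Tabulating influence-line ordinates for a simply supported beam of
# span L: a unit load at position x gives left-support reaction
# R_A = (L - x)/L, and the bending moment at a section located a from
# the left support is M_C = a*(L - x)/L for x >= a, else x*(L - a)/L.

L = 20.0  # span (ft), hypothetical
a = 8.0   # section where the moment influence line is evaluated (ft)

def reaction_A(x: float) -> float:
    """Influence ordinate for the left-support reaction."""
    return (L - x) / L

def moment_C(x: float) -> float:
    """Influence ordinate for the bending moment at the section a."""
    return a * (L - x) / L if x >= a else x * (L - a) / L

# Tabulate ordinates at, and between, the points where slope can change.
for x in [0.0, 4.0, 8.0, 12.0, 16.0, 20.0]:
    print(f"x = {x:5.1f} ft   R_A = {reaction_A(x):.3f}   M_C = {moment_C(x):.3f}")
```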
The simplest loading case is a single point load, but influence lines can also be used to determine responses due to multiple loads and distributed loads. Sometimes it is known that multiple loads will occur at some fixed distance apart. For example, on a bridge the wheels of cars or trucks create point loads that act at relatively standard distances. To calculate the response of a function to all these point loads using an influence line, the results found with the influence line can be scaled for each load, and then the scaled magnitudes can be summed to find the total response that the structure must withstand. [ 5 ] The point loads can have different magnitudes themselves, but even if they apply the same force to the structure, it will be necessary to scale them separately because they act at different distances along the structure. For example, if a car's wheels are 10 feet apart, then when the first set is 13 feet onto the bridge, the second set will be only 3 feet onto the bridge. If the first set of wheels is 7 feet onto the bridge, the second set has not yet reached the bridge, and therefore only the first set is placing a load on the bridge. Also, if, between two loads, one of the loads is heavier, the loads must be examined in both loading orders (the larger load on the right and the larger load on the left) to ensure that the maximum load is found. If there are three or more loads, then the number of cases to be examined increases. Many loads do not act as point loads, but instead act over an extended length or area as distributed loads. For example, a tractor with continuous tracks will apply a load distributed over the length of each track. To find the effect of a distributed load, the designer can integrate an influence line, found using a point load, over the affected distance of the structure. [ 5 ] For example, if a three-foot-long track acts between 5 feet and 8 feet along a beam, the influence line of that beam must be integrated between 5 and 8 feet. The integration of the influence line gives the effect that would be felt if the distributed load had a unit magnitude. Therefore, after integrating, the designer must still scale the results to get the actual effect of the distributed load. While the influence lines of statically determinate structures (as mentioned above) are made up of straight line segments, the same is not true for indeterminate structures. Indeterminate structures are not considered rigid; therefore, the influence lines drawn for them will not be straight lines but rather curves. The methods above can still be used to determine the influence lines for the structure, but the work becomes much more complex as the properties of the beam itself must be taken into consideration.
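Building on the same hypothetical beam, this sketch applies the two ideas just described: scaling and summing ordinates for two axle loads at a fixed spacing, and numerically integrating the influence line over the footprint of a distributed load. The loads, spacing, and positions are all invented for illustration.

```python
# Hypothetical beam as above: span L = 20 ft, moment section at a = 8 ft.

L, a = 20.0, 8.0

def moment_C(x: float) -> float:
    """Moment influence ordinate at the section; zero once off the span."""
    if x < 0.0 or x > L:
        return 0.0
    return a * (L - x) / L if x >= a else x * (L - a) / L

# Two point loads: front axle at 13 ft, rear axle trailing 10 ft behind,
# so it sits at 3 ft (mirroring the wheel-spacing example in the text).
P_front, P_rear = 8.0, 12.0  # kips, hypothetical magnitudes
total_moment = P_front * moment_C(13.0) + P_rear * moment_C(3.0)

# Unit distributed load acting from 5 ft to 8 ft: integrate the influence
# line over that length with the trapezoidal rule.
n = 100
xs = [5.0 + 3.0 * i / n for i in range(n + 1)]
ys = [moment_C(x) for x in xs]
unit_effect = sum((ys[i] + ys[i + 1]) / 2.0 * (xs[i + 1] - xs[i]) for i in range(n))

print(f"combined axle moment at the section: {total_moment:.2f} kip-ft")
print(f"unit distributed-load effect over 5-8 ft: {unit_effect:.3f}")
```

As the text notes, the integrated value must still be scaled by the actual load intensity to obtain the real effect of the distributed load.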
https://en.wikipedia.org/wiki/Influence_line
The 3' splice site of the influenza A virus segment 7 pre-mRNA can adopt two different types of RNA structure: a pseudoknot and a hairpin. This conformational switch is proposed to play a role in RNA alternative splicing and may influence the production of the M1 and M2 proteins produced by splicing of this pre-mRNA. This structured region was first discovered in a bioinformatics survey of influenza A based on thermodynamic folding free energy and amino acid codon suppression. [ 1 ] Initial models of the secondary structure were based on computational methods for nucleic acid structure prediction. The hairpin conformation was predicted using RNAalifold, [ 2 ] while the pseudoknot was predicted with DotKnot. [ 3 ] Segment 7 encodes the M1 protein and the smaller M2 proton channel protein, which is produced by RNA splicing. The M2 protein is critical to the virus: it forms the ion channels that allow for acidification of the virion, which stimulates un-coating. The 3' splice site region used to produce M2 was experimentally probed with structure-sensitive chemicals and enzymes and was found to adopt both the hairpin and pseudoknot conformations in solution. Each conformation places important splicing regulatory sites in different structural environments, which has implications for the modulation of splicing of the segment 7 transcript. For example, the splice site, polypyrimidine tract, branch point, and ASF/SF2 exonic enhancer binding sites are expected to be more accessible in the hairpin conformation and less accessible in the pseudoknot (Figure 1). By shifting the equilibrium toward the pseudoknot or the hairpin, it may be possible to reduce or enhance M2 splicing, respectively. [ 4 ] This putative mechanism may also be one that is common to spliced influenza transcripts. A similarly placed, but structurally distinct, influenza virus pseudoknot/hairpin was also described in the 3' splice site of segment 8 transcripts, which encode the NS1 and, by splicing, NEP proteins. [ 5 ] These structures form a family of structured RNAs shared between influenza A and influenza B. [ 6 ] The 3' splice site structures in influenza A segment 7 show host-species-specific trends in structural stability. The greatest number of structurally stabilizing mutations occurs in avian-specific strains, while the most structurally destabilizing mutations occur in human strains; swine strains fall in between. This general trend in stability (avian, then swine, then human) roughly follows the temperatures at which the influenza virus reproduces within each host species: the temperatures of the avian gut and the swine and human lung are 42, 37, and 34 degrees Celsius, respectively. This observation is a local instance of a global trend in influenza A coding sequences, where avian, swine, and human strains show different stability. It may be the case that RNA structure is more stable in hosts where the replication temperature is high in order to preserve functional structures or important structural equilibria. [ 7 ]
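The hairpin/pseudoknot equilibrium described above can be illustrated with a simple two-state Boltzmann weighting at each host's replication temperature. The free-energy values below are hypothetical illustrations, not measured values for the segment 7 splice site.

```python
import math

# Two-state sketch: weight the hairpin and pseudoknot conformations by
# their folding free energies (kcal/mol, hypothetical) at a temperature.
R_KCAL = 0.001987  # gas constant in kcal/(mol*K)

def hairpin_fraction(dg_hairpin: float, dg_pseudoknot: float, temp_k: float) -> float:
    """Fraction of molecules in the hairpin conformation at equilibrium."""
    w_hairpin = math.exp(-dg_hairpin / (R_KCAL * temp_k))
    w_pseudoknot = math.exp(-dg_pseudoknot / (R_KCAL * temp_k))
    return w_hairpin / (w_hairpin + w_pseudoknot)

# Host temperatures from the text: avian gut 42 C, swine lung 37 C,
# human lung 34 C (converted to kelvin).
for host, t_celsius in [("avian", 42.0), ("swine", 37.0), ("human", 34.0)]:
    frac = hairpin_fraction(-12.0, -13.5, t_celsius + 273.15)
    print(f"{host}: hairpin fraction = {frac:.3f}")
```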
https://en.wikipedia.org/wiki/Influenza_A_segment_7_splice_site
The Influenza Antiviral Drug Search was a distributed computing project that ran on the BOINC platform. It was a project of the University of Texas Medical Branch. The Influenza Antiviral Drug Search conducted millions of virtual docking experiments in order to discover compounds that may be suitable for real-world clinical trials to combat new or drug-resistant strains of influenza virus. One vulnerability of all influenza strains is that they need viral neuraminidase, the NS1 influenza protein, and hemagglutinin in order to infect a host. A chemical compound that can disable one of these molecules has the potential to be an effective antiviral drug.
https://en.wikipedia.org/wiki/Influenza_Antiviral_Drug_Search
The Influenza Genome Sequencing Project (IGSP), initiated in early 2004, seeks to investigate influenza evolution by providing a public data set of complete influenza genome sequences from collections of isolates representing diverse species distributions. The project is funded by the National Institute of Allergy and Infectious Diseases (NIAID), a division of the National Institutes of Health (NIH), and has been operating out of the NIAID Microbial Sequencing Center at The Institute for Genomic Research (TIGR, which in 2006 became The Venter Institute). Sequence information generated by the project has been continually placed into the public domain through GenBank. In late 2003, David Lipman, Lone Simonsen, Steven Salzberg, and a consortium of other scientists wrote a proposal to begin sequencing large numbers of influenza viruses at The Institute for Genomic Research (TIGR). Prior to this project, only a handful of flu genomes were publicly available. [ citation needed ] Their proposal was approved by the National Institutes of Health (NIH), and would later become the IGSP. New technology development led by Elodie Ghedin began at TIGR later that year, and the first publication describing more than 100 influenza genomes appeared in 2005 in the journal Nature. [ 1 ] The project makes all sequence data publicly available through GenBank, an international, NIH-funded, searchable online database. This research helps to provide international researchers with the information needed to develop new vaccines, therapies and diagnostics, as well as improve understanding of the overall molecular evolution of influenza and other genetic factors that determine virulence. [ citation needed ] Such knowledge could not only help mitigate the impact of annual influenza epidemics, but could also improve scientific knowledge of the emergence of pandemic influenza viruses. The project completed its first genomes in March 2005 and has rapidly accelerated since. By mid-2008, over 3,000 isolates had been completely sequenced from influenza viruses that are endemic in human ("human flu"), avian ("bird flu"), and swine ("swine flu") populations, including many strains of H3N2 (human), H1N1 (human), and H5N1 (avian). [ 1 ] The project is funded by the National Institute of Allergy and Infectious Diseases (NIAID), which is a component of the NIH, which is an agency of the United States Department of Health and Human Services. The IGSP has expanded to include a growing list of collaborators, who have contributed both expertise and valuable collections of influenza isolates. Key early contributors included Peter Palese of the Mount Sinai School of Medicine in New York, Jill Taylor of the Wadsworth Center at the New York State Department of Health, Lance Jennings of Canterbury Health Laboratories (New Zealand), Jeff Taubenberger of the Armed Forces Institute of Pathology (who later moved to NIH), Richard Slemons of Ohio State University and Rob Webster of St. Jude's Children's Hospital in Memphis, Tennessee. In 2006 the project was joined by Ilaria Capua of the Istituto Zooprofilattico Sperimentale delle Venezie (in Italy), who contributed a valuable collection of avian flu isolates (including multiple H5N1 strains). Some of these avian isolates were described in a publication in Emerging Infectious Diseases in 2007. [ 2 ] Nancy Cox from the Centers for Disease Control and Prevention (CDC) and Robert Couch from Baylor College of Medicine also joined the project in 2006, contributing over 150 influenza B isolates.
The project began prospective studies of the 2007 influenza season with collaborators Florence Bourgeois and Kenneth Mandl of Children's Hospital Boston and the Harvard School of Public Health and Laurel Edelman of Surveillance Data Inc. [ citation needed ]
https://en.wikipedia.org/wiki/Influenza_Genome_Sequencing_Project
InfoLab21 is a research centre at Lancaster University focusing primarily on information and communication technologies. [ 1 ] The centre was opened in 2005 by Patricia Hewitt in order to "transfer the knowledge, technology and innovation techniques that are strong within the university into the private sector." [ 2 ] The centre houses the University's School of Computing and Communications [ 1 ] and operates under the University's department of Science and Technology . [ 3 ]
https://en.wikipedia.org/wiki/InfoLab21
Information quality (InfoQ) is the potential of a data set to achieve a specific (scientific or practical) goal using a given empirical analysis method. Formally, the definition is InfoQ = U(X, f | g), where X is the data, f the analysis method, g the goal, and U the utility function. InfoQ is different from data quality and analysis quality, but depends on these components and on the relationship between them. InfoQ has been applied in a wide range of domains such as healthcare, customer surveys, data science programs, advanced manufacturing and Bayesian network applications. Kenett and Shmueli (2014) proposed eight dimensions to help assess InfoQ, and various methods for increasing it: data resolution, data structure, data integration, temporal relevance, chronology of data and goal, generalization, operationalization, and communication. [ 1 ] [ 2 ] [ 3 ]
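A toy reading of InfoQ = U(X, f | g) follows: fix a goal g (predicting a response on held-out rows), apply an analysis method f (here, least-squares regression) to data X, and score the outcome with a utility U (here, negative prediction error). The data set, method, and utility are all invented for illustration, not taken from Kenett and Shmueli.

```python
import numpy as np

# Toy InfoQ evaluation: higher U means the data set is more useful for
# the stated goal under the chosen analysis method.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)
X_train, y_train, X_test, y_test = X[:150], y[:150], X[150:], y[150:]

def f(X_tr, y_tr, X_te):
    """Analysis method f: fit by least squares, then predict."""
    beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_te @ beta

def U(y_true, y_pred):
    """Utility for goal g: negative root-mean-square prediction error."""
    return -float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

print(f"InfoQ proxy U(X, f | g) = {U(y_test, f(X_train, y_train, X_test)):.3f}")
```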
https://en.wikipedia.org/wiki/InfoQ
Infobip is a Croatian IT and telecommunications company, founded in 2006. [ 6 ] In July 2020, Infobip raised $200 million in a Series A funding round led by One Equity Partners, having reached a $1 billion valuation. [ 7 ] Later that year, Infobip acquired OpenMarket, a Seattle-based cloud messaging provider. [ 8 ] Infobip was established by Silvio Kutić, Roberto Kutić and Izabel Jelenić in the Croatian city of Vodnjan in 2006. [ 9 ] [ 10 ] Before founding Infobip, Silvio Kutić created Virtual Community, which allowed group communication via web, email, and SMS, with the first message sent as a Christmas text from Vodnjan's town hall to its diaspora. Realizing the technology wasn't scalable, he developed a communications API platform, leading to the creation of Infobip. [ 11 ] [ 12 ] In 2014 and 2015, the company started its operations in Latin America and Asia-Pacific. [ 13 ] [ 14 ] [ 15 ] In 2020, Infobip raised $200 million from One Equity Partners. Infobip's valuation thus surpassed $1 billion, making it the first Croatian unicorn company. [ 7 ] [ 16 ] During the same year, Infobip acquired OpenMarket, a company headquartered in Seattle, United States. [ 17 ] In 2021, Infobip acquired Shift Conference, a tech conference franchise with experience in creating and producing technology conferences in multiple industry verticals such as software development, fin-tech, and artificial intelligence. [ 18 ] In July 2022, Infobip finalized its acquisition of Peerless Network, a U.S.-based telecommunications provider founded in 2008. [ 19 ] Infobip's revenue reached €1.55 billion in 2022 and €1.735 billion in 2023. [ 20 ] The company's global communications platform handles 453 billion interactions per year, or 37 billion per month, with 301 billion being SMS interactions. [ 21 ] During Cyber Week 2024, Infobip saw a 41% increase in total interactions, growing from 8.2 billion in 2023 to 11.6 billion in 2024. [ 22 ] [ 23 ] [ 24 ] In May 2023, Infobip was named a leader in the IDC MarketScape CPaaS Vendor Assessment. [ 25 ] In July 2023, the company was recognized as a leader in the Juniper Customer Data Platform Leaderboard. [ 26 ] In September 2023, Infobip was named a leader in the Gartner Magic Quadrant for Communications Platform as a Service (CPaaS), [ 27 ] and in October as a leader in the Conversational Commerce Leaderboard. [ 28 ] In November 2023, Omdia ranked Infobip as a leader in its CPaaS Universe Report for the second consecutive year. [ 29 ] In December 2023, Infobip was included among the top CPaaS providers in the inaugural MetriRank CPaaS Report [ 30 ] and also named a leader in the CCaaS Leaderboard by Juniper Research. [ 31 ] In March 2024, Infobip was named to Fast Company's Annual List of the World's Most Innovative Companies [ 32 ] and as a leader in the CPaaS Leaderboard by Juniper Research. [ 33 ] In June 2024, Infobip was named a Leader in the Gartner Magic Quadrant for Communications Platform as a Service (CPaaS) 2024 for the second year running. [ 34 ] In October 2024, Juniper Research recognized Infobip as the number one provider in the AIT Fraud Prevention market. [ 35 ] In November 2024, Infobip was ranked one of the top 3 most innovative companies in retail in Fast Company Middle East's Most Innovative Companies 2024 list, [ 36 ] and Juniper Research ranked Infobip number one among Established Leaders in RCS Business Messaging.
[ 37 ] In December 2024, Infobip was named one of the top CPaaS providers in Metrigy's CPaaS MetriRank Report for the third time. [ 38 ] In February 2025, Infobip was named a Leader for the third time in the IDC MarketScape, [ 39 ] and was named an Ecosystem Leader in the Ecosystem Compass 2025. [ 40 ] Since 2023, Infobip has supported more than 300 startups with €1.8 million through the Infobip Startup Tribe, a global perks program supporting startups and providing them with tools and resources to help their growth. [ 41 ] In September 2017, the company opened its first high-tech campus, named Pangea, built on 17,000 square meters of land in Vodnjan, Croatia. [ 42 ] In 2022, Infobip opened its second campus in Zagreb, called Alpha Centauri, designed by the 3LHD architectural firm. [ 43 ] The 20,000 square meter Alpha Centauri campus, which was named after the star system closest to Earth, is located near Zagreb's Sveta Klara and Remetinec neighborhoods. [ 43 ]
https://en.wikipedia.org/wiki/Infobip
Infocommunications is the natural expansion of telecommunications with information processing and content handling functions, including all types of electronic communications (fixed and mobile telephony, data communications, media communications, broadcasting, etc.) on a common digital technology base, mainly through Internet technology. [ 1 ] The term infocommunications, or in short form infocom(s) or infocomm(s), first emerged at the beginning of the 1980s at scientific conferences and was then gradually adopted in the 1990s by the players of the telecommunications sector, including manufacturers, service providers, regulatory authorities and international organizations, to clearly express their participation in the convergence process of the telecommunications and information technology sectors. The convergence process is triggered by the huge-scale development of digital technology: digital technology has unified, and Internet technology has radically reshaped, telecommunications, integrating information processing and content management functions. [ 2 ] [ 3 ] [ 4 ] [ 5 ] The term "infocommunications" is also used in politics in a wider sense as a shorter form of information and communication(s) technology (ICT). The terms info-com(s) and info-communications (with a hyphen) are also used to express the integration of the information technology (IT) and (tele)communication sectors, [ 4 ] or simply to interpret the abbreviation ICT. The term Information and Communication(s) Technology (ICT) has been defined as an extended synonym for information technology (IT) to emphasize the integration with (tele)communications. Content published in mass communication media, such as printed, audio-visual and online content and related services, is not considered an ICT product, but is referred to as a Media & Content product. [ 6 ] The abbreviation TIM, for the Telecom IT Media sector, is used to express the full integration of the Telecommunications, IT and Media & Content sectors. [ 5 ] Similarly, the abbreviation TIME, for the Telecom IT/Internet Media & Entertainment/Edutainment sector, is used to express the integration of these sectors. The relationship and position of the terms can be demonstrated by a digital convergence prism [ 1 ] (Figure 1), which shows the three components (T, I, M), their pairs, and the triple combination (the convergent TIM triplet) according to the rule of additive colour mixing. Assuming that telecommunications (Telecom) is blue, informatics (IT) is green and Media & Content is red, then teleinformatics/telematics is cyan, telemedia/networked media is magenta, media informatics is yellow, and the convergent TIM is white. In this way, the integrated TIM sector corresponds to the prism as a whole, the ICT sector to the whole minus the red area (Media & Content), and infocommunications to Telecom and the three neighbouring areas (blue, cyan, magenta and white). This means that, for example, media informatics is a part of ICT but not part of infocommunications.
https://en.wikipedia.org/wiki/Infocommunications
Infologs are independently designed synthetic genes derived from one or a few genes, in which substitutions are systematically incorporated to maximize information. Infologs are designed for perfect diversity distribution to maximize search efficiency. Typical protein engineering methods rely on screening a high number (10⁶–10¹² or more) of gene variants to identify individuals with improved activity, using a surrogate high-throughput (HTP) screen to identify initial hits. Unfortunately, results are defined by what is screened for; thus the "hit" from the HTP screen often has very little real activity in a lower-throughput assay more indicative of the improved functionality for which the protein is being developed. By adapting the standard algorithms for engineering complex systems to work with biological systems, the resulting process enables researchers to deconvolute how substitutions within a protein sequence modify its function. Combining these algorithms with an integrated query and ranking mechanism allows the identification of appropriate sequence substitutions. [ 1 ] Infologs refers to the set of designed genes; the singular Infolog describes an individual variant. Homology between protein or DNA sequences is defined in terms of shared ancestry. Two segments of DNA can have shared ancestry because of either a speciation event (orthologs) or a duplication event (paralogs). Homologs are similar genes and/or proteins which are related by ancestry. Orthologs are the 'same' gene, but from different organisms. Homologous sequences are orthologous if they were separated by a speciation event: when a species diverges into two separate species, the copies of a single gene in the two resulting species are said to be orthologous. Orthologs, or orthologous genes, are genes in different species that originated by vertical descent from a single gene of the last common ancestor. The term "ortholog" was coined in 1970 by Walter Fitch. [ 2 ] Paralogs are related genes originating from one gene that through duplication ended up as two genes that over time have evolved towards two separate functions (or, according to a recent Science paper, [ 3 ] a promiscuous starting gene that duplicated, with each copy evolving towards different functions). Paralogs typically have the same or similar function, but sometimes do not: due to lack of the original selective pressure upon one copy of the duplicated gene, this copy is free to mutate and acquire new functions. Paralogs usually occur within the same species. Xenologs are homologs resulting from horizontal gene transfer between two organisms. Xenologs can have different functions if the new environment is vastly different for the horizontally moving gene. In general, though, xenologs typically have a similar function in both organisms. Infologs are similar genes and/or proteins which are related by synthetic ancestry and approach perfect diversity distribution. Using independently designed synthetic genes in which substitutions are systematically incorporated (Infologs) leads to uniform sampling, systematic variance, and unrestricted, information-rich results. Wheat glutathione S-transferases (GSTs) with the ability to detoxify a panel of common herbicides were designed using this patented bioengineering method. The relative functional contribution of 60 amino acid substitutions against 14 herbicides was quantified using only 96 Infologs, and dramatically improved by a small set (16) of second-generation Infologs.
In addition, highly predictable GST sequence-function models against two commercially relevant herbicides were created, with quantification of the relative functional contribution of 60 amino acid substitutions in two dimensions. [ 4 ] In rational protein design, the scientist uses detailed knowledge of the structure and function of the protein to make desired changes. This generally has the advantage of being technically easy and inexpensive, since site-directed mutagenesis techniques are well-developed. However, its major drawback is that detailed structural knowledge of a protein is often unavailable, and even when it is available, it can be extremely difficult to predict the effects of various mutations. Computational protein design algorithms seek to identify novel amino acid sequences that are low in energy when folded to the pre-specified target structure. While the sequence-conformation space that needs to be searched is large, the most challenging requirement for computational protein design is a fast, yet accurate, energy function that can distinguish optimal sequences from similar suboptimal ones.
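The deconvolution idea described above can be illustrated with a toy sketch: encode each variant as a binary vector recording which substitutions it carries, measure activity, and recover per-substitution contributions by least squares. This is a minimal illustration with simulated data and an additive model; it is not the patented design or ranking algorithm itself, and all names and numbers below are invented.

import numpy as np

rng = np.random.default_rng(0)

n_subs = 8        # candidate amino acid substitutions
n_variants = 24   # synthetic variants (Infologs), each carrying a subset of substitutions

# Design matrix: X[i, j] = 1 if variant i carries substitution j.
# Infolog-style designs aim for balanced, information-rich sampling;
# here we approximate that with random balanced draws.
X = (rng.random((n_variants, n_subs)) < 0.5).astype(float)

# Hypothetical "true" per-substitution effects and measured activities.
true_effects = rng.normal(0.0, 1.0, n_subs)
activity = X @ true_effects + rng.normal(0.0, 0.1, n_variants)  # assay noise

# Deconvolute the contribution of each substitution by least squares.
estimated, *_ = np.linalg.lstsq(X, activity, rcond=None)
for j, (t, e) in enumerate(zip(true_effects, estimated)):
    print(f"substitution {j}: true {t:+.2f}, estimated {e:+.2f}")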
https://en.wikipedia.org/wiki/Infologs
Infomania is the debilitating state of information overload, caused by the combination of a backlog of information to process (usually in e-mail) and continuous interruptions from technologies like phones, instant messaging, and e-mail. [ 1 ] It is also defined as an obsessive need to constantly check social media, online news, and emails to acquire knowledge. [ 2 ] This may be related to a fear of missing out ( FOMO ). To date, the term infomania is not used to refer to any recognized psychological disorder . Infomania is not generally recognized as causing significant impairment. [ citation needed ] The term was coined by Elizabeth M. Ferrarini , the author of Confessions of an Infomaniac (1984) and Infomania: The Guide to Essential Electronic Services (1985). Confessions was an early book about life online. It was excerpted in Cosmopolitan in 1982. In 2005, Dr. Glenn Wilson conducted an experimental study that described the effects of information overload on problem-solving ability. [ 3 ] The 80 volunteers carried out problem-solving tasks in a quiet space and then while being bombarded with new emails and phone calls that they could not answer. [ 3 ] Results showed a reduction in IQ by an average of 10 points during the bombardment session, but not everyone was affected to the same extent; men were distracted more than women. [ 3 ] In 2010, Dr. Glenn Wilson published a clarifying note about the study [ 4 ] in which he documented the limited size of the study and stated the results were "widely misrepresented in the media". [ 4 ] Wilson found that working amid incoming calls and emails can reduce a person's ability to focus as much as losing a night's sleep. [ 3 ] Infomania can not only cause people to function below their full potential at work or in class; technology use itself can also become addictive. [ 3 ] Checking work emails on one's phone during a weekend family lunch is one common example of this effect. There have not been any long-term studies on the effects of infomania. [ 5 ] However, Gloria Mark at UC Irvine conducted a study on the short-term effects of Fear of Missing Out , which involves compulsively checking in on the experiences of others via social media, [ 6 ] and found that it took an average of 23 minutes to return to an original task after an interruption. [ 7 ] She concluded that interruptions result in "more stress, higher frustration, time pressure and effort". [ 7 ]
https://en.wikipedia.org/wiki/Infomania
Informal logic encompasses the principles of logic and logical thought outside of a formal setting (characterized by the usage of particular statements ). However, the precise definition of "informal logic" is a matter of some dispute. [ 1 ] Ralph H. Johnson and J. Anthony Blair define informal logic as "a branch of logic whose task is to develop non-formal standards, criteria, procedures for the analysis, interpretation, evaluation, criticism and construction of argumentation." [ 2 ] This definition reflects what had been implicit in their practice and what others were doing in their informal logic texts. Informal logic is associated with informal fallacies , critical thinking , the thinking skills movement [ 3 ] and the interdisciplinary inquiry known as argumentation theory . Frans H. van Eemeren writes that the label "informal logic" covers a "collection of normative approaches to the study of reasoning in ordinary language that remain closer to the practice of argumentation than formal logic." [ 4 ] Informal logic as a distinguished enterprise under this name emerged roughly in the late 1970s as a sub-field of philosophy . The naming of the field was preceded by the appearance of a number of textbooks that rejected the symbolic approach to logic on pedagogical grounds as inappropriate and unhelpful for introductory textbooks on logic for a general audience, for example Howard Kahane 's Logic and Contemporary Rhetoric , subtitled "The Use of Reason in Everyday Life", first published in 1971. Kahane's textbook was described on the notice of his death in the Proceedings And Addresses of the American Philosophical Association (2002) as "a text in informal logic, [that] was intended to enable students to cope with the misleading rhetoric one frequently finds in the media and in political discourse. It was organized around a discussion of fallacies, and was meant to be a practical instrument for dealing with the problems of everyday life. [It has] ... gone through many editions; [it is] ... still in print; and the thousands upon thousands of students who have taken courses in which his text [was] ... used can thank Howard for contributing to their ability to dissect arguments and avoid the deceptions of deceitful rhetoric. He tried to put into practice the ideal of discourse that aims at truth rather than merely at persuasion. (Hausman et al. 2002)" [ 5 ] [ 6 ] Other textbooks from the era taking this approach were Michael Scriven 's Reasoning (Edgepress, 1976) and Logical Self-Defense by Ralph Johnson and J. Anthony Blair , first published in 1977. [ 5 ] Earlier precursors in this tradition can be considered Monroe Beardsley 's Practical Logic (1950) and Stephen Toulmin 's The Uses of Argument (1958). [ 7 ] The field perhaps became recognized under its current name with the First International Symposium on Informal Logic held in 1978. Although initially motivated by a new pedagogical approach to undergraduate logic textbooks, the scope of the field was basically defined by a list of 13 problems and issues which Blair and Johnson included as an appendix to their keynote address at this symposium. [ 5 ] [ 8 ] David Hitchcock argues that the naming of the field was unfortunate, and that philosophy of argument would have been more appropriate.
He argues that more undergraduate students in North America study informal logic than any other branch of philosophy, but that as of 2003 informal logic (or philosophy of argument) was not recognized as a separate sub-field by the World Congress of Philosophy . [ 5 ] Frans H. van Eemeren wrote that "informal logic" is mainly an approach to argumentation advanced by a group of US and Canadian philosophers and largely based on the previous works of Stephen Toulmin and, to a lesser extent, those of Chaïm Perelman . [ 4 ] Alongside the symposia, since 1983 the journal Informal Logic has been the publication of record of the field, with Blair and Johnson as the initial editors; the editorial board now includes two other colleagues from the University of Windsor — Christopher Tindale and Hans V. Hansen. [ 9 ] Other journals that regularly publish articles on informal logic include Argumentation (founded in 1986), Philosophy and Rhetoric , Argumentation and Advocacy (the journal of the American Forensic Association ), and Inquiry: Critical Thinking Across the Disciplines (founded in 1988). [ 10 ] Johnson and Blair (2000) proposed the following definition: "Informal logic designates that branch of logic whose task is to develop non-formal₂ standards, criteria, procedures for the analysis, interpretation, evaluation, critique and construction of argumentation in everyday discourse." Their meaning of non-formal₂ is taken from Barth and Krabbe (1982), which is explained below. To understand the definition above, one must understand informal , which takes its meaning in contrast to its counterpart formal . (This point was not made for a very long time, hence the nature of informal logic remained opaque, even to those involved in it, for a period of time.) Here it is helpful to have recourse [ 11 ] to Barth and Krabbe (1982:14f), where they distinguish three senses of the term form . By form₁ , Barth and Krabbe mean the sense of the term which derives from the Platonic idea of form —the ultimate metaphysical unit. Barth and Krabbe claim that most traditional logic is formal in this sense. That is, syllogistic logic is a logic of terms where the terms could naturally be understood as place-holders for Platonic (or Aristotelian ) forms. In this first sense of form , almost all logic is informal (not-formal). Understanding informal logic this way would be much too broad to be useful. By form₂ , Barth and Krabbe mean the form of sentences and statements as these are understood in modern systems of logic. Here validity is the focus: if the premises are true, the conclusion must then also be true. Validity has to do with the logical form of the statements that make up the argument. In this sense of formal , most modern and contemporary logic is formal . That is, such logics canonize the notion of logical form, and the notion of validity plays the central normative role. In this second sense of form, informal logic is not-formal, because it abandons the notion of logical form as the key to understanding the structure of arguments, and likewise retires validity as normative for the purposes of the evaluation of argument. It seems to many that validity is too stringent a requirement, that there are good arguments in which the conclusion is supported by the premises even though it does not follow necessarily from them (as validity requires).
An argument in which the conclusion is thought to be "beyond reasonable doubt, given the premises" is sufficient in law to cause a person to be sentenced to death , even though it does not meet the standard of logical validity. This type of argument, based on accumulation of evidence rather than pure deduction , is called a conductive argument. By form₃ , Barth and Krabbe mean to refer to "procedures which are somehow regulated or regimented, which take place according to some set of rules." Barth and Krabbe say that "we do not defend formality₃ of all kinds and under all circumstances." Rather, "we defend the thesis that verbal dialectics must have a certain form (i.e., must proceed according to certain rules) in order that one can speak of the discussion as being won or lost" (19). In this third sense of form , informal logic can be formal, for there is nothing in the informal logic enterprise that stands opposed to the idea that argumentative discourse should be subject to norms, i.e., subject to rules, criteria, standards or procedures. Informal logic does present standards for the evaluation of argument, procedures for detecting missing premises, etc. Johnson and Blair (2000) noticed a limitation of their own definition, particularly with respect to "everyday discourse", which could indicate that it does not seek to understand specialized, domain-specific arguments made in natural languages. Consequently, they have argued that the crucial divide is between arguments made in formal languages and those made in natural languages . Fisher and Scriven (1997) proposed a more encompassing definition, seeing informal logic as "the discipline which studies the practice of critical thinking and provides its intellectual spine". By "critical thinking" they understand "skilled and active interpretation and evaluation of observations and communications, information and argumentation." [ 12 ] Some hold the view that informal logic is not a branch or subdiscipline of logic, or even the view that there cannot be such a thing as informal logic. [ 13 ] [ 14 ] [ 15 ] Massey criticizes informal logic on the grounds that it has no theory underpinning it. Informal logic, he says, requires detailed classification schemes to organize it, which in other disciplines are provided by the underlying theory. He maintains that there is no method of establishing the invalidity of an argument aside from the formal method, and that the study of fallacies may be of more interest to other disciplines, like psychology , than to philosophy and logic. [ 13 ] Since the 1980s, informal logic has been partnered and even equated, [ 16 ] in the minds of many, with critical thinking. The precise definition of critical thinking is a subject of much dispute. [ 17 ] Critical thinking, as defined by Johnson, is the evaluation of an intellectual product (an argument, an explanation, a theory) in terms of its strengths and weaknesses. [ 17 ] While critical thinking will include evaluation of arguments and hence require skills of argumentation including informal logic, critical thinking requires additional abilities not supplied by informal logic, such as the ability to obtain and assess information and to clarify meaning. Also, many believe that critical thinking requires certain dispositions. [ 18 ] Understood in this way, critical thinking is a broad term for the attitudes and skills that are involved in analyzing and evaluating arguments. The critical thinking movement promotes critical thinking as an educational ideal.
The movement emerged with great force in the 1980s in North America as part of an ongoing critique of education for failing to teach thinking skills. The social, communicative practice of argumentation can and should be distinguished from implication (or entailment )—a relationship between propositions; and from inference—a mental activity typically thought of as the drawing of a conclusion from premises. Informal logic may thus be said to be a logic of argumentation, as distinguished from implication and inference. [ 19 ] Argumentation theory is interdisciplinary in the sense that no one discipline will be able to provide a complete account. A full appreciation of argumentation requires insights from logic (both formal and informal), rhetoric, communication theory, linguistics, psychology, and, increasingly, computer science. Since the 1970s, there has been significant agreement that there are three basic approaches to argumentation theory: the logical, the rhetorical and the dialectical. According to Wenzel, [ 20 ] the logical approach deals with the product, the dialectical with the process, and the rhetorical with the procedure. Thus, informal logic is one contributor to this inquiry, being most especially concerned with the norms of argument. The open access issue 20(2) of Informal Logic from 2000 groups a number of papers addressing foundational issues, based on the Panel on Informal Logic that was held at the 1998 World Congress of Philosophy.
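The form₂ notion of validity (no case in which all premises are true and the conclusion false) can be made concrete with a brute-force truth-table check. The sketch below encodes propositional arguments as Python predicates over two variables; the encoding is invented for illustration and is not drawn from the informal-logic literature.

from itertools import product

def valid(premises, conclusion):
    # Form-2 validity: in every valuation where all premises hold, the conclusion holds.
    return all(conclusion(p, q)
               for p, q in product([True, False], repeat=2)
               if all(prem(p, q) for prem in premises))

implies = lambda a, b: (not a) or b

# Modus ponens: from p -> q and p, infer q.  Valid.
print(valid([lambda p, q: implies(p, q), lambda p, q: p], lambda p, q: q))   # True

# Affirming the consequent: from p -> q and q, infer p.  Invalid.
print(valid([lambda p, q: implies(p, q), lambda p, q: q], lambda p, q: p))   # False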
https://en.wikipedia.org/wiki/Informal_logic
Informal mathematics , also called naïve mathematics , has historically been the predominant form of mathematics at most times and in most cultures, and is the subject of modern ethno-cultural studies of mathematics . The philosopher Imre Lakatos in his Proofs and Refutations aimed to sharpen the formulation of informal mathematics by reconstructing its role in nineteenth-century mathematical debates and concept formation, opposing the predominant assumptions of mathematical formalism . [ 1 ] Informality may not distinguish between statements given by inductive reasoning (as in approximations which are deemed "correct" merely because they are useful) and statements derived by deductive reasoning . Informal mathematics means any informal mathematical practices, as used in everyday life or by aboriginal or ancient peoples, without historical or geographical limitation. Modern mathematics, exceptional from that point of view, emphasizes formal and strict proofs of all statements from given axioms ; it can therefore usefully be called formal mathematics . Informal practices are usually understood intuitively and justified with examples—there are no axioms. This is of direct interest in anthropology and psychology : it casts light on the perceptions and agreements of other cultures. It is also of interest in developmental psychology as it reflects a naïve understanding of the relationships between numbers and things. Another term used for informal mathematics is folk mathematics , which is ambiguous; the mathematical folklore article is dedicated to the usage of that term among professional mathematicians. The field of naïve physics is concerned with similar understandings of physics. People use mathematics and physics in everyday life, without really understanding (or caring) how mathematical and physical ideas were historically derived and justified. There has long been a standard account of the development of geometry in ancient Egypt , followed by Greek mathematics and the emergence of deductive logic. The modern sense of the term mathematics , as meaning only those systems justified with reference to axioms, is however an anachronism if read back into history. Several ancient societies built impressive mathematical systems and carried out complex calculations based on proofless heuristics and practical approaches. Mathematical facts were accepted on a pragmatic basis. Empirical methods , as in science, provided the justification for a given technique. Commerce, engineering , calendar creation and the prediction of eclipses and stellar progression were practiced by ancient cultures on at least three continents.
https://en.wikipedia.org/wiki/Informal_mathematics
Informating is a term coined by Shoshana Zuboff in her book In the Age of the Smart Machine (1988). [ 1 ] It is the process that translates descriptions and measurements of activities, events and objects into information . By doing so, these activities become visible to the organization. Informating has both an empowering and an oppressive influence. On the one hand, as information processes become more powerful, access to information is pushed to ever lower levels of the organization . Conversely, information processes can be used to monitor what Zuboff calls human agency. In In the Age of the Smart Machine , informating is described as follows: "What is it, then, that distinguishes information technology from earlier generations of machine technology ? As information technology is used to reproduce, extend, and improve upon the process of substituting machines for human agency, it simultaneously accomplishes something quite different. The devices that automate by translating information into action also register data about those automated activities, thus generating new streams of information. For example, computer-based, numerically controlled machine tools or microprocessor-based sensing devices not only apply programmed instructions to equipment but also convert the current state of equipment, product, or process into data. Scanner devices in supermarkets automate the checkout process and simultaneously generate data that can be used for inventory control , warehousing , scheduling of deliveries, and market analysis . The same systems that make it possible to automate office transactions also create a vast overview of an organization's operations, with many levels of data coordinated and accessible for a variety of analytical efforts." (Zuboff, 1988; p. 9) According to Zuboff, any activity, such as two friends using Facebook to communicate, can be said to be informating. In using tools such as Facebook, the two friends are converting their activity of thinking into information, thereby making their activity visible to others. Informating as a concept is being applied in several contexts. For example, in the context of education, Alan November, in his 2010 work Empowering Students With Technology , [ 2 ] builds on Zuboff's definition, describing informating as a powerful learning tool because it shifts power and control to students by giving them access to new sources of information and relationships. November states, "Students have access to content information that was previously only available in the teacher's edition of the textbook."
https://en.wikipedia.org/wiki/Informating
The Brandweerinformatiecentrum voor gevaarlijke stoffen/Information Centre for Dangerous Goods (BIG), established in the Flemish city of Geel , collects and validates information on dangerous goods . BIG was established in 1979 following a number of serious environmental disasters in the mid-1970s. The initial partners were the town of Geel (represented by the local fire service ), Katholieke Hogeschool Kempen and the Applied Sciences department of the Katholieke Universiteit Leuven . The aim was to develop an information centre for dangerous goods that could be consulted by fire brigades. Every year BIG publishes a selection of its chemical database on DVD. This selection, called the BIG Kaleidos Database, consists of validated data concerning 20,000 chemical substances and mixtures, available in 13 languages. Subscriptions include annual updates. The database contains more than 250 properties, including physico-chemical properties, safety and prevention measures, and transport legislation . Single-user and network versions are available. At an incident scene it is important to identify the substance as soon as possible and to take the appropriate measures; information and communication play a vital role. For this reason, BIG developed a Pocket PC application, the Kaleidos Pocket. BIG is a 24-hour emergency centre for responding to accidents with dangerous goods. In these situations, the supplied information is always free. Belgian and foreign emergency services use this service at chemical accident scenes. For companies, there is a charge for the emergency service; those companies can mention the BIG emergency number on their documents and packaging. Emergency centre: +32 (0) 14 58 45 45. BIG develops Safety Data Sheets in accordance with the applicable EU legislation (Regulation (EC) No 1907/2006 (REACH), Article 31 and Annex II). The sheets are available in several languages. The SDS layouts are also available in accordance with local legislation and the new EU GHS/CLP classification. BIG can help companies with environmental and other legislation concerning dangerous goods ( European Agreement concerning the International Carriage of Dangerous Goods by Road , International Air Transport Association , International Maritime Dangerous Goods Code , Regulations concerning the International Carriage of Dangerous Goods by Rail , etc.). Recently BIG has also started an outsourcing service.
https://en.wikipedia.org/wiki/Information_Center_for_Dangerous_Goods
Information Delivery Specification (IDS) is a description format in the construction industry for specifying information requirements for a BIM building model and checking them automatically. IDS (like IFC ) is defined by buildingSMART ; version 1.0 was published in June 2024. [ 1 ] An IDS file is a machine-readable document that specifies requirements for a building model: which attributes must be supplied, and which attribute values are accepted. For example, it can require that the thermal transmittance of all windows in the building must be within a prescribed range, or that all room names must follow a regular expression such as "Office001". A typical authoring and checking workflow is described in the specification. [ 2 ]
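The kind of requirement an IDS file expresses can be sketched with a toy checker. The model structure, property names and numeric bounds below are invented for illustration; they do not follow the actual IDS XML schema or the IFC data model.

import re

# Toy building model: a list of elements with a type and some properties.
model = [
    {"type": "IfcSpace",  "Name": "Office001"},
    {"type": "IfcSpace",  "Name": "Meeting Room"},          # violates the name rule
    {"type": "IfcWindow", "ThermalTransmittance": 1.1},
    {"type": "IfcWindow", "ThermalTransmittance": 3.0},     # outside the allowed range
]

# Two IDS-style specifications: a regex facet and a numeric range facet.
NAME_PATTERN = re.compile(r"^Office\d{3}$")
U_VALUE_RANGE = (0.5, 2.0)  # W/(m^2*K), assumed bounds for the example

def check(model):
    failures = []
    for el in model:
        if el["type"] == "IfcSpace" and not NAME_PATTERN.match(el.get("Name", "")):
            failures.append((el, "room name must match Office###"))
        if el["type"] == "IfcWindow":
            u = el.get("ThermalTransmittance")
            if u is None or not (U_VALUE_RANGE[0] <= u <= U_VALUE_RANGE[1]):
                failures.append((el, "thermal transmittance out of range"))
    return failures

for element, reason in check(model):
    print(element, "->", reason)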
https://en.wikipedia.org/wiki/Information_Delivery_Specification
Information Hyperlinked over Proteins (or iHOP ) is an online text mining service that provides a gene-guided network to access PubMed abstracts. The service was established by Robert Hoffmann and Alfonso Valencia in 2004. [ 1 ] [ 2 ] The concept underlying iHOP is that by using genes and proteins as hyperlinks between sentences and abstracts, the information in PubMed can be converted into one navigable resource. Navigating across interrelated sentences within this network, rather than using conventional keyword searches, allows for stepwise and controlled acquisition of information. Moreover, this literature network can be superimposed upon experimental interaction data to facilitate the simultaneous analysis of novel and existing knowledge. As of September 2014, the network presented in iHOP contains 28.4 million sentences and 110,000 genes from over 2,700 organisms, including the model organisms Homo sapiens , Mus musculus , Drosophila melanogaster , Caenorhabditis elegans , Danio rerio , Arabidopsis thaliana , Saccharomyces cerevisiae and Escherichia coli . The iHOP system has shown that by navigating from gene to gene, distant medical and biological concepts may be connected by only a small number of genes; the shortest path between two genes has been shown to involve on average four intermediary genes. [ 1 ] The iHOP system architecture consists of two separate parts: the 'iHOP factory' and the web application. The iHOP factory manages the PubMed source data (text and gene data) and organises it within a PostgreSQL relational database. The iHOP factory also produces the relevant XML output for display by the web application. [ 2 ] iHOP is free to use and is licensed under a Creative Commons BY-ND license . [ 3 ]
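The claim that two genes are connected by an average of four intermediaries is a shortest-path property of the gene network. A minimal sketch follows, using breadth-first search over a small co-occurrence graph; the gene names and edges are invented for illustration and are not iHOP data.

from collections import deque

# Toy gene network: an edge means two genes co-occur in at least one sentence.
network = {
    "TP53":  {"MDM2", "BRCA1"},
    "MDM2":  {"TP53", "AKT1"},
    "BRCA1": {"TP53", "RAD51"},
    "AKT1":  {"MDM2", "MTOR"},
    "RAD51": {"BRCA1"},
    "MTOR":  {"AKT1"},
}

def shortest_path(graph, start, goal):
    """Breadth-first search returning one shortest gene-to-gene path."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]] - visited:
            visited.add(nxt)
            queue.append(path + [nxt])
    return None

path = shortest_path(network, "RAD51", "MTOR")
print(" -> ".join(path))              # RAD51 -> BRCA1 -> TP53 -> MDM2 -> AKT1 -> MTOR
print("intermediary genes:", len(path) - 2)   # 4, as in the average reported above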
https://en.wikipedia.org/wiki/Information_Hyperlinked_over_Proteins
The Information Management Body of Knowledge ( IMBOK ) is a management framework that organizes the concept of information management in the full context of business and organizational strategy, management and operations. It is specifically intended to provide researchers and practicing managers with a tool that makes clear the conjunction of the world of information technology and the world at large. The IMBOK comprises six 'knowledge' areas and four 'process' areas. The knowledge areas identify domains of management expertise and capability that are each distinctly different from the others. The process areas identify critical activities that move value from one knowledge area to the next. The IMBOK was a major deliverable of a research project at the University of the Western Cape in South Africa, funded by the Carnegie Corporation of New York . It has been adopted as a standard Information Management and Information Systems course text in South Africa, Europe, North America and elsewhere. A monograph describing the IMBOK was made available on the World Wide Web in 2004, but it has been withdrawn and republished in an extended form in a book: Investing in Information . [ 1 ] The community web site at IMBOK.ORG has lapsed and the content is now available at IMBOK.INFO (as of 2024, that domain is parked).
https://en.wikipedia.org/wiki/Information_Management_Body_of_Knowledge
An Information System Contingency Plan ( ISCP ) is a pre-established plan for restoring the services of a given information system after a disruption. The US National Institute of Standards and Technology Computer Security Resource Center (CSRC) has published Special Publication (SP) 800-34, which guides organizations in developing an ISCP. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Information_System_Contingency_Plan
Information Systems Journal is a bimonthly peer-reviewed scientific journal that covers all aspects of information systems , with particular emphasis on the relationship between information systems and people, business, and organisations. [ 1 ] The journal was established in 1991 as Journal of Information Systems with David Avison and Guy Fitzgerald as founding editors-in-chief . [ 2 ] It obtained its current name in 1994. The current editor-in-chief is Robert M. Davison. [ 3 ] The journal is a member of the Senior Scholars' 'Basket of Eight'. [ 4 ] The journal is abstracted and indexed in the Science Citation Index , ProQuest , CSA Computer Abstracts , Current Contents /Social & Behavioral Sciences, EBSCO databases , InfoTrac , Inspec , Psychological Abstracts / PsycINFO , Scopus , and the Social Sciences Citation Index . According to the Journal Citation Reports , the journal has a 2017 impact factor of 4.267, ranking it 6th out of 88 journals in the category "Information Science & Library Science". [ 5 ]
https://en.wikipedia.org/wiki/Information_Systems_Journal
Information Systems Security Association ( ISSA ) is a not-for-profit, international professional organization of information security professionals and practitioners. It was founded in 1984, after work on its establishment started in 1982. [ 2 ] ISSA promotes the sharing of information security management practices through educational forums, publications and networking opportunities among security professionals. ISSA members and award winners include many of the industry's notable luminaries and represent a wide range of industries – from communications, education, healthcare, manufacturing, financial and consulting to IT, as well as federal, state and local government departments and agencies. [ 3 ] The association publishes the ISSA Journal , [ 4 ] a peer-reviewed publication on the issues and trends of the industry. It also partners with ESG (Enterprise Strategy Group) to release a yearly research report, "The Life and Times of the Cyber Security Professional", which examines the experiences of cybersecurity professionals as they navigate the modern threat landscape and the effects it has on their careers. [ 5 ] The Information Systems Security Association has a board of directors that is elected annually by its members and a set of committees that are appointed. The headquarters of ISSA is located in Houston, Texas. The executive officers of the ISSA International Board of Directors are: President, Jimmy Sanders; Vice President, Deb Peinert, CISSP-ISSMP; Secretary/Director of Operations, Lee Neely; and Treasurer/Chief Financial Officer, David Vaughn. ISSA has an international membership base. The primary goal of the ISSA is to promote management practices that will ensure the confidentiality, integrity and availability of information resources. The ISSA facilitates interaction and education to create a more successful environment for global information systems security and for the professionals involved. ISSA's goals are to promote security education and skills development, encourage free information exchanges, communicate current events within the security industry and help express the importance of security controls to enterprise business management. [ 6 ] [ 7 ] As an applicant for membership, the individual is expected to be bound to a code of ethics related to the information security career. [ 8 ] [ 9 ] Applicants for ISSA membership attest that they have adhered, and will adhere, to this code. ISSA is present in more than one hundred countries, across Europe, Asia and elsewhere, with more than 10,000 members.
https://en.wikipedia.org/wiki/Information_Systems_Security_Association
The Information Systems for Crisis Response and Management (ISCRAM) Community is an international community of researchers, practitioners and policy makers involved in or concerned about the design, development, deployment, use and evaluation of information systems for crisis response and management. The ISCRAM Community was co-founded by Bartel Van de Walle ( Tilburg University , the Netherlands ), Benny Carlé ( SCK-CEN Nuclear Research Center , Belgium), and Murray Turoff ( New Jersey Institute of Technology ). ISCRAM conferences have been held annually since 2004. Since 2005, the conference has alternated between Europe and the United States / Canada. At the conference, the Mike Meleshkin award for the best PhD student paper is given to the best paper written and presented by a PhD student. Past awardees are Sebastian Henke (University of Münster, Germany), Jonas Landgren (Viktoria Institute, Sweden), Jiri Trnka ( Linkoping University , Sweden), Manuel Llavador ( Polytechnic University of Valencia , Spain), Valentin Bertsch ( Karlsruhe University , Germany), Thomas Foulquier ( Université de Sherbrooke , Canada), and the PhD students in crisis informatics at the University of Colorado at Boulder (USA). Since 2005, an annual conference has also been held in China, at Harbin Engineering University , with Dr. Song Yan as conference chair. The 2008 meeting was held jointly with the GI4D meeting on August 4–6, 2008. [ 16 ] The Summer School for PhD students took place in the Netherlands at Tilburg University in June 2006 and 2007. The International Journal of Information Systems for Crisis Response and Management (IJISCRAM) began publication in January 2009. Co-Editors-in-Chief are Murray Jennex (San Diego State University) and Bartel Van de Walle (Tilburg University, the Netherlands). The mission of IJISCRAM is to provide an outlet for innovative research in the area of information systems for crisis response and management. Research is expected to be rigorous but can utilize any accepted methodology and may be qualitative or quantitative in nature. The journal provides a comprehensive cross-disciplinary forum for advancing the understanding of the organizational, technical, human, and cognitive issues associated with the use of information systems in responding to and managing crises of all kinds. The goal of the journal is to publish high-quality empirical and theoretical research covering all aspects of information systems for crisis response and management. Full-length research manuscripts, insightful research and practice notes, and case studies will be considered for publication.
https://en.wikipedia.org/wiki/Information_Systems_for_Crisis_Response_and_Management
The term " information algebra " refers to mathematical techniques of information processing . Classical information theory goes back to Claude Shannon . It is a theory of information transmission, looking at communication and storage. However, it has not been considered so far that information comes from different sources and that it is therefore usually combined. It has furthermore been neglected in classical information theory that one wants to extract those parts out of a piece of information that are relevant to specific questions. A mathematical phrasing of these operations leads to an algebra of information , describing basic modes of information processing. Such an algebra involves several formalisms of computer science , which seem to be different on the surface: relational databases, multiple systems of formal logic or numerical problems of linear algebra. It allows the development of generic procedures of information processing and thus a unification of basic methods of computer science, in particular of distributed information processing . Information relates to precise questions, comes from different sources, must be aggregated, and can be focused on questions of interest. Starting from these considerations, information algebras ( Kohlas 2003 ) are two-sorted algebras ( Φ , D ) {\displaystyle (\Phi ,D)} : Where Φ {\displaystyle \Phi } is a semigroup , representing combination or aggregation of information, and D {\displaystyle D} is a lattice of domains (related to questions) whose partial order reflects the granularity of the domain or the question, and a mixed operation representing focusing or extraction of information. More precisely, in the two-sorted algebra ( Φ , D ) {\displaystyle (\Phi ,D)} , the following operations are defined Additionally, in D {\displaystyle D} the usual lattice operations (meet and join) are defined. The axioms of the two-sorted algebra ( Φ , D ) {\displaystyle (\Phi ,D)} , in addition to the axioms of the lattice D {\displaystyle D} : To focus an information on x {\displaystyle x} combined with another information to domain x {\displaystyle x} , one may as well first focus the second information to x {\displaystyle x} and then combine. To focus an information on x {\displaystyle x} and y {\displaystyle y} , one may focus it to x ∧ y {\displaystyle x\wedge y} . An information combined with a part of itself gives nothing new. Each information refers to at least one domain (question). A two-sorted algebra ( Φ , D ) {\displaystyle (\Phi ,D)} satisfying these axioms is called an Information Algebra . A partial order of information can be introduced by defining ϕ ≤ ψ {\displaystyle \phi \leq \psi } if ϕ ⊗ ψ = ψ {\displaystyle \phi \otimes \psi =\psi } . This means that ϕ {\displaystyle \phi } is less informative than ψ {\displaystyle \psi } if it adds no new information to ψ {\displaystyle \psi } . The semigroup Φ {\displaystyle \Phi } is a semilattice relative to this order, i.e. ϕ ⊗ ψ = ϕ ∨ ψ {\displaystyle \phi \otimes \psi =\phi \vee \psi } . Relative to any domain (question) x ∈ D {\displaystyle x\in D} a partial order can be introduced by defining ϕ ≤ x ψ {\displaystyle \phi \leq _{x}\psi } if ϕ ⇒ x ≤ ψ ⇒ x {\displaystyle \phi ^{\Rightarrow x}\leq \psi ^{\Rightarrow x}} . It represents the order of information content of ϕ {\displaystyle \phi } and ψ {\displaystyle \psi } relative to the domain (question) x {\displaystyle x} . 
The pairs ( ϕ , x ) {\displaystyle (\phi ,x)\ } , where ϕ ∈ Φ {\displaystyle \phi \in \Phi } and x ∈ D {\displaystyle x\in D} such that ϕ ⇒ x = ϕ {\displaystyle \phi ^{\Rightarrow x}=\phi } form a labeled Information Algebra . More precisely, in the two-sorted algebra ( Φ , D ) {\displaystyle (\Phi ,D)\ } , the following operations are defined Here follows an incomplete list of instances of information algebras: Let A {\displaystyle {\mathcal {A}}} be a set of symbols, called attributes (or column names ). For each α ∈ A {\displaystyle \alpha \in {\mathcal {A}}} let U α {\displaystyle U_{\alpha }} be a non-empty set, the set of all possible values of the attribute α {\displaystyle \alpha } . For example, if A = { name , age , income } {\displaystyle {\mathcal {A}}=\{{\texttt {name}},{\texttt {age}},{\texttt {income}}\}} , then U name {\displaystyle U_{\texttt {name}}} could be the set of strings, whereas U age {\displaystyle U_{\texttt {age}}} and U income {\displaystyle U_{\texttt {income}}} are both the set of non-negative integers. Let x ⊆ A {\displaystyle x\subseteq {\mathcal {A}}} . An x {\displaystyle x} -tuple is a function f {\displaystyle f} so that dom ( f ) = x {\displaystyle {\hbox{dom}}(f)=x} and f ( α ) ∈ U α {\displaystyle f(\alpha )\in U_{\alpha }} for each α ∈ x {\displaystyle \alpha \in x} The set of all x {\displaystyle x} -tuples is denoted by E x {\displaystyle E_{x}} . For an x {\displaystyle x} -tuple f {\displaystyle f} and a subset y ⊆ x {\displaystyle y\subseteq x} the restriction f [ y ] {\displaystyle f[y]} is defined to be the y {\displaystyle y} -tuple g {\displaystyle g} so that g ( α ) = f ( α ) {\displaystyle g(\alpha )=f(\alpha )} for all α ∈ y {\displaystyle \alpha \in y} . A relation R {\displaystyle R} over x {\displaystyle x} is a set of x {\displaystyle x} -tuples, i.e. a subset of E x {\displaystyle E_{x}} . The set of attributes x {\displaystyle x} is called the domain of R {\displaystyle R} and denoted by d ( R ) {\displaystyle d(R)} . For y ⊆ d ( R ) {\displaystyle y\subseteq d(R)} the projection of R {\displaystyle R} onto y {\displaystyle y} is defined as follows: The join of a relation R {\displaystyle R} over x {\displaystyle x} and a relation S {\displaystyle S} over y {\displaystyle y} is defined as follows: As an example, let R {\displaystyle R} and S {\displaystyle S} be the following relations: Then the join of R {\displaystyle R} and S {\displaystyle S} is: A relational database with natural join ⋈ {\displaystyle \bowtie } as combination and the usual projection π {\displaystyle \pi } is an information algebra. The operations are well defined since It is easy to see that relational databases satisfy the axioms of a labeled information algebra: The axioms for information algebras are derived from the axiom system proposed in (Shenoy and Shafer, 1990), see also (Shafer, 1991).
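A small runnable sketch of this relational instance follows, with tuples as Python dicts, natural join as combination, and projection as focusing; the sample relations echo the name/age/income attribute example above, but the data values are invented. The final assertion checks the idempotency axiom on this data.

def project(relation, attrs):
    """Focusing: restrict every tuple to the given attributes (duplicates merge)."""
    out = {tuple(sorted((a, t[a]) for a in attrs)) for t in relation}
    return [dict(items) for items in out]

def join(r, s):
    """Combination: natural join, keeping pairs that agree on shared attributes."""
    result = []
    for t in r:
        for u in s:
            shared = set(t) & set(u)
            if all(t[a] == u[a] for a in shared):
                merged = {**t, **u}
                if merged not in result:
                    result.append(merged)
    return result

R = [{"name": "Ann", "age": 34}, {"name": "Bob", "age": 28}]
S = [{"name": "Ann", "income": 50_000}, {"name": "Cy", "income": 40_000}]

RS = join(R, S)
print(RS)                        # [{'name': 'Ann', 'age': 34, 'income': 50000}]
print(project(RS, {"name"}))     # [{'name': 'Ann'}]

# Idempotency axiom: combining information with a part of itself gives nothing new.
assert join(RS, project(RS, {"name"})) == RS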
https://en.wikipedia.org/wiki/Information_algebra
Information and communication technology (ICT) in Kosovo has experienced remarkable development since 1999. [ 1 ] From being almost non-existent 10 years ago, Kosovar companies in the information technology (IT) domain today offer a wide range of ICT services to both local and foreign customers. [ 1 ] Kosovo has the youngest population in Europe , [ 2 ] with advanced knowledge in ICT. [ 3 ] Today, public and private education institutions in the IT field, through learning curricula certified by companies such as CISCO [ 3 ] and Microsoft , provide education to thousands of young Kosovars, while the demand for this form of training is still rising. [ 4 ] Kosovo has two authorized mobile network operators and is the only country in the region that has not awarded any UMTS license. Kosovo has neither awarded licenses for fixed wireless access nor made the 900 and 1800 MHz bands technology neutral. [ 5 ] Vala, the mobile service of Post and Telecom of Kosovo (PTK), currently has around 1,200,000 customers. As of March 2007, the second GSM license was granted to IPKO – Telekom Slovenije ; IPKO currently has over 1,000,000 users. [ 6 ] Following the Brussels Agreement, Kosovo has its own telephone dialing code: +383. Before this assignment, network operators in Kosovo used either +377 (Monaco) or +386 (Slovenia). [ 7 ] All other codes were to have been superseded by the new code on 15 January 2017, but some are still in use. [ 8 ] The infrastructure of the ICT sector in Kosovo is mainly built of microwave networks and optic and coaxial cable ( DOCSIS ). The telecom industry is liberalized, and legislation has been introduced adopting European Union regulatory principles and promoting competition. Some of the main internet providers are PTK, IPKO, Kujtesa and Artmotion. [ 7 ] The first ICT companies in Kosovo date to as early as 1984; these companies were mainly focused on radio telecommunication and audio-video systems, while in the early and mid-1990s more companies were created, mainly specializing in personal computer sales. The ICT industry in Kosovo boomed after 1999 with many new companies being created, [ 1 ] among them IPKO, now one of the major telecommunication providers and one of the biggest foreign investments in Kosovo. [ 9 ] According to the Regulatory Authority of Electronic and Postal Communications 2011 report, 86 telecommunication licenses have been issued since 2004. [ 10 ] In 2010, 74 percent of the population was subscribed to mobile phone services, a total of 1,537,164 subscribers. [ 11 ] In 2007, PTK reported growth from 300,000 to 800,000 subscribers in less than a year. [ 12 ] In 2006, the number was 562,000. [ 13 ] An ID is required to obtain a SIM card, and pre-paid cards can be bought for 5 euros. [ citation needed ] There are many shops selling used mobile phones and SIM unlocking services. [ citation needed ] Two licensed mobile network operators offer their services in competition with two MVNOs . The market, however, remains concentrated, with the incumbent's mobile subsidiary controlling over 65% of the market. Mobile broadband services are not available, as no UMTS licenses have been awarded. So far, there are no plans to carry out the re-farming of the 900/1800 MHz bands or to assign frequency spectrum for mobile broadband. [ 5 ] GSM services in Kosovo are currently provided by Vala, a subsidiary of PTK, and IPKO, a company owned by Telekom Slovenije, which acquired the second mobile operator license in Kosovo and started operations in late 2007.
Vala has over 850,000 subscribers, mostly using the pre-paid system, whereas IPKO gained over 300,000 subscribers within just a few months. [ 14 ] There are also three virtual operators. The fixed telephony penetration rate is among the lowest in Europe, at about 5 lines per 100 inhabitants in 2011, in contrast with neighboring countries and European Union countries, where penetration rates are 25% and 40% respectively. [ 11 ] [ 10 ] There are currently three licensed fixed telephony providers in Kosovo: PTK is by far the leading provider with a market share of 94.4%, while IPKO has only 5.6%. The number of subscribers dropped to 86,014 in 2011 from 88,372 in 2010, a drop of 2,358, or 2.67% of the subscriber base. [ 10 ] After the breakup of Yugoslavia , which had the dialing code 38, Kosovo used the Serbian dialing code, 381, for new and existing landlines. As mobile networks were introduced, PTK adopted the code 377 (Monaco) and IPKO adopted the code 386 (Slovenia). This situation resulted in the highly unusual simultaneous use of three international dialing codes. In September 2012 the Assembly of Kosovo approved a resolution on replacing the various dialing codes in use with the Albanian country code 355. While this initiative drew a lot of media attention, it never saw the light of day. [ 17 ] In January 2016, after ongoing political discussions between Kosovo and Serbia, it was agreed that Kosovo would get its own country code: 383. This code is available for all mobile and fixed-line operators, all of whom had been using other countries' international telecom codes. [ 18 ] The code 383 has now been formally assigned and is in the process of adoption. This code was to have replaced all former codes on 15 January 2017, but the transition has yet to be completed. [ 8 ] A total of 38 licensed companies provide internet services in Kosovo, 6 of them with direct peering towards international gateways. A number of technologies are used to provide internet to end users, the most popular being cable DOCSIS technology with 68.95% of the market, followed by xDSL with 25.43% and other technologies like FTTX and wireless with 5.62%. [ 10 ] In contrast with other countries, the majority of the market is held by private operators, with a total of 74.57%; the biggest operator is IPKO with 51.21%, followed by PTK with 25.43% and Kujtesa with 19.08%. [ 10 ] Internet penetration in Kosovo is 96%. [ 19 ] Laptops are the most frequent device, found in almost half of Kosovo households (48%), followed by desktop computers (39%). [ 19 ] On average, households in Kosovo have 20 Mbps download and 6 Mbps upload internet speeds. [ 19 ] The vast majority of the Kosovo population (81%) use the internet every day, and the internet is used by the absolute majority (96%) of the Kosovo population at least on some occasions. [ 19 ] In addition, internet usage is almost equally distributed across the majority of age groups, and the internet is mostly used by students and employed people. [ 19 ] The internet is used three and a half hours daily on average by Kosovo citizens, and mobile phones are by far the most frequently used device (73%) for accessing it. [ 19 ] Notably, 93% of Kosovo citizens use the internet for communication; [ 19 ] all other uses trail far behind, and, accordingly, communication platforms dominate the applications and webpages visited.
[ 19 ] The vast majority of Kosovo citizens (98%) possess a mobile phone, and close to half of them (43%) possess an internet subscription on their phone. [ 19 ] The digital television transition is still an ongoing process in Kosovo, limiting the analog television broadcast domain to only three national channels. [ 20 ] Due to these conditions, all four major telecommunications companies in Kosovo now broadcast digital TV on other media. While IPKO and Kujtesa have chosen to reuse the existing coaxial network, PTK, which offers xDSL services, went for IPTV , and Artmotion offers digital TV to selected users in Pristina. IPKO has recently launched an IPTV solution tailored towards mobile customers; the platform is currently available on iOS , Android and via a web browser. [ 21 ] Due to the missing 3G / LTE licenses in Kosovo and a growing demand for mobile broadband services from subscribers, both telecommunication providers, PTK and IPKO, turned to municipal wireless networks (Muni Wi-Fi). Both operators cover the majority of cities around Kosovo and touristic destinations like Prekaz [ 22 ] and Brezovica , [ 23 ] with more cities to be covered later. The Internet Exchange Point (KOSIX), the first of its kind in Kosovo, started operating on 23 June 2011 and operates as a functional unit within the Telecommunications Regulatory Authority (TRA). Its function is to provide the ISPs operating in Kosovo with an exchange point for local traffic. Valuable and continuous contributions to the implementation of this project were made by the United States Agency for International Development (USAID) through its Kosovo Private Enterprise Program, the Norwegian government through its Ministry of Foreign Affairs and embassy, Cisco Systems International BV, and the University of Prishtina, which offered adequate space within the buildings of the Electrical and Computer Engineering Faculty. [ 24 ] Currently four national ISPs are interconnected via KOSIX; there is no cost for peering. [ 25 ] The ICT sector in Kosovo consists of relatively young companies (most of them incorporated after 1999), predominantly small companies with fewer than 20 employees. 53.8 percent of ICT companies are individual businesses, while 28.6 percent are limited liability companies. Businesses specializing in maintenance and manufacturing are purely individual, while other sub-sectors are served by a mix of individual and LLC businesses. Other forms of incorporation are rare, with 4.4 percent being limited partnerships and 2.2 percent joint-stock companies . The rest (5.5 percent) are either public companies or have the unusual status of NGOs . Currently, the ICT companies are determined to grow and prosper within the Kosovo market, while very few companies seek expansion in markets outside Kosovo. [ 1 ] According to the TRA report, the ICT industry generated €239,518,037.36, of which 83.19% was generated by mobile network operators, 8.37% by fixed telephony operators, 7.77% by internet service providers and 0.68% by leased lines . [ 10 ] The structure of the ICT market in Kosovo is diverse in the variety of activities, with sales being the main activity. [ 26 ] 62 percent of the ICT companies have reported importing goods for retail, while their exports are minimal; their market growth is seen to be within Kosovo, and could reach as far as North Macedonia and Albania . The average annual turnover in the sector is 250,000 euros, with an increasing number of companies reporting turnovers in millions of euros.
[ 26 ] The ICT sector is dominated by domestic firms. According to the survey, 80.2 percent of the respondents represented companies that are 100% domestically owned. [ 26 ] Only 6.6 percent of the companies are entirely owned by foreigners. Mixed ownership is rare (3.3 percent). The other 3.3 percent that answered "Other" were either mostly owned by foreign companies or public/state-owned companies. Foreign investors are mostly present in the sub-sectors: consulting, information services, vendors, manufacturing/assembling, and retail. [ 26 ] The majority of ICT companies are small. More than three-quarters (77.1 percent) employ from one to twenty employees. Only a handful of the companies have more than 100 employees. Most of the employees working in the ICT sector are male, leaving female employees in a tiny minority. Around 19 percent of the companies do not have any female employees at all. Over 93 percent employ up to 10 women, 6 percent of the firms employ between 11 and 20, and only 1.4 percent up to 30. There is one large company which employs 230 female workers. [ 26 ] These figures are the result of a survey of 829 ICT companies conducted by USAID's Kosovo Private Enterprise Program. [ 27 ] According to its statute, STIKK, the Kosovo Association of Information and Communication Technology, is a non-profit association founded and registered in accordance with the Law on Freedom of Association and Non-Governmental Organization. STIKK represents the interests of the information and communications technology sector of Kosova and the interests of professionals in the ICT industry. The increasing number of vacancies for ICT professionals in Kosovo reflects the increasing progress of the industry, although thanks to the high quality of the university education of IT specialists and the increasing interest of young people in modern technologies, there are no signs of systematic shortages in ICT employment, except a registered under-supply of specialists in the field of software development and programming. The number of ICT graduates grows each year, and the leader in providing the needed skills to the industry is the Faculty of Electrical and Computer Engineering of the University of Pristina. ICT-skilled professionals are also supplied by the University for Business and Technology, the American University in Kosovo, as well as a few vocational education providers. [ 28 ] According to the Kosovo Accreditation Agency there are currently 13 higher education institutions, public and private, accredited to offer ICT-related study programs in their curricula. [ 29 ] AAB College is the first non-public institution of higher education in Kosovo. [ 36 ] In a short period, Kosovo has managed to adopt a few very important pieces of legislation and a strategic framework to support the government's efforts to regulate, promote and improve the development of the ICT sector in Kosovo. Among the most important legislative acts that have influenced the progress of the sector is the Law on Telecommunications, adopted by the Assembly and promulgated by UNMIK as Regulation 2003/16, [ 7 ] which recognizes the need to improve the telecommunications sector of Kosovo by establishing an independent regulatory agency responsible for licensing and supervising the providers of telecommunications services in Kosovo, encouraging private sector participation and competition in the provision of services, setting standards for all service providers in Kosovo, and establishing provisions for consumer protection. TRA officially started operating in January 2004.
In its early development, the TRA passed some important milestones that represented very important steps towards a free, competitive market which promotes the development of the information society in Kosova. [ 55 ] Due to the very young population in Kosovo, [ 2 ] the digital divide is not very notable; this phenomenon is more notable among people in their fifties and above. This problem was evident in the educational system as well; to address this, the Government of Kosovo organized an ECDL course for about 27,000 teachers across Kosovo. [ 56 ] The labor force in the ICT sector is dominated by men, with women comprising a marginal portion (although more significant in larger companies). [ 26 ] Most ICT firms are based in Pristina, the economic, political, and social center of the country, where most businesses are located and where there is the highest concentration of customers; as much as 81 percent of all ICT companies have Prishtina as their head office location. The rest are fairly evenly spread out in the regional centers: Peja, Prizren, Gjilan, Gjakova, Podujeva, and Ferizaj. [ 26 ] Over the years there have been a number of open source organizations, including the Albanian Linux user group (AlbaLinux). Due to lack of support, most of them are now "passive"; among the most active and successful open source groups is Free Libre Open Source Software Kosova (FLOSSK). FLOSSK began in March 2009 at the initiative of James Michael DuPont as a result of the desire to organize a conference on free and open software. After six difficult months and with the help of many supporters, FLOSSK organized the first conference on free and open software in Kosovo in August 2009. Apart from the conference, FLOSSK continued to work on various activities such as organizing Software Freedom Days in different cities of Kosovo, lectures on free software throughout Kosovo, translating software, collaborating with the media to promote free software and creating local free software groups in various cities. From the beginning, FLOSSK members and the general public learned about the Linux operating system, FLOSS programs for solving everyday problems, and map creation using OpenStreetMap, and met free software movement figures from around the world. [ 57 ] The Software Freedom Kosova Conference is an annual conference on free and open source software and related developments in knowledge, culture and mapping held in Pristina, Kosovo. It is the largest conference of its kind in the region. The conference is organised by Free/Libre Open Source Software Kosova (FLOSSK), the Kosovo Association of Information and Communication Technology, the Ipko Foundation and the Faculty of Electrical and Computer Engineering of the University of Prishtina. [ 58 ] A case study on ICT training in Kosovo performed by the CISCO Networking Academy (NetAcad) states that the educated and experienced workforce as a whole is seeking higher salaries and better working conditions abroad. If graduates are not experienced, they stay for a while in Kosovo, and when they have gained experience they start seeking opportunities to migrate. [ 3 ] It is exactly these findings of the NetAcad case study that make Kosovo a well-suited ICT outsourcing country, and the time difference with the USA makes it only more appealing for the U.S. market. Such is the story of 3CIS, which provides highly specialized services to major telecommunication carriers across the globe.
This includes network architecture design, planning, consulting, implementation, integration and testing, with strong expertise in mobile backhauling. 3CIS also provides on-site consulting services, and manages and coordinates activities in a multi-vendor environment throughout the life-cycle of a complete project. On top of this, 3CIS offers project management services that are tailored to suit client needs, from initial planning to project completion. [ 59 ] Kosovo has reached other markets as well, through individuals as well as established companies. SPRIGS is a prominent example of a Kosovar start-up company established in Pristina. Anoniem is the highest-profile job to date for SPRIGS, which was founded in late 2010 by a Dutch entrepreneur; this job was entrusted to a half-dozen young Kosovar Albanian programmers, who work at computers in a repurposed apartment that now houses the technical brain trust of this IT outsourcing company. [ 60 ]
https://en.wikipedia.org/wiki/Information_and_communications_technology_in_Kosovo
Information and communication technology in agriculture ( ICT in agriculture ), also known as e-agriculture , is a subset of agricultural technology focused on improved information and communication processes. More specifically, e-agriculture involves the conceptualization, design, development, evaluation and application of innovative ways to use information and communication technologies (ICTs) in the rural domain, with a primary focus on agriculture. [ 1 ] ICT includes devices, networks, mobiles, services and applications; these range from innovative Internet-era technologies and sensors to other pre-existing aids such as fixed telephones, televisions, radios and satellites. The provision of standards, norms, methodologies and tools, the development of individual and institutional capacities, and policy support are all key components of e-agriculture. Many ICT in agriculture or e-agriculture interventions have been developed and tested around the world to help agriculturists improve their livelihoods through increased agricultural productivity and income, or by reducing risks. Some useful resources for learning about e-agriculture in practice are the World Bank's e-sourcebook ICT in agriculture – connecting smallholder farmers to knowledge, networks and institutions (2011) [ 2 ] and the publications ICT uses for inclusive value chains (2013) [ 3 ] [ 4 ] and Success stories on information and communication technologies for agriculture and rural development, [ 5 ] which have documented many cases of the use of ICT in agriculture. Information technology could help improve food security, protect natural resources, and promote a good living standard for smallholder farmers in Sub-Saharan Africa. [ 6 ] Wireless technologies have numerous applications in agriculture. One major usage is the simplification of closed-circuit television camera systems; the use of wireless communications eliminates the need for the installation of coaxial cables . [ 7 ] In agriculture, the use of the Global Positioning System provides benefits in geo-fencing , map-making and surveying . GPS receivers have dropped in price over the years, making the technology more popular for civilian use. With the use of GPS, civilians can produce simple yet highly accurate digitized maps without the help of a professional cartographer . In Kenya , for example, the solution to preventing an elephant bull from wandering into farms and destroying crops was to tag the elephant with a device that sends a text message when it crosses a geo-fence. Using the technology of SMS and GPS, the elephant can roam freely and the authorities are alerted whenever it is near a farm. [ 8 ] Geographic information systems , or GIS , are extensively used in agriculture, especially in precision farming . Land is mapped digitally, and pertinent geodetic data such as topography and contours are combined with other statistical data for easier analysis of the soil. GIS is used in decision making, such as what to plant and where to plant, using historical data and sampling. Automatic milking systems are computer-controlled stand-alone systems that milk dairy cattle without human labor. The complete automation of the milking process is controlled by an agricultural robot , complex herd management software, and specialized computers. Automatic milking removes the farmer from the actual milking process, allowing more time for supervision of the farm and the herd. Farmers can also improve herd management by using the data gathered by the computer. 
By analyzing the effects of various animal feeds on milk yield, farmers may adjust accordingly to obtain optimal milk yields. Since the data is available down to the individual level, each cow may be tracked and examined, and the farmer may be alerted when there are unusual changes that could mean sickness or injury. [ 9 ] The use of mobile technologies as a tool of intervention in agriculture is becoming increasingly popular. Smartphone penetration enhances the multi-dimensional positive impact of ICT on sustainable poverty reduction, with accessibility identified as the main challenge in harnessing this full potential in the agricultural space (Silarszky et al., 2008). The reach of smartphones even in rural areas has extended ICT services beyond simple voice or text messages. Several smartphone apps are available for agriculture, horticulture, animal husbandry and farm machinery. RFID tags for animals represent one of the oldest uses of RFID. Originally meant for large ranches and rough terrain, RFID has become crucial in animal identification management since the outbreak of mad-cow disease . An implantable RFID tag or transponder can also be used for animal identification. The transponders are better known as PIT (Passive Integrated Transponder) tags, passive RFID, or " chips " on animals. [ 10 ] The Canadian Cattle Identification Agency began using RFID tags as a replacement for barcode tags. Currently CCIA tags are used in Wisconsin and by United States farmers on a voluntary basis. The USDA is currently developing its own program. RFID tags are required for all cattle sold in Australia and, in some states, for sheep and goats as well. [ 11 ] The Veterinary Department of Malaysia's Ministry of Agriculture introduced a livestock-tracking program in 2009 to track the estimated 80,000 cattle all across the country. Each animal is tagged using RFID technology for easier identification, providing access to relevant data such as the bearer's location, the name of the breeder, the origin of the livestock, its sex, and dates of movement. The program is the first of its kind in Asia and is expected to increase the competitiveness of the Malaysian livestock industry in international markets by satisfying the regulatory requirements of importers such as the United States, Europe and the Middle East. Tracking by RFID will also help producers meet the dietary standards of the halal market. The program will also provide improvements in controlling disease outbreaks in livestock. [ 12 ] [ 13 ] RFID tags have also been proposed as a means of monitoring animal health. One study involved using RFID to track drinking behavior in pigs as an indicator of overall health. [ 14 ] Online purchasing of agricultural inputs and equipment is a subset of e-commerce . Various image sensor technologies provide data, in the most common case from a visible-light digital camera . [ 15 ] Fluorescence imaging is also used in plant health monitoring, as demonstrated by Ning et al. 1995 in very early diagnosis of herbicide injury and attack by fungal plant pathogens . [ 15 ] [ 16 ] : 95 [ 17 ] The FAO - ITU E-agriculture Strategy Guide [ 18 ] provides a framework to holistically address the ICT opportunities and challenges for the agricultural sector in a more efficient manner, while generating new revenue streams, improving the livelihoods of the rural community, and ensuring that the goals of the national agriculture master plan are achieved. 
The e-agriculture strategy, and its alignment with other government plans, was intended to prevent e-agriculture projects and services from being implemented in isolation. It was developed by the Food and Agriculture Organization (FAO) [ 19 ] and the International Telecommunication Union (ITU) [ 20 ] with support from partners including the Technical Centre for Agricultural and Rural Cooperation (CTA) [ 21 ] as a framework for countries developing their national e-agriculture strategy/masterplan. Some of the countries that are using the FAO-ITU E-agriculture Strategy Guide to develop their national e-agriculture strategy are Bhutan , Sri Lanka , Papua New Guinea , the Philippines , Pakistan , Fiji , Vanuatu , Cambodia , Indonesia , Turkey , Tajikistan and Armenia . The guide provides a framework to engage a broader range of stakeholders in the development of a national e-agriculture strategy. The E-agriculture in Action series of publications, by FAO and ITU , provides guidance on emerging technologies and how they could be used to address some of the challenges in agriculture, through documented case studies. E-agriculture is one of the action lines identified in the declaration and plan of action of the World Summit on the Information Society (WSIS), and the Food and Agriculture Organization of the United Nations ( FAO ) has been assigned the responsibility of organizing activities related to the action line under C.7 ICT Applications on E-Agriculture. In 2008, the United Nations referred to e-agriculture as "an emerging field", [ 31 ] with the expectation that its scope would change and evolve as our understanding of the area grows. 
In August 2003, the Overseas Development Institute (ODI), the UK Department for International Development ( DFID ) and the United Nations Food and Agriculture Organization ( FAO ) joined in a collaborative research project to look at bringing together livelihoods thinking with concepts from information and communication for development, in order to improve understanding of the role and importance of information and communication in support of rural livelihoods; the project produced a set of policy recommendations. [ 32 ] The importance of ICT is also recognized in the 8th Millennium Development Goal, with the target to "...make available the benefits of new technologies, especially information and communications technologies (ICTs)" in the fight against poverty. E-agriculture is one of the action lines identified in the declaration and plan of action (2003) of the World Summit on the Information Society (WSIS). [ 33 ] The "Tunis Agenda for the Information Society", published on 18 November 2005, emphasizes the leading facilitating roles that UN agencies need to play in the implementation of the Geneva Plan of Action. [ 34 ] FAO hosted the first e-agriculture workshop in June 2006, bringing together representatives of leading development organizations involved in agriculture. The meeting served to initiate the development of an effective process to engage as wide a range of stakeholders as possible in e-agriculture, and resulted in the formation of the e-Agriculture Community, a community of practice . The e-Agriculture Community's founding partners [ 35 ] include: the Consultative Group on International Agricultural Research ( CGIAR ); the Technical Centre for Agricultural and Rural Cooperation ( CTA ); FAO; the Global Alliance for Information and Communication Technologies and Development ( GAID ); the Global Forum on Agricultural Research ( GFAR ); the Global Knowledge Partnership (GKP); the Gesellschaft für Technische Zusammenarbeit (now called Deutsche Gesellschaft für Internationale Zusammenarbeit , GIZ); the International Association of Agricultural Information Specialists (IAALD); the Inter-American Institute for Cooperation on Agriculture (IICA); the International Fund for Agricultural Development ( IFAD ); the International Institute for Communication and Development ( IICD ); the United States National Agricultural Library (NAL); the United Nations Department of Economic and Social Affairs ( UNDESA ); and the World Bank .
https://en.wikipedia.org/wiki/Information_and_communications_technology_in_agriculture
Information behavior is a field of information science research that seeks to understand the way people search for and use information [ 1 ] in various contexts. It can include information seeking and information retrieval , but it also aims to understand why people seek information and how they use it. The term 'information behavior' was coined by Thomas D. Wilson in 1982 [ 2 ] and sparked controversy upon its introduction. [ 3 ] The term has since been widely adopted, and Wilson's model of information behavior is widely cited in the information behavior literature. [ 4 ] In 2000, Wilson defined information behavior as "the totality of human behavior in relation to sources and channels of information". [ 5 ] A variety of theories of information behavior seek to understand the processes that surround information seeking. [ 6 ] An analysis of the most cited publications on information behavior during the early 21st century shows its theoretical nature. [ 7 ] Information behavior research can employ various research methodologies grounded in broader research paradigms from psychology, sociology and education. [ 8 ] In 2003, a framework for information-seeking studies was introduced that aims to guide the production of clear, structured descriptions of research objects and positions information-seeking as a concept within information behavior. [ 9 ] Information need is a concept introduced by Wilson; understanding the information need of an individual involves three elements. Information-seeking behavior is a more specific concept within information behavior that focuses on searching, finding, and retrieving information. Information-seeking behavior research can focus on improving information systems or, if it includes information need, can also address why the user behaves the way they do. A review study on the information search behavior of users highlighted that behavioral factors, personal factors, product/service factors and situational factors affect information search behavior. [ 10 ] Information-seeking behavior can be more or less explicit on the part of users: users might seek to solve some task or to establish some piece of knowledge which can be found in the data in question, [ 11 ] or, alternatively, the search process itself may be part of the user's objective, as in use cases of exploring visual content or familiarising oneself with the content of an information service. [ 12 ] In the general case, information-seeking needs to be understood and analysed as a session rather than as a one-off transaction with a search engine, and in a broader context which includes the user's high-level intentions in addition to the immediate information need. [ 13 ] Introduced by Elfreda Chatman in 1987, [ 14 ] information poverty is informed by the understanding that information is not equally accessible to all people. Information poverty does not describe a lack of information, but rather a worldview in which one's own experiences inside one's own small world may create a distrust in the information provided by those outside one's own lived experiences. [ 14 ] In Library and Information Science (LIS), a metatheory is described as "a set of assumptions that orient and direct theorizing about a given phenomenon". [ 15 ] Library and information science researchers have adopted a number of different metatheories in their research. 
A common concern among LIS researchers, and a prominent discussion in the field, is the broad spectrum of theories that inform the study of information behavior, information users, or information use. This variation has been noted as a cause of concern because it makes individual studies difficult to compare or synthesize if they are not guided by the same theory. This sentiment has been expressed in studies of the information behavior literature since the early 1980s, [ 16 ] and more recent literature reviews have found it necessary to restrict their scope to specific contexts or situations due to the sheer breadth of information behavior research available. [ 17 ] Below are descriptions of some, but not all, metatheories that have guided LIS research. A cognitive approach to understanding information behavior is grounded in psychology. It holds the assumption that a person's thinking influences how they seek, retrieve, and use information. Researchers who approach information behavior with the assumption that it is influenced by cognition seek to understand what someone is thinking while they engage in information behavior and how those thoughts influence their behavior. [ 18 ] Wilson's attempt to understand information-seeking behavior by defining information need takes a cognitive approach. Wilson theorizes that information behavior is influenced by the cognitive need of an individual; by understanding the cognitive information need of an individual, we may gain insight into their information behavior. [ 2 ] Nigel Ford takes a cognitive approach to information-seeking, focusing on its intellectual processes. In 2004, Ford proposed an information-seeking model using a cognitive approach that focuses on how to improve information retrieval systems, and which serves to establish information-seeking and information behavior as concepts in their own right rather than as synonymous terms. [ 19 ] The constructionist approach to information behavior has roots in the humanities and social sciences. It relies on social constructionism , which assumes that a person's information behavior is influenced by their experiences in society. [ 18 ] In order to understand information behavior, constructionist researchers must first understand the social discourse that surrounds the behavior. The most popular thinker referenced in constructionist information behavior research is Michel Foucault , who famously rejected the concept of a universal human nature. The constructionist approach to information behavior research creates space for contextualizing the behavior based on the social experiences of the individual. One study that approaches information behavior research through the social constructionist lens is a study of the information behavior of a public library knitting group. [ 20 ] The authors use a collectivist theory to frame their research, which denies the universality of information behavior and focuses on "understanding the ways that discourse communities collectively construct information needs, seeking, sources, and uses". [ 20 ] The constructivist approach is born out of education and sociology, in which "individuals are seen as actively constructing an understanding of their worlds, heavily influenced by the social world(s) in which they are operating". [ 18 ] Constructivist approaches to information behavior research generally treat the individual's reality as constructed within their own mind rather than built by the society in which they live. 
[ 21 ] The constructivist metatheory makes space for the influence of society and culture with social constructivism, "which argues that, while the mind constructs reality in its relationship to the world, this mental process is significantly informed by influences received from societal conventions, history and interaction with significant others". [ 21 ] Recent studies have shown that the practical impact of these theories and theoretical models is very limited, [ 22 ] even though LIS researchers have applied concepts and theories from many disciplines, including sociology, psychology, communication, organizational behavior, and computer science. [ 23 ] [ 24 ] The term 'information behavior' was used by Thomas D. Wilson in his 1981 paper on the grounds that the then-current term, 'information needs', was unhelpful, since 'need' could not be directly observed, whereas how people behave in seeking information can be observed and investigated. [ 2 ] However, there is increasing work in the information-searching field that relates behaviors to underlying needs. [ 25 ] In 2000, Wilson described information behavior as the totality of human behavior in relation to sources and channels of information, including both active and passive information-seeking, and information use. [ 5 ] He described information-seeking behavior as the purposive seeking of information as a consequence of a need to satisfy some goal. Information-searching behavior is the micro-level of behavior employed by the searcher in interacting with information systems of all kinds, whether between the seeker and the system or in the pure method of creating and following up on a search. Wilson proposed that information behavior covers all aspects of human information behavior, whether active or passive: information-seeking behavior is the act of actively seeking information in order to answer a specific query; information-searching behavior is the behavior which stems from the searcher interacting with the system in question; and information use behavior pertains to the searcher adopting the knowledge they sought. Elfreda Chatman developed the theory of life in the round, which she defines as a world of tolerated approximation. It acknowledges reality at its most routine, predictable enough that, unless an initial problem arises, there is no point in seeking information. [ 26 ] Chatman examined this principle within a small world: a world which imposes on its participants similar concerns and an awareness of who is important, which ideas are relevant, and whom to trust. Participants in this world are considered insiders. [ 26 ] Chatman focused her study on women in a maximum security prison. She learned that, over time, prisoners' private views were assimilated into a communal acceptance of life in the round: a small world perceived in accordance with agreed-upon standards and a communal perspective. Members who live in the round will not cross the boundaries of their world to seek information unless the information is critical, there is a collective expectation that the information is relevant, or life lived in the round no longer functions. The world outside prison has secondary importance to inmates, who are absent from a reality which is changing with time. 
[ 26 ] One model compares the internet search methods of experienced information seekers (navigators) and inexperienced information seekers (explorers). Navigators revisit domains, follow sequential searches, and have few deviations or regressions within their search patterns and interactions. Explorers visit many domains, submit many questions, and their search trails branch frequently. [ 27 ] Brenda Dervin developed the concept of sensemaking. Sensemaking considers how we (attempt to) make sense of uncertain situations. [ 28 ] Her description of sensemaking defined how we interpret information in order to use it for our own information-related decisions; she described sensemaking as a method through which people make sense of their worlds in their own language. The anomalous state of knowledge (ASK) hypothesis was developed by Nicholas J. Belkin. An anomalous state of knowledge is one in which the searcher recognizes a gap in their state of knowledge; this hypothesis is influential in studying why people start to search. [ 29 ] McKenzie's model proposes that information-seeking in the everyday life of individuals occurs on a "continuum of information practices... from actively seeking out a known source... to being given un-asked for advice." [ 30 ] This model crosses the threshold in information-seeking studies from information behavior research to information practices research. Information practices research creates space for understanding encounters with information that may not be a result of the individual's behavior. [ citation needed ] McKenzie's two-dimensional model includes four modes of information practices (active seeking, active scanning, non-directed monitoring, and by proxy) over two phases of the information process (connecting and interacting). [ 30 ] In library and information science , the information search process (ISP) is a model proposed by Carol Kuhlthau in 1991 that represents a tighter focus on information-seeking behavior. Kuhlthau's framework was based on research into high school students [ 31 ] but was extended over time to include a diverse range of people, including those in the workplace. It examined the role of emotions, specifically uncertainty, in the information-seeking process, concluding that many searches are abandoned due to an overwhelmingly high level of uncertainty. [ 32 ] [ 33 ] [ 34 ] The ISP is a six-stage process, with each stage encompassing four aspects. [ 35 ] [ 40 ] Kuhlthau's work is constructivist and explores information-seeking beyond the user's cognitive experience into their emotional experience while seeking information. She finds that the process of information-searching begins with feelings of uncertainty, navigates through feelings of anxiety, confusion, or doubt, and is ultimately completed with feelings of relief, satisfaction, or disappointment. The consideration of an information-seeker's affect has been replicated more recently in Keilty and Leazer's study, which focuses on physical affect and esthetics instead of emotional affect. [ 41 ] The usefulness of the model was re-evaluated in 2008. [ 42 ] David Ellis investigated the behavior of researchers in the physical and social sciences [ 43 ] and of engineers and research scientists [ 44 ] through semi-structured interviews using a grounded theory approach, with a focus on describing the activities associated with information seeking rather than describing a process. 
Ellis' initial investigations produced six key activities within the information-seeking process. Later studies by Ellis (focusing on academic researchers in other disciplines) resulted in the addition of two more activities. [ citation needed ] Choo, Detlor and Turnbull elaborated on Ellis' model by applying it to information-searching on the web. Choo identified the key activities described by Ellis in online searching episodes and connected them with four types of searching (undirected viewing, conditioned viewing, informal search, and formal search). [ 45 ] Developed by Stuart Card , Ed H. Chi and Peter Pirolli , the information foraging model is derived from anthropological theories and is comparable to foraging for food. Information seekers use clues (or information scents), such as links, summaries and images, to estimate how close they are to target information. A scent must be obvious, as users often browse aimlessly or look for specific information. Information foraging describes why people search in particular ways rather than how they search. [ 46 ] Foster and Urquhart provide a rich account of their model of nonlinear information behavior. This model takes into consideration varying contexts and personalities when researching information behavior. The authors are themselves cautious about this new model, since it still requires more development. [ 47 ] Reijo Savolainen published his model of everyday life information seeking (ELIS) in 1995. It is based on three basic concepts: way of life, life domain, and information seeking in everyday life. [ 48 ]
https://en.wikipedia.org/wiki/Information_behavior
In information theory , the information content , self-information , surprisal , or Shannon information is a basic quantity derived from the probability of a particular event occurring from a random variable . It can be thought of as an alternative way of expressing probability, much like odds or log-odds , but which has particular mathematical advantages in the setting of information theory. The Shannon information can be interpreted as quantifying the level of "surprise" of a particular outcome. As it is such a basic quantity, it also appears in several other settings, such as the length of a message needed to transmit the event given an optimal source coding of the random variable. The Shannon information is closely related to entropy , which is the expected value of the self-information of a random variable, quantifying how surprising the random variable is "on average". This is the average amount of self-information an observer would expect to gain about a random variable when measuring it. [ 1 ] The information content can be expressed in various units of information , of which the most common is the "bit" (more formally called the shannon ), as explained below. The term 'perplexity' has been used in language modelling to quantify the uncertainty inherent in a set of prospective events. [ citation needed ] Claude Shannon 's definition of self-information was chosen to meet several axioms: an event with probability 1 yields no information, less probable events yield more information, and the information of independent events is additive. The detailed derivation is below, but it can be shown that there is a unique function of probability that meets these three axioms, up to a multiplicative scaling factor. Broadly, given a real number $b > 1$ and an event $x$ with probability $P$, the information content is defined as $\mathrm{I}(x) := -\log_b[\Pr(x)] = -\log_b(P)$. The base $b$ corresponds to the scaling factor above. Different choices of $b$ correspond to different units of information : when $b = 2$, the unit is the shannon (symbol Sh), often called a 'bit'; when $b = e$, the unit is the natural unit of information (symbol nat); and when $b = 10$, the unit is the hartley (symbol Hart). Formally, given a discrete random variable $X$ with probability mass function $p_X(x)$, the self-information of measuring $X$ as outcome $x$ is defined as [ 2 ] $\operatorname{I}_X(x) := -\log[p_X(x)] = \log\!\left(\tfrac{1}{p_X(x)}\right)$. The use of the notation $I_X(x)$ for self-information above is not universal. Since the notation $I(X;Y)$ is also often used for the related quantity of mutual information , many authors use a lowercase $h_X(x)$ for self-entropy instead, mirroring the use of the capital $H(X)$ for the entropy. For a given probability space , the measurement of rarer events is intuitively more "surprising", and yields more information content, than more common values. Thus, self-information is a strictly decreasing monotonic function of the probability, sometimes called an "antitonic" function. 
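The defining formula lends itself to a direct numerical illustration. The following is a minimal sketch (Python is our choice here, and the helper name information_content is ours, not a standard library function), computing the self-information of a fair coin flip in the three units just named.

```python
import math

def information_content(p: float, base: float = 2.0) -> float:
    """Self-information -log_b(p) of an event with probability p."""
    if not 0.0 < p <= 1.0:
        raise ValueError("p must be in (0, 1]")
    return -math.log(p, base)

# A fair coin landing heads (p = 1/2) in the three common units:
print(information_content(0.5))           # 1.0 shannon (bit), base 2
print(information_content(0.5, math.e))   # ~0.693 nat, base e
print(information_content(0.5, 10))       # ~0.301 hartley, base 10
```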
While standard probabilities are represented by real numbers in the interval $[0, 1]$, self-informations are represented by extended real numbers in the interval $[0, \infty]$. In particular, for any choice of logarithmic base, an event with probability 1 has self-information 0, and an event with probability 0 has self-information $+\infty$. From this we get a few general properties: certain events convey no information, and the less probable an event is, the more information its occurrence conveys. The Shannon information is closely related to the log-odds . In particular, given some event $x$, suppose that $p(x)$ is the probability of $x$ occurring, and that $p(\lnot x) = 1 - p(x)$ is the probability of $x$ not occurring. Then we have the following definition of the log-odds: $\text{log-odds}(x) = \log\!\left(\tfrac{p(x)}{p(\lnot x)}\right)$. This can be expressed as a difference of two Shannon informations: $\text{log-odds}(x) = \mathrm{I}(\lnot x) - \mathrm{I}(x)$. In other words, the log-odds can be interpreted as the level of surprise when the event doesn't happen, minus the level of surprise when the event does happen. The information content of two independent events is the sum of each event's information content. This property is known as additivity in mathematics, and sigma additivity in particular in measure and probability theory. Consider two independent random variables $X, Y$ with probability mass functions $p_X(x)$ and $p_Y(y)$ respectively. The joint probability mass function is $p_{X,Y}(x, y) = \Pr(X = x, Y = y) = p_X(x)\,p_Y(y)$ because $X$ and $Y$ are independent . The information content of the outcome $(X, Y) = (x, y)$ is $\operatorname{I}_{X,Y}(x, y) = -\log_2[p_{X,Y}(x, y)] = -\log_2[p_X(x)\,p_Y(y)] = -\log_2[p_X(x)] - \log_2[p_Y(y)] = \operatorname{I}_X(x) + \operatorname{I}_Y(y)$. See § Two independent, identically distributed dice below for an example. The corresponding property for likelihoods is that the log-likelihood of independent events is the sum of the log-likelihoods of each event. Interpreting log-likelihood as "support" or negative surprisal (the degree to which an event supports a given model: a model is supported by an event to the extent that the event is unsurprising, given the model), this states that independent events add support: the information that the two events together provide for statistical inference is the sum of their independent information. 
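Both identities above (log-odds as a difference of self-informations, and additivity under independence) can be spot-checked numerically; a small sketch with arbitrarily chosen probabilities:

```python
import math

def I(p: float) -> float:
    """Self-information in shannons (base 2)."""
    return -math.log2(p)

# Log-odds as a difference of two self-informations:
p = 0.8
assert math.isclose(math.log2(p / (1 - p)), I(1 - p) - I(p))

# Additivity for independent events, e.g. a die roll and a coin flip:
p_x, p_y = 1 / 6, 1 / 2
assert math.isclose(I(p_x * p_y), I(p_x) + I(p_y))
print(I(p_x * p_y))  # ~3.585 Sh = 2.585 + 1
```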
The Shannon entropy of the random variable $X$ above is defined as $\mathrm{H}(X) = \sum_x -p_X(x)\log p_X(x) = \sum_x p_X(x)\operatorname{I}_X(x) \overset{\mathrm{def}}{=} \operatorname{E}[\operatorname{I}_X(X)]$, by definition equal to the expected information content of measurement of $X$. [ 3 ] : 11 [ 4 ] : 19–20 The expectation is taken over the discrete values of its support . Sometimes, the entropy itself is called the "self-information" of the random variable, possibly because the entropy satisfies $\mathrm{H}(X) = \operatorname{I}(X;X)$, where $\operatorname{I}(X;X)$ is the mutual information of $X$ with itself. [ 5 ] For continuous random variables the corresponding concept is differential entropy . This measure has also been called surprisal , as it represents the " surprise " of seeing the outcome (a highly improbable outcome is very surprising). This term (as a log-probability measure) was introduced by Edward W. Samson in his 1951 report "Fundamental natural concepts of information theory". [ 6 ] [ 7 ] An early appearance in the physics literature is in Myron Tribus ' 1961 book Thermostatics and Thermodynamics . [ 8 ] [ 9 ] When the event is a random realization (of a variable), the self-information of the variable is defined as the expected value of the self-information of the realization. [ citation needed ] Consider the Bernoulli trial of tossing a fair coin $X$. The probabilities of the events of the coin landing as heads $\text{H}$ and tails $\text{T}$ (see fair coin and obverse and reverse ) are one half each, $p_X(\text{H}) = p_X(\text{T}) = \tfrac{1}{2} = 0.5$. Upon measuring the variable as heads, the associated information gain is $\operatorname{I}_X(\text{H}) = -\log_2 p_X(\text{H}) = -\log_2 \tfrac{1}{2} = 1$, so the information gain of a fair coin landing as heads is 1 shannon . [ 2 ] Likewise, the information gain of measuring tails $\text{T}$ is $\operatorname{I}_X(\text{T}) = -\log_2 p_X(\text{T}) = -\log_2 \tfrac{1}{2} = 1 \text{ Sh}$. Suppose we have a fair six-sided die . The value of a die roll is a discrete uniform random variable $X \sim \mathrm{DU}[1, 6]$ with probability mass function $p_X(k) = \tfrac{1}{6}$ for $k \in \{1, 2, 3, 4, 5, 6\}$ and $0$ otherwise. The probability of rolling a 4 is $p_X(4) = \tfrac{1}{6}$, as for any other valid roll. The information content of rolling a 4 is thus $\operatorname{I}_X(4) = -\log_2 p_X(4) = -\log_2 \tfrac{1}{6} \approx 2.585 \text{ Sh}$ of information. 
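To connect the definitions, the entropy H(X) = E[I_X(X)] of the coin and die examples can be computed directly; a minimal sketch (the function name entropy is ours):

```python
import math

def entropy(pmf: list[float]) -> float:
    """Shannon entropy H(X) = E[I_X(X)], in shannons."""
    return sum(-p * math.log2(p) for p in pmf if p > 0)

print(entropy([0.5, 0.5]))   # fair coin: 1.0 Sh
print(entropy([1/6] * 6))    # fair die: ~2.585 Sh
print(entropy([0.9, 0.1]))   # biased coin: ~0.469 Sh (less surprising on average)
```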
Suppose we have two independent, identically distributed random variables $X, Y \sim \mathrm{DU}[1, 6]$, each corresponding to an independent fair six-sided die roll. The joint distribution of $X$ and $Y$ is $p_{X,Y}(x, y) = \Pr(X = x, Y = y) = p_X(x)\,p_Y(y) = \tfrac{1}{36}$ for $x, y \in [1, 6] \cap \mathbb{N}$ and $0$ otherwise. The information content of the random variate $(X, Y) = (2, 4)$ is $\operatorname{I}_{X,Y}(2, 4) = -\log_2[p_{X,Y}(2, 4)] = \log_2 36 = 2\log_2 6 \approx 5.169925 \text{ Sh}$, and can also be calculated by additivity of events: $\operatorname{I}_{X,Y}(2, 4) = -\log_2[p_X(2)] - \log_2[p_Y(4)] = 2\log_2 6 \approx 5.169925 \text{ Sh}$. If we receive information about the value of the dice without knowledge of which die had which value, we can formalize the approach with so-called counting variables $C_k := \delta_k(X) + \delta_k(Y)$ for $k \in \{1, 2, 3, 4, 5, 6\}$, where $C_k$ is 0 if neither die shows $k$, 1 if exactly one does, and 2 if both do. Then $\sum_{k=1}^{6} C_k = 2$, and the counts have the multinomial distribution $f(c_1, \ldots, c_6) = \Pr(C_1 = c_1 \text{ and } \ldots \text{ and } C_6 = c_6) = \tfrac{1}{18}\cdot\tfrac{1}{c_1!\cdots c_6!}$ when $\sum_{i=1}^{6} c_i = 2$ and $0$ otherwise; equivalently, $f = \tfrac{1}{18}$ when two of the $c_k$ are 1, $f = \tfrac{1}{36}$ when exactly one $c_k = 2$, and $f = 0$ otherwise. To verify this, the 6 outcomes $(X, Y) \in \{(k, k)\}_{k=1}^{6} = \{(1,1), (2,2), (3,3), (4,4), (5,5), (6,6)\}$ correspond to the event $C_k = 2$ and a total probability of 1/6. These are the only events that are faithfully preserved with identity of which die rolled which outcome, because the outcomes are the same. Without knowledge to distinguish the dice rolling the other numbers, the other $\binom{6}{2} = 15$ combinations correspond to one die rolling one number and the other die rolling a different number, each having probability 1/18. Indeed, $6 \cdot \tfrac{1}{36} + 15 \cdot \tfrac{1}{18} = 1$, as required. 
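The closing sanity check, that the 6 doubles (probability 1/36 each) and 15 unordered pairs (probability 1/18 each) exhaust the sample space, can be reproduced by brute-force enumeration; a short sketch:

```python
from fractions import Fraction
from itertools import product
from collections import Counter

# Collapse each of the 36 ordered rolls to its unordered signature.
signatures = Counter(frozenset([x, y]) for x, y in product(range(1, 7), repeat=2))

doubles = [s for s in signatures if len(s) == 1]   # {1}, ..., {6}
pairs = [s for s in signatures if len(s) == 2]     # the 15 unordered pairs
print(len(doubles), len(pairs))                    # 6 15

# Each double has probability 1/36, each unordered pair 2/36 = 1/18:
total = len(doubles) * Fraction(1, 36) + len(pairs) * Fraction(1, 18)
print(total == 1)                                  # True
```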
Unsurprisingly, the information content of learning that both dice were rolled as the same particular number is more than the information content of learning that one die was one number and the other was a different number. Take for example the events $A_k = \{(X, Y) = (k, k)\}$ and $B_{j,k} = \{c_j = 1\} \cap \{c_k = 1\}$ for $j \neq k$, $1 \leq j, k \leq 6$. For example, $A_2 = \{X = 2 \text{ and } Y = 2\}$ and $B_{3,4} = \{(3, 4), (4, 3)\}$. The information contents are $\operatorname{I}(A_2) = -\log_2 \tfrac{1}{36} \approx 5.169925 \text{ Sh}$ and $\operatorname{I}(B_{3,4}) = -\log_2 \tfrac{1}{18} \approx 4.169925 \text{ Sh}$. Let $\text{Same} = \bigcup_{i=1}^{6} A_i$ be the event that both dice rolled the same value and $\text{Diff} = \overline{\text{Same}}$ be the event that the dice differed. Then $\Pr(\text{Same}) = \tfrac{1}{6}$ and $\Pr(\text{Diff}) = \tfrac{5}{6}$. The information contents of the events are $\operatorname{I}(\text{Same}) = -\log_2 \tfrac{1}{6} \approx 2.5849625 \text{ Sh}$ and $\operatorname{I}(\text{Diff}) = -\log_2 \tfrac{5}{6} \approx 0.2630344 \text{ Sh}$. The probability mass or density function (collectively probability measure ) of the sum of two independent random variables is the convolution of each probability measure . In the case of independent fair six-sided dice rolls, the random variable $Z = X + Y$ has probability mass function $p_Z(z) = p_X(x) * p_Y(y) = \tfrac{6 - |z - 7|}{36}$, where $*$ represents the discrete convolution . The outcome $Z = 5$ has probability $p_Z(5) = \tfrac{4}{36} = \tfrac{1}{9}$. Therefore, the information asserted is $\operatorname{I}_Z(5) = -\log_2 \tfrac{1}{9} = \log_2 9 \approx 3.169925 \text{ Sh}$. Generalizing the § Fair dice roll example above, consider a general discrete uniform random variable (DURV) $X \sim \mathrm{DU}[a, b]$; $a, b \in \mathbb{Z}$, $b \geq a$. For convenience, define $N := b - a + 1$. The probability mass function is $p_X(k) = \tfrac{1}{N}$ for $k \in [a, b] \cap \mathbb{Z}$ and $0$ otherwise. In general, the values of the DURV need not be integers , or for the purposes of information theory even uniformly spaced; they need only be equiprobable . [ 2 ] The information gain of any observation $X = k$ is $\operatorname{I}_X(k) = -\log_2 \tfrac{1}{N} = \log_2 N \text{ Sh}$. 
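The dice quantities in this passage (I(Same), I(Diff), and the convolution giving I_Z(5)) are likewise easy to reproduce by enumerating all 36 ordered outcomes; a minimal sketch:

```python
import math
from itertools import product
from collections import Counter

outcomes = list(product(range(1, 7), repeat=2))   # 36 ordered rolls

p_same = sum(x == y for x, y in outcomes) / 36    # 6/36 = 1/6
print(-math.log2(p_same))       # ~2.585 Sh, I(Same)
print(-math.log2(1 - p_same))   # ~0.263 Sh, I(Diff)

# pmf of the sum Z = X + Y, i.e. the discrete convolution p_X * p_Y:
pmf_z = Counter(x + y for x, y in outcomes)
p_z5 = pmf_z[5] / 36            # 4/36 = 1/9
print(-math.log2(p_z5))         # ~3.170 Sh, I_Z(5)
```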
If $b = a$ above, $X$ degenerates to a constant random variable with probability distribution deterministically given by $X = b$ and probability measure the Dirac measure $p_X(k) = \delta_b(k)$. The only value $X$ can take is deterministically $b$, so the information content of any measurement of $X$ is $\operatorname{I}_X(b) = -\log_2 1 = 0$. In general, there is no information gained from measuring a known value. [ 2 ] Generalizing all of the above cases, consider a categorical discrete random variable with support $\mathcal{S} = \{s_i\}_{i=1}^{N}$ and probability mass function given by $p_X(k) = p_i$ for $k = s_i \in \mathcal{S}$ and $0$ otherwise. For the purposes of information theory, the values $s \in \mathcal{S}$ do not have to be numbers ; they can be any mutually exclusive events on a measure space of finite measure that has been normalized to a probability measure $p$. Without loss of generality , we can assume the categorical distribution is supported on the set $[N] = \{1, 2, \ldots, N\}$; the mathematical structure is isomorphic in terms of probability theory and therefore information theory as well. The information of the outcome $X = x$ is given by $\operatorname{I}_X(x) = -\log_2 p_X(x)$. From these examples, it is possible to calculate the information of any set of independent DRVs with known distributions by additivity . By definition, information is transferred from an originating entity possessing the information to a receiving entity only when the receiver had not known the information a priori . If the receiving entity had previously known the content of a message with certainty before receiving the message, the amount of information of the message received is zero. Only when the advance knowledge of the content of the message by the receiver is less than 100% certain does the message actually convey information. For example, quoting a character (the Hippy Dippy Weatherman) of comedian George Carlin : "Weather forecast for tonight: dark. Continued dark overnight, with widely scattered light by morning." [ 10 ] Assuming that one does not reside near the polar regions , the amount of information conveyed in that forecast is zero because it is known, in advance of receiving the forecast, that darkness always comes with the night. Accordingly, the amount of self-information contained in a message conveying content informing an occurrence of event $\omega_n$ depends only on the probability of that event: $\operatorname{I}(\omega_n) = f(\operatorname{P}(\omega_n))$ for some function $f(\cdot)$ to be determined below. If $\operatorname{P}(\omega_n) = 1$, then $\operatorname{I}(\omega_n) = 0$. If $\operatorname{P}(\omega_n) < 1$, then $\operatorname{I}(\omega_n) > 0$. 
Further, by definition, the measure of self-information is nonnegative and additive. If a message informing of event $C$ is the intersection of two independent events $A$ and $B$, then the information of event $C$ occurring is that of the compound message of both independent events $A$ and $B$ occurring. The quantity of information of compound message $C$ would be expected to equal the sum of the amounts of information of the individual component messages $A$ and $B$ respectively: $\operatorname{I}(C) = \operatorname{I}(A \cap B) = \operatorname{I}(A) + \operatorname{I}(B)$. Because of the independence of events $A$ and $B$, the probability of event $C$ is $\operatorname{P}(C) = \operatorname{P}(A \cap B) = \operatorname{P}(A) \cdot \operatorname{P}(B)$. However, applying function $f(\cdot)$ results in $f(\operatorname{P}(C)) = f(\operatorname{P}(A)) + f(\operatorname{P}(B)) = f\bigl(\operatorname{P}(A) \cdot \operatorname{P}(B)\bigr)$. Thanks to work on Cauchy's functional equation , the only monotone functions $f(\cdot)$ having the property $f(x \cdot y) = f(x) + f(y)$ are the logarithm functions $\log_b(x)$. The only operational difference between logarithms of different bases is that of different scaling constants, so we may assume $f(x) = K\log(x)$, where $\log$ is the natural logarithm . Since the probabilities of events are always between 0 and 1 and the information associated with these events must be nonnegative, this requires that $K < 0$. Taking into account these properties, the self-information $\operatorname{I}(\omega_n)$ associated with outcome $\omega_n$ of probability $\operatorname{P}(\omega_n)$ is defined as $\operatorname{I}(\omega_n) = -\log(\operatorname{P}(\omega_n)) = \log\!\left(\tfrac{1}{\operatorname{P}(\omega_n)}\right)$. The smaller the probability of event $\omega_n$, the larger the quantity of self-information associated with the message that the event indeed occurred. If the above logarithm is base 2, the unit of $I(\omega_n)$ is the shannon ; this is the most common practice. When using the natural logarithm of base $e$, the unit is the nat . For the base-10 logarithm, the unit of information is the hartley . As a quick illustration, the information content associated with an outcome of 4 heads (or any specific outcome) in 4 consecutive tosses of a coin would be 4 shannons (probability 1/16), and the information content associated with getting a result other than the one specified would be ~0.09 shannons (probability 15/16). See above for detailed examples.
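As a numerical rendering of the closing illustration (a minimal sketch): four specified coin tosses carry 4 Sh, while the complementary event carries only about 0.09 Sh.

```python
import math

p_four_heads = (1 / 2) ** 4         # four specified tosses: p = 1/16
print(-math.log2(p_four_heads))     # 4.0 Sh

p_anything_else = 1 - p_four_heads  # p = 15/16
print(-math.log2(p_anything_else))  # ~0.093 Sh
```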
https://en.wikipedia.org/wiki/Information_content
The term information continuum is used to describe the entire body of information , in connection with information management . The term may be used in reference to the information or the information infrastructure of a people, a species, a scientific subject or an institution.
https://en.wikipedia.org/wiki/Information_continuum
Within the field of information technology , information criteria are a core component of the COBIT (Control Objectives for Information and Related Technologies) framework, where they describe the intent of the control objectives. The specific criteria are effectiveness, efficiency, confidentiality, integrity, availability, compliance, and reliability. Effectiveness deals with information being relevant and pertinent to the business process, as well as being delivered in a timely, correct, consistent and usable manner. Efficiency concerns the provision of information through the optimal (most productive and economical) use of resources. Confidentiality concerns the protection of sensitive information from unauthorised disclosure. Integrity relates to the accuracy and completeness of information as well as to its validity in accordance with business values and expectations. Availability relates to information being available when required by the business process, now and in the future; it also concerns the safeguarding of necessary resources and associated capabilities. Compliance deals with complying with the laws, regulations and contractual arrangements to which the business process is subject, i.e., externally imposed business criteria as well as internal policies. Reliability relates to the provision of appropriate information for management to operate the entity and exercise its fiduciary and governance responsibilities. 
https://en.wikipedia.org/wiki/Information_criterion_(information_technology)
Information culture is closely linked with information technology , information systems , and the digital world. It is difficult to give one definition of information culture , and many approaches exist. The literature regarding information culture focuses on the relationship between individuals and information in their work. Curry and Moore [ 1 ] are most frequently cited in the information culture literature, and there is consensus that the values accorded to information, and attitudes towards it, are indicators of information culture (McMillan et al., 2012; Curry and Moore, 2003; Furness, 2010; Oliver, 2007; Davenport and Prusak, 1997; Widén-Wulff, 2000; Jarvenpaa and Staples, 2001). [ 2 ] Information culture is a culture that is conducive to effective information management, where "the value and utility of information in achieving operational and strategic goals is recognized, where information forms the basis of organizational decision making , and information technology is readily exploited as an enabler for effective information systems". [ 1 ] Information culture is a part of the whole organizational culture ; it is only by understanding the organization that progress can be made with information management activities. [ 3 ] Ginman [ 4 ] defines information culture as the culture in which the transformation of intellectual resources is maintained alongside the transformation of material resources. Information culture is the environment where knowledge is produced through social intelligence , social interaction and work knowledge. Multinational organizations (MNOs) are characterized by their engagement in global markets, in which they must remain competitive. [ 5 ] In many organizations, information culture is equated with information technology. As Davenport [ 6 ] writes, many executives think they can solve all information problems by buying IT equipment. Information culture, however, is about effective information management, that is, about using information rather than machines; information technology is just one part of information culture, playing an interactive role within it. [ 7 ] Information culture is the part of organizational culture in which the evaluation of, and attitudes towards, information depend on the situation in which the organization works. In an organization everyone has different attitudes, but the information profile must be made explicit, so that the importance of information is realized by executives. Information culture also comprises formal information systems (technology), common knowledge, individual information systems (attitudes), and information ethics. [ 8 ] Information culture does not consist only of written rules or conscious behavior, or of what is seemingly happening in the organization. Information culture is affected more by factors internal to the organization than by external ones, and it manifests itself in attitudes and traditions. Information culture deals with information, information channels, attitudes, and the use of and ability to gather or forward information effectively under the prevailing environmental circumstances. The knowledge base of any organization can be viewed according to Nonaka's [ 9 ] theories about organizational knowledge production and Cronin & Davenport's [ 8 ] theories about social intelligence. According to these theories, it is important to look at an organization's information culture and at how users use information. 
Cultural differences need to be understood before information technology developed for an organization in one country can be effectively implemented in an organization in another country. [ 10 ] It is well understood that no form of information security technology can be properly comprehended and appreciated by user employees without guidelines that focus on the human rather than the technical perspective, such as information ethics and national security policy. [ 11 ] A highly developed information culture leads the organization to success and works as a strategic asset that is positively associated with organizational practices and performance. Choo et al. [ 12 ] looked at information culture as the socially shared patterns of behaviors, norms and values that define the significance and use of information in an organization. Scholars such as Manuel Castells posit that information culture transcends the confines of organizations, and that government participation through policies is relevant for achieving its norms and values. [ 13 ] Norms are standards and values are beliefs; together they mold the information behavior that people in the organization expect as normal. In this sense, information behavior is the reflection of cultural norms and values. Marchand, Kettinger and Rollins [ 14 ] identify six information behaviors and values to profile an organization's information culture. Based on a widely applied construct from Cameron and Quinn [ 15 ] that has been used to differentiate organizational culture types and their relationships to organizational effectiveness, Choo [ 7 ] develops a typology of information culture, emphasizing elements from information behavior research. [ 16 ] The information culture typologies are characterized by a set of five attributes. In addition, Choo classifies information culture into four categories:

Relationship-based Culture: information management supports communication, participation, and a sense of identity. Information values and norms emphasize sharing and the proactive use of information. These values promote collaboration and cooperation. The focus is on internal information.

Risk-taking Culture: information is managed so that innovation, creativity, and the exploration of new ideas are encouraged. Information values and norms emphasize sharing and the proactive use of information. These values promote innovation, the development of new products or capabilities, and the boldness to take the initiative. The focus is on external information. Information is used to identify and evaluate opportunities, and to promote entrepreneurial risk-taking.

Result-oriented Culture: information management enables the organization to compete and succeed in its market or sector. Information values and norms call attention to control and integrity: accurate information is valued in order to assess performance and goal attainment. Information is used to understand clients and competitors, and to evaluate results.

Rule-following Culture: information management reinforces the control of internal operations, rules and policies. Information values and norms emphasize control and standardized processes. The focus is on internal information. The organization seeks information about workflows, as well as information about regulatory or accountability requirements. Information is used to control operations, improve efficiency, and provide accountability. 
Information governance is beginning to gain traction within organizations, particularly where compliance is a concern, and Davenport and Prusak's models of governance are useful tools to inform the design of information governance. Most public sector organizations in Canada have informal information governance models (or policies). [ 2 ] Davenport, Eccles and Prusak [ 17 ] have developed four models of information governance to inform a progression of control. They describe the levels of information governance using political terms: information federalism, information feudalism, information monarchy, and information anarchy. Their observations make it possible to evaluate the effectiveness of these governance models in terms of information quality, efficiency, commonality, and access. Oliver's [ 18 ] research on three case study organizations found that several factors characterizing and differentiating their information cultures were associated with the organizational information management framework, as well as with the attitudes and values accorded to information. Compliance requirements for the management of information have a significant place in shaping information culture. Research suggesting poor compliance with formal information governance policies [ 19 ] reinforces the fact that sound knowledge and records management practices are often neglected. Information culture affects the support, enthusiasm and cooperation of staff in the management of information, assert Curry and Moore. [ 1 ] If such an information culture is critical to the successful management of information assets, then it becomes vital to develop and nurture commitment from both management and staff at all levels. Curry and Moore [ 1 ] have developed an exploratory model of information culture, which includes the components needed within a strong information culture: effective communication flows, cross-organizational partnerships, co-operative working practices, open access to relevant information, management of information systems in accordance with business strategy, and clear guidelines and documentation for information and data management. Trust is a characteristic that has more recently come to the forefront in the literature. The social dynamics between supervisors and workers rely upon trust, or the lack of it, which will also have an effect on information sharing. Curry and Moore [ 1 ] define information culture as "a culture in which the value and utility of information in achieving operational and strategic success is recognised, where information forms the basis of organizational decision making and information technology is readily exploited as an enabler for effective information systems". Information culture is manifested in the organization's values, norms, and practices, which affect how information is perceived, created and used. [ 20 ] The six information behaviors and values identified by Marchand [ 14 ] to characterize the information culture of an organization are information integrity, formality, control, sharing, transparency, and proactiveness. The part of culture that deals specifically with information (the perceptions, values, and norms that people have about creating, sharing, and applying information) has a significant effect on information use outcomes. 
It is possible to systematically identify behaviours and values that characterize an organization's information culture, and this characterization can be helpful in understanding the information use effectiveness of all sorts of organizations, including private businesses, government agencies, and publicly funded institutions such as libraries and museums. A study by Choo and others suggested that organizations might do well to remember that, in the rush to implement strategies and systems, information values and information culture will always have a defining influence on how people share and use information. [ 21 ] In industrialised countries, a large share of disease and injury is related to mental health problems, which are a main cause of employee absenteeism. A number of risk factors or stressors may cause psychological strain and ill health; occupational stress interventions, however, have often occurred in isolation, independent of organizational culture. Paying more attention to organizational culture paves the way for a contextualized analysis of stress and distress in the workplace. An integrated framework is used in which the association between organizational culture and mental health is mediated by the work organization conditions that qualify the task environment, such as information management, information sharing and decision making. Organizational culture is intertwined with information culture: information culture is a part of organizational culture, since the values and behaviour of employees in the organisation affect the information culture. The framework links organizational culture to mental health via work organization conditions and is inscribed within the functionalist perspective that views culture as an organizational construct that influences and shapes organizational characteristics. [ 22 ] Organizational culture is conceptualized in terms of the four quadrants of the Quinn and Rohrbaugh [ 23 ] typology: Group Culture, Developmental Culture, Hierarchical Culture, and Rational Culture. By knowing these cultures, organisations can more easily adopt the culture relevant to their work-related conditions. Although work organization conditions and organizational culture are closely intertwined, they should not be confounded. [ 24 ] [ 25 ] Just as societal cultural values influence organizationally relevant outcomes (Taras, Kirkman, & Steel, 2010), organizational culture may influence work organization conditions. Schein [ 25 ] views organizational culture as a multilayered construct that includes artifacts, values, social ideals, and basic assumptions. Artifacts such as behaviors, structures, processes, and technology form a first layer. At a more latent level, organizational culture is noticed in the values and social ideals shared by members of the organization (i.e., the ideology of the organization). These values and ideals are revealed in symbolic mechanisms such as myths, rituals, stories, legends, and a codified language, as well as in corporate objectives, strategies, management philosophies, and in the justifications given for these. Group Culture encourages employees to make suggestions regarding how to improve their own work and overall performance. As a result, the Group Culture creates an empowering environment in which individuals perceive that they have autonomy and influence. Consequently, in the Group Culture, individuals recognize that their work has meaning and that they have the skills to carry it out. 
[ 15 ] Considering also that information sharing is an important feature of employee participation, informational support from leaders is likely to be high in the Group Culture. Group Culture tends to develop task designs that promote the use of skills and decision authority, which are protective factors, and also implements work organization conditions that promote social support, whether from colleagues or from supervisors, thereby having a beneficial influence on employee mental health. Developmental Culture helps develop decentralised work designs that promote the use of skills and decision authority, to the benefit of employee mental health. In a Developmental Culture, employees are likely to enjoy significant rewards that could have beneficial effects on their mental health. Hierarchical Culture helps promote social support and thereby plays a beneficial role in employee mental health. In this type of culture, it may well be seniority that determines both compensation and career advancement, giving employees a level of job security that could prove beneficial for their mental health. Rational Culture, with clear performance indicators and measurements, is likely to minimize conflicting demands, which could be beneficial for employee mental health. This integrated model can thus help organisations and managers to choose the culture suitable for their conditions. Integrating organizational culture into occupational stress models is a fruitful avenue towards a deeper understanding of occupational mental health problems in the workplace, and this framework can also serve as a starting point for multilevel occupational stress research.
https://en.wikipedia.org/wiki/Information_culture
An information diagram is a type of Venn diagram used in information theory to illustrate relationships among Shannon's basic measures of information : entropy , joint entropy , conditional entropy and mutual information . [ 1 ] [ 2 ] Information diagrams are a useful pedagogical tool for teaching and learning about these basic measures of information. Information diagrams have also been applied to specific problems such as for displaying the information theoretic similarity between sets of ontological terms . [ 3 ]
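The set-theoretic relations an information diagram depicts can be verified numerically. Below is a minimal sketch in Python (the joint distribution is invented for illustration) computing the diagram's regions for two variables: the two circles H(X) and H(Y), their union H(X,Y), their overlap I(X;Y), and the two crescents H(X|Y) and H(Y|X).

import math
from collections import defaultdict

# A hypothetical joint distribution of two binary variables X and Y.
p_xy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

def entropy(dist):
    """Shannon entropy in bits of a distribution given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def marginal(joint, axis):
    """Marginal distribution of one coordinate of a joint distribution."""
    m = defaultdict(float)
    for outcome, p in joint.items():
        m[outcome[axis]] += p
    return m

H_X = entropy(marginal(p_xy, 0))
H_Y = entropy(marginal(p_xy, 1))
H_XY = entropy(p_xy)              # joint entropy: area of the whole diagram
I_XY = H_X + H_Y - H_XY           # mutual information: the overlap region
H_X_given_Y = H_XY - H_Y          # conditional entropy: X's circle minus the overlap
H_Y_given_X = H_XY - H_X

# The three disjoint regions tile the union, as the diagram shows.
assert abs(H_X_given_Y + I_XY + H_Y_given_X - H_XY) < 1e-12
print(f"H(X)={H_X:.3f}  H(Y)={H_Y:.3f}  H(X,Y)={H_XY:.3f}  I(X;Y)={I_XY:.3f}")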
https://en.wikipedia.org/wiki/Information_diagram
In information theory , information dimension is an information measure for random vectors in Euclidean space , based on the normalized entropy of finely quantized versions of the random vectors. This concept was first introduced by Alfréd Rényi in 1959. [ 1 ] Simply speaking, it is a measure of the fractal dimension of a probability distribution . It characterizes the growth rate of the Shannon entropy given by successively finer discretizations of the space. In 2010, Wu and Verdú gave an operational characterization of Rényi information dimension as the fundamental limit of almost lossless data compression for analog sources under various regularity constraints of the encoder/decoder. The entropy of a discrete random variable Z is

\mathbb{H}(Z) = \sum_{z \in \operatorname{supp}(P_Z)} P_Z(z) \log_2 \frac{1}{P_Z(z)},

where P_Z(z) is the probability measure of Z when Z = z, and \operatorname{supp}(P_Z) denotes the set \{ z \mid z \in \mathcal{Z},\, P_Z(z) > 0 \}. Let X be an arbitrary real-valued random variable. Given a positive integer m, we create a new discrete random variable

\langle X \rangle_m = \frac{\lfloor mX \rfloor}{m},

where \lfloor \cdot \rfloor is the floor operator, which converts a real number to the greatest integer less than or equal to it. Then

\underline{d}(X) = \liminf_{m \to \infty} \frac{\mathbb{H}(\langle X \rangle_m)}{\log_2 m} \qquad \text{and} \qquad \bar{d}(X) = \limsup_{m \to \infty} \frac{\mathbb{H}(\langle X \rangle_m)}{\log_2 m}

are called the lower and upper information dimensions of X, respectively. When \underline{d}(X) = \bar{d}(X), we call this value the information dimension of X:

d(X) = \lim_{m \to \infty} \frac{\mathbb{H}(\langle X \rangle_m)}{\log_2 m}.

The information dimension d(X) satisfies a number of important properties. If the information dimension d exists, one can define the d-dimensional entropy of the distribution by

\mathbb{H}_d(X) = \lim_{m \to \infty} \left( \mathbb{H}(\langle X \rangle_m) - d \log_2 m \right),

provided the limit exists. If d = 0, the zero-dimensional entropy equals the standard Shannon entropy \mathbb{H}_0(X). For integer dimension d = n \geq 1, the n-dimensional entropy is the n-fold integral defining the respective differential entropy . In 1994, Kawabata and Dembo ( Kawabata & Dembo 1994 ) proposed a new way of measuring information, based on the rate-distortion value of a random variable. The measure is defined as

d_R(X) = 2 \lim_{D \to 0} \frac{R(X, D)}{\log_2(1/D)},

where R(X, D) is the rate-distortion function, defined as

R(X, D) = \inf_{P_{\hat{X} \mid X} :\, \mathbb{E}[(X - \hat{X})^2] \leq D} I(X; \hat{X}),

or equivalently, the minimum information that could lead to a D-close approximation of X. They further proved that this definition is equivalent to the definition of information dimension; formally,

d_R(X) = d(X).

Using the above definition of Rényi information dimension, a similar measure to the d-dimensional entropy is defined in Charusaie, Amini & Rini 2022 . This value b(X), named the dimensional-rate bias, is defined so as to capture the finite term of the rate-distortion function; formally, it is the constant term in the low-distortion expansion

R(X, D) = \frac{d(X)}{2} \log_2 \frac{1}{D} + b(X) + o(1), \qquad D \to 0.

The dimensional-rate bias is equal to the d-dimensional rate for continuous , discrete , and discrete-continuous mixed distributions. Furthermore, it is calculable for a set of singular random variables , while the d-dimensional entropy does not necessarily exist there. 
Finally, the dimensional-rate bias generalizes Shannon's entropy and differential entropy , as one can find the mutual information I(X; Y) using the formula

I(X; Y) = b(X) + b(Y) - b(X, Y).

According to the Lebesgue decomposition theorem , [ 2 ] a probability distribution can be uniquely represented by the mixture

v = p P_{Xd} + q P_{Xc} + r P_{Xs},

where p + q + r = 1 and p, q, r \geq 0; P_{Xd} is a purely atomic probability measure (discrete part), P_{Xc} is the absolutely continuous probability measure, and P_{Xs} is a probability measure singular with respect to Lebesgue measure but with no atoms (singular part). Let X be a random variable such that \mathbb{H}(\lfloor X \rfloor) < \infty. Assume the distribution of X can be represented as

v = (1 - \rho) P_{Xd} + \rho P_{Xc},

where P_{Xd} is a discrete measure and P_{Xc} is the absolutely continuous probability measure, with 0 \leq \rho \leq 1. Then

d(X) = \rho.

Moreover, given \mathbb{H}_0(P_{Xd}) and the differential entropy h(P_{Xc}), the d-dimensional entropy is simply given by

\mathbb{H}_\rho(X) = (1 - \rho) \mathbb{H}_0(P_{Xd}) + \rho h(P_{Xc}) + \mathbb{H}_0(\rho),

where \mathbb{H}_0(\rho) is the Shannon entropy of a discrete random variable Z with P_Z(1) = \rho and P_Z(0) = 1 - \rho, given by

\mathbb{H}_0(\rho) = \rho \log_2 \frac{1}{\rho} + (1 - \rho) \log_2 \frac{1}{1 - \rho}.

Consider a signal with a Gaussian probability distribution . We pass the signal through a half-wave rectifier , which converts all negative values to 0 and maintains all other values. The half-wave rectifier can be characterized by the function

f(x) = \begin{cases} x, & \text{if } x \geq 0 \\ 0, & x < 0 \end{cases}

Then, at the output of the rectifier, the signal has a rectified Gaussian distribution . It is characterized by an atomic mass of weight 0.5 at zero and a Gaussian PDF for all x > 0. With this mixture distribution, we apply the formula above to get the information dimension d of the distribution and to calculate the d-dimensional entropy: 
d(X) = \rho = 0.5. The normalized right part of the zero-mean Gaussian distribution has entropy h(P_{Xc}) = \frac{1}{2} \log_2(2\pi e \sigma^2) - 1; hence

\mathbb{H}_{0.5}(X) = (1 - 0.5)(1 \log_2 1) + 0.5\, h(P_{Xc}) + \mathbb{H}_0(0.5) = 0 + \frac{1}{2} \left( \frac{1}{2} \log_2(2\pi e \sigma^2) - 1 \right) + 1 = \frac{1}{4} \log_2(2\pi e \sigma^2) + \frac{1}{2} \text{ bit(s)}.

It is shown [ 3 ] that information dimension and differential entropy are tightly connected. Let X be a random variable with continuous density f(x). Suppose we divide the range of X into bins of length \Delta. By the mean value theorem , there exists a value x_i within each bin such that

f(x_i) \Delta = \int_{i\Delta}^{(i+1)\Delta} f(x)\, dx.

Consider the discretized random variable X^\Delta = x_i if i\Delta \leq X < (i+1)\Delta. The probability of each support point X^\Delta = x_i is

P_{X^\Delta}(x_i) = \int_{i\Delta}^{(i+1)\Delta} f(x)\, dx = f(x_i) \Delta.

Let S = \operatorname{supp}(P_{X^\Delta}). The entropy of X^\Delta is

\mathbb{H}(X^\Delta) = -\sum_{x_i \in S} f(x_i) \Delta \log_2 \left( f(x_i) \Delta \right) = -\sum_{x_i \in S} f(x_i) \Delta \log_2 f(x_i) - \log_2 \Delta.

If we set \Delta = 1/m and x_i = i/m, then we are performing exactly the same quantization as in the definition of information dimension. Since relabeling the events of a discrete random variable does not change its entropy, we have

\mathbb{H}(X^{1/m}) = \mathbb{H}(\langle X \rangle_m).

This yields

\mathbb{H}(\langle X \rangle_m) = -\sum_{x_i \in S} \frac{f(x_i)}{m} \log_2 f(x_i) + \log_2 m,

and when m is sufficiently large,

-\sum_{x_i \in S} \frac{f(x_i)}{m} \log_2 f(x_i) \approx -\int f(x) \log_2 f(x)\, dx,

which is the differential entropy h(X) of the continuous random variable. In particular, if f(x) is Riemann integrable, then

h(X) = \lim_{m \to \infty} \left( \mathbb{H}(\langle X \rangle_m) - \log_2 m \right).

Comparing this with the d-dimensional entropy shows that the differential entropy is exactly the one-dimensional entropy:

h(X) = \mathbb{H}_1(X).

In fact, this can be generalized to higher dimensions. Rényi shows that, if \vec{X} is a random vector in an n-dimensional Euclidean space \Re^n with an absolutely continuous distribution with probability density function f_{\vec{X}}(\vec{x}) and finite entropy of the integer part (\mathbb{H}_0(\langle \vec{X} \rangle_m) < \infty), then

d(\vec{X}) = n

and, if the integral exists,

\mathbb{H}_n(\vec{X}) = -\int f_{\vec{X}}(\vec{x}) \log_2 f_{\vec{X}}(\vec{x})\, d\vec{x}.

The information dimension of a distribution gives a theoretical upper bound on the compression rate when one wants to compress a variable coming from this distribution. In the context of lossless data compression, we try to represent real numbers with fewer real numbers, both having infinite precision. The main objective of lossless data compression is to find efficient representations of source realizations x^n \in \mathcal{X}^n by y^n \in \mathcal{Y}^n. An (n, k)-code for \{ X_i : i \in \mathbb{N} \} is a pair of mappings: an encoder f_n : \mathbb{R}^n \to \mathbb{R}^k, which converts a source realization into its compressed representation, and a decoder g_n : \mathbb{R}^k \to \mathbb{R}^n, which recovers an estimate of the source realization from that representation. The block error probability is \mathcal{P}\{ g_n(f_n(X^n)) \neq X^n \}. 
Define r(\epsilon) to be the infimum of r \geq 0 such that there exists a sequence of (n, \lfloor rn \rfloor)-codes with \mathcal{P}\{ g_n(f_n(X^n)) \neq X^n \} \leq \epsilon for all sufficiently large n. Thus r(\epsilon) gives the ratio between the code length and the source length; it measures how good a specific encoder–decoder pair is. The fundamental limits in lossless source coding are as follows. [ 4 ] Consider a continuous encoder function f(x) : \mathbb{R}^n \to \mathbb{R}^{\lfloor Rn \rfloor} with a continuous decoder function g(x) : \mathbb{R}^{\lfloor Rn \rfloor} \to \mathbb{R}^n. If we impose no regularity on f(x) and g(x), then, due to the rich structure of \Re, the minimum \epsilon-achievable rate is R_0(\epsilon) = 0 for all 0 < \epsilon \leq 1; that is, one can build an encoder–decoder pair with an unbounded compression ratio. In order to obtain nontrivial and meaningful conclusions, let R^*(\epsilon) be the minimum \epsilon-achievable rate for a linear encoder and Borel decoder. If the random variable X has a distribution that is a mixture of a discrete and a continuous part, then R^*(\epsilon) = d(X) for all 0 < \epsilon \leq 1. If instead we restrict the decoder to be a Lipschitz continuous function, and \bar{d}(X) < \infty holds, then the minimum \epsilon-achievable rate satisfies R(\epsilon) \geq \bar{d}(X) for all 0 < \epsilon \leq 1. The fundamental role of information dimension in lossless data compression extends beyond i.i.d. data. It has been shown that for specified processes (e.g., moving-average processes) the ratio of lossless compression is also equal to the information dimension rate. [ 5 ] This result allows for further compression that would not be possible by considering only the marginal distribution of the process.
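The definition above suggests a direct numerical estimate of d(X): quantize samples of X at scale 1/m and divide the empirical entropy of ⟨X⟩_m by log₂ m. The following minimal sketch (sample size and quantization scales are chosen arbitrarily) applies this to the half-wave rectified Gaussian from the example, where the ratio should approach ρ = 0.5; note that the finite-sample entropy estimate becomes biased once m grows large relative to the number of samples, so convergence is slow.

import numpy as np

rng = np.random.default_rng(0)

def quantized_entropy_bits(samples, m):
    """Empirical Shannon entropy (bits) of <X>_m = floor(m*X)/m."""
    _, counts = np.unique(np.floor(m * samples), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Half-wave rectified standard Gaussian: an atom of mass 0.5 at zero
# plus an absolutely continuous part, so d(X) = 0.5.
x = np.maximum(rng.standard_normal(1_000_000), 0.0)

for m in (2**4, 2**8, 2**12):
    ratio = quantized_entropy_bits(x, m) / np.log2(m)
    print(f"m = {m:5d}   H(<X>_m)/log2(m) = {ratio:.3f}")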
https://en.wikipedia.org/wiki/Information_dimension
Information engineering is the engineering discipline that deals with the generation, distribution, analysis, and use of information, data, and knowledge in electrical systems. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] The field first became identifiable in the early 21st century. The components of information engineering include more theoretical fields such as electromagnetism , machine learning , artificial intelligence , control theory , signal processing , and microelectronics , and more applied fields such as computer vision , natural language processing , bioinformatics , medical image computing , cheminformatics , autonomous robotics , mobile robotics , and telecommunications . [ 1 ] [ 2 ] [ 5 ] [ 6 ] [ 7 ] Many of these originate from computer engineering , as well as from other branches of engineering such as electrical engineering , computer science and bioengineering . The field of information engineering is based heavily on engineering and mathematics, particularly probability , statistics , calculus , linear algebra , optimization , differential equations , variational calculus , and complex analysis . Information engineers often [ citation needed ] hold a degree in information engineering or a related area, and are often part of a professional body such as the Institution of Engineering and Technology or Institute of Measurement and Control . [ 8 ] [ 9 ] [ 10 ] They are employed in almost all industries due to the widespread use of information engineering. In the 1980s and 1990s, the term information engineering referred to an area of software engineering that has come to be known as data engineering in the 2010s and 2020s. [ 11 ] Machine learning is the field that involves the use of statistical and probabilistic methods to let computers "learn" from data without being explicitly programmed. [ 12 ] Data science involves the application of machine learning to extract knowledge from data. Subfields of machine learning include deep learning , supervised learning , unsupervised learning , reinforcement learning , semi-supervised learning , and active learning . Causal inference is another related component of information engineering. Control theory refers to the control of ( continuous ) dynamical systems , with the aim being to avoid delays, overshoots, or instability . [ 13 ] Information engineers tend to focus more on control theory than on the physical design of control systems and circuits (which tends to fall under electrical engineering). Subfields of control theory include classical control , optimal control , and nonlinear control . Signal processing refers to the generation, analysis and use of signals , which can take many forms, such as image , sound , electrical, or biological. [ 14 ] Information theory studies the analysis, transmission, and storage of information. Major subfields of information theory include coding and data compression . [ 15 ] Computer vision is the field that deals with getting computers to understand image and video data at a high level. [ 16 ] Natural language processing deals with getting computers to understand human (natural) languages at a high level. This usually means text , but also often includes speech processing and recognition . [ 17 ] Bioinformatics is the field that deals with the analysis, processing, and use of biological data. [ 18 ] This usually means topics such as genomics and proteomics , and sometimes also includes medical image computing . Cheminformatics is the field that deals with the analysis, processing, and use of chemical data. 
[ 19 ] Robotics in information engineering focuses mainly on the algorithms and computer programs used to control robots . As such, information engineering tends to focus on autonomous, mobile, or probabilistic robots. [ 20 ] [ 21 ] [ 22 ] Major subfields studied by information engineers include control , perception , SLAM , and motion planning . [ 20 ] [ 21 ] In the past, some areas of information engineering, such as signal processing , used analog electronics , but today most information engineering is done with digital computers . Many tasks in information engineering can be parallelized , and so nowadays it is carried out using CPUs , GPUs , and AI accelerators . [ 23 ] [ 24 ] There has also been interest in using quantum computers for some subfields of information engineering, such as machine learning and robotics . [ 25 ] [ 26 ] [ 27 ]
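As an illustration of the machine-learning definition given earlier (fitting a model from data rather than hand-coding the rule), the following minimal sketch learns a linear relationship by ordinary least squares; the synthetic data and coefficients are invented for the example.

import numpy as np

rng = np.random.default_rng(42)

# Synthetic data generated by y = 3x + 1 plus noise; the rule itself
# is never written into the learner.
x = rng.uniform(-1.0, 1.0, size=200)
y = 3.0 * x + 1.0 + 0.1 * rng.standard_normal(200)

# "Learning": estimate the coefficients from the data by least squares.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"learned model: y = {slope:.2f}*x + {intercept:.2f}")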
https://en.wikipedia.org/wiki/Information_engineering
Information ethics has been defined as "the branch of ethics that focuses on the relationship between the creation, organization, dissemination, and use of information , and the ethical standards and moral codes governing human conduct in society". [ 1 ] It examines the morality that comes from information as a resource, a product, or a target. [ 2 ] It provides a critical framework for considering moral issues concerning informational privacy , moral agency (e.g. whether artificial agents may be moral), new environmental issues (especially how agents should behave in the infosphere ), and problems arising from the life-cycle (creation, collection, recording, distribution, processing, etc.) of information (especially ownership and copyright, the digital divide , and digital rights ). It is vital that librarians, archivists, information professionals, and others understand the importance of disseminating information properly and of acting responsibly when handling information. [ 3 ] Information ethics has evolved to relate to a range of fields such as computer ethics , [ 4 ] medical ethics, journalism [ 5 ] and the philosophy of information . As the use and creation of information and data form the foundation of machine learning , artificial intelligence and many areas of mathematics, information ethics also plays a central role in the ethics of artificial intelligence , big data ethics and ethics in mathematics . The term information ethics was first coined by Robert Hauptman and used in the book Ethical Challenges in Librarianship . The field of information ethics has a relatively short but progressive history, having been recognized in the United States for nearly 20 years. [ 6 ] The origins of the field are in librarianship, though it has now expanded to the consideration of ethical issues in other domains, including computer science, the internet, media, journalism, management information systems, and business. [ 6 ] Evidence of scholarly work on this subject can be traced to the 1980s, when an article authored by Barbara J. Kostrewski and Charles Oppenheim and published in the Journal of Information Science discussed issues relating to the field, including confidentiality, information biases, and quality control. [ 6 ] Another scholar, Robert Hauptman, has written extensively about information ethics in the library field and founded the Journal of Information Ethics in 1992. [ 7 ] One of the first schools to introduce an information ethics course was the University of Pittsburgh in 1990, with a master's-level course on the concept of information ethics. Soon after, Kent State University also introduced a master's-level course called "Ethical Concerns for Library and Information Professionals". Eventually, the term "information ethics" became more associated with the computer science and information technology disciplines at universities. Even so, it is uncommon for universities to devote entire courses to the subject. Due to the nature of technology, the concept of information ethics has spread to other realms of the industry, giving rise to concepts such as "cyberethics", which discusses topics like the ethics of artificial intelligence and its ability to reason, and media ethics, which applies to issues such as lies, censorship, and violence in the press. 
Due to the advent of the internet, the concept of information ethics has therefore spread to fields beyond librarianship, now that information has become so readily available. It has also become more relevant than ever, since the credibility of information online is harder to assess than that of print articles, owing to the ease of publishing online. All of these different concepts have been embraced by the International Center for Information Ethics (ICIE), established by Rafael Capurro in 1999. [ 8 ] Dilemmas regarding the life of information are becoming increasingly important in a society that is defined as "the information society". The explosion of technology has brought information ethics to the forefront of ethical considerations. Information transmission and literacy are essential concerns in establishing an ethical foundation that promotes fair, equitable, and responsible practices. Information ethics broadly examines issues related to ownership, access, privacy, security, and community. It is also concerned with relational issues such as "the relationship between information and the good of society, the relationship between information providers and the consumers of information". [ 9 ] Information technology affects common issues such as copyright protection, intellectual freedom, accountability, privacy, and security. Many of these issues are difficult or impossible to resolve due to fundamental tensions between Western moral philosophies (based on rules, democracy, individual rights, and personal freedoms) and traditional Eastern cultures (based on relationships, hierarchy, collective responsibilities, and social harmony). [ 10 ] The multi-faceted dispute between Google and the government of the People's Republic of China reflects some of these fundamental tensions. Professional codes offer a basis for making ethical decisions and applying ethical solutions to situations involving information provision and use, reflecting an organization's commitment to responsible information service. Evolving information formats and needs require continual reconsideration of ethical principles and of how these codes are applied. Considerations regarding information ethics influence "personal decisions, professional practice, and public policy ". [ 11 ] Therefore, ethical analysis must provide a framework that takes into consideration "many, diverse domains" (ibid.) regarding how information is distributed. Censorship is an issue commonly involved in the discussion of information ethics because it describes the inability to access or express opinions or information based on the belief that it is bad for others to view this opinion or information. [ 12 ] Sources that are commonly censored include books, articles, speeches, works of art, data, music and photos. [ 12 ] Censorship can be perceived as either ethical or unethical in the field of information ethics. Those who believe censorship is ethical say the practice prevents readers from being exposed to offensive and objectionable material. [ 12 ] Topics such as sexism, racism, homophobia, and anti-semitism are present in public works and are widely seen as unethical in the public eye. [ 13 ] There is concern regarding the exposure of these topics to the world, especially to the younger generation. 
[ 13 ] The Australian Library Journal states that proponents of censorship in libraries (the practice of librarians deciding which books and resources to keep in their collections) argue that censorship is an ethical way to provide the public with information considered morally sound, allowing positive rather than negative ethics to be dispersed. [ 13 ] According to the same journal, librarians have an "ethical duty" to protect the minds of their readers, particularly young people, by filtering books through the lens of censorship, to prevent readers from adopting the unethical ideas and behaviors portrayed in them. [ 13 ] However, others in the field of information ethics argue that the practice of censorship is unethical because it fails to provide all available information to the community of readers. British philosopher John Stuart Mill argued that censorship is unethical because it goes directly against the moral concept of utilitarianism . [ 14 ] Mill believed that humans cannot hold true beliefs when information is withheld from the population via censorship, and that acquiring true beliefs without censorship leads to greater happiness. [ 14 ] According to this argument, true beliefs and happiness (both of which are considered ethical) cannot be obtained through the practice of censorship. Librarians and others who disperse information to the public also face the dilemma of the ethics of censorship through the argument that censorship harms students and is morally wrong, because readers are left unable to know the full extent of the knowledge available to the world. [ 13 ] The debate over information ethics in censorship was highly contested when schools removed information about evolution from libraries and curriculums because the topic conflicted with religious beliefs. [ 13 ] In this case, advocates against censorship argue that it is more ethical to include multiple sources of information on a subject, such as creation, to allow readers to learn and work out their own beliefs. [ 13 ] Illegal downloading has also caused some ethical concerns [ 15 ] and raised the question of whether digital piracy is equivalent to stealing. [ 16 ] [ 17 ] When asked "Is it ethical to download copyrighted music for free?" in a survey, 44 percent of a group of primarily college-aged students responded "Yes." [ 18 ] Christian Barry believes that treating illegal downloading as equivalent to common theft is problematic, because clear and morally relevant differences can be shown "between stealing someone's handbag and illegally downloading a television series". On the other hand, he thinks consumers should try to respect intellectual property unless doing so imposes an unreasonable cost on them. [ 19 ] In an article titled "Download This Essay: A Defence of Stealing Ebooks", Andrew Forcehimes argues that the way we think about copyright is inconsistent, because every argument for (physical) public libraries is also an argument for illegally downloading ebooks, and every argument against downloading ebooks would also be an argument against libraries. [ 20 ] In a reply, Sadulla Karjiker argues that "economically, there is a material difference between permitting public libraries making physical books available and allowing such online distribution of ebooks." [ 21 ] Ali Pirhayati has proposed a thought experiment based on a high-tech library to neutralize the magnitude problem (suggested by Karjiker) and justify Forcehimes' main idea. 
[ 22 ] Ethical concerns regarding international security, surveillance, and the right to privacy are on the rise. [ 23 ] The issues of security and privacy commonly overlap in the field of information, due to the interconnectedness of online research and the development of information technology (IT) . [ 24 ] Areas where security and privacy intersect include identity theft , online economic transactions, medical records, and state security. [ 25 ] Companies, organizations, and institutions use databases to store, organize, and distribute users' information, with or without their knowledge. [ 25 ] Individuals are far more likely to part with personal information when it seems that they will have some sort of control over its use, or when the information is given to an entity with which they already have an established relationship. In these circumstances, subjects are more inclined to believe that their information has been collected purely for collection's sake. An entity may also offer goods or services in exchange for the client's personal information. This type of collection method may seem valuable to a user because the transaction appears to be free in the monetary sense. It forms a type of social contract between the entity offering the goods or services and the client, and the client may continue to uphold their side of the contract as long as the company continues to provide a good or service that they deem worthy. [ 26 ] The concept of procedural fairness denotes an individual's perception of fairness in a given scenario. Circumstances that contribute to procedural fairness include providing the customer with the ability to voice their concerns or input, and giving them control over the outcome of the contract. Best practice for any company collecting information from customers is to consider procedural fairness. [ 27 ] This concept is a key component of ethical consumer marketing and is the basis of United States privacy laws, the European Union's privacy directive from 1995, and the Clinton Administration's June 1995 guidelines for personal information use by all National Information Infrastructure participants. [ 28 ] Allowing an individual to remove their name from a mailing list is considered a best practice in information collection. In Equifax surveys conducted in the years 1994–1996, it was found that a substantial portion of the American public was concerned about business practices involving private consumer information and believed that such practices cause more harm than good. [ 29 ] Throughout the course of a customer–company relationship, the company is likely to accumulate a wealth of information about its customer. Flourishing data processing technology allows a company to create specific marketing campaigns for each of its individual customers. [ 26 ] Data collection and surveillance infrastructure has allowed companies to micro-target specific groups and tailor advertisements to certain populations. [ 30 ] A recent trend is to digitize medical records. The sensitive information contained in medical records makes security measures vitally important. [ 31 ] The ethical concern over medical record security is greatest in the context of emergency wards, where patient records can be accessed at all times. 
[ 31 ] Within an emergency ward, patient medical records need to be available for quick access; this means, however, that all medical records can be accessed at any moment within emergency wards, with or without the patient present. [ 31 ] Ironically, donating one's body organs "to science" is easier in most Western jurisdictions than donating one's medical records for research. [ 32 ] Warfare has also changed the security of countries in the 21st century. After the events of 9/11 and other terrorist attacks on civilians, surveillance by states raises ethical concerns about the individual privacy of citizens. The USA PATRIOT Act of 2001 is a prime example of such concerns. Many other countries, especially European nations, in the current climate of terrorism, are looking for a balance between stricter security and surveillance on the one hand and, on the other, avoiding the ethical problems associated with the USA PATRIOT Act. [ 33 ] International security is moving towards the trends of cybersecurity and unmanned systems, which involve the military application of IT. [ 23 ] Ethical concerns of political entities regarding information warfare include the unpredictability of response, the difficulty of differentiating civilian and military targets, and conflict between state and non-state actors. [ 23 ] The main peer-reviewed academic journals reporting on information ethics are the Journal of the Association for Information Systems , the flagship publication of the Association for Information Systems , and Ethics and Information Technology , published by Springer.
https://en.wikipedia.org/wiki/Information_ethics
Information exchange or information sharing means that people or other entities pass information from one to another. This could be done electronically or through certain systems. [ 1 ] These terms can refer either to bidirectional information transfer in telecommunications and computer science or to communication seen from a system-theoretic or information-theoretic point of view. As "information" in this context invariably refers to ( electronic ) data that encodes and represents [ 2 ] the information at hand, a broader treatment can be found under data exchange . Information exchange has a long history in information technology . [ 3 ] Traditional information sharing referred to one-to-one exchanges of data between a sender and receiver. Online information sharing gives businesses useful data on which to base future strategies. [ 4 ] These information exchanges are implemented via dozens of open and proprietary protocols, message formats, and file formats. Electronic data interchange ( EDI ) is a successful implementation of commercial data exchange that began in the late 1970s and remains in use today. [ 5 ] Some controversy arises when discussing regulations regarding information exchange. [ 6 ] Initiatives to standardize information sharing protocols include extensible markup language ( XML ), simple object access protocol ( SOAP ), and web services description language ( WSDL ). From the point of view of a computer scientist, the four primary information sharing design patterns are sharing information one-to-one , one-to-many , many-to-many , and many-to-one (a sketch of the one-to-many case follows below). Technologies to meet all four of these design patterns are evolving and include blogs , wikis , really simple syndication , tagging , and chat . One example of the United States government's attempt to implement one of these design patterns (one-to-one) is the National Information Exchange Model (NIEM). [ 7 ] [ 8 ] One-to-one exchange models fall short of supporting all of the design patterns needed to fully implement data exploitation technology. Advanced information sharing platforms provide controlled vocabularies , data harmonization , data stewardship policies and guidelines, and standards for uniform data as they relate to privacy , security , and data quality . The term information sharing gained popularity as a result of the 9/11 Commission hearings and its report on the United States government 's lack of response to information known about the planned terrorist attack on the New York City World Trade Center prior to the event. The resulting commission report led to the enactment of several executive orders by President Bush that mandated agencies to implement policies to "share information" across organizational boundaries. In addition, an Information Sharing Environment Program Manager [ 9 ] (PM-ISE) was appointed, tasked to implement the provisions of the Intelligence Reform and Terrorism Prevention Act of 2004. [ 10 ] In making its recommendation toward the creation of an "Information Sharing Environment", the 9/11 Commission based itself on the findings and recommendations made by the Markle Task Force on National Security in the Information Age. [ 11 ]
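Of the four design patterns listed above, the one-to-many case can be sketched as a simple publish–subscribe mechanism. This minimal Python example (all names are hypothetical) shows a single sender fanning information out to several receivers over a named topic.

from collections import defaultdict

class Exchange:
    """Toy one-to-many information exchange: publishers fan out to subscribers."""
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        """Register a receiver for all future messages on a topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        """Deliver one message to every receiver subscribed to the topic."""
        for callback in self.subscribers[topic]:
            callback(message)

bus = Exchange()
bus.subscribe("alerts", lambda m: print("agency A received:", m))
bus.subscribe("alerts", lambda m: print("agency B received:", m))
bus.publish("alerts", "new advisory issued")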
https://en.wikipedia.org/wiki/Information_exchange
Information flow in an information-theoretical context is the transfer of information from a variable x to a variable y in a given process . Not all flows may be desirable; for example, a system should not leak any confidential information (partially or not) to public observers, as this is a violation of privacy on an individual level and might cause major losses on a corporate level. Securing the data manipulated by computing systems has been a challenge for many years. Several methods to limit information disclosure exist today, such as access control lists , firewalls , and cryptography . However, although these methods do impose limits on the information that is released by a system, they provide no guarantees about information propagation . [ 1 ] For example, access control lists of file systems prevent unauthorized file access, but they do not control how the data is used afterwards. Similarly, cryptography provides a means to exchange information privately across a non-secure channel, but no guarantees about the confidentiality of the data are given once it is decrypted. In low-level information flow analysis, each variable is usually assigned a security level. The basic model comprises two distinct levels: low and high, meaning, respectively, publicly observable information and secret information. To ensure confidentiality, flows from high to low variables should not be allowed. On the other hand, to ensure integrity, flows to high variables should be restricted. [ 1 ] More generally, the security levels can be viewed as a lattice , with information flowing only upwards in the lattice. [ 2 ] For example, considering two security levels L and H (low and high) with L ≤ H, flows from L to L, from H to H, and from L to H would be allowed, while flows from H to L would not. [ 3 ] Throughout this article, h denotes a secret (high) variable and l a publicly observable (low) variable, where L and H are the only two security levels in the lattice being considered. Information flows can be divided into two major categories. The simplest one is explicit flow, where some secret is explicitly leaked to a publicly observable variable, as in the following example:

l := h

The other flows fall into the side channel category. For example, in the timing attack or in the power analysis attack , the system leaks information through, respectively, the time or the power it takes to perform an action that depends on a secret value. In the following example, the attacker can deduce whether the value of h is one or not by the time the program takes to finish:

if h = 1 then
    (* perform some time-consuming computation *)
l := 0

Another side channel flow is the implicit information flow, which consists of the leakage of information through the program control flow. The following program (implicitly) discloses the value of the secret variable h to the variable l :

if h = true then
    l := 3
else
    l := 42

In this case, since the variable h is boolean, all of the bits of h are disclosed (at the end of the program, l will be 3 if h is true, and 42 otherwise). Non-interference is a policy enforcing that an attacker should not be able to distinguish two computations from their outputs if the computations vary only in their secret inputs. However, this policy is too strict to be usable in realistic programs. 
[ 4 ] The classic example is a password checker program that, in order to be useful, needs to disclose some secret information: whether the input password is correct or not (note that the information an attacker learns in case the program rejects the password is that the attempted password is not the valid one). A mechanism for information flow control is one that enforces information flow policies. Several methods to enforce information flow policies have been proposed. Run-time mechanisms that tag data with information flow labels have been employed at the operating system level and at the programming language level. Static program analyses have also been developed that ensure information flows within programs are in accordance with policies. Both static and dynamic analyses for current programming languages have been developed. However, dynamic analysis techniques cannot observe all execution paths, and therefore cannot be both sound and precise. In order to guarantee noninterference, they either terminate executions that might release sensitive information [ 5 ] or they ignore updates that might leak information. [ 6 ] A prominent way to enforce information flow policies in a program is through a security type system: that is, a type system that enforces security properties. In such a sound type system, if a program type-checks, it meets the flow policy and therefore contains no improper information flows. In a programming language augmented with a security type system, every expression carries both a type (such as boolean or integer) and a security label. Following is a simple security type system from [ 1 ] that enforces non-interference. The notation \vdash exp : \tau means that the expression exp has type \tau. Similarly, [sc] \vdash C means that the command C is typable in the security context sc.

[E1-2] \quad \vdash exp : high \qquad \frac{h \notin Vars(exp)}{\vdash exp\;:\;low}

[C1-3] \quad [sc] \vdash \textbf{skip} \qquad [sc] \vdash h := exp \qquad \frac{\vdash exp\;:\;low}{[low] \vdash l := exp}

[C4-5] \quad \frac{[sc] \vdash C_1 \quad [sc] \vdash C_2}{[sc] \vdash C_1\,;\,C_2} \qquad \frac{\vdash exp\;:\;sc \quad [sc] \vdash C}{[sc] \vdash \textbf{while}\ exp\ \textbf{do}\ C}

[C6-7] \quad \frac{\vdash exp\;:\;sc \quad [sc] \vdash C_1 \quad [sc] \vdash C_2}{[sc] \vdash \textbf{if}\ exp\ \textbf{then}\ C_1\ \textbf{else}\ C_2} \qquad \frac{[high] \vdash C}{[low] \vdash C}

Well-typed commands include, for example, h := l + 3 (any expression may be assigned to a high variable). Conversely, the program

if h = true then l := 3 else l := 42

is ill-typed, as it will disclose the value of variable h into l. Note that the rule [C7] is a subsumption rule: any command that is of security type high can also be low. For example, h := 1 can be both high and low. 
This is called polymorphism in type theory . Similarly, the type of an expression exp that satisfies h \notin Vars(exp) can be both high and low, according to [E1] and [E2] respectively. As shown previously, the non-interference policy is too strict for use in most real-world applications. [ 7 ] Therefore, several approaches to allow controlled releases of information have been devised. Such approaches are called information declassification. Robust declassification requires that an active attacker may not manipulate the system in order to learn more secrets than what passive attackers already know. [ 4 ] Information declassification constructs can be classified along four orthogonal dimensions: what information is released, who is authorized to access the information, where the information is released, and when the information is released. [ 4 ] A what declassification policy controls which information (partial or not) may be released to a publicly observable variable. With the declassify construct of, [ 8 ] for instance, the value of the variable h can be explicitly allowed by the programmer to flow into the publicly observable variable l , as in l := declassify(h) . A who declassification policy controls which principals (i.e., who) can access a given piece of information. This kind of policy has been implemented in the Jif compiler, [ 9 ] and allows, for example, Bob to share a secret contained in the variable b with Alice through the commonly accessible variable ab . A where declassification policy regulates where the information can be released, for example by controlling in which lines of the source code information can be released. The flow construct proposed in [ 10 ] takes a flow policy (for instance, that variables in H are allowed to flow to variables in L) and a command, which is run under the given flow policy. A when declassification policy regulates when the information can be released. Policies of this kind can be used to verify programs that implement, for example, the controlled release of secret information after payment, or encrypted secrets which should not be released within a certain time given polynomial computational power. An implicit flow occurs when code whose conditional execution is based on private information updates a public variable. This is especially problematic when multiple executions are considered, since an attacker could leverage the public variable to infer private information by observing how its value changes over time or with the input. The naïve approach consists of enforcing the confidentiality property on all variables whose value is affected by other variables. This method leads to partially leaked information, because in some instances of the application a variable is Low and in others High. "No sensitive upgrade" halts the program whenever a High variable affects the value of a Low variable. Since it simply looks for expressions where an information leakage might happen, without looking at the context, it may halt a program that, despite having potential information leakage, never actually leaks information. In the following example, x is High and y is Low, and both branches assign the same value, so nothing is actually disclosed:

if x = true then
    y := 1
else
    y := 1

In this case the program would be halted, since, syntactically speaking, it uses the value of a High variable to change a Low variable, despite the program never leaking information. 
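A minimal sketch of the "no sensitive upgrade" check is shown below, assuming a toy monitor in which each variable carries a label and a stack tracks the label of the program counter; the class and method names are invented for illustration. As in the example above, the monitor halts a High-guarded write to a Low variable even when no information would actually leak.

LOW, HIGH = 0, 1

class NSUError(Exception):
    """Raised when a High context tries to update a Low-labelled variable."""

class Monitor:
    def __init__(self):
        self.labels = {}          # variable name -> current label
        self.pc_stack = [LOW]     # program-counter label stack

    def enter_branch(self, guard_label):
        # Branching on data raises the pc label to the guard's label.
        self.pc_stack.append(max(self.pc_stack[-1], guard_label))

    def exit_branch(self):
        self.pc_stack.pop()

    def assign(self, var, value_label):
        pc = self.pc_stack[-1]
        if pc == HIGH and self.labels.get(var, LOW) == LOW:
            raise NSUError(f"halted: High context writes Low variable {var!r}")
        self.labels[var] = max(pc, value_label)

m = Monitor()
m.assign("y", LOW)       # y is Low
m.enter_branch(HIGH)     # e.g. "if x = true", where x is High
try:
    m.assign("y", LOW)   # halted, even though both branches agree
except NSUError as e:
    print(e)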
Permissive-upgrade introduces an extra security class, P, which identifies variables that may leak information. When a High variable affects the value of a Low variable, the latter is labeled P. If a P-labeled variable affects a Low variable, the program is halted. To prevent the halting, the Low and P variables should be converted to High using a privatization function, which ensures that no information leakage can occur; on subsequent runs the program executes without interruption. Privatization inference extends permissive upgrade by automatically applying the privatization function to any variable that might leak information. This method should be used during testing, where it will convert most variables; once the program moves into production, permissive upgrade should be used to halt the program in case of an information leakage, and the privatization functions can be updated to prevent subsequent leaks. Beyond applications to programming languages, information flow control theories have been applied to operating systems, [ 11 ] distributed systems, [ 12 ] and cloud computing. [ 13 ] [ 14 ]
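Returning to the static security type system presented earlier, its rules can also be phrased as a small recursive checker over a toy command language. This is a minimal sketch under assumed encodings (commands as tagged tuples, expressions as lists of the variables they mention); it is not the formulation of the cited papers.

```python
# Minimal checker for the security type system [E1-2]/[C1-7] above.
# Encodings are assumptions: an expression is the list of variables it
# mentions; a command is a tagged tuple.

HIGH, LOW = "high", "low"

def exp_label(exp_vars, labels):
    # [E1-2]: an expression is LOW only if it mentions no HIGH variable.
    return HIGH if any(labels[v] == HIGH for v in exp_vars) else LOW

def check(cmd, pc, labels):
    """Return True iff cmd is typable in security context pc."""
    op = cmd[0]
    if op == "skip":                                  # [C1]
        return True
    if op == "assign":                                # [C2-C3]
        _, var, exp_vars = cmd
        if labels[var] == HIGH:                       # h := exp: any context
            return True
        return pc == LOW and exp_label(exp_vars, labels) == LOW
    if op == "seq":                                   # [C4]
        return check(cmd[1], pc, labels) and check(cmd[2], pc, labels)
    if op in ("if", "while"):                         # [C5-C6], with [C7]
        guard = exp_label(cmd[1], labels)
        body_pc = HIGH if HIGH in (pc, guard) else LOW
        return all(check(c, body_pc, labels) for c in cmd[2:])
    raise ValueError(f"unknown command {op!r}")

labels = {"h": HIGH, "l": LOW}
leak = ("assign", "l", ["h"])                              # l := h
ok = ("if", ["h"], ("assign", "h", ["l"]), ("skip",))      # if h then h := l
print(check(leak, LOW, labels))   # False: improper flow rejected
print(check(ok, LOW, labels))     # True: flows into h are permitted
```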
https://en.wikipedia.org/wiki/Information_flow_(information_theory)
An information flow diagram ( IFD ) is a diagram that shows how information is communicated (or "flows") from a source to a receiver or target (e.g. A→C), through some medium. [ 1 ] : 36–39 The medium acts as a bridge, a means of transmitting the information. Examples of media include word of mouth, radio, email, etc. The concept of IFD was initially used in radio transmission. [ 2 ] The diagrammed system may also include feedback, a reply or response to the signal that was given out. The return paths can be bi-directional: information can flow back and forth. [ 2 ] An IFD can be used to model the information flow throughout an organisation. An IFD shows the relationship between internal information flows within an organisation and external information flows between organisations. It also shows the relationship between the internal departments and sub-systems. An IFD usually uses "blobs" to decompose the system and sub-systems into elemental parts. [ 2 ] Lines then indicate how the information travels from one system to another. IFDs are used in businesses, government agencies, television and cinematic processes. IFDs are often confused with data flow diagrams (DFDs). IFDs show information as sources, destinations and flows. DFDs show processes where inputs are transformed into outputs. Databases are also present in DFDs to show where data is held within the systems. In DFDs, information destinations are called "sinks". [ 3 ] : 180 Peter Checkland, a British management scientist, described information flows between the different elements that compose various systems. He also defined a system as a "community situated within an environment". [ 2 ] The main purpose of an information flow diagram is to display neatly the sources that send and receive information so that they can be analysed. This allows viewers to see the forwarding of information and to analyse different situations. [ 2 ] The creation of an IFD is, in most cases, the first step in information analysis. [ 1 ] : 37 IFDs are behaviour diagrams that show the exchange of data between systems. They are also used to describe the circulation of information within systems. [ 4 ] Information that moves along the diagram is represented either as information items or by concrete classifiers. IFDs are used for a variety of purposes. [ 2 ] Construction of an information flow diagram requires knowledge of the different information sources and the connections between them. The sources and targets of information flow are one of the following: actor, use case, node, artefact, class, component, port, property, interface, package, activity node, activity partition, or instance specification. A dashed line with an open arrow pointing away from the source to the target is used to represent information flow. The keyword "flow" may be written above or below the dashed line. Information items represent the abstraction of data and act as information flow connectors, representing the transfer of information from source to target. Information items do not provide any detail of the information they transfer, as they are featureless. [ 4 ] Diagramming software can be used to create IFDs. Examples of diagramming software include Microsoft Visio for Windows and OmniGraffle for OS X.
Successful IFDs should highlight gaps that need improvement, display inefficiencies in the flow of information, uncover and highlight risks to information such as breaches of data confidentiality and insecure systems, display unsuitable mediums which are being used, and provide clarity about who should receive which information when, where and how. As a worked example, consider a customer who needs to make an order. The customer first posts the order to the sales department. The customer's order details are then entered into a centralised database which can only be accessed by the warehouse to make the order. The goods are handed to the dispatch department with a delivery note attached to them for delivery. On delivery, the customer receives the goods and the delivery note (which are handled by a member of the dispatch department). The sales department then creates an invoice which is posted to the customer. The accounts department then accesses a copy of the invoice from the centralised database. The customer is then required to post the payment to the accounts department. [ 7 ] Information flow diagrams also have limitations.
https://en.wikipedia.org/wiki/Information_flow_diagram
Information fluctuation complexity is an information-theoretic quantity defined as the fluctuation of information about entropy. It is derivable from fluctuations in the predominance of order and chaos in a dynamic system and has been used as a measure of complexity in many diverse fields. It was introduced in a 1993 paper by Bates and Shepard. [ 1 ] The information fluctuation complexity of a discrete dynamic system is a function of the probability distribution of its states when it is subject to random external input data. The purpose of driving the system with a rich information source such as a random number generator or a white noise signal is to probe the internal dynamics of the system in much the same way as a frequency-rich impulse is used in signal processing. If a system has $N$ possible states and the state probabilities $p_i$ are known, then its information entropy is

$$\mathrm{H} = \sum_{i=1}^{N} p_i I_i = -\sum_{i=1}^{N} p_i \log p_i,$$

where $I_i = -\log p_i$ is the information content of state $i$. The information fluctuation complexity of the system is defined as the standard deviation or fluctuation of $I$ about its mean $\mathrm{H}$:

$$\sigma_I = \sqrt{\sum_{i=1}^{N} p_i (I_i - \mathrm{H})^2}$$

or

$$\sigma_I = \sqrt{\sum_{i=1}^{N} p_i I_i^2 - \mathrm{H}^2}.$$

The fluctuation of state information $\sigma_I$ is zero in a maximally disordered system with all $p_i = 1/N$; the system simply mimics its random inputs. $\sigma_I$ is also zero if the system is perfectly ordered with only one fixed state ($p_1 = 1$), regardless of the inputs. $\sigma_I$ is non-zero between these two extremes, with a mixture of higher-probability states and lower-probability states populating state space. As a complex dynamic system evolves over time, how it transitions between states depends on external stimuli in an irregular way. At times it may be more sensitive to external stimuli (unstable) and at other times less sensitive (stable). When a given state has multiple possible next-states, external information determines which one will be next and the system gains this information by following a particular trajectory in state space. However, if several different states all lead to the same next-state, then upon entering the next-state the system loses information about which state preceded it. Thus, a complex system exhibits alternating information gain and loss as it evolves over time. This alternation or fluctuation of information is equivalent to remembering and forgetting (temporary information storage or memory), an essential feature of non-trivial computation. The gain or loss of information associated with transitions between states can be related to state information. The net information gain of a transition from state $i$ to state $j$ is the information gained when leaving state $i$ less the information lost when entering state $j$:

$$\Gamma_{ij} = -\log p_{i \rightarrow j} + \log p_{i \leftarrow j}.$$

Here $p_{i \rightarrow j}$ is the forward conditional probability that if the present state is $i$ then the next state will be $j$, and $p_{i \leftarrow j}$ is the reverse conditional probability that if the present state is $j$ then the previous state was $i$.
The conditional probabilities are related to the transition probability $p_{ij}$, the probability that a transition from state $i$ to state $j$ occurs, by

$$p_{ij} = p_i \, p_{i \rightarrow j} = p_j \, p_{i \leftarrow j}.$$

Eliminating the conditional probabilities gives

$$\Gamma_{ij} = -\log \frac{p_{ij}}{p_i} + \log \frac{p_{ij}}{p_j} = \log p_i - \log p_j = I_j - I_i = \Delta I.$$

Therefore, the net information gained by the system as a result of the transition depends only on the increase in state information from the initial to the final state. It can be shown that this is true even for multiple consecutive transitions. [ 1 ] $\Gamma = \Delta I$ is reminiscent of the relation between force and potential energy: $I$ is like the potential $\Phi$ and $\Gamma$ is like the force $\mathbf{F}$ in $\mathbf{F} = \nabla \Phi$. External information "pushes" a system "uphill" to a state of higher information potential to accomplish information storage, much as pushing a mass uphill to a state of higher gravitational potential stores energy. The amount of energy stored depends only on the final height, not on the path up the hill. Similarly, the amount of information stored does not depend on the transition path between an initial common state and a final rare state. Once a system reaches a rare state with high information potential, it may then "fall" back to a common state, losing the previously stored information. It may be useful to compute the standard deviation of $\Gamma$ about its mean (which is zero), namely the fluctuation of net information gain $\sigma_\Gamma$, [ 1 ] but $\sigma_I$ takes into account multi-transition memory loops in state space and therefore should be more indicative of the computational power of a system. Moreover, $\sigma_I$ is easier to apply because there can be many more transitions than states. A dynamic system that is sensitive to external information (unstable) exhibits chaotic behavior, whereas one that is insensitive to external information (stable) exhibits orderly behavior. A complex system exhibits both behaviors, fluctuating between them in dynamic balance when subject to a rich information source. The degree of fluctuation is quantified by $\sigma_I$; it captures the alternation in the predominance of chaos and order in a complex system as it evolves over time. The rule 110 variant of the elementary cellular automaton has been proven to be capable of universal computation. The proof is based on the existence and interactions of cohesive and self-perpetuating cell patterns known as gliders, which are examples of emergent phenomena associated with complex systems and which imply the capability of groups of automaton cells to remember that a glider is passing through them. It is therefore to be expected that there will be memory loops in state space resulting from alternations of information gain and loss, instability and stability, chaos and order. Consider a 3-cell group of adjacent automaton cells that obey rule 110: end-center-end. The next state of the center cell depends on the present state of itself and the end cells as specified by the rule: 111→0, 110→1, 101→1, 100→0, 011→1, 010→1, 001→1, 000→0 (the binary expansion of 110 is 01101110). To compute the information fluctuation complexity of this system, attach a driver cell to each end of the 3-cell group to provide random external stimuli, like so: driver→end-center-end←driver, such that the rule can be applied to the two end cells.
Next, determine what the next state will be for each possible present state and for each possible combination of driver cell contents, in order to determine the forward conditional probabilities. In the state diagram of this system, circles represent states and arrows represent transitions between states. The eight possible states of this system, 1-1-1 to 0-0-0, are labeled with the octal equivalent of the 3-bit contents of the 3-cell group: 7 to 0. The transition arrows are labeled with forward conditional probabilities. Notice that there is variability in the divergence and convergence of arrows, corresponding to variability in gain and loss of information originating from the driver cells. The forward conditional probabilities are determined by the proportion of possible driver cell contents that drive a particular transition. For example, for the four possible combinations of two driver cell contents, state 7 leads to states 5, 4, 1 and 0, and therefore $p_{7 \rightarrow 5}$, $p_{7 \rightarrow 4}$, $p_{7 \rightarrow 1}$ and $p_{7 \rightarrow 0}$ are each 1⁄4 or 25%. Similarly, state 0 leads to states 0, 1, 0 and 1, and therefore $p_{0 \rightarrow 1}$ and $p_{0 \rightarrow 0}$ are each 1⁄2 or 50%. And so forth. The state probabilities are related by

$$p_j = \sum_{i=1}^{N} p_i \, p_{i \rightarrow j}, \qquad \sum_{j=1}^{N} p_j = 1.$$

These linear algebraic equations can be solved for the state probabilities. [ 2 ] The information entropy and the complexity can then be computed from the state probabilities. Note that the maximum possible entropy for eight states is $3\text{ bits}$, which is the case when all $p_i = 1/8$. Thus, rule 110 has a relatively high entropy or state utilization of $2.86\text{ bits}$. However, this does not preclude a considerable fluctuation of state information about entropy and thus a considerable value of the complexity, whereas maximum entropy would preclude complexity. An alternative method can be used to obtain the state probabilities when the analytical method used above is unfeasible: simply drive the system at its inputs (the driver cells) with a random source for many generations and observe the state probabilities empirically. When this is done via computer simulation for 10 million generations, results can be obtained for automata of increasing size. [ 2 ] Since both $\mathrm{H}$ and $\sigma_I$ increase with system size, their dimensionless ratio $\sigma_I / \mathrm{H}$, the relative information fluctuation complexity, is used to compare systems of different sizes. The empirical and analytical results agree for the 3-cell automaton, and the relative complexity levels off to about $0.10$ by 10 cells. In the paper by Bates and Shepard, [ 1 ] $\sigma_I$ is computed for all elementary cellular automaton rules, and it was observed that the ones that exhibit slow-moving gliders and possibly stationary objects, as rule 110 does, are highly correlated with large values of $\sigma_I$. $\sigma_I$ can therefore be used as a filter to select candidate rules for universal computation, which is challenging to prove.
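The empirical procedure described above is straightforward to reproduce. The following sketch drives the 3-cell rule 110 group with random driver cells and estimates $\mathrm{H}$ and $\sigma_I$ from the observed state frequencies. The generation count is an arbitrary choice made for the example; for the 3-cell case the entropy should come out near the analytical 2.86 bits quoted above.

```python
import random
from math import log2, sqrt

RULE = 110  # binary 01101110: the rule 110 lookup table

def step(left, center, right):
    # Next state of the center cell: bit (4*left + 2*center + right) of RULE.
    return (RULE >> (4 * left + 2 * center + right)) & 1

def state_probabilities(n_cells=3, generations=100_000, seed=1):
    rng = random.Random(seed)
    cells = [0] * n_cells
    counts = {}
    for _ in range(generations):
        # driver -> end-center-end <- driver: random cells at both ends
        padded = [rng.randint(0, 1)] + cells + [rng.randint(0, 1)]
        cells = [step(*padded[i:i + 3]) for i in range(n_cells)]
        state = tuple(cells)
        counts[state] = counts.get(state, 0) + 1
    total = sum(counts.values())
    return [c / total for c in counts.values()]

probs = state_probabilities()
H = -sum(p * log2(p) for p in probs)                        # entropy, bits
sigma_I = sqrt(sum(p * (-log2(p) - H) ** 2 for p in probs)) # fluctuation
print(f"H = {H:.2f} bits, sigma_I = {sigma_I:.2f} bits, "
      f"sigma_I/H = {sigma_I / H:.2f}")
```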
Although the derivation of the information fluctuation complexity formula is based on information fluctuations in dynamical systems , the formula depends only on state probabilities and therefore is also applicable to any probability distribution, including those derived from static images or text. Over the years the original paper [ 1 ] has been referred to by researchers in many diverse fields: complexity theory , [ 3 ] complex systems science, [ 4 ] complex networks , [ 5 ] chaotic dynamics , [ 6 ] many-body localization entanglement, [ 7 ] environmental engineering, [ 8 ] ecological complexity , [ 9 ] ecological time-series analysis , [ 10 ] ecosystem sustainability, [ 11 ] air [ 12 ] and water [ 13 ] pollution, hydrological wavelet analysis, [ 14 ] soil water flow, [ 15 ] soil moisture , [ 16 ] headwater runoff, [ 17 ] groundwater depth, [ 18 ] air traffic control, [ 19 ] flow patterns [ 20 ] and flood events, [ 21 ] topology, [ 22 ] economics, [ 23 ] market forecasting of metal [ 24 ] and electricity [ 25 ] prices, health informatics, [ 26 ] human cognition, [ 27 ] human gait kinematics, [ 28 ] neurology, [ 29 ] EEG analysis, [ 30 ] education, [ 31 ] investing, [ 32 ] artificial life [ 33 ] and aesthetics. [ 34 ]
https://en.wikipedia.org/wiki/Information_fluctuation_complexity
Information geometry is an interdisciplinary field that applies the techniques of differential geometry to study probability theory and statistics. [ 1 ] It studies statistical manifolds, which are Riemannian manifolds whose points correspond to probability distributions. Historically, information geometry can be traced back to the work of C. R. Rao, who was the first to treat the Fisher matrix as a Riemannian metric. [ 2 ] [ 3 ] The modern theory is largely due to Shun'ichi Amari, whose work has been greatly influential on the development of the field. [ 4 ] Classically, information geometry considered a parametrized statistical model as a Riemannian manifold, and also as a conjugate connection, statistical, or dually flat manifold. Unlike usual smooth manifolds with a tensor metric and the Levi-Civita connection, these take into account conjugate connections, torsion, and the Amari–Chentsov metric. [ 5 ] All of the geometric structures presented above find application in information theory and machine learning. For such models, there is a natural choice of Riemannian metric, known as the Fisher information metric. In the special case that the statistical model is an exponential family, it is possible to endow the statistical manifold with a Hessian metric (i.e., a Riemannian metric given by the potential of a convex function). In this case, the manifold naturally inherits two flat affine connections, as well as a canonical Bregman divergence. Historically, much of the work was devoted to studying the associated geometry of these examples. In the modern setting, information geometry applies to a much wider context, including non-exponential families, nonparametric statistics, and even abstract statistical manifolds not induced from a known statistical model. The results combine techniques from information theory, affine differential geometry, convex analysis and many other fields. One of the most promising applications of information geometry is in machine learning: for example, the development of information-geometric optimization methods such as mirror descent [ 6 ] and natural gradient descent. [ 7 ] The standard references in the field are Shun'ichi Amari and Hiroshi Nagaoka's book, Methods of Information Geometry, [ 8 ] and the more recent book by Nihat Ay and others. [ 9 ] A gentle introduction is given in the survey by Frank Nielsen. [ 10 ] In 2018, the journal Information Geometry, which is devoted to the field, was launched. The history of information geometry is associated with the discoveries of many people. As an interdisciplinary field, information geometry has been used in a wide range of applications.
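As a small concrete illustration of the ideas above: for the univariate Gaussian family parametrized by $(\mu, \sigma)$, the Fisher information metric is the diagonal matrix $\mathrm{diag}(1/\sigma^2,\, 2/\sigma^2)$, and natural gradient descent preconditions the ordinary gradient with the inverse of this metric. The sketch below fits a Gaussian to sample data this way; the data, step size, and iteration count are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=2.0, size=1000)   # assumed toy data

mu, sigma = 0.0, 1.0        # initial parameters
eta = 0.1                   # assumed step size
for _ in range(200):
    # Average gradient of the negative log-likelihood of N(mu, sigma)
    g_mu = -np.mean(data - mu) / sigma**2
    g_sigma = 1.0 / sigma - np.mean((data - mu) ** 2) / sigma**3
    # Inverse Fisher metric of the Gaussian family: diag(s^2, s^2/2)
    F_inv = np.diag([sigma**2, sigma**2 / 2.0])
    # Natural-gradient step: precondition the gradient with F^{-1}
    mu, sigma = np.array([mu, sigma]) - eta * F_inv @ np.array([g_mu, g_sigma])

print(f"estimated mu = {mu:.3f}, sigma = {sigma:.3f}")  # near (3, 2)
```

Unlike plain gradient descent, this update is invariant (to first order) under reparametrization of the family, which is the practical appeal of the information-geometric view.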
https://en.wikipedia.org/wiki/Information_geometry
Information management ( IM ) is the appropriate and optimized capture, storage, retrieval, and use of information . It may be personal information management or organizational. Information management for organizations concerns a cycle of organizational activity: the acquisition of information from one or more sources, the custodianship and the distribution of that information to those who need it, and its ultimate disposal through archiving or deletion and extraction. This cycle of information organisation involves a variety of stakeholders , including those who are responsible for assuring the quality , accessibility and utility of acquired information; those who are responsible for its safe storage and disposal ; and those who need it for decision making . Stakeholders might have rights to originate, change, distribute or delete information according to organisational information management policies . Information management embraces all the generic concepts of management, including the planning , organizing , structuring, processing , controlling , evaluation and reporting of information activities, all of which is needed in order to meet the needs of those with organisational roles or functions that depend on information. These generic concepts allow the information to be presented to the audience or the correct group of people. After individuals are able to put that information to use, it then gains more value. Information management is closely related to, and overlaps with, the management of data , systems , technology , processes and – where the availability of information is critical to organisational success – strategy . This broad view of the realm of information management contrasts with the earlier, more traditional view, that the life cycle of managing information is an operational matter that requires specific procedures, organisational capabilities and standards that deal with information as a product or a service. In the 1970s, the management of information largely concerned matters closer to what would now be called data management : punched cards , magnetic tapes and other record-keeping media , involving a life cycle of such formats requiring origination, distribution, backup, maintenance and disposal. At this time the huge potential of information technology began to be recognised: for example a single chip storing a whole book , or electronic mail moving messages instantly around the world, remarkable ideas at the time. [ 1 ] With the proliferation of information technology and the extending reach of information systems in the 1980s and 1990s, [ 2 ] information management took on a new form. Progressive businesses such as BP transformed the vocabulary of what was then " IT management ", so that " systems analysts " became " business analysts ", "monopoly supply" became a mixture of " insourcing " and " outsourcing ", and the large IT function was transformed into "lean teams" that began to allow some agility in the processes that harness information for business benefit . [ 3 ] The scope of senior management interest in information at BP extended from the creation of value through improved business processes , based upon the effective management of information, permitting the implementation of appropriate information systems (or " applications ") that were operated on IT infrastructure that was outsourced. 
[ 3 ] In this way, information management was no longer a simple job that could be performed by anyone who had nothing else to do; it became highly strategic and a matter for senior management attention. An understanding of the technologies involved, an ability to manage information systems projects and business change well, and a willingness to align technology and business strategies all became necessary. [ 4 ] In the transitional period leading up to the strategic view of information management, Venkatraman, a strong advocate of this transition and transformation, [ 5 ] proffered a simple arrangement of ideas that succinctly brought together the management of data, information, and knowledge. This arrangement is often referred to as the DIKAR model: Data, Information, Knowledge, Action and Result. [ 6 ] It gives a strong clue as to the layers involved in aligning technology and organisational strategies, and it can be seen as a pivotal moment in changing attitudes to information management. The recognition that information management is an investment that must deliver meaningful results is important to all modern organisations that depend on information and good decision-making for their success. [ 7 ] It is commonly believed that good information management is crucial to the smooth working of organisations, and although there is no commonly accepted theory of information management per se, behavioural and organisational theories help. Following the behavioural science theory of management, mainly developed at Carnegie Mellon University and prominently supported by March and Simon, [ 8 ] most of what goes on in modern organizations is actually information handling and decision making. One crucial factor in information handling and decision making is an individual's ability to process information and to make decisions under limitations that might derive from the context: a person's age, the situational complexity, or a lack of requisite quality in the information that is at hand – all of which is exacerbated by the rapid advance of technology and the new kinds of system that it enables, especially as the social web emerges as a phenomenon that business cannot ignore. And yet, well before there was any general recognition of the importance of information management in organisations, March and Simon [ 8 ] argued that organizations have to be considered as cooperative systems, with a high level of information processing and a vast need for decision making at various levels. Instead of using the model of the "economic man", as advocated in classical theory, [ 9 ] they proposed "administrative man" as an alternative, based on their argumentation about the cognitive limits of rationality. Additionally they proposed the notion of satisficing, which entails searching through the available alternatives until an acceptability threshold is met – another idea that still has currency. [ 10 ] In addition to the organisational factors mentioned by March and Simon, there are other issues that stem from economic and environmental dynamics. There is the cost of collecting and evaluating the information needed to take a decision, including the time and effort required. [ 11 ] The transaction cost associated with information processes can be high. In particular, established organizational rules and procedures can prevent the taking of the most appropriate decision, leading to sub-optimum outcomes.
[ 12 ] [ 13 ] This is an issue that has been presented as a major problem with bureaucratic organizations that lose the economies of strategic change because of entrenched attitudes. [ 14 ] According to the Carnegie Mellon School, an organization's ability to process information is at the core of organizational and managerial competency, and an organization's strategies must be designed to improve information processing capability; [ 15 ] as information systems that provide that capability became formalised and automated, competencies were severely tested at many levels. [ 16 ] It was recognised that organisations needed to be able to learn and adapt in ways that were never so evident before, [ 17 ] and academics began to organise and publish definitive works concerning the strategic management of information and information systems. [ 4 ] [ 18 ] Concurrently, the ideas of business process management [ 19 ] and knowledge management [ 20 ] emerged, although much of the optimistic early thinking about business process redesign has since been discredited in the information management literature. [ 21 ] In the field of strategic studies, the understanding of the information environment is considered of the highest priority. The information environment is conceived as the aggregate of individuals, organizations, and systems that collect, process, disseminate, or act on information. This environment consists of three interrelated dimensions which continuously interact with individuals, organizations, and systems. These dimensions are the physical, informational, and cognitive. [ 22 ] Venkatraman has provided a simple view of the requisite capabilities of an organisation that wants to manage information well – the DIKAR model (see above). He also worked with others to understand how technology and business strategies could be appropriately aligned in order to identify specific capabilities that are needed. [ 23 ] This work was paralleled by other writers in the world of consulting, [ 24 ] practice, [ 25 ] and academia. [ 26 ] Bytheway has collected and organised basic tools and techniques for information management in a single volume. [ 7 ] At the heart of his view of information management is a portfolio model that takes account of the surging interest in external sources of information and the need to organise unstructured external information so as to make it useful. Such an information portfolio shows how information can be gathered and usefully organised, in four stages: Stage 1: Taking advantage of public information: recognise and adopt well-structured external schemes of reference data, such as post codes, weather data, GPS positioning data and travel timetables, exemplified in the personal computing press. [ 27 ] Stage 2: Tagging the noise on the World Wide Web: use existing schemes such as post codes and GPS data, or, more typically, add "tags", or construct a formal ontology that provides structure. Shirky provides an overview of these two approaches. [ 28 ] Stage 3: Sifting and analysing: in the wider world the generalised ontologies that are under development extend to hundreds of entities and hundreds of relations between them and provide the means to elicit meaning from large volumes of data. Structured data in databases works best when that structure reflects a higher-level information model – an ontology, or an entity-relationship model.
[ 29 ] Stage 4: Structuring and archiving: with the large volume of data available from sources such as the social web and from the miniature telemetry systems used in personal health management, new ways are needed to archive and then trawl data for meaningful information. Map-reduce methods, originating from functional programming, are a more recent way of eliciting information from large archival datasets; this approach is becoming interesting to regular businesses that have very large data resources to work with, but it requires advanced multi-processor resources. [ 30 ] In 2004, the management system "Information Management Body of Knowledge" was first published on the World Wide Web [ 31 ] and set out to show that the required management competencies to derive real benefits from an investment in information are complex and multi-layered. The framework model that is the basis for understanding competencies comprises six "knowledge" areas and four "process" areas. The IMBOK is based on the argument that there are six areas of required management competency, two of which ("business process management" and "business information management") are very closely related. [ 32 ] Even with full capability and competency within the six knowledge areas, it is argued that things can still go wrong. The problem lies in the migration of ideas and information management value from one area of competency to another. Summarising what Bytheway explains in some detail (and supported by selected secondary references): [ 37 ] there are always many ways to see a business, and the information management viewpoint is only one way. Other areas of business activity will also contribute to strategy – it is not only good information management that moves a business forwards. Corporate governance, human resource management, product development and marketing will all have an important role to play in strategic ways, and we must not see one domain of activity alone as the sole source of strategic success. On the other hand, corporate governance, human resource management, product development and marketing are all dependent on effective information management, and so in the final analysis our competency to manage information well, on the broad basis that is offered here, can be said to be predominant. Organizations are often confronted with many information management challenges and issues at the operational level, especially when organisational change is engendered. The novelty of new systems architectures and a lack of experience with new styles of information management require a level of organisational change management that is notoriously difficult to deliver. As a result of a general organisational reluctance to change, to enable new forms of information management, there might be (for example): a shortfall in the requisite resources, a failure to acknowledge new classes of information and the new procedures that use them, a lack of support from senior management leading to a loss of strategic vision, and even political manoeuvring that undermines the operation of the whole organisation. [ 41 ] However, the implementation of new forms of information management should normally lead to operational benefits. In early work taking an information processing view of organisation design, Jay Galbraith identified five tactical areas to increase information processing capacity and reduce the need for information processing.
[ 42 ] The lateral relations concept leads to an organizational form that is different from the simple hierarchy: the "matrix organization". This brings together the vertical (hierarchical) view of an organisation and the horizontal (product or project) view of the work that it does, which is visible to the outside world. The creation of a matrix organization is one management response to a persistent fluidity of external demand, avoiding multifarious and spurious responses to episodic demands that tend to be dealt with individually.
https://en.wikipedia.org/wiki/Information_management
Information processes and technology ( IPT ) is the study of information systems and the processes and technology involved in them. IPT is also a subject offered to senior high school students in Australia and examined in university entrance credentials such as the HSC in New South Wales. It focuses on giving the student an understanding of information technology and information processes, the skills to create information systems, and some basic programming skills. Some of the social and ethical issues of computer systems may also be included in the course of the subject. In New South Wales, IPT is separated into the preliminary (year 11) and HSC (year 12) courses. A prerequisite for the HSC course is successful completion of the preliminary course. In June 2009, the syllabus covering the course from 2010 to 2018 was detailed by the NSW Board of Studies, [ 1 ] which was renamed the New South Wales Education Standards Authority ( NESA ) from 2019. [ 2 ] IPT is one of the HSC courses which may be accelerated – students in some schools have the option of completing it in year 10. In Queensland, the information processing and technology course is defined in a Queensland Studies Authority document. [ 3 ]
https://en.wikipedia.org/wiki/Information_processes_and_technology
In information theory, the information projection or I-projection of a probability distribution q onto a set of distributions P is

$$p^* = \arg\min_{p \in P} D_{\mathrm{KL}}(p \,\|\, q),$$

where $D_{\mathrm{KL}}$ is the Kullback–Leibler divergence from q to p. Viewing the Kullback–Leibler divergence as a measure of distance, the I-projection $p^*$ is the "closest" distribution to q of all the distributions in P. The I-projection is useful in setting up information geometry, notably because of the following inequality, valid when P is convex: [ 1 ]

$$D_{\mathrm{KL}}(p \,\|\, q) \geq D_{\mathrm{KL}}(p \,\|\, p^*) + D_{\mathrm{KL}}(p^* \,\|\, q).$$

This inequality can be interpreted as an information-geometric version of the Pythagorean theorem, where KL divergence is viewed as squared distance in a Euclidean space. It is worthwhile to note that since $D_{\mathrm{KL}}(p \,\|\, q) \geq 0$ and is continuous in p, if P is closed and non-empty, then there exists at least one minimizer to the optimization problem framed above. Furthermore, if P is convex, then the optimal distribution is unique. The reverse I-projection, also known as the moment projection or M-projection, is

$$p^* = \arg\min_{p \in P} D_{\mathrm{KL}}(q \,\|\, p).$$

Since the KL divergence is not symmetric in its arguments, the I-projection and the M-projection exhibit different behavior. For the I-projection, $p(x)$ will typically under-estimate the support of $q(x)$ and will lock onto one of its modes. This is because $p(x) = 0$ whenever $q(x) = 0$, in order to keep the KL divergence finite. For the M-projection, $p(x)$ will typically over-estimate the support of $q(x)$. This is because $p(x) > 0$ whenever $q(x) > 0$, in order to keep the KL divergence finite. The reverse I-projection plays a fundamental role in the construction of optimal e-variables. The concept of information projection can be extended to arbitrary f-divergences and other divergences. [ 2 ]
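The mode-seeking versus support-covering behavior described above can be observed numerically. In the following sketch a bimodal target q is projected onto the binomial family; the choice of target and family is an assumption made for illustration, and the I-projection may settle on either of the two modes.

```python
import numpy as np
from scipy.stats import binom
from scipy.optimize import minimize_scalar

k = np.arange(10)
# Bimodal target q: equal mixture of Binomial(9, 0.15) and Binomial(9, 0.85)
q = 0.5 * binom.pmf(k, 9, 0.15) + 0.5 * binom.pmf(k, 9, 0.85)

def kl(a, b):
    # Kullback-Leibler divergence between discrete distributions a and b
    return float(np.sum(a * np.log(a / b)))

family = lambda t: binom.pmf(k, 9, t)   # candidate set P = {Binomial(9, t)}

# I-projection: argmin_p KL(p || q) over the family -- mode-seeking
i_proj = minimize_scalar(lambda t: kl(family(t), q),
                         bounds=(0.01, 0.99), method="bounded")
# M-projection (reverse I-projection): argmin_p KL(q || p) -- covering
m_proj = minimize_scalar(lambda t: kl(q, family(t)),
                         bounds=(0.01, 0.99), method="bounded")

print(f"I-projection theta = {i_proj.x:.2f}")  # near one mode (0.15 or 0.85)
print(f"M-projection theta = {m_proj.x:.2f}")  # near 0.5, matching q's mean
```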
https://en.wikipedia.org/wiki/Information_projection
In relational databases, the information schema ( information_schema ) is an ANSI-standard set of read-only views that provide information about all of the tables, views, columns, and procedures in a database. [ 1 ] It can be used as a source of the information that some databases make available through non-standard commands, such as MySQL's SHOW statements. As a notable exception among major database systems, Oracle does not as of 2015 implement the information schema; an open-source project exists to address this. RDBMSs that support information_schema include MySQL, PostgreSQL, and Microsoft SQL Server, among others; Oracle is among the RDBMSs that do not support it.
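Because the views are ANSI-standard, the same queries work across systems that implement them. The sketch below lists column metadata from a PostgreSQL database; the connection string, database name, and schema are illustrative assumptions.

```python
# Sketch: listing table and column metadata via the ANSI-standard
# information_schema views. Assumes a reachable PostgreSQL server;
# the DSN below is illustrative only.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=myuser password=secret host=localhost")
cur = conn.cursor()

# This query is standard and works unchanged on any RDBMS
# that implements information_schema.
cur.execute("""
    SELECT table_name, column_name, data_type
    FROM information_schema.columns
    WHERE table_schema = 'public'
    ORDER BY table_name, ordinal_position
""")
for table, column, dtype in cur.fetchall():
    print(f"{table}.{column}: {dtype}")
conn.close()
```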
https://en.wikipedia.org/wiki/Information_schema
The information set is the basis for decision making in a game, which includes the actions available to both sides and the benefits of each action. The information set is an important concept in games of imperfect information. In game theory, an information set represents all possible points (or decision nodes) in a game that a given player might be at during their turn, based on their current knowledge and observations. These nodes are indistinguishable to the player due to incomplete information about previous actions or the state of the game. An information set therefore groups together all decision points where the player, given what they know, cannot tell which specific point they are currently at. If the game has perfect information, every information set contains only one member, namely the point actually reached at that stage of the game, since each player knows the exact mix of chance moves and player strategies up to the current point in the game. Otherwise, some players cannot be sure what the game state is; for instance, they may not know what exactly happened in the past or what should be done right now. Information sets are used in extensive form games and are often depicted in game trees. Game trees show the path from the start of a game and the subsequent paths that can be taken depending on each player's next move. In games of imperfect information there is hidden information: each player lacks complete knowledge of the opponent's information, such as the cards that have not appeared in a poker game. When constructing a game tree, it can be challenging for a player to determine their exact location within the tree solely on the basis of their knowledge and observations, because players may lack complete information about the actions or strategies of their opponents. As a result, a player may only be certain that they are at one of several possible nodes; the collection of these indistinguishable nodes at a given point is called the information set. Information sets are typically depicted in game trees using dotted lines, circles, or labels on the vertices that show a particular player's options at the current stage of the game. More specifically, in the extensive form, an information set is a set of decision nodes such that every node in the set belongs to the same player and, when play reaches the set, that player cannot distinguish between its nodes; in particular, every node in the set must offer the same set of available actions. Games in extensive form often involve each player making multiple moves, which results in the formation of multiple information sets as well. A player makes a choice at each of these vertices based on the options in the information set. The complete assignment of such choices is known as the player's strategy, and the strategies of all players determine a path from the start of the game to the end, known as the play of the game. From the play of the game, the outcome will always be known from the strategy of each player, unless chance moves are involved, in which case there will not always be a single outcome. Not all plays of a game are determined by strategy alone, since games can also involve chance moves. When chance moves are involved, a vector of strategies results in a probability distribution over the possible outcomes of the game; some outcomes may have higher probabilities than others, depending on the strength of the strategies played.
When there are multiple information sets in a game, the game transforms from a static game into a dynamic game. The key to solving a dynamic game is to work out each player's information sets and the decisions to be made at different stages. For example, when player A chooses first, player B will make the best decision available given A's choice; player A, in turn, can predict B's reaction and make a choice in A's own favour. The notion of information set was introduced by John von Neumann, motivated by studying the game of poker. Consider two versions of the battle of the sexes game in extensive form (each also has a corresponding normal form). The first game is simply sequential: when player 2 makes a choice, both parties are already aware of whether player 1 has chosen O(pera) or F(ootball). The second game is also sequential, but player 2's two decision nodes lie in a single information set, conventionally drawn as a dotted line joining them in the game tree: when player 2 moves, he or she is not aware of what player 1 did. This difference also leads to different predictions for the two games. In the first game, player 1 has the upper hand: they know that they can choose O(pera) safely, because once player 2 knows that player 1 has chosen opera, player 2 would rather go along for o(pera) and get 2 than choose f(ootball) and get 0. Formally, that is applying subgame perfection to solve the game. In the second game, player 2 cannot observe what player 1 did, so it might as well be a simultaneous game. Subgame perfection therefore does not get us anything that Nash equilibrium cannot, and we have the standard three possible equilibria: both players choose opera, both players choose football, or both randomise between the two in a mixed-strategy equilibrium.
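The role of the information set can be made concrete in code: an information set is simply a set of decision nodes over which a strategy must prescribe a single action. The following minimal sketch encodes both versions of the game above; the node names are invented for the example, and the payoffs are the usual battle-of-the-sexes values assumed to match the discussion ((O, o) = (3, 2), (F, f) = (2, 3), and (0, 0) on a mismatch).

```python
# Player 2 has decision nodes after O and after F. In the sequential
# version each node is its own information set; in the simultaneous
# version both nodes share one set, so player 2's strategy cannot
# condition on player 1's move. Payoffs are assumed standard values.

PAYOFFS = {("O", "o"): (3, 2), ("O", "f"): (0, 0),
           ("F", "o"): (0, 0), ("F", "f"): (2, 3)}

def play(p1_move, p2_strategy, info_partition):
    # Find the information set containing player 2's actual node, then
    # apply the single action the strategy assigns to that whole set.
    node = f"after_{p1_move}"
    info_set = next(s for s in info_partition if node in s)
    return PAYOFFS[(p1_move, p2_strategy[info_set])]

perfect_info = [frozenset({"after_O"}), frozenset({"after_F"})]
imperfect_info = [frozenset({"after_O", "after_F"})]   # one information set

# With perfect information player 2 can tailor a reply to each node:
s2 = {frozenset({"after_O"}): "o", frozenset({"after_F"}): "f"}
print(play("O", s2, perfect_info))       # (3, 2): player 2 goes along

# With one information set a strategy must pick one action for both nodes:
s2 = {frozenset({"after_O", "after_F"}): "f"}
print(play("O", s2, imperfect_info))     # (0, 0): coordination can fail
```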
https://en.wikipedia.org/wiki/Information_set_(game_theory)
An information silo, or a group of such silos, is an insular management system in which one information system or subsystem is incapable of reciprocal operation with others that are, or should be, related. Thus information is not adequately shared but rather remains sequestered within each system or subsystem, figuratively trapped within a container as grain is trapped within a silo: there may be much of it, and it may be stacked quite high and be freely available within those limits, but it has no effect outside them. Such data silos are proving an obstacle for businesses wishing to use data mining to make productive use of their data. Information silos occur whenever a data system is incompatible, or not integrated, with other data systems. This incompatibility may occur in the technical architecture, in the application architecture, or in the data architecture of a data system. However, since it has been shown that established data-modeling methods are the root cause of the data-integration problem, [ 1 ] most data systems are at least incompatible in the data-architecture layer. In understanding organizational behaviour, the term silo mentality [ 2 ] often refers to a mindset which creates and maintains information silos within an organization. A silo mentality is created by the divergent goals of different organizational units: it is defined by the Business Dictionary as "a mindset present when certain departments or sectors do not wish to share information with others in the same company". [ 3 ] It can also be described as a variant of the principal–agent problem. A silo mentality primarily occurs in larger organizations and can lead to poorer performance and a negative impact on the corporate culture. Silo mentalities can be countered by the introduction of shared goals, the increase of internal networking activities and the flattening of hierarchies. [ 4 ] Gleeson and Rozo suggest that: The silo mindset does not appear accidentally ... more often than not, silos are the result of a conflicted leadership team ... A unified leadership team will encourage trust, create empowerment, and break managers out of their "my department" mentality and into the "our organization" mentality. [ 3 ] The term functional silo syndrome was coined in 1988 by Phil S. Ensor (1931–2018), who worked in organizational development and employee relations for Goodyear Tire and Rubber Company and Eaton Corporation, and as a consultant. Silo and stovepipe (as in "stovepipe organization" and "stovepipe system") are now used interchangeably and applied broadly. Phil Ensor's use of the term silo reflects his rural Illinois origins and the many grain silos he would pass on return visits as he contemplated the challenges of the modern organizations with which he worked. [ 5 ] [ 6 ] [ 7 ] [ 8 ]
https://en.wikipedia.org/wiki/Information_silo
In mathematics, an information source is a sequence of random variables ranging over a finite alphabet Γ, having a stationary distribution. The uncertainty, or entropy rate, of an information source is defined as

$$H\{\mathbf{X}\} = \lim_{n \to \infty} H(X_n \mid X_1, X_2, \dots, X_{n-1}),$$

where $X_1, X_2, \dots, X_n, \dots$ is the sequence of random variables defining the information source, and $H(X_n \mid X_1, X_2, \dots, X_{n-1})$ is the conditional information entropy of the sequence of random variables. Equivalently, one has

$$H\{\mathbf{X}\} = \lim_{n \to \infty} \frac{H(X_1, X_2, \dots, X_n)}{n}.$$
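For a stationary Markov source, the limit above has a closed form, $H = -\sum_i \pi_i \sum_j P_{ij} \log P_{ij}$, where $P$ is the transition matrix and $\pi$ its stationary distribution. A minimal sketch follows, with the two-state transition matrix chosen arbitrarily for illustration.

```python
import numpy as np

# Assumed two-state Markov source over the alphabet {0, 1}
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Stationary distribution: left eigenvector of P for eigenvalue 1
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.isclose(evals, 1)].ravel())
pi /= pi.sum()

# Entropy rate in bits: H = -sum_i pi_i sum_j P_ij log2 P_ij
H = -sum(pi[i] * P[i, j] * np.log2(P[i, j])
         for i in range(2) for j in range(2))
print(f"entropy rate = {H:.4f} bits per symbol")
```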
https://en.wikipedia.org/wiki/Information_source_(mathematics)
An information system ( IS ) is a formal, sociotechnical, organizational system designed to collect, process, store, and distribute information. [ 1 ] From a sociotechnical perspective, information systems comprise four components: task, people, structure (or roles), and technology. [ 2 ] Information systems can be defined as an integration of components for collection, storage and processing of data, comprising digital products that process data to facilitate decision making, [ 3 ] with the data being used to provide information and contribute to knowledge. A computer information system is a system that consists of people and computers that process or interpret information. [ 4 ] [ 5 ] [ 6 ] [ 7 ] The term is also sometimes used to refer simply to a computer system with software installed. " Information systems " is also an academic field of study about systems with a specific reference to information and the complementary networks of computer hardware and software that people and organizations use to collect, filter, process, create and also distribute data. [ 8 ] An emphasis is placed on an information system having a definitive boundary, users, processors, storage, inputs, outputs and the aforementioned communication networks. [ 9 ] In many organizations, the department or unit responsible for information systems and data processing is known as " information services ". [ 10 ] [ 11 ] [ 12 ] [ 13 ] Any specific information system aims to support operations, management and decision-making. [ 14 ] [ 15 ] An information system is the information and communication technology (ICT) that an organization uses, and also the way in which people interact with this technology in support of business processes. [ 16 ] Some authors make a clear distinction between information systems, computer systems, and business processes. Information systems typically include an ICT component but are not purely concerned with ICT, focusing instead on the end-use of information technology. Information systems are also different from business processes: information systems help to control the performance of business processes. [ 17 ] Alter [ 18 ] [ 19 ] argues that viewing an information system as a special type of work system has its advantages. A work system is a system in which humans or machines perform processes and activities using resources to produce specific products or services for customers. An information system is a work system in which activities are devoted to capturing, transmitting, storing, retrieving, manipulating and displaying information. [ 20 ] As such, information systems inter-relate with data systems on the one hand and activity systems on the other. [ 21 ] An information system is a form of communication system in which data represent and are processed as a form of social memory. An information system can also be considered a semi-formal language which supports human decision making and action. Information systems are the primary focus of study for organizational informatics. [ 22 ] Silver et al. (1995) provided two views on IS that include software, hardware, data, people, and procedures. [ 23 ] The Association for Computing Machinery defines "Information systems specialists [as] focus[ing] on integrating information technology solutions and business processes to meet the information needs of businesses and other enterprises."
[ 24 ] There are various types of information systems, including transaction processing systems, decision support systems, knowledge management systems, learning management systems, database management systems, and office information systems. Critical to most information systems are information technologies, which are typically designed to enable humans to perform tasks for which the human brain is not well suited, such as handling large amounts of information, performing complex calculations, and controlling many simultaneous processes. Information technologies are a very important and malleable resource available to executives. [ 25 ] Many companies have created a position of chief information officer (CIO) that sits on the executive board with the chief executive officer (CEO), chief financial officer (CFO), chief operating officer (COO), and chief technical officer (CTO). The CTO may also serve as CIO, and vice versa. The chief information security officer (CISO) focuses on information security management. Six components must come together in order to produce an information system. [ 26 ] Among them, data is the bridge between hardware and people: the data we collect is only data until we involve people, at which point data becomes information. The "classic" view of information systems found in textbooks [ 28 ] in the 1980s was a pyramid of systems that reflected the hierarchy of the organization, usually transaction processing systems at the bottom of the pyramid, followed by management information systems, decision support systems, and ending with executive information systems at the top. Although the pyramid model remains useful since it was first formulated, a number of new technologies have been developed and new categories of information systems have emerged, some of which no longer fit easily into the original pyramid model. A computer(-based) information system is essentially an IS using computer technology to carry out some or all of its planned tasks. The basic components of computer-based information systems are hardware, software, databases, and networks, together with the procedures and people that use them. The first four components (hardware, software, database, and network) make up what is known as the information technology platform. Information technology workers can then use these components to create information systems that watch over safety measures, risk and the management of data. These actions are known as information technology services. [ 29 ] Certain information systems support parts of organizations, others support entire organizations, and still others support groups of organizations. Each department or functional area within an organization has its own collection of application programs or information systems. These functional area information systems (FAIS) are supporting pillars for more general IS, namely business intelligence systems and dashboards. As the name suggests, each FAIS supports a particular function within the organization, e.g. accounting IS, finance IS, production-operation management (POM) IS, marketing IS, and human resources IS. In finance and accounting, managers use IT systems to forecast revenues and business activity, to determine the best sources and uses of funds, and to perform audits to ensure that the organization is fundamentally sound and that all financial reports and documents are accurate.
Other types of organizational information systems are FAIS, transaction processing systems, enterprise resource planning, office automation systems, management information systems, decision support systems, expert systems, executive dashboards, supply chain management systems, and electronic commerce systems. Dashboards are a special form of IS that support all managers of the organization. They provide rapid access to timely information and direct access to structured information in the form of reports. Expert systems attempt to duplicate the work of human experts by applying reasoning capabilities, knowledge, and expertise within a specific domain. Information technology departments in larger organizations tend to strongly influence the development, use, and application of information technology in the business. A series of methodologies and processes can be used to develop and use an information system. Many developers use a systems engineering approach, such as the system development life cycle (SDLC), to systematically develop an information system in stages. The stages of the system development life cycle are planning, system analysis and requirements, system design, development, integration and testing, implementation and operations, and maintenance. Recent research aims at enabling [ 30 ] and measuring [ 31 ] the ongoing, collective development of such systems within an organization by the entirety of human actors themselves. An information system can be developed in house (within the organization) or outsourced. This can be accomplished by outsourcing certain components or the entire system. [ 32 ] A specific case is the geographical distribution of the development team (offshoring, global information system). A computer-based information system, following a definition of Langefors, [ 33 ] is a technologically implemented medium for recording, storing, and disseminating linguistic expressions, as well as for drawing conclusions from such expressions. Geographic information systems, land information systems, and disaster information systems are examples of emerging information systems, but they can be broadly considered as spatial information systems. System development is done in stages such as those of the SDLC outlined above. [ 34 ] The field of study called information systems encompasses a variety of topics including systems analysis and design, computer networking, information security, database management, and decision support systems. Information management deals with the practical and theoretical problems of collecting and analyzing information in a business function area, including business productivity tools, applications programming and implementation, electronic commerce, digital media production, data mining, and decision support. Communications and networking deals with telecommunication technologies. Information systems bridges business and computer science, using the theoretical foundations of information and computation to study various business models and related algorithmic processes [ 35 ] in building the IT systems [ 36 ] [ 37 ] within a computer science discipline. [ 38 ] [ 39 ] [ 40 ] [ 41 ] [ 42 ] [ 43 ] [ 44 ] [ 45 ] [ 46 ] [ 47 ] [ 48 ] [ 49 ] [ 50 ] Computer information systems (CIS) is a field studying computers and algorithmic processes, including their principles, their software and hardware designs, their applications, and their impact on society, [ 51 ] [ 52 ] [ 53 ] whereas IS emphasizes functionality over design.
Several IS scholars have debated the nature and foundations of information systems, which has its roots in other reference disciplines such as computer science , engineering , mathematics , management science , cybernetics , and others. [ 55 ] [ 56 ] [ 57 ] [ 58 ] Information systems can also be defined as a collection of hardware, software, data, people, and procedures that work together to produce quality information. Similar to computer science, other disciplines can be seen as both related and foundation disciplines of IS. The domain of study of IS involves theories and practices related to the social and technological phenomena that determine the development, use, and effects of information systems in organizations and society. [ 59 ] However, while there may be considerable overlap of the disciplines at the boundaries, the disciplines are still differentiated by the focus, purpose, and orientation of their activities. [ 60 ] In a broad scope, information systems is a scientific field of study that addresses the range of strategic, managerial, and operational activities involved in gathering, processing, storing, distributing, and using information and its associated technologies in society and organizations. [ 60 ] The term information systems is also used to describe an organizational function that applies IS knowledge in industry, government agencies, and not-for-profit organizations. [ 60 ] Information systems often refers to the interaction between algorithmic processes and technology. This interaction can occur within or across organizational boundaries. An information system is the technology an organization uses, the way in which the organization interacts with that technology, and the way in which the technology works with the organization's business processes. Information systems are distinct from information technology (IT) in that an information system has an information technology component that interacts with the processes' components. One problem with that approach is that it prevents the IS field from being interested in non-organizational use of ICT, such as in social networking, computer gaming, mobile personal usage, etc. A different way of differentiating the IS field from its neighbours is to ask, "Which aspects of reality are most meaningful in the IS field and other fields?" [ 61 ] This approach, based on philosophy, helps to define not just the focus, purpose, and orientation, but also the dignity, destiny, and responsibility of the field among other fields. [ 62 ] Business informatics is a related discipline that is well established in several countries, especially in Europe. While information systems has been said to have an "explanation-oriented" focus, business informatics has a more "solution-oriented" focus and includes information technology elements as well as construction- and implementation-oriented elements. Information systems workers enter a number of different careers, and there is a wide variety of career paths in the information systems discipline. "Workers with specialized technical knowledge and strong communications skills will have the best prospects. Workers with management skills and an understanding of business practices and principles will have excellent opportunities, as companies are increasingly looking to technology to drive their revenue." [ 63 ] Information technology is important to the operation of contemporary businesses, and it offers many employment opportunities.
The information systems field includes the people in organizations who design and build information systems, the people who use those systems, and the people responsible for managing those systems. The demand for traditional IT staff such as programmers, business analysts, systems analysts, and designers is significant. Many well-paid jobs exist in areas of information technology. At the top of the list is the chief information officer (CIO), the executive who is in charge of the IS function. In most organizations, the CIO works with the chief executive officer (CEO), the chief financial officer (CFO), and other senior executives, and therefore actively participates in the organization's strategic planning process. Information systems research is generally interdisciplinary and concerned with the study of the effects of information systems on the behaviour of individuals, groups, and organizations. [ 68 ] [ 69 ] Hevner et al. (2004) [ 70 ] categorized research in IS into two scientific paradigms: behavioural science, which seeks to develop and verify theories that explain or predict human or organizational behavior, and design science, which extends the boundaries of human and organizational capabilities by creating new and innovative artifacts. Salvatore March and Gerald Smith [ 71 ] proposed a framework for researching different aspects of information technology in terms of outputs of the research (research outputs) and activities to carry out this research (research activities). They identified the research outputs as constructs, models, methods, and instantiations, and the research activities as building, evaluating, theorizing, and justifying. Although information systems as a discipline has been evolving for over 30 years now, [ 72 ] the core focus or identity of IS research is still subject to debate among scholars. [ 73 ] [ 74 ] [ 75 ] There are two main views around this debate: a narrow view focusing on the IT artifact as the core subject matter of IS research, and a broad view that focuses on the interplay between social and technical aspects of IT embedded in a dynamic, evolving context. [ 76 ] A third view [ 77 ] calls on IS scholars to pay balanced attention to both the IT artifact and its context. Since the study of information systems is an applied field, industry practitioners expect information systems research to generate findings that are immediately applicable in practice. This is not always the case, however, as information systems researchers often explore behavioral issues in much more depth than practitioners would expect them to. This may render information systems research results difficult to understand and has led to criticism. [ 78 ] In the last ten years, the role of the Information Systems Function (ISF) has increased considerably, especially with regard to supporting enterprise strategies and operations; it has become a key factor in increasing productivity and supporting value creation . [ 79 ] To study an information system itself, rather than its effects, information systems models are used, such as EATPUT . The international body of information systems researchers, the Association for Information Systems (AIS), and its Senior Scholars Forum Subcommittee on Journals (202), proposed a list of 11 journals that the AIS deems 'excellent'. [ 80 ] According to the AIS, this list of journals recognizes topical, methodological, and geographical diversity.
The review processes are stringent, editorial board members are widely respected and recognized, and there is international readership and contribution. The list is (or should be) used, along with others, as a point of reference for promotion and tenure and, more generally, to evaluate scholarly excellence. A number of annual information systems conferences are run in various parts of the world, the majority of which are peer reviewed. The AIS directly runs the International Conference on Information Systems (ICIS) and the Americas Conference on Information Systems (AMCIS), while AIS-affiliated conferences [ 81 ] include the Pacific Asia Conference on Information Systems (PACIS), the European Conference on Information Systems (ECIS), the Mediterranean Conference on Information Systems (MCIS), the International Conference on Information Resources Management (Conf-IRM), and the Wuhan International Conference on E-Business (WHICEB). AIS chapter conferences [ 82 ] include the Australasian Conference on Information Systems (ACIS), Scandinavian Conference on Information Systems (SCIS), Information Systems International Conference (ISICO), Conference of the Italian Chapter of AIS (itAIS), Annual Mid-Western AIS Conference (MWAIS), and Annual Conference of the Southern AIS (SAIS). EDSIG, [ 83 ] the special interest group on education of the AITP, [ 84 ] organizes the Conference on Information Systems and Computing Education [ 85 ] and the Conference on Information Systems Applied Research, [ 86 ] both held annually in November.
https://en.wikipedia.org/wiki/Information_system
The information systems success model (alternatively IS success model or DeLone and McLean IS success model ) is an information systems (IS) theory that seeks to provide a comprehensive understanding of IS success by identifying, describing, and explaining the relationships among six of the most critical dimensions of success along which information systems are commonly evaluated. Initial development of the theory was undertaken by William H. DeLone and Ephraim R. McLean in 1992, [ 1 ] and it was further refined by the original authors a decade later in response to feedback received from other scholars working in the area. [ 2 ] [ 3 ] The IS success model has been cited in thousands of scientific papers and is considered to be one of the most influential theories in contemporary information systems research. The IS success model identifies and describes the relationships among six critical dimensions of IS success: information quality, system quality, service quality, system use/usage intentions, user satisfaction, and net system benefits. Information quality refers to the quality of the information that the system is able to store, deliver, or produce, and is one of the most common dimensions along which information systems are evaluated. Information quality impacts both a user's satisfaction with the system and the user's intentions to use the system, which, in turn, impact the extent to which the system is able to yield benefits for the user and the organization. As with information quality, the overall quality of a system is also one of the most common dimensions along which information systems are evaluated. System quality indirectly impacts the extent to which the system is able to deliver benefits by means of mediational relationships through the usage intentions and user satisfaction constructs. Along with information quality and system quality, information systems are also commonly evaluated according to the quality of service that they are able to deliver. Service quality directly impacts usage intentions and user satisfaction with the system, which, in turn, impact the net benefits produced by the system. Intentions to use an information system and actual system use are well-established constructs in the information systems literature. In the IS success model, system use and usage intentions are influenced by information, system, and service quality. System use is posited to influence a user's satisfaction with the information system, which, in turn, is posited to influence usage intentions. In conjunction with user satisfaction, system use directly affects the net benefits that the system is able to provide. User satisfaction is likewise influenced by system use and by information, system, and service quality. Like actual system use, user satisfaction directly influences the net benefits provided by an information system. Satisfaction refers to the extent to which a user is pleased or contented with the information system, and is posited to be directly affected by system use. The net benefit that an information system is able to deliver is an important facet of the overall value of the system to its users or to the underlying organization. In the IS success model, net system benefits are affected by system use and by user satisfaction with the system. In their own right, system benefits are posited to influence both user satisfaction and a user's intentions to use the system.
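The posited causal links among the six dimensions can be traced programmatically. Below is a minimal Python sketch; the graph representation and all names are illustrative choices, not an established API, and the intentions-to-use link is a common modelling assumption rather than a statement from the model's authors.

    # Sketch: the IS success model's posited links encoded as a directed
    # graph, so mediational paths described above can be traced.
    from collections import defaultdict

    # Edges run from an influencing dimension to the dimension it affects.
    IS_SUCCESS_LINKS = [
        ("information_quality", "use_intentions"),
        ("information_quality", "user_satisfaction"),
        ("system_quality", "use_intentions"),
        ("system_quality", "user_satisfaction"),
        ("service_quality", "use_intentions"),
        ("service_quality", "user_satisfaction"),
        ("use_intentions", "system_use"),      # assumption: intention precedes use
        ("system_use", "user_satisfaction"),
        ("user_satisfaction", "use_intentions"),
        ("system_use", "net_benefits"),
        ("user_satisfaction", "net_benefits"),
        ("net_benefits", "user_satisfaction"),  # feedback posited by the model
        ("net_benefits", "use_intentions"),     # feedback posited by the model
    ]

    def downstream(dimension: str) -> set:
        """Return every dimension reachable from `dimension` along the links."""
        graph = defaultdict(list)
        for src, dst in IS_SUCCESS_LINKS:
            graph[src].append(dst)
        seen, stack = set(), [dimension]
        while stack:
            node = stack.pop()
            for nxt in graph[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    # System quality reaches net benefits only via use and satisfaction,
    # illustrating the mediational relationships described above.
    print(downstream("system_quality"))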
https://en.wikipedia.org/wiki/Information_systems_success_model
An information systems technician is a technician whose responsibility is maintaining communications and computer systems. [ 1 ] Information systems technicians operate and maintain information systems , facilitating system utilization. In many companies, these technicians assemble data sets and other details needed to build databases . This includes data management , procedure writing, writing job setup instructions, and performing program librarian functions. Information systems technicians assist in designing and coordinating the development of integrated information system databases. Information systems technicians also help maintain Internet and intranet websites . They decide how information is presented and create digital multimedia and presentations using software and related equipment. Information systems technicians install and maintain multi-platform networking computer environments, a variety of data networks, and a diverse set of telecommunications infrastructures. Information systems technicians schedule information gathering for content in a multiple-system environment. Information systems technicians are responsible for the operation, programming, and configuration of many pieces of electronics , hardware, and software . They are also often tasked to investigate, troubleshoot, and resolve end-user problems. Information systems technicians conduct ongoing assessments of short- and long-term hardware and software needs for companies, developing, testing, and implementing new and revised programs. Information systems technicians cooperate with other staff to inventory, maintain, and manage computer and communication systems. Information systems technicians provide communication links and connectivity to departments in an organization, and carry out equipment modification and installation tasks. Additionally, information systems technicians can conduct training and provide technical support to end-users, sometimes for entire departments or across multiple organizations.
https://en.wikipedia.org/wiki/Information_systems_technician
An information technology generalist is a technology professional proficient in many facets of information technology without any specific specialty. Furthermore, an IT generalist is generally considered to possess general business knowledge and soft skills allowing them to be adaptable in a wide array of work environments. [ 1 ] The IT generalist is often able to fulfill many different roles within a company depending on specific technology needs. In a small business environment, budget constraints often mean that many different facets of technology are delegated to a single individual, especially since a small business will often require one person proficient in desktop support, web page design , databases, phone systems, and even server administration. Within a larger company, however, the role of the IT generalist often becomes more that of a project leader or integration specialist, because project teams consist of IT specialists of varying kinds and interfacing with end-users requires soft skills. [ 2 ] The information technology industry consists of many disparate technologies that each serve a critical piece of the total technology puzzle. As technology practices change, new methods, techniques, and tools become available that often require human expertise to implement and maintain. The human expertise required to manage these new systems has given rise to what is defined as an IT specialist: someone with an expert level of competency and knowledge of a particular piece of technology. In comparison, the IT generalist does not possess the expert knowledge to implement an advanced system but instead possesses the knowledge and experience to ensure that all of the disparate technologies can function properly together. [ 3 ] The generalist has become an increasingly valuable asset to companies, especially in the rapidly changing field of technology. In many cases, companies are more apt to hire and retain employees that are multi-functional, especially those with analytical abilities such as critical reasoning and statistical analysis. [ 4 ] Market trends tend to be moving away from the hiring of IT specialists and toward individuals who possess a broader technical base with additional soft skills such as enthusiasm, passion, and energy. In some cases, IT specialists with years of experience may even be passed over for more malleable hires because, as technology changes, innovation may be stifled by specialists who are unable to adjust to new procedures and processes in the market. [ 5 ] The attributes associated with an IT generalist have been examined in other occupational fields as well. In a study conducted over five years with over 250 political science specialists, their political predictions were recorded along with those of a large sample of generalists, to determine whether the specialists, with their expert knowledge, were more effective in their forecasts. The study discovered that those with limited political science exposure were more accurate in their predictions, most likely as a result of their more varied and less focused exposure to political science compared with the specialists. [ 6 ] The debate between the IT generalist and the specialist has been going on for years, as companies try to determine how best to utilize the skills and assets of their IT departments.
Until recently, the general consensus of the market favored the hiring of IT specialists, driven by the implementation of advanced technologies such as cloud computing, next-generation mobile application development, and the virtualization of technology infrastructure. It has been found, however, that the hiring of specialists has led to the rise of silos of talent within companies, leading to difficulty implementing new business processes and taking advantage of inter-departmental collaboration. [ 7 ] This compartmentalization of knowledge and skills has led companies to shift their focus to hiring IT generalists for the purpose of building a more mobile, adaptable, and diverse technology department. According to multiple studies [ 8 ] in 2010, surveyed companies noted that their goal was to reduce the number of specialists they hire and instead hire “versatilists.” These versatilists are synonymous with IT generalists because they have an overall general sense of technology as well as business knowledge, while also possessing “soft skills” that are considered lacking in technology-minded individuals focused on specific technology skill-sets. [ 9 ]
https://en.wikipedia.org/wiki/Information_technology_generalist
The information–action ratio is a concept coined by cultural critic Neil Postman in his work Amusing Ourselves to Death . In short, Postman used it to indicate the relationship between a piece of information and what action, if any, a consumer of that information might reasonably be expected to take after learning it. In a speech to the German Informatics Society ( Gesellschaft für Informatik ) on October 11, 1990 in Stuttgart, sponsored by IBM -Germany, Neil Postman said the following: "The tie between information and action has been severed. Information is now a commodity that can be bought and sold, or used as a form of entertainment, or worn like a garment to enhance one's status. It comes indiscriminately, directed at no one in particular, disconnected from usefulness; we are glutted with information, drowning in information, have no control over it, don't know what to do with it." [ 1 ] In Amusing Ourselves to Death , Postman frames the information–action ratio in the context of the telegraph's invention. Prior to the telegraph, Postman says, people received information relevant to their lives, creating a high correlation between information and action: "The information-action ratio was sufficiently close so that most people had a sense of being able to control some of the contingencies in their lives" (p. 69). The telegraph allowed bits of information to travel long distances, and so Postman claims "the local and the timeless ... lost their central position in newspapers, eclipsed by the dazzle of distance and speed ... Wars, crimes, crashes, fires, floods—much of it the social and political equivalent of Adelaide's whooping coughs —became the content of what people called 'the news of the day'" (pp. 66–67). A high information–action ratio, therefore, refers to the helplessness people confront when faced with decontextualized information. Someone may know Adelaide has the whooping cough, but what could anyone do about it? Postman said that this kind of access to decontextualized information "made the relationship between information and action both abstract and remote," leaving information consumers "faced with the problem of a diminished social and political potency." The term was referenced in Arctic Monkeys ' song " Four Out of Five " from the band's 2018 album Tranquility Base Hotel & Casino , in which Information Action Ratio is the name of a fictional taqueria in the Moon-based hotel. [ 2 ]
https://en.wikipedia.org/wiki/Information–action_ratio
Informatization or informatisation refers to the extent to which a geographical area, an economy, or a society is becoming information-based, i.e. the increase in the size of its information labor force. Usage of the term was inspired by Marc Porat 's categories of ages of human civilization: the Agricultural Age, the Industrial Age, and the Information Age (1978). Informatization is to the Information Age what industrialization was to the Industrial Age. The term has mostly been used within the context of national development. Everett Rogers defines informatization as the process through which new communication technologies are used as a means for furthering development as a nation becomes more and more an information society . [ 1 ] However, some observers, such as Alexander Flor ( 1986 ), have cautioned about the negative impact of informatization on traditional societies. Recently, the technological determinism dimension of informatization has been highlighted. Randy Kluver of Texas A&M University defines informatization as the process primarily by which information technologies, such as the World Wide Web and other communication technologies, have transformed economic and social relations to such an extent that cultural and economic barriers are minimized. Kluver expands the concept to encompass the civic and cultural arenas: he believes that it is a process whereby information and communication technologies shape cultural and civic discourse. G. Wang (1994) describes the same phenomenon, which she calls "informatization", as a process of change that features (a) the use of information and IT (information technologies) to such an extent that they become the dominant forces in commanding economic, political, social, and cultural development; and (b) unprecedented growth in the speed, quantity, and popularity of information production and distribution. The term informatisation was coined by Simon Nora and Alain Minc in their publication L'Informatisation de la société: Rapport à M. le Président de la République , [ 2 ] which was translated into English in 1980 as The Computerization of Society: A report to the President of France . [ 3 ] ( SAOUG ) However, in an article published in 1987, Minc preferred to use informatisation rather than computerization . After the 1978 publication, the concept was adopted in the French, German, and English subject literatures and was broadened to include more aspects than only computers and telecommunications ( SAOUG ). Informatization has many far-reaching consequences in society. Kim ( 2004 ) observes that these include repercussions in economics, politics, and other aspects of modern living. In the economic sphere, for example, information is viewed as a focal resource for development, replacing the centrality of labor and capital during the industrial age. In the political arena, there are increased opportunities for participative democracy with the advent of information and communication technology (ICT), which provides easy access to information on varied social and political issues. Just as industrialization propelled the transformation of agricultural economies into modernized ones, informatization ushered the industrial age into an information-rich economy. Unlike the agricultural and industrial ages, where economics refers to the optimization of scarce resources, the information age deals with the maximization of abundant resources.
Alexander Flor ( 2008 ) wrote that informatization gives rise to information-based economies and societies wherein information naturally becomes a dominant commodity or resource. The accumulation and efficient use of knowledge has played a central role in the transformation of the economy ( Linden 2004 ). Over the years, globalization and informatization have "redefined industries, politics, cultures, and perhaps the underlying rules of social order" ( Friedman 1999 ). Although they explain different phenomena, their social, political, economic, and cultural functions overlap remarkably. "Although globalization ultimately refers to the integration of economic institutions, much of this integration occurs through the channels of technology. Although international trade is not a new phenomenon, the advent of communications technologies has accelerated the pace and scope of trade" ( Kluver ). Kim ( 2004 ) proposed to measure the informatization of a country using a composite measure made up of the following variables: education, R&D expenditure, agricultural sector, and intellectual property . Kim also relates increasing democracy as evidence of social informatization. The measure supposedly takes into consideration the three approaches to conceptualizing informatization, namely the economic, technological, and stock approaches; each can be measured with economic data (e.g. GDP), ICT data (e.g. number of computers per population), and amount of information (e.g. number of published technological journals), respectively. Such a composite measure is similar to the World Bank 's Knowledge Assessment Methodology (KAM) variables ( 2008 ), which are clustered into: overall performance of the economy, economic incentive and institutional regime, innovation system, education and human resources, and information and communication technology. The measurement of the level of informatization is an ongoing area of development. Among the issues are the ambiguity of the definition of "information" and whether this entity can be quantified, in contrast to the tangible products of industrialization. Taylor and Zang ( 2007 ) explored the issues behind the limitations of current theoretical models in terms of quantifying the positive impacts of ICT projects, and provided critiques of the information indicators used to gauge and justify informatization projects. International organizations such as the United Nations, through its World Summit on the Information Society (WSIS) and the International Telecommunication Union (ITU), and the Organisation for Economic Co-operation and Development (OECD) also recognize this challenge and have initiated efforts to improve the methodologies for measuring an "information society". Informatization is recognized by states as important to national development, and some states have created laws implementing or regulating it. In Russia, the State Duma enacted the Federal Law on Information, Informatization, and the Protection of Information on January 25, 1995; it was signed into law by President Boris Yeltsin on February 20, 1995. Azerbaijan adopted a Law on Information, Informatization and Protection of Information in 1998. TV white space refers to the unused TV channels between the active ones in the VHF and UHF spectrum; this unused spectrum is commonly referred to as television "white spaces".
TV white space can be used to provide broadband internet access, particularly in remote and rural areas (Edwards, 2016), and as such can serve as a tool for increasing information access and social and economic development. In 2008, the Federal Communications Commission voted to reallocate unlicensed white space spectrum for public use (White Space, n.d.).
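As a rough illustration of how a composite informatization index of the kind Kim proposes might be computed, the Python sketch below min-max normalizes a handful of indicators and averages them. The indicator names, reference bounds, and equal weighting are assumptions made for illustration; they are not Kim's or the World Bank's actual methodology.

    # Illustrative composite informatization index: each indicator is
    # min-max normalized against reference bounds, then averaged.
    # Names, bounds, and equal weights are hypothetical.

    REFERENCE_BOUNDS = {
        # indicator: (min, max) across the comparison group
        "education_index":        (0.0, 1.0),
        "rd_expenditure_pct_gdp": (0.0, 5.0),
        "computers_per_100":      (0.0, 100.0),
        "journals_per_million":   (0.0, 50.0),
    }

    def composite_index(country: dict) -> float:
        """Average of min-max normalized indicators, in [0, 1]."""
        scores = []
        for name, (lo, hi) in REFERENCE_BOUNDS.items():
            value = min(max(country[name], lo), hi)  # clamp to bounds
            scores.append((value - lo) / (hi - lo))
        return sum(scores) / len(scores)

    print(round(composite_index({
        "education_index": 0.85,
        "rd_expenditure_pct_gdp": 2.1,
        "computers_per_100": 45.0,
        "journals_per_million": 12.0,
    }), 3))  # -> 0.49 for this hypothetical country

Real methodologies differ mainly in how indicators are weighted and how the reference bounds are chosen, which is precisely the kind of issue Taylor and Zang (2007) and the WSIS/ITU/OECD efforts address.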
https://en.wikipedia.org/wiki/Informatization
Infosphere is a metaphysical realm of information , data , knowledge , and communication , populated by informational entities called inforgs (or informational organisms ). [ 1 ] Infosphere is a portmanteau of information and -sphere . Though one example is cyberspace , infospheres are not limited to purely online environments; they can include both offline and analogue information. [ 2 ] The first documented use of the term was in 1970 by Kenneth E. Boulding , [ 3 ] who viewed the infosphere as one among the six "spheres" in his own system (the others being the sociosphere, biosphere , hydrosphere , lithosphere , and atmosphere ). Boulding claimed: [T]he infosphere...consists of inputs and outputs of conversation, books, television, radio, speeches, church services, classes, and lectures as well as information received from the physical world by personal observation.... It is clearly a segment of the sociosphere in its own right, and indeed it has considerable claim to dominate the other segments. It can be argued that development of any kind is essentially a learning process and that it is primarily dependent on a network of information flows. [ 4 ] : 15–6 In 1971, the term was used in a Time Magazine book review by R.Z. Sheppard, who wrote: [ 5 ] In much the way that fish cannot conceptualize water or birds the air, man barely understands his infosphere, that encircling layer of electronic and typographical smog composed of cliches from journalism, entertainment, advertising and government. In 1980, it was used by Alvin Toffler in his book The Third Wave , in which he writes: What is inescapably clear, whatever we choose to believe, is that we are altering our infosphere fundamentally...we are adding a whole new strata of communication to the social system. The emerging Third Wave infosphere makes that of the Second Wave era - dominated by its mass media, the post office, and the telephone - seem hopelessly primitive by contrast. Toffler's definition proved to be prophetic, as the use of infosphere in the 1990s expanded beyond media to speculation about the common evolution of the Internet , society, and culture . In his book Digital Dharma , Steven Vedro writes: [ 6 ] Emerging from what French philosopher-priest Pierre Teilhard de Chardin called the shared noosphere of collective human thought, invention and spiritual seeking, the Infosphere is sometimes used to conceptualize a field that engulfs our physical, mental and etheric bodies; it affects our dreaming and our cultural life. Our evolving nervous system has been extended, as media sage Marshall McLuhan predicted in the early 1960s, into a global embrace. In 1999, the term was reinterpreted by Luciano Floridi , on the basis of the term biosphere, to denote the whole informational environment constituted by all informational entities (including informational agents), their properties, interactions, processes, and mutual relations. [ 7 ] [ 8 ] Floridi writes: [T]he computerised description and control of the physical environment, together with the digital construction of a synthetic world, are, finally, intertwined with a fourth area of application, represented by the transformation of the encyclopaedic macrocosm of data, information, ideas, knowledge, beliefs, codified experiences, memories, images, artistic interpretations, and other mental creations into a global infosphere .
The infosphere is the whole system of services and documents, encoded in any semiotic and physical media, whose contents include any sort of data, information and knowledge...with no limitations either in size, typology, or logical structure. Hence it ranges from alphanumeric texts (i.e., texts, including letters, numbers, and diacritic symbols) and multimedia products to statistical data, from films and hypertexts to whole text-banks and collections of pictures, from mathematical formulae to sounds and videoclips. [ 8 ] : 8 To him, it is an environment comparable to, but different from, cyberspace (which is only one of its sub-regions, as it were), since it also includes offline and analogue spaces of information. According to Floridi, it is possible to equate the infosphere to the totality of Being ; this equation leads him to an informational ontology . The manipulation of the infosphere is subject to metaphysics and its rules. Information is considered to be Shannon information and is treated in a physical sense, separate from energy and matter. Manipulations of the infosphere include the erasure, transfer, duplication, and destruction of information. [ 9 ] The term was used by Dan Simmons in the science-fiction saga Hyperion (1989) to indicate what the Internet could become in the future: a parallel, virtual place formed of billions of networks, with " artificial life " on various scales, from the equivalent of an insect (small programs) to the equivalent of a god ( artificial intelligences ), whose motivations are diverse, seeking both to help mankind and to harm it. [ citation needed ] In the animated sitcom Futurama , the Infosphere is a huge sphere floating in space, in which a species of giant, talking, floating brains attempts to store all of the information known in the universe. [ citation needed ] The IBM Software Group created the InfoSphere brand in 2008 for its Information Management software products. [ 10 ]
https://en.wikipedia.org/wiki/Infosphere
Infostate is an index used to measure the Digital Divide . It was proposed by Orbicom , the International Network of UNESCO Chairs in Communications, in "Monitoring the Digital Divide… and beyond" . The conceptual framework of the index introduces the notions of a country's infodensity and info-use. Infodensity refers to a country's stocks of ICT capital and labour, including ICT networks, machinery, and equipment, as well as ICT skills; these are indicative of the country's productive capacity and indispensable to the functioning of an information, knowledge-oriented society. Info-use refers to the uptake and consumption flows of ICTs by households, businesses, and governments, as well as the intensity of their actual use. Infostate is an aggregation of the infodensity and info-use indexes and represents the degree of a country's 'ICT-ization'. The Digital Divide is then defined as the relative difference in infostates among economies. The infostate index is used as a base for another index, the ICT Opportunity Index , proposed by the ITU, the International Telecommunication Union , in "From the Digital Divide to DIGITAL OPPORTUNITIES: Measuring Infostates for Development". The ICT Opportunity Index is in fact the merger of two well-known initiatives: ITU's Digital Access Index (DAI) and Orbicom's Monitoring the Digital Divide/Infostate conceptual framework and model.
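A minimal Python sketch of the aggregation described above follows. Geometric-mean aggregation is an assumption made here (chosen because it penalizes imbalance between components); it is not taken from the Orbicom report, and the indicator scores are hypothetical.

    # Sketch: aggregate infodensity and info-use indicators into an
    # infostate score. Geometric-mean aggregation is an assumption,
    # not Orbicom's published methodology.
    from math import prod

    def geometric_mean(values: list) -> float:
        return prod(values) ** (1.0 / len(values))

    def infostate(infodensity_indicators: list, info_use_indicators: list) -> float:
        """Aggregate indicator scores (e.g. indexed to a reference economy = 100)."""
        infodensity = geometric_mean(infodensity_indicators)
        info_use = geometric_mean(info_use_indicators)
        return geometric_mean([infodensity, info_use])

    # Hypothetical scores indexed to a reference economy at 100:
    print(round(infostate([120.0, 80.0, 95.0], [110.0, 70.0]), 1))  # -> 92.3

Under this scheme the relative difference between two countries' infostate scores would be a direct reading of the Digital Divide as defined above.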
https://en.wikipedia.org/wiki/Infostate
In chronobiology , an infradian rhythm is a rhythm with a period longer than the period of a circadian rhythm , i.e., one cycle is longer than 24 hours. [ 1 ] Some examples of infradian rhythms in mammals include menstruation , breeding , migration , hibernation , molting and fur or hair growth, and tidal or seasonal rhythms. In contrast, ultradian rhythms have periods shorter than the period of a circadian rhythm (<24 hours). Several infradian rhythms are known to be caused by hormone or neurotransmitter stimulation or by environmental factors such as the lunar cycle. For example, seasonal depression, an infradian rhythm occurring once a year, can be caused by the systematic lowering of light levels during the winter. [ 2 ] Seasonal affective disorder can thus be classified as infradian, or as circannual, meaning that it occurs on a yearly basis. [ 3 ] [ 4 ] The most well-known infradian rhythm in humans is the fluctuation of estrogens and progesterone across the menstrual cycle . [ 5 ] Another example in humans is the ~10-day rhythm of enamel growth. [ 6 ] [ 7 ] Other infradian rhythms have been documented in organisms such as dormice, lemmings, voles, lynx, and mice. Other writers define infradian rhythms more narrowly, as rhythms longer than 24 hours but shorter than one year, categorizing rhythms of about a year as circannual rhythms ( circannual cycle ). [ 8 ] [ 9 ] Some studies observe an infradian rhythm with a period of approximately 7 days (circaseptan rhythm). [ 10 ] [ 11 ] However, these rhythms appear to be an artifact of the statistics applied to the raw data. [ 12 ]
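The definitional boundaries above can be expressed as a small Python sketch. The exact numeric thresholds (how far from 24 hours still counts as "about a day", and how close to a year counts as circannual) are illustrative assumptions, since the literature does not fix them precisely.

    # Sketch: classify a biological rhythm by its period, following the
    # definitions above. Thresholds are illustrative assumptions.

    HOURS_PER_YEAR = 24 * 365.25

    def classify_rhythm(period_hours: float) -> str:
        """Classify a rhythm by its period in hours."""
        if period_hours < 20:                      # clearly shorter than a day
            return "ultradian"
        if period_hours <= 28:                     # roughly one day
            return "circadian"
        if period_hours >= 0.9 * HOURS_PER_YEAR:   # roughly one year
            return "circannual"
        return "infradian"

    print(classify_rhythm(90 / 60))         # ultradian (a ~90-minute cycle)
    print(classify_rhythm(24))              # circadian
    print(classify_rhythm(28 * 24))         # infradian (e.g. the menstrual cycle)
    print(classify_rhythm(HOURS_PER_YEAR))  # circannual (e.g. seasonal rhythms)

Under the broader convention described first, the circannual branch would simply fold into infradian.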
https://en.wikipedia.org/wiki/Infradian_rhythm
Infragravity waves are surface gravity waves with frequencies lower than the wind waves – consisting of both wind sea and swell – thus corresponding with the part of the wave spectrum lower than the frequencies directly generated by forcing through the wind. Infragravity waves are ocean surface gravity waves generated by ocean waves of shorter periods. The amplitude of infragravity waves is most relevant in shallow water, in particular along coastlines hit by high-amplitude and long-period wind waves and ocean swells . Wind waves and ocean swells are shorter, with typical dominant periods of 1 to 25 s. In contrast, the dominant period of infragravity waves is typically 80 to 300 s, [ 1 ] which is close to the typical periods of tsunamis , with which they share similar propagation properties, including very fast celerities in deep water. This distinguishes infragravity waves from normal oceanic gravity waves , which are created by wind acting on the surface of the sea and are slower than the generating wind. Whatever the details of their generation mechanism, discussed below, infragravity waves are subharmonics of the impinging gravity waves. [ 2 ] Technically, infragravity waves are simply a subcategory of gravity waves and refer to all gravity waves with periods greater than 30 s. This could include phenomena such as tides and oceanic Rossby waves , but the common scientific usage is limited to gravity waves that are generated by groups of wind waves. The term "infragravity wave" appears to have been coined by Walter Munk in 1950. [ 3 ] [ 4 ] Two main processes can explain the transfer of energy from the short wind waves to the long infragravity waves, and both are important in shallow water and for steep wind waves. The most common process is the subharmonic interaction of trains of wind waves, which was first observed by Munk and Tucker and explained by Longuet-Higgins and Stewart. [ 5 ] Because wind waves are not monochromatic they form groups. The Stokes drift induced by these groupy waves transports more water where the waves are highest. The waves also push the water around in a way that can be interpreted as a force: the divergence of the radiation stresses. Combining mass and momentum conservation, Longuet-Higgins and Stewart give, with three different methods, the now well-known result: the mean sea level oscillates with a wavelength that is equal to the length of the group, with a low level where the wind waves are highest and a high level where these waves are lowest. This oscillation of the sea surface is proportional to the square of the short-wave amplitude and becomes very large when the group speed approaches the speed of shallow-water waves. The details of this process are modified when the bottom is sloping, which is generally the case near the shore, but the theory captures the important effect, observed in most conditions, that the high water of this 'surf beat' arrives with the waves of lowest amplitude. Another process was proposed later by Graham Symonds and his collaborators. [ 6 ] To explain some cases in which the phases of the long and short waves were not opposed, they proposed that the position of the breaker line in the surf, moving towards deep water when waves are higher, could act like a wave maker. This appears to be a good explanation for infragravity wave generation on a reef. In the case of coral reefs, the infragravity periods are established by resonances with the reef itself.
[ 7 ] [ 8 ] Infragravity waves are thought to be a generating mechanism behind sneaker waves , unusually large and long-duration waves that cause water to surge far onshore and that have killed a number of people in the US Pacific Northwest . [ 9 ] Infragravity waves generated along the Pacific coast of North America have been observed to propagate transoceanically to Antarctica and there to impinge on the Ross Ice Shelf . Their frequencies couple more closely with the ice shelf's natural frequencies, and they produce larger-amplitude ice shelf movement than the normal ocean swell of gravity waves. Further, they are not damped by sea ice as normal ocean swell is. As a result, they flex floating ice shelves such as the Ross Ice Shelf; this flexure contributes significantly to the breakup of the ice shelf. [ 2 ] [ 10 ]
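The "well-known result" of Longuet-Higgins and Stewart referred to above is commonly quoted in the literature in the following form (the notation here is the standard modern one, not taken verbatim from the original paper). The group-bound mean surface displacement satisfies

    \bar{\eta} = -\frac{S_{xx}}{\rho \left( g h - c_g^{2} \right)} + \text{const}, \qquad S_{xx} \propto a^{2},

where S_xx is the radiation stress of the short-wave group, a the local short-wave amplitude, ρ the water density, h the water depth, g the gravitational acceleration, and c_g the group velocity. Since S_xx is largest where the waves are highest, the mean level is depressed under the highest waves of the group, and the denominator shows the response growing rapidly as c_g approaches the shallow-water speed √(gh), consistent with the behaviour described above.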
https://en.wikipedia.org/wiki/Infragravity_wave
The infraorbital vein is a vein that drains structures of the floor of the orbit. It arises on the face and passes backwards through the orbit alongside the infraorbital artery and nerve , exiting the orbit through the inferior orbital fissure to drain into the pterygoid venous plexus . The infraorbital vein arises on the face by the union of several tributaries. [ 1 ] Accompanied by the infraorbital artery and the infraorbital nerve , [ 2 ] [ 1 ] it passes posteriorly through the infraorbital foramen , infraorbital canal , and infraorbital groove . [ 1 ] It exits the orbit through the inferior orbital fissure to drain into the pterygoid venous plexus . [ 2 ] [ 1 ] The infraorbital vein drains structures of the floor of the orbit [ 2 ] and receives tributaries from structures that lie close to the floor of the orbit. [ 2 ] [ 1 ] The infraorbital vein communicates with the inferior ophthalmic vein , [ 2 ] [ 1 ] and may sometimes also communicate with the facial vein on the face. [ 2 ]
https://en.wikipedia.org/wiki/Infraorbital_vein
AFM-IR ( atomic force microscope-infrared spectroscopy ) or infrared nanospectroscopy is one of a family of techniques [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] that are derived from a combination of two parent instrumental techniques. AFM-IR combines the chemical analysis power of infrared spectroscopy and the high spatial resolution of scanning probe microscopy (SPM). The term was first used to denote a method that combined a tuneable free electron laser with an atomic force microscope (AFM, a type of SPM) equipped with a sharp probe that measured the local absorption of infrared light by a sample with nanoscale spatial resolution. [ 16 ] [ 17 ] [ 18 ] Originally the technique required the sample to be deposited on an infrared-transparent prism and to be less than 1 μm thick. This early setup improved the spatial resolution and sensitivity of photothermal AFM-based techniques from microns [ 7 ] to circa 100 nm. [ 8 ] [ 9 ] [ 10 ] [ 16 ] [ 19 ] [ 20 ] Subsequently, the use of modern pulsed optical parametric oscillators and quantum cascade lasers, in combination with top illumination, has made it possible to investigate samples on any substrate and with increased sensitivity and spatial resolution. Most recently, AFM-IR has proved capable of acquiring chemical maps and nanoscale-resolved spectra at the single-molecule scale from macromolecular self-assemblies and biomolecules of circa 10 nm diameter, [ 18 ] [ 17 ] [ 21 ] [ 22 ] as well as of overcoming limitations of IR spectroscopy by measuring in aqueous liquid environments. [ 23 ] By recording the amount of infrared absorption as a function of wavelength or wavenumber , AFM-IR creates infrared absorption spectra that can be used to chemically characterize and even identify unknown samples. [ 12 ] [ 15 ] [ 24 ] Recording the infrared absorption as a function of position can be used to create chemical composition maps that show the spatial distribution of different chemical components. Novel extensions of the original AFM-IR technique [ 18 ] [ 17 ] and earlier techniques [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 6 ] [ 7 ] [ 24 ] have enabled the development of bench-top devices capable of nanometer spatial resolution that do not require a prism and can work with thicker samples, thereby greatly improving ease of use and expanding the range of samples that can be analysed. AFM-IR has achieved lateral spatial resolutions of ca. 10 nm, with sensitivity down to the scale of a molecular monolayer [ 25 ] and single protein molecules with molecular weight down to 400-600 kDa. [ 18 ] [ 17 ] AFM-IR is related to techniques such as tip-enhanced Raman spectroscopy (TERS), scanning near-field optical microscopy (SNOM), [ 26 ] nano-FTIR , and other methods of vibrational analysis with scanning probe microscopy. The earliest measurements combining AFM with infrared spectroscopy were performed in 1999 by Hammiche et al . at the University of Lancaster in the United Kingdom, [ 1 ] in an EPSRC -funded project led by M Reading and H M Pollock. Separately, Anderson at the Jet Propulsion Laboratory in the United States made a related measurement in 2000. [ 2 ] Both groups used a conventional Fourier transform infrared spectrometer (FTIR) equipped with a broadband thermal source; the radiation was focused near the tip of a probe that was in contact with a sample. The Lancaster group obtained spectra by detecting the absorption of infrared radiation using a temperature-sensitive thermal probe.
Anderson [ 2 ] took a different approach, using a conventional AFM probe to detect the thermal expansion . He reported an interferogram but not a spectrum; the first infrared spectrum obtained in this way was reported by Hammiche et al . in 2004: [ 6 ] this represented the first proof that spectral information about a sample could be obtained using this approach. Both of these early experiments used a broadband source in conjunction with an interferometer; these techniques could, therefore, be referred to as AFM-FTIR, although Hammiche et al . coined the more general term photothermal microspectroscopy or PTMS in their first paper. [ 1 ] PTMS has various subgroups, [ 27 ] including techniques that measure temperature, [ 1 ] [ 3 ] [ 4 ] [ 6 ] [ 7 ] [ 14 ] [ 28 ] measure thermal expansion, [ 2 ] [ 6 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] use broadband sources, [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 6 ] [ 7 ] use lasers, [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 28 ] excite the sample using evanescent waves, [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 15 ] or illuminate the sample directly from above, [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 12 ] [ 14 ] [ 25 ] [ 28 ] as well as different combinations of these. Fundamentally, they all exploit the photothermal effect. Different combinations of sources, methods of detection, and methods of illumination have benefits for different applications. [ 6 ] Care should be taken to ensure that it is clear which form of PTMS is being used in each case; currently there is no universally accepted nomenclature. The original technique dubbed AFM-IR, which induced resonant motion in the probe using a free electron laser, has developed by exploiting the foregoing permutations and has evolved into various forms. The pioneering experiments of Hammiche et al . and Anderson had limited spatial resolution due to thermal diffusion, the spreading of heat away from the region where the infrared light was absorbed. The thermal diffusion length (the distance the heat spreads) is inversely proportional to the square root of the modulation frequency. Consequently, the spatial resolution achieved by the early AFM-IR approaches was around one micron or more, due to the low modulation frequencies of the incident radiation created by the movement of the mirror in the interferometer. Also, the first thermal probes were Wollaston wire devices [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] that were developed originally for microthermal analysis [ 29 ] (in fact PTMS was originally considered to be one of a family of microthermal techniques [ 4 ] ). The comparatively large size of these probes also limited spatial resolution. Bozec et al . [ 3 ] and Reading et al . [ 7 ] used thermal probes with nanoscale dimensions and demonstrated higher spatial resolution. Ye et al . [ 30 ] described a MEMS-type thermal probe giving sub-100 nm spatial resolution, which they used for nanothermal analysis. The process of exploring laser sources was begun in 2001 by Hammiche et al ., when they acquired the first spectrum using a tuneable laser ( see Resolution improvement with pulsed laser source ). A significant development was the creation by Reading et al . in 2001 [ 4 ] of a custom interface that allowed measurements to be made while illuminating the sample from above; this interface focused the infrared beam to a spot of circa 500 μm diameter, close to the theoretical maximum [ Note 1 ] . The use of top-down or top-side illumination has the important benefit that samples of arbitrary thickness can be studied on arbitrary substrates.
In many cases this can be done without any sample preparation. All subsequent experiments by Hammiche, Pollock, Reading, and their co-workers were made using this type of interface, including the instrument constructed by Hill et al . for nanoscale imaging using a pulsed laser. [ 12 ] The work of the University of Lancaster group in collaboration with workers from the University of East Anglia led to the formation of a company, Anasys Instruments, to exploit this and related technologies [ 31 ] ( see Commercialization ). In the first paper on AFM-based infrared by Hammiche et al ., [ 1 ] the relevant well-established theoretical considerations were outlined that predict that high spatial resolution can be achieved using rapid modulation frequencies, because of the consequent reduction in the thermal diffusion length. They estimated that spatial resolutions in the range of 20 nm-30 nm should be achievable. [ 32 ] The most readily available sources that can achieve high modulation frequencies are pulsed lasers: even when the repetition rate of the pulses is not high, the square wave form of a pulse contains very high modulation frequencies in Fourier space. In 2001, Hammiche et al . used a type of bench-top tuneable, pulsed infrared laser known as an optical parametric oscillator or OPO and obtained the first probe-based infrared spectrum with a pulsed laser; however, they did not report any images. [ 24 ] Nanoscale spatial resolution AFM-IR imaging using a pulsed laser was first demonstrated by Dazzi et al . [ 8 ] at the University of Paris-Sud , France. Dazzi and his colleagues used a wavelength-tuneable, free electron laser at the CLIO facility [ Note 2 ] in Orsay, France, to provide an infrared source with short pulses. Like earlier workers, [ 2 ] [ 6 ] they used a conventional AFM probe to measure thermal expansion but introduced a novel optical configuration: the sample was mounted on an IR-transparent prism so that it could be excited by an evanescent wave. Absorption of short infrared laser pulses by the sample caused rapid thermal expansion that created a force impulse at the tip of the AFM cantilever. The thermal expansion pulse induced transient resonant oscillations of the AFM cantilever probe. This has led to the technique being dubbed Photo-Thermal Induced Resonance (PTIR) by some workers in the field. [ 10 ] [ 24 ] Some prefer the terms PTIR or PTMS [ 1 ] [ 3 ] [ 5 ] [ 6 ] [ 7 ] to AFM-IR, as the technique is not necessarily restricted to infrared wavelengths. The amplitude of the cantilever oscillation is directly related to the amount of infrared radiation absorbed by the sample. [ 33 ] [ 34 ] [ 35 ] [ 36 ] [ 37 ] [ 38 ] [ 39 ] By measuring the cantilever oscillation amplitude as a function of wavenumber, Dazzi's group was able to obtain absorption spectra from nanoscale regions of the sample. Compared to earlier work, this approach improved spatial resolution because the use of short laser pulses reduced the duration of the thermal expansion pulse to the point that the thermal diffusion lengths can be on the scale of nanometres rather than microns. A key advantage of the use of a tuneable laser source with a narrow wavelength range is the ability to rapidly map the locations of specific chemical components on the sample surface. To achieve this, Dazzi's group tuned their free electron laser source to a wavelength corresponding to the molecular vibration of the chemical of interest, then mapped the cantilever oscillation amplitude as a function of position across the sample.
They demonstrated the ability to map chemical composition in E. coli bacteria. They could also visualize polyhydroxybutyrate (PHB) vesicles inside Rhodobacter capsulatus cells [ 35 ] and monitor the efficiency of PHB production by the cells. At the University of East Anglia in the UK, as part of an EPSRC-funded project led by M. Reading and S. Meech, Hill and his co-workers [ 12 ] followed the earlier work of Reading et al . [ 4 ] and Hammiche et al . [ 6 ] and measured thermal expansion using an optical configuration that illuminated the sample from above, [ 5 ] in contrast to Dazzi et al ., who excited the sample with an evanescent wave from below. [ 8 ] Hill also made use of an optical parametric oscillator as the infrared source, in the manner of Hammiche et al . [ 24 ] This novel combination of top-side illumination, [ 4 ] an OPO source, [ 24 ] and measurement of thermal expansion [ 2 ] [ 6 ] [ 8 ] proved capable of nanoscale spatial resolution for infrared imaging and spectroscopy (the figures show a schematic of the UEA apparatus and results obtained with it). The use by Hill and co-workers of illumination from above allowed a substantially wider range of samples to be studied than was possible using Dazzi's technique. By introducing the use of a bench-top IR source and top-down illumination, the work of Hammiche, Hill, and their co-workers made possible the first commercially viable SPM-based infrared instrument (see Commercialization). Reading et al . have explored the use of a broadband QCL combined with thermal expansion measurements. [ 40 ] Above, the inability of thermal broadband sources to achieve high spatial resolution is discussed (see History). In this case the frequency of modulation is limited by the mirror speed of the interferometer, which, in turn, limits the lateral spatial resolution that can be achieved. When using a broadband QCL, the resolution is limited not by the mirror speed but by the modulation frequency of the laser pulses (or other waveforms). [ 1 ] The benefit of using a broadband source is that an image can be acquired that comprises an entire spectrum or part of a spectrum for each pixel; this is much more powerful than acquiring images based on a single wavelength. The preliminary results of Reading et al . [ 40 ] show that directing a broadband QCL through an interferometer can give an easily detectable response from a conventional AFM probe measuring thermal expansion. The AFM-IR technique based on a pulsed infrared laser source was commercialized by Anasys Instruments, a company founded by Reading, Hammiche, and Pollock in the United Kingdom in 2004; [ 31 ] [ 41 ] a sister United States corporation was founded a year later. Anasys Instruments developed its product with support from the National Institute of Standards and Technology and the National Science Foundation . Since free electron lasers are rare and available only at select institutions, a key to enabling a commercial AFM-IR was to replace them with a more compact type of infrared source. Following the lead given by Hammiche et al . in 2001 [ 24 ] and Hill et al . in 2008, [ 12 ] Anasys Instruments introduced an AFM-IR product in early 2010, using a tabletop laser source based on a nanosecond optical parametric oscillator. [ 36 ] The OPO source enabled nanoscale infrared spectroscopy over a tuning range of roughly 1000–4000 cm −1 , or 2.5-10 μm. The initial product required samples to be mounted on infrared-transparent prisms, with the infrared light being directed from below in the manner of Dazzi et al .
[ Note 3 ] For best operation, this illumination scheme required thin samples, with an optimal thickness of less than 1 μm, [ 24 ] prepared on the surface of the prism. In 2013, Anasys released an AFM-IR instrument based on the work of Hill et al . [ 12 ] [ 28 ] that supported top-side illumination. By eliminating the need to prepare samples on infrared-transparent prisms and relaxing the restriction on sample thickness, the range of samples that could be studied was greatly expanded. The CEO of Anasys Instruments recognised this achievement by calling it "an exciting major advance" in a letter written to the university and included in the final report of EPSRC project EP/C007751/1. [ 42 ] The UEA technique went on to become Anasys Instruments' flagship product. It is worth noting that the first infrared spectrum obtained by measuring thermal expansion using an AFM was obtained by Hammiche and co-workers [ 6 ] without inducing resonant motions in the probe cantilever. In this early example the modulation frequency was too low to achieve high spatial resolution, but there is nothing, in principle, preventing the measurement of thermal expansion at higher frequencies without analysing or inducing resonant behaviour. [ 1 ] Possible options for measuring the displacement of the tip, rather than the subsequent propagation of waves along the cantilever, include: interferometry focused at the end of the cantilever where the tip is located; the torsional motion of an offset probe, which would be influenced by the flexural motions of the cantilever only as a second-order effect; and exploiting the fact that the signal from a heated thermal probe is strongly influenced by the position of the tip relative to the surface, which could provide a measurement of thermal expansion that is not strongly influenced by, or dependent upon, resonance. The advantage of a non-resonant method of detection is that any frequency of light modulation could be used, so that depth information could be obtained in a controlled way (see below), whereas methods that rely on resonance are limited to harmonics. The thermal-probe based method of Hammiche et al . [ 1 ] has found a significant number of applications. [ 14 ] [ 28 ] A unique application made possible by top-down illumination combined with a thermal probe [ 4 ] is localized depth profiling; [ 28 ] this is not possible using either the Dazzi et al . configuration of AFM-IR or that of Hill et al ., despite the fact that the latter uses top-down illumination. Obtaining linescans [ 4 ] [ 43 ] and images [ 28 ] with thermal probes has been shown to be possible; sub-diffraction-limit spatial resolution can be achieved, [ 4 ] and the resolution for delineating boundaries can be enhanced using chemometric techniques. [ 28 ] [ 43 ] In all of these examples a spectrum is acquired that spans the entire mid-IR range for each pixel; this is considerably more powerful than measuring the absorption of a single wavelength, as is the case for AFM-IR when using either the method of Dazzi et al . or that of Hill et al . Reading and his group demonstrated how, because thermal probes can be heated, localized thermal analysis [ 4 ] [ 28 ] [ 29 ] can be combined with photothermal infrared spectroscopy using a single probe. In this way local chemical information can be complemented with local physical properties such as melting and glass transition temperatures. [ 29 ]
This in turn led to the concept of thermally assisted nanosampling, [ 5 ] [ 28 ] in which the heated tip performs a local thermal analysis experiment and the probe is then retracted, taking with it down to femtograms [ Note 4 ] of softened material that adheres to the tip. [ 38 ] This material can then be manipulated and/or analysed by photothermal infrared spectroscopy or other techniques. [ 5 ] [ 44 ] [ 45 ] [ 46 ] [ 47 ] This considerably increases the analytical power of this type of SPM-based infrared instrument beyond anything that can be achieved with conventional AFM probes, such as those used in AFM-IR in either the Dazzi et al . or the Hill et al . version. Thermal probe techniques have still not achieved the nanoscale spatial resolution that thermal expansion methods have attained, though this is theoretically possible. For this, a robust thermal probe and a high-intensity source are needed. Recently, the first images using a QCL and a thermal probe have been obtained by Reading et al . [ 40 ] A good signal-to-noise ratio enabled rapid imaging, but sub-micron spatial resolution was not clearly demonstrated. Theory predicts that improvements in spatial resolution could be achieved by confining data analysis to the early part of the thermal response to a step-change increase in the intensity of the incident radiation. In this way, pollution of the measurement by adjacent regions would be avoided; i.e., the measurement window could be confined to a suitable fraction of the time of flight of the thermal wave (a Fourier analysis of the response could provide a similar outcome by using the high-frequency components). This could be achieved by tapping the probe in synchrony with the laser. Similarly, lasers that provide very rapid modulations could further reduce thermal diffusion lengths. Although most effort to date has been focused on thermal expansion measurements, this might change. Truly robust thermal probes have recently become available, [ 48 ] as have affordable compact QCLs that are tuneable over a broad frequency range. Consequently, it may soon be the case that thermal probe techniques will become as widely used as those based on thermal expansion. Ultimately, instruments that can easily switch between modes, and even combine them using a single probe, are likely to become available; for example, a single probe may eventually be able to measure both temperature and thermal expansion. The original commercial AFM-IR instruments required most samples to be thicker than 50 nm to achieve sufficient sensitivity. Sensitivity improvements were achieved using specialized cantilever probes with an internal resonator [ 49 ] and by wavelet-based signal processing techniques. [ 50 ] Sensitivity was further improved by Lu et al . [ 25 ] by using quantum cascade laser (QCL) sources. The high repetition rate of the QCL allows absorbed infrared light to continuously excite the AFM tip at a " contact resonance " [ Note 5 ] of the AFM cantilever. This resonance-enhanced AFM-IR, in combination with electric field enhancement from metallic tips and substrates, led to the demonstration of AFM-IR spectroscopy and compositional imaging of films as thin as single self-assembled monolayers. [ 25 ] AFM-IR has also been integrated with other sources, including a picosecond OPO [ 24 ] offering a tuning range of 1.55 μm to 16 μm (6450 cm −1 to 625 cm −1 ).
In its initial development, with samples deposited on transparent prisms and using OPO laser sources, the sensitivity of AFM-IR was limited to a minimal sample thickness of circa 50–100 nm, as mentioned above. [ 8 ] [ 16 ] [ 33 ] [ 51 ] The advent of quantum cascade lasers (QCLs) and the use of the electromagnetic field enhancement between metallic probes and substrates have improved the sensitivity and spatial resolution of AFM-IR down to the measurement of large (>0.3 μm), flat (~2–10 nm) self-assembled monolayers, in which hundreds of molecules are still present. [ 25 ] Ruggeri et al . have recently developed off-resonance, low-power and short-pulse AFM-IR (ORS-nanoIR) to enable the acquisition of infrared absorption spectra and chemical maps at the single-molecule level, in the case of macromolecular assemblies [ 17 ] [ 22 ] [ 21 ] and large protein molecules, with a spatial resolution of ca. 10 nm. [ 18 ] AFM-IR enables nanoscale infrared spectroscopy , [ 52 ] i.e. the ability to obtain infrared absorption spectra from nanoscale regions of a sample. AFM-IR can also be used to perform chemical imaging or compositional mapping with spatial resolution down to ~10–20 nm, [ 18 ] limited only by the radius of the AFM tip. In this case, the tuneable infrared source emits a single wavelength, corresponding to a specific molecular resonance, i.e. a specific infrared absorption band. By mapping the AFM cantilever oscillation amplitude as a function of position, it is possible to map out the distribution of specific chemical components. Compositional maps can be made at different absorption bands to reveal the distribution of different chemical species. The AFM-IR technique can simultaneously provide complementary measurements of the mechanical stiffness and dissipation of a sample surface. When infrared light is absorbed by the sample, the resulting rapid thermal expansion excites a "contact resonance" of the AFM cantilever, i.e. a coupled resonance resulting from the properties of both the cantilever and the stiffness and damping of the sample surface. Specifically, the resonance frequency shifts to higher frequencies for stiffer materials and to lower frequencies for softer materials. Additionally, the resonance becomes broader for materials with larger dissipation. These contact resonances have been studied extensively by the AFM community ( see, for example, atomic force acoustic microscopy ). Traditional contact resonance AFM requires an external actuator to excite the cantilever contact resonances; in AFM-IR these contact resonances are automatically excited every time an infrared pulse is absorbed by the sample. The AFM-IR technique can therefore measure both the infrared absorption, via the amplitude of the cantilever oscillation response, and the mechanical properties of the sample, via the contact resonance frequency and quality factor. [ 53 ] Applications of AFM-IR have included the characterisation of proteins, [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 54 ] polymer composites , [ 15 ] [ 36 ] [ 38 ] [ 39 ] [ 55 ] [ 56 ] bacteria, [ 37 ] [ 57 ] [ 58 ] [ 59 ] cells, [ 60 ] [ 61 ] [ 62 ] [ 63 ] [ 64 ] biominerals, [ 65 ] [ 66 ] pharmaceutical sciences, [ 17 ] [ 35 ] [ 67 ] [ 68 ] photonics/nanoantennas, [ 69 ] [ 70 ] [ 71 ] [ 72 ] fuel cells, [ 73 ] fibers, [ 39 ] [ 74 ] skin, [ 75 ] hair, [ 76 ] metal organic frameworks , [ 77 ] microdroplets, [ 51 ] self-assembled monolayers, [ 25 ] nanocrystals, [ 78 ] and semiconductors . [ 79 ]
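The shift of the contact resonance with sample stiffness can be illustrated with a deliberately minimal point-mass model, in which the cantilever (stiffness k_c, effective mass m) is coupled to the surface through a contact spring k_s, giving f = (1/2π)·sqrt((k_c + k_s)/m). The spring model and all numerical values below are illustrative assumptions, not parameters of any instrument described above.

```python
import math

def contact_resonance_hz(k_cantilever: float, k_contact: float, m_eff: float) -> float:
    """Resonance of a point mass on two parallel springs: f = sqrt((k_c + k_s)/m) / 2pi."""
    return math.sqrt((k_cantilever + k_contact) / m_eff) / (2 * math.pi)

k_c = 0.3   # cantilever stiffness, N/m (assumed soft contact-mode lever)
m = 3e-11   # effective mass, kg (assumed; sets the free resonance near 16 kHz)

for k_s in (0.0, 5.0, 50.0):  # no contact, soft sample, stiff sample (assumed values)
    f = contact_resonance_hz(k_c, k_s, m)
    print(f"k_s = {k_s:5.1f} N/m -> f = {f/1e3:7.1f} kHz")
```

Stiffer contact springs push the coupled resonance upward, reproducing the qualitative behaviour described above; a real contact-resonance analysis uses a distributed cantilever model rather than a single point mass.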
AFM-IR has been used to identify and map polymer components in blends, [ 39 ] characterize interfaces in composites, [ 80 ] and even reverse-engineer multilayer films. [ 15 ] Additionally, AFM-IR has been used to study chemical composition in poly(3,4-ethylenedioxythiophene) (PEDOT) conducting polymers [ 56 ] and vapor infiltration into polyethylene terephthalate (PET) fibers. [ 74 ] The chemical and structural properties of proteins determine their interactions, and thus their functions, in a wide variety of biochemical processes. Since the pioneering work of Ruggeri et al . [ 16 ] on the aggregation pathways of the Josephin domain of ataxin-3, responsible for type-3 spinocerebellar ataxia, an inheritable protein-misfolding disease, AFM-IR has been used to characterize molecular conformations in a wide spectrum of applications in protein and life sciences. [ 81 ] This approach has delivered new mechanistic insights into the behaviour of disease-related proteins and peptides, such as Aβ42, [ 17 ] huntingtin [ 21 ] and FUS, [ 53 ] which are involved in the onset of Alzheimer's disease, Huntington's disease and amyotrophic lateral sclerosis (ALS). Similarly, AFM-IR has been applied to the study of protein-based functional biomaterials. [ 54 ] AFM-IR has been used to characterise spectroscopically, in detail, chromosomes, [ 82 ] bacteria [ 59 ] and cells [ 60 ] with nanoscale resolution: for example, in the case of the infection of bacteria by viruses [ 59 ] ( bacteriophages ), the production of polyhydroxybutyrate (PHB) vesicles inside Rhodobacter capsulatus cells [ 58 ] and of triglycerides [ 46 ] in Streptomyces bacteria (for biofuel applications). AFM-IR has also been used to evaluate and map mineral content, crystallinity, collagen maturity and acid phosphate content in bone via ratiometric analysis of various absorption bands. [ 66 ] AFM-IR has also been used to perform spectroscopy and chemical mapping of structural lipids in human skin, [ 75 ] cells [ 60 ] and hair. [ 76 ] AFM-IR has been used to study hydrated Nafion membranes used as separators in fuel cells ; the measurements revealed the distribution of free and ionically bound water on the Nafion surface. [ 73 ] AFM-IR has been used to study the surface plasmon resonance in heavily silicon-doped indium arsenide microparticles. [ 79 ] Gold split-ring resonators have been studied for use with surface-enhanced infrared absorption spectroscopy; in this case AFM-IR was used to measure the local field enhancement of the plasmonic structures (~30×) at 100 nm spatial resolution. [ 69 ] [ 80 ] AFM-IR has been used to study miscibility and phase separation in drug–polymer blends, [ 67 ] [ 68 ] the chemical analysis of nanocrystalline drug particles as small as 90 nm across, [ 35 ] the interaction of chromosomes with chemotherapeutic drugs, [ 82 ] and of amyloids with pharmacological approaches intended to counteract neurodegeneration. [ 17 ]
https://en.wikipedia.org/wiki/Infrared_Nanospectroscopy_(AFM-IR)
The Infrared Science Archive ( IRSA ) is the primary archive for the infrared and submillimeter astronomical projects of NASA , the space agency of the United States. IRSA curates the science products of over 15 missions, including the Spitzer Space Telescope , the Wide-field Infrared Survey Explorer (WISE), the Infrared Astronomical Satellite (IRAS), and the Two Micron All-Sky Survey (2MASS). It also serves data from infrared and submillimeter European Space Agency missions with NASA participation, including the Infrared Space Observatory (ISO), Planck , and the Herschel Space Observatory . As of 2019, IRSA provides access to more than 1 petabyte of data consisting of roughly 1 trillion astronomical measurements, which span wavelengths from 1 micron to 10 millimeters and include all-sky coverage in 24 bands. Approximately 10% of all refereed astronomical journal articles cite data sets curated by IRSA. [ 1 ] [ 2 ] IRSA is part of the Infrared Processing and Analysis Center (IPAC) and is located on the campus of the California Institute of Technology . It is one of NASA's Astrophysics Data Centers, along with the High Energy Astrophysics Science Archive Research Center (HEASARC), the Mikulski Archive for Space Telescopes (MAST), and others. [ 3 ]
https://en.wikipedia.org/wiki/Infrared_Science_Archive
The Infrared Space Observatory ( ISO ) was a space telescope for infrared light designed and operated by the European Space Agency (ESA), in cooperation with ISAS (now part of JAXA ) and NASA . The ISO was designed to study infrared light at wavelengths of 2.5 to 240 micrometres and operated from 1995 to 1998. [ 1 ] The €480.1-million satellite [ 2 ] [ 3 ] was launched on 17 November 1995 from the ELA-2 launch pad at the Guiana Space Centre near Kourou in French Guiana. The launch vehicle , an Ariane 44P rocket, placed ISO successfully into a highly elliptical geocentric orbit , completing one revolution around the Earth every 24 hours. The primary mirror of its Ritchey–Chrétien telescope measured 60 cm in diameter and was cooled to 1.7 kelvins by means of superfluid helium . The ISO satellite contained four instruments that allowed for imaging and photometry from 2.5 to 240 micrometres and spectroscopy from 2.5 to 196.8 micrometres. ESA and the Infrared Processing and Analysis Center made efforts to improve the data pipelines and specialized software analysis tools to yield the best quality calibration and data reduction methods from the mission. IPAC supports ISO observers and data archive users through in-house visits and workshops. In 1983, the US-Dutch-British IRAS inaugurated space-based infrared astronomy by performing the first-ever 'all-sky survey' at infrared wavelengths . The resulting map of the infrared sky pinpointed some 350,000 infrared sources waiting to be explored by IRAS's successors. In 1979, IRAS was in an advanced stage of planning, and the expected results from IRAS led to the first proposal for ISO, made to ESA in the same year. With the rapid improvements in infrared detector technology, ISO was to provide detailed observations for some 30,000 infrared sources with much improved sensitivity and resolution . ISO was to perform 1000 times better in sensitivity and 100 times better in angular resolution at 12 micrometres compared to IRAS. A number of follow-up studies resulted in the selection of ISO as the next instalment of the ESA Scientific Programme in 1983. Next came a Call for Experiment and Mission Scientist Proposals to the scientific community, resulting in the selection of the scientific instruments in 1985. The four instruments chosen were developed by teams of researchers from France, Germany, the Netherlands and the United Kingdom. Design and development of the satellite started in 1986, with Aérospatiale 's space division (currently absorbed into Thales Alenia Space ) leading an international consortium of 32 companies responsible for manufacture, integration and testing of the new satellite. Final assembly took place at the Cannes Mandelieu Space Center . The basic design of ISO was strongly influenced by that of its immediate predecessor. Like IRAS, ISO was composed of two major components: a payload module, containing the cryostat with the telescope and scientific instruments, and a service module, housing the warm support systems. The payload module also held a conical sun shade, to prevent stray light from reaching the telescope, and two large star trackers . The latter were part of the Attitude and Orbit Control Subsystem (AOCS), which provided three-axis stabilisation of ISO with a pointing accuracy of one arc second . It consisted of Sun and Earth sensors, the before-mentioned star trackers, a quadrant star sensor on the telescope axis, gyroscopes and reaction wheels . A complementary reaction control system (RCS), using hydrazine propellant , was responsible for orbit adjustment and fine-tuning shortly after launch .
The complete satellite weighed just under 2500 kg, was 5.3 m high, 3.6 m wide and measured 2.3 m in depth. The service module held all the warm electronics and the hydrazine propellant tank, and provided up to 600 watts of electrical power by means of solar cells mounted on the sun-pointing side of the service-module-mounted sunshield. The underside of the service module sported a load-bearing, ring-shaped physical interface for the launch vehicle. The cryostat of the payload module surrounded the telescope and science instruments with a large dewar containing a toroidal tank loaded with 2268 litres of superfluid helium. Cooling by slow evaporation of the helium kept the temperature of the telescope below 3.4 K and the science instruments below 1.9 K. These very low temperatures were required for the scientific instruments to be sensitive enough to detect the small amount of infrared radiation from cosmic sources. Without this extreme cooling, the telescope and instruments would see only their own intense infrared emission rather than the faint emission from afar. The ISO telescope was mounted on the centre line of the dewar, near the bottom side of the toroidal helium tank. It was of the Ritchey–Chrétien type with an effective entrance pupil of 60 cm, a focal ratio of 15 and a resulting focal length of 900 cm. Very strict control over stray light, particularly that from bright infrared sources outside the telescope's field of view , was necessary to ensure the guaranteed sensitivity of the scientific instruments. A combination of light-tight shields, baffles inside the telescope and the sunshade on top of the cryostat accomplished full protection against stray light. Furthermore, ISO was constrained from observing too close to the Sun, Earth and Moon, all major sources of infrared radiation. ISO always pointed between 60 and 120 degrees away from the Sun, and it never pointed closer than 77 degrees to Earth, 24 degrees to the Moon or 7 degrees to Jupiter . These restrictions meant that at any given time only about 15 percent of the sky was available to ISO. A pyramid-shaped mirror behind the primary mirror of the telescope distributed the infrared light to the four instruments, providing each of them with a 3 arc-minute section of the 20 arc-minute field of view of the telescope. Thus, pointing a different instrument at the same cosmic object meant repointing the entire ISO satellite. ISO carried an array of four scientific instruments for observations in the infrared: an infrared camera (ISOCAM), an imaging photo-polarimeter (ISOPHOT), a Short Wavelength Spectrometer (SWS) and a Long Wavelength Spectrometer (LWS). All four instruments were mounted directly behind the primary mirror of the telescope, in a circular arrangement, with each instrument taking up an 80-degree segment of the cylindrical space. The field of view of each instrument was offset from the central axis of the telescope's field of view. This means that every instrument 'saw' a different portion of the sky at a given moment. In standard operational mode one instrument was in primary operation. After a very successful development and integration phase, ISO was finally launched into orbit on 17 November 1995, on board an Ariane 44P launch vehicle. Performance of the launch vehicle was very good, with the apogee only 43 km lower than expected. ESA's Space Operations Centre in Darmstadt in Germany had full control over ISO in the first four days of flight. After early commissioning, primary control over ISO was handed over to the Spacecraft Control Centre (SCC) at Villanueva de la Cañada in Spain ( VILSPA ) for the remainder of the mission.
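The fraction of sky available under pointing constraints like those above can be estimated with a simple Monte Carlo over random pointing directions. The sketch below is purely illustrative: it assumes fixed, arbitrary directions for the Earth, Moon and Jupiter at one instant, whereas for the real spacecraft these directions, and hence the visible fraction, changed continuously along the orbit.

```python
import random, math

def random_unit_vector():
    """Uniformly distributed direction on the sphere."""
    z = random.uniform(-1.0, 1.0)
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def angle_deg(u, v):
    """Angle between two unit vectors, in degrees."""
    return math.degrees(math.acos(max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))))

# Assumed instantaneous directions of the avoidance bodies (illustrative only).
sun   = (1.0, 0.0, 0.0)
earth = (0.0, 1.0, 0.0)
moon  = (0.0, 0.8, 0.6)
jup   = (-0.6, 0.0, 0.8)

ok, N = 0, 200_000
for _ in range(N):
    p = random_unit_vector()
    if (60 <= angle_deg(p, sun) <= 120 and angle_deg(p, earth) >= 77
            and angle_deg(p, moon) >= 24 and angle_deg(p, jup) >= 7):
        ok += 1
print(f"visible sky fraction ~ {ok / N:.1%}")
```

The result depends strongly on where the Earth and Moon are assumed to be, which is why the roughly 15 percent figure quoted above should be read as an average over the orbit.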
In the first three weeks after launch the orbit was fine-tuned and all satellite systems were activated and tested. Cool-down of the cryostat proved to be more efficient than previously calculated, so the anticipated mission length was extended to 24 months. Between 21 and 26 November all four science instruments were switched on and thoroughly checked out. Between 9 December 1995 and 3 February 1996 the 'Performance Verification Phase' took place, dedicated to commissioning all instruments and fixing problems. Routine observations started on 4 February 1996 and lasted until the helium coolant was depleted on 8 April 1998. The perigee of ISO's orbit lay well inside the Van Allen radiation belt , forcing the science instruments to be shut down for seven hours during each pass through the radiation belt. Thus, 17 hours in each orbit remained for scientific observation. A typical 24-hour orbit of ISO was divided into six operational phases. Unlike IRAS, ISO recorded no science data on board for later transmission to the ground: all data, both science data and housekeeping data, were transmitted to the ground in real time. The perigee point of ISO's orbit was below the radio horizon of the mission control centres at both VILSPA and Goldstone, again forcing the science instruments to be switched off at perigee. At 07:00 UTC on 8 April 1998, flight controllers at VILSPA noticed a rise in the temperature of the telescope. This was a clear sign that the load of superfluid helium coolant had been depleted. At 23:07 UTC the same day, the temperature of the science instruments had risen above 4.2 K and science observations ceased. A few detectors in the SWS instrument were capable of making observations at higher temperatures and remained in use for another 150 hours to make detailed measurements of an additional 300 stars . In the month following depletion of the coolant, the 'Technology Test Phase' (TTP) was initiated to test several elements of the satellite in off-nominal conditions. After completion of the TTP, the perigee of ISO's orbit was lowered sufficiently to ensure that ISO would burn up in Earth's atmosphere within 20 to 30 years of shutdown. ISO was then permanently switched off on 16 May 1998 at 12:00 UTC. On average, ISO performed 45 observations in each 24-hour orbit. Throughout its lifetime of over 900 orbits, ISO performed more than 26,000 successful scientific observations. The huge amount of scientific data generated by ISO was subject to extensive archiving activities up to 2006. The full data set has been available to the scientific community since 1998, and many discoveries have been made, with probably many more still to come.
https://en.wikipedia.org/wiki/Infrared_Space_Observatory
Infrared Spectrometer for ExoMars ( ISEM ) is an infrared spectrometer for remote sensing that is part of the science payload on board the European Space Agency 's Rosalind Franklin rover , tasked to search for biosignatures and biomarkers on Mars. The rover is planned to be launched no earlier than 2028 and to land on Mars in 2029. ISEM will provide context assessment of the surface mineralogy in the vicinity of the Rosalind Franklin rover for the selection of potential astrobiological targets. The Principal Investigator is Oleg Korablev of the Russian Space Research Institute (IKI). The Infrared Spectrometer for ExoMars (ISEM) is being developed by the Russian Space Research Institute (IKI). [ 4 ] [ 5 ] It will be the first instance of near-infrared (NIR) spectroscopy observations made from the Martian surface. [ 2 ] The instrument will be installed on the Rosalind Franklin rover 's mast to measure reflected solar radiation in the near-infrared range for context assessment of the surface mineralogy in the vicinity of Rosalind Franklin , for the selection of potential astrobiological targets. [ 2 ] [ 6 ] As the number of samples obtained with the drill will be limited, the selection of high-value sites for drilling will be crucial. Working with PanCam (a high-resolution panoramic camera), ISEM will aid in the selection of potential targets, especially water-bearing minerals, for close-up investigations and drilling sites. [ 2 ] ISEM could detect, if present, organic compounds , including evolving trace gases such as hydrocarbons like methane in the Martian atmosphere . [ 2 ] ISEM has several stated science objectives. [ 3 ] ISEM is a derivative of the Lunar Infrared Spectrometer (LIS) being developed by the Russian Space Research Institute (IKI) in Moscow for the planned Luna-25 and Luna-27 Russian landers. [ 2 ] Collaborating institutions include Moscow State University , the Main Astrophysical Observatory of the National Academy of Sciences of Ukraine , the National Research Institute for Physicotechnical and Radio Engineering Measurements (VNIIFTRI) in Russia, and Aberystwyth University in the United Kingdom. The science team includes researchers from Russia, France, Italy, Sweden, Germany, the United Kingdom, and Canada. [ 2 ] The instrument has been designed specifically to detect carbonates, oxalates, borates, nitrates and NH 4 -bearing minerals, which are good indicators of past habitable conditions, as well as aqueous minerals. It is also designed to detect organic compounds , including polycyclic aromatic hydrocarbons (PAHs) and those containing aliphatic C-H molecules. [ 2 ] In addition, ISEM can also detect seasonal frost, if present at the landing site, and it can be used to analyse the bore hole excavated by the ExoMars drill, if the rover backs away some distance. [ 2 ]
https://en.wikipedia.org/wiki/Infrared_Spectrometer_for_ExoMars
An infrared excess is a measurement of an astronomical source, typically a star , whose spectral energy distribution has a greater measured infrared flux than expected by assuming the star is a blackbody radiator . Infrared excesses are often the result of circumstellar dust heated by starlight and re-emitted at longer wavelengths. They are common in young stellar objects and in evolved stars on the asymptotic giant branch or older. [ 1 ] In addition, monitoring for infrared excess emission from stellar systems is one possible method that could enable a search for large-scale stellar engineering projects of a hypothetical extraterrestrial civilization, for example a Dyson sphere or Dyson swarm . [ 2 ] This infrared excess would be the outcome of the waste heat emitted by such structures if they are considered blackbodies at temperatures close to 300 K. [ 3 ] [ 4 ]
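The notion of an excess can be made concrete by comparing the Planck function of a stellar photosphere with that of a cooler component. The sketch below overlays a Sun-like photosphere (taken as a 5800 K blackbody) with a 300 K blackbody of assumed relative emitting area; all numbers are illustrative choices made only to show where the cool component dominates, not fitted values from any survey.

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength_m: float, temp_k: float) -> float:
    """Spectral radiance B_lambda of a blackbody, W m^-3 sr^-1."""
    x = H * C / (wavelength_m * KB * temp_k)
    return (2 * H * C**2 / wavelength_m**5) / math.expm1(x)

AREA_RATIO = 1e4  # assumed emitting area of the cool component relative to the photosphere

for lam_um in (1, 5, 10, 25, 60):
    lam = lam_um * 1e-6
    star = planck(lam, 5800.0)
    cool = AREA_RATIO * planck(lam, 300.0)
    print(f"{lam_um:3d} um: cool/star flux ratio = {cool/star:.2e}")
```

With these assumptions the 300 K component overtakes the photosphere beyond roughly 10 μm; an excess of this shape at mid-infrared wavelengths is the qualitative signature sought both for circumstellar dust and in the waste-heat searches cited above.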
https://en.wikipedia.org/wiki/Infrared_excess
In physics , an infrared fixed point is a set of coupling constants, or other parameters, that evolve from arbitrary initial values at very high energies (short distance) to fixed, stable values, usually predictable, at low energies (large distance). [ 1 ] This usually involves the use of the renormalization group , which specifically details the way parameters in a physical system (a quantum field theory ) depend on the energy scale being probed. Conversely, if the length scale decreases and the physical parameters approach fixed values, then we have ultraviolet fixed points . The fixed points are generally independent of the initial values of the parameters over a large range of the initial values; this is known as universality . In the statistical physics of second-order phase transitions , the physical system approaches an infrared fixed point that is independent of the initial short-distance dynamics that defines the material. This determines the properties of the phase transition at the critical temperature , or critical point . Observables, such as critical exponents , usually depend only upon the dimension of space and are independent of the atomic or molecular constituents. In the Standard Model , quarks and leptons have " Yukawa couplings " to the Higgs boson which determine the masses of the particles. Most of the quarks' and leptons' Yukawa couplings are small compared to the top quark 's Yukawa coupling. Yukawa couplings are not constants, and their properties change depending on the energy scale at which they are measured; this is known as the running of the couplings. The dynamics of the Yukawa couplings are determined by the renormalization group equation, which at one loop takes the form

μ (d/dμ) y_q ≈ (y_q / 16π²) ( (9/2) y_q² − 8 g_3² ),

where g_3 is the color gauge coupling (which is a function of μ and is associated with asymptotic freedom [ 2 ] [ 3 ] ) and y_q is the Yukawa coupling for the quark q ∈ { u, b, t }. This equation describes how the Yukawa coupling changes with the energy scale μ. A more complete version of the same formula is more appropriate for the top quark:

μ (d/dμ) y_t ≈ (y_t / 16π²) ( (9/2) y_t² − 8 g_3² − (9/4) g_2² − (17/12) g_1² ),

where g_2 is the weak isospin gauge coupling and g_1 is the weak hypercharge gauge coupling. For small or near-constant values of g_1 and g_2 the qualitative behavior is the same. The Yukawa couplings of the up, down, charm, strange and bottom quarks are small at the extremely high energy scale of grand unification , μ ≈ 10^15 GeV. Therefore, the y_q² term can be neglected in the above equation for all but the top quark. Solving, we then find that y_q is increased slightly at the low energy scales at which the quark masses are generated by the Higgs, μ ≈ 125 GeV. On the other hand, solutions to this equation for large initial values, typical for the top quark, y_t, cause the expression on the right side to quickly approach zero as we descend in energy scale, which stops y_t from changing and locks it to the QCD coupling g_3. This is known as an (infrared) quasi-fixed point of the renormalization group equation for the Yukawa coupling.
[ a ] No matter what the initial starting value of the coupling is, if it is sufficiently large at high energies to begin with, it will reach this quasi-fixed-point value, and the corresponding quark mass is predicted to be about m ≈ 220 GeV. The renormalization group equation for large values of the top Yukawa coupling was first considered in 1981 by Pendleton & Ross, [ 4 ] and the "infrared quasi-fixed point" was proposed by Hill. [ 5 ] The prevailing view at the time was that the top quark mass would lie in a range of 15 to 26 GeV. The quasi-infrared fixed point emerged in top quark condensation theories of electroweak symmetry breaking, in which the Higgs boson is composite at extremely short distance scales, composed of a pair of top and anti-top quarks. [ 6 ] While the value of the quasi-fixed point is determined in the Standard Model to be about m ≈ 220 GeV, if there is more than one Higgs doublet the value will be reduced by an increase in the 9/2 factor in the equation and by any Higgs mixing angle effects. Since the observed top quark mass of 174 GeV is lower than the Standard Model prediction by about 20%, this suggests there may be more Higgs doublets beyond the single Standard Model Higgs boson. If there are many additional Higgs doublets in nature, the predicted value of the quasi-fixed point comes into agreement with experiment. [ 7 ] [ 8 ] Even if there are only two Higgs doublets, the fixed point for the top mass is reduced, to 170–200 GeV. Some theorists believed this was supporting evidence for the Supersymmetric Standard Model; however, no other signs of supersymmetry have emerged at the Large Hadron Collider . Another example of an infrared fixed point is the Banks–Zaks fixed point , in which the coupling constant of a Yang–Mills theory evolves to a fixed value. The beta function vanishes, and the theory possesses a symmetry known as conformal symmetry . [ 9 ]
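The focusing behaviour described above can be seen by numerically integrating the one-loop equation quoted earlier. The sketch below runs the simplified equation (keeping only the y³ and QCD terms) down from a high scale for several starting values; the starting values, the fixed g_3, and the integration span are illustrative assumptions, so the output shows only the qualitative convergence to a common value, not a physical top-mass prediction.

```python
import math

def run_down(y_high: float, g3: float = 1.2, t_span: float = 30.0, steps: int = 30000) -> float:
    """Integrate mu dy/dmu = y/(16 pi^2) * (4.5*y^2 - 8*g3^2) downward in t = ln(mu).

    g3 is held fixed for simplicity (in the full theory it also runs), and
    t_span ~ 30 corresponds roughly to running from a GUT-like scale down to
    the electroweak scale. Returns the low-scale value of y.
    """
    y = y_high
    dt = t_span / steps
    for _ in range(steps):
        beta = y / (16 * math.pi**2) * (4.5 * y**2 - 8 * g3**2)
        y -= beta * dt  # descending in ln(mu)
    return y

# Widely different high-scale starting values funnel toward a common low-scale value.
for y0 in (0.5, 1.0, 2.0, 3.5):
    print(f"y(high) = {y0:.1f} -> y(low) = {run_down(y0):.3f}")
```

Under these assumptions the low-scale value is pinned near y* = sqrt(16/9)·g_3, regardless of the starting point, which is the quasi-fixed-point behaviour the text describes.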
https://en.wikipedia.org/wiki/Infrared_fixed_point
An infrared gas analyzer measures trace gases by determining the absorption of an emitted infrared light source through a certain air sample. Trace gases found in the Earth's atmosphere become excited under specific wavelengths found in the infrared range. The concept behind the technology can be understood as testing how much of the light is absorbed by the air: different molecules in the air absorb different frequencies of light, and air containing much of a certain gas will absorb more of the corresponding frequency, allowing the sensor to report a high concentration of the corresponding molecule . Infrared gas analyzers usually have two chambers: one is a reference chamber while the other is a measurement chamber. Infrared light is emitted from a source on one end of the chamber and passes through a series of chambers that contain given quantities of the various gases in question. The design from 1975 is a nondispersive infrared sensor , the first improved analyzer able to detect more than one component of a sample gas at one time. Earlier analyzers were held back by the fact that a particular gas also has lower absorption bands in the infrared, which can coincide with the bands of other gases. The 1975 design has as many detectors as there are gases to be measured. Each detector has two chambers, which both have an optically aligned infrared source and detector and are both filled with one of the gases in the sample of air to be analyzed. Lying in the optical path are two cells with transparent ends: one contains a reference gas and one contains the gas to be analyzed. Between the infrared source and the cells is a modulator which interrupts the beams of energy. The output from each detector is combined with the output from any other detector which is measuring a signal opposite to the principal signal of that detector. The amount of signal taken from the other detectors is the amount that will offset the proportion of the total signal that corresponds to the interference; this interference comes from gases with a principal lower absorption band that is the same as the principal band of the gas being measured. For instance, if the analyzer is to measure carbon monoxide and carbon dioxide , the chambers must contain a certain amount of these gases. The infrared light is emitted and passes through the sample gas, then a reference gas with a known mixture of the gases in question, and then through the " detector " chambers containing the pure forms of the gases in question. When a "detector" chamber absorbs some of the infrared radiation, it heats up and expands. This causes a rise in pressure within the sealed vessel that can be detected with a pressure transducer or a similar device. The combination of output voltages from the detector chambers for the sample gas can then be compared to the output voltages for the reference chamber. Like earlier infrared gas analyzers, modern analyzers also use nondispersive infrared technology to detect a certain gas by detecting the absorption of infrared wavelengths characteristic of that gas. Infrared energy is emitted from a heated filament. By optically filtering the energy, the radiation spectrum is limited to the absorption band of the gas being measured. A detector measures the energy after the infrared energy has passed through the gas to be measured. This is compared to the energy at a reference condition of no absorption. Many analyzers are wall-mounted devices intended for long-term, unattended gas monitoring .
There are now analysers that measure a range of gases and are highly portable, making them suitable for a wider range of geoscience applications. Fast-response, high-precision analyzers are widely used to measure gas emissions and ecosystem fluxes using the eddy covariance method when used together with a fast-response sonic anemometer . In some analyzers, the reliability of measurements is enhanced by calibrating the analyzer at the reference condition and at a known span concentration. If the ambient air would interfere with measurements, the chamber that houses the energy source is filled with a gas that has no detectable concentration of the gas being measured. Depending on the gas being measured, fresh air, chemically stripped air or nitrogen may be used.
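The comparison between measurement and reference channels can be reduced to a Beer–Lambert calculation: the ratio of transmitted to reference intensity gives an absorbance, from which a path-averaged concentration follows once an absorption coefficient is known. The sketch below illustrates this for a hypothetical single-band channel; the absorption coefficient, path length and intensities are made-up values, and a real instrument relies on a gas-specific calibration rather than the idealized law.

```python
import math

def concentration_from_absorption(i_sample: float, i_reference: float,
                                  absorptivity: float, path_m: float) -> float:
    """Invert the Beer-Lambert law I = I0 * exp(-a * c * L) for concentration c."""
    absorbance = -math.log(i_sample / i_reference)
    return absorbance / (absorptivity * path_m)

# Hypothetical values: detector signals (arbitrary units), absorptivity in (mol/m^3)^-1 m^-1.
i_ref, i_meas = 1.000, 0.942
a, L = 0.25, 0.10   # assumed absorption coefficient and a 10 cm optical path

c = concentration_from_absorption(i_meas, i_ref, a, L)
print(f"path-averaged concentration ~ {c:.2f} mol/m^3")
```

Calibrating at the reference condition fixes i_ref, and calibrating at a known span concentration fixes the effective absorptivity a, which is how the two-point calibration described above enters this calculation.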
https://en.wikipedia.org/wiki/Infrared_gas_analyzer
Infrared multiple photon dissociation ( IRMPD ) is a technique used in mass spectrometry to fragment molecules in the gas phase, usually for structural analysis of the original (parent) molecule. [ 1 ] An infrared laser is directed through a window into the vacuum of the mass spectrometer where the ions are. The mechanism of fragmentation involves the absorption by a given ion of multiple infrared photons . The parent ion is excited into more energetic vibrational states until one or more bonds break, resulting in gas-phase fragments of the parent ion. In the case of powerful laser pulses, the dissociation proceeds via inner-valence ionization of electrons. [ 2 ] [ 3 ] IRMPD is most often used in Fourier transform ion cyclotron resonance mass spectrometry. [ 4 ] By applying intense tunable IR lasers, such as IR-OPOs or IR free electron lasers , the wavelength dependence of the IRMPD yield can be studied. [ 5 ] [ 6 ] This infrared photodissociation spectroscopy allows for the measurement of vibrational spectra of (unstable) species that can only be prepared in the gas phase. Such species include molecular ions but also neutral species like metal clusters, which can be gently ionized after interaction with the IR light for their mass spectrometric detection. [ 7 ] The combination of mass spectrometry and IRMPD with tunable lasers (IR ion spectroscopy) is increasingly recognized as a powerful tool for small-molecule identification. [ 8 ] Examples are metabolomics, where biomarkers are identified in body fluids (urine, blood, cerebrospinal fluid), [ 9 ] and forensic science, where isomeric designer drugs have been identified in seized samples. [ 10 ] Because molecules containing different isotopes have different vibrational resonance frequencies, and hence relatively large differences in IR absorption frequencies, this technique has been suggested as a way to perform isotope separation with difficult-to-separate isotopes in a single pass. For example, molecules of UF 6 containing U-235 might be ionized completely as a result of such a laser resonance, leaving UF 6 containing the heavier U-238 intact.
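The "multiple photon" requirement follows from simple energetics: a single mid-infrared photon carries far less energy than a typical covalent bond, so many photons must be absorbed sequentially. The sketch below makes the estimate for an assumed bond energy of 350 kJ/mol and a 10 μm photon; both numbers are generic illustrative choices, not values for any specific ion.

```python
import math

H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
N_A = 6.022e23   # Avogadro constant, 1/mol

bond_kj_mol = 350.0   # assumed typical covalent bond energy
lambda_m = 10e-6      # 10 um mid-infrared photon (1000 cm^-1)

e_bond = bond_kj_mol * 1e3 / N_A   # bond energy per molecule, J
e_photon = H * C / lambda_m        # photon energy, J

print(f"photon energy  : {e_photon:.2e} J")
print(f"bond energy    : {e_bond:.2e} J")
print(f"photons needed : ~{math.ceil(e_bond / e_photon)}")
```

On these assumptions roughly thirty photons must be pooled in the ion's vibrational modes before a bond can break, which is why intense lasers and sustained irradiation are needed.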
https://en.wikipedia.org/wiki/Infrared_multiphoton_dissociation
Infrared open-path gas detectors send out a beam of infrared light, detecting gas anywhere along the path of the beam. This linear 'sensor' is typically a few metres up to a few hundred metres in length. Open-path detectors can be contrasted with infrared point sensors . They are widely used in the petroleum and petrochemical industries, mostly to achieve very rapid gas leak detection for flammable gases at concentrations comparable to the lower flammable limit (typically a few percent by volume). They are also used, but so far to a lesser extent, in other industries where flammable concentrations can occur, such as coal mining and water treatment . In principle the technique can also be used to detect toxic gases, for instance hydrogen sulfide , at the necessary parts-per-million concentrations, but the technical difficulties involved have so far prevented widespread adoption for toxic gases. Usually, there are separate transmitter and receiver units at either end of a straight beam path. Alternatively, the source and receiver are combined, and the beam is bounced off a retroreflector at the far end of the measurement path. For portable use, detectors have also been made which use the natural albedo of surrounding objects in place of the retroreflector. The presence of a chosen gas (or class of gases) is detected from its absorption of a suitable infrared wavelength in the beam. Rain, fog, etc. in the measurement path can also reduce the strength of the received signal, so it is usual to make a simultaneous measurement at one or more reference wavelengths. The quantity of gas intercepted by the beam is then inferred from the ratio of the signal losses at the measurement and reference wavelengths. The calculation is typically carried out by a microprocessor, which also carries out various checks to validate the measurement and prevent false alarms. The measured quantity is the sum of all the gas along the path of the beam, sometimes termed the path-integral concentration of the gas. Thus the measurement has a natural bias (desirable in many applications) towards the total size of an unintentional gas release, rather than the concentration of the gas that has reached any particular point. Whereas the natural units of measurement for an infrared point sensor are parts-per-million (ppm) or the percentage of the lower flammable limit (%LFL), the natural units of measurement for an open-path detector are ppm.metres (ppm.m) or LFL.metres (LFL.m). For instance, the fire and gas safety system on an offshore platform in the North Sea typically has detectors set to a full-scale reading of 5 LFL.m, with low and high alarms triggered at 1 LFL.m and 3 LFL.m respectively. An open-path detector usually costs more than a single point detector , so there is little incentive to use one in applications that play to a point detector's strengths: where the point detector can be placed at the known location of the highest gas concentration, and a relatively slow response is acceptable. The open-path detector excels in outdoor situations where, even if the likely source of the gas release is known, the evolution of the developing cloud or plume is unpredictable. Gas will almost certainly enter an extended linear beam before finding its way to any single chosen point. Also, point detectors in exposed outdoor locations require weather shields to be fitted, increasing the response time significantly.
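The two-wavelength ratio measurement described above can be sketched in a few lines: the path-integral gas quantity follows from the extra attenuation at the measurement wavelength relative to the reference wavelength, which cancels broadband losses such as rain or fog. The function below is a simplified illustration; real instruments replace the idealized exponential with a gas-specific calibration table, and all numerical values here are assumed.

```python
import math

def path_integral_lfl_m(sig_meas: float, sig_ref: float, k_per_lfl_m: float) -> float:
    """Path-integral gas quantity (LFL.m) from the measured/reference signal ratio.

    Broadband losses (rain, fog, dirty optics) attenuate both channels equally,
    so they cancel in the ratio; only gas absorption at the measurement
    wavelength remains. Assumes an idealized exponential response with an
    effective absorption coefficient k_per_lfl_m per LFL.m.
    """
    ratio = sig_meas / sig_ref
    return -math.log(ratio) / k_per_lfl_m

k = 0.20   # assumed effective absorption coefficient per LFL.m
reading = path_integral_lfl_m(sig_meas=0.65, sig_ref=0.80, k_per_lfl_m=k)
print(f"reading = {reading:.2f} LFL.m")   # compare against 1 and 3 LFL.m alarm levels
```

With the assumed numbers the reading comes out just above the 1 LFL.m low-alarm level quoted above, even though both raw signals are heavily attenuated, illustrating why the ratio, not the absolute signal, carries the gas information.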
Open path detectors can also show a cost advantage in any application where a row of point detectors would be required to achieve the same coverage, for instance monitoring along a pipeline, or around the perimeter of a plant. Not only will one detector replace several, but the costs of installation, maintenance, cabling etc. are likely to be lower. In principle any source of infrared radiation could be used, together with an optical system of lenses or mirrors to form the transmitted beam. In practice the following sources have been used, always with some form of modulation to aid the signal processing at the receiver: An incandescent light bulb , modulated by pulsing the current powering the filament or by a mechanical chopper . For systems used outdoors, it is difficult for an incandescent source to compete with the intensity of sunlight when the sun shines directly into the receiver. Also, it is difficult to achieve modulation frequencies distinguishable from those that can be produced naturally, for instance by heat shimmer or by sunlight reflecting off waves at sea. A gas-discharge lamp is capable of exceeding the spectral power of direct sunlight in the infrared, especially when pulsed. Modern open path systems typically use a xenon flashtube powered by a capacitor discharge. Such pulsed sources are inherently modulated. A semiconductor laser provides a relatively weak source, but one that can be modulated at high frequency in wavelength as well as amplitude. This property permits various signal processing schemes based on Fourier analysis , of use when the absorption of the gas is weak but narrow in spectral linewidth . The precise wavelength passbands used must be isolated from the broad infrared spectrum. In principle any conventional spectrometer technique is possible, but the NDIR technique with multilayer dielectric filters and beamsplitters is most often used. These wavelength-defining components are usually located in the receiver, although one design has shared the task with the transmitter. At the receiver, the infrared signal strengths are measured by some form of infrared detector . Generally photodiode detectors are preferred, and are essential for the higher modulation frequencies, whereas slower photoconductive detectors may be required for longer wavelength regions. The signals are fed to low-noise amplifiers , then invariably subject to some form of digital signal processing . The absorption coefficient of the gas will vary across the passband, so the simple Beer–Lambert law cannot be applied directly. For this reason the processing usually employs a calibration table , applicable for a particular gas, type of gas, or gas mixture, and sometimes configurable by the user. The choice of infrared wavelengths used for the measurement largely defines the detector's suitability for a particular applications. Not only must the target gas (or gases) have a suitable absorption spectrum, the wavelengths must lie within a spectral window so the air in the beam path is itself transparent. These wavelength regions have been used: The first open-path detector offered for routine industrial use, as distinct from research instruments built in small numbers, was the Wright and Wright 'Pathwatch' in the US, 1983. Acquired by Det-Tronics (Detector Electronics Corporation) in 1992, the detector operated in the 3.4 μm region with a powerful incandescent source and a mechanical chopper . 
It did not achieve large-volume sales, mainly because of cost and doubts about long-term reliability with moving parts. Beginning in 1985, Shell Research in the UK was funded by Shell Natural Gas to develop an open-path detector with no moving parts. The advantages of the 2.3 μm wavelength were identified, and a research prototype was demonstrated. This design had a combined transmitter-receiver with a corner-cube retroreflector at 50 m. It used a pulsed incandescent lamp, PbS photoconductive detectors in the gas and reference channels, and an Intel 8031 microprocessor for signal processing. In 1987 Shell licensed this technology to Sieger-Zellweger (later Honeywell ), who designed and marketed their industrial version as the 'Searchline', using a retro-reflective panel made up of multiple corner-cubes. This was the first open-path detector to be certified for use in hazardous areas and to have no moving parts. Later work by Shell Research used two alternately pulsed incandescent sources in the transmitter and a single PbS detector in the receiver, avoiding zero drifts caused by the variable responsivity of PbS detectors. This technology was offered to Sieger-Zellweger, and later licensed to PLMS, a company part-owned by Shell Ventures UK. The PLMS GD4001/2 in 1991 were the first detectors to achieve a truly stable zero without moving parts or software compensation of slow drifts. They were also the first infrared gas detectors of any kind to be certified intrinsically safe . The Israeli company Spectronix (also Spectrex) made an important advance in 1996 with their SafEye, the first to use a flash-tube source, followed by Sieger-Zellweger with their Searchline Excel in 1998. In 2001 the PLMS Pulsar, soon afterwards acquired by Dräger as their Polytron Pulsar, was the first detector to incorporate sensing to monitor the mutual alignment of the transmitter and receiver during both installation and routine operation.
https://en.wikipedia.org/wiki/Infrared_open-path_detector
Infrared photodissociation (IRPD) spectroscopy uses infrared radiation to break bonds in molecules, often ionic ones ( photodissociation ), within a mass spectrometer. [ 1 ] In combination with post-ionization, this technique can also be used for neutral species. IRPD spectroscopy has been shown to use electron ionization, corona discharge, and electrospray ionization to obtain spectra of volatile and nonvolatile compounds. [ 2 ] [ 3 ] Ionized gases trapped in a mass spectrometer can be studied without the need of a solvent, as in infrared spectroscopy . [ 4 ] Scientists began to wonder about the energetics of cluster formation in the 20th century, and Henry Eyring developed the activated-complex theory describing the kinetics of reactions. [ 5 ] Interest in studying the weak interactions of molecules and ions (e.g. van der Waals forces) in clusters encouraged gas-phase spectroscopy, and in 1962 D.H. Rank studied weak interactions in the gas phase using traditional infrared spectroscopy. [ 6 ] D.S. Bomse used IRPD with an ICR to study isotopic compounds in 1980 at the California Institute of Technology. [ 7 ] Spectroscopy of weakly bound clusters was limited by low cluster concentrations and the variety of accessible cluster states. [ 8 ] Cluster states vary in part due to frequent collisions with other species; to reduce collisions, gas-phase IRPD forms clusters in low-pressure ion traps (e.g. FT-ICR). Complexes of nitrogen and water were among the first studied with the aid of a mass spectrometer, by A. Good at the University of Alberta in the 1960s. [ 9 ] [ 3 ] Photodissociation is used to detect the electromagnetic activity of ions, compounds, and clusters when spectroscopy cannot be directly applied. Low concentrations of analyte can be one inhibiting factor for spectroscopy, especially in the gas phase. [ 4 ] Mass spectrometers, both time-of-flight and ion cyclotron resonance , have been used to study hydrated ion clusters. [ 10 ] Instruments are able to use ESI to effectively form hydrated ion clusters. Laser ablation and corona discharge have also been used to form ion clusters. Complexes are directed through a mass spectrometer, where they are irradiated with infrared light, for example from a Nd:YAG laser . [ 10 ] Infrared photodissociation spectroscopy provides a powerful capability to study the bond energies of coordination complexes . IRPD can measure varying bond energies of compounds, including dative bonds and the coordination energies of molecular clusters. [ 1 ] [ 3 ] Structural information about analytes can be acquired by using mass selectivity and interpreting fragmentation . The spectroscopic information usually resembles that of linear infrared spectra and can be used to obtain detailed structural information of gas-phase species; in the case of metal complexes, insights into ligand coordination, bond activation and successive reactions can be obtained. [ 11 ]
https://en.wikipedia.org/wiki/Infrared_photodissociation_spectroscopy
An infrared spectroscopy correlation table (or table of infrared absorption frequencies ) is a list of absorption peaks and frequencies, typically reported in wavenumbers , for common types of molecular bonds and functional groups . [ 1 ] [ 2 ] In physical and analytical chemistry , infrared spectroscopy (IR spectroscopy) is a technique used to identify chemical compounds based on the way infrared radiation is absorbed by the compound. These absorptions do not apply only to bonds in organic molecules; IR spectroscopy is also useful for the analysis of inorganic compounds, such as metal complexes or fluoromanganates. [ 3 ] Tables of vibrational transitions of stable [ 4 ] and transient molecules [ 5 ] are also available.
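A correlation table maps naturally onto a simple range lookup. The sketch below encodes a few widely quoted band assignments (broad O-H stretch near 3200-3550 cm⁻¹, alkane C-H stretch near 2850-3000 cm⁻¹, C=O stretch near 1670-1780 cm⁻¹) and returns candidate groups for an observed peak; exact range boundaries vary between published tables, so these values are representative rather than definitive.

```python
# Representative mid-IR band assignments (wavenumber ranges in cm^-1).
# Boundaries are typical textbook values and vary between correlation tables.
CORRELATION_TABLE = [
    ((3200, 3550), "O-H stretch (alcohols, broad)"),
    ((2850, 3000), "C-H stretch (alkanes)"),
    ((2100, 2260), "C#C / C#N stretch (alkynes, nitriles)"),
    ((1670, 1780), "C=O stretch (carbonyl compounds)"),
    ((1580, 1650), "C=C stretch (alkenes, aromatics)"),
]

def assign_peak(wavenumber_cm: float) -> list[str]:
    """Return all table entries whose range contains the observed peak."""
    return [name for (lo, hi), name in CORRELATION_TABLE if lo <= wavenumber_cm <= hi]

print(assign_peak(1715))  # -> ['C=O stretch (carbonyl compounds)']
print(assign_peak(2900))  # -> ['C-H stretch (alkanes)']
```

Because several groups can absorb in overlapping regions, a practical lookup returns all matching assignments and leaves the final identification to the analyst, just as a printed correlation table does.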
https://en.wikipedia.org/wiki/Infrared_spectroscopy_correlation_table
In botany , an infraspecific name is the scientific name for any taxon below the rank of species , i.e. an infraspecific taxon or infraspecies . The scientific names of botanical taxa are regulated by the International Code of Nomenclature for algae, fungi, and plants (ICN). [ 1 ] As specified by the ICN, the name of an infraspecific taxon is a combination of the name of a species and an infraspecific epithet , [ 2 ] separated by a connecting term that denotes the rank of the taxon. An example of an infraspecific name is Astrophytum myriostigma subvar. glabrum , the name of a subvariety of the species Astrophytum myriostigma (bishop's hat cactus). In the previous example, glabrum is the infraspecific epithet. Names below the rank of species of animals and of cultivated plants are regulated by different codes of nomenclature and are formed somewhat differently. Article 24 of the ICN describes how infraspecific names are constructed. [ 2 ] The order of the three parts of an infraspecific name is: the name of the species, the connecting term denoting the rank, and the infraspecific epithet. It is customary to italicize all three parts of such a name, but not the connecting term; [ 3 ] for example, Astrophytum myriostigma subvar. glabrum . The recommended abbreviations for ranks below species are subsp. (subspecies), var. (variety), subvar. (subvariety), f. (form) and subf. (subform). [ 4 ] Although the connecting terms mentioned above are the recommended ones, the ICN allows for other connecting terms in validly published infraspecific taxa. It specifically mentions that the Greek letters α, β, γ, etc. can be used in this way in the original document, [ 5 ] and further ranks may be added without limit. [ 6 ] Names that use these connecting terms are now deprecated (though still legal), but they have an importance because they can be basionyms of current species. The commonest cases use "β" and "b"; examples mentioned in the ICN are Cynoglossum cheirifolium β Anchusa ( lanata ) [ 7 ] and Polyporus fomentarius β applanatus , [ 8 ] whilst other examples (coming from the fungus database Index Fungorum ) are Agaricus plexipes b fuliginaria [ 9 ] and Peziza capula β cernua . [ 10 ] The ICN allows the possibility that a validly published name could have no defined rank, and uses "[unranked]" as the connecting term in such cases. [ 11 ] Like specific epithets, infraspecific epithets cannot be used in isolation as names. [ 12 ] Thus the name of a particular species of Acanthocalycium is Acanthocalycium klimpelianum , which can be abbreviated to A. klimpelianum where the context makes the genus clear. The species cannot be referred to as just klimpelianum . In the same way, the name of a particular variety of Acanthocalycium klimpelianum is Acanthocalycium klimpelianum var. macranthum , which can be abbreviated to A. k. var. macranthum where the context makes the species clear. The variety cannot be referred to as just macranthum . Sometimes more than three parts will be given; strictly speaking, this is not a name, but a classification . The ICN gives the example of Saxifraga aizoon var. aizoon subvar. brevifolia f. multicaulis subf. surculosa ; the name of the subform would be Saxifraga aizoon subf. surculosa . [ 13 ] For a proposed infraspecific name to be legitimate it must be in accordance with all the rules of the ICN. [ 14 ] Only some of the main points are described here. A key concept in botanical names is that of a type . In many cases the type will be a particular preserved specimen stored in a herbarium , although there are other kinds of type. Like other names, an infraspecific name is attached to a type. Whether a plant should be given a particular infraspecific name can then be decided by comparing it to the type.
[ 15 ] There is no requirement for a species to be divided into infraspecific taxa, of whatever rank; in other words, a species does not have to have subspecies, varieties, forms, etc. However, if infraspecific ranks are created, then the name of the type of the species must repeat the specific epithet as its infraspecific epithet. The type acquires this name automatically as soon as any infraspecific rank is created. [ 16 ] As an example, consider Poa secunda J.Presl , whose type specimen is in the Wisconsin State Herbarium. [ 17 ] The same epithet can be used again within a species, at whatever level, only if the names with the re-used epithet are attached to the same type. [ 16 ] Thus there can be a form called Poa secunda f. juncifolia as well as the subspecies Poa secunda subsp. juncifolia if, and only if, the type specimen of Poa secunda f. juncifolia is the same as the type specimen of Poa secunda subsp. juncifolia (in other words, if there is a single type specimen whose classification is Poa secunda subsp. juncifolia f. juncifolia ). If two infraspecific taxa which have different types are accidentally given the same epithet, then a homonym has been created. The earliest published name is the legitimate one, and the other must be changed. [ 18 ] When indicating authors for infraspecific names, it is possible to show either just the author(s) of the final, infraspecific epithet, or the authors of both the specific and the infraspecific epithets, as is demonstrated throughout the ICN. [ 19 ] In zoological nomenclature , names of taxa below species rank are formed somewhat differently, using a trinomen or 'trinomial name'. No connecting term is required, as there is only one rank below species, the subspecies . The Prokaryotic Code was split from the ICN in 1975. This nomenclature only governs one infraspecific rank, the subspecies, but allows a number of infrasubspecific subdivisions to be used. The authorship is to be specified in the form " Bacillus subtilis subsp. spizizenii Nakamura et al. 1999.", i.e. with only the infraspecific author. [ 20 ] : Rules 13–4, Appendix 10 The ICN does not regulate the names of cultivated plants, i.e. cultivars , plants specifically created for use in agriculture or horticulture. Such names are regulated by the International Code of Nomenclature for Cultivated Plants (ICNCP). Although logically below the rank of species (and hence "infraspecific"), a cultivar name may be attached to any scientific name at the genus level or below. The minimum requirement is to specify a genus name. [ 21 ] For example, Achillea 'Cerise Queen' is a cultivar; Pinus nigra 'Arnold Sentinel' is a cultivar of the species P. nigra (which is propagated vegetatively, by cloning ).
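The three-part structure described above (species name, connecting term, infraspecific epithet) is mechanical enough to express in code. The sketch below assembles and abbreviates infraspecific names following ICN Article 24 as summarised in this article; the class and function names are invented for illustration, and the rank list mirrors the recommended abbreviations given earlier.

```python
from dataclasses import dataclass

# Connecting terms recommended by the ICN for ranks below species.
RANK_ABBREVIATIONS = {"subspecies": "subsp.", "variety": "var.",
                      "subvariety": "subvar.", "form": "f.", "subform": "subf."}

@dataclass
class InfraspecificName:
    genus: str
    species_epithet: str
    rank: str            # e.g. "subvariety"
    infra_epithet: str

    def full_name(self) -> str:
        """Species name + connecting term + infraspecific epithet."""
        term = RANK_ABBREVIATIONS[self.rank]
        return f"{self.genus} {self.species_epithet} {term} {self.infra_epithet}"

    def abbreviated(self) -> str:
        """Short form usable where context makes genus and species clear."""
        term = RANK_ABBREVIATIONS[self.rank]
        return (f"{self.genus[0]}. {self.species_epithet[0]}. "
                f"{term} {self.infra_epithet}")

n = InfraspecificName("Astrophytum", "myriostigma", "subvariety", "glabrum")
print(n.full_name())    # Astrophytum myriostigma subvar. glabrum
print(n.abbreviated())  # A. m. subvar. glabrum
```

Note that, per the rules above, the infraspecific epithet alone is never a valid name, so a renderer like this should always emit at least the abbreviated genus and species context.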
https://en.wikipedia.org/wiki/Infraspecific_name
In mathematics, an infrastructure is a group-like structure appearing in global fields. In 1972, D. Shanks first discovered the infrastructure of a real quadratic number field and applied his baby-step giant-step algorithm to compute the regulator of such a field in $\mathcal{O}(D^{1/4+\varepsilon})$ binary operations (for every $\varepsilon > 0$), where $D$ is the discriminant of the quadratic field; previous methods required $\mathcal{O}(D^{1/2+\varepsilon})$ binary operations. [ 1 ] Ten years later, H. W. Lenstra published [ 2 ] a mathematical framework describing the infrastructure of a real quadratic number field in terms of "circular groups". It was also described by R. Schoof [ 3 ] and H. C. Williams, [ 4 ] and later extended by H. C. Williams, G. W. Dueck and B. K. Schmid to certain cubic number fields of unit rank one [ 5 ] [ 6 ] and by J. Buchmann and H. C. Williams to all number fields of unit rank one. [ 7 ] In his habilitation thesis, J. Buchmann presented a baby-step giant-step algorithm to compute the regulator of a number field of arbitrary unit rank. [ 8 ] The first description of infrastructures in number fields of arbitrary unit rank was given by R. Schoof using Arakelov divisors in 2008. [ 9 ] The infrastructure was also described for other global fields, namely for algebraic function fields over finite fields. This was done first by A. Stein and H. G. Zimmer in the case of real hyperelliptic function fields. [ 10 ] It was extended to certain cubic function fields of unit rank one by Renate Scheidler and A. Stein. [ 11 ] [ 12 ] In 1999, S. Paulus and H.-G. Rück related the infrastructure of a real quadratic function field to the divisor class group. [ 13 ] This connection can be generalized to arbitrary function fields and, combined with R. Schoof's results, to all global fields. [ 14 ]

A one-dimensional (abstract) infrastructure $(X, d)$ consists of a real number $R > 0$ and a finite set $X \neq \emptyset$ together with an injective map $d : X \to \mathbb{R}/R\mathbb{Z}$. [ 15 ] The map $d$ is often called the distance map. By interpreting $\mathbb{R}/R\mathbb{Z}$ as a circle of circumference $R$ and by identifying $X$ with $d(X)$, one can see a one-dimensional infrastructure as a circle with a finite set of points on it.

A baby step is a unary operation $bs : X \to X$ on a one-dimensional infrastructure $(X, d)$. Visualizing the infrastructure as a circle, a baby step assigns to each point of $d(X)$ the next one. Formally, one can define this by assigning to $x \in X$ the real number $f_x := \inf\{ f' > 0 \mid d(x) + f' \in d(X) \}$; then one can define $bs(x) := d^{-1}(d(x) + f_x)$.

Observing that $\mathbb{R}/R\mathbb{Z}$ is naturally an abelian group, one can consider the sum $d(x) + d(y) \in \mathbb{R}/R\mathbb{Z}$ for $x, y \in X$. In general, this is not an element of $d(X)$; instead, one can take an element of $d(X)$ which lies nearby.
To formalize this concept, assume that there is a map $red : \mathbb{R}/R\mathbb{Z} \to X$; then one can define $gs(x, y) := red(d(x) + d(y))$ to obtain a binary operation $gs : X \times X \to X$, called the giant step operation. Note that this operation is in general not associative. The main difficulty is how to choose the map $red$. Assuming that one wants the condition $red \circ d = \mathrm{id}_X$, a range of possibilities remains. One possible choice [ 15 ] is given as follows: for $v \in \mathbb{R}/R\mathbb{Z}$, define $f_v := \inf\{ f \geq 0 \mid v - f \in d(X) \}$; then one can define $red(v) := d^{-1}(v - f_v)$. This choice, while seeming somewhat arbitrary, appears in a natural way when one tries to obtain infrastructures from global fields. [ 14 ] Other choices are possible as well, for example choosing an element $x \in X$ such that $|d(x) - v|$ is minimal (here, $|d(x) - v|$ stands for $\inf\{ |f - v| \mid f \in d(x) \}$, as $d(x)$ is a coset of the form $w + R\mathbb{Z}$); one possible construction in the case of real quadratic hyperelliptic function fields is given by S. D. Galbraith, M. Harrison and D. J. Mireles Morales. [ 16 ]

D. Shanks observed the infrastructure in real quadratic number fields when he was looking at cycles of reduced binary quadratic forms. Note that there is a close relation between reducing binary quadratic forms and continued fraction expansion; one step in the continued fraction expansion of a certain quadratic irrationality gives a unary operation on the set of reduced forms, which cycles through all reduced forms in one equivalence class. Arranging all these reduced forms in a cycle, Shanks noticed that one can quickly jump to reduced forms further away from the beginning of the cycle by composing two such forms and reducing the result. He called this binary operation on the set of reduced forms a giant step, and the operation to go to the next reduced form in the cycle a baby step.

The set $\mathbb{R}/R\mathbb{Z}$ has a natural group operation and the giant step operation is defined in terms of it. Hence, it makes sense to compare the arithmetic in the infrastructure to the arithmetic in $\mathbb{R}/R\mathbb{Z}$. It turns out that the group operation of $\mathbb{R}/R\mathbb{Z}$ can be described using giant steps and baby steps, by representing elements of $\mathbb{R}/R\mathbb{Z}$ by elements of $X$ together with a relatively small real number; this was first described by D. Hühnlein and S. Paulus [ 17 ] and by M. J. Jacobson, Jr., R. Scheidler and H. C. Williams [ 18 ] in the case of infrastructures obtained from real quadratic number fields. They used floating point numbers to represent the real numbers, and called these representations CRIAD-representations and $(f, p)$-representations, respectively.
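To make these definitions concrete, here is a small Python sketch (a toy model, not taken from the literature) of a one-dimensional infrastructure: a finite set of points on a circle of circumference R, with the baby step, the reduction map red described above, and the resulting giant step:

```python
import bisect

R = 10.0                       # circumference of the circle R/RZ
X = [0.0, 1.3, 3.1, 4.9, 7.2]  # d(X): sorted distances of the points, in [0, R)

def baby_step(v):
    """Return the next point of d(X) after v, wrapping around the circle."""
    i = bisect.bisect_right(X, v) % len(X)
    return X[i]

def red(v):
    """Reduce v in R/RZ to the nearest point of d(X) at or below v."""
    v %= R
    i = bisect.bisect_right(X, v) - 1  # largest point <= v; wraps to X[-1] if v < X[0]
    return X[i]

def giant_step(x, y):
    """gs(x, y) = red(d(x) + d(y)); points are identified with their distances."""
    return red((x + y) % R)

print(baby_step(3.1))        # 4.9
print(giant_step(3.1, 4.9))  # red(8.0) = 7.2
```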
More generally, one can define a similar concept for all one-dimensional infrastructures; these are sometimes called $f$-representations. [ 15 ] A set of $f$-representations is a subset $fRep$ of $X \times \mathbb{R}/R\mathbb{Z}$ such that the map $\Psi_{fRep} : fRep \to \mathbb{R}/R\mathbb{Z},\ (x, f) \mapsto d(x) + f$ is a bijection and $(x, 0) \in fRep$ for every $x \in X$. If $red : \mathbb{R}/R\mathbb{Z} \to X$ is a reduction map, then $fRep_{red} := \{ (x, f) \in X \times \mathbb{R}/R\mathbb{Z} \mid red(d(x) + f) = x \}$ is a set of $f$-representations; conversely, if $fRep$ is a set of $f$-representations, one can obtain a reduction map by setting $red(v) = \pi_1(\Psi_{fRep}^{-1}(v))$, where $\pi_1 : X \times \mathbb{R}/R\mathbb{Z} \to X,\ (x, f) \mapsto x$ is the projection onto $X$. Hence, sets of $f$-representations and reduction maps are in a one-to-one correspondence.

Using the bijection $\Psi_{fRep} : fRep \to \mathbb{R}/R\mathbb{Z}$, one can pull the group operation on $\mathbb{R}/R\mathbb{Z}$ over to $fRep$, turning $fRep$ into an abelian group $(fRep, +)$ by $x + y := \Psi_{fRep}^{-1}(\Psi_{fRep}(x) + \Psi_{fRep}(y))$ for $x, y \in fRep$. In certain cases, this group operation can be described explicitly without using $\Psi_{fRep}$ and $d$.

In case one uses the reduction map $red : \mathbb{R}/R\mathbb{Z} \to X,\ v \mapsto d^{-1}(v - \inf\{ f \geq 0 \mid v - f \in d(X) \})$, one obtains $fRep_{red} = \{ (x, f) \mid f \geq 0,\ \forall f' \in [0, f) : d(x) + f' \notin d(X) \}$. Given $(x, f), (x', f') \in fRep_{red}$, one can consider $(x'', f'')$ with $x'' = gs(x, x')$ and $f'' = f + f' + (d(x) + d(x') - d(gs(x, x'))) \geq 0$; this is in general not an element of $fRep_{red}$, but one can reduce it as follows: one computes $bs^{-1}(x'')$ and $f'' - (d(x'') - d(bs^{-1}(x'')))$; in case the latter is not negative, one replaces $(x'', f'')$ with $(bs^{-1}(x''), f'' - (d(x'') - d(bs^{-1}(x''))))$ and continues.
If the value was negative, one has that $(x'', f'') \in fRep_{red}$ and that $\Psi_{fRep_{red}}(x, f) + \Psi_{fRep_{red}}(x', f') = \Psi_{fRep_{red}}(x'', f'')$, i.e. $(x, f) + (x', f') = (x'', f'')$.
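Continuing the toy Python model above, the following sketch (again illustrative only) implements this addition of $f$-representations, using the inverse baby step to reduce the result exactly as described:

```python
def inverse_baby_step(x):
    """Return the point of d(X) immediately preceding x on the circle."""
    i = (X.index(x) - 1) % len(X)
    return X[i]

def add_f_reps(xf, xf2):
    """Add two f-representations (x, f) and (x', f') over fRep_red."""
    (x, f), (x2, f2) = xf, xf2
    x3 = giant_step(x, x2)
    # f'' = f + f' + (d(x) + d(x') - d(gs(x, x'))), the distance lost in reduction:
    f3 = f + f2 + ((x + x2) % R - x3) % R
    while True:
        prev = inverse_baby_step(x3)
        delta = (x3 - prev) % R          # d(x'') - d(bs^{-1}(x'')) on the circle
        if f3 - delta < 0:
            return (x3, f3)              # reduced: (x'', f'') is in fRep_red
        x3, f3 = prev, f3 - delta        # otherwise step back and continue

print(add_f_reps((3.1, 0.5), (4.9, 0.2)))  # -> (7.2, 1.5), since 3.6 + 5.1 = 8.7
```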
https://en.wikipedia.org/wiki/Infrastructure_(number_theory)
Infrastructure (also known as "capital goods" or "fixed capital") is a platform for governance, commerce, and economic growth and is "a lifeline for modern societies". [ 1 ] It is the hallmark of economic development. [ 2 ] It has been characterized as the mechanism that delivers the "fundamental needs of society: food, water, energy, shelter, governance ... without infrastructure, societies disintegrate and people die." [ 3 ] Adam Smith argued that fixed asset spending was the "third rationale for the state, behind the provision of defense and justice." [ 4 ] Societies enjoy the use of "...highway, waterway, air, and rail systems that have allowed the unparalleled mobility of people and goods. Water-borne diseases are virtually nonexistent because of water and wastewater treatment, distribution, and collection systems. In addition, telecommunications and power systems have enabled our economic growth." [ 5 ] This development happened over a period of several centuries. It represents a number of successes and failures in the past that were termed public works and, even before that, internal improvements. In the 21st century, this type of development is termed infrastructure. [ 6 ] Infrastructure can be described as tangible capital assets (income-earning assets), whether owned by private companies or the government. [ 7 ] Infrastructure may be owned and managed by governments or by private companies, such as a sole public utility or railway company. Generally, most roads, major ports and airports, water distribution systems and sewage networks are publicly owned, whereas most energy and telecommunications networks are privately owned. Publicly owned infrastructure may be paid for from taxes, tolls, or metered user fees, whereas private infrastructure is generally paid for by metered user fees. [ 8 ] [ 9 ] Major investment projects are generally financed by the issuance of long-term bonds. Government-owned and -operated infrastructure may thus be developed and operated in the private sector or in public-private partnerships, in addition to in the public sector. In the United States, public spending on infrastructure has varied between 2.3% and 3.6% of GDP since 1950. [ 10 ] Many financial institutions invest in infrastructure. Infrastructure debt is a complex investment category reserved for highly sophisticated institutional investors who can gauge jurisdiction-specific risk parameters, assess a project's long-term viability, understand transaction risks, conduct due diligence, negotiate (multi-)creditors' agreements, make timely decisions on consents and waivers, and analyze loan performance over time. Research conducted by the World Pensions Council (WPC) suggests that most UK and European pension funds wishing to gain a degree of exposure to infrastructure debt have done so indirectly, through investments made in infrastructure funds managed by specialised Canadian, US and Australian funds. [ 11 ] On November 29, 2011, the British government unveiled an unprecedented plan to encourage large-scale pension investments in new roads, hospitals, airports, etc. across the UK. The plan is aimed at enticing 20 billion pounds ($30.97 billion) of investment in domestic infrastructure projects. Pension and sovereign wealth funds are major direct investors in infrastructure. [ 12 ] [ 13 ] Most pension funds have long-dated liabilities, with matching long-term investments.
These large institutional investors need to protect the long-term value of their investments from inflationary debasement of currency and market fluctuations, and to provide recurrent cash flows to pay for retiree benefits in the short to medium term: from that perspective, think tanks such as the World Pensions Council (WPC) have argued that infrastructure is an ideal asset class that provides tangible advantages such as long duration (facilitating cash flow matching with long-term liabilities), protection against inflation, and statistical diversification (low correlation with 'traditional' listed assets such as equity and fixed income investments), thus reducing overall portfolio volatility. [ 14 ] [ 12 ] Furthermore, in order to facilitate the investment of institutional investors in developing countries' infrastructure markets, it is necessary to design risk-allocation mechanisms more carefully, given the higher risks of developing countries' markets. [ 15 ] The notion of supranational and public co-investment in infrastructure projects jointly with private institutional asset owners has gained traction amongst IMF, World Bank and European Commission policy makers in recent years, notably in the last months of 2014 and early 2015: the Annual Meetings of the International Monetary Fund and the World Bank Group (October 2014) and the adoption of the €315 bn European Commission Investment Plan for Europe (December 2014). [ 16 ] Some experts have warned against the risk of "infrastructure nationalism", insisting that steady investment flows from foreign pension and sovereign funds were key for the long-term success of the asset class, notably in large European jurisdictions such as France and the UK. [ 17 ] An interesting comparison between privatisation and government-sponsored public works involves high-speed rail (HSR) projects in East Asia. In 1998, the Taiwanese government awarded the Taiwan High Speed Rail Corporation, a private organisation, a 35-year concession contract to construct the 345 km line from Taipei to Kaohsiung. Conversely, in 2004 the South Korean government charged the Korean High Speed Rail Construction Authority, a public entity, with constructing its high-speed rail line, 412 km from Seoul to Busan, in two phases. Although the implementation strategies differed, Taiwan successfully delivered the HSR project in terms of project management (time, cost, and quality), whereas South Korea successfully delivered its HSR project in terms of product success (meeting owners' and users' needs, particularly in ridership). Additionally, South Korea successfully arranged a transfer of high-speed rail technology from French engineers, essentially creating a domestic industry of HSR manufacturing capable of exporting knowledge, equipment, and parts worldwide. [ 18 ] The method of infrastructure asset management is based upon the definition of a standard of service (SoS) that describes how an asset will perform in objective and measurable terms. The SoS includes the definition of a minimum condition grade, which is established by considering the consequences of a failure of the infrastructure asset. Once the asset management process is completed, official conclusions are drawn. The American Society of Civil Engineers gave the United States a "D+" on its 2017 infrastructure report card. [ 19 ] Most infrastructure is designed by civil engineers or architects. [ 20 ]
Generally, road and rail transport networks, as well as water and waste management infrastructure, are designed by civil engineers; electrical power and lighting networks are designed by power engineers and electrical engineers; and telecommunications, computing and monitoring networks are designed by systems engineers. In the case of urban infrastructure, the general layout of roads, sidewalks and public places may sometimes be developed at a conceptual level by urban planners or architects, although the detailed design will still be performed by civil engineers. Depending upon the height of a building, it may be designed by an architect or, for tall buildings, by a structural engineer; if an industrial or processing plant is required, the structures and foundation work will still be done by civil engineers, but the process equipment and piping may be designed by an industrial engineer or a process engineer. In terms of engineering tasks, the design and construction management process follows a well-defined sequence of steps. In general, infrastructure is planned by urban planners or civil engineers [ 21 ] at a high level for transportation, water/wastewater, electrical, urban zones, parks and other public and private systems. These plans typically analyze policy decisions and the impacts and trade-offs of alternatives. In addition, planners may lead or assist with the environmental reviews that are commonly required to construct infrastructure. Colloquially this process is referred to as infrastructure planning. These activities are usually performed in preparation for preliminary engineering or conceptual design, which is led by civil engineers or architects; preliminary studies may also be performed. Investment in infrastructure is part of the capital accumulation required for economic development and may affect socioeconomic measures of welfare. [ 22 ] The causality of infrastructure and economic growth has always been in debate. Generally, infrastructure plays a critical role in expanding national production capacity, which leads to an increase in a country's wealth. [ 23 ] In developing nations, expansions in electric grids, roadways, and railways show marked growth in economic development. However, the relationship does not hold in advanced nations, which witness ever lower rates of return on such infrastructure investments. Nevertheless, infrastructure yields indirect benefits through the supply chain, land values, small business growth, consumer sales, and the social benefits of community development and access to opportunity. The American Society of Civil Engineers cites the many transformative projects that have shaped the growth of the United States, including the Transcontinental Railroad that connected major cities from the Atlantic to the Pacific coast; the Panama Canal that revolutionised shipping by connecting the two oceans of the Western hemisphere; the Interstate Highway System that spawned the mobility of the masses; and still others that include the Hoover Dam, the Trans-Alaskan pipeline, and many bridges (the Golden Gate, Brooklyn, and San Francisco–Oakland Bay Bridge). [ 24 ] All these efforts are testimony to the correlation between infrastructure and economic development.
European and Asian development economists have also argued that the existence of modern rail infrastructure is a significant indicator of a country's economic advancement: this perspective is illustrated notably through the Basic Rail Transportation Infrastructure Index (known as the BRTI Index). [ 25 ] During the Great Depression of the 1930s, many governments undertook public works projects in order to create jobs and stimulate the economy. The economist John Maynard Keynes provided a theoretical justification for this policy in The General Theory of Employment, Interest and Money, [ 26 ] published in 1936. Following the Great Recession, some again proposed investing in infrastructure as a means of stimulating the economy (see the American Recovery and Reinvestment Act of 2009). While infrastructure development may initially be damaging to the natural environment, justifying the need to assess environmental impacts, it may also contribute to mitigating the "perfect storm" of environmental and energy sustainability, particularly in the role transportation plays in modern society. [ 27 ] Offshore wind power in England and Denmark may cause issues for local ecosystems but is an incubator of clean energy technology for the surrounding regions. Ethanol production may overuse available farmland in Brazil but has propelled the country to energy independence. High-speed rail may cause noise and require wide swathes of rights-of-way through countryside and urban communities, but it has helped China, Spain, France, Germany, Japan, and other nations deal with concurrent issues of economic competitiveness, climate change, energy use, and built environment sustainability.
https://en.wikipedia.org/wiki/Infrastructure_and_economics
Infrastructure as code (IaC) is the process of managing and provisioning computer data center resources through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. [ 1 ] The IT infrastructure managed by this process comprises both physical equipment, such as bare-metal servers, as well as virtual machines, and associated configuration resources. The definitions may be kept in a version control system, rather than maintaining the code through manual processes. The code in the definition files may use either scripts or declarative definitions, but IaC more often employs declarative approaches. IaC grew as a response to the difficulty posed by utility computing and second-generation web frameworks. In 2006, the launch of Amazon Web Services' Elastic Compute Cloud and the 1.0 version of Ruby on Rails just months before [ 2 ] created widespread scaling difficulties in the enterprise that were previously experienced only at large, multi-national companies. [ 3 ] With new tools emerging to handle this ever-growing field, the idea of IaC was born. The thought of modelling infrastructure with code, and then having the ability to design, implement, and deploy application infrastructure with known software best practices, appealed to both software developers and IT infrastructure administrators. The ability to treat infrastructure as code and use the same tools as any other software project would allow developers to rapidly deploy applications. [ 4 ] The value of IaC can be broken down into three measurable categories: cost, speed, and risk. [ citation needed ] Cost reduction aims at helping not only the enterprise financially, but also in terms of people and effort, meaning that by removing the manual component, people are able to refocus their efforts on other enterprise tasks. [ citation needed ] Infrastructure automation enables speed through faster execution when configuring infrastructure, and aims at providing visibility to help other teams across the enterprise work quickly and more efficiently. Automation removes the risk associated with human error, like manual misconfiguration; removing this can decrease downtime and increase reliability. These outcomes and attributes help the enterprise move towards implementing a culture of DevOps, the combined working of development and operations. [ 5 ] There are generally two approaches to IaC: declarative (functional) versus imperative (procedural). The difference between the declarative and the imperative approach is essentially 'what' versus 'how'. The declarative approach focuses on what the eventual target configuration should be; the imperative focuses on how the infrastructure is to be changed to meet this. [ 6 ] The declarative approach defines the desired state, and the system executes what needs to happen to achieve that state; the imperative approach defines specific commands that need to be executed in the appropriate order to end with the desired conclusion [ 7 ] (a minimal sketch of this contrast follows below). IaC allows servers and their configurations to be managed using code. There are two ways to send these configurations to servers: the 'push' and 'pull' methods. In the 'push' method, the system controlling the configuration directly sends instructions to the server. In the 'pull' method, the server retrieves its own instructions from the controlling system. [ 8 ] There are many tools that fulfill infrastructure automation capabilities and use IaC.
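The following Python sketch is purely illustrative (the `cloud` object and its resource names are hypothetical, not from any real tool): the imperative version lists the commands to run, while the declarative version states a desired end state and lets a reconciler work out the steps:

```python
# Imperative: say *how* -- an explicit sequence of provisioning commands.
# `cloud` stands for a hypothetical provider API, assumed for illustration.
def provision_imperative(cloud):
    cloud.create_network("app-net")
    cloud.create_server("web-1", network="app-net")
    cloud.open_port("web-1", 443)

# Declarative: say *what* -- a desired state; a reconciler computes the changes.
DESIRED_STATE = {
    "networks": ["app-net"],
    "servers": {"web-1": {"network": "app-net", "open_ports": [443]}},
}

def reconcile(cloud, desired):
    """Compare the actual state with the desired state and apply the difference."""
    actual = cloud.current_state()
    for net in desired["networks"]:
        if net not in actual["networks"]:
            cloud.create_network(net)
    for name, spec in desired["servers"].items():
        if name not in actual["servers"]:
            cloud.create_server(name, network=spec["network"])
        for port in spec["open_ports"]:
            if port not in actual["servers"].get(name, {}).get("open_ports", []):
                cloud.open_port(name, port)
```

Note the design consequence: the declarative form is idempotent, so running the reconciler twice changes nothing the second time, whereas re-running the imperative script would attempt to re-create existing resources.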
Broadly speaking, any framework or tool that performs changes or configures infrastructure declaratively or imperatively based on a programmatic approach can be considered IaC. [ 9 ] Traditionally, server (lifecycle) automation and configuration management tools were used to accomplish IaC. Now enterprises are also using continuous configuration automation tools or stand-alone IaC frameworks, such as Microsoft's PowerShell DSC [ 10 ] or AWS CloudFormation. [ 11 ] All continuous configuration automation (CCA) tools can be thought of as an extension of traditional IaC frameworks. They leverage IaC to change, configure, and automate infrastructure, and they also provide visibility, efficiency and flexibility in how infrastructure is managed. [ 3 ] These additional attributes provide enterprise-level security and compliance. Community content is a key determinant of the quality of an open source CCA tool. As Gartner states, the value of CCA tools is "as dependent on user-community-contributed content and support as it is on the commercial maturity and performance of the automation tooling". [ 3 ] Established vendors such as Puppet and Chef have created their own communities: Chef has the Chef Community Repository and Puppet has PuppetForge. [ 12 ] Other vendors rely on adjacent communities and leverage other IaC frameworks such as PowerShell DSC. [ 10 ] New vendors are emerging that are not content-driven but model-driven, with the intelligence in the product to deliver content. These visual, object-oriented systems work well for developers, but they are especially useful to production-oriented DevOps and operations constituents that value models over scripting for content. As the field continues to develop and change, community-based content will become ever more important to how IaC tools are used, unless they are model-driven and object-oriented. Notable CCA tools include the offerings of established vendors such as Chef and Puppet; other tools include AWS CloudFormation, cdist, StackStorm, Juju, and Step CI. IaC can be a key attribute of enabling best practices in DevOps. Developers become more involved in defining configuration, and Ops teams get involved earlier in the development process. [ 13 ] Tools that utilize IaC bring visibility to the state and configuration of servers and ultimately provide that visibility to users within the enterprise, aiming to bring teams together to maximize their efforts. [ 14 ] Automation in general aims to remove the confusing and error-prone aspects of manual processes, making work more efficient and productive and allowing better software and applications to be created with flexibility, less downtime, and lower overall cost for the company. IaC is intended to reduce the complexity and inefficiency of manual configuration. Automation and collaboration are considered central points in DevOps; infrastructure automation tools are often included as components of a DevOps toolchain. [ 15 ] The 2020 Cloud Threat Report released by Unit 42 (the threat intelligence unit of cybersecurity provider Palo Alto Networks) identified around 200,000 potential vulnerabilities in infrastructure-as-code templates. [ 16 ]
https://en.wikipedia.org/wiki/Infrastructure_as_code
Ingar Emil Roggen (26 April 1934 – 19 October 2023) was a Norwegian sociologist who has been described as one of the European social informatics pioneers. [ 1 ] His work focused on the social aspects of virtual space, the social analysis of the Internet, the interaction between humans and computers, and the implications of information technology use for communication in all fields of society. [ 2 ] In 1996 he introduced the sociology of the World Wide Web (web sociology) as a web science, based on the principles of social informatics. Roggen earned the PhD (mag.art.) degree in sociology in 1970 with a dissertation on social time, in which he developed a theory of relative social time with tense logic as its method. [ 3 ] In 1974 he was appointed assistant professor in sociology at the University of Oslo, where he was employed until his retirement in 2004. Through a series of studies of health risk factors in Norwegian industries and social services in 1970–1990 he developed a system for the detection, prediction and prevention of supermortality in workplaces. From a global historical viewpoint, Stein Bråten must be considered the founder of the science of social informatics, which he originally called "socioinformatics" (Norwegian: sosioinformatikk). He defined it as the common field of psychology, sociology and informatics (Stein Bråten: Dialogens vilkår i datasamfunnet. Universitetsforlaget, 1983). At the Department of Sociology, University of Oslo (UiO), Stein Bråten inspired a group of pioneers in this field, including Eivind Jahren, Arild Jansen and Ingar Roggen. When web sociology was introduced in 1996, the first World Wide Web virtual laboratory (known as Weblab at UiO) was established at the Department of Sociology, directed by Ingar Roggen in collaboration with Knut A. G. Hauge and Trond Enger. The Department offered degrees up to the master (Norwegian: Cand.Polit.) level in these fields until 2000. While Kristen Nygaard's Simula was the programming language shared by the early socioinformaticians, the web sociologists gathered around Bill Atkinson's HyperCard with the programming language HyperTalk, a forerunner of the WWW languages. The late Rob Kling, who had been given a personal introduction by Stein Bråten to the ongoing research on social informatics at the University of Oslo in 1986, and who had also noted the introduction of web sociology in January 1996, established the American branch of social informatics at Indiana University later that year. Roggen died on 19 October 2023, at the age of 89. [ 4 ]
https://en.wikipedia.org/wiki/Ingar_Roggen
Ingenieurs zonder Grenzen (Dutch for Engineers Without Borders) is a name used by two Belgian organizations, both of which are provisional members [ 1 ] of the Engineers Without Borders International network.
https://en.wikipedia.org/wiki/Ingenieurs_zonder_Grenzen
Ingenu, formerly known as On-Ramp Wireless, is a provider of wireless networks. The company focuses on machine-to-machine (M2M) communication by enabling devices to become Internet of Things (IoT) devices. Ingenu was founded in 2008 as On-Ramp Wireless; by the end of 2014, it was valued at $72 million, according to data from PitchBook. [ 1 ] On September 1, 2010, the World Economic Forum announced the company as a Technology Pioneer for 2011. [ 2 ] On April 4, 2011, Bloomberg announced the company as a 2011 New Energy Pioneer. [ 3 ] The company was renamed to Ingenu in September 2015. [ 4 ] Initially, the company focused on utilities, but in 2012 it expanded to the gas and oil industries. [ 5 ] The Ingenu brand launch in September 2015 coincided with the announcement of a network dedicated to machine connectivity. [ 6 ] Ingenu's hardware, which operates in the licence-free 2.4 GHz ISM band, has been tested at over a 30-mile range while maintaining low-power operation, and is optimized for robustness, range and capacity. [ 7 ] [ 8 ] As of September 2015, the company had operations in 20 countries. [ 9 ] Ingenu uses the name random phase multiple access (RPMA) for the patented technology used in its network. [ 9 ] [ 10 ] RPMA is used in GE's AMI metering. [ 11 ] RPMA is also used for oil and gas field automation, or the digital oilfield. [ 12 ] The technology includes network appliances, the microNode radio module, a reference Application Communication Module for a development platform, and a general I/O device. [ 13 ] In September 2015, Ingenu announced a public network exclusively for machines, supported by RPMA technology. [ 9 ] [ 10 ] [ 14 ] The network began in the US and covered 55,000 square miles as of the launch. [ 9 ] The company planned to cover Phoenix and Dallas by the end of 2015, with coverage across the United States complete by the end of 2017. [ 10 ] [ 15 ] The Machine Network also has coverage in Europe, starting with nationwide coverage of Italy through a partnership with Meterlinq. [ 16 ] Ingenu also operates private, regional, machine-to-machine networks. [ 4 ] [ 9 ] [ 14 ] One of these networks is owned by San Diego Gas & Electric. [ 9 ] At the announcement of the Machine Network, Ingenu indicated it would continue to support and pursue private networks. [ 4 ]
https://en.wikipedia.org/wiki/Ingenu
Inger Arvidsdotter Thorén (née Bildt; 30 September 1913 – 18 November 1985) was a Swedish chemical engineer and food chemist. [ 1 ] In 1938, she became the first woman appointed as an assistant at the KTH Royal Institute of Technology. Inger Bildt was born on 30 September 1913 in Sankt Matteus parish in Stockholm, the daughter of Signe Borg and bureau chief Arvid Bildt. She graduated from the KTH Royal Institute of Technology as Master of Science in Engineering in chemical engineering in 1938, and was appointed as the first female assistant at the university after graduation. [ 2 ] After her marriage in 1941, she was known as Inger Thorén. [ 3 ] Thorén was an operations engineer at the pharmaceutical company Kabi in Hornsberg in Stockholm between 1940 and 1949, and a cereal chemist at the bakery laboratory of the Kooperativa Förbundet cooperative association between 1953 and 1957. [ 4 ] She worked at Kvarnen Tre Kronor's development laboratory from 1957 until 1960, and then as product manager in Kungsörnen's development department from 1960 to 1970. [ 4 ] Thorén was active in the Swedish Association for Nutrition and held teaching posts at various university departments in the field. [ 4 ] Owing to her development work as a food chemist, [ 3 ] Thorén was interviewed regularly from the 1950s into the 1980s in both national and local newspapers. Pieces covered subjects including school bread, [ 5 ] ideal flour, [ 6 ] corn flakes, [ 7 ] protein additives in flour, [ 8 ] sell-by and use-by date labelling when it was introduced, and preservatives. [ 9 ] She also appeared on the TV programme Konsumentens brevlåda in 1964. [ 10 ] She wrote articles for the magazine Livsmedelsteknik and contributed to teaching materials such as Bagerikemi (1956), Bättre brödsäd (1956) and Att baka bröd (1983). [ 3 ] After retirement, Thorén worked as a consultant and carried out studies for Svensk spannmålshandel (the Swedish Grain Trade) and the Brödinstitutet (Bread Institute) in Stockholm. She was a member of the Bildt Family Association and the Bildt Homestead Museum in Morlanda on Orust. Inger Thorén married the property manager Ernst Thorén (1913–1985) in 1941 and had three children with him, including the veterinarian Kerstin Thorén Tolling. Inger Thorén died on 18 November 1985 in Huddinge parish in Stockholm County. [ 3 ]
https://en.wikipedia.org/wiki/Inger_Thorén
In mathematics, Ingleton's inequality is an inequality that is satisfied by the rank function of any representable matroid. In this sense it is a necessary condition for representability of a matroid over a finite field. Let $M$ be a matroid and let $\rho$ be its rank function; Ingleton's inequality states that for any subsets $X_1$, $X_2$, $X_3$ and $X_4$ in the support of $M$,

$$\rho(X_1) + \rho(X_2) + \rho(X_1 \cup X_2 \cup X_3) + \rho(X_1 \cup X_2 \cup X_4) + \rho(X_3 \cup X_4) \le \rho(X_1 \cup X_2) + \rho(X_1 \cup X_3) + \rho(X_1 \cup X_4) + \rho(X_2 \cup X_3) + \rho(X_2 \cup X_4).$$

Aubrey William Ingleton, an English mathematician, wrote an important paper in 1969 [ 1 ] in which he surveyed the representability problem in matroids. Although the article is mainly expository, in this paper Ingleton stated and proved Ingleton's inequality, which has found interesting applications in information theory, matroid theory, and network coding. [ 2 ] There are interesting connections between matroids, the entropy region and group theory. Some of those connections are revealed by Ingleton's inequality. Perhaps the most interesting application of Ingleton's inequality concerns the computation of network coding capacities: linear coding solutions are constrained by the inequality, with the important consequence that linear network coding cannot, in general, achieve the full coding capacity. For definitions see, e.g., [ 6 ]

Theorem (Ingleton's inequality): [ 7 ] Let $M$ be a representable matroid with rank function $\rho$ and let $X_1$, $X_2$, $X_3$ and $X_4$ be subsets of the support set of $M$, denoted by the symbol $E(M)$. Then the inequality displayed above holds.

To prove the inequality we have to show the following result:

Proposition: Let $V_1$, $V_2$, $V_3$ and $V_4$ be subspaces of a vector space $V$; then

$$\dim(V_1) + \dim(V_2) + \dim(V_1 + V_2 + V_3) + \dim(V_1 + V_2 + V_4) + \dim(V_3 + V_4) \le \dim(V_1 + V_2) + \dim(V_1 + V_3) + \dim(V_1 + V_4) + \dim(V_2 + V_3) + \dim(V_2 + V_4),$$

where $V_i + V_j$ denotes the sum of the two subspaces.

Proof (proposition): We will frequently use the standard vector space identity $\dim(U) + \dim(W) = \dim(U + W) + \dim(U \cap W)$.

1. It is clear that $(V_1 \cap V_2) + V_3 \subseteq (V_1 + V_3) \cap (V_2 + V_3)$; applying the dimension identity to both sides then gives

$$\dim(V_1 \cap V_2) + \dim(V_3) - \dim(V_1 \cap V_2 \cap V_3) \le \dim(V_1 + V_3) + \dim(V_2 + V_3) - \dim(V_1 + V_2 + V_3).$$

2. It is clear that $(V_1 \cap V_2 \cap V_3) + (V_1 \cap V_2 \cap V_4) \subseteq V_1 \cap V_2$; then

$$\dim(V_1 \cap V_2 \cap V_3) + \dim(V_1 \cap V_2 \cap V_4) - \dim(V_1 \cap V_2 \cap V_3 \cap V_4) \le \dim(V_1 \cap V_2).$$

3. Applying (1) once with $V_3$ and once with $V_4$, adding the two inequalities, and bounding the two triple-intersection terms using (2), we have:

$$\dim(V_1 \cap V_2) + \dim(V_3) + \dim(V_4) - \dim(V_1 \cap V_2 \cap V_3 \cap V_4) \le \dim(V_1 + V_3) + \dim(V_2 + V_3) + \dim(V_1 + V_4) + \dim(V_2 + V_4) - \dim(V_1 + V_2 + V_3) - \dim(V_1 + V_2 + V_4).$$

4. From (3), rewriting $\dim(V_1 \cap V_2)$ as $\dim(V_1) + \dim(V_2) - \dim(V_1 + V_2)$ and $\dim(V_3) + \dim(V_4)$ as $\dim(V_3 + V_4) + \dim(V_3 \cap V_4)$, and moving the sum terms to the left-hand side, we get

$$\dim(V_1) + \dim(V_2) + \dim(V_1 + V_2 + V_3) + \dim(V_1 + V_2 + V_4) + \dim(V_3 + V_4) + \dim(V_3 \cap V_4) - \dim(V_1 \cap V_2 \cap V_3 \cap V_4) \le \dim(V_1 + V_2) + \dim(V_1 + V_3) + \dim(V_1 + V_4) + \dim(V_2 + V_3) + \dim(V_2 + V_4).$$

Since the inequality $\dim(V_1 \cap V_2 \cap V_3 \cap V_4) \le \dim(V_3 \cap V_4)$ holds, the two remaining intersection terms contribute a non-negative amount to the left-hand side, and we have finished the proof. ♣

Proof (Ingleton's inequality): Suppose that $M$ is a representable matroid and let $A = [v_1\ v_2\ \dots\ v_n]$ be a matrix such that $M = M(A)$. For $X, Y \subseteq E(M) = \{1, 2, \dots, n\}$, define $U = \langle \{ v_i : i \in X \} \rangle$, the span of the vectors indexed by $X$, and define $W = \langle \{ v_j : j \in Y \} \rangle$ accordingly. If $U = \langle \{ u_1, u_2, \dots, u_m \} \rangle$ and $W = \langle \{ w_1, w_2, \dots, w_r \} \rangle$, then clearly $\langle \{ u_1, \dots, u_m, w_1, \dots, w_r \} \rangle = U + W$. Hence $\rho(X \cup Y) = \dim \langle \{ v_i : i \in X \} \cup \{ v_j : j \in Y \} \rangle = \dim(U + W)$. Finally, if we define $V_i = \langle \{ v_r : r \in X_i \} \rangle$ for $i = 1, 2, 3, 4$, then by this identity and the proposition above, we get the result. ♣
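As a quick numerical illustration (not from the original article), the following Python sketch verifies the inequality for a small matroid represented by the columns of a matrix, with the rank function computed via numpy:

```python
import numpy as np
from itertools import combinations
from functools import lru_cache

# Columns of A represent the ground set of a representable matroid M(A).
A = np.array([[1, 0, 0, 1],
              [0, 1, 0, 1],
              [0, 0, 1, 1]])

@lru_cache(maxsize=None)
def rho(subset):
    """Matroid rank of a set of column indices = dimension of their span."""
    if not subset:
        return 0
    return int(np.linalg.matrix_rank(A[:, list(subset)]))

n = A.shape[1]
subsets = [frozenset(s) for k in range(n + 1) for s in combinations(range(n), k)]

# Check Ingleton's inequality for every choice of X1, X2, X3, X4.
for X1 in subsets:
    for X2 in subsets:
        for X3 in subsets:
            for X4 in subsets:
                lhs = (rho(X1) + rho(X2) + rho(X1 | X2 | X3)
                       + rho(X1 | X2 | X4) + rho(X3 | X4))
                rhs = (rho(X1 | X2) + rho(X1 | X3) + rho(X1 | X4)
                       + rho(X2 | X3) + rho(X2 | X4))
                assert lhs <= rhs, (X1, X2, X3, X4)
print("Ingleton's inequality verified for all subset choices of this matroid.")
```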
https://en.wikipedia.org/wiki/Ingleton's_inequality
The Inglis–Teller equation represents an approximate relationship between the plasma density and the principal quantum number of the highest bound state of an atom. The equation was derived by David R. Inglis and Edward Teller in 1939. [ 1 ] In a plasma, atomic levels are broadened and shifted due to the Stark effect, caused by electric microfields formed by the charged plasma particles (ions and electrons). The Stark broadening increases with the principal quantum number $n$, while the energy separation between the nearby levels $n$ and $n + 1$ decreases. Therefore, above a certain $n$ all levels become merged. Assuming a neutral atomic radiator in a plasma consisting of singly charged ions (and neglecting the electrons), the equation reads

$$N a_0^3\, n_{\max}^{15/2} \approx 0.027,$$

where $N$ is the ion particle density, $a_0$ is the Bohr radius, and $n_{\max}$ is the principal quantum number of the highest distinguishable level. The equation readily generalizes to cases of multiply charged plasma ions and/or a charged radiator. Allowance for the effect of electrons is also possible, as was discussed already in the original study. [ 1 ] Spectroscopically, this phenomenon appears as discrete spectral lines merging into a continuous spectrum. Therefore, by using the (appropriately generalized) Inglis–Teller equation it is possible to infer the density of laboratory and astrophysical plasmas. [ 2 ]
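As a worked example of the relation in the form given above (with the density in cm⁻³ and the Bohr radius in cm), the following Python snippet estimates the highest resolvable principal quantum number for a given ion density:

```python
A0_CM = 5.29e-9  # Bohr radius in cm

def n_max(ion_density_cm3):
    """Solve N * a0^3 * n^(15/2) ~ 0.027 for the highest resolvable n."""
    return (0.027 / (ion_density_cm3 * A0_CM**3)) ** (2.0 / 15.0)

# For a typical laboratory plasma ion density of 1e16 cm^-3:
print(round(n_max(1e16)))  # roughly n_max ~ 9
```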
https://en.wikipedia.org/wiki/Inglis–Teller_equation
Ingmar Malte Hoerr (born 1968 in Neckarsulm) is a German biologist. He pioneered vaccinology research concerning the use of RNA and is a founder of the German biotechnology company CureVac. He created the initial technology used in RNA vaccines and has reportedly been nominated for a Nobel Prize. [ 1 ] He is currently an Ambassador for the European Innovation Council for the years 2021–2027. [ 2 ] Hoerr graduated from the Johannes-Kepler-Realschule in Wendlingen am Neckar in 1985 [ 3 ] and then attended an agricultural high school in Nürtingen, where he obtained his Abitur in 1988. [ 4 ] From 1988 to 1990, he performed civilian service at the DRK Nürtingen as a paramedic. From 1990 to 1996 he studied biology at the University of Tübingen. During his studies, he spent a year at Madurai Kamaraj University, India. [ 5 ] Hoerr did experimental research on the stabilization of messenger ribonucleic acid (mRNA). In 1999, he received his PhD under Günther Jung at the Institute of Organic Chemistry, in cooperation with Hans-Georg Rammensee of the Institute of Immunology and Cell Biology (both at the University of Tübingen), on the topic of RNA vaccines for the induction of specific cytotoxic T lymphocytes (CTL) and antibodies. In 2000, Hoerr published his doctoral thesis, entitled "RNA vaccine for the induction of specific cytotoxic T-lymphocytes (CTL) and antibodies". In his thesis, Hoerr discovered that ribonucleic acid can be stabilized, a discovery which made it practical to use ribonucleic acid for the development of vaccines and immunotherapies. [ 6 ] The dissertation investigated the development of the RNA vaccines that would come to play a central role in the fight against COVID-19 from 2020 onwards. [ 7 ] At the time, he vaccinated laboratory mice with an RNA construct and showed that such a vaccine does not immediately decay, as previously thought; rather, stabilized RNA stimulates the immune system to produce antibodies and activate T cells that destroy pathogens. [ 8 ] As early as 9 September 1999, Hoerr applied for a first patent for the new technology. By 2008 and 2009, the first clinical trials for the use of mRNA as a cancer vaccine were already underway. [ 9 ] [ 10 ] Bill Gates, whose foundation invested in CureVac, rated Hoerr's pioneering work as groundbreaking in an interview with the German newspaper Handelsblatt: "The first mRNA vaccines, developed by Pfizer-Biontech and Moderna in 2020, are the product of a multitude of ideas and discoveries by German scientist Ingmar Hoerr, who spent twenty years experimenting with messenger RNA." [ 11 ] As the success of the mRNA vaccines grew, so did media interest in Hoerr. Der Spiegel ranked him among the pioneers of mRNA vaccines, [ 12 ] as did Die Zeit [ 13 ] and the Süddeutsche Zeitung, [ 14 ] and these outlets conducted interviews with him. There were appearances on popular German talk shows such as Lanz [ 15 ] and Nachtcafe. [ 16 ] International interest ranged from the French L'Express [ 17 ] to the New York Times. [ 18 ] In May 2021, Ingmar Hoerr and Florian von der Mülbe, together with their partners Sara Hörr and Kiriakoula Kapousouzi, founded the Morpho Foundation, a foundation for the promotion of culture and health projects. [ 19 ] In 2000, Hoerr, together with colleagues from the lab groups of Günther Jung and Hans-Georg Rammensee, founded the biopharmaceutical company CureVac. [ 6 ] In 2018, Hoerr gave up his office as chairman of the management board and moved, as chairman, to the supervisory board. Daniel L.
Menichella was hired in that position in order to develop R&D and plants in the U.S., but the board reversed course in 2020 and dismissed Menichella. [ 20 ] On 11 March 2020, Hoerr took over the position of CEO at CureVac again, [ 21 ] replacing Menichella. [ 22 ] Later, Hoerr was replaced by Jean Stephenne as chairman of the supervisory board. In August 2020, Franz-Werner Haas replaced Hoerr as chief executive officer, after Hoerr had suffered a severe health issue that March. [ 23 ]
https://en.wikipedia.org/wiki/Ingmar_Hoerr
Ingression is one of the many changes in the location or relative position of cells that take place during the gastrulation stage of embryonic development. It produces an animal's mesenchymal cells at the onset of gastrulation. During the epithelial–mesenchymal transition (EMT), the primary mesenchyme cells (PMCs) detach from the epithelium and become internalized mesenchyme cells that can migrate freely. [ 1 ] While the mechanisms of ingression are not fully understood, studies using the sea urchin as a model organism have begun to shed light on this developmental process, and will be the focus here. There are three main changes that must occur within a cell to enable the process of ingression. The ingressing PMCs must first alter their affinity for the neighboring epithelial cells that will remain in the vegetal pole (vertebrate PMCs ingress from the primitive streak). During this time, these cells must lose their affinity for the hyaline layer to which their apical surface is attached. The ingressing cells will then apically constrict and alter their cellular architecture through a dramatic reorganization of their cytoskeleton. Lastly, these cells will modify their mode of motility and presumably gain affinity for the basal lamina which composes the lining of the blastocoel, the future migration substrate of the PMCs. [ 2 ] [ 3 ] Changes in the adhesion properties of these cells are the best characterized and understood mechanism of ingression. [ 3 ] In sea urchins, epithelial cells adhere to one another, as well as to the hyaline layer, through classic cadherins and adherens junctions. Ingression is a very dynamic process, however, and the first sign of an ingressing cell is seen when a future PMC loses its adhesion to hyaline and to cadherin, and increases its adhesion to a basal laminal substrate. These processes occur rapidly, over approximately 30 minutes. It is not understood how the PMCs penetrate the basal lamina. The basal lamina is a loose matrix, so it is possible that the ingressing cells squeeze through the matrix; it is also hypothesized that the PMCs use a protease. [ 1 ] EMT is determined by a dynamic gene regulatory network (GRN). snail and twist are two key transcription factors that make up the GRN. Within an hour of ingression, numerous transcription factors are activated. It is known that beta-catenin (β-catenin) plays a key role in EMT: when β-catenin function is blocked, no EMT results, and if β-catenin is over-expressed, too many cells undergo EMT. The vascular endothelial growth factor receptor is also necessary for the PMCs to function as mesenchymal cells. [ 1 ] Lastly, it is thought that the ingression of PMCs is further facilitated simply through the simultaneous ingression of neighboring cells. [ 2 ] Within birds and mammals, epiblast cells converge at the midline and ingress at the primitive streak. Ingression of these cells results in the formation of the mesoderm. [ 5 ] The use of ingression to internalize presumptive mesoderm is considered a major evolutionary change in mesoderm morphogenesis within chordates. Within chordate embryos, there is an evolutionary trend exhibited in the mechanisms used to internalize presumptive mesoderm: basal chordates rely predominantly on invagination, anamniote vertebrates and reptiles on a varying combination of involution and ingression, and birds and mammals primarily on ingression. [ 6 ] Besides ingression, two other types of internalizing cell movements may occur during gastrulation: invagination and involution. [ 7 ]
https://en.wikipedia.org/wiki/Ingression_(biology)
An ingression coast or depressed coast is a generally level coastline that is shaped by the penetration of the sea as a result of crustal movements or a rise in the sea level . Such coasts are characterised by a subaerially formed relief that has previously experienced little deformation by littoral ( tidal ) processes, because the sea level, which had fallen by more than 100 metres during the last glacial period , did not reach its current level until about 6,000 years ago. Depending on the geomorphological shaping of the flooded landform – e.g. glacially or fluvially formed relief – various types of ingression coast emerge, such as rias , skerry and fjard coasts as well as förde and bodden coasts. [ 1 ]
https://en.wikipedia.org/wiki/Ingression_coast
In mathematics, a set $A$ is inhabited if there exists an element $a \in A$. In classical mathematics, the property of being inhabited is equivalent to being non-empty. However, this equivalence is not valid in constructive or intuitionistic logic, and so this separate terminology is mostly used in the set theory of constructive mathematics.

In the formal language of first-order logic, a set $A$ has the property of being inhabited if $\exists z.(z \in A)$. A set $A$ has the property of being empty if $\forall z.(z \notin A)$, or equivalently $\neg\exists z.(z \in A)$. Here $z \notin A$ stands for the negation $\neg(z \in A)$. A set $A$ is non-empty if it is not empty, that is, if $\neg\forall z.(z \notin A)$, or equivalently $\neg\neg\exists z.(z \in A)$.

Modus ponens implies $P \to ((P \to Q) \to Q)$, and taking any false proposition for $Q$ establishes that $P \to \neg\neg P$ is always valid. Hence, any inhabited set is provably also non-empty. In constructive mathematics, the double-negation elimination principle is not automatically valid. In particular, an existence statement is generally stronger than its double-negated form. The latter merely expresses that the existence cannot be ruled out, in the strong sense that it cannot consistently be negated. In a constructive reading, in order for $\exists z.\phi(z)$ to hold for some formula $\phi$, it is necessary for a specific value of $z$ satisfying $\phi$ to be constructed or known. Likewise, the negation of a universally quantified statement is in general weaker than an existential quantification of a negated statement. In turn, a set may be proven to be non-empty without one being able to prove it is inhabited.

Sets such as $\{2, 3, 4, 7\}$ or $\mathbb{Q}$ are inhabited, as witnessed e.g. by $3 \in \{2, 3, 4, 7\}$. The set $\{\}$ is empty and thus not inhabited. Naturally, the example section thus focuses on non-empty sets that are not provably inhabited. It is easy to give such examples by using the axiom of separation, as with it logical statements can always be translated to set-theoretical ones. For example, with a subset $S \subset \{0\}$ defined as $S := \{ n \in \{0\} \mid P \}$, the proposition $P$ may always equivalently be stated as $0 \in S$. The double-negated existence claim of an entity with a certain property can be expressed by stating that the set of entities with that property is non-empty. Define a subset $A \subset \{0, 1\}$ via

$$A := \{ n \in \{0, 1\} \mid (n = 0 \land P) \lor (n = 1 \land \neg P) \}.$$

Clearly $P \leftrightarrow 0 \in A$ and $(\neg P) \leftrightarrow 1 \in A$, and from the principle of non-contradiction one concludes $\neg(0 \in A \land 1 \in A)$.
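Before continuing with this example, note that the provable direction from the modus ponens argument above is short enough to check in a proof assistant. A minimal Lean 4 sketch (modelling the set as a predicate on a type; the names are illustrative):

```lean
-- Constructively, an inhabited set is non-empty: from ∃ we get ¬¬∃.
theorem inhabited_to_nonempty {α : Type} (A : α → Prop)
    (h : ∃ x, A x) : ¬ ¬ (∃ x, A x) :=
  fun hn => hn h

-- The converse direction is not provable without double-negation elimination.
```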
Further, $(P \lor \neg P) \leftrightarrow (0 \in A \lor 1 \in A)$, and in turn $(P \lor \neg P) \to \exists!(n \in \{0, 1\}).\, n \in A$. Already minimal logic proves $\neg\neg(P \lor \neg P)$, the double-negation of any excluded middle statement, which here is equivalent to $\neg(0 \notin A \land 1 \notin A)$. So by performing two contrapositions on the previous implication, one establishes $\neg\neg\exists!(n \in \{0, 1\}).\, n \in A$. In words: it cannot consistently be ruled out that exactly one of the numbers $0$ and $1$ inhabits $A$. In particular, the latter can be weakened to $\neg\neg\exists n.\, n \in A$, saying that $A$ is proven non-empty.

As example statements for $P$, consider famously theory-independent statements such as the continuum hypothesis, the consistency of the sound theory at hand, or, informally, an unknowable claim about the past or future. By design, these are chosen to be unprovable. A variant of this is to consider mathematical propositions that are merely not yet established; see also Brouwerian counterexamples. Knowledge of the validity of either $0 \in A$ or $1 \in A$ is equivalent to knowledge about $P$ as above, and cannot be obtained. Given that neither $P$ nor $\neg P$ can be proven in the theory, the theory will also not prove $A$ to be inhabited by some particular number. Further, a constructive framework with the disjunction property then cannot prove $P \lor \neg P$ either. There is no evidence for $0 \in A$, nor for $1 \in A$, and the constructive unprovability of their disjunction reflects this. Nonetheless, since ruling out excluded middle is provenly always inconsistent, it is also established that $A$ is not empty. Classical logic adopts $P \lor \neg P$ axiomatically, spoiling a constructive reading.

There are various easily characterized sets whose existence is not provable in $\mathsf{ZF}$, but which are implied to exist by the full axiom of choice $\mathrm{AC}$. As such, that axiom is itself independent of $\mathsf{ZF}$. It in fact contradicts other potential axioms for a set theory. Further, it also contradicts constructive principles, in a set theory context. A theory that does not permit excluded middle also does not validate the function existence principle $\mathrm{AC}$. In $\mathsf{ZF}$, $\mathrm{AC}$ is equivalent to the statement that every vector space has a basis. So, more concretely, consider the question of the existence of a Hamel basis of the real numbers over the rational numbers. This object is elusive in the sense that there are different $\mathsf{ZF}$ models that variously validate or negate its existence. So it is also consistent to just postulate that existence cannot be ruled out here, in the sense that it cannot consistently be negated. Again, that postulate may be expressed as saying that the set of such Hamel bases is non-empty.
Over a constructive theory, such a postulate is weaker than the plain existence postulate, but (by design) is still strong enough to negate all propositions that would imply the non-existence of a Hamel basis. Because inhabited sets are the same as non-empty sets in classical logic, it is not possible to produce a model in the classical sense that contains a non-empty set $X$ but does not satisfy "$X$ is inhabited". However, it is possible to construct a Kripke model $M$ that differentiates between the two notions. Since an implication is true in every Kripke model if and only if it is provable in intuitionistic logic, this indeed establishes that one cannot intuitionistically prove that "$X$ is non-empty" implies "$X$ is inhabited". This article incorporates material from Inhabited set on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
https://en.wikipedia.org/wiki/Inhabited_set
Inhalants are a broad range of household and industrial chemicals whose volatile vapors or pressurized gases can be concentrated and breathed in via the nose or mouth to produce intoxication , in a manner not intended by the manufacturer. They are inhaled at room temperature through volatilization (in the case of gasoline or acetone ) or from a pressurized container (e.g., nitrous oxide or butane ), and do not include drugs that are sniffed after burning or heating. [ a ] While a few inhalants are prescribed by medical professionals and used for medical purposes , as in the case of inhaled anesthetics and nitrous oxide (an anxiolytic and pain relief agent prescribed by dentists), this article focuses on inhalant use of household and industrial propellants, glues, fuels, and other products in a manner not intended by the manufacturer, to produce intoxication or other psychoactive effects . These products are used as recreational drugs for their intoxicating effect. According to a 1995 report by the National Institute on Drug Abuse , the most serious inhalant use occurs among homeless children and teenagers who "live on the streets completely without family ties." [ 3 ] Inhalants are the only substance used more by younger teenagers than by older teenagers. [ 4 ] Inhalant users inhale vapor or aerosol propellant gases using plastic bags held over the mouth or by breathing from a solvent-soaked rag or an open container. The practices are known colloquially as "sniffing", "huffing" or "bagging". The effects of inhalants range from an alcohol -like intoxication and intense euphoria to vivid hallucinations , depending on the substance and the dose. Some inhalant users are injured due to the harmful effects of the solvents or gases or due to other chemicals used in the products that they are inhaling. As with any recreational drug, users can be injured due to dangerous behavior while they are intoxicated, such as driving under the influence . In some cases, users have died from hypoxia (lack of oxygen), pneumonia , heart failure , cardiac arrest , [ 5 ] or aspiration of vomit. Brain damage is typically seen with chronic long-term use of solvents as opposed to short-term exposure. [ 6 ] While legal when used as intended, in England, Scotland, and Wales it is illegal to sell inhalants to persons likely to use them as an intoxicant. [ 7 ] As of 2017, thirty-seven US states impose criminal penalties on some combination of sale, possession or recreational use of various inhalants. In 15 of these states, such laws apply only to persons under the age of 18. [ 8 ] A small number of recreational inhalant drugs are pharmaceutical products that are used illicitly. Several medical anesthetics are used as recreational drugs, including diethyl ether (a drug that is no longer used medically, due to its high flammability and the development of safer alternatives) and nitrous oxide , which has been widely used since the late 20th century by dentists as an anti-anxiety drug and mild anesthetic during dental procedures. Diethyl ether has a long history of use as a recreational drug. The effects of ether intoxication are similar to those of alcohol intoxication, but more potent. Also, due to NMDA antagonism, the user may experience all the psychedelic effects present in classical dissociatives such as ketamine in the forms of thought loops and the feeling of the mind being disconnected from one's body. 
Nitrous oxide is a dental anesthetic that is used as a recreational drug, either by users who have access to medical-grade gas canisters (e.g., dental hygienists or dentists) or by using the gas contained in whipped cream aerosol containers. Nitrous oxide inhalation can cause pain relief , depersonalization , derealization , dizziness , euphoria , and some sound distortion. [ 9 ] Ingestion of alkyl nitrites can cause methemoglobinemia , and the same risk from inhalation has not been ruled out. [ 10 ] The sale of alkyl nitrite -based poppers was banned in Canada in 2013. Although not considered a narcotic and not illegal to possess or use, they are considered a drug, and unauthorized sales can now be punished with fines and prison. [ 11 ] Since 2007, reformulated poppers containing isopropyl nitrite have been sold in Europe because only isobutyl nitrite is prohibited. In France, the sale of products containing butyl nitrite, pentyl nitrite, or isomers thereof, has been prohibited since 1990 on grounds of danger to consumers. [ 12 ] In 2007, the government extended this prohibition to all alkyl nitrites that were not authorized for sale as drugs. [ 13 ] After litigation by sex shop owners, this extension was quashed by the Council of State on the grounds that the government had failed to justify such a blanket prohibition: according to the court, the risks cited, concerning rare accidents often following abnormal usage, rather justified compulsory warnings on the packaging. [ 14 ] In the United Kingdom, poppers are widely available and frequently (legally) sold in gay clubs/bars , sex shops , drug paraphernalia head shops , over the Internet and on markets. [ 15 ] It is illegal under the Medicines Act 1968 to sell them advertised for human consumption, so to bypass this they are usually sold as odorizers. In the U.S., amyl nitrite was originally marketed as a prescription drug in 1937 and remained so until 1960, when the Food and Drug Administration removed the prescription requirement due to its safety record. The requirement was reinstated in 1969, after observation of an increase in recreational use. Other alkyl nitrites were outlawed in the U.S. by Congress through the Anti-Drug Abuse Act of 1988. The law includes an exception for commercial purposes. The term commercial purpose is defined to mean any use other than for the production of consumer products containing volatile alkyl nitrites meant for inhaling or otherwise introducing volatile alkyl nitrites into the human body for euphoric or physical effects. [ 16 ] The law came into effect in 1990. Visits to retail outlets selling these products reveal that some manufacturers have since reformulated their products to abide by the regulations, through the use of the legal cyclohexyl nitrite as the primary ingredient in their products, which are sold as video head cleaners, polish removers, or room odorants. Nitrous oxide can be categorized as a dissociative drug, as it can cause visual and auditory hallucinations. Anesthetic gases used for surgery, such as nitrous oxide or enflurane , are believed to induce anesthesia primarily by acting as NMDA receptor antagonists , open-channel blockers that bind to the inside of the calcium channels on the outer surface of the neuron , and provide high levels of NMDA receptor blockade for a short period of time.
This makes inhaled anesthetic gases different from other NMDA antagonists, such as ketamine , which bind to a regulatory site on the NMDA-sensitive calcium transporter complex and provide slightly lower levels of NMDA blockade, but for a longer and much more predictable duration. This makes a deeper level of anesthesia achievable more easily using anesthetic gases but can also make them more dangerous than other drugs used for this purpose. Nitrous oxide is thought to be particularly non-toxic, though heavy long-term use can lead to a variety of serious health problems linked to the destruction of vitamin B12 and folic acid . [ 17 ] [ 18 ] In the United States, possession of nitrous oxide is legal under federal law and is not subject to DEA purview. [ 19 ] It is, however, regulated by the Food and Drug Administration under the Food, Drug, and Cosmetic Act; prosecution is possible under its "misbranding" clauses, prohibiting the sale or distribution of nitrous oxide for the purpose of human consumption as a recreational drug . Many states have laws regulating the possession, sale, and distribution of nitrous oxide. Such laws usually ban distribution to minors or limit the amount of nitrous oxide that may be sold without a special license. [ citation needed ] For example, in the state of California, possession for recreational use is prohibited and qualifies as a misdemeanor. [ 20 ] In New Zealand, the Ministry of Health has warned that nitrous oxide is a prescription medicine, and its sale or possession without a prescription is an offense under the Medicines Act. [ 21 ] This statement would seemingly prohibit all non-medicinal uses of the chemical, though it is implied that only recreational use will be legally targeted. In India , for general anesthesia purposes, nitrous oxide is available as Nitrous Oxide IP. India's gas cylinder rules (1985) prohibit the transfer of gas from one cylinder to another for breathing purposes. Because India's Food & Drug Authority (FDA-India) rules state that transferring a drug from one container to another (refilling) is equivalent to manufacturing, anyone found doing so must possess a drug manufacturing license. In contrast to solvents, a few inhalants like amyl nitrite and diethyl ether have medical applications and are not toxic in the same sense as solvents, though they can still be dangerous when used recreationally. Ethanol (the alcohol which is normally drunk) is sometimes inhaled. The ethanol must be converted from liquid into gaseous state (vapor) or aerosol (mist), [ 22 ] in some cases using a nebulizer , a machine that agitates the liquid into an aerosol. The sale of nebulizers for inhaling ethanol was banned in some US states due to safety concerns. [ 23 ] Most inhalant drugs that are used non-medically are ingredients in household or industrial chemical products that are not intended to be concentrated and inhaled. A wide range of volatile solvents intended for household or industrial use are inhaled as recreational drugs . This includes petroleum products (gasoline and kerosene ), toluene (used in paint thinner , permanent markers , contact cement and model glue), and acetone (used in nail polish remover ). These solvents vaporize at room temperature. Until the early 1990s, the most common solvents used for the ink in permanent markers were toluene and xylene . These two substances are both harmful [ 24 ] [ 25 ] and characterized by a very strong smell. Today, the ink is usually made on the basis of alcohols (e.g.
1-propanol , 1-butanol , diacetone alcohol and cresols ). Organochlorine solvents are particularly hazardous; many of these are now restricted in developed countries due to their environmental impact. [ 26 ] Even though solvent glue is normally a legal product, there is a 1983 case where a court ruled that supplying glue to children is illegal. Khaliq v HM Advocate was a Scottish criminal case decided by the High Court of Justiciary on appeal, in which it was decided that it was an offense at common law to supply glue-sniffing materials that were otherwise legal in the knowledge that they would be used recreationally by children. Two shopkeepers in Glasgow were arrested and charged for supplying children with "glue-sniffing kits" consisting of a quantity of petroleum-based glue in a plastic bag. They argued there was nothing illegal about the items that they had supplied. On appeal, the High Court took the view that, even though glue and plastic bags might be perfectly legal, everyday items, the two shopkeepers knew perfectly well that the children were going to use the articles as inhalants and the charge on the indictment should stand. [ 27 ] When the case came to trial at Glasgow High Court the two were sentenced to three years' imprisonment. As of 2023, in England, Scotland, and Wales it is illegal to sell inhalants, including solvent glues, to persons of any age likely to use them as an intoxicant. [ 7 ] As of 2017, thirty-seven US states impose criminal penalties on some combination of sale, possession or recreational use of various inhalants. In 15 of these states, such laws apply only to persons under the age of 18. [ 8 ] Gasoline sniffing can cause lead poisoning , [ 28 ] in locations where leaded gas is not banned . Toluene can damage myelin . [ 29 ] A number of gases intended for household or industrial use are inhaled as recreational drugs. This includes chlorofluorocarbons used in aerosols and propellants (e.g., aerosol hair spray, aerosol deodorant). A gas used as a propellant in whipped cream aerosol containers, nitrous oxide, is used as a recreational drug. Pressurized canisters of propane and butane gas, both of which are intended for use as fuels, are used as inhalants. "New Jersey... prohibits selling or offering to sell minors products containing chlorofluorocarbon that is used in refrigerant." [ 30 ] Statistics on deaths caused by heavy inhalant use are difficult to determine. Such deaths may be severely under-reported because death is often attributed to a discrete event such as a stroke or a heart attack, even if the event happened because of inhalant use. [ 31 ] Inhalant use was mentioned on 144 death certificates in Texas during the period 1988–1998 and was reported in 39 deaths in Virginia between 1987 and 1996 from acute voluntary exposure to inhalants. [ 32 ] Chronic solvent-induced encephalopathy (CSE) is a condition induced by long-term exposure to organic solvents , often—but not always—in the workplace, that leads to a wide variety of persisting sensorimotor polyneuropathies and neurobehavioral deficits even after solvent exposure has ended. [ 33 ] [ 34 ] [ 35 ] Sudden sniffing death syndrome, first described by Millard Bass in 1970, [ 36 ] is commonly known as SSDS. Solvents have many potential risks in common, including pneumonia, cardiac failure or arrest, [ 5 ] and aspiration of vomit. The inhaling of some solvents can cause hearing loss, limb spasms, and damage to the central nervous system and brain.
[ 5 ] Serious but potentially reversible effects include liver and kidney damage and blood-oxygen depletion. Death from inhalants is generally caused by a very high concentration of fumes. Deliberately inhaling solvents from an attached paper or plastic bag or in a closed area greatly increases the chances of suffocation. Brain damage is typically seen with chronic long-term use as opposed to short-term exposure. [ 6 ] Parkinsonism (see: Signs and symptoms of Parkinson's disease ) has been associated with huffing. [ 37 ] Inhalant use by pregnant women may adversely affect the fetus: the baby may be smaller when it is born and may need additional health care, similar to the effects seen with alcohol ( fetal alcohol syndrome ). There is some evidence of birth defects and disabilities in babies born to women who sniffed solvents such as gasoline. In the short term, death from solvent use occurs most commonly from aspiration of vomit while unconscious or from a combination of respiratory depression and hypoxia . [ 38 ] Inhaling butane gas can cause drowsiness, unconsciousness , asphyxia , and cardiac arrhythmia. [ 39 ] Butane is the most commonly misused volatile solvent in the UK and caused 52% of solvent-related deaths in 2000. When butane is sprayed directly into the throat, the jet of fluid can cool rapidly to −20 °C by adiabatic expansion , causing prolonged laryngospasm . [ 40 ] [ 41 ] Some inhalants can also indirectly cause sudden death by cardiac arrest, in a syndrome known as "sudden sniffing death". [ 42 ] The anaesthetic gases present in the inhalants appear to sensitize the user to adrenaline and, in this state, a sudden surge of adrenaline (e.g., from a frightening hallucination or run-in with aggressors) may cause fatal cardiac arrhythmia . [ 43 ] Furthermore, the inhalation of any gas that is capable of displacing oxygen in the lungs (especially gases heavier than oxygen) carries the risk of hypoxia as a result of the very mechanism by which breathing is triggered. Since reflexive breathing is prompted by elevated carbon dioxide levels (rather than diminished blood oxygen levels), breathing a concentrated, relatively inert gas (such as computer-duster tetrafluoroethane or nitrous oxide) that removes carbon dioxide from the blood without replacing it with oxygen will produce no outward signs of suffocation even when the brain is experiencing hypoxia. Once full symptoms of hypoxia appear, it may be too late to breathe without assistance, especially if the gas is heavy enough to lodge in the lungs for extended periods. Even completely inert gases, such as argon , can have this effect if oxygen is largely excluded. Inhalant drugs are often used by children, teenagers, incarcerated or institutionalized people, and impoverished people, because these solvents and gases are ingredients in hundreds of legally available, inexpensive products, such as deodorant sprays, hair spray , contact cement and aerosol air fresheners . However, most users tend to be "... adolescents (between the ages of 12 and 17)." [ 44 ] In some countries, chronic, heavy inhalant use is concentrated in marginalized, impoverished communities. [ 45 ] [ 46 ] Young people who chronically use heavy amounts of inhalants are also more likely to be isolated from their families and community.
The article "Epidemiology of Inhalant Abuse: An International Perspective" notes that "[t]he most serious form of obsession with inhalant use probably occurs in countries other than the United States where young children live on the streets completely without family ties. These groups almost always use inhalants at very high levels (Leal et al. 1978). This isolation can make it harder to keep in touch with the sniffer and encourage him or her to stop sniffing." [ 3 ] The article also states that "... high [inhalant use] rates among barrio Hispanics almost undoubtedly are related to the poverty, lack of opportunity, and social dysfunction that occur in barrios" and states that the "... same general tendency appears for Native-American youth" because "... Indian reservations are among the most disadvantaged environments in the United States; there are high rates of unemployment, little opportunity, and high rates of alcoholism and other health problems." [ 3 ] There are a wide range of social problems associated with inhalant use, such as feelings of distress , anxiety and grief for the community; violence and damage to property; violent crime; stresses on the juvenile justice system ; and stresses on youth agencies and support services. Glue and gasoline (petrol) sniffing is also a problem in parts of Africa, especially with street children. In India and South Asia, three of the most widely used inhalants are the Dendrite brand and other forms of contact adhesives and rubber cement manufactured in Kolkata , and toluenes in paint thinners . Genkem is a brand of glue, which had become the generic name for all the glues used by glue-sniffing children in Africa before the manufacturer replaced n-hexane in its ingredients in 2000. [ 47 ] The United Nations Office on Drugs and Crime has reported that glue sniffing is at the core of "street culture" in Nairobi , Kenya , and that the majority of street children in the city are habitual solvent users. [ 48 ] Research conducted by Cottrell-Boyce for the African Journal of Drug and Alcohol Studies found that glue sniffing amongst Kenyan street children was primarily functional – dulling the senses against the hardship of life on the street – but it also provided a link to the support structure of the "street family" as a potent symbol of shared experience. [ 48 ] Similar incidents of glue sniffing among destitute youth in the Philippines have also been reported, most commonly from groups of street children and teenagers collectively known as "Rugby" boys , [ 49 ] which were named after a brand of toluene-laden contact cement. Other toluene-containing substances have also been used, most notably the Vulca Seal brand of roof sealants. Bostik Philippines, which currently owns the Rugby and Vulca Seal brands, has since responded to the issue by adding bitterants such as mustard oil to their Rugby line, [ 50 ] as well as reformulating it by replacing toluene with xylene . Several other manufacturers have also followed suit. Another very common inhalant is Erase-X, a correction fluid that contains toluene. It has become very common for school and college students to use it, because it is easily available in stationery shops in India. This fluid is also used by street and working children in Delhi. [ 51 ] In the UK, marginalized youth use a number of inhalants, such as solvents and propellants. In Russia and Eastern Europe, gasoline sniffing became common on Russian ships following attempts to limit the supply of alcohol to ship crews in the 1980s. 
The documentary Children Underground depicts the huffing of a solvent called Aurolac (a product used in chroming) by Romanian homeless children. During the interwar period , the inhalation of ether ( etheromania ) was widespread in some regions of Poland, especially in Upper Silesia . Tens of thousands of people were affected by this problem. [ 52 ] In Canada, Native children in the isolated Northern Labrador community of Davis Inlet were the focus of national concern in 1993, when many were found to be sniffing gasoline. The Canadian and provincial Newfoundland and Labrador governments intervened on a number of occasions, sending many children away for treatment. Despite being moved to the new community of Natuashish in 2002, serious inhalant use problems have continued. Similar problems were reported in Sheshatshiu in 2000 and also in Pikangikum First Nation . [ 53 ] In 2012, the issue once again made the news media in Canada. [ 54 ] In Mexico, the inhaling of a mixture of gasoline and industrial solvents, known locally as "Activo" or "Chemo", has risen in popularity among the homeless and among the street children of Mexico City in the 21st century. The mixture is poured onto a handkerchief and inhaled while held in one's fist. In the US, ether was used as a recreational drug during the 1930s Prohibition era , when alcohol was made illegal. Ether was either sniffed or drunk and, in some towns, replaced alcohol entirely. However, the risk of death from excessive sedation or overdose is greater than that with alcohol, and ether drinking is associated with damage to the stomach and gastrointestinal tract. [ 55 ] Use of glue, paint and gasoline became more common after the 1950s. Model airplane glue-sniffing as problematic behavior among youth was first reported in 1959 and increased in the 1960s. [ 56 ] Use of aerosol sprays became more common in the 1980s, as older propellants such as CFCs were phased out and replaced by more environmentally friendly compounds such as propane and butane . Most inhalant solvents and gases are not regulated under drug laws such as the United States Controlled Substances Act . However, many US states and Canadian cities have placed restrictions on the sale of some solvent-containing products to minors, particularly for products widely associated with sniffing, such as model cement . The practice of inhaling such substances is sometimes colloquially referred to as huffing, sniffing (or glue sniffing), dusting, or chroming. Australia has long faced a petrol (gasoline) sniffing problem in isolated and impoverished aboriginal communities. Although some sources argue that sniffing was introduced by United States servicemen stationed in the nation's Top End during World War II [ 57 ] or through experimentation by 1940s-era Cobourg Peninsula sawmill workers, [ 58 ] other sources claim that inhalant abuse (such as glue inhalation) emerged in Australia in the late 1960s. [ 3 ] Chronic, heavy petrol sniffing appears to occur among remote, impoverished indigenous communities, where the ready accessibility of petrol has helped to make it a common addictive substance. In Australia, petrol sniffing now occurs widely throughout remote Aboriginal communities in the Northern Territory , Western Australia , northern parts of South Australia , and Queensland . The number of people sniffing petrol goes up and down over time as young people experiment or sniff occasionally. 
"Boss", or chronic, sniffers may move in and out of communities; they are often responsible for encouraging young people to take it up. [ 59 ] A 1983 survey of 4,165 secondary students in New South Wales showed that solvents and aerosols ranked just after analgesics (e.g., codeine pills) and alcohol for drugs that were inappropriately used. This 1983 study did not find any common usage patterns or social class factors. [ 3 ] The causes of death for inhalant users in Australia included pneumonia, cardiac failure/arrest, aspiration of vomit, and burns. In 1985, there were 14 communities in Central Australia reporting young people sniffing. In July 1997, it was estimated that there were around 200 young people sniffing petrol across 10 communities in Central Australia. Approximately 40 were classified as chronic sniffers. There have been reports of young Aboriginal people sniffing petrol in the urban areas around Darwin and Alice Springs . In 2005, the Government of Australia and BP Australia began the usage of opal fuel in remote areas prone to petrol sniffing. [ 60 ] Opal is a non-sniffable fuel (which is much less likely to cause a high) and has made a difference in some indigenous communities. [ 61 ] Inhalant users inhale vapors or aerosol propellant gases using plastic bags held over the mouth or by breathing from an open container of solvents, such as gasoline or paint thinner. Nitrous oxide gases from whipped cream aerosol cans, aerosol hairspray or non-stick frying spray are sprayed into plastic bags. Some nitrous oxide users spray the gas into balloons. When inhaling non-stick cooking spray or other aerosol products, some users may filter the aerosolized particles out with a rag. Some gases, such as propane and butane gases, are inhaled directly from the canister. Once these solvents or gases are inhaled, the extensive capillary surface of the lungs rapidly absorbs the solvent or gas, and blood levels peak rapidly. The intoxication effects occur so quickly that the effects of inhalation can resemble the intensity of effects produced by intravenous injection of other psychoactive drugs. [ 62 ] Ethanol is also inhaled, either by vaporizing it by pouring it over dry ice in a narrow container and inhaling with a straw or by pouring alcohol in a corked bottle with a pipe, and then using a bicycle pump to make a spray . Alcohol can be vaporized using a simple container and open-flame heater. [ 63 ] Medical devices such as asthma nebulizers and inhalers were also reported as a means of application. The practice gained popularity in 2004, with the marketing of the device dubbed AWOL (Alcohol without liquid), a play on the military term AWOL (Absent Without Leave). [ 22 ] AWOL, created by British businessman Dominic Simler, [ 22 ] was first introduced in Asia and Europe, and then in the United States in August 2004. AWOL was used by nightclubs, at gatherings and parties, and it garnered attraction as a novelty , as people 'enjoyed passing it around in a group'. [ 64 ] AWOL uses a nebulizer , a machine that agitates the liquid into an aerosol . AWOL's official website states that "AWOL and AWOL 1 are powered by Electrical Air Compressors while AWOL 2 and AWOL 3 are powered by electrical oxygen generators ", [ 65 ] which refer to a couple of mechanisms used by the nebulizer drug delivery device for inhalation. Although the AWOL machine is marketed as having no downsides, such as the lack of calories or hangovers, Amanda Shaffer of Slate describes these claims as "dubious at best". 
[ 22 ] Although inhaled alcohol does reduce the caloric content, the savings are minimal. [ 66 ] After expressed safety and health concerns, sale or use of AWOL machines was banned in a number of American states. [ 23 ] The effects of solvent intoxication can vary widely depending on the dose and what type of solvent or gas is inhaled. A person who has inhaled a small amount of rubber cement or paint thinner vapor may be impaired in a manner resembling alcohol inebriation. A person who has inhaled a larger quantity of solvents or gases, or a stronger chemical, may experience stronger effects such as distortion in perceptions of time and space, hallucinations , and emotional disturbances. The effects of inhalant use are also modified by the combined use of inhalants and alcohol or other drugs. In the short term, many users experience headaches, nausea and vomiting, slurred speech, loss of motor coordination , and wheezing. A characteristic "glue sniffer's rash" around the nose and mouth is sometimes seen after prolonged use. An odor of paint or solvents on clothes, skin, and breath is sometimes a sign of inhalant abuse, and paint or solvent residues can sometimes emerge in sweat. [ 67 ] According to NIH, even a single session of inhalant use "can disrupt heart rhythms and lower oxygen levels", which can lead to death. "Regular abuse can result in serious harm to the brain, heart, kidneys, and liver." [ 68 ] Many inhalants are volatile organic chemicals and can catch fire or explode, especially when combined with smoking. As with many other drugs, users may also injure themselves due to loss of coordination or impaired judgment, especially if they attempt to operate machinery. All commonly abused inhalants act as asphyxiant gases , although a common myth is that their primary effects are only due to oxygen deprivation . In reality, the majority of abused inhalants still exhibit psychoactive effects, [ 69 ] although oxygen deprivation does add to the notable effects. Regardless of which inhalant is used, inhaling vapors or gases can lead to injury or death. One major risk is hypoxia (lack of oxygen), which can occur due to inhaling fumes from a plastic bag, or from using proper inhalation mask equipment (e.g., a medical mask for nitrous oxide) but not adding oxygen or room air. Another danger is freezing the throat. When a gas that was stored under high pressure is released, it cools abruptly and can cause frostbite if it is inhaled directly from the container. This can occur, for example, with inhaling nitrous oxide. When nitrous oxide is used as an automotive power adder , its cooling effect is used to make the fuel-air charge denser. In a person, this effect is potentially lethal. These risks are especially acute with heavier-than-air vapors such as butane or gasoline vapor. Deaths typically occur from complications related to excessive sedation and vomiting. Actual overdose from the drug does occur, however, and inhaled solvent use is statistically more likely to result in life-threatening respiratory depression than intravenous use of opioids such as heroin. [ citation needed ] Most deaths from solvent use could be prevented if individuals were resuscitated quickly when they stopped breathing and their airways cleared if they vomited. However, most inhalant use takes place when people inhale solvents by themselves or in groups of people who are intoxicated. Certain solvents are more hazardous than others, such as gasoline.
Use of butane , propane , nitrous oxide and other inhalants can create a risk of freezing burns from contact with the extremely cold liquid. The risk of such contact is greatly increased by the impaired judgement and motor coordination brought on by inhalant intoxication. Toxicity may also result from the pharmacological properties of the drug; excess NMDA antagonism can completely block calcium influx into neurons and provoke cell death through apoptosis , [ 73 ] although this is more likely to be a long-term result of chronic solvent use than a consequence of short-term use. Inhalant-related disorders are a group of mental health conditions associated with the misuse of volatile substances. These disorders are recognised in both the Diagnostic and Statistical Manual of Mental Disorders ( DSM-5 ) and the International Classification of Diseases ( ICD-11 ), though there are notable differences between the two classification systems. [ 74 ] The DSM-5 identifies four primary types of inhalant-related disorders: inhalant intoxication, inhalant use disorder, inhalant-induced disorders, and unspecified inhalant-related disorder. [ 74 ] The ICD-11 includes a diagnosis for inhalant withdrawal , which is not covered in the DSM-5. [ 74 ] One of the early musical references to inhalant use occurs in the 1974 Elton John song " The Bitch Is Back ", in the line "I get high in the evening sniffing pots of glue." Inhalant use, especially glue-sniffing, is widely associated with the late-1970s punk youth subculture in the UK and North America. Raymond Cochrane and Douglas Carroll claim that when glue sniffing became widespread in the late 1970s, it was "adopted by punks because public [negative] perceptions of sniffing fitted in with their self-image" as rebels against societal values. [ 75 ] While punks at first used inhalants "experimentally and as a cheap high, adult disgust and hostility [to the practice] encouraged punks to use glue sniffing as a way of shocking society." As well, using inhalants was a way of expressing their anti-corporatist DIY (do it yourself) credo; [ 75 ] by using inexpensive household products as inhalants, punks did not have to purchase industrially manufactured liquor or beer. One history of the punk subculture argues that "substance abuse was often referred to in the music and did become synonymous with the genre, glue-sniffing especially" because the youths' "faith in the future had died and that the youth just didn't care anymore" due to the "awareness of the threat of nuclear war and a pervasive sense of doom." In a BBC interview with a person who was a punk in the late 1970s, they said that "there was a real fear of imminent nuclear war —people were sniffing glue knowing that it could kill them, but they didn't care because they believed that very soon everybody would be dead anyway." A number of 1970s punk rock and 1980s hardcore punk songs refer to inhalant use. The Ramones , an influential early US punk band, referred to inhalant use in several of their songs. The song " Now I Wanna Sniff Some Glue " describes adolescent boredom, and the song "Carbona not Glue" states, "My brain is stuck from shooting glue." An influential punk fanzine about the subculture and music took its name ( Sniffin' Glue ) from the Ramones song. The 1980s punk band The Dead Milkmen wrote a song, "Life is Shit" from their album Beelzebubba , about two friends hallucinating after sniffing glue. 
Punk-band-turned-hip-hop group the Beastie Boys penned a song "Hold it Now – Hit It", which includes the line "cause I'm beer drinkin, breath stinkin, sniffing glue." Their song "Shake Your Rump" includes the lines, "Should I have another sip no skip it/In the back of the ride and bust with the whippits". Pop punk band Sum 41 wrote a song, " Fat Lip ", which refers to a character who does not "make sense from all the gas you be huffing..." The song "Lança-Perfume", written and performed by Brazilian pop star Rita Lee , became a national hit in 1980. The song is about chloroethane and its widespread recreational sale and use during the rise of Brazil's carnivals. Inhalants are referred to by bands from other genres, including several grunge bands—an early 1990s genre that was influenced by punk rock. The 1990s grunge band Nirvana , which was influenced by punk music, penned a song, " Dumb ", in which Kurt Cobain sings "my heart is broke / But I have some glue / help me inhale / And mend it with you". L7 , an all-female grunge band, penned a song titled "Scrap" about a skinhead who inhales spray-paint fumes until his mind "starts to gel". Also in the 1990s, the Britpop band Suede had a UK hit with their song " Animal Nitrate " whose title is a thinly veiled reference to amyl nitrite . The Beck song "Fume" from his "Fresh Meat and Old Slabs" release is about inhaling nitrous oxide . Another Beck song, "Cold Ass Fashion", contains the line "O.G. – Original Gluesniffer!" Primus 's 1999 song " Lacquer Head " is about adolescents who use inhalants to get high. Hip hop performer Eminem wrote a song, "Bad Meets Evil", which refers to breathing "... ether in three lethal amounts." The Brian Jonestown Massacre, a retro-rock band from the 1990s, has a song, "Hyperventilation", which is about sniffing model-airplane cement. Frank Zappa's song "Teenage Wind" from 1981 has a reference to glue sniffing: "Nothing left to do but get out the 'ol glue; Parents, parents; Sniff it good now..." A number of films have depicted or referred to the use of solvent inhalants. In the 1968 film How Sweet It Is! , Grif Henderson ( James Garner ) recalls that he and his young son once made model aeroplanes together, but says, "...now all he wants to do is sniff the glue". In the 1980 comedy film Airplane! , the character McCroskey ( Lloyd Bridges ) refers to his inhalant use when he states, "I picked the wrong week to quit sniffing glue." In the 1996 film Citizen Ruth , the character Ruth ( Laura Dern ), a homeless drifter, is depicted inhaling patio sealant from a paper bag in an alleyway. In the tragicomedy Love Liza , the main character, played by Philip Seymour Hoffman , takes up building remote-controlled airplanes as a hobby to give him an excuse to sniff the fuel in the wake of his wife's suicide. Harmony Korine 's 1997 Gummo depicts adolescent boys inhaling contact cement for a high. Edet Belzberg's 2001 documentary Children Underground chronicles the lives of Romanian street children addicted to inhaling paint. In The Basketball Diaries , a group of boys is huffing Carbona cleaning liquid at 3 minutes and 27 seconds into the movie; further on, a boy is reading a diary describing the experience of sniffing the cleaning liquid. In the David Lynch film Blue Velvet , the bizarre and manipulative character played by Dennis Hopper uses a mask to inhale amyl nitrite. [ 76 ] In Little Shop of Horrors , Steve Martin 's character dies from nitrous oxide inhalation.
The 1999 independent film Boys Don't Cry depicts two young low-income women inhaling aerosol computer cleaner (compressed gas) for a buzz. In The Cider House Rules , Michael Caine 's character is addicted to inhaling ether vapors. In Thirteen , the main character, a teen, uses a can of aerosol computer cleaner to get high. In the action movie Shooter , an ex-serviceman on the run from the law ( Mark Wahlberg ) inhales nitrous oxide gas from a number of Whip-It! whipped cream canisters until he becomes unconscious. The South African film The Wooden Camera also depicts the use of inhalants by one of the main characters, a homeless teen, and their use in terms of socio-economic stratification. The title characters in Samson and Delilah sniff petrol; in Samson's case, possibly causing brain damage. In the 2004 film Taxi , Queen Latifah and Jimmy Fallon are trapped in a room with a burst tank containing nitrous oxide. Queen Latifah's character curses at Fallon while they both laugh hysterically. Fallon's character asks if it is possible to die from nitrous oxide, to which Queen Latifah's character responds with "It's laughing gas, stupid!" Neither of them had any side effects other than their voices becoming much deeper while in the room. In the French horror film Them (2006), a French couple living in Romania are pursued by a gang of street children who break into their home at night. Olivia Bonamy's character is later tortured and forced to inhale aurolac from a silver-colored bag. During a flashback scene in the 2001 film Hannibal , Hannibal Lecter gets Mason Verger high on amyl nitrite poppers, then convinces Verger to cut off his own face and feed it to his dogs. The science fiction story " Waterspider " by Philip K. Dick (first published in January 1964 in If magazine) contains a scene in which characters from the future are discussing the culture of the early 1950s. One character says: "You mean he sniffed what they called 'airplane dope'? He was a 'glue-sniffer'?", to which another character replies: "Hardly. That was a mania among adolescents and did not become widespread in fact until a decade later. No, I am speaking about imbibing alcohol." [ 77 ] The book Fear and Loathing in Las Vegas describes how the two main characters inhale diethyl ether and amyl nitrite . In the comedy series Newman and Baddiel in Pieces , Rob Newman's inhaling gas from a foghorn was a running joke in the series. One episode of the Jeremy Kyle Show featured a woman with a 20-year butane gas addiction. [ 78 ] In the series It's Always Sunny in Philadelphia , Charlie Kelly has an addiction to huffing glue. Additionally, season nine episode 8 shows Dennis, Mac, and Dee getting a can of gasoline to use as a solvent, but instead end up taking turns huffing from the canister. A 2008 episode of the reality show Intervention (season 5, episode 9) featured Allison, who was addicted to huffing computer duster for the short-lived, psychoactive effects. Allison has since achieved a small but significant cult following among bloggers and YouTube users. Several remixes of scenes from Allison's episode can be found online. [ citation needed ] Since 2009, Allison has worked with drug and alcohol treatment centers in Los Angeles County. [ citation needed ] In the seventh episode of the fourteenth season of South Park , Towelie, an anthropomorphic towel, develops an addiction to inhaling computer duster. 
[ citation needed ] In the show Squidbillies , the main character Early Cuyler is often seen inhaling gas or other substances. [ citation needed ]
https://en.wikipedia.org/wiki/Inhalant
In chemistry , inherent chirality is a property of asymmetry in molecules arising, not from a stereogenic or chiral center, but from a twisting of the molecule in 3-D space. The term was first coined by Volker Boehmer in a 1994 review, to describe the chirality of calixarenes arising from their non-planar structure in 3-D space. This phenomenon was described as resulting from "the absence of a plane of symmetry or an inversion center in the molecule as a whole". [ 1 ] Boehmer further explains this phenomenon by suggesting that if an inherently chiral calixarene macrocycle were opened up, it would produce an "achiral linear molecule". [ 1 ] There are two commonly used notations to describe a molecule's inherent chirality: cR/cS (arising from the notation used for classically chiral compounds, with c denoting curvature) and P/M. [ 2 ] [ irrelevant citation ] Inherently chiral molecules, like their classically chiral counterparts, can be used in chiral host–guest chemistry, enantioselective synthesis, and other applications. [ 3 ] There are naturally occurring inherently chiral molecules as well. Retinal, a chromophore in rhodopsin , exists in solution as a racemic pair of enantiomers due to the curvature of an achiral polyene chain. [ 4 ] After the creation of a series of traditionally chiral calixarenes (through the addition of a chiral substituent group on the top or bottom rim of the macrocycle), the first inherently chiral calixarenes were synthesized in 1982, though the molecules were not yet described as such. These inherently chiral calixarenes featured an XXYZ or WXYZ substitution pattern, such that the planar representation of the molecule does not show any chirality, and if the macrocycle were to be broken open, this would produce an achiral linear molecule. [ 5 ] The chirality in these calixarenes is instead derived from the curvature of the molecule in space. [ 6 ] Due to the lack of a formal definition following the term's initial conception, inherent chirality was used to describe a variety of chiral molecules that do not fall into other defined chirality types. The first fully formulated definition of inherent chirality was published in 2004 by Mandolini and Schiaffino (and later modified by Szumna): [ 4 ] inherent chirality arises from the introduction of a curvature in an ideal planar structure that is devoid of perpendicular symmetry planes in its bidimensional representation. Inherent chirality has been known by a variety of names in the literature, including bowl chirality (in fullerene fragments), intrinsic chirality, helicity (see section 3a), residual enantiomers (as applied to sterically hindered molecular propellers), and cyclochirality (though this is often considered to be a more specific example and cannot be applied to all inherently chiral molecules). [ 4 ] A simple example of inherent chirality is that of corannulene, commonly referred to as "bowl chirality" in the literature. The chirality of an unsubstituted corannulene (containing no classic stereogenic centers) cannot be seen in a 2D representation, but becomes clear when a 3D representation is evoked, as the C5 symmetry of corannulene provides the molecule with a source of chirality (figure 2). Racemization of these molecules is possible through an inversion of curvature, though some inherently chiral molecules have inversion barriers comparable to a classic chiral center.
[ 4 ] Some inherently chiral molecules contain chirality planes : planes within a given molecule across which the molecule is dissymmetric. Paracyclophanes often contain chiral planes if the bridge across the phenylene unit is short enough, or if the phenylene contains another substituent, not in the bridge, that hinders rotation of the phenylene unit. Similar to chirality planes, chirality axes arise from an axis about which the spatial arrangement of substituents creates chirality. This can be seen in helical molecules (see section 3a) as well as some alkenes. Spiro compounds (compounds with a twisted structure of two or more rings) can have inherent chirality at the spiroatom, due to the twisting of the achiral ring system. Inherently chiral alkenes have been synthesized through the use of a "buckle", wherein an achiral, linear alkene is forced into a chiral conformation. Alkenes have no classical chirality, so generally an external stereogenic center must be introduced; however, locking the alkene into a conformation through the use of an achiral buckle allows for the creation of an inherently chiral alkene. Inherently chiral alkenes have been synthesized through the use of dialkoxysilanes, with a large enough racemization barrier that enantiomers have been isolated. [ 7 ]
https://en.wikipedia.org/wiki/Inherent_chirality
Inhibitors of apoptosis are a group of proteins that act mainly on the intrinsic pathway to block programmed cell death; if mutated or improperly regulated, they can frequently lead to cancer or other harm to the cell. Many of these inhibitors act to block caspases , a family of cysteine proteases that play an integral role in apoptosis. [ 1 ] Some of these inhibitors include the Bcl-2 family , the viral inhibitor crmA, and the IAPs. Apoptosis , or programmed cell death, is a highly regulated process used by many multicellular organisms. Like any regulated process, apoptosis is subject to either activation or inhibition by a variety of chemical factors. Apoptosis can be triggered through two main pathways: extrinsic and intrinsic. The extrinsic pathway mostly involves extracellular signals triggering intracellular apoptosis mechanisms by binding to receptors in the cell membrane and sending signals from the outside of the cell. The intrinsic pathway involves internal cell signaling, primarily through the mitochondria . [ 2 ] The Bcl-2 family of proteins can either inhibit or promote apoptosis, and members are characterized by the Bcl-2 homologous domains BH1, BH2, BH3, and BH4. The combination of the domains in a protein determines its role in the apoptosis process. Members of the family that inhibit apoptosis include Bcl-2 itself, Bcl-XL, and Bcl-w, which possess all four of the domains. [ 3 ] Bcl-2 is the best known of the anti-apoptotic members, and is classified as an oncogene . Studies have shown that the Bcl-2 oncogene may inhibit apoptosis in two ways: either by directly controlling the activation of caspases, or by disrupting the channels that allow proapoptotic factors to leave the mitochondria. Regarding the activation of caspases, there exists a gene called ced-9 in C. elegans , a member of the Bcl-2 family, that protects against cell death. ced-9 encodes a protein that is structurally similar to Bcl-2; it binds to another protein, ced-4 (a homolog of APAF-1 in humans), and prevents it from activating the caspase ced-3 , which is necessary for killing of the cell. [ 4 ] In humans, APAF-1 does not actually interact with Bcl-2; rather, it is activated by cytochrome c , the release of which from the mitochondria is regulated by Bcl-2. BAX and BAK are multidomain proapoptotic members of the Bcl-2 family that lie in the cytosol and the mitochondria, respectively. After stimuli leading to cell death are received, BAX also moves to the mitochondria, where it carries out its functions. Bcl-2 and Bcl-xl have been found to sequester BH3-domain molecules in the mitochondria, which prevents the activation of BAX and BAK. [ 5 ] crmA, or cytokine response modifier A, is a cowpox serpin that inhibits caspases 1, 6 and 8, forming complexes with these caspases that render them unable to perform their apoptotic functions. Cowpox is an orthopox virus that increases its chances of survival and infection by inhibiting specific caspases and preventing inflammatory responses and apoptosis. [ 6 ] Serpins generally inhibit serine proteases by a suicide-substrate inhibition mechanism, in which the serpin undergoes a drastic change in structure to form an acyl-enzyme intermediate. The serpin's reactive center loop, once attacked by the protease, becomes inserted into the serpin's central beta sheet, trapping the covalently bound protease in a state where it is no longer able to perform its catalytic functions.
Studies have shown crmA uses a method analogous to serpin inhibition of serine proteases to inhibit the cysteine protease caspases. [ 6 ] The inhibitor of apoptosis proteins (IAPs) are a family of functionally and structurally related proteins that serve as endogenous inhibitors of programmed cell death ( apoptosis ). A common feature of all IAPs is the presence of a BIR (Baculovirus IAP Repeat, a ~70 amino acid domain) in one to three copies. The human IAP family consists of 8 members ( NAIP , cIAP1 , cIAP2 , XIAP , Survivin , Bruce/Apollon , ML-IAP/Livin and ILP-2 ), and IAP homologs have been identified in numerous organisms. The first members of the IAPs identified were the baculovirus IAPs Cp-IAP and Op-IAP, which bind to and inhibit caspases as a mechanism that contributes to the virus's efficient infection and replication cycle in the host. Later, further human IAPs were discovered, including XIAP , cIAP1 , cIAP2 , NAIP , Livin and Survivin . The best characterized IAP is XIAP , which binds caspase-9 , caspase-3 and caspase-7 , thereby inhibiting their activation and preventing apoptosis . cIAP1 and cIAP2 have also been shown to bind caspases, although how the IAPs inhibit apoptosis mechanistically at the molecular level is not completely understood. Activity of XIAP is blocked by binding to the DIABLO (Smac) and HTRA2 (Omi) proteins released from mitochondria after proapoptotic stimuli. [ 8 ] Since the mid-2000s, significant progress has been made in the development of small-molecule mimics of the endogenous IAP ligand Smac. One example published in 2013 describes the synthesis and testing of peptidomimetics whose structure is based on the AVPI tetrapeptide IAP-binding motif present in the N-terminus of mature Smac. These peptidomimetic compounds were specifically noted for their exceptionally high level of binding to Livin, one of the important IAP family members yet to receive much attention from a drug discovery perspective. [ 9 ] LCL161 is a drug that promotes cancer cell death by antagonizing IAPs. Clinical trials are ongoing but have produced mixed results. A Phase I clinical trial determined that LCL161 is well tolerated in the treatment of patients with advanced solid tumors, though patients experienced vomiting , nausea , fatigue , and loss of appetite . [ 10 ] Another study found that LCL161 reduced survival and promoted endotoxic shock when used in MYC-driven lymphoma . [ 11 ] Other clinical trials are still enrolling to determine the drug's efficacy. [ 12 ] Xevinapant , which targets the IAPs XIAP , c-IAP1 , and c-IAP2 , has entered phase III clinical trials for the treatment of squamous cell cancer . [ 13 ]
https://en.wikipedia.org/wiki/Inhibitor_of_apoptosis
The inhibitor protein (IP) is situated in the mitochondrial matrix and protects the cell against rapid ATP hydrolysis during momentary ischaemia . [ 1 ] In the absence of oxygen, the pH of the matrix drops. This causes IP to become protonated and change its conformation to one that can bind to the F1Fo ATP synthase and stall it, thereby preventing the enzyme from running in reverse and hydrolyzing ATP instead of making it. When oxygen is reintroduced into the system, the pH rises and IP is deprotonated. IP then dissociates from the F1Fo ATP synthase and allows it to resume ATP synthesis.
https://en.wikipedia.org/wiki/Inhibitor_protein
In mathematics and particularly in dynamical systems , an initial condition , in some contexts called a seed value , [ 1 ] : pp. 160 is a value of an evolving variable at some point in time designated as the initial time (typically denoted t = 0). For a system of order k (the number of time lags in discrete time , or the order of the largest derivative in continuous time ) and dimension n (that is, with n different evolving variables, which together can be denoted by an n -dimensional coordinate vector ), generally nk initial conditions are needed in order to trace the system's variables forward through time.

In both differential equations in continuous time and difference equations in discrete time, initial conditions affect the value of the dynamic variables ( state variables ) at any future time. In continuous time, the problem of finding a closed form solution for the state variables as a function of time and of the initial conditions is called the initial value problem . A corresponding problem exists for discrete time situations. While a closed form solution is not always possible to obtain, future values of a discrete time system can be found by iterating forward one time period per iteration, though rounding error may make this impractical over long horizons.

A linear matrix difference equation of the homogeneous (having no constant term) form $X_{t+1}=AX_{t}$ has closed form solution $X_{t}=A^{t}X_{0}$ predicated on the vector $X_{0}$ of initial conditions on the individual variables that are stacked into the vector; $X_{0}$ is called the vector of initial conditions or simply the initial condition, and contains nk pieces of information, n being the dimension of the vector X and k = 1 being the number of time lags in the system. The initial conditions in this linear system do not affect the qualitative nature of the future behavior of the state variable X ; that behavior is stable or unstable based on the eigenvalues of the matrix A but not based on the initial conditions.

Alternatively, a dynamic process in a single variable x having multiple time lags is $x_{t}=a_{1}x_{t-1}+a_{2}x_{t-2}+\cdots +a_{k}x_{t-k}$. Here the dimension is n = 1 and the order is k , so the necessary number of initial conditions to trace the system through time, either iteratively or via closed form solution, is nk = k . Again the initial conditions do not affect the qualitative nature of the variable's long-term evolution. The solution of this equation is found by using its characteristic equation $\lambda ^{k}-a_{1}\lambda ^{k-1}-a_{2}\lambda ^{k-2}-\cdots -a_{k-1}\lambda -a_{k}=0$ to obtain the latter's k solutions, which are the characteristic values $\lambda _{1},\dots ,\lambda _{k}$, for use in the solution equation $x_{t}=c_{1}\lambda _{1}^{t}+\cdots +c_{k}\lambda _{k}^{t}$. Here the constants $c_{1},\dots ,c_{k}$ are found by solving a system of k different equations based on this equation, each using one of k different values of t for which the specific initial condition $x_{t}$ is known.

A differential equation system of the first order with n variables stacked in a vector X is $dX/dt=AX$. Its behavior through time can be traced with a closed form solution conditional on an initial condition vector $X_{0}$. The number of required initial pieces of information is the dimension n of the system times the order k = 1 of the system, or n .
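As a quick numerical check of the discrete-time closed form above, the following sketch (with an assumed example matrix A and initial vector X0, not taken from the article) confirms that iterating $X_{t+1}=AX_{t}$ forward agrees with $X_{t}=A^{t}X_{0}$:

```python
# Minimal sketch: forward iteration of the homogeneous linear system
# X_{t+1} = A X_t matches the closed-form solution X_t = A^t X_0.
import numpy as np

A = np.array([[0.9, 0.2],
              [-0.1, 0.8]])   # assumed example 2x2 system matrix
X0 = np.array([1.0, -1.0])    # assumed vector of initial conditions

# Iterate one period at a time.
X = X0.copy()
for _ in range(25):
    X = A @ X

# Closed form: X_25 = A^25 X_0.
X_closed = np.linalg.matrix_power(A, 25) @ X0

print(np.allclose(X, X_closed))  # True, up to floating-point rounding
```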
The initial conditions do not affect the qualitative behavior (stable or unstable) of the system.

A single k th-order linear equation in a single variable x is

$\frac{d^k x}{dt^k} + a_{k-1}\frac{d^{k-1} x}{dt^{k-1}} + \cdots + a_1 \frac{dx}{dt} + a_0 x = 0.$

Here the number of initial conditions necessary for obtaining a closed-form solution is the dimension n = 1 times the order k , or simply k . In this case the k initial pieces of information will typically not be different values of the variable x at different points in time, but rather the values of x and its first k − 1 derivatives, all at some point in time such as time zero. The initial conditions do not affect the qualitative nature of the system's behavior. The characteristic equation of this dynamic equation is $\lambda^k + a_{k-1}\lambda^{k-1} + \cdots + a_1 \lambda + a_0 = 0,$ whose solutions are the characteristic values $\lambda_1, \dots, \lambda_k$; these are used in the solution equation

$x(t) = c_1 e^{\lambda_1 t} + \cdots + c_k e^{\lambda_k t}.$

This equation and its first k − 1 derivatives form a system of k equations that can be solved for the k parameters $c_1, \dots, c_k$, given the known initial conditions on x and its k − 1 derivatives' values at some time t .

Nonlinear systems can exhibit a substantially richer variety of behavior than linear systems can. In particular, the initial conditions can affect whether the system diverges to infinity or whether it converges to one or another attractor of the system. Each attractor, a (possibly disconnected) region of values that some dynamic paths approach but never leave, has a (possibly disconnected) basin of attraction such that state variables with initial conditions in that basin (and nowhere else) will evolve toward that attractor. Even nearby initial conditions could be in basins of attraction of different attractors (see for example Newton's method#Basins of attraction ).

Moreover, in those nonlinear systems showing chaotic behavior , the evolution of the variables exhibits sensitive dependence on initial conditions : the iterated values of any two very nearby points on the same strange attractor , while each remaining on the attractor, will diverge from each other over time. Thus even on a single attractor the precise values of the initial conditions make a substantial difference for the future positions of the iterates. This feature makes accurate simulation of future values difficult, and impossible over long horizons, because stating the initial conditions with exact precision is seldom possible and because rounding error is inevitable after even only a few iterations from an exact initial condition.

Every empirical law has the disquieting quality that one does not know its limitations. We have seen that there are regularities in the events in the world around us which can be formulated in terms of mathematical concepts with an uncanny accuracy. There are, on the other hand, aspects of the world concerning which we do not believe in the existence of any accurate regularities. We call these initial conditions. [ 2 ]
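A short numerical sketch can make these two roles of the initial condition concrete (an illustrative example in Python, not part of the article; the matrix and map parameters are chosen only for demonstration). The first loop iterates a linear system $X_{t+1} = AX_t$, whose stability is set by the eigenvalues of A (here 0.6 and 0.3, both inside the unit circle) regardless of $X_0$; the second iterates the chaotic logistic map to show sensitive dependence on initial conditions.

import numpy as np

# Linear system X_{t+1} = A X_t: qualitative behavior depends on the
# eigenvalues of A, not on the initial condition X_0.
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])
for X0 in (np.array([1.0, 0.0]), np.array([-3.0, 7.0])):
    X = X0
    for _ in range(50):
        X = A @ X
    print(X0, "->", X)   # both trajectories decay toward the origin

# Logistic map x_{t+1} = r x_t (1 - x_t) at r = 4 (chaotic): two initial
# conditions differing by 1e-10 diverge after a few dozen iterations.
r = 4.0
x, y = 0.3, 0.3 + 1e-10
for _ in range(50):
    x, y = r * x * (1 - x), r * y * (1 - y)
print(abs(x - y))        # no longer small, despite the tiny initial gap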
https://en.wikipedia.org/wiki/Initial_condition
Initial stability or primary stability is the resistance of a boat to small changes in the difference between the vertical forces applied on its two sides. [ 1 ] The study of initial stability and secondary stability is part of naval architecture as applied to small watercraft (as distinct from the study of ship stability concerning large ships ). [ citation needed ] Initial stability is determined by the angle of tilting on each side of the boat as its center of gravity (CG) moves sideways, either because passengers or cargo move laterally or as a response to an external force (e.g., a wave). [ citation needed ] The wider the boat and the further its volume is distributed away from its center line (CL), the greater the initial stability. [ citation needed ] Wide mono-hull small boats such as the johnboat have a great deal of initial stability and allow the occupants to stand upright to engage in fishing activities. [ citation needed ] Very narrow mono-hull boats such as canoes and kayaks have little initial stability, but twin-hull boats such as W-kayaks are considerably more stable, because their buoyancy is distributed at a greater distance from their center line and therefore acts more effectively to reduce tilting. For purposes of stability, it is advantageous to keep the centre of gravity as low as possible in small boats, so occupants are generally seated. Flatwater rowing shells , which have length-to- beam ratios of up to 30:1, are inherently unstable. [ citation needed ] After approximately 10 degrees of lateral tilt, hull shape gains importance, and secondary stability becomes the dominant consideration in boat stability. [ citation needed ]
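The effect of beam on initial stability can be made quantitative with the standard small-angle relations of naval architecture (a minimal sketch, not a formula from this article; the hull dimensions below are hypothetical). For a box-shaped hull of beam B and draft T, the metacentric radius is BM = B²/(12T), so initial stability grows roughly with the square of the beam:

# Small-angle (initial) stability of an idealized box-shaped hull.
# Standard approximations: KB = T/2, BM = I/V = B^2/(12T); GM = KB + BM - KG.
def metacentric_height(beam_m, draft_m, kg_m):
    """Return GM (m) for a rectangular barge; valid for small heel angles only."""
    kb = draft_m / 2.0                    # height of centre of buoyancy
    bm = beam_m**2 / (12.0 * draft_m)     # metacentric radius of a rectangular waterplane
    return kb + bm - kg_m

# Illustrative numbers: a wide johnboat-like hull vs. a narrow kayak-like hull,
# same draft and centre of gravity. Positive GM = stiff; negative GM = tender.
print(metacentric_height(beam_m=1.6, draft_m=0.15, kg_m=0.35))  # ~ +1.15 m
print(metacentric_height(beam_m=0.6, draft_m=0.15, kg_m=0.35))  # ~ -0.08 m

Nearly tripling the beam raises BM by almost an order of magnitude here, which is why wide hulls let occupants stand while narrow hulls do not.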
https://en.wikipedia.org/wiki/Initial_stability
In mathematical analysis , the initial value theorem is a theorem used to relate frequency domain expressions to the time domain behavior as time approaches zero . [ 1 ]

Let $F(s) = \int_0^\infty f(t) e^{-st}\, dt$ be the (one-sided) Laplace transform of f ( t ). If $f$ is bounded on $(0, \infty)$ (or, more generally, if $f(t) = O(e^{ct})$) and $\lim_{t \to 0^+} f(t)$ exists, then the initial value theorem says [ 2 ]

$\lim_{s \to \infty} s F(s) = \lim_{t \to 0^+} f(t).$

Suppose first that $f$ is bounded, and let $\lim_{t \to 0^+} f(t) = \alpha$. The change of variable $t \mapsto t/s$ in the integral $\int_0^\infty f(t) e^{-st}\, dt$ shows that

$s F(s) = \int_0^\infty f\!\left(\frac{t}{s}\right) e^{-t}\, dt.$

Since $f$ is bounded, the Dominated Convergence Theorem implies that

$\lim_{s \to \infty} s F(s) = \int_0^\infty \alpha\, e^{-t}\, dt = \alpha.$

Of course, the DCT is not really needed here; one can give a very simple proof using only elementary calculus: start by choosing $A$ so that $\int_A^\infty e^{-t}\, dt < \epsilon$, and then note that $\lim_{s \to \infty} f(t/s) = \alpha$ uniformly for $t \in (0, A]$.

The theorem assuming just that $f(t) = O(e^{ct})$ follows from the theorem for bounded $f$: define $g(t) = e^{-ct} f(t)$. Then $g$ is bounded, so we have shown that $g(0^+) = \lim_{s \to \infty} s G(s)$. But $f(0^+) = g(0^+)$ and $G(s) = F(s+c)$, so

$f(0^+) = \lim_{s \to \infty} s F(s+c) = \lim_{s \to \infty} (s+c) F(s+c) = \lim_{s \to \infty} s F(s),$

since $\lim_{s \to \infty} F(s) = 0$.
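As a concrete check (an illustrative example, not part of the article): for f(t) = e^{−3t} cos(2t), the transform is F(s) = (s+3)/((s+3)² + 4), and the theorem predicts lim_{s→∞} sF(s) = f(0⁺) = 1. A short sympy computation confirms this:

import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.exp(-3*t) * sp.cos(2*t)            # f(0+) = 1

# One-sided Laplace transform, then the limit asserted by the theorem.
F = sp.laplace_transform(f, t, s, noconds=True)
print(sp.limit(s * F, s, sp.oo))          # -> 1
print(sp.limit(f, t, 0, '+'))             # -> 1, in agreement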
https://en.wikipedia.org/wiki/Initial_value_theorem
In mathematical analysis , initialization of the differintegrals is a topic in fractional calculus , a branch of mathematics dealing with derivatives of non-integer order. The composition law of the differintegral operator states that although

$\mathbb{D}^q \mathbb{D}^{-q} = \mathbb{I},$

wherein $\mathbb{D}^{-q}$ is the left inverse of $\mathbb{D}^{q}$, the converse is not necessarily true:

$\mathbb{D}^{-q} \mathbb{D}^{q} \neq \mathbb{I}.$

Consider elementary integer-order calculus. Below, an integration is followed by a differentiation, using the example function $f(x) = 3x^2 + 1$:

$\frac{d}{dx} \int_0^x (3t^2 + 1)\, dt = \frac{d}{dx}\left(x^3 + x\right) = 3x^2 + 1.$

Now, on exchanging the order of composition:

$\int \frac{d}{dx}\left(3x^2 + 1\right) dx = \int 6x\, dx = 3x^2 + C,$

where C is the constant of integration . The lost information can be restored by using initialized conditions such as f (0) = C , f ′(0) = D , etc. If we neglected those initialization terms, the last equation would show that the composition of differentiation followed by integration does not recover the original function. Working with a properly initialized differintegral is the subject of initialized fractional calculus. If the differintegral is initialized properly, then the hoped-for composition law holds. The problem is that in differentiation, information is lost, as with C in the equation above. In fractional calculus, however, given that the operator has been fractionalized and is thus continuous, an entire complementary function is needed rather than a single constant. This is called the complementary function $\Psi$.
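The information loss that motivates initialization can be reproduced directly (a minimal sketch in sympy using the integer-order example above; illustrative only):

import sympy as sp

x = sp.symbols('x')
f = 3*x**2 + 1

# Differentiate after integrating: D D^{-1} acts as the identity.
integral_then_derivative = sp.diff(sp.integrate(f, (x, 0, x)), x)
print(integral_then_derivative)    # 3*x**2 + 1  -- f is recovered exactly

# Integrate after differentiating: the constant term is lost, so
# D^{-1} D f = f - f(0) rather than f.
derivative_then_integral = sp.integrate(sp.diff(f, x), (x, 0, x))
print(derivative_then_integral)    # 3*x**2  -- the "+1" must be re-supplied
                                   # by the initialization term f(0) = 1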
https://en.wikipedia.org/wiki/Initialized_fractional_calculus
In chemistry , initiation is a chemical reaction that triggers one or more secondary reactions. Initiation creates a reactive centre on a molecule, which produces a chain reaction . [ 1 ] The reactive centre generated by initiation is usually a radical , but can also be a cation or anion . [ 2 ] Once the reaction is initiated, the species goes through propagation , in which the reactive species reacts with stable molecules, producing both stable and reactive species. This process can produce very long chains of molecules called polymers , which are the building blocks of many materials. [ 3 ] After propagation, the reaction is then terminated . There are different types of initiation , the two main ones being thermal initiation and photo-initiation (initiation by light). [ 4 ] [ 5 ]

Thermal initiation involves initiating a reaction in the presence of heat , usually at very high temperatures. Heating a reaction can result in radical initiation of the substrate(s). [ 6 ] In the presence of heat, a monomer can self-initiate and react with other monomers or pairs of monomers. This process is called spontaneous polymerization and requires a great deal of heat to occur (up to 200 °C). [ 4 ] For monomers to initiate and polymerize with the same type of monomer (called homopolymerization ), about 180 °C is needed for the monomers to initiate. [ 4 ] Copolymerization , in which different kinds of monomers are initiated and react with each other, is more stable and can happen at lower temperatures than homopolymerization. [ 4 ] Self-initiation between homo-monomers is a difficult mechanism to observe because the species that are initiated are not always the same kind of monomer. [ 4 ] Sometimes impurities found in the reaction flask with the monomers are initiated and polymerize with the monomers, instead of the monomer itself being initiated. [ 4 ]

Photo-initiation occurs when monomers are initiated by light irradiation . LED light passes through the reaction flask, exciting the monomers and turning them into reactive species, mainly radicals and ions , which can then polymerize. [ 5 ] There are two mechanistic classifications of photo-initiation reactions: photoredox processes and intramolecular photochemical processes. [ 7 ] This type of initiation can happen at much lower temperatures than thermal initiation, typically at room temperature. [ 5 ] This makes photo-initiation much more practical than thermal initiation. Photo-initiation also produces fewer side reactions and fewer impurities than thermal initiation. [ 5 ] While thermal initiation is hard to maintain, photo-initiation provides an easy way to initiate monomers to polymerize. [ 5 ] Photo-initiation is used in applications such as making various coatings, adhesives , inks, and microelectronics . [ 5 ]
https://en.wikipedia.org/wiki/Initiation_(chemistry)
In molecular biology , initiation factors are proteins that bind to the small subunit of the ribosome during the initiation of translation , a part of protein biosynthesis . [ 1 ] Initiation factors can interact with repressors to slow down or prevent translation. They can also interact with activators to help start or increase the rate of translation. In bacteria , they are simply called IFs (i.e., IF1, IF2, and IF3) and in eukaryotes they are known as eIFs (e.g., eIF1 , eIF2 , eIF3 ). [ 1 ] Translation initiation is sometimes described as a three-step process which initiation factors help to carry out: first, the tRNA carrying a methionine amino acid binds to the small subunit of the ribosome; this complex then binds to the mRNA ; and finally it joins together with the large subunit of the ribosome. The initiation factors that help with this process each have different roles and structures. [ 2 ]

The initiation factors are divided into three major groups by taxonomic domain , with some homologies shared among them. [ 3 ] Many structural domains have been conserved through evolution, as prokaryotic initiation factors share similar structures with eukaryotic factors. [ 2 ] The prokaryotic initiation factor IF3 assists with start-site specificity, as well as mRNA binding, [ 2 ] [ 3 ] as does the eukaryotic initiation factor eIF1, which performs the same functions. The eIF1 structure is similar to the C-terminal domain of IF3, as each contains a five-stranded beta sheet against two alpha helices. [ 2 ]

The prokaryotic initiation factors IF1 and IF2 are also homologs of the eukaryotic initiation factors eIF1A and eIF5B . IF1 and eIF1A, both containing an OB-fold , bind to the A site and assist in the assembly of initiation complexes at the start codon . IF2 and eIF5B assist in the joining of the small and large ribosomal subunits. The eIF5B factor also contains domains found in elongation factors. Domain IV of eIF5B is closely related to the C-terminal domain of IF2, as both consist of a beta-barrel. The eIF5B factor also contains a GTP-binding domain, which can switch from an active GTP-bound form to an inactive GDP-bound form; this switch helps to regulate the affinity of the ribosome for the initiation factor. [ 2 ]

The eukaryotic initiation factor eIF3 plays an important role in translational initiation. It has a complex structure, composed of 13 subunits. It helps to create the 43S pre-initiation complex , composed of the small 40S subunit attached to other initiation factors, and the 48S pre-initiation complex, consisting of the 43S complex bound to the mRNA. The eIF3 factor can also act after translation to separate the ribosomal complex and keep the small and large subunits apart. It interacts with the eIF1 and eIF5 factors used for scanning and selection of the start codons, which can change the selection of the factors binding to different codons. [ 8 ]

Another important eukaryotic initiation factor, eIF2 , binds the tRNA containing methionine to the P site of the small ribosomal subunit. The P site is where the tRNA carrying an amino acid forms a peptide bond with the incoming amino acids and carries the peptide chain. The factor consists of an alpha, a beta, and a gamma subunit. The eIF2 gamma subunit is characterized by a GTP-binding domain and beta-barrel folds; it binds to the tRNA through GTP. Once the initiation factor helps the tRNA bind, the GTP is hydrolyzed and the eIF2 is released.
The eIF2 beta subunit is identified by its Zn-finger. The eIF2 alpha subunit is characterized by an OB-fold domain and two beta strands. This subunit helps to regulate translation, as it becomes phosphorylated to inhibit protein synthesis. [ 2 ]

The eIF4F complex supports the cap-dependent translation initiation process and is composed of the initiation factors eIF4A , eIF4E , and eIF4G . The cap end of the mRNA, the 5′ end, is brought to the complex, where the 43S ribosomal complex can bind and scan the mRNA for the start codon. During this process, the 60S ribosomal subunit binds and the large 80S ribosomal complex is formed. The eIF4G factor interacts with the polyA-binding protein, attracting the mRNA. The eIF4E factor then binds the cap of the mRNA, and the small ribosomal subunit binds to eIF4G to begin the process of creating the 80S ribosomal complex. The eIF4A factor, a DEAD-box helicase, makes this process more efficient by unwinding the untranslated regions of the mRNA to allow for ribosomal binding and scanning. [ 9 ]

In cancerous cells, initiation factors assist in cellular transformation and the development of tumors. The survival and growth of cancer is directly related to the modification of initiation factors, which are therefore used as targets for pharmaceuticals. Cancerous cells need increased energy and derive this energy from proteins, so over-expression of initiation factors correlates with cancers: it increases the synthesis of the proteins the cancer needs. Some initiation factors, such as eIF4E, are important in synthesizing specific proteins needed for the proliferation and survival of cancer. [ 10 ] The careful selection of proteins ensures that proteins that are usually limited in translation, and only proteins needed for cancer cell growth, will be synthesized; these include proteins involved in growth, malignancy, and angiogenesis. [ 8 ] The eIF4E factor, along with eIF4A and eIF4G, also plays a role in the transition of benign cancer cells to metastatic ones. [ 10 ]

The largest initiation factor, eIF3 , is another significant initiation factor in human cancers. Through its role in creating the 43S pre-initiation complex , it helps to bind the ribosomal subunit to the mRNA, and it has been linked to cancers through over-expression. For example, one of the thirteen eIF3 proteins, eIF3c, interacts with and represses proteins used in tumor suppression. Limiting the expression of certain eIF3 proteins, such as eIF3a and eIF3d, has been shown to decrease the vigorous growth of cancer cells. [ 10 ] The over-expression of eIF3a has been linked to breast, lung, cervical, esophageal, stomach, and colon cancers. It is prevalent during early stages of oncogenesis and likely selectively translates proteins needed for cell proliferation. [ 8 ] Suppressing eIF3a has been shown to decrease the malignancy of breast and lung cancer, most likely because of its role in tumor growth. [ 10 ]
https://en.wikipedia.org/wiki/Initiation_factor
Injection locking and injection pulling are frequency effects that can occur when a harmonic oscillator is disturbed by a second oscillator operating at a nearby frequency. When the coupling is strong enough and the frequencies near enough, the second oscillator can capture the first oscillator, causing it to oscillate at an essentially identical frequency to the second. This is injection locking. When the second oscillator merely disturbs the first but does not capture it, the effect is called injection pulling. Injection locking and pulling effects are observed in numerous types of physical systems; however, the terms are most often associated with electronic oscillators or laser resonators .

Injection locking has been used in beneficial and clever ways in the design of early television sets and oscilloscopes , allowing the equipment to be synchronized to external signals at a relatively low cost. Injection locking has also been used in high-performance frequency-doubling circuits. However, injection locking and pulling, when unintended, can degrade the performance of phase-locked loops and RF integrated circuits .

Injection pulling and injection locking can be observed in numerous physical systems where pairs of oscillators are coupled together. Perhaps the first to document these effects was Christiaan Huygens , the inventor of the pendulum clock , who was surprised to note that two pendulum clocks which normally would keep slightly different time nonetheless became perfectly synchronized when hung from a common beam. Modern researchers have confirmed his suspicion that the pendulums were coupled by tiny back-and-forth vibrations in the wooden beam. [ 1 ] The two clocks became injection locked to a common frequency.

In a modern-day voltage-controlled oscillator , an injection-locking signal may override its low-frequency control voltage, resulting in loss of control. When intentionally employed, injection locking provides a means to significantly reduce power consumption and possibly reduce phase noise in comparison with other frequency synthesizer and PLL design techniques. In similar fashion, the frequency output of large lasers can be purified by injection locking them with high-accuracy reference lasers (see injection seeder ).

An injection-locked oscillator ( ILO ) is usually based on a cross-coupled LC oscillator . It has been employed for frequency division [ 2 ] or jitter reduction in PLLs , with a pure sinusoidal waveform as input. It has been employed in continuous-mode clock and data recovery (CDR), performing clock restoration with the aid of either a preceding pulse-generation circuit that converts non-return-to-zero (NRZ) data to pseudo-return-to-zero (PRZ) format [ 3 ] or a non-ideal retiming circuit at the transmitter side that couples the clock signal into the data. [ 4 ] In the late 2000s, the ILO was employed for a burst-mode clock-recovery scheme. [ 5 ]

The ability to injection-lock is an inherent property of all oscillators (electronic or otherwise). This capability can be fundamentally understood as the combined effect of the oscillator's periodicity and its autonomy. Specifically, consider a periodic injection (i.e., an external disturbance) that advances or lags the oscillator's phase by some phase shift every oscillation cycle. Due to the oscillator's periodicity, this phase shift will be the same from cycle to cycle if the oscillator is injection-locked. Moreover, due to the oscillator's autonomy, each phase shift persists indefinitely.
Combining these two effects produces a fixed phase shift per oscillation cycle, which results in a constant frequency shift over time. If the resultant, shifted oscillation frequency matches the injection frequency, the oscillator is said to be injection-locked. However, if the maximum frequency shift that the oscillator can experience due to the injection is not enough to make the oscillation and injection frequencies coincide (i.e., the injection frequency lies outside the lock range ), the oscillator can only be injection pulled (see § Injection pulling ). [ 6 ]

High-speed logic signals and their harmonics are potential threats to an oscillator. The leakage of these and other high-frequency signals into an oscillator through a substrate, concomitant with an unintended lock, is unwanted injection locking. Injection locking can also provide a means of gain at a low power cost in certain applications.

Injection pulling (also known as frequency pulling) occurs when an interfering frequency source disturbs an oscillator but is unable to injection lock it. The frequency of the oscillator is pulled towards the frequency of the interfering source. The failure to lock may be due to insufficient coupling, or because the injection source frequency lies outside the locking window (also known as the lock range) of the oscillator. Injection pulling fundamentally corrupts the inherent periodicity of an oscillator.

Entrainment has been used to refer to the process of mode locking of coupled driven oscillators: the process whereby two interacting oscillating systems, which have different periods when they function independently, assume a common period. The two oscillators may fall into synchrony , but other phase relationships are also possible. The system with the greater frequency slows down, and the other speeds up. Dutch physicist Christiaan Huygens , the inventor of the pendulum clock , introduced the concept after he noticed, in 1666, that the pendulums of two clocks mounted on a common board had synchronized, and subsequent experiments duplicated this phenomenon. He described this effect as " odd sympathy ". The two pendulum clocks synchronized with their pendulums swinging in opposite directions, 180° out of phase , but in-phase states can also result.

Entrainment occurs because small amounts of energy are transferred between the two systems when they are out of phase in such a way as to produce negative feedback . As they assume a more stable phase relationship, the amount of energy gradually reduces to zero. In the realm of physics, Huygens' observations are related to resonance and the resonant coupling of harmonic oscillators , which also gives rise to sympathetic vibrations . A 2002 study of Huygens' observations showed that an antiphase stable oscillation was somewhat fortuitous, and that there are other possible stable solutions, including a "death state" where a clock stops running, depending on the strength of the coupling between the clocks. [ 7 ]

Mode locking between driven oscillators can be easily demonstrated using mechanical metronomes on a common, easily movable surface. [ 8 ] [ 9 ] [ 10 ] Such mode locking is important for many biological systems, including the proper operation of pacemakers . [ 11 ] The use of the word entrainment in the modern physics literature most often refers to the movement of one fluid, or collection of particulates, by another.
The use of the word to refer to mode locking of non-linear coupled oscillators appears mostly after about 1980, and remains relatively rare in comparison. A similar coupling phenomenon has been characterized in hearing aids when adaptive feedback cancellation is used: this chaotic artifact (entrainment) is observed when correlated input signals are presented to an adaptive feedback canceller. In recent years, aperiodic entrainment has been identified as an alternative form of entrainment that is of interest in biological rhythms. [ 12 ] [ 13 ] [ 14 ]
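The lock/pull distinction can be illustrated numerically. A standard first-order model of the phase difference φ between a disturbed oscillator and its injection is Adler's equation, dφ/dt = Δω − K sin φ, where Δω is the detuning and K is the lock range set by the injection strength; a stable fixed point (lock) exists exactly when |Δω| ≤ K. The following Python sketch (illustrative parameters only, not taken from the article) integrates it for one detuning inside and one outside the lock range:

import numpy as np

def adler(delta_omega, K, phi0=0.0, dt=1e-3, steps=200_000):
    """Euler-integrate Adler's equation dphi/dt = delta_omega - K*sin(phi)."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (delta_omega - K * np.sin(phi))
    return phi

K = 1.0  # lock range in rad/s, proportional to injection strength

# |delta_omega| < K: phase settles to arcsin(0.5) ~ 0.52 rad -> injection locked
print(adler(0.5, K) % (2 * np.pi))

# |delta_omega| > K: phase grows without bound (average slip rate
# sqrt(delta^2 - K^2) ~ 1.12 rad/s) -> injection pulling, no lock
print(adler(1.5, K))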
https://en.wikipedia.org/wiki/Injection_locking
An injection molding machine (also spelled injection moulding machine in BrE ), also known as an injection press , is a machine for manufacturing plastic products by the injection molding process. It consists of two main parts, an injection unit and a clamping unit . [ 1 ]

Injection molding machine molds can be fastened in either a horizontal or vertical position. Most machines are horizontally oriented, but vertical machines are used in some niche applications such as insert molding, allowing the machine to take advantage of gravity. Some vertical machines also do not require the mold to be fastened. There are many ways to fasten the tools to the platens ; the most common are manual clamps (both halves are bolted to the platens), but hydraulic clamps (chocks are used to hold the tool in place) and magnetic clamps are also used. The magnetic and hydraulic clamps are used where fast tool changes are required.

The person designing the mold chooses whether the mold uses a cold runner system or a hot runner system to carry the plastic and fillers from the injection unit to the cavities. A cold runner is a simple channel carved into the mold. The plastic that fills the cold runner cools as the part cools and is then ejected with the part as a sprue . A hot runner system is more complicated, often using cartridge heaters to keep the plastic in the runners hot as the part cools. After the part is ejected, the plastic remaining in a hot runner is injected into the next part.

Machines are classified primarily by the type of driving system they use: hydraulic, mechanical, electric, or hybrid. Hydraulic machines were historically the only option available to molders until Nissei Plastic Industrial introduced the first all-electric injection molding machine in 1983. [ 2 ] Hydraulic machines, although not nearly as precise, are the predominant type in most of the world, with the exception of Japan. [ 3 ] Mechanical machines use a toggle system for building up tonnage on the clamp of the machine. Tonnage is required on all machines so that the clamp does not open due to the injection pressure; if the mold partially opens, it will create flashing in the plastic product. The electric press, also known as electric machine technology (EMT), reduces operating costs by cutting energy consumption and also addresses some of the environmental concerns surrounding the hydraulic press. Electric presses have been shown to be quieter, faster, and more accurate; however, the machines are more expensive. Hybrid injection molding machines (sometimes referred to as "servo-hydraulic") claim to take advantage of the best features of both hydraulic and electric systems, but in actuality may use almost the same amount of electricity to operate as an all-electric machine, depending on the manufacturer. [ 4 ] [ 5 ]

A robotic arm is often used to remove the molded components, either by side or top entry, but it is more common for parts to drop out of the mold, through a chute, and into a container. The injection unit and the clamping unit each consist of three main components. [ 6 ]
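The tonnage requirement mentioned above follows from a simple force balance: the melt's cavity pressure acting over the part's projected area tries to push the mold halves apart, so the clamp force must at least match it. A rough sizing sketch follows (a rule-of-thumb illustration with hypothetical numbers, not a substitute for material- and geometry-specific mold-flow analysis):

# Rule-of-thumb clamp sizing: required clamp force >= cavity pressure x projected area.
# All values are illustrative assumptions.
projected_area_cm2 = 150.0    # projected area of part plus runners on the parting line
cavity_pressure_bar = 400.0   # cavity pressures commonly range from roughly 150 to 1000+ bar

force_newtons = (cavity_pressure_bar * 1e5) * (projected_area_cm2 * 1e-4)
print(f"required clamp force ~ {force_newtons / 9.81e3:.0f} metric tons")
# ~61 t here; a molder would add a safety margin and choose the next machine
# size up. Too little tonnage lets the mold flash; far too much can damage the tool.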
https://en.wikipedia.org/wiki/Injection_molding_machine
Injection of vinylite and corrosion is an anatomical technique used to visualize the branching and pathways of the circulatory system . It consists of filling the circulatory system of the specimen with vinyl acetate and then applying a corrosion technique to remove the overlying organic matter . The technique of vinylite injection followed by corrosion, besides having low cost, provides a long period of conservation, meeting the needs of undergraduate students in the study of anatomy. [ 1 ] The vinylite filling technique is considered an angiotechnique, that is, a method for the study of blood vessels. It is used to mark the circulatory system ( arterial and venous ) by filling the vessels of the part to be studied with pre-pigmented vinyl acetate, so that the ducts and the filled systems can be visualized. For corrosion or semi-corrosion, hydrochloric acid is the most viable substance used to obtain casts of the vascularization of organs or parts. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Injection_of_vinylite_and_corrosion
An injection well is a device that places fluid deep underground into porous rock formations, such as sandstone or limestone, or into or below the shallow soil layer. The fluid may be water , wastewater , brine (salt water), or water mixed with industrial chemical waste. [ 1 ] The U.S. Environmental Protection Agency (EPA) defines an injection well as "a bored, drilled, or driven shaft, or a dug hole that is deeper than it is wide, or an improved sinkhole, or a subsurface fluid distribution system". Well construction depends on the fluid injected and the depth of the injection zone. Deep wells that are designed to inject hazardous wastes or carbon dioxide far below the Earth's surface have multiple layers of protective casing and cement, whereas shallow wells injecting non-hazardous fluids into or above drinking water sources are more simply constructed. [ 1 ]

Injection wells are used for many purposes. Treated wastewater can be injected into the ground between impermeable layers of rock to avoid polluting surface waters. Injection wells are usually constructed of solid-walled pipe extending to a deep elevation in order to prevent injectate from mixing with the surrounding environment. [ 1 ] Injection wells utilize the earth as a filter to treat the wastewater before it reaches the aquifer. This method of wastewater disposal also serves to spread the injectate over a wide area, further decreasing environmental impacts. [ citation needed ]

In the United States, there are about 800 deep injection waste disposal wells used by industries such as chemical manufacturers, petroleum refineries, food producers, and municipal wastewater plants. [ 2 ] Most produced water generated by oil and gas extraction wells in the US is also disposed of in deep injection wells. [ 3 ] Critics of wastewater injection wells cite concerns about potential groundwater contamination. It is argued that the impact of some injected wastes on groundwater is not fully understood, and that science and regulatory agencies have not kept up with the rapid expansion of disposal practices in the US, where there are over 680,000 wells as of 2012. [ 4 ]

Alternatives to injection wells include direct discharge of treated wastewater to receiving waters, conditioning of oil-drilling and fracking produced water for reuse, utilization of treated water for irrigation or livestock watering, or processing of water at industrial wastewater treatment plants. [ 5 ] Direct discharge does not disperse the water over a wide area; the environmental impact is focused on a particular segment of a river and its downstream reaches, or on a coastal water body. Extensive irrigation is not typical in areas where the produced water tends to be salty, [ 5 ] and this practice is often prohibitively expensive, requiring ongoing maintenance and large electricity usage. [ 6 ]

Since the early 1990s, Maui County , Hawaii has been engaged in a struggle over the 3 to 5 million gallons per day of wastewater that it injects below the Lahaina Wastewater Reclamation Facility, over the claim that the water was emerging in seeps that were causing algae blooms and other environmental damage. After some twenty years, it was sued by environmental groups after multiple studies showed that more than half the injectate was appearing in nearby coastal waters. The judge in the suit rejected the County's arguments, potentially subjecting it to millions of dollars in federal fines.
A 2001 consent decree required the county to obtain a water quality certification from the Hawaii Department of Health , which it failed to do until 2010, after the suit was filed. [ 7 ] The case proceeded through the United States Court of Appeals for the Ninth Circuit and subsequently to the Supreme Court of the United States . In 2020 the Court ruled in County of Maui v. Hawaii Wildlife Fund that injection wells may be the "functional equivalent of a direct discharge" under the Clean Water Act, and instructed the EPA to work with the courts to establish regulations on when these types of wells should require permits. [ 8 ]

Another use of injection wells is in natural gas and petroleum production . Steam, carbon dioxide , water, and other substances can be injected into an oil-producing unit in order to maintain reservoir pressure, or to heat the oil or lower its viscosity, allowing it to flow to a producing well nearby. [ 9 ]

Yet another use for injection wells is in environmental remediation , for cleanup of either soil or groundwater contamination . Injection wells can insert clean water into an aquifer , thereby changing the direction and speed of groundwater flow, perhaps towards extraction wells downgradient, which can then more speedily and efficiently remove the contaminated groundwater. Injection wells can also be used in the cleanup of soil contamination, for example by use of an ozonation system. Complex hydrocarbons and other contaminants trapped in soil and otherwise inaccessible can be broken down by ozone , a highly reactive gas, often with greater cost-effectiveness than could be had by digging out the affected area. Such systems are particularly useful in built-up urban environments where digging may be impractical due to overlying buildings. [ 10 ]

Recently the option of refilling natural aquifers by injection or percolation has become more important, particularly in the driest region of the world, the MENA region (Middle East and North Africa). [ 11 ] Surface runoff can also be recharged into dry wells , or simply into barren wells that have been modified to function as cisterns. [ 12 ] These hybrid stormwater management systems, called recharge wells , offer aquifer recharge and an instantaneous supply of potable water at the same time. They can utilize existing infrastructure and require very little effort for modification and operation; the activation can be as simple as inserting a polymer cover (foil) into the well shaft. Vertical pipes conducting the overflow to the bottom can enhance performance. The area around the well acts as a funnel; if this area is maintained well, the water will require little purification before it enters the cistern. [ 13 ]

Injection wells are used to tap geothermal energy in hot, porous rock formations below the surface by injecting fluid into the ground, where it is heated and then extracted from adjacent wells as fluid, steam, or a combination of both. The heated steam and fluid can then be used to generate electricity or directly for geothermal heating . [ 14 ] [ 15 ] [ 16 ]

In the United States, injection well activity is regulated by the EPA and state governments under the Safe Drinking Water Act (SDWA). [ 1 ] The "State primary enforcement responsibility" section of the SDWA provides for states to submit their proposed UIC programs to the EPA to request state assumption of primary enforcement responsibility.
[ 17 ] Thirty-four states have been granted UIC primary enforcement authority for Class I, II, III, IV, and V wells. [ 18 ] For states without an approved UIC program, the EPA administrator prescribes a program to apply. [ 19 ] The EPA has issued Underground Injection Control (UIC) regulations in order to protect drinking water sources. [ 20 ] [ 21 ]

EPA regulations define six classes of injection wells. Class I wells are used for the injection of municipal and industrial wastes beneath underground sources of drinking water. Class II wells are used for the injection of fluids associated with oil and gas production, including waste from hydraulic fracturing. Class III wells are used for the injection of fluids used in mineral solution mining beneath underground sources of drinking water. Class IV wells, like Class I wells, were used for the injection of hazardous wastes, but into or above underground sources of drinking water instead of below them; the EPA banned the use of Class IV wells in 1984. [ 22 ] Class V wells are those used for all non-hazardous injections that are not covered by Classes I through IV; examples include stormwater drainage wells and septic system leach fields . Finally, Class VI wells are used for the injection of carbon dioxide for sequestration , or long-term storage. [ 1 ] Since the introduction of Class VI in 2010, only two Class VI wells had been constructed as of 2022, both at the same Illinois facility; four other approved projects did not proceed to construction. [ 23 ]

A July 2013 study by US Geological Survey scientist William Ellsworth links earthquakes to wastewater injection sites. In the four years from 2010 to 2013, the number of earthquakes of magnitude 3.0 or greater in the central and eastern United States increased dramatically. After decades of a steady earthquake rate (an average of 21 events per year), activity increased starting in 2001 and peaked at 188 earthquakes in 2011, including a 5.7-magnitude earthquake near Prague, Oklahoma , which was at the time the strongest earthquake ever recorded in Oklahoma. USGS scientists have found that at some locations the increase in seismicity coincides with the injection of wastewater in deep disposal wells. Injection-induced earthquakes are thought to be caused by pressure changes due to excess fluid injected deep below the surface and have been dubbed "man-made" earthquakes. [ 24 ]

On September 3, 2016, a 5.8-magnitude earthquake occurred near Pawnee, Oklahoma , followed by nine aftershocks between magnitudes 2.6 and 3.6 within three and a half hours; it broke the record set five years earlier. Tremors were felt as far away as Memphis, Tennessee , and Gilbert, Arizona . Mary Fallin , the Oklahoma governor, declared a local emergency, and shutdowns of local disposal wells were ordered by the Oklahoma Corporation Commission . [ 25 ] [ 26 ]

Results of ongoing multi-year research on induced earthquakes by the United States Geological Survey (USGS), published in 2015, suggested that most of the significant earthquakes in Oklahoma, such as the 1952 magnitude 5.5 El Reno earthquake, may have been induced by deep injection of wastewater by the oil industry. [ 27 ]
https://en.wikipedia.org/wiki/Injection_well