The interpersonal complementarity hypothesis asserts that individuals often behave in ways that evoke complementary or reciprocal behavior from others. [ 1 ] More specifically, this hypothesis predicts that positive behaviors evoke positive behaviors, negative behaviors evoke negative behaviors, and dominant behaviors evoke submissive behaviors, and vice versa. [ 2 ]
Essentially, each action carried out by a member of a group has the ability to elicit predictable actions from other group members. For example, individuals who display evidence of positive behavior (e.g., smiling, behaving cooperatively) tend to trigger positively valenced behaviors from others. [ 3 ] In much the same way, group members who behave in a docile or submissive fashion tend to elicit complementary, dominant behaviors from other members of the group. This behavioral congruency, as it applies to obedience and authority, has been illustrated in several studies assessing power hierarchies present in groups. [ 4 ] [ 5 ] These studies highlight the increased comfort experienced by individuals when the power or status behavior of others complements their own (e.g., a "leader" preferring a "follower"). | https://en.wikipedia.org/wiki/Interpersonal_complementarity_hypothesis |
Interphase is the active portion of the cell cycle that includes the G1, S, and G2 phases, in which the cell grows, replicates its DNA, and prepares for mitosis, respectively. Interphase was formerly called the "resting phase", but the name is misleading: far from being dormant, a cell in interphase is very busy synthesizing proteins, transcribing DNA into RNA, engulfing extracellular material, and processing signals, to name just a few activities. The cell is truly quiescent only in G0. Interphase is the phase of the cell cycle in which a typical cell spends most of its life; it is the "daily living" or metabolic phase of the cell, in which the cell obtains nutrients and metabolizes them, grows, replicates its DNA in preparation for mitosis, and conducts other "normal" cell functions. [ 1 ]
A common misconception is that interphase is the first stage of mitosis , but since mitosis is the division of the nucleus , prophase is actually the first stage. [ 2 ]
In interphase, the cell gets itself ready for mitosis or meiosis . Somatic cells , or normal diploid cells of the body, go through mitosis in order to reproduce themselves through cell division, whereas diploid germ cells (i.e., primary spermatocytes and primary oocytes ) go through meiosis in order to create haploid gametes (i.e., sperm and ova ) for the purpose of sexual reproduction.
There are three stages of cellular interphase, with each stage ending when a cellular checkpoint verifies the accuracy of the stage's completion before the cell proceeds to the next. The stages of interphase are G1 (first gap), in which the cell grows; S (synthesis), in which the cell replicates its DNA; and G2 (second gap), in which the cell prepares for mitosis.
The duration of time spent in interphase and in each stage of interphase is variable and depends on both the type of cell and the species of organism it belongs to. Most cells of adult mammals spend about 24 hours in interphase; this accounts for about 90–96% of the total time involved in cell division. [ 4 ] Interphase includes G1, S, and G2 phases. Mitosis and cytokinesis, however, are separate from interphase.
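Taking the two figures above together implies a total cycle length slightly longer than interphase itself; a quick arithmetic check (a sketch using the article's own numbers, not an additional figure from the source):

```python
# If interphase lasts ~24 hours and makes up 90-96% of the cell cycle,
# the implied total cycle length and remaining M-phase duration are:
interphase_h = 24.0
for fraction in (0.90, 0.96):
    total = interphase_h / fraction
    print(f"{fraction:.0%} -> total cycle ~ {total:.1f} h, "
          f"M phase ~ {total - interphase_h:.1f} h")
# ~25-26.7 h total, leaving roughly 1-2.7 h for mitosis and cytokinesis.
```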
DNA double-strand breaks can be repaired during interphase by two principal processes. [ 5 ] The first process, non-homologous end joining (NHEJ), can join the two broken ends of DNA in the G1, S and G2 phases of interphase. The second process, homologous recombinational repair (HRR), is more accurate than NHEJ in repairing double-strand breaks. However, HRR is only active during the S and G2 phases of interphase, when DNA replication is either partially or fully accomplished, since HRR requires two adjacent homologous chromosomes.
When G2 is completed, the cell enters a relatively brief period of nuclear and cellular division, composed of mitosis and cytokinesis, respectively. After the successful completion of mitosis and cytokinesis, both resulting daughter cells re-enter G1 of interphase.
In the cell cycle, interphase is preceded by telophase and cytokinesis of the M phase. Alternatively, interphase is sometimes interrupted by G0 phase, which, in some circumstances, may then end and be followed by the remaining stages of interphase. After the successful completion of the G2 checkpoint, the final checkpoint in interphase, the cell proceeds to prophase, or in plants to preprophase, which is the first stage of mitosis.
G0 phase is viewed as either an extended G1 phase, where the cell is neither dividing nor preparing to divide, or as a distinct quiescent stage which occurs outside of the cell cycle. [ 6 ]
In gamete production, interphase is succeeded by meiosis. In programmed cell death, interphase is followed or preempted by apoptosis.
In materials science, an interphase is the transition region between two materials, for example between the fibre and matrix of a composite material. | https://en.wikipedia.org/wiki/Interphase |
The Interplanetary Transport Network ( ITN ) [ 1 ] is a collection of gravitationally determined pathways through the Solar System that require very little energy for an object to follow. The ITN makes particular use of Lagrange points as locations where trajectories through space can be redirected using little or no energy. These points have the peculiar property of allowing objects to orbit around them despite the absence of any body to orbit, as they are locations where the gravitational pulls of two celestial bodies and the centrifugal force of a co-rotating object balance. While it would use little energy, transport along the network would take a long time. [ 2 ]
Interplanetary transfer orbits are solutions to the gravitational three-body problem , which, for the general case, does not have analytical solutions, and is addressed by numerical analysis approximations. However, a small number of exact solutions exist, most notably the five orbits referred to as " Lagrange points ", which are orbital solutions for circular orbits in the case when one body is significantly more massive.
The key to discovering the Interplanetary Transport Network was the investigation of the nature of the winding paths near the Earth–Sun and Earth–Moon Lagrange points. They were first investigated by Henri Poincaré in the 1890s. He noticed that the paths leading to and from any of those points would almost always settle, for a time, on an orbit about that point. [ 3 ] There are in fact an infinite number of paths taking one to the point and away from it, all of which require nearly zero change in energy. When plotted, they form a tube with the orbit about the Lagrange point at one end.
The derivation of these paths traces back to mathematicians Charles C. Conley and Richard P. McGehee in 1968. [ 4 ] Hiten , Japan's first lunar probe, was moved into lunar orbit using similar insight into the nature of paths between the Earth and the Moon . Beginning in 1997, Martin Lo , Shane D. Ross , and others wrote a series of papers identifying the mathematical basis that applied the technique to the Genesis solar wind sample return , and to lunar and Jovian missions. They referred to it as an Interplanetary Superhighway (IPS). [ 5 ]
As it turns out, it is very easy to transit from a path leading to the point to one leading back out. This makes sense, since the orbit is unstable, which implies one will eventually end up on one of the outbound paths after spending no energy at all. Edward Belbruno coined the term " weak stability boundary " [ 6 ] or "fuzzy boundary" [ 7 ] for this effect.
With careful calculation, one can pick which outbound path one wants. This turns out to be useful, as many of these paths lead to interesting destinations, such as the Earth's Moon or the Galilean moons of Jupiter, within a few months or years. [ 8 ]
For trips from Earth to other planets, however, these paths are not useful for crewed or uncrewed probes, as the trip would take many generations. Nevertheless, they have already been used to transfer spacecraft to the Earth–Sun L1 point, a useful location for studying the Sun that was employed in a number of recent missions, including the Genesis mission, the first to return solar wind samples to Earth. [ 9 ] The network is also relevant to understanding Solar System dynamics; [ 10 ] [ 11 ] Comet Shoemaker–Levy 9 followed such a trajectory on its collision path with Jupiter. [ 12 ] [ 13 ]
The ITN is based around a series of orbital paths predicted by chaos theory and the restricted three-body problem leading to and from the orbits around the Lagrange points – points in space where the gravity between various bodies balances with the centrifugal force of an object there. For any two bodies in which one body orbits around the other, such as a star/planet or planet/moon system, there are five such points, denoted L1 through L5. For instance, the Earth–Moon L1 point lies on a line between the two, where gravitational forces between them exactly balance with the centrifugal force of an object placed in orbit there. Transfers via these points require particularly little delta-v, and appear to be the lowest-energy transfers possible, even lower than the common Hohmann transfer orbit that has dominated orbital navigation since the start of space travel.
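The distance of the L1 and L2 points from the smaller body can be estimated from the restricted three-body problem via the Hill-sphere approximation r ≈ a(m/3M)^(1/3). Below is a minimal Python sketch for the Sun–Earth case; the mass and distance values are standard reference figures assumed here, not taken from this article.

```python
import math

# Hill-sphere approximation for the distance of L1/L2 from the smaller body:
# r ~ a * (m / (3 * M))^(1/3), valid when m << M.
a = 1.496e11        # Sun-Earth distance, metres (1 AU)
M_sun = 1.989e30    # solar mass, kg
m_earth = 5.972e24  # Earth mass, kg

r_L1 = a * (m_earth / (3.0 * M_sun)) ** (1.0 / 3.0)
print(f"Sun-Earth L1/L2 distance from Earth: {r_L1 / 1e9:.2f} million km")
# ~1.5 million km, the neighbourhood in which the L1 missions
# mentioned below (ISEE-3, Genesis) operated.
```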
Although the forces balance at these points, the first three points (the ones on the line between a certain large mass, e.g. a star, and a smaller, orbiting mass, e.g. a planet) are not stable equilibrium points. If a spacecraft placed at the Earth–Moon L1 point is given even a slight nudge away from the equilibrium point, the spacecraft's trajectory will diverge away from the L1 point. The entire system is in motion, so the spacecraft will not actually hit the Moon, but will travel in a winding path, off into space. There is, however, a semi-stable orbit around each of these points, called a halo orbit. The orbits for two of the points, L4 and L5, are stable, but the halo orbits for L1 through L3 are stable only on the order of months.
In addition to orbits around Lagrange points, the rich dynamics that arise from the gravitational pull of more than one mass yield interesting trajectories, also known as low energy transfers . [ 4 ] For example, the gravity environment of the Sun–Earth–Moon system allows spacecraft to travel great distances on very little fuel, [ citation needed ] albeit on an often circuitous route.
Launched in 1978, the ISEE-3 spacecraft was sent on a mission to orbit around one of the Lagrange points. [ 14 ] The spacecraft was able to maneuver around the Earth's neighborhood using little fuel by taking advantage of the unique gravity environment. After the primary mission was completed, ISEE-3 went on to accomplish other goals, including a flight through the geomagnetic tail and a comet flyby. The mission was subsequently renamed the International Cometary Explorer (ICE).
The first low energy transfer using what would later be called the ITN was the rescue of Japan 's Hiten lunar mission in 1991. [ 15 ]
Another example of the use of the ITN was NASA's 2001–2003 Genesis mission, which orbited the Sun–Earth L1 point for over two years collecting material, before being redirected to the L2 Lagrange point, and finally redirected from there back to Earth. [ 1 ]
The European Space Agency's 2003–2006 SMART-1 mission used another low-energy transfer of the ITN. [ citation needed ]
In a more recent example, the Chinese spacecraft Chang'e 2 used the ITN to travel from lunar orbit to the Earth–Sun L2 point, then on to fly by the asteroid 4179 Toutatis. [ citation needed ]
The comet 39P/Oterma's path from outside Jupiter's orbit, to inside, and back to outside is said to follow these low energy paths. [ 1 ] | https://en.wikipedia.org/wiki/Interplanetary_Transport_Network |
Interplanetary contamination refers to biological contamination of a planetary body by a space probe or spacecraft , either deliberate or unintentional.
There are two types of interplanetary contamination: forward contamination, the transfer of viable organisms or other contaminants from Earth to another celestial body, and back contamination, the introduction of extraterrestrial organisms or other contaminants into Earth's biosphere.
The main focus is on microbial life and on potentially invasive species . Non-biological forms of contamination have also been considered, including contamination of sensitive deposits (such as lunar polar ice deposits) of scientific interest. [ 1 ] In the case of back contamination, multicellular life is thought unlikely but has not been ruled out. In the case of forward contamination, contamination by multicellular life (e.g. lichens) is unlikely to occur for robotic missions, but it becomes a consideration in crewed missions to Mars . [ 2 ]
Current space missions are governed by the Outer Space Treaty and the COSPAR guidelines for planetary protection. Forward contamination is prevented primarily by sterilizing the spacecraft. In the case of sample-return missions, the aim of the mission is to return extraterrestrial samples to Earth, and sterilizing the samples would make them of much less scientific interest; back contamination would therefore be prevented mainly by containment and by breaking the chain of contact between the planet of origin and Earth. It would also require quarantine procedures for the materials and for anyone who comes into contact with them.
Most of the Solar System appears hostile to life as we know it, and no extraterrestrial life has ever been discovered. But if extraterrestrial life exists, it may be vulnerable to interplanetary contamination by foreign microorganisms. Some extremophiles may be able to survive space travel to another planet, and foreign life could possibly be introduced by spacecraft from Earth. If this is possible, some believe it poses scientific and ethical concerns.
Locations within the Solar System where life might exist today include the oceans of liquid water beneath the icy surfaces of Europa and Enceladus, and Titan (whose surface has oceans of liquid ethane/methane, but which may also have liquid water below the surface and ice volcanoes). [ 3 ] [ 4 ]
There are multiple consequences for both forward- and back-contamination. If a planet becomes contaminated with Earth life, it might then be difficult to tell whether any lifeforms discovered originated there or came from Earth. [ 5 ] Furthermore, the organic chemicals produced by the introduced life would confuse sensitive searches for biosignatures of living or ancient native life. The same applies to other more complex biosignatures. Life on other planets could have a common origin with Earth life, since in the early Solar System there was much exchange of material between the planets which could have transferred life as well. If so, it might be based on nucleic acids too ( RNA or DNA ).
The majority of the microbial species isolated from spacecraft are not well understood or characterized, cannot be cultured in laboratories, and are known only from DNA fragments obtained with swabs. [ 6 ] On a contaminated planet, it might be difficult to distinguish the DNA of extraterrestrial life from the DNA of life brought to the planet by the exploring spacecraft. Most species of microorganism on Earth are not yet well understood or DNA-sequenced; this applies particularly to the unculturable archaea, which are therefore difficult to study. This can be because they depend on the presence of other microorganisms, are slow growing, or depend on other conditions not yet understood. In typical habitats, 99% of microorganisms are not culturable. [ 7 ] Introduced Earth life could also contaminate resources of value for future human missions, such as water. [ 8 ]
If there is life on the planet, invasive species from Earth could outcompete it or consume it. [ 9 ] Experience on Earth shows that species moved from one continent to another may be able to outcompete the native life adapted to that continent. [ 9 ] Additionally, evolutionary processes on Earth might have developed biological pathways different from those of extraterrestrial organisms, and Earth life might therefore be able to outcompete it. The same is also possible in the other direction for contamination introduced into Earth's biosphere.
In addition to scientific concerns, ethical and moral questions have also been raised regarding intentional or unintentional interplanetary transport of life. [ 10 ] [ 11 ] [ 12 ] [ 13 ]
Enceladus and Europa show the best evidence for current habitats, mainly due to the possibility of their hosting liquid water and organic compounds.
There is ample evidence to suggest that Mars once offered habitable conditions for microbial life. [ 14 ] [ 15 ] It is therefore possible that microbial life may have existed on Mars, although no evidence has been found. [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ]
It is thought that many bacterial spores (endospores) from Earth were transported on spacecraft to Mars. [ 23 ] [ 24 ] Some may still be protected within Martian rovers and landers sitting on or near the surface of the planet. [ 25 ] [ 26 ] In that sense, Mars may have already been contaminated.
Certain lichens from the arctic permafrost are able to photosynthesize and grow in the absence of any liquid water, simply by using the humidity from the atmosphere. They are also highly tolerant of UV radiation , using melanin and other more specialized chemicals to protect their cells. [ 27 ] [ 28 ]
Although numerous studies point to resistance to some Martian conditions, they do so separately, and none have considered the full range of Martian surface conditions, including temperature, pressure, atmospheric composition, radiation, humidity, oxidizing regolith, and others, all at the same time and in combination. [ 29 ] Laboratory simulations show that whenever multiple lethal factors are combined, survival rates plummet quickly. [ 30 ]
Other studies have suggested the potential for life to survive using deliquescent salts, which, like the lichens, draw on the humidity of the atmosphere. If the mixture of salts is right, organisms may obtain liquid water at times of high atmospheric humidity, with the salts capturing enough water to support life.
Research published in July 2017 showed that when irradiated with a simulated Martian UV flux, perchlorates become even more lethal to bacteria (a bactericidal effect). Even dormant spores lost viability within minutes. [ 31 ] In addition, two other compounds of the Martian surface, iron oxides and hydrogen peroxide, act in synergy with irradiated perchlorates to cause a 10.8-fold increase in cell death compared to cells exposed to UV radiation alone after 60 seconds of exposure. [ 31 ] [ 32 ] It was also found that abraded silicates (quartz and basalt) lead to the formation of toxic reactive oxygen species. [ 33 ] The researchers concluded that "the surface of Mars is lethal to vegetative cells and renders much of the surface and near-surface regions uninhabitable." [ 34 ] This research demonstrates that the present-day surface is more uninhabitable than previously thought, [ 31 ] [ 35 ] and reinforces the case for inspecting at least a few meters into the ground, where radiation levels would be relatively low. [ 35 ] [ 36 ]
The Cassini spacecraft directly sampled the plumes escaping from Enceladus. Measured data indicate that these geysers are made primarily of salt-rich particles with an 'ocean-like' composition, which is thought to originate from a subsurface ocean of liquid saltwater rather than from the moon's icy surface. [ 37 ] Data from the geyser flythroughs also indicate the presence of organic chemicals in the plumes. Heat scans of Enceladus's surface also indicate higher temperatures around the fissures where the geysers originate, with temperatures reaching −93 °C (−135 °F), which is 115 °C (207 °F) warmer than the surrounding surface regions. [ 38 ]
Europa has much indirect evidence for its sub-surface ocean. Models of how Europa is affected by tidal heating require a subsurface layer of liquid water in order to accurately reproduce the linear fracturing of the surface. Indeed, observations by the Galileo spacecraft of how Europa's magnetic field interacts with Jupiter's field strengthens the case for a liquid, rather than solid, layer; an electrically conductive fluid deep within Europa would explain these results. [ 39 ] Observations from the Hubble Space Telescope in December 2012 appear to show an ice plume spouting from Europa's surface, [ 40 ] which would immensely strengthen the case for a liquid subsurface ocean. As was the case for Enceladus, vapour geysers would allow for easy sampling of the liquid layer. [ 41 ] Unfortunately, there appears to be little evidence that geysering is a frequent event on Europa due to the lack of water in the space near Europa. [ 42 ]
Forward contamination is prevented by sterilizing space probes sent to sensitive areas of the Solar System. Missions are classified depending on whether their destinations are of interest for the search for life, and whether there is any chance that Earth life could reproduce there.
NASA made these policies official with the issuing of Management Manual NMI-4-4-1, NASA Unmanned Spacecraft Decontamination Policy, on September 9, 1963. [ 43 ] Prior to NMI-4-4-1, the same sterilization requirements applied to all outgoing spacecraft regardless of their target. Difficulties in sterilizing the Ranger probes sent to the Moon were the primary reason for NASA's change to assessing the likelihood of forward contamination on a target-by-target basis.
Some destinations such as Mercury need no precautions at all. Others such as the Moon require documentation but nothing more, while destinations such as Mars require sterilization of the rovers sent there.
Back contamination would be prevented by containment or quarantine. However, there have been no sample-returns thought to have any possibility of a back contamination risk since the Apollo missions . The Apollo regulations have been rescinded and new regulations have yet to be developed. See suggested precautions for sample-returns .
Crewed spacecraft are of particular concern for interplanetary contamination because it is impossible to sterilize a human to the same level as a robotic spacecraft. The chance of forward contamination is therefore higher than for a robotic mission. [ 44 ] Humans are typically host to a hundred trillion microorganisms in ten thousand species in the human microbiome, which cannot be removed while preserving the life of the human. Containment seems the only option, but effective containment to the same standard as a robotic rover appears difficult to achieve with present-day technology. In particular, adequate containment in the event of a hard landing is a major challenge.
Human explorers may be potential carriers back to Earth of microorganisms acquired on Mars, if such microorganisms exist. [ 45 ] Another issue is the contamination of the water supply by Earth microorganisms shed by humans in their stools, skin and breath, which could have a direct effect on the long-term human colonization of Mars. [ 8 ]
Historical examples of measures taken to prevent contamination of the Moon include the inclusion of an anti-bacterial filter in the Apollo Lunar Module from Apollo 13 onward. This was placed on the cabin relief valve in order to prevent contaminants from the cabin being released into the lunar environment during the depressurization of the crew compartment prior to EVA. [ 46 ]
The Apollo 11 mission incited public concern about the possibility of microbes on the Moon, [ 47 ] creating fears about a plague being brought to Earth when the astronauts returned. [ 48 ] NASA received thousands of letters from Americans concerned about the potential for back contamination. [ 49 ]
The Moon has been suggested as a testbed for new technology to protect sites in the Solar System, and astronauts, from forward and back contamination. Currently, the Moon has no contamination restrictions because it is considered to be "not of interest" for prebiotic chemistry and origins of life . Analysis of the contamination left by the Apollo program astronauts could also yield useful ground truth for planetary protection models. [ 50 ] [ 51 ]
One of the most reliable ways to reduce the risk of forward and back contamination during visits to extraterrestrial bodies is to use only robotic spacecraft . [ 44 ] Humans in close orbit around the target planet could control equipment on the surface in real time via telepresence, so bringing many of the benefits of a surface mission, without its associated increased forward and back contamination risks. [ 52 ] [ 53 ] [ 54 ]
Since the Moon is now generally considered to be free from life, the most likely source of contamination would be from Mars during either a Mars sample-return mission or as a result of a crewed mission to Mars . The possibility of new human pathogens, or environmental disruption due to back contamination, is considered to be of extremely low probability but cannot yet be ruled out.
NASA and ESA are actively developing a Mars Sample Return Program to return samples collected by the Perseverance Rover to Earth. The European Space Foundation report cites many advantages of a Mars sample-return. In particular, it would permit extensive analyses on Earth, without the size and weight constraints for instruments sent to Mars on rovers. These analyses could also be carried out without the communication delays for experiments carried out by Martian rovers. It would also make it possible to repeat experiments in multiple laboratories with different instruments to confirm key results. [ 55 ]
Carl Sagan was the first to publicise back contamination issues that might follow from a Mars sample-return. In Cosmic Connection (1973) he wrote:
Precisely because Mars is an environment of great potential biological interest, it is possible that on Mars there are pathogens, organisms which, if transported to the terrestrial environment, might do enormous biological damage. [ 56 ]
Later in Cosmos (1980) Carl Sagan wrote:
Perhaps Martian samples can be safely returned to Earth. But I would want to be very sure before considering a returned-sample mission. [ 57 ]
NASA and ESA hold similar views; their studies found that, with present-day technology, Martian samples can be safely returned to Earth provided the right precautions are taken. [ 58 ]
NASA has already had experience with returning samples thought to represent a low back contamination risk when samples were returned for the first time by Apollo 11 . At the time, it was thought that there was a low probability of life on the Moon, so the requirements were not very stringent. The precautions taken then were inadequate by current standards, however. The regulations used then have been rescinded, and new regulations and approaches for a sample-return would be needed. [ 59 ]
A sample-return mission would be designed to break the chain of contact between Mars and the exterior of the sample container, for instance by sealing the returned container inside another, larger container in the vacuum of space before it returns to Earth. [ 60 ] [ 61 ] To eliminate the risk of parachute failure, the capsule could fall at terminal velocity, with the impact cushioned by the capsule's thermal protection system. The sample container would be designed to withstand the force of the impact. [ 61 ]
To receive, analyze and curate extraterrestrial soil samples, NASA has proposed to build a biohazard containment facility, tentatively known as the Mars Sample Return Receiving Facility (MSRRF). [ 62 ] This future facility must be rated biohazard level 4 ( BSL-4 ). [ 62 ] While existing BSL-4 facilities deal primarily with fairly well-known organisms, a BSL-4 facility focused on extraterrestrial samples must pre-plan the systems carefully while being mindful that there will be unforeseen issues during sample evaluation and curation that will require independent thinking and solutions. [ 63 ]
The facility's systems must be able to contain unknown biohazards, as the sizes of any putative Martian microorganisms are unknown. In consideration of this, additional requirements were proposed. Ideally it should filter particles of 0.01 μm or larger, and release of a particle 0.05 μm or larger is unacceptable under any circumstance. [ 60 ]
The reason for this extremely small size limit of 0.01 μm is the consideration of gene transfer agents (GTAs), virus-like particles produced by some microorganisms that package random segments of DNA capable of horizontal gene transfer. [ 60 ] These randomly incorporate segments of the host genome and can transfer them to other, evolutionarily distant hosts without killing the new host. In this way many archaea and bacteria can swap DNA with each other. This raises the possibility that Martian life, if it shares a common origin with Earth life in the distant past, could swap DNA with Earth microorganisms in the same way. [ 60 ] In one experiment reported in 2010, researchers left GTAs (DNA conferring antibiotic resistance) and marine bacteria overnight in natural conditions and found that by the next day up to 47% of the bacteria had incorporated the genetic material from the GTAs. [ 64 ] [ 65 ] Another reason for the 0.05 μm limit is the discovery of ultramicrobacteria as small as 0.2 μm across. [ 60 ]
The BSL-4 containment facility must also double as a cleanroom to preserve the scientific value of the samples. A challenge is that, while it is relatively easy to simply contain the samples once returned to Earth, researchers would also want to remove parts of the sample and perform analyses. During all these handling procedures, the samples would need to be protected from Earthly contamination. A cleanroom is normally kept at a higher pressure than the external environment to keep contaminants out, while a biohazard laboratory is kept at a lower pressure to keep the biohazards in. This would require compartmentalizing the specialized rooms in order to combine them in a single building. Solutions suggested include a triple walled containment facility, and extensive robotic handling of the samples. [ 66 ] [ 67 ] [ 68 ] [ 69 ]
The facility would be expected to take 7 to 10 years from design to completion, [ 70 ] [ 71 ] and an additional two years is recommended for the staff to become accustomed to the facilities. [ 70 ] [ 60 ]
Robert Zubrin, from the Mars Society , maintains that the risk of back contamination is negligible. He supports this using an argument based on the possibility of transfer of life from Earth to Mars on meteorites. [ 72 ] [ 73 ]
Margaret Race has examined in detail the legal process of approval for a MSR. [ 59 ] She found that under the National Environmental Policy Act (NEPA) (which did not exist in the Apollo era), a formal environmental impact statement is likely to be required, along with public hearings during which all the issues would be aired openly. This process is likely to take up to several years to complete.
During this process, she found, the full range of worst-case accident scenarios, impacts, and project alternatives would be played out in the public arena. Other agencies, such as the Environmental Protection Agency and the Occupational Safety and Health Administration, might also get involved in the decision-making process.
The laws on quarantine would also need to be clarified as the regulations for the Apollo program were rescinded. In the Apollo era, NASA delayed announcement of its quarantine regulations until the day Apollo was launched, bypassing the requirement for public debate - something that would likely not be tolerated today.
It is also probable that the presidential directive NSC-25 would apply, requiring a review of large-scale alleged effects on the environment, carried out after other domestic reviews and through a long process leading eventually to presidential approval of the launch.
Apart from those domestic legal hurdles, there would be numerous international regulations and treaties to be negotiated in the case of a Mars sample-return, especially those relating to environmental protection and health. Race concluded that the public of necessity has a significant role to play in the development of the policies governing Mars sample-return.
Several exobiologists have suggested that a Mars sample-return is not necessary at this stage, and that it is better to focus more on in situ studies on the surface first. Although it is not their main motivation, this approach of course also eliminates back contamination risks.
Some of these exobiologists advocate more in situ studies followed by a sample-return in the near future. Others go as far as to advocate in situ study instead of a sample-return at the present state of understanding of Mars. [ 74 ] [ 75 ] [ 76 ]
Their reasoning is that life on Mars is likely to be hard to find. Any present day life is likely to be sparse and occur in only a few niche habitats. Past life is likely to be degraded by cosmic radiation over geological time periods if exposed in the top few meters of the Mars surface. Also, only certain special deposits of salts or clays on Mars would have the capability to preserve organics for billions of years. So, they argue, there is a high risk that a Mars sample-return at our current stage of understanding would return samples that are no more conclusive about the origins of life on Mars or present day life than the Martian meteorite samples we already have.
Another consideration is the difficulty of keeping the sample completely free from Earth-life contamination during the return journey and during handling procedures on Earth. This might make it hard to show conclusively that any biosignatures detected do not result from contamination of the samples.
Instead they advocate sending more sensitive instruments on Mars surface rovers. These could examine many different rocks and soil types, and search for biosignatures on the surface and so examine a wide range of materials which could not all be returned to Earth with current technology at reasonable cost.
A sample-return to Earth would then be considered at a later stage, once we have a reasonably thorough understanding of conditions on Mars, and possibly have already detected life there, either current or past life, through biosignatures and other in situ analyses.
During the "Exploration Telerobotics Symposium" in 2012, experts on telerobotics from industry, NASA, and academia met to discuss telerobotics and its applications to space exploration. Among other issues, particular attention was given to Mars missions and a Mars sample-return.
They came to the conclusion that telerobotic approaches could permit direct study of the samples on the Mars surface via telepresence from Mars orbit, permitting rapid exploration and use of human cognition to take advantage of chance discoveries and feedback from the results obtained. [ 85 ]
They found that telepresence exploration of Mars has many advantages. The astronauts have near real-time control of the robots, and can respond immediately to discoveries. It also prevents contamination both ways and has mobility benefits as well. [ 86 ]
Finally, return of the sample to orbit has the advantage that it permits analysis of the sample without delay, to detect volatiles that may be lost during a voyage home. [ 85 ] [ 87 ]
Similar methods could be used to directly explore other biologically sensitive moons such as Europa , Titan , or Enceladus , once human presence in the vicinity becomes possible.
In August 2019, scientists reported that a capsule containing tardigrades (a resilient microbial animal) in a cryptobiotic state may have survived for a while on the Moon after the April 2019 crash landing of Beresheet , a failed Israeli lunar lander . [ 88 ] [ 89 ] | https://en.wikipedia.org/wiki/Interplanetary_contamination |
The interplanetary dust cloud, or zodiacal cloud (as the source of the zodiacal light), consists of cosmic dust (small particles floating in outer space) that pervades the space between planets within planetary systems, such as the Solar System. [ 2 ] This system of particles has been studied for many years in order to understand its nature, origin, and relationship to larger bodies. There are several methods of measuring this space dust.
In the Solar System, interplanetary dust particles play a role in scattering sunlight and in emitting thermal radiation, which is the most prominent feature of the night sky's radiation at wavelengths of 5–50 μm. [ 3 ] The sizes of the grains that characterize the infrared emission near Earth's orbit typically range from 10 to 100 μm. [ 4 ] Microscopic impact craters on lunar rocks returned by the Apollo Program [ 5 ] revealed the size distribution of cosmic dust particles bombarding the lunar surface. The Grün distribution [ 6 ] describes the flux of cosmic dust from nanometre to millimetre sizes at 1 AU.
The total mass of the interplanetary dust cloud is approximately 3.5 × 10¹⁶ kg, or the mass of an asteroid of radius 15 km (with density of about 2.5 g/cm³). [ 7 ] Straddling the zodiac along the ecliptic, this dust cloud is visible as the zodiacal light in a moonless and naturally dark sky, and is best seen sunward during astronomical twilight.
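The quoted equivalence between the cloud's total mass and a single 15 km asteroid can be verified with the sphere-mass formula m = (4/3)πr³ρ; a short Python check using only the article's own numbers:

```python
import math

# Mass of a sphere with the quoted asteroid parameters:
# radius 15 km, density 2.5 g/cm^3 (= 2500 kg/m^3).
radius_m = 15e3   # m
density = 2500.0  # kg/m^3

mass = density * (4.0 / 3.0) * math.pi * radius_m ** 3
print(f"mass ~ {mass:.2e} kg")  # ~3.5e16 kg, matching the quoted total
```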
The Pioneer spacecraft observations in the 1970s linked the zodiacal light with the interplanetary dust cloud in the Solar System. [ 8 ] Also, the VBSDC instrument on the New Horizons probe was designed to detect impacts of the dust from the zodiacal cloud in the Solar System. [ 9 ]
The sources of interplanetary dust particles (IDPs) include at least: asteroid collisions, cometary activity and collisions in the inner Solar System, Kuiper belt collisions, and interstellar medium grains (Backman, D., 1997). The origins of the zodiacal cloud have long been the subject of one of the most heated controversies in the field of astronomy.
It was believed that IDPs had originated from comets or asteroids whose particles had dispersed throughout the extent of the cloud. However, further observations have suggested that Mars dust storms may be responsible for the zodiacal cloud's formation. [ 10 ] [ 2 ]
The main physical processes "affecting" (destruction or expulsion mechanisms) interplanetary dust particles are: expulsion by radiation pressure , inward Poynting-Robertson (PR) radiation drag , solar wind pressure (with significant electromagnetic effects), sublimation , mutual collisions, and the dynamical effects of planets (Backman, D., 1997).
The lifetimes of these dust particles are very short compared to the lifetime of the Solar System. If one finds grains around a star that is older than about 10,000,000 years, then the grains must have been from recently released fragments of larger objects, i.e. they cannot be leftover grains from the protoplanetary disk (Backman, private communication). [ citation needed ] Therefore, the grains would be "later-generation" dust. The zodiacal dust in the Solar System is 99.9% later-generation dust and 0.1% intruding interstellar medium dust. All primordial grains from the Solar System's formation were removed long ago.
Particles which are affected primarily by radiation pressure are known as "beta meteoroids". They are generally less than 1.4 × 10⁻¹² g and are pushed outward from the Sun into interstellar space. [ 11 ]
The interplanetary dust cloud has a complex structure (Reach, W., 1997). Apart from a background density, this includes several distinct components, such as dust bands associated with asteroid families, cometary dust trails, and a circumsolar ring of dust in resonant lock with Earth's orbit.
Interplanetary dust has been found to form rings of dust in the orbital space of Mercury and Venus. [ 13 ] Venus's orbital dust ring is suspected to originate either from as yet undetected asteroids trailing Venus, [ 13 ] from interplanetary dust migrating in waves from one orbital space to the next, or from the remains of the circumstellar disc out of which the Solar System's protoplanetary disc, and then the planetary system itself, formed. [ 14 ]
In 1951, Fred Whipple predicted that micrometeorites smaller than 100 micrometers in diameter might be decelerated on impact with the Earth's upper atmosphere without melting. [ 15 ] The modern era of laboratory study of these particles began with the stratospheric collection flights of Donald E. Brownlee and collaborators in the 1970s using balloons and then U-2 aircraft. [ 16 ]
Although some of the particles found were similar to the material in present-day meteorite collections, the nanoporous nature and unequilibrated cosmic-average composition of other particles suggested that they began as fine-grained aggregates of nonvolatile building blocks and cometary ice. [ 17 ] [ 18 ] The interplanetary nature of these particles was later verified by noble gas [ 19 ] and solar flare track [ 20 ] observations.
In that context, a program for atmospheric collection and curation of these particles was developed at Johnson Space Center in Texas. [ 21 ] This stratospheric micrometeorite collection, along with presolar grains from meteorites, is a unique source of extraterrestrial material (not to mention the particles being small astronomical objects in their own right) available for study in laboratories today.
Spacecraft that have carried dust detectors include Helios , Pioneer 10 , Pioneer 11 , Ulysses (heliocentric orbit out to the distance of Jupiter), Galileo (Jupiter Orbiter), Cassini (Saturn orbiter), and New Horizons (see Venetia Burney Student Dust Counter ).
The Solar System's interplanetary dust cloud obscures the extragalactic background light, making observations of it from the inner Solar System very limited. [ 12 ]
Collections of review articles on various aspects of interplanetary dust and related fields appeared in the following books:
In 1978 Tony McDonnell edited the book Cosmic Dust [ 22 ] which contained chapters [ 23 ] on comets along with zodiacal light as indicator of interplanetary dust, meteors, interstellar dust, microparticle studies by sampling techniques, and microparticle studies by space instrumentation. Attention is also given to lunar and planetary impact erosion, aspects of particle dynamics, and acceleration techniques and high-velocity impact processes employed for the laboratory simulation of effects produced by micrometeoroids.
In 2001, Eberhard Grün, Bo Gustafson, Stan Dermott, and Hugo Fechtig published the book Interplanetary Dust. [ 24 ] Topics covered [ 25 ] are: historical perspectives; cometary dust; the near-Earth environment; meteoroids and meteors; properties of interplanetary dust and information from collected samples; in situ measurements of cosmic dust; numerical modeling of the zodiacal cloud structure; synthesis of observations; instrumentation; physical processes; optical properties of interplanetary dust; orbital evolution of interplanetary dust; circumplanetary dust, observations and simple physics; and interstellar dust and circumstellar dust disks.
In 2019, Rafael Rodrigo, Jürgen Blum, Hsiang-Wen Hsu, Detlef V. Koschny, Anny-Chantal Levasseur-Regourd, Jesús Martín-Pintado, Veerle J. Sterken, and Andrew Westphal collected reviews in the book Cosmic Dust from the Laboratory to the Stars. [ 26 ] Included are discussions [ 27 ] of dust in various environments: from planetary atmospheres and airless bodies, over interplanetary dust, meteoroids, comet dust, and emissions from active moons, to interstellar dust and protoplanetary disks. Diverse research techniques and results are discussed, including in-situ measurement, remote observation, laboratory experiments and modelling, and analysis of returned samples. | https://en.wikipedia.org/wiki/Interplanetary_dust_cloud |
The interplanetary magnetic field ( IMF ), also commonly referred to as the heliospheric magnetic field ( HMF ), [ 2 ] is the component of the solar magnetic field that is dragged out from the solar corona by the solar wind flow to fill the Solar System .
The coronal and solar wind plasmas are highly electrically conductive, meaning the magnetic field lines and the plasma flows are effectively "frozen" together [ 3 ] [ 4 ] and the magnetic field cannot diffuse through the plasma on time scales of interest. In the solar corona, the magnetic pressure greatly exceeds the plasma pressure, and thus the plasma is primarily structured and confined by the magnetic field. However, with increasing altitude through the corona, the solar wind accelerates as it extracts energy from the magnetic field through the Lorentz force interaction; the flow momentum eventually exceeds the restraining magnetic tension force, and the coronal magnetic field is dragged out by the solar wind to form the IMF. This acceleration leaves the solar wind, with the IMF frozen into it, locally supersonic out to as much as 160 AU from the Sun. [ 5 ]
The dynamic pressure of the wind dominates over the magnetic pressure through most of the Solar System (or heliosphere), so that the magnetic field is pulled into an Archimedean spiral pattern (the Parker spiral [ 6 ] ) by the combination of the outward motion and the Sun's rotation. In near-Earth space, the IMF nominally makes an angle of approximately 45° to the Earth–Sun line, though this angle varies with solar wind speed. The angle of the IMF to the radial direction decreases with heliographic latitude, as the rotational speed of the photospheric footpoint decreases toward the poles.
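The ~45° value follows from Parker's model, in which the tangent of the spiral angle far from the Sun is approximately Ωr/v_sw, where Ω is the solar rotation rate and v_sw the solar wind speed. A rough Python estimate, using typical reference values assumed here rather than figures from this article:

```python
import math

# Parker spiral angle: tan(psi) ~ Omega * r / v_sw for r >> R_sun.
omega = 2.0 * math.pi / (25.4 * 86400.0)  # solar rotation rate, rad/s (~25.4-day period)
r = 1.496e11                              # heliocentric distance of Earth, m (1 AU)
v_sw = 400e3                              # typical slow solar wind speed, m/s

psi_deg = math.degrees(math.atan(omega * r / v_sw))
print(f"spiral angle at 1 AU ~ {psi_deg:.0f} degrees")
# ~47 degrees; a faster wind winds the spiral less tightly, giving a smaller angle.
```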
Depending on the polarity of the photospheric footpoint, the heliospheric magnetic field spirals inward or outward; the magnetic field follows the same shape of spiral in the northern and southern parts of the heliosphere, but with opposite field direction. These two magnetic domains are separated by a current sheet (an electric current that is confined to a curved plane). This heliospheric current sheet has a shape similar to a twirled ballerina skirt , and changes in shape through the solar cycle as the Sun's magnetic field reverses about every 11 years.
The plasma in the interplanetary medium is also responsible for the strength of the Sun's magnetic field at the orbit of the Earth being over 100 times greater than originally anticipated. If space were a vacuum, then the Sun's magnetic dipole field (about 10⁻⁴ teslas at the surface of the Sun [ citation needed ] ) would reduce with the inverse cube of the distance to about 10⁻¹¹ teslas. But satellite observations show that it is about 100 times greater, at around 10⁻⁹ teslas. [ citation needed ] Magnetohydrodynamic (MHD) theory predicts that the motion of a conducting fluid (e.g., the interplanetary medium) in a magnetic field induces electric currents, which in turn generate magnetic fields; in this respect, it behaves like an MHD dynamo. [ citation needed ]
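The vacuum figure quoted above can be reproduced directly from the inverse-cube law B(r) = B_s(R_s/r)³ for a dipole field. A minimal Python check, where the surface field is the article's figure and the solar radius is a standard constant assumed here:

```python
# Inverse-cube falloff of a vacuum dipole field, as described in the text.
B_surface = 1e-4  # solar surface dipole field, tesla (article's figure)
R_sun = 6.96e8    # solar radius, m (standard value, assumed)
r = 1.496e11      # Earth's orbit, m (1 AU)

B_vacuum = B_surface * (R_sun / r) ** 3
print(f"dipole-only field at 1 AU ~ {B_vacuum:.1e} T")
# ~1e-11 T, versus the observed ~1e-9 T sustained by solar wind currents.
```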
The interplanetary magnetic field at the Earth's orbit varies with waves and other disturbances in the solar wind, known as " space weather ." The field is a vector, with components in the radial and azimuthal directions as well as a component perpendicular to the ecliptic. The field varies in strength near the Earth from 1 to 37 nT, averaging about 6 nT. [ 7 ] Since 1997, the solar magnetic field has been monitored in real time by the Advanced Composition Explorer (ACE) satellite located in a halo orbit at the Sun–Earth Lagrange Point L1; since July 2016, it has been monitored by the Deep Space Climate Observatory (DSCOVR) satellite, also at the Sun–Earth L1 (with the ACE continuing to serve as a back-up measurement). [ 8 ] | https://en.wikipedia.org/wiki/Interplanetary_magnetic_field |
The interplanetary medium ( IPM ) or interplanetary space consists of the mass and energy which fills the Solar System, and through which all the larger Solar System bodies, such as planets, dwarf planets, asteroids, and comets, move. The IPM stops at the heliopause, outside of which the interstellar medium begins. Before 1950, interplanetary space was widely considered to be either an empty vacuum or a region filled with "aether".
The interplanetary medium includes interplanetary dust, cosmic rays, and hot plasma from the solar wind. [ 2 ] [ failed verification ] The density of the interplanetary medium is very low, decreasing in inverse proportion to the square of the distance from the Sun. It is variable, and may be affected by magnetic fields and events such as coronal mass ejections. Typical particle densities in the interplanetary medium are about 5–40 particles/cm³, but exhibit substantial variation. [ 3 ] : Figure 1 In the vicinity of the Earth, it contains about 5 particles/cm³, [ 4 ] : 326 but values as high as 100 particles/cm³ have been observed. [ 3 ] : Figure 2
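Given the inverse-square falloff and the quoted ~5 particles/cm³ near Earth, the typical density elsewhere in the heliosphere follows directly; a small Python sketch (Jupiter's heliocentric distance is a standard value assumed here, not a figure from this article):

```python
# Inverse-square scaling of interplanetary particle density with distance.
n_earth = 5.0  # particles/cm^3 near Earth (article's figure)
r_au = 5.2     # Jupiter's heliocentric distance, AU (assumed standard value)

n = n_earth / r_au ** 2
print(f"typical density near Jupiter ~ {n:.2f} particles/cm^3")  # ~0.18
```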
The temperature of the interplanetary medium varies through the Solar System. Joseph Fourier estimated that the interplanetary medium must have temperatures comparable to those observed at Earth's poles, but on faulty grounds: lacking modern estimates of atmospheric heat transport, he saw no other means to explain the relative consistency of Earth's climate. [ 5 ] A very hot interplanetary medium remained a minority position among geophysicists as late as 1959, when Chapman proposed a temperature on the order of 10,000 K, [ 6 ] but observation in low Earth orbit of the exosphere soon contradicted his position. [ citation needed ] In fact, both Fourier's and Chapman's final predictions were correct: because the interplanetary medium is so rarefied, it does not exhibit thermodynamic equilibrium; instead, different components have different temperatures. [ 3 ] : 4 [ 4 ] [ 7 ] The solar wind exhibits temperatures consistent with Chapman's estimate in cislunar space, [ 4 ] : 326, 329 [ 7 ] [ 8 ] while dust particles near Earth's orbit exhibit temperatures of 257–298 K (3–77 °F), [ 9 ] : 157 averaging about 283 K (50 °F). [ 10 ] In general, the solar wind temperature decreases in proportion to the inverse square of the distance to the Sun, [ 6 ] while the temperature of the dust decreases in proportion to the inverse cube root of the distance. [ 9 ] : 157 For dust particles within the asteroid belt, typical temperatures range from 200 K (−100 °F) at 2.2 AU down to 165 K (−163 °F) at 3.2 AU. [ 11 ]
Since the interplanetary medium is a plasma , or gas of ions , the interplanetary medium has the characteristics of a plasma, rather than a simple gas. For example, it carries the Sun's magnetic field with it, is highly electrically conductive (resulting in the heliospheric current sheet ), forms plasma double layers where it comes into contact with a planetary magnetosphere or at the heliopause , and exhibits filamentation (such as in aurorae ).
The plasma in the interplanetary medium is also responsible for the strength of the Sun's magnetic field at the orbit of the Earth being over 100 times greater than originally anticipated. If space were a vacuum, then the Sun's 10⁻⁴ tesla magnetic dipole field would reduce with the cube of the distance to about 10⁻¹¹ tesla. But satellite observations show that it is about 100 times greater, at around 10⁻⁹ tesla. Magnetohydrodynamic (MHD) theory predicts that the motion of a conducting fluid (e.g., the interplanetary medium) in a magnetic field induces electric currents which in turn generate magnetic fields, and in this respect it behaves like an MHD dynamo.
The outer edge of the heliosphere is the boundary between the flow of the solar wind and the interstellar medium . This boundary is known as the heliopause and is believed to be a fairly sharp transition of the order of 110 to 160 astronomical units from the Sun. The interplanetary medium thus fills the roughly spherical volume contained within the heliopause.
How the interplanetary medium interacts with planets depends on whether they have magnetic fields or not. Bodies such as the Moon have no magnetic field and the solar wind can impact directly on their surface. Over billions of years, the lunar regolith has acted as a collector for solar wind particles, and so studies of rocks from the lunar surface can be valuable in studies of the solar wind.
High-energy particles from the solar wind impacting on the lunar surface also cause it to emit faintly at X-ray wavelengths.
Planets with their own magnetic field, such as the Earth and Jupiter , are surrounded by a magnetosphere within which their magnetic field is dominant over the Sun 's. This disrupts the flow of the solar wind, which is channelled around the magnetosphere. Material from the solar wind can "leak" into the magnetosphere, causing aurorae and also populating the Van Allen radiation belts with ionised material.
The interplanetary medium is responsible for several optical phenomena visible from Earth. Zodiacal light is a broad band of faint light sometimes seen after sunset and before sunrise, stretched along the ecliptic and appearing brightest near the horizon. This glow is caused by sunlight scattered by dust particles in the interplanetary medium between Earth and the Sun.
A similar phenomenon centered at the antisolar point , gegenschein is visible in a naturally dark, moonless night sky . Much fainter than zodiacal light, this effect is caused by sunlight backscattered by dust particles beyond Earth's orbit.
The term "interplanetary" appears to have been first used in print in 1691 by the scientist Robert Boyle : "The air is different from the æther (or vacuum) in the... interplanetary spaces" Boyle Hist. Air . In 1898, American astronomer Charles Augustus Young wrote: "Inter-planetary space is a vacuum, far more perfect than anything we can produce by artificial means..." ( The Elements of Astronomy , Charles Augustus Young, 1898).
The notion that space is a vacuum filled with an "aether", or simply a cold, dark vacuum, persisted into the 1950s. Tufts University professor of astronomy Kenneth R. Lang, writing in 2000, noted: "Half a century ago, most people visualized our planet as a solitary sphere traveling in a cold, dark vacuum of space around the Sun". [ 13 ] In 2002, Akasofu stated: "The view that interplanetary space is a vacuum into which the Sun intermittently emitted corpuscular streams was changed radically by Ludwig Biermann (1951, 1953) who proposed on the basis of comet tails, that the Sun continuously blows its atmosphere out in all directions at supersonic speed" ( Syun-Ichi Akasofu, Exploring the Secrets of the Aurora, 2002). | https://en.wikipedia.org/wiki/Interplanetary_medium |
In the field of mathematical analysis, an interpolation inequality is an inequality of the form

$\|u_{0}\|_{0}\leq C\,\|u_{1}\|_{1}^{\alpha _{1}}\cdots \|u_{n}\|_{n}^{\alpha _{n}},$

where for $0\leq k\leq n$, $u_{k}$ is an element of some particular vector space $X_{k}$ equipped with norm $\|\cdot \|_{k}$, $\alpha _{k}$ is some real exponent, and $C$ is some constant independent of $u_{0},\dots ,u_{n}$. The vector spaces concerned are usually function spaces, and many interpolation inequalities assume $u_{0}=u_{1}=\cdots =u_{n}$ and so bound the norm of an element in one space by a combination of norms in other spaces, such as Ladyzhenskaya's inequality and the Gagliardo–Nirenberg interpolation inequality, both given below. Nonetheless, some important interpolation inequalities involve distinct elements $u_{0},\dots ,u_{n}$, including Hölder's inequality and Young's inequality for convolutions, which are also presented below.
The main applications of interpolation inequalities lie in fields of study, such as partial differential equations, where various function spaces are used. Important examples are the Sobolev spaces, consisting of functions whose weak derivatives up to some (not necessarily integer) order lie in $L^{p}$ spaces for some $p$. There, interpolation inequalities are used, roughly speaking, to bound derivatives of some order with a combination of derivatives of other orders. They can also be used to bound products, convolutions, and other combinations of functions, often with some flexibility in the choice of function space. Interpolation inequalities are fundamental to the notion of an interpolation space, such as the space $W^{s,p}$, which loosely speaking is composed of functions whose $s$-th order weak derivatives lie in $L^{p}$. Interpolation inequalities are also applied when working with Besov spaces $B_{p,q}^{s}(\Omega )$, which are a generalization of the Sobolev spaces. [ 1 ] Another class of spaces admitting interpolation inequalities are the Hölder spaces.
A simple example of an interpolation inequality (one in which all the $u_{k}$ are the same $u$, but the norms $\|\cdot \|_{k}$ are different) is Ladyzhenskaya's inequality for functions $u:\mathbb {R} ^{2}\to \mathbb {R}$, which states that whenever $u$ is a compactly supported function such that both $u$ and its gradient $\nabla u$ are square integrable, it follows that the fourth power of $u$ is integrable and [ 2 ]

$\int _{\mathbb {R} ^{2}}|u|^{4}\,dx\leq C\int _{\mathbb {R} ^{2}}|u|^{2}\,dx\int _{\mathbb {R} ^{2}}|\nabla u|^{2}\,dx,$

i.e.

$\|u\|_{L^{4}}\leq C^{1/4}\,\|u\|_{L^{2}}^{1/2}\,\|\nabla u\|_{L^{2}}^{1/2}.$
A slightly weaker form of Ladyzhenskaya's inequality applies in dimension 3, and Ladyzhenskaya's inequality is actually a special case of a general result that subsumes many of the interpolation inequalities involving Sobolev spaces, the Gagliardo-Nirenberg interpolation inequality . [ 3 ] : 276–280
The following example, this one allowing interpolation of non-integer Sobolev spaces, is also a special case of the Gagliardo-Nirenberg interpolation inequality. [ 4 ] Denoting the $L^2$-based Sobolev spaces by $H^k = W^{k,2}$, and given real numbers $1 \leq k < \ell < m$ and a function $u \in H^m$, we have
$\|u\|_{H^\ell} \;\leq\; \|u\|_{H^k}^{\frac{m-\ell}{m-k}}\,\|u\|_{H^m}^{\frac{\ell-k}{m-k}}.$
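This bound can be sanity-checked numerically. On the torus, the $H^s$ norm can be computed from Fourier coefficients as $\|u\|_{H^s}^2 = \sum_\xi (1+\xi^2)^s |\hat{u}(\xi)|^2$, and the inequality above then follows from Hölder's inequality applied to the frequency sum. The following Python sketch illustrates this for one smooth periodic test function; the function and the orders $k$, $\ell$, $m$ are arbitrary illustrative choices, not part of the statement.

```python
# Numerical illustration of the Sobolev interpolation bound on the torus,
# using the Fourier characterization of the H^s norm; the test function
# and the orders below are arbitrary illustrative choices.
import numpy as np

N = 1024
x = 2 * np.pi * np.arange(N) / N
u = np.exp(np.sin(x))                    # a smooth periodic test function
u_hat = np.fft.fft(u) / N                # Fourier coefficients of u
xi = np.fft.fftfreq(N, d=1.0 / N)        # integer frequencies 0, 1, ..., -1

def h_norm(s):
    """H^s norm from Fourier coefficients: sqrt of sum (1 + xi^2)^s |u_hat|^2."""
    return np.sqrt(np.sum((1 + xi**2) ** s * np.abs(u_hat) ** 2))

k, ell, m = 1, 2, 3                      # any orders with k < ell < m
lhs = h_norm(ell)
rhs = h_norm(k) ** ((m - ell) / (m - k)) * h_norm(m) ** ((ell - k) / (m - k))
print(lhs <= rhs)                        # expected: True
```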
The elementary interpolation inequality for Lebesgue spaces , which is a direct consequence of Hölder's inequality , [ 3 ] : 707 reads: for exponents $1 \leq p \leq r \leq q \leq \infty$, every $f \in L^p(X,\mu) \cap L^q(X,\mu)$ is also in $L^r(X,\mu)$, and one has
$\|f\|_{L^r} \;\leq\; \|f\|_{L^p}^{t}\,\|f\|_{L^q}^{1-t},$
where, in the case of $p < q < \infty$, $\tfrac{1}{r}$ is written as a convex combination $\tfrac{1}{r} = \tfrac{t}{p} + \tfrac{1-t}{q}$, that is, with $t := \tfrac{p(q-r)}{r(q-p)}$ and $1-t = \tfrac{q(r-p)}{r(q-p)}$; in the case of $p < q = \infty$, $r$ is written as $r = \tfrac{p}{t}$ with $t := \tfrac{p}{r}$ and $1-t = \tfrac{r-p}{r}$.
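A minimal numerical sanity check of this inequality, with the norms approximated by Riemann sums on a grid; the test function and the exponents $p$, $q$, $r$ are illustrative choices.

```python
# Checking ||f||_r <= ||f||_p^t * ||f||_q^(1-t) for a sample function;
# the grid, function, and exponents are illustrative, not part of the theorem.
import numpy as np

def lp_norm(f, x, p):
    """Approximate the L^p norm of samples f on a uniform grid x."""
    dx = x[1] - x[0]
    return (np.sum(np.abs(f) ** p) * dx) ** (1.0 / p)

x = np.linspace(-10.0, 10.0, 20001)
f = np.exp(-x**2) * (1 + np.sin(3 * x))    # an arbitrary test function

p, q, r = 2.0, 6.0, 3.0                    # any exponents with p <= r <= q
t = p * (q - r) / (r * (q - p))            # so that 1/r = t/p + (1-t)/q

lhs = lp_norm(f, x, r)
rhs = lp_norm(f, x, p) ** t * lp_norm(f, x, q) ** (1 - t)
print(lhs <= rhs)                          # expected: True
```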
An example of an interpolation inequality where the elements differ is Young's inequality for convolutions . [ 5 ] Given exponents $1 \leq p, q, r \leq \infty$ such that $\tfrac{1}{p} + \tfrac{1}{q} = 1 + \tfrac{1}{r}$, and functions $f \in L^p$, $g \in L^q$, their convolution lies in $L^r$ and
$\|f * g\|_{L^r} \;\leq\; \|f\|_{L^p}\,\|g\|_{L^q}.$ | https://en.wikipedia.org/wiki/Interpolation_inequality |
The Interpolation Theory , also known as the Intercalation Theory or the Antithetic Theory , is a theory that attempts to explain the origin of the alternation of generations in plants . The Interpolation Theory suggests that the sporophyte generation arose from a haploid , green algal thallus in which repeated mitotic cell divisions of a zygote produced an embryo that was retained on the thallus and gave rise to the diploid phase ( sporophyte ). Ensuing evolution caused the sporophyte to become increasingly complex, both organographically and anatomically.
The Interpolation Theory was introduced by Čelakovský (1874) as the Antithetic Theory. Bower (1889) further developed this theory and renamed it the Interpolation Theory. The theory was later supported by Overton (1893), Scott (1896), Strasburger (1897), Williams (1904), and others.
The gradual evolution of an independent, sporophyte phase was viewed by Bower as being closely related to the transition from aquatic to terrestrial plant life on Earth .
Evidence supporting this theory can be found in the life cycle of modern Bryophytes in which the sporophyte is physiologically dependent on the gametophyte . Competing theories include the Transformation theory , which was introduced as the Homologous theory by Čelakovský, and also renamed by Bower. | https://en.wikipedia.org/wiki/Interpolation_theory |
Interpolymer complexes ( IPC ) are the products of non-covalent interactions between complementary unlike macromolecules in solutions. [ 1 ] There are four types of these complexes:
Interpolymer complexes can be prepared either by mixing complementary polymers in solutions or by matrix (template) polymerisation. It is also possible to prepare IPCs at liquid-liquid interfaces or at solid or soft surfaces. Usually the structure of IPCs formed will depend on many factors, including the nature of interacting polymers, concentrations of their solutions, nature of solvent and presence of inorganic ions or organic molecules in solutions. Mixing of dilute polymer solutions usually leads to formation of IPCs as a colloidal dispersion , whereas more concentrated polymer solutions form IPCs in the form of a gel .
Methods to study interpolymer complexes could be classified into:
IPCs are finding applications in pharmaceutics in the design of novel dosage forms. [ 8 ] [ 9 ] [ 10 ] They also are increasingly used to form various coatings using layer-by-layer deposition approach. [ 11 ] Some IPCs were proposed for application as membranes and films. [ 12 ] They also have been used for structuring of soils to protect from erosion . [ 13 ] Other applications include encapsulation technologies. [ 14 ] | https://en.wikipedia.org/wiki/Interpolymer_complexes |
In mathematical logic , interpretability is a relation between formal theories that expresses the possibility of interpreting or translating one into the other.
Assume T and S are formal theories . Slightly simplified, T is said to be interpretable in S if and only if the language of T can be translated into the language of S in such a way that S proves the translation of every theorem of T . Of course, there are some natural conditions on admissible translations here, such as the necessity for a translation to preserve the logical structure of formulas .
This concept, together with weak interpretability , was introduced by Alfred Tarski in 1953. Three other related concepts are cointerpretability , logical tolerance , and cotolerance , introduced by Giorgi Japaridze in 1992–93.
| https://en.wikipedia.org/wiki/Interpretability |
An interpretation is an assignment of meaning to the symbols of a formal language . Many formal languages used in mathematics , logic , and theoretical computer science are defined in solely syntactic terms, and as such do not have any meaning until they are given some interpretation. The general study of interpretations of formal languages is called formal semantics .
The most commonly studied formal logics are propositional logic , predicate logic and their modal analogs, and for these there are standard ways of presenting an interpretation. In these contexts an interpretation is a function that provides the extension of symbols and strings of an object language. For example, an interpretation function could take the predicate symbol $T$ and assign it the extension $\{(\mathrm{a})\}$. All our interpretation does is assign the extension $\{(\mathrm{a})\}$ to the non-logical symbol $T$, and does not make a claim about whether $T$ is to stand for tall and $\mathrm{a}$ for Abraham Lincoln. On the other hand, an interpretation does not have anything to say about logical symbols, e.g. the logical connectives "and", "or" and "not". Though we may take these symbols to stand for certain things or concepts, this is not determined by the interpretation function.
An interpretation often (but not always) provides a way to determine the truth values of sentences in a language. If a given interpretation assigns the value True to a sentence or theory , the interpretation is called a model of that sentence or theory.
A formal language consists of a possibly infinite set of sentences (variously called words or formulas ) built from a fixed set of letters or symbols . The inventory from which these letters are taken is called the alphabet over which the language is defined. To distinguish the strings of symbols that are in a formal language from arbitrary strings of symbols, the former are sometimes called well-formed formulæ (wff). The essential feature of a formal language is that its syntax can be defined without reference to interpretation. For example, we can determine that ( P or Q ) is a well-formed formula even without knowing whether it is true or false.
A formal language $\mathcal{W}$ can be defined with the alphabet $\alpha = \{\triangle, \square\}$, and with a word being in $\mathcal{W}$ if it begins with $\triangle$ and is composed solely of the symbols $\triangle$ and $\square$.
A possible interpretation of $\mathcal{W}$ could assign the decimal digit '1' to $\triangle$ and '0' to $\square$. Then $\triangle\square\triangle$ would denote 101 under this interpretation of $\mathcal{W}$.
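This interpretation can be mimicked in a few lines of Python; the ASCII names TRIANGLE and SQUARE stand in for the glyphs above and are purely illustrative.

```python
# A toy version of the interpretation of W: assign '1' to the triangle
# symbol and '0' to the square symbol, then read the word as a numeral.
TRIANGLE, SQUARE = "T", "S"   # ASCII stand-ins for the two glyphs

def is_word(w: str) -> bool:
    """A word is in W iff it begins with a triangle and uses only the two symbols."""
    return w.startswith(TRIANGLE) and set(w) <= {TRIANGLE, SQUARE}

def interpret(w: str) -> int:
    """The interpretation: triangles denote '1', squares denote '0'."""
    assert is_word(w)
    return int(w.replace(TRIANGLE, "1").replace(SQUARE, "0"))

print(interpret("TST"))  # triangle-square-triangle denotes 101
```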
In the specific cases of propositional logic and predicate logic, the formal languages considered have alphabets that are divided into two sets: the logical symbols ( logical constants ) and the non-logical symbols. The idea behind this terminology is that logical symbols have the same meaning regardless of the subject matter being studied, while non-logical symbols change in meaning depending on the area of investigation.
Logical constants are always given the same meaning by every interpretation of the standard kind, so that only the meanings of the non-logical symbols are changed. Logical constants include quantifier symbols ∀ ("all") and ∃ ("some"), symbols for logical connectives ∧ ("and"), ∨ ("or"), ¬ ("not"), parentheses and other grouping symbols, and (in many treatments) the equality symbol =.
Many of the commonly studied interpretations associate each sentence in a formal language with a single truth value, either True or False. These interpretations are called truth functional ; they include the usual interpretations of propositional and first-order logic. The sentences that are made true by a particular assignment are said to be satisfied by that assignment.
In classical logic , no sentence can be made both true and false by the same interpretation, although this is not true of glut logics such as LP. [ 1 ] Even in classical logic, however, it is possible that the truth value of the same sentence can be different under different interpretations. A sentence is consistent if it is true under at least one interpretation; otherwise it is inconsistent . A sentence φ is said to be logically valid if it is satisfied by every interpretation (if φ is satisfied by every interpretation that satisfies ψ then φ is said to be a logical consequence of ψ).
Some of the logical symbols of a language (other than quantifiers) are truth-functional connectives that represent truth functions — functions that take truth values as arguments and return truth values as outputs (in other words, these are operations on truth values of sentences).
The truth-functional connectives enable compound sentences to be built up from simpler sentences. In this way, the truth value of the compound sentence is defined as a certain truth function of the truth values of the simpler sentences. The connectives are usually taken to be logical constants , meaning that the meaning of the connectives is always the same, independent of what interpretations are given to the other symbols in a formula.
This is how we define logical connectives in propositional logic:
So under a given interpretation of all the sentence letters Φ and Ψ (i.e., after assigning a truth-value to each sentence letter), we can determine the truth-values of all formulas that have them as constituents, as a function of the logical connectives. The following table shows how this kind of thing looks. The first two columns show the truth-values of the sentence letters as determined by the four possible interpretations. The other columns show the truth-values of formulas built from these sentence letters, with truth-values determined recursively.
Now it is easier to see what makes a formula logically valid. Take the formula F : (Φ ∨ ¬Φ). If our interpretation function makes Φ true, then ¬Φ is made false by the negation connective. Since the disjunct Φ of F is true under that interpretation, F is true. The only other possible interpretation of Φ makes it false, in which case ¬Φ is made true by the negation function. F would again be true, since one of F 's disjuncts, ¬Φ, is true under this interpretation. Since these two interpretations of F are the only possible ones, and F comes out true in both, we say that it is logically valid or tautologous.
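The same check can be carried out mechanically by enumerating every interpretation, as in the following sketch; representing a formula as a Python function of its sentence letters is an illustrative convention, not standard notation.

```python
# A brute-force validity check: a formula is logically valid (tautologous)
# iff it is true under every interpretation of its sentence letters.
from itertools import product

def is_tautology(formula, num_letters):
    """Enumerate all 2^n truth assignments and test the formula under each."""
    return all(formula(*values) for values in product([True, False], repeat=num_letters))

print(is_tautology(lambda phi: phi or not phi, 1))    # F : (Φ ∨ ¬Φ) is valid -> True
print(is_tautology(lambda phi, psi: phi or psi, 2))   # (Φ ∨ Ψ) is not valid -> False
```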
An interpretation of a theory is the relationship between a theory and some subject matter when there is a many-to-one correspondence between certain elementary statements of the theory, and certain statements related to the subject matter. If every elementary statement in the theory has a correspondent it is called a full interpretation , otherwise it is called a partial interpretation . [ 2 ]
The formal language for propositional logic consists of formulas built up from propositional symbols (also called sentential symbols, sentential variables, propositional variables ) and logical connectives. The only non-logical symbols in a formal language for propositional logic are the propositional symbols, which are often denoted by capital letters. To make the formal language precise, a specific set of propositional symbols must be fixed.
The standard kind of interpretation in this setting is a function that maps each propositional symbol to one of the truth values true and false. This function is known as a truth assignment or valuation function. In many presentations, it is literally a truth value that is assigned, but some presentations assign truthbearers instead.
For a language with $n$ distinct propositional variables there are $2^n$ distinct possible interpretations. For any particular variable $a$, for example, there are $2^1 = 2$ possible interpretations: 1) $a$ is assigned T , or 2) $a$ is assigned F . For the pair $a$, $b$ there are $2^2 = 4$ possible interpretations: 1) both are assigned T , 2) both are assigned F , 3) $a$ is assigned T and $b$ is assigned F , or 4) $a$ is assigned F and $b$ is assigned T .
Given any truth assignment for a set of propositional symbols, there is a unique extension to an interpretation for all the propositional formulas built up from those variables. This extended interpretation is defined inductively, using the truth-table definitions of the logical connectives discussed above.
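A sketch of this inductive extension, with formulas encoded as nested tuples; the tuple encoding and connective names are assumptions made for illustration.

```python
# Extending a truth assignment on propositional symbols to all formulas,
# by recursion on the structure of the formula (the truth-table clauses).
def evaluate(formula, assignment):
    """Return the truth value of a formula under the given assignment."""
    if isinstance(formula, str):          # base case: a propositional symbol
        return assignment[formula]
    op, *args = formula
    if op == "not":
        return not evaluate(args[0], assignment)
    if op == "and":
        return evaluate(args[0], assignment) and evaluate(args[1], assignment)
    if op == "or":
        return evaluate(args[0], assignment) or evaluate(args[1], assignment)
    raise ValueError(f"unknown connective: {op}")

# (P or Q) and not P, under the assignment P -> False, Q -> True
print(evaluate(("and", ("or", "P", "Q"), ("not", "P")), {"P": False, "Q": True}))  # True
```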
Unlike propositional logic, where every language is the same apart from a choice of a different set of propositional variables, there are many different first-order languages. Each first-order language is defined by a signature . The signature consists of a set of non-logical symbols and an identification of each of these symbols as either a constant symbol, a function symbol, or a predicate symbol . In the case of function and predicate symbols, a natural number arity is also assigned. The alphabet for the formal language consists of logical constants, the equality relation symbol =, all the symbols from the signature, and an additional infinite set of symbols known as variables.
For example, in the language of rings , there are constant symbols 0 and 1, two binary function symbols + and ·, and no binary relation symbols. (Here the equality relation is taken as a logical constant.)
Again, we might define a first-order language L as consisting of individual symbols a, b, and c; predicate symbols F, G, H, I and J; variables x, y, z; no function letters; and no sentential symbols.
Given a signature σ, the corresponding formal language is known as the set of σ-formulas. Each σ-formula is built up out of atomic formulas by means of logical connectives; atomic formulas are built from terms using predicate symbols. The formal definition of the set of σ-formulas proceeds in the other direction: first, terms are assembled from the constant and function symbols together with the variables. Then, terms can be combined into an atomic formula using a predicate symbol (relation symbol) from the signature or the special predicate symbol "=" for equality (see the section " Interpreting equality" below). Finally, the formulas of the language are assembled from atomic formulas using the logical connectives and quantifiers.
To ascribe meaning to all sentences of a first-order language, the following information is needed.
An object carrying this information is known as a structure (of signature σ), or σ-structure, or L -structure (of language L), or as a "model".
The information specified in the interpretation provides enough information to give a truth value to any atomic formula, after each of its free variables , if any, has been replaced by an element of the domain. The truth value of an arbitrary sentence is then defined inductively using the T-schema , which is a definition of first-order semantics developed by Alfred Tarski. The T-schema interprets the logical connectives using truth tables, as discussed above. Thus, for example, φ ∧ ψ is satisfied if and only if both φ and ψ are satisfied.
This leaves the issue of how to interpret formulas of the form ∀ x φ( x ) and ∃ x φ( x ) . The domain of discourse forms the range for these quantifiers. The idea is that the sentence ∀ x φ( x ) is true under an interpretation exactly when every substitution instance of φ( x ), where x is replaced by some element of the domain, is satisfied. The formula ∃ x φ( x ) is satisfied if there is at least one element d of the domain such that φ( d ) is satisfied.
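Over a finite domain these two clauses reduce to a conjunction and a disjunction over all substitution instances, which the following sketch makes explicit; the domain and the predicate are arbitrary illustrative choices.

```python
# Quantifier semantics over a finite domain of discourse: ∀ becomes a
# conjunction over the domain and ∃ becomes a disjunction over it.
domain = {0, 1, 2, 3}
phi = lambda d: d % 2 == 0           # an illustrative unary predicate

print(all(phi(d) for d in domain))   # ∀x φ(x): every substitution instance holds -> False
print(any(phi(d) for d in domain))   # ∃x φ(x): some element satisfies φ -> True
```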
Strictly speaking, a substitution instance such as the formula φ( d ) mentioned above is not a formula in the original formal language of φ, because d is an element of the domain. There are two ways of handling this technical issue. The first is to pass to a larger language in which each element of the domain is named by a constant symbol. The second is to add to the interpretation a function that assigns each variable to an element of the domain. Then the T-schema can quantify over variations of the original interpretation in which this variable assignment function is changed, instead of quantifying over substitution instances.
Some authors also admit propositional variables in first-order logic, which must then also be interpreted. A propositional variable can stand on its own as an atomic formula. The interpretation of a propositional variable is one of the two truth values true and false. [ 3 ]
Because the first-order interpretations described here are defined in set theory , they do not associate each predicate symbol with a property [ b ] (or relation), but rather with the extension of that property (or relation). In other words, these first-order interpretations are extensional [ c ] not intensional .
An example of an interpretation $\mathcal{I}$ of the language L described above is as follows.
In the interpretation $\mathcal{I}$ of L:
As stated above, a first-order interpretation is usually required to specify a nonempty set as the domain of discourse. The reason for this requirement is to guarantee that equivalences such as $(\phi \lor \exists x\,\psi) \leftrightarrow \exists x\,(\phi \lor \psi),$ where $x$ is not a free variable of $\phi$, are logically valid. This equivalence holds in every interpretation with a nonempty domain, but does not always hold when empty domains are permitted. For example, the equivalence $[\forall y\,(y=y) \lor \exists x\,(x=x)] \equiv \exists x\,[\forall y\,(y=y) \lor x=x]$ fails in any structure with an empty domain. Thus the proof theory of first-order logic becomes more complicated when empty structures are permitted. However, the gain in allowing them is negligible, as both the intended interpretations and the interesting interpretations of the theories people study have non-empty domains. [ 4 ] [ 5 ]
Empty relations do not cause any problem for first-order interpretations, because there is no similar notion of passing a relation symbol across a logical connective, enlarging its scope in the process. Thus it is acceptable for relation symbols to be interpreted as being identically false. However, the interpretation of a function symbol must always assign a well-defined and total function to the symbol.
The equality relation is often treated specially in first order logic and other predicate logics. There are two general approaches.
The first approach is to treat equality as no different than any other binary relation. In this case, if an equality symbol is included in the signature, it is usually necessary to add various axioms about equality to axiom systems (for example, the substitution axiom saying that if a = b and R ( a ) holds then R ( b ) holds as well). This approach to equality is most useful when studying signatures that do not include the equality relation, such as the signature for set theory or the signature for second-order arithmetic in which there is only an equality relation for numbers, but not an equality relation for sets of numbers.
The second approach is to treat the equality relation symbol as a logical constant that must be interpreted by the real equality relation in any interpretation. An interpretation that interprets equality this way is known as a normal model , so this second approach is the same as only studying interpretations that happen to be normal models. The advantage of this approach is that the axioms related to equality are automatically satisfied by every normal model, and so they do not need to be explicitly included in first-order theories when equality is treated this way. This second approach is sometimes called first order logic with equality , but many authors adopt it for the general study of first-order logic without comment.
There are a few other reasons to restrict study of first-order logic to normal models. First, it is known that any first-order interpretation in which equality is interpreted by an equivalence relation and satisfies the substitution axioms for equality can be cut down to an elementarily equivalent interpretation on a subset of the original domain. Thus there is little additional generality in studying non-normal models. Second, if non-normal models are considered, then every consistent theory has an infinite model; this affects the statements of results such as the Löwenheim–Skolem theorem , which are usually stated under the assumption that only normal models are considered.
A generalization of first order logic considers languages with more than one sort of variables. The idea is that different sorts of variables represent different types of objects. Every sort of variable can be quantified; thus an interpretation for a many-sorted language has a separate domain for each of the sorts of variables to range over (there is an infinite collection of variables of each of the different sorts). Function and relation symbols, in addition to having arities, are specified so that each of their arguments must come from a certain sort.
One example of many-sorted logic is planar Euclidean geometry . There are two sorts: points and lines. There is an equality relation symbol for points, an equality relation symbol for lines, and a binary incidence relation E which takes one point variable and one line variable. The intended interpretation of this language has the point variables range over all points on the Euclidean plane , the line variables range over all lines on the plane, and the incidence relation E ( p , l ) holds if and only if point p is on line l .
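A finite toy model of this two-sorted language can make the separate domains concrete; the particular points, lines, and incidences below are invented for illustration.

```python
# A tiny two-sorted structure: one domain for points, one for lines,
# and an incidence relation E taking one point and one line.
points = {"P1", "P2", "P3"}
lines = {"L1", "L2"}
E = {("P1", "L1"), ("P2", "L1"), ("P2", "L2"), ("P3", "L2")}  # incidence pairs

# A two-sorted sentence: every point lies on some line.
print(all(any((p, l) in E for l in lines) for p in points))   # True in this model
```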
A formal language for higher-order predicate logic looks much the same as a formal language for first-order logic. The difference is that there are now many different types of variables. Some variables correspond to elements of the domain, as in first-order logic. Other variables correspond to objects of higher type: subsets of the domain, functions from the domain, functions that take a subset of the domain and return a function from the domain to subsets of the domain, etc. All of these types of variables can be quantified.
There are two kinds of interpretations commonly employed for higher-order logic. Full semantics require that, once the domain of discourse is fixed, the higher-order variables range over all possible elements of the correct type (all subsets of the domain, all functions from the domain to itself, etc.). Thus the specification of a full interpretation is the same as the specification of a first-order interpretation. Henkin semantics , which are essentially multi-sorted first-order semantics, require the interpretation to specify a separate domain for each type of higher-order variable to range over. Thus an interpretation in Henkin semantics includes a domain D , a collection of subsets of D , a collection of functions from D to D , etc. The relationship between these two semantics is an important topic in higher order logic.
The interpretations of propositional logic and predicate logic described above are not the only possible interpretations. In particular, there are other types of interpretations that are used in the study of non-classical logic (such as intuitionistic logic ), and in the study of modal logic.
Interpretations used to study non-classical logic include topological models , Boolean-valued models , and Kripke models . Modal logic is also studied using Kripke models.
Many formal languages are associated with a particular interpretation that is used to motivate them. For example, the first-order signature for set theory includes only one binary relation, ∈, which is intended to represent set membership, and the domain of discourse in a first-order theory of the natural numbers is intended to be the set of natural numbers.
The intended interpretation is called the standard model (a term introduced by Abraham Robinson in 1960). [ 6 ] In the context of Peano arithmetic , it consists of the natural numbers with their ordinary arithmetical operations. All models that are isomorphic to the one just given are also called standard; these models all satisfy the Peano axioms . There are also non-standard models of the (first-order version of the) Peano axioms , which contain elements not correlated with any natural number.
While the intended interpretation can have no explicit indication in the strictly formal syntactical rules , it naturally affects the choice of the formation and transformation rules of the syntactical system. For example, primitive signs must permit expression of the concepts to be modeled; sentential formulas are chosen so that their counterparts in the intended interpretation are meaningful declarative sentences ; primitive sentences need to come out as true sentences in the interpretation; rules of inference must be such that, if the sentence $\mathcal{I}_j$ is directly derivable from a sentence $\mathcal{I}_i$, then $\mathcal{I}_i \to \mathcal{I}_j$ turns out to be a true sentence, with $\to$ meaning implication , as usual. These requirements ensure that all provable sentences also come out to be true. [ 7 ]
Most formal systems have many more models than they were intended to have (the existence of non-standard models is an example). When we speak about 'models' in the empirical sciences , and we want reality to be a model of our science, we mean an intended model . A model in the empirical sciences is an intended factually-true descriptive interpretation (or, in other contexts, a non-intended arbitrary interpretation used to clarify such an intended factually-true descriptive interpretation). All models are interpretations that have the same domain of discourse as the intended one, but other assignments for non-logical constants . [ 8 ]
Given a simple formal system (we shall call this one $\mathcal{FS'}$) whose alphabet α consists only of three symbols $\{\blacksquare, \bigstar, \blacklozenge\}$ and whose formation rule for formulas is:
The single axiom schema of $\mathcal{FS'}$ is:
A formal proof can be constructed as follows:
In this example the theorem produced, $\blacksquare\,\bigstar\,\blacksquare\,\blacksquare\,\blacksquare\,\blacklozenge\,\blacksquare\,\blacksquare\,\blacksquare\,\blacksquare$, can be interpreted as meaning "One plus three equals four." A different interpretation would be to read it backwards as "Four minus three equals one." [ 9 ]
There are other uses of the term "interpretation" that are commonly used, which do not refer to the assignment of meanings to formal languages.
In model theory , a structure A is said to interpret a structure B if there is a definable subset D of A , and definable relations and functions on D , such that B is isomorphic to the structure with domain D and these functions and relations. In some settings, it is not the domain D that is used, but rather D modulo an equivalence relation definable in A . For additional information, see Interpretation (model theory) .
A theory T is said to interpret another theory S if there is a finite extension by definitions T ′ of T such that S is contained in T ′. | https://en.wikipedia.org/wiki/Interpretation_(logic) |
In model theory , interpretation of a structure M in another structure N (typically of a different signature ) is a technical notion that approximates the idea of representing M inside N . For example, every reduct or definitional expansion of a structure N has an interpretation in N .
Many model-theoretic properties are preserved under interpretability. For example, if the theory of N is stable and M is interpretable in N , then the theory of M is also stable.
Note that in other areas of mathematical logic , the term "interpretation" may refer to a structure , [ 1 ] [ 2 ] rather than being used in the sense defined here. These two notions of "interpretation" are related but nevertheless distinct. Similarly, " interpretability " may refer to a related but distinct notion about representation and provability of sentences between theories.
An interpretation of a structure M in a structure N with parameters (or without parameters , respectively)
is a pair $(n, f)$ where $n$ is a natural number and $f$ is a surjective map from a subset of $N^n$ onto $M$ such that the $f$-preimage (more precisely the $f^k$-preimage) of every set $X \subseteq M^k$ definable in $M$ by a first-order formula without parameters
is definable (in $N$) by a first-order formula with parameters (or without parameters, respectively).
Since the value of $n$ for an interpretation $(n, f)$ is often clear from context, the map $f$ itself is also called an interpretation.
To verify that the preimage of every definable (without parameters) set in M is definable in N (with or without parameters), it is sufficient to check the preimages of the following definable sets:
In model theory the term definable often refers to definability with parameters; if this convention is used, definability without parameters is expressed by the term 0-definable . Similarly, an interpretation with parameters may be referred to as simply an interpretation, and an interpretation without parameters as a 0-interpretation .
If L, M and N are three structures, L is interpreted in M, and M is interpreted in N, then one can naturally construct a composite interpretation of L in N. If two structures M and N are interpreted in each other, then by combining the interpretations in two possible ways, one obtains an interpretation of each of the two structures in itself.
This observation permits one to define an equivalence relation among structures, reminiscent of the homotopy equivalence among topological spaces .
Two structures M and N are bi-interpretable if there exists an interpretation of M in N and an interpretation of N in M such that the composite interpretations of M in itself and of N in itself are definable in M and in N , respectively (the composite interpretations being viewed as operations on M and on N ).
The partial map f from Z × Z onto Q that maps ( x , y ) to x / y if y ≠ 0 provides an interpretation of the field Q of rational numbers in the ring Z of integers (to be precise, the interpretation is (2, f )).
In fact, this particular interpretation is often used to define the rational numbers.
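The map is easy to exhibit concretely; the following sketch uses Python's Fraction type only to name the target rationals.

```python
# The partial surjection f from Z x Z onto Q described above:
# f(x, y) = x/y whenever y != 0, and undefined on pairs (x, 0).
from fractions import Fraction

def f(x: int, y: int) -> Fraction:
    """The interpretation map of Q in Z; partial, defined only for y != 0."""
    if y == 0:
        raise ValueError("f is undefined on pairs (x, 0)")
    return Fraction(x, y)

print(f(3, 6))               # 1/2
print(f(1, 2) == f(2, 4))    # True: many pairs map to the same rational
```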
To see that it is an interpretation (without parameters), one needs to check the following preimages of definable sets in Q : | https://en.wikipedia.org/wiki/Interpretation_(model_theory) |
In set theory , the intersection of two sets $A$ and $B$, denoted by $A \cap B$, [ 1 ] is the set containing all elements of $A$ that also belong to $B$ or, equivalently, all elements of $B$ that also belong to $A$. [ 2 ]
Intersection is written using the symbol "$\cap$" between the terms; that is, in infix notation . For example: $\{1,2,3\} \cap \{2,3,4\} = \{2,3\}$, $\{1,2,3\} \cap \{4,5,6\} = \varnothing$, $\mathbb{Z} \cap \mathbb{N} = \mathbb{N}$, $\{x \in \mathbb{R} : x^2 = 1\} \cap \mathbb{N} = \{1\}$. The intersection of more than two sets (generalized intersection) can be written as $\bigcap_{i=1}^n A_i$, which is similar to capital-sigma notation .
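The finite examples above can be checked directly with Python's built-in sets; finite truncations stand in for the infinite sets $\mathbb{Z}$ and $\mathbb{N}$.

```python
# The worked examples above, with finite stand-ins for Z and N.
print({1, 2, 3} & {2, 3, 4})             # {2, 3}
print({1, 2, 3} & {4, 5, 6})             # set(), the empty set

N = set(range(0, 100))                   # finite truncations of N and Z
Z = set(range(-100, 100))
print(Z & N == N)                        # True on these truncations
print({x for x in Z if x * x == 1} & N)  # {1}
```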
For an explanation of the symbols used in this article, refer to the table of mathematical symbols .
The intersection of two sets $A$ and $B$, denoted by $A \cap B$, [ 3 ] is the set of all objects that are members of both of the sets $A$ and $B$. In symbols: $A \cap B = \{x : x \in A \text{ and } x \in B\}.$
That is, $x$ is an element of the intersection $A \cap B$ if and only if $x$ is both an element of $A$ and an element of $B$. [ 3 ]
For example:
We say that $A$ intersects (meets) $B$ if there exists some $x$ that is an element of both $A$ and $B$, in which case we also say that $A$ intersects (meets) $B$ at $x$. Equivalently, $A$ intersects $B$ if their intersection $A \cap B$ is an inhabited set , meaning that there exists some $x$ such that $x \in A \cap B$.
We say that $A$ and $B$ are disjoint if $A$ does not intersect $B$. In plain language, they have no elements in common. $A$ and $B$ are disjoint if their intersection is empty , denoted $A \cap B = \varnothing$.
For example, the sets $\{1,2\}$ and $\{3,4\}$ are disjoint, while the set of even numbers intersects the set of multiples of 3 at the multiples of 6.
Binary intersection is an associative operation; that is, for any sets $A$, $B$, and $C$, one has
$A \cap (B \cap C) = (A \cap B) \cap C.$
Thus the parentheses may be omitted without ambiguity: either of the above can be written as $A \cap B \cap C$. Intersection is also commutative . That is, for any $A$ and $B$, one has $A \cap B = B \cap A$. The intersection of any set with the empty set results in the empty set; that is, for any set $A$, $A \cap \varnothing = \varnothing$. Also, the intersection operation is idempotent ; that is, any set $A$ satisfies $A \cap A = A$. All these properties follow from analogous facts about logical conjunction .
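These laws can be spot-checked on small finite sets, as in the sketch below; the particular sets are arbitrary, and a check on examples is of course not a proof.

```python
# Spot-checking associativity, commutativity, the empty-set law, and
# idempotence of intersection on arbitrary small sets.
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

print(A & (B & C) == (A & B) & C)   # associativity
print(A & B == B & A)               # commutativity
print(A & set() == set())           # intersection with the empty set
print(A & A == A)                   # idempotence
```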
Intersection distributes over union, and union distributes over intersection. That is, for any sets $A$, $B$, and $C$, one has
$A \cap (B \cup C) = (A \cap B) \cup (A \cap C)$
$A \cup (B \cap C) = (A \cup B) \cap (A \cup C)$
Inside a universe $U$, one may define the complement $A^c$ of $A$ to be the set of all elements of $U$ not in $A$. Furthermore, the intersection of $A$ and $B$ may be written as the complement of the union of their complements, derived easily from De Morgan's laws : $A \cap B = (A^c \cup B^c)^c$
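The distributive laws and the De Morgan identity can be checked in the same way inside an explicit finite universe $U$.

```python
# Spot-checking distributivity and the complement identity within a
# fixed finite universe U; again examples, not a proof.
U = set(range(10))
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}

def comp(S):
    """Complement of S within the universe U."""
    return U - S

print(A & (B | C) == (A & B) | (A & C))   # intersection distributes over union
print(A | (B & C) == (A | B) & (A | C))   # union distributes over intersection
print(A & B == comp(comp(A) | comp(B)))   # De Morgan form of intersection
```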
The most general notion is the intersection of an arbitrary nonempty collection of sets.
If $M$ is a nonempty set whose elements are themselves sets, then $x$ is an element of the intersection of $M$ if and only if for every element $A$ of $M$, $x$ is an element of $A$. In symbols: $\left(x \in \bigcap_{A \in M} A\right) \Leftrightarrow \left(\forall A \in M,\ x \in A\right).$
The notation for this last concept can vary considerably. Set theorists will sometimes write "$\bigcap M$", while others will instead write "$\bigcap_{A \in M} A$".
The latter notation can be generalized to "$\bigcap_{i \in I} A_i$", which refers to the intersection of the collection $\{A_i : i \in I\}$. Here $I$ is a nonempty set, and $A_i$ is a set for every $i \in I$.
In the case that the index set $I$ is the set of natural numbers , notation analogous to that of an infinite product may be seen: $\bigcap_{i=1}^{\infty} A_i.$
When formatting is difficult, this can also be written "$A_1 \cap A_2 \cap A_3 \cap \cdots$". This last example, an intersection of countably many sets, is actually very common; for an example, see the article on σ-algebras .
In the previous section, we excluded the case where $M$ was the empty set ($\varnothing$). The reason is as follows: The intersection of the collection $M$ is defined as the set (see set-builder notation ) $\bigcap_{A \in M} A = \{x : \text{for all } A \in M,\ x \in A\}.$ If $M$ is empty, there are no sets $A$ in $M$, so the question becomes "which $x$'s satisfy the stated condition?" The answer seems to be every possible $x$. When $M$ is empty, the condition given above is an example of a vacuous truth . So the intersection of the empty family should be the universal set (the identity element for the operation of intersection), [ 4 ] but in standard ( ZF ) set theory, the universal set does not exist.
However, when restricted to the context of subsets of a given fixed set $X$, the notion of the intersection of an empty collection of subsets of $X$ is well-defined. In that case, if $M$ is empty, its intersection is $\bigcap M = \bigcap \varnothing = \{x \in X : x \in A \text{ for all } A \in \varnothing\}$. Since all $x \in X$ vacuously satisfy the required condition, the intersection of the empty collection of subsets of $X$ is all of $X$. In formulas, $\bigcap \varnothing = X$. This matches the intuition that as collections of subsets become smaller, their respective intersections become larger; in the extreme case, the empty collection has an intersection equal to the whole underlying set.
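Relative to a fixed finite universe, both the indexed intersection and the empty-family convention can be exhibited directly; the universe and the family below are illustrative.

```python
# Folding binary intersection across a family of subsets of a fixed set X;
# starting the fold at X realizes the convention that the empty family
# intersects to all of X.
from functools import reduce

X = set(range(20))                                   # the fixed ambient set
family = [{n for n in X if n % k == 0} for k in (2, 3)]

print(reduce(set.intersection, family, X))           # multiples of 6 within X
print(reduce(set.intersection, [], X) == X)          # empty family: all of X
```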
Also, in type theory $x$ is of a prescribed type $\tau$, so the intersection is understood to be of type $\mathrm{set}\ \tau$ (the type of sets whose elements are in $\tau$), and we can define $\bigcap_{A \in \emptyset} A$ to be the universal set of $\mathrm{set}\ \tau$ (the set whose elements are exactly all terms of type $\tau$). | https://en.wikipedia.org/wiki/Intersection_(set_theory) |
Intersection assistant is an advanced driver-assistance system first introduced in 2009. [ 1 ] [ 2 ] [ 3 ]
City junctions are a major accident blackspot, and these collisions can mostly be attributed to driver distraction or misjudgement. Whereas human drivers often react too slowly, assistance systems are not subject to that brief moment of shock.
The system monitors cross traffic at an intersection or road junction. If this anticipatory system detects such a hazardous situation, it prompts the driver to start emergency braking by issuing visual and acoustic warnings, and can engage the brakes automatically.
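A highly simplified sketch of such warning logic is given below, assuming idealized constant-speed estimates of each road user; every threshold, name, and signal here is hypothetical and not drawn from any production system.

```python
# Hypothetical intersection-assistant logic: estimate when the ego vehicle
# and a crossing vehicle would reach the conflict point, and escalate from
# warning to automatic braking as the arrival times converge.
def time_to_conflict(distance_m: float, speed_mps: float) -> float:
    """Seconds until a road user reaches the conflict point at constant speed."""
    return float("inf") if speed_mps <= 0 else distance_m / speed_mps

def assess(ego_dist, ego_speed, cross_dist, cross_speed, threshold_s=1.5):
    """Return the intervention level for one pair of approaching vehicles."""
    gap = abs(time_to_conflict(ego_dist, ego_speed)
              - time_to_conflict(cross_dist, cross_speed))
    if gap < threshold_s / 2:
        return "automatic emergency braking"
    if gap < threshold_s:
        return "visual and acoustic warning"
    return "no action"

print(assess(30, 10, 28, 9))   # near-simultaneous arrival -> intervention
```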
| https://en.wikipedia.org/wiki/Intersection_assistant |
In topology , a branch of mathematics , intersection homology is an analogue of singular homology especially well-suited for the study of singular spaces , discovered by Mark Goresky and Robert MacPherson in the fall of 1974 and developed by them over the next few years.
Intersection cohomology was used to prove the Kazhdan–Lusztig conjectures and the Riemann–Hilbert correspondence . It is closely related to L 2 cohomology .
The homology groups of a compact , oriented , connected , n -dimensional manifold X have a fundamental property called Poincaré duality : there is a perfect pairing
$H_i(X, \mathbb{Q}) \times H_{n-i}(X, \mathbb{Q}) \to \mathbb{Q}.$
Classically—going back, for instance, to Henri Poincaré —this duality was understood in terms of intersection theory . An element of
$H_j(X)$
is represented by a $j$-dimensional cycle. If an $i$-dimensional and an $(n-i)$-dimensional cycle are in general position , then their intersection is a finite collection of points. Using the orientation of $X$ one may assign to each of these points a sign; in other words, intersection yields a $0$-dimensional cycle. One may prove that the homology class of this cycle depends only on the homology classes of the original $i$- and $(n-i)$-dimensional cycles; one may furthermore prove that this pairing is perfect .
When $X$ has singularities —that is, when the space has places that do not look like $\mathbb{R}^n$ —these ideas break down. For example, it is no longer possible to make sense of the notion of "general position" for cycles. Goresky and MacPherson introduced a class of "allowable" cycles for which general position does make sense. They introduced an equivalence relation for allowable cycles (where only "allowable boundaries" are equivalent to zero), and called the group
$IH_i(X)$
of $i$-dimensional allowable cycles modulo this equivalence relation "intersection homology". They furthermore showed that the intersection of an $i$- and an $(n-i)$-dimensional allowable cycle gives an (ordinary) zero-cycle whose homology class is well-defined.
Intersection homology was originally defined on suitable spaces with a stratification , though the groups often turn out to be independent of the choice of stratification. There are many different definitions of stratified spaces. A convenient one for intersection homology is an n -dimensional topological pseudomanifold . This is a ( paracompact , Hausdorff ) space X that has a filtration
$X = X_n \supseteq X_{n-1} \supseteq \cdots \supseteq X_0 \supseteq X_{-1} = \emptyset$
of X by closed subspaces such that:
If $X$ is a topological pseudomanifold, the $i$-dimensional stratum of $X$ is the space $X_i \setminus X_{i-1}$.
Examples:
Intersection homology groups $I^{\mathbf{p}}H_i(X)$ depend on a choice of perversity $\mathbf{p}$, which measures how far cycles are allowed to deviate from transversality. (The origin of the name "perversity" was explained by Goresky (2010) .) A perversity $\mathbf{p}$ is a function
from integers $\geq 2$ to the integers such that $\mathbf{p}(2) = 0$ and $\mathbf{p}(k) \leq \mathbf{p}(k+1) \leq \mathbf{p}(k) + 1$.
The second condition is used to show invariance of intersection homology groups under change of stratification.
The complementary perversity $\mathbf{q}$ of $\mathbf{p}$ is the one with $\mathbf{q}(k) = k - 2 - \mathbf{p}(k).$
Intersection homology groups of complementary dimension and complementary perversity are dually paired.
Fix a topological pseudomanifold X of dimension n with some stratification, and a perversity p .
A map σ from the standard $i$-simplex $\Delta^i$ to $X$ (a singular simplex) is called allowable if, for each $k \geq 2$, the preimage $\sigma^{-1}(X_{n-k})$
is contained in the $(i - k + p(k))$-skeleton of $\Delta^i$.
The chain complex $I^p(X)$ is a subcomplex of the complex of singular chains on $X$ that consists of all singular chains such that both the chain and its boundary are linear combinations of allowable singular simplexes. The singular intersection homology groups (with perversity $p$), $I^pH_i(X)$,
are the homology groups of this complex.
If X has a triangulation compatible with the stratification, then simplicial intersection homology groups can be defined in a similar way, and are naturally isomorphic to the singular intersection homology groups.
The intersection homology groups are independent of the choice of stratification of X .
If X is a topological manifold, then the intersection homology groups (for any perversity) are the same as the usual homology groups.
A resolution of singularities $f : X \to Y$
of a complex variety $Y$ is called a small resolution if for every $r > 0$, the space of points of $Y$ where the fiber has dimension $r$ is of codimension greater than $2r$. Roughly speaking, this means that most fibers are small. In this case the morphism induces an isomorphism from the (intersection) homology of $X$ to the intersection homology of $Y$ (with the middle perversity).
There is a variety with two different small resolutions that have different ring structures on their cohomology, showing that there is in general no natural ring structure on intersection (co)homology.
Deligne 's formula for intersection cohomology states that
where $IC_p(X)$ is the intersection complex, a certain complex of constructible sheaves on $X$ (considered as an element of the derived category , so the cohomology on the right means the hypercohomology of the complex). The complex $IC_p(X)$ is given by starting with the constant sheaf on the open set $X \setminus X_{n-2}$ and repeatedly extending it to larger open sets $X \setminus X_{n-k}$ and then truncating it in the derived category; more precisely it is given by Deligne's formula
$IC_p(X) = \tau_{\leq p(n)} \mathbf{R} i_{n*} \cdots \tau_{\leq p(3)} \mathbf{R} i_{3*} \tau_{\leq p(2)} \mathbf{R} i_{2*} \mathbb{C}_{X \setminus X_{n-2}}$
where $\tau_{\leq p}$ is a truncation functor in the derived category, $i_k$ is the inclusion of $X \setminus X_{n-k}$ into $X \setminus X_{n-k-1}$, and $\mathbb{C}_{X \setminus X_{n-2}}$ is the constant sheaf on $X \setminus X_{n-2}$. [ 1 ]
By replacing the constant sheaf on $X \setminus X_{n-2}$ with a local system, one can use Deligne's formula to define intersection cohomology with coefficients in a local system.
Given a smooth elliptic curve $X \subset \mathbb{CP}^2$ defined by a cubic homogeneous polynomial $f$, [ 2 ] such as $x^3 + y^3 + z^3$, the affine cone $\mathbb{V}(f) \subset \mathbb{C}^3$ has an isolated singularity at the origin, since $f(0) = 0$ and all partial derivatives $\partial_i f(0)$ vanish. This is because $f$ is homogeneous of degree $3$, and its derivatives are homogeneous of degree 2. Setting $U = \mathbb{V}(f) - \{0\}$ and letting $i : U \hookrightarrow \mathbb{V}(f)$ be the inclusion map, the intersection complex $IC_{\mathbb{V}(f)}$ is given as
$\tau_{\leq 1} \mathbf{R} i_* \mathbb{Q}_U.$
This can be computed explicitly by looking at the stalks of the cohomology. At $p \in \mathbb{V}(f)$ with $p \neq 0$, the derived pushforward is the identity map on a smooth point, hence the only possible cohomology is concentrated in degree $0$. For $p = 0$ the cohomology is more interesting, since
$\mathbf{R}^k i_* \mathbb{Q}_U|_{p=0} = \operatorname{colim}_{V \subset U} H^k(V; \mathbb{Q})$
for those $V$ where the closure of $i(V)$ contains the origin $p = 0$. Since any such $V$ can be refined by considering the intersection of an open disk in $\mathbb{C}^3$ with $U$, we can just compute the cohomology $H^k(U; \mathbb{Q})$.
This can be done by observing that $U$ is a $\mathbb{C}^*$-bundle over the elliptic curve $X$ (the hyperplane bundle ), and the Wang sequence gives the cohomology groups
$H^0(U;\mathbb{Q}) \cong H^0(X;\mathbb{Q}) = \mathbb{Q}$
$H^1(U;\mathbb{Q}) \cong H^1(X;\mathbb{Q}) = \mathbb{Q}^{\oplus 2}$
$H^2(U;\mathbb{Q}) \cong H^1(X;\mathbb{Q}) = \mathbb{Q}^{\oplus 2}$
$H^3(U;\mathbb{Q}) \cong H^2(X;\mathbb{Q}) = \mathbb{Q}$
hence the cohomology sheaves at the stalk $p = 0$ are
$\mathcal{H}^2(\mathbf{R} i_* \mathbb{Q}_U|_{p=0}) = \mathbb{Q}_{p=0}$
$\mathcal{H}^1(\mathbf{R} i_* \mathbb{Q}_U|_{p=0}) = \mathbb{Q}_{p=0}^{\oplus 2}$
$\mathcal{H}^0(\mathbf{R} i_* \mathbb{Q}_U|_{p=0}) = \mathbb{Q}_{p=0}$
Truncating this gives the nontrivial cohomology sheaves $\mathcal{H}^0, \mathcal{H}^1$, hence the intersection complex $IC_{\mathbb{V}(f)}$ has cohomology sheaves
$\mathcal{H}^0(IC_{\mathbb{V}(f)}) = \mathbb{Q}_{\mathbb{V}(f)}$
$\mathcal{H}^1(IC_{\mathbb{V}(f)}) = \mathbb{Q}_{p=0}^{\oplus 2}$
$\mathcal{H}^i(IC_{\mathbb{V}(f)}) = 0 \text{ for } i \neq 0, 1$
The complex IC p ( X ) has the following properties
As usual, q is the complementary perversity to p . Moreover, the complex is uniquely characterized by these conditions, up to isomorphism in the derived category. The conditions do not depend on the choice of stratification, so this shows that intersection cohomology does not depend on the choice of stratification either.
Verdier duality takes IC p to IC q shifted by n = dim( X ) in the derived category. | https://en.wikipedia.org/wiki/Intersection_homology |
Intersex people are people born with any of several sex characteristics , including chromosome patterns, gonads , or genitals that, according to the Office of the United Nations High Commissioner for Human Rights , "do not fit typical binary notions of male or female bodies". [ 1 ] [ 2 ]
Sex assignment at birth usually aligns with a child's external genitalia . The number of births with ambiguous genitals is in the range of 1:4,500–1:2,000 (0.02%–0.05%). [ 3 ] Other conditions involve the development of atypical chromosomes, gonads, or hormones. [ 4 ] [ 2 ] The portion of the population that is intersex has been reported differently depending on which definition of intersex is used and which conditions are included. Estimates range from 0.018% (one in 5,500 births) to 1.7%. [ 4 ] [ 5 ] [ 6 ] The difference centers on whether conditions in which chromosomal sex matches a phenotypic sex which is clearly identifiable as male or female, such as late onset congenital adrenal hyperplasia (1.5 percentage points) and Klinefelter syndrome , should be counted as intersex. [ 4 ] [ 7 ] Whether intersex or not, people may be assigned and raised as a girl or boy but then identify with another gender later in life, while most continue to identify with their assigned sex. [ 8 ] [ 9 ] [ 10 ]
Terms used to describe intersex people are contested, and change over time and place. Intersex people were previously referred to as " hermaphrodites " or "congenital eunuchs ". [ 11 ] [ 12 ] In the 19th and 20th centuries, some medical experts devised new nomenclature in an attempt to classify the characteristics that they had observed, the first attempt to create a taxonomic classification system of intersex conditions. Intersex people were categorized as either having " true hermaphroditism ", "female pseudohermaphroditism ", or "male pseudohermaphroditism". [ 13 ] These terms are no longer used, and terms including the word "hermaphrodite" are considered to be misleading, stigmatizing, and scientifically specious in reference to humans. [ 14 ] In biology, the term "hermaphrodite" is used to describe an organism that can produce both male and female gametes . [ 15 ] [ 16 ] Some people with intersex traits use the term "intersex", and some prefer other language. [ 17 ] [ 18 ] In clinical settings, the term " disorders of sex development " (DSD) has been used since 2006, [ 19 ] a shift in language considered controversial since its introduction. [ 20 ] [ 21 ] [ 22 ]
Intersex people face stigmatization and discrimination from birth, or following the discovery of intersex traits at stages of development such as puberty . [ 23 ] Intersex people may face infanticide , abandonment, and stigmatization from their families. [ 24 ] [ 25 ] [ 26 ] Globally, some intersex infants and children, such as those with ambiguous outer genitalia, are surgically or hormonally altered to create more socially acceptable sex characteristics. This is considered controversial, with no firm evidence of favorable outcomes. [ 27 ] Such treatments may involve sterilization . Adults, including elite female athletes, have also been subjects of such treatment. [ 28 ] [ 29 ] Increasingly, these issues are considered human rights abuses , with statements from international [ 30 ] [ 31 ] and national human rights and ethics institutions. [ 32 ] [ 33 ] Intersex organizations have also issued statements about human rights violations, including the 2013 Malta declaration of the third International Intersex Forum . [ 34 ] In 2011, Christiane Völling became the first intersex person known to have successfully sued for damages in a case brought for non-consensual surgical intervention. [ 35 ] In April 2015, Malta became the first country to outlaw non-consensual medical interventions to modify sex anatomy, including that of intersex people. [ 36 ] [ 37 ]
There is no clear consensus definition of intersex and no clear delineation of which specific conditions qualify an individual as intersex. [ 38 ] The World Health Organization's International Classification of Diseases (ICD), the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM), and many medical journals classify intersex traits or conditions among disorders of sex development (DSD). [ 39 ]
A common adjective for people with disorders of sex development (DSD) is "intersex".
In 1917, Richard Goldschmidt created the term "intersexuality" to refer to a variety of physical sex ambiguities. [ 13 ] However, according to The SAGE Encyclopedia of LGBTQ Studies , it was not until Anne Fausto-Sterling published her article "The Five Sexes: Why Male and Female Are Not Enough" in 1993 that the term reached popularity. [ 40 ]
According to the UN Office of the High Commissioner for Human Rights:
Intersex people are born with sex characteristics (including genitals, gonads and chromosome patterns) that do not fit typical binary notions of male or female bodies. Intersex is an umbrella term used to describe a wide range of natural bodily variations. [ 2 ]
Some intersex organizations reference "intersex people" and "intersex variations or traits" [ 41 ] while others use more medicalized language such as "people with intersex conditions", [ 42 ] or people "with intersex conditions or DSDs (differences of sex development)" and "children born with variations of sex anatomy". [ 43 ] In May 2016, interACT published a statement recognizing "increasing general understanding and acceptance of the term 'intersex'". [ 44 ]
Australian sociological research on 272 "people born with atypical sex characteristics", published in 2016, found that 60% of respondents used the term "intersex" to self-describe their sex characteristics, including people identifying themselves as intersex, describing themselves as having an intersex variation or, in smaller numbers, having an intersex condition. Respondents also commonly used diagnostic labels and referred to their sex chromosomes, with word choices depending on audience. [ 9 ] [ 45 ]
Research on 202 respondents by the Lurie Children's Hospital , Chicago, and the AIS-DSD Support Group (now known as InterConnect Support Group) [ 46 ] published in 2017 found that 80% of Support Group respondents "strongly liked, liked or felt neutral about intersex" as a term, while caregivers were less supportive. [ 47 ] The hospital reported that the use of the term "disorders of sex development" may negatively affect care . [ 48 ]
Another study by a group of children's hospitals in the United States found that 53% of 133 parent and adolescent participants recruited at five clinics did not like the term "intersex". [ 49 ] Participants who were members of support groups were more likely to dislike the term. [ 49 ] A "dsd-LIFE" study in 2020 found that around 43% of 179 participants thought the term "intersex" was bad, 20% felt neutral about the term, while 37% thought the term was good. [ 50 ]
Historically, the term "hermaphrodite" was used in law to refer to people whose sex was in doubt. The 12th century Decretum Gratiani states that "Whether an hermaphrodite may witness a testament, depends on which sex prevails" ("Hermafroditus an ad testamentum adhiberi possit, qualitas sexus incalescentis ostendit"). [ 51 ] [ 52 ] Similarly, the 17th century English jurist and judge Edward Coke (Lord Coke), wrote in his Institutes of the Lawes of England on laws of succession stating, "Every heire is either a male, a female, or an hermaphrodite, that is both male and female. And an hermaphrodite (which is also called Androgynus ) shall be heire, either as male or female, according to that type of sexe which doth prevaile." [ 53 ] [ 54 ]
During the Victorian era , medical authors attempted to ascertain whether or not humans could be hermaphrodites, adopting a precise biological definition for the term, [ 55 ] and making distinctions between "male pseudohermaphrodite", "female pseudohermaphrodite" and especially " true hermaphrodite ". [ 56 ] These terms, which reflected the histology (microscopic appearance) of the gonads , are rarely used in the 2020s. [ 57 ] [ 58 ] [ 59 ] Until the mid-20th century, "hermaphrodite" was used synonymously with "intersex". [ 60 ] Medical terminology shifted in the early 21st century, not only due to concerns about language, but also due to a shift to understandings based on genetics . The term "hermaphrodite" is also controversial as it implies the existence of someone fully male and fully female . [ 61 ] As such, the term "hermaphrodite" is often seen as degrading and offensive, although many intersex activists use it as a direct form of self-empowerment and critique, as in the ISNA 's first newsletter Hermaphrodites with Attitude . [ 61 ]
The Intersex Society of North America has stated that hermaphrodites should not be confused with intersex people and that using "hermaphrodite" to refer to intersex individuals is considered to be stigmatizing and misleading. [ 62 ]
Estimates of the number of people who are intersex vary, depending on which conditions are counted as intersex. [ 4 ] The now-defunct Intersex Society of North America said:
If you ask experts at medical centers how often a child is born so noticeably atypical in terms of genitalia that a specialist in sex differentiation is called in, the number comes out to about 1 in 1,500 to 1 in 2,000 births [0.07–0.05%] . But a lot more people than that are born with subtler forms of sex anatomy variations, some of which won't show up until later in life. [ 63 ]
Anne Fausto-Sterling et al. said in 2000 that "[a]dding the estimates of all known causes of nondimorphic sexual development suggests that approximately 1.7% of all live births do not conform to a Platonic ideal of absolute sex chromosome, gonadal, genital, and hormonal dimorphism"; [ 6 ] [ 5 ] these publications have been widely quoted by intersex activists. [ 64 ] [ 65 ] [ 66 ] Of the 1.7%, 1.5 percentage points (88% of those considered "nondimorphic sexual development" in this figure) consist of individuals with late onset congenital adrenal hyperplasia (LOCAH), which may be asymptomatic but can present after puberty and cause infertility. [ 67 ]
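As a quick sanity check of how these headline figures relate, the arithmetic can be reproduced directly (a minimal sketch; the input values are simply those quoted in this paragraph and the next):

```python
# Arithmetic behind the prevalence estimates quoted in this section.
locah_points = 1.5    # LOCAH, in percentage points of live births
total_estimate = 1.7  # Fausto-Sterling et al.'s upper-bound estimate, in percent

# Share of the 1.7% figure accounted for by LOCAH (~88%).
print(f"LOCAH share: {locah_points / total_estimate:.0%}")

# Sax's estimate of one in 5,500 births, expressed as a percentage (~0.018%).
print(f"Sax's estimate: {1 / 5500:.3%}")
```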
Leonard Sax , in response to Fausto-Sterling, estimated that the prevalence of intersex was about 0.018% of the world's population, [ 4 ] discounting several conditions included in Fausto-Sterling's estimate: LOCAH, Klinefelter syndrome (47,XXY), Turner syndrome (45,X), the chromosomal variants 47,XYY and 47,XXX, and vaginal agenesis. Sax reasons that in these conditions chromosomal sex is consistent with phenotypic sex, and the phenotype is classifiable as either male or female. [ 4 ]
In a 2003 letter to the editor, political scientist Carrie Hull analyzed the data used by Fausto-Sterling and said that, due to many errors, the estimated intersex rate should instead have been 0.37%. [ 68 ] In a response letter published simultaneously, Fausto-Sterling welcomed the additional analysis and said "I am not invested in a particular final estimate, only that there BE an estimate." [ 68 ] A 2018 review reported that the number of births with ambiguous genitals is in the range of 0.02% to 0.05%. [ 3 ]
Intersex Human Rights Australia says it maintains 1.7% as its preferred upper limit "despite its flaws", stating both that the estimate "encapsulates the entire population of people who are stigmatized—or risk stigmatization—due to innate sex characteristics", and that Sax's definitions exclude individuals who experience such stigma and who have helped to establish the intersex movement. [ 69 ] InterACT , a major organization for intersex rights in the US, states that 1.7% of people have some variation of sexual development , 0.5% have atypical genitalia, and 0.05% have mixed/ambiguous genitalia. [ 70 ] [ 71 ] [ 72 ] [ 73 ] [ a ] A study relying on a nationally representative survey conducted in Mexico between 2021 and 2022 obtained similar estimates: around 1.6% of individuals aged 15 to 64 reported being born with sex variations. [ 23 ]
Prevalences of traits that some medical experts consider to be intersex vary by condition; where sex chromosome anomalies are involved, the karyotype is often summarized by the total number of chromosomes followed by the sex chromosomes present in each cell.
From early history, societies have been aware of intersex people. Some of the earliest evidence is found in mythology: writing in the first century BC, the Greek historian Diodorus Siculus described the mythological Hermaphroditus , who was "born with a physical body which is a combination of that of a man and that of a woman" and reputedly possessed supernatural properties. [ 100 ] He also recounted the lives of Diophantus of Abae and Callon of Epidaurus . [ 101 ] Ardhanarishvara , an androgynous composite form of the male deity Shiva and the female deity Parvati , originated in Kushan culture as far back as the first century AD. [ 102 ] A statue depicting Ardhanarishvara is included in India's Meenakshi Temple ; this statue clearly shows both male and female bodily elements. [ 103 ]
Hippocrates ( c. 460 – c. 370 BC, Greek physician) and Galen (129 – c. 200/216 AD, Roman physician, surgeon, and philosopher) both viewed sex as a spectrum between men and women, with "many shades in between, including hermaphrodites, a perfect balance of male and female". [ 104 ] Pliny the Elder (AD 23/24–79), a Roman naturalist , described "those who are born of both sexes, whom we call hermaphrodites, at one time androgyni " (from the Greek andr- , "man", and gyn- , "woman"). [ 105 ] Augustine (354 – 430 AD), the influential Catholic theologian, wrote in The Literal Meaning of Genesis that humans were created in two sexes, despite "as happens in some births, in the case of what we call androgynes". [ 104 ]
In medieval and early modern European societies, Roman law , post-classical canon law , and later common law , referred to a person's sex as male, female or hermaphrodite, with legal rights as male or female depending on the characteristics that appeared most dominant. [ 106 ] The 12th century Decretum Gratiani states, "Whether an hermaphrodite may witness a testament, depends on which sex prevails." [ 107 ] [ 108 ] [ 109 ] The foundation of common law, the 17th century Institutes of the Lawes of England described how a hermaphrodite could inherit "either as male or female, according to that kind of sexe which doth prevaile". [ 110 ] [ 54 ] Legal cases have been described in canon law and elsewhere over the centuries.
Some non-European societies have sex or gender systems that recognize more than the two categories of male/man and female/woman. Some of these cultures, for instance the South-Asian Hijra communities, may include intersex people in a third gender category. [ 111 ] [ 112 ] Although—according to Morgan Holmes —early Western anthropologists categorized such cultures as "primitive", Holmes has argued that analyses of these cultures have been simplistic or romanticized and fail to take account of the ways that subjects of all categories are treated. [ 113 ]
During the Victorian era , medical authors introduced the terms " true hermaphrodite " for an individual who has both ovarian and testicular tissue, "male pseudo-hermaphrodite" for a person with testicular tissue, but either female or ambiguous sexual anatomy, and "female pseudo-hermaphrodite" for a person with ovarian tissue, but either male or ambiguous sexual anatomy. Some later shifts in terminology have reflected advances in genetics, while other shifts are suggested to be due to pejorative associations. [ 114 ]
The term "intersexuality" was coined by Richard Goldschmidt in 1917. [ 115 ] The first suggestion to replace the term "hermaphrodite" with "intersex" was made by Cawadias in the 1940s. [ 60 ]
Since the rise of modern medical science, some intersex people with ambiguous external genitalia have had their genitalia surgically modified to resemble either female or male genitals. Surgeons treated the birth of an intersex baby as a "social emergency". [ 116 ] An 'optimal gender policy', initially developed by John Money , held that early intervention helped avoid gender identity confusion, but this claim lacks evidence. [ 117 ] Early interventions have adverse consequences for psychological and physical health. [ 33 ] Since advances in surgery have made it possible for intersex conditions to be concealed, many people are not aware of how frequently intersex conditions arise in human beings, or that they occur at all. [ 118 ]
Dialogue between what were once antagonistic groups of activists and clinicians has led to only slight changes in medical policies and in how intersex patients and their families are treated in some locations. [ 119 ] In 2011, Christiane Völling became the first intersex person known to have successfully sued for damages in a case brought for non-consensual surgical intervention. [ 35 ] In April 2015, Malta became the first country to outlaw non-consensual medical interventions to modify sex anatomy, including that of intersex people. [ 36 ] Many civil society organizations and human rights institutions now call for an end to unnecessary "normalizing" interventions, including in the Malta declaration . [ 120 ]
Human rights institutions are placing increasing scrutiny on harmful practices and issues of discrimination against intersex people. These issues have been addressed by a rapidly increasing number of international institutions including, in 2015, the Council of Europe , the Office of the United Nations High Commissioner for Human Rights and the World Health Organization (WHO). In 2024, the United Nations Human Rights Council adopted its first resolution to protect the rights of intersex people . [ 121 ] These developments have been accompanied by International Intersex Forums and increased cooperation among civil society organizations. However, the implementation, codification, and enforcement of intersex human rights in national legal systems remains slow.
Stigmatization and discrimination from birth may include infanticide, abandonment, and the stigmatization of families. The birth of an intersex child was often viewed as a curse or a sign of a witch mother, especially in parts of Africa. [ 24 ] [ 25 ] Abandonments and infanticides have been reported in Uganda , [ 24 ] Kenya , [ 122 ] South Asia , [ 123 ] and China . [ 26 ]
Intersex infants, children and adolescents also experience medically unnecessary "normalising" interventions and the pathologisation of variations in sex characteristics. In countries where the human rights of intersex people have been studied, medical interventions to modify the sex characteristics of intersex people have still taken place without the consent of the intersex person. [ 124 ] [ 125 ] Human rights defenders have described such interventions as a violation of many rights, including (but not limited to) bodily integrity, non-discrimination, privacy, and freedom from experimentation. [ 126 ] These interventions have frequently been performed with the consent of the intersex person's parents, when the person is legally too young to consent. Such interventions have been criticized by the WHO, other UN bodies such as the Office of the High Commissioner for Human Rights, and an increasing number of regional and national institutions, due to their adverse consequences, including trauma, impact on sexual function and sensation, and violation of rights to physical and mental integrity. UN bodies have concluded that interventions on infants should not be permitted, in favor of waiting until the child is mature enough to take part in decision-making, which allows a decision to be made with full consent. [ 127 ] In April 2015, Malta became the first country to outlaw surgical intervention without consent. [ 36 ] [ 37 ] In the same year, the Council of Europe became the first institution to state that intersex people have the right not to undergo sex affirmation interventions. [ 64 ]
People born with intersex bodies are seen as different. Intersex infants, children, adolescents and adults "are often stigmatized and subjected to multiple human rights violations", including discrimination in education, healthcare, employment, sport, and public services. [ 2 ] Researchers have documented significant disparities in mental, physical, and sexual health when comparing intersex individuals to the general population, including higher rates of bullying, stigmatization, harassment, violence, and suicidal intention, as well as substantial barriers in the workplace. [ 23 ]
Several countries have so far explicitly protected intersex people from discrimination, with landmarks including South Africa , [ 128 ] Australia , [ 129 ] [ 130 ] and, most comprehensively, Malta. [ 131 ] [ 132 ] [ 133 ]
Claims for compensation and remedies for human rights abuses include the 2011 case of Christiane Völling in Germany . [ 35 ] [ 134 ] A second case was adjudicated in Chile in 2012, involving a child and his parents. [ 135 ] [ 136 ] A further successful case in Germany, taken by Michaela Raab, was reported in 2015. [ 137 ] In the United States, the Minor Child ( M.C. v Aaronson ) lawsuit was "a medical malpractice case related to the informed consent for a surgery performed on the Crawford's adopted child (known as M.C.) at [Medical University of South Carolina] in April 2006". [ 138 ] The case was one of the first lawsuits of its type to challenge "legal, ethical, and medical issues regarding genital-normalizing surgery" in minors, and was eventually settled out of court by the Medical University of South Carolina for $440,000 in 2017. [ 139 ]
Other significant issues include access to information , medical records, and peer and other counselling and support. With the rise of modern medical science in Western societies, a secrecy-based model was also adopted, in the belief that this was necessary to ensure normal physical and psychosocial development. [ 140 ] [ 141 ] [ 142 ]
The Asia Pacific Forum of National Human Rights Institutions states that legal recognition is firstly "about intersex people who have been issued a male or a female birth certificate being able to enjoy the same legal rights as other men and women". [ 34 ] In some regions, obtaining any form of birth certification may be an issue. A Kenyan court case in 2014 established the right of an intersex boy, "Baby A", to a birth certificate. [ 143 ]
As with other people, some intersex individuals may be raised as a particular sex (male or female) but then identify with another later in life, while most do not. [ 144 ] [ 8 ] [ 145 ] [ 146 ] Recognition of third sex or gender classifications occurs in several countries; [ 147 ] [ 148 ] [ 149 ] [ 150 ] however, such classifications are controversial when they become assumed or coercive, as has been the case with some German infants . [ 151 ] [ 152 ] Sociological research in Australia, a country with a third 'X' sex classification, shows that 19% of people born with atypical sex characteristics selected an "X" or "other" option, while 75% of survey respondents self-described as male or female (52% as women, 23% as men), and 6% as unsure. [ 9 ] [ 45 ]
On January 20, 2025, US president Donald Trump signed Executive Order 14168 , entitled "Defending Women from Gender Ideology Extremism and Restoring Biological Truth to the Federal Government". This executive order asserts that it is the policy of the United States that there are only two sexes, male and female, and that these sexes are immutable throughout a person's life, starting at conception. [ 153 ] The executive order does not acknowledge the existence of, or make provision for, intersex people. According to intersex advocate Alicia Roth Weigel , this order "attempts to negate our very existence". [ 154 ]
Intersex conditions can be contrasted with transgender gender identities and the attached gender dysphoria a transgender person may feel, wherein their gender identity does not match their assigned sex. [ 155 ] [ 156 ] [ 157 ] However, some people are both intersex and transgender; although intersex people by definition have variable sex characteristics that do not align with typically male or female bodies, this may be considered separate from an individual's assigned gender, the way they are raised and perceived, and their internal gender identity. [ 158 ] A 2012 clinical review paper found that between 8.5% and 20% of people with intersex variations experienced gender dysphoria. [ 145 ] In an analysis of the use of preimplantation genetic diagnosis to eliminate intersex traits, Behrmann and Ravitsky state: "Parental choice against intersex may ... conceal biases against same-sex attractedness and gender nonconformity." [ 159 ]
The relationship of intersex people and communities to LGBTQ communities is complex, [ 160 ] but intersex people are often added to the LGBT acronym, resulting in the acronym LGBTI (or when also including asexual people, LGBTQIA+ [ 161 ] ). Emi Koyama describes how inclusion of intersex in LGBTI can fail to address intersex-specific human rights issues, including creating false impressions "that intersex people's rights are protected" by laws protecting LGBT people, and failing to acknowledge that many intersex people are not LGBT. [ 162 ] Organisation Intersex International Australia states that some intersex individuals are homosexual, and some are heterosexual, but "LGBTI activism has fought for the rights of people who fall outside of expected binary sex and gender norms." [ 163 ] [ 164 ] Julius Kaggwa of SIPD Uganda has written that, while the gay community "offers us a place of relative safety, it is also oblivious to our specific needs". [ 165 ] Mauro Cabral has written that transgender people and organizations "need to stop approaching intersex issues as if they were trans issues", including use of intersex conditions and people as a means of explaining being transgender; "we can collaborate a lot with the intersex movement by making it clear how wrong that approach is." [ 166 ]
Robert A. Heinlein 's acclaimed 1959 short story "'—All You Zombies—'", a time-travel story, is an early work of science fiction featuring an intersex character.
An intersex character is the narrator in Jeffrey Eugenides ' Pulitzer Prize-winning novel Middlesex .
The memoir, Born Both: An Intersex Life ( Hachette Books , 2017), by intersex author and activist Hida Viloria , received strong praise from The New York Times Book Review , The Washington Post , Rolling Stone , People Magazine , and Psychology Today , was one of School Library Journal 's 2017 Top Ten Adult Books for Teens, and was a 2018 Lambda Literary Award nominee.
Television works about intersex and films about intersex are scarce. The Spanish-language film XXY won the Critics' Week grand prize at the 2007 Cannes Film Festival and the ACID/CCAS Support Award. [ 167 ] Faking It is notable for providing both the first intersex main character in a television show, [ 168 ] and television's first intersex character played by an intersex actor. [ 169 ]
Intersex peer support and advocacy organizations have existed since at least 1985, when the Androgen Insensitivity Syndrome Support Group Australia was established. [ 170 ] The Androgen Insensitivity Syndrome Support Group (UK) was established in 1988. [ 171 ] The Intersex Society of North America (ISNA) may have been one of the first intersex civil society organizations to have been open to people regardless of diagnosis; it was active from 1993 to 2008. [ 172 ]
Intersex Awareness Day is an internationally observed civil awareness day designed to highlight the challenges faced by intersex people, occurring annually on 26 October. It marks the first public demonstration by intersex people, which took place in Boston on 26 October 1996, outside a venue where the American Academy of Pediatrics was holding its annual conference. [ 173 ]
Intersex Day of Remembrance , also known as Intersex Solidarity Day, is an internationally observed civil awareness day designed to highlight issues faced by intersex people, occurring annually on 8 November. It marks the birthday of Herculine Barbin , a French intersex person whose memoirs were later published by Michel Foucault in Herculine Barbin: Being the Recently Discovered Memoirs of a Nineteenth-century French Hermaphrodite .
The intersex flag was created in July 2013 by Morgan Carpenter of Intersex Human Rights Australia to create a flag "that is not derivative, but is yet firmly grounded in meaning". The circle is described as "unbroken and unornamented, symbolising wholeness and completeness, and our potentialities. We are still fighting for bodily autonomy and genital integrity, and this symbolises the right to be who and how we want to be." [ 174 ]
In 2021, Valentino Vecchietti of Intersex Equality Rights UK redesigned the Progress Pride Flag to incorporate the intersex flag. [ 175 ] This design added a yellow triangle with a purple circle in it to the chevron of the Progress Pride flag. It also changed the color of green to a lighter shade without adding new symbolism. Intersex Equality Rights UK posted the new flag on Instagram and Twitter. [ 176 ] [ 177 ]
Because the word " orchid " comes from the Greek word for testicle , and orchiectomy is a surgery commonly performed on intersex infants, the orchid flower is a symbol of being intersex and of opposition to non-consensual genital surgery. [ 178 ]
In Judaism , the Talmud contains extensive discussion concerning the status of two types of intersex people in Jewish law; namely, the androgynous, who exhibit both male and female external sexual organs, and the tumtum , who exhibit neither. In the 1970s and 1980s, the treatment of intersex babies started to be discussed in Orthodox Jewish medical halacha by prominent rabbinic leaders, such as Eliezer Waldenberg and Moshe Feinstein . [ 179 ]
Erik Schinegger , Foekje Dillema , Maria José Martínez-Patiño and Santhi Soundarajan were subject to adverse sex verification testing, resulting in ineligibility to compete in organised competitive sport. Stanisława Walasiewicz , who was diagnosed posthumously with Turner syndrome , was retroactively ruled ineligible to have competed. [ 180 ]
The South African middle-distance runner Caster Semenya won three World Championships gold medals and two Olympic gold medals in the women's 800 metres. When Semenya won gold at the 2009 World Championships, the International Association of Athletics Federations (IAAF) requested sex verification tests the same day. The results were not released, and Semenya was ruled eligible to compete. [ 181 ] In 2019, new IAAF rules came into force requiring athletes like Semenya, with certain disorders of sex development (DSDs), to take medication to suppress testosterone levels in order to participate in the 400 m, 800 m, and 1500 m women's events. Semenya objected to undergoing the now-mandatory treatment and has filed a series of legal cases to restore her ability to compete in these events without testosterone suppression, arguing that the World Athletics rules are discriminatory. [ 182 ]
Katrina Karkazis , Rebecca Jordan-Young , Georgiann Davis and Silvia Camporesi have claimed that IAAF policies on "hyperandrogenism" in female athletes are "significantly flawed", arguing that the policy does not protect against breaches of privacy, requires athletes to undergo unnecessary treatment in order to compete, and intensifies "gender policing", and recommended that athletes be able to compete in accordance with their legally-recognised gender. [ 183 ]
In April 2014, the BMJ reported that four elite women athletes with XY chromosomes and 5α-reductase 2 deficiency were subjected to sterilization and "partial clitoridectomies" in order to compete in sport. The authors noted that partial clitoridectomy was "not medically indicated" and "does not relate to real or perceived athletic 'advantage'". [ 28 ] Intersex advocates regarded this intervention as "a clearly coercive process". [ 184 ] In 2016, the United Nations Special Rapporteur on health, Dainius Pūras, criticized "current and historic" sex verification policies, describing how "a number of athletes have undergone gonadectomy (removal of reproductive organs) and partial clitoridectomy (a form of female genital mutilation ) in the absence of symptoms or health issues warranting those procedures." [ 185 ]
The notion of intersex individuals can be understood in the context of sexual system biology that varies across different types of organisms. Most animal species (~95%, including humans) are gonochoric , in which individuals are of either a female or male sex. [ 186 ] Hermaphroditic species (some animals and most flowering plants [ 187 ] ) are represented by individuals that can express both sexes simultaneously or sequentially during their lifetimes. [ 188 ] Intersex individuals in a number of gonochoric species, who express both female and male phenotypic characters to some degree, [ 189 ] are known to exist at very low prevalences.
Although "hermaphrodite" and "intersex" have been used synonymously in humans, [ 190 ] [ pages needed ] a hermaphrodite is specifically an individual capable of producing female and male gametes. [ 191 ] While there are reports of individuals that seemed to have the potential to produce both types of gamete, [ 192 ] in more recent years the term hermaphrodite as applied to humans has fallen out of favor, since female and male reproductive functions have not been observed together in the same individual. [ 193 ]
Research in the late 20th century led to a growing medical consensus that diverse intersex bodies are normal, but relatively rare, forms of human biology. [ 8 ] [ 194 ] [ 195 ] [ 196 ] Clinician and researcher Milton Diamond stresses the importance of care in the selection of language related to intersex people:
Foremost, we advocate use of the terms "typical", "usual", or "most frequent" where it is more common to use the term "normal". When possible avoid expressions such as maldeveloped or undeveloped, errors of development, defective genitals, abnormal, or mistakes of nature. Emphasize that all of these conditions are biologically understandable while they are statistically uncommon. [ 197 ]
The common pathway of sexual differentiation , in which a typical human female has an XX chromosome pair and a typical male has an XY pair, is relevant to the development of intersex conditions.
During fertilization, the sperm adds either an X (female) or a Y (male) chromosome to the X in the ovum. This determines the genetic sex of the embryo. During the first weeks of development, genetic male and female fetuses are "anatomically indistinguishable", with primitive gonads beginning to develop during approximately the sixth week of gestation. The gonads, in a bipotential state, may develop into either testes (the male gonads) or ovaries (the female gonads), depending on subsequent events. [ 198 ] Up until and including the seventh week, genetically female and genetically male fetuses appear identical.
At around eight weeks of gestation, the gonads of an XY embryo differentiate into functional testes, secreting testosterone. Ovarian differentiation, for XX embryos, does not occur until approximately week 12 of gestation. In typical female differentiation, the Müllerian duct system develops into the uterus , fallopian tubes , and inner third of the vagina.
In males, the Müllerian duct-inhibiting hormone AMH causes this duct system to regress. Next, androgens cause the development of the Wolffian duct system , which develops into the vas deferens , seminal vesicles, and ejaculatory ducts. [ 198 ] By birth, the typical fetus has fully differentiated as male or female, meaning that the genetic sex (XY-male or XX-female) corresponds with the phenotypical sex; that is to say, genetic sex corresponds with internal and external gonads and the external appearance of the genitals.
There are a variety of signs that can occur. Ambiguous genitalia is the most common. Others include micropenis , clitoromegaly , partial labial fusion , electrolyte abnormalities, delayed or absent puberty, unexpected changes at puberty, hypospadias , labial or inguinal (groin) masses (which may turn out to be testes) in girls, and undescended testes (which may turn out to be ovaries) in boys. [ 199 ]
Ambiguous genitalia may appear as a large clitoris or as a small penis .
Because there is variation in all of the processes of the development of the sex organs , a child can be born with a sexual anatomy that is typically female or feminine in appearance with a larger-than-average clitoris ( clitoral hypertrophy ), or typically male or masculine in appearance with a smaller-than-average penis that is open along the underside. The appearance may be quite ambiguous, describable as female genitals (a vulva ) with a very large clitoris and partially fused labia, or as male genitals with a very small penis, completely open along the midline (" hypospadic "), and an empty scrotum . Fertility is variable.
The orchidometer is a medical instrument to measure the volume of the testicles. It was developed by Swiss pediatric endocrinologist Andrea Prader . The Prader scale [ 200 ] and Quigley scale are visual rating systems that measure genital appearance. These measurement systems were satirized in the Phall-O-Meter , created by the (now defunct) Intersex Society of North America . [ 201 ] [ 202 ] [ 203 ]
To help in classification, methods other than inspection of the genitalia can be performed. For instance, karyotyping a tissue sample can determine which cause of intersex is present in a given case. Additionally, electrolyte tests, endoscopic exams, ultrasound and hormone stimulation tests can be done. [ 204 ]
Intersex can be divided into four categories: 46, XX intersex; 46, XY intersex; true gonadal intersex; and complex or undetermined intersex. [ 199 ]
This condition used to be called "female pseudohermaphroditism ". People with this condition have female internal genitalia and karyotype (XX) and various degree of external genitalia virilization . [ 205 ] External genitalia is masculinized congenitally when female fetus is exposed to excess androgenic environment. [ 199 ] Hence, the chromosome of the person is of a female, the ovaries of a female, but external genitals that appear like a male. The labia fuse, and the clitoris enlarges to appear like a penis. The causes of this can be male hormones taken during pregnancy, congenital adrenal hyperplasia, male-hormone-producing tumors in the mother and aromatase deficiency . [ 199 ]
This condition used to be called "male pseudohermaphroditism". This is defined as incomplete masculinization of the external genitalia. [ 206 ] Thus, the person has male chromosomes, but the external genitals are incompletely formed, ambiguous, or clearly female. [ 199 ] [ 207 ] This condition is also called 46, XY with undervirilization. [ 199 ] 46, XY intersex has many possible causes, which can be problems with the testes and testosterone formation. [ 199 ] Also, there can be problems with using testosterone. Some people lack the enzyme needed to convert testosterone to dihydrotestosterone , which is a cause of 5-alpha-reductase deficiency . [ 199 ] Androgen insensitivity syndrome is the most common cause of 46, XY intersex. [ 199 ]
This condition used to be called " true hermaphroditism ". This is defined as having asymmetrical gonads with ovarian and testicular differentiation on either sides separately or combined as ovotestis. [ 208 ] In most cases, the cause of this condition is unknown.
This is the condition of having a sex chromosome configuration other than 46, XX or 46, XY. This condition does not result in a discrepancy between internal and external genitalia. However, there may be problems with sex hormone levels, overall sexual development, and altered numbers of sex chromosomes. [ 199 ]
There are a variety of opinions on what conditions or traits are and are not intersex, dependent on the definition of intersex that is used. Current human rights based definitions stress a broad diversity of sex characteristics that differ from expectations for male or female bodies. [ 2 ] During 2015, the Council of Europe , [ 64 ] the European Union Agency for Fundamental Rights [ 209 ] and Inter-American Commission on Human Rights [ 210 ] have called for a review of medical classifications on the basis that they presently impede enjoyment of the right to health ; the Council of Europe expressed concern that "the gap between the expectations of human rights organisations of intersex people and the development of medical classifications has possibly widened over the past decade." [ 64 ] [ 209 ] [ 210 ]
Medical interventions take place to address physical health concerns and psychosocial risks. Both types of rationale are the subject of debate, particularly as the consequences of surgical (and many hormonal) interventions are lifelong and irreversible. Questions regarding physical health include accurately assessing risk levels, necessity, and timing. Psychosocial rationales are particularly susceptible to questions of necessity as they reflect social and cultural concerns.
There remains no clinical consensus about an evidence base, surgical timing, necessity, type of surgical intervention, and degree of difference warranting intervention. [ 211 ] [ 212 ] [ 213 ] Such surgeries are the subject of significant contention due to consequences that include trauma, impact on sexual function and sensation, and violation of rights to physical and mental integrity. The contention encompasses community activism [ 114 ] and multiple reports by international human rights [ 30 ] [ 64 ] [ 34 ] [ 214 ] and health [ 142 ] institutions and national ethics bodies. [ 33 ] [ 215 ]
In the cases where gonads may pose a cancer risk, as in some cases of androgen insensitivity syndrome , [ 216 ] concern has been expressed that treatment rationales and decision-making regarding cancer risk may encapsulate decisions around a desire for surgical "normalization". [ 32 ]
The term "hermaphrodite" has sometimes been used to refer to humans whose biological sex is ambiguous. This usage has fallen out of favor and in any case was technically incorrect. The essential characteristic of hermaphrodites is the ability to reproduce as both male and female. No such case has been identified in any human | https://en.wikipedia.org/wiki/Intersex |
Intersex is a general term for an organism that has sex characteristics that are between male and female . [ 1 ] It typically applies to a minority of members of gonochoric animal species such as mammals (as opposed to hermaphroditic species in which the majority of members can have both male and female sex characteristics). [ 2 ] Such organisms are usually sterile . [ 3 ]
Intersexuality can occur due to both genetic and environmental factors [ 4 ] and has been reported in mammals , fish , nematodes , and crustaceans .
Intersex can occur in mammals such as pigs ; an estimated 0.1% to 1.4% of pigs are intersex. [ 5 ] In Vanuatu , Narave pigs are sacred intersex pigs that are found on Malo Island . An analysis of Narave pig mitochondrial DNA by Lum et al. (2006) found that they are descended from Southeast Asian pigs. [ 6 ] [ 7 ] [ 8 ] Female spotted hyenas have a pseudo-penis , which led to a myth that they are hermaphroditic. [ 9 ]
At least six different mole species have an adaptation whereby the female mole has an ovotestis , "a hybrid organ made up of both ovarian and testicular tissue. The evolved purpose of this [adaptation] is to give them an extra dose of testosterone to make them just as muscular and aggressive as male moles". Only the ovarian part of the ovotestis is reproductively functional. [ 10 ] [ 11 ]
Intersexuality in humans is relatively rare. Depending on the definition, the prevalence of intersex among humans has been reported at around 0.018%. [ 12 ] [ 13 ]
Intersex is known to occur in all main groups of nematodes . Most intersex nematodes are functionally female. Male intersexes with female characteristics have been reported but are less common. [ 14 ]
Gonadal intersex occurs in fishes, where the individual has both ovarian and testicular tissue. Although it is a rare anomaly among gonochoric fishes, it is a transitional state in fishes that are protandric or protogynous . [ 15 ] Intersexuality has been reported in 23 fish families. [ 16 ]
The oldest evidence for intersexuality in crustaceans comes from fossils dating back 70 million years. [ 4 ] Intersex has been reported in gonochoric crustaceans since as early as 1729. A large body of literature exists on intersexuality in isopoda and amphipoda , with reports of both intersex males and intersex females. [ 17 ]
According to National Geographic , scientists have discovered that some chickens possess a mixture of genetically male and female cells. [ 18 ] Hens can take on male characteristics if their functioning ovary is damaged or diseased . [ 19 ] | https://en.wikipedia.org/wiki/Intersex_(biology) |
Interspecies design is design practice that intentionally involves and emphasizes the contributions of multiple species, focusing on the participation and outcomes for both human and non-human lifeforms. It aims to create a mutual benefit and centers on designing for and with all life. [ 1 ]
Interspecies design is characterized by the participation of more than one species in design activities and the use of design outcomes by multiple species. This concept extends to all design practices that could potentially involve multiple species, making it a broad and inclusive field. [ 2 ] [ 3 ]
The field arises from a need to include all those at risk of harm, domination, or oppression in the design process, highlighting the ethical dimension of design decisions. This approach challenges traditional practices by considering the impact on and inclusion of non-human species. [ 4 ]
Interspecies design is related to but distinct from concepts such as interspecies cultures, [ 5 ] multispecies design, ecocentric design, ecological engineering, and more-than-human design.
In the realm of art and design, interspecies design has been applied in creating shared spaces and experiences for multiple species, such as in the design of prosthetic habitat-structures for owls. [ 6 ] | https://en.wikipedia.org/wiki/Interspecies_design |
Interspecies hydrogen transfer ( IHT ) is a form of interspecies electron transfer. [ 1 ] It is a syntrophic process by which H 2 is transferred from one organism to another, particularly in the rumen and other anaerobic environments. [ 1 ]
IHT was discovered in 1967, between Methanobacterium bryantii strain M.o.H and an "S" organism, by Marvin Bryant, Eileen Wolin, Meyer Wolin, and Ralph Wolfe at the University of Illinois. The two organisms form a co-culture that had been mistaken for a single species, Methanobacillus omelianskii . [ 2 ] It was shown in 1973 that this process occurs between Ruminococcus albus and Wolinella succinogenes . [ 3 ] A more recent publication describes how the gene expression profiles of these organisms change when they undergo interspecies hydrogen transfer; of note, a switch to an electron-confurcating hydrogenase occurs in R. albus 7. [ 4 ]
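For reference, the overall stoichiometry of hydrogenotrophic methanogenesis, the standard reaction by which a methanogenic partner consumes the transferred hydrogen (relevant to the carbon-cycle role described below), is:

{\displaystyle 4\,\mathrm {H_{2}} +\mathrm {CO_{2}} \rightarrow \mathrm {CH_{4}} +2\,\mathrm {H_{2}O} }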
This process affects the carbon cycle : methanogens can participate in interspecies hydrogen transfer, combining H 2 and CO 2 to produce CH 4 . [ 5 ] Besides methanogens, acetogens and sulfate-reducing bacteria can also participate in IHT. [ 6 ] | https://en.wikipedia.org/wiki/Interspecies_hydrogen_transfer
Interspecific competition , in ecology , is a form of competition in which individuals of different species compete for the same resources in an ecosystem (e.g. food or living space). This can be contrasted with mutualism , a type of symbiosis . Competition between members of the same species is called intraspecific competition .
If a tree species in a dense forest grows taller than the surrounding tree species, it can absorb more of the incoming sunlight. Less sunlight is then available for the trees shaded by the taller tree: an instance of interspecific competition. Leopards and lions can also be in interspecific competition, since both species feed on the same prey and can be negatively impacted by the presence of the other, because they will have less food.
Competition is only one of many interacting biotic and abiotic factors that affect community structure. Moreover, competition is not always a straightforward, direct, interaction. Interspecific competition may occur when individuals of two separate species share a limiting resource in the same area. If the resource cannot support both populations, then lowered fecundity , growth, or survival may result in at least one species. Interspecific competition has the potential to alter populations , communities and the evolution of interacting species. On an individual organism level, competition can occur as interference or exploitative competition.
All of the types described here can also apply to intraspecific competition , that is, competition among individuals within a species. Also, any specific example of interspecific competition can be described in terms of both a mechanism (e.g., resource or interference) and an outcome (symmetric or asymmetric).
Exploitative competition, also referred to as resource competition, is a form of competition in which one species consumes and either reduces or more efficiently uses a shared limiting resource and therefore depletes the availability of the resource for the other species. [ 1 ] Thus, it is an indirect interaction because the competing species interact via a shared resource.
Interference competition is a form of competition in which individuals of one species interact directly with individuals of another species via antagonistic displays or more aggressive behavior.
In a review and synthesis of experimental evidence regarding interspecific competition, Schoener [ 2 ] described six specific types of mechanisms by which competition occurs, including consumptive, preemptive, overgrowth, chemical, territorial, and encounter. Consumption competition is always resource competition, but the others cannot always be regarded as exclusively exploitative or interference.
Separating the effect of resource use from that of interference is not easy. A good example of exploitative competition is found in aphid species competing over the sap in plant phloem . Each aphid species that feeds on host plant sap uses some of the resource, leaving less for competing species. In one study, Fordinae geoica was observed to out-compete F. formicaria to the extent that the latter species exhibited an 84% reduction in survival. Another example is competition for calling space in amphibians, where the calling activity of one species prevents another from calling over an area as wide as it would in allopatry. [ 3 ] A final example is the displacement of bisexual rock lizards of the genus Darevskia from their natural habitats by a daughter unisexual form; [ 4 ] interference competition can be ruled out in this case, because parthenogenetic forms of the lizards never demonstrate aggressive behavior.
This type of competition can also be observed in forests where large trees dominate the canopy and thus allow little light to reach smaller competitors living below. These interactions have important implications for the population dynamics and distribution of both species.
Scramble and contest competition refer to the relative success of competitors. Scramble competition is said to occur when each competitor is equally suppressed, either through reduction in survival or birth rates. Contest competition is said to occur when one or a few competitors are unaffected by competition, but all others suffer greatly, either through reduction in survival or birth rates. Sometimes these types of competition are referred to as symmetric (scramble) vs. asymmetric (contest) competition. Scramble and contest competition are two ends of a spectrum, of completely equal or completely unequal effects.
Apparent competition is actually an example of predation that alters the relative abundances of prey on the same trophic level . It occurs when two or more species in a habitat affect shared natural enemies in a higher trophic level . [ 5 ] If two species share a common predator , for example, apparent competition can exist between the two prey items in which the presence of each prey species increases the abundance of the shared enemy, and thereby suppresses one or both prey species. [ 6 ] This mechanism gets its name from experiments in which one prey species is removed and the second prey species increases in abundance. Investigators sometimes mistakenly attribute the increase in abundance in the second species as evidence for resource competition between prey species. It is "apparently" competition, but is in fact due to a shared predator, parasitoid, parasite, or pathogen. Notably, species competing for resources may often also share predators in nature. Interactions via resource competition and shared predation may thus often influence one another, making it difficult to study and predict their outcome by only studying one of them. [ 7 ]
Many studies, including those cited previously, have shown major impacts on both individuals and populations from interspecific competition. Documentation of these impacts has been found in species from every major branch of life. The effects of interspecific competition can also reach communities and can even influence the evolution of species as they adapt to avoid competition. This evolution may result in the exclusion of a species in the habitat, niche separation , and local extinction . The changes of these species over time can also change communities as other species must adapt.
The competitive exclusion principle, also called " Gause's law ", [ 8 ] arose from mathematical analysis and simple competition models. It states that two species that use the same limiting resource in the same way in the same space and time cannot coexist; for two such species to coexist, they must diverge from each other over time. One species will often exhibit an advantage in resource use. This superior competitor will out-compete the other through more efficient use of the limiting resource. As a result, the inferior competitor will suffer a decline in population over time, eventually being excluded from the area and replaced by the superior competitor.
A well-documented example of competitive exclusion was observed between Dolly Varden charr ( Salvelinus malma ) and white-spotted charr ( S. leucomaenis ) in Japan. Both of these species were morphologically similar, but the former was found primarily at higher elevations than the latter. Although there was a zone of overlap, each species excluded the other from its dominant region by becoming better adapted to its habitat over time. In some such cases, each species gets displaced into an exclusive segment of the original habitat. Because each species suffers from competition, natural selection favors the avoidance of competition in such a way.
Niche differentiation is a process by which competitive exclusion leads to differences in resource use. In some cases, niche differentiation results in spatial displacement, where species avoid direct competition by occupying different areas. However, niche differentiation can also cause other changes, such as altered behaviors or ecological roles, that help species avoid competition. If competition avoidance is possible, species may specialize in different areas of the niche, minimizing overlap and resource competition (Watts & Holekamp, 2008). For example, spotted hyenas (Crocuta crocuta) and lions (Panthera leo) in Africa share similar habitats and prey but have different hunting strategies. Hyenas use stamina to chase prey over long distances, while lions rely on ambush hunting. This difference in hunting strategies helps reduce direct competition for food (Hayward & Slotow, 2009).
Another example of niche differentiation comes from birds, where species with similar ecological requirements shift their behavior to avoid competition. In the Galapagos Islands, finch species have been observed to change their feeding habits within a few generations, adapting to new dietary resources to minimize competition. This adaptation allowed different finch species to coexist despite overlapping habitats and food sources (Kruuk, 1972). Similarly, hyenas and lions may alter their roles in the ecosystem through spatial and behavioral differentiation, helping them avoid direct conflict and share resources (Groenewald et al., 2009).
In some ecosystems, niche differentiation is influenced by third-party species or predators. For example, a keystone predator can significantly alter the behavior of competing species. Hyenas, by preying on lions or scavenging their kills, can reduce the lions’ ability to dominate a territory. This helps other predators and scavengers, like cheetahs, access resources they might otherwise be excluded from (Hayward & Slotow, 2009). Additionally, in bacterial ecosystems, phage parasites have been shown to mediate coexistence between competing bacterial species by reducing the dominance of one species. This kind of interaction helps maintain biodiversity in microbial communities, which can have important implications for both medical research and ecological theory (Groenewald et al., 2009).
Although local extinction of one or more competitors has been less documented than niche separation or competitive exclusion, it does occur. In an experiment involving zooplankton in artificial rock pools, local extinction rates were significantly higher in areas of interspecific competition. [ 9 ] In these cases, therefore, the negative effects are not only at the population level but also species richness of communities.
As mentioned previously, interspecific competition has great impact on community composition and structure. Niche separation of species, local extinction and competitive exclusion are only some of the possible effects. In addition to these, interspecific competition can be the source of a cascade of effects that build on each other. An example of such an effect is the introduction of the invasive species purple-loosestrife to the United States. When introduced to wetland communities, this plant often outcompetes much of the native flora, decreasing species richness and reducing food and shelter for many other species at higher trophic levels. In this way, one species can influence the populations of many other species, through competition as well as through a myriad of other interactions. Because of the complicated web of interactions that make up every ecosystem and habitat, the results of interspecific competition are complex and site-specific.
The impacts of interspecific competition on populations have been formalized in a mathematical model called the Competitive Lotka–Volterra equations , which creates a theoretical prediction of interactions. It combines the effects of each species on the other. These effects are calculated separately for the first and second population respectively:

{\displaystyle {\frac {dN_{1}}{dt}}={\frac {r_{1}N_{1}}{K_{1}}}\left(K_{1}-N_{1}-\alpha N_{2}\right)}

{\displaystyle {\frac {dN_{2}}{dt}}={\frac {r_{2}N_{2}}{K_{2}}}\left(K_{2}-N_{2}-\beta N_{1}\right)}
In these formulae, N is the population size, t is time, K is the carrying capacity , r is the intrinsic rate of increase and α and β are the relative competition coefficients. [ 10 ] The results show the effect that the other species has on the species being calculated. The results can be graphed to show a trend and possible prediction for the future of the species. One problem with this model is that certain assumptions must be made for the calculation to work. These include the lack of migration and constancy of the carrying capacities and competition coefficients of both species. The complex nature of ecology determines that these assumptions are rarely true in the field but the model provides a basis for improved understanding of these important concepts.
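As an illustration, the following is a minimal numerical sketch of this model in Python; the parameter values are hypothetical, chosen only to show a case in which the two species coexist.

```python
# Forward-Euler integration of the competitive Lotka-Volterra equations.
# All parameter values below are hypothetical, for illustration only.
K1, K2 = 100.0, 80.0      # carrying capacities
r1, r2 = 0.5, 0.4         # intrinsic rates of increase
alpha, beta = 0.6, 0.7    # relative competition coefficients

N1, N2 = 10.0, 10.0       # initial population sizes
dt = 0.1                  # time step

for _ in range(20000):    # integrate to t = 2000
    dN1 = r1 * N1 * (K1 - N1 - alpha * N2) / K1
    dN2 = r2 * N2 * (K2 - N2 - beta * N1) / K2
    N1 += dN1 * dt
    N2 += dN2 * dt

# With these parameters both populations settle at a stable equilibrium
# (roughly N1 = 89.7, N2 = 17.2), i.e. the species coexist.
print(f"N1 = {N1:.1f}, N2 = {N2:.1f}")
```

Raising alpha or beta so that one species' effect on the other exceeds its effect on itself produces competitive exclusion instead, consistent with the coexistence condition discussed below.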
An equivalent formulation of these models [ 11 ] is:

{\displaystyle {\frac {dN_{1}}{dt}}=r_{1}N_{1}\left(1-\alpha _{11}N_{1}-\alpha _{12}N_{2}\right)}

{\displaystyle {\frac {dN_{2}}{dt}}=r_{2}N_{2}\left(1-\alpha _{21}N_{1}-\alpha _{22}N_{2}\right)}
In these formulae, {\displaystyle \alpha _{11}} is the effect that an individual of species 1 has on its own population growth rate. Similarly, {\displaystyle \alpha _{12}} is the effect that an individual of species 2 has on the population growth rate of species 1. One can also read this as the effect on species 1 of species 2. In comparing this formulation to the one above, we note that {\displaystyle \alpha _{11}=1/K_{1},~\alpha _{22}=1/K_{2}} , and {\displaystyle \alpha _{12}=\alpha /K_{1}} .
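To verify the equivalence, substitute these coefficients into the first of the equations above:

{\displaystyle r_{1}N_{1}\left(1-\alpha _{11}N_{1}-\alpha _{12}N_{2}\right)=r_{1}N_{1}\left(1-{\frac {N_{1}}{K_{1}}}-{\frac {\alpha N_{2}}{K_{1}}}\right)={\frac {r_{1}N_{1}}{K_{1}}}\left(K_{1}-N_{1}-\alpha N_{2}\right)}

which recovers the earlier form for species 1; by symmetry, the same substitution with {\displaystyle \alpha _{21}=\beta /K_{2}} recovers the equation for species 2.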
Coexistence between competitors occurs when {\displaystyle \alpha _{11}>\alpha _{12}} and {\displaystyle \alpha _{22}>\alpha _{21}} . We can translate this as: coexistence occurs when the effect of each species on itself is greater than the effect of the competitor.
There are other mathematical representations that model species competition, such as using non-polynomial functions. [ 12 ]
Interspecific competition is a major factor in macroevolution . [ 13 ] Darwin assumed that interspecific competition limits the number of species on Earth, as formulated in his wedge metaphor: "Nature may be compared to a surface covered with ten-thousand sharp wedges ... representing different species, all packed closely together and driven in by incessant blows, . . . sometimes a wedge of one form and sometimes another being struck; the one driven deeply in forcing out others; with the jar and shock often transmitted very far to other wedges in many lines of direction." (From Natural Selection - the "big book" from which Darwin abstracted the Origin ). [ 14 ] The question whether interspecific competition limits global biodiversity is disputed today, [ 15 ] but analytical studies of the global Phanerozoic fossil record are in accordance with the existence of global (although not constant) carrying capacities for marine biodiversity. [ 16 ] [ 17 ] Interspecific competition is also the basis for Van Valen's Red Queen hypothesis , and it may underlie the positive correlation between origination and extinction rates that is seen in almost all major taxa. [ 13 ]
In the previous examples, the macroevolutionary role of interspecific competition is that of a limiting factor of biodiversity, but interspecific competition also promotes niche differentiation and thus speciation and diversification. [ 18 ] [ 19 ] The impact of interspecific competition may therefore change during phases of diversity build-up, from an initial phase where positive feedback mechanisms dominate to a later phase when niche preemption limits further increase in the number of species; a possible example of this situation is the re-diversification of marine faunas after the end-Permian mass extinction event. [ 20 ] | https://en.wikipedia.org/wiki/Interspecific_competition
The Dwight D. Eisenhower National System of Interstate and Defense Highways , commonly known as the Interstate Highway System , or the Eisenhower Interstate System , is a network of controlled-access highways that forms part of the National Highway System in the United States . The system extends throughout the contiguous United States and has routes in Hawaii , Alaska , and Puerto Rico .
In the 20th century, the United States Congress began funding roadways through the Federal Aid Road Act of 1916 , and started an effort to construct a national road grid with the passage of the Federal Aid Highway Act of 1921 . In 1926, the United States Numbered Highway System was established, creating the first national road numbering system for cross-country travel. The roads were funded and maintained by U.S. states , and there were few national standards for road design. United States Numbered Highways ranged from two-lane country roads to multi-lane freeways. After Dwight D. Eisenhower became president in 1953, his administration developed a proposal for an interstate highway system, eventually resulting in the enactment of the Federal-Aid Highway Act of 1956 .
Unlike the earlier United States Numbered Highway System, the interstates were designed to be all freeways, with nationally unified standards for construction and signage. While some older freeways were adopted into the system, most of the routes were completely new. In dense urban areas, the choice of routing destroyed many well-established neighborhoods, often intentionally as part of a program of " urban renewal ". [ 3 ] In the two decades following the 1956 Highway Act, the construction of the freeways displaced one million people, [ 4 ] and as a result of the many freeway revolts during this era, several planned Interstates were abandoned or re-routed to avoid urban cores.
Construction of the original Interstate Highway System was proclaimed complete in 1992, despite deviations from the original 1956 plan and several stretches that did not fully conform with federal standards . The construction of the Interstate Highway System cost approximately $114 billion (equivalent to $618 billion in 2023). The system has continued to expand and grow as additional federal funding has provided for new routes to be added, and many future Interstate Highways are currently either being planned or under construction.
Though heavily funded by the federal government, Interstate Highways are owned by the state in which they were built. With few exceptions , all Interstates must meet specific standards , such as having controlled access, physical barriers or median strips between lanes of oncoming traffic, breakdown lanes , avoiding at-grade intersections , no traffic lights , and complying with federal traffic sign specifications. Interstate Highways use a numbering scheme in which primary Interstates are assigned one- or two-digit numbers, and shorter routes which branch off from longer ones are assigned three-digit numbers where the last two digits match the parent route. The Interstate Highway System is partially financed through the Highway Trust Fund , which itself is funded by a combination of a federal fuel tax and transfers from the Treasury's general fund. [ 5 ] Though federal legislation initially banned the collection of tolls, some Interstate routes are toll roads , either because they were grandfathered into the system or because subsequent legislation has allowed for tolling of Interstates in some cases.
As of 2022, about one quarter of all vehicle miles driven in the country used the Interstate Highway System, [ 6 ] which has a total length of 48,890 miles (78,680 km). [ 2 ] In 2022 and 2023, the number of fatalities on the Interstate Highway System amounted to more than 5,000 people annually, with nearly 5,600 fatalities in 2022. [ 7 ]
The United States government's efforts to construct a national network of highways began on an ad hoc basis with the passage of the Federal Aid Road Act of 1916 , which provided $75 million over a five-year period for matching funds to the states for the construction and improvement of highways. [ 8 ] The nation's revenue needs associated with World War I prevented any significant implementation of this policy, which expired in 1921.
In December 1918, E. J. Mehren, a civil engineer and the editor of Engineering News-Record , presented his "A Suggested National Highway Policy and Plan" [ 9 ] during a gathering of the State Highway Officials and Highway Industries Association at the Congress Hotel in Chicago. [ 10 ] In the plan, Mehren proposed a 50,000-mile (80,000 km) system, consisting of five east–west routes and 10 north–south routes. The system would include two percent of all roads and would pass through every state at a cost of $25,000 per mile ($16,000/km), providing commercial as well as military transport benefits. [ 9 ]
In 1919, the US Army sent an expedition across the US to determine the difficulties that military vehicles would have on a cross-country trip. Leaving from the Ellipse near the White House on July 7, the Motor Transport Corps convoy needed 62 days to drive 3,200 miles (5,100 km) on the Lincoln Highway to the Presidio of San Francisco along the Golden Gate . The convoy suffered many setbacks and problems on the route, such as poor-quality bridges, broken crankshafts, and engines clogged with desert sand. [ 11 ]
Dwight Eisenhower , then a 28-year-old brevet lieutenant colonel, [ 12 ] accompanied the trip "through darkest America with truck and tank," as he later described it. Some roads in the West were a "succession of dust, ruts, pits, and holes." [ 11 ]
As the landmark 1916 law expired, new legislation was passed—the Federal Aid Highway Act of 1921 (Phipps Act). This new road construction initiative once again provided for federal matching funds for road construction and improvement, $75 million allocated annually. [ 13 ] Moreover, this new legislation for the first time sought to target these funds to the construction of a national road grid of interconnected "primary highways", setting up cooperation among the various state highway planning boards. [ 13 ]
The Bureau of Public Roads asked the Army to provide a list of roads that it considered necessary for national defense. [ 14 ] In 1922, General John J. Pershing , former head of the American Expeditionary Force in Europe during the war, complied by submitting a detailed network of 20,000 miles (32,000 km) of interconnected primary highways—the so-called Pershing Map . [ 15 ]
A boom in road construction followed throughout the decade of the 1920s, with such projects as the New York parkway system constructed as part of a new national highway system. As automobile traffic increased, planners saw a need for such an interconnected national system to supplement the existing, largely non-freeway, United States Numbered Highways system. By the late 1930s, planning had expanded to a system of new superhighways.
In 1938, President Franklin D. Roosevelt gave Thomas MacDonald , chief at the Bureau of Public Roads, a hand-drawn map of the United States marked with eight superhighway corridors for study. [ 16 ] In 1939, Bureau of Public Roads Division of Information chief Herbert S. Fairbank wrote a report called Toll Roads and Free Roads , "the first formal description of what became the Interstate Highway System" and, in 1944, the similarly themed Interregional Highways . [ 17 ]
The Interstate Highway System gained a champion in President Dwight D. Eisenhower, who was influenced by his experiences as a young Army officer crossing the country in the 1919 Motor Transport Corps convoy that drove in part on the Lincoln Highway , the first road across America. He recalled that, "The old convoy had started me thinking about good two-lane highways... the wisdom of broader ribbons across our land." [ 11 ] Eisenhower also gained an appreciation of the Reichsautobahn system, the first "national" implementation of modern Germany's Autobahn network, as a necessary component of a national defense system while he was serving as Supreme Commander of Allied Forces in Europe during World War II . [ 18 ] In 1954, Eisenhower appointed General Lucius D. Clay to head a committee charged with proposing an interstate highway system plan. [ 19 ] Summing up motivations for the construction of such a system, Clay stated,
It was evident we needed better highways. We needed them for safety, to accommodate more automobiles. We needed them for defense purposes, if that should ever be necessary. And we needed them for the economy. Not just as a public works measure, but for future growth. [ 20 ]
Clay's committee proposed a 10-year, $100 billion program ($1.17 trillion in 2024), which would build 40,000 miles (64,000 km) of divided highways linking all American cities with a population of greater than 50,000. Eisenhower initially preferred a system consisting of toll roads , but Clay convinced Eisenhower that toll roads were not feasible outside of the highly populated coastal regions. In February 1955, Eisenhower forwarded Clay's proposal to Congress. The bill quickly won approval in the Senate, but House Democrats objected to the use of public bonds as the means to finance construction. Eisenhower and the House Democrats agreed to instead finance the system through the Highway Trust Fund , which itself would be funded by a gasoline tax. [ 21 ] In June 1956, Eisenhower signed the Federal Aid Highway Act of 1956 into law. Under the act, the federal government would pay for 90 percent of the cost of construction of Interstate Highways. Each Interstate Highway was required to be a freeway with at least four lanes and no at-grade crossings. [ 22 ]
The publication in 1955 of the General Location of National System of Interstate Highways , informally known as the Yellow Book , mapped out what became the Interstate Highway System. [ 23 ] Assisting in the planning was Charles Erwin Wilson , who was still head of General Motors when President Eisenhower selected him as Secretary of Defense in January 1953.
Some sections of highways that became part of the Interstate Highway System began construction before the 1956 act was passed.
Three states have claimed the title of first Interstate Highway. Missouri claims that the first three contracts under the new program were signed in Missouri on August 2, 1956. The first contract signed was for upgrading a section of US Route 66 to what is now designated Interstate 44 . [ 24 ] On August 13, 1956, work began on US 40 (now I-70) in St. Charles County. [ 25 ] [ 24 ]
Kansas claims that it was the first to start paving after the act was signed. Preliminary construction had taken place before the act was signed, and paving started September 26, 1956. The state marked its portion of I-70 as the first project in the United States completed under the provisions of the new Federal-Aid Highway Act of 1956. [ 24 ]
The Pennsylvania Turnpike could also be considered one of the first Interstate Highways, and is nicknamed "Grandfather of the Interstate System". [ 25 ] On October 1, 1940, 162 miles (261 km) of the highway now designated I‑70 and I‑76 opened between Irwin and Carlisle . The Commonwealth of Pennsylvania refers to the turnpike as the Granddaddy of the Pikes, a reference to turnpikes . [ 24 ]
Milestones in the construction of the Interstate Highway System include:
The initial cost estimate for the system was $25 billion over 12 years; it ended up costing $114 billion (equivalent to $425 billion in 2006 [ 37 ] or $618 billion in 2023 [ 38 ] ) and took 35 years. [ 39 ]
The system was proclaimed complete in 1992, but two of the original Interstates— I-95 and I-70 —were not continuous: both of these discontinuities were due to local opposition, which blocked efforts to build the necessary connections to fully complete the system. I-95 was made a continuous freeway in 2018, [ 40 ] and thus I-70 remains the only original Interstate with a discontinuity.
I-95 was discontinuous in New Jersey because of the cancellation of the Somerset Freeway . This situation was remedied when the Pennsylvania Turnpike/Interstate 95 Interchange Project, whose construction started in 2010, [ 41 ] partially opened on September 22, 2018, which was enough to close the gap. [ 40 ]
However, I-70 remains discontinuous in Pennsylvania , because of the lack of a direct interchange with the Pennsylvania Turnpike at the eastern end of the concurrency near Breezewood . Traveling in either direction, I-70 traffic must exit the freeway and use a short stretch of US 30 (which includes a number of roadside services) to rejoin I-70. The interchange was not originally built because of a legacy federal funding rule, since relaxed, which restricted the use of federal funds to improve roads financed with tolls. [ 42 ] Solutions have been proposed to eliminate the discontinuity, but they have been blocked by local opposition, fearing a loss of business. [ 43 ]
The Interstate Highway System has been expanded numerous times. The expansions have both created new designations and extended existing designations. For example, I-49 , added to the system in the 1980s as a freeway in Louisiana , was designated as an expansion corridor, and FHWA approved the expanded route north from Lafayette, Louisiana , to Kansas City, Missouri . The freeway exists today as separate completed segments, with segments under construction or in the planning phase between them. [ 44 ]
In 1966, the FHWA designated the entire Interstate Highway System as part of the larger Pan-American Highway System, [ 45 ] and at least two proposed Interstate expansions were initiated to help trade with Canada and Mexico spurred by the North American Free Trade Agreement (NAFTA). The long-term plan for I-69 , which currently exists in several separate completed segments (the largest of which are in Indiana and Texas ), is to have the highway extend from Tamaulipas , Mexico to Ontario , Canada. The planned I-11 will then bridge the Interstate gap between Phoenix, Arizona and Las Vegas, Nevada , and thus form part of the CANAMEX Corridor (along with I-19 , and portions of I-10 and I-15 ) between Sonora , Mexico and Alberta , Canada.
Political opposition from residents canceled many freeway projects around the United States, including:
In addition to cancellations, removals of freeways are planned:
The American Association of State Highway and Transportation Officials (AASHTO) has defined a set of standards that all new Interstates must meet unless a waiver from the Federal Highway Administration (FHWA) is obtained. One almost absolute standard is the controlled access nature of the roads. With few exceptions , traffic lights (and cross traffic in general) are limited to toll booths and ramp meters (metered flow control for lane merging during rush hour ).
Being freeways , Interstate Highways usually have the highest speed limits in a given area. Speed limits are determined by individual states. From 1975 to 1986, the maximum speed limit on any highway in the United States was 55 miles per hour (90 km/h), in accordance with federal law. [ 49 ]
Typically, lower limits are established in Northeastern and coastal states, while higher speed limits are established in inland states west of the Mississippi River . [ 50 ] For example, the maximum speed limit is 75 mph (120 km/h) in northern Maine, varies between 50 and 70 mph (80 and 115 km/h) [ 51 ] from southern Maine to New Jersey, and is 50 mph (80 km/h) in New York City and the District of Columbia. [ 50 ] Currently, rural speed limits elsewhere generally range from 65 to 80 miles per hour (105 to 130 km/h). Several portions of various highways such as I-10 and I-20 in rural western Texas, I-80 in Nevada between Fernley and Winnemucca (except around Lovelock) and portions of I-15 , I-70 , I-80 , and I-84 in Utah have a speed limit of 80 mph (130 km/h). Other Interstates in Idaho, Montana, Oklahoma, South Dakota and Wyoming also have the same high speed limits.
Speed limits on Interstates can be significantly lower where the routes traverse hazardous locations. The maximum speed limit on I-90 is 50 mph (80 km/h) in downtown Cleveland because of two sharp curves with a suggested limit of 35 mph (55 km/h) in a heavily congested area; I-70 through Wheeling, West Virginia , has a maximum speed limit of 45 mph (70 km/h) through the Wheeling Tunnel and most of downtown Wheeling; and I-68 has a maximum speed limit of 40 mph (65 km/h) through Cumberland, Maryland , because of multiple hazards including sharp curves and narrow lanes through the city. In some locations, low speed limits are the result of lawsuits and resident demands; after holding up the completion of I-35E in St. Paul, Minnesota , for nearly 30 years in the courts, residents along the stretch of the freeway from the southern city limit to downtown successfully lobbied for a 45 mph (70 km/h) speed limit in addition to a prohibition on any vehicle weighing more than 9,000 pounds (4,100 kg) gross vehicle weight . I-93 in Franconia Notch State Park in northern New Hampshire has a speed limit of 45 mph (70 km/h) because it is a parkway that consists of only one lane per side of the highway. On the other hand, Interstates 15, 80, 84, and 215 in Utah have speed limits as high as 70 mph (115 km/h) within the Wasatch Front , Cedar City , and St. George areas, and I-25 in New Mexico within the Santa Fe and Las Vegas areas along with I-20 in Texas along Odessa and Midland and I-29 in North Dakota along the Grand Forks area have higher speed limits of 75 mph (120 km/h).
As one of the components of the National Highway System , Interstate Highways improve the mobility of military troops to and from airports, seaports, rail terminals, and other military bases. Interstate Highways also connect to other roads that are a part of the Strategic Highway Network , a system of roads identified as critical to the US Department of Defense . [ 52 ]
The system has also been used to facilitate evacuations in the face of hurricanes and other natural disasters. An option for maximizing traffic throughput on a highway is to reverse the flow of traffic on one side of a divider so that all lanes become outbound lanes. This procedure, known as contraflow lane reversal , has been employed several times for hurricane evacuations. After public outcry regarding the inefficiency of evacuating from southern Louisiana prior to Hurricane Georges ' landfall in September 1998, government officials looked towards contraflow to improve evacuation times. In Savannah, Georgia , and Charleston, South Carolina , in 1999, lanes of I-16 and I-26 were used in a contraflow configuration in anticipation of Hurricane Floyd with mixed results. [ 53 ]
In 2004, contraflow was employed ahead of Hurricane Charley in the Tampa, Florida area and on the Gulf Coast before the landfall of Hurricane Ivan ; [ 54 ] however, evacuation times there were no better than previous evacuation operations. Engineers began to apply lessons learned from the analysis of prior contraflow operations, including limiting exits, removing troopers (to keep traffic flowing instead of having drivers stop for directions), and improving the dissemination of public information. As a result, the 2005 evacuation of New Orleans, Louisiana, prior to Hurricane Katrina ran much more smoothly. [ 55 ]
According to urban legend , early regulations required that one out of every five miles of the Interstate Highway System must be built straight and flat, so as to be usable by aircraft during times of war. There is no evidence of this rule being included in any Interstate legislation. [ 56 ] [ 57 ] It is also commonly believed the Interstate Highway System was built for the sole purpose of evacuating cities in the event of nuclear warfare . While military motivations were present, the primary motivations were civilian. [ 58 ] [ 59 ]
The numbering scheme for the Interstate Highway System was developed in 1957 by the American Association of State Highway and Transportation Officials (AASHTO). The association's present numbering policy dates back to August 10, 1973. [ 60 ] Within the contiguous United States, primary Interstates—also called main line Interstates or two-digit Interstates—are assigned numbers less than 100. [ 60 ]
While numerous exceptions do exist, there is a general scheme for numbering Interstates. Primary Interstates are assigned one- or two-digit numbers, while shorter routes (such as spurs, loops, and short connecting roads) are assigned three-digit numbers where the last two digits match the parent route (thus, I-294 is a loop that connects at both ends to I-94 , while I-787 is a short spur route attached to I-87 ). In the numbering scheme for the primary routes, east–west highways are assigned even numbers and north–south highways are assigned odd numbers. Odd route numbers increase from west to east, and even-numbered routes increase from south to north (to avoid confusion with the US Highways , which increase from east to west and north to south). [ 61 ] This numbering system usually holds true even if the local direction of the route does not match the compass directions. Numbers divisible by five are intended to be major arteries among the primary routes, carrying traffic long distances. [ 62 ] [ 63 ] Primary north–south Interstates increase in number from I-5 between Canada and Mexico along the West Coast to I‑95 between Canada and Miami, Florida along the East Coast . Major west–east arterial Interstates increase in number from I-10 between Santa Monica, California , and Jacksonville, Florida , to I-90 between Seattle, Washington , and Boston, Massachusetts , with two exceptions. There are no I-50 and I-60, as routes with those numbers would likely pass through states that currently have US Highways with the same numbers, which is generally disallowed under highway administration guidelines. [ 60 ] [ 64 ]
Several two-digit numbers are shared between unconnected road segments at opposite ends of the country for various reasons. Some such highways are incomplete Interstates (such as I-69 and I-74 ) and some just happen to share route designations (such as I-76 , I-84 , I‑86 , I-87 , and I-88 ). Some of these were due to a change in the numbering system as a result of a new policy adopted in 1973. Previously, letter-suffixed numbers were used for long spurs off primary routes; for example, western I‑84 was I‑80N, as it went north from I‑80 . The new policy stated, "No new divided numbers (such as I-35W and I-35E , etc.) shall be adopted." The new policy also recommended that existing divided numbers be eliminated as quickly as possible; however, an I-35W and I-35E still exist in the Dallas–Fort Worth metroplex in Texas, and an I-35W and I-35E that run through Minneapolis and Saint Paul , Minnesota, still exist. [ 60 ] Additionally, due to Congressional requirements, three sections of I-69 in southern Texas will be divided into I-69W , I-69E , and I-69C (for Central). [ 65 ]
AASHTO policy allows dual numbering to provide continuity between major control points. [ 60 ] This is referred to as a concurrency or overlap. For example, I‑75 and I‑85 share the same roadway in Atlanta ; this 7.4-mile (11.9 km) section, called the Downtown Connector , is labeled both I‑75 and I‑85. Concurrencies between Interstate and US Highway numbers are also allowed in accordance with AASHTO policy, as long as the length of the concurrency is reasonable. [ 60 ] In rare instances, two highway designations sharing the same roadway are signed as traveling in opposite directions; one such wrong-way concurrency is found between Wytheville and Fort Chiswell , Virginia, where I‑81 north and I‑77 south are equivalent (with that section of road traveling almost due east), as are I‑81 south and I‑77 north.
Auxiliary Interstate Highways are circumferential, radial, or spur highways that principally serve urban areas . These types of Interstate Highways are given three-digit route numbers, which consist of a single digit prefixed to the two-digit number of its parent Interstate Highway. Spur routes deviate from their parent and do not return; these are given an odd first digit. Circumferential and radial loop routes return to the parent, and are given an even first digit. Unlike primary Interstates, three-digit Interstates are signed as either east–west or north–south, depending on the general orientation of the route, without regard to the route number. For instance, I-190 in Massachusetts is labeled north–south, while I-195 in New Jersey is labeled east–west. Some looped Interstate routes use inner–outer directions instead of compass directions, when the use of compass directions would create ambiguity. Due to the large number of these routes, auxiliary route numbers may be repeated in different states along the mainline. [ 66 ] Some auxiliary highways do not follow these guidelines, however.
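The general numbering rules described above lend themselves to a small rule-based sketch. The function below encodes only the broad conventions (odd primary routes run north–south, even routes run east–west, multiples of five are major arteries, and the first digit of a three-digit route distinguishes spurs from loops); the many documented exceptions are deliberately ignored.

```python
def classify_interstate(number: int) -> str:
    """Classify an Interstate route number by the general AASHTO rules.
    Encodes only the broad conventions described in the text; documented
    exceptions (shared designations, suffixed routes, etc.) are ignored."""
    if not 1 <= number <= 999:
        raise ValueError("Interstate numbers have one to three digits")
    if number < 100:  # primary (main line) route
        orientation = "north-south" if number % 2 else "east-west"
        major = ", major arterial" if number % 5 == 0 else ""
        return f"primary {orientation} route{major}"
    # Auxiliary route: odd first digit -> spur, even -> loop.
    first_digit, parent = divmod(number, 100)
    kind = "spur" if first_digit % 2 else "circumferential/radial loop"
    return f"auxiliary {kind} of parent route I-{parent}"

print(classify_interstate(95))   # primary north-south route, major arterial
print(classify_interstate(294))  # auxiliary loop of parent route I-94
print(classify_interstate(787))  # auxiliary spur of parent route I-87
```

The sample calls reproduce the I-294 and I-787 examples given earlier in the text.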
The Interstate Highway System also extends to Alaska , Hawaii , and Puerto Rico , even though they have no direct land connections to any other states or territories. However, their residents still pay federal fuel and tire taxes.
The Interstates in Hawaii, all located on the most populous island of Oahu , carry the prefix H . There are three one-digit routes in the state ( H-1 , H-2 , and H-3 ) and one auxiliary route ( H-201 ). These Interstates connect several military and naval bases together, as well as the important communities spread across Oahu, and especially within the urban core of Honolulu .
Both Alaska and Puerto Rico also have public highways that receive 90 percent of their funding from the Interstate Highway program. The Interstates of Alaska and Puerto Rico are numbered sequentially in order of funding without regard to the rules on odd and even numbers. They also carry the prefixes A and PR , respectively. However, these highways are signed according to their local designations, not their Interstate Highway numbers. Furthermore, these routes were neither planned according to nor constructed to the official Interstate Highway standards . [ 67 ]
On one- or two-digit Interstates, the mile marker numbering almost always begins at the southern or western state line. If an Interstate originates within a state, the numbering begins from the location where the road begins in the south or west. As with all guidelines for Interstate routes, however, numerous exceptions exist.
Three-digit Interstates with an even first number that form a complete circumferential (circle) bypass around a city feature mile markers that are numbered in a clockwise direction, beginning just west of an Interstate that bisects the circumferential route near the southernmost point of the loop. In other words, mile marker 1 on I-465 , a 53-mile (85 km) route around Indianapolis, is just west of its junction with I-65 on the south side of Indianapolis (on the south leg of I-465), and mile marker 53 is just east of this same junction. An exception is I-495 in the Washington metropolitan area , with mileposts increasing counterclockwise because part of that road is also part of I-95 .
Most Interstate Highways use distance-based exit numbers so that the exit number is the same as the nearest mile marker. If multiple exits occur within the same mile, letter suffixes may be appended to the numbers in alphabetical order starting with A. [ 68 ] A small number of Interstate Highways (mostly in the Northeastern United States) use sequential-based exit numbering schemes (where each exit is numbered in order starting with 1, without regard for the mile markers on the road). One Interstate Highway, I-19 in Arizona, is signed with kilometer-based exit numbers. In the state of New York, most Interstate Highways use sequential exit numbering, with some exceptions. [ 69 ]
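As a worked illustration of the distance-based scheme, this hypothetical helper assigns exit numbers from mile-marker positions, appending letter suffixes in alphabetical order when several exits fall within the same mile. It is a simplification of the convention, not any state's actual signing rules.

```python
from string import ascii_uppercase

def assign_exit_numbers(exit_mileposts):
    """Assign distance-based exit numbers: each exit takes the number of
    its nearest mile marker; exits sharing a mile get A, B, C... suffixes.
    A simplified sketch of the convention, not official signing practice."""
    miles = [round(mp) for mp in sorted(exit_mileposts)]
    labels = []
    for i, mile in enumerate(miles):
        same_mile = [j for j, m in enumerate(miles) if m == mile]
        if len(same_mile) == 1:
            labels.append(str(mile))
        else:
            labels.append(f"{mile}{ascii_uppercase[same_mile.index(i)]}")
    return labels

print(assign_exit_numbers([2.4, 11.8, 12.1, 12.6, 20.3]))
# ['2', '12A', '12B', '13', '20']
```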
AASHTO defines a category of special routes separate from primary and auxiliary Interstate designations. These routes do not have to comply with Interstate construction or limited-access standards but are routes that may be identified and approved by the association. The same route marking policy applies to both US Numbered Highways and Interstate Highways; however, business route designations are sometimes used for Interstate Highways. [ 70 ] Known as Business Loops and Business Spurs , these routes principally travel through the corporate limits of a city, passing through the central business district when the regular route is directed around the city. They also use a green shield instead of the red and blue shield. [ 70 ] An example would be Business Loop Interstate 75 at Pontiac, Michigan , which follows surface roads into and through downtown. Sections of BL I-75's routing had been part of US 10 and M-24 , predecessors of I-75 in the area.
Interstate Highways and their rights-of-way are owned by the state in which they were built. The last federally owned portion of the Interstate System was the Woodrow Wilson Bridge on the Washington Capital Beltway . The new bridge was completed in 2009 and is collectively owned by Virginia and Maryland. [ 71 ] Maintenance is generally the responsibility of the state department of transportation. However, there are some segments of Interstate owned and maintained by local authorities.
About 70 percent of the construction and maintenance costs of Interstate Highways in the United States have been paid through user fees, primarily the fuel taxes collected by the federal, state, and local governments. To a much lesser extent they have been paid for by tolls collected on toll highways and bridges. The federal gasoline tax was first imposed in 1932 at one cent per gallon; during the Eisenhower administration, the Highway Trust Fund , established by the Highway Revenue Act in 1956, prescribed a three-cent-per-gallon fuel tax, soon increased to 4.5 cents per gallon. Since 1993 the tax has remained at 18.4 cents per gallon. [ 72 ] Other excise taxes related to highway travel also accumulated in the Highway Trust Fund. [ 72 ] Initially, that fund was sufficient for the federal portion of building the Interstate system, built in the early years with "10 cent dollars", from the perspective of the states, as the federal government paid 90% of the costs while the state paid 10%. The system's costs, however, grew more rapidly than revenue from the taxes on fuel and other aspects of driving (e.g., the excise tax on tires).
The rest of the costs of these highways are borne by general fund receipts, bond issues, designated property taxes, and other taxes. The federal contribution is funded primarily through fuel taxes and through transfers from the Treasury's general fund. [ 5 ] Local government contributions are overwhelmingly from sources besides user fees. [ 73 ] As decades passed in the 20th century and into the 21st century, the portion of the user fees spent on highways themselves covers about 57 percent of their costs, with about one-sixth of the user fees being sent to other programs, including the mass transit systems in large cities. Some large sections of Interstate Highways that were planned or constructed before 1956 are still operated as toll roads, for example the Massachusetts Turnpike (I-90), the New York State Thruway (I-87 and I-90), and Kansas Turnpike (I-35, I-335, I-470, I-70). Others have had their construction bonds paid off and they have become toll-free, such as the Connecticut Turnpike (I‑95, I-395), the Richmond-Petersburg Turnpike in Virginia (also I‑95), and the Kentucky Turnpike (I‑65).
As American suburbs have expanded, the costs incurred in maintaining freeway infrastructure have also grown, leaving little in the way of funds for new Interstate construction. [ 74 ] This has led to the proliferation of toll roads (turnpikes) as the new method of building limited-access highways in suburban areas. Some Interstates are privately maintained (for example, the VMS company maintains I‑35 in Texas) [ 75 ] to meet rising costs of maintenance and allow state departments of transportation to focus on serving the fastest-growing regions in their states.
Parts of the Interstate System might have to be tolled in the future to meet maintenance and expansion demands, as has been done with adding toll HOV / HOT lanes in cities such as Atlanta , Dallas , and Los Angeles . Although part of the tolling is an effect of the SAFETEA‑LU act, which has put an emphasis on toll roads as a means to reduce congestion, [ 76 ] [ 77 ] present federal law does not allow for a state to change a freeway section to a tolled section for all traffic.
About 2,900 miles (4,700 km) of toll roads are included in the Interstate Highway System. [ 78 ] While federal legislation initially banned the collection of tolls on Interstates, many of the toll roads on the system were either completed or under construction when the Interstate Highway System was established. Since these highways provided logical connections to other parts of the system, they were designated as Interstate highways. Congress also decided that it was too costly to either build toll-free Interstates parallel to these toll roads, or directly repay all the bondholders who financed these facilities and remove the tolls. Thus, these toll roads were grandfathered into the Interstate Highway System. [ 79 ]
Toll roads designated as Interstates (such as the Massachusetts Turnpike ) were typically allowed to continue collecting tolls, but are generally ineligible to receive federal funds for maintenance and improvements. Some toll roads that did receive federal funds to finance emergency repairs (notably the Connecticut Turnpike (I-95) following the Mianus River Bridge collapse) were required to remove tolls as soon as the highway's construction bonds were paid off. In addition, these toll facilities were grandfathered from Interstate Highway standards . A notable example is the western approach to the Benjamin Franklin Bridge in Philadelphia , where I-676 has a surface street section through a historic area.
Policies on toll facilities and Interstate Highways have since changed. The Federal Highway Administration has allowed some states to collect tolls on existing Interstate Highways, while a recent extension of I-376 included a section of Pennsylvania Route 60 that was tolled by the Pennsylvania Turnpike Commission before receiving Interstate designation. Also, newer toll facilities (like the tolled section of I-376, which was built in the early 1990s) must conform to Interstate standards. A new edition of the Manual on Uniform Traffic Control Devices in 2009 requires a black-on-yellow "Toll" sign to be placed above the Interstate trailblazer on Interstate Highways that collect tolls. [ 80 ]
Legislation passed in 2005 known as SAFETEA-LU encouraged states to construct new Interstate Highways through "innovative financing" methods. SAFETEA-LU facilitated states to pursue innovative financing by easing the restrictions on building interstates as toll roads, either through state agencies or through public–private partnerships . However, SAFETEA-LU left in place a prohibition of installing tolls on existing toll-free Interstates, and states wishing to toll such routes to finance upgrades and repairs must first seek approval from Congress. Many states have started using High-occupancy toll lane and other partial tolling methods, whereby certain lanes of highly congested freeways are tolled, while others are left free, allowing people to pay a fee to travel in less congested lanes. Examples of recent projects to add HOT lanes to existing freeways include the Virginia HOT lanes on the Virginia portions of the Capital Beltway and other related interstate highways (I-95, I-495, I-395) and the addition of express toll lanes to Interstate 77 in North Carolina in the Charlotte metropolitan area .
Interstate Highways financed with federal funds are known as "chargeable" Interstate routes, and are considered part of the 42,000-mile (68,000 km) network of highways. Federal laws also allow "non-chargeable" Interstate routes, highways funded similarly to state and US Highways to be signed as Interstates, if they both meet the Interstate Highway standards and are logical additions or connections to the system. [ 81 ] [ 82 ] These additions fall under two categories: routes that already meet Interstate standards, and routes not yet upgraded to Interstate standards. Only routes that meet Interstate standards may be signed as Interstates once their proposed number is approved. [ 67 ]
Interstate Highways are signed by a number placed on a red, white, and blue sign . The shield design itself is a registered trademark of the American Association of State Highway and Transportation Officials . [ 83 ] The colors red, white, and blue were chosen because they are the colors of the American flag . In the original design, the name of the state was displayed above the highway number, but in many states, this area is now left blank, allowing for the printing of larger and more-legible digits. Signs with the shield alone are placed periodically throughout each Interstate as reassurance markers . These signs usually measure 36 inches (91 cm) high, and are 36 inches (91 cm) wide for two-digit Interstates or 45 inches (110 cm) for three-digit Interstates. [ 84 ]
Interstate business loops and spurs use a special shield in which the red and blue are replaced with green, the word "BUSINESS" appears instead of "INTERSTATE", and the word "SPUR" or "LOOP" usually appears above the number. [ 84 ] The green shield is employed to mark the main route through a city's central business district, which intersects the associated Interstate at one (spur) or both (loop) ends of the business route. The route usually traverses the main thoroughfare(s) of the city's downtown area or other major business district. [ 85 ] A city may have more than one Interstate-derived business route, depending on the number of Interstates passing through a city and the number of significant business districts therein. [ 86 ]
Over time, the design of the Interstate shield has changed. In 1957 the Interstate shield designed by Texas Highway Department employee Richard Oliver was introduced, the winner of a contest that included 100 entries; [ 87 ] [ 88 ] at the time, the shield color was a dark navy blue and only 17 inches (43 cm) wide. [ 89 ] The Manual on Uniform Traffic Control Devices (MUTCD) standards revised the shield in the 1961, [ 90 ] 1971, [ 91 ] and 1978 [ 92 ] editions.
The majority of Interstates have exit numbers . Like other highways, Interstates feature guide signs that list control cities to help direct drivers through interchanges and exits toward their desired destination. All traffic signs and lane markings on the Interstates are supposed to be designed in compliance with the Manual on Uniform Traffic Control Devices (MUTCD). There are, however, many local and regional variations in signage.
For many years, California was the only state that did not use an exit numbering system. It was granted an exemption in the 1950s due to having an already largely completed and signed highway system; placing exit number signage across the state was deemed too expensive. To control costs, California began to incorporate exit numbers on its freeways in 2002 (Interstate, US, and state routes alike). Caltrans commonly installs exit number signage only when a freeway or interchange is built, reconstructed, retrofitted, or repaired, and it is usually tacked onto the top-right corner of an already existing sign. Newer signs along the freeways follow this practice as well. Most exits along California's Interstates now have exit number signage, particularly in rural areas. California, however, still does not use mileposts, although a few exist for experiments or for special purposes. [ 93 ]
In 2010–2011, the Illinois State Toll Highway Authority posted all new mile markers to be uniform with the rest of the state on I‑90 (Jane Addams Memorial/Northwest Tollway) and the I‑94 section of the Tri‑State Tollway, which previously had matched the I‑294 section starting in the south at I‑80/I‑94/IL Route 394. This also applied to the tolled portion of the Ronald Reagan Tollway (I-88). The tollway also added exit number tabs to the exits.
Exit numbers correspond to Interstate mileage markers in most states. On I‑19 in Arizona , however, length is measured in kilometers instead of miles because, at the time of construction, a push for the United States to change to a metric system of measurement had gained enough traction that it was mistakenly assumed that all highway measurements would eventually be changed to metric (and some distance signs retain metric distances); [ 94 ] proximity to metric-using Mexico may also have been a factor, as I‑19 indirectly connects I‑10 to the Mexican Federal Highway system via surface streets in Nogales . Mileage count increases from west to east on most even-numbered Interstates; on odd-numbered Interstates mileage count increases from south to north.
Some highways, including the New York State Thruway , use sequential exit-numbering schemes. Exits on the New York State Thruway count up from Yonkers traveling north, and then west from Albany. I‑87 in New York State is numbered in three sections. The first section makes up the Major Deegan Expressway in the Bronx , with interchanges numbered sequentially from 1 to 14. The second section of I‑87 is a part of the New York State Thruway that starts in Yonkers (exit 1) and continues north to Albany (exit 24); at Albany, the Thruway turns west and becomes I‑90 for exits 25 to 61. From Albany north to the Canadian border, the exits on I‑87 are numbered sequentially from 1 to 44 along the Adirondack Northway . This often leads to confusion as there is more than one exit on I‑87 with the same number. For example, exit 4 on Thruway section of I‑87 connects with the Cross County Parkway in Yonkers, but exit 4 on the Northway is the exit for the Albany airport. These two exits share a number but are located 150 miles (240 km) apart.
Many northeastern states label exit numbers sequentially, regardless of how many miles have passed between exits. States in which Interstate exits are still numbered sequentially are Connecticut, Delaware, New Hampshire, New York, and Vermont; as such, three of the main Interstate Highways that remain completely within these states ( 87 , 88 , 89 ) have interchanges numbered sequentially along their entire routes. Maine, Massachusetts, Pennsylvania, Virginia, Georgia, and Florida followed this system for a number of years, but have since converted to mileage-based exit numbers. Georgia renumbered in 2000, while Maine did so in 2004. Massachusetts converted its exit numbers in 2021, and most recently Rhode Island in 2022. [ 95 ] The Pennsylvania Turnpike uses both mile marker numbers and sequential numbers. Mile marker numbers are used for signage, while sequential numbers are used for numbering interchanges internally. The New Jersey Turnpike , including the portions that are signed as I‑95 and I‑78, also has sequential numbering, but other Interstates within New Jersey use mile markers.
There are four common signage methods on Interstates:
Following the passage of the Federal Aid Highway Act of 1956, passenger rail declined sharply, as did freight rail for a short time, but the trucking industry expanded dramatically and the cost of shipping and travel fell sharply. [ 107 ] Suburbanization became possible, with the rapid growth of larger, sprawling, and more car-dependent housing than was available in central cities, enabling racial segregation by white flight . [ 108 ] [ 109 ] [ 110 ] A sense of isolationism developed in suburbs, with suburbanites wanting to keep urban areas disconnected from the suburbs. [ 108 ] Tourism dramatically expanded, creating a demand for more service stations, motels, restaurants and visitor attractions. The Interstate System was the basis for urban expansion in the Sun Belt, and many urban areas in the region are thus very car-dependent. [ 111 ] The highways may have contributed to increased economic productivity in, and thereby increased migration to, the Sun Belt . [ 112 ] In rural areas, towns and small cities off the grid lost out as shoppers followed the interstate and new factories were located near them. [ 113 ]
The system had a profound effect on interstate shipping. The Interstate Highway System was being constructed at the same time as the intermodal shipping container made its debut. These containers could be placed on trailers behind trucks and shipped across the country with ease. A new road network, together with shipping containers that could be easily moved from ship to train to truck, meant that overseas manufacturers and domestic startups could get their products to market quicker than ever, allowing for accelerated economic growth. [ 114 ] Forty years after its construction, the Interstate Highway System had returned its investment, generating $6 for every $1 spent on the project. [ 115 ] According to research by the FHWA , "from 1950 to 1989, approximately one-quarter of the nation's productivity increase is attributable to increased investment in the highway system." [ 116 ]
The system had a particularly strong effect in Southern states, where major highways were inadequate. The new system facilitated the relocation of heavy manufacturing to the South and spurred the development of Southern-based corporations like Walmart (in Arkansas) and FedEx (in Tennessee). [ 114 ]
The Interstate Highway System also dramatically affected American culture, contributing to cars becoming more central to the American identity. Previously, driving was considered an excursion that required some skill and carried a degree of unpredictability. With the standardization of signs, road widths, and rules, much of that unpredictability lessened. Justin Fox wrote, "By making roads more reliable and by making Americans more reliant on them, they took away most of the adventure and romance associated with driving." [ 114 ]
The Interstate Highway System has been criticized for contributing to the decline of some cities that were divided by Interstates, and for displacing minority neighborhoods in urban centers. [ 3 ] Between 1957 and 1977, the Interstate System alone displaced over 475,000 households and one million people across the country. [ 4 ] Highways have also been criticized for increasing racial segregation by creating physical barriers between neighborhoods, [ 117 ] and for overall reductions in available housing and population in neighborhoods affected by highway construction. [ 118 ] Other critics have blamed the Interstate Highway System for the decline of public transportation in the United States since the 1950s, [ 119 ] which minorities and low-income residents are three to six times more likely to use. [ 120 ] Previous highways, such as US 66 , were also bypassed by the new Interstate system, turning countless rural communities along the way into ghost towns. [ 121 ] The Interstate System has also contributed to continued resistance against new public transportation. [ 108 ]
The Interstate Highway System had a negative impact on minority groups, especially in urban areas. Even though the government used eminent domain to obtain land for the Interstates, it was still economical to build where land was cheapest. This cheap land was often located in predominantly minority areas. [ 111 ] Not only were minority neighborhoods destroyed, but in some cities the Interstates were used to divide white and minority neighborhoods. [ 108 ] These practices were common in cities both in the North and South, including Nashville , Miami , Chicago , Detroit , and many other cities. The division and destruction of neighborhoods led to the limitation of employment and other opportunities, which deteriorated the economic fabric of neighborhoods. [ 120 ] Neighborhoods bordering Interstates have a much higher level of particulate air pollution and are more likely to be chosen for polluting industrial facilities. [ 120 ] | https://en.wikipedia.org/wiki/Interstate_Highway_System
Interstellar formaldehyde (a topic relevant to astrochemistry ) was first discovered in 1969 by L. Snyder et al. using the National Radio Astronomy Observatory . Formaldehyde (H₂CO) was detected by means of the 1₁₁–1₁₀ ground-state rotational transition at 4830 MHz. [ 1 ] On 11 August 2014, astronomers released studies, using the Atacama Large Millimeter/Submillimeter Array (ALMA) for the first time, that detailed the distribution of HCN , HNC , H₂CO , and dust inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON) . [ 2 ] [ 3 ]
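A quick back-of-the-envelope check shows why such centimeter-wave lines are accessible in cold clouds: the photon energy hν is tiny compared with kT even at typical cloud temperatures. The sketch below assumes a representative 30 K temperature and takes the second K-doublet frequency (the 2.1 cm line discussed later) as approximately 14.5 GHz; both values are assumptions for illustration.

```python
# Compare photon energy with thermal energy for the two H2CO K-doublet lines.
from scipy.constants import h, k  # Planck and Boltzmann constants

T = 30.0  # assumed representative cloud temperature, K
lines_hz = {"1_11 - 1_10 (6.2 cm)": 4.830e9,
            "2_12 - 2_11 (2.1 cm)": 14.5e9}  # approximate frequencies, Hz

for name, nu in lines_hz.items():
    print(f"{name}: h*nu/kT = {h * nu / (k * T):.1e}")
# Both ratios are of order 1e-2 or smaller, so the doublet levels are very
# nearly equally populated; only small departures from equilibrium are
# needed for the lines to appear in absorption or maser emission.
```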
Formaldehyde was the first polyatomic organic molecule detected in the interstellar medium and since its initial detection has been observed in many regions of the galaxy. [ 5 ] The isotopic ratio of [¹²C]/[¹³C] was determined to be about or less than 50% in the galactic disk . [ 6 ] Formaldehyde has been used to map out kinematic features of dark clouds located near Gould's Belt of local bright stars. [ 7 ] In 2007, the first H₂CO 6 cm maser flare was detected. [ 8 ] It was a short-duration outburst in IRAS 18566+0408 that produced a line profile consistent with the superposition of two Gaussian components, which suggests that an event outside the maser gas triggered simultaneous flares at two different locations. [ 8 ] Although this was the first maser flare detected, H₂CO masers have been observed since 1974 by Downes and Wilson in NGC 7538. [ 9 ] Unlike OH, H₂O, and CH₃OH masers, formaldehyde maser emission is associated with only five galactic star-forming regions, and it has only been observed through the 1₁₀ → 1₁₁ transition. [ 9 ]
According to Araya et al. , H₂CO masers differ from other masers in that they are weaker than most (such as OH, CH₃OH, and H₂O) and have only been detected near very young massive stellar objects. [ 10 ] [ 11 ] Because of the widespread interest in interstellar formaldehyde it has recently been extensively studied, yielding new extragalactic sources, including NGC 253, NGC 520, NGC 660 , NGC 891, NGC 2903, NGC 3079, NGC 3628, NGC 6240, NGC 6946, IC 342, IC 860, Arp 55, Arp 220, M82, M83, IRAS 10173+0828, IRAS 15107+0724, and IRAS 17468+1320. [ 12 ]
The gas-phase reaction that produces formaldehyde possesses modest barriers and is too inefficient to produce the abundance of formaldehyde that has been observed. [ 13 ] One proposed mechanism for the formation is the successive hydrogenation of CO ice, shown below. [ 13 ]

H + CO → HCO

H + HCO → H₂CO
This is the basic production mechanism leading to H₂CO; several side reactions take place at each step, depending on the nature of the ice on the grain, according to David Woon. [ 13 ] The rate constant presented in that work is for the hydrogenation of CO. The rate constant for the hydrogenation of HCO was not provided, as it is much larger than that for the hydrogenation of CO, likely because HCO is a radical. [ 14 ] Awad et al. note that this is a surface-level reaction and only the monolayer is considered in calculations; this includes the surface within cracks in the ice. [ 14 ]
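The statement that HCO hydrogenates much faster than CO can be illustrated with a toy pair of sequential rate equations. The rate constants and timescale below are arbitrary placeholders chosen only so that the second step is much faster than the first; they are not measured values.

```python
# Toy kinetics for the hydrogenation chain CO -> HCO -> H2CO.
# k1, k2 are arbitrary placeholders chosen so that k2 >> k1, reflecting
# the much faster hydrogenation of the HCO radical; units are notional.
from scipy.integrate import solve_ivp

k1 = 1e-3   # effective rate of H + CO  -> HCO   (placeholder, 1/s)
k2 = 1e-1   # effective rate of H + HCO -> H2CO  (placeholder, >> k1)

def rates(t, y):
    co, hco, h2co = y
    return [-k1 * co, k1 * co - k2 * hco, k2 * hco]

sol = solve_ivp(rates, (0.0, 5000.0), [1.0, 0.0, 0.0])
co, hco, h2co = sol.y[:, -1]
print(f"CO: {co:.3f}  HCO: {hco:.5f}  H2CO: {h2co:.3f}")
# Because k2 >> k1, the HCO intermediate never accumulates: nearly all
# consumed CO ends up as H2CO (a steady-state approximation for HCO).
```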
Formaldehyde is relatively inactive in gas-phase chemistry in the interstellar medium. Its action is predominantly focused in grain-surface chemistry on dust grains in interstellar clouds. [ 15 ] [ 16 ] Reactions involving formaldehyde have been observed to produce molecules containing C-H, C-O, O-H, and C-N bonds. [ 16 ] While these products are not necessarily well known, Schutte et al. believe them to be typical products of formaldehyde reactions at higher temperatures, for example polyoxymethylene , methanolamine , methanediol , and methoxyethanol (see Table 2 of [ 15 ] ). Formaldehyde is believed to be the primary precursor for most of the complex organic material in the interstellar medium, including amino acids . [ 16 ] Formaldehyde most often reacts with NH₃, H₂O, CH₃OH, CO, and itself, H₂CO. [ 15 ] [ 16 ] The three dominating reactions are shown below. [ 15 ]
There is no kinetic data available for these reactions, as the overall reactions are neither verified nor well understood. They are believed to take place during the warm-up of the ice on grains, which releases the molecules to react. These reactions begin at temperatures as low as 40–80 K but may take place at even lower temperatures.
Note that many other reactions are listed on the UMIST RATE06 database .
Formaldehyde appears to be a useful probe for astrochemists due to its low reactivity in the gas phase and to the fact that the 1₁₀–1₁₁ and 2₁₁–2₁₂ K-doublet transitions are rather clear. Formaldehyde has been used in many capacities and to investigate many systems, including:
The rotational spectrum at the ground-state vibrational level of H₂CO at 30 K shows the 6.2 cm 1₁₁–1₁₀ and 2.1 cm 2₁₂–2₁₁ K-doublet transitions; this spectrum was simulated using Pgopher and S-reduction rotational constants from Muller et al. [ 18 ] In the accompanying rotational energy level diagram, the ortho/para splitting is determined by the parity of Kₐ: ortho if Kₐ is odd and para if Kₐ is even. [ 17 ] | https://en.wikipedia.org/wiki/Interstellar_formaldehyde
Interstellar ice consists of grains of volatiles in the ice phase that form in the interstellar medium . Ice and dust grains form the primary material out of which the Solar System was formed. Grains of ice are found in the dense regions of molecular clouds , where new stars are formed. Temperatures in these regions can be as low as 10 K (−263 °C; −442 °F), allowing molecules that collide with grains to form an icy mantle. Thereafter, atoms undergo thermal motion across the surface, eventually forming bonds with other atoms. This results in the formation of water and methanol . [ 1 ] Indeed, the ices are dominated by water and methanol, as well as ammonia , carbon monoxide and carbon dioxide . Frozen formaldehyde and molecular hydrogen may also be present. Found in lower abundances are nitriles , ketones , esters [ 2 ] and carbonyl sulfide . [ 1 ] The mantles of interstellar ice grains are generally amorphous, becoming crystalline only in the presence of a star. [ 3 ]
The composition of interstellar ice can be determined through its infrared spectrum . As starlight passes through a molecular cloud containing ice, molecules in the cloud absorb energy. This absorption occurs at the characteristic vibrational frequencies of the gas and dust. Ice features in the cloud appear relatively prominently in these spectra, and the composition of the ice can be determined by comparison with samples of ice materials on Earth. [ 4 ] In the sites directly observable from Earth, around 60–70% of the interstellar ice consists of water, which displays a strong absorption at 3.05 μm from stretching of the O–H bond. [ 1 ]
In September 2012, NASA scientists reported that polycyclic aromatic hydrocarbons (PAHs) , subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation , oxygenation and hydroxylation , to more complex organics - "a step along the path toward amino acids and nucleotides , the raw materials of proteins and DNA , respectively". [ 5 ] [ 6 ] Further, as a result of these transformations, the PAHs lose their spectroscopic signature which could be one of the reasons "for the lack of PAH detection in interstellar ice grains , particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks ." [ 5 ] [ 6 ]
Research published in the journal Science estimates that about 30–50% of the water in the Solar System , such as the water on Earth, the discs around Saturn , and the meteorites of other planets, was present before the birth of the Sun . [ 7 ]
On 18 November 2014, the spacecraft Philae revealed the presence of a large amount of water ice on the comet 67P/Churyumov–Gerasimenko , with the report stating that "the strength of the ice found under a layer of dust on the first landing site is surprisingly high". The team responsible for the MUPUS (Multi-Purpose Sensors for Surface and Sub-Surface Science) instrument, which hammered a probe into the comet, estimated that the comet is as hard as ice. "Although the power of the hammer was gradually increased, we were not able to go deep into the surface," explained Tilman Spohn from the DLR Institute for Planetary Research , who led the research team. [ 8 ] | https://en.wikipedia.org/wiki/Interstellar_ice |
The interstellar medium ( ISM ) is the matter and radiation that exists in the space between the star systems in a galaxy . This matter includes gas in ionic , atomic , and molecular form, as well as dust and cosmic rays . It fills interstellar space and blends smoothly into the surrounding intergalactic medium . The energy that occupies the same volume, in the form of electromagnetic radiation , is the interstellar radiation field . Although the density of atoms in the ISM is usually far below that in the best laboratory vacuums, the mean free path between collisions is short compared to typical interstellar lengths, so on these scales the ISM behaves as a gas (more precisely, as a plasma : it is everywhere at least slightly ionized ), responding to pressure forces, and not as a collection of non-interacting particles.
The interstellar medium is composed of multiple phases distinguished by whether matter is ionic, atomic, or molecular, and by the temperature and density of the matter. The interstellar medium is composed primarily of hydrogen , followed by helium with trace amounts of carbon , oxygen , and nitrogen . [ 1 ] The thermal pressures of these phases are in rough equilibrium with one another. Magnetic fields and turbulent motions also provide pressure in the ISM, and are typically more important, dynamically , than the thermal pressure. In the densest regions of the interstellar medium, matter is primarily in molecular form and reaches number densities of 10¹² molecules per m³ (1 trillion molecules per m³). In hot, diffuse regions, gas is highly ionized, and the density may be as low as 100 ions per m³. Compare this with a number density of roughly 10²⁵ molecules per m³ for air at sea level, and 10¹⁶ molecules per m³ (10 quadrillion molecules per m³) for a laboratory high-vacuum chamber. Within our galaxy, by mass , 99% of the ISM is gas in any form, and 1% is dust. [ 2 ] Of the gas in the ISM, by number 91% of atoms are hydrogen and 8.9% are helium, with 0.1% being atoms of elements heavier than hydrogen or helium, [ 3 ] known as " metals " in astronomical parlance. By mass this amounts to 70% hydrogen, 28% helium, and 1.5% heavier elements. The hydrogen and helium are primarily a result of primordial nucleosynthesis , while the heavier elements in the ISM are mostly a result of enrichment (due to stellar nucleosynthesis ) in the process of stellar evolution .
The ISM plays a crucial role in astrophysics precisely because of its intermediate role between stellar and galactic scales. Stars form within the densest regions of the ISM, molecular clouds, and replenish the ISM with matter and energy through planetary nebulae , stellar winds , and supernovae . This interplay between stars and the ISM helps determine the rate at which a galaxy depletes its gaseous content, and therefore its lifespan of active star formation.
Voyager 1 reached the ISM on August 25, 2012, making it the first artificial object from Earth to do so. Interstellar plasma and dust will be studied until the estimated mission end date of 2025. Its twin Voyager 2 entered the ISM on November 5, 2018. [ 4 ]
Table 1 shows a breakdown of the properties of the components of the ISM of the Milky Way.
Field, Goldsmith & Habing (1969) put forward the static two-phase equilibrium model to explain the observed properties of the ISM. Their modeled ISM included a cold dense phase ( T < 300 K ), consisting of clouds of neutral and molecular hydrogen, and a warm intercloud phase ( T ~ 10⁴ K), consisting of rarefied neutral and ionized gas. McKee & Ostriker (1977) added a dynamic third phase that represented the very hot ( T ~ 10⁶ K) gas that had been shock heated by supernovae and constituted most of the volume of the ISM.
These phases correspond to the temperatures at which heating and cooling can reach a stable equilibrium. Their paper formed the basis for further study over the subsequent three decades. However, the relative proportions of the phases and their subdivisions are still not well understood. [ 3 ]
The basic physics behind these phases can be understood through the behaviour of hydrogen, since this is by far the largest constituent of the ISM. The different phases are roughly in pressure balance over most of the Galactic disk, since regions of excess pressure will expand and cool, and likewise under-pressure regions will be compressed and heated. Therefore, since P = n k T , hot regions (high T ) generally have low particle number density n . Coronal gas has low enough density that collisions between particles are rare and so little radiation is produced, hence there is little loss of energy and the temperature can stay high for periods of hundreds of millions of years. In contrast, once the temperature falls to O(10⁵ K) with correspondingly higher density, protons and electrons can recombine to form hydrogen atoms, emitting photons which take energy out of the gas, leading to runaway cooling. Left to itself this would produce the warm neutral medium. However, OB stars are so hot that some of their photons have energy greater than the Lyman limit , E > 13.6 eV , enough to ionize hydrogen. Such photons will be absorbed by, and ionize, any neutral hydrogen atom they encounter, setting up a dynamic equilibrium between ionization and recombination such that gas close enough to OB stars is almost entirely ionized, with temperature around 8000 K (unless already in the coronal phase), until the distance where all the ionizing photons are used up. This ionization front marks the boundary between the Warm ionized and Warm neutral medium.
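To make the pressure-balance argument concrete, here is a minimal sketch: since P = n k T, each phase's particle density follows as n = (P/k)/T. The fiducial pressure P/k ≈ 3000 K cm⁻³ and the round-number phase temperatures are illustrative assumptions, not values taken from this article.

```python
# Sketch: number densities implied by rough pressure balance, n = (P/k) / T.
P_over_k = 3000.0  # K cm^-3 (assumed fiducial ISM thermal pressure)

phases = {
    "cold neutral medium": 100.0,   # K (illustrative temperatures)
    "warm neutral medium": 8000.0,  # K
    "warm ionized medium": 8000.0,  # K
    "hot (coronal) gas": 1.0e6,     # K
}

for name, T in phases.items():
    n = P_over_k / T  # particles per cm^3
    print(f"{name:20s} T = {T:9.0f} K  ->  n ~ {n:8.3f} cm^-3")
```

The hot phase comes out at densities of order 10⁻³ cm⁻³ and the cold phase at tens per cm³, spanning the orders of magnitude quoted earlier.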
OB stars, and also cooler ones, produce many more photons with energies below the Lyman limit, which pass through the ionized region almost unabsorbed. Some of these have high enough energy (> 11.3 eV) to ionize carbon atoms, creating a C II ("ionized carbon") region outside the (hydrogen) ionization front. In dense regions this may also be limited in size by the availability of photons, but often such photons can penetrate throughout the neutral phase and only get absorbed in the outer layers of molecular clouds. Photons with E > 4 eV or so can break up molecules such as H₂ and CO, creating a photodissociation region (PDR) which is more or less equivalent to the Warm neutral medium. These processes contribute to the heating of the WNM. The distinction between Warm and Cold neutral medium is again due to a range of temperature/density in which runaway cooling occurs.
The densest molecular clouds have significantly higher pressure than the interstellar average, since they are bound together by their own gravity. When stars form in such clouds, especially OB stars, they convert the surrounding gas into the warm ionized phase, a temperature increase of several hundredfold. Initially the gas is still at molecular cloud densities, and so at vastly higher pressure than the ISM average: this is a classical H II region. The large overpressure causes the ionized gas to expand away from the remaining molecular gas (a Champagne flow ), and the flow will continue until either the molecular cloud is fully evaporated or the OB stars reach the end of their lives, after a few million years. At this point the OB stars explode as supernovae , creating blast waves in the warm gas that increase temperatures to the coronal phase ( supernova remnants , SNR). These too expand and cool over several million years until they return to average ISM pressure.
Most discussion of the ISM concerns spiral galaxies like the Milky Way , in which nearly all the mass in the ISM is confined to a relatively thin disk , typically with scale height about 100 parsecs (300 light years ), which can be compared to a typical disk diameter of 30,000 parsecs. Gas and stars in the disk orbit the galactic centre with typical orbital speeds of 200 km/s. This is much faster than the random motions of atoms in the ISM, but since the orbital motion of the gas is coherent, the average motion does not directly affect structure in the ISM. The vertical scale height of the ISM is set in roughly the same way as the Earth's atmosphere, as a balance between the local gravitation field (dominated by the stars in the disk) and the pressure. Further from the disk plane, the ISM is mainly in the low-density warm and coronal phases, which extend at least several thousand parsecs away from the disk plane. This galactic halo or 'corona' also contains significant magnetic field and cosmic ray energy density.
The rotation of galaxy disks influences ISM structures in several ways. Since the angular velocity declines with increasing distance from the centre, any ISM feature, such as a giant molecular cloud or a magnetic field line, that extends across a range of radii is sheared by differential rotation, and so tends to become stretched out in the tangential direction; this tendency is opposed by interstellar turbulence (see below), which tends to randomize the structures. Spiral arms are due to perturbations in the disk orbits, essentially ripples in the disk that cause orbits to alternately converge and diverge, compressing and then expanding the local ISM. The visible spiral arms are the regions of maximum density, and the compression often triggers star formation in molecular clouds, leading to an abundance of H II regions along the arms. The Coriolis force also influences large ISM features.
Irregular galaxies such as the Magellanic Clouds have interstellar media similar to those of spirals, but less organized. In elliptical galaxies the ISM is almost entirely in the coronal phase, since there is no coherent disk motion to support cold gas far from the center: instead, the scale height of the ISM must be comparable to the radius of the galaxy. This is consistent with the observation that there is little sign of current star formation in ellipticals. Some elliptical galaxies do show evidence for a small disk component, with ISM similar to spirals, buried close to their centers. The ISM of lenticular galaxies , as with their other properties, appears intermediate between spirals and ellipticals.
Very close to the center of most galaxies (within a few hundred light years at most), the ISM is profoundly modified by the central supermassive black hole : see Galactic Center for the Milky Way, and Active galactic nucleus for extreme examples in other galaxies. The rest of this article will focus on the ISM in the disk plane of spirals, far from the galactic center.
Astronomers describe the ISM as turbulent , meaning that the gas has quasi-random motions coherent over a large range of spatial scales. Unlike normal turbulence, in which the fluid motions are highly subsonic , the bulk motions of the ISM are usually larger than the sound speed . Supersonic collisions between gas clouds cause shock waves which compress and heat the gas, increasing the sound speed so that the flow is locally subsonic; thus supersonic turbulence has been described as 'a box of shocklets', and is inevitably associated with complex density and temperature structure. In the ISM this is further complicated by the magnetic field, which provides wave modes such as Alfvén waves which are often faster than pure sound waves: if turbulent speeds are supersonic but below the Alfvén wave speed, the behaviour is more like subsonic turbulence.
Stars are born deep inside large complexes of molecular clouds , typically a few parsecs in size. During their lives and deaths, stars interact physically with the ISM.
Stellar winds from young clusters of stars (often with giant or supergiant HII regions surrounding them) and shock waves created by supernovae inject enormous amounts of energy into their surroundings, which leads to hypersonic turbulence. The resultant structures – of varying sizes – can be observed, such as stellar wind bubbles and superbubbles of hot gas, seen by X-ray satellite telescopes or turbulent flows observed in radio telescope maps.
Stars and planets, once formed, are unaffected by pressure forces in the ISM, and so do not take part in the turbulent motions, although stars formed in molecular clouds in a galactic disk share their general orbital motion around the galaxy center. Thus stars are usually in motion relative to their surrounding ISM. The Sun is currently traveling through the Local Interstellar Cloud , an irregular clump of the warm neutral medium a few parsecs across, within the low-density Local Bubble , a 100-parsec radius region of coronal gas.
In October 2020, astronomers reported a significant unexpected increase in density in the space beyond the Solar System as detected by the Voyager 1 and Voyager 2 space probes . According to the researchers, this implies that "the density gradient is a large-scale feature of the VLISM (very local interstellar medium) in the general direction of the heliospheric nose ". [ 6 ] [ 7 ]
The interstellar medium begins where the interplanetary medium of the Solar System ends. The solar wind slows to subsonic velocities at the termination shock , 90–100 astronomical units from the Sun. In the region beyond the termination shock, called the heliosheath , interstellar matter interacts with the solar wind. Voyager 1 , the farthest human-made object from the Earth (after 1998 [ 8 ] ), crossed the termination shock December 16, 2004 and later entered interstellar space when it crossed the heliopause on August 25, 2012, providing the first direct probe of conditions in the ISM ( Stone et al. 2005 ).
Dust grains in the ISM are responsible for extinction and reddening , the decreasing light intensity and shift in the dominant observable wavelengths of light from a star. These effects are caused by scattering and absorption of photons and allow the ISM to be observed with the naked eye in a dark sky. The apparent rifts that can be seen in the band of the Milky Way – a uniform disk of stars – are caused by absorption of background starlight by dust in molecular clouds within a few thousand light years from Earth. This effect decreases rapidly with increasing wavelength ("reddening" is caused by greater absorption of blue than red light), and becomes almost negligible at mid- infrared wavelengths (> 5 μm).
Extinction provides one of the best ways of mapping the three-dimensional structure of the ISM, especially since the advent of accurate distances to millions of stars from the Gaia mission . The total amount of dust in front of each star is determined from its reddening, and the dust is then located along the line of sight by comparing the dust column density in front of stars projected close together on the sky, but at different distances. By 2022 it was possible to generate a map of ISM structures within 3 kpc (10,000 light years) of the Sun. [ 9 ]
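A minimal sketch of the differencing idea behind such maps: given stars along nearly the same sightline with known distances and total dust columns in front of them, the dust density in each distance shell is the difference of successive columns divided by the shell width. The numbers below are invented for illustration; real maps use survey photometry and Gaia parallaxes.

```python
import numpy as np

# Sketch: locating dust along a sightline by differencing stellar columns.
distance = np.array([100.0, 300.0, 600.0, 1000.0])  # pc (assumed)
column = np.array([0.02, 0.05, 0.25, 0.27])         # dust column, arbitrary units (assumed)

shell_density = np.diff(column) / np.diff(distance)
for d0, d1, rho in zip(distance[:-1], distance[1:], shell_density):
    print(f"{d0:5.0f}-{d1:5.0f} pc: density ~ {rho:.5f} units/pc")
# The jump between 300 and 600 pc localizes a dust cloud in that shell.
```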
Far ultraviolet light is absorbed effectively by the neutral hydrogen gas in the ISM. Specifically, atomic hydrogen absorbs very strongly at about 121.5 nanometers, the Lyman-alpha transition, and also at the other Lyman series lines. Therefore, it is nearly impossible to see light emitted at those wavelengths from a star farther than a few hundred light years from Earth, because most of it is absorbed during the trip to Earth by intervening neutral hydrogen. All photons with wavelength < 91.6 nm, the Lyman limit, can ionize hydrogen and are also very strongly absorbed. The absorption gradually decreases with increasing photon energy, and the ISM begins to become transparent again in soft X-rays , with wavelengths shorter than about 1 nm.
The ISM is usually far from thermodynamic equilibrium . Collisions establish a Maxwell–Boltzmann distribution of velocities, and the 'temperature' normally used to describe interstellar gas is the 'kinetic temperature', which describes the temperature at which the particles would have the observed Maxwell–Boltzmann velocity distribution in thermodynamic equilibrium. However, the interstellar radiation field is typically much weaker than that of a medium in thermodynamic equilibrium; it is most often roughly that of an A star (surface temperature of ~10,000 K) highly diluted. Therefore, bound levels within an atom or molecule in the ISM are rarely populated according to the Boltzmann formula ( Spitzer 1978 , § 2.4).
Depending on the temperature, density, and ionization state of a portion of the ISM, different heating and cooling mechanisms determine the temperature of the gas.
Grain heating by thermal exchange is very important in supernova remnants where densities and temperatures are very high.
Gas heating via grain-gas collisions is dominant deep in giant molecular clouds (especially at high densities). Far infrared radiation penetrates deeply due to the low optical depth. Dust grains are heated via this radiation and can transfer thermal energy during collisions with the gas. A measure of efficiency in the heating is given by the accommodation coefficient α = (T₂ − T) / (T_d − T), where T is the gas temperature, T_d the dust temperature, and T₂ the post-collision temperature of the gas atom or molecule. This coefficient was measured by ( Burke & Hollenbach 1983 ) as α = 0.35.
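A minimal sketch of how the coefficient is used: rearranging the definition gives the gas temperature after a single gas-grain collision. The input temperatures below are assumed illustrative values.

```python
alpha = 0.35  # value measured by Burke & Hollenbach (1983)

def post_collision_temperature(T_gas, T_dust, alpha=alpha):
    """Gas particle temperature after one collision with a grain:
    alpha = (T2 - T) / (Td - T)  ->  T2 = T + alpha * (Td - T)."""
    return T_gas + alpha * (T_dust - T_gas)

# Illustrative (assumed) values for a dense molecular-cloud interior:
print(post_collision_temperature(T_gas=10.0, T_dust=20.0))  # -> 13.5 K
```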
Despite its extremely low density, photons generated in the ISM are prominent in nearly all bands of the electromagnetic spectrum. In fact the optical band, on which astronomers relied until well into the 20th century, is the one in which the ISM is least obvious.
Radio waves are affected by the plasma properties of the ISM. The lowest frequency radio waves, below ≈ 0.1 MHz, cannot propagate through the ISM since they are below its plasma frequency . At higher frequencies, the plasma has a significant refractive index, decreasing with increasing frequency, and also dependent on the density of free electrons. Random variations in the electron density cause interstellar scintillation , which broadens the apparent size of distant radio sources seen through the ISM, with the broadening decreasing with frequency squared. The variation of refractive index with frequency causes the arrival times of pulses from pulsars and Fast radio bursts to be delayed at lower frequencies (dispersion). The amount of delay is proportional to the column density of free electrons (Dispersion measure, DM), which is useful for both mapping the distribution of ionized gas in the Galaxy and estimating distances to pulsars (more distant ones have larger DM). [ 15 ]
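As a sketch of the dispersion effect just described: the delay relative to infinite frequency is commonly written using the cold-plasma dispersion constant ≈ 4.149 × 10³ s MHz² pc⁻¹ cm³. The DM value below is an assumed example, not a measurement from this article.

```python
def dispersion_delay(dm, freq_mhz):
    """Delay in seconds, relative to infinite frequency, for a pulse with
    dispersion measure dm (pc cm^-3) observed at freq_mhz (MHz)."""
    return 4.149e3 * dm / freq_mhz ** 2  # standard cold-plasma dispersion constant

dm = 100.0  # pc cm^-3 (assumed example value)
for f in (400.0, 1400.0):  # MHz
    print(f"{f:6.0f} MHz: delay = {dispersion_delay(dm, f):.3f} s")
# Delay scales as freq^-2: the 400 MHz pulse lags the 1400 MHz pulse by ~2.4 s.
```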
A second propagation effect is Faraday rotation , which affects linearly polarized radio waves, such as those produced by synchrotron radiation , one of the most common sources of radio emission in astrophysics. Faraday rotation depends on both the electron density and the magnetic field strength, and so is used as a probe of the interstellar magnetic field.
The ISM is generally very transparent to radio waves, allowing unimpeded observations right through the disk of the Galaxy. There are a few exceptions to this rule. The most intense spectral lines in the radio spectrum can become opaque, so that only the surface of the line-emitting cloud is visible. This mainly affects the carbon monoxide lines at millimetre wavelengths that are used to trace molecular clouds, but the 21-cm line from neutral hydrogen can become opaque in the cold neutral medium. Such absorption only affects photons at the line frequencies: the clouds are otherwise transparent. The other significant absorption process occurs in dense ionized regions. These emit photons, including radio waves, via thermal bremsstrahlung . At short wavelengths, typically microwaves , these are quite transparent, but their brightness approaches the black body limit as ∝ λ^2.1, and at wavelengths long enough that this limit is reached, they become opaque. Thus metre-wavelength observations show H II regions as cool spots blocking the bright background emission from Galactic synchrotron radiation, while at decametres the entire galactic plane is absorbed, and the longest radio waves observed, 1 km, can only propagate 10–50 parsecs through the Local Bubble. [ 16 ] The frequency at which a particular nebula becomes optically thick depends on its emission measure, EM = ∫ n_e² dl, the column density of the squared electron number density. Exceptionally dense nebulae can become optically thick at centimetre wavelengths: these are just-formed, and so both rare and small ('ultra-compact H II regions').
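One commonly used power-law fit (a Mezger-Henderson-style approximation; treat the prefactor and exponents as fitted constants) ties the λ^2.1 behaviour to the emission measure. The emission measure and electron temperature below are assumed example values.

```python
def freefree_tau(em_pc_cm6, nu_ghz, te_k=8000.0):
    """Approximate free-free optical depth of an ionized nebula."""
    return 3.28e-7 * (te_k / 1.0e4) ** -1.35 * nu_ghz ** -2.1 * em_pc_cm6

em = 1.0e6  # pc cm^-6 (assumed, illustrative of a fairly dense H II region)
for nu in (0.1, 1.0, 10.0):  # GHz
    print(f"{nu:5.1f} GHz: tau ~ {freefree_tau(em, nu):.3g}")
# tau rises toward low frequency as nu^-2.1, so the nebula becomes opaque
# (tau > 1) below a turnover frequency set by its emission measure.
```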
The general transparency of the ISM to radio waves, especially microwaves, may seem surprising since radio waves at frequencies > 10 GHz are significantly attenuated by Earth's atmosphere (as seen in the figure). But the column density through the atmosphere is vastly larger than the column through the entire Galaxy, due to the extremely low density of the ISM.
The word 'interstellar' (between the stars) was coined by Francis Bacon in the context of the ancient theory of a literal sphere of fixed stars . [ 18 ] Later in the 17th century, when the idea that stars were scattered through infinite space became popular, it was debated whether that space was a true vacuum [ 19 ] or filled with a hypothetical fluid, sometimes called aether , as in René Descartes ' vortex theory of planetary motions. While vortex theory did not survive the success of Newtonian physics , an invisible luminiferous aether was re-introduced in the early 19th century as the medium to carry light waves; e.g., in 1862 a journalist wrote: "this efflux occasions a thrill, or vibratory motion, in the ether which fills the interstellar spaces." [ 20 ]
In 1864, William Huggins used spectroscopy to determine that a nebula is made of gas. [ 21 ] Huggins had a private observatory with an 8-inch telescope with a lens by Alvan Clark ; crucially, it was equipped for spectroscopy, which enabled breakthrough observations. [ 22 ]
From around 1889, Edward Barnard pioneered deep photography of the sky, finding many 'holes in the Milky Way'. At first he compared them to sunspots , but by 1899 was prepared to write: "One can scarcely conceive a vacancy with holes in it, unless there is nebulous matter covering these apparently vacant places in which holes might occur". [ 23 ] These holes are now known as dark nebulae , dusty molecular clouds silhouetted against the background star field of the galaxy; the most prominent are listed in his Barnard Catalogue . The first direct detection of cold diffuse matter in interstellar space came in 1904, when Johannes Hartmann observed the binary star Mintaka (Delta Orionis) with the Potsdam Great Refractor . [ 24 ] [ 25 ] Hartmann reported [ 26 ] that absorption from the "K" line of calcium appeared "extraordinarily weak, but almost perfectly sharp" and also reported the "quite surprising result that the calcium line at 393.4 nanometres does not share in the periodic displacements of the lines caused by the orbital motion of the spectroscopic binary star". The stationary nature of the line led Hartmann to conclude that the gas responsible for the absorption was not present in the atmosphere of the star, but was instead located within an isolated cloud of matter residing somewhere along the line of sight to this star. This discovery launched the study of the interstellar medium.
Slipher further confirmed the existence of interstellar gas in 1909, and of interstellar dust by 1912. [ 27 ] Interstellar sodium was detected by Mary Lea Heger in 1919 through the observation of stationary absorption from the atom's "D" lines at 589.0 and 589.6 nanometres towards Delta Orionis and Beta Scorpii . [ 28 ]
In a series of investigations, Viktor Ambartsumian introduced the now commonly accepted notion that interstellar matter occurs in the form of clouds. [ 29 ]
Subsequent observations of the "H" and "K" lines of calcium by Beals (1936) revealed double and asymmetric profiles in the spectra of Epsilon and Zeta Orionis . These were the first steps in the study of the very complex interstellar sightline towards Orion . Asymmetric absorption line profiles are the result of the superposition of multiple absorption lines, each corresponding to the same atomic transition (for example the "K" line of calcium), but occurring in interstellar clouds with different radial velocities . Because each cloud has a different velocity (either towards or away from the observer/Earth), the absorption lines occurring within each cloud are either blue-shifted or red-shifted (respectively) from the lines' rest wavelength through the Doppler Effect . These observations confirming that matter is not distributed homogeneously were the first evidence of multiple discrete clouds within the ISM.
The growing evidence for interstellar material led Pickering (1912) to comment: "While the interstellar absorbing medium may be simply the ether, yet the character of its selective absorption, as indicated by Kapteyn , is characteristic of a gas, and free gaseous molecules are certainly there, since they are probably constantly being expelled by the Sun and stars."
The same year, Victor Hess 's discovery of cosmic rays , highly energetic charged particles that rain onto the Earth from space, led others to speculate whether they also pervaded interstellar space. The following year, the Norwegian explorer and physicist Kristian Birkeland wrote: "It seems to be a natural consequence of our points of view to assume that the whole of space is filled with electrons and flying electric ions of all kinds. We have assumed that each stellar system in evolutions throws off electric corpuscles into space. It does not seem unreasonable therefore to think that the greater part of the material masses in the universe is found, not in the solar systems or nebulae , but in 'empty' space" ( Birkeland 1913 ).
Thorndike (1930) noted that "it could scarcely have been believed that the enormous gaps between the stars are completely void. Terrestrial aurorae are not improbably excited by charged particles emitted by the Sun. If the millions of other stars are also ejecting ions, as is undoubtedly true, no absolute vacuum can exist within the galaxy."
In September 2012, NASA scientists reported that polycyclic aromatic hydrocarbons (PAHs) , subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation , oxygenation and hydroxylation , to more complex organics , "a step along the path toward amino acids and nucleotides , the raw materials of proteins and DNA , respectively". [ 31 ] [ 32 ] Further, as a result of these transformations, the PAHs lose their spectroscopic signature , which could be one of the reasons "for the lack of PAH detection in interstellar ice grains , particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks ." [ 31 ] [ 32 ]
In February 2014, NASA announced a greatly upgraded database [ 33 ] for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life . PAHs seem to have been formed shortly after the Big Bang , are widespread throughout the universe, and are associated with new stars and exoplanets . [ 34 ]
In April 2019, scientists, working with the Hubble Space Telescope , reported the confirmed detection of the large and complex ionized molecules of buckminsterfullerene (C₆₀) (also known as "buckyballs") in the interstellar medium, the space between the stars. [ 35 ] [ 36 ]
In September 2020, evidence was presented of solid-state water in the interstellar medium, and particularly, of water ice mixed with silicate grains in cosmic dust grains. [ 37 ] | https://en.wikipedia.org/wiki/Interstellar_medium |
In materials science , an interstitial defect is a type of point crystallographic defect where an atom of the same or of a different type occupies an interstitial site in the crystal structure . When the atom is of the same type as those already present, it is known as a self-interstitial defect . Alternatively, small atoms in some crystals may occupy interstitial sites, such as hydrogen in palladium . Interstitials can be produced by bombarding a crystal with elementary particles having energy above the displacement threshold for that crystal, but they may also exist in small concentrations in thermodynamic equilibrium . The presence of interstitial defects can modify the physical and chemical properties of a material.
The idea of interstitial compounds dates to the late 1930s, and they are often called Hägg phases after Hägg. [ 1 ] Transition metals generally crystallise in either the hexagonal close-packed or face-centered cubic structure, both of which can be considered to be made up of layers of hexagonally close-packed atoms. In both of these very similar lattices there are two sorts of interstice, or hole:
It was suggested by early workers that:
These were not viewed as compounds, but rather as solutions of, say, carbon in the metal lattice, with a limiting upper "concentration" of the smaller atom determined by the number of interstices available.
A more detailed knowledge of the structures of metals, and binary and ternary phases of metals and non metals shows that:
One example is the solubility of carbon in iron. The form of pure iron stable between 910 °C and 1390 °C, γ-iron, forms a solid solution with carbon termed austenite , a key constituent of steel .
Self-interstitial defects are interstitial defects which contain only atoms which are the same as those already present in the lattice.
The structure of interstitial defects has been experimentally determined in some metals and semiconductors .
Contrary to what one might intuitively expect, most self-interstitials in metals with a known structure have a 'split' structure, in which two atoms share the same lattice site. [ 2 ] [ 3 ] Typically the center of mass of the two atoms is at the lattice site, and they are displaced symmetrically from it along one of the principal lattice directions . For instance, in several common face-centered cubic (fcc) metals such as copper, nickel and platinum, the ground state structure of the self-interstitial is the split [100] interstitial structure, where two atoms are displaced in a positive and negative [100] direction from the lattice site. In body-centered cubic (bcc) iron the ground state interstitial structure is similarly a [110] split interstitial.
These split interstitials are often called dumbbell interstitials, because plotting the two atoms forming the interstitial with two large spheres and a thick line joining them makes the structure resemble a dumbbell weight-lifting device.
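As a geometric sketch of the [100] dumbbell described above: the lattice constant and split distance below are assumed, illustrative numbers; relaxed geometries come from DFT or interatomic-potential calculations, not from this construction.

```python
import numpy as np

a = 3.615  # lattice constant in angstrom (roughly copper; assumed)
d = 0.6    # dumbbell half-separation along [100], angstrom (assumed)

# Conventional fcc cell: 4 atoms (corner plus three face centres), in angstrom
fcc_basis = a * np.array([[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]])

site = fcc_basis[0]                          # lattice site hosting the defect
dumbbell = np.array([site + [d, 0.0, 0.0],   # two atoms displaced
                     site - [d, 0.0, 0.0]])  # symmetrically along +/-[100]

print(dumbbell)
print(dumbbell.mean(axis=0))  # centre of mass sits on the original lattice site
```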
In bcc metals other than iron, the ground-state structure is believed, based on recent density-functional theory calculations, to be the [111] crowdion interstitial, [ 4 ] which can be understood as a long chain of atoms (typically some 10–20) along the [111] lattice direction, compressed compared to the perfect lattice such that the chain contains one extra atom.
In semiconductors the situation is more complex, since defects may be charged and different charge states may have different structures. For instance, in silicon, the interstitial may either have a split [110] structure or a tetrahedral truly interstitial one. [ 5 ]
Carbon, notably in graphite and diamond, has a number of interesting self-interstitials. Recently discovered using local-density approximation calculations is the "spiro-interstitial" in graphite, named after spiropentane , as the interstitial carbon atom is situated between two basal planes and bonded in a geometry similar to spiropentane. [ 6 ]
Small impurity interstitial atoms are usually on true interstitial sites between the lattice atoms. Large impurity interstitials can also be in split interstitial configurations together with a lattice atom, similar to those of the self-interstitial atom.
Interstitials modify the physical and chemical properties of materials. | https://en.wikipedia.org/wiki/Interstitial_defect |
In crystallography , interstitial sites , holes or voids are the empty spaces that exist between the packing of atoms (spheres) in the crystal structure . [ citation needed ]
The holes are easy to see if you try to pack circles together; no matter how close you get them or how you arrange them, you will have empty space in between. The same is true in a unit cell ; no matter how the atoms are arranged, there will be interstitial sites present between the atoms. These sites or holes can be filled with other atoms ( interstitial defect ). The picture with packed circles is only a 2D representation. In a crystal lattice , the atoms (spheres) would be packed in a 3D arrangement . This results in different shaped interstitial sites depending on the arrangement of the atoms in the lattice.
A close packed unit cell, both face-centered cubic and hexagonal close packed, can form two different shaped holes. Looking at the three green spheres in the hexagonal packing illustration at the top of the page, they form a triangle-shaped hole. If an atom is arranged on top of this triangular hole it forms a tetrahedral interstitial hole. If the three atoms in the layer above are rotated and their triangular hole sits on top of this one, it forms an octahedral interstitial hole. [ citation needed ] In a close-packed structure there are 4 atoms per unit cell and it will have 4 octahedral voids (1:1 ratio) and 8 tetrahedral voids (1:2 ratio) per unit cell. [ 1 ] The tetrahedral void is smaller in size and could fit an atom with a radius 0.225 times the size of the atoms making up the lattice. An octahedral void could fit an atom with a radius 0.414 times the size of the atoms making up the lattice. [ 1 ] An atom that fills this empty space could be larger than this ideal radius ratio, which would lead to a distorted lattice due to pushing out the surrounding atoms, but it cannot be smaller than this ratio. [ 1 ]
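The two ideal radius ratios quoted above follow from the geometry of touching spheres of radius R; a minimal sketch of the derivation:

```python
from math import sqrt

# Octahedral hole: host atoms touch along the face diagonal, 4R = a*sqrt(2),
# and the hole sits at an edge centre, 2(R + r) = a  ->  r/R = sqrt(2) - 1.
r_oct = sqrt(2) - 1

# Tetrahedral hole: the hole sits at (1/4, 1/4, 1/4)a, a quarter body diagonal
# from a corner atom, so R + r = a*sqrt(3)/4 with a = 2*sqrt(2)*R
#   ->  r/R = sqrt(3/2) - 1.
r_tet = sqrt(3 / 2) - 1

print(f"tetrahedral r/R = {r_tet:.3f}")  # ~0.225
print(f"octahedral  r/R = {r_oct:.3f}")  # ~0.414
```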
If half of the tetrahedral sites of the parent FCC lattice are filled by ions of opposite charge, the structure formed is the zincblende crystal structure . If all the tetrahedral sites of the parent FCC lattice are filled by ions of opposite charge, the structure formed is the fluorite structure or antifluorite structure. If all the octahedral sites of the parent FCC lattice are filled by ions of opposite charge, the structure formed is the rock-salt structure .
If half of the tetrahedral sites of the parent HCP lattice are filled by ions of opposite charge, the structure formed is the wurtzite crystal structure. If all the octahedral sites of the anion HCP lattice are filled by cations, the structure formed is the nickel arsenide structure.
A simple cubic unit cell, with stacks of atoms arranged as if at the eight corners of a cube would form a single cubic hole or void in the center. If these voids are occupied by ions of opposite charge from the parent lattice, the cesium chloride structure is formed.
A body-centered cubic unit cell has six octahedral voids located at the center of each face of the unit cell, and twelve further ones located at the midpoint of each edge of the same cell, for a total of six net octahedral voids. Additionally, there are 24 tetrahedral voids located in a square spacing around each octahedral void, for a total of twelve net tetrahedral voids. These tetrahedral voids are not local maxima and are not technically voids, but they do occasionally appear in multi-atom unit cells.
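The "net" counts follow from the usual position-sharing fractions (face sites shared between two cells, edge sites among four); a one-line check:

```python
# Net void counts per bcc conventional cell from position-sharing fractions:
oct_net = 6 * (1 / 2) + 12 * (1 / 4)  # 6 face sites / 2 cells + 12 edge sites / 4 cells
tet_net = 24 * (1 / 2)                # 24 face-located tetrahedral sites / 2 cells
print(oct_net, tet_net)               # -> 6.0 12.0
```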
An interstitial defect refers to additional atoms occupying some interstitial sites at random as crystallographic defects in a crystal which normally has empty interstitial sites by default. | https://en.wikipedia.org/wiki/Interstitial_site |
An interstitial space is an intermediate space located between regular-use floors, commonly found in hospitals and laboratory-type buildings to allow space for the mechanical systems of the building. By providing this space, laboratory and hospital rooms may be easily rearranged throughout their lifecycles, thereby reducing lifecycle costs.
An interstitial space is useful when the mechanical system of the building is highly sophisticated and changing the space on the primary floors is a distinct possibility. The heights of these spaces are generally 6 to 8 feet (1.8 to 2.4 m) and allow easy access for repair or alteration. [ 1 ] If changes or maintenance need to be performed in the interstitial space, the primary space does not need to be shut down, which is important in buildings like hospitals where the equipment in the space must operate constantly. Unlike traditionally built buildings, where the mechanical space is located in the basement or on the top floor, the interstitial space needs few vertical penetrations and therefore leaves more open space on the primary floor. The entire floor plan of these buildings can be more open because there are fewer fixed vertical penetrations through the floor and walls.
Another way to use an interstitial space is to incorporate a design that divides the functions of the building into groups and localizes them. The Zeidler Partnership Architects ' (ZPA) design of the William Osler Health Centre (WOHC) in Brampton, Ontario , is one example of this design. (Note: ZPA produced this design but was not awarded the project.) The groups in this design are based on similar structural and mechanical systems. Flexible design allows for easy expansion or redesign in the future. Horizontal expansion is especially easy because of the interstitial space between the surgical suite and the emergency floors, where the mechanical system functions are the most crucial in this building. Double floor height is used to maintain the horizontal flow of connections throughout the rest of the building without causing any interference with other building systems.
The idea of using an interstitial space originated in the 1960s with professors at Texas A&M University 's College of Architecture. Their concept was to standardize spaces and allow for rapid changes in medical facilities. While the spaces for building systems like plumbing, mechanical, and electrical systems were not as large as today's, it was an important beginning of the idea of separating the systems by floor. The first building to actually use an interstitial space design was Louis Kahn 's Salk Institute for Biological Studies in La Jolla, California . The design allowed the building to keep up with ever-changing technology. [ 2 ] From there, designs progressed to concepts created by Zeidler Partnership Architects (ZPA), a firm that has completed over 40 healthcare and lab facilities using an interstitial space design. [ 3 ] Today, many firms have drawn inspiration from ZPA and use their concept to develop their own designs. Some designs cover the whole floor area and some, like the WOHC, are partial interstitial spaces.
Interstitial spaces are exceedingly useful when a building needs to be remodeled . In a medical or lab facility, where technology can change quickly, future equipment sizes and requirements can be unpredictable. With an interstitial space, room layouts on the primary floor may be altered much more easily than in traditionally designed buildings, since there are fewer service stacks penetrating the floors. The walls can be arranged and rearranged freely. If a drastic renovation must occur, only one floor at a time has to be shut down for renovation, instead of the whole building. The cost of the building is reduced significantly since major equipment does not have to be changed during a renovation. Lifecycle cost includes anything that pertains to the building from its schematic design phase until it is demolished. A chart of the cost distribution is shown in figure 2. If the equipment itself must be retrofitted , it can be done faster, since the spaces have ample area to work in and are separated by floor. The lifetime of the building may also be increased, since the adaptable spaces may be retrofitted instead of the building needing to be torn down and redesigned.
Separating the building systems from the primary space can also be helpful during construction. If sequenced correctly, it can decrease the installation time of major equipment significantly. Each trade may work on one floor and move to the next after another is finished. Also, wall, ceiling, and floor finishes may be worked on while the building systems are installed as opposed to a traditionally designed building where they would have to wait for more equipment to be installed.
Another advantage of using an interstitial space is that easy access to the equipment in them may encourage preventative maintenance. If a more efficient system can be installed easily, it can again reduce lifecycle cost.
The largest and best-known drawback of interstitial spaces is a high first cost. [ 4 ] Adding more floors increases the amount of material used for floor decks, walls, etc. The construction expense grows with each addition because it affects many other systems in the building. One main driver is the amount of façade material necessary to cover the skin of the building. Depending on that material, the cost and time of construction can be greatly inflated.
Equipment costs can also become a large deterrent for owners. Several smaller pieces of equipment must be purchased for each floor instead of one large piece for the whole building; in almost every case, the single large piece of equipment is much cheaper than all of the small pieces combined. | https://en.wikipedia.org/wiki/Interstitial_space |
In anatomy , the interstitium is a contiguous fluid-filled space existing between a structural barrier, such as a cell membrane or the skin , and internal structures, such as organs , including muscles and the circulatory system . [ 1 ] [ 2 ] The fluid in this space is called interstitial fluid , comprises water and solutes , and drains into the lymph system. [ 2 ] The interstitial compartment is composed of connective and supporting tissues within the body – called the extracellular matrix – that are situated outside the blood and lymphatic vessels and the parenchyma of organs. [ 2 ] [ 3 ] The role of the interstitium in solute concentration, protein transport and hydrostatic pressure impacts human pathology and physiological responses such as edema , inflammation and shock . [ 4 ]
The non-fluid parts of the interstitium are predominantly collagen types I, III, and V; elastin ; and glycosaminoglycans , such as hyaluronan and proteoglycans , that are cross-linked to form a honeycomb -like reticulum. [ 3 ] Collagen bundles of the extracellular matrix form scaffolding with a high tensile strength. Interstitial cells (e.g., fibroblasts , dendritic cells, adipocytes , interstitial cells of Cajal and inflammatory cells, such as macrophages and mast cells ) serve a variety of structural and immune functions. [ 3 ] [ 4 ] Fibroblasts synthesize structural molecules as well as enzymes that break down polymeric molecules. [ 3 ] Such structural components exist both for the general interstitium of the body, [ 2 ] and within individual organs, such as the myocardial interstitium of the heart , [ 5 ] the renal interstitium of the kidney , [ 6 ] and the pulmonary interstitium of the lung .
The interstitium in the submucosae of visceral organs, the dermis, superficial fascia, and perivascular adventitia consists of fluid-filled spaces supported by a collagen-bundle lattice. Blind-ended, highly permeable lymphatic capillaries extend into the interstitium. The fluid spaces communicate with draining lymph nodes, although they do not have the lining cells or structures of lymphatic channels. [ 7 ] Interstitial fluid entering the lymphatic system becomes lymph , which is transported through lymphatic vessels until it empties into the microcirculation and the venous system . [ 4 ]
The interstitial fluid is a reservoir and transportation system for nutrients and solutes distributing among organs, cells, and capillaries , for signaling molecules communicating between cells, and for antigens and cytokines participating in immune regulation . [ 2 ] The structure of the gel reticulum plays a role in the distribution of solutes across the interstitium, as the microstructure of the extracellular matrix in some parts excludes larger molecules (exclusion volume). The density of the collagen matrix fluctuates with the fluid volume of the interstitium. Increasing fluid volume is associated with a decrease in matrix fiber density, and a lower exclusion volume. [ 8 ] [ 3 ]
The total fluid volume of the interstitium during health is equivalent to about 20% of body weight, but this space is dynamic and may change in volume and composition during immune responses and in conditions such as cancer , and specifically within the interstitium of tumors . [ 2 ] The amount of interstitial fluid varies from about 50% of the tissue weight in skin to about 10% in skeletal muscle . [ 2 ] Interstitial fluid pressure is variable, ranging from −1 to −4 mmHg in tissues like the skin, intestine and lungs to 21 to 24 mmHg in the liver, kidney and myocardium. Generally, increasing interstitial volume is associated with increased interstitial pressure and microvascular filtration. [ 8 ]
The renal interstitium facilitates solute and water transport between blood and urine in the vascular and tubular elements of the kidneys, and water reabsorption through changes in solute concentrations and hydrostatic gradients. [ 9 ] [ 10 ] The myocardial interstitium participates in ionic exchanges associated with the spread of electrical events. [ 11 ] The pulmonary interstitium allows for fluctuations in lung volume between inspiration and expiration. [ 12 ]
The composition and chemical properties of the interstitial fluid vary among organs and undergo changes in chemical composition during normal function, as well as during body growth , conditions of inflammation , and development of diseases , [ 2 ] as in heart failure [ 5 ] and chronic kidney disease . [ 6 ]
In people with lung diseases , heart disease, cancer , kidney disease, immune disorders , and periodontal disease , the interstitial fluid and lymph system are sites where disease mechanisms may develop. [ 2 ] [ 5 ] [ 6 ] [ 13 ] Interstitial fluid flow is associated with the migration of cancer cells to metastatic sites. [ 2 ] [ 14 ] The enhanced permeability and retention effect refers to increased interstitial flow causing a neutral or reversed pressure differential between blood vessels and healthy tissue, limiting the distribution of intravenous drugs to tumors, which under other circumstances display a high-pressure gradient at their periphery. [ 14 ]
Changes in interstitial volume and pressure play critical roles in the onset of conditions like shock and inflammation. [ 3 ] [ 4 ] During hypovolemic shock , digestive enzymes and inflammatory agents diffuse to the interstitial space, then drain into the mesenteric lymphatic system and enter into circulation, contributing to systemic inflammation . [ 4 ] Accumulation of fluid in the interstitial space (interstitial edema) is caused by increased microvascular pressure and permeability, a positive-feedback mechanism that further increases the rate of microvascular filtration into the interstitial space. [ 4 ] Decreased lymphatic drainage due to blockage can compound these effects. Interstitial edema can prevent oxygen diffusion across tissue and, in the brain, kidney and intestines, lead to the onset of compartment syndrome. [ 4 ] | https://en.wikipedia.org/wiki/Interstitium |
Intersubband transitions (also known as intraband transitions ) are dipole-allowed optical excitations between the quantized electronic energy levels within the conduction band of semiconductor heterostructures . Intersubband transitions, when coupled with an optical resonator, form new, mixed light-matter states. This mixing is referred to as an intersubband cavity polariton . These transitions exhibit an anticrossing in energy with a separation known as vacuum-Rabi splitting , similar to level repulsion in atomic physics .
A cascading of intersubband transitions is the mechanism behind a quantum cascade laser which produces a monochromatic coherent light-source at infrared wavelengths.
Most metals reflect almost all visible light, due to the presence of free charges, and are therefore silvery in color or mirror-like. However, some metals like gold and copper are more reddish, and this is due to absorption from interband transitions that occur at blue wavelengths.
| https://en.wikipedia.org/wiki/Intersubband_polariton |
In telecommunications , intersymbol interference ( ISI ) is a form of distortion of a signal in which one symbol interferes with subsequent symbols. This is an unwanted phenomenon as the previous symbols have an effect similar to noise , thus making the communication less reliable. The spreading of the pulse beyond its allotted time interval causes it to interfere with neighboring pulses. [ 1 ] ISI is usually caused by multipath propagation or the inherent linear or non-linear frequency response of a communication channel causing successive symbols to blur together.
The presence of ISI in the system introduces errors in the decision device at the receiver output. Therefore, in the design of the transmitting and receiving filters, the objective is to minimize the effects of ISI, and thereby deliver the digital data to its destination with the smallest error rate possible.
Ways to alleviate intersymbol interference include adaptive equalization and error correcting codes . [ 2 ]
One of the causes of intersymbol interference is multipath propagation in which a wireless signal from a transmitter reaches the receiver via multiple paths. The causes of this include reflection (for instance, the signal may bounce off buildings), refraction (such as through the foliage of a tree) and atmospheric effects such as atmospheric ducting and ionospheric reflection . Since the various paths can be of different lengths, this results in the different versions of the signal arriving at the receiver at different times. These delays mean that part or all of a given symbol will be spread into the subsequent symbols, thereby interfering with the correct detection of those symbols. Additionally, the various paths often distort the amplitude and/or phase of the signal, thereby causing further interference with the received signal.
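A minimal simulation of multipath-induced ISI, assuming an illustrative two-tap channel (a direct path plus an echo delayed by one symbol; the tap values are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, +1.0], size=10)  # random BPSK symbols
channel = np.array([1.0, 0.5])               # [direct path, one-symbol echo] (assumed)

# Convolving the symbol stream with the channel taps smears each symbol
# into its neighbour.
received = np.convolve(symbols, channel)[:len(symbols)]

for tx, rx in zip(symbols, received):
    print(f"sent {tx:+.0f}  received {rx:+.1f}")
# Whenever consecutive symbols differ, the echo drags the received sample
# toward the previous symbol, shrinking the decision margin.
```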
Another cause of intersymbol interference is the transmission of a signal through a bandlimited channel, i.e., one where the frequency response is zero above a certain frequency (the cutoff frequency). Passing a signal through such a channel results in the removal of frequency components above this cutoff frequency. In addition, components of the frequency below the cutoff frequency may also be attenuated by the channel.
This filtering of the transmitted signal affects the shape of the pulse that arrives at the receiver. Filtering a rectangular pulse not only changes its shape within the first symbol period, but also spreads it out over the subsequent symbol periods. When a message is transmitted through such a channel, the spread pulse of each individual symbol will interfere with following symbols.
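A sketch of this smearing, using an illustrative moving-average low-pass filter as a stand-in for a band-limited channel (real channel responses differ, but the spreading effect is the same):

```python
import numpy as np

sps = 8                                                    # samples per symbol period
pulse = np.concatenate([np.ones(sps), np.zeros(3 * sps)])  # one symbol, then silence
channel = np.ones(12) / 12                                 # crude low-pass impulse response (assumed)

received = np.convolve(pulse, channel)[: 4 * sps]
area_per_period = received.reshape(4, sps).sum(axis=1)
print(area_per_period)  # the pulse's area spreads into later symbol periods
```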
Bandlimited channels are present in both wired and wireless communications. The limitation is often imposed by the desire to operate multiple independent signals through the same area/cable; due to this, each system is typically allocated a piece of the total bandwidth available. For wireless systems, they may be allocated a slice of the electromagnetic spectrum to transmit in (for example, FM radio is often broadcast in the 87.5–108 MHz range). This allocation is usually administered by a government agency ; in the case of the United States this is the Federal Communications Commission (FCC). In a wired system, such as an optical fiber cable , the allocation will be decided by the owner of the cable.
The bandlimiting can also be due to the physical properties of the medium - for instance, the cable being used in a wired system may have a cutoff frequency above which practically none of the transmitted signal will propagate.
Communication systems that transmit data over bandlimited channels usually implement pulse shaping to avoid interference caused by the bandwidth limitation. If the channel frequency response is flat and the shaping filter has a finite bandwidth, it is possible to communicate with no ISI at all. Often the channel response is not known beforehand, and an adaptive equalizer is used to compensate the frequency response.
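A minimal sketch of the zero-ISI property of one common pulse-shaping choice, the raised-cosine pulse (roll-off factor assumed to be 0.35 here): it is zero at every nonzero integer multiple of the symbol period, so adjacent pulses do not disturb each other's sampling instants.

```python
import numpy as np

def raised_cosine(t, T=1.0, beta=0.35):
    """Raised-cosine pulse, normalized so that h(0) = 1."""
    t = np.asarray(t, dtype=float)
    denom = 1.0 - (2.0 * beta * t / T) ** 2
    safe = np.where(denom == 0.0, np.nan, denom)
    h = np.sinc(t / T) * np.cos(np.pi * beta * t / T) / safe
    # fill the removable singularity at t = +/- T/(2*beta) with its limit
    return np.where(denom == 0.0, (np.pi / 4) * np.sinc(1.0 / (2.0 * beta)), h)

t_sym = np.arange(-4, 5)  # sampling instants, in units of T
print(np.round(raised_cosine(t_sym), 12))  # ~ [0 0 0 0 1 0 0 0 0]
```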
One way to study ISI in a PCM or data transmission system experimentally is to apply the received wave to the vertical deflection plates of an oscilloscope and to apply a sawtooth wave at the transmitted symbol rate R (R = 1/T) to the horizontal deflection plates. The resulting display is called an eye pattern because of its resemblance to the human eye for binary waves. The interior region of the eye pattern is called the eye opening. An eye pattern provides a great deal of information about the performance of the pertinent system.
An eye pattern, which overlays many samples of a signal, can give a graphical representation of the signal characteristics. The first image above is the eye pattern for a binary phase-shift keying (PSK) system in which a one is represented by an amplitude of −1 and a zero by an amplitude of +1. The current sampling time is at the center of the image and the previous and next sampling times are at the edges of the image. The various transitions from one sampling time to another (such as one-to-zero, one-to-one and so forth) can clearly be seen on the diagram.
The noise margin, the amount of noise required to cause an error at the receiver, is given by the distance between the signal and the zero-amplitude point at the sampling time; in other words, the further the signal is from zero at the sampling time, the better. For the signal to be correctly interpreted, it must be sampled somewhere between the two points where the zero-to-one and one-to-zero transitions cross. Again, the further apart these points are the better, as this means the signal will be less sensitive to errors in the timing of the samples at the receiver.
The effects of ISI are shown in the second image which is an eye pattern of the same system when operating over a multipath channel. The effects of receiving delayed and distorted versions of the signal can be seen in the loss of definition of the signal transitions. It also reduces both the noise margin and the window in which the signal can be sampled, which shows that the performance of the system will be worse (i.e. it will have a greater bit error ratio ).
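A numerical analogue of the oscilloscope procedure above: slice an oversampled waveform into symbol-length traces and overlay them. The waveform and smoothing filter below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
sps = 8                                                      # samples per symbol
wave = np.repeat(rng.choice([-1.0, 1.0], 200), sps)          # oversampled BPSK
wave = np.convolve(wave, np.ones(sps) / sps, mode="same")    # crude band-limiting (assumed)

# Slice the waveform into two-symbol-long traces; overlaying them (e.g. with
# matplotlib, one line per row) reproduces the eye diagram.
traces = wave[: (len(wave) // (2 * sps)) * 2 * sps].reshape(-1, 2 * sps)
print(traces.shape)  # (number of overlaid traces, samples per trace)
```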
There are several techniques in telecommunications and data storage that try to work around the problem of intersymbol interference.
Coded modulation systems also exist that intentionally build a controlled amount of ISI into the system at the transmitter side, a technique known as faster-than-Nyquist signaling . Such a design trades a computational-complexity penalty at the receiver for a Shannon capacity gain in the overall transceiver system. [ 3 ] | https://en.wikipedia.org/wiki/Intersymbol_interference |
Intersystem crossing ( ISC ) is an isoenergetic radiationless process involving a transition between two electronic states of different spin multiplicity . [ 1 ]
When an electron in a molecule with a singlet ground state is excited (via absorption of radiation) to a higher energy level, either an excited singlet state or an excited triplet state will form. A singlet state is a molecular electronic state in which all electron spins are paired; that is, the spin of the excited electron is still paired with that of the ground-state electron (a pair of electrons in the same energy level must have opposite spins, per the Pauli exclusion principle ). In a triplet state the excited electron is no longer paired with the ground-state electron; that is, their spins are parallel. Since excitation to a triplet state involves an additional "forbidden" spin transition, it is less probable that a triplet state will form when the molecule absorbs radiation.
When a singlet state nonradiatively passes to a triplet state, or conversely a triplet transitions to a singlet, that process is known as intersystem crossing. In essence, the spin of the excited electron is reversed. The probability of this process occurring is more favorable when the vibrational levels of the two excited states overlap, since little or no energy must be gained or lost in the transition. Intersystem crossing is most common in heavy-atom molecules (e.g. those containing iodine or bromine ), because the spin-orbit interactions in such molecules are substantial, making a change in spin more favourable. This mechanism is called " spin-orbit coupling ". Simply stated, it involves coupling of the electron spin with the orbital angular momentum of non-circular orbits. In addition, the presence of paramagnetic species in solution enhances intersystem crossing. [ 2 ]
The radiative decay from an excited triplet state back to a singlet state is known as phosphorescence . Since a transition in spin multiplicity occurs, phosphorescence is a manifestation of intersystem crossing. The time scale of intersystem crossing is on the order of 10 −8 to 10 −3 s, one of the slowest forms of relaxation. [ 3 ]
Once a metal complex undergoes metal-to-ligand charge transfer , the system can undergo intersystem crossing, which, in conjunction with the tunability of MLCT excitation energies, produces a long-lived intermediate whose energy can be adjusted by altering the ligands used in the complex. Another species can then react with the long-lived excited state via oxidation or reduction, thereby initiating a redox pathway via tunable photoexcitation . Complexes containing high-atomic-number d 6 metal centers, such as Ru(II) and Ir(III), are commonly used for such applications because their more intense spin-orbit coupling favors intersystem crossing. [ 4 ]
Complexes that have access to d orbitals can access spin multiplicities beyond the singlet and triplet states, since some complexes have orbitals of similar or degenerate energies, making it energetically favorable for electrons to be unpaired. It is then possible for a single complex to undergo multiple intersystem crossings, as in light-induced excited spin-state trapping (LIESST), where, at low temperatures, a low-spin complex can be irradiated and undergo two instances of intersystem crossing. For Fe(II) complexes, the first intersystem crossing occurs from the singlet to the triplet state, followed by intersystem crossing between the triplet and the quintet state. At low temperatures, the low-spin state is favored, but the quintet state is unable to relax back to the low-spin ground state because of the difference in zero-point energy and metal-ligand bond length between the two states. The reverse process is also possible for cases such as [Fe( ptz ) 6 ](BF 4 ) 2 , but the singlet state is not fully regenerated, as the energy needed to excite the quintet ground state to the excited state required for intersystem crossing to the triplet state overlaps with multiple bands corresponding to excitations of the singlet state that lead back to the quintet state. [ 5 ]
Fluorescence microscopy relies upon fluorescent compounds, or fluorophores , to image biological systems. Since fluorescence and phosphorescence are competing relaxation pathways, a fluorophore that undergoes intersystem crossing to the triplet excited state no longer fluoresces; instead it remains in the relatively long-lived triplet excited state before phosphorescing and relaxing back to the singlet ground state, after which it may again undergo repeated excitation and fluorescence. This process, in which fluorophores temporarily do not fluoresce, is called blinking . While in the triplet excited state, the fluorophore may undergo photobleaching , a process in which the fluorophore reacts with another species in the system, which can lead to the loss of its fluorescent character. [ 6 ]
To regulate these triplet-state-dependent processes, the rate of intersystem crossing can be adjusted to either favor or disfavor formation of the triplet state. Fluorescent biomarkers, including both quantum dots and fluorescent proteins , are often optimized to maximize the quantum yield and intensity of the fluorescent signal, which is accomplished in part by decreasing the rate of intersystem crossing. Methods of adjusting the rate of intersystem crossing include the addition of Mn 2+ to the system, which increases the rate of intersystem crossing for rhodamine and cyanine dyes. [ 7 ] Changing the metal in the photosensitizer groups bound to CdTe quantum dots can also affect the rate of intersystem crossing, as the use of a heavier metal can favor intersystem crossing through the heavy-atom effect. [ 8 ]
The viability of organometallic polymers in bulk heterojunction organic solar cells has been investigated due to their donor capability. The efficiency of charge separation at the donor–acceptor interface can be improved through the use of heavy metals, as their increased spin-orbit coupling promotes the formation of the triplet MLCT excited state, which could improve exciton diffusion length and reduce the probability of recombination due to the extended lifespan of the spin-forbidden excited state. By improving the efficiency of the charge-separation step of the bulk heterojunction solar cell mechanism, the power conversion efficiency also improves. Improved charge-separation efficiency has been shown to result from the formation of the triplet excited state in some conjugated platinum–acetylide polymers. However, as the size of the conjugated system increases, the heavy-atom effect diminishes, and the polymer instead becomes more efficient because the increased conjugation reduces the bandgap . [ 9 ]
In 1933, Aleksander Jabłoński published his conclusion that the extended lifetime of phosphorescence was due to a metastable excited state at an energy lower than the state first reached upon excitation. Based upon this research, Gilbert Lewis and coworkers, during their investigation of organic molecule luminescence in the 1940s, concluded that this metastable energy state corresponded to the triplet electron configuration. Lewis confirmed the triplet state by applying a magnetic field to the excited phosphor: only the metastable state has a long enough lifetime to be analyzed, and the phosphor would respond only if it were paramagnetic, i.e., if it had at least one unpaired electron. Their proposed pathway of phosphorescence included the forbidden spin transition occurring when the potential energy curves of the singlet excited state and the triplet excited state cross, from which the term intersystem crossing arose. [ 10 ] | https://en.wikipedia.org/wiki/Intersystem_crossing |
In economics , intertemporal choice is the study of the relative value people assign to two or more payoffs at different points in time. This relationship is usually simplified to today and some future date. Intertemporal choice was introduced by Canadian economist John Rae in 1834 in the "Sociological Theory of Capital". Later, Eugen von Böhm-Bawerk in 1889 and Irving Fisher in 1930 elaborated on the model.
According to this model there are three types of consumption: past, present and future .
When making decisions between present and future consumption, the consumer takes his/her previous consumption into account.
This decision making is based on an indifference map with a negative slope, because consuming something today means it cannot be consumed in the future, and vice versa.
Saving present income earns a return in the form of interest. The real interest rate is approximately the nominal interest rate minus the inflation rate: real interest rate ≈ nominal interest rate − inflation.
Denote by Y(t) the income received in the present period, by Y(t+1) the income received in the future period, and by r the real interest rate.
Then the maximum present consumption is: Y ( t ) + Y ( t + 1 ) 1 + r {\displaystyle Y(t)+{\frac {Y(t+1)}{1+r}}}
The maximum future consumption is: ( 1 + r ) Y ( t ) + Y ( t + 1 ) {\displaystyle (1+r)Y(t)+Y(t+1)}
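As a numerical illustration, the short sketch below (illustrative income figures; r denotes the real interest rate) computes the two endpoints of the intertemporal budget line from the formulas above.

```python
# Sketch: endpoints of the two-period intertemporal budget line.
def max_present_consumption(y_now, y_future, r):
    # Consume all current income plus the present value of future income.
    return y_now + y_future / (1 + r)

def max_future_consumption(y_now, y_future, r):
    # Save all current income at rate r and consume everything later.
    return (1 + r) * y_now + y_future

y_t, y_t1, r = 100.0, 110.0, 0.05
print(max_present_consumption(y_t, y_t1, r))  # 100 + 110/1.05 = 204.76...
print(max_future_consumption(y_t, y_t1, r))   # 1.05*100 + 110 = 215.0
```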
| https://en.wikipedia.org/wiki/Intertemporal_choice |
An intertidal biofilm is a biofilm that forms in the intertidal region of bodies of water. Bacteria and various microorganisms , including algae and fungi , form communities of adhered cells called biofilms . [ 1 ] A matrix of extracellular polymeric substances (EPS) within the biofilm forms sticky coatings on individual sediment particles and detrital surfaces. [ 2 ] This matrix protects bacteria against environmental stresses like temperature and pH fluctuations, UV exposure, changes in salinity, depletion of nutrients, antimicrobial agents , desiccation , and predation . [ 1 ] [ 2 ] Particularly in the ever-changing environments of intertidal systems , biofilms can facilitate a range of microbial processes and create protective microenvironments where cells communicate with each other and regulate further biofilm formation via quorum sensing (QS) . [ 2 ] [ 3 ] While biofilm formation is advantageous to the bacteria and other microorganisms involved, the attachment of microorganisms to ship hulls can increase fuel consumption and the emission of greenhouse gases , as well as introduce non-indigenous species (NIS), potentially resulting in harmful economic and ecological impacts on the receiving ecosystems . [ 4 ]
Biofilm formation begins with the initial attachment of microorganisms to a substrate, such as rocks, shells, or sand in the intertidal zone. This process occurs during the reversible attachment phase, in which the microorganisms only lightly adhere to the substrate. [ 5 ] In this phase, the bacteria are encompassed in small amounts of EPS; they are still capable of individual movement and may return to planktonic life. [ 5 ] [ 6 ] Microorganisms may attach to the surface of substrates by weak Van der Waals forces and hydrophobic effects . [ 7 ] A study of Pseudomonas aeruginosa mutants showed that twitching motility by type IV pili contributes to the organism's ability to aggregate on substrates. [ 8 ] Another mechanism by which bacteria may adhere to surfaces is the binary division of attached cells. [ 5 ] Similar to colony formation on agar plates, as cells divide, the daughter cells spread expansively, forming cell clusters. [ 5 ] In all cases, adhesion depends on the microorganisms involved, the nature of the substrate, and the chemical and biological conditions of the environment.
The next stage is the irreversible attachment stage, in which microbes start producing EPS. This process creates a three-dimensional polymer network that acts as the biofilm matrix and encloses the bacteria. [ 9 ] In this stage, EPS prevent bacterial cells from moving, keeping them in long-term close contact and allowing interactions such as cell-to-cell communication and horizontal gene transfer to occur. [ 9 ] In most biofilms, the microbes constitute less than 10% of the dry mass, while the EPS matrix can comprise over 90%. [ 9 ]
Following the irreversible phase, the next phase of the biofilm life cycle is maturation. In this stage, EPS play a critical role in protecting the biofilm from environmental fluctuations such as oxidative damage , antimicrobials, and host immune system response . [ 10 ] Microcolonies are formed as a result of the aggregation of microbial cells and the increase of microbes with accessible nutrients. [ 6 ] With the increase in cells, the biofilm matures and develops into a "tower" or "mushroom" like structure with a complex architecture of fluid-filled channels and pores. [ 5 ] [ 6 ] [ 10 ]
Detachment, also known as dispersal, is the final stage of the biofilm life cycle. In this stage, cells are released from the biofilm matrix, individually or in clusters, and either resume planktonic life or attach to another surface. [ 5 ] [ 6 ] Various factors can lead to cell detachment, including insufficient nutrients, competition, lack of oxygen, and environmental factors. [ 10 ]
Marine biofilm communities have rich and diverse taxa, [ 11 ] with Cyanobacteria and Proteobacteria being the dominant phyla. [ 12 ] Actinobacteria , Bacteroidetes , and Planctomycetes are also considered to be dominant phyla but their relative abundances differ between locations. [ 12 ] Site-specific differences also arise within intertidal biofilms. For instance, intertidal biofilms in Río de la Plata contained high amounts of Betaproteobacteria from the Thauera genus, [ 13 ] whereas intertidal biofilms along the Pearl River Estuary contained Alphaproteobacteria and Gammaproteobacteria as the most prominent taxa. [ 14 ]
Diatoms are a major component of intertidal biofilms, [ 15 ] and they excrete EPS that support many functions, such as desiccation resistance, motility, and metabolite exchange. [ 16 ] The EPS produced by microalgae also enhance biofilm growth and help other members of the biofilm with adhesion and migration. [ 17 ] EPS are mostly composed of polysaccharides, but may also include proteins , nucleic acids , lipids , and low-molecular-weight, non-carbohydrate compounds. [ 16 ]
Intertidal biofilms exhibit stratification , where different microorganisms arrange themselves in layers based on factors like seasonality . Microalgae are found on the lower shore [ 18 ] but their distribution can change. During the winter, a greater abundance and biomass of microalgae are found on the upper shore compared to the lower shore. [ 19 ] Seasonal variability is also observed in the relative abundance of microalgae in intertidal biofilms. Specifically, microalgae in tropical and temperate intertidal biofilms are most abundant during winter and spring, with abundance decreasing in the warmer months. [ 20 ] Cyanobacteria are relatively less affected by seasonal variation. [ 21 ] This may be attributed to their greater tolerance to stressors such as temperature and insolation. [ 22 ]
Interactions within biofilms are bidirectional. They can be affected by negative and positive feedback loops , as well as indirect effects. [ 23 ] These interactions contribute to the resilience and adaptability of intertidal biofilms. [ 24 ]
Within intertidal biofilms, trophic interactions exist between microphytobenthos and bacteria. [ 25 ] EPS, which are produced by microphytobenthos, act as a trophic resource, but their large size makes them difficult to break down and assimilate. [ 26 ] Bacteria secrete various enzymes like β-glucosidase to break down complex carbohydrate compounds in EPS. [ 27 ] These carbohydrates serve as a nutrient source for heterotrophic bacteria and sulfate-reducing bacteria (SRB) , [ 2 ] as well as a carbon source for consumers such as marine invertebrates. [ 28 ]
Biofilm communities facilitate both intra-species communication and inter-species communication through QS, which relies on the production and release of signaling molecules known as autoinducers . [ 29 ] When autoinducers reach a specific threshold concentration, signaling pathways are activated, resulting in physiological changes. [ 2 ] QS, alongside other methods of cell signaling regulation, is important for intertidal biofilms, as it allows them to survive in fluctuating environments and varying conditions. [ 30 ] This is because the expression of many genes in biofilms is shown to be density-dependent, with QS playing a crucial role in modulating feedback loops. [ 31 ] Autoinducer signaling has also been shown to produce biofilms with a very different architecture from those lacking QS capabilities. [ 32 ]
Intertidal biofilms exhibit diverse adaptation mechanisms to cope with fluctuating conditions such as light stress, metal ion and oxidative stress , and desiccation stress.
As intertidal biofilms are found in locations with fluctuating environmental conditions, biofilm microalgae are often damaged by the accumulation of reactive oxygen species (ROS) . [ 33 ] High levels of ROS induce photoinhibition , inactivating the photosystem II protein D1 and negatively affecting primary productivity. [ 34 ] In these conditions, estuarine diatoms improve the efficiency of the xanthophyll cycle , [ 35 ] limiting the amount of photodamage and providing the biofilm with a photo-protective mechanism. [ 36 ] Vertical migration also allows members of the biofilm community to adapt to light stress. [ 37 ] Cells migrate toward the sediment surface when a tide leaves, then migrate downwards upon the arrival of an incoming tide . [ 38 ]
Industrial activities in intertidal regions lead to increased concentrations of heavy metals such as copper , zinc , and cadmium , resulting in metal ion stress for the biofilms. [ 2 ] To adapt to these conditions, genes involved in metal ion transport and secondary metabolism are over-expressed by intertidal microorganisms, allowing them to transport the heavy metals against electrochemical gradients and prevent toxicity. [ 39 ] The expression of EPS is also enhanced when exposed to increased levels of heavy metals. [ 39 ] EPS serves as an adaptive mechanism to tolerate metal ion stress as its components have functional groups that bind toxic heavy metals and prevent heavy metal toxicity. [ 40 ]
Desiccation leads to a significant decrease in the photosynthetic activity of microphytobenthos in biofilms. [ 41 ] To slow down desiccation, diatoms and bacteria in the biofilm produce EPS, decreasing the rate of water loss and dehydration . [ 42 ] EPS produced from a Microbacterium species have also been identified to have surfactant properties, playing a role in protecting against desiccation. [ 43 ] Alternatively, another protection mechanism against desiccation involves vertical migration, the same strategy that microorganisms use to protect against light stress. Motile diatoms migrate downwards when exposed to extreme light and temperature conditions, [ 44 ] as this allows them to be present in a moist microenvironment and mitigate the effects on photosynthetic activity.
The cohesive nature of EPS contributes not only to the sediment's stability, preventing its resuspension under erosion, but also enhances flocculation processes. Flocculation processes involve the accumulation of fine sediments into larger flocs, modifying biogeochemical exchanges . This stabilization is important for geomorphologic evolution and the ecosystem health of coastal areas. [ 45 ] A study from Jiangsu coast, China concluded that flocculation processes affect the density, particle size, and settling velocity of suspended particles, which are essential for sedimentation and sediment transport. [ 45 ] These processes are also important in biogeochemical cycles for nutrients and heavy metals due to the adsorption ability and transport function of particles in flocs. [ 45 ]
Intertidal marine biofilms on rocky substrates significantly impact estuarine carbon and nutrient dynamics. Biofilms in the Douro River estuary were observed to engage actively in biogeochemical processes, showing considerable net primary production that greatly exceeded respiration rates. [ 46 ] These biofilms play a key role in nutrient fluxes, consistently removing nitrate and silicate from the water column while exhibiting variable fluxes of ammonium depending on light conditions, indicating a preference for ammonium assimilation by primary producers within the biofilms. Despite their limited spatial coverage, rocky biofilms account for a significant portion of the nitrate and silicate uptake compared to adjacent sandy and muddy sediments within the estuary. [ 46 ]
The attachment and growth of marine organisms on submerged artificial structures, such as ship hulls and aquaculture infrastructures, can cause ecological and economic issues. This biofouling leads to increased drag resistance, fuel consumption, and greenhouse gas emissions for ships. It also restricts water exchange, raises disease risk, and causes deformation in aquaculture setups. [ 4 ]
Biofouling on ships, both as hull fouling and through solid ballast (sand, rocks, and soil), is a major pathway for the arrival of NIS into new regions. [ 4 ] This introduces significant risks to receiving ecosystems, potentially resulting in significant economic and ecological impacts. Ports, which are primary receivers of maritime trade goods, are particularly at high risk for NIS introductions. [ 4 ] Monitoring NIS presence and impacts, while implementing preventive measures to minimize their settlement, are critical for marine environmental management.
A study conducted along the southeast coast of Brazil showed that the effects of human activities, such as trampling , were minimal. A trend of increased variability in biofilm biomass was observed with more intense trampling, but no significant differences were found across trampling frequencies and intensities. The microorganisms' small size, which prevents complete removal by trampling, and the biofilms' capacity for rapid recovery may contribute to their high resilience to physical disturbance. [ 47 ] | https://en.wikipedia.org/wiki/Intertidal_biofilm |
Intertidal ecology is the study of intertidal ecosystems , where organisms live between the low and high tide lines. At low tide, the intertidal is exposed whereas at high tide, the intertidal is underwater. Intertidal ecologists therefore study the interactions between intertidal organisms and their environment, as well as between different species of intertidal organisms within a particular intertidal community. The most important environmental and species interactions may vary based on the type of intertidal community being studied, the broadest of classifications being based on substrates— rocky shore and soft bottom communities. [ 1 ] [ 2 ]
Organisms living in this zone have a highly variable and often hostile environment, and have evolved various adaptations to cope with and even exploit these conditions. One easily visible feature of intertidal communities is vertical zonation , where the community is divided into distinct vertical bands of specific species going up the shore. Species ability to cope with abiotic factors associated with emersion stress, such as desiccation determines their upper limits, while biotic interactions e.g. competition with other species sets their lower limits. [ 1 ]
Intertidal regions are utilized by humans for food and recreation, but anthropogenic actions also have major impacts, with overexploitation , invasive species and climate change being among the problems faced by intertidal communities. In some places Marine Protected Areas have been established to protect these areas and aid in scientific research . [ 3 ]
Intertidal habitats can be characterized as having either hard or soft bottoms substrates. [ 4 ] Rocky intertidal communities occur on rocky shores , such as headlands , cobble beaches , or human-made jetties . Their degree of exposure may be calculated using the Ballantine Scale . [ 5 ] [ 6 ] Soft-sediment habitats include sandy beaches , and intertidal wetlands (e.g., mudflats and salt marshes ). These habitats differ in levels of abiotic , or non-living, environmental factors. Rocky shores tend to have higher wave action, requiring adaptations allowing the inhabitants to cling tightly to the rocks. Soft-bottom habitats are generally protected from large waves but tend to have more variable salinity levels. They also offer a third habitable dimension: depth. Thus, many soft-sediment inhabitants are adapted for burrowing. [ 7 ] [ 8 ]
Because intertidal organisms endure regular periods of immersion and emersion, they essentially live both underwater and on land and must be adapted to a large range of climatic conditions. The intensity of climate stressors varies with relative tide height because organisms living in areas with higher tide heights are emersed for longer periods than those living in areas with lower tide heights. This gradient of climate with tide height leads to patterns of intertidal zonation , with high intertidal species being more adapted to emersion stresses than low intertidal species. These adaptations may be behavioral (i.e. movements or actions), morphological (i.e. characteristics of external body structure), or physiological (i.e. internal functions of cells and organs ). [ 9 ] In addition, such adaptations generally cost the organism in terms of energy (e.g. to move or to grow certain structures), leading to trade-offs (i.e. spending more energy on deterring predators leaves less energy for other functions like reproduction).
Intertidal organisms, especially those in the high intertidal, must cope with a large range of temperatures . While they are underwater, temperatures may only vary by a few degrees over the year. However, at low tide, temperatures may dip to below freezing or may become scaldingly hot, leading to a temperature range that may approach 30 °C (86 °F) during a period of a few hours. Many mobile organisms, such as snails and crabs, avoid temperature fluctuations by crawling around and searching for food at high tide and hiding in cool, moist refuges (crevices or burrows) at low tide. [ 10 ] Besides simply living at lower tide heights, non-motile organisms may be more dependent on coping mechanisms. For example, high intertidal organisms have a stronger stress response, a physiological response of making proteins that help recovery from temperature stress just as the immune response aids in the recovery from infection. [ 11 ]
Intertidal organisms are also especially prone to desiccation during periods of emersion. Again, mobile organisms avoid desiccation in the same way as they avoid extreme temperatures: by hunkering down in mild and moist refuges. Many intertidal organisms, including Littorina snails, prevent water loss by having waterproof outer surfaces, pulling completely into their shells, and sealing shut their shell opening. Limpets ( Patella ) do not use such a sealing plate but occupy a home-scar to which they seal the lower edge of their flattened conical shell using a grinding action. They return to this home-scar after each grazing excursion, typically just before emersion. On soft rocks, these scars are quite obvious. Still other organisms, such as the algae Ulva and Porphyra , are able to rehydrate and recover after periods of severe desiccation.
The level of salinity can also be quite variable. Low salinities can be caused by rainwater or river inputs of freshwater. Estuarine species must be especially euryhaline , or able to tolerate a wide range of salinities. High salinities occur in locations with high evaporation rates, such as in salt marshes and high intertidal pools. Shading by plants, especially in the salt marsh, can slow evaporation and thus ameliorate salinity stress. In addition, salt marsh plants tolerate high salinities by several physiological mechanisms, including excreting salt through salt glands and preventing salt uptake into the roots.
In addition to these exposure stresses (temperature, desiccation, and salinity), intertidal organisms experience strong mechanical stresses, especially in locations of high wave action . There are myriad ways in which the organisms prevent dislodgement due to waves. [ 12 ] Morphologically, many mollusks (such as limpets and chitons) have low-profile, hydrodynamic shells. Types of substrate attachments include mussels' tethering byssal threads and glues, sea stars ' thousands of suctioning tube feet, and isopods' hook-like appendages that help them hold on to intertidal kelps. Higher profile organisms, such as kelps, must also avoid breaking in high flow locations, and they do so with their strength and flexibility. Finally, organisms can also avoid high flow environments, such as by seeking out low flow microhabitats. Additional forms of mechanical stresses include ice and sand scour, as well as dislodgment by water-borne rocks, logs, etc.
For each of these climate stresses, species exist that are adapted to and thrive in the most stressful of locations. For example, the tiny crustacean copepod Tigriopus thrives in very salty, high intertidal tidepools, and many filter feeders find more to eat in wavier and higher flow locations. Adapting to such challenging environments gives these species competitive edges in such locations.
During tidal immersion, the food supply to intertidal organisms is subsidized by materials carried in seawater, including photosynthesizing phytoplankton and consumer zooplankton . These plankton are eaten by numerous forms of filter feeders — mussels , clams , barnacles , sea squirts , and polychaete worms—which filter seawater in their search for planktonic food sources. [ 13 ] The adjacent ocean is also a primary source of nutrients for autotrophs , photosynthesizing producers ranging in size from microscopic algae (e.g. benthic diatoms ) to huge kelps and other seaweeds . These intertidal producers are eaten by herbivorous grazers, such as limpets that scrape rocks clean of their diatom layer and kelp crabs that creep along blades of the feather boa kelp Egregia eating the tiny leaf-shaped bladelets. Crabs are eaten by goliath grouper , which are then eaten by sharks. Higher up the food web , predatory consumers—especially voracious starfish —eat other grazers (e.g. snails ) and filter feeders (e.g. mussels ). [ 14 ] Finally, scavengers , including crabs and sand fleas , eat dead organic material, including dead producers and consumers.
In addition to being shaped by aspects of climate, intertidal habitats—especially intertidal zonation patterns—are strongly influenced by species interactions, such as predation, competition, facilitation, and indirect interactions. Ultimately, these interactions feed into the food web structure, described above. Intertidal habitats have been a model system for many classic ecological studies, including those introduced below, because the resident communities are particularly amenable to experimentation.
One dogma of intertidal ecology—supported by such classic studies—is that species' lower tide height limits are set by species interactions whereas their upper limits are set by climate variables. Classic studies by Robert Paine [ 13 ] [ 15 ] established that when sea star predators are removed, mussel beds extend to lower tide heights, smothering resident seaweeds. Thus, mussels' lower limits are set by sea star predation. Conversely, in the presence of sea stars, mussels' lower limits occur at a tide height at which sea stars are unable to tolerate climate conditions.
Competition, especially for space, is another dominant interaction structuring intertidal communities. Space competition is especially fierce in rocky intertidal habitats, where habitable space is limited compared to soft-sediment habitats in which three-dimensional space is available. As seen with the previous sea star example, mussels are competitively dominant when they are not kept in check by sea star predation. Joseph Connell 's research on two types of high intertidal barnacles, Balanus balanoides (now Semibalanus balanoides ) and Chthamalus stellatus , showed that zonation patterns could also be set by competition between closely related organisms. [ 16 ] In this example, Balanus outcompetes Chthamalus at lower tide heights but is unable to survive at higher tide heights. Thus, Balanus conforms to the intertidal ecology dogma introduced above: its lower tide height limit is set by a predatory snail and its higher tide height limit by climate. Similarly, Chthamalus , which occurs in a refuge from competition (similar to the temperature refuges discussed above), has a lower tide height limit set by competition with Balanus and a higher tide height limit set by climate.
Although intertidal ecology has traditionally focused on these negative interactions (predation and competition), there is emerging evidence that positive interactions are also important. [ 17 ] Facilitation refers to one organism helping another without harming itself. For example, salt marsh plant species of Juncus and Iva are unable to tolerate the high soil salinities when evaporation rates are high, thus they depend on neighboring plants to shade the sediment, slow evaporation, and help maintain tolerable salinity levels. [ 18 ] In similar examples, many intertidal organisms provide physical structures that are used as refuges by other organisms. Mussels, although they are tough competitors with certain species, are also good facilitators as mussel beds provide a three-dimensional habitat to species of snails, worms, and crustaceans.
All of the examples given so far are of direct interactions: Species A eats Species B, or Species B eats Species C. Also important are indirect interactions [ 19 ] where, using the previous example, Species A eats so much of Species B that predation on Species C decreases and Species C increases in number. Thus, Species A indirectly benefits Species C. Pathways of indirect interactions can include all other forms of species interactions. To follow the sea star–mussel relationship, sea stars have an indirect negative effect on the diverse community that lives in the mussel bed because, by preying on mussels and decreasing mussel bed structure, those species that are facilitated by mussels are left homeless.
Additional important species interactions include mutualism , which is seen in symbioses between sea anemones and their internal symbiotic algae, and parasitism , which is prevalent but is only beginning to be appreciated for its effects on community structure.
Humans are highly dependent on intertidal habitats for food and raw materials, [ 20 ] and over 50% of humans live within 100 km of the coast. Therefore, intertidal habitats are greatly influenced by human impacts to both ocean and land habitats. Some of the conservation issues associated with intertidal habitats and at the head of the agendas of managers and intertidal ecologists are:
1. Climate change : Intertidal species are challenged by several of the effects of global climate change, including increased temperatures, sea level rise , and increased storminess. Ultimately, it has been predicted that the distributions and numbers of species will shift depending on their abilities to adapt (quickly!) to these new environmental conditions. [ 20 ] Due to the global scale of this issue, scientists are mainly working to understand and predict possible changes to intertidal habitats.
2. Invasive species : Invasive species are especially prevalent in intertidal areas with high volumes of shipping traffic, such as large estuaries, because of the transport of non-native species in ballast water. [ 21 ] San Francisco Bay , in which an invasive Spartina cordgrass from the east coast is currently transforming mudflat communities into Spartina meadows, is among the most invaded estuaries in the world. Conservation efforts are focused on trying to eradicate some species (like Spartina ) in their non-native habitats as well as preventing further species introductions (e.g. by controlling methods of ballast water uptake and release).
3. Marine protected areas : Many intertidal areas are lightly to heavily exploited by humans for food gathering (e.g. clam digging in soft-sediment habitats and snail, mussel, and algal collecting in rocky intertidal habitats). In some locations, marine protected areas have been established where no collecting is permitted. The benefits of protected areas may spill over to positively impact adjacent unprotected areas. For example, a greater number of larger egg capsules of the edible snail Concholepus in protected vs. non-protected areas in Chile indicates that these protected areas may help replenish snail stocks in areas open to harvesting. [ 22 ] The degree to which collecting is regulated by law differs with the species and habitat. | https://en.wikipedia.org/wiki/Intertidal_ecology |
The intertidal zone or foreshore is the area above water level at low tide and underwater at high tide; in other words, it is the part of the littoral zone within the tidal range . This area can include several types of habitats with various species of life , such as sea stars , sea urchins , and many species of coral with regional differences in biodiversity. Sometimes it is referred to as the littoral zone or seashore , although those can be defined as a wider region.
The intertidal zone also includes steep rocky cliffs , sandy beaches , bogs or wetlands (e.g., vast mudflats ). This area can be a narrow strip, such as in Pacific islands that have only a narrow tidal range, or can include many meters of shoreline where shallow beach slopes interact with high tidal excursion. The peritidal zone is similar but somewhat wider, extending from above the highest tide level to below the lowest. Organisms in the intertidal zone are well-adapted to their environment, facing high levels of interspecific competition and the rapidly changing conditions that come with the tides . [ 1 ] The intertidal zone is also home to several species from many different phyla ( Porifera , Annelida , Coelenterata , Mollusca , Arthropoda , etc.).
The water that comes with the tides can vary from brackish waters , fresh with rain , to highly saline and dry salt , with drying between tidal inundations. Wave splash can dislodge residents from the littoral zone. With the intertidal zone's high exposure to sunlight , the temperature can range from very hot with full sunshine to near freezing in colder climates. Some microclimates in the littoral zone are moderated by local features and larger plants such as mangroves . Adaptations in the littoral zone allow the utilization of nutrients supplied in high volume on a regular basis from the sea , which is actively moved to the zone by tides. The edges of habitats, in this case the land and sea, are themselves often significant ecosystems , and the littoral zone is a prime example.
A typical rocky shore can be divided into a spray zone or splash zone (also known as the supratidal zone ), which is above the spring high-tide line and is covered by water only during storms, and an intertidal zone, which lies between the high and low tidal extremes. Along most shores , the intertidal zone can be clearly separated into the following subzones: high tide zone, middle tide zone, and low tide zone. The intertidal zone is one of a number of marine biomes or habitats , including estuaries , the neritic zone , the photic zone , and deep zones .
Marine biologists divide the intertidal region into three zones (low, middle, and high), based on the overall average exposure of the zone. [ 2 ] The low intertidal zone, which borders on the shallow subtidal zone, is only exposed to air at the lowest of low tides and is primarily marine in character. The mid intertidal zone is regularly exposed and submerged by average tides. The high intertidal zone is only covered by the highest of the high tides, and spends much of its time as terrestrial habitat. The high intertidal zone borders on the splash zone (the region above the highest still-tide level, but which receives wave splash). On shores exposed to heavy wave action , the intertidal zone will be influenced by waves, as the spray from breaking waves will extend the intertidal zone.
Depending on the substratum and topography of the shore, additional features may be noticed. On rocky shores , tide pools form in depressions that fill with water as the tide rises. Under certain conditions, such as those at Morecambe Bay , quicksand may form. [ 3 ]
This subregion is mostly submerged – it is only exposed at the point of low tide and for a longer period of time during extremely low tides. This area is teeming with life; [ 2 ] the most notable difference between this subregion and the other three is that there is much more marine vegetation, especially seaweeds . There is also a great biodiversity. Organisms in this zone generally are not well adapted to periods of dryness and temperature extremes. Some of the organisms in this area are abalone , sea anemones , brown seaweed , chitons , crabs , green algae , hydroids , isopods , limpets , mussels , nudibranchs , sculpin , sea cucumber , sea lettuce , sea palms , starfish , sea urchins , shrimp , snails , sponges , surf grass , tube worms , and whelks . Creatures in this area can grow to larger sizes because there is more available energy in the localized ecosystem. Also, marine vegetation can grow to much greater sizes than in the other three intertidal subregions due to the better water coverage. The water is shallow enough to allow plenty of sunlight to reach the vegetation to allow substantial photosynthetic activity, and the salinity is at almost normal levels. This area is also protected from large predators such as fish because of the wave action and the relatively shallow water.
The intertidal region is an important model system for the study of ecology , especially on wave-swept rocky shores. The region contains a high diversity of species, and the zonation created by the tides causes species ranges to be compressed into very narrow bands. This makes it relatively simple to study species across their entire cross-shore range, something that can be extremely difficult in, for instance, terrestrial habitats that can stretch thousands of kilometres. Communities on wave-swept shores also have high turnover due to disturbance, so it is possible to watch ecological succession over years rather than decades.
The burrowing invertebrates that make up large portions of sandy beach ecosystems are known to travel relatively great distances in cross-shore directions as beaches change on the order of days, semilunar cycles, seasons, or years. The distribution of some species has been found to correlate strongly with geomorphic datums such as the high tide strand and the water table outcrop.
Since the foreshore is alternately covered by the sea and exposed to the air, organisms living in this environment must be adapted to both wet and dry conditions. Intertidal zone biomass reduces the risk of shoreline erosion from high intensity waves. [ 4 ] Typical inhabitants of the intertidal rocky shore include sea urchins , sea anemones , barnacles , chitons , crabs , isopods , mussels , starfish , and many marine gastropod molluscs such as limpets and whelks . Sexual and asexual reproduction varies by inhabitants of the intertidal zones. [ 5 ]
Humans have historically used intertidal zones as foraged food sources during low tide . Migratory birds also rely on intertidal species for feeding areas because of low water habitats consisting of an abundance of mollusks and other marine species. [ 4 ]
As with the dry sand part of a beach, legal and political disputes can arise over the ownership and use of the foreshore. One recent example is the New Zealand foreshore and seabed controversy . In legal discussions, the foreshore is often referred to as the wet-sand area .
For privately owned beaches in the United States , some states such as Massachusetts use the low-water mark as the dividing line between the property of the State and that of the beach owner; however the public still has fishing, fowling, and navigation rights to the zone between low and high water. Other states such as California use the high-water mark.
In the United Kingdom , the foreshore is generally deemed to be owned by the Crown , with exceptions for what are termed several fisheries , which can be historic deeds to title, dating back to King John 's time or earlier, and the Udal Law , which applies generally in Orkney and Shetland .
In Greece , according to the L. 2971/01, the foreshore zone is defined as the area of the coast that might be reached by the maximum climbing of the waves on the coast (maximum wave run-up on the coast) in their maximum capacity (maximum referring to the "usually maximum winter waves" and of course not to exceptional cases, such as tsunamis ). The foreshore zone, a part of the exceptions of the law, is public, and permanent constructions are not allowed on it. In Italy, about half the shoreline is owned by the government but leased to private beach clubs called lidos. [ 6 ]
In the East African and West Indian Ocean region, intertidal zone management is often not treated as a priority, because it is not seen as contributing to collective economic productivity. [ 7 ] Questionnaires administered in workshops indicate that eighty-six percent of respondents attribute the mismanagement of mangrove and coastal ecosystems to a lack of knowledge about how to steward these ecosystems, while forty-four percent of respondents state that a fair amount of knowledge is applied to fisheries in those regions.
Intertidal zones are sensitive habitats with an abundance of marine species, and they can experience ecological hazards associated with tourism and human-induced environmental impacts . Other threats that have been summarized by scientists include nutrient pollution , overharvesting , habitat destruction , and climate change . [ 8 ] Habitat destruction is advanced by activities such as harvesting fisheries with drag nets, and by a general neglect of the sensitivity of intertidal zones. [ 9 ] | https://en.wikipedia.org/wiki/Intertidal_zone |
In mathematics , an interval contractor (or contractor for short) [ 1 ] associated to a set X {\displaystyle X} is an operator C {\displaystyle C} which associates to a hyperrectangle [ x ] {\displaystyle [x]} in R n {\displaystyle {\mathbf {R}}^{n}} another box C ( [ x ] ) {\displaystyle C([x])} of R n {\displaystyle {\mathbf {R}}^{n}} such that the two following properties are always satisfied: C ( [ x ] ) ⊂ [ x ] {\displaystyle C([x])\subset [x]} (contractance: the output box is contained in the input box) and [ x ] ∩ X ⊂ C ( [ x ] ) {\displaystyle [x]\cap X\subset C([x])} (completeness: no point of X {\displaystyle X} inside [ x ] {\displaystyle [x]} is removed).
A contractor associated to a constraint (such as an equation or an inequality ) is a contractor associated to the set X {\displaystyle X} of all x {\displaystyle x} which satisfy the constraint.
Contractors make it possible to improve the efficiency of branch-and-bound algorithms classically used in interval analysis .
A contractor C is monotonic if we have [ x ] ⊂ [ y ] ⇒ C ( [ x ] ) ⊂ C ( [ y ] ) {\displaystyle [x]\subset [y]\Rightarrow C([x])\subset C([y])} .
It is minimal if for all boxes [ x ], we have C ( [ x ] ) = [ [ x ] ∩ X ] {\displaystyle C([x])=[[x]\cap X]} , where [ A ] is the interval hull of the set A , i.e., the smallest box enclosing A .
The contractor C is thin if for all points x , C ( { x } ) = { x } ∩ X {\displaystyle C(\{x\})=\{x\}\cap X} where { x } denotes the degenerated box enclosing x as a single point.
The contractor C is idempotent if for all boxes [ x ], we have C ∘ C ( [ x ] ) = C ( [ x ] ) . {\displaystyle C\circ C([x])=C([x]).}
The contractor C is convergent if for all sequences [ x ]( k ) of boxes containing x , we have [ x ] ( k ) → x ⟹ C ( [ x ] ( k ) ) → { x } ∩ X . {\displaystyle [x](k)\rightarrow x\implies C([x](k))\rightarrow \{x\}\cap X.}
Figure 1 represents the set X painted grey together with some boxes, some of them degenerate (i.e., corresponding to singletons). Figure 2 represents these boxes after contraction. Note that no point of X has been removed by the contractor. The contractor is minimal for the cyan box but is pessimistic for the green one. All degenerate blue boxes are contracted to the empty box. The magenta box and the red box cannot be contracted.
Some operations can be performed on contractors to build more complex contractors. [ 2 ] The intersection , the union , the composition and the repetition are defined as follows.
There exist different ways to build contractors associated to equations and inequalities, say, f ( x ) in [ y ]. Most of them are based on interval arithmetic.
One of the most efficient and simplest is the forward/backward contractor (also called HC4-revise). [ 3 ] [ 4 ] The principle is to evaluate f ( x ) using interval arithmetic (this is the forward step). The resulting interval is intersected with [ y ]. A backward evaluation of f ( x ) is then performed in order to contract the intervals for the x i (this is the backward step). We now illustrate the principle on a simple example.
Consider the constraint ( x 1 + x 2 ) ⋅ x 3 ∈ [ 1 , 2 ] . {\displaystyle (x_{1}+x_{2})\cdot x_{3}\in [1,2].} We can evaluate the function f ( x ) by introducing the two intermediate variables a and b , as follows: a = x 1 + x 2 {\displaystyle a=x_{1}+x_{2}} and b = a ⋅ x 3 {\displaystyle b=a\cdot x_{3}} , with b ∈ [ 1 , 2 ] {\displaystyle b\in [1,2]} .
The two previous constraints are called forward constraints . We get the backward constraints by taking each forward constraint in the reverse order and isolating each variable on the right-hand side. We get a = b / x 3 {\displaystyle a=b/x_{3}} , x 3 = b / a {\displaystyle x_{3}=b/a} , x 1 = a − x 2 {\displaystyle x_{1}=a-x_{2}} and x 2 = a − x 1 {\displaystyle x_{2}=a-x_{1}} ; each right-hand side is evaluated with interval arithmetic and intersected with the current domain of the variable on the left.
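A minimal Python sketch of this forward/backward contraction follows. The tiny Interval class and function names are illustrative (production implementations live in libraries such as IBEX), and the interval division is simplified by assuming the divisor does not contain zero.

```python
# Sketch: forward/backward (HC4-revise) contraction for (x1 + x2) * x3 in [1, 2].
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __truediv__(self, other):
        # Simplified: assumes 0 is not contained in `other`.
        ps = [self.lo / other.lo, self.lo / other.hi,
              self.hi / other.lo, self.hi / other.hi]
        return Interval(min(ps), max(ps))

    def __and__(self, other):  # intersection
        return Interval(max(self.lo, other.lo), min(self.hi, other.hi))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def contract(x1, x2, x3, y=Interval(1, 2)):
    # Forward step: evaluate the intermediates a = x1 + x2 and b = a * x3,
    # then intersect the result with [1, 2].
    a = x1 + x2
    b = (a * x3) & y
    # Backward step: isolate each variable and intersect with its domain.
    a = a & (b / x3)
    x3 = x3 & (b / a)
    x1 = x1 & (a - x2)
    x2 = x2 & (a - x1)
    return x1, x2, x3

print(contract(Interval(0, 10), Interval(0, 10), Interval(1, 2)))
# ([0, 2.0], [0, 2.0], [1, 2]) -- the domains of x1 and x2 have shrunk.
```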
The resulting forward/backward contractor C ( [ x 1 ] , [ x 2 ] , [ x 3 ] ) {\displaystyle C([x_{1}],[x_{2}],[x_{3}])} is obtained by evaluating the forward and the backward constraints using interval analysis . | https://en.wikipedia.org/wiki/Interval_contractor |
In mathematics , especially order theory , the interval order for a collection of intervals on the real line is the partial order corresponding to their left-to-right precedence relation: one interval, I 1 , is considered less than another, I 2 , if I 1 is completely to the left of I 2 .
More formally, a countable poset P = ( X , ≤ ) {\displaystyle P=(X,\leq )} is an interval order if and only if there exists a bijection from X {\displaystyle X} to a set of real intervals, x i ↦ ( ℓ i , r i ) {\displaystyle x_{i}\mapsto (\ell _{i},r_{i})} , such that for any x i , x j ∈ X {\displaystyle x_{i},x_{j}\in X} we have x i < x j {\displaystyle x_{i}<x_{j}} in P {\displaystyle P} exactly when r i < ℓ j {\displaystyle r_{i}<\ell _{j}} .
Such posets may be equivalently characterized as those with no induced subposet isomorphic to the pair of two-element chains , in other words as the ( 2 + 2 ) {\displaystyle (2+2)} -free posets. [ 1 ] Fully written out, this means that for any two pairs of elements a > b {\displaystyle a>b} and c > d {\displaystyle c>d} one must have a > d {\displaystyle a>d} or c > b {\displaystyle c>b} .
The subclass of interval orders obtained by restricting the intervals to those of unit length, so they all have the form ( ℓ i , ℓ i + 1 ) {\displaystyle (\ell _{i},\ell _{i}+1)} , is precisely the semiorders .
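As a concrete illustration of the definition, the sketch below (illustrative function names) derives the strict order induced by a list of open intervals, using the rule that x_i < x_j exactly when r_i < ℓ_j, and verifies the (2+2)-free condition stated above.

```python
# Sketch: the interval order induced by open intervals (l_i, r_i):
# x_i < x_j exactly when r_i < l_j.
def interval_order(intervals):
    """Return the strict relation as a set of pairs (i, j), meaning x_i < x_j."""
    return {(i, j)
            for i, (_, r_i) in enumerate(intervals)
            for j, (l_j, _) in enumerate(intervals)
            if r_i < l_j}

def is_2_plus_2_free(lt):
    """Check: for every a > b and c > d, either a > d or c > b holds."""
    for (b, a) in lt:          # pair with a > b
        for (d, c) in lt:      # pair with c > d
            if (d, a) not in lt and (b, c) not in lt:
                return False
    return True

iv = [(0, 2), (1, 3), (4, 5)]  # the third interval lies right of both others
rel = interval_order(iv)
print(rel)                     # {(0, 2), (1, 2)}
print(is_2_plus_2_free(rel))   # True, as every interval order must be
```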
The complement of the comparability graph of an interval order ( X {\displaystyle X} , ≤) is the interval graph ( X , ∩ ) {\displaystyle (X,\cap )} .
Interval orders should not be confused with the interval-containment orders, which are the inclusion orders on intervals on the real line (equivalently, the orders of dimension ≤ 2).
Interval orders have practical applications in modelling the evolution of species and the archaeological histories of pottery styles. [ 2 ]
An important parameter of partial orders is order dimension : the dimension of a partial order P {\displaystyle P} is the least number of linear orders whose intersection is P {\displaystyle P} . For interval orders, the dimension can be arbitrarily large. While the problem of determining the dimension of general partial orders is known to be NP-hard , determining the dimension of an interval order remains a problem of unknown computational complexity . [ 3 ]
A related parameter is interval dimension , which is defined analogously, but in terms of interval orders instead of linear orders. Thus, the interval dimension of a partially ordered set P = ( X , ≤ ) {\displaystyle P=(X,\leq )} is the least integer k {\displaystyle k} for which there exist interval orders ⪯ 1 , … , ⪯ k {\displaystyle \preceq _{1},\ldots ,\preceq _{k}} on X {\displaystyle X} with x ≤ y {\displaystyle x\leq y} exactly when x ⪯ 1 y , … , {\displaystyle x\preceq _{1}y,\ldots ,} and x ⪯ k y {\displaystyle x\preceq _{k}y} .
The interval dimension of an order is never greater than its order dimension. [ 4 ]
In addition to being isomorphic to ( 2 + 2 ) {\displaystyle (2+2)} -free posets, unlabeled interval orders on [ n ] {\displaystyle [n]} are also in bijection with a subset of fixed-point-free involutions on ordered sets with cardinality 2 n {\displaystyle 2n} . [ 5 ] These are the involutions with no so-called left- or right-neighbor nestings, where, for any involution f {\displaystyle f} on [ 2 n ] {\displaystyle [2n]} , a left nesting is an i ∈ [ 2 n ] {\displaystyle i\in [2n]} such that i < i + 1 < f ( i + 1 ) < f ( i ) {\displaystyle i<i+1<f(i+1)<f(i)} and a right nesting is an i ∈ [ 2 n ] {\displaystyle i\in [2n]} such that f ( i ) < f ( i + 1 ) < i < i + 1 {\displaystyle f(i)<f(i+1)<i<i+1} .
Such involutions, according to semi-length, have ordinary generating function [ 6 ] F ( t ) = ∑ n ≥ 0 ∏ i = 1 n ( 1 − ( 1 − t ) i ) . {\displaystyle F(t)=\sum _{n\geq 0}\prod _{i=1}^{n}\left(1-(1-t)^{i}\right).}
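Expanding this generating function as a truncated polynomial yields the coefficients discussed in the next sentence. The following short sketch (plain Python, illustrative names) computes the first of them.

```python
# Sketch: first coefficients of F(t) = sum_{n>=0} prod_{i=1}^{n} (1 - (1-t)^i),
# computed by truncated polynomial arithmetic. Lists are indexed by power of t.
N = 10  # number of coefficients to compute

def poly_mul(p, q):
    """Multiply two coefficient lists, truncating at degree N - 1."""
    r = [0] * N
    for i, a in enumerate(p):
        if a == 0:
            continue
        for j, b in enumerate(q):
            if i + j < N:
                r[i + j] += a * b
    return r

one_minus_t = [1, -1] + [0] * (N - 2)   # the polynomial 1 - t
F = [0] * N
term = [1] + [0] * (N - 1)              # running product, n = 0 term is 1
power = [1] + [0] * (N - 1)             # (1 - t)^0
for n in range(N):
    F = [f + s for f, s in zip(F, term)]         # add product for this n
    power = poly_mul(power, one_minus_t)         # (1 - t)^(n+1)
    factor = [-c for c in power]
    factor[0] += 1                               # 1 - (1 - t)^(n+1)
    term = poly_mul(term, factor)                # extend the running product

print(F)  # first terms: 1, 1, 2, 5, 15, 53, 217, ...
```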
The coefficient of t n {\displaystyle t^{n}} in the expansion of F ( t ) {\displaystyle F(t)} gives the number of unlabeled interval orders of size n {\displaystyle n} . The sequence of these numbers (sequence A022493 in the OEIS ) begins 1, 1, 2, 5, 15, 53, 217, … | https://en.wikipedia.org/wiki/Interval_order |
In music , an interval ratio is a ratio of the frequencies of the pitches in a musical interval . For example, a just perfect fifth (for example C to G) is 3:2, or 1.5, and may be approximated by an equal-tempered perfect fifth, which is 2 7/12 (about 1.498). If the A above middle C is 440 Hz , the perfect fifth above it would be E , at (440 × 1.5 =) 660 Hz, while the equal-tempered E5 is 659.255 Hz.
Ratios, rather than direct frequency measurements, allow musicians to work with relative pitch measurements applicable to many instruments in an intuitive manner; one rarely has the frequencies of fixed-pitch instruments memorized, and one rarely has the means to measure the pitch changes of adjustable-pitch instruments without an electronic tuner . Ratios have an inverse relationship to string length: for example, stopping a string at two-thirds (2:3) of its length produces a pitch one and one-half (3:2) times that of the open string (not to be confused with inversion ).
Intervals may be ranked by relative consonance and dissonance . As such, ratios with lower integers are generally more consonant than intervals with higher integers. For example, 2:1, 4:3, 9:8, 65536:59049, etc.
Consonance and dissonance may more subtly be defined by limit , wherein ratios whose limit (which includes its integer multiples) is lower are generally more consonant. For example, the 3-limit 128:81 and the 7-limit 14:9. Despite having larger integers, 128:81 is less dissonant than 14:9, according to limit theory.
For ease of comparison, intervals may also be measured in cents , a logarithmic measurement. For example, the just perfect fifth is 701.955 cents while the equal tempered perfect fifth is 700 cents.
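The conversion behind these figures is cents = 1200 × log2(ratio). A minimal sketch (illustrative function name):

```python
# Sketch: converting frequency ratios to cents.
from math import log2

def cents(ratio):
    return 1200 * log2(ratio)

print(round(cents(3 / 2), 3))          # just perfect fifth: 701.955
print(round(cents(2 ** (7 / 12)), 3))  # equal-tempered fifth: 700.0
print(round(cents(2 / 1), 3))          # octave: 1200.0
```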
Frequency ratios are used to describe intervals in both Western and non-Western music. They are most often used to describe intervals between notes tuned with tuning systems such as Pythagorean tuning , just intonation , and meantone temperament , the size of which can be expressed by small- integer ratios.
When a musical instrument is tuned using a just intonation tuning system, the size of the main intervals can be expressed by small- integer ratios, such as 1:1 ( unison ), 2:1 ( octave ), 3:2 ( perfect fifth ), 4:3 ( perfect fourth ), 5:4 ( major third ), 6:5 ( minor third ). Intervals with small-integer ratios are often called just intervals , or pure intervals . To most people, just intervals sound consonant , i.e. pleasant and well-tuned.
Most commonly, however, musical instruments are nowadays tuned using a different tuning system, called 12-tone equal temperament , in which the main intervals are typically perceived as consonant, but none is justly tuned and as consonant as a just interval, except for the unison and octave. [ 1 ] Although the size of equally tuned intervals is typically similar to that of just intervals, in most cases it cannot be expressed by small-integer ratios. For instance, an equal tempered perfect fifth has a frequency ratio of about 1.4983:1 (or 14983:10000). For a comparison between the size of intervals in different tuning systems, see section Size in different tuning systems . | https://en.wikipedia.org/wiki/Interval_ratio |
In chemistry, intervalence charge transfer , often abbreviated IVCT or even IT , is a type of charge-transfer band that is associated with mixed valence compounds . It is most common for systems with two metal sites differing only in oxidation state. Quite often such electron transfer reverses the oxidation states of the sites. The term is frequently extended to the case of metal-to-metal charge transfer between non-equivalent metal centres. [ 1 ] The transition produces a characteristically intense absorption in the electromagnetic spectrum . The band is usually found in the visible or near infrared region of the spectrum and is broad.
The process can be described as follows: absorption of light transfers an electron from one metal centre to the other, transiently interchanging their oxidation states, e.g. M II –bridge–M III + hν → M III –bridge–M II .
Since the energy states of valence tautomers affect the IVCT band, the strength of electronic interaction between the sites, known as α (the mixing coefficient), can be determined by analysis of the IVCT band. [ 2 ] Depending on the value of α, mixed valence complexes are classified into three groups (the Robin–Day classification): Class I, where α is negligible and the valences are fully localized; Class II, where α takes an intermediate value and the valences are partially delocalized; and Class III, where α is large and the valences are fully delocalized. | https://en.wikipedia.org/wiki/Intervalence_charge_transfer |
In social studies and social policy , intervention theory is the analysis of the decision making problems of intervening effectively in a situation in order to secure desired outcomes. Intervention theory addresses the question of when it is desirable not to intervene and when it is appropriate to do so. It also examines the effectiveness of different types of intervention. The term is used across a range of social and medical practices, including health care, child protection and law enforcement. It is also used in business studies .
Within the theory of nursing, intervention theory is included within a larger scope of practice theories. Burns and Grove point out that it directs the implementation of a specific nursing intervention and provides theoretical explanations of how and why the intervention is effective in addressing a particular patient care problem. These theories are tested through programs of research to validate the effectiveness of the intervention in addressing the problem. [ 1 ]
In Intervention Theory and Method , Chris Argyris argues that in organization development , effective intervention depends on appropriate and useful knowledge that offers a range of clearly defined choices, and that the target should be for as many people as possible to be committed to the option chosen and to feel responsibility for it. Overall, interventions should generate a situation in which actors believe that they are responding to internal rather than external influences on their decisions. [ 2 ] | https://en.wikipedia.org/wiki/Intervention_theory |
Intestines-on-a-chip ( gut-on-a-chip , mini-intestine ) are microfluidic bioengineered 3D-models of the real organ, which better mimic physiological features than conventional 3D intestinal organoid culture. [ 1 ] A variety of different intestine-on-a-chip model systems have been developed and refined, each with individual strengths and weaknesses, and collectively they hold great promise for the ultimate goal of establishing these systems as reliable high-throughput platforms for drug testing and personalised medicine . The intestine is a highly complex organ system performing a diverse set of vital tasks, from nutrient digestion and absorption, hormone secretion, and immunological processes to neuronal activity, [ 2 ] which makes it particularly challenging to model in vitro .
Conventional intestinal models, such as traditional 2D cell culture of immortalised cell lines (e.g. CaCo2 or HT29 ), transwell cultures, Ussing chambers , and everted gut sacs, have been used extensively to understand better (patho-)physiological processes in the intestine. However, many intestinal functions are difficult to recapitulate and study using such simplistic models. Thus, these systems' translational and experimental value is limited. [ 3 ]
In 2009, the development of intestinal organoids [ 4 ] marked a milestone in the in vitro modelling of intestinal tissue. Intestinal organoids mimic the in vivo stem cell niche as intestinal stem cells spontaneously give rise to a closed, cystic mini-tissue with outward-facing buds representing the characteristic crypt-villus architecture of the intestinal epithelium . Intestinal organoids can contain all the different cell types of the intestinal epithelium, e.g. enterocytes , goblet cells , Paneth cells and enteroendocrine cells . [ 5 ] Together with the accurate representation of the tissue architecture and cell-type composition, organoids have been shown to also exhibit key functional similarities to the native tissue. [ 6 ] Furthermore, their long-term stability in culture, their derivation from both healthy and diseased origins, and the possibility of genetic manipulation make intestinal organoids a useful though simplistic model for widespread use as a platform for functional studies and disease modelling. [ 7 ]
Nevertheless, several limitations restrict their usefulness as an intestinal model. First and foremost, the organoids' closed cystic structure makes their inner (apical) surface inaccessible, and separate treatment of apical and basolateral sides — and thus transport studies — highly cumbersome. Moreover, this closed cystic structure implies that intestinal organoids accumulate shed dead cells in their lumen, putting spatial strain on the organoids and thus preventing undisturbed organoid culture over longer periods of time without mechanical disruption and passaging. Furthermore, intestinal organoid cultures suffer from strongly variable sizes, shapes, morphologies and localisations between single organoids in their 3D culture environment. [ 8 ]
Although organoids usually are referred to as miniature organs, they lack vital features needed to mimic organ-level complexity. For this reason, biofabricated devices have been developed which surpass organoid limitations. Microfluidic devices in particular hold great potential as platforms for in vitro models of organs, as they enable perfusion mimicking the function of blood circulation in tissues. [ 1 ] [ 9 ] Apart from fluidic flow, other culture parameters are incorporated into intestine-on-a-chip devices, including architectural cues, mechanical stimulation, oxygen gradients and co-cultures with other cell populations and the microbiota, to more accurately display the physiological behaviour of the actual organ.
In contrast to traditional static cell culture, microfluidic devices can create fluid flows that closely mimic physiological fluid flow patterns. Fluid flow applies physiological shear stress to cell surfaces, enables apical delivery of nutrients and growth factors, and allows the establishment of chemical gradients of, e.g., growth factors, which are vital for proper organ development. Overall, microfluidic devices increase control over the organ-specific microenvironment, which allows for more precise models. [ 7 ]
Different technologies have been used to introduce microfluidic flows in intestine-on-a-chip devices, including peristaltic pumps , [ 10 ] syringe pumps , [ 11 ] pressure generators [ 12 ] and pumpless systems [ 13 ] driven by hydrostatic pressure and gravity. An example of a gravity-driven microfluidic intestine-on-a-chip device is the OrganoPlate platform by Mimetas , which has been used as a disease model for inflammatory bowel disease by Beaurivage et al. [ 14 ]
Beginning from the early stages of embryonic development up to the post-natal life, the intestine is constantly exposed to a wide range of mechanical forces. Peristalsis , the involuntary and cyclic propulsion of intestinal contents, is an essential part of the digestive process. It facilitates food digestion, nutrient absorption and intestinal emptying on a macro scale and applies shear stress and radial pressure on the intestinal epithelium on a micro-scale. [ 15 ] In particular, mechanical factors were shown to influence intestinal development and homeostasis, such as gut looping, [ 16 ] villi formation, [ 17 ] and crypt localisation. [ 18 ] Moreover, the chronic absence of mechanical stimuli in the human intestine has been associated with intestinal morbidity. [ 1 ]
A prominent example where both mechanical stimulation in the form of peristalsis and microfluidic flow are used in combination is the Emulate intestine-on-a-chip system. The system consists of a central cell culture microchannel divided in two by a porous, extracellular matrix -coated PDMS membrane, allowing the separate culture of two different cell populations in the upper and lower channels. The central chamber is enclosed by two vacuum chambers running in parallel. The application of vacuum allows the cyclic unidirectional expansion of the porous membrane separating the channels, mimicking peristaltic motion. [ 19 ]
As in traditional organoid culture, introducing a third culture dimension is critical for a better representation of the microanatomy of a tissue. Since 3D cell cultures implement more physiologically relevant biochemical and mechanical cues, 3D cultures generally achieve better cell viability and a more physiological transcriptome and proteome . Moreover, tissue homeostasis processes such as proliferation , differentiation and cell death are represented in a more physiological manner. [ 20 ] [ 21 ] The 3D support of cell cultures is commonly based on hydrogels, which mimic the native extracellular matrix . Cells can either be embedded into hydrogels or grown on a predefined micro-engineered hydrogel surface. [ 1 ] The most commonly used hydrogel for 3D intestinal systems is Matrigel , [ 22 ] a solubilised basement membrane extract from mouse sarcoma . However, Matrigel has significant disadvantages such as its xenogeneic origin , batch-to-batch variability, high cost and poorly defined composition. As these factors hinder clinical translation , other hydrogels are increasingly used in 3D intestinal models, including fibrin , collagen , hyaluronic acid and PEG -based synthetic hydrogels. [ 23 ]
In tissue engineering , microfabrication techniques are of critical importance, especially in modelling the tissue microenvironment. Apart from designing and fabricating the microfluidic device itself, microfabrication techniques are also used to create 3D microstructures which allow the patterning of cell culture surfaces closely resembling the native tissue topography, i.e. the crypt-villus-axis. [ 1 ]
A prominent example of an intestine-on-a-chip system relying on architectural cues is the homeostatic mini-intestines by Nikolaev et al. [ 24 ] They use microfabricated intestine-on-a-chip devices with a hydrogel chamber. The collagen–Matrigel-mix hydrogel is laser-ablated to generate a microchannel for a tubular intestinal lumen with crypt structures. The culture of intestinal stem cells in this device results in their self-organisation into a functional epithelium with the physiological spatial arrangement of the crypt–villus domains. These mini-intestines allow for extended long-term culture and give rise to rare intestinal cell types not commonly found in other 3D models. Another example of architecturally driven morphogenesis of intestine-on-a-chip models is the surface patterning technique published by Gjorevski et al., who developed microfabricated devices to pattern hydrogel surfaces in order to reproducibly direct intestinal organoid geometry, size and cell distribution. [ 25 ]
These examples show that intestine-on-a-chip systems with extrinsically guided morphogenesis enable spatial and temporal control of signalling gradients and may provide a platform to extensively study intestinal morphogenesis, stem cell maintenance, crypt dynamics, and epithelial regeneration. [ 1 ]
The healthy intestine has a wide range of different functions, which requires a vast set of different cell types to fulfil them. The primary intestinal function, the absorption of nutrients, requires close contact between the intestinal epithelium and blood and lymph endothelial cells. Moreover, the intestinal microbiota plays a critical part in the digestion of food, which makes a reliable immune defence indispensable. Furthermore, muscle and nerve cells control peristalsis and satiety. Finally, mesenchymal cells are essential components of the intestinal stem cell niche as they provide physical support and secrete growth factors. Thus, incorporating different cell types in intestine-on-a-chip systems is vital to model different aspects of intestinal functions adequately. [ 1 ]
First steps were taken in co-culturing the intestinal epithelium and the microbiota in intestine-on-a-chip systems. Examples are the establishment of an in vitro model for intestinal Shigella flexneri infection using the Emulate intestine-on-a-chip system [ 26 ] or the recreation of a complex faeces-derived microbiota population with both aerobic and anaerobic species. [ 27 ] Similarly, researchers have tried to recreate an immunocompetent intestinal epithelium in intestine-on-a-chip systems, by co-culturing the intestinal epithelium with peripheral blood mononuclear cells , [ 28 ] monocytes , [ 29 ] macrophages [ 30 ] or neutrophils . [ 31 ] Moreover, the epithelial- endothelial interface has been modelled in several different systems by culturing endothelial monolayers and the intestinal epithelium on opposite sides of a porous membrane. [ 19 ] [ 27 ] [ 29 ] [ 32 ]
Apart from co-culturing intestinal cells with other cell types, the cell population of the intestinal epithelium itself is also of high relevance. While some rather simplistic approaches use immortalised cell lines as the cell source for an intestinal epithelium, [ 14 ] there is a shift towards the use of organoid-derived intestinal stem cells, which allow the derivation of intestinal epithelia with a more physiological cell-type composition. [ 1 ] [ 24 ] [ 32 ] | https://en.wikipedia.org/wiki/Intestine-on-a-chip |
The Intex Cloud FX is an affordable smartphone running on Firefox OS that was sold in India for ₹1,799 (initially ₹1,999 ). It is also sold under the Cherry Mobile brand in the Philippines, where it is known as the Ace and sports a price tag of only 999 PHP. Intex said it had sold 15,000 devices within three days of the launch and planned to sell 500,000 devices by the end of the year. [ 1 ]
Intex Cloud FX is the first Firefox OS-powered phone to be introduced in the Indian market. Its Philippine equivalent is also the first Firefox OS smartphone in the Philippines and in Southeast Asia. [ 2 ] [ 3 ]
| https://en.wikipedia.org/wiki/Intex_Cloud_FX |
In chemistry , the intimate ion pair concept, introduced by Saul Winstein , describes the interactions between a cation , anion and surrounding solvent molecules. [ 1 ] In ordinary aqueous solutions of inorganic salts , an ion is completely solvated and shielded from the counterion . In less polar solvents, two ions can still be connected to some extent. In a tight , intimate , or contact ion pair, there are no solvent molecules between the two ions. When solvation increases, ionic bonding decreases and a loose or solvent-shared ion pair results. The ion pair concept explains stereochemistry in solvolysis .
The concept of intimate ion pairs is used to explain the slight tendency for inversion of stereochemistry during an S N 1 reaction . It is proposed that solvent or other ions in solution may assist in the removal of a leaving group to form a carbocation which reacts in an S N 1 fashion; similarly, the leaving group may associate loosely with the cationic intermediate . The association of solvent or an ion with the leaving group effectively blocks one side of the incipient carbocation, while allowing the backside to be attacked by a nucleophile . This leads to a slight excess of the product with inverted stereochemistry, whereas a purely S N 1 reaction should lead to a racemic product. Intimate ion pairs are also invoked in the S N i mechanism . Here, part of the leaving group detaches and attacks from the same face, leading to retention. [ 2 ] | https://en.wikipedia.org/wiki/Intimate_ion_pair |
Visionic is a network management computer system and network monitoring software application produced by Intorel . In 2002, Intorel launched the first version of what would later become its flagship product, Visionic . | https://en.wikipedia.org/wiki/Intorel |
Intra-frame coding is a data compression technique used within a video frame, enabling smaller file sizes and lower bitrates with little or no loss in quality. Since neighboring pixels within an image are often very similar, rather than storing each pixel independently, the frame image is divided into blocks, and the typically small differences between neighboring pixels can be encoded using fewer bits.
Intra-frame prediction exploits spatial redundancy, i.e. correlation among pixels within one frame, by calculating prediction values through extrapolation from already coded pixels, enabling effective delta coding . It is one of the two classes of predictive coding methods in video coding ; its counterpart is inter-frame prediction, which exploits temporal redundancy. So-called intra frames are coded independently of other frames and use only intra coding, whereas temporally coded predicted frames (e.g. MPEG's P- and B-frames) may use intra- as well as inter-frame prediction.
Usually only a few of the spatially closest known samples are used for the extrapolation. Formats that operate sample by sample, like Portable Network Graphics (PNG), can usually use one of several adjacent pixels (above, above left, above right, left) or some function of them, such as their average. Block-based (frequency transform) formats prefill whole blocks with prediction values extrapolated from usually one or two straight lines of pixels that run along their top and left borders.
Inter-frame coding was first specified by the CCITT in 1988–1990 in H.261 . H.261 was intended for teleconferencing and ISDN telephony.
Data is usually read from a video camera or a video card in the YCbCr data format (often informally called YUV for brevity). The coding process varies greatly depending on which type of encoder is used (e.g., JPEG or H.264 ), but the most common steps usually include: partitioning into macroblocks , transformation (e.g., using a DCT or wavelet ), quantization and entropy encoding .
It is used in intra-only codecs like ProRes , whose groups of pictures contain no inter frames .
| https://en.wikipedia.org/wiki/Intra-frame_coding |
Intra-species recognition is the recognition by a member of an animal species of a conspecific (another member of the same species). In many species, such recognition is necessary for procreation.
Different species may employ different methods, but all of them are based on one or more senses . [ 1 ] The recognition may happen by the chemical signature ( smell ), [ 2 ] by having a distinctive shape or color ( sight ), [ 2 ] [ 3 ] by emitting certain sounds ( hearing ), or even by behaviour patterns. Often a combination of these is used. [ 1 ]
Among human beings , the sense of sight is usually in charge of recognizing other members of the same species, perhaps with the subconscious help of smell. In particular, the human brain has a disproportionate amount of processing power dedicated to finely analyzing the features of a human face. This is why most humans can distinguish human beings from one another (barring look-alikes ), and can tell a human being apart from a similar species such as an anthropomorphic ape , with only a quick glance.
Some intra-species recognition is learned, for example in waterfowl , known as imprinting . [ 1 ] [ 3 ]
Intra-species recognition has been hypothesised as an explanation for the bizarre and varied structures found in dinosaurs , as it drives rapid evolution without a specific direction. [ 4 ] However, this has raised criticism, and the prevalence of species recognition in dinosaur evolution is doubted by many, [ 5 ] not least because it is a vague concept. [ 6 ]
Intra-species recognition systems are often subtle. For example, the chiffchaff and the willow warbler appear similar by eye, but their calls distinguish them clearly. [ 7 ] Sometimes, intra-species recognition is fallible: in many species of frog , males commonly misdirect their amplexus (mounting) to other species or even inanimate objects. [ 8 ] [ 9 ]
Heliconius charithonia displays intra-species recognition by roosting with conspecifics. They do this with the help of UV rhodopsins in the eye that help them distinguish between ultraviolet yellow pigments and regular yellow pigments. [ 10 ] They have also been known to emit chemical cues to recognize members of their own species. [ 11 ] | https://en.wikipedia.org/wiki/Intra-species_recognition |
Intracellular bacteria are bacteria that have the capability to enter and survive within the cells of a host organism. [ 1 ] These bacteria include many different pathogens that live in the cytoplasm and nuclei of the host cells they inhabit. Mycobacterium tuberculosis is an example of an intracellular bacterial species. [ 2 ] There are two types of intracellular bacteria: facultative intracellular bacteria, which can grow extracellularly or intracellularly, and obligate intracellular bacteria, which can grow only intracellularly. [ 3 ]
Examples of facultative intracellular bacteria include members of the genera Brucella , Legionella , Listeria , and Mycobacterium . These bacteria invade the human body and replicate inside its cells, evading the immune system and causing disease by disrupting the normal function of the cells they infect. Diseases caused by facultative intracellular bacteria include listeriosis (Listeria monocytogenes) , typhoid fever (Salmonella typhi) , Legionnaires' disease (Legionella pneumophila) , and salmonellosis (Salmonella enterica) , to name a few. [ 3 ] While they can invade the human body, they are also capable of living extracellularly. These bacteria can replicate within the environment, sustain their metabolic state, and survive harsh conditions by using mechanisms such as a bacterium-containing vacuole, lysosome resistance, and entering a survival state called persistence. [ 4 ]
Examples of obligate intracellular bacteria include members of the order Rickettsiales and members of the genus Mycoplasma . [ 1 ] These bacteria need a host in order to reproduce, and once they have invaded the body they cause disease. Unlike facultative intracellular bacteria, which can grow within or outside of a host's body, obligate intracellular bacteria cannot survive without host cells. They cannot reproduce outside of the host cell because they lack the metabolic processes and enzymes needed to reproduce, which the host cell provides. [ 3 ] Diseases caused by obligate intracellular bacteria include chlamydia (Chlamydia trachomatis) , Rocky Mountain spotted fever (Rickettsia rickettsii) , and tuberculosis (Mycobacterium tuberculosis) , to name a few. [ 3 ]
Hosts usually come into contact with the bacteria through the skin, but the bacteria can also be contracted from a bite, such as that of ticks, mites, or mosquitoes ( Rickettsia rickettsii ). [ 5 ] Listeria monocytogenes is found in soil, water, and decaying animals and plants. It is generally transmitted through food being processed or handled in areas contaminated with L. monocytogenes . [ 6 ] Legionella pneumophila is found in aquatic environments, such as artificial water systems like hot tubs and showers. [ 7 ] Salmonella typhi and Salmonella enterica are both transmitted via the fecal–oral route or through contaminated food and/or water. [ 8 ] Chlamydia trachomatis is spread by having unprotected sex. [ 9 ] Mycobacterium tuberculosis is spread through the air when one is near a person with tuberculosis. [ 10 ]
| https://en.wikipedia.org/wiki/Intracellular_bacteria |
Intracellular delivery is the process of introducing external materials into living cells. Materials that are delivered into cells include nucleic acids ( DNA and RNA ), proteins , peptides , impermeable small molecules , synthetic nanomaterials , organelles , and micron-scale tracers, devices and objects. [ 1 ] [ 2 ] Such molecules and materials can be used to investigate cellular behavior, engineer cell operations or correct a pathological function.
Medical applications of intracellular delivery range from in vitro fertilisation (IVF) [ 3 ] and mRNA vaccines [ 4 ] to gene therapy [ 5 ] and preparation of CAR-T cells . [ 6 ] Industrial applications include protein production , [ 7 ] biomanufacture , [ 8 ] and genetic engineering of plants and animals. [ 9 ] Intracellular delivery is a fundamental technique in the study of biology and genetics, such as the use of DNA plasmid transfection to investigate protein function in living cells. [ 10 ] A wide range of approaches exist for performing intracellular delivery including biological, chemical and physical techniques that work through either membrane disruption or packaging the delivery material in carriers. [ 1 ] [ 11 ] [ 12 ]
Intracellular delivery is at the intersection of cell biology and technology , and is related to many fields across science and medicine including genetics , biotechnology , bioengineering and drug delivery .
Analogous to the way computers operate through electronic signals, cells process and transmit information through molecules. Depending on the molecules and materials that are loaded into cells, different outcomes or applications can be achieved. Below are some of the main classifications of cargo materials used to investigate and engineer cells through intracellular delivery.
Transfection refers to the intracellular delivery of nucleic acids: DNA , RNA and their analogues . Nucleic acid materials that are commonly transfected into cells are plasmid DNA , mRNA , siRNA , and oligonucleotides . Transfection applications span three main areas: [ 1 ]
In basic research, transfection is a cornerstone technique in fields ranging from cell biology and genetics to immunology and drug discovery . [ 13 ] In biomanufacture , transfection is used for production of proteins, antibodies, viral vectors, and virus-like particles for vaccines. In cell-based therapies transfection is used for applications such as ex-vivo gene therapy, [ 14 ] hematopoietic stem cell engineering, [ 15 ] production of induced pluripotent stem cells , [ 16 ] and ex-vivo preparation of cells for immunotherapy. [ 17 ] Over the last 50 years nucleic acid transfection has been the most common subcategory of intracellular delivery.
Plasmid DNA began to be transfected into animal cells for the purpose of gene expression in the late 1970s via microinjection [ 18 ] and calcium phosphate methods. [ 19 ] Since then, it has been used to investigate gene and protein function in manifold studies. DNA plasmids are physically large and cumbersome molecules with a 5-10 kilo-basepair plasmid being >100 nm diameter in solution when free and uncondensed. [ 20 ] Nevertheless, due to the well-established and relatively low-cost techniques for editing and preparing them, they have been very commonly used in biological research.
In the 1970s it was shown that microinjection of mRNA resulted in protein expression. [ 21 ] In certain situations, mRNA transfection is considered advantageous for inducing protein expression compared with DNA plasmids, chiefly because mRNA acts in the cytoplasm without needing to enter the nucleus and carries no risk of genomic integration.
Thus, mRNA is considered a better option than DNA for most therapeutic applications although it is more expensive and intrinsically unstable.
Oligonucleotides are single or double-stranded sequences of DNA or RNA of less than 30 nucleotides in length. Small interfering RNA (siRNAs) are short 21-22 base pair duplexes of RNA that can be transfected into cells to silence gene expression . [ 23 ] Since their Nobel prize winning discovery in 1998, siRNA have been transfected into cells in thousands of biological studies in order to perturb gene function. Other oligonucleotides of interest for intracellular delivery include antisense oligonucleotides (ASOs), micro RNAs (miRNAs), and aptamers . Such oligonucleotides can be used to alter cell behaviour through several different mechanisms. [ 24 ]
Lipid nanoparticles and electroporation are currently widespread strategies for nucleic acid transfection . However, effective transfection remains a hurdle in many primary cells , stem cells , patient-derived cells and neurons . [ 25 ] The ability to conduct biological research and carry out potential medical applications in such cells is often limited by transfection efficiency and tolerance to treatment. Furthermore, there is currently a poor understanding of the long-term effects of performing transfection on cells within the human body.
Delivery of proteins into living cells, such as genome-editing nucleases , active inhibitory antibodies , or stimulatory transcription factors , represents a powerful toolset for manipulating and analyzing cell function. [ 26 ] Furthermore, effective intracellular delivery could expand the repertoire of usable protein drugs as most current protein-based therapeutics hit extracellular targets and this is a frontier of current research efforts. [ 27 ]
Delivery of purified proteins into cells began as early as the 1960s. Examples include amoeba microinjected with ferritin [ 28 ] and mouse eggs microinjected with bovine albumin . Because proteins have diverse sizes, shapes and charges, they cannot easily be delivered into cells with the one-size-fits-all solutions that cationic lipids provide for nucleic acid transfection. In contrast, a diverse range of methods have been used to deliver proteins into cells including: microinjection , osmotic lysis of pinosomes, hypotonic shock, scrape loading, bead loading, syringe loading, detergent exposure, electroporation , pore-forming toxins , cell penetrating peptides , nanocarriers, cell squeezing, nanoneedles, acoustic perturbations, and vapor nanobubbles. [ 1 ] For the purposes of genome editing, Cas9 protein combined with sgRNA has been delivered by methods ranging from electroporation, microinjection, lipid nanoparticle formulations, osmotically induced endocytosis followed by endosome disruption, microfluidic deformation, and cell penetrating peptides, among others. [ 1 ]
Small molecules requiring intracellular delivery include poorly permeable drugs, small molecule probes, and cryoprotectants:
An example of the first is bleomycin , an anticancer drug with poor permeability due to its positive charge and hydrophobicity. By performing intracellular delivery with electroporation, bleomycin potency can be increased more than a hundred-fold. [ 29 ] As for small molecule probes, when delivered to the cell interior, these molecules are capable of reporting cellular properties such as membrane potential, pH, and concentrations of ions. [ 1 ] One example is PFBI, a fluorescent dye that can be employed for measurement of intracellular potassium concentration. Finally, some candidate cryoprotectant molecules such as impermeable sugars are highly hydrophilic and do not readily diffuse across cell membranes. For example, trehalose (Mw = 342 Da) is a natural disaccharide synthesized by a range of organisms to help them withstand desiccation or freezing. Trehalose loaded into animal cells at concentrations up to 200 mM has been shown to provide excellent cryoprotection during freezing and thawing. [ 30 ]
Cargo materials in the microscale have been successfully delivered into cells for a variety of applications. For a century microinjection has been the dominant method for introducing microscale cargo into cells. A classic example was the transplant of a somatic cell nucleus into a frog egg to demonstrate that nuclei from fully differentiated somatic cells could grow into a new animal when inserted into an egg. [ 31 ] Microinjection was first used to inject sperm into eggs as a proof of concept for IVF in animals. [ 32 ] Artificial chromosomes have been engineered and transferred into cells by microinjection for proof-of-concept gene therapy. [ 33 ] Transplant of mitochondria has also been demonstrated in several cell types via microinjection. [ 34 ] More recently laser-triggered cavitation bubbles have been used to open transient holes in the cell membrane for the purpose of delivering bacteria and mitochondria. [ 35 ] Using microinjection or ballistic propulsion, micron-scale particles, spheres, and beads have been loaded into cells for cellular microrheology studies that assess internal mechanical behavior of cells. [ 36 ] For example, using microinjected PEGylated tracer beads of up to 5.6 micron, it was shown that motor-driven cytoplasmic mixing substantially enhanced intracellular movement of both small and large cellular components. [ 37 ]
Other materials of interest for intracellular delivery include carbon nanotubes , quantum dots , magnetic nanoparticles , and nanodevices that serve as sensors or probes. [ 38 ] [ 39 ]
The following are examples of medical treatments that rely on intracellular delivery in at least one step.
Hematopoietic stem cell -based gene therapies :
CAR-T cell immunotherapies :
In vivo viral vector-mediated gene therapy :
siRNA medicines:
mRNA vaccines : lipid nanoparticles deliver mRNA encoding the covid spike protein to prime the immune system against future exposure to SARS-CoV-2 . [ 42 ]
Antisense oligonucleotides (ASOs): the ASOs gain entry into liver cells to prevent the expression of pathogenic ApoB mRNA (US approval granted 2013).
In vitro fertilisation (IVF) for human pregnancies:
Safety. Research is still in the early stages of understanding the immediate and long-term medical effects of intracellular delivery of materials to human cells. For example, investigations have shown that children born through ICSI suffer more health problems than those naturally conceived. However, it is not known whether this is due to poorer health of the parents' reproductive systems or to aspects of the IVF and ICSI procedures. [ 45 ] HSC-based gene therapies prepared with gamma retroviral and lentiviral vectors have in some cases shown an increased risk of leukemia later on due to genotoxicity, [ 46 ] as occurred with Strimvelis. [ 47 ] Furthermore, the lipids used for intracellular delivery of therapeutic siRNA and mRNA may cause inflammatory reactions. [ 48 ] [ 49 ] In the case of patisiran, pretreatment with multiple anti-inflammatory drugs is used to minimize reactions to the nanoparticle. [ 50 ] There is currently little data available on the medical impact of intracellular delivery of novel chemical components of mRNA vaccines, such as SM-102 and ALC-0315, on both the short- and long-term health of the recipient population. [ 51 ] Thus, safety and unintended side-effects will continue to be a topic of importance for medical treatments that utilize intracellular delivery.
Current methods of intracellular delivery can be placed into two broad categories:
Membrane disruption-mediated techniques involve creating temporary holes in the cell membrane and delivering the cargo molecules via either [ 1 ] [ 11 ] (A) permeabilization and diffusive influx of materials from the extracellular solution, or (B) direct penetration with a vehicle or carrier that both punctures the plasma membrane and introduces the cargo of interest. The plasma membrane of the cell can be disrupted through mechanical, electrical, chemical, optical or thermal means. Intracellular delivery methods that employ permeabilization include:
Intracellular delivery methods that utilize direct penetration include:
Membrane disruption-mediated delivery methods can deliver almost any material that can be dispersed in solution, making them more universal than carrier-mediated methods. The major challenge for membrane disruption-mediated methods is to create holes of the optimal shape, size, location, and duration for the required delivery application. Excessive membrane damage should be avoided as it can kill cells or impair their function.
Carrier-mediated delivery techniques package the cargo into or onto a nanoscale carrier, which then enters the cell to deliver the cargo. Carriers generally gain entry to the cell interior via either endocytosis followed by endosomal escape, or fusion with the plasma membrane. [ 52 ] [ 53 ] However, there are rare reports of certain carriers crossing or transiently disrupting the plasma membrane through hitherto unknown mechanisms. [ 54 ]
Carrier-based approaches comprise various biochemical assemblies, mostly of molecular to nanoscale dimensions. The purpose of carriers is threefold: to protect the cargo from degradation, to promote its uptake by the cell, and to release it at the appropriate intracellular location.
Carriers can be bio-inspired, such as reconstituted viruses, virus like particles , vesicles , cell ghosts, and functional ligands and peptides . They may be based upon synthesis techniques from chemistry , materials science and nanotechnology , involving assembly of macromolecular complexes from organic and inorganic origins. Carriers that have been used for intracellular delivery include:
Research into how carriers enter cells indicates that most carriers enter via endocytosis before escaping from endosomal compartments into the cytoplasm . [ 53 ] [ 52 ] Mechanisms of endocytosis available to nanocarriers include phagocytosis and pinocytosis through clathrin-dependent and clathrin-independent pathways. [ 53 ] The internalization pathways employed by target cells depend on the size, shape, material composition, surface chemistry, and/or charge of the carrier. [ 53 ] [ 52 ] Cargo that is not able to escape endosomes is trafficked to lysosomes for degradation or recycled back to the cell surface. [ 55 ] [ 56 ] Efficiencies of around 1% endosomal escape have been reported for most non-viral carrier strategies, including lipid nanoparticles and cell-penetrating peptides. [ 56 ] [ 52 ] [ 57 ] Moreover, the exact mechanisms of endosome escape remain unclear and are a matter of ongoing research. [ 57 ]
Apart from endocytosis, some carriers are able to directly merge with the plasma membrane through fusion. Fusion events in biology include vesicle fusion, cell–cell fusion and cell–virus fusion. In these cases, juxtaposed membranes are pulled into close contact by specific protein–protein interactions and interfacial water is excluded to promote lipid mixing and subsequent fusion. Enveloped viruses may employ transmembrane viral proteins to mediate fusion with target cell membranes, and this mechanism has been exploited for engineered intracellular delivery. [ 52 ] An early example was the use of Sendai virus to fuse pre-loaded red blood cell ghosts with the plasma membrane of target cells. [ 58 ] A variation on this technique utilized expression of influenza hemagglutinin (HA) at the target cell membrane, which then binds sialic acid residues on the red blood cell surface to induce fusion. [ 59 ] Virosomes , which consist of viral membrane components reconstituted into liposomes or vesicles, also exhibit fusion capabilities for the purposes of intracellular delivery. Functional virosomes have been constructed with fusion components from Sendai , influenza , vesicular stomatitis and other viruses. [ 52 ] Some exosomes and extracellular vesicles have been reported to fuse with target cells [ 60 ] and may furthermore be engineered to fuse on demand. [ 61 ] Interestingly, fusogenic liposomes used for protein delivery have been reported to be capable of fusion by modulating only the lipid composition, without any need for the presence of fusogenic proteins or peptides. [ 62 ] Fusogenic carriers that have been used for intracellular delivery include (1) cell ghosts, dead cells that have had their cytoplasm replaced with cargo, (2) virosomes , cargo-loaded vesicles reconstituted to display functional viral proteins, and (3) fusogenic liposomes .
Viral vectors . Viral vectors exploit the viral infection pathway to enter cells but avoid the subsequent expression of viral genes that leads to replication and pathogenicity . This is done by deleting coding regions of the viral genome and replacing them with the DNA to be delivered, which either integrates into host chromosomal DNA or exists as an episomal vector. Viral vectors were first employed for gene delivery from the 1970s, constructed from SV40 [ 63 ] or retroviruses . [ 64 ] Newer generations of viral vector platforms have been produced based on components from lentivirus , retrovirus , adenovirus , adeno-associated virus and other viruses. [ 65 ] While highly efficient for DNA delivery, notable weaknesses of viral vectors are (1) labor-intensive and expensive protocols, (2) safety issues, (3) risk of causing immune/ inflammatory responses , (4) integration into the genome with recombinant vectors, (5) risk of insertional genotoxicity, and (6) limited packaging capacity (adenoviral and AAV vectors are typically restricted to carrying 5−7.5 kb).
Nanoparticles for transfection . The most commonly used nanoparticles for intracellular delivery of nucleic acids are based on assemblies of cationic lipids and polymers. These cationic molecules condense DNA plasmids (~50-200 nm), mRNA (10-100 nm) and other nucleic acids into compact nanoparticles with dimensions down to tens of nanometers. The positive charge of these particles facilitates their attraction to the cell surface due to the natural negative charge of most animal cells (−35 to −80 mV membrane potential ). Upon binding, endocytosis is thought to be most efficient for particles in the size range below 100 nm. [ 53 ] Complexation into nanoparticles also confers protection for nucleic acids against degradation until they are released to the appropriate intracellular compartment. [ 66 ]
From the 1960s it was observed that mixing nucleic acids with cationic molecules leads to the formation of macromolecular complexes that can transfect cells. Two early examples were the polymer diethylaminoethyl- dextran (DEAE-dextran)/nucleic acid combination (1968) [ 67 ] and the insoluble ionic salt calcium phosphate /nucleic acid precipitant (1973). [ 68 ] The use of cationic lipids for transfection began in the 1980s, [ 69 ] was termed " lipofection ", and became the basis for the popular product lipofectamine launched in 1993. Other cationic transfection reagents were developed in the 1990s based on dendrimers such as PAMAM [ 70 ] in 1993 ("superfect" reagent launched in the late 1990s) and cationic polymers such as PEI in 1995 [ 71 ] (marketed as "polyjet" soon after). Currently in research, most nucleic acid transfection is performed with lipid reagents, with polymer reagents and electroporation as other major options. Certain recalcitrant cells or in vivo applications may be better served by viral vectors. [ 72 ]
Advancements in microfabrication , nanotechnology , chemistry and other research fields have contributed to the improvements in precision and performance of intracellular delivery methods. [ 11 ] [ 12 ] [ 73 ] [ 74 ]
Electroporation.
Early versions of electroporation used bulk electrodes to apply electrical pulses of defined voltage to cells in solution in a cuvette . [ 75 ] Electroporation was then brought down to the microscale through the use of microfluidics in the late 1990s. [ 76 ] Following that, nano-electroporation was achieved through the use of nanoapertures and nanostraws. [ 77 ] The nano and micro versions of electroporation feature much higher precision and control over the size and location of membrane disruptions imposed on target cells. [ 78 ] A company called Maxcyte has developed a high-throughput version of flow electroporation that can process hundreds of millions of cells in tens of minutes. [ 79 ] Furthermore, other research groups have employed deep learning to improve electroporation parameters in high-throughput multi-well systems. [ 80 ]
Mechanical Contact.
The first versions of intracellular delivery protocols exploiting the mechanical force of objects striking the cell membrane were simple and crude methods such as scrape loading and glass bead loading. [ 81 ] In scrape loading, for example, a spatula is dragged across adherent cells that have been cultured on a flat substrate. As the cells peel off the substrate they undergo a variable amount of membrane damage and are able to take up molecules in solution. Scrape and bead loading have been used in many biological studies to introduce proteins and small molecules into cells. [ 1 ]
Since the late 1990s researchers have worked to improve the precision of solid contact-based membrane disruption through the use of nanoneedles and microfabricated devices. [ 11 ] Nanoneedles were first used for nucleic acid transfection in 2003 [ 82 ] and then demonstrated delivery of diverse cargoes in 2010. [ 83 ] They have been combined with electroporation, flow reservoirs, and detergents to add more functionality. [ 84 ] Moreover, nanostraws have been used to both insert molecules into and extract molecules from cells in a time-resolved manner. [ 85 ]
In 1999 it was found that passing cells through holes in polycarbonate filters created temporary disruptions in the cell membrane sufficient to achieve DNA transfection. [ 86 ] The method was termed "filtroporation" and did not receive much attention at the time. In 2012 researchers at MIT found that passing cells through constrictions in silicon microfluidic devices was capable of disrupting the cell membrane to achieve intracellular delivery of diverse materials. [ 87 ] [ 88 ] The method, termed cell squeezing, was spun out into a company called SQZ biotech that focuses on leveraging intracellular delivery technology to develop cell-based therapies . By adjusting the flow speed of cells and the shape and size of the microfluidic constriction, cell squeezing can be tailored for different cell types and delivery applications. Other research groups have demonstrated the cell squeezing concept in microsieves and PDMS-based microfluidic devices. [ 89 ]
Cell squeezing has been combined with electroporation to achieve rapid delivery of DNA and other materials into the nucleus . [ 90 ] This works by first introducing holes into the plasma membrane, then having an electrical pulse serve to 1) disrupt the nuclear membrane , and 2) drive negatively charged nucleic acids into the cell. [ 91 ] Microfluidic cell squeezing followed by downstream electroporation has been shown to cause temporary disruptions in the nuclear membrane that were repaired within 15 minutes. [ 90 ]
Fluid Shear.
Early examples of using fluid shear forces to controllably disrupt the cell membrane include conventional ultrasound , [ 92 ] syringe loading for cells in suspension [ 93 ] and the use of cone-plate viscometers on adherent cells. [ 94 ] In syringe loading, suspensions of cells are sucked and expelled from a syringe through a fine needle tip. The fluid shear forces at the tip of the needle depend upon the flow velocity and can be tailored to disrupt the cell membrane. Since the 1990s, more precise strategies to employ fluid shear forces to permeabilize cells include microfluidics , ultrasound , shock waves , and laser -based methods. [ 1 ]
Laser irradiance of an absorbent object in an aqueous environment can produce a variety of effects including cavitation , plasma production , chemical reactions, and heat. [ 1 ] Both laser-particle and laser-surface interactions have been exploited to create cavitation events that expose cells to locally concentrated fluid shear forces. For example, a metallic nanostructure can be used as a seed structure to harvest short laser pulse energy and convert it into highly localized explosive vapor bubbles. A high-throughput version of this concept was unveiled in 2015. [ 95 ] Substrates arrayed with pores lined by metallic absorbers were irradiated to generate exploding cavitation bubbles underneath the basal side of adherent cells. Membrane permeabilization was synchronized with active pumping of cargo through the pores to successfully introduce living bacteria (>1 micron) into the cytoplasm of several cell types.
Thermal Effects.
A simple way of delivering molecules into cells is to heat the plasma membrane until holes form. At sufficiently high temperatures, lipid bilayers will dissociate due to the kinetic energy of the constituent molecules being greater than the forces that maintain the membrane formation, namely the hydrophobic forces that repel water from the lipid tails. [ 1 ] The downside of this method is that it is highly non-specific and may cause excessive harm to cells. Strategies for permeabilizing cells by thermal means include (1) cycling cells through a cooling−heating cycle, which may or may not involve freezing, (2) heating cells to supraphysiological temperatures, and (3) transient intense heating of a small part of the cell. [ 1 ] In the latter case, thermal inkjet printers have been successfully used for intracellular delivery and transfection in animal cells. [ 96 ] Laser-particle interactions have been reported to precisely thermally disrupt cell membranes. Gold nanoparticles were packed into a dense surface layer where >10 s of infrared laser irradiation heats the underside of cells to trigger permeabilization and delivery of dyes, dextrans and plasmids. [ 97 ] In 2021 this concept was developed further when researchers showed that light-sensitive iron oxide nanoparticles embedded in biocompatible electrospun nanofibres can trigger membrane permeabilization by photothermal effects without direct contact between cells and nanoparticles. [ 98 ] This method was capable of delivering CRISPR–Cas9 machinery and siRNA to adherent and suspension cells, including embryonic stem cells and hard-to-transfect T cells .
mRNA medicines.
Advances in lipid nanoparticle formulation and nucleic acid chemistry have been critical in the development of nucleic acid therapeutics, such as mRNA vaccines. For example, designing the cationic ionizable lipids, which are a key component of lipid nanoparticle formulations, with an acid dissociation constant (pKa) close to the early endosomal pH enables endosomal release into the cytoplasm after endocytosis. [ 99 ] In the Moderna covid vaccine, lipid nanoparticles composed of the ionizable lipid SM-102 , cholesterol , 1,2-distearoyl-sn-glycero-3-phosphocholine (DSPC) and PEG 2000-DMG encapsulate the mRNA. [ 4 ] The Pfizer/BioNTech covid vaccines employ the ALC-0315 lipid from Acuitas Therapeutics and formulate it with cholesterol , DSPC , and a PEG -lipid ( ALC-0159 ) together with mRNA. [ 4 ] After intramuscular injection , the nanoparticles enter cells, the mRNA is released into the cytoplasm, and the expression of the SARS-CoV-2 spike protein occurs in patient cells.
siRNA Medicines. Patisiran , the first siRNA-based medicine to receive regulatory approval from the FDA in 2018, is based on lipid nanoparticle formulations that package and deliver siRNA to the liver for the silencing of an abnormal, pathogenic form of the transthyretin gene. The therapeutic siRNA is formulated with two lipid excipients, DLin-MC3-DMA and PEG2000-C-DMG, in a lipid nanoparticle that is intravenously infused into the patient and targets hepatocytes in the liver. Moreover, chemically modified nucleotides in siRNA therapeutics improve chemical stability and efficacy, assist in targeting certain cell types, and serve to reduce adverse immunological reactions. [ 100 ] Diverse ligands including small molecules , carbohydrates , aptamers , peptides and antibodies have been covalently linked to siRNA in order to improve cellular uptake and target specific cell types. For example, GalNAc -siRNA conjugates not only provide an approach for ligand-based cell internalization without the need for cationic materials, but also target hepatocytes specifically. [ 101 ] GalNAc -siRNA conjugates were employed in the second FDA-approved siRNA medicine, givosiran , which is administered to treat acute hepatic porphyria by down-regulating ALAS1 expression in the liver.
ASO Medicines.
The first approved antisense oligonucleotide (ASO) medication was unveiled in 1998 with fomivirsen , a 21-mer oligonucleotide that blocks the translation of cytomegalovirus mRNA. [ 102 ] By binding pre-mRNA or mRNA, ASOs can post-transcriptionally regulate protein synthesis through mechanisms including modification of pre-mRNA processing and splicing, competitive inhibition, steric blockade of translational machinery, and degradation of bound target RNA. [ 43 ] Chemical modifications of ASO nucleosides, nucleobases, and the internucleoside backbone are key for improving pharmacokinetics and pharmacodynamics while maintaining target affinity and efficacy. Therapeutically effective ASOs are heavily modified, so they do not require a carrier for intracellular delivery. Most medically applicable ASOs are naked molecules that are able to enter cells through endocytosis and exert their therapeutic effects by binding their intracellular target. [ 4 ]
Improved Viral Vectors.
Nearly 70% of gene therapy clinical trials have utilized viral vectors for the gene delivery step. [ 103 ] Adenovirus, adeno-associated virus (AAV) and lentiviral vectors are currently the main viral vectors used in biotechnology and clinical applications. [ 104 ] AAV is a prominent example of the improvements made in viral vectors. Vector engineering can increase AAV transduction efficiency (by optimizing the transgene cassette), vector tropism (using capsid engineering) and the ability of the capsid and transgene to avoid the host immune response (by genetically modifying these components), as well as optimize the large-scale production of AAV. [ 105 ] Moreover, vector engineering approaches including directed evolution have greatly enhanced the efficiency and targeting of AAV vectors, resulting in >100-fold improvement in delivery efficiency in some cases. [ 106 ] In another example of AAV engineering, machine learning has been applied to generate AAV variants that can circumvent immune responses arising from previous exposure. [ 107 ]
Virus-like Particles . Virus-like particles (VLPs) are assemblies of viral proteins that package cargo materials such as mRNAs, proteins, or RNPs in addition to, or instead of, viral genetic material. Because VLPs are derived from existing viral scaffolds, they exploit natural properties of viruses that enable efficient intracellular delivery, including their ability to encapsulate cargos, escape endosomes, and be reprogrammed to target different cell types. However, unlike viruses, VLPs can deliver their cargo as mRNA or protein instead of as DNA, which substantially reduces the risks of viral genome integration. VLPs are thus of interest for delivering molecular cargo such as gene editing agents as they can offer benefits of both viral and non-viral delivery. [ 108 ] Engineered DNA-free VLPs have recently been shown to efficiently package and deliver base editor or Cas9 ribonucleoproteins to mammalian cells for the purpose of gene editing. [ 109 ] It was reported that delivery of gene editing proteins with VLPs offered substantially minimized off-target editing compared with plasmid and viral delivery in vitro . | https://en.wikipedia.org/wiki/Intracellular_delivery |
Every organism requires energy to be active. [ 1 ] However, to obtain energy from its outside environment, cells must not only retrieve molecules from their surroundings but also break them down. [ 1 ] This process is known as intracellular digestion. [ 1 ] In its broadest sense, intracellular digestion is the breakdown of substances within the cytoplasm of a cell . In detail, a phagocyte's role is to obtain food particles and digest them in a vacuole. [ 2 ] For example, following phagocytosis , the ingested particle (or phagosome) fuses with a lysosome containing hydrolytic enzymes to form a phagolysosome ; the pathogens or food particles within the phagosome are then digested by the lysosome's enzymes.
Intracellular digestion can also refer to the process in which animals that lack a digestive tract bring food items into the cell for the purposes of digestion for nutritional needs. This kind of intracellular digestion occurs in many unicellular protozoans, in Pycnogonida , in some molluscs , Cnidaria and Porifera . There is another type of digestion, called extracellular digestion . In amphioxus , digestion is both extracellular and intracellular.
Intracellular digestion is divided into heterophagic digestion and autophagic digestion. [ 3 ] Both types take place in the lysosome , and each has a very specific function. [ 3 ] Heterophagic intracellular digestion breaks down the molecules that are brought into a cell by endocytosis. [ 3 ] The degraded molecules must be delivered to the cytoplasm, which is only possible once they have been hydrolyzed in the lysosome. [ 3 ] Autophagic intracellular digestion, by contrast, takes place within the cell itself: it digests the cell's own internal molecules. [ 3 ]
Generally, autophagy includes three small branches, which are macroautophagy , microautophagy , and chaperone-mediated autophagy . [ 4 ]
Most organisms that use intracellular digestion belong to Kingdom Protista, such as amoeba and paramecium .
Amoeba
Amoeba uses pseudopodia to capture food for nutrition in a process called phagocytosis .
Paramecium
Paramecium uses cilia in the oral groove to sweep food into the mouth pore, which leads to the gullet. At the end of the gullet, a food vacuole forms. Undigested food is carried to the anal pore, where it is expelled.
Euglena
Euglena is photosynthetic but also engulfs and digests microorganisms. | https://en.wikipedia.org/wiki/Intracellular_digestion |
Intracellular pH ( pH i ) is the measure of the acidity or basicity (i.e., pH ) of intracellular fluid . The pH i plays a critical role in membrane transport and other intracellular processes. In an environment with the improper pH i , biological cells may have compromised function. [ 1 ] [ 2 ] Therefore, pH i is closely regulated in order to ensure proper cellular function, controlled cell growth, and normal cellular processes. [ 3 ] The mechanisms that regulate pH i are usually considered to be plasma membrane transporters of which two main types exist — those that are dependent and those that are independent of the concentration of bicarbonate ( HCO − 3 ). Physiologically normal intracellular pH is most commonly between 7.0 and 7.4, though there is variability between tissues (e.g., mammalian skeletal muscle tends to have a pH i of 6.8–7.1). [ 4 ] [ 5 ] There is also pH variation across different organelles , which can span from around 4.5 to 8.0. [ 6 ] [ 7 ] pH i can be measured in a number of different ways. [ 3 ] [ 8 ]
Intracellular pH is typically lower than extracellular pH due to lower concentrations of HCO 3 − . [ 9 ] A rise in extracellular (e.g., serum ) partial pressure of carbon dioxide ( pCO 2 ) above 45 mmHg leads to the formation of carbonic acid , which causes a decrease in pH i as it dissociates : [ 10 ]
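The dissociation referred to is the standard carbonic acid equilibrium (reconstructed here, since the formula itself does not survive in this text):

$$\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}$$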
Since biological cells contain fluid that can act as a buffer, pH i can be maintained fairly well within a certain range. [ 11 ] Cells adjust their pH i accordingly upon an increase in acidity or basicity, usually with the help of CO 2 or HCO 3 – sensors present in the membrane of the cell. [ 3 ] These sensors can permit H + to pass through the cell membrane as needed, coupling pH i to extracellular pH. [ 12 ]
Major intracellular buffer systems include those involving proteins or phosphates. Since proteins have acidic and basic regions, they can serve as both proton donors and acceptors in order to maintain a relatively stable intracellular pH. In the case of a phosphate buffer, substantial quantities of a weak acid and its conjugate weak base (H 2 PO 4 – and HPO 4 2– ) can accept or donate protons accordingly in order to conserve intracellular pH: [ 13 ] [ 14 ]
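The buffering equilibrium implied here (again reconstructed, as the original formula is missing) is the second dissociation of phosphoric acid:

$$\mathrm{H_2PO_4^- \rightleftharpoons H^+ + HPO_4^{2-}}$$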
The pH within a particular organelle is tailored for its specific function.
For example, lysosomes have a relatively low pH of 4.5. [ 6 ] Additionally, fluorescence microscopy techniques have indicated that phagosomes also have a relatively low internal pH. [ 15 ] Since these are both degradative organelles that engulf and break down other substances, they require high internal acidity in order to successfully perform their intended function. [ 15 ]
In contrast to the relatively low pH inside lysosomes and phagosomes, the mitochondrial matrix has an internal pH of around 8.0, which is approximately 0.9 pH units higher than that of the intermembrane space. [ 6 ] [ 16 ] Since oxidative phosphorylation occurs inside the mitochondria, this pH discrepancy is necessary to create a proton gradient across the inner membrane. The resulting proton-motive force is ultimately what allows the mitochondria to generate large quantities of ATP. [ 17 ]
There are several common ways in which intracellular pH (pH i ) can be measured, including with a microelectrode, with a pH-sensitive dye, or with nuclear magnetic resonance (NMR) techniques. [ 18 ] [ 19 ] For measuring pH inside organelles, a technique utilizing pH-sensitive green fluorescent proteins (GFPs) may be used. [ 20 ]
Overall, each of these methods has its own advantages and disadvantages. Dyes are perhaps the easiest to use and are fairly precise, while NMR is comparatively less precise. [ 18 ] Furthermore, a microelectrode may be impractical when the cells are too small or when the cell membrane must remain undisturbed. [ 19 ] GFPs are unique in that they provide a noninvasive way of determining pH inside different organelles, yet this method is not the most quantitatively precise way of determining pH. [ 21 ]
The microelectrode method for measuring pH i consists of inserting a very small electrode into the cell’s cytosol through a very small hole made in the plasma membrane. [ 19 ] Because the fluid inside the microelectrode has a high H + concentration relative to the outside of the electrode, a potential develops across the electrode tip due to the pH discrepancy. [ 18 ] [ 19 ] From this voltage difference, and the predetermined pH of the fluid inside the electrode, one can determine the intracellular pH (pH i ) of the cell of interest. [ 19 ]
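As an illustration of the arithmetic involved, the sketch below converts an electrode reading into pH i using the ideal Nernstian slope (about 59 mV per pH unit at 25 °C). This is a minimal sketch, not from the source: the function name, calibration values, and single-point calibration are illustrative assumptions, and real measurements also subtract the membrane potential recorded with a separate reference electrode.

```python
# Minimal sketch (assumptions noted above): pH from an H+-selective
# microelectrode via the Nernst relation E = const - slope * pH.

R = 8.314     # gas constant, J/(mol*K)
F = 96485.0   # Faraday constant, C/mol
T = 298.15    # temperature, K (25 degrees C)

# Ideal Nernstian slope in mV per pH unit: (RT/F) * ln(10) ~ 59.2 mV
SLOPE_MV = 1000.0 * (R * T / F) * 2.302585

def intracellular_ph(v_cell_mV, v_calib_mV, ph_calib):
    """One-point calibration: v_calib_mV was recorded in a buffer of
    known ph_calib. A more positive potential means more H+ (lower pH),
    so the potential difference is subtracted."""
    return ph_calib - (v_cell_mV - v_calib_mV) / SLOPE_MV

# Example: -12 mV in a pH 7.40 calibration buffer, +6 mV in the cytosol
print(round(intracellular_ph(6.0, -12.0, 7.40), 2))  # ~7.1
```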
Another way to measure intracellular pH (pH i ) is with dyes that are sensitive to pH and fluoresce differently at different pH values. [ 15 ] [ 22 ] This technique, which makes use of fluorescence spectroscopy, consists of adding such a dye to the cytosol of a cell. [ 18 ] [ 19 ] By exciting the dye with light and measuring the wavelength of the light emitted as the dye returns to its ground state, one can relate the emission to the intracellular pH of the given cell. [ 18 ] [ 19 ]
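In practice, such dyes are often used ratiometrically: fluorescence is read at two wavelengths and the ratio is converted to pH using calibration values obtained in buffers of known pH. The sketch below uses the standard calibration form for a ratio that increases with pH, omitting the wavelength-dependent correction factor; all numeric values are illustrative assumptions, not data from the source.

```python
import math

def ph_from_ratio(R, R_min, R_max, pKa):
    """pH from a ratiometric pH-sensitive dye. R_min and R_max are the
    ratios measured in fully acidic and fully basic calibration buffers,
    and pKa is the dye's apparent pKa."""
    return pKa + math.log10((R - R_min) / (R_max - R))

# Illustrative calibration: R_min = 0.5, R_max = 4.0, pKa = 7.0
print(round(ph_from_ratio(2.0, 0.5, 4.0, 7.0), 2))  # ~6.88
```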
In addition to using pH-sensitive electrodes and dyes, nuclear magnetic resonance (NMR) spectroscopy can also be used to quantify pH i . [ 19 ] NMR reveals information about the interior of a cell by placing the cell in a strong magnetic field. [ 18 ] [ 19 ] Based on the ratio of the protonated to deprotonated forms of phosphate compounds in a given cell, the internal pH of the cell can be determined. [ 18 ] Additionally, NMR may also be used to detect intracellular sodium, which can also provide information about the pH i . [ 23 ]
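The phosphate-based readout follows the Henderson–Hasselbalch equation: the observed ³¹P chemical shift of inorganic phosphate is a population-weighted average of its protonated and deprotonated forms. The sketch below uses limiting shifts and a pKa commonly quoted for in vivo ³¹P spectra referenced to phosphocreatine; treat them as typical assumed values rather than figures from the source.

```python
import math

def ph_from_31p_shift(delta_obs, delta_acid=3.27, delta_base=5.69, pKa=6.75):
    """Henderson-Hasselbalch applied to the observed chemical shift of
    inorganic phosphate (ppm, relative to phosphocreatine). delta_acid
    and delta_base are the limiting shifts of H2PO4- and HPO4 2-."""
    return pKa + math.log10((delta_obs - delta_acid) / (delta_base - delta_obs))

# A Pi-PCr shift of ~4.9 ppm corresponds to a typical resting pH_i:
print(round(ph_from_31p_shift(4.9), 2))  # ~7.06
```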
Using NMR spectroscopy, it has been determined that lymphocytes maintain a constant internal pH of 7.17 ± 0.06, though, like all cells, their intracellular pH changes in the same direction as extracellular pH. [ 24 ]
To determine the pH inside organelles, pH-sensitive GFPs are often used as part of a noninvasive and effective technique. [ 20 ] By using cDNA as a template along with the appropriate primers, the GFP gene can be expressed in the cytosol, and the proteins produced can be targeted to specific regions within the cell, such as the mitochondria, Golgi apparatus, cytoplasm, and endoplasmic reticulum. [ 21 ] If GFP mutants that are highly sensitive to pH in intracellular environments are used in these experiments, the relative amount of resulting fluorescence can reveal the approximate surrounding pH. [ 21 ] [ 25 ]
Intracellular space is the interior space of the plasma membrane . [ 1 ] [ 2 ] It contains about two-thirds of total body water ( TBW ). Cellular rupture may occur if the intracellular space becomes dehydrated or, conversely, if it becomes too swollen. Thus it is important for the fluid to stay at an optimal volume. [ 1 ]
Intracellular transport is the movement of vesicles and substances within a cell . Intracellular transport is required for maintaining homeostasis within the cell by responding to physiological signals. [ 1 ] Proteins synthesized in the cytosol are distributed to their respective organelles according to their specific amino acid sorting sequences. [ 2 ] Eukaryotic cells transport packets of components to particular intracellular locations by attaching them to molecular motors that haul them along microtubules and actin filaments. Since intracellular transport relies heavily on microtubules for movement, the components of the cytoskeleton play a vital role in trafficking vesicles between organelles and the plasma membrane by providing mechanical support. Through this pathway, it is possible to facilitate the movement of essential molecules such as membrane-bound vesicles and organelles, mRNA , and chromosomes.
Intracellular transport is unique to eukaryotic cells because they possess organelles enclosed in membranes, between which the exchange of cargo must be mediated. [ 3 ] Conversely, in prokaryotic cells there is no need for this specialized transport mechanism, because there are no membranous organelles or compartments to traffic between. Prokaryotes are able to subsist by allowing materials to enter the cell via simple diffusion . Intracellular transport is more specialized than diffusion; it is a multifaceted process which utilizes transport vesicles . Transport vesicles are small structures within the cell, consisting of a fluid enclosed by a lipid bilayer, that hold cargo. A vesicle will typically execute cargo loading and vesicle budding, vesicle transport, binding to a target membrane, and fusion of the vesicle membrane with the target membrane. To ensure that these vesicles travel in the right direction, and to further organize the cell, special motor proteins attach to cargo-filled vesicles and carry them along the cytoskeleton. For example, they have to ensure that lysosomal enzymes are transferred specifically to the Golgi apparatus and not to another part of the cell, where they could have deleterious effects.
Small membrane-bound vesicles responsible for transporting proteins from one organelle to another are commonly found in the endocytic and secretory pathways . Vesicles bud from their donor organelle and release their contents by a fusion event at a particular target organelle. [ 4 ] : 634 The endoplasmic reticulum serves as a channel through which proteins pass, bound for their final destination. [ 3 ] Outbound proteins from the endoplasmic reticulum bud off into transport vesicles that travel along the cell cortex to reach their specific destinations. [ 3 ] Since the ER is the site of protein synthesis, it serves as the donor organelle, and the cis face of the Golgi, where proteins and signals are received, serves as the acceptor. In order for the transport vesicle to accurately undergo a fusion event, it must first recognize the correct target membrane and then fuse with that membrane.
Rab proteins on the surface of the transport vesicle are responsible for aligning with the complementary tethering proteins found on the respective organelle's cytosolic surface. [ 3 ] This fusion event allows for the delivery of the vesicle's contents, mediated by proteins such as SNARE proteins. SNAREs are small, tail-anchored proteins, often post-translationally inserted into membranes, that are responsible for the fusion events by which vesicles move between organelles in the cytosol. There are two forms of SNAREs, the t-SNARE and the v-SNARE, which fit together much like a lock and key. The t-SNAREs function by binding to the membranes of the target organelles, while the v-SNAREs function by binding to the vesicle membranes.
Intracellular transport is an overarching category of how cells obtain nutrients and signals. One very well understood form of such transport is endocytosis . Endocytosis is defined as the uptake of material by the invagination of the plasma membrane. [ 4 ] More specifically, eukaryotic cells use endocytosis for the uptake of nutrients, the downregulation of growth factor receptors, and as a mass regulator of signaling circuits. This method of transport begins at the cell surface, as in the uptake of large particles such as bacteria via phagocytosis , in which a cell engulfs a solid particle to form an internal vesicle called a phagosome. However, many of these processes have an intracellular component. Phagocytosis is of great importance to intracellular transport because once a substance is deemed harmful and engulfed in a vesicle, it can be trafficked to the appropriate location for degradation. These endocytosed molecules are sorted into early endosomes within the cell, which serve to further sort these substances to the correct final destination (in the same way the Golgi does in the secretory pathway). From here, the early endosome starts a cascade of transport in which the cargo is eventually hydrolyzed inside the lysosome for degradation. This capability is necessary for the degradation of any cargo that is harmful or unnecessary for the cell and is commonly seen in response to foreign material. Phagocytosis has an immunologic function and a role in apoptosis . Additionally, endocytosis can be observed in the nonspecific internalization of fluid droplets via pinocytosis and in receptor-mediated endocytosis .
The transport mechanism depends on the material being moved. Intracellular transport that requires quick movement uses an actin-myosin mechanism, while more specialized functions require microtubules for transport. [ 5 ] Microtubules function as tracks in the intracellular transport of membrane-bound vesicles and organelles. This process is propelled by motor proteins such as dynein . Motor proteins connect the transport vesicles to microtubules and actin filaments to facilitate intracellular movement. [ 1 ] Microtubules are organized so that their plus ends extend toward the periphery of the cell and their minus ends are anchored within the centrosome; they therefore rely on the motor proteins kinesins (plus end-directed) and dyneins (minus end-directed) to transport vesicles and organelles in opposite directions through the cytoplasm. [ 6 ] Each type of membrane vesicle is specifically bound to its own kinesin motor protein via binding within the tail domain. One of the major roles of microtubules is to transport membrane vesicles and organelles through the cytoplasm of eukaryotic cells. It is speculated that cargo within areas of the cell considered "microtubule-poor" is probably transported along microfilaments, aided by a myosin motor protein. In this manner, microtubules also assist the transport of chromosomes towards the spindle poles by utilizing the dynein motor proteins during anaphase .
By understanding the components and mechanisms of intracellular transport, it is possible to see its implication in disease. Defects include improper sorting of cargo into transport carriers, faulty vesicle budding, problems in the movement of vesicles along cytoskeletal tracks, and failures of fusion at the target membrane. Since the life cycle of the cell is a highly regulated and important process, if any component goes awry there is the possibility of deleterious effects. If the cell is unable to correctly execute components of the intracellular pathway, protein aggregates may form. Growing evidence supports the concept that deficits in axonal transport contribute to pathogenesis in multiple neurodegenerative diseases. It has been proposed that protein aggregation due to faulty transport is a leading cause of the development of ALS , Alzheimer's disease and dementia . [ 7 ]
On the other hand, the intracellular transport processes driven by these motor proteins present opportunities for pharmacological targeting. By understanding the way in which substances move along neurons or microtubules, it is possible to target specific pathways for disease. Currently, many drug companies aim to exploit intracellular transport mechanisms to deliver drugs to localized regions and target cells without harming healthy neighboring cells. The potential for this type of treatment in anti-cancer drugs is an exciting, promising area of research.
In astronomy , the intracluster medium ( ICM ) is the superheated plasma that permeates a galaxy cluster . The gas consists mainly of ionized hydrogen and helium and accounts for most of the baryonic material in galaxy clusters. The ICM is heated to temperatures on the order of 10 to 100 megakelvins , emitting strong X-ray radiation.
The ICM is composed primarily of ordinary baryons , mainly ionised hydrogen and helium. [ 1 ] This plasma is enriched with heavier elements, including iron . The average amount of heavier elements relative to hydrogen, known as metallicity in astronomy, ranges from a third to a half of the value in the sun . [ 1 ] [ 2 ] Studying the chemical composition of the ICM as a function of radius has shown that the cores of galaxy clusters are more metal-rich than their outskirts. [ 2 ] In some clusters (e.g. the Centaurus cluster ) the metallicity of the gas can rise above that of the sun. [ 3 ] Due to the gravitational field of clusters, metal-enriched gas ejected from supernovae remains gravitationally bound to the cluster as part of the ICM. [ 2 ] By looking at varying redshifts , which correspond to different epochs in the evolution of the Universe, the ICM can provide a history record of element production in a galaxy. [ 4 ]
Roughly 15% of a galaxy cluster's mass resides in the ICM. The stars and galaxies contribute only around 5% to the total mass. It is theorized that most of the mass in a galaxy cluster consists of dark matter and not baryonic matter. For the Virgo Cluster , the ICM contains roughly 3 × 10 14 M ☉ while the total mass of the cluster is estimated to be 1.2 × 10 15 M ☉ . [ 1 ] [ 5 ]
Although the ICM on the whole contains the bulk of a cluster's baryons, it is not very dense, with typical values of 10 −3 particles per cubic centimeter. The mean free path of the particles is roughly 10 16 m, or about one light-year. The density of the ICM rises towards the centre of the cluster, with a relatively strong peak. In addition, the temperature of the ICM typically drops to 1/2 or 1/3 of the outer value in the central regions. Once the density of the plasma reaches a critical value, interactions between the ions become frequent enough to ensure cooling via X-ray radiation. [ 6 ]
As the ICM is at such high temperatures, it emits X-ray radiation, mainly by the bremsstrahlung process and X-ray emission lines from the heavy elements. [ 1 ] These X-rays can be observed using an X-ray telescope and through analysis of this data, it is possible to determine the physical conditions, including the temperature, density, and metallicity of the plasma.
Measurements of the temperature and density profiles in galaxy clusters allow for a determination of the mass distribution profile of the ICM through hydrostatic equilibrium modeling. The mass distributions determined from these methods reveal masses that far exceed the luminous mass seen and are thus a strong indication of dark matter in galaxy clusters. [ 7 ]
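As a sketch of how such a mass determination works (my own illustration of the standard hydrostatic estimate, not the method or data of any cited study): under hydrostatic equilibrium the enclosed mass follows from the radial density and temperature profiles via M(<r) = −(k_B T r)/(G μ m_p) × (dln n/dln r + dln T/dln r). The toy profile below is an assumed isothermal beta-model-like cluster; the parameter values and names are illustrative.

```python
import numpy as np

G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.381e-23   # Boltzmann constant, J/K
m_p = 1.673e-27   # proton mass, kg
mu  = 0.6         # mean molecular weight of a fully ionized H/He plasma

def hydrostatic_mass(r, n, T):
    """Enclosed mass (kg) from hydrostatic equilibrium, given radius r (m),
    gas density n (m^-3) and temperature T (K) on the same grid.
    Endpoint values are noisy due to one-sided finite differences."""
    dln_n = np.gradient(np.log(n), np.log(r))
    dln_T = np.gradient(np.log(T), np.log(r))
    return -(k_B * T * r) / (G * mu * m_p) * (dln_n + dln_T)

# Toy isothermal cluster: core radius ~100 kpc, T = 5e7 K
r = np.logspace(20.0, 22.5, 200)               # ~3 kpc to ~1 Mpc, in metres
n = 1e4 * (1.0 + (r / 3e21) ** 2) ** -1.5      # m^-3 (0.01 cm^-3 centrally)
T = np.full_like(r, 5e7)                       # K
M = hydrostatic_mass(r, n, T)
print(f"M(<~1 Mpc) ~ {M[-1] / 2e30:.1e} solar masses")  # a few x 10^14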
Inverse Compton scattering of low-energy photons by the energetic electrons in the ICM causes distortions in the spectrum of the cosmic microwave background radiation (CMB) , known as the Sunyaev–Zel'dovich effect . These temperature distortions in the CMB can be used by telescopes such as the South Pole Telescope to detect dense clusters of galaxies at high redshifts. [ 8 ]
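For reference, the amplitude of this distortion along a line of sight is conventionally quantified by the Compton y parameter, an integral of the electron pressure (a standard result, stated here for clarity rather than taken from this text):

$$y = \frac{\sigma_T}{m_e c^2} \int n_e \, k_B T_e \, \mathrm{d}l$$

where σ_T is the Thomson cross-section, n_e the electron density, and T_e the electron temperature.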
In December 2022, the James Webb Space Telescope was reported to be studying the faint light emitted in the intracluster medium. [ 9 ] A 2018 study found this light to be an "accurate luminous tracer of dark matter". [ 10 ]
Plasma in regions of the cluster where the cooling time is shorter than the age of the system should be cooling through strong X-ray radiation, whose emissivity is proportional to the square of the density. Since the density of the ICM is highest towards the center of the cluster, the radiative cooling time drops significantly there. [ 11 ] The central cooled gas can no longer support the weight of the external hot gas, and the pressure gradient drives what is known as a cooling flow , in which the hot gas from the external regions flows slowly towards the center of the cluster. This inflow would result in regions of cold gas and thus regions of new star formation . [ 12 ] More recently, however, with the launch of new X-ray telescopes such as the Chandra X-ray Observatory , images of galaxy clusters with better spatial resolution have been taken. These new images do not show signs of new star formation on the order of what was historically predicted, motivating research into the mechanisms that would prevent the central ICM from cooling. [ 11 ]
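To make the density scaling concrete, here is a rough order-of-magnitude sketch of the bremsstrahlung cooling time (my own illustration with approximate coefficients, not a calculation from the source). The emissivity scales as density squared while the heat content scales only linearly, so the cooling time falls inversely with density and the dense core cools far faster than the outskirts.

```python
import math

def cooling_time_yr(n_e, T):
    """Rough isochoric cooling time for a hot hydrogen/helium plasma.
    n_e in cm^-3, T in K. Uses an approximate free-free emissivity
    ~1.7e-27 * sqrt(T) * n_e * n_i erg cm^-3 s^-1 with n_i ~ 0.9 n_e
    (Gaunt factor folded in approximately)."""
    k_B = 1.38e-16                                  # erg/K
    emissivity = 1.7e-27 * math.sqrt(T) * n_e * 0.9 * n_e
    thermal_energy = 1.5 * (n_e + 0.9 * n_e) * k_B * T
    return thermal_energy / emissivity / 3.15e7     # seconds -> years

# Outskirts (n_e ~ 1e-3 cm^-3) vs core (n_e ~ 0.1 cm^-3) at T = 5e7 K:
print(f"{cooling_time_yr(1e-3, 5e7):.1e} yr vs {cooling_time_yr(0.1, 5e7):.1e} yr")
```

With these illustrative numbers the outskirts take tens of gigayears to cool (longer than the age of the Universe) while the core cools in well under a gigayear, which is why cooling flows were expected in cluster centers.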
There are two popular explanations of the mechanisms that prevent the central ICM from cooling: feedback from active galactic nuclei through injection of relativistic jets of plasma [ 13 ] and sloshing of the ICM plasma during mergers with subclusters. [ 14 ] [ 15 ] The relativistic jets of material from active galactic nuclei can be seen in images taken by telescopes with high angular resolution such as the Chandra X-ray Observatory . | https://en.wikipedia.org/wiki/Intracluster_medium |
An intracolonic explosion or colonic gas explosion is an explosion inside the colon of a person due to ignition of explosive gases such as methane . This can happen during colonic exploration, as a result of the electrical nature of a colonoscope .
A colonic gas explosion is rare; [ 1 ] however, the result can be acute colonic perforation , which can be fatal. [ 1 ]
An explosion is triggered by a combination of combustible gases (such as hydrogen or methane), a combustive gas (such as oxygen), and heat. An explosion can also be caused by Crohn's disease. [ 1 ]
Careful bowel preparation, such as cleansing the colon before a procedure, is key to preventing an intracolonic explosion. [ 2 ] | https://en.wikipedia.org/wiki/Intracolonic_explosion
Intracrine signaling is a mode of hormone and growth factor action in which signaling molecules exert their effects within the same cell that produces them, without being secreted into the extracellular environment. The term intracrine was originally coined to describe peptides that either act within the cell that synthesized them or function after being internalized by their target cells. [ 1 ] While this model was initially developed through studies on the intracellular action of angiotensin II , it has since been recognized as a fundamental mechanism applicable to numerous peptide hormones and growth factors.
Unlike classical endocrine , autocrine , and paracrine signaling, where signaling molecules leave the cell and interact with membrane-bound receptors, intracrine signaling functions exclusively within the intracellular environment, often targeting nuclear or cytoplasmic receptors. [ 2 ] This mechanism allows cells to autonomously regulate essential biological functions, including gene expression, differentiation, and survival. One of the most well-characterized examples of intracrine signaling is the local synthesis and action of sex steroids within immune cells, which modulate inflammatory responses and metabolic pathways. [ 3 ]
The intracrine hypothesis has been instrumental in predicting novel functions for peptide hormones and has generated significant insights with potential therapeutic implications. Since its initial proposal, an expanding body of observational evidence—independent of the hypothesis itself—has reinforced the role of intracrine signaling in various physiological and pathological processes, including immune regulation, metabolic control, and cancer progression.
As described above, intracrine signaling, also called intracrine action, is a process in which a cell produces a hormone that acts within the same cell that synthesized it. However, the term "intracrines" can be used more broadly to refer to all hormones that act on receptors within the cell, regardless of whether they act on their cell of origin.
This means that while some intracrines function in a strictly intracrine manner, others may be secreted to influence neighboring cells. In such cases, an intracrine can function in a paracrine manner while still exerting its effects within the original cell through intracellular signaling.
The field of intracrinology was introduced about 40 years ago and is only now [ when? ] gaining widespread recognition. This shift has been driven by overwhelming evidence that many cells, beyond the traditionally recognized endocrine organs, can synthesize, metabolize, and regulate their own sex hormones . This paradigm challenges the traditional endocrine model, which held that sex steroid production and regulation occur primarily in the gonads. [ 3 ]
Intracrinology has transformed our understanding of tissue autonomy, emphasizing how local hormone production enables precise, cell-specific regulation of physiological processes. This perspective has had profound implications for rheumatology , oncology , and metabolic research, where local steroidogenesis influences disease progression and treatment responses. [ 1 ]
For many intracrines, once they stimulate the upregulation of a gene, a positive feedback loop is initiated. The intracrine promotes cell proliferation and stimulates further intracellular signaling, leading to increased synthesis and release of the intracrine itself, thereby reinforcing the loop. [ 4 ] In multicellular organisms, an intracrine may also be secreted, causing neighboring cells to proliferate and enter a similar positive feedback loop. This mechanism results in a coordinated response that contributes to tissue growth and development. [ 4 ]
Angiotensin II is a key component of the renin-angiotensin system and is traditionally recognized for its role as an extracellular hormone regulating blood pressure, fluid balance, and vascular function. However, emerging evidence suggests that Ang II also functions as an intracrine factor within cardiac myocytes and vascular smooth muscle cells. This intracrine role of Ang II contributes to cardiac hypertrophy, fibrosis, and arrhythmogenesis, making it a critical regulator of cardiovascular physiology and pathology . [ 4 ]
Intracellular Ang II is generated within cardiac cells either through internalization of circulating Ang II or by intracellular synthesis via non-secreted renin and angiotensinogen. Unlike its extracellular counterpart, intracrine Ang II does not rely on traditional cell surface receptors; instead, it binds to nuclear AT1 receptors, modulating gene transcription and intracellular signaling pathways. [ 4 ]
Studies have demonstrated that intracrine Ang II localizes to the nucleus and mitochondria of cardiac myocytes, where it influences cellular metabolism, oxidative stress, and calcium homeostasis. Additionally, intracellular Ang II has been shown to enhance the transcription of genes involved in hypertrophy and fibrosis, contributing to pathological cardiac remodeling. [ 4 ]
Intracrine Ang II has been implicated in the development of cardiac hypertrophy, a process characterized by the enlargement of cardiac myocytes in response to increased workload or stress. Experimental models have shown that overexpression of non-secreted Ang II in cardiac cells leads to rapid hypertrophy independent of extracellular Ang II signaling. This suggests that intracellular Ang II plays a direct role in cardiomyocyte growth and structural remodeling. [ 4 ]
Similarly, intracrine Ang II contributes to myocardial fibrosis by upregulating profibrotic cytokines and growth factors, such as transforming growth factor-beta ( TGF-β ) and platelet-derived growth factor ( PDGF ). This promotes the excessive deposition of extracellular matrix proteins, leading to stiffening of the cardiac tissue and impaired cardiac function. [ 4 ]
Beyond its structural effects, intracrine Ang II has been shown to alter cardiac electrical conductivity, increasing the risk of arrhythmias . Intracellular dialysis of Ang II in cardiomyocytes has been observed to reduce junctional conductance and alter calcium signaling, which may contribute to the development of atrial fibrillation and other arrhythmias in conditions such as heart failure. [ 4 ]
Additionally, the ability of intracrine Ang II to modulate gap junctions and ion channels highlights its potential role in electrical remodeling of the heart. This mechanism may underlie the persistent electrical abnormalities seen in pathological cardiac conditions. [ 4 ]
Parathyroid hormone-related protein (PTHrP) is a multifunctional peptide that plays a crucial role in calcium homeostasis, vascular regulation, and cellular proliferation. While it is traditionally recognized as a secreted factor that binds to surface receptors, PTHrP also functions as an intracrine regulator within cardiac cells. Its intracrine actions influence myocardial growth, vascular remodeling, and responses to stress, making it a key factor in cardiovascular physiology and pathology.
PTHrP exists in multiple isoforms, with some retained intracellularly rather than secreted. In cardiac myocytes and vascular smooth muscle cells, intracrine PTHrP is known to localize within the nucleus, where it regulates gene transcription, modulates cell proliferation, and affects intracellular calcium handling. [ 4 ]
One of the hallmark intracrine functions of PTHrP is its ability to influence angiogenesis. Studies have shown that nuclear PTHrP interacts with ribosomal DNA to upregulate genes involved in endothelial cell proliferation and vascular development. This action mirrors other intracrine factors, such as VEGF, which also regulate angiogenesis through nuclear localization. [ 4 ]
Intracrine PTHrP has been implicated in myocardial development and adaptation to stress. It is particularly active during embryonic heart development, where it influences cardiomyocyte differentiation and growth. Additionally, under conditions of cardiac stress, such as ischemia or hypertrophy, PTHrP expression is upregulated, suggesting a protective role in maintaining myocardial function. [ 4 ]
Moreover, in vascular smooth muscle cells, intracrine PTHrP plays a dual role. While secreted PTHrP can inhibit cell proliferation via receptor-mediated pathways, intracellular PTHrP exerts a mitogenic effect, promoting vascular remodeling and adaptation in response to hemodynamic changes. [ 4 ]
The dual extracellular and intracellular actions of PTHrP make it a promising target for cardiovascular therapies. Given its role in regulating cardiac cell growth and vascular integrity, modulating PTHrP expression or its intracellular signaling pathways could be beneficial in conditions such as heart failure, atherosclerosis, and ischemic heart disease. Additionally, therapeutic strategies that enhance intracrine PTHrP activity could improve angiogenesis and myocardial repair following injury. [ 4 ]
In conclusion, PTHrP is a key intracrine regulator in the cardiovascular system, influencing both myocardial and vascular function. Its ability to act within the nucleus and cytoplasm of cardiac cells highlights its potential as a therapeutic target for cardiovascular diseases. Future research focusing on the intracrine mechanisms of PTHrP may provide novel insights into cardiac regeneration and vascular remodeling.
Vascular endothelial growth factor (VEGF) is widely recognized for its role in angiogenesis and vascular homeostasis. However, beyond its classical extracellular signaling functions, VEGF also exerts intracellular, or intracrine, effects, particularly in cardiac tissues. Intracrine VEGF plays a significant role in cardiac development, angiogenesis, and the adaptive response to ischemic injury.
VEGF has been identified as an intracrine factor, meaning that it not only acts through autocrine and paracrine pathways but also functions within the cells that produce it. In cardiac myocytes and endothelial cells, VEGF can be synthesized and retained intracellularly, where it directly influences gene expression, protein synthesis, and cellular survival mechanisms. Unlike its secreted counterpart, intracrine VEGF operates independently of cell-surface receptors, exerting effects within the nucleus and cytoplasm.
Studies suggest that intracrine VEGF contributes to cellular differentiation during cardiac organogenesis. In embryonic and progenitor cardiac cells, VEGF facilitates the transcription of genes involved in cell survival, proliferation, and vascular patterning. Its presence in stem cell nuclei suggests that it may regulate ribosomal DNA transcription, similar to other intracrines, thereby coordinating cellular growth and differentiation. [ 4 ]
The intracrine actions of VEGF have been implicated in cardioprotection, particularly in response to ischemic stress. Cardiac myocytes exposed to hypoxic conditions exhibit increased intracellular VEGF, which appears to play a role in cellular adaptation to oxygen deprivation. This intracrine mechanism promotes the expression of stress-response genes, enhances mitochondrial function, and modulates intracellular calcium signaling, which is critical for maintaining contractility under stress conditions. [ 4 ]
VEGF has been shown to interact with intracellular angiogenin, another intracrine involved in endothelial cell survival. This interaction establishes a feedback loop where VEGF upregulates angiogenin, which, in turn, enhances VEGF expression. This loop suggests that intracrine VEGF may be a crucial component in the regulation of myocardial vascularization and repair. [ 4 ]
Understanding VEGF's intracrine role in the heart opens new avenues for therapeutic intervention in cardiovascular diseases. Unlike traditional VEGF-targeted therapies that focus on extracellular angiogenesis, modulating intracrine VEGF could provide a more cell-specific approach to enhancing cardiac repair and regeneration. Targeting intracrine VEGF pathways may offer novel strategies for treating ischemic heart disease , heart failure, and other cardiovascular pathologies where vascular dysfunction is a contributing factor. [ 4 ]
In conclusion, VEGF functions not only as an extracellular angiogenic factor but also as an intracrine regulator of cardiac cell survival and development. Future research into intracrine VEGF mechanisms may provide critical insights into cardiac regeneration and the development of more effective cardiovascular therapies.
Intracrines play a crucial role in the cardiovascular system by exerting intracellular actions that go beyond traditional extracellular signaling pathways. These factors, including VEGF, PTHrP, and Angiotensin II, influence key processes such as cardiac development, hypertrophy, fibrosis, angiogenesis, and electrical conductivity. By operating within the cells that synthesize them, intracrines regulate gene expression, protein synthesis, and intracellular signaling, allowing for precise control over physiological and pathological responses.
The recognition of intracrine signaling has significant implications for cardiovascular disease treatment. Understanding the intracellular mechanisms of these factors opens new therapeutic avenues, particularly for conditions such as heart failure, ischemic heart disease, and arrhythmias. Targeting intracrine pathways could lead to more effective interventions by modulating disease progression at the cellular level rather than relying solely on extracellular receptor blockade. As research continues to uncover the complexities of intracrine physiology, it holds promise for the development of innovative strategies to improve cardiovascular health.
Intracrines are essential regulators of cellular differentiation and development. The intracellular mode of action allows for highly localized and sustained control over developmental processes, particularly in stem cell differentiation, organogenesis, and tissue remodeling. [ 1 ] [ 5 ]
Intracrines play a crucial role in maintaining stem cell populations and guiding their differentiation into specialized cell types. Many stem cell regulatory proteins, including vascular endothelial growth factor (VEGF), high-mobility group protein B1 ( HMGB1 ), and homeodomain transcription factors such as Pax6 and Oct3/4 , operate through intracrine mechanisms. These factors establish intracellular feedback loops that sustain differentiation programs, ensuring that once a stem cell commits to a particular lineage, the developmental process continues even after the external signal is removed. [ 5 ]
For instance, VEGF, a well-known angiogenic factor, is also an intracrine that promotes the survival and differentiation of hematopoietic stem cells. In VEGF-deficient cells, survival and colony formation are impaired, but these defects can be rescued by restoring intracellular VEGF levels, highlighting the necessity of intracrine VEGF in stem cell regulation. [ 5 ] Similarly, the homeodomain transcription factor Pdx-1 can be internalized by target cells, where it upregulates its own synthesis and drives pancreatic duct cells toward an insulin -producing phenotype, demonstrating the ability of intracrines to induce cell fate changes. [ 1 ]
The development of organs relies on complex signaling interactions that regulate cell proliferation, migration, and differentiation. Intracrines such as dynorphin B and transforming growth factor-beta (TGF-β) have been implicated in cardiac development, promoting the differentiation of cardiac progenitor cells and guiding their maturation into functional cardiomyocytes. [ 5 ] In cardiac embryogenesis, intracrine feedback loops involving dynorphin B and Nkx-2.5 (a homeodomain transcription factor) have been shown to drive the expression of cardiac-specific genes, reinforcing the role of intracrines in orchestrating organ development. [ 5 ]
Furthermore, intracrines contribute to the spatial and temporal regulation of developmental cues. Homeodomain proteins, which are critical for embryonic patterning, can be secreted and internalized by neighboring cells, enabling coordinated differentiation across tissues. This form of intracrine signaling ensures that developing structures, such as the eye or heart, maintain proper cellular identity and function. [ 1 ]
The discovery of intracrine loops in stem cell regulation has profound implications for regenerative medicine . Because intracrines can establish long-lasting differentiation programs, they offer potential therapeutic targets for tissue regeneration and repair. For example, in cardiac repair, HMGB1 has been shown to enhance the proliferation and differentiation of cardiac stem cells following myocardial infarction, suggesting that modulating intracrine pathways could improve heart regeneration. [ 5 ]
The ability of certain intracrines to reprogram cells into pluripotent-like states also opens new avenues for regenerative therapies. Oct3/4, Sox2 , and Nanog , all of which are involved in maintaining stem cell pluripotency, can potentially be introduced into cells to drive reprogramming without the need for genetic modification. This approach could provide safer and more controlled methods for generating patient-specific stem cells. [ 1 ]
Intracrines are fundamental to development, acting as intracellular regulators that guide stem cell differentiation, organogenesis, and tissue remodeling. By establishing self-sustaining feedback loops, intracrines ensure that developmental programs continue even after the initial external signals disappear. Understanding these mechanisms not only provides insights into embryonic development but also offers promising strategies for regenerative medicine and tissue engineering. As research into intracrine biology advances, it holds the potential to revolutionize therapeutic approaches for organ repair, disease treatment, and stem cell-based therapies.
Intracrines involvement in cancer is primarily through their regulation of growth factors, angiogenesis, and cellular signaling networks that contribute to tumor growth and therapy resistance.
Intracrines such as fibroblast growth factor-2 (FGF2), vascular endothelial growth factor (VEGF), and insulin-like growth factor-1 (IGF-1) regulate cellular proliferation. In cancer, these factors often establish self-sustaining feed-forward loops, enhancing uncontrolled tumor growth. [ 1 ] For example, VEGF's intracrine action is implicated in hematopoietic malignancies, while angiogenin has been identified in the nuclei of breast cancer cells, where it promotes proliferation. [ 1 ]
The formation of new blood vessels is essential for tumor survival and expansion. Intracrines like VEGF and angiogenin regulate angiogenesis within tumor cells and surrounding endothelial cells. [ 1 ] Inhibiting intracrine trafficking of angiogenin to the nucleus has been shown to blunt cancer cell proliferation, making this an emerging therapeutic target. [ 1 ] | https://en.wikipedia.org/wiki/Intracrine |
An intracule is a quantum mechanical function describing the two-electron density in terms not of the absolute values of position or momentum but of their relative values. Its use is leading to new methods in physics and computational chemistry for investigating the electronic structure of molecules and solids. These methods are a development of density functional theory (DFT), but with the two-electron density replacing the one-electron density.
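For concreteness, the position intracule is commonly defined (a standard definition from the intracule literature, added here since the stub gives no formula) as the distribution of the interelectronic separation u, obtained from the pair density ρ₂:

$$P(\mathbf{u}) = \int \rho_2(\mathbf{r}_1, \mathbf{r}_2)\, \delta(\mathbf{r}_1 - \mathbf{r}_2 - \mathbf{u})\, \mathrm{d}\mathbf{r}_1\, \mathrm{d}\mathbf{r}_2,$$

usually spherically averaged over the directions of u; a momentum intracule is defined analogously from the two-electron momentum density.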
| https://en.wikipedia.org/wiki/Intracule
Intraflagellar transport ( IFT ) is a bidirectional motility along axoneme microtubules that is essential for the formation ( ciliogenesis ) and maintenance of most eukaryotic cilia and flagella . [ 1 ] It is thought to be required to build all cilia that assemble within a membrane projection from the cell surface. Plasmodium falciparum cilia and the sperm flagella of Drosophila are examples of cilia that assemble in the cytoplasm and do not require IFT. The process of IFT involves the movement of large protein complexes called IFT particles or trains from the cell body to the ciliary tip, followed by their return to the cell body. The outward or anterograde movement is powered by kinesin -2, while the inward or retrograde movement is powered by cytoplasmic dynein 2/1b. The IFT particles are composed of about 20 proteins organized in two subcomplexes called complex A and B. [ 2 ]
IFT was first reported in 1993 by graduate student Keith Kozminski while working in the lab of Dr. Joel Rosenbaum at Yale University . [ 3 ] [ 4 ] The process of IFT has been best characterized in the biflagellate alga Chlamydomonas reinhardtii as well as the sensory cilia of the nematode Caenorhabditis elegans . [ 5 ]
It has been suggested based on localization studies that IFT proteins also function outside of cilia. [ 6 ]
Intraflagellar transport (IFT) describes the bi-directional movement of non-membrane-bound particles along the doublet microtubules of flagellar and motile ciliary axonemes , between the axoneme and the plasma membrane. Studies have shown that the movement of IFT particles along the microtubule is carried out by two different microtubule motors ; the anterograde (towards the flagellar tip) motor is heterotrimeric kinesin-2, and the retrograde (towards the cell body) motor is cytoplasmic dynein 1b. IFT particles carry axonemal subunits to the site of assembly at the tip of the axoneme; thus, IFT is necessary for axonemal growth. Because the axoneme needs a continual supply of fresh proteins, an axoneme with defective IFT machinery will slowly shrink in the absence of replacement protein subunits. In healthy flagella, IFT particles reverse direction at the tip of the axoneme and are thought to carry used proteins, or "turnover products," back to the base of the flagellum. [ 7 ] [ 8 ]
The IFT particles themselves consist of two sub-complexes, [ 9 ] each made up of several individual IFT proteins . The two complexes, known as 'A' and 'B,' are separable via sucrose centrifugation (both complexes sediment at approximately 16S, but under increased ionic strength complex B sediments more slowly, thus segregating the two complexes). The many subunits of the IFT complexes have been named according to their molecular weights.
The IFT-B complex has been further subcategorized into IFT-B1 (core) and IFT-B2 (peripheral) subcomplexes. These subcomplexes were first described by Lucker et al. in an experiment on Chlamydomonas reinhardtii , using increased ionic strength to dissociate the peripheral particles from the whole IFT-B complex. They found that the core particles do not need the peripheral ones in order to form an assembly. [ 15 ]
The biochemical properties and biological functions of IFT subunits are just beginning to be elucidated; for example, they interact with components of the basal body such as CEP170, and with proteins required for cilium formation, such as tubulin chaperones and membrane proteins. [ 17 ]
Due to the importance of IFT in maintaining functional cilia, defective IFT machinery has now been implicated in many disease phenotypes generally associated with non-functional (or absent) cilia. IFT88, for example, encodes a protein also known as Tg737 or Polaris in mouse and human, and the loss of this protein has been found to cause an autosomal - recessive polycystic kidney disease model phenotype in mice. Further, the mislocalization of this protein following WDR62 knockdown in mice results in brain malformation and ciliopathies. [ 18 ] Other human diseases such as retinal degeneration , situs inversus (a reversal of the body's left-right axis), Senior–Løken syndrome , liver disease , primary ciliary dyskinesia , nephronophthisis , Alström syndrome , Meckel–Gruber syndrome , Sensenbrenner syndrome , Jeune syndrome , and Bardet–Biedl syndrome , which causes both cystic kidneys and retinal degeneration, have been linked to the IFT machinery. This diverse group of genetic syndromes and diseases is now understood to arise from malfunctioning cilia, and the term " ciliopathy " is now used to indicate their common origin. [ 19 ] These and possibly many more disorders may be better understood via study of IFT. [ 7 ]
One of the most recent discoveries regarding IFT is its potential role in signal transduction. IFT has been shown to be necessary for the movement of other signaling proteins within the cilia, and therefore may play a role in many different signaling pathways. Specifically, IFT has been implicated as a mediator of sonic hedgehog signaling, [ 27 ] one of the most important pathways in embryogenesis . | https://en.wikipedia.org/wiki/Intraflagellar_transport |
Intragenomic and intrauterine conflicts in humans arise between mothers and their offspring. Parental investment theory states that parents and their offspring will often be in conflict over the optimal amount of investment that the parent should provide. [ 1 ] This is because the best interests of the parent do not always match the best interests of the offspring. Maternal-infant conflict is of interest due to the intensity of maternal investment in her offspring. In humans, mothers often invest years of care into their children due to the long developmental period before children become self-sufficient.
Parents and their children are typically engaged in a cooperative venture in which both benefit by the survival and future reproduction of the offspring. However, their interests cannot be identical because their genes are not identical. While both parent and offspring are 100% related to themselves, they share only 50% of their genes with each other, which means both parent and child will at times be in conflict with each other. [ 2 ]
Maternal-infant conflict begins during pregnancy, where the mother's body must maintain her own health while also providing for the developing fetus. [ 3 ] Broadly, both the fetus and the mother have the same evolutionary interests in the fetus coming to term and resulting in a healthy birth. However, there may also be conflicts between the amount of nutrients the fetus prefers to receive and the amount the mother prefers to give. For example, there may be conflict between the mother and the fetus over who should control fetal growth, wherein the fetus would prefer optimal growth and the mother would prefer to control fetal growth relative to the level of resources she has available.
The placenta plays an important role in this conflict since it is the source of nutrient delivery from mother to fetus, and also receives signals from both mother and fetus. The placenta is believed to play a balancing role in the conflict between mother and fetus, and confers optimal fitness to the fetus given the developmental constraints of the mother's resource availability. For instance, Rutherford and Tardiff suggested that in marmosets with variable litter sizes, litters of triplets were associated with smaller shares of the placenta each, suggesting that the fetuses were able to increase placental efficiency, perhaps by manipulating placental structure and function to solicit additional maternal resources, in the face of competition between the fetuses for limited maternal resources. [ 4 ] However, they also note that the infrequency of successful gestation and weaning of complete triplet litters suggests that even in the face of a fetal mechanism for efficiency, maternal mechanisms may otherwise constrain energetic investments, perhaps especially after birth. [ 5 ]
A key discovery related to both intrauterine and intragenomic conflict is that the fetal genotype can influence maternal physiology. [ 6 ] Fetal manipulation of the maternal endometrium allows the fetus to gain access to maternal arterial blood. [ 2 ] Through this manipulation, the mother cannot restrict blood flow to the fetus without restricting blood flow to herself. When mothers are unable to defend against fetal alterations, pregnancy-related syndromes such as gestational diabetes and pre-eclampsia develop. Gestational diabetes occurs when higher blood glucose leads to increased production of insulin in response to fetally influenced insulin resistance. Pre-eclampsia is high blood pressure that occurs during pregnancy, which may be the result of fetal manipulations that increase blood flow.
Genomic imprinting refers to the different effects of the same gene depending on whether the gene was inherited from one's mother or father. While most genes do not have different effects based on their parental origin, a small subset do. These genes are referred to as imprinted. Imprinting arises when the effect of the gene's expression in the offspring is likely to have a strong effect on a parent's fitness. [ 7 ] For example, genes which are implicated in the offspring's acquisition of resources from the mother are strongly beneficial to the father's fitness, because his offspring who have genes which are highly successful at acquiring the mother's resources are likely to survive and reproduce. Furthermore, the father pays no direct cost of breastfeeding the infant, whereas the mother does pay the cost. In this case, the father's copy of these genes should be selected to acquire more resources than the mother's copy of these genes. Currently, there are around 100 known imprinted genes in mice, and 50 in humans. Some imprinted genes appear to have no effect; for others, the gene has a clear function but its imprinted expression does not.
As with intrauterine conflict, genomic imprinting is believed to be highly related to placentation, because evidence of genomic imprinting is found only in placental mammals, leading to the hypothesis that genomic imprinting and placentation are evolutionarily linked. [ 8 ] Imprinted gene expression is shown to be integral to normal development and functioning of the placenta. [ 9 ] Furthermore, the web of signaling which is expressed between fetus, placenta, and maternal hypothalamus is likely the result of co-adaptations of gene expression related to fetal growth and brain development. [ 10 ]
The discovery of the genomic imprinting phenomenon helped researchers to understand the basis for several disorders with otherwise unclear inheritance patterns including Prader-Willi/Angelman Syndromes, Beckwith-Wiedemann Syndrome, and Silver-Russell Syndrome. [ 10 ] Disorders of imprinting are thought to be related to abnormal DNA methylation, which may involve multiple imprinted loci. [ 11 ] Evidence that assistive-reproductive technology and peri-conceptional environmental factors such as maternal diet are implicated in imprinting-related disorders draws a link between oocyte health and proper imprinting development. [ 8 ]
Prader-Willi and Angelman Syndromes are genetic disorders which are caused when the only copy of an imprinted gene is the 'silent' copy and the active copy is absent, either due to a deletion or to uniparental disomy. Both are due to the absence of gene expression at 15q11–q13, wherein Prader-Willi is believed to reflect the absence of the paternally derived gene, and Angelman Syndrome reflects the absence of the maternal copy. [ 12 ]
In the case of Prader-Willi Syndrome, the paternal copy is absent while the maternal 'silent' copy is present. Prader-Willi Syndrome is characterized by low birth weight, hypersomnolence, low appetite and poor suckling. The child typically develops a voracious appetite around 1–2 years of age which typically leads to early onset obesity. Children with Prader-Willi Syndrome typically have reduced height throughout childhood and absence of pubertal growth.
A paradigm used to study genomic imprinting is kinship theory. [ 13 ] [ 12 ] Kinship theory argues that imprinting evolves due to conflicts between the interests of paternal and maternal genes within an infant, specifically in regards to infant use of maternal resources. [ 14 ] [ 15 ] Mothers can have children who have different fathers, therefore paternally-derived genes are expected to exploit maternal resources in favor of offspring growth, while maternally-derived genes are expected to constrain maternal resource allocation in order to spread resources over multiple offspring.
Through kinship theory, the occurrence of Prader-Willi Syndrome is theorized to result from the absence of the paternally derived gene, such that the only copy present is the maternally 'silent' copy. In this case, the child is expected to express behaviors which reduced maternal costs in evolutionary history. In particular, a reduction of infant feeding responses and low appetite over the period of years when the child would rely almost completely on breastmilk from its mother would allow mothers to allocate their resources across themselves and multiple offspring. Breastfeeding an infant is estimated to cost around an additional 500 kcal per day; if additional energy is not consumed during lactation, body stores are used, making breastfeeding a costly maternal endeavor. [ 16 ] Low appetite in early life is followed by a sudden onset of appetite and feeding behaviors around age 2. In traditional subsistence communities, this corresponds to the age at which children would be weaned from breastmilk and offered supplementary solid foods. [ 17 ] It is likely that these weaning foods were less directly costly to the mother than breastfeeding, either because they were less energetically costly for her to gather, or because she could rely on social partners to assist her in gathering the food. Prader-Willi Syndrome therefore allows us to see the roles of normally invisible imprinting effects which arose in relation to ancestral parental provisioning conflicts. [ 18 ] In typically developing individuals, the imprinted genes related to Prader-Willi and Angelman Syndrome are balanced in a "tug-of-war" shaped by natural selection. However, when one of these genes is missing, the balance is disrupted, and the effects result in the phenotypes seen in these disorders.
Some authors have argued that variability in the timing of onset and the symptom expression of menopause may represent intragenomic conflict. [ 19 ] Human female reproductive capacity ends around age 50, roughly two decades before the end of women's expected lifespans, even in communities without access to medical care. Many have questioned why menopause evolved, since women who did not experience menopause could increase their fitness by extending their reproductive capacity and having more children. The Grandmother Hypothesis argues that menopause evolved because it is an adaptation that increases a woman's inclusive fitness, that is, it promotes her genes through relatives such as her grandchildren. [ 20 ] Selective forces related to the onset of menopause may differ between paternal and maternal interests. Ecological differences in female-biased dispersal patterns in ancestral environments may underlie current differences among populations in the onset and symptomatology of menopause. If so, women whose ancestors evolved in populations with a lower female bias in dispersal would be expected to experience more severe symptoms and earlier menopause than women whose ancestors evolved in populations with a higher female bias in dispersal. [ 19 ] | https://en.wikipedia.org/wiki/Intragenomic_and_intrauterine_conflict_in_humans |
Intragenomic conflict refers to the evolutionary phenomenon where genes have phenotypic effects that promote their own transmission to the detriment of the transmission of other genes that reside in the same genome . [ 1 ] [ 2 ] [ 3 ] [ 4 ] The selfish gene theory postulates that natural selection will increase the frequency of those genes whose phenotypic effects cause their transmission to new organisms, and most genes achieve this by cooperating with other genes in the same genome to build an organism capable of reproducing and/or helping kin to reproduce. [ 5 ] The assumption of the prevalence of intragenomic cooperation underlies the organism-centered concept of inclusive fitness . However, conflict among genes in the same genome may arise both in events related to reproduction (a selfish gene may "cheat" and increase its own presence in gametes or offspring above the share expected under fair Mendelian segregation and fair gametogenesis ) and altruism (genes in the same genome may disagree on how to value other organisms in the context of helping kin, because coefficients of relatedness diverge between genes in the same genome). [ 6 ] [ 7 ] [ 8 ]
Autosomal genes usually have the same mode of transmission in sexually reproducing species due to the fairness of Mendelian segregation , but conflicts among alleles of autosomal genes may arise when an allele cheats during gametogenesis (segregation distortion) or eliminates embryos that do not contain it (lethal maternal effects). An allele may also directly convert its rival allele into a copy of itself (homing endonucleases). Finally, mobile genetic elements completely bypass Mendelian segregation, being able to insert new copies of themselves into new positions in the genome (transposons).
In principle, the two parental alleles have equal probabilities of being present in the mature gamete . However, there are several mechanisms that lead to an unequal transmission of parental alleles from parents to offspring. One example is a gene drive complex, called a segregation distorter , that "cheats" during meiosis or gametogenesis and thus is present in more than half of the functional gametes. The most studied examples are sd in Drosophila melanogaster ( fruit fly ), [ 9 ] t haplotype in Mus musculus ( mouse ) and sk in Neurospora spp. ( fungus ). Possible examples have also been reported in humans. [ 10 ] Segregation distorters that are present on sex chromosomes (as is the case with the X chromosome in several Drosophila species [ 11 ] [ 12 ] ) are termed sex-ratio distorters, as they induce a sex-ratio bias in the offspring of the carrier individual.
The simplest model of meiotic drive involves two tightly linked loci: a Killer locus and a Target locus. The segregation distorter set is composed of the allele Killer (in the Killer locus) and the allele Resistant (in the Target locus), while its rival set is composed of the alleles Non-killer and Non-resistant . The segregation distorter set thus produces a toxin to which it is itself resistant, while its rival is not; it therefore kills those gametes containing the rival set and increases in frequency. The tight linkage between these loci is crucial, so these genes usually lie in low-recombination regions of the genome.
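The dynamics of such a distorter can be illustrated with a short simulation, given below. It collapses the tightly linked Killer–Resistant pair into a single "driver" haplotype D competing against a wild-type haplotype; the transmission ratio k and the fitness w of driver homozygotes are illustrative assumptions (real systems such as the mouse t haplotype combine strong drive with severely unfit homozygotes), not parameters of any measured system.

```python
# Minimal sketch of segregation distortion, assuming one "driver"
# haplotype D (Killer + Resistant, tightly linked) versus a wild-type
# haplotype "+" (Non-killer + Non-resistant). Random mating, viability
# selection against D/D homozygotes, and drive in heterozygotes.

def next_freq(p, k=0.95, w=0.2):
    """Return the next-generation frequency of D among gametes.

    k: fraction of a D/+ heterozygote's functional gametes carrying D
       (k > 0.5 means drive; fair Mendelian segregation is k = 0.5).
    w: relative fitness of D/D homozygotes (w < 1: the driver is
       harmful when homozygous, as for the t haplotype).
    """
    q = 1.0 - p
    dd = w * p * p        # D/D genotype weight after selection
    het = 2.0 * p * q     # D/+ heterozygotes
    wt = q * q            # +/+ wild type
    return (dd + k * het) / (dd + het + wt)

p = 0.01                  # driver introduced at low frequency
for generation in range(60):
    p = next_freq(p)
print(f"driver frequency after 60 generations: {p:.3f}")
# Drive (k > 0.5) lets D invade despite the cost to homozygotes, which
# is also what keeps it from simply sweeping to fixation.
```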
Other systems do not involve gamete destruction, but rather use the asymmetry of meiosis in females: the driving allele ends up in the oocyte instead of in the polar bodies with a probability greater than one half. This is termed true meiotic drive , as it does not rely on a post-meiotic mechanism. The best-studied examples include the neocentromeres (knobs) of maize, as well as several chromosomal rearrangements in mammals. The general molecular evolution of centromeres is likely to involve such mechanisms.
The Medea gene causes the death of progeny from a heterozygous mother that do not inherit it. It occurs in the flour beetle ( Tribolium castaneum ). [ 13 ] Maternal-effect selfish genes have been successfully synthesized in the lab. [ 14 ]
Transposons are autonomously replicating genes that encode the ability to move to new positions in the genome and therefore accumulate in genomes. They replicate themselves despite being detrimental to the rest of the genome.
They are often called 'jumping genes' or parasitic DNA and were discovered by Barbara McClintock in 1944.
Homing endonuclease genes (HEG) convert their rival allele into a copy of themselves, and are thus inherited by nearly all meiotic daughter cells of a heterozygote cell. They achieve this by encoding an endonuclease which breaks the rival allele. This break is repaired by using the sequence of the HEG as template. [ 15 ]
HEGs encode sequence-specific endonucleases. The recognition sequence (RS) is 15–30 bp long and usually occurs once in the genome. HEGs are located in the middle of their own recognition sequences.
Most HEGs are encoded by self-splicing introns (group I & II) and inteins . Inteins are internal protein fragments produced from protein splicing and usually contain endonuclease and splicing activities.
The allele without the HEG is cleaved by the homing endonuclease, and the double-strand break is repaired by homologous recombination (gene conversion) using the allele containing the HEG as a template. Both chromosomes contain the HEG after repair. [ 16 ]
B-chromosomes are nonessential chromosomes that are not homologous with any member of the normal (A) chromosome set, are morphologically and structurally different from the As, and are transmitted at higher-than-expected frequencies, leading to their accumulation in progeny. In some cases, there is strong evidence to support the contention that they are simply selfish and that they exist as parasitic chromosomes . [ 17 ] They are found in all major taxonomic groupings of both plants and animals .
Since nuclear and cytoplasmic genes usually have different modes of transmission, intragenomic conflicts between them may arise. [ 18 ] Mitochondria and chloroplasts are two examples of sets of cytoplasmic genes that commonly have exclusive maternal inheritance, similar to endosymbiont parasites in arthropods, like Wolbachia . [ 19 ]
Anisogamy generally produces zygotes that inherit cytoplasmic elements exclusively from the female gamete. Thus, males represent dead-ends to these genes. Because of this fact, cytoplasmic genes have evolved a number of mechanisms to increase the production of female descendants and eliminate offspring not containing them. [ 20 ]
Male organisms are converted into females by cytoplasmically inherited protists ( Microsporidia ) or bacteria ( Wolbachia ), regardless of nuclear sex-determining factors. This occurs in amphipod and isopod Crustacea and in Lepidoptera .
Male embryos (in the case of cytoplasmically inherited bacteria) or male larvae (in the case of Microsporidia) are killed. In the case of embryo death, this diverts investment from males to females, who can transmit these cytoplasmic elements (for instance, in ladybird beetles, infected female hosts eat their dead male brothers, which benefits the bacterium). In the case of microsporidia-induced larval death, the agent is transmitted out of the male lineage (through which it cannot be transmitted) into the environment, where it may be taken up again infectiously by other individuals. Male-killing occurs in many insects . In the case of male embryo death, a variety of bacteria have been implicated, including Wolbachia .
In some cases anther tissue (male gametophyte ) is killed by mitochondria in monoecious angiosperms , increasing energy and material spent in developing female gametophytes. This leads to a shift from monoecy to gynodioecy , where part of the plants in the population are male-sterile.
In certain haplodiploid Hymenoptera and mites , in which males are produced asexually, Wolbachia and Cardinium can induce duplication of the chromosomes and thus convert the organisms into females. The cytoplasmic bacterium forces haploid cells to go through incomplete mitosis to produce diploid cells, which will therefore be female. This produces an entirely female population. If antibiotics are administered to populations that have become asexual in this way, they revert to sexual reproduction immediately, as the cytoplasmic bacteria forcing this behaviour upon them are removed.
In many arthropods , zygotes produced by sperm of infected males and ova of non-infected females can be killed by Wolbachia or Cardinium . [ 19 ]
Conflict between chromosomes has been proposed as an element in the evolution of sex . [ 21 ] | https://en.wikipedia.org/wiki/Intragenomic_conflict |
Intraguild predation , or IGP , is the killing and sometimes eating of a potential competitor of a different species. [ 1 ] [ 2 ] [ 3 ] This interaction represents a combination of predation and competition , because both species rely on the same prey resources and also benefit from preying upon one another. Intraguild predation is common in nature and can be asymmetrical, in which one species feeds upon the other, or symmetrical, in which both species prey upon each other. [ 1 ] Because the dominant intraguild predator gains the dual benefits of feeding and eliminating a potential competitor, IGP interactions can have considerable effects on the structure of ecological communities.
Intraguild predation can be classified as asymmetrical or symmetrical. In asymmetrical interactions one species consistently preys upon the other, while in symmetrical interactions both species prey equally upon each other. [ 1 ] Intraguild predation can also be age structured, in which case the vulnerability of a species to predation is dependent on age and size, so only juveniles or smaller individuals of one of the predators are fed upon by the other. [ 1 ] A wide variety of predatory relationships are possible depending on the symmetry of the interaction and the importance of age structure. IGP interactions can range from predators incidentally eating parasites attached to their prey to direct predation between two apex predators . [ 1 ]
Intraguild predation is common in nature and widespread across communities and ecosystems. [ 2 ] Intraguild predators must share at least one prey species and usually occupy the same trophic guild , and the degree of IGP depends on factors such as the size, growth, and population density of the predators, as well as the population density and behavior of their shared prey. [ 1 ] When creating theoretical models for intraguild predation, the competing species are classified as the "top predator" or the "intermediate predator" (the species more likely to be preyed upon). In theory, intraguild predation is most stable if the top predator benefits strongly from killing off or feeding on the intermediate predator, and if the intermediate predator is a better competitor for the shared prey resource. [ 3 ]
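One common way to write such a model, sketched below, is as a Lotka–Volterra-type system in the spirit of Holt and Polis's intraguild predation model; the symbols and the linear functional responses are illustrative assumptions rather than the exact formulation of any cited study.

```latex
% Shared resource R, intermediate predator N, top (intraguild) predator P:
\begin{align*}
  \frac{dR}{dt} &= rR\left(1 - \frac{R}{K}\right) - a_N R N - a_P R P \\
  \frac{dN}{dt} &= e_N a_N R N - \alpha N P - m_N N \\
  \frac{dP}{dt} &= e_P a_P R P + e_\alpha \alpha N P - m_P P
\end{align*}
% r, K     : resource growth rate and carrying capacity
% a_N, a_P : attack rates of each predator on the shared resource
% alpha    : attack rate of the top predator on the intermediate one
% e_*      : conversion efficiencies;  m_* : mortality rates
%
% The stability condition in the text maps onto this sketch: coexistence
% is easiest when N is the better resource competitor, i.e.
%   e_N a_N / m_N  >  e_P a_P / m_P   (N persists on less resource),
% while P gains substantially from the e_alpha * alpha * N * P term.
```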
The ecological effects of intraguild predation include direct effects on the survival and distribution of the competing predators, as well as indirect effects on the abundance and distribution of prey species and other species within the community. Because they are so common, IGP interactions are important in structuring communities. [ 2 ] Intraguild predation may actually benefit the shared prey species by lowering overall predation pressure, particularly if the intermediate predator consumes more of the shared prey. [ 4 ] Intraguild predation can also dampen the effects of trophic cascades by providing redundancy in predation: if one predator is removed from the ecosystem, the other is still consuming the same prey species. [ 5 ] [ 6 ] Asymmetrical IGP can be a particularly strong influence on habitat selection. Often, intermediate predators will avoid otherwise optimal habitat because of the presence of the top predator. [ 7 ] Behavioral changes in intermediate predator distribution due to increased risk of predation can influence community structure more than direct mortality caused by the top predators. [ 8 ]
Intraguild predation is well documented in terrestrial arthropods such as insects and arachnids . [ 9 ] [ 10 ] Hemipteran insects and larval lacewings both prey upon aphids , but the competing predators can cause high enough mortality among the lacewings to effectively relieve predation upon the aphids. [ 9 ] Several species of centipede are considered to be intraguild predators. [ 10 ]
Among the most dramatic examples of intraguild predation are those between large mammalian carnivores . Large canines and felines are the mammal groups most often involved in IGP, with larger species such as lions and gray wolves preying upon smaller species such as foxes and cheetahs . [ 11 ] In North America, coyotes function as intraguild predators of gray foxes and bobcats , and may exert a strong influence over the population and distribution of gray foxes. [ 12 ] However, in areas where wolves have been reintroduced, coyotes become an intermediate predator and experience increased mortality and a more restricted range. [ 13 ]
Intraguild predation is also important in aquatic and marine ecosystems. As top predators in most marine environments, sharks show strong IGP interactions, both between species of sharks and with other top predators like toothed whales . In tropical areas where multiple species of sharks may have significantly overlapping diets, the risk of injury or predation can determine the local range and available prey resources for different species. [ 14 ] Large pelagic species such as blue and mako sharks are rarely observed feeding in the same areas as great white sharks , and the presence of white sharks will prevent other species from scavenging on whale carcasses. [ 15 ] Intraguild predation between sharks and toothed whales usually involves large sharks preying upon dolphins and porpoises while also competing with them for fish prey, but orcas reverse this trend by preying upon large sharks while competing for large fish and seal prey. [ 16 ] Intraguild predation can occur in freshwater systems as well. For example, invertebrate predators such as insect larvae and predatory copepods and cladocerans can act as intraguild prey, with planktivorous fish acting as the intraguild predator and herbivorous zooplankton as the basal resource. [ 5 ]
The presence and intensity of intraguild predation is important to both management and conservation of species. [ 8 ] [ 13 ] [ 17 ] Human influence on communities and ecosystems can affect the balance of these interactions, and the direct and indirect effects of IGP may have economic consequences.
Fisheries managers have only recently begun to understand the importance of intraguild predation on the availability of fish stocks as they attempt to move towards ecosystem-based management . IGP interactions between sharks and seals may prevent seals from feeding in areas where commercially important fish species are abundant, which may indirectly make more of these fish available to fishermen. [ 18 ] However, IGP may also negatively influence fisheries. Intraguild predation by spiny dogfish and various skate species on economically important fishes like cod and haddock has been cited as a possible reason for the slow recovery of the groundfish fishery in the western North Atlantic. [ 17 ]
Intraguild predation is also an important consideration for restoring ecosystems. Because the presence of top predators can so strongly affect the distribution and abundance of both intermediate predator and prey species, efforts to either restore or control predator populations can have significant and often unintended ecological consequences. In Yellowstone National Park , the reintroduction of wolves caused them to become intraguild predators of coyotes, which had far-reaching effects on both the animal and plant communities in the park. [ 13 ] Intraguild predation is an important ecological interaction, and conservation and management measures will need to take it into consideration. [ 8 ] | https://en.wikipedia.org/wiki/Intraguild_predation |
Intralocus sexual conflict is a type of sexual conflict that occurs when a genetic locus harbours alleles which have opposing effects on the fitness of each sex, such that one allele improves the fitness of males (at the expense of females), while the alternative allele improves the fitness of females (at the expense of males). [ 1 ] Such "sexually antagonistic" polymorphisms are ultimately generated by two forces: (i) the divergent reproductive roles of each sex, such as conflicts over optimal mating strategy, [ 2 ] [ 3 ] and (ii) the shared genome of both sexes, which generates positive between-sex genetic correlations for most traits. [ 4 ] In the long term, intralocus sexual conflict is resolved when genetic mechanisms evolve that decouple the between-sex genetic correlations between traits. This can be achieved, for example, via the evolution of sex-biased or sex-limited genes .
Intralocus sexual conflict can be considered a form of maladaptation , [ 5 ] as it results in a deviation of both sexes from their fitness optima, with both sexes expressing traits that are sub-optimal for that sex's fitness. Intralocus sexual conflict can also be considered a form of pleiotropy , in which genetic variants have opposing effects on different classes of individual within a population (i.e., males and females), rather than opposing effects on different components of fitness (e.g. survival vs. mating success). Intralocus sexual conflict has important implications for the evolution of sexual dimorphism, [ 6 ] the evolution of sex chromosomes [ 7 ] [ 8 ] and the maintenance of genetic variation. [ 9 ]
Sexual conflict arises because members of one sex express traits that benefit their ability to survive and reproduce successfully, while the expression of these traits negatively impacts the fitness of the opposite sex. In genetics, a locus refers to the exact location of a gene on a chromosome, and sexual conflict can have different consequences depending on the underlying genetics. Interlocus sexual conflict occurs at different loci in each sex, whereas intralocus sexual conflict occurs at the same loci. An example of interlocus sexual conflict is the expression of accessory gland proteins by males during mating, which negatively affect female fitness. This may result in a counter-adaptation at a different locus to reduce harm in females. For example, a female may diminish the detrimental consequences of being subjected to male accessory gland proteins during mating by waiting longer to re-mate, or she may develop a physical counter-adaptation in her reproductive tract. [ 10 ]
Alternatively, sexual conflict may occur within the same locus, in which case it is known as intralocus sexual conflict. Intralocus sexual conflict occurs because many phenotypic traits are determined by a common set of genes which are found and expressed in both male and female individuals. For example, phenotypic traits such as body size, diet, development time, longevity, and locomotory activity are typically positively genetically correlated between sexes. These traits have also been suggested to underlie intralocus sexual conflict [ 11 ] because they may be subjected to antagonistic patterns of selection, in which elevated values of one trait result in enhanced fitness in one sex but decreased fitness in the other, generating a negative correlation for fitness between male and female individuals that express a particular trait. [ 12 ]
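A minimal numerical sketch of such antagonistic selection is shown below: one autosomal locus with an allele A that raises male fitness and lowers female fitness, under the additional assumption of "dominance reversal" (each allele dominant in the sex it benefits). The selection coefficients are arbitrary illustrative values; the point is only that under these assumptions neither allele is eliminated, so the sexually antagonistic variation persists.

```python
# Deterministic one-locus model of intralocus sexual conflict.
# Allele A benefits males (+S_M) and harms females (-S_F); genotype
# codes are the number of A alleles (2 = AA, 1 = Aa, 0 = aa).
# All numbers are illustrative assumptions, not empirical estimates.

S_M, S_F = 0.10, 0.10

def w_male(g):
    return {2: 1 + S_M, 1: 1 + S_M, 0: 1.0}[g]   # A dominant in males

def w_female(g):
    return {2: 1 - S_F, 1: 1.0, 0: 1.0}[g]       # a dominant in females

def step(p_sperm, p_egg):
    """One generation: random union of gametes, sex-specific selection,
    then the frequency of A among the gametes of each sex."""
    geno = {
        2: p_sperm * p_egg,                                 # AA
        1: p_sperm * (1 - p_egg) + (1 - p_sperm) * p_egg,   # Aa
        0: (1 - p_sperm) * (1 - p_egg),                     # aa
    }
    def gamete_freq(w):
        weighted = {g: geno[g] * w(g) for g in geno}
        total = sum(weighted.values())
        return (weighted[2] + 0.5 * weighted[1]) / total
    return gamete_freq(w_male), gamete_freq(w_female)

p_m = p_f = 0.5
for _ in range(200):
    p_m, p_f = step(p_m, p_f)
print(f"freq(A) in sperm: {p_m:.3f}, in eggs: {p_f:.3f}")
# Under dominance reversal, both alleles invade when rare, so the
# population settles at an interior (polymorphic) equilibrium.
```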
Bonduriansky and Chenoweth proposed a four-phase model for the development of intralocus sexual conflict, in which the first phase is stabilizing selection on a trait in both sexes. Intralocus conflict then originates in the second phase, when a change in physical or social conditions causes intense selection on that trait in males and/or females, and both sexes are displaced from their optimum. In the third phase, diverging selection continues on both sexes, but is attenuated. In the fourth phase, intralocus conflict is fully resolved and sexual dimorphism has evolved. [ 13 ]
A good example of intralocus sexual conflict can be seen in humans, regarding selection pressures on height that vary between the sexes. A negative correlation has been observed between a woman's height and her reproductive success, with selection favoring relatively shorter women. On the other hand, men of average height are preferred and have higher reproductive success than shorter or taller men. [ 14 ] Studies of sibling pairs have found that reproductive success is higher for females in shorter sibling pairs, whereas in sibling pairs of average height it is higher for males. These findings show that intralocus sexual conflict over a physical trait, such as height, can affect Darwinian fitness in humans. [ 14 ]
Intralocus sexual conflict diminishes the benefits of sexual selection. [ 15 ] Examples of intralocus sexual conflict can be seen throughout nature.
In humans, individuals who are physically more masculine for their sex report having brothers with a higher mate value relative to their sisters. [ 16 ] Similarly, among individuals of normal weight, higher levels of estradiol are associated with higher mate value in women, and higher levels of testosterone are associated with higher mate value in men. [ 16 ] Individuals who are physically and hormonally more masculine tend to have brothers that are rated more attractive than their sisters, while more feminine individuals have sisters that are rated more attractive than their brothers. This suggests that intralocus sexual conflict can mediate and determine the fitness of an individual. Human hips are another example: females need larger hips for childbirth, whereas smaller hips (optimal for walking) suit males. [ 17 ] The genes that affect hip size must reach a compromise that is at neither the male optimum nor the female optimum.
In the Ibiza wall lizard ( Podarcis pityusensis ), intralocus sexual conflict exists over coloration. In this species, coloration is used as a signal of male fighting ability: males that are more brightly colored are perceived as better fighters. As lizards of this species age, they become larger and more colorful. During mating seasons, males typically compete for females and resources by fighting with each other, selecting opponents based on the intensity of their coloration. Females of this species also possess bright coloration. This trait is detrimental for females, since being colorful makes them more conspicuous to both males and predators. In males, however, being colorful helps win fights and increases reproductive success.
Another example can be seen in the horns of the Soay sheep ( Ovis aries ) and the tail length of the serin finch ( Serinus serinus ). Males of these species that possess larger horns or longer tails have higher success in male competition and increased reproductive success. However, these features require a great deal of energy for females to possess and do not benefit females in any significant way. [ 10 ]
Intralocus genetic differences between males and females have been identified in a variety of fish species using RAD sequencing , including gulf pipefish [ 18 ] and deacon rockfish . [ 19 ] It has been hypothesised that some of the loci in deacon rockfish may be examples of intralocus sexual conflict, but their function and evolutionary significance are currently uncertain. [ 19 ]
Several hypotheses attempt to explain possible resolutions of intralocus sexual conflict. One proposal suggests that intralocus sexual conflict can be minimized through sex-dependent gene regulation, whereby genes under sexually antagonistic selection evolve sexually dimorphic expression that moves each sex toward its own optimum. Sexual dimorphism is thought to be an effective resolution, since it can be made irreversible under short-term selection. [ 20 ] As a result, sexual dimorphism could serve as a resolution to intralocus sexual conflict. Another hypothesis suggests that intralocus sexual conflict can be resolved through alternative splicing . In this mechanism, the sex of an organism ultimately decides the final form of the protein created from a shared coding region within a set of genes: through this posttranscriptional process, the RNA produced by a gene is spliced so as to join exons in different combinations in each sex. [ 20 ] Genomic imprinting also presents a possible resolution for intralocus sexual conflict. In genomic imprinting, genes are marked through methylation of DNA with information about their parental origin. For genomic imprinting to resolve intralocus sexual conflict, parents would have to imprint their genes in a sex-specific manner. For example, males could imprint their genes so that sexually antagonistic alleles that benefit males are not expressed in X-bearing sperm. [ 20 ]
In organic chemistry , an intramolecular Diels-Alder cycloaddition is a Diels–Alder reaction in which the diene and the dienophile are both part of the same molecule . [ 1 ] The reaction leads to the formation of the cyclohexene -like structure as usual for a Diels–Alder reaction, but as part of a more complex fused or bridged cyclic ring system. This reaction can give rise to various natural derivatives of decalin . [ 2 ]
Because the two reacting groups are already attached, two basic modes of addition are possible in this reaction. Depending on whether the tether that links to the dienophile is attached to the end or the middle of the diene, fused or bridged polycyclic ring systems can be formed. [ 3 ]
The tether that attaches the two reacting groups also affects the geometry of the reaction. As a result of its conformational and other structural restrictions, the exo vs. endo outcomes [ 4 ] usually do not follow the selectivity trends of the simple (intermolecular) Diels–Alder reaction.
Intramolecular Diels-Alder cycloaddition has been used in total synthesis . Through this reaction polycyclic compounds can be accessed with high stereoselectivity . The following potential drugs have been synthesized using the intramolecular Diels-Alder reaction: salvinorin A , [ 7 ] himbacine , [ 8 ] and solanapyrone A . [ 9 ] | https://en.wikipedia.org/wiki/Intramolecular_Diels–Alder_cycloaddition |
The intramolecular Heck reaction (IMHR) in chemistry is the coupling of an aryl or alkenyl halide with an alkene in the same molecule . The reaction may be used to produce carbocyclic or heterocyclic organic compounds with a variety of ring sizes. Chiral palladium complexes can be used to synthesize chiral intramolecular Heck reaction products in non- racemic form. [ 1 ]
The Heck reaction is the palladium-catalyzed coupling of an aryl or alkenyl halide with an alkene to form a substituted alkene . [ 2 ] Intramolecular variants of the reaction may be used to generate cyclic products containing endo or exo double bonds . Ring sizes produced by the intramolecular Heck reaction range from four to twenty-seven atoms . Additionally, in the presence of a chiral palladium catalyst, the intramolecular Heck reaction may be used to establish tertiary or quaternary stereocenters with high enantioselectivity . [ 3 ] A number of tandem reactions , in which the intermediate alkylpalladium complex is intercepted either intra- or intermolecularly before β-hydride elimination, have also been developed. [ 4 ]
(1)
As shown in Eq. 2, the neutral pathway of the Heck reaction begins with the oxidative addition of the aryl or alkenyl halide into a coordinatively unsaturated palladium(0) complex (typically bound to two phosphine ligands ) to give complex I . Dissociation of a phosphine ligand followed by association of the alkene yields complex II , and migratory insertion of the alkene into the carbon-palladium bond establishes the key carbon-carbon bond.
Insertion takes place in a suprafacial fashion, but the dihedral angle between the alkene and palladium-carbon bond during insertion can vary from 0° to ~90°. After insertion, β-hydride elimination affords the product and a palladium(II)-hydrido complex IV , which is reduced by base back to palladium(0). [ 5 ]
(2)
Most asymmetric Heck reactions employing chiral phosphines proceed by the cationic pathway, which does not require the dissociation of a phosphine ligand. Oxidative addition of an aryl perfluorosulfonate generates a cationic palladium aryl complex V . The mechanism then proceeds as in the neutral case, with the difference that an extra site of coordinative unsaturation exists on palladium throughout the process.
Thus, coordination of the alkene does not require ligand dissociation. Stoichiometric amounts of base are still required to reduce the palladium(II)-hydrido complex VIII back to palladium(0). [ 6 ] Silver salts may be used to initiate the cationic pathway in reactions of aryl halides . [ 7 ]
(3)
Reactions involving palladium(II) acetate and phosphine ligands proceed by a third mechanism, the anionic pathway. [ 8 ] Base mediates the oxidation of a phosphine ligand by palladium(II) to a phosphine oxide . Oxidative addition then generates the anionic palladium complex IX . Loss of halide leads to neutral complex X , which undergoes steps analogous to the neutral pathway to regenerate anionic complex IX . A similar anionic pathway is also likely operative in reactions of bulky palladium tri( tert -butyl)phosphine complexes. [ 9 ]
(4)
Asymmetric Heck reactions establish quaternary or tertiary stereocenters . If migratory insertion generates a quaternary center adjacent to the palladium-carbon bond (as in reactions of trisubstituted or 1,1-disubstituted alkenes), β-hydride elimination toward that center is not possible and it is retained in the product. [ 3 ] Similarly, β-hydride elimination is not possible if a hydrogen syn to the palladium-carbon bond is not available. Thus, tertiary stereocenters can be established in conformationally restricted systems. [ 10 ]
(5)
The intramolecular Heck reaction may be used to form rings of a variety of sizes and topologies . β-Hydride elimination need not be the final step of the reaction, and tandem methods have been developed that involve the interception of palladium alkyl intermediates formed after migratory insertion by an additional reactant. This section discusses the most common ring sizes formed by the intramolecular Heck reaction and some of its tandem and asymmetric variants.
5- Exo cyclization, which establishes a five-membered ring with an exocyclic alkene, is the most facile cyclization mode in intramolecular Heck reactions. In this and many other modes of intramolecular Heck cyclization, annulations typically produce a cis ring juncture. [ 11 ]
(6)
6- Exo cyclization is also common. The high stability of Heck reaction catalysts permits the synthesis of highly strained compounds at elevated temperatures. In the example below, the arene and alkene must both be in energetically unfavorable axial positions in order to react. [ 12 ]
(7)
Endo cyclization is observed most often when small or large rings are involved. For instance, 5- endo cyclization is generally preferred over 4- exo cyclization. [ 13 ] The yield of endo product increases with increasing ring size in the synthesis of cycloheptenes , - octenes , and - nonenes . [ 14 ]
(8)
Tandem reactions initiated by IMHR have been extensively explored. Palladium alkyl intermediates generated after migratory insertion may undergo a second round of insertion in the presence of a second alkene (either intra- or intermolecular). [ 15 ] When dienes are involved in the intramolecular Heck reaction, insertion affords π-allylpalladium intermediates, which may be intercepted by nucleophiles . This idea was applied to a synthesis of (–)- morphine . [ 16 ]
(9)
Asymmetric IMHR may establish tertiary or quaternary stereocenters. BINAP is the most commonly used chiral ligand in this context. An interesting application of IMHR is group-selective desymmetrization (enantiotopic group selection), in which the chiral palladium aryl intermediate undergoes insertion predominantly with one of the enantiotopic double bonds. [ 17 ]
(10)
The high functional group tolerance of the intramolecular Heck reaction allows it to be used at a very late stage in synthetic routes. In a synthesis of (±)-FR900482, IMHR establishes a tricyclic ring system in high yield without disturbing any of the sensitive functionality nearby. [ 18 ]
(11)
Intramolecular Heck reactions have been employed for the construction of complex natural products. An example is the late-stage, macrocyclic ring closure in the total synthesis of the cytotoxic natural product (–)- Mandelalide A . [ 19 ] In another example a fully intramolecular tandem Heck reaction is used in a synthesis of (–)-scopadulcic acid. A 6-exo cyclization sets the quaternary center and provides a neopentyl σ-palladium intermediate, which undergoes a 5-exo reaction to provide the ring system. [ 20 ]
(12)
The closest competing method to IMHR is radical cyclization . [ 21 ] Radical cyclizations are often reductive, which can cause undesired side reactions to occur if sensitive substrates are employed. The IMHR, on the other hand, can be run under reductive conditions if desired. [ 22 ] Unlike the IMHR, radical cyclization does not require the coupling of two sp 2 -hybridized carbons. In some cases, the results of radical cyclization and IMHR are complementary. [ 23 ]
A variety of experimental concerns exist for IMHR reactions. Although most of the common Pd(0) catalysts are commercially available (Pd(PPh 3 ) 4 , Pd 2 (dba) 3 , and derivatives), they may also be prepared by simple, high-yielding procedures. [ 24 ] Palladium(II) acetate is cheap and may be reduced in situ to palladium(0) with phosphine . Three equivalents of phosphine per equivalent of palladium acetate are commonly used; these conditions generate Pd(PR 3 ) 2 as the active catalyst. Bidentate phosphine ligands are common in asymmetric reactions to enhance stereoselectivity.
A wide variety of bases may be used, and the base is often employed in excess. Potassium carbonate is the most common base employed, and inorganic bases are generally used more often than organic bases. A number of additives have also been identified for the Heck reaction: silver salts may be used to drive the reaction down the cationic pathway, and halide salts may be used to channel reactions of aryl triflates through the neutral pathway. Alcohols have been shown to enhance catalyst stability in some cases, [ 25 ] and acetate salts are beneficial in reactions following the anionic pathway. [ 8 ]
(13)
A solution of the amide (0.365 g, 0.809 mmol), Pd(PPh 3 ) 4 (0.187 g, 0.162 mmol), and triethylamine (1.12 mL, 8.08 mmol) in MeCN (8 mL) in a sealed tube was heated slowly to 120°. After stirring for 4 hours, the reaction mixture was cooled to room temperature, and the solvent was evaporated. The residue was chromatographed (loaded with CH 2 Cl 2 ) to give the title product 316 (0.270 g, 90%) as a colorless oil; R f 0.42 (EtOAc/ petroleum ether 10:1); [α] 22 D +14.9 (c, 1.0, CHCl 3 ); IR 3027, 2930, 1712, 1673, 1608, 1492, 1343, 1248 cm −1 ; 1 H NMR (400 MHz) δ 7.33–7.21 (m, 6 H), 7.07 (dd, J = 7.3, 16.4 Hz, 1 H), 7.00 (t, J = 7.5 Hz, 1 H), 6.77 (d, J = 7.7 Hz, 1 H), 6.30 (dd, J = 8.7, 11.4 Hz, 1 H), 5.32 (d, J = 15.7 Hz, 1 H), 5.04 (s, 1 H), 4.95 (s, 1 H), 4.93 (d, J = 11.1 Hz, 1 H), 4.17 (s, 1 H), 3.98 (d, J = 15.7 Hz, 1 H), 3.62 (d, J = 8.7 Hz, 1 H), 3.17 (s, 3 H), 2.56 (dd, J = 3.5, 15.5 Hz, 1 H), 2.06 (dd, J = 2.8, 15.5 Hz, 1 H); 13 C NMR (100 MHz) δ 177.4, 172.9, 147.8, 142.2, 136.5, 132.2, 131.6, 128.8, 128.4, 128.2, 127.7, 127.1, 123.7, 122.9, 107.9, 105.9, 61.0, 54.7, 49.9, 44.4, 38.2, 26.4; HRMS Calcd. for C 24 H 22 N 2 O 2 : 370.1681. Found: 370.1692. | https://en.wikipedia.org/wiki/Intramolecular_Heck_reaction |
Intramolecular aglycon delivery is a synthetic strategy for the construction of glycans . This approach is generally used for the formation of difficult glycosidic linkages .
Glycosylation reactions are very important reactions in carbohydrate chemistry , leading to the synthesis of oligosaccharides , preferably in a stereoselective manner. The stereoselectivity of these reactions has been shown to be affected by both the nature and the configuration of the protecting group at C-2 on the glycosyl donor ring. While 1,2- trans -glycosides (e.g. α-mannosides and β-glucosides) can be synthesised easily in the presence of a participating group (such as OAc, or NHAc) at the C-2 position in the glycosyl donor ring, 1,2- cis -glycosides are more difficult to prepare. 1,2- cis -glycosides with the α configuration (e.g. glucosides or galactosides) can often be prepared using a non-participating protecting group (such as Bn, or All) on the C-2 hydroxy group. However, 1,2- cis -glycosides with the β configuration are the most difficult to achieve, and present the greatest challenge in glycosylation reactions.
One of the most recent approaches to prepare 1,2- cis -β-glycosides in a stereospecific manner is termed ‘ Intramolecular Aglycon Delivery ’, and various methods have been developed based on this approach. [ 1 ] In this approach, the glycosyl acceptor is tethered onto the C-2-O-protecting group (X) in the first step. Upon activation of the glycosyl donor group (Y) (usually SR, OAc, or Br group) in the next step, the tethered aglycon traps the developing oxocarbenium ion at C-1, and is transferred from the same face as OH-2, forming the glycosidic bond stereospecifically. The yield of this reaction drops as the bulkiness of the alcohol increases.
In this method, the glycosyl donor is protected at the C-2 position by an OAc group. The C-2-OAc protecting group is transformed into an enol ether by the Tebbe reagent (Cp 2 Ti=CH 2 ), and then the glycosyl acceptor is tethered to the enol ether under acid-catalysed conditions to generate a mixed acetal. In a subsequent step, the β-mannoside is formed upon activation of the anomeric leaving group (Y), followed by work up. [ 2 ]
This method is similar to the previous method in that the glycosyl donor is protected at C-2 by an OAc group, which is converted into an enol ether by the Tebbe reagent . However, in this approach, N -iodosuccinimide (NIS) is used to tether the glycosyl acceptor to the enol ether, and in a second step, activation of the anomeric leaving group leads to intramolecular delivery of the aglycon to C-1 and formation of the 1,2- cis -glycoside product. [ 3 ]
The glycosyl donor is protected at C-2 by OAll group. The allyl group is then isomerized to a prop-1-enyl ether using a rhodium hydride generated from Wilkinson's catalyst ((PPh 3 ) 3 RhCl) and butyllithium (BuLi). The resulting enol ether is then treated with NIS and the glycosyl acceptor to generate a mixed acetal. The 1,2- cis (e.g. β-mannosyl) product is formed in a final step through activation of the anomeric leaving group, delivery of the aglycon from the mixed acetal and finally hydrolytic work-up to remove the remains of the propenyl ether from O-2. [ 4 ]
In this method, the glycosyl donor is protected at C-2 by a para -methoxybenzyl (PMB) group. The glycosyl acceptor is then tethered at the benzylic position of the PMB protecting group in the presence of 2,3-Dichloro-5,6-dicyano-1,4-benzoquinone (DDQ). The anomeric leaving group (Y) is then activated, and the developing oxocarbenium ion is captured by the tethered aglycon alcohol (OR) to give 1,2- cis β-glycoside product. [ 5 ]
This is a modification of the method of oxidative tethering to a para -methoxybenzyl ether. The difference here is that the para -alkoxybenzyl group is attached to a solid support; the β-mannoside product is released into the solution phase in the last step, while the by-products remain attached to the solid phase. This makes the purification of the β-glycoside easier; it is formed as the almost exclusive product. [ 6 ]
The initial step in this method involves the formation of a silyl ether at the C-2 hydroxy group of the glycosyl donor upon addition of dimethyldichlorosilane in the presence of a strong base such as butyllithium (BuLi); then the glycosyl acceptor is added to form a mixed silaketal. Activation of the anomeric leaving group in the presence of a hindered base then leads to the β-glycoside. [ 7 ]
A modified silicon-tethering method involves mixing of the glycosyl donor with the glycosyl acceptor and dimethyldichlorosilane in the presence of imidazole to give the mixed silaketal in one pot. Activation of the tethered intermediate then leads to the β-glycoside product. [ 8 ] | https://en.wikipedia.org/wiki/Intramolecular_aglycon_delivery |
An intramolecular force (from Latin intra- 'within') is any force that binds together the atoms making up a molecule . [ 1 ] Intramolecular forces are stronger than the intermolecular forces that govern the interactions between molecules. [ 2 ]
The classical model identifies three main types of chemical bonds (ionic, covalent, and metallic), distinguished by the degree of charge separation between the participating atoms. [ 3 ] The characteristics of the bond formed can be predicted from the properties of the constituent atoms, namely their electronegativity. The bond types differ in the magnitude of their bond enthalpies , a measure of bond strength, and thus affect the physical and chemical properties of compounds in different ways. The percentage of ionic character of a bond increases with the difference in electronegativity of the bonded atoms.
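A frequently quoted way of making the dependence on electronegativity quantitative is Pauling's empirical relation; the worked example below uses standard Pauling electronegativities and is illustrative only.

```latex
% Pauling's estimate of the fractional ionic character of a bond A-B:
\[
  f_{\text{ionic}} = 1 - e^{-(\chi_A - \chi_B)^2 / 4}
\]
% Example: NaCl, with chi(Na) ~ 0.93 and chi(Cl) ~ 3.16, so
% Delta(chi) = 2.23:
\[
  f_{\text{ionic}} = 1 - e^{-(2.23)^2/4} \approx 0.71 ,
\]
% i.e. roughly 71% ionic character, consistent with classifying Na-Cl
% as an ionic bond.
```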
An ionic bond can be approximated as complete transfer of one or more valence electrons of atoms participating in bond formation, resulting in a positive ion and a negative ion bound together by electrostatic forces. [ 4 ] Electrons in an ionic bond tend to be mostly found around one of the two constituent atoms due to the large electronegativity difference between the two atoms, generally more than 1.9, (greater difference in electronegativity results in a stronger bond); this is often described as one atom donating electrons to the other. [ 5 ] This type of bond is generally formed between a metal and nonmetal , such as sodium and chlorine in NaCl . Sodium would give an electron to chlorine, forming a positively charged sodium ion and a negatively charged chloride ion.
In a true covalent bond , the electrons are uniformly shared between the two atoms of the bond; there is little or no charge separation. Covalent bonds are generally formed between two nonmetals. There are several types of covalent bonds: in polar covalent bonds , electrons are more likely to be found around one of the two atoms, whereas in nonpolar covalent bonds, electrons are evenly shared. Homonuclear diatomic molecules are purely covalent. The polarity of a covalent bond is determined by the electronegativities of each atom and thus a polar covalent bond has a dipole moment pointing from the partial positive end to the partial negative end. [ 6 ] Polar covalent bonds represent an intermediate type in which the electrons are neither completely transferred from one atom to another nor evenly shared.
Metallic bonds generally form within a pure metal or metal alloy . Metallic electrons are generally delocalized ; the result is a large number of free electrons around positive nuclei , sometimes called an electron sea.
Bonds are formed by atoms so that they are able to achieve a lower energy state. Free atoms will have more energy than a bonded atom. This is because some energy is released during bond formation, allowing the entire system to achieve a lower energy state. The bond length, or the minimum separating distance between two atoms participating in bond formation, is determined by their repulsive and attractive forces along the internuclear direction. [ 3 ] As the two atoms get closer and closer, the positively charged nuclei repel, creating a force that attempts to push the atoms apart. As the two atoms get further apart, attractive forces work to pull them back together. Thus an equilibrium bond length is achieved and is a good measure of bond stability.
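The balance of attraction and repulsion described above is often summarized with a model potential-energy curve; the Morse potential below is one common textbook choice, not specific to any bond discussed here.

```latex
% Morse potential for a diatomic bond as a function of internuclear
% separation r:
\[
  V(r) = D_e \left( 1 - e^{-a (r - r_e)} \right)^{2}
\]
% r_e : equilibrium bond length, the minimum of the curve where the
%       attractive and repulsive forces balance
% D_e : well depth, the energy released on bond formation measured
%       from the bottom of the well
% a   : parameter controlling the stiffness (width) of the well
```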
Intramolecular forces are extremely important in the field of biochemistry, where they come into play at the most basic levels of biological structure. Intramolecular forces such as disulfide bonds give proteins and DNA their structure. Proteins derive their structure from the intramolecular forces that shape them and hold them together. The main source of structure in these molecules is the interaction between the amino acid residues that form the foundation of proteins. [ 7 ] The interactions between residues of the same protein form its secondary structure, allowing for the formation of beta sheets and alpha helices , which are important structures for proteins and, in the case of alpha helices, for DNA.
In chemistry , intramolecular describes a process or characteristic limited to the structure of a single molecule , that is, a property or phenomenon confined to the extent of a single molecule.
In intramolecular organic reactions , two reaction sites are contained within a single molecule. This configuration elevates the effective concentration of the reacting partners, resulting in high reaction rates; the effect is quantified by the effective molarity, sketched below. Many intramolecular reactions are observed where the intermolecular version does not take place.
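The definition of effective molarity below is standard; the interpretation given in the comments is a brief sketch.

```latex
% Effective molarity of an intramolecular reaction:
\[
  \mathrm{EM} = \frac{k_{\text{intra}}}{k_{\text{inter}}}
  \qquad \text{(units of concentration, M)}
\]
% EM is the concentration of external reagent at which the analogous
% intermolecular reaction would proceed as fast as the intramolecular
% one. EMs for favorable ring closures can exceed any attainable
% solution concentration, so the intramolecular pathway dominates.
```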
Intramolecular reactions, especially ones leading to the formation of 5- and 6-membered rings, are rapid compared to an analogous intermolecular process. This is largely a consequence of the reduced entropic cost for reaching the transition state of ring formation and the absence of significant strain associated with formation of rings of these sizes. For the formation of different ring sizes via cyclization of substrates of varying tether length, the order of reaction rates (rate constants k n for the formation of an n -membered ring) is usually k 5 > k 6 > k 3 > k 7 > k 4 , as measured, for example, for the cyclization of a series of ω-bromoalkylamines. This somewhat complicated rate trend reflects the interplay of entropic and strain factors, as follows.
For the 'small rings' ( 3- and 4-membered ), the slow rates are a consequence of angle strain experienced at the transition state. Although three-membered rings are more strained, formation of aziridine is faster than formation of azetidine due to the proximity of the leaving group and nucleophile in the former, which increases the probability that they will meet in a reactive conformation. The same reasoning holds for the 'unstrained rings' ( 5-, 6-, and 7-membered ). The formation of 'medium-sized rings' ( 8- to 13-membered ) is particularly disfavorable due to a combination of an increasingly unfavorable entropic cost and the additional presence of transannular strain arising from steric interactions across the ring. Finally, for 'large rings' ( 14-membered or higher ), the rate constants level off, as the distance between the leaving group and nucleophile is now so large that the reaction is effectively intermolecular. [ 1 ] [ 2 ]
Although the details may change somewhat, the general trends hold for a variety of intramolecular reactions, including radical-mediated and (in some cases) transition metal-catalyzed processes.
Many reactions in organic chemistry can occur in either an intramolecular or an intermolecular sense, and some reactions are by definition intramolecular or are practiced only intramolecularly.
Other transformations are enabled or enhanced by being intramolecular. For example, the acyloin condensation of diesters is almost unique in its ability to produce 10-membered carbocycles, which are difficult to construct otherwise. [ 5 ] Another example is the [2+2] cycloaddition of norbornadiene to give quadricyclane .
Many tools and concepts have been developed to exploit the advantages of intramolecular cyclizations. For example, installing large substituents exploits the Thorpe-Ingold effect . High dilution reactions suppress intermolecular processes. One set of tools involves tethering as discussed below.
Tethered intramolecular [2+2] reactions entail the formation of cyclobutane and cyclobutanone via intramolecular 2+2 photocycloadditions . Tethering ensures formation of a multi-cyclic system.
The length of the tether affects the stereochemical outcome of the [2+2] reaction. Longer tethers tend to generate the "straight" product, in which the terminal carbon of the alkene is linked to the α-carbon of the enone . [ 6 ] When the tether consists of only two carbons, the "bent" product is generated, in which the β-carbon of the enone is connected to the terminal carbon of the alkene. [ 7 ]
Tethered [2+2] reactions have been used to synthesize organic compounds with interesting ring systems and topologies . For example, [2+2] photocyclization was used to construct the tricyclic core structure in ginkgolide B. [ 8 ]
Otherwise-intermolecular reactions can be made temporarily intramolecular by linking both reactants via a tether, with all the advantages associated with it. Popular choices of tether contain a carbonate ester , boronic ester , silyl ether , or a silyl acetal link ( silicon tethers ) [ 9 ] [ 10 ] which are fairly inert in many organic reactions yet can be cleaved by specific reagents. The main hurdle for this strategy to work is selecting the proper length for the tether and making sure reactive groups have an optimal orientation with respect to each other. An example is a Pauson–Khand reaction of an alkene and an alkyne tethered together via a silyl ether. [ 11 ]
In this particular reaction, the tether angle bringing the reactive groups together is effectively reduced by placing isopropyl groups on the silicon atom via the Thorpe–Ingold effect . No reaction takes place when these bulky groups are replaced by smaller methyl groups. Another example is a photochemical [2+2] cycloaddition with two alkene groups tethered through a silicon acetal group (racemic, the other enantiomer not depicted), which is subsequently cleaved by TBAF yielding the endo-diol.
Without the tether, the exo isomer forms. [ 12 ] | https://en.wikipedia.org/wiki/Intramolecular_reaction |
Intramolecular reactions of diazocarbonyl compounds include addition to carbon–carbon double bonds to form fused cyclopropanes and insertion into carbon–hydrogen bonds or carbon–carbon bonds . [ 1 ]
In the presence of an appropriate transition metal (typically copper or rhodium [ 2 ] ), α-diazocarbonyl compounds are converted to transition metal carbenes , which undergo addition reactions in the presence of carbon–carbon double bonds to form cyclopropanes. [ 3 ] Insertion into carbon–carbon or carbon–hydrogen bonds is possible in substrates lacking a double bond. [ 4 ] The intramolecular version of this reaction forms fused carbocycles, although yields of reactions mediated by copper are typically moderate. For enantioselective cyclopropanations and insertions, both copper- and rhodium-based catalysts are employed, although the latter have been more heavily studied in recent years. [ 5 ]
(1)
The reaction mechanism of decomposition of diazocarbonyl compounds with copper begins with the formation of a copper carbene complex. Evidence for the formation of copper carbenes is provided by comparison to the behavior of photolytically generated free carbenes [ 6 ] and the observation of appreciable enantioselectivity in cyclopropanations with chiral copper complexes. [ 7 ] Upon formation of the copper carbene, either insertion or addition takes place to afford carbocycles or cyclopropanes, respectively. Both addition and insertion proceed with retention of configuration . [ 8 ] [ 9 ] Thus, diastereoselectivity may often be dictated by the configuration of the starting material.
(2)
Either copper powder or copper salts can be used very generally for intramolecular reactions of diazocarbonyl compounds. This section describes the different types of diazocarbonyl compounds that may undergo intramolecular reactions in the presence of copper. Note that for intermolecular reactions of diazocarbonyl compounds, the use of rhodium catalysts is preferred. [ 2 ]
Diazoketones containing pendant double bonds undergo cyclopropanation in the presence of copper. The key step in one synthesis of barbaralone is the selective intramolecular cyclopropanation of a cycloheptatriene . [ 10 ]
(3)
α,β-Cyclopropyl ketones may act as masked α,β-unsaturated ketones. In one example, intramolecular participation of an aryl group leads to the formation of a polycyclic ring system with complete diastereoselectivity. [ 11 ]
(4)
α-Diazoesters are not as efficient as diazoketones at intramolecular cyclizations in some cases because of the propensity of esters to exist in the trans conformation about the carbon–oxygen single bond. [ 12 ] However, intramolecular reactions of diazoesters do take place—in the example in equation (5), copper(II) sulfate is used to effect the formation of the cyclopropyl ester shown. [ 6 ]
(5)
In the presence of a catalytic amount of acid, diazomethyl ketone substrates containing a pendant double bond or aryl group undergo cyclization. The mechanism of this process most likely involves protonation of the diazocarbonyl group to form a diazonium salt , followed by displacement of nitrogen by the unsaturated functionality and deprotonation. In the example below, demethylation affords a quinone . [ 13 ]
(6)
When no unsaturated functionality is present in the substrate, C–H insertion may occur. C–H insertion is particularly facile in conformationally restricted substrates in which a C–H bond is held in close proximity to the diazo group. [ 14 ]
(7)
Transannular insertions, which form fused carbocyclic products, have also been observed. Yields are often low for these reactions, however. [ 15 ]
(8)
Insertion into carbon–carbon bonds has been observed. In the example in equation (9), the methyl group is held in close proximity to the diazo group, facilitating C-C insertion. [ 14 ]
(9)
Intramolecular cyclopropanation of a diazoketone is applied in a racemic synthesis of sirenin . A single cyclopropane diastereomer was isolated in 55% yield after diazoketone formation and cyclization. [ 16 ]
(10)
Diazo compounds may be explosive and should be handled with care. Very often, the diazocarbonyl compound is prepared and used immediately, via treatment of the corresponding acid chloride with an excess of diazomethane (see the experimental procedure below for an example). [ 17 ] Reaction times with copper are typically on the order of hours, and in some cases slow addition of the diazocarbonyl compound is necessary. Reactions should be carried out under an inert atmosphere and anhydrous conditions.
Source: [ 18 ]
(11)
A solution of the olefinic acid (0.499 g, 2.25 mmol) dissolved in benzene (20 ml, freshly distilled from calcium hydride) was stirred at 0 °C (ice bath) under nitrogen while oxalyl chloride (1.35 ml, 2.0 g, 15.75 mmol) was added dropwise. The ice bath was removed and the solution was stirred at room temperature for 2 hr. The solvent and excess reagent were removed in vacuo . The resulting orange oil was dissolved in benzene (2 x 5.0 ml, freshly distilled from calcium hydride ) under nitrogen. This solution was added dropwise at 0 °C (ice bath) to an anhydrous ethereal solution of diazomethane (50 ml, ~20 mmol, predried over sodium metal) with vigorous stirring under nitrogen. The resulting solution was stirred at 0 °C for 1 hr and then at room temperature for 1.5 hr. The solvents and excess reagent were removed in vacuo .
Tetrahydrofuran (40 ml, freshly distilled from lithium aluminum hydride) and finely divided metallic copper powder (0.67 g) were added to the crude diazo ketone, sequentially. This suspension was vigorously stirred at reflux under nitrogen for 2 hr. The resulting suspension was allowed to stir at room temperature for an additional 14 hr. The solution was filtered into water (100 ml). The mixture was shaken vigorously for 5 min and then extracted with ether (3 x 50 ml). The combined ethereal extracts were washed with saturated sodium bicarbonate solution (4 X 40 ml), water (40 ml), and saturated sodium chloride solution (40 ml), dried (Na 2 SO 4 ), and concentrated in vacuo to give 0.673 g of a crude brown oil. This crude oil was chromatographed on silica gel (67 g) in a 2-cm diameter column using 10% ether-90% petroleum ether to develop the column, taking 37-ml sized fractions. Fractions 11–16 gave 0.164 g (33%) of pure ketone product: mp 64-64.5° (from pentane); IR (CCl 4 ) 3095 (cyclopropyl CH)
and 1755 cm −1 (CO); NMR (CCl 4 ) δ 1.18 (s, 3H, CH 3 ) 1.03 (9, 3H, CH 3 ),
0.97 (s, 3H, CH 3 ), and 0.90 ppm (s, 3H, CH 3 ). Anal. Calcd for C 15 H 22 O: C, 82.52; H, 10.16. Found: C, 82.61; H, 10.01. | https://en.wikipedia.org/wiki/Intramolecular_reactions_of_diazocarbonyl_compounds |
Intramolecular vibrational energy redistribution ( IVR ) is a process in which energy is redistributed among the different quantum states of a vibrationally excited molecule . Such redistribution is assumed by successful theories of unimolecular reaction rates [ 1 ] [ 2 ] such as RRKM theory , which take the redistribution to be fully statistical across all vibrational modes. Restricted redistribution, by contrast, could enable bond-selective chemistry, for which deposited energy must remain in a particular mode for as long as the required reaction takes to occur. [ 3 ]
| https://en.wikipedia.org/wiki/Intramolecular_vibrational_energy_redistribution
Intraspecific antagonism means a disharmonious or antagonistic interaction between two individuals of the same species . As such, it could be a sociological term, but was actually coined by Alan Rayner and Norman Todd working at Exeter University in the late 1970s, to characterise a particular kind of zone line formed between wood-rotting fungal mycelia . Intraspecific antagonism is one of the expressions of a phenomenon known as vegetative or somatic incompatibility. [ 1 ]
Zone lines form in wood for many reasons, including host reactions against parasitic encroachment, and inter-specific interactions, but the lines observed by Rayner and Todd when transversely-cut sections of brown-rotted birch tree trunk or branch were incubated in plastic bags appeared to be due to a reaction between different individuals of the same species of fungus . [ 2 ]
This was a startling inference at a time when the prevailing orthodoxy within the mycological community was that of the "unit mycelium". This was the theory that when two different individuals of the same species of basidiomycete wood-rotting fungi grew and met within the substratum, they fused, cooperated, and shared nuclei freely. [ 3 ] Rayner and Todd's insight was that individual basidiomycete fungi do, in most "adult" or dikaryotic cases anyway, retain their individuality. [ 4 ]
A small stable of postgraduate and postdoctoral students helped elucidate the mechanisms underlying these intermycelial interactions, at Exeter University (Todd) and the University of Bath (Rayner), over the next few years.
Although the attribution of individual status to the mycelia confined by intraspecific zone lines is a comparatively new idea, zone lines themselves have been known since time immemorial. The term spalting is applied by woodworkers to wood showing strongly-figured zone lines, particularly those cases where the area of "no-man's land" between two antagonistic conspecific mycelia is colonised by another species of fungus. Dematiaceous hyphomycetes, with their dark-coloured mycelia, produce particularly attractive black zone lines when they colonise the areas occupied by two antagonistic basidiomycete individuals. Spalted wood can be difficult to work, since different individual wood-rotting fungi have different decay efficiencies, and thus produce zones of different softness, and the zone lines themselves are usually unrotted and hard.
Intraspecific antagonism can also sometimes be of assistance in quickly recognising the membership of clones in those fungi, particularly root-rots such as Armillaria , where individual mycelia may colonise large areas or more than one tree.
The phenomenon has even been the subject of a patent. | https://en.wikipedia.org/wiki/Intraspecific_antagonism
Intraspecific competition is an interaction in population ecology , whereby members of the same species compete for limited resources. This leads to a reduction in fitness for both individuals, but the more fit individual survives and is able to reproduce. [ 1 ] By contrast, interspecific competition occurs when members of different species compete for a shared resource. Members of the same species have rather similar requirements for resources, whereas different species have a smaller contested resource overlap , resulting in intraspecific competition generally being a stronger force than interspecific competition. [ 2 ]
Individuals can compete for food, water, space, light, mates, or any other resource which is required for survival or reproduction. The resource must be limited for competition to occur; if every member of the species can obtain a sufficient amount of every resource then individuals do not compete and the population grows exponentially . [ 1 ] Prolonged exponential growth is rare in nature because resources are finite and so not every individual in a population can survive, leading to intraspecific competition for the scarce resources.
When resources are limited, an increase in population size reduces the quantity of resources available for each individual, reducing the per capita fitness in the population. As a result, the growth rate of a population slows as intraspecific competition becomes more intense, making it a negatively density-dependent process. The falling population growth rate as population increases can be modelled effectively with the logistic growth model . [ 3 ] The rate of change of population density eventually falls to zero, at a point ecologists term the carrying capacity ( K ). The carrying capacity is the maximum number of individuals an environment can sustain and support over a long period; because the resources within an environment are finite, an environment can support only a certain number of individuals before its resources are depleted. [ 3 ] Populations larger than the carrying capacity suffer negative growth until they fall back to it, whereas populations smaller than the carrying capacity grow until they reach it. [ 3 ]
Intraspecific competition does not just involve direct interactions between members of the same species (such as male deer locking horns when competing for mates) but can also include indirect interactions where an individual depletes a shared resource (such as a grizzly bear catching a salmon that can then no longer be eaten by bears at different points along a river).
The way in which resources are partitioned by organisms also varies and can be split into scramble and contest competition. Scramble competition involves a relatively even distribution of resources among a population, as all individuals exploit a common resource pool. In contrast, contest competition is the uneven distribution of resources and occurs when hierarchies in a population influence the amount of resource each individual receives. Organisms in the most prized territories or at the top of the hierarchies obtain a sufficient quantity of the resources, whereas individuals without a territory obtain none of the resource. [ 1 ]
Interference competition is the process by which individuals directly compete with one another in pursuit of a resource. It can involve fighting, stealing or ritualised combat . Direct intraspecific competition also includes animals claiming a territory which then excludes other animals from entering the area. There may not be an actual conflict between the two competitors, but the animal excluded from the territory suffers a fitness loss due to a reduced foraging area and is unable to enter the area as it risks confrontation from a more dominant member of the population . As organisms are encountering each other during interference competition, they are able to evolve behavioural strategies and morphologies to out-compete rivals in their population. [ 4 ]
For example, different populations of the northern slimy salamander ( Plethodon glutinosus ) have evolved varying levels of aggression depending on the intensity of intraspecific competition. In populations where resources are scarcer, more aggressive behaviours are likely to evolve. It is a more effective strategy to fight rivals within the species harder instead of searching for other options, due to the lack of available food. [ 5 ] More aggressive salamanders are more likely to obtain the resources they require to reproduce, whereas timid salamanders may starve before reproducing, so aggression can spread through the population .
In addition, a study on Chilean flamingos ( Phoenicopterus chilensis ) found that birds in a bond were much more aggressive than single birds. The paired birds were significantly more likely to start an agonistic encounter in defense of their mate or young whereas single birds were typically non-breeding and less likely to fight. [ 6 ] Not all flamingos can mate in the population because of an unsuitable sex ratio or some dominant flamingos mating with multiple partners. Mates are a fiercely contested resource in many species as the production of offspring is essential for an individual to propagate its genes.
Organisms can compete indirectly, either via exploitative or apparent competition . Exploitative competition involves individuals depleting a shared resource and both suffering a loss in fitness as a result. The organisms may not actually come into contact and only interact via the shared resource indirectly.
For instance, exploitative competition has been shown experimentally between juvenile wolf spiders ( Schizocosa ocreata ). Both increasing the density of young spiders and reducing the available food supply lowered the growth of individual spiders. Food is clearly a limiting resource for the wolf spiders, but there was no direct competition between juveniles for food, just a reduction in fitness due to the increased population density . [ 7 ] The negative density dependence in young wolf spiders is evident: as the population density increases further, growth rates continue to fall and could potentially reach zero (as predicted by the logistic growth model ). This is also seen in the viviparous lizard ( Lacerta vivipara ), where the existence of color morphs within a population depends on the density and intraspecific competition.
In stationary organisms, such as plants, exploitative competition plays a much larger role than interference competition because individuals are rooted to a specific area and utilise resources in their immediate surroundings. Saplings will compete for light, most of which will be blocked and utilised by taller trees. [ 8 ] The saplings can be easily out-competed by larger members of their own species, which is one of the reasons why seed dispersal distances can be so large. Seeds that germinate in close proximity to the parents are very likely to be out-competed and die.
Apparent competition occurs in populations that are predated upon. An increase in the population of the prey species will bring more predators to the area, which increases the risk of an individual being eaten and hence lowers its survivorship. Like exploitative competition, the individuals are not interacting directly but rather suffer a reduction in fitness as a consequence of the increasing population size. Apparent competition is generally associated with interspecific rather than intraspecific competition, whereby two different species share a common predator . An adaptation that makes one species less likely to be eaten results in a reduction in fitness for the other prey species, because the predator species hunts more intensely as food has become more difficult to obtain. For example, native skinks ( Oligosoma ) in New Zealand suffered a large decline in population after the introduction of rabbits ( Oryctolagus cuniculus ). [ 9 ] Both species are eaten by ferrets ( Mustela furo ), so the introduction of rabbits resulted in immigration of ferrets to the area, which then depleted skink numbers.
Contest competition takes place when a resource is associated with a territory or hierarchical structure within the population. For instance, white-faced capuchin monkeys ( Cebus capucinus ) have different energy intakes based on their ranking within the group. [ 10 ] Both males and females compete for territories with the best access to food, and the most successful monkeys are able to obtain a disproportionately large quantity of food and therefore have a higher fitness in comparison to the subordinate members of the group. In the case of Ctenophorus pictus lizards, males compete for territory. Among the polymorphic variants, red lizards are more aggressive in defending their territory than their yellow counterparts. [ 11 ]
Aggressive encounters are potentially costly for individuals as they can get injured and be less able to reproduce. As a result, many species have evolved forms of ritualised combat to determine who wins access to a resource without having to undertake a dangerous fight. Male adders ( Vipera berus ) undertake complex ritualised confrontations when courting females. Generally, the larger male will win and fights rarely escalate to injury to either combatant. [ 12 ]
However, sometimes the resource may be so prized that potentially fatal confrontations can occur to acquire it. Male elephant seals, Mirounga angustirostris , engage in fierce competitive displays in an attempt to control a large harem of females with which to mate. The distribution of females and subsequent reproductive success is very uneven between males. The reproductive success of most males is zero; they die before breeding age or are prevented from mating by higher-ranked males. In addition, just a few dominant males account for the majority of copulations. [ 13 ] The potential reproductive success for males is so great that many are killed before breeding age as they attempt to move up the hierarchy in their population.
Contest competition produces relatively stable population dynamics. The uneven distribution of resources results in some individuals dying off but helps to ensure that the members of the population that hold a territory can reproduce. As the number of territories in an area stays the same over time, the breeding population remains constant which produces a similar number of new individuals every breeding season.
Scramble competition involves a more equal distribution of resources than contest competition and occurs when there is a common resource pool that an individual cannot be excluded from. For instance, grazing animals compete more strongly for grass as their population grows and food becomes a limiting resource. Each herbivore receives less food as more individuals compete for the same quantity of food. [ 4 ]
Scramble competition can lead to unstable population dynamics: the equal division of resources can result in very few of the organisms obtaining enough to survive and reproduce, and this can cause population crashes. This phenomenon is called overcompensation . For instance, the caterpillars of cinnabar moths feed via scramble competition, and when there are too many caterpillars competing, very few are able to pupate and there is a large population crash. [ 14 ] Subsequently, very few cinnabar moths are competing intraspecifically in the next generation, so the population grows rapidly before crashing again.
The major impact of intraspecific competition is reduced population growth rates as population density increases. When resources are infinite, intraspecific competition does not occur and populations can grow exponentially. Exponential population growth is exceedingly rare, but has been documented, most notably in humans since 1900. Elephant ( Loxodonta africana ) populations in Kruger National Park (South Africa) also grew exponentially in the mid-1900s after strict poaching controls were put in place. [ 15 ]
dN(t)/dt = rN(t)(1 − N(t)/K), where:
dN(t)/dt = rate of change of population density
N(t) = population size at time t
r = per capita growth rate
K = carrying capacity
The logistic growth equation is an effective tool for modelling intraspecific competition despite its simplicity, and has been used to model many real biological systems. At low population densities, N(t) is much smaller than K, and the main determinant of population growth is just the per capita growth rate. However, as N(t) approaches the carrying capacity, the factor (1 − N(t)/K) in the logistic equation approaches zero, reducing the rate of change of population density. [ 16 ]
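As an illustration, the logistic model above can be integrated numerically. The sketch below uses a simple forward Euler step; the parameter values (r, K, and the initial population size) are illustrative assumptions, not values from any study cited here.

```python
import numpy as np

def logistic_growth(n0, r, k, dt=0.1, steps=1000):
    """Integrate dN/dt = r*N*(1 - N/K) with the forward Euler method."""
    n = np.empty(steps + 1)
    n[0] = n0
    for t in range(steps):
        n[t + 1] = n[t] + dt * r * n[t] * (1.0 - n[t] / k)
    return n

# Illustrative values: small founding population, carrying capacity 500.
trajectory = logistic_growth(n0=10, r=0.5, k=500)
print(trajectory[0], trajectory[-1])  # growth slows as N(t) approaches K
```

Running the sketch shows the behaviour described above: near-exponential growth while N(t) is far below K, and a rate of change that falls toward zero as the population approaches the carrying capacity.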
The logistic growth curve is initially very similar to the exponential growth curve. When population density is low, individuals are free from competition and can grow rapidly. However, as the population reaches its maximum (the carrying capacity), intraspecific competition becomes fiercer and the per capita growth rate slows until the population reaches a stable size. At the carrying capacity, the rate of change of population density is zero because the population is as large as possible based on the resources available. [ 4 ] Experiments on Daphnia growth rates showed a striking adherence to the logistic growth curve. [ 17 ] The inflexion point in the Daphnia population density graph occurred at half the carrying capacity, as predicted by the logistic growth model.
Gause's 1930s lab experiments showed logistic growth in microorganisms. Populations of yeast grown in test tubes initially grew exponentially, but as resources became scarcer their growth rates slowed until the populations reached the carrying capacity. [ 3 ] If the populations were moved to a larger container with more resources, they would continue to grow until reaching their new carrying capacity. The shape of their growth can be modeled very effectively with the logistic growth model. | https://en.wikipedia.org/wiki/Intraspecific_competition
Intrasporangiaceae is an actinomycete family. The family is named after the type genus Intrasporangium . The type species of Intrasporangium ( I. calvum ) was originally thought to form endospores ; however, the mycelium of this strain may bear intercalary vesicles that were originally identified as spores. [ 3 ] No members of Intrasporangiaceae are known to form spores.
The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature [ 2 ] and the phylogeny is based on whole-genome sequences. [ 4 ] [ a ]
Intrasporangium
Janibacter
Tetrasphaera
Pedococcus
Phycicoccus
Knoellia
Dermacoccaceae
| https://en.wikipedia.org/wiki/Intrasporangiaceae
Intratracheal instillation is the introduction of a substance directly into the trachea . It is widely used to test the respiratory toxicity of a substance as an alternative to inhalation in animal testing . [ 1 ] Intratracheal instillation was reported as early as 1923 in studies of the carcinogenicity of coal tar . Modern methodology was developed by several research groups in the 1970s. [ 1 ] By contrast, tracheal administration of pharmaceutical drugs in humans is called endotracheal administration . [ 2 ]
As compared to inhalation, intratracheal instillation allows greater control over the dose and location of the substance, is cheaper and less technically demanding, allows lower amounts of scarce or expensive substances to be used, allows substances to be tested that can be inhaled by humans but not small mammals, and minimizes exposure to laboratory workers and to the skin of laboratory animals. Disadvantages include its nonphysiological and invasive nature, the confounding effects of the delivery vehicle and anesthesia, and the fact that it bypasses the upper respiratory tract . Instillation results in a less uniform distribution of the substance than inhalation, and the substance is cleared from the respiratory tract more slowly. [ 1 ] Results from instillation studies provide a quick screen of potential toxicity and can be used to test its mechanism, but may not be directly applicable to occupational exposure that occurs over an extended period. [ 3 ] Some of these difficulties are overcome by another method, pharyngeal aspiration , which is less technically difficult, causes less trauma to the animal, [ 4 ] and has a pulmonary deposition pattern more similar to inhalation. [ 5 ]
Intratracheal instillation is often performed with mice , rats , or hamsters , with hamsters often preferred because their mouth can be opened widely to aid viewing the procedure, [ 6 ] and because they are more resistant to lung diseases than rats. [ 7 ] Instillation is performed either through inserting a needle or catheter down the mouth and throat, or through surgically exposing the trachea and penetrating it with a needle. Generally, short-acting inhaled anesthetic drugs such as halothane , metaphane , or enflurane are used during the instillation procedure. Saline solution is usually used as a delivery vehicle in a typical volume of 1–2 mL/kg body weight. [ 1 ] A wide range of substances can be tested, including both soluble materials and insoluble particles or fibers, including nanomaterials . [ 1 ] [ 3 ] [ 5 ] | https://en.wikipedia.org/wiki/Intratracheal_instillation |
Intravascular immunity describes the immune response in the bloodstream, and its role is to fight and prevent the spread of pathogens . [ 1 ] [ 2 ] Components of intravascular immunity include the cellular immune response and the macromolecules secreted by these cells. It can result in responses such as inflammation and immunothrombosis. [ 3 ] [ 4 ] Dysregulated intravascular immune response or pathogen evasion can create conditions like thrombosis , sepsis , or disseminated intravascular coagulation . [ 1 ] [ 5 ] [ 4 ] [ 6 ] [ 3 ] [ 2 ]
In a healthy individual, immune cells patrol blood vessels to detect and respond to danger through molecules frequently found on pathogens, called PAMPs , and molecules released by damaged cells, called DAMPs . [ 1 ] [ 2 ] Immune cells involved in intravascular surveillance are neutrophils , monocytes , invariant natural killer T cells , Kupffer cells , platelets , and mast cells . [ 1 ] [ 2 ] These cells express particular receptors, such as toll-like receptors , and proteins, like CD36 , that allow them to recognize and respond to danger signals. [ 2 ] Endothelial cells lining the vasculature are also part of the intravasculature's cellular defense system. They express molecules such as CD14 , TLR2 , TLR4 , TLR9 , MD2, and MyD88 to detect bacteria in the blood. [ 2 ]
Leukocytes move through blood vessels using protein-protein interactions between cells and are also assisted by blood flow . [ 2 ] Circulating immune cells behave differently in the presence and absence of an infection. For example, in the absence of an invader, monocytes migrate randomly throughout the microvasculature, cerebral vessels, and mesentery vessels; in the presence of an invader, however, monocytes emigrate to the infected area. [ 2 ] Similarly, neutrophils use a rolling mechanism to counteract the blood flow and localize to the infected area. [ 4 ] [ 2 ] In a healthy state, neutrophils have been observed to exhibit a similar but brief crawling mechanism, whose function and precise mechanism are not yet known. [ 2 ]
For more details on this topic, see Inflammation .
Inflammation is an immune response in the body tissue due to stimulation of immune cells by pathogens, DAMPs, or stress. [ 5 ] [ 6 ] The vasculature provides a means of transportation for alerting and recruiting immune cells. [ 2 ] [ 5 ]
Thrombosis is the formation of a blood clot through coagulation and platelet aggregation, and it may result in a lack of blood flow through the circulatory system. The resulting depletion of oxygen may cause irreversible damage to organs. In other circumstances, however, the physiological process can be beneficial for the body; this process is known as immunothrombosis. [ 3 ] [ 1 ] The process isolates infections using blood clots formed by activated platelets , leukocytes , and coagulation factors , which assist leukocytes in adhering and migrating to infected areas. Fibrin produced at sites of platelet activation seals leaky vessels and is important in blood coagulation; it provides a matrix to trap pathogens and recruit immune cells. Characteristics such as the elongation and thickness of fibrin and of protofibril, the precursor of fibrin, are determined by many factors, including environmental conditions, physiological conditions, and the branching of the fibrin fibers. [ 6 ] [ 3 ] This in turn influences the clot structure, such as its permeability, stiffness, and how easily the clot can be retracted. The shift from protective immunothrombosis to a more pathogenic thrombosis occurs when immunothrombosis becomes dysregulated. [ 3 ] | https://en.wikipedia.org/wiki/Intravascular_immunity
Intravital microscopy is a form of microscopy that allows observing biological processes in live animals ( in vivo ) at a high resolution that makes distinguishing between individual cells of a tissue possible. [ 1 ]
In mammals , in some experimental settings, a surgical implantation of an imaging window is performed prior to intravital microscopy. This allows repeated observations over several days or weeks. For example, if researchers want to visualize liver cells of a live mouse, they will implant an imaging window into the mouse's abdomen . [ 2 ] Mice are the most common choice of animals for intravital microscopy, but in special cases other rodents such as rats might be more suitable. Animals are usually anesthetized throughout surgeries and imaging sessions.
Intravital microscopy is used in several areas of research including neurology , immunology , stem cell studies and others. This technique is particularly useful to assess a progression of a disease or an effect of a drug. [ 1 ]
Intravital microscopy involves imaging cells of a live animal through an imaging window that is implanted into the animal tissue during a special surgery. The main advantage of intravital microscopy is that it allows imaging living cells while they are in the true environment of a complex multicellular organism . Thus, intravital microscopy allows researchers to study the behavior of cells in their natural environment or in vivo rather than in a cell culture . Another advantage of intravital microscopy is that the experiment can be set up in a way to allow observing changes in a living tissue of an organism over a period of time. This is useful for many areas of research including immunology [ 3 ] and stem cell research. [ 1 ] High quality of modern microscopes and imaging software also permits subcellular imaging in live animals that in turn allows studying cell biology at molecular level in vivo . Advancements in fluorescent protein technology and genetic tools that enable controlled expression of a given gene at a specific time in a tissue of interest also played an important role in intravital microscopy development. [ 1 ]
The possibility of generating appropriate transgenic mice is crucial for intravital microscopy studies. For example, in order to study the behavior of microglial cells in Alzheimer's disease researchers will need to crossbreed a transgenic mouse that is a mouse model of Alzheimer's disease with another transgenic mouse that is a mouse model for visualization of microglial cells. Cells need to produce a fluorescent protein to be visualized and this can be achieved by introducing a transgene . [ 4 ]
Intravital microscopy can be performed using several light microscopy techniques including widefield fluorescence, confocal , multiphoton , spinning disc microscopy and others. The main consideration for the choice of a particular technique is the penetration depth needed to image the area and the amount of cell-cell interaction details required.
If the area of interest is located more than 50–100 μm below the surface or there is a need to capture small-scale interactions between cells, multiphoton microscopy is required. Multiphoton microscopy provides considerably greater depth of penetration than single-photon confocal microscopy. [ 5 ] Multiphoton microscopy also allows visualizing cells located underneath bone tissues such as cells of the bone marrow . [ 6 ] The maximum depth for the imaging with multiphoton microscopy depends on the optical properties of the tissue and experimental equipment. The more homogenous the tissue is the better it is suited for intravital microscopy. More vascularized tissues are generally more difficult to image because red blood cells cause absorption and scattering of the microscope light beam. [ 1 ]
Fluorescence labeling of different cell lineages with differently coloured proteins allows visualizing cellular dynamics in the context of their microenvironment . If the image resolution is high enough (50–100 μm), it can be possible to use several images to generate 3D models of cellular interactions, including the protrusions that cells make while extending toward each other. 3D models from time-lapse image sequences allow assessing the speed and directionality of cellular movements. Vascular structures can also be reconstructed in 3D space, and changes in their permeability can be monitored over a period of time, as the fluorescent signal intensity of dyes changes when vascular permeability does. High resolution intravital microscopy can be used to visualize spontaneous and transient events. [ 1 ] It might be useful to pair up multiphoton and confocal microscopy, as this allows getting more information from every imaging session. This includes visualization of more different cell types and structures to obtain more informative images, and using a single animal to obtain images of all the different cell types and structures that are of interest for a given experiment. [ 7 ] The latter is an example of implementing the Three Rs principle.
In the past, intravital microscopy could only be used to image biological processes at tissue or single-cell levels. However, due to development of subcellular labeling techniques and advances in minimizing motion artifacts (errors generated by heartbeat, breath and peristaltic movements of an animal during imaging session) it is now becoming possible to image dynamics of intracellular organelles in some tissues. [ 1 ]
One of the main advantages of intravital microscopy is the opportunity to observe how cells interact with their microenvironment . However, visualization of all the cell types of the microenvironment is limited by the number of distinguishable fluorescent labels available. [ 5 ] It is also widely accepted that some tissues such as brain can be visualized easier than others such as skeletal muscle . These differences occur due to variability in homogeneity and transparency of different tissues.
In addition, generating transgenic mice with a phenotype of interest and fluorescent proteins in appropriate cell types is often challenging and time-consuming. [ 5 ] Another problem associated with the use of transgenic mice is that it is sometimes difficult to interpret changes observed between a wild-type mouse and a transgenic mouse that represents the phenotype of interest. The reason for this is that genes of similar function can often compensate for the altered gene, leading to some degree of adaptation. [ 8 ] | https://en.wikipedia.org/wiki/Intravital_microscopy
Intrinsic DNA fluorescence is the fluorescence emitted directly by DNA when it absorbs ultraviolet (UV) radiation. It contrasts with the fluorescence stemming from fluorescent labels that are either simply bound to DNA or covalently attached to it, [ 1 ] [ 2 ] which are widely used in biological applications; such labels may be chemically modified, non-naturally occurring nucleobases. [ 3 ] [ 4 ]
The intrinsic DNA fluorescence was discovered in the 1960s by studying nucleic acids in low-temperature glasses. [ 5 ] Since the beginning of the 21st century, the much weaker emission of nucleic acids in fluid solutions has been studied at room temperature by means of sophisticated spectroscopic techniques, using femtosecond laser pulses as the UV source and following the evolution of the emitted light from femtoseconds to nanoseconds . [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] The development of specific experimental protocols has been crucial for obtaining reliable results.
Fluorescence studies combined to theoretical computations [ 11 ] [ 12 ] [ 13 ] and transient absorption measurements [ 14 ] [ 15 ] bring information about the relaxation of the electronic excited states and, thus, contribute to understanding the very first steps of a complex series of events triggered by UV radiation, ultimately leading to DNA damage. [ 16 ] The principles governing the behavior of the intrinsic RNA fluorescence, to which only a few studies have been dedicated, [ 17 ] [ 18 ] [ 19 ] are the same as those described for DNA.
The knowledge of the fundamental processes underlying the DNA fluorescence paves the way for the development of label-free biosensors . [ 20 ] [ 21 ] The development of such optoelectronic devices for certain applications would have the advantage of bypassing the step of chemical synthesis or avoiding the uncertainties due to non-covalent binding of fluorescent dyes to nucleic acids.
Due to the weak intensity of the intrinsic DNA fluorescence, specific precautions are necessary in order to perform correct measurements and obtain reliable results. A first requirement concerns the purity of the DNA samples and that of the chemicals and water used for the preparation of the buffered solutions. The buffer emission must be systematically recorded and, in certain cases, subtracted in an appropriate way. [ 22 ] A second requirement is associated with the DNA damage provoked by the exciting UV light, which alters the fluorescence. [ 23 ] In order to overcome these difficulties, continuous stirring of the solution is needed. For measurements using laser excitation, circulation of the DNA solution by means of a peristaltic pump is recommended, and the reproducibility of successive fluorescence signals needs to be checked.
The fluorescence spectra of the DNA monomeric chromophores (nucleobases, nucleosides or nucleotides) in neutral aqueous solution, obtained with excitation around 260 nm, peak in the near ultraviolet (300-400 nm), and a long tail extending over the whole visible domain is present in their emission spectra. The spectra of the DNA multimers (composed of more than one nucleobase) are not the sum of the spectra of their monomeric constituents. In some cases, in addition to the main peak located in the UV, a second band [ 24 ] [ 25 ] [ 26 ] is present at longer wavelengths; it is attributed to excimer or exciplex formation. [ 27 ] [ 28 ]
The duplex spectra are affected by their size [ 29 ] and the viscosity of the solution, [ 30 ] while those of G-Quadruplexes by the metal cations present in their central cavity. [ 31 ] [ 32 ] [ 33 ] Due to the fluorescence dependence on the secondary structure, it is possible to follow the formation [ 34 ] and the melting [ 35 ] of G-Quadruplexes by monitoring their emission; and also to detect the occurrence of hairpin loops in these systems. [ 36 ] [ 37 ]
The fluorescence quantum yield Φ, that is, the number of emitted photons over the number of absorbed photons, is typically in the range of 10 −4 -10 −3 . The highest values are encountered for G-quadruplexes. [ 38 ] [ 39 ] [ 40 ] The DNA nucleoside thymidine (dT) was proposed as a reference for the determination of small fluorescence quantum yields. [ 41 ]
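Spelled out as a formula (a restatement of the definition just given, with an illustrative numerical reading):

```latex
\Phi \;=\; \frac{N_{\mathrm{emitted}}}{N_{\mathrm{absorbed}}}\,;
\qquad
\Phi = 10^{-4} \ \text{corresponds to one emitted photon per } 10^{4} \text{ absorbed photons}.
```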
A limited number of measurements were also performed with UVA excitation (330 nm), where DNA single and double strands, but not their monomeric units, absorb weakly. [ 42 ] The UVA-induced fluorescence peaks between 415 and 430 nm; the corresponding Φ values are at least one order of magnitude higher compared to those determined with excitation around 260 nm. [ 43 ]
The fluorescence of some minor, naturally occurring nucleobases, such as 5-methyl cytosine, N7-methylated guanosine or N6-methyladenine, has been studied both in monomeric form and in multimers. [ 44 ] [ 45 ] [ 46 ] The emission spectra of these systems are red-shifted compared to those of the major nucleobases and give rise to exciplexes.
The emission spectra described in this section are derived from fundamental studies; they may differ from those reported in application-oriented studies, which are shifted to longer wavelengths. The reason is that the latter are usually recorded for solutions with higher concentration. As a result, photons emitted at short wavelengths are reabsorbed by the DNA solution (inner filter effect) and the blue part of the spectrum is truncated.
The specificity of the intrinsic DNA fluorescence is that, contrary to most fluorescent molecules, its time evolution cannot be described by a constant decay rate (that is, by a mono-exponential function). For the monomeric units, the fluorescence lasts at most a few picoseconds. In the case of multimers, the fluorescence continues over much longer times, lasting in some cases for several tens of nanoseconds. The time constants derived from fittings with multi-exponential functions depend on the probed time window.
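To make the phenomenological character of such fits concrete, the sketch below fits a bi-exponential model to synthetic decay data with SciPy; the amplitudes, time constants, and noise level are invented for the example and do not come from any measurement discussed here.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Phenomenological bi-exponential fluorescence decay model."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic decay: a fast picosecond component plus a slower one (arbitrary units).
t = np.linspace(0, 50, 500)                                   # time in ps
signal = biexp(t, 0.8, 1.5, 0.2, 20.0)
signal += np.random.default_rng(0).normal(0, 0.005, t.size)   # measurement noise

popt, _ = curve_fit(biexp, t, signal, p0=(1.0, 1.0, 0.1, 10.0))
print(popt)  # recovered amplitudes and time constants
```

Refitting the same data over a shorter or longer time window changes the recovered time constants, which is precisely why such parameters are descriptive rather than physically fundamental.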
In order to obtain a complete picture of this complex time evolution, a femtosecond laser is needed as excitation source. Time-resolved techniques employed to this end are fluorescence upconversion, [ 47 ] [ 48 ] [ 49 ] [ 50 ] Kerr-gated fluorescence spectroscopy [ 51 ] and time-correlated single photon counting . [ 52 ] In addition to the changes in the fluorescence intensity, all of them allow the recording of time-resolved fluorescent spectra [ 53 ] [ 54 ] and fluorescence anisotropies, [ 55 ] [ 56 ] which provide information about the relaxation of the excited electronic states and the type of the emitting excited states.
The early studies were performed using time-correlated single photon counting combined with nanosecond sources (synchrotron radiation or lasers). [ 57 ] [ 58 ] [ 59 ] Although they discovered the existence of nanosecond components exclusively for multimeric nucleic acids, they failed to obtain a full picture of the fluorescence dynamics.
Emission from the monomeric DNA chromophores arises from their lowest-energy electronic excited states, that is, the ππ* states of the nucleobases. These are bright states, in the sense that they are also responsible for photon absorption. [ 60 ]
Their lifetimes are extremely short: they fully decay within, at most, a few ps. [ 61 ] [ 62 ] [ 63 ] [ 64 ] Such ultrafast decays are due to the existence of conical intersections connecting the excited state with the ground state. [ 65 ] [ 66 ] [ 67 ] Therefore, the dominant deactivation pathway is non-radiative, [ 68 ] leading to very low fluorescence quantum yields.
The evolution toward the conical intersection is accompanied by conformational movements. An important part of the photons is emitted while the system is moving along the potential energy surface of the excited state, before reaching a point of minimum energy. As motions on low-dimensional surfaces do not follow exponential patterns, [ 69 ] [ 70 ] the fluorescence decays are not characterized by constant decay rates. [ 71 ]
Due to their close proximity, nucleobases in DNA multimers may be electronically coupled. This leads to delocalization of the excited states responsible for photon absorption (Franck-Condon states) over more than one nucleobase (collective states). [ 72 ] [ 73 ] [ 74 ] [ 75 ] [ 76 ] The electronic coupling depends on the geometrical arrangement of the chromophores. Therefore, the properties of the collective states are affected by factors that determine the relative position of the nucleobases. [ 77 ] Among others, the conformational disorder characterizing the nucleic acids modulates the coupling values, [ 78 ] [ 79 ] giving rise to a large number of Franck-Condon states. Each one of them evolves along a specific energy surface.
One can distinguish two limiting types of emitting states in DNA. On the one hand, ππ* states, localized on single nucleobases or delocalized over several of them. And on the other, excited charge transfer states in which an important fraction of an atomic charge has been transferred from one nucleobase to another. The latter are weakly emissive. And between these two types, there is a multitude of emitting states, more or less delocalized, with different amounts of charge transfer. The properties of the emitting states may be modified during their lifetime under the effect of conformational motions of the nucleic acid, occurring on the same time-scale. [ 80 ] [ 81 ] [ 82 ] [ 83 ] Because of this complexity, the description of the fluorescence decays by multiexponential functions is only phenomenological. [ 84 ]
Experimentally, the different types of emitting states can be differentiated through their fluorescence anisotropy. [ 85 ] The charge transfer character of an excited state lowers the fluorescence anisotropy. [ 86 ] The decrease of fluorescence anisotropy observed for all the DNA multimers on the femtosecond time-scale was explained by an ultrafast transfer of the excitation energy among the nucleobases. [ 87 ] [ 88 ] [ 89 ] [ 90 ] [ 91 ]
A particular class of emitting excitons with weak charge transfer character [ 92 ] [ 93 ] was detected in all types of duplexes, including genomic DNA. [ 94 ] Their specificity is that their emission appears at short wavelengths (λ<330 nm) and represents the longest-living components of the overall duplex fluorescence, decaying on the nanosecond timescale. It contrasts with the excimer/exciplex emission, characterized by a pronounced charge transfer character, appearing at long wavelengths and decaying on the sub-nanosecond time-scale. The contribution of the high-energy emitting states to the total fluorescence increases with the local rigidity of the duplex (depending on the number of the Watson-Crick hydrogen bonds or the size of the system) and the excitation wavelength. The latter point, associated with the very narrow spectral width observed for the most representative example (a polymeric duplex with alternating guanine-cytosine sequence), is reminiscent of the emission stemming from J-aggregates. [ 95 ] [ 96 ]
The utilization of the intrinsic fluorescence of nucleic acids for various applications has been under scrutiny since 2019. Several approaches have been explored, primarily focusing on the variation of its intensity upon binding of different molecular species to nucleic acids. Thus, target DNA in human serum, [ 97 ] Pb 2+ ions in water, [ 98 ] aptamer binding, [ 99 ] and the interaction of quinoline dyes (commonly used in the food and pharmaceutical industries) [ 100 ] have been detected.
In parallel, the screening of a large number of sequences was explored by multivariate analysis. [ 101 ] The technique of synchronous fluorescence scanning was employed for the authentication of COVID-19 vaccines. [ 102 ] And the assessment of the intrinsic fluorescence was included in a multi-attribute analysis of adeno-associated virus. [ 103 ] Along the same lines, an optical assay has been developed to assess the binding of small molecules with potential anticancer properties to G-quadruplexes. [ 104 ]
The prospect of probing DNA damage by monitoring the intrinsic fluorescence has also been discussed. [ 105 ] This potential application could leverage the short-wavelength emission of duplexes, associated with collective excited states whose properties are highly sensitive to the geometrical arrangement of the nucleobases. And the generation of various lesions is known to induce structural distortions. [ 106 ] [ 107 ] | https://en.wikipedia.org/wiki/Intrinsic_DNA_fluorescence
Intrinsic ageing and extrinsic ageing are terms used to describe cutaneous ageing of the skin and other parts of the integumentary system , which, while having epidermal concomitants , seems primarily to involve the dermis . [ 1 ] Intrinsic ageing is influenced by internal physiological factors alone, and extrinsic ageing by many external factors. Intrinsic ageing is also called chronologic ageing , and extrinsic ageing is most often referred to as photoageing .
The effects of intrinsic ageing are caused primarily by internal factors alone. It is sometimes referred to as chronological ageing and is an inherent degenerative process due to declining physiologic functions and capacities. Such an ageing process may include qualitative and quantitative changes, including diminished or defective synthesis of collagen and elastin in the dermis.
Extrinsic ageing of skin is a distinct degenerative process caused by external factors, which include ultraviolet radiation, cigarette smoking, and air pollution, among others. Of all extrinsic causes, radiation from sunlight has the most widespread documentation of its negative effects on the skin. Because of this, extrinsic ageing is often referred to as photoageing. [ 2 ] [ 3 ] [ 4 ] Photoageing may be defined as skin changes caused by chronic exposure to UV light. Photodamage implies changes beyond those associated with ageing alone; it is defined as cutaneous damage caused by chronic exposure to solar radiation and is associated with the emergence of neoplastic lesions. | https://en.wikipedia.org/wiki/Intrinsic_and_extrinsic_ageing
In geometry , an intrinsic equation of a curve is an equation that defines the curve using a relation between the curve's intrinsic properties, that is, properties that do not depend on the location and possibly the orientation of the curve. Therefore an intrinsic equation defines the shape of the curve without specifying its position relative to an arbitrarily defined coordinate system .
The intrinsic quantities used most often are the arc length s, the tangential angle θ, the curvature κ or radius of curvature, and, for 3-dimensional curves, the torsion τ. Specifically: the Whewell equation expresses the tangential angle as a function of arc length, θ = θ(s), and the Cesàro equation expresses the curvature as a function of arc length, κ = κ(s).
The equation of a circle (including a line, as the limiting case of infinite radius) for example is given by the equation κ(s) = 1/r, where s is the arc length, κ the curvature and r the radius of the circle.
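As a worked complement (a standard result stated here for illustration, using the relation κ = dθ/ds noted below), the same circle can be written in Whewell form by integrating the curvature:

```latex
\theta(s) \;=\; \theta_0 + \int_0^{s} \kappa(u)\,du \;=\; \theta_0 + \frac{s}{r},
\qquad \text{since } \kappa(s) \;=\; \frac{d\theta}{ds} \;=\; \frac{1}{r}.
```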
These coordinates greatly simplify some physical problems. For elastic rods, for example, the potential energy is given by E = (1/2) ∫ B κ(s)² ds,
where B is the bending modulus EI . Moreover, as κ(s) = dθ/ds, the elasticity of rods can be given a simple variational form.
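A minimal sketch of that variational form, under the assumption of an unloaded rod of length L with unspecified boundary conditions (the functional is standard; the specific setup here is illustrative):

```latex
E[\theta] \;=\; \frac{B}{2}\int_{0}^{L} \left(\frac{d\theta}{ds}\right)^{2} ds ,
\qquad
\frac{\delta E}{\delta \theta} = 0
\;\Longrightarrow\;
B\,\frac{d^{2}\theta}{ds^{2}} = 0 .
```
| https://en.wikipedia.org/wiki/Intrinsic_equation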
Intrinsic immunity refers to a set of cellular -based anti-viral defense mechanisms, notably genetically encoded proteins which specifically target eukaryotic retroviruses . Unlike adaptive and innate immunity effectors, intrinsic immune proteins are usually expressed at a constant level, allowing a viral infection to be halted quickly. Intrinsic antiviral immunity refers to a form of innate immunity that directly restricts viral replication and assembly, thereby rendering a cell non-permissive to a specific class or species of viruses. Intrinsic immunity is conferred by restriction factors preexisting in certain cell types, although these factors can be further induced by virus infection. Intrinsic viral restriction factors recognize specific viral components, but unlike other pattern recognition receptors that inhibit viral infection indirectly by inducing interferons and other antiviral molecules, intrinsic antiviral factors block viral replication immediately and directly. [ 1 ]
Eukaryotic organisms have been exposed to viral infections for millions of years. The development of the innate and adaptive immune system reflects the evolutionary importance of fighting infection . Some viruses, however, have proven to be so deadly or refractory to conventional immune mechanisms that specific, genetically encoded cellular defense mechanisms have evolved to combat them. Intrinsic immunity comprises cellular proteins which are always active and have evolved to block infection by specific viruses or viral taxa . [ 2 ]
The recognition of intrinsic immunity as a potent anti-viral defense mechanism is a recent discovery and is not yet discussed in most immunology courses or texts. Though the extent of protection intrinsic immunity affords is still unknown, it is possible that intrinsic immunity may eventually be considered a third branch of the traditionally bipartite immune system .
Intrinsic immunity combines aspects of the two traditional branches of the immune system – adaptive and innate immunity – but is mechanistically distinct. Innate cellular immunity recognizes viral infection using toll-like receptors (TLRs), or pattern recognition receptors , which sense pathogen-associated molecular patterns (PAMPs), triggering the expression of nonspecific antiviral proteins. Intrinsic immune proteins, however, are specific both in virus recognition and in their mechanism of viral attenuation . Like innate immunity, the intrinsic immune system does not respond differently upon repeat infection by the same pathogen; like adaptive immunity, it is specifically tailored to a single type or class of pathogens, notably retroviruses .
Unlike adaptive and innate immunity, which must sense the infection to be activated (and which, in the case of adaptive immunity, can take weeks to become effective), intrinsic immune proteins are constitutively expressed and ready to shut down infection immediately following viral entry. This is particularly important in retroviral infections, since viral integration into the host genome occurs quickly after entry and reverse transcription and is largely irreversible.
Because the production of intrinsic immune mediating proteins cannot be increased during infection, these defenses can become saturated and ineffective if a cell is infected with a high level of virus.
Other intrinsic immune proteins have been discovered which block Murine leukemia virus (MLV), Herpes simplex virus (HSV), and Human Cytomegalovirus (HCMV). In many cases, such as that of APOBEC3G above, viruses have evolved mechanisms for disrupting the actions of these proteins. Another example is the cellular protein Daxx , which silences viral promoters , but is degraded by an active HCMV protein early in infection. [ 5 ] | https://en.wikipedia.org/wiki/Intrinsic_immunity |
In chemical kinetics , an intrinsic low-dimensional manifold is a technique to simplify the study of reaction mechanisms using dynamical systems , first proposed in 1992. [ 1 ] [ 2 ] [ 3 ]
The ILDM approach fixes a low dimensional surface which describes well the slow dynamics and assumes that after a short time the fast dynamics are less important and the system can be described in the lower-dimensional space. [ 4 ]
| https://en.wikipedia.org/wiki/Intrinsic_low-dimensional_manifold
Intrinsic motivation , in the study of artificial intelligence and robotics , is a mechanism for enabling artificial agents (including robots ) to exhibit inherently rewarding behaviours such as exploration and curiosity, grouped under the same term in the study of psychology . Psychologists consider intrinsic motivation in humans to be the drive to perform an activity for inherent satisfaction – just for the fun or challenge of it. [ 1 ]
An intelligent agent is intrinsically motivated to act if the information content alone, or the experience resulting from the action, is the motivating factor.
Information content in this context is measured in the information-theoretic sense of quantifying uncertainty. A typical intrinsic motivation is to search for unusual, surprising situations (exploration), in contrast to a typical extrinsic motivation such as the search for food (homeostasis). [ 2 ] Extrinsic motivations are typically described in artificial intelligence as task-dependent or goal-directed .
The study of intrinsic motivation in psychology and neuroscience began in the 1950s, with some psychologists explaining exploration through drives to manipulate and explore; however, this homeostatic view was criticised by White. [ 3 ] An alternative explanation from Berlyne in 1960 was the pursuit of an optimal balance between novelty and familiarity. [ 4 ] Festinger described the difference between the internal and external views of the world as a dissonance that organisms are motivated to reduce. [ 5 ] A similar view was expressed in the 1970s by Kagan as the desire to reduce the incompatibility between cognitive structure and experience. [ 6 ] In contrast to the idea of optimal incongruity, Deci and Ryan identified in the mid-1980s an intrinsic motivation based on competence and self-determination . [ 7 ]
An influential early computational approach to implementing artificial curiosity, proposed by Schmidhuber in the early 1990s, has since been developed into a "formal theory of creativity, fun, and intrinsic motivation". [ 8 ]
Intrinsic motivation is often studied in the framework of computational reinforcement learning [ 9 ] [ 10 ] (introduced by Sutton and Barto ), where the rewards that drive agent behaviour are intrinsically derived rather than externally imposed and must be learnt from the environment. [ 11 ] Reinforcement learning is agnostic to how the reward is generated - an agent will learn a policy (action strategy) from the distribution of rewards afforded by actions and the environment. Each approach to intrinsic motivation in this scheme is essentially a different way of generating the reward function for the agent.
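A minimal sketch of one such reward function, assuming a tabular (discrete-state) setting; the count-based novelty bonus r_int = beta / sqrt(N(s)) is one common variant among several, and the weighting parameter beta is an illustrative choice rather than a prescription from the literature cited here.

```python
from collections import defaultdict
import math

class IntrinsicReward:
    """Count-based novelty bonus added to the environment's extrinsic reward.

    r_total = r_ext + beta / sqrt(N(s)), where N(s) is the number of times
    state s has been visited so far.
    """

    def __init__(self, beta=0.1):
        self.beta = beta
        self.counts = defaultdict(int)

    def __call__(self, state, extrinsic_reward):
        self.counts[state] += 1
        bonus = self.beta / math.sqrt(self.counts[state])
        return extrinsic_reward + bonus

reward_fn = IntrinsicReward(beta=0.1)
print(reward_fn("s0", 0.0))  # novel state: full bonus
print(reward_fn("s0", 0.0))  # repeated visits: bonus decays toward zero
```

Because the reinforcement learner is agnostic to how the reward is generated, swapping this bonus for a prediction-error or information-gain term changes the intrinsic motivation without changing the learning algorithm itself.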
Intrinsically motivated artificial agents exhibit behaviour that resembles curiosity or exploration . Exploration in artificial intelligence and robotics has been extensively studied in reinforcement learning models, [ 12 ] usually by encouraging the agent to explore as much of the environment as possible, to reduce uncertainty about the dynamics of the environment (learning the transition function) and how best to achieve its goals (learning the reward function). Intrinsic motivation, in contrast, encourages the agent to first explore aspects of the environment that confer more information, to seek out novelty. Recent work unifying state visit count exploration and intrinsic motivation has shown faster learning in a video game setting. [ 13 ]
Oudeyer and Kaplan have made a substantial contribution to the study of intrinsic motivation. [ 14 ] [ 2 ] [ 15 ] They define intrinsic motivation based on Berlyne's theory, [ 4 ] and divide approaches to the implementation of intrinsic motivation into three categories that broadly follow the roots in psychology: "knowledge-based models", "competence-based models" and "morphological models". [ 2 ] Knowledge-based models are further subdivided into "information-theoretic" and "predictive". [ 15 ] Baldassare and Mirolli present a similar typology, differentiating knowledge-based models between prediction-based and novelty-based. [ 16 ]
The quantification of prediction and novelty to drive behaviour is generally enabled through the application of information-theoretic models, where agent state and strategy (policy) over time are represented by probability distributions describing a Markov decision process , with the cycle of perception and action treated as an information channel. [ 17 ] [ 18 ] These approaches claim biological feasibility as part of a family of Bayesian approaches to brain function . The main criticism of, and difficulty with, these models is the intractability of computing probability distributions over large discrete or continuous state spaces. [ 2 ] Nonetheless, a considerable body of work has built up modelling the flow of information around the sensorimotor cycle, leading to de facto reward functions derived from the reduction of uncertainty, including most notably active inference , [ 19 ] but also infotaxis, [ 20 ] predictive information, [ 21 ] [ 22 ] and empowerment . [ 23 ]
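As a toy illustration of this family, the sketch below maintains a Dirichlet model of the transition dynamics and pays out the information gained (KL divergence between posterior and prior) for each observed transition. The names and the small tabular setting are illustrative assumptions; staying small and discrete is precisely how this toy sidesteps the intractability noted above.

```python
import numpy as np

class InfoGainReward:
    """Intrinsic reward = KL(posterior || prior) of a Dirichlet transition
    model: the information one observation yields about the dynamics."""

    def __init__(self, n_states, alpha=1.0):
        # Pseudo-counts for P(next_state | state), one row per state.
        self.counts = np.full((n_states, n_states), alpha)

    def reward(self, state, next_state):
        prior = self.counts[state] / self.counts[state].sum()
        self.counts[state, next_state] += 1.0
        posterior = self.counts[state] / self.counts[state].sum()
        return float(np.sum(posterior * np.log(posterior / prior)))  # nats

model = InfoGainReward(n_states=4)
print(model.reward(0, 2))  # novel transition: relatively large reward
print(model.reward(0, 2))  # repeated transition: reward shrinks
```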
Steels' autotelic principle [ 24 ] is an attempt to formalise flow (psychology) . [ 25 ]
Other intrinsic motives that have been modelled computationally include achievement, affiliation and power motivation. [ 26 ] These motives can be implemented as functions of probability of success or incentive. Populations of agents can include individuals with different profiles of achievement, affiliation and power motivation, modelling population diversity and explaining why different individuals take different actions when faced with the same situation.
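One classical way to express such a motive as a function of success probability and incentive is Atkinson's risk-taking model, in which the tendency to approach a task is the product of motive strength, expectancy of success, and incentive value. The sketch below uses that textbook formulation; it is not necessarily the exact function used in the cited work.

```python
def achievement_tendency(motive_strength, p_success):
    """Atkinson's risk-taking model: T = M_s * P_s * I_s with I_s = 1 - P_s,
    so the motive is strongest for tasks of intermediate difficulty."""
    incentive = 1.0 - p_success
    return motive_strength * p_success * incentive

# Agents with different motive strengths rank the same tasks differently:
for p in (0.1, 0.5, 0.9):
    print(p, achievement_tendency(motive_strength=1.0, p_success=p))
# P_s * (1 - P_s) peaks at P_s = 0.5
```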
A more recent computational theory of intrinsic motivation attempts to explain a large variety of psychological findings based on such motives. Notably this model of intrinsic motivation goes beyond just achievement, affiliation and power, by taking into consideration other important human motives. Empirical data from psychology were computationally simulated and accounted for using this model. [ 27 ]
Intrinsically motivated (or curiosity-driven) learning is an emerging research topic in artificial intelligence and developmental robotics [ 28 ] that aims to develop agents able to learn general skills or behaviours that can be deployed to improve performance in extrinsic tasks, such as acquiring resources. [ 29 ] Intrinsically motivated learning has been studied as an approach to autonomous lifelong learning in machines [ 30 ] [ 31 ] and open-ended learning in computer game characters. [ 32 ] In particular, when the agent learns a meaningful abstract representation, a notion of distance between two representations can be used to gauge novelty, hence allowing for an efficient exploration of its environment. [ 33 ] Despite the impressive success of deep learning in specific domains (e.g. AlphaGo ), many in the field (e.g. Gary Marcus ) have pointed out that the ability to generalise remains a fundamental challenge in artificial intelligence. Intrinsically motivated learning, although promising in terms of being able to generate goals from the structure of the environment without externally imposed tasks, faces the same challenge of generalisation: how to reuse policies or action sequences, how to compress and represent continuous or complex state spaces, and how to retain and reuse the salient features that have been learnt. [ 29 ] | https://en.wikipedia.org/wiki/Intrinsic_motivation_(artificial_intelligence)
In quantum mechanics , the intrinsic parity is a phase factor that arises as an eigenvalue of the parity operation $x_i \rightarrow x_i' = -x_i$ (a reflection about the origin). [ 1 ] To see that the parity's eigenvalues are phase factors, we assume an eigenstate of the parity operation (this is realized because the intrinsic parity is a property of a particle species) and use the fact that two parity transformations leave the particle in the same state, thus the new wave function can differ by only a phase factor, i.e. $P^2\psi = e^{i\phi}\psi$, thus $P\psi = \pm e^{i\phi/2}\psi$, since these are the only eigenstates satisfying the above equation.
The intrinsic parity's phase is conserved for strong and electromagnetic interactions (the product of the intrinsic parities is the same before and after the reaction), but not for weak interactions. [ 1 ] : 123 Since $[P, H] = 0$, the Hamiltonian is invariant under a parity transformation. The intrinsic parity of a system is the product of the intrinsic parities of the particles; [ 2 ] : 136 for instance, for noninteracting particles we have $P(|1\rangle |2\rangle) = (P|1\rangle)(P|2\rangle)$. Because the parity commutes with the Hamiltonian, $\frac{dP}{dt} = 0$ and its eigenvalue does not change with time ; the intrinsic parity's phase is therefore a conserved quantity.
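Spelling out this standard step, in the Heisenberg picture the parity operator obeys

$$\frac{dP}{dt} = \frac{i}{\hbar}\,[H, P] + \frac{\partial P}{\partial t} = 0,$$

since $[H, P] = 0$ and $P$ carries no explicit time dependence, so its eigenvalues are constants of the motion.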
A consequence of the Dirac equation is that the intrinsic parities of fermions and antifermions obey the relation $P_{\bar{f}} P_f = -1$, so particles and their antiparticles have opposite parity. [ 2 ] : 137 Single leptons can never be created or destroyed in experiments, as lepton number is a conserved quantity. This means experiments are unable to distinguish the sign of a lepton's parity, so by convention leptons are assigned intrinsic parity +1, and antileptons $P = -1$. Similarly, the parity of the quarks is chosen to be +1, and that of antiquarks −1. [ 2 ] : 139
| https://en.wikipedia.org/wiki/Intrinsic_parity
Intrinsic safety ( IS ) is a protection technique for safe operation of electrical equipment in hazardous areas by limiting the energy, electrical and thermal, available for ignition. In signal and control circuits that can operate with low currents and voltages, the intrinsic safety approach simplifies circuits and reduces installation cost over other protection methods. Areas with dangerous concentrations of flammable gases or dust are found in applications such as petrochemical refineries and mines. As a discipline, it is an application of inherent safety in instrumentation. High-power circuits such as electric motors or lighting cannot use intrinsic safety methods for protection.
Intrinsic safety devices can be subdivided into:
Intrinsically safe apparatuses are electrical devices whose connected circuits are intrinsically safe circuits while in the hazardous area.
Associated apparatuses are electrical devices that have both intrinsically safe and non-intrinsically safe circuits, and are designed so that the non-intrinsically safe circuits cannot negatively affect the intrinsically safe circuits.
An intrinsically safe circuit is designed to not be capable of causing ignition of a given explosive atmosphere, by any spark or any thermal effect under normal operation and specified fault conditions.
In normal use, electrical equipment often creates tiny electric arcs (internal sparks) in switches, motor brushes, connectors, and in other places. Compact electrical equipment generates heat as well, which under some circumstances can become an ignition source.
There are multiple ways to make equipment safe for use in explosion-hazardous areas. Intrinsic safety (denoted by "i" in the ATEX and IECEx explosion classifications) is one of several available methods for electrical equipment; see Types of protection for more information.
For handheld electronics, intrinsic safety is the only realistic method that allows a functional device to be explosion protected. A device which is termed "intrinsically safe" has been designed to be incapable of producing heat or spark sufficient to ignite an explosive atmosphere, even if the device has experienced deterioration or has been damaged.
There are several considerations in designing intrinsically safe electronics devices:
Elimination of spark potential within components is accomplished by limiting the available energy in any given circuit and the system as a whole.
Temperature becomes an issue under certain fault conditions, such as an internal short in a semiconductor device, as the temperature of a component can rise to a level that can ignite some explosive gases, even in normal use.
Safeguards, such as current limiting by resistors and fuses, must be employed to ensure that under no circumstance can a component reach a temperature that could cause autoignition of a combustible atmosphere. In today's highly compact electronic devices, PCBs often have component spacings that create the possibility of an arc between components if dust or other particulate matter works into the circuitry; component spacing, siting, and isolation therefore become important to the design.
The primary concept behind intrinsic safety is the restriction of available electrical and thermal energy in the system so that ignition of a hazardous atmosphere (explosive gas or dust) cannot occur. This is achieved by ensuring that only low voltages and currents enter the hazardous area, and that no significant energy storage is possible.
One of the most common methods of protection is to limit electric current with series resistors (of types that always fail open) and to limit voltage with multiple zener diodes. In zener barriers, dangerous incoming potentials are shunted to ground; in galvanic isolation barriers there is no direct connection between the safe- and hazardous-area circuits, a layer of insulation being interposed between the two. Certification standards for intrinsic safety designs (mainly IEC 60079-11, but since 2015 also IEC TS 60079-39) generally require that the barrier not exceed approved levels of voltage and current even with specified damage to its limiting components.
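The arithmetic behind such a barrier can be sketched as below. The 28 V / 300 ohm values are a common illustrative rating and the cable parameters are assumptions; a real design is assessed against the certified ignition curves of IEC 60079-11 rather than a single energy figure.

```python
def barrier_limits(v_zener, r_series, c_cable_f=0.0, l_cable_h=0.0):
    """Worst-case short-circuit current and stored energy for a simple
    shunt-zener / series-resistor safety barrier feeding a field circuit."""
    i_sc = v_zener / r_series                 # short-circuit current, A
    e_cap = 0.5 * c_cable_f * v_zener ** 2    # capacitive energy, J
    e_ind = 0.5 * l_cable_h * i_sc ** 2       # inductive energy, J
    return i_sc, e_cap + e_ind

# Illustrative 28 V zener, 300 ohm resistor, 0.1 uF and 1 mH of cable:
i_sc, e_stored = barrier_limits(28.0, 300.0, c_cable_f=0.1e-6, l_cable_h=1e-3)
print(f"I_sc = {i_sc * 1e3:.0f} mA, stored energy = {e_stored * 1e6:.0f} uJ")
# Acceptable only if, with safety factors applied, these values stay below
# the limits certified for the target gas group.
```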
Equipment or instrumentation for use in a hazardous area will be designed to operate with low voltage and current, and will be designed without any large capacitors or inductors that could discharge in a spark. The instrument will be connected, using approved wiring methods, back to a control panel in a non-hazardous area that contains safety barriers. The safety barriers ensure that, in normal operation, and with the application of faults according to the equipment protection level (EPL), even if accidental contact occurs between the instrument circuit and other power sources, no more than the approved voltage and current enters the hazardous area.
For example, during marine transfer operations, when flammable products are transferred between the marine terminal and tanker ships or barges, two-way radio communication needs to be constantly maintained in case the transfer needs to stop for unforeseen reasons such as a spill. The United States Coast Guard requires that the two-way radio be certified as intrinsically safe.
Another example is intrinsically safe or explosion-proof mobile phones used in explosive atmospheres, such as refineries. Intrinsically safe mobile phones must meet special battery design criteria in order to achieve UL, ATEX directive , or IECEx certification for use in explosive atmospheres.
Only properly designed battery -operated, self-contained devices can be intrinsically safe by themselves. Other field devices and wiring are intrinsically safe only when employed in a properly designed IS system. Requirements for intrinsically safe electrical systems are given in the IEC 60079 series of standards.
Standards for intrinsic protection are mainly developed by International Electrotechnical Commission (IEC), [ 1 ] but different agencies also develop standards for intrinsic safety. Agencies may be run by governments or may be composed of members from insurance companies, manufacturers, and industries with an interest in safety standards. Certifying agencies allow manufacturers to affix a label or mark to identify that the equipment has been designed to the relevant product safety standards. Examples of such agencies in North America are the Factory Mutual Research Corporation, which certifies radios, Underwriters Laboratories (UL) that certifies mobile phones, and in Canada the Canadian Standards Association . [ 2 ] In the EU the standard for intrinsic safety certification is the CENELEC [ 3 ] standard EN 60079-11 and shall be certified according to the ATEX directive , while in other countries around the world the IEC standards are followed. To facilitate world trade, standards agencies around the world engage in harmonization activity so that intrinsically safe equipment manufactured in one country eventually might be approved for use in another without redundant, expensive testing and documentation. | https://en.wikipedia.org/wiki/Intrinsic_safety |
Intrinsic , or rho-independent termination , is a process that signals the end of transcription and releases the newly constructed RNA molecule. In bacteria such as E. coli , transcription is terminated either by a rho-dependent or a rho-independent process. In the rho-dependent process, the rho protein locates and binds the signal sequence in the mRNA and signals for cleavage. By contrast, intrinsic termination does not require a special protein to signal for termination and is controlled by specific sequences of RNA. When the termination process begins, the transcribed mRNA forms a stable secondary structure hairpin loop, also known as a stem-loop . This RNA hairpin is followed by multiple uracil nucleotides. The bonds between uracil (rU) and adenine (dA) are very weak. A protein bound to RNA polymerase (NusA) binds to the stem-loop structure tightly enough to cause the polymerase to temporarily stall. This pausing of the polymerase coincides with transcription of the poly-uracil sequence. The weak adenine-uracil bonds lower the energy barrier for destabilization of the RNA-DNA duplex, allowing it to unwind and dissociate from the RNA polymerase. Overall, the modified RNA structure is what terminates transcription.
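The sequence logic just described, a GC-rich inverted repeat (the stem) followed by a run of T's on the coding strand (transcribed as the poly-U tract), can be illustrated with a deliberately naive motif scan. The fixed lengths and perfect-complementarity requirement below are simplifying assumptions; real terminator-prediction tools score thermodynamic stability instead.

```python
def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_terminators(dna, stem=8, loop=4, u_run=6, min_gc=0.6):
    """Naive scan for intrinsic-terminator-like motifs: a GC-rich inverted
    repeat followed immediately by a T-run (the future poly-U tract)."""
    hits = []
    window = 2 * stem + loop
    for i in range(len(dna) - window - u_run + 1):
        left = dna[i:i + stem]
        right = dna[i + stem + loop:i + window]
        tail = dna[i + window:i + window + u_run]
        gc_fraction = (left.count("G") + left.count("C")) / stem
        if left == revcomp(right) and tail == "T" * u_run and gc_fraction >= min_gc:
            hits.append(i)
    return hits

# Toy sequence: stem GGCCGCGC pairs with its reverse complement GCGCGGCC.
seq = "ATAT" + "GGCCGCGC" + "TTCG" + "GCGCGGCC" + "TTTTTT" + "ATAT"
print(find_terminators(seq))  # [4]
```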
Stem-loop structures that are not followed by a poly-uracil sequence cause the RNA polymerase to pause, but it will typically continue transcription after a brief time because the duplex is too stable to unwind far enough to cause termination.
Rho-independent transcription termination is a frequent mechanism underlying the activity of cis -acting RNA regulatory elements, such as riboswitches .
The purpose of intrinsic termination is to signal for the dissociation of the ternary elongation complex (TEC) , ending the transcript. Intrinsic termination is independent of the protein Rho , as opposed to Rho-dependent termination, where the bacterial Rho protein acts on the RNA polymerase, causing it to dissociate. [ 1 ] Here, there is no extra protein and the transcript forms its own loop structure. Intrinsic termination thus also regulates the level of transcription, determining how many polymerases can transcribe a gene over a given period of time, and can help prevent interactions with neighboring chromosomes. [ 1 ]
The process itself is regulated through both positive and negative termination factors, usually through modification of the hairpin structure. This is accomplished through interactions with single stranded RNA that corresponds to the upstream area of the loop, resulting in disruption of the termination process. Furthermore, there is some implication that the nut site may also contribute to regulation, as it is involved in recruitment of some critical components in the formation of the hairpin. [ 2 ]
In intrinsic termination, the RNA transcript doubles back and base pairs with itself, creating an RNA stem-loop , or hairpin, structure. This structure is critical for the release of both the transcript and polymerase at the end of transcription. [ 3 ] In living cells, the key components are the stable stem-loop itself, as well as the sequence of 6-8 uracil residues that follow it. [ 3 ] The stem usually consists of 8-9 mostly guanine and cytosine (G-C) base pairs, and the loop consists of 4-8 residues. It is thought that the stem portion of the structure is essential for transcription termination, while the loop is not. [ 4 ] This is suggested by the fact that termination can be achieved in non-native structures that do not include the loop. [ 5 ]
The stem portion of the hairpin is usually rich in G-C base pairs. G-C base pairs have significant base-stacking interactions , and can form three hydrogen bonds with each other, which makes them very thermodynamically favorable. Conversely, while the uracil-rich sequence that follows the hairpin is not always necessary for termination, [ 6 ] it is hypothesized that the uracil-rich sequence aids in intrinsic termination because the U-A bond is not as strong as G-C bonds. [ 4 ] This inherent instability acts to kinetically favor the dissociation of the RNA transcript. [ 4 ]
To determine the optimal length of the stem, researchers modified its length and observed how quickly termination occurred. [ 3 ] When the length of the stem was lengthened or shortened from the standard 8-9 base pair length, termination was less efficient, and if the changes were great enough, termination ceased completely. [ 3 ]
Experiments determined that if an oligonucleotide sequence that is identical to the downstream portion of the stem is present, it will base pair with the upstream portion. [ 5 ] This creates a structure that is analogous to the native stem-loop structure but is missing the loop at the end. Without the presence of the loop, intrinsic termination is still able to occur. [ 5 ] This indicates that the loop is not inherently necessary for intrinsic termination.
Generally, the absence of the uracil-rich sequence following the stem-loop will result in a delay or pause in transcription, but termination will not cease completely. [ 6 ]
Intrinsic termination is cued by signals directly encoded in the DNA and RNA. The signal appears as a hairpin followed by about 8 uridines at the 3' end. This leads to rapid dissociation of the elongation complex . The hairpin inactivates and destabilizes the TEC by weakening interactions in the RNA-DNA binding site and other sites that hold the complex together. The pausing induced by the stretch of uracils is important, providing time for hairpin formation. In the absence of the U-tract, hairpin formation does not result in efficient termination, indicating its importance in this process. [ 7 ]
The elongation destabilization process occurs in four steps. [ 7 ]
Much is still unknown about inhibitors of intrinsic termination. One of the few known examples is bacteriophage protein 7 (P7), whose mechanism was revealed by 3.4 Å and 4.0 Å cryo-EM structures of the P7-NusA-TEC and P7-TEC complexes. [ 8 ] Bacteriophage protein 7 stops transcription termination by blocking the RNA polymerase (RNAP) RNA-exit channel and impeding RNA-hairpin formation at the intrinsic terminator. Furthermore, it inhibits RNAP-clamp motions. [ 8 ] Shortening the C-terminal half-helix of the RNAP slightly decreases the inhibitory activity. These RNAP clamp motions have been targeted by other inhibitors of bacterial RNAP, including myxopyronin , corallopyronin, and ripostatin, which work by inhibiting isomerization. [ 8 ]
RNA polymerases in all three domains of life have some version of factor-independent termination. All of them use poly-uracil tracts, though the exact mechanisms and accessory sequences vary. In archaea and eukaryotes, there appears to be no requirement of a hairpin. [ 9 ]
Archaeal transcription has both eukaryotic and bacterial ties. With eukaryotes, it shares similarities in its initiation factors, which help the transcription machinery identify appropriate sequences such as TATA box homologs, as well as in factors that maintain transcription elongation. However, additional transcription factors similar to those found in bacteria are needed for the whole process to occur. [ 9 ]
In terms of transcription termination, the archaeal genome is unique in that it is sensitive to both intrinsic termination and factor-dependent termination. Bioinformatic analysis has shown that approximately half of the genes and operons in Archaea arrange themselves into signals or contain signals for intrinsic termination. [ 10 ] Archaeal RNA polymerase is responsive to intrinsic signals both in vivo and in vitro such as the poly-U-rich regions. However, unlike bacterial intrinsic termination, no specific RNA structure or hairpin is needed. The surrounding environment and other genome factors can still influence the termination. [ 10 ]
Factor-dependent termination in archaea is also distinct from factor-dependent termination in bacteria. [ 9 ] The terminational factor aCASP1 (also known as FttA) recognizes poly-U-rich regions, probably cooperating with the "intrinsic" mode to achieve more efficient termination. [ 11 ]
RNA polymerase III performs "intrinsic-like" termination. The majority of genes transcribed by RNAP III have a poly(dT) region. However, although poly(dT) pauses every RNA polymerase, it alone is not sufficient; some other mechanism must destabilize the clamp. In RNAP III, some poly(dT) sites are indeed occasionally read through: some genes have multiple such regions, allowing transcripts of different lengths to be produced. [ 12 ]
The instability of rU:dA hybrids is likely essential to termination by RNAP III. Parts of the core subunits C1 and C2, as well as the "subcomplexes" C53/37 and C11, are functionally important. A number of extrinsic factors can modify the termination behavior. [ 12 ] | https://en.wikipedia.org/wiki/Intrinsic_termination
Intrinsic viscosity $[\eta]$ is a measure of a solute's contribution to the viscosity $\eta$ of a solution . If $\eta_0$ is the viscosity in the absence of the solute, $\eta$ is the (dynamic or kinematic) viscosity of the solution, and $\phi$ is the volume fraction of the solute in the solution, then the intrinsic viscosity is defined as the dimensionless number $$[\eta] = \lim_{\phi \to 0} \frac{\eta - \eta_0}{\eta_0 \phi}$$ It should not be confused with inherent viscosity , which is the ratio of the natural logarithm of the relative viscosity to the mass concentration of the polymer.
When the solute particles are rigid spheres at infinite dilution, the intrinsic viscosity equals $\frac{5}{2}$, as shown first by Albert Einstein .
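Written out, Einstein's result for a dilute suspension of rigid spheres reads

$$\eta = \eta_0\left(1 + \tfrac{5}{2}\,\phi + \mathcal{O}(\phi^{2})\right),$$

so the definition above gives $[\eta] = 5/2$ in the limit $\phi \to 0$.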
In practical settings, $\phi$ is usually replaced by the solute mass concentration ( c , g/dL), and the units of intrinsic viscosity $[\eta]$ are then deciliters per gram (dL/g), i.e. units of inverse concentration.
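In practice, $[\eta]$ is estimated from a dilution series by extrapolating the reduced viscosity to zero concentration; a minimal sketch follows, with made-up example numbers.

```python
import numpy as np

def intrinsic_viscosity(c, eta, eta0):
    """Extrapolate the reduced viscosity (eta - eta0) / (eta0 * c) to c = 0
    with a linear (Huggins-type) fit; returns [eta] in dL/g if c is in g/dL."""
    c = np.asarray(c, dtype=float)
    eta_red = (np.asarray(eta, dtype=float) - eta0) / (eta0 * c)
    slope, intercept = np.polyfit(c, eta_red, 1)
    return intercept

# Made-up dilution series of a polymer solution:
c = [0.125, 0.25, 0.5, 1.0]        # concentration, g/dL
eta = [1.12, 1.26, 1.58, 2.40]     # measured viscosities; solvent eta0 = 1.0
print(intrinsic_viscosity(c, eta, eta0=1.0))  # ~0.9 dL/g
```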
Generalizing from spheres to spheroids with an axial semiaxis $a$ (i.e., the semiaxis of revolution) and equatorial semiaxes $b$, the intrinsic viscosity can be written
where the constants are defined
The $J$ coefficients are the Jeffery functions
It is possible to generalize the intrinsic viscosity formula from spheroids to arbitrary ellipsoids with semiaxes $a$, $b$ and $c$.
The intrinsic viscosity formula may also be generalized to include a frequency dependence.
The intrinsic viscosity is very sensitive to the axial ratio of spheroids, especially of prolate spheroids. For example, the intrinsic viscosity can provide rough estimates of the number of subunits in a protein fiber composed of a helical array of proteins, such as tubulin . More generally, intrinsic viscosity can be used to assay quaternary structure . In polymer chemistry, intrinsic viscosity is related to molar mass through the Mark–Houwink equation . A practical method for the determination of intrinsic viscosity is with a Ubbelohde viscometer or a RheoSense VROC viscometer. | https://en.wikipedia.org/wiki/Intrinsic_viscosity
Intrinsically photosensitive retinal ganglion cells ( ipRGCs ), also called photosensitive retinal ganglion cells ( pRGC ), or melanopsin-containing retinal ganglion cells ( mRGCs ), are a type of neuron in the retina of the mammalian eye . The presence of an additional photoreceptor was first suspected in 1927 when mice lacking rod and cone cells still responded to changing light levels through pupil constriction ; [ 1 ] this suggested that rods and cones are not the only light-sensitive tissue. [ 2 ] However, it was unclear whether this light sensitivity arose from an additional retinal photoreceptor or elsewhere in the body. Subsequent research has shown that these retinal ganglion cells , unlike other retinal ganglion cells, are intrinsically photosensitive due to the presence of melanopsin , a light-sensitive protein. Therefore, they constitute a third class of photoreceptors, in addition to rod and cone cells . [ 3 ]
Compared to the rods and cones, the ipRGCs respond more sluggishly and signal the presence of light over the long term. [ 5 ] They represent a very small subset (~1%) of the retinal ganglion cells. [ 6 ] Their functional roles are non-image-forming and fundamentally different from those of pattern vision; they provide a stable representation of ambient light intensity. They have at least three primary functions: synchronizing the circadian rhythm to the light/dark cycle, driving the pupillary light reflex, and contributing to the regulation of sleep and of melatonin release.
Photoreceptive ganglion cells have been isolated in humans, where, in addition to regulating the circadian rhythm, they have been shown to mediate a degree of light recognition in rodless, coneless subjects suffering from disorders of rod and cone photoreceptors. [ 9 ] Work by Farhan H. Zaidi and colleagues showed that photoreceptive ganglion cells may have some visual function in humans.
The photopigment of photoreceptive ganglion cells, melanopsin, is excited by light mainly in the blue portion of the visible spectrum (absorption peaks at ~480 nanometers [ 10 ] ). The phototransduction mechanism in these cells is not fully understood, but seems likely to resemble that in invertebrate rhabdomeric photoreceptors. In addition to responding directly to light, these cells may receive excitatory and inhibitory influences from rods and cones by way of synaptic connections in the retina.
The axons from these ganglia innervate regions of the brain related to object recognition, including the superior colliculus and dorsal lateral geniculate nucleus . [ 8 ]
These photoreceptor cells project both throughout the retina and into the brain. They contain the photopigment melanopsin in varying quantities along the cell membrane, including on the axons up to the optic disc, the soma, and dendrites of the cell. [ 3 ] ipRGCs contain membrane receptors for the neurotransmitters glutamate, glycine , and GABA . [ 11 ] Photosensitive ganglion cells respond to light by depolarizing, thus increasing the rate at which they fire nerve impulses, which is opposite to that of other photoreceptor cells, which hyperpolarize in response to light. [ 12 ]
Results of studies in mice suggest that the axons of ipRGCs are unmyelinated . [ 3 ]
Unlike other photoreceptor pigments, melanopsin has the ability to act as both the excitable photopigment and as a photoisomerase. Unlike the visual opsins in rod cells and cone cells , which rely on the standard visual cycles for recharging all-trans -retinal back into the photosensitive 11-cis -retinal , melanopsin is able to isomerize all-trans- retinal into 11-cis- retinal itself when stimulated with another photon. [ 11 ] An ipRGC therefore does not rely on Müller cells and/or retinal pigment epithelium cells for this conversion.
The two isoforms of melanopsin differ in their spectral sensitivity: the 11-cis -retinal isoform is more responsive to shorter wavelengths of light, while the all-trans isoform is more responsive to longer wavelengths. [ 13 ]
ipRGCs are both pre- and postsynaptic to dopaminergic amacrine cells (DA cells) via reciprocal synapses, with ipRGCs sending excitatory signals to the DA cells, and the DA cells sending inhibitory signals to the ipRGCs. These inhibitory signals are mediated through GABA , which is co-released from the DA cells along with dopamine . Dopamine has functions in the light-adaptation process by up-regulating melanopsin transcription in ipRGCs and thus increasing the photoreceptor's sensitivity. [ 3 ] In parallel with the DA amacrine cell inhibition, somatostatin-releasing amacrine cells, themselves inhibited by DA amacrine cells, inhibit ipRGCs. [ 14 ] Other synaptic inputs to ipRGC dendrites include cone bipolar cells and rod bipolar cells. [ 11 ]
One postsynaptic target of ipRGCs is the suprachiasmatic nucleus (SCN) of the hypothalamus, which serves as the circadian clock in an organism. ipRGCs release both pituitary adenylyl cyclase-activating protein (PACAP) and glutamate onto the SCN via a monosynaptic connection called the retinohypothalamic tract (RHT). [ 15 ] Glutamate has an excitatory effect on SCN neurons, and PACAP appears to enhance the effects of glutamate in the hypothalamus. [ 16 ]
Other postsynaptic targets of ipRGCs include: the intergeniculate leaflet (IGL), a cluster of neurons located in the thalamus that plays a role in circadian entrainment; the olivary pretectal nucleus (OPN), a cluster of neurons in the midbrain that controls the pupillary light reflex; the ventrolateral preoptic nucleus (VLPO), located in the hypothalamus, which is a control center for sleep; and the amygdala. [ 3 ]
Using various photoreceptor knockout mice, researchers have identified the role of ipRGCs in both the transient and sustained signaling of the pupillary light reflex (PLR). [ 17 ] Transient PLR occurs at dim to moderate light intensities and is a result of phototransduction occurring in rod cells , which provide synaptic input onto ipRGCs, which in turn relay the information to the olivary pretectal nucleus in the midbrain . [ 18 ] The neurotransmitter involved in the relay of information to the midbrain from the ipRGCs in the transient PLR is glutamate . At brighter light intensities the sustained PLR occurs, which involves both phototransduction of the rod providing input to the ipRGCs and phototransduction of the ipRGCs themselves via melanopsin. Researchers have suggested that the role of melanopsin in the sustained PLR is due to its lack of adaptation to light stimuli in contrast to rod cells, which exhibit adaptation. The sustained PLR is maintained by PACAP release from ipRGCs in a pulsatile manner. [ 17 ]
Experiments with rodless, coneless humans allowed another possible role for the receptor to be studied. In 2007, a new role was found for the photoreceptive ganglion cell. Zaidi and colleagues showed that in humans the retinal ganglion cell photoreceptor contributes to conscious sight as well as to non-image-forming functions like circadian rhythms, behaviour and pupillary reactions. [ 9 ] Since these cells respond mostly to blue light, it has been suggested that they have a role in mesopic vision and that the old theory of a purely duplex retina with rod (dark) and cone (light) vision was simplistic. Zaidi and colleagues' work with rodless, coneless human subjects has thus also opened the door to image-forming (visual) roles for the ganglion cell photoreceptor.
This work revealed parallel pathways for vision: one classic pathway, rod- and cone-based, arising from the outer retina; the other a rudimentary visual brightness detector arising from the inner retina. The latter seems to be activated by light before the former. [ 9 ] Classic photoreceptors also feed into the novel photoreceptor system, and colour constancy may be an important role, as suggested by Foster.
It has been suggested by the authors of the rodless, coneless human model that the receptor could be instrumental in understanding many diseases, including major causes of blindness worldwide such as glaucoma , a disease which affects ganglion cells.
In other mammals, photosensitive ganglion cells have been proven to have a genuine role in conscious vision. Tests conducted by Jennifer Ecker et al. found that rats lacking rods and cones were able to learn to swim toward sequences of vertical bars rather than an equally luminescent gray screen. [ 8 ]
Most work suggests that the peak spectral sensitivity of the receptor is between 460 and 484 nm. Lockley et al. in 2003 [ 19 ] showed that 460 nm (blue) wavelengths of light suppress melatonin twice as much as 555 nm (green) light, the peak sensitivity of the photopic visual system. In work by Zaidi, Lockley and co-authors using a rodless, coneless human, it was found that a very intense 481 nm stimulus led to some conscious light perception, meaning that some rudimentary vision was realized. [ 9 ]
In 1923, Clyde E. Keeler observed that the pupils in the eyes of blind mice he had accidentally bred still responded to light. [ 2 ] The ability of the rodless, coneless mice to retain a pupillary light reflex was suggestive of an additional photoreceptor cell. [ 11 ]
In the 1980s, research in rod- and cone-deficient rats showed regulation of dopamine in the retina, a known neuromodulator for light adaptation and photoentrainment. [ 3 ]
Research continued in 1991, when Russell G. Foster and colleagues, including Ignacio Provencio , showed that rods and cones were not necessary for photoentrainment, the visual drive of the circadian rhythm , nor for the regulation of melatonin secretion from the pineal gland , via rod- and cone-knockout mice. [ 20 ] [ 11 ] Later work by Provencio and colleagues showed that this photoresponse was mediated by the photopigment melanopsin , present in the ganglion cell layer of the retina. [ 21 ]
The photoreceptors were identified in 2002 by Samer Hattar , David Berson and colleagues, where they were shown to be melanopsin expressing ganglion cells that possessed an intrinsic light response and projected to a number of brain areas involved in non-image-forming vision. [ 22 ] [ 23 ]
In 2005, Panda, Melyan, Qiu, and colleagues demonstrated that the melanopsin photopigment was the phototransduction pigment in ganglion cells. [ 24 ] [ 25 ] Dennis Dacey and colleagues showed in a species of Old World monkey that giant ganglion cells expressing melanopsin projected to the lateral geniculate nucleus (LGN). [ 26 ] [ 6 ] Previously only projections to the midbrain (pre-tectal nucleus) and hypothalamus ( supra-chiasmatic nuclei , SCN) had been shown. However, a visual role for the receptor was still unsuspected and unproven.
Attempts were made to hunt down the receptor in humans, but humans posed special challenges and demanded a new model. Unlike in other animals, researchers could not ethically induce rod and cone loss either genetically or with chemicals so as to directly study the ganglion cells. For many years, only inferences could be drawn about the receptor in humans, though these were at times pertinent.
In 2007, Zaidi and colleagues published their work on rodless, coneless humans, showing that these people retain normal responses to nonvisual effects of light. [ 9 ] [ 27 ] The identity of the non-rod, non-cone photoreceptor in humans was found to be a ganglion cell in the inner retina as shown previously in rodless, coneless models in some other mammals. The work was done using patients with rare diseases that wiped out classic rod and cone photoreceptor function but preserved ganglion cell function. [ 9 ] [ 27 ] Despite having no rods or cones, the patients continued to exhibit circadian photoentrainment, circadian behavioural patterns, melatonin suppression, and pupil reactions, with peak spectral sensitivities to environmental and experimental light that match the melanopsin photopigment. Their brains could also associate vision with light of this frequency. Clinicians and scientists are now seeking to understand the new receptor's role in human diseases and blindness. Intrinsically photosensitive RGCs have also been implicated in the exacerbation of headache by light during migraine attacks. [ 28 ] | https://en.wikipedia.org/wiki/Intrinsically_photosensitive_retinal_ganglion_cell
An introduced species , alien species , exotic species , adventive species , immigrant species , foreign species , non-indigenous species , or non-native species is a species living outside its native distributional range , but which has arrived there by human activity, directly or indirectly, and either deliberately or accidentally. Non-native species can have various effects on the local ecosystem. Introduced species that become established and spread beyond the place of introduction are considered naturalized . The process of human-caused introduction is distinguished from biological colonization , in which species spread to new areas through "natural" (non-human) means such as storms and rafting . The Latin expression neobiota captures the characteristic that these species are new biota to their environment in terms of established biological network (e.g. food web ) relationships. Neobiota can further be divided into neozoa (also: neozoons, sing. neozoon, i.e. animals) and neophyta (plants).
The impact of introduced species is highly variable. Some have a substantial negative effect on a local ecosystem (in which case they are also classified more specifically as an invasive species ), while other introduced species may have little or no negative impact (no invasiveness), and integrate well into the ecosystem they have been introduced to. Some species have been introduced intentionally to combat pests. They are called biocontrols and may be regarded as beneficial as an alternative to pesticides in agriculture for example. In some instances the potential for being beneficial or detrimental in the long run remains unknown. [ 1 ] [ 2 ] [ 3 ] The effects of introduced species on natural environments have gained much scrutiny from scientists, governments, farmers and others.
The formal definition of an introduced species from the United States Environmental Protection Agency is "A species that has been intentionally or inadvertently brought into a region or area. Also called an exotic or non-native species". [ 4 ] [ 5 ]
In the broadest and most widely used sense, an introduced species is synonymous with "non-native" and therefore applies as well to most garden and farm organisms; these adequately fit the basic definition given above. However, some sources add to that basic definition "and are now reproducing in the wild", [ 6 ] which means that species growing in a garden, farm, or house may not meet the criteria unless they escape and persist.
There are many terms associated with introduced species that represent subsets of introduced species, and the terminology associated with introduced species is now in flux for various reasons. Examples of these terms are "invasive", "acclimatized", "adventive", "naturalized", and "immigrant" species.
The term "invasive" is used to describe introduced species that cause ecological, economic, or other damage to the area in which they were introduced.
Acclimatized species are introduced species that have changed physically and/or behaviorally in order to adjust to their new environment. Acclimatized species are not necessarily optimally adjusted to their new environment and may just be physically/behaviorally sufficient for the new environment.
Adventive species are often considered synonymous with "introduced species", but this term is sometimes applied exclusively to introduced species that are not permanently established. [ 7 ]
Naturalized species are often introduced species that do not need human help to reproduce and maintain their population in an area outside their native range (no longer adventive), but the term also applies to populations that migrated and established themselves in a novel environment (e.g., in Europe , house sparrows have been well established since the early Iron Age, though they originated in Asia ).
Immigrant species are species that travel, sometimes by themselves, but often with human help, between two habitats. Invasiveness is not a requirement. [ 8 ]
Introduction of a species outside its native range is all that is required to be qualified as an "introduced species". Such species might be termed naturalized , "established", or "wild non-native species". If they further spread beyond the place of introduction and cause damage to nearby species, they are called " invasive species ". The transition from introduction, to establishment and to invasion has been described in the context of plants. [ 9 ] Introduced species are essentially "non-native" species. Invasive species are those introduced species that spread widely or quickly and cause harm, be that to the environment, [ 10 ] human health, other valued resources, or the economy. There have been calls from scientists to consider a species "invasive" only in terms of their spread and reproduction rather than the harm they may cause. [ 11 ]
According to a practical definition, an invasive species is one that has been introduced and become a pest in its new location, spreading (invading) by natural means. The term is used to imply both a sense of urgency and actual or potential harm. For example, U.S. Executive Order 13112 (1999) defines "invasive species" as "an alien species whose introduction does or is likely to cause economic or environmental harm or harm to human health". [ 12 ] The biological definition of invasive species, on the other hand, makes no reference to the harm they may cause, only to the fact that they spread beyond the area of original introduction.
Some argue that "invasive" is a loaded word and harm is difficult to define. [ 6 ]
From a regulatory perspective, it is neither desirable nor practical to list as undesirable or outright ban all non-native species (although the State of Hawaii has adopted an approach that comes close to this). Regulations require a definitional distinction between non-natives that are deemed especially onerous and all others. Introduced "pest" species, officially listed as invasive, best fit the definition of an invasive species. Early detection and rapid response is the most effective strategy for regulating a pest species and reducing the economic and environmental impacts of an introduction. [ 13 ] Management of invasion pathways is at the forefront of eliminating unwanted invasive species; preliminary steps include educating the public and securing the cooperation of industry and government. [ 14 ]
In Great Britain , the Wildlife and Countryside Act 1981 prevents the introduction of any animal not naturally occurring in the wild, as well as of any listed animal or plant that was introduced previously and proved to be invasive.
By definition , a species is considered "introduced" when its transport into an area outside of its native range is human-mediated. Introductions by humans can be described as either intentional or accidental. Intentional introductions have been motivated by individuals or groups who either (1) believe that the newly introduced species will be in some way beneficial to humans in its new location, or (2) introduce the species deliberately but with no regard to its potential impact. Unintentional or accidental introductions are most often a byproduct of human movements and are thus unbound to human motivations. Subsequent range expansion of introduced species may or may not involve human activity.
Species that humans intentionally transport to new regions can subsequently become successfully established in two ways. In the first case, organisms are purposely released for establishment in the wild. It is sometimes difficult to predict whether a species will become established upon release, and if not initially successful, humans have made repeated introductions to improve the probability that the species will survive and eventually reproduce in the wild. In these cases, it is clear that the introduction is directly facilitated by human desires.
In the second case, species intentionally transported into a new region may escape from captive or cultivated populations and subsequently establish independent breeding populations. Escaped organisms are included in this category because their initial transport to a new region is human motivated.
The widespread phenomena of intentional introduction has also been described as biological globalization .
Positive Introductions
Although most introduced species have negative impacts on the ecosystems they enter, some affect the ecosystem in a positive way. For example, in New Hampshire invasive plants can provide some benefits to some species. Invasive species such as autumn olive, oriental bittersweet, and honeysuckle produce fruit that is used by a handful of fruit-eating bird species. [ 15 ] The invasive plants can also be a source of pollen and nectar for many insects, such as bees. These invasive plants are able to help keep their ecosystem thriving and increase native animals' chances of survival. Several introduced exotic trees serve as nest sites for resident waterbird species in Udaipur city, India. [ 16 ]
Perhaps the most common motivation for introducing a species into a new place is that of economic gain. Non-native species can become such a common part of an environment, culture, and even diet that little thought is given to their geographic origin. For example, soybeans , kiwi fruit , wheat , honey bees , and all livestock except the American bison and the turkey are non-native species to North America. Collectively, non-native crops and livestock account for 98% of US food. [ 17 ] These and other benefits from non-natives are so vast that, according to the Congressional Research Service, they probably exceed the costs. [ 18 ]
Other examples of species introduced for the purposes of benefiting agriculture , aquaculture or other economic activities are widespread. [ 19 ] Eurasian carp was first introduced to the United States as a potential food source. The apple snail was released in Southeast Asia with the intent that it be used as a protein source, and subsequently to places like Hawaii to establish a food industry. In Alaska, foxes were introduced to many islands to create new populations for the fur trade. About twenty species of African and European dung beetles have established themselves in Australia after deliberate introduction by the Australian Dung Beetle Project in an effort to reduce the impact of livestock manure. The timber industry promoted the introduction of Monterey pine ( Pinus radiata ) from California to Australia and New Zealand as a commercial timber crop. These examples represent only a small subsample of species that have been moved by humans for economic interests.
The rise in the use of genetically modified organisms has added another potential economic advantage to introducing new or modified species into different environments. Companies such as Monsanto , which earn much of their profit from selling genetically modified seeds, have added to the controversy surrounding introduced species. The effect of genetically modified organisms varies from organism to organism and is still being researched; however, their rise has added complexity to the conversations surrounding introduced species.
Introductions have also been important in supporting recreation activities or otherwise increasing human enjoyment. Numerous fish and game animals have been introduced for the purposes of sport fishing and hunting. The introduced amphibian ( Ambystoma tigrinum ) that threatens the endemic California salamander ( A. californiense ) was introduced to California as a source of bait for fishermen. [ 20 ] Pet animals have also been frequently transported into new areas by humans, and their escapes have resulted in several introductions, such as feral cats , [ 21 ] parrots , [ 22 ] and pond slider . [ 23 ]
Lophura nycthemera ( silver pheasant ), a native of East Asia, has been introduced into parts of Europe for ornamental reasons.
Many plants have been introduced with the intent of aesthetically improving public recreation areas or private properties. The introduced Norway maple for example occupies a prominent status in many of Canada's parks. [ 24 ] The transport of ornamental plants for landscaping use has and continues to be a source of many introductions. Some of these species have escaped horticultural control and become invasive. Notable examples include water hyacinth , salt cedar , and purple loosestrife .
In other cases, species have been translocated for reasons of "cultural nostalgia", which refers to instances in which humans who have migrated to new regions have intentionally brought with them familiar organisms. Famous examples include the introduction of common starlings to North America by the American Eugene Schieffelin , a lover of the works of Shakespeare and the chairman of the American Acclimatization Society , who, it is rumoured, wanted to introduce all of the birds mentioned in Shakespeare's plays into the United States. He deliberately released eighty starlings into Central Park in New York City in 1890, and another forty in 1891.
Yet another prominent example of an introduced species that became invasive is the European rabbit in Australia . Thomas Austin , a British landowner, had rabbits released on his estate in Victoria because he missed hunting them. A more recent example is the introduction of the common wall lizard ( Podarcis muralis) to North America by a Cincinnati boy, George Rau, around 1950 after a family vacation to Italy . [ 25 ]
Intentional introductions have also been undertaken with the aim of ameliorating environmental problems. A number of fast spreading plants such as kudzu have been introduced as a means of erosion control. Other species have been introduced as biological control agents to control invasive species . This involves the purposeful introduction of a natural enemy of the target species with the intention of reducing its numbers or controlling its spread.
A special case of introduction is the reintroduction of a species that has become locally endangered or extinct, done in the interests of conservation. [ 26 ] Examples of successful reintroductions include wolves to Yellowstone National Park in the U.S., and the red kite to parts of England and Scotland. Introductions or translocations of species have also been proposed in the interest of genetic conservation , which advocates the introduction of new individuals into genetically depauperate populations of endangered or threatened species. [ 27 ]
Unintentional introductions occur when species are transported by human vectors. Increasing rates of human travel are providing accelerating opportunities for species to be accidentally transported into areas in which they are not considered native. For example, three species of rat (the black, Norway and Polynesian) have spread to most of the world as hitchhikers on ships, and arachnids such as scorpions and exotic spiders are sometimes transported to areas far beyond their native range by riding in shipments of tropical fruit. This was seen during the introduction of Steatoda nobilis (Noble false widow) worldwide through banana shipments. [ 28 ]
Furthermore, there are numerous examples of marine organisms being transported in ballast water , among them the invasive comb jelly Mnemiopsis leidyi , the dangerous bacterium Vibrio cholerae , and the fouling zebra mussel . The Mediterranean and Black Seas, with their high-volume shipping from exotic sources, are most impacted by this problem. [ 29 ] Busy harbors are all potential hotspots as well: over 200 species have been introduced to the San Francisco Bay in this manner, making it the most heavily invaded estuary in the world. [ 30 ]
There is also the accidental release of Africanized honey bees (AHB), known colloquially as "killer bees", to Brazil in 1957, and of the Asian carp to the United States. The insect commonly known as the brown marmorated stink bug ( Halyomorpha halys ) was introduced accidentally in Pennsylvania. Another form of unintentional introduction is when an intentionally introduced plant carries a parasite or herbivore with it. Some become invasive, for example the oleander aphid , accidentally introduced with the ornamental plant oleander .
Yet another unintentional pathway of introduction is during the delivery of humanitarian aid in the aftermath of natural disasters. [ 31 ] [ 32 ] This occurred during relief efforts for Hurricane Maria in Dominica , it was found that the common green iguana , the Cuban tree frog , and potentially the Venezuela snouted tree frog were introduced with the former two becoming established. [ 32 ]
Most accidentally or intentionally introduced species do not become invasive like the ones mentioned above. For instance, some 179 coccinellid species have been introduced to the U.S. and Canada; about 27 of these non-native species have become established, and only a handful can be considered invasive, including the intentionally introduced Harmonia axyridis , the multicolored Asian lady beetle. [ 33 ] However, the small percentage of introduced species that become invasive can produce profound ecological changes. In North America, Harmonia axyridis has become the most abundant lady beetle and probably accounts for more observations than all the native lady beetles put together. [ 34 ]
Many non-native plants have been introduced into new territories, initially as either ornamental plants or for erosion control, stock feed, or forestry. Whether an exotic will become an invasive species is seldom understood in the beginning. [ 36 ]
A very troublesome marine species in southern Europe is the seaweed Caulerpa taxifolia . Caulerpa was first observed in the Mediterranean Sea in 1984, off the coast of Monaco . By 1997, it had covered some 50 km 2 . It has a strong potential to overgrow natural biotopes , and represents a major risk for sublittoral ecosystems . The origin of the alga in the Mediterranean was thought to be either as a migration through the Suez Canal from the Red Sea, or as an accidental introduction from an aquarium. [ 37 ]
This species has become invasive in Australia, where it threatens native rare plants and causes erosion and soil slumping around river banks. [ 38 ] It has also become invasive in France where it has been listed as an invasive plant species of concern in the Mediterranean region, where it can form monocultures that threaten critical conservation habitats. [ 39 ]
Most introduced species do not become invasive. Examples of introduced animals that have become invasive include the gypsy moth in eastern North America , the zebra mussel and alewife in the Great Lakes , the Canada goose and gray squirrel in Europe, the beaver in Tierra del Fuego , the muskrat in Europe and Asia , the cane toad and red fox in Australia , nutria in North America , Eurasia , and Africa , and the common brushtail possum in New Zealand . In Taiwan , the success of introduced bird species was related to their native range size and body size; larger species with larger native range sizes were found to have larger introduced range sizes. [ 40 ]
One notoriously devastating introduced species is the small Indian mongoose ( Urva auropunctata ). Originating in a region encompassing Iran and India , it was introduced to the West Indies and Hawaii in the late 1800s for pest control. Since then, it has thrived on prey unequipped to deal with its speed, nearly leading to the local extinction of a variety of species. [ 41 ]
In some cases, introduced animals may unintentionally promote the cause of rewilding . [ 42 ] For example, escaped horses and donkeys that have gone feral in the Americas may play ecological roles similar to those of the equids that became extinct there at the end of the Pleistocene . [ 43 ]
The exotic pet trade has also been a large source of introduced species. The species favored as pets have more general habitat requirements and larger distributions. [ 44 ] Therefore, as these pets escape or are released, unintentionally or intentionally, they are more likely to survive and establish non-native populations in the wild. Among the popular exotic pets that have become alien or invasive species are parrots, frogs, terrapins, and iguanas.
Some species, such as the Western honey bee , brown rat , house sparrow , ring-necked pheasant , and European starling , have been introduced very widely. In addition there are some agricultural and pet species that frequently become feral ; these include rabbits , dogs , ducks , snakes , goats , fish , pigs , and cats . Many water fleas, such as Daphnia , Bosmina and Bythotrephes , have been introduced around the world, causing dramatic changes in native freshwater ecosystems. [ 45 ]
When a new species is introduced, it may breed with members of native species, producing hybrids. The effects of hybridization on native species can range from negligible to devastating. Potential negative effects include hybrids that are less fit for their environment, resulting in population decline. This was seen in Atlantic salmon populations, where large numbers of escapees from salmon farms bred with wild fish and produced hybrids with reduced survival. [ 46 ] Potential positive effects include additions to the genetic diversity of a population, which can increase its ability to adapt and increase the number of healthy individuals within it. This was seen in the introduction of guppies in Trinidad to encourage population growth and introduce new alleles into the population; the results included increased levels of heterozygosity and a larger population size . [ 47 ] Widespread introductions of non-native iguanas are having devastating effects on native iguana populations in the Caribbean Lesser Antilles , as hybrids appear to have higher fitness than native iguanas and are outcompeting and replacing them. [ 48 ] [ 49 ] Numerous populations have already become extinct, and hybridization continues to reduce the number of native iguanas on multiple islands.
In plants, introduced species have been observed to undergo rapid evolutionary change to adapt to their new environments, with changes in plant height, size, leaf shape, dispersal ability, reproductive output, vegetative reproduction ability, level of dependence on the mycorrhizal network , and level of phenotypic plasticity appearing on timescales of decades to centuries. [ 50 ]
It has been hypothesized that invasive species of microbial life could contaminate a planetary body after being introduced, deliberately or unintentionally, by a space probe or spacecraft . [ 51 ] It has also been hypothesized that life on Earth originated through the introduction of life from other planets billions of years ago, possibly by a sentient race. Projects have been proposed to introduce life to lifeless but habitable planets in other star systems at some time in the future. In preparation, it has been proposed to test whether anything is still alive in the feces left behind during the six Moon landings from 1969 to 1972. [ 52 ]
"This protocol was defined in concert with Viking, the first mission to face the most stringent planetary protection requirements; its implementation remains the gold standard today." | https://en.wikipedia.org/wiki/Introduced_species |
Introductio in analysin infinitorum ( Latin : [ 1 ] Introduction to the Analysis of the Infinite ) is a two-volume work by Leonhard Euler which lays the foundations of mathematical analysis . Written in Latin and published in 1748, the Introductio contains 18 chapters in the first part and 22 chapters in the second. It has Eneström numbers E101 and E102. [ 2 ] [ 3 ] It is considered the first precalculus book.
Chapter 1 is on the concepts of variables and functions . Chapters 2 and 3 are concerned with the transformation of functions. Chapter 4 introduces infinite series through rational functions .
According to Henk Bos , the Introductio was meant as a survey of the concepts and methods of analysis preliminary to the study of the differential and integral calculus; Euler made it a masterly exercise in introducing as much of analysis as possible, in particular the elementary transcendental functions (the logarithm, the exponential function, and the trigonometric functions and their inverses), without recourse to differentiation or integration.
Euler accomplished this feat by introducing exponentiation a^x for an arbitrary constant a in the positive real numbers . He noted that the function mapping x this way is not an algebraic function , but rather a transcendental function . For a > 1 these functions are monotonically increasing and form bijections of the real line with the positive real numbers. Then each base a corresponds to an inverse function , called the logarithm to base a , in chapter 6. In chapter 7, Euler introduces e as the number whose hyperbolic logarithm is 1. The reference here is to Gregoire de Saint-Vincent , who performed a quadrature of the hyperbola y = 1/ x through description of the hyperbolic logarithm. Section 122 labels the logarithm to base e the "natural or hyperbolic logarithm...since the quadrature of the hyperbola can be expressed through these logarithms". Here he also gives the exponential series:
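In modern notation, the series in question is

$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = \sum_{n=0}^{\infty} \frac{x^n}{n!}.$$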
Then in chapter 8 Euler is prepared to address the classical trigonometric functions as "transcendental quantities that arise from the circle." He uses the unit circle and presents Euler's formula . Chapter 9 considers trinomial factors in polynomials . Chapter 16 is concerned with partitions , a topic in number theory . Continued fractions are the topic of chapter 18.
Carl Benjamin Boyer 's lecture at the 1950 International Congress of Mathematicians compared the influence of Euler's Introductio to that of Euclid 's Elements , calling the Elements the foremost textbook of ancient times, and the Introductio "the foremost textbook of modern times". [ 5 ]
The first translation into English was that by John D. Blanton, published in 1988. [ 6 ] The second, by Ian Bruce, is available online. [ 7 ] A list of the editions of Introductio has been assembled by V. Frederick Rickey . [ 8 ] | https://en.wikipedia.org/wiki/Introductio_in_analysin_infinitorum |
Introduction to Lattices and Order is a mathematical textbook on order theory by Brian A. Davey and Hilary Priestley . It was published by the Cambridge University Press in their Cambridge Mathematical Textbooks series in 1990, [ 1 ] [ 2 ] [ 3 ] with a second edition in 2002. [ 4 ] [ 5 ] [ 6 ] The second edition is significantly different in its topics and organization, and was revised to incorporate recent developments in the area, especially in its applications to computer science . [ 4 ] [ 6 ] The Basic Library List Committee of the Mathematical Association of America has suggested its inclusion in undergraduate mathematics libraries. [ 7 ]
Both editions of the book have 11 chapters; in the second edition they are organized so that the first four provide a general reference for mathematicians and computer scientists, and the remaining seven focus on more specialized material for logicians , topologists , and lattice theorists. [ 4 ]
The first chapter concerns partially ordered sets , with a fundamental example given by the partial functions ordered by the subset relation on their graphs , and covers fundamental concepts including top and bottom elements and upper and lower sets . These ideas lead to the second chapter, on lattices , in which every two elements (or, in complete lattices , every set) have a greatest lower bound and a least upper bound. This chapter includes the construction of a lattice from the lower sets of any partial order, and the Knaster–Tarski theorem, which constructs a lattice from the fixed points of an order-preserving function on a complete lattice. Chapter three concerns formal concept analysis , its construction of "concept lattices" from collections of objects and their properties, with each lattice element representing both a set of objects and a set of properties held by those objects, and the universality of this construction in forming complete lattices. The fourth of the introductory chapters concerns special classes of lattices, including modular lattices , distributive lattices , and Boolean lattices . [ 5 ]
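To illustrate the Knaster–Tarski construction in a concrete setting (a sketch, not material from the book): on the complete lattice of subsets of a finite set, ordered by inclusion, the least fixed point of a monotone function can be found by iterating the function from the bottom element. The function f and the universe {0, 1, 2, 3} below are invented for the example.

```python
def least_fixed_point(f, bottom):
    """Iterate a monotone function from the bottom element until it
    stabilizes; on a finite complete lattice this reaches the least
    fixed point promised by the Knaster-Tarski theorem."""
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# Subsets of {0, 1, 2, 3} under inclusion form a complete lattice.
# f is monotone: it adds 0 and the successor of every element below 3.
def f(s):
    return s | {0} | {n + 1 for n in s if n + 1 <= 3}

print(least_fixed_point(f, frozenset()))  # frozenset({0, 1, 2, 3})
```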
In the second part of the book, chapter 5 concerns the theorem that every finite Boolean lattice is isomorphic to the lattice of subsets of a finite set, and (less trivially) Birkhoff's representation theorem according to which every finite distributive lattice is isomorphic to the lattice of lower sets of a finite partial order. Chapter 6 covers congruence relations on lattices. The topics in chapter 7 include closure operations and Galois connections on partial orders, and the Dedekind–MacNeille completion of a partial order into the smallest complete lattice containing it. The next two chapters concern complete partial orders , their fixed-point theorems, information systems, and their applications to denotational semantics . Chapter 10 discusses order-theoretic equivalents of the axiom of choice , including extensions of the representation theorems from chapter 5 to infinite lattices, and the final chapter discusses the representation of lattices with topological spaces , including Stone's representation theorem for Boolean algebras and the duality theory for distributive lattices . [ 5 ]
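In symbols, Birkhoff's representation theorem from chapter 5 can be stated as follows: writing J(L) for the partially ordered set of join-irreducible elements of a finite distributive lattice L, and \mathcal{O}(P) for the lattice of lower sets of a poset P,

$$L \;\cong\; \mathcal{O}(J(L)).$$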
Two appendices provide background in topology needed for the final chapter, and an annotated bibliography. [ 6 ]
This book is aimed at beginning graduate students, [ 2 ] although it could also be used by advanced undergraduates. [ 6 ] Its many exercises make it suitable as a course textbook, [ 2 ] [ 3 ] and serve both to fill in details from the exposition in the book, and to provide pointers to additional topics. [ 5 ] Although some mathematical sophistication is required of its readers, the main prerequisites are discrete mathematics , abstract algebra , and group theory . [ 2 ] [ 5 ]
Writing of the first edition, reviewer Josef Niederle calls it "an excellent textbook", "up-to-date and clear". [ 3 ] Similarly, Thomas S. Blyth praises the first edition as "a well-written, satisfying, informative, and stimulating account of applications that are of great interest", [ 1 ] and in an updated review writes that the second edition is as good as the first. [ 4 ] Likewise, although Jon Cohen has some quibbles with the ordering and selection of topics (particularly the inclusion of congruences at the expense of a category-theoretic view of the subject), he concludes that the book is "a wonderful and accessible introduction to lattice theory, of equal interest to both computer scientists and mathematicians". [ 5 ]
Both Blyth and Cohen note the book's skilled use of LaTeX to create its diagrams, and its helpful descriptions of how the diagrams were made. [ 1 ] [ 5 ] | https://en.wikipedia.org/wiki/Introduction_to_Lattices_and_Order |
Introduction to Solid State Physics , known colloquially as Kittel , is a classic condensed matter physics textbook written by American physicist Charles Kittel in 1953. [ 1 ] The book has been highly influential and has seen widespread adoption; Marvin L. Cohen remarked in 2019 that Kittel's content choices in the original edition played a large role in defining the field of solid-state physics . [ 2 ] It was also the first proper textbook covering this new field of physics. [ 3 ] The book is published by John Wiley and Sons and, as of 2018, it is in its ninth edition and has been reprinted many times as well as translated into over a dozen languages, including Chinese, French, German, Hungarian, Indonesian, Italian, Japanese, Korean, Malay, Romanian, Russian, Spanish, and Turkish. In some later editions, the eighteenth chapter, titled Nanostructures , was written by Paul McEuen . Along with its competitor Ashcroft and Mermin , the book is considered a standard textbook in condensed matter physics.
Kittel received his PhD from the University of Wisconsin–Madison in 1941 under his advisor Gregory Breit . [ 4 ] Before being promoted to professor of physics at UC Berkeley in 1951, Kittel held several other positions. He worked for the Naval Ordnance Laboratory from 1940 to 1942, was a research physicist in the US Navy until 1945, worked at the Research Laboratory of Electronics at MIT from 1945 to 1947 and at Bell Labs from 1947 to 1951, and was a visiting associate professor at UC Berkeley from 1950 until his promotion. [ 4 ]
Henry Ehrenreich has noted that before the first edition of Introduction to Solid State Physics came out in 1953, there were no other textbooks on the subject; rather, the young field's study material was spread across several prominent articles and treatises that, in Ehrenreich's view, expounded rather than explained the topics and were not suitable as textbooks. [ 3 ]
The book covers a wide range of topics in solid state physics, including Bloch's theorem, crystals, magnetism, phonons, Fermi gases, magnetic resonance, and surface physics. The chapters are broken into sections that highlight the topics. [ 5 ]
Marvin L. Cohen and Morrel H. Cohen , in an obituary for Kittel in 2019, remarked that the original book "was not only the dominant text for teaching in the field, it was on the bookshelf of researchers in academia and industry throughout the world", [ 4 ] though they did not provide any time frame on when it may have been surpassed as the dominant text. They also noted that Kittel's content choices played a large role in defining the field of solid-state physics . [ 4 ]
The book is a classic textbook in the subject and has seen use as a comparative benchmark in reviews of other books in condensed matter physics. [ 1 ] [ 3 ] In a 1969 review of another book, Robert G. Chambers observed that there were not many textbooks covering these topics, writing that "since 1953, Kittel's classic Introduction to Solid State Physics has dominated the field so effectively that few competitors have appeared", and that the third edition continued that legacy. He also noted that the book was too long for some uses and that less thorough works would be welcome. [ 1 ]
A gauge theory is a type of theory in physics . The word gauge means a measurement , a thickness, an in-between distance (as in railroad tracks ), or a resulting number of units per certain parameter (a number of loops in an inch of fabric or a number of lead balls in a pound of ammunition ). [ 1 ] Modern theories describe physical forces in terms of fields , e.g., the electromagnetic field , the gravitational field , and fields that describe forces between the elementary particles . A general feature of these field theories is that the fundamental fields cannot be directly measured; however, some associated quantities can be measured, such as charges, energies, and velocities. For example, say you cannot measure the diameter of a lead ball, but you can determine how many lead balls, which are equal in every way, are required to make a pound. Using the number of balls, the density of lead, and the formula for calculating the volume of a sphere from its diameter, one could indirectly determine the diameter of a single lead ball.
In field theories, different configurations of the unobservable fields can result in identical observable quantities . A transformation from one such field configuration to another is called a gauge transformation ; [ 2 ] [ 3 ] the lack of change in the measurable quantities, despite the field being transformed, is a property called gauge invariance . For example, if you could measure the color of lead balls and discover that when you change the color, you still fit the same number of balls in a pound, the property of "color" would show gauge invariance . Since any kind of invariance under a field transformation is considered a symmetry , gauge invariance is sometimes called gauge symmetry . Generally, any theory that has the property of gauge invariance is considered a gauge theory.
For example, in electromagnetism the electric field E and the magnetic field B are observable, while the potentials V ("voltage") and A (the vector potential ) are not. [ 4 ] Under a gauge transformation in which a constant is added to V , no observable change occurs in E or B .
With the advent of quantum mechanics in the 1920s, and with successive advances in quantum field theory , the importance of gauge transformations has steadily grown. Gauge theories constrain the laws of physics, because all the changes induced by a gauge transformation have to cancel each other out when written in terms of observable quantities. Over the course of the 20th century, physicists gradually realized that all forces ( fundamental interactions ) arise from the constraints imposed by local gauge symmetries , in which case the transformations vary from point to point in space and time . Perturbative quantum field theory (usually employed for scattering theory) describes forces in terms of force-mediating particles called gauge bosons . The nature of these particles is determined by the nature of the gauge transformations. The culmination of these efforts is the Standard Model , a quantum field theory that accurately predicts all of the fundamental interactions except gravity .
The earliest field theory having a gauge symmetry was James Clerk Maxwell 's formulation, in 1864–65, of electrodynamics in " A Dynamical Theory of the Electromagnetic Field ". The importance of this symmetry remained unnoticed in the earliest formulations. Similarly unnoticed, David Hilbert had derived Einstein's equations of general relativity by postulating a symmetry under any change of coordinates, just as Einstein was completing his work. [ 5 ] Later Hermann Weyl , inspired by success in Einstein's general relativity , conjectured (incorrectly, as it turned out) in 1919 that invariance under the change of scale or "gauge" (a term inspired by the various track gauges of railroads) might also be a local symmetry of electromagnetism. [ 6 ] [ 7 ] : 5, 12 Although Weyl's choice of the gauge was incorrect, the name "gauge" stuck to the approach. After the development of quantum mechanics , Weyl, Vladimir Fock and Fritz London modified their gauge choice by replacing the scale factor with a change of wave phase , and applying it successfully to electromagnetism. [ 8 ] Gauge symmetry was generalized mathematically in 1954 by Chen Ning Yang and Robert Mills in an attempt to describe the strong nuclear forces . This idea, dubbed Yang–Mills theory , later found application in the quantum field theory of the weak force , and its unification with electromagnetism in the electroweak theory.
The importance of gauge theories for physics stems from their tremendous success in providing a unified framework to describe the quantum-mechanical behavior of electromagnetism , the weak force and the strong force . This gauge theory, known as the Standard Model , accurately predicts experimental results regarding three of the four fundamental forces of nature.
Historically, the first example of gauge symmetry to be discovered was classical electromagnetism . [ 9 ] A static electric field can be described in terms of an electric potential (voltage, V ) that is defined at every point in space, and in practical work it is conventional to take the Earth as a physical reference that defines the zero level of the potential, or ground . But only differences in potential are physically measurable, which is the reason that a voltmeter must have two probes, and can only report the voltage difference between them. Thus one could choose to define all voltage differences relative to some other standard, rather than the Earth, resulting in the addition of a constant offset. [ 10 ] If the potential V is a solution to Maxwell's equations then, after this gauge transformation, the new potential V → V + C is also a solution to Maxwell's equations and no experiment can distinguish between these two solutions. In other words, the laws of physics governing electricity and magnetism (that is, Maxwell's equations) are invariant under gauge transformation. [ 11 ] Maxwell's equations have a gauge symmetry.
Generalizing from static electricity to electromagnetism, we have a second potential, the magnetic vector potential A , which can also undergo gauge transformations. These transformations may be local. That is, rather than adding a constant onto V , one can add a function that takes on different values at different points in space and time. If A is also changed in certain corresponding ways, then the same E (electric) and B (magnetic) fields result. The detailed mathematical relationship between the fields E and B and the potentials V and A is given in the article Gauge fixing , along with the precise statement of the nature of the gauge transformation. The relevant point here is that the fields remain the same under the gauge transformation, and therefore Maxwell's equations are still satisfied.
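For reference, in standard notation (a textbook statement, not a quotation from this article's sources), the fields are obtained from the potentials by

$$\mathbf{E} = -\nabla V - \frac{\partial \mathbf{A}}{\partial t}, \qquad \mathbf{B} = \nabla \times \mathbf{A},$$

and the local gauge transformation described above replaces the potentials by

$$V \to V - \frac{\partial \chi}{\partial t}, \qquad \mathbf{A} \to \mathbf{A} + \nabla \chi$$

for an arbitrary smooth function χ(x, t); substituting these into the expressions for E and B shows that both fields are unchanged.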
Gauge symmetry is closely related to charge conservation . Suppose that there existed some process by which one could briefly violate conservation of charge by creating a charge q at a certain point in space, 1, moving it to some other point 2, and then destroying it. We might imagine that this process was consistent with conservation of energy. We could posit a rule stating that creating the charge required an input of energy E₁ = qV₁ and destroying it released E₂ = qV₂, which would seem natural since qV measures the extra energy stored in the electric field because of the existence of a charge at a certain point. Outside of the interval during which the particle exists, conservation of energy would be satisfied, because the net energy released by creation and destruction of the particle, qV₂ − qV₁, would be equal to the work done in moving the particle from 1 to 2, qV₂ − qV₁. But although this scenario salvages conservation of energy, it violates gauge symmetry. Gauge symmetry requires that the laws of physics be invariant under the transformation V → V + C , which implies that no experiment should be able to measure the absolute potential, without reference to some external standard such as an electrical ground. But the proposed rules E₁ = qV₁ and E₂ = qV₂ for the energies of creation and destruction would allow an experimenter to determine the absolute potential, simply by comparing the energy input required to create the charge q at a particular point in space in the case where the potential is V and V + C respectively. The conclusion is that if gauge symmetry holds, and energy is conserved, then charge must be conserved. [ 12 ]
As discussed above, the gauge transformations for classical (i.e., non-quantum mechanical) general relativity are arbitrary coordinate transformations. [ 13 ] Technically, the transformations must be invertible, and both the transformation and its inverse must be smooth, in the sense of being differentiable an arbitrary number of times.
Some global symmetries under changes of coordinate predate both general relativity and the concept of a gauge. For example, Galileo and Newton introduced the notion of translation invariance , an advancement from the Aristotelian concept that different places in space, such as the earth versus the heavens, obeyed different physical rules.
Suppose, for example, that one observer examines the properties of a hydrogen atom on Earth while another observer examines one on the Moon (or any other place in the universe); both observers will find that their hydrogen atoms exhibit completely identical properties. Again, if one observer examined a hydrogen atom today and another examined one 100 years ago (or at any other time in the past or in the future), the two experiments would again produce completely identical results. The invariance of the properties of a hydrogen atom with respect to the time and place where these properties are investigated is called translation invariance.
Recalling our two observers from different ages: the time in their experiments is shifted by 100 years. If the time when the older observer did the experiment was t , the time of the modern experiment is t + 100 years. Both observers discover the same laws of physics. Because light from hydrogen atoms in distant galaxies may reach the earth after having traveled across space for billions of years, in effect one can do such observations covering periods of time almost all the way back to the Big Bang , and they show that the laws of physics have always been the same.
In other words, if in the theory we change the time t to t + 100 years (or indeed any other time shift) the theoretical predictions do not change. [ 14 ]
In Einstein's general relativity , coordinates like x , y , z , and t are not only "relative" in the global sense of translations like t → t + C , rotations, etc., but become completely arbitrary, so that, for example, one can define an entirely new time-like coordinate according to some arbitrary rule such as t → t + t³/t₀², where t₀ has dimensions of time, and yet the Einstein equations will have the same form. [ 13 ] [ 15 ]
Invariance of the form of an equation under an arbitrary coordinate transformation is customarily referred to as general covariance , and equations with this property are referred to as written in the covariant form. General covariance is a special case of gauge invariance.
Maxwell's equations can also be expressed in a generally covariant form, which is as invariant under general coordinate transformation as the Einstein field equation.
Until the advent of quantum mechanics, the only well known example of gauge symmetry was in electromagnetism, and the general significance of the concept was not fully understood. For example, it was not clear whether it was the fields E and B or the potentials V and A that were the fundamental quantities; if the former, then the gauge transformations could be considered as nothing more than a mathematical trick.
In quantum mechanics, a particle such as an electron is also described as a wave. For example, if the double-slit experiment is performed with electrons, then a wave-like interference pattern is observed. The electron has the highest probability of being detected at locations where the parts of the wave passing through the two slits are in phase with one another, resulting in constructive interference . The frequency, f , of the electron wave is related to the kinetic energy of an individual electron particle via the quantum-mechanical relation E = hf . If there are no electric or magnetic fields present in this experiment, then the electron's energy is constant, and, for example, there will be a high probability of detecting the electron along the central axis of the experiment, where by symmetry the two parts of the wave are in phase.
But now suppose that the electrons in the experiment are subject to electric or magnetic fields. For example, if an electric field were imposed on one side of the axis but not on the other, the results of the experiment would be affected. The part of the electron wave passing through that side oscillates at a different rate, since its energy has had − eV added to it, where − e is the charge of the electron and V the electrical potential. The results of the experiment will be different, because phase relationships between the two parts of the electron wave have changed, and therefore the locations of constructive and destructive interference will be shifted to one side or the other. It is the electric potential that occurs here, not the electric field, and this is a manifestation of the fact that it is the potentials and not the fields that are of fundamental significance in quantum mechanics.
It is even possible to have cases in which an experiment's results differ when the potentials are changed, even if no charged particle is ever exposed to a different field. One such example is the Aharonov–Bohm effect , shown in the figure. [ 16 ] In this example, turning on the solenoid only causes a magnetic field B to exist within the solenoid. But the solenoid has been positioned so that the electron cannot possibly pass through its interior. If one believed that the fields were the fundamental quantities, then one would expect that the results of the experiment would be unchanged. In reality, the results are different, because turning on the solenoid changed the vector potential A in the region that the electrons do pass through. Now that it has been established that it is the potentials V and A that are fundamental, and not the fields E and B , we can see that the gauge transformations, which change V and A , have real physical significance, rather than being merely mathematical artifacts.
Note that in these experiments, the only quantity that affects the result is the difference in phase between the two parts of the electron wave. Suppose we imagine the two parts of the electron wave as tiny clocks, each with a single hand that sweeps around in a circle, keeping track of its own phase. Although this cartoon ignores some technical details, it retains the physical phenomena that are important here. [ 17 ] If both clocks are sped up by the same amount, the phase relationship between them is unchanged, and the results of experiments are the same. Not only that, but it is not even necessary to change the speed of each clock by a fixed amount. We could change the angle of the hand on each clock by a varying amount θ , where θ could depend on both the position in space and on time. This would have no effect on the result of the experiment, since the final observation of the location of the electron occurs at a single place and time, so that the phase shift in each electron's "clock" would be the same, and the two effects would cancel out. This is another example of a gauge transformation: it is local, and it does not change the results of experiments.
In summary, gauge symmetry attains its full importance in the context of quantum mechanics. In the application of quantum mechanics to electromagnetism, i.e., quantum electrodynamics , gauge symmetry applies to both electromagnetic waves and electron waves. These two gauge symmetries are in fact intimately related. If a gauge transformation θ is applied to the electron waves, for example, then one must also apply a corresponding transformation to the potentials that describe the electromagnetic waves. [ 18 ] Gauge symmetry is required in order to make quantum electrodynamics a renormalizable theory, i.e., one in which the calculated predictions of all physically measurable quantities are finite.
The description of the electrons in the subsection above as little clocks is in effect a statement of the mathematical rules according to which the phases of electrons are to be added and subtracted: they are to be treated as ordinary numbers, except that in the case where the result of the calculation falls outside the range of 0≤θ<360°, we force it to "wrap around" into the allowed range, which covers a circle. Another way of putting this is that a phase angle of, say, 5° is considered to be completely equivalent to an angle of 365°. Experiments have verified this testable statement about the interference patterns formed by electron waves. Except for the "wrap-around" property, the algebraic properties of this mathematical structure are exactly the same as those of the ordinary real numbers.
In mathematical terminology, electron phases form an abelian group under addition, called the circle group or U(1). "Abelian" means that addition is commutative , so that θ + φ = φ + θ . " Group " means that addition is associative , has an identity element (namely "0"), and that for every phase there exists an inverse such that the sum of a phase and its inverse is 0. Other examples of abelian groups are the integers under addition, with identity 0 and inverses given by negation, and the nonzero fractions under multiplication, with identity 1 and inverses given by reciprocals.
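A minimal sketch of this phase arithmetic, with angles stored in degrees and wrapped into the range [0, 360); the function names are illustrative, not standard terminology.

```python
def add_phase(theta, phi):
    """Add two phase angles in degrees, wrapping into [0, 360)."""
    return (theta + phi) % 360.0

def inverse_phase(theta):
    """The phase that sums with theta to 0 (mod 360 degrees)."""
    return (-theta) % 360.0

# The group properties of U(1) in this representation:
assert add_phase(5, 360) == 5.0                   # 365 deg is the same phase as 5 deg
assert add_phase(30, 40) == add_phase(40, 30)     # commutative (abelian)
assert add_phase(270, inverse_phase(270)) == 0.0  # every phase has an inverse
```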
As a way of visualizing the choice of a gauge, consider whether it is possible to tell if a cylinder has been twisted. If the cylinder has no bumps, marks, or scratches on it, we cannot tell. We could, however, draw an arbitrary curve along the cylinder, defined by some function θ ( x ), where x measures distance along the axis of the cylinder. Once this arbitrary choice (the choice of gauge) has been made, it becomes possible to detect it if someone later twists the cylinder.
In 1954, Chen Ning Yang and Robert Mills proposed to generalize these ideas to noncommutative groups. A noncommutative gauge group can describe a field that, unlike the electromagnetic field, interacts with itself. For example, general relativity states that gravitational fields have energy, and special relativity concludes that energy is equivalent to mass. Hence a gravitational field induces a further gravitational field. The nuclear forces also have this self-interacting property.
Surprisingly, gauge symmetry can give a deeper explanation for the existence of interactions, such as the electric and nuclear interactions. This arises from a type of gauge symmetry relating to the fact that all particles of a given type are experimentally indistinguishable from one another. Imagine that Alice and Betty are identical twins, labeled at birth by bracelets reading A and B. Because the girls are identical, nobody would be able to tell if they had been switched at birth; the labels A and B are arbitrary, and can be interchanged. Such a permanent interchanging of their identities is like a global gauge symmetry. There is also a corresponding local gauge symmetry, which describes the fact that from one moment to the next, Alice and Betty could swap roles while nobody was looking, and nobody would be able to tell. If we observe that Mom's favorite vase is broken, we can only infer that the blame belongs to one twin or the other, but we cannot tell whether the blame is 100% Alice's and 0% Betty's, or vice versa. If Alice and Betty are in fact quantum-mechanical particles rather than people, then they also have wave properties, including the property of superposition , which allows waves to be added, subtracted, and mixed arbitrarily. It follows that we are not even restricted to complete swaps of identity. For example, if we observe that a certain amount of energy exists in a certain location in space, there is no experiment that can tell us whether that energy is 100% A's and 0% B's, 0% A's and 100% B's, or 20% A's and 80% B's, or some other mixture. The fact that the symmetry is local means that we cannot even count on these proportions to remain fixed as the particles propagate through space. The details of how this is represented mathematically depend on technical issues relating to the spins of the particles, but for our present purposes we consider a spinless particle, for which it turns out that the mixing can be specified by some arbitrary choice of gauge θ ( x ), where an angle θ = 0° represents 100% A and 0% B, θ = 90° means 0% A and 100% B, and intermediate angles represent mixtures.
According to the principles of quantum mechanics, particles do not actually have trajectories through space. Motion can only be described in terms of waves, and the momentum p of an individual particle is related to its wavelength λ by p = h / λ . In terms of empirical measurements, the wavelength can only be determined by observing a change in the wave between one point in space and another nearby point (mathematically, by differentiation ). A wave with a shorter wavelength oscillates more rapidly, and therefore changes more rapidly between nearby points. Now suppose that we arbitrarily fix a gauge at one point in space, by saying that the energy at that location is 20% A's and 80% B's. We then measure the two waves at some other, nearby point, in order to determine their wavelengths. But there are two entirely different reasons that the waves could have changed. They could have changed because they were oscillating with a certain wavelength, or they could have changed because the gauge function changed from a 20–80 mixture to, say, 21–79. If we ignore the second possibility, the resulting theory does not work; strange discrepancies in momentum will show up, violating the principle of conservation of momentum. Something in the theory must be changed.
Again there are technical issues relating to spin, but in several important cases, including electrically charged particles and particles interacting via nuclear forces, the solution to the problem is to impute physical reality to the gauge function θ ( x ). We say that if the function θ oscillates, it represents a new type of quantum-mechanical wave, and this new wave has its own momentum p = h / λ , which turns out to patch up the discrepancies that otherwise would have broken conservation of momentum. In the context of electromagnetism, the particles A and B would be charged particles such as electrons, and the quantum mechanical wave represented by θ would be the electromagnetic field. (Here we ignore the technical issues raised by the fact that electrons actually have spin 1/2, not spin zero. This oversimplification is the reason that the gauge field θ comes out to be a scalar, whereas the electromagnetic field is actually represented by a vector consisting of V and A .) The result is that we have an explanation for the presence of electromagnetic interactions: if we try to construct a gauge-symmetric theory of identical, non-interacting particles, the result is not self-consistent, and can only be repaired by adding electric and magnetic fields that cause the particles to interact.
Although the function θ ( x ) describes a wave, the laws of quantum mechanics require that it also have particle properties. In the case of electromagnetism, the particle corresponding to electromagnetic waves is the photon. In general, such particles are called gauge bosons , where the term "boson" refers to a particle with integer spin. In the simplest versions of the theory gauge bosons are massless, but it is also possible to construct versions in which they have mass. This is the case for the gauge bosons that carry the weak interaction: the force responsible for nuclear decay.
| https://en.wikipedia.org/wiki/Introduction_to_gauge_theory
Genetics is the study of genes and tries to explain what they are and how they work. Genes are how living organisms inherit features or traits from their ancestors; for example, children usually look like their parents because they have inherited their parents' genes. Genetics tries to identify which traits are inherited and to explain how these traits are passed from generation to generation.
Some traits are part of an organism's physical appearance , such as eye color or height. Other sorts of traits are not easily seen and include blood types or resistance to diseases . Some traits are inherited through genes, which is the reason why tall and thin people tend to have tall and thin children. Other traits come from interactions between genes and the environment, so a child who inherited the tendency to be tall will still be short if poorly nourished . The way our genes and environment interact to produce a trait can be complicated. For example, the chances of somebody dying of cancer or heart disease seem to depend on both their genes and their lifestyle.
Genes are made from a long molecule called DNA , which is copied and inherited across generations. DNA is made of simple units that line up in a particular order within it, carrying genetic information. The language used by DNA is called genetic code , which lets organisms read the information in the genes. This information is the instructions for the construction and operation of a living organism.
The information within a particular gene is not always exactly the same between one organism and another, so different copies of a gene do not always give exactly the same instructions. Each unique form of a single gene is called an allele . As an example, one allele for the gene for hair color could instruct the body to produce a lot of pigment, producing black hair, while a different allele of the same gene might give garbled instructions that fail to produce any pigment, giving white hair. Mutations are random changes in genes and can create new alleles. Mutations can also produce new traits, such as when mutations to an allele for black hair produce a new allele for white hair. This appearance of new traits is important in evolution .
Genes are pieces of DNA that contain information for the synthesis of ribonucleic acids (RNAs) or polypeptides . Genes are inherited as units, with each of the two parents passing copies of their genes to their offspring. Humans have two copies of each of their genes, but each egg or sperm cell only gets one of those copies for each gene. An egg and sperm join to form a zygote with a complete set of genes. The resulting offspring has the same number of genes as their parents, but for any gene, one of their two copies comes from their father and one from their mother. [ 1 ]
The effects of this mixing depend on the types (the alleles ) of the gene. If the father has two copies of an allele for red hair, and the mother has two copies of an allele for brown hair, all their children get two alleles that give different instructions, one for red hair and one for brown. The hair color of these children depends on how these alleles work together. If one allele dominates the instructions from another, it is called the dominant allele, and the allele that is overridden is called the recessive allele. In the case of a daughter with alleles for both red and brown hair, brown is dominant and she ends up with brown hair. [ 2 ]
Although the red color allele is still there in this brown-haired girl, it doesn't show. This is a difference between what is seen on the surface (the traits of an organism, called its phenotype ) and the genes within the organism (its genotype ). In this example, the allele for brown can be called "B" and the allele for red "b". (It is normal to write dominant alleles with capital letters and recessive ones with lower-case letters.) The brown hair daughter has the "brown hair phenotype" but her genotype is Bb, with one copy of the B allele, and one of the b allele.
Now imagine that this woman grows up and has children with a brown-haired man who also has a Bb genotype. Her eggs will be a mixture of two types, one sort containing the B allele, and one sort the b allele. Similarly, her partner will produce a mix of two types of sperm containing one or the other of these two alleles. When the transmitted genes are joined up in their offspring, these children have a chance of getting either brown or red hair, since they could get a genotype of BB = brown hair, Bb = brown hair or bb = red hair. In this generation, there is, therefore, a chance of the recessive allele showing itself in the phenotype of the children—some of them may have red hair like their grandfather. [ 2 ]
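A small sketch of the Bb × Bb cross just described, enumerating the equally likely allele combinations (the names are invented for the example):

```python
from itertools import product
from collections import Counter

def cross(parent1, parent2):
    """Enumerate the equally likely offspring genotypes of a one-gene
    cross, writing each genotype with the dominant allele first."""
    return Counter(''.join(sorted(pair)) for pair in product(parent1, parent2))

genotypes = cross("Bb", "Bb")
print(genotypes)  # Counter({'Bb': 2, 'BB': 1, 'bb': 1})

# B (brown) is dominant over b (red), so BB and Bb children have
# brown hair and only bb children have red hair: a 3:1 ratio.
brown = genotypes['BB'] + genotypes['Bb']
red = genotypes['bb']
print(f"{brown} brown : {red} red")  # 3 brown : 1 red
```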
Many traits are inherited in a more complicated way than the example above. This can happen when there are several genes involved, each contributing a small part to the result. Tall people tend to have tall children because their children get a package of many alleles that each contribute a bit to how much they grow. However, there are not clear groups of "short people" and "tall people", like there are groups of people with brown or red hair. This is because of the large number of genes involved; this makes the trait very variable and people are of many different heights. [ 3 ] Despite a common misconception, the green/blue eye traits are also inherited in this complex inheritance model. [ 4 ] Inheritance can also be complicated when the trait depends on the interaction between genetics and environment. For example, malnutrition does not change traits like eye color, but can stunt growth. [ 5 ]
The function of genes is to provide the information needed to make molecules called proteins in cells. [ 1 ] Cells are the smallest independent parts of organisms: the human body contains about 100 trillion cells, while very small organisms like bacteria are just a single cell. A cell is like a miniature and very complex factory that can make all the parts needed to produce a copy of itself, which happens when cells divide . There is a simple division of labor in cells—genes give instructions and proteins carry out these instructions, tasks like building a new copy of a cell, or repairing the damage. [ 6 ] Each type of protein is a specialist that only does one job, so if a cell needs to do something new, it must make a new protein to do this job. Similarly, if a cell needs to do something faster or slower than before, it makes more or less of the protein responsible. Genes tell cells what to do by telling them which proteins to make and in what amounts.
Proteins are made of a chain built from 20 different types of amino acid molecules. This chain folds up into a compact shape, rather like an untidy ball of string. The shape of the protein is determined by the sequence of amino acids along its chain, and it is this shape that, in turn, determines what the protein does. [ 6 ] For example, some proteins have parts of their surface that perfectly match the shape of another molecule, allowing the protein to bind to this molecule very tightly. Other proteins are enzymes , which are like tiny machines that alter other molecules. [ 7 ]
The information in DNA is held in the sequence of the repeating units along the DNA chain. [ 8 ] These units are four types of nucleotides (A, T, G and C) and the sequence of nucleotides stores information in an alphabet called the genetic code . When a gene is read by a cell the DNA sequence is copied into a very similar molecule called RNA (this process is called transcription ). Transcription is controlled by other DNA sequences (such as promoters ), which show a cell where genes are, and control how often they are copied. The RNA copy made from a gene is then fed through a structure called a ribosome , which translates the sequence of nucleotides in the RNA into the correct sequence of amino acids and joins these amino acids together to make a complete protein chain. The new protein then folds up into its active form. The process of moving information from the language of RNA into the language of amino acids is called translation . [ 9 ]
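A toy sketch of the translation step, using a tiny subset of the real codon table (a complete table maps all 64 codons; the DNA coding-strand letters are used here for simplicity, whereas in the cell the ribosome reads the RNA copy):

```python
# A few real codon-to-amino-acid assignments; the full table has 64 entries.
CODON_TABLE = {'ATG': 'Met', 'TTT': 'Phe', 'GGC': 'Gly', 'TAA': 'STOP'}

def translate(dna):
    """Read a sequence three letters (one codon) at a time, mapping
    each codon to its amino acid and stopping at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE[dna[i:i + 3]]
        if amino_acid == 'STOP':
            break
        protein.append(amino_acid)
    return '-'.join(protein)

print(translate("ATGTTTGGCTAA"))  # Met-Phe-Gly
```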
If the sequence of the nucleotides in a gene changes, the sequence of the amino acids in the protein it produces may also change—if part of a gene is deleted, the protein produced is shorter and may not work anymore. [ 6 ] This is the reason why different alleles of a gene can have different effects on an organism. As an example, hair color depends on how much of a dark substance called melanin is put into the hair as it grows. If a person has a normal set of the genes involved in making melanin, they make all the proteins needed and they grow dark hair. However, if the alleles for a particular protein have different sequences and produce proteins that can't do their jobs, no melanin is produced and the person has white skin and hair ( albinism ). [ 10 ]
Genes are copied each time a cell divides into two new cells. The process that copies DNA is called DNA replication . [ 8 ] It is through a similar process that a child inherits genes from its parents when a copy from the mother is mixed with a copy from the father.
DNA can be copied very easily and accurately because each piece of DNA can direct the assembly of a new copy of its information. This is because DNA is made of two strands that pair together like the two sides of a zipper. The nucleotides are in the center, like the teeth in the zipper, and pair up to hold the two strands together. Importantly, the four different sorts of nucleotides are different shapes, so for the strands to close up properly, an A nucleotide must go opposite a T nucleotide, and a G opposite a C . This exact pairing is called base pairing . [ 8 ]
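A minimal illustration of the base-pairing rule described above, building the strand that closes up against a given one (the function name is invented):

```python
PAIRS = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}

def complementary_strand(strand):
    """Return the strand that base-pairs with the given one,
    matching A with T and G with C at every position."""
    return ''.join(PAIRS[base] for base in strand)

print(complementary_strand("ATGC"))  # prints TACG
```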
When DNA is copied, the two strands of the old DNA are pulled apart by enzymes; then each strand pairs up with new nucleotides and closes up again. This produces two new pieces of DNA, each containing one strand from the old DNA and one newly made strand. This copying process is not perfectly accurate: occasionally the wrong nucleotide is incorporated, changing the sequence of the gene. These changes in the DNA sequence are called mutations . [ 11 ] Mutations produce new alleles of genes. Sometimes these changes stop the functioning of that gene or make it serve another advantageous function, such as the melanin genes discussed above. These mutations and their effects on the traits of organisms are one of the causes of evolution . [ 12 ]
A population of organisms evolves when an inherited trait becomes more common or less common over time. [ 12 ] For instance, all the mice living on an island would be a single population of mice: some with white fur, some gray. If over generations, white mice became more frequent and gray mice less frequent, then the color of the fur in this population of mice would be evolving . In terms of genetics, this is called an increase in allele frequency .
Alleles become more or less common either by chance in a process called genetic drift or by natural selection . [ 13 ] In natural selection, if an allele makes it more likely for an organism to survive and reproduce, then over time this allele becomes more common. But if an allele is harmful, natural selection makes it less common. In the above example, if the island were getting colder each year and snow became present for much of the time, then the allele for white fur would favor survival since predators would be less likely to see them against the snow, and more likely to see the gray mice. Over time white mice would become more and more frequent, while gray mice less and less.
Mutations create new alleles. These alleles have new DNA sequences and can produce proteins with new properties. [ 14 ] So if an island was populated entirely by black mice, mutations could occur that create alleles for white fur. The combination of mutations creating new alleles at random, and natural selection picking out those that are useful, causes adaptation . This is when organisms change in ways that help them to survive and reproduce. Many such changes, studied in evolutionary developmental biology , affect the way the embryo develops into an adult body.
Some diseases are hereditary and run in families; others, such as infectious diseases , are caused by the environment. Other diseases come from a combination of genes and the environment. [ 15 ] Genetic disorders are diseases that are caused by a single allele of a gene and are inherited in families. These include Huntington's disease , cystic fibrosis or Duchenne muscular dystrophy . Cystic fibrosis, for example, is caused by mutations in a single gene called CFTR and is inherited as a recessive trait. [ 16 ]
Other diseases are influenced by genetics, but the genes a person gets from their parents only change their risk of getting a disease. Most of these diseases are inherited in a complex way, with either multiple genes involved, or coming from both genes and the environment. As an example, the risk of breast cancer is 50 times higher in the families most at risk, compared to the families least at risk. This variation is probably due to a large number of alleles, each changing the risk a little bit. [ 17 ] Several of the genes have been identified, such as BRCA1 and BRCA2 , but not all of them. However, although some of the risks are genetic, the risk of this cancer is also increased by being overweight, heavy alcohol consumption and not exercising. [ 18 ] A woman's risk of breast cancer, therefore, comes from a large number of alleles interacting with her environment, so it is very hard to predict.
Since traits come from the genes in a cell, putting a new piece of DNA into a cell can produce a new trait. This is how genetic engineering works. For example, rice can be given genes from maize and a soil bacterium so that the rice produces beta-carotene , which the body converts to vitamin A. [ 19 ] This can help children with vitamin A deficiency. Another gene being put into some crops comes from the bacterium Bacillus thuringiensis ; the gene makes a protein that is an insecticide . The insecticide kills insects that eat the plants, but is harmless to people. [ 20 ] In these plants, the new genes are put into the plant before it is grown, so the genes are in every part of the plant, including its seeds. [ 21 ] The plant's offspring inherit the new genes, which has led to concern about the spread of new traits into wild plants. [ 22 ]
The kind of technology used in genetic engineering is also being developed to treat people with genetic disorders, in an experimental medical technique called gene therapy . [ 23 ] However, here the new, properly working gene is put into targeted cells, which does not alter the chance of future children inheriting the disease-causing alleles.
Introduction to the Theory of Error-Correcting Codes is a textbook on error-correcting codes , by Vera Pless . It was published in 1982 by John Wiley & Sons , [ 1 ] [ 2 ] [ 3 ] [ 4 ] with a second edition in 1989 [ 5 ] [ 6 ] [ 7 ] [ 8 ] and a third in 1998. [ 9 ] [ 10 ] The Basic Library List Committee of the Mathematical Association of America has rated the book as essential for inclusion in undergraduate mathematics libraries. [ 11 ]
This book is mainly centered on algebraic and combinatorial techniques for designing and using error-correcting linear block codes . [ 1 ] [ 3 ] [ 9 ] It differs from previous works in this area in its reduction of each result to its mathematical foundations, and its clear exposition of how the results follow from those foundations. [ 4 ]
The first two of its ten chapters present background and introductory material, including Hamming distance , decoding methods including maximum likelihood and syndromes, sphere packing and the Hamming bound , the Singleton bound , and the Gilbert–Varshamov bound , and the Hamming(7,4) code. [ 1 ] [ 6 ] [ 9 ] They also include brief discussions of additional material not covered in more detail later, including information theory , convolutional codes , and burst error-correcting codes . [ 6 ] Chapter 3 presents the BCH code over the field GF(2⁴), and Chapter 4 develops the theory of finite fields more generally. [ 1 ] [ 6 ]
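A hedged sketch of two ideas from these introductory chapters: Hamming distance, and encoding with the Hamming(7,4) code via a generator matrix. The particular matrix below is one standard choice in the form [I | P], not necessarily the one used in the book.

```python
import numpy as np

def hamming_distance(u, v):
    """Number of positions at which two words differ."""
    return sum(a != b for a, b in zip(u, v))

# One standard generator matrix for the Hamming(7,4) code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def encode(message_bits):
    """Encode 4 message bits into a 7-bit codeword (arithmetic mod 2)."""
    return np.dot(message_bits, G) % 2

c1 = encode(np.array([1, 0, 1, 1]))
c2 = encode(np.array([1, 0, 1, 0]))
# Distinct Hamming(7,4) codewords differ in at least 3 positions,
# which is what allows the code to correct any single-bit error.
print(hamming_distance(c1, c2))  # 4
```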
Chapter 5 studies cyclic codes and Chapter 6 studies a special case of cyclic codes, the quadratic residue codes . Chapter 7 returns to BCH codes. [ 1 ] [ 6 ] After these discussions of specific codes, the next chapter concerns enumerator polynomials , including the MacWilliams identities, Pless's own power moment identities, and the Gleason polynomials. [ 1 ] The final two chapters connect this material to the theory of combinatorial designs and the design of experiments , [ 1 ] [ 2 ] and include material on the Assmus–Mattson theorem, the Witt design , the binary Golay codes , and the ternary Golay codes . [ 1 ]
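For reference, the MacWilliams identity mentioned here relates the weight enumerator of a binary linear code C to that of its dual code C⊥; in one common normalization it reads

$$W_{C^{\perp}}(x, y) = \frac{1}{|C|}\, W_C(x + y,\; x - y).$$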
The second edition adds material on BCH codes, Reed–Solomon error correction , Reed–Muller codes , decoding Golay codes, [ 5 ] [ 7 ] and "a new, simple combinatorial proof of the MacWilliams identities". [ 5 ] As well as correcting some errors and adding more exercises, the third edition includes new material on connections between greedily constructed lexicographic codes and combinatorial game theory , the Griesmer bound , non-linear codes , and the Gray images of Z₄ codes. [ 9 ] [ 10 ]
This book is written as a textbook for advanced undergraduates; [ 3 ] reviewer H. N. calls it "a leisurely introduction to the field which is at the same time mathematically rigorous". [ 8 ] It includes over 250 problems, [ 5 ] and can be read by mathematically-inclined students with only a background in linear algebra [ 1 ] (provided in an appendix) [ 6 ] [ 8 ] and with no prior knowledge of coding theory . [ 2 ]
Reviewer Ian F. Blake complained that the first edition omitted some topics necessary for engineers, including algebraic decoding, Goppa codes , Reed–Solomon error correction , and performance analysis, making this more appropriate for mathematics courses, but he suggests that it could still be used as the basis of an engineering course by replacing the last two chapters with this material, and overall he calls the book "a delightful little monograph". [ 1 ] Reviewer John Baylis adds that "for clearly exhibiting coding theory as a showpiece of applied modern algebra I haven't seen any to beat this one". [ 6 ] [ 9 ]
Other books in this area include The Theory of Error-Correcting Codes (1977) by Jessie MacWilliams and Neil Sloane , [ 5 ] and A First Course in Coding Theory (1988) by Raymond Hill. [ 6 ] | https://en.wikipedia.org/wiki/Introduction_to_the_Theory_of_Error-Correcting_Codes |
Introgressive hybridization, also known as introgression , is the flow of genetic material between divergent lineages via repeated backcrossing . In plants, this backcrossing occurs when an F₁ generation hybrid breeds with one or both of its parental species.
Although some genera of plants hybridize and introgress more easily than others, in certain scenarios external factors may contribute to an increased rate of hybridization. The phenomenon known as Hybridization of the Habitat echoes this idea: disturbances in a natural habitat can lead species that typically do not hybridize to do so, and to backcross, with relative ease. Plant breeders also manipulate their subjects to hybridize in order to optimize hardiness, appearance, or other desired traits. [ 1 ] This type of hybridization has been particularly impactful for the production of many crop species, including but not limited to certain types of rice, corn, wheat, barley, and rye. Natural introgression can occur in many genera and species, but manipulating the gene pool with artificial or forced introgression is useful for honing in on desired characteristics, such as drought tolerance or pest resistance. [ 2 ]
In the early days of hybrid research, it was commonly believed that hybridization was rare in nature because it would mostly produce sterile or unfit offspring. Through experimentation and improved phylogenetic testing capabilities, it is now clear that the ability to produce fertile hybrid offspring varies by genus within the plant kingdom. [ 3 ] A few examples of species with the capacity to produce fertile hybrids are given below.
One of the most significant early studies of plant hybridization involved three species of irises. Although they commonly form crosses where their natural habitats overlap, there is no evidence that Iris fulva , Iris hexagona , or Iris brevicaulis are closely related, and their phenotypic differences (color/pattern/size) are distinct. Once introgression occurs, the resulting offspring display a wide array of color combinations, as well as varying flower size. Iris fulva shows a tendency for asymmetrical introgression, transferring more genetic material into hybrid offspring than either Iris hexagona or Iris brevicaulis . [ 4 ]
Differential introgression of chloroplast and nuclear genomes was first observed in the common sunflower ( Helianthus annuus ssp. texanus ). Within a particular region, the population showed differences in morphological features which suggested hybridization with H. debilis ssp. cucumerifolius . Researchers discovered that these H. a. texanus plants contained chloroplast DNA from H. d. cucumerifolius , indicating that introgression had occurred in one direction. [ 5 ]
Hybridization among poplars is common wherever populations overlap; however, the degree of introgression varies greatly depending on the species. One study exploring the extent of introgression among three species of poplar trees ( P. balsamifera , P. angustifolia , and P. trichocarpa ), conducted along the Rocky Mountain range in the U.S. and Canada, found extensive introgression in areas where the species converge. Genomic sequencing even revealed a trispecies hybrid in these overlapping areas. [ 6 ] Another study found a hybrid zone in Utah with unidirectional introgression between P. angustifolia and P. fremontii . [ 7 ]
Introgression has played a major role in the development of wheat for crop production. One of the ways crop species can be manipulated is by crossing them with wild-type species . For instance, the wild wheat relative Agropyron elongatum has been crossed and introgressed with the domesticated wheat Triticum aestivum . The resulting hybrids have greater water-stress adaptation and higher root and shoot biomass, both of which can improve the fitness of the crop. [ 8 ]
Daffodils (genus Narcissus ) are able to produce semi-fertile or fertile offspring, even from wide crosses. The ability of daffodils, such as the yellow trumpet Narcissi and Poets’ Narcissi, to hybridize and backcross accounts for the vast variety of options modern-day gardeners have to select from. [ 3 ] Although daffodils do hybridize and introgress in nature, artificial introgression allows breeders to take species that are geographically separated and make unique crosses that would not appear naturally. [ citation needed ] | https://en.wikipedia.org/wiki/Introgressive_hybridization_in_plants |
An introitus is an entrance into a canal or hollow organ . The vaginal introitus is the opening that leads to the vaginal canal. [ 1 ]
| https://en.wikipedia.org/wiki/Introitus |
An intron is any nucleotide sequence within a gene that is not expressed or operative in the final RNA product. The word intron is derived from the term intragenic region , i.e., a region inside a gene. [ 1 ] The term intron refers to both the DNA sequence within a gene and the corresponding RNA sequence in RNA transcripts . [ 2 ] The non-intron sequences that become joined by this RNA processing to form the mature RNA are called exons . [ 3 ]
Introns are found in the genes of most eukaryotes and many eukaryotic viruses, and they can be located in both protein-coding genes and genes that function as RNA ( noncoding genes ). There are four main types of introns: tRNA introns, group I introns, group II introns, and spliceosomal introns (see below). Introns are rare in Bacteria and Archaea (prokaryotes).
Introns were first discovered in protein-coding genes of adenovirus , [ 4 ] [ 5 ] and were subsequently identified in genes encoding transfer RNA and ribosomal RNA. Introns are now known to occur in a wide variety of genes in organisms, bacteria, [ 6 ] and viruses across all of the biological kingdoms.
The fact that genes were split or interrupted by introns was discovered independently in 1977 by Phillip Allen Sharp and Richard J. Roberts , for which they shared the Nobel Prize in Physiology or Medicine in 1993. [ 7 ] The researchers and collaborators in their labs who performed the experiments leading to the discovery, Susan Berget and Louise Chow , were excluded from the credit. [ 8 ] [ 9 ] The term intron was introduced by American biochemist Walter Gilbert : [ 1 ]
"The notion of the cistron [i.e., gene] ... must be replaced by that of a transcription unit containing regions which will be lost from the mature messenger – which I suggest we call introns (for intragenic regions) – alternating with regions which will be expressed – exons." (Gilbert 1978)
The term intron also refers to intracistron , i.e., an additional piece of DNA that arises within a cistron . [ 10 ]
Although introns are sometimes called intervening sequences , [ 11 ] the term "intervening sequence" can refer to any of several families of internal nucleic acid sequences that are not present in the final gene product, including inteins , untranslated regions (UTR), and nucleotides removed by RNA editing , in addition to introns.
The frequency of introns within different genomes is observed to vary widely across the spectrum of biological organisms. For example, introns are extremely common within the nuclear genome of jawed vertebrates (e.g. humans, mice, and pufferfish (fugu)), where protein-coding genes almost always contain multiple introns, while introns are rare within the nuclear genes of some eukaryotic microorganisms, [ 12 ] for example baker's/brewer's yeast ( Saccharomyces cerevisiae ). In contrast, the mitochondrial genomes of vertebrates are entirely devoid of introns, while those of eukaryotic microorganisms may contain many introns. [ 13 ]
A particularly extreme case is the Drosophila dhc7 gene, which contains a ≥3.6 megabase (Mb) intron and takes roughly three days to transcribe. [ 14 ] [ 15 ] At the other extreme, a 2015 study suggests that the shortest known metazoan intron length is 30 base pairs (bp), belonging to the human MST1L gene. [ 16 ] The shortest known introns overall belong to the heterotrich ciliates, such as Stentor coeruleus , in which most (>95%) introns are 15 or 16 bp long. [ 17 ]
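The three-day transcription time quoted for dhc7 can be sanity-checked from the intron length alone. The sketch below is a back-of-envelope estimate; the elongation rates are assumptions (RNA polymerase II is commonly estimated at roughly 1–4 kb per minute in vivo), not figures taken from the cited studies.

```python
# Back-of-envelope transcription time for a multi-megabase intron.

INTRON_LENGTH_BP = 3_600_000  # the >=3.6 Mb Drosophila dhc7 intron

for rate_kb_per_min in (1.0, 1.5, 4.0):  # assumed Pol II elongation rates
    minutes = INTRON_LENGTH_BP / (rate_kb_per_min * 1_000)
    print(f"at {rate_kb_per_min} kb/min: {minutes / (60 * 24):.1f} days")
```

At the slower assumed rates the estimate lands in the 1.5–2.5 day range, consistent with the multi-day figure in the text once pausing and processivity losses are considered.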
Splicing of all intron-containing RNA molecules is superficially similar, as described above. However, different types of introns were identified through the examination of intron structure by DNA sequence analysis, together with genetic and biochemical analysis of RNA splicing reactions. At least four distinct classes of introns have been identified: spliceosomal introns in nuclear pre-mRNA, tRNA introns, group I introns, and group II introns.
Group III introns are proposed to be a fifth family, but little is known about the biochemical apparatus that mediates their splicing. They appear to be related to group II introns, and possibly to spliceosomal introns. [ 18 ]
Nuclear pre-mRNA introns (spliceosomal introns) are characterized by specific intron sequences located at the boundaries between introns and exons. [ 19 ] These sequences are recognized by spliceosomal RNA molecules when the splicing reactions are initiated. [ 20 ] In addition, they contain a branch point, a particular nucleotide sequence near the 3' end of the intron that becomes covalently linked to the 5' end of the intron during the splicing process, generating a branched, loop-with-a-tail intermediate known as a lariat intron. Apart from these three short conserved elements, nuclear pre-mRNA intron sequences are highly variable. Nuclear pre-mRNA introns are often much longer than their surrounding exons.
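As a concrete illustration of the exon-joining step, here is a minimal splicing sketch. The toy sequence and intron coordinates are invented for illustration, and only the terminal GU/AG dinucleotides are checked; real splice-site recognition also depends on the branch point and much wider sequence context.

```python
# Minimal sketch: excise annotated introns from a pre-mRNA and join the exons.

def splice(pre_mrna: str, introns: list[tuple[int, int]]) -> str:
    """Return the mature RNA with the given (start, end) introns removed."""
    mature, pos = [], 0
    for start, end in sorted(introns):
        assert pre_mrna[start:start + 2] == "GU"  # canonical 5' donor site
        assert pre_mrna[end - 2:end] == "AG"      # canonical 3' acceptor site
        mature.append(pre_mrna[pos:start])        # keep the upstream exon
        pos = end                                 # skip over the intron
    mature.append(pre_mrna[pos:])                 # keep the final exon
    return "".join(mature)

pre = "AUGGCC" + "GUAAGUCCUUAG" + "GGAUAA"        # exon 1 + intron + exon 2
print(splice(pre, [(6, 18)]))                     # prints AUGGCCGGAUAA
```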
Transfer RNA introns that depend upon proteins for removal occur at a specific location within the anticodon loop of unspliced tRNA precursors, and are removed by a tRNA splicing endonuclease. The exons are then linked together by a second protein, the tRNA splicing ligase. [ 21 ] Note that self-splicing introns are also sometimes found within tRNA genes. [ 22 ]
Group I and group II introns are found in genes encoding proteins ( messenger RNA ), transfer RNA and ribosomal RNA in a very wide range of living organisms. [ 23 ] [ 24 ] Following transcription into RNA, group I and group II introns also make extensive internal interactions that allow them to fold into a specific, complex three-dimensional architecture . These complex architectures allow some group I and group II introns to be self-splicing , that is, the intron-containing RNA molecule can rearrange its own covalent structure so as to precisely remove the intron and link the exons together in the correct order. In some cases, particular intron-binding proteins are involved in splicing, acting in such a way that they assist the intron in folding into the three-dimensional structure that is necessary for self-splicing activity. Group I and group II introns are distinguished by different sets of internal conserved sequences and folded structures, and by the fact that splicing of RNA molecules containing group II introns generates branched introns (like those of spliceosomal RNAs), while group I introns use a non-encoded guanosine nucleotide (typically GTP) to initiate splicing, adding it on to the 5'-end of the excised intron.
The spliceosome is a very complex structure containing up to one hundred proteins and five different RNAs. The substrate of the reaction is a long RNA molecule and the transesterification reactions catalyzed by the spliceosome require the bringing together of sites that may be thousands of nucleotides apart. [ 25 ] [ 26 ] All biochemical reactions are associated with known error rates and the more complicated the reaction the higher the error rate. Therefore, it is not surprising that the splicing reaction catalyzed by the spliceosome has a significant error rate even though there are spliceosome accessory factors that suppress the accidental cleavage of cryptic splice sites. [ 27 ]
Under ideal circumstances, the splicing reaction is likely to be 99.999% accurate (an error rate of 10 −5 ), with the correct exons joined and the correct intron removed. [ 28 ] However, these ideal conditions require very close matches to the best splice site sequences and the absence of any competing cryptic splice site sequences within the introns, and those conditions are rarely met in large eukaryotic genes that may span more than 40 kilobase pairs. Recent studies have shown that the actual error rate can be considerably higher than 10 −5 and may be as high as 2% or 3% errors (an error rate of 2–3 × 10 −2 ) per gene. [ 29 ] [ 30 ] [ 31 ] Additional studies suggest that the error rate is no less than 0.1% per intron. [ 32 ] [ 33 ] This relatively high level of splicing errors explains why most splice variants are rapidly degraded by nonsense-mediated decay. [ 34 ] [ 35 ]
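To see how per-intron error rates compound over a multi-intron gene, the short calculation below computes the expected fraction of fully correctly spliced transcripts. It assumes, simplistically, that splicing errors are independent across introns; the rates and intron counts are chosen to bracket the figures quoted above.

```python
# Fraction of transcripts with every intron spliced correctly, assuming
# independent per-intron error probabilities.

def fraction_correct(per_intron_error: float, n_introns: int) -> float:
    return (1.0 - per_intron_error) ** n_introns

for p in (1e-5, 1e-3, 2e-2):      # ideal, ~0.1%, and ~2% per-intron error rates
    for n in (8, 50):             # average human gene vs. an intron-rich gene
        print(f"error {p:g}/intron, {n} introns: "
              f"{fraction_correct(p, n):.1%} fully correct")
```

Even a 0.1% per-intron error rate leaves a measurable background of mis-spliced transcripts in an intron-rich gene, which is consistent with the degradation role of nonsense-mediated decay described above.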
The presence of sloppy binding sites within genes causes splicing errors, and it may seem strange that these sites have not been eliminated by natural selection. The argument for their persistence is similar to the argument for junk DNA. [ 32 ] [ 36 ]
Although mutations which create or disrupt binding sites may be slightly deleterious, the large number of possible such mutations makes it inevitable that some will reach fixation in a population. This is particularly relevant in species, such as humans, with relatively small long-term effective population sizes. It is plausible, then, that the human genome carries a substantial load of suboptimal sequences which cause the generation of aberrant transcript isoforms. In this study, we present direct evidence that this is indeed the case. [ 32 ]
While the catalytic reaction may be accurate enough for effective processing most of the time, the overall error rate may be partly limited by the fidelity of transcription because transcription errors will introduce mutations that create cryptic splice sites. In addition, the transcription error rate of 10 −5 – 10 −6 is high enough that one in every 25,000 transcribed exons will have an incorporation error in one of the splice sites leading to a skipped intron or a skipped exon. Almost all multi-exon genes will produce incorrectly spliced transcripts but the frequency of this background noise will depend on the size of the genes, the number of introns, and the quality of the splice site sequences. [ 30 ] [ 33 ]
In some cases, splice variants are produced by mutations in the gene (DNA). These can be single-nucleotide polymorphisms (SNPs) that create a cryptic splice site or mutate a functional site. They can also be somatic cell mutations that affect splicing in a particular tissue or cell line. [ 37 ] [ 38 ] [ 39 ] When the mutant allele is in a heterozygous state, this results in production of two abundant splice variants: one functional and one non-functional. In the homozygous state the mutant alleles may cause a genetic disease, such as the hemophilia found in descendants of Queen Victoria , where a mutation in one of the introns in a blood clotting factor gene creates a cryptic 3' splice site, resulting in aberrant splicing. [ 40 ] A significant fraction of human deaths by disease may be caused by mutations that interfere with normal splicing, mostly by creating cryptic splice sites. [ 41 ] [ 38 ]
Incorrectly spliced transcripts can easily be detected and their sequences entered into the online databases. They are usually described as "alternatively spliced" transcripts, which can be confusing because the term does not distinguish between real, biologically relevant, alternative splicing and processing noise due to splicing errors. One of the central issues in the field of alternative splicing is working out the differences between these two possibilities. Many scientists have argued that the null hypothesis should be splicing noise, putting the burden of proof on those who claim biologically relevant alternative splicing. According to those scientists, the claim of function must be accompanied by convincing evidence that multiple functional products are produced from the same gene. [ 42 ] [ 43 ]
While introns do not encode protein products, they are integral to gene expression regulation. Some introns themselves encode functional RNAs through further processing after splicing to generate noncoding RNA molecules. [ 44 ] Alternative splicing is widely used to generate multiple proteins from a single gene. Furthermore, some introns play essential roles in a wide range of gene expression regulatory functions such as nonsense-mediated decay [ 45 ] and mRNA export. [ 46 ]
After the initial discovery of introns in protein-coding genes of the eukaryotic nucleus, there was significant debate as to whether introns in modern-day organisms were inherited from a common ancient ancestor (termed the introns-early hypothesis), or whether they appeared in genes rather recently in the evolutionary process (termed the introns-late hypothesis). Another theory is that the spliceosome and the intron-exon structure of genes is a relic of the RNA world (the introns-first hypothesis). [ 47 ] There is still considerable debate about which of these hypotheses is most correct, but the popular consensus at the moment is that, following the formation of the first eukaryotic cell, group II introns from the bacterial endosymbiont invaded the host genome. In the beginning these self-splicing introns excised themselves from the mRNA precursor, but over time some of them lost that ability and their excision had to be aided in trans by other group II introns. Eventually a number of specific trans-acting introns evolved, and these became the precursors to the snRNAs of the spliceosome. The efficiency of splicing was improved by association with stabilizing proteins to form the primitive spliceosome. [ 48 ] [ 49 ] [ 50 ] [ 51 ]
Early studies of genomic DNA sequences from a wide range of organisms show that the intron-exon structure of homologous genes in different organisms can vary widely. [ 52 ] More recent studies of entire eukaryotic genomes have now shown that the lengths and density (introns/gene) of introns vary considerably between related species. For example, while the human genome contains an average of 8.4 introns/gene (139,418 in the genome), the unicellular fungus Encephalitozoon cuniculi contains only 0.0075 introns/gene (15 introns in the genome). [ 53 ] Since eukaryotes arose from a common ancestor ( common descent ), there must have been extensive gain or loss of introns during evolutionary time. [ 54 ] [ 55 ] This process is thought to be subject to selection, with a tendency towards intron gain in larger species due to their smaller population sizes, and the converse in smaller (particularly unicellular) species. [ 56 ] Biological factors also influence which genes in a genome lose or accumulate introns. [ 57 ] [ 58 ] [ 59 ]
Alternative splicing of exons within a gene after intron excision acts to introduce greater variability of protein sequences translated from a single gene, allowing multiple related proteins to be generated from a single gene and a single precursor mRNA transcript. The control of alternative RNA splicing is performed by a complex network of signaling molecules that respond to a wide range of intracellular and extracellular signals.
Introns contain several short sequences that are important for efficient splicing, such as acceptor and donor sites at either end of the intron as well as a branch point site, which are required for proper splicing by the spliceosome . Some introns are known to enhance the expression of the gene that they are contained in by a process known as intron-mediated enhancement (IME).
Actively transcribed regions of DNA frequently form R-loops that are vulnerable to DNA damage . In highly expressed yeast genes, introns inhibit R-loop formation and the occurrence of DNA damage. [ 60 ] Genome-wide analysis in both yeast and humans revealed that intron-containing genes have decreased R-loop levels and decreased DNA damage compared to intronless genes of similar expression. [ 60 ] Insertion of an intron within an R-loop prone gene can also suppress R-loop formation and recombination . Bonnet et al. (2017) [ 60 ] speculated that the function of introns in maintaining genetic stability may explain their evolutionary maintenance at certain locations, particularly in highly expressed genes.
The physical presence of introns promotes cellular resistance to starvation via intron enhanced repression of ribosomal protein genes of nutrient-sensing pathways. [ 61 ]
Introns may be lost or gained over evolutionary time, as shown by many comparative studies of orthologous genes. Subsequent analyses have identified thousands of examples of intron loss and gain events, and it has been proposed that the emergence of eukaryotes, or the initial stages of eukaryotic evolution, involved an intron invasion. [ 62 ] Two definitive mechanisms of intron loss have been identified and are known to occur: reverse transcriptase-mediated intron loss (RTMIL) and genomic deletions. [ 63 ] The definitive mechanisms of intron gain, however, remain elusive and controversial. At least seven mechanisms of intron gain have been reported thus far: intron transposition, transposon insertion, tandem genomic duplication, intron transfer, intron gain during double-strand break repair (DSBR), insertion of a group II intron, and intronization. In theory it should be easiest to deduce the origin of recently gained introns, due to the lack of host-induced mutations, yet even introns gained recently did not arise from any of the aforementioned mechanisms. These findings raise the question of whether the proposed mechanisms of intron gain fail to describe the mechanistic origin of many novel introns because they are not accurate mechanisms of intron gain, or whether there are other, yet to be discovered, processes generating novel introns. [ 64 ]
In intron transposition, the most commonly purported intron gain mechanism, a spliced intron is thought to reverse splice into either its own mRNA or another mRNA at a previously intron-less position. This intron-containing mRNA is then reverse transcribed and the resulting intron-containing cDNA may then cause intron gain via complete or partial recombination with its original genomic locus.
Transposon insertions have been shown to generate thousands of new introns across diverse eukaryotic species. [ 65 ] Transposon insertions sometimes result in the duplication of a short target sequence on each side of the transposon. Such an insertion can intronize the transposon without disrupting the coding sequence, when the transposon inserts into an AGGT sequence or encodes the splice sites within its own sequence. Where intron-generating transposons do not create target site duplications, the elements include both splice sites, GT (5') and AG (3'), and are thereby spliced precisely without affecting the protein-coding sequence. [ 65 ] It is not yet understood why these elements are spliced, whether by chance or by some preferential action of the transposon.
In tandem genomic duplication, due to the similarity between consensus donor and acceptor splice sites, which both closely resemble AGGT, the tandem genomic duplication of an exonic segment harboring an AGGT sequence generates two potential splice sites. When recognized by the spliceosome, the sequence between the original and duplicated AGGT is spliced out, resulting in the creation of an intron without alteration of the coding sequence of the gene. Double-strand break repair via non-homologous end joining was recently identified as a source of intron gain when researchers identified short direct repeats flanking 43% of gained introns in Daphnia . [ 64 ] These numbers must, however, be compared with the number of conserved introns flanked by repeats in other organisms for statistical relevance. For group II intron insertion, the retrohoming of a group II intron into a nuclear gene was proposed to cause recent spliceosomal intron gain.
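The tandem-duplication mechanism can be shown with a short worked example. The sequence below is invented, and splicing is modeled naively as removal of the GT...AG stretch created between the two AGGT motifs; it sketches the logic of the mechanism, not actual spliceosome behavior.

```python
# Intron gain by tandem genomic duplication of an AGGT-containing segment.

exon = "ATGCCAGGTACGATT"       # hypothetical exonic sequence containing AGGT
seg_start, seg_end = 3, 12     # duplicate "CCAGGTACG", which spans the AGGT

duplicated = exon[:seg_end] + exon[seg_start:seg_end] + exon[seg_end:]
print("after duplication:", duplicated)   # ATGCCAGGTACGCCAGGTACGATT

# The stretch from the GT of the first AGGT to the AG of the second AGGT
# now looks like a GT...AG intron; splicing it out restores the original.
donor = duplicated.index("AGGT") + 2              # start of the new intron (GT)
acceptor = duplicated.index("AGGT", donor) + 2    # end of the new intron (after AG)
spliced = duplicated[:donor] + duplicated[acceptor:]
print("after splicing:  ", spliced)
print("coding sequence unchanged:", spliced == exon)  # True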
Intron transfer has been hypothesized to result in intron gain when a paralog or pseudogene gains an intron and then transfers this intron via recombination to an intron-absent location in its sister paralog. Intronization is the process by which mutations create novel introns from formerly exonic sequence. Thus, unlike other proposed mechanisms of intron gain, this mechanism does not require the insertion or generation of DNA to create a novel intron. [ 64 ]
The only hypothesized mechanism of recent intron gain lacking any direct evidence is that of group II intron insertion, which, when demonstrated in vivo, abolishes gene expression. [ 66 ] Group II introns are therefore likely the presumed ancestors of spliceosomal introns, acting as site-specific retroelements, and are no longer responsible for intron gain. [ 67 ] [ 68 ] Tandem genomic duplication is the only proposed mechanism with supporting in vivo experimental evidence: a short intragenic tandem duplication can insert a novel intron into a protein-coding gene, leaving the corresponding peptide sequence unchanged. [ 69 ] This mechanism also has extensive indirect evidence supporting the idea that tandem genomic duplication is a prevalent mechanism for intron gain. The other proposed mechanisms, particularly intron gain during DSBR, intron transfer, and intronization, can be tested but remain to be demonstrated in vivo to establish them as actual mechanisms of intron gain. Further genomic analyses, especially when executed at the population level, may then quantify the relative contribution of each mechanism, possibly identifying species-specific biases that may shed light on varied rates of intron gain amongst different species. [ 64 ]
| https://en.wikipedia.org/wiki/Intron |
Intron-mediated enhancement (IME) is the ability of an intron sequence to enhance the expression of a gene containing that intron. In particular, the intron must be present in the transcribed region of the gene for enhancement to occur, differentiating IME from the action of typical transcriptional enhancers . [ 1 ] Descriptions of this phenomenon were first published in cultured maize cells in 1987, [ 2 ] and the term "intron-mediated enhancement" was subsequently coined in 1990. [ 3 ] A number of publications have demonstrated that this phenomenon is conserved across eukaryotes , including humans, [ 4 ] mice, [ 5 ] Arabidopsis , [ 6 ] rice, [ 7 ] [ 8 ] and C. elegans . [ 9 ] However, the mechanism(s) by which IME works are still not completely understood. [ 10 ]
When testing whether a given intron enhances the expression of a gene, it is typical to compare the expression of two constructs , one containing the intron and one without it, and to express the difference between the two results as a " fold increase " in enhancement. Further experiments can specifically point to IME as the cause of expression enhancement: one of the most common is to move the intron upstream of the transcription start site, removing it from the transcript. If the intron can no longer enhance expression, then inclusion of the intron in the transcript is important, and the intron probably causes IME.
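A minimal sketch of this paired-construct comparison follows. The expression values and the threshold for deciding that upstream placement abolishes enhancement are illustrative assumptions; a real assay would use replicates, normalization to a co-transformed reporter, and statistical testing.

```python
# Paired-construct test for intron-mediated enhancement (IME).

def fold_increase(expr_with: float, expr_without: float) -> float:
    """Fold increase in expression attributed to the intron."""
    return expr_with / expr_without

# Hypothetical reporter measurements (arbitrary units):
intron_in_transcript = 420.0   # intron inside the transcribed region
intronless_control = 35.0      # same construct without the intron
intron_upstream = 38.0         # intron moved upstream of the start site

print(f"fold increase: {fold_increase(intron_in_transcript, intronless_control):.1f}x")
if fold_increase(intron_upstream, intronless_control) < 1.5:  # assumed cutoff
    print("enhancement lost when the intron leaves the transcript -> likely IME")
```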
Not all introns enhance gene expression, but those that do can enhance expression from 2-fold to more than 1,000-fold relative to an intronless control. [ 11 ] In Arabidopsis and other plant species, the IMEter has been developed to calculate the likelihood that an intron sequence will enhance gene expression. [ 12 ] It does this by calculating a score based on the patterns of nucleotide sequences within the target sequence. The position of an intron within the transcript is also important: the closer an intron is to the start (5' end) of a transcript, the greater its enhancement of gene expression. [ 13 ] | https://en.wikipedia.org/wiki/Intron-mediated_enhancement |
The Intronerator is a database of alternatively spliced genes and a database of introns for Caenorhabditis elegans . [ 1 ]
A working copy of the Intronerator no longer exists as of 2003.
Equivalent functions can be performed with the U.C. Santa Cruz genome browser.
| https://en.wikipedia.org/wiki/Intronerator |
Introspection is the examination of one's own conscious thoughts and feelings . [ 1 ] In psychology , the process of introspection relies on the observation of one's mental state , while in a spiritual context it may refer to the examination of one's soul . [ 2 ] Introspection is closely related to human self-reflection and self-discovery and is contrasted with external observation .
It generally provides a privileged access to one's own mental states, [ 3 ] not mediated by other sources of knowledge, so that individual experience of the mind is unique. Introspection can determine any number of mental states including: sensory, bodily, cognitive, emotional and so forth. [ 3 ]
Introspection has been a subject of philosophical discussion for thousands of years. The philosopher Plato asked, "...why should we not calmly and patiently review our own thoughts, and thoroughly examine and see what these appearances in us really are?" [ 4 ] [ 5 ] While introspection is applicable to many facets of philosophical thought it is perhaps best known for its role in epistemology ; in this context introspection is often compared with perception , reason , memory , and testimony as a source of knowledge . [ 6 ]
It has often been claimed that Wilhelm Wundt , the father of experimental psychology, was the first to adopt introspection to experimental psychology [ 1 ] though the methodological idea had been presented long before, as by 18th century German philosopher-psychologists such as Alexander Gottlieb Baumgarten or Johann Nicolaus Tetens . [ 7 ] Later writers have warned that Wundt's views on introspection must be approached with great care. [ 8 ] Wundt was influenced by notable physiologists , such as Gustav Fechner , who used a kind of controlled introspection as a means to study human sensory organs. Building upon that use of introspection in physiology, Wundt believed introspection included the ability to directly observe one's own experiences (not just to logically reflect on them or speculate about them, though some others misinterpreted his views in this way). [ 9 ] Wundt imposed exacting control over the study of introspection in his experimental laboratory at the University of Leipzig , [ 1 ] making it possible for other scientists to replicate his experiments elsewhere, a development that proved essential to the development of psychology as a modern, peer-reviewed scientific discipline. Such exact purism was typical of Wundt. He prepared a set of instructions to be followed by every observer in his laboratory during studies of introspection: "1) the Observer must, if possible, be in a position to determine beforehand the entrance of the process to be observed. 2) the introspectionist must, as far as possible, grasp the phenomenon in a state of strained attention and follow its course. 3) Every observation must, in order to make certain, be capable of being repeated several times under the same conditions and 4) the conditions under which the phenomenon appears must be found out by the variation of the attendant circumstances and when this was done the various coherent experiments must be varied according to a plan partly by eliminating certain stimuli and partly by grading their strength and quality". [ 9 ]
Edward Titchener was an early pioneer in experimental psychology and a student of Wilhelm Wundt. [ 1 ] After earning his doctorate under Wundt at the University of Leipzig, he made his way to Cornell University , where he established his own laboratory and research. [ 1 ] When Titchener arrived at Cornell in 1894, psychology was still a fledgling discipline, especially in the United States, and he was a key figure in bringing Wundt's ideas to America. However, Titchener misrepresented some of Wundt's ideas to the American psychological establishment, especially in his account of introspection which, Titchener taught, only served a purpose in the qualitative analysis of consciousness into its various parts, [ 1 ] while Wundt saw it as a means to quantitatively measure the whole of conscious experience. [ 1 ] Titchener was exclusively interested in the individual components that comprise conscious experience, while Wundt, seeing little purpose in the analysis of individual components, focused on synthesis of these components. Titchener's ideas formed the basis of the short-lived psychological theory of structuralism . [ 1 ]
American historiography of introspection, according to some authors, [ 10 ] [ 11 ] is dominated by three misconceptions. In particular, historians of psychology tend to argue 1) that introspection once was the dominant method of psychological inquiry, 2) that behaviorism, and in particular John B. Watson , was responsible for discrediting introspection as a valid method, and 3) that scientific psychology completely abandoned introspection as a result of those critiques. [ 10 ] However, introspection may not have been the dominant method. It was widely believed to be dominant because Edward Titchener 's student Edwin G. Boring , in his influential historical accounts of experimental psychology, privileged Titchener's views while giving little credit to original sources. [ 10 ] Introspection has been critiqued by many other psychologists, including Wilhelm Wundt and Knight Dunlap , who presented a non-behaviorist argument against self-observation. [ 12 ] Introspection is still widely used in psychology, but now implicitly, as self-report surveys, interviews and some fMRI studies are based on introspection. [ 11 ] : 4 It is not the method but rather its name that has been dropped from the dominant psychological vocabulary.
Partly as a result of Titchener's misrepresentation, the use of introspection diminished after his death and the subsequent decline of structuralism. [ 1 ] Later psychological movements, such as functionalism and behaviorism , rejected introspection for its lack of scientific reliability among other factors. [ 1 ] Functionalism originally arose in direct opposition to structuralism, opposing its narrow focus on the elements of consciousness [ 1 ] and emphasizing the purpose of consciousness and other psychological behavior. Behaviorism's objection to introspection focused much more on its unreliability and subjectivity which conflicted with behaviorism's focus on measurable behavior. [ 1 ] [ 13 ]
The more recently established cognitive psychology movement has to some extent accepted introspection's usefulness in the study of psychological phenomena, though generally only in experiments pertaining to internal thought conducted under experimental conditions. For example, in the " think aloud protocol ", investigators cue participants to speak their thoughts aloud in order to study an active thought process without forcing an individual to comment on the process itself. [ 14 ]
Already in the 18th century authors had criticized the use of introspection, both for knowing one's own mind and as a method for psychology. David Hume pointed out that introspecting a mental state tends to alter the very state itself; a German author, Christian Gottfried Schütz , noted that introspection is often described as mere "inner sensation", but actually requires also attention, that introspection does not get at unconscious mental states, and that it cannot be used naively — one needs to know what to look for. Immanuel Kant added that, if they are understood too narrowly, introspective experiments are impossible. Introspection delivers, at best, hints about what goes on in the mind; it does not suffice to justify knowledge claims about the mind. [ 15 ] Similarly, the idea continued to be discussed between John Stuart Mill and Auguste Comte . Recent psychological research on cognition and attribution has asked people to report on their mental processes, for instance to say why they made a particular choice or how they arrived at a judgment. In some situations, these reports are clearly confabulated . [ 16 ] For example, people justify choices they have not in fact made. [ 17 ] Such results undermine the idea that those verbal reports are based on direct introspective access to mental content. Instead, judgements about one's own mind seem to be inferences from overt behavior, similar to judgements made about another person. [ 16 ] However, it is hard to assess whether these results only apply to unusual experimental situations, or if they reveal something about everyday introspection. [ 18 ] The theory of the adaptive unconscious suggests that a very large proportion of mental processes, even "high-level" processes like goal-setting and decision-making, are inaccessible to introspection. [ 19 ] Indeed, it is questionable how confident researchers can be in their own introspections.
One of the central implications of dissociations between consciousness and meta-consciousness is that individuals, presumably including researchers, can misrepresent their experiences to themselves. Jack and Roepstorff assert, '...there is also a sense in which subjects simply cannot be wrong about their own experiential states.' Presumably they arrived at this conclusion by drawing on the seemingly self-evident quality of their own introspections, and assumed that it must equally apply to others. However, when we consider research on the topic, this conclusion seems less self-evident. If, for example, extensive introspection can cause people to make decisions that they later regret, then one very reasonable possibility is that the introspection caused them to 'lose touch with their feelings'. In short, empirical studies suggest that people can fail to appraise adequately (i.e. are wrong about) their own experiential states.
Another question regarding the accuracy of introspection is: if researchers lack confidence in their own introspections and those of their participants, how can introspection gain legitimacy? Three strategies are available: identifying behaviors that establish credibility, finding common ground that enables mutual understanding, and developing a trust that allows one to know when to give the benefit of the doubt.
That is to say, words are only meaningful if validated by one's actions: when people report strategies, feelings or beliefs, their behaviors must correspond with these statements if they are to be believed. [ 20 ]
Even when their introspections are uninformative, people still give confident descriptions of their mental processes, being "unaware of their unawareness". [ 21 ] This phenomenon has been termed the introspection illusion and has been used to explain some cognitive biases [ 22 ] and belief in some paranormal phenomena. [ 23 ] When making judgements about themselves, subjects treat their own introspections as reliable, whereas they judge other people based on their behavior. [ 24 ] This can lead to illusions of superiority . For example, people generally see themselves as less conformist than others, and this seems to be because they do not introspect any urge to conform. [ 25 ] Another reliable finding is that people generally see themselves as less biased than everyone else , because they are not likely to introspect any biased thought processes. [ 24 ]
One experiment tried to give their subjects access to others' introspections. They made audio recordings of subjects who had been told to say whatever came into their heads as they answered a question about their own bias. [ 24 ] Although subjects persuaded themselves they were unlikely to be biased, their introspective reports did not sway the assessments of observers. When subjects were explicitly told to avoid relying on introspection, their assessments of their own bias became more realistic. [ 24 ]
In Buddhism , Sampajañña refers to "the mental process by which one continuously monitors one's own body and mind. In the practice of śamatha , its principal function is to note the occurrence of laxity and excitation." [ 26 ] It is of central importance for meditative practice in all Buddhist traditions . [ citation needed ]
In Judaism , particularly in the teachings of the practitioners of Mussar a person could achieve progress in perfecting their character traits through a daily "Cheshbon Hanefesh," or Accounting of the Soul. In the practice of Cheshbon Hanefesh, a person introspects about themselves, their day, their faults, progress, and so on, and over time can use the data and process to change behavior and thoughts. Introspection is encouraged during the penitent season in the month of Elul in order to correct the year's sins through repentance, which in Judaism begins with recalling and recognizing them. [ citation needed ]
In Christianity , perfection is not just the possession and preservation of sanctifying grace , since perfection is determined by one's action, although Christian mysticism has gained a renewed interest in western Christianity and is prominent in eastern Christianity . [ 27 ]
In Eastern Christianity some concepts addressing human needs, such as sober introspection ( nepsis ), require watchfulness of the human heart and the conflicts of the human nous , heart or mind. Noetic understanding can not be achieved by rational or discursive thought (i.e. systemization). [ citation needed ]
Rationalists view prayer as a way to help train a person to focus on divinity through philosophy and intellectual contemplation ( meditation ).
Jains practise pratikraman ( Sanskrit "introspection"), a process of repentance of wrongdoings during their daily life, and remind themselves to refrain from doing so again. Devout Jains often do Pratikraman at least twice a day. [ citation needed ] Many practice Pratikraman on holy days such as Samvatsari , or Forgiveness Day. [ 28 ]
Introspection is encouraged in schools such as Advaita Vedanta ; in order for one to know their own true nature, they need to reflect and introspect on their true nature—which is what meditation is. Especially, Swami Chinmayananda emphasised the role of introspection in five stages, outlined in his book "Self Unfoldment." [ citation needed ]
In Islam , greater jihad is the exertion of effort to internally struggle against one's evil inclinations. [ 29 ] [ 30 ] In Sufism , nafs is in its unrefined state "the ego", which is considered to be the lowest dimension of a person's inward existence—his animal and satanic nature. [ 31 ]
Interior monologue is the fiction-writing mode used to convey a character's silent thoughts, which may include introspective thoughts. As explained by Renni Browne and Dave King, "One of the great gifts of literature is that it allows for the expression of unexpressed thoughts..." [ 32 ]
According to Nancy Kress , a character's thoughts can greatly enhance a story: deepening characterization, increasing tension, and widening the scope of a story. [ 33 ] As outlined by Jack M. Bickham, thought plays a critical role in both scene and sequel . [ 34 ] | https://en.wikipedia.org/wiki/Introspection |
In quantum and theoretical chemistry , an intruder state is a particular situation arising in perturbative evaluations, where the energy of a perturber is comparable in magnitude to the energy associated with the zeroth-order wavefunction. In this case, a divergent behavior occurs, due to the nearly zero denominator in the expression of the perturbative correction.
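The divergence is easiest to see in the standard second-order Rayleigh–Schrödinger energy correction, reproduced below for a zeroth-order state Φ₀ under a perturbation V; when some perturber Φₖ (the intruder state) has Eₖ⁽⁰⁾ ≈ E₀⁽⁰⁾, its term in the sum blows up.

```latex
% Second-order energy correction; the term for an intruder state
% with E_k^{(0)} \approx E_0^{(0)} has a near-zero denominator.
E^{(2)} \;=\; \sum_{k \neq 0}
  \frac{\bigl|\langle \Phi_k \,|\, \hat{V} \,|\, \Phi_0 \rangle\bigr|^{2}}
       {E_0^{(0)} - E_k^{(0)}}
```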
Multi-reference wavefunction methods are not immune to intruder states. [ 1 ] [ 2 ] There are ways to identify them. [ 3 ] [ 4 ] The natural orbitals of the perturbation expansion are a useful diagnostic for detecting intruder state effects. [ 5 ] Sometimes what appears to be an intruder state is simply a change in basis. [ 1 ] [ 6 ]
| https://en.wikipedia.org/wiki/Intruder_state |