Ifinatamab deruxtecan ( DS-7300 ) is an experimental anti-cancer treatment developed by Merck and Daiichi Sankyo . It is an antibody–drug conjugate that "consists of an anti- B7-H3 antibody linked with a DNA topoisomerase I inhibiting anti-tumor agent". [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Ifinatamab_deruxtecan
In geology , igneous differentiation , or magmatic differentiation , is an umbrella term for the various processes by which magmas undergo bulk chemical change during the partial melting process, cooling, emplacement , or eruption . The sequence of (usually increasingly silicic) magmas produced by igneous differentiation is known as a magma series . When a rock melts to form a liquid, the liquid is known as a primary melt . Primary melts have not undergone any differentiation and represent the starting composition of a magma. In nature, primary melts are rarely seen. Some leucosomes of migmatites are examples of primary melts. Primary melts derived from the mantle are especially important and are known as primitive melts or primitive magmas . By finding the primitive magma composition of a magma series, it is possible to model the composition of the rock from which a melt was formed, which is important because we have little direct evidence of the Earth's mantle. Where it is impossible to find the primitive or primary magma composition, it is often useful to attempt to identify a parental melt. A parental melt is a magma composition from which the observed range of magma chemistries has been derived by the processes of igneous differentiation. It need not be a primitive melt. For instance, a series of basalt lava flows is assumed to be related to one another. A composition from which they could reasonably be produced by fractional crystallization is termed a parental melt . To prove this, fractional crystallization models would be produced to test the hypothesis that they share a common parental melt. The accumulations of crystals that form and segregate during the differentiation of a magmatic event are known as cumulate rocks , and those parts are the first which crystallize out of the magma. Identifying whether a rock is a cumulate or not is crucial for understanding if it can be modelled back to a primary melt or a primitive melt, and identifying whether the magma has dropped out cumulate minerals is equally important even for rocks which carry no phenocrysts . The primary cause of change in the composition of a magma is cooling , which is an inevitable consequence of the magma being formed and migrating from the site of partial melting into an area of lower stress - generally a cooler volume of the crust. Cooling causes the magma to begin to crystallize minerals from the melt or liquid portion of the magma. Most magmas are a mixture of liquid rock (melt) and crystalline minerals (phenocrysts). Contamination is another cause of magma differentiation. Contamination can be caused by assimilation of wall rocks, mixing of two or more magmas or even by replenishment of the magma chamber with fresh, hot magma. The whole gamut of mechanisms for differentiation has been referred to as the FARM process, which stands for fractional crystallization, assimilation, replenishment and magma mixing. Fractional crystallization is the removal and segregation from a melt of mineral precipitates, which changes the composition of the melt. This is one of the most important geochemical and physical processes operating within the Earth's crust and mantle . Fractional crystallization in silicate melts (magmas) is a very complex process compared to chemical systems in the laboratory because it is affected by a wide variety of phenomena. Prime amongst these are the composition, temperature, and pressure of a magma during its cooling.
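The effect of such crystal removal on a trace element left in the melt is commonly quantified with the Rayleigh fractionation law, C_L = C_0 * F^(D - 1), where C_0 is the concentration in the parental melt, F the fraction of melt remaining, and D the bulk solid/melt partition coefficient. The short Python sketch below illustrates the calculation; the concentration, melt fraction, and partition coefficient are illustrative assumptions, not values from any particular magma series.

```python
# Minimal sketch of Rayleigh fractional crystallization for one trace element.
# C_L = C_0 * F**(D - 1): the melt becomes enriched in incompatible elements
# (D < 1) as crystallization proceeds. All numbers below are illustrative.

def rayleigh_melt_concentration(c0_ppm, melt_fraction, bulk_d):
    """Concentration of a trace element in the residual melt (ppm)."""
    return c0_ppm * melt_fraction ** (bulk_d - 1.0)

# Example: an incompatible element (D = 0.1) starting at 10 ppm in the
# parental melt, after 60% crystallization (40% of the melt remaining).
print(rayleigh_melt_concentration(10.0, 0.4, 0.1))   # ~22.8 ppm
```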
The composition of a magma is the primary control on which mineral is crystallized as the melt cools down past the liquidus . For instance in mafic and ultramafic melts, the MgO and SiO 2 contents determine whether forsterite olivine is precipitated or whether enstatite pyroxene is precipitated. Two magmas of similar composition and temperature at different pressure may crystallize different minerals. An example is high-pressure and high-temperature fractional crystallization of granites to produce single- feldspar granite, and low-pressure low-temperature conditions which produce two-feldspar granites. The partial pressure of volatile phases in silicate melts is also of prime importance, especially in near- solidus crystallization of granites. Assimilation can be broadly defined as a process where a mass of magma wholly or partially homogenizes with materials derived from the wall rock of the magma body. [ 1 ] Assimilation is a popular mechanism to partly explain the felsification of ultramafic and mafic magmas as they rise through the crust: a hot primitive melt intruding into a cooler, felsic crust will melt the crust and mix with the resulting melt. [ 2 ] This then alters the composition of the primitive magma. Also, pre-existing mafic host rocks can be assimilated by very hot primitive magmas. [ 3 ] [ 4 ] Effects of assimilation on the chemistry and evolution of magma bodies are to be expected, and have been clearly proven in many places. In the early 20th century there was a lively discussion on the relative importance of the process in igneous differentiation. [ 5 ] [ 6 ] More recent research has shown, however, that assimilation has a fundamental role in altering the trace element and isotopic composition of magmas, [ 7 ] in formation of some economically important ore deposits, [ 8 ] and in causing volcanic eruptions. [ 9 ] When a melt undergoes cooling along the liquid line of descent, the results are limited to the production of a homogeneous solid body of intrusive rock, with uniform mineralogy and composition, or a partially differentiated cumulate mass with layers, compositional zones and so on. This behaviour is fairly predictable and easy enough to prove with geochemical investigations. In such cases, a magma chamber will form a close approximation of the ideal Bowen's reaction series . However, most magmatic systems are polyphase events, with several pulses of magmatism. In such a case, the liquid line of descent is interrupted by the injection of a fresh batch of hot, undifferentiated magma. This can cause extreme fractional crystallisation because of three main effects: Magma mixing is the process by which two magmas meet, comingle, and form a magma of a composition somewhere between the two end-member magmas. Magma mixing is a common process in volcanic magma chambers, which are open-system chambers where magmas enter the chamber, [ 10 ] undergo some form of assimilation, fractional crystallisation and partial melt extraction (via eruption of lava), and are replenished. Magma mixing also tends to occur at deeper levels in the crust and is considered one of the primary mechanisms for forming intermediate rocks such as monzonite and andesite . Here, due to heat transfer and increased volatile flux from subduction , the silicic crust melts to form a felsic magma (essentially granitic in composition). These granitic melts are known as an underplate . 
Basaltic primary melts formed in the mantle beneath the crust rise and mingle with the underplate magmas, the result being part-way between basalt and rhyolite ; literally an 'intermediate' composition. Convection in a large magma chamber is subject to the interplay of forces generated by thermal convection and the resistance offered by friction, viscosity and drag on the magma offered by the walls of the magma chamber. Often near the margins of a magma chamber which is convecting, cooler and more viscous layers form concentrically from the outside in, defined by breaks in viscosity and temperature. This forms laminar flow , which separates several domains of the magma chamber which can begin to differentiate separately. Flow banding is the result of a process of fractional crystallization which occurs by convection, if the crystals which are caught in the flow-banded margins are removed from the melt. The friction and viscosity of the magma causes phenocrysts and xenoliths within the magma or lava to slow down near the interface and become trapped in a viscous layer. This can change the composition of the melt in large intrusions , leading to differentiation. With reference to the definitions, above, a magma chamber will tend to cool down and crystallize minerals according to the liquid line of descent. When this occurs, especially in conjunction with zonation and crystal accumulation, and the melt portion is removed, this can change the composition of a magma chamber. In fact, this is basically fractional crystallization, except in this case we are observing a magma chamber which is the remnant left behind from which a daughter melt has been extracted. If such a magma chamber continues to cool, the minerals it forms and its overall composition will not match a sample liquid line of descent or a parental magma composition. It is worth reiterating that magma chambers are not usually static single entities. The typical magma chamber is formed from a series of injections of melt and magma, and most are also subject to some form of partial melt extraction. Granite magmas are generally much more viscous than mafic magmas and are usually more homogeneous in composition. This is generally considered to be caused by the viscosity of the magma, which is orders of magnitude higher than mafic magmas. The higher viscosity means that, when melted, a granitic magma will tend to move in a larger concerted mass and be emplaced as a larger mass because it is less fluid and able to move. This is why granites tend to occur as large plutons , and mafic rocks as dikes and sills . Granites are cooler and are therefore less able to melt and assimilate country rocks. Wholesale contamination is therefore minor and unusual, although mixing of granitic and basaltic melts is not unknown where basalt is injected into granitic magma chambers. Mafic magmas are more liable to flow, and are therefore more likely to undergo periodic replenishment of a magma chamber. Because they are more fluid, crystal precipitation occurs much more rapidly, resulting in greater changes by fractional crystallisation. Higher temperatures also allow mafic magmas to assimilate wall rocks more readily and therefore contamination is more common and better developed. All igneous magmas contain dissolved gases ( water , carbonic acid , hydrogen sulfide , chlorine, fluorine, boric acid , etc.). 
Of these water is the principal, and was formerly believed to have percolated downwards from the Earth's surface to the heated rocks below, but is now generally admitted to be an integral part of the magma. Many peculiarities of the structure of the plutonic rocks as contrasted with the lavas may reasonably be accounted for by the operation of these gases, which were unable to escape as the deep-seated masses slowly cooled, while they were promptly given up by the superficial effusions. The acid plutonic or intrusive rocks have never been reproduced by laboratory experiments, and the only successful attempts to obtain their minerals artificially have been those in which special provision was made for the retention of the "mineralizing" gases in the crucibles or sealed tubes employed. These gases often do not enter into the composition of the rock-forming minerals, for most of these are free from water, carbonic acid, etc. Hence as crystallization goes on the residual melt must contain an ever-increasing proportion of volatile constituents. It is conceivable that in the final stages the still uncrystallized part of the magma has more resemblance to a solution of mineral matter in superheated steam than to a dry igneous fusion. Quartz , for example, is the last mineral to form in a granite. It bears much of the stamp of the quartz which we know has been deposited from aqueous solution in veins , etc. It is at the same time the most infusible of all the common minerals of rocks. Its late formation shows that in this case it arose at comparatively low temperatures and points clearly to the special importance of the gases of the magma as determining the sequence of crystallization. [ 6 ] When solidification is nearly complete the gases can no longer be retained in the rock and make their escape through fissures towards the surface. They are powerful agents in attacking the minerals of the rocks which they traverse, and instances of their operation are found in the kaolinization of granites, tourmalinization and formation of greisen , deposition of quartz veins, and the group of changes known as propylitization. These "pneumatolytic" processes are of the first importance in the genesis of many ore deposits . They are a real part of the history of the magma itself and constitute the terminal phases of the volcanic sequence. [ 6 ] There are several methods of directly measuring and quantifying igneous differentiation processes; In all cases, the primary and most valuable method for identifying magma differentiation processes is mapping the exposed rocks, tracking mineralogical changes within the igneous rocks and describing field relationships and textural evidence for magma differentiation. Clinopyroxene thermobarometry can be used to determine pressures and temperatures of magma differentiation.
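Returning to the magma mixing described earlier in this article, the composition of a hybrid magma can be approximated by a simple mass balance between its two end-members. The Python sketch below mixes an assumed basaltic end-member with an assumed felsic (underplate) end-member; the oxide values and mixing fraction are illustrative only.

```python
# Minimal sketch of two-end-member magma mixing by mass balance:
# C_mix = x * C_mafic + (1 - x) * C_felsic for each oxide, where x is the
# mass fraction of the mafic end-member. Compositions are assumed, not measured.

def mix_composition(mafic, felsic, x_mafic):
    """Weighted-average composition (wt% oxides) of a hybrid magma."""
    return {oxide: x_mafic * mafic[oxide] + (1.0 - x_mafic) * felsic[oxide]
            for oxide in mafic}

basalt   = {"SiO2": 50.0, "MgO": 7.5}    # assumed basaltic end-member
rhyolite = {"SiO2": 72.0, "MgO": 0.5}    # assumed felsic underplate melt

# A 50:50 mix gives ~61 wt% SiO2, an 'intermediate' (roughly andesitic) value.
print(mix_composition(basalt, rhyolite, 0.5))
```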
https://en.wikipedia.org/wiki/Igneous_differentiation
Ignorance management is a knowledge management practice that addresses the concept of ignorance in organizations. [ 1 ] Logically, ignorance management is based upon the concept of ignorance . John Israilidis, Russell Lock, and Louise Cooke of Loughborough University described ignorance management as: "[...] a process of discovering, exploring, realising, recognising and managing ignorance outside and inside the organisation through an appropriate management process to meet current and future demands, design better policy and modify actions in order to achieve organisational objectives and sustain competitive advantage ." [ 2 ] The key principle of this theory is that knowledge management (KM) could better be seen as ignorance management, due to the fact that it is impossible for someone to comprehend and understand everything in a complete way. The only real wisdom is in recognising the limits and extent of one's knowledge , and therefore KM is essentially a matter of sharing the extent of one's ignorance with other people, and thus learning together. This process of knowing what is needed to know, and also acknowledging the power of understanding the unknown , could develop a tacit understanding and could improve both short-term opportunistic value capture and longer term business sustainability. [ 3 ] Several attempts have been made to explore the value of managing organisational ignorance in order to prevent failures within knowledge transfer contexts. The need to recognise the role and significance of power in the management of ignorance has been introduced to further enhance such efforts. [ 4 ] Also, a growing body of psychology research shows that humans find it intrinsically difficult to get a sense of what they do not know, and argues that incompetence deprives people of the ability to recognise their own incompetence (the Dunning–Kruger effect ). [ 5 ] The viewpoint of developing our understanding of organisational ignorance can yield impressive benefits, if successfully incorporated within a company's KM strategy. [ 6 ]
https://en.wikipedia.org/wiki/Ignorance_management
Malcolm Timothy Gladwell CM (born 3 September 1963) is a Canadian journalist, author, and public speaker. [ 2 ] He has been a staff writer for The New Yorker since 1996. He has published eight books. He is also the host of the podcast Revisionist History and co-founder of the podcast company Pushkin Industries . Gladwell's writings often deal with the unexpected implications of research in the social sciences, such as sociology and psychology , and make frequent and extended use of academic work. Gladwell was appointed to the Order of Canada in 2011. [ 3 ] Gladwell was born in Fareham , Hampshire , United Kingdom . His mother Joyce (née Nation) Gladwell, is a Jamaican psychotherapist . His father, Graham Gladwell, was a mathematics professor from Kent , England. [ 4 ] [ 5 ] [ 6 ] When he was six his family moved from Southampton to the Mennonite community of Elmira, Ontario , Canada. [ 4 ] He has two brothers. [ 7 ] Throughout his childhood, Malcolm lived in rural Ontario Mennonite country, where he attended a Mennonite church. [ 8 ] [ 9 ] Research done by historian Henry Louis Gates Jr. revealed that one of Gladwell's maternal ancestors was a Jamaican free woman of colour (mixed black and white) who was a slaveowner. [ 10 ] His great-great-great-grandmother was of Igbo ethnicity from Nigeria. In the epilogue of his 2008 book Outliers he describes many lucky circumstances that came to his family over the course of several generations, contributing to his path towards success. [ 11 ] Gladwell has said that his mother is his role model as a writer. [ 12 ] Gladwell's father noted that Malcolm was an unusually single-minded and ambitious boy. [ 13 ] When Malcolm was 11, his father, a professor of mathematics and engineering at the University of Waterloo , [ 14 ] allowed his son to wander around the offices at his university, which stoked the boy's interest in reading and libraries. [ 15 ] In the spring of 1982, Gladwell interned with the National Journalism Center in Washington, D.C. [ 16 ] He graduated with a bachelor's degree in history from Trinity College of the University of Toronto in 1984. [ 17 ] Gladwell decided to pursue advertising as a career after college. [ 15 ] [ 18 ] After being rejected by every advertising agency he applied to, he accepted a journalism position at conservative magazine The American Spectator and moved to Indiana . [ 19 ] He subsequently wrote for Insight on the News , a conservative magazine owned by Sun Myung Moon 's Unification Church . [ 20 ] In 1987, Gladwell began covering business and science for The Washington Post , where he worked until 1996. [ 21 ] In a personal elucidation of the 10,000-hour rule he popularized in Outliers , Gladwell notes, "I was a basket case at the beginning, and I felt like an expert at the end. It took 10 years—exactly that long." [ 15 ] When Gladwell started at The New Yorker in 1996, he wanted to "mine current academic research for insights, theories, direction, or inspiration". [ 13 ] His first assignment was to write a piece about fashion. Instead of writing about high-class fashion, Gladwell opted to write a piece about a man who manufactured T-shirts, saying: "[I]t was much more interesting to write a piece about someone who made a T-shirt for $8 than it was to write about a dress that costs $100,000. I mean, you or I could make a dress for $100,000, but to make a T-shirt for $8—that's much tougher." 
[ 13 ] Gladwell gained popularity with two New Yorker articles, both written in 1996: "The Tipping Point" and "The Coolhunt". [ 22 ] [ 23 ] These two pieces would become the basis for Gladwell's first book, The Tipping Point , for which he received a $1 million advance. [ 18 ] [ 23 ] He continues to write for The New Yorker . Gladwell also served as a contributing editor for Grantland , a sports journalism website founded by former ESPN columnist Bill Simmons . In a July 2002 article in The New Yorker , Gladwell introduced the concept of the "talent myth" that companies and organizations, in his view, incorrectly follow. [ 24 ] This work examines different managerial and administrative techniques that companies, both winners and losers, have used. He states that the misconception seems to be that management and executives are all too ready to classify employees without ample performance records and thus make hasty decisions. Many companies believe in disproportionately rewarding "stars" over other employees with bonuses and promotions. However, with the quick rise of inexperienced workers with little in-depth performance review, promotions are often incorrectly made, putting employees into positions they should not have and keeping other, more experienced employees from rising. He also points out that under this system, narcissistic personality types are more likely to climb the ladder, since they are more likely to take more credit for achievements and take less blame for failure. [ 24 ] He states both that narcissists make the worst managers and that the system of rewarding "stars" eventually worsens a company's position. Gladwell states that the most successful long-term companies are those who reward experience above all else and require greater time for promotions. [ 24 ] With the release of Revenge of the Tipping Point: Overstories, Superspreaders, and the Rise of Social Engineering in 2024, Gladwell has had eight books published. When asked for the process behind his writing, he said: "I have two parallel things I'm interested in. One is, I'm interested in collecting interesting stories, and the other is I'm interested in collecting interesting research. What I'm looking for is cases where they overlap". [ 25 ] The initial inspiration for his first book, The Tipping Point , which was published in 2000, came from the sudden drop of crime in New York City . He wanted the book to have a broader appeal than just crime, however, and sought to explain similar phenomena through the lens of epidemiology . While Gladwell was a reporter for The Washington Post , he covered the AIDS epidemic. He began to take note of "how strange epidemics were", saying epidemiologists have a "strikingly different way of looking at the world". The term " tipping point " comes from the moment in an epidemic when the virus reaches critical mass and begins to spread at a much higher rate. [ 26 ] Gladwell's theories of crime were heavily influenced by the " broken windows theory " of policing, and Gladwell is credited for packaging and popularizing the theory in a way that was implementable in New York City. Gladwell's theoretical implementation bears a striking resemblance to the " stop-and-frisk " policies of the NYPD. [ 27 ] However, in the decade and a half since its publication, The Tipping Point and Gladwell have both come under fire for the tenuous link between "broken windows" and New York City's drop in violent crime. 
During a 2013 interview with BBC journalist Jon Ronson for The Culture Show , Gladwell admitted that he was "too in love with the broken-windows notion". He went on to say that he was "so enamored by the metaphorical simplicity of that idea that I overstated its importance". [ 28 ] After The Tipping Point, Gladwell published Blink in 2005. The book explains how the human unconscious interprets events or cues as well as how past experiences can lead people to make informed decisions very rapidly. Gladwell uses examples like the Getty kouros and psychologist John Gottman 's research on the likelihood of divorce in married couples . Gladwell's hair was the inspiration for Blink . He stated that once he allowed his hair to get longer, he started to get speeding tickets all the time, an oddity considering that he had never gotten one before and that he started getting pulled out of airport security lines for special attention. [ 29 ] In a particular incident, he was apprehended by three police officers while walking in downtown Manhattan because his curly hair matched the profile of a rapist, despite the fact the suspect looked nothing like him otherwise. [ 29 ] Gladwell's The Tipping Point (2000) and Blink (2005) were international bestsellers. The Tipping Point sold more than two million copies in the United States. Blink sold equally well. [ 18 ] [ 30 ] As of November 2008, the two books had sold a combined 4.5 million copies. [ 15 ] Gladwell's third book, Outliers , published in 2008, examines how a person's environment, in conjunction with personal drive and motivation, affects his or her possibility and opportunity for success. Gladwell's original question revolved around lawyers: "We take it for granted that there's this guy in New York who's the corporate lawyer, right? I just was curious: Why is it all the same guy?", referring to the fact that "a surprising number of the most powerful and successful corporate lawyers in New York City have almost the exact same biography". [ 31 ] [ 15 ] In another example given in the book, Gladwell noticed that people ascribe Bill Gates 's success to being "really smart" or "really ambitious". He noted that he knew a lot of people who are really smart and really ambitious, but not worth $60 billion. "It struck me that our understanding of success was really crude—and there was an opportunity to dig down and come up with a better set of explanations." Gladwell's fourth book, What the Dog Saw: And Other Adventures , was published in 2009. What the Dog Saw bundles together Gladwell's favourites of his articles from The New Yorker since he joined the magazine as a staff writer in 1996. [ 19 ] The stories share a common theme, namely that Gladwell tries to show us the world through the eyes of others, even if that other happens to be a dog. [ 32 ] [ 33 ] Gladwell's fifth book, David and Goliath , was released in October 2013, and examines the struggle of underdogs versus favourites. The book is partially inspired by an article Gladwell wrote for The New Yorker in 2009 titled "How David Beats Goliath". [ 34 ] [ 35 ] The book was a bestseller but received mixed reviews. [ 36 ] [ 37 ] [ 38 ] [ 39 ] Gladwell's sixth book, Talking to Strangers , was released September 2019. The book examines interactions with strangers, covers examples that include the deceptions of Bernie Madoff , the trial of Amanda Knox , the suicide of Sylvia Plath , the Jerry Sandusky pedophilia case at Penn State , and the death of Sandra Bland . 
[ 40 ] [ 41 ] [ 42 ] Gladwell explained what inspired him to write the book as being "struck by how many high profile cases in the news were about the same thing—strangers misunderstanding each other." [ 43 ] It challenges the assumptions we are programmed to make when encountering strangers, and the potentially dangerous consequences of misreading people we do not know. [ 44 ] Gladwell's seventh book, The Bomber Mafia: A Dream, a Temptation, and the Longest Night of the Second World War , was released in April 2021. [ 45 ] Gladwell's eighth book, Revenge of the Tipping Point was released in October 2024. The book is a sequel to his best seller The Tipping Point, which was released in 2000. The book discusses social epidemics and tipping points, this time with the aim of explaining the dark side of contagious phenomena, and offers an alternate history of two of the biggest epidemics of our day: COVID and the opioid crisis. The Tipping Point was named as one of the best books of the decade by The A.V. Club , The Guardian , and The Times . [ 46 ] [ 47 ] [ 48 ] It was also Barnes & Noble 's fifth-best-selling non-fiction book of the decade. [ 49 ] Blink was named to Fast Company 's list of the best business books of 2005. [ 50 ] It was also number 5 on Amazon customers' favourite books of 2005, named to The Christian Science Monitor 's best non-fiction books of 2005, and in the top 50 of Amazon customers' favourite books of the decade. [ 51 ] [ 52 ] [ 53 ] Outliers was a number 1 New York Times bestseller for 11 straight weeks and was Time 's number 10 non-fiction book of 2008 as well as named to the San Francisco Chronicle 's list of the 50 best non-fiction books of 2008. [ 54 ] [ 55 ] [ 56 ] Fortune described The Tipping Point as "a fascinating book that makes you see the world in a different way". [ 57 ] [ 58 ] The Daily Telegraph called it "a wonderfully offbeat study of that little-understood phenomenon, the social epidemic". [ 59 ] Reviewing Blink , The Baltimore Sun dubbed Gladwell "the most original American journalist since the young Tom Wolfe." [ 60 ] Farhad Manjoo at Salon described the book as "a real pleasure. As in the best of Gladwell's work, Blink brims with surprising insights about our world and ourselves." [ 61 ] The Economist called Outliers "a compelling read with an important message". [ 62 ] David Leonhardt wrote in The New York Times Book Review : "In the vast world of nonfiction writing, Malcolm Gladwell is as close to a singular talent as exists today" and Outliers "leaves you mulling over its inventive theories for days afterward". [ 63 ] Ian Sample wrote in The Guardian : "Brought together, the pieces form a dazzling record of Gladwell's art. There is depth to his research and clarity in his arguments, but it is the breadth of subjects he applies himself to that is truly impressive." [ 19 ] [ 64 ] Gladwell's critics have described him as prone to oversimplification. The New Republic called the final chapter of Outliers, "impervious to all forms of critical thinking" and said Gladwell believes "a perfect anecdote proves a fatuous rule". [ 65 ] Gladwell has also been criticized for his emphasis on anecdotal evidence over research to support his conclusions. [ 66 ] Maureen Tkacik and Steven Pinker have challenged the integrity of Gladwell's approach. 
[ 67 ] [ 68 ] Even while praising Gladwell's writing style and content, Pinker summed up Gladwell as "a minor genius who unwittingly demonstrates the hazards of statistical reasoning", while accusing him of "cherry-picked anecdotes, post-hoc sophistry and false dichotomies" in his book Outliers . Referencing a Gladwell reporting mistake in which Gladwell refers to " eigenvalue " as "Igon Value", Pinker criticizes his lack of expertise: "I will call this the Igon Value Problem: when a writer's education on a topic consists in interviewing an expert, he is apt to offer generalizations that are banal, obtuse or flat wrong." [ 68 ] A writer in The Independent accused Gladwell of posing "obvious" insights. [ 69 ] The British website The Register has accused Gladwell of making arguments by weak analogy and commented Gladwell has an "aversion for fact", adding: "Gladwell has made a career out of handing simple, vacuous truths to people and dressing them up with flowery language and an impressionistic take on the scientific method." [ 70 ] In that regard, The New Republic has called him "America's Best-Paid Fairy-Tale Writer". [ 71 ] His approach was satirized by the online site "The Malcolm Gladwell Book Generator". [ 72 ] In 2005, Gladwell commanded a $45,000 speaking fee. [ 73 ] In 2008, he was making "about 30 speeches a year—most for tens of thousands of dollars, some for free", according to a profile in New York magazine. [ 74 ] In 2011, he gave three talks to groups of small businessmen as part of a three-city speaking tour put on by Bank of America . The program was titled "Bank of America Small Business Speaker Series: A Conversation with Malcolm Gladwell". [ 75 ] Paul Starobin, writing in the Columbia Journalism Review , said the engagement's "entire point seemed to be to forge a public link between a tarnished brand (the bank), and a winning one (a journalist often described in profiles as the epitome of cool)". [ 76 ] An article by Melissa Bell of The Washington Post posed the question: "Malcolm Gladwell: Bank of America's new spokesman?" [ 77 ] Mother Jones editor Clara Jeffery said Gladwell's job for Bank of America had "terrible ethical optics". However, Gladwell says he was unaware that Bank of America was "bragging about his speaking engagements" until the Atlantic Wire emailed him. Gladwell explained: I did a talk about innovation for a group of entrepreneurs in Los Angeles a while back, sponsored by Bank of America. They liked the talk, and asked me to give the same talk at two more small business events—in Dallas and yesterday in D.C. That's the extent of it. No different from any other speaking gig. I haven't been asked to do anything else and imagine that's it. [ 78 ] In 2012, CBS 's 60 Minutes attributed the trend of American parents " redshirting " their five-year-olds (postponing entrance into kindergarten to give them an advantage) to a section in Gladwell's Outliers . [ 79 ] Sociology professor Shayne Lee referenced Outliers in a CNN editorial commemorating Martin Luther King Jr. 's birthday. Lee discussed the strategic timing of King's ascent from a "Gladwellian perspective". [ 80 ] Gladwell gives credit to Richard Nisbett and Lee Ross for inventing the Gladwellian genre. [ 81 ] Gladwell has provided blurbs for "scores of book covers", leading The New York Times to ask, "Is it possible that Mr. Gladwell has been spreading the love a bit too thinly?" 
Gladwell, who said he did not know how many blurbs he had written, acknowledged, "The more blurbs you give, the lower the value of the blurb. It's the tragedy of the commons ." [ 82 ] Gladwell is host of the podcast Revisionist History , initially produced through Panoply Media and now through Gladwell's own podcast company. It began in 2016 and has aired seven 10-episode seasons. Each episode begins with an inquiry about a person, event, or idea, and proceeds to question the received wisdom about the subject. Gladwell was recruited to create a podcast by Jacob Weisberg , editor-in-chief of The Slate Group , which also includes the podcast network Panoply Media. In September 2018, Gladwell announced he was co-founding a podcast company, later named Pushkin Industries , [ 83 ] with Weisberg. [ 84 ] About this decision, Gladwell told the Los Angeles Times : "There is a certain kind of whimsy and emotionality that can only be captured on audio." [ 85 ] He also has a music podcast with Bruce Headlam and Rick Rubin , titled Broken Record , where they interview musicians. [ 86 ] It has two seasons, 2018–2019 and 2020, with a total of 49 episodes. [ 87 ] The Unusual Suspects with Kenya Barris and Malcolm Gladwell premiered on January 30, 2025. The podcast features candid interviews with influential figures across a spectrum of disciplines. A common thread throughout these interviews is a discussion of each subject's path to success. Interview subjects have ranged from trailblazing Fortune 500 CEO Ursula Burns to hip hop recording artist and producer Dr. Dre . [ 88 ] Gladwell is a Christian. [ 89 ] His family attended Above Bar Church in Southampton, U.K., and later Gale Presbyterian in Elmira when they moved to Canada. His parents and siblings are part of the Mennonite community in Southwestern Ontario. [ 9 ] Gladwell wandered away from his Christian roots when he moved to New York, only to rediscover his faith during the writing of David and Goliath and his encounter with Wilma Derksen regarding the death of her child. [ 90 ] Gladwell was a national class runner and an Ontario High School ( Ontario Federation of School Athletic Associations – OFSAA) champion. [ 91 ] He was among Canada's fastest teenagers at 1500 metres , running 4:14 at the age of 13 and 4:05 when aged 14. At university, Gladwell ran 1500 metres in 3:55. In 2014, at the age of 51, he ran a 4:54 at the Fifth Avenue Mile . [ 92 ] [ 93 ] At 57 he ran a 5:15 mile. [ 94 ] He had his first child, a daughter, in 2022. [ 95 ] In 2024 it was reported that "In a span of five years, he got engaged, had two children, turned 61, and moved from Manhattan to pastoral Hudson, New York ." [ 96 ] Gladwell is passionate about cars and reading car magazines, particularly the British magazine Car . [ 97 ] In 2005, Time named Gladwell one of its 100 most influential people. [ 98 ] In 2007, he received the American Sociological Association 's first Award for Excellence in the Reporting of Social Issues. [ 99 ] The same year, he received an honorary degree from the University of Waterloo. In 2011, he was named a Member of the Order of Canada , the second highest honour for merit in the system of orders, decorations, and medals of Canada . [ 3 ] He has received honorary degrees from the University of Waterloo (2007) [ 100 ] [ 101 ] and the University of Toronto (2011). [ 102 ] He is a recipient of the 2024 Audio Vanguard Award presented by On Air Fest. [ 103 ] Gladwell was a featured storyteller for the Moth podcast. 
He told a story about a well-intentioned wedding toast for a young man and his friends that went wrong. [ 108 ] Gladwell was featured in General Motors "EVerybody in." campaign. [ 109 ] Gladwell is the only guest to have been featured as a headliner at every OZY Fest festival [ 110 ] —an annual music and ideas festival produced by OZY Media —other than OZY co-founder and CEO Carlos Watson . Gladwell has also appeared on several television shows for OZY Media, including the Carlos Watson Show (YouTube) [ 111 ] and Third Rail With OZY (PBS). [ 112 ] Gladwell has a chapter giving advice in Tim Ferriss 's book Tools of Titans . Gladwell was voiced by Colton Dunn in Solar Opposites S3.E1 The Extremity Triangulator . [ 113 ]
https://en.wikipedia.org/wiki/Igon_value
Igor Jurisica is a Professor in the departments of Computer Science and Medical Biophysics at the University of Toronto . He is a Tier I Canada Research Chair in Integrative Cancer Informatics, [ 5 ] and an associate editor for BMC Bioinformatics , Proteomes, Cancer Informatics , International Journal of Knowledge Discovery in Bioinformatics, and Interdisciplinary Sciences: Computational Life Sciences. [ 2 ] In 2014, 2015, and 2016, he was named an ISI Highly Cited Researcher . [ 6 ]
https://en.wikipedia.org/wiki/Igor_Jurisica
Igor Rivin (born 1961 in Moscow , USSR ) is a Russian-Canadian mathematician, working in various fields of pure and applied mathematics, computer science, and materials science. He was the Regius Professor of Mathematics at the University of St. Andrews from 2015 to 2017, and was the chief research officer at Cryptos Fund until 2019. He was the principal of a couple of small hedge funds, and later did research for Edgestream LP, in addition to his academic work. He received his B.Sc. (Hon) in mathematics from the University of Toronto in 1981, and his Ph.D. in 1986 [ 1 ] from Princeton University under the direction of William Thurston . Following his doctorate, Rivin directed development of QLISP and the Mathematica kernel, before returning to academia in 1992, where he held positions at the Institut des Hautes Études Scientifiques , the Institute for Advanced Study , the University of Melbourne , Warwick , and Caltech . Since 1999, Rivin has been professor of mathematics at Temple University . Between 2015 and 2017 he was Regius Professor of Mathematics at the University of St. Andrews . Rivin's PhD thesis [ 1 ] [ 2 ] and a series of extensions [ 3 ] [ 4 ] [ 5 ] characterized hyperbolic 3-dimensional polyhedra in terms of their dihedral angles, resolving a long-standing open question of Jakob Steiner on the inscribable combinatorial types. These, and some related results in convex geometry, [ 6 ] have been used in 3-manifold topology, [ 7 ] theoretical physics, computational geometry, and the recently developed field of discrete differential geometry . Rivin has also made advances in counting geodesics on surfaces, [ 8 ] the study of generic elements of discrete subgroups of Lie groups, [ 9 ] and in the theory of dynamical systems. [ 10 ] Rivin is also active in applied areas, having written large parts of the Mathematica 2.0 kernel, and he developed a database of hypothetical zeolites in collaboration with M. M. J. Treacy. Rivin is a frequent contributor to MathOverflow . Igor Rivin is the co-creator, with economist Carlo Scevola, of Cryptocurrencies Index 30 (CCi30), [ 11 ] an index of the top 30 cryptocurrencies weighted by market capitalization . CCi30 is sometimes used by academic economists as a market index when comparing the cryptocurrency trading market as a whole with individual currencies. [ 12 ] [ 13 ]
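As a rough illustration of what a market-capitalization-weighted index involves, the Python sketch below computes a generic cap-weighted index level from hypothetical prices and market capitalizations. It is only a sketch of the general idea and is not the published CCi30 methodology.

```python
# Minimal sketch of a capitalization-weighted index over a basket of assets.
# This is a generic illustration, not the CCi30's actual weighting scheme;
# all market capitalizations and prices are hypothetical.

def cap_weighted_index(market_caps, prices, base_prices, base_value=100.0):
    """Index level: cap-weighted average of price relatives, rebased to 100."""
    total_cap = sum(market_caps.values())
    level = sum((market_caps[a] / total_cap) * (prices[a] / base_prices[a])
                for a in market_caps)
    return base_value * level

caps  = {"coin_a": 800e9, "coin_b": 300e9, "coin_c": 50e9}   # hypothetical
base  = {"coin_a": 40000.0, "coin_b": 2000.0, "coin_c": 100.0}
today = {"coin_a": 44000.0, "coin_b": 1900.0, "coin_c": 120.0}

print(cap_weighted_index(caps, today, base))   # ~106.5: the basket rose ~6.5%
```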
https://en.wikipedia.org/wiki/Igor_Rivin
Igor Volodymyrovych Komarov ( Ukrainian : Ігор Володимирович Комаров ) is a Ukrainian synthetic organic chemist , specializing in medicinal chemistry and nanotechnology , known for his work on conformationally restricted and fluorine-containing amino acids and peptides. He is the director of the Institute of High Technologies of Taras Shevchenko National University of Kyiv . [ 1 ] He is also a scientific advisor of Enamine Ltd ( Ukraine ) [ 2 ] and Lumobiotics GmbH ( Germany ). Source: [ 3 ] Igor V. Komarov graduated with distinction from Taras Shevchenko National University of Kyiv and started working at the same university in 1986, first as an engineer . He obtained his Candidate of Sciences degree in organic chemistry in 1991 at Taras Shevchenko National University of Kyiv under the supervision of Mikhail Yu. Kornilov; the candidate thesis was devoted to the use of lanthanide shift reagents in NMR spectroscopy. [ 4 ] Afterwards, he was a postdoctoral fellow at the University Chemical Laboratory in Cambridge (1996–1997, United Kingdom ) and at the Institut für Organische Katalyseforschung in Rostock (2000–2001, Germany). He holds the Supramolecular Chemistry Chair of the Institute of High Technologies at Taras Shevchenko National University. Komarov earned his Doctor of Sciences degree in 2003; the title of his thesis is "Design and synthesis of model compounds: study of stereoelectronic , steric effects , reactive intermediates , catalytic enantioselective hydrogenation and dynamic protection of functional groups ". [ 5 ] He is also a scientific advisor for Enamine Ltd. [ 2 ] and Lumobiotics GmbH. Igor V. Komarov was awarded the title of Professor in 2007. [ 6 ] Source: [ 7 ] The areas of scientific interest of Igor V. Komarov are medicinal chemistry and the synthesis of model compounds, which can be used to obtain new knowledge in biochemistry , stereochemistry , theoretical chemistry , and catalysis . Igor has over 125 peer-reviewed research papers and an h-index of 31, [ 8 ] and has guided 8 PhD students to date. Igor's scientific group puts its main focus on developing novel synthetic methods and designing theoretically interesting molecules, some of which were created and synthesized in close collaboration with Prof. Anthony J. Kirby [ 9 ] from the University of Cambridge (United Kingdom). One such collaborative project was the synthesis and the study of the stereochemistry and chemical properties of 1-aza-2-adamantanone and its derivatives. A trimethyl-substituted derivative ("the most twisted amide", [ 10 ] "Kirby's amide" [ 11 ] ) was designed in Prof. Kirby's laboratory and synthesized by Igor in 1997 during his postdoctoral stay in Cambridge. In 2014, the parent molecule was made in Igor's group in collaboration with Prof. Kirby. The compound modelled the transition state of cis - trans isomerization of amides and provided fundamental knowledge about the amide bond . [ 12 ] Igor V. Komarov started his research in the area of synthetic organic chemistry in the early 1990s, working on the phosphorylation of aromatic heterocyclic compounds by phosphorus(V) acid halides. [ 13 ] At that time, convenient phosphorylation methods were developed, which now find use, for example, in the synthesis of materials applicable to uranium extraction . [ 14 ] Later, working in Rostock, Igor V. Komarov changed the direction of his research and became interested in homogeneous asymmetric catalysis . 
The study of catalysis was carried out using model compounds: functionalized camphor - and tartaric acid -derived chiral ligands were synthesized, such as monophosphines [ 15 ] [ 16 ] and diphosphines , [ 17 ] and then Rhodium (I) complexes with them. [ 17 ] The complexes were used for the asymmetric homogeneous hydrogenation of prochiral substrates , and the results obtained allowed the effects of oxo- and oxy- functional groups in the ligands on the efficiency and selectivity of the catalysts to be elucidated. [ 17 ] These works led to the introduction of efficient catalysts into synthetic practice, such as catASium, [ 18 ] some of them bearing a camphor-derived ligand ROCKYPhos [ 19 ] (named after the cities ROstok and KYiv ). Although Igor's interest in the synthesis of chiral ligands has not faded, he changed the general direction of his research once more, and now he works in the area of drug design . [ 7 ] One of the main design principles is the restriction of conformational mobility of drug candidate molecules . [ 20 ] [ 21 ] Prof. Komarov's research group developed many approaches to the synthesis of conformationally restricted amines and amino acids - the building blocks for drug design. [ 20 ] [ 22 ] Numerous conformationally restricted fluorine -containing amino acids were also designed and synthesized, with the purpose of using them as labels to study peptides in lipid bilayers by solid-state NMR spectroscopy . [ 23 ] Igor V. Komarov's group contributed to the design and synthesis of light-controllable biologically active compounds - photocontrollable peptides - potential candidates for photopharmacology drugs. Photopharmacology drugs can be administered in an inactive, non-toxic form, and then activated ("switched on") by light only when and where required to treat localized lesions (e.g. in solid tumors ). [ 24 ] The activation by light can be done with very high spatiotemporal precision in the lesion site, leaving the rest of the patient's body unaffected. [ 25 ] [ 26 ] After the treatment, the photopharmacology drugs can be inactivated ("switched off") by light in order to diminish side-effects and environmental burden . [ 24 ] Another research direction in Igor V. Komarov's scientific group is the navigation of chemical space . A method of structural comparison for organic molecules was developed which employs exit vector plot analysis. [ 27 ] Enumeration of molecules (exhaustive generation of all theoretically possible structures) was carried out for some classes of organic compounds, for example, for conformationally restricted diamines . [ 28 ] In the area of nanotechnology , Igor V. Komarov's research group studied cell-penetrating peptides as carriers for carbon-based fluorescent nanoparticles , shuttling them inside eukaryotic cells for the purpose of bioimaging . [ 29 ] Igor V. Komarov holds a Ukrainian patent [ 30 ] and 2 international patents, [ 31 ] [ 32 ] and is a co-author of textbooks on NMR spectroscopy. [ 33 ] Igor V. 
Komarov was a coordinator of scientific projects financed by the Ministry of Education and Science of Ukraine (three applied projects devoted to the design of therapeutic peptides, including photocontrolled ones [1] ), the Alexander von Humboldt Foundation (Institute Partnership and Research Linkage programmes, in collaboration with Karlsruhe University (Karlsruhe, Germany) [2] and the Leibniz Institute of Molecular Pharmacology (Berlin, Germany) [3] ), and the private companies Degussa (a project devoted to the development of large-scale production of a ligand for Rhodium-based catalysts of asymmetric hydrogenation) and Enamine (six medicinal chemistry projects covering lead discovery and lead optimization). He is currently the coordinator of a European Horizon 2020 Research and Innovation Staff Exchange (RISE) programme (2016–2019, Grant Agreement number 690973 [4] ), the title of the project being "Peptidomimetics with Photocontrolled Biological Activity".
https://en.wikipedia.org/wiki/Igor_V._Komarov
In mathematics, the Ihara zeta function is a zeta function associated with a finite graph . It closely resembles the Selberg zeta function , and is used to relate closed walks to the spectrum of the adjacency matrix . The Ihara zeta function was first defined by Yasutaka Ihara in the 1960s in the context of discrete subgroups of the two-by-two p-adic special linear group . Jean-Pierre Serre suggested in his book Trees that Ihara's original definition can be reinterpreted graph-theoretically. It was Toshikazu Sunada who put this suggestion into practice in 1985. As observed by Sunada, a regular graph is a Ramanujan graph if and only if its Ihara zeta function satisfies an analogue of the Riemann hypothesis . [ 1 ] The Ihara zeta function is defined as the analytic continuation of the infinite product $\zeta_G(u) = \prod_{p} \frac{1}{1 - u^{L(p)}}$, where $L(p)$ is the length of $p$. The product in the definition is taken over all prime closed geodesics $p$ of the graph $G = (V, E)$, where geodesics which differ by a cyclic rotation are considered equal. A closed geodesic $p$ on $G$ (known in graph theory as a "reduced closed walk"; it is not a graph geodesic) is a finite sequence of vertices $p = (v_0, \ldots, v_{k-1})$ such that $v_i$ is adjacent to $v_{i+1}$ and $v_{i+1} \neq v_{i-1}$ for all $i$, with indices taken modulo $k$. The integer $k$ is the length $L(p)$. The closed geodesic $p$ is prime if it cannot be obtained by repeating a closed geodesic $m$ times, for an integer $m > 1$. This graph-theoretic formulation is due to Sunada. Ihara (and Sunada in the graph-theoretic setting) showed that for regular graphs the zeta function is a rational function. If $G$ is a $(q+1)$-regular graph with adjacency matrix $A$, then [ 2 ] $\zeta_G(u)^{-1} = (1 - u^2)^{r(G)-1} \det(I - Au + qu^2 I)$, where $r(G)$ is the circuit rank of $G$. If $G$ is connected and has $n$ vertices, $r(G) - 1 = (q-1)n/2$. The Ihara zeta function is in fact always the reciprocal of a graph polynomial: $\zeta_G(u)^{-1} = \det(I - uT)$, where $T$ is Ki-ichiro Hashimoto's edge adjacency operator. Hyman Bass gave a determinant formula involving the adjacency operator. The Ihara zeta function plays an important role in the study of free groups , spectral graph theory , and dynamical systems , especially symbolic dynamics , where the Ihara zeta function is an example of a Ruelle zeta function . [ 3 ]
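As a numerical check on the rational-function expression above, the Python sketch below (assuming NumPy is available) evaluates $\zeta_G(u)^{-1}$ for a connected $(q+1)$-regular graph via the three-term determinant formula, using the complete graph $K_4$ as an example.

```python
import numpy as np

def ihara_zeta_reciprocal(A, u):
    """1/zeta_G(u) for a connected (q+1)-regular graph, computed with the
    three-term determinant formula quoted above:
        1/zeta(u) = (1 - u^2)^(r-1) * det(I - A*u + q*u^2*I),
    where r - 1 = (q - 1)*n/2 for a connected graph on n vertices."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    q = int(A[0].sum()) - 1              # every vertex has degree q + 1
    r_minus_1 = (q - 1) * n // 2         # circuit rank minus one
    I = np.eye(n)
    return (1.0 - u**2) ** r_minus_1 * np.linalg.det(I - u * A + q * u**2 * I)

# Example: the complete graph K4 is 3-regular (q = 2).
A_K4 = np.ones((4, 4)) - np.eye(4)
print(ihara_zeta_reciprocal(A_K4, 0.2))
```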
https://en.wikipedia.org/wiki/Ihara_zeta_function
Ilaria Mazzoleni is an architect and founder of IM Studio Milano/Los Angeles. She is known for her work on sustainable architecture at all scales of design, and on biomimicry or design innovation inspired by nature. She has built work in Italy, California, and Ghana. [ 1 ] [ 2 ] Her 2013 book "Architecture Follows Nature-Biomimetic Principles for Innovative Design" [ 3 ] covers topics such as biomimicry in architecture and site-specific architecture . She founded the Nature, Art & Habitat Residency (NAHR), a nonprofit association based in Bergamo, Italy, that offers a one-month residency in the rural Taleggio Valley. [ 4 ] [ 5 ] [ 6 ]
https://en.wikipedia.org/wiki/Ilaria_Mazzoleni
An illegitimate receiver is an organism that intercepts another organism's signal, despite not being the signaler's intended target. [ 1 ] In animal communication, a signal is any transfer of information from one organism to another, including visual, olfactory (e.g. pheromones), and auditory signals. [ 2 ] If the illegitimate receiver's interception of the signal is a means of finding prey, the interception is typically a fitness detriment (meaning that it reduces survival or reproductive ability) to either the signaler or the organism meant to legitimately receive the signal, but it is a fitness advantage to the illegitimate receiver because it provides energy in the form of food. [ 1 ] Illegitimate receivers can have important effects on the evolution of communication behaviors. [ 1 ] [ 3 ] Illegitimate receivers can benefit by intercepting signals to locate prey, [ 4 ] [ 5 ] or, if they are parasites or parasitoids , by intercepting signals to locate host organisms . [ 6 ] In addition to locating prey by intercepting signals given by the prey organism, some animals use the signals of other predators to find carcasses that they can scavenge off of. [ 7 ] Other organisms benefit by illegitimately receiving the signals of rivals and using this information to improve their own chances of winning in competition for resources, including mates. [ 8 ] Illegitimate receivers can experience fitness costs if they respond to signals given off by illegitimate signalers , which are organisms that utilize deceptive signals to reduce receiver fitness, typically by preying on or parasitizing the organism that responds. [ 1 ] Illegitimate receivers may also experience fitness costs if intercepting signals not intended for them reduces their likelihood of receiving signals that are directed at them, such as the mating calls of members of their own species or the warning calls of rivals. Redeye bass ( Micropterus coosae ) and midland water snakes ( Nerodia sipedon pleuralis ) respond to acoustic and visual signals in male tricolor shiners ( Cyprinella trichroistia ) when detecting prey. [ 4 ] Male Great Bowerbirds sometimes steal nest decorations, which are intended to attract mates, from their rivals and use these decorations in their own nests. [ 8 ] Male túngara frogs ( Physalaemus pustulosus ) give off mating calls consisting of both "whines" and "chucks," with songs that contain chucks favored by females over those containing only whines. [ 9 ] However, the fringe-lipped bat ( Trachops cirrhosus ), a natural predator of the túngara frog, is an illegitimate receiver of these songs and uses them to locate its prey. These bats are especially attracted to frog songs containing the chuck element, and so túngara frogs rarely incorporate chucks into their calls. In fact, the frogs have been shown to typically only incorporate the chuck element into their songs when they are congregated in large groups, as this reduces the chance of being eaten via the dilution effect . [ 1 ] [ 5 ] On the island of Kauai , females of a species of parasitoid fly, Ormia ochracea , respond to the stridulation mating calls of male field crickets ( Teleogryllus oceanicus ) by locating the crickets and then laying their lethal larvae on them. [ 1 ] [ 10 ] [ 11 ] In response to this, male field crickets have evolved via a "flatwing" mutation to no longer produce mating songs. [ 6 ] Another example of evolution in response to illegitimate receivers is that of the Great tits . 
These European songbirds have evolved to use "seet" calls in order to avoid having their signals illegitimately received by hawks or owls. Great tits use two different calls to warn one another of nearby predators: When the predators are flying nearby, the great tits use a "seet" alarm call ; however, when the predators are perched nearby, the great tits use a mobbing call . The mobbing call is at a much higher frequency than the seet call, allowing for the great tits to recruit nearby individuals of their species when mobbing perched predators in an attempt to chase them out of the area. Meanwhile, the lower frequency of the seet call allows the great tits to warn one another of the danger without attracting the unwanted attention of the mobile hawk or owl. [ 1 ] Louder calls are also more frequently exhibited in birds inhabiting more protected habitats, while softer seet calls are more common in unprotected, open areas. [ 12 ] Illegitimate signalers utilize deceptive signals to reduce the receiver's fitness while increasing their own. [ 1 ] Examples include the case of the Photinus and Photuris fireflies, as well as aggressive mimicry . Honest signals are signals used by one organism to convey true information to another individual. [ 1 ] An example is the begging calls of bird chicks. Animal communication includes any transfer of information between individuals, including illegitimate receiving and signaling.
https://en.wikipedia.org/wiki/Illegitimate_receiver
Illegitimate recombination , or nonhomologous recombination , is the process by which two unrelated double-stranded segments of DNA are joined. This insertion of genetic material that was not originally adjacent tends to break genes, causing the proteins they encode not to be properly expressed. One of the primary pathways by which this occurs is the repair mechanism known as non-homologous end joining (NHEJ). [ 1 ] Illegitimate recombination is a natural process which was first found to occur within E. coli . A 700-1400 base pair segment of DNA was found to have inserted itself into the gal and lac operons, resulting in a strong polar mutation . [ 2 ] This mechanism was then found to be able to insert other short genetic sequences into other locations within the bacterial genome, often changing the expression of neighboring genes. Oftentimes it simply shuts the neighboring genes off. However, some of these segments also carried strong start and stop signals that changed the regulation of neighboring genes, leading to changes in the amount of transcription. What differentiated this form of genetic recombination from those dependent on genetic homology was that the process observed as illegitimate did not require the use of homologous segments of DNA. While not entirely understood at the time, it was recognized as holding potential for generating changes in chromosomal evolution. [ 3 ] In prokaryotes, illegitimate recombination results in a mutation of the genetic sequence of the prokaryote . This process takes several forms, one of which is deletion. In a deletion mutation the prokaryotic organism undergoes illegitimate recombination that removes a continuous segment of genetic code. However, this form of mutation occurs infrequently among mutants of natural origin, as opposed to those that have been induced. Another form of illegitimate recombination in prokaryotes is duplication mutation of the genome. In this case a portion of the parental genome is inserted multiple times into the genome. Because the process is not homology-driven, the duplicated genetic material may be inserted either in the same orientation as the original parental segments or in the opposite one. [ 3 ] The mechanism of illegitimate recombination is that of non-homologous end joining, in which two strands of DNA not sharing homology are joined together by the DNA repair machinery. Upon recognition of a double-strand break, a protein complex keeps the two strands within close enough proximity to allow repair of the strands. Next, the ends of the DNA are processed so that any incorrect or damaged nucleotides are removed. Once this happens, the strands can be ligated together into a single stretch of DNA that previously had not been adjacent. This process is common in eukaryotic cells and tends to act as a repair mechanism, but it can lead to these mutations if illegitimate recombination occurs. In eukaryotic organisms, illegitimate recombination often takes the form of large chromosomal aberrations, as eukaryotes have much larger segments of DNA than prokaryotic cells. As such, non-homologous end joining can cause illegitimate recombination which creates insertion and deletion mutations in chromosomes as well as translocation of one chromosomal segment to another chromosome. 
These large-scale changes in the chromosomes of eukaryotic organisms tend to have deleterious effects on the organism rather than conferring any kind of genetic advantage. [ 4 ] Illegitimate recombination often has deleterious effects on an organism because it results in a large-scale change in the organism's genetic sequence. These changes result in mutations, because joining DNA without regard to homology will most often place genetic elements in locations where they had not previously been. This can disrupt the function of genes that may be essential to the organism. In the case of cancer, it has been found that tumors can result from illegitimate recombination leading to hairpin formation, which alters gene function within the genome of tumor cells. [ 5 ] Illegitimate recombination is also a useful research tool in the laboratory. It can be used to induce random mutagenesis, generating random alterations in the genetic sequence of an organism. [ 6 ] Inducing this mutagenesis allows a genetic segment to be studied by creating a mutation that alters its function. Gene function can then be investigated by analyzing the differences between mutants and natural organisms to interpret which process a gene is linked to.
https://en.wikipedia.org/wiki/Illegitimate_recombination
Illinois Public Interest Research Group ( Illinois PIRG ) is a non-profit organization that is part of the state PIRG organizations. It works on a variety of public-interest issues, including childhood obesity , reducing the interest on student loans, and closing tax loopholes. [ 1 ] [ 2 ] In the United States, Public Interest Research Groups (PIRGs) are non-profit organizations that employ grassroots organizing, direct advocacy, investigative journalism , and litigation to affect public policy. [ 3 ] Illinois PIRG was founded in 1987, and has offices in Chicago , Springfield, IL , and a national lobbying office in Washington, D.C. called US PIRG. [ 4 ] The PIRGs emerged in the early 1970s on U.S. college campuses. The PIRG model was proposed in the book Action for a Change by Ralph Nader and Donald Ross . [ 5 ] Among other early accomplishments, the PIRGs were responsible for much of the container deposit legislation in the United States , also known as "bottle bills." [ 6 ] [ 7 ]
https://en.wikipedia.org/wiki/Illinois_Public_Interest_Research_Group
The Illinois Soil Nitrogen Test ("ISNT") is a method for measuring the amount of Nitrogen in soil that is available for use by plants as a nutrient. The test predicts whether the addition of nitrogen fertilizer to agricultural land will result in increased crop yields. [ 1 ] [ 2 ] Nitrogen is essential for plant development. Indeed, for crops that are destined to be food for farm animal or human consumption, incorporation of nitrogen into the crop is an important goal, since this forms the basis for protein in the human diet. Nitrogen is commonly present in soils in many forms, and there are many ways to measure this nitrogen. None of these are completely satisfactory as a measure of the nitrogen that is available for use by crops. The ISNT is a new (2007) method for measuring nitrogen available for plant uptake. ISNT estimates the amount of nitrogen present in the soil as amino sugar nitrogen. With respect to corn and soybeans , the optimal range for plant growth appears to be around 225 to 240 mg/Kg. Some form of nitrogen fertilizer is needed if levels are below this range. On the other hand, if levels are above this range, addition of nitrogen fertilizer will not increase crop yield. In the corn belt , since about 1975, the predominant method of estimating the amount of nitrogen needed for corn has been the "yield-based" method. A farmer first estimates the yield of corn he intends to produce. He then applies 1.1 to 1.4 lbs of nitrogen per bushel of expected yield. [ 3 ] ISNT represents an alternative approach to managing nitrogen application. However, ISNT does not offer a simple answer as to the amount of nitrogen fertilizer that is needed, or as to the optimal form of that fertilizer. In field trials in Illinois, some fields have been found to be under-fertilized when managed according to the "yield-based" method, as judged by the ISNT. In the majority of trials, however, the yield-based method calls for the addition of nitrogen far in excess of the levels needed for optimal crop production. This nitrogen, which is applied by farmers at great cost, does not find its way into the crop, but is lost to the atmosphere or leaches into waterways. Within the corn belt, stalks and other crop residues are left in the field with the intention of enhancing the amount of organic material in the soil. Excessive nitrogen application, however, appears to promote the rapid decomposition of organic matter in the soil, resulting in release of carbon dioxide . [ 4 ] As a result, the amount of organic material in soils managed according to the yield-based method in the corn belt appears to be decreasing in spite of the large amounts of crop residues left in the fields.
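As a rough illustration of the two approaches described above, the following sketch compares the yield-based rule of thumb with an ISNT-style threshold check. The function names, the 1.2 lb/bu midpoint, and the example field values are illustrative assumptions, not values prescribed by the test itself.

```python
def yield_based_n_rate(expected_yield_bu: float, lbs_n_per_bushel: float = 1.2) -> float:
    """Yield-based rule of thumb: 1.1-1.4 lb of nitrogen per bushel of expected corn yield
    (a 1.2 lb/bu midpoint is assumed here)."""
    return expected_yield_bu * lbs_n_per_bushel

def isnt_suggests_fertilizer(amino_sugar_n_mg_per_kg: float,
                             optimal_range=(225.0, 240.0)) -> bool:
    """ISNT-style interpretation: added nitrogen is indicated only if the measured
    amino sugar nitrogen falls below the optimal range (~225-240 mg/kg for corn)."""
    return amino_sugar_n_mg_per_kg < optimal_range[0]

# Hypothetical field: 200 bu/acre expected yield, ISNT reading of 250 mg/kg.
print(yield_based_n_rate(200.0))         # ~240 lb N/acre under the yield-based method
print(isnt_suggests_fertilizer(250.0))   # False: added N would not be expected to raise yield
```

With these assumed numbers, the yield-based method prescribes roughly 240 lb of nitrogen per acre, while the ISNT reading suggests that none of it would increase yield, which is the kind of discrepancy reported in the Illinois field trials.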
https://en.wikipedia.org/wiki/Illinois_Soil_Nitrogen_Test
Illinois Solar Decathlon (ISD) is an interdisciplinary organization based in Champaign-Urbana, IL and is the official Solar Decathlon team for the University of Illinois at Urbana-Champaign . [ 1 ] It is also closely affiliated with the Illinois School of Architecture. [ 2 ] Illinois Solar Decathlon was formed in 2007 after the university's involvement in three past Solar Decathlon competitions. [ 2 ] Starting in 2002, the US Department of Energy has sponsored a biennial competition in which twenty universities from around the world design and construct an innovative and efficient net-zero home. [ 3 ] The homes are transported to a central location and, over the course of a week, are judged on ten different criteria, including energy balance, architecture, communications, and market appeal. [ 3 ] As the Solar Decathlon's reach grew, Europe began holding its own Solar Decathlon competitions in 2007, with China following in 2013. [ 4 ] The RSO works to recruit future team members, maintain sponsor relations, and help maintain the three past homes that are near campus. ISD also aims to educate the campus and community about the importance of sustainable, energy-efficient homes and hopes to provide a strong foundation for future Illinois Decathlon teams. [ 5 ] ISD has focused on interdisciplinary teamwork and collaboration, with students in architecture, engineering and related disciplines working together on various projects. [ 6 ] ISD is constantly searching for new members, and there are many opportunities for involvement, including working on past homes, marketing efforts, or being part of the next design team. The Solar Decathlon competitions are intended to serve a “research goal of reducing the cost of solar-powered homes and advancing solar technology,” according to the Solar Decathlon 2007 (SD07). [ 7 ] Illinois Solar Decathlon began its involvement with the Solar Decathlon competitions in 2007; that year's event was the third competition held and the first in which the University of Illinois competed. [ 7 ] The concept for the University of Illinois' Solar Decathlon Team's 2007 house is a flexible modular system that can provide utility-independent housing for temporary or seasonal use. [ 8 ] Beyond the goal of winning the competition, Illinois hopes that its final product can serve as a model for emergency housing situations such as those experienced by citizens of the Gulf Coast as a result of Hurricane Katrina in 2005. Immediately following success in the 2007 competition, ISD began work on the 2009 competition entry. The primary concept of ISD's 2009 entry, named Gable Home, is to create a synthesis between innovative technology and vernacular Midwestern architecture. [ 9 ] This synthesis results in a synergetic relationship between the two, creating an environmentally sustainable home of the future. The design exhibits a strong preference for reused and reclaimed materials over the production of new material. The siding of the house was reclaimed from a barn being deconstructed in Rockford, Illinois. The decking material was salvaged from a demolished grain silo in Champaign, Illinois. Restoring and repurposing material from farm structures strengthened the overall design emphasis on local vernacular architecture. [ 10 ] For the 2011 Solar Decathlon competition, Team Illinois has designed the Re_home.
For rapid assembly after a natural disaster, the solar powered home will demonstrate how environmentally aware living can be brought to the forefront of a community-led recovery effort. [ 11 ] Through a carefully thought out process the Re_home can be pre-constructed and quickly deployed immediately following a natural disaster. When called upon, the house mobilizes quickly from its constructed location towards new communities. Upon arrival the house is assembled within several hours. Once sealed, the home becomes a livable space the day it arrives, providing new shelter for disaster victims. During the days following delivery the rest of the house can be assembled by members of the community. Pre-installed adjustable solar panels provide renewable energy quickly, and prefabricated modular decks, planters, and canopies ease installation of all exterior elements of the home. [ 12 ] The 2011 competition team was led by Dr. Xinlei Wang, Ph.D. and Mark Taylor, AIA. [ 13 ] To complement the leadership, members from the College of Agriculture, Consumer, and Environmental Sciences (ACES) joined the team, including David Weightman, James Anderson, Joe Harper, Sarah Taylor Lovell. [ 13 ] The 2013 Solar Decathlon Competition featured the first ever Solar Decathlon competition held in China. [ 14 ] For the 2013 competition, Illinois Solar Decathlon from the University of Illinois at Urbana-Champaign designed their project in collaboration with Peking University in Beijing. The international team hopes that their house will increase public awareness for solar technology and promote low-carbon development. [ 14 ] Etho is centered around a feeling of solace; an urban oasis. It meets the Chinese market’s need for a refuge from overcrowded cities and inspires the Chinese public to see the potential in a high-quality sustainability. [ 15 ] Ethos’s design philosophy is centered on creating a better future, creating a link to China’s rich past, and sculpting a home perfectly suited to meet the needs of its inhabitants. Designed for the next generation of young families as a sustainable getaway, this solar-powered home will demonstrate how educating one influential demographic can help to spread awareness of environmental sustainability and energy. [ 15 ] Its design emphasizes natural daylighting and maximizing solar gains on the PV arrays. Similar to the 2011 competition, the 2013 competition team was also led by Dr. Xinlei Wang, Ph.D. and Mark Taylor, AIA. [ 16 ] Mike McCully, a professor at the Illinois School of Architecture, joined the leadership and assisted in the architectural aspects of the project. [ 16 ] Starting in fall of 2014, Illinois Solar Decathlon began its involvement in the U.S. Department of Energy's Race to Zero Competition. In the fall of 2014, ISD commenced work on the 2015 Race to Zero Competition. Previous 2013 Solar Decathlon member, Matthew McClone, LEED AP BD+C, who worked on the 2013 SD project, Etho, was selected to lead the 2015 Race to Zero Competition. The architectural aspects of the project were headed by Ryan Christiansen. Other notable team members include Priscilla Zhang, LEED GA, Robert Moy, Assoc. AIA, Sean Killarney, LEED GA, and Kasey Colombani. The concept of ISD's 2015 Project was a theoretical deep energy retrofit based on a cottage located in Allerton Park, located in Monticello, Illinois. [ 17 ] The name of the project is called Sun Catcher Cottage. [ 17 ] The project was then presented in Golden, CO on April 18–20, 2015. 
Illinois Solar Decathlon was named a Grand Winner Finalist in the competition. [ 18 ] [ 19 ] In the fall of 2015, ISD returned to enter the 2016 Race to Zero competition. For the 2016 competition, several key members of the previous year's 2015 competition team transferred to focus their work on the 2017 Solar Decathlon competition. Amir Amizadeh, Assoc. AIA, LEED GA and Vasco Chan were selected as the Lead Project Manager and Assistant Project Manager, respectively. [ 20 ] Previous 2015 R20 architecture team member Robert Moy, Assoc. AIA was promoted to Competition Lead and one of the Project Managers for the 2016 competition, while previous 2015 R20 member Priscilla Puchun Zhang was promoted to Architecture Team Lead. [ 20 ] The team’s 2016 contest entry proposed a deep retrofit of an existing 3-story, ~4,000 sq. ft. student apartment building located in Urbana, IL near the campus of the University of Illinois at Urbana-Champaign. The new proposal for the 1920s three-story building has eight rental units, featuring two, three or four bedrooms each. [ 5 ] The team aimed for an overall integrated design, seamlessly unifying building systems such as lighting, HVAC, architectural details, materials and finishes. [ 6 ] The ISD team won 2nd place overall in the 2016 competition, which was held on April 16–17, 2016. [ 21 ] Judges during the 2016 Race to Zero Competition commended ISD for their use of an existing maintenance shaft that was converted to allow natural ventilation. [ 22 ] Following success in the 2015 and 2016 Race to Zero competitions, members of ISD reconvened to enter the 2017 Competition. In the 2017 competition, the team will be led by Lead Project Manager Robert Moy, Assoc. AIA and Michael Najder, Assoc. AIA, LEED GA. [ 2 ]
https://en.wikipedia.org/wiki/Illinois_Solar_Decathlon
The Illinois Structural Health Monitoring Project (ISHMP) is a structural health monitoring project devoted to researching and developing hardware and software systems to be used for distributed real-time monitoring of civil infrastructure . [ 1 ] The project focuses on monitoring bridges , and aims to reduce the cost and installation effort of structural health monitoring equipment. [ 2 ] It was founded in 2002 by Professor Bill F. Spencer and Professor Gul Agha of the University of Illinois at Urbana–Champaign . [ 3 ] The project aims to minimize the cost of monitoring structures by developing low-cost wireless networks of sensor boards, each equipped with an embedded computer . The Illinois Structural Health Monitoring Project also focuses on creating a software toolsuite that can simplify the development of other structural health monitoring devices. [ 2 ] Currently, ISHMP has a wireless sensor network set up on the Jindo Bridge in South Korea . Each sensor board in the network collects a variety of data in real time, and its microcomputer then processes the data and determines the current state of the bridge. [ 4 ] The Illinois Structural Health Monitoring Project was founded in 2002 when Professor Bill F. Spencer, director of the Smart Structures Technology Laboratory, and Professor Gul Agha, director of the Open Systems Laboratory, began a collaborative effort between the two laboratories at the University of Illinois at Urbana–Champaign . [ 1 ] [ 3 ] The project aims to develop reliable wireless hardware and software for distributed real-time structural health monitoring of various infrastructure using multiple sensors on a single structure. Each sensor's data, corresponding to a specific region on the structure, is used to assess the overall health of the structure. [ 1 ] The Illinois Structural Health Monitoring Project's underlying goal is to minimize the cost of infrastructure inspections through the use of inexpensive and reliable wireless sensor arrays, significantly reducing the need for physical human inspection. Its main focus has been to monitor bridges using sensor networks. While other, wired bridge monitoring systems require excessive amounts of cable and many man-hours to install, a wireless sensor network is much less expensive to install. [ 2 ] The Illinois Structural Health Monitoring Project receives support from the National Science Foundation , Intel Corporation , and the Vodafone -U.S. Foundation Graduate Fellowship. [ 1 ] Instead of using a single centralized point for collecting data from every sensor in a network, the ISHMP uses sensor platforms with embedded computers , such as Intel's Imote2. The Illinois Structural Health Monitoring Project has designed, developed, and tested various sensors that can stack onto these embedded computers and sense data such as vibration, humidity, and wind speed. Because the boards can use various power-harvesting devices, there is no need to wire the sensors to an electrical network. Initial tests of the sensor systems were run on a scale model of a truss bridge . [ 2 ] [ 5 ] The Illinois Structural Health Monitoring Project has also developed an open source toolsuite that contains a software library of customizable services for structural health monitoring platforms. This simplifies the development of structural health monitoring applications for other sensor systems.
[ 2 ] [ 4 ] In 2008, a dense array of sensors was deployed on the Jindo Bridge in South Korea and was the first dense deployment of a wireless sensor network on a cable-stayed bridge . [ 4 ] This new bridge monitoring system is fully autonomous , and sends out an e-mail when a problem arises. The system wakes up for a few minutes at a time to collect, analyze, and send data, in order to conserve battery power. [ 5 ] The ISHMP has developed a software toolsuite with open source services needed for structural health monitoring applications on a network of Intel 's Imote2 smart sensors. These services include application services, application tools, and utilities. The application services allow for the implementation of structural health monitoring algorithms on the Imote2 and include tests for both the PC and Imote2. The tools allow for data collection from the sensors on the network, perform damage detection on the structure, and test for radio communication quality. [ 6 ] The ISHMP has also developed a sensor board that is produced by MEMSIC. It is designed to work with the Imote2 smart sensor platform, and is optimized for structural health monitoring applications. The sensor board provides the information output required to comprehend the data collected by the individual sensors. It includes a three axis accelerometer , a light sensor , a temperature sensor , and humidity sensors. It can also accommodate one additional external analog input signal. [ 7 ]
https://en.wikipedia.org/wiki/Illinois_Structural_Health_Monitoring_Project
In photometry , illuminance is the total luminous flux incident on a surface, per unit area . [ 1 ] It is a measure of how much the incident light illuminates the surface, wavelength-weighted by the luminosity function to correlate with human brightness perception. [ 2 ] Similarly, luminous emittance is the luminous flux per unit area emitted from a surface. Luminous emittance is also known as luminous exitance . [ 3 ] [ 4 ] In SI units illuminance is measured in lux (lx), or equivalently in lumens per square metre (lm·m⁻²). [ 2 ] Luminous exitance is measured in lm·m⁻² only, not lux. [ 4 ] In the CGS system, the unit of illuminance is the phot , which is equal to 10 000 lux . The foot-candle is a non-metric unit of illuminance that is used in photography . [ 5 ] Illuminance was formerly often called brightness , but this leads to confusion with other uses of the word, such as to mean luminance . "Brightness" should never be used for quantitative description, but only for nonquantitative references to physiological sensations and perceptions of light. The human eye is capable of seeing somewhat more than a 2 trillion-fold range. The presence of white objects is somewhat discernible under starlight, at 5×10⁻⁵ lux (50 μlx), while at the bright end, it is possible to read large text at 10⁸ lux (100 Mlx), or about 1000 times that of direct sunlight , although this can be very uncomfortable and cause long-lasting afterimages . [ citation needed ] In astronomy , the illuminance stars cast on the Earth's atmosphere is used as a measure of their brightness. The usual units are apparent magnitudes in the visible band. [ 7 ] V-magnitudes can be converted to lux using the formula [ 8 ] $E_{\mathrm{v}} = 10^{(-14.18 - m_{\mathrm{v}})/2.5}$, where $E_{\mathrm{v}}$ is the illuminance in lux, and $m_{\mathrm{v}}$ is the apparent magnitude. The reverse conversion is $m_{\mathrm{v}} = -14.18 - 2.5\log(E_{\mathrm{v}})$. The luminance of a reflecting surface is related to the illuminance it receives: $\int_{\Omega_\Sigma} L_{\mathrm{v}}\,\mathrm{d}\Omega_\Sigma \cos\theta_\Sigma = M_{\mathrm{v}} = E_{\mathrm{v}} R$, where the integral covers all the directions of emission $\Omega_\Sigma$, $L_{\mathrm{v}}$ is the luminance of the surface, $\theta_\Sigma$ is the angle between the direction of emission and the surface normal, $M_{\mathrm{v}}$ is the luminous exitance, $E_{\mathrm{v}}$ is the illuminance received, and $R$ is the reflectance. In the case of a perfectly diffuse reflector (also called a Lambertian reflector ), the luminance is isotropic, per Lambert's cosine law . Then the relationship is simply $L_{\mathrm{v}} = \dfrac{E_{\mathrm{v}} R}{\pi}$.
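The magnitude–illuminance conversion above is simple to apply in code; the following is a minimal sketch. The function names are hypothetical, and the reference magnitudes used in the examples (the Sun at about −26.7, a zero-magnitude star such as Vega) are approximate textbook values rather than figures from the text.

```python
import math

def magnitude_to_lux(m_v: float) -> float:
    """Illuminance in lux produced by a star of apparent V-magnitude m_v (formula above)."""
    return 10 ** ((-14.18 - m_v) / 2.5)

def lux_to_magnitude(e_v: float) -> float:
    """Inverse conversion: apparent V-magnitude corresponding to an illuminance in lux."""
    return -14.18 - 2.5 * math.log10(e_v)

print(f"{magnitude_to_lux(-26.74):.3g} lx")   # the Sun (m_v ~ -26.74): roughly 1e5 lx
print(f"{magnitude_to_lux(0.0):.3g} lx")      # a zero-magnitude star: roughly 2.1 microlux
print(f"{lux_to_magnitude(1.0):.2f}")         # 1 lx corresponds to magnitude -14.18
```

The result for the Sun, about 10⁵ lx, matches the familiar order of magnitude of direct sunlight, which is a quick sanity check on the formula.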
https://en.wikipedia.org/wiki/Illuminance
Illumination problems are a class of mathematical problems that study the illumination of rooms with mirrored walls by point light sources . The original formulation was attributed to Ernst Straus in the 1950s and has been resolved. Straus asked whether a room with mirrored walls can always be illuminated by a single point light source, allowing for repeated reflection of light off the mirrored walls. Alternatively, the question can be stated as asking whether, if a billiard table can be constructed in any required shape, there is a possible shape with a point from which it is impossible to hit a billiard ball at some other point, assuming the ball is point-like and continues infinitely rather than stopping due to friction . The original problem was first solved in 1958 by Roger Penrose using ellipses to form the Penrose unilluminable room . He showed that there exists a room with curved walls that must always have dark regions if lit only by a single point source. The problem was also solved for polygonal rooms by George Tokarsky in 1995 for 2 and 3 dimensions, who showed that there exists an unilluminable polygonal 26-sided room with a "dark spot" which is not illuminated from another point in the room, even allowing for repeated reflections. [ 1 ] These are rare cases in which a finite number of dark points (rather than regions) are unilluminable, and only from a fixed position of the point source. In 1995, Tokarsky found the first polygonal unilluminable room, which had 4 sides and two fixed boundary points. [ 2 ] In 1996 he also found a 20-sided unilluminable room with two distinct interior points. In 1997, two different 24-sided rooms with the same properties were put forward by George Tokarsky and David Castro separately. [ 3 ] [ 4 ] In 2016, Samuel Lelièvre, Thierry Monteil, and Barak Weiss showed that a light source in a polygonal room whose angles (in degrees) are all rational numbers will illuminate the entire polygon, with the possible exception of a finite number of points. [ 5 ] In 2019 this was strengthened by Amit Wolecki, who showed that for each such polygon, the number of pairs of points which do not illuminate each other is finite. [ 6 ]
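To make the billiard formulation concrete, the sketch below (not from the source) folds straight-line motion back into the unit square, which is the standard "unfolding" way to model specular reflections off mirrored walls; the chosen source point, direction, target point, and step size are arbitrary assumptions for illustration.

```python
import math

def fold(u: float) -> float:
    """Fold straight-line motion in the plane back into [0, 1], modelling mirror reflections."""
    r = u % 2.0
    return r if r <= 1.0 else 2.0 - r

def closest_approach(source, direction, target, t_max=2000.0, dt=0.01) -> float:
    """Trace one billiard ray in the unit square and return its closest approach to `target`."""
    (x0, y0), (dx, dy), (tx, ty) = source, direction, target
    best, t = float("inf"), 0.0
    while t < t_max:
        x, y = fold(x0 + dx * t), fold(y0 + dy * t)
        best = min(best, math.hypot(x - tx, y - ty))
        t += dt
    return best

# A single ray with an irrational slope passes ever closer to the target as t_max grows,
# consistent with convex rooms such as rectangles being illuminable from any interior point;
# the unilluminable examples above require non-convex or curved shapes.
print(closest_approach((0.2, 0.3), (1.0, math.sqrt(2)), (0.77, 0.31)))
```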
https://en.wikipedia.org/wiki/Illumination_problem
Illuminationism ( Persian حكمت اشراق hekmat-e eshrāq , Arabic : حكمة الإشراق ḥikmat al-ishrāq , both meaning "Wisdom of the Rising Light"), also known as Ishrāqiyyun or simply Ishrāqi ( Persian اشراق, Arabic : الإشراق, lit. "Rising", as in "Shining of the Rising Sun") is a philosophical and mystical school of thought introduced by Shahab al-Din Suhrawardi ( honorific : Shaikh al-ʿIshraq or Shaikh-i-Ishraq , both meaning "Master of Illumination") in the twelfth century, established with his Kitab Hikmat al-Ishraq (lit: "Book of the Wisdom of Illumination"), a fundamental text finished in 1186. Written with influence from Avicennism , Peripateticism , and Neoplatonism , the philosophy is nevertheless distinct as a novel and holistic addition to the history of Islamic philosophy . While the Ilkhanate - Mongol Siege of Baghdad and the destruction of the House of Wisdom ( Arabic : بيت الحكمة, romanized: Bayt al-Ḥikmah) effectively ended the Islamic Golden Age in 1258, it also paved the way for novel philosophical invention. [ 1 ] One such example is the work of the philosopher Abu'l-Barakāt al-Baghdādī , specifically his Kitāb al-Muʿtabar ("The Book of What Has Been Established by Personal Reflection"); the book's challenges to the Aristotelian norm in Islamic philosophy, along with al-Baghdādī's emphasis on "evident self-reflection" and his revival of the Platonic use of light as a metaphor for phenomena like inspiration, all influenced the philosophy of Suhrawardi. [ 2 ] The philosopher and logician Zayn al-Din Omar Savaji further inspired Suhrawardi with his foundational works on mathematics and his creativity in reconstructing the Organon ; Savaji's two-part logic based on "expository propositions" (al-aqwāl al-šāreḥa) and "proof theory" (ḥojaj) served as the precursory model for Suhrawardi's own "Rules of Thought" (al-Żawābeṭ al-fekr). [ 3 ] Al-Baghdādī and Savaji are two of the three Islamic philosophers mentioned in Suhrawardi's work. Upon finishing his Kitab Hikmat al-Ishraq (lit: "Book of the Wisdom of Illumination"), the Persian [ 4 ] [ 5 ] [ 6 ] [ 1 ] philosopher Shahab al-Din Suhrawardi founded Illuminationism in 1186. The Persian and Islamic school draws on ancient Iranian philosophical disciplines, [ 7 ] [ 8 ] Avicennism ( Ibn Sina 's early Islamic philosophy ), Neoplatonic thought (modified by Ibn Sina), and the original ideas of Suhrawardi. In his Philosophy of Illumination , Suhrawardi argued that light operates at all levels and hierarchies of reality (PI, 97.7–98.11). Light produces immaterial and substantial lights, including immaterial intellects ( angels ), human and animal souls, and even 'dusky substances', such as bodies. [ 9 ] Suhrawardi's metaphysics is based on two principles. The first is a form of the principle of sufficient reason . The second principle is Aristotle's principle that an actual infinity is impossible. [ 10 ] The essential meaning of ishrāq ( Persian اشراق, Arabic : الإشراق) is "rising", specifically referring to the sunrise , though "illumination" is the more common translation. The term has been used in both Arabic and Persian philosophical texts to signify the relation between the " apprehending subject " (al-mawżuʿ al-modrek) and the " apprehensible object " (al-modrak); beyond philosophical discourse, it is a term used in common discussion. Suhrawardi utilized the ordinariness of the word in order to encompass all that is mystical along with an array of different kinds of knowledge, including elhām , meaning personal inspiration.
[ 1 ] None of Suhrawardi's works was translated into Latin, so he remained unknown in the Latin West , although his work continued to be studied in the Islamic East. [ 11 ] According to Hosein Nasr , Suhrawardi was unknown to the west until he was translated into western languages by contemporary thinkers such as Henry Corbin , and he remains largely unknown even in countries within the Islamic world. [ 12 ] Suhrawardi tried to present a new perspective on questions like those of existence. He not only caused peripatetic philosophers to confront such new questions, but also gave new life to the body of philosophy after Avicenna. [ 13 ] According to John Walbridge , Suhrawardi's critiques of Peripatetic philosophy could be counted as an important turning point for his successors. Although Suhrawardi was first a pioneer of Peripatetic philosophy, he later became a Platonist following a mystical experience. He is also counted as one who revived the ancient wisdom in Persia through his philosophy of illumination. His followers, such as Shahrzouri and Qutb al-Din al-Shirazi, tried to continue the way of their teacher. Suhrawardi makes a distinction between two approaches in the philosophy of illumination: one approach is discursive and the other is intuitive. [ 14 ] Illuminationist thinkers in the School of Isfahan played a significant role in revitalizing academic life in the [ 15 ] Safavid Empire under Shah Abbas I (1588–1629). [ 16 ] Avicennan thought continued to inform philosophy during the reign of the Safavid Empire. [ 16 ] Illuminationism was taught in Safavid Madrasas (places of study) established by pious shahs. [ 17 ] Mulla Sadra (Ṣadr ad-Dīn Muḥammad Shīrāzī) was a 17th-century Iranian philosopher who was considered a master [ 18 ] of illuminationism. He wrote a book titled al-Asfār al-Arbaʻah, meaning 'the four journeys', referring to the soul's journey back to Allah. He developed his book into an entire school of thought; he did not refer to al-Asfār as a philosophy but as "wisdom." Sadra taught how one could be illuminated or given wisdom until becoming a sage. [ 19 ] Al-Asfar was one piece of illuminationism which is still an active part of Islamic philosophy today. It was representative of Mulla Sadra's entire philosophical worldview. [ 20 ] Like many important Arabic works, it is difficult for the Western world to understand because it has not been translated into English. Mulla Sadra eventually became the most significant teacher at the religious school known as Madrasa-yi Khan. [ 16 ] His philosophies are still taught throughout the Islamic East and South Asia. [ 16 ] Al-Asfar is Mulla Sadra's book explaining his view of illuminationism. He views problems starting with a Peripatetic sketch . [ 21 ] This Aristotelian style of teaching is reminiscent of the Islamic Golden Age philosopher Avicenna . Mulla Sadra often refers to the Qur'an when dealing with philosophical problems. He quotes Qur'anic verses while explaining philosophy. He wrote exegeses of the Qur'an such as his explanation of Āyat al-Kursī . Asfār means 'journeys'; al-Asfār describes a journey to gain wisdom. Mulla Sadra used philosophy as a set of spiritual exercises to become wiser. [ 22 ] In Mulla Sadra's book The Transcendent Philosophy of the Four Journeys of the Intellect he describes the four journeys of
https://en.wikipedia.org/wiki/Illuminationism
Illusion of validity is a cognitive bias in which a person overestimates their ability to interpret and predict accurately the outcome when analyzing a set of data , in particular when the data analyzed show a very consistent pattern—that is, when the data "tell" a coherent story. [ 1 ] [ 2 ] This effect persists even when the person is aware of all the factors that limit the accuracy of their predictions , that is, when the data and/or methods used to judge them lead to highly fallible predictions. [ 2 ] Daniel Kahneman , Paul Slovic , and Amos Tversky explain the illusion as follows: "people often predict by selecting the output...that is most representative of the input....The confidence they have in their prediction depends primarily on the degree of representativeness ...with little or no regard for the factors that limit predictive accuracy. Thus, people express great confidence in the prediction that a person is a librarian when given a description of his personality which matches the stereotype of librarians, even if the description is scanty, unreliable, or outdated. The unwarranted confidence which is produced by a good fit between the predicted outcome and the input information may be called the illusion of validity." [ 3 ] Consistent patterns may be observed when input variables are highly redundant or correlated , which may increase subjective confidence. However, a number of highly correlated inputs should not increase confidence much more than only one of the inputs; instead, higher confidence should be merited when a number of highly independent inputs show a consistent pattern. [ 2 ] This bias was first described by Amos Tversky and Daniel Kahneman in their 1973 paper " On the Psychology of Prediction ". [ 1 ] In a 2011 article, Kahneman recounted the story of his discovery of the illusion of validity. After completing an undergraduate psychology degree and spending a year as an infantry officer in the Israeli Army, he was assigned to the army's Psychology Branch, where he helped evaluate candidates for officer training using a test called the Leaderless Group Challenge. Candidates were taken to an obstacle field and assigned a group task so that Kahneman and his fellow evaluators could discern their individual leadership qualities or lack thereof. But although Kahneman and his colleagues emerged from the exercise with very clear judgments as to who was and wasn't a potential leader, their forecasts proved "largely useless" in the long term. Comparing their original evaluations of candidates with the judgments of their officer-training school commanders months later, Kahneman and his colleagues found that their own "ability to predict performance at the school was negligible. Our forecasts were better than blind guesses, but not by much." Yet when asked again to assess yet another group of candidates, their judgments were as clear as before. "The dismal truth about the quality of our predictions," recalled Kahneman, "had no effect whatsoever on how we evaluated new candidates and very little effect on the confidence we had in our judgments and predictions." Kahneman found this striking: "The statistical evidence of our failure should have shaken our confidence in our judgments of particular candidates, but it did not. It should also have caused us to moderate our predictions, but it did not." Kahneman named this cognitive fallacy "the illusion of validity".
Decades later, Kahneman reflected that at least part of the reason for his and his colleagues' failure in assessing the officer candidates was that they had been confronted with a difficult question but had, without realizing it, answered an easier one instead. "We were required to predict a soldier's performance in officer training and in combat, but we did so by evaluating his behavior over one hour in an artificial situation. This was a perfect instance of a general rule that I call WYSIATI, 'What you see is all there is.' We had made up a story from the little we knew but had no way to allow for what we did not know about the individual’s future, which was almost everything that would actually matter." [ 4 ] Comparing the results of 25 wealth advisers over an eight-year period, Kahneman found that none of them stood out consistently as better or worse than the others. "The results," as he put it, "resembled what you would expect from a dice-rolling contest, not a game of skill." Yet at the firm for which all these advisers worked, no one seemed to be aware of this: "The advisers themselves felt they were competent professionals performing a task that was difficult but not impossible, and their superiors agreed." Kahneman informed the firm's directors that they were "rewarding luck as if it were skill." The directors believed this, yet "life in the firm went on just as before." The directors clung to the "illusion of skill," as did the advisers themselves. [ 4 ] The scientist Freeman Dyson has recalled his experience as a statistician during World War II, performing an analysis of the operations of RAF Bomber Command. At the time, an officer argued that because of the heavy gun turrets they carried, the bombers were too slow and could not fly high enough to avoid being shot down. He suggested they remove the turrets and gunners. But the commander in chief rejected the suggestion – because, said Dyson, "he was blinded by the illusion of validity." He was not alone: everyone in the command "saw every bomber crew as a tightly knit team of seven, with the gunners playing an essential role defending their comrades against fighter attack." Part of this illusion "was the belief that the team learned by experience. As they became more skillful and more closely bonded, their chances of survival would improve." Yet statistics, Dyson found, proved that all this was an illusion: deaths occurred randomly, having nothing to do with experience. Members of the bomber command, he realized, were dying unnecessarily because everyone was taken in by an illusion. [ 5 ] In 2014, an article in Rolling Stone presented as fact an accusation of rape at the University of Virginia that proved to be false. Rolling Stone's writers and editors, the university president and other administrators, and many U.Va. students were quick to believe the false charges. Harlan Loeb later explained this as an example of the illusion of validity in action. [ 6 ] In 2012, a sportswriter who described Kahneman as his "favorite scientist" wrote: "The illusion of validity is why I get deeply suspicious whenever a fan, sportswriter, coach, or GM says anything to the effect of 'the numbers don’t tell the whole story.' This is, in fact, true, but what the person saying this usually means is 'I don't care what the numbers say because I am convinced that what I have seen is correct.' Which is, thanks to this illusion, almost never true.
If I make an argument that the data says a player isn't good, and someone points out 'Yes, but if you watch the games you will notice that this year they are only shooting threes from the slot, and rarely from the corner, where he used to excel,' then that person is pointing out a hole in the data that's worth investigating. If the argument is along the lines of 'anyone who's watching him can clearly see he's much better than that,' then I'm certain the illusion of validity is doing its dirty work." [ 7 ] In a 1981 paper, J. B. Bushyhead and J. J. Christensen-Szalanski studied data from an outpatient clinic showing that doctors there ordered chest radiographs only on patients who manifested clinical attributes linked to some pneumonia cases, rather than on patients manifesting clinical attributes associated with all pneumonia cases. They attributed this behavior to the illusion of validity. [ 8 ] Other cases where this phenomenon appears include job interviews , wine tasting , prediction of prices in international financial markets, and political strategy . [ 9 ] [ 10 ] The illusion of validity may be caused in part by confirmation bias and/or the representativeness heuristic , and could in turn cause the overconfidence effect . [ 11 ] [ how? ] Among the factors contributing to the illusion of validity, according to Meinolf Dierkes, Ariane Berthoin Antal, John Child, and Ikujiro Nonaka, are "a person's tendency to register the frequency of events more than their probability "; "the impossibility of gathering information about alternative assumptions if action is based on a hypothesis"; a "disregard of base-rate information"; and "the self-fulfilling prophecy ," or "a behavior manifested in individuals or groups because it was expected." [ 12 ] If one wishes to try to avoid the illusion of validity, according to Kahneman, one should ask two questions: Is the environment in which the judgment is made sufficiently regular to enable predictions from the available evidence ? The answer is yes for diagnosticians, no for stock pickers. Do the professionals have an adequate opportunity to learn the cues and the regularities? The answer here depends on the professionals' experience and on the quality and speed with which they discover their mistakes. While many professionals "easily pass both tests," meaning that their "off-the-cuff judgments" are of value, in general judgments by "assertive and confident people" should be taken with a grain of salt "unless you have independent reason to believe that they know what they are talking about." This can be difficult, however, because "overconfident professionals...act as experts and look like experts," and it can be a "struggle to remind yourself that they may be in the grip of an illusion." [ 4 ] In his article on the false rape case at the University of Virginia, Harlan Loeb outlined an approach to avoiding the illusion of validity in cases which, like that one, involve “a highly emotional and personal issue that has national resonance, high-profile media coverage and an organization already on the defensive with recent issues.” He advised, first: "Always challenge (appropriately, of course) facts and assumptions that many rely on to inform their thinking and decision-making about risk and crisis management ."
Second: "In situations with palpable unknowns , where the illusion of validity in decision-making is a material threat, push hard to do research, polling and active listening to help identify the levers and pulleys that shape the operating and environmental realities of the present risk." Third: "Determine existing organizational challenges that will prevent leadership from making decisions consistently, effectively, and quickly in the face of uncertainty." Fourth: "Be aware of how current actions could dictate future strategy." Fifth: "Be ready to take on current risk to manage future risk." [ 6 ] Phil Thornton has offered the following advice for avoiding the illusion of validity in the financial sector. First, "remember that just because previous generations were successful following certain approaches, replicating their actions may not necessarily be a good idea." Second, "remember that the consequences of decisions being wrong can be more important than the probability of them being correct." [ 13 ]
https://en.wikipedia.org/wiki/Illusion_of_validity
Illusion optics is an electromagnetic theory that can change the optical appearance of an object to be exactly like that of another virtual object, i.e. an illusion , such as turning the look of an apple into that of a banana. Invisibility is a special case of illusion optics, which turns objects into illusions of free space. The concept and numerical proof of illusion optics were proposed in 2009, based on transformation optics in the field of metamaterials . [ 1 ] It is a scientific disproof of the idiom 'Seeing is Believing'. [ 2 ] Illusion optics proves that the optical responses or properties of a space containing any objects can be changed to be exactly those of a virtual space containing arbitrary virtual objects (illusions) by using a passive illusion optics device composed of materials or metamaterials with specific parameters and shape. For example, a dielectric spoon was numerically shown to exhibit the scattering properties of a metallic cup by using an illusion optics device in the seminal paper. [ 1 ] Such illusion effects do not rely on the direction and form of incident waves. However, due to the dispersion limitations of the required material parameters, an illusion optics device only works in a narrow band of frequency. Unlike optical illusions, which utilize the misinterpretation of the human brain to create an illusionary perception different from the physical measurement, illusion optics changes the optical response or properties of objects. Illusion optic devices make these changes happen. Although both terms deal with illusions , illusion optics deals with the refraction and reflection of light, whereas optical illusions are essentially mind tricks. Illusion optics was first recorded in 1968, when Soviet physicist Victor Veselago found that he could make objects appear in different places using a negatively refracting flat slab. [ 2 ] When light is negatively refracted, the refracted ray is bent back towards the side from which it entered, on the same side of the normal as the incident ray; in normal refraction, the ray crosses to the other side of the normal. Veselago used this theory to treat the slab as a lens, which he recorded in his experiments. He found that, unlike with a normal lens, the object's resolution does not depend on the limits of the wavelengths passing through the lens. Veselago's work has become more prominent in recent years due to advances in metamaterials, which are engineered materials that have special internal physical properties and have the ability to negatively refract light. An illusion device is how illusion optics is realized: without a device there is no way to control how light is refracted and deflected. Based on a study of circular objects with illusion-optic properties (i.e. negative refractive indexes), an illusion device involves three basic elements: the invisibility cloak, the real object, and the illusion object. [ 3 ] The invisibility cloak is essentially the medium in which light waves are refracted. Invisibility cloaks allow an object to go undetected while confined within the area of the cloak. In other words, the viewer does not see the real object. In illusion optics, devices are not limited to invisibility cloaks. For example, in Veselago's experiments, a lens was used to steer the eye away from the real object and direct it towards the illusion object. The real object refers to any object whose light the device refracts.
In this case, while the real object is under the invisibility cloak, light waves are directed around it so the viewer only sees past the cloak. In Veselago's experiments, the real object is being refracted so the viewer sees a mirrored view of it. The illusion object is how the light waves come together and produce what the viewer sees as “normal.” The invisibility cloak refracts the reflected background light around the object and directs it to the viewer. The viewer only perceives there to be a background. With Veselago's experiments, the illusion object is displayed, but it is only an image and is not the real object. Artificial metamaterials are important to how illusion optic devices are created. The properties of these materials allow them to bend light waves negatively, as they have negative permittivity and negative permeability. [ 4 ] There are two metamaterial components with different properties: the complementary medium and the restoring medium. The complementary medium is the illusion medium used to scatter wavelengths away from the object that is being refracted. The restoring medium focuses waves and directs scattered waves together. Transformation optics is important to creating metamaterials. The intermolecular geometry used in this field is crucial to creating the material properties.
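As a small illustration of the negative refraction that underlies Veselago's slab, the sketch below applies Snell's law with a negative refractive index; the function name and the chosen indexes and angle are illustrative assumptions, not values from the text.

```python
import math

def refraction_angle_deg(theta1_deg: float, n1: float, n2: float) -> float:
    """Snell's law n1*sin(theta1) = n2*sin(theta2); a negative n2 yields a negative angle,
    i.e. a refracted ray on the same side of the normal as the incident ray."""
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta1_deg)) / n2))

print(refraction_angle_deg(30.0, 1.0, 1.5))    # ordinary glass: about +19.5 degrees
print(refraction_angle_deg(30.0, 1.0, -1.0))   # Veselago slab (n = -1): -30 degrees
```

The sign flip for n = −1 is what allows a flat slab of such material to act as a lens, refocusing rays from an object on one side to an image on the other.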
https://en.wikipedia.org/wiki/Illusion_optics
Illustra was a commercialized version of the Postgres object-relational database management system ( DBMS ) sold by Illustra Information Technologies, a company founded in 1992 and formed by Michael Stonebraker , Gary Morgenthaler and several of Michael Stonebraker's current and former students, including Wei Hong, Jeff Meredith, Michael Olson, Paula Hawthorn , Jeff Anton, Cimarron Taylor and Michael Ubell. The technology's extensibility model centered on DataBlade modules that defined types and associated index methods, operators, and functions for purposes and data domains that included Web publishing, search and manipulation of text, and management of geospatial information. It enabled all kinds of structured and unstructured multimedia data types to be stored as true objects in existing databases, and not just as parcels of data with object wrappers a la Oracle Corp. [ 1 ] In 1995, NASA decided Illustra would be the right tool to store and manipulate millions of satellite photographs. The only stumbling block was the company size: with only 150 employees, Illustra didn't have the manpower or the scale to support the NASA project. [ 2 ] The company was sold to Informix Corporation in 1996 for $400 million in stock, 40 times revenue. Stonebraker's share was $6.5 million, and he became CTO of Informix after the merger, a position he held until September 2000. [ 2 ] [ 3 ] The technology was folded into the Informix 7 OnLine product line, shipped in December 1996, leading eventually to the creation of the unified Informix Universal Server (IUS) product line, or more generally, Version 9. [ 4 ] The entire Informix product line was sold to IBM, which continued to extend Informix, offering several editions for use under various license metrics (including two editions which are free of charge). In April 2017, IBM delegated active development and support to HCL Technologies for 15 years while keeping part of the marketing responsibilities. [ 5 ]
https://en.wikipedia.org/wiki/Illustra
An illustration is a decoration, interpretation, or visual explanation of a text, concept, or process, [ 1 ] designed for integration in print and digitally published media, such as posters , flyers , magazines, books, teaching materials, animations , video games and films . An illustration is typically created by an illustrator . Digital illustrations are often used to make websites and apps more user-friendly, such as the use of emojis to accompany digital type. [ 2 ] Illustration also means providing an example; either in writing or in picture form. The origin of the word "illustration" is late Middle English (in the sense ‘illumination; spiritual or intellectual enlightenment’): via Old French from Latin illustratio (n-), from the verb illustrare . [ 3 ] Contemporary illustration uses a wide range of styles and techniques, including drawing , painting , printmaking , collage , montage , digital design , multimedia , 3D modelling . Depending on the purpose, illustration may be expressive, stylised, realistic, or highly technical. Specialist areas [ 4 ] include: Technical and scientific illustration communicates information of a technical or scientific nature. This may include exploded views , cutaways , fly-throughs, reconstructions, instructional images, component designs, diagrams . The aim is "to generate expressive images that effectively convey certain information via the visual channel to the human observer". [ 5 ] Technical and scientific illustration is generally designed to describe or explain subjects to a nontechnical audience, so it must provide "an overall impression of what an object is or does, to enhance the viewer's interest and understanding." [ 6 ] In contemporary illustration practice, 2D and 3D software is often used to create accurate representations that can be updated easily and reused in a variety of contexts. There is a Guild of Natural Science Illustrators [ 7 ] and Association of Medical Illustrators. [ 8 ] The Association of Medical Illustrators states that the median salary is $70,650, while for science illustrators it is $72,277. [ 9 ] Types of jobs range from research institutes to museums to animation. [ 10 ] In the art world, illustration has at times been considered of less importance than graphic design and fine art . [ citation needed ] Today, however, due in part to the growth of the graphic novel and video game industries, as well as increased use of illustration in magazines and other publications, illustration is now becoming a valued art form, capable of engaging a global market. [ citation needed ] Original illustration art has been known to attract high prices at auction. The US artist Norman Rockwell 's painting "Breaking Home Ties" sold in a 2006 Sotheby's auction for US$15.4 million. [ 11 ] Many other illustration genres are equally valued, with pinup artists such as Gil Elvgren and Alberto Vargas , for example, also attracting high prices. Historically, the art of illustration is closely linked to the industrial processes of printing and publishing . The illustrations of medieval codices were known as illuminations , and were individually hand-drawn and painted. With the invention of the printing press during the 15th century, books became more widely distributed, and often illustrated with woodcuts . [ 12 ] [ 13 ] Some of the earliest illustrations come from the time of ancient Egypt (Khemet) often as hieroglyph . A classic example of illustrations exists from the time of The Tomb of Pharaoh Seti I , c. 
1294 BC to 1279 BC, who was father of Ramses II , born 1303 BC. The 1600s in Japan saw the origin of Ukiyo-e , an influential illustration style characterised by expressive line, vivid colour and subtle tones, resulting from the ink-brushed wood block printing technique. Subjects included traditional folk tales, popular figures and everyday life. Hokusai 's The Great Wave off Kanagawa is a famous image of the time. During the 16th and 17th centuries in Europe, the main reproduction processes for illustration were engraving and etching . In 18th-century England, a notable illustrator was William Blake (1757–1827), who used relief etching . By the early 19th century, the introduction of lithography substantially improved reproduction quality. In Europe, notable figures of the early 19th century were John Leech , George Cruikshank , Dickens illustrator Hablot Knight Browne , and, in France, Honoré Daumier . All contributed to both satirical and "serious" publications. At this time, there was a great demand for caricature drawings encapsulating social mores, types and classes. The British humorous magazine Punch (1841–2002) built on the success of Cruikshank's Comic Almanac (1827–1840) and employed many well-regarded illustrators, including Sir John Tenniel , the Dalziel Brothers , and Georges du Maurier . Although all were trained in fine art, their reputations were gained primarily as illustrators. Historically, Punch was most influential in the 1840s and 1850s. The magazine was the first to use the term " cartoon " to describe a humorous illustration, and its widespread use led to John Leech being known as the world's first " cartoonist ". [ 14 ] In common with similar magazines such as the Parisian Le Voleur , Punch realised good illustration sold as well as good text. With publication continuing into the 21st century, Punch chronicles a gradual shift in popular illustration, from reliance on caricature to sophisticated topical observation. From the early 1800s, newspapers , mass-market magazines , and illustrated books became the dominant consumer media in Europe and the New World. By the 19th century, developments in printing technology freed illustrators to experiment with color and rendering techniques. These developments in printing affected all areas of literature, from cookbooks, photography books and travel guides to children's books. Also, due to advances in printing, it became more affordable to produce color photographs within books and other materials. [ 15 ] By 1900, almost 100 percent of paper was machine-made, and while a person working by hand could produce 60–100 lbs of paper per day, mechanization yielded around 1,000 lbs per day. [ 16 ] Additionally, in the 50-year period between 1846 and 1916, book production increased 400% and the price of books was cut in half. [ 16 ] In America , this led to a "golden age of illustration" from before the 1880s until the early 20th century. A small group of illustrators became highly successful, with the imagery they created considered a portrait of American aspirations of the time. [ 17 ] Among the best-known illustrators of that period were N.C. Wyeth and Howard Pyle of the Brandywine School , James Montgomery Flagg , Elizabeth Shippen Green , J. C. Leyendecker , Violet Oakley , Maxfield Parrish , Jessie Willcox Smith , and John Rea Neill . In France, in 1905, the Contemporary Book Society commissioned Paul Jouve to illustrate Rudyard Kipling's Jungle Book .
Paul Jouve would devote ten years to the 130 illustrations of this book, which remains one of the masterpieces of bibliophilia. [ 18 ]
https://en.wikipedia.org/wiki/Illustration
There is a strong scientific consensus that the greenhouse effect due to carbon dioxide is a main driver of climate change . Following is an illustrative model meant for pedagogical purposes, showing the main physical determinants of the effect. Under this understanding, global warming is determined by a simple energy budget: in the long run, Earth emits radiation in the same amount as it receives from the sun. However, the amount emitted depends both on Earth's temperature and on its albedo : the more reflective the Earth is in a certain wavelength , the less radiation it will both receive and emit in this wavelength; the warmer the Earth, the more radiation it emits. Thus changes in the albedo may have an effect on Earth's temperature, and the effect can be calculated by assuming a new steady state will be arrived at. In most of the electromagnetic spectrum , atmospheric carbon dioxide either blocks the radiation emitted from the ground almost completely, or is almost transparent, so that increasing the amount of carbon dioxide in the atmosphere, e.g. doubling the amount, will have negligible effects. However, in some narrow parts of the spectrum this is not so; doubling the amount of atmospheric carbon dioxide will make Earth's atmosphere relatively opaque in these wavelengths, which would result in Earth emitting light in these wavelengths from the upper layers of the atmosphere, rather than from lower layers or from the ground. Since the upper layers are colder, the amount emitted would be lower, leading to warming of Earth until the reduction in emission is compensated by the rise in temperature. [ 1 ] Furthermore, such warming may cause a feedback mechanism due to other changes in Earth's albedo, e.g. due to ice melting. Most of the air—including ~88% of the CO 2 —is located in the lower part of the atmosphere known as the troposphere . The troposphere is thicker at the equator and thinner at the poles, but the global mean of its thickness is around 11 km. Inside the troposphere, the temperature drops approximately linearly at a rate of 6.5 Celsius degrees per km, from a global mean of 288 Kelvin (15 Celsius) at the ground to 220 K (−53 Celsius). At higher altitudes, up to 20 km, the temperature is approximately constant; this layer is called the tropopause . The troposphere and tropopause together contain ~99% of the atmospheric CO 2 . Inside the troposphere, the CO 2 density drops with altitude approximately exponentially, with a typical length of 6.3 km; this means that the density at height y is approximately proportional to exp(−y/6.3 km), so it goes down to 37% at 6.3 km and to 17% at 11 km. Higher up, through the tropopause, the density continues dropping exponentially, albeit faster, with a typical length of 4.2 km. Earth constantly absorbs energy from sunlight and emits thermal radiation as infrared light. In the long run, Earth radiates the same amount of energy per second as it absorbs, because the amount of thermal radiation emitted depends upon temperature: if Earth absorbs more energy per second than it radiates, Earth heats up and the thermal radiation will increase, until balance is restored; if Earth absorbs less energy than it radiates, it cools down and the thermal radiation will decrease, again until balance is restored.
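A minimal numerical sketch of the profile just described, assuming the simple exponential density model and a constant 6.5 K/km lapse rate from 288 K at the ground; note that the linear profile gives about 217 K at 11 km, close to the ~220 K quoted for the top of the troposphere.

```python
import math

def co2_fraction(y_km: float, scale_km: float = 6.3) -> float:
    """Fraction of the ground-level CO2 number density remaining at altitude y (troposphere)."""
    return math.exp(-y_km / scale_km)

def temperature_k(y_km: float, t_ground: float = 288.0, lapse_rate: float = 6.5) -> float:
    """Tropospheric temperature in kelvin, assuming a constant 6.5 K/km lapse rate."""
    return t_ground - lapse_rate * y_km

for y in (0.0, 6.3, 11.0):
    print(f"y = {y:4.1f} km: CO2 density ~{co2_fraction(y):.0%} of ground value, "
          f"T ~ {temperature_k(y):.0f} K")
```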
Atmospheric CO 2 absorbs some of the energy radiated by the ground, but it also emits thermal radiation of its own: For example, in some wavelengths the atmosphere is totally opaque due to absorption by CO 2 ; at these wavelengths, looking at Earth from outer space one would not see the ground, but the atmospheric CO 2 , and hence its thermal radiation—rather than the ground's thermal radiation. [ 2 ] Had the atmosphere been at the same temperature as the ground, this would not change Earth's energy budget; but since the radiation is emitted from atmosphere layers that are cooler than the ground, less radiation is emitted. As the CO 2 content of the atmosphere increases due to human activity, this process intensifies, and the total radiation emitted by Earth diminishes; therefore, Earth heats up until the balance is restored. CO 2 absorbs the ground's thermal radiation mainly at wavelengths between 13 and 17 micron. At this wavelength range, it is almost solely responsible for the attenuation of radiation from the ground. The amount of ground radiation that is transmitted through the atmosphere in each wavelength is related to the optical depth of the atmosphere at this wavelength, OD: the transmitted fraction is e − O D {\displaystyle e^{-OD}} . The optical depth itself is given by the Beer–Lambert law , O D = ∫ σ n ( y ) d y {\displaystyle OD=\int \sigma \,n(y)\,dy} , where σ is the absorption cross section of a single CO 2 molecule, and n(y) is the number density of these molecules at altitude y. Due to the high dependence of the cross section on wavelength, the OD changes from around 0.1 at 13 microns to ~10 at 14 microns and even higher than 100 at 15 microns, then dropping off to ~10 at 16 microns, ~1 at 17 microns and below 0.1 at 18 microns. Note that the OD depends on the total number of molecules per unit area in the atmosphere, and therefore rises linearly with its CO 2 content. Looking from outer space into the atmosphere at a specific wavelength, one sees different layers of the atmosphere to different degrees, but on average one sees down to an altitude such that the part of the atmosphere from this altitude upwards has an optical depth of ~1. Earth will therefore radiate at this wavelength approximately according to the temperature of that altitude. Increasing the CO 2 content of the atmosphere means that the optical depth increases, so that the altitude seen from outer space increases; [ 2 ] as long as it increases within the troposphere, the radiation temperature drops and the radiation decreases. When it reaches the tropopause, any further increase in CO 2 levels will have no noticeable effect, since there the temperature no longer depends on the altitude. At wavelengths of 14 to 16 microns, even the tropopause, having ~0.12 of the amount of CO 2 of the whole atmosphere, has OD>1. Therefore, at these wavelengths Earth radiates mainly at the tropopause temperature, and addition of CO 2 does not change this. At wavelengths smaller than 13 microns or larger than 18 microns, the atmospheric absorption is negligible, and addition of CO 2 hardly changes this. Therefore, the effect of CO 2 increase on radiation is relevant in wavelengths 13–14 and 16–18 microns, and addition of CO 2 mainly contributes to the opacity of the troposphere, changing the altitude that is effectively seen from outer space within the troposphere. We now turn to calculating the effect of CO 2 on radiation, using a one-layer model, i.e.
we treat the whole troposphere as a single layer: [ 3 ] Looking at a particular wavelength λ up to λ+dλ, the whole atmosphere has an optical depth OD, while the tropopause has an optical depth 0.12*OD; the troposphere has an optical depth of 0.88*OD. Thus, e − 0.12 ⋅ O D {\displaystyle e^{-0.12\cdot OD}} of the radiation from below the tropopause is transmitted out, but this includes e − 0.88 ⋅ O D {\displaystyle e^{-0.88\cdot OD}} of the radiation that originates from the ground. Thus, the weight of the troposphere in determining the radiation that is emitted to outer space is e − 0.12 ⋅ O D − e − O D {\displaystyle e^{-0.12\cdot OD}-e^{-OD}} . A relative increase in the CO 2 concentration means an equal relative increase in the total CO 2 content of the atmosphere, dN/N where N is the number of CO 2 molecules. Adding a minute number of such molecules dN will increase the troposphere's weight in determining the radiation for the relevant wavelengths, approximately by the relative amount dN/N, and thus by: d N N ⋅ ( e − 0.12 ⋅ O D − e − O D ) {\displaystyle {\frac {dN}{N}}\cdot (e^{-0.12\cdot OD}-e^{-OD})} Since CO 2 hardly influences sunlight absorption by Earth, the radiative forcing due to an increase in CO 2 content is equal to the difference in the flux radiated by Earth due to such an increase. To calculate this, one must multiply the above by the difference in radiation due to the difference in temperature. According to Planck's law , this is the difference between the Planck radiation B ( λ , d λ , T ) {\displaystyle B(\lambda ,d\lambda ,T)} emitted in this band at the two temperatures. The ground is at temperature T 0 = 288 K, and for the troposphere we will take a typical temperature, the one at the average height of molecules, 6.3 km, where the temperature is T 1 = 247 K. Therefore dI, the change in Earth's emitted radiation, is, in a rough approximation, the product of dN/N, the troposphere's weight found above, and this difference in the Planck radiation, summed over the relevant wavelengths. Since dN/N = d(ln N), this can be written as a derivative of the emitted flux with respect to ln N. The function e − 0.12 ⋅ x − e − x {\displaystyle e^{-0.12\cdot x}-e^{-x}} is maximal for x = 2.41, with a maximal value of 0.66, and it drops to half this value at x=0.5 and x = 9.2. Thus we look at wavelengths for which the OD is between 0.5 and 9.2: This gives a wavelength band with a width of approximately 1 micron around 17 microns, and less than 1 micron around 13.5 microns. Evaluating the contribution of each band gives -2.3 W/m 2 for the 13.5 micron band, and -2.7 W/m 2 for the 17 micron band, for a total of 5 W/m 2 per e-fold increase in the CO 2 content. A 2-fold increase in CO 2 content changes the wavelength ranges only slightly, and so this derivative is approximately constant along such an increase. Thus, a 2-fold increase in CO 2 content will reduce the radiation emitted by Earth by roughly ln 2 {\displaystyle \ln 2} times this value; a value of 3.1 W/m 2 is used below. More generally, an increase by a factor c/c 0 gives a reduction proportional to ln ( c / c 0 ) {\displaystyle \ln(c/c_{0})} . These results are close to those of a more elaborate yet still simplified model. We may make a more elaborate calculation by treating the atmosphere as composed of many thin layers. For each such layer, at height y and thickness dy, the weight of this layer in determining the radiation temperature seen from outer space is a generalization of the expression arrived at earlier for the troposphere: it is the change in the transmission factor e − O D ( y ) {\displaystyle e^{-OD(y)}} across the layer, where OD(y) is the optical depth of the part of the atmosphere from y upwards. The total effect of CO 2 on the radiation at wavelengths λ to λ+dλ is obtained by integrating B, the expression for radiation according to Planck's law mentioned above, against these layer weights, [ 3 ] where the upper limit of the integration (formally infinity) can in practice be taken as the top of the tropopause. Thus the effect of a relative change in CO 2 concentration, dN/N = dn 0 /n 0 (where n 0 is the number density near the ground), would be found by differentiating this integral with respect to ln N (noting that dN/N = d(ln N) = d(ln n 0 )) and using integration by parts.
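The band-by-band arithmetic of the one-layer model above can be checked numerically. The short sketch below is not part of the original article: the band widths (~0.5 micron near 13.5 microns, ~1 micron near 17 microns) and the mean weight factor of ~0.5 are assumptions read off the discussion above, so it only reproduces the quoted -2.3 and -2.7 W/m2 figures to within their order of magnitude.

```python
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_flux(wavelength_m, temp_k):
    """Hemispheric flux per unit wavelength, pi * B_lambda, in W m^-2 per metre."""
    x = H * C / (wavelength_m * KB * temp_k)
    return math.pi * 2 * H * C**2 / wavelength_m**5 / (math.exp(x) - 1.0)

T_GROUND, T_TROPO = 288.0, 247.0
# (band centre in m, assumed band width in m, assumed mean weight factor)
bands = [(13.5e-6, 0.5e-6, 0.5), (17.0e-6, 1.0e-6, 0.5)]

per_efold = 0.0
for lam, width, weight in bands:
    diff = planck_flux(lam, T_TROPO) - planck_flux(lam, T_GROUND)   # negative
    contribution = weight * width * diff
    per_efold += contribution
    print(f"{lam*1e6:.1f} um band: {contribution:.1f} W/m^2 per e-fold of CO2")

print(f"total: {per_efold:.1f} W/m^2 per e-fold; "
      f"per doubling: {per_efold * math.log(2):.1f} W/m^2")
```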
Because B does not depend on N, and because e − O D ( ∞ ) = 1 {\displaystyle e^{-OD(\infty )}=1} , the boundary terms from the integration by parts do not depend on N, and only an integral over the temperature gradient remains. Now, d T d y {\displaystyle {\frac {dT}{dy}}} is constant in the troposphere and zero in the tropopause. We denote the height of the border between them as U, so the integral effectively runs only from the ground up to U. The optical depth is proportional to the integral of the number density over y, as is the pressure. Therefore, OD(y) is proportional to the pressure p(y), which within the troposphere (height 0 to U) falls exponentially with decay constant 1/H p ( H p ~5.6 km for CO 2 ). Since ln ( n 0 ) = ln ( N ) + constant {\displaystyle \ln(n_{0})=\ln(N)+{\text{constant}}} , OD(y), viewed as a function of both y and N, depends on them only through the combination ln N − y / H p {\displaystyle \ln N-y/H_{p}} , and therefore differentiating with respect to ln N is the same as differentiating with respect to y, times a factor of − H p {\displaystyle -H_{p}} . We arrive at an expression involving H p {\displaystyle H_{p}} , the temperature gradient d T d y {\displaystyle {\frac {dT}{dy}}} and the temperature derivative of B. Since the temperature only changes by ~25% within the troposphere, one may take a (rough) linear approximation of B with T at the relevant wavelengths. [ 3 ] Due to the linear approximation of B we have: H p ⋅ d d T B ( λ , d λ , T ) d T d y = [ T ( H p ) − T 0 ] ⋅ d d T B ( λ , d λ , T ) = B ( λ , d λ , T 1 ) − B ( λ , d λ , T 0 ) {\displaystyle H_{p}\cdot {\frac {d}{dT}}B(\lambda ,d\lambda ,T){\frac {dT}{dy}}=[T(H_{p})-T_{0}]\cdot {\frac {d}{dT}}B(\lambda ,d\lambda ,T)=B(\lambda ,d\lambda ,T_{1})-B(\lambda ,d\lambda ,T_{0})} with T 1 taken at H p , so that in total we recover the same result as in the one-layer model presented above, as well as the logarithmic dependence on N, except that now T 1 is taken at 5.6 km (the pressure drop height scale), rather than 6.3 km (the density drop height scale). The total average energy per unit time radiated by Earth is equal to the average energy flux j times the surface area 4πR 2 , where R is Earth's radius. On the other hand, the average energy flux absorbed from sunlight is the solar constant S 0 times Earth's cross section of πR 2 , times the fraction absorbed by Earth, which is one minus Earth's albedo a . The average energy per unit time radiated out is equal to the average energy per unit time absorbed from sunlight, so: 4 π R 2 ⋅ j = π R 2 ⋅ ( 1 − a ) ⋅ S 0 {\displaystyle 4\pi R^{2}\cdot j=\pi R^{2}\cdot (1-a)\cdot S_{0}} giving: j = 1 4 ⋅ ( 1 − a ) ⋅ S 0 = 1 4 ⋅ ( 1 − 0.3 ) ⋅ 1360 W / m 2 = 240 W / m 2 {\displaystyle j={\frac {1}{4}}\cdot (1-a)\cdot S_{0}={\frac {1}{4}}\cdot (1-0.3)\cdot 1360W/m^{2}=240W/m^{2}} Based on the value of 3.1 W/m^2 obtained above in the section on the one layer model, the radiative forcing due to CO 2 relative to the average radiated flux is therefore: 3.1 ( W / m 2 ) / 240 ( W / m 2 ) = 1.3 % {\displaystyle 3.1(W/m^{2})/240(W/m^{2})=1.3\%} An exact calculation using the MODTRAN model, over all wavelengths and including methane and ozone greenhouse gases, as shown in the plot above, gives, for tropical latitudes, an outgoing flux j = {\displaystyle j=} 298.645 W/m 2 for current CO 2 levels and j = {\displaystyle j=} 295.286 W/m 2 after CO 2 doubling, i.e. a radiative forcing of 1.1%, under clear sky conditions, as well as a ground temperature of 299.7 K (26.6 °C). The radiative forcing is largely similar in different latitudes and under different weather conditions. [ 5 ] On average, the total power of the thermal radiation emitted by Earth is equal to the power absorbed from sunlight.
As CO 2 levels rise, the emitted radiation can maintain this equilibrium only if the temperature increases, so that the total emitted radiation is unchanged (averaged over enough time, on the order of a few years, so that diurnal and annual cycles are averaged out). According to the Stefan–Boltzmann law , the total emitted power by Earth per unit area is j = ϵ σ B T 4 {\displaystyle j=\epsilon \sigma _{B}T^{4}} , where σ B is the Stefan–Boltzmann constant and ε is the emissivity in the relevant wavelengths. T is some average temperature representing the effective radiation temperature. CO 2 content changes the effective T, but one may instead treat T as a typical ground or lower-atmosphere temperature (same as T 0 or close to it) and consider CO 2 content as changing the emissivity ε. We thus re-interpret ε in the above equation as an effective emissivity that includes the CO 2 effect, and take T=T 0 . A change in CO 2 content thus causes a change dε in this effective emissivity, so that d ϵ ϵ {\displaystyle {\frac {d\epsilon }{\epsilon }}} is the radiative forcing, divided by the total energy flux radiated by Earth. The relative change in the total radiated energy flux due to changes in emissivity and temperature is d j j = d ϵ ϵ + 4 d T T {\displaystyle {\frac {dj}{j}}={\frac {d\epsilon }{\epsilon }}+4{\frac {dT}{T}}} . Thus, if the total emitted power is to remain unchanged, a radiative forcing, relative to the total energy flux radiated by Earth, causes a relative change in temperature equal to one quarter of it. Thus: Δ T = 1 4 T ⋅ Δ j j = 1 4 ⋅ 288 K ⋅ 1.3 % = 0.94 K {\displaystyle \Delta T={\frac {1}{4}}T\cdot {\frac {\Delta j}{j}}={\frac {1}{4}}\cdot 288K\cdot 1.3\%=0.94K} Since warming of Earth means less ice on the ground on average, it would cause lower albedo and more sunlight absorbed, hence further increasing Earth's temperature. As a rough estimate, we note that the average temperatures on most of Earth's surface are between -20 and +30 °C; a reasonable guess is that 2% of its surface is between -1 and 0 °C , and thus, per 1 °C of warming, an equivalent area of its surface will change from ice-covered (or snow-covered) to either ocean or forest. For comparison, in the northern hemisphere, the Arctic sea ice has shrunk between 1979 and 2015 by 1.43x10 12 m 2 at maxima and 2.52x10 12 m 2 at minima, for an average of almost 2x10 12 m 2 , [ 6 ] which is 0.4% of Earth's total surface of 510x10 12 m 2 . Over this period the global temperature rose by ~0.6 °C . The areas of inland glaciers combined (not including the Antarctic ice sheet), the Antarctic sea ice, and the Arctic sea ice are all comparable, [ 7 ] [ 8 ] so one may expect the change in the Arctic sea ice to be roughly a third of the total change, giving 1.2% of Earth's surface turned from ice to ocean or bare ground per 0.6 °C, or equivalently 2% per 1 °C. The Antarctic ice cap size oscillates, [ 7 ] and it is hard to predict its future course, [ 9 ] [ 10 ] with factors such as its relative thermal insulation and constraints due to the Antarctic Circumpolar Current probably playing a part. [ 11 ] As the difference in albedo between ice and e.g. ocean is around 2/3, this means that due to a 1 °C rise, the albedo will drop by 2%*2/3 = 4/3%. However this will mainly happen in northern and southern latitudes, around 60 degrees off the equator, and so the effective area is actually 2% * cos(60°) = 1%, and the global albedo drop would be 2/3%.
Since a change in radiation of 1.3% causes a direct change of 1 degree Celsius (without feedback), as calculated above, and this causes another change of 2/3% in radiation due to positive feedback, which is half the original change, the total factor caused by this feedback mechanism is the sum of the geometric series 1 + 1 2 + 1 4 + ⋯ = 2 {\displaystyle 1+{\tfrac {1}{2}}+{\tfrac {1}{4}}+\cdots =2} . Thus, this feedback would double the effect of the change in radiation, causing a change of ~ 2 K in the global temperature, which is indeed the commonly accepted short-term value. For the long-term value, including further feedback mechanisms, ~3 K is considered more probable.
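As a compact numerical recap (an illustrative sketch, not taken from the original article), the temperature response and the albedo-feedback amplification described above can be reproduced in a few lines; the 1.3% forcing fraction and the feedback gain of 1/2 are the values estimated in the text.

```python
T0 = 288.0                 # mean surface temperature, K
forcing_fraction = 0.013   # radiative forcing relative to the ~240 W/m^2 emitted flux

# Stefan-Boltzmann: j ~ T^4, so dj/j = 4*dT/T  =>  dT = (T/4) * dj/j
dT_direct = T0 / 4 * forcing_fraction          # ~0.94 K, no feedback

# Ice-albedo feedback: each round of warming adds roughly half the original
# forcing again, so the geometric series sums to 1 / (1 - 1/2) = 2.
feedback_gain = 0.5
dT_total = dT_direct / (1 - feedback_gain)     # ~1.9 K per CO2 doubling (short-term)

print(f"direct: {dT_direct:.2f} K, with ice-albedo feedback: {dT_total:.2f} K")
```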
https://en.wikipedia.org/wiki/Illustrative_model_of_greenhouse_effect_on_climate_change
The Illustris project is an ongoing series of astrophysical simulations run by an international collaboration of scientists. [ 1 ] The aim is to study the processes of galaxy formation and evolution in the universe with a comprehensive physical model. Early results were described in a number of publications [ 2 ] [ 3 ] [ 4 ] following widespread press coverage. [ 5 ] [ 6 ] [ 7 ] The project publicly released all data produced by the simulations in April, 2015. Key developers of the Illustris simulation have been Volker Springel (Max-Planck-Institut für Astrophysik) and Mark Vogelsberger (Massachusetts Institute of Technology). The Illustris simulation framework and galaxy formation model have been used for a wide range of spin-off projects, starting with Auriga and IllustrisTNG (both 2017) followed by Thesan (2021), MillenniumTNG (2022) and TNG-Cluster (2023). The original Illustris project was carried out by Mark Vogelsberger [ 8 ] and collaborators as the first large-scale galaxy formation application of Volker Springel's novel Arepo code. [ 9 ] The Illustris project included large-scale cosmological simulations of the evolution of the universe , spanning from the initial conditions of the Big Bang to the present day, 13.8 billion years later. Modeling, based on the most precise data and calculations currently available, is compared to actual findings of the observable universe in order to better understand the nature of the universe , including galaxy formation , dark matter and dark energy . [ 5 ] [ 6 ] [ 7 ] The simulation included many physical processes which are thought to be critical for galaxy formation. These include the formation of stars and the subsequent "feedback" due to supernova explosions, as well as the formation of super-massive black holes, their consumption of nearby gas, and their multiple modes of energetic feedback. [ 1 ] [ 4 ] [ 10 ] Images, videos, and other data visualizations for public distribution are available at the official media page . The main Illustris simulation was run on the Curie supercomputer at CEA (France) and the SuperMUC supercomputer at the Leibniz Computing Centre (Germany) . [ 1 ] [ 11 ] A total of 19 million CPU hours was required, using 8,192 CPU cores . [ 1 ] The peak memory usage was approximately 25 TB of RAM. [ 1 ] A total of 136 snapshots were saved over the course of the simulation, totaling over 230 TB cumulative data volume. [ 2 ] A code called "Arepo" was used to run the Illustris simulations. It was written by Volker Springel, who also authored the GADGET code. The name is derived from the Sator Square . This code solves the coupled equations of gravity and hydrodynamics using a discretization of space based on a moving Voronoi tessellation . It is optimized for running on large, distributed memory supercomputers using an MPI approach. In April, 2015 (eleven months after the first papers were published) the project team publicly released all data products from all simulations. [ 12 ] All original data files can be directly downloaded through the data release webpage . This includes group catalogs of individual halos and subhalos, merger trees tracking these objects through time, full snapshot particle data at 135 distinct time points, and various supplementary data catalogs. In addition to direct data download, a web-based API allows for many common search and data extraction tasks to be completed without needing access to the full data sets.
In December 2018, the Illustris simulation was recognized by Deutsche Post through a special series stamp . The Illustris simulation framework has been used by a wide range of spin-off projects that focus on specific scientific questions. IllustrisTNG: The IllustrisTNG project , "the next generation" follow-up to the original Illustris simulation, was first presented in July 2017 by a team of scientists from Germany and the U.S. led by Prof. Volker Springel . [ 13 ] First, a new physical model was developed, which among other features included magnetohydrodynamics . The team planned three simulations, which used different volumes at different resolutions. The intermediate simulation (TNG100) was equivalent to the original Illustris simulation. Unlike Illustris, it was run on the Hazel Hen machine at the High Performance Computing Center, Stuttgart in Germany. Up to 25,000 computer cores were employed. In December 2018 the simulation data from IllustrisTNG was released publicly. The data service includes a JupyterLab interface. Auriga: The Auriga project consists of high-resolution zoom simulations of Milky Way-like dark matter halos to understand the formation of our Milky Way galaxy. Thesan: The Thesan project is a radiative-transfer version of IllustrisTNG to explore the epoch of reionization. MillenniumTNG: The MillenniumTNG project employs the IllustrisTNG galaxy formation model in a larger cosmological volume to explore the massive end of the halo mass function for detailed cosmological probe forecasts. TNG-Cluster: A suite of high-resolution zoom-in simulations of galaxy clusters.
https://en.wikipedia.org/wiki/Illustris_project
Ilya Iosifovich Moiseev ( Russian : Илья Иосифович Моисеев ; 15 March 1929 – 10 October 2020) [ 2 ] [ 3 ] was a Soviet and Russian chemist. An expert in both kinetics and the coordination chemistry of transition metals, he made significant advances in metal-complex catalysis. [ 1 ] Moiseev was born in Moscow . He studied organic chemistry at Moscow State University of Fine Chemical Technologies (MITHT). After graduating in 1952, his first jobs were as an engineer, a junior researcher in physical chemistry, then a senior researcher in organic chemistry. From 1963, he worked at the N. S. Kurnakov Institute of General and Inorganic Chemistry [ ru ] (IGIC) of the Russian Academy of Sciences (RAS), Moscow, as head of the laboratory of metal-complex catalysis and coordination chemistry. From 2003 onward, he was a professor at the Gubkin Russian State University of Oil and Gas (RGUNG Gubkin). [ 1 ] He also served as chairman of the Scientific Council for Gas Chemistry, RAS, and vice-president of the Russian Chemical Society . [ 1 ] By developing new principles for the design of catalytic systems he created highly efficient catalysts that enabled compounds of commercial importance to be synthesized from cheap hydrocarbons. [ 4 ] His concerns for efficiency and choice of raw materials were informed by environmental as well as economic considerations. [ 5 ] His innovations became the basis of industrial methods for the production of acetaldehyde from ethylene, the synthesis of formic acid from carbon monoxide and water, the hydrogenation of oxygen to hydrogen peroxide, and the synthesis of isoprene. [ 4 ] He discovered palladium catalysts that have selective effects under mild conditions, and synthesized new classes of inorganic compounds. [ 1 ] Possibly his most famous discovery was the Pd(II)-catalyzed acetoxylation of ethylene to vinyl acetate in 1960, which has become known as Moiseev's reaction. [ 6 ] [ 7 ] The reaction proceeds only in the presence of sodium acetate; Moiseev used benzoquinone to regenerate the Pd(II) catalyst. CH 2 =CH 2 + 2 CH 3 COONa + PdCl 2 ⟶ CH 2 =CHOOCCH 3 + 2 NaCl + Pd + CH 3 COOH In 2002 he received the State Prize of the Russian Federation in the field of science and technology. [ 8 ] In 2011 he was awarded the Prize of the Government of the Russian Federation in the field of science and technology. [ 9 ] He also received the Orders of the Red Banner of Labour (1986), Honour (1999) and Friendship (2009). [ 1 ] The Royal Society of Chemistry awarded Moiseev the Centenary Prize for 2006/7. In 2012, he was awarded the Demidov Prize for his contribution to the chemistry of organoelement compounds, petrochemistry, and carbene chemistry, [ 4 ] [ 10 ] and the RAS Chugaev Prize for his work on coordination compounds in industrially important redox reactions. [ 11 ] In 2013 he received the RAS Mendeleev Medal for outstanding work in the field of catalysis and energy-saving technologies. [ 12 ] He became a corresponding member of the Academy of Sciences of the Soviet Union in 1990, and an academician of the Russian Academy of Sciences in 1992. He was a full member of the Academy of Sciences, Arts and Literature in Paris, the European Academy of Sciences and Arts , and the Academia Europaea . [ 1 ]
https://en.wikipedia.org/wiki/Ilya_Moiseev
Viscount Ilya Romanovich Prigogine ( / p r ɪ ˈ ɡ oʊ ʒ iː n / ; Russian : Илья́ Рома́нович Приго́жин ; 25 January [ O.S. 12 January] 1917 – 28 May 2003) was a Belgian physical chemist of Russian-Jewish origin, noted for his work on dissipative structures , complex systems , and irreversibility . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] Prigogine's work most notably earned him the 1977 Nobel Prize in Chemistry “for his contributions to non-equilibrium thermodynamics, particularly the theory of dissipative structures”, [ 6 ] as well as the Francqui Prize in 1955, and the Rumford Medal in 1976. Prigogine was born in Moscow a few months before the October Revolution of 1917, into a Jewish family. [ 7 ] His father, Ruvim (Roman) Abramovich Prigogine, was a chemist who studied at the Imperial Moscow Technical School and owned a soap factory; his mother, Yulia Vikhman, was a pianist who attended the Moscow Conservatory . In 1921, the factory having been nationalized by the new Soviet regime and the feeling of insecurity rising amidst the civil war , the family left Russia. After a brief period in Lithuania , they went to Germany and settled in Berlin ; 8 years later, due to the poor economic situation and the creeping emergence of Nazism , they moved on to Brussels , where Prigogine received Belgian nationality in 1949. His brother Alexandre (1913–1991) became an ornithologist. [ 8 ] As a teenager, Prigogine was interested in music, history and archeology. He graduated from the Athenée d' Ixelles in 1935, majoring in Greek and Latin. His parents encouraged him to become a lawyer, and he initially enrolled in law studies at the Free University of Brussels . At that time he developed an interest in psychology and the study of behavior ; in turn, reading about these subjects triggered an interest in chemistry , as chemical processes impact the mind and body ; this also triggered a more fundamental interest in physics , as they explain chemistry. He ended up dropping out from the law faculty. [ 9 ] Prigogine afterwards simultaneously enrolled in chemistry and physics at the Free University of Brussels , something he achieved with "uncommon success"; he earned the equivalent of a Master's degree in both disciplines in 1939, and a PhD in chemistry in 1941 under Théophile de Donder . [ 9 ] [ 10 ] He started his research career under the German occupation of Belgium . From 1940 onwards he gave clandestine lectures to students. In 1941, the university formally closed to protest the forced appointment of Flemish pro-Nazi New Order professors by the occupiers; [ 11 ] he continued giving clandestine lectures until the Liberation of Belgium in 1944. During that time window he also published 21 articles. In 1943, Prigogine and his future wife Hélène Jofé were arrested by the Germans; after multiple interventions including by the Queen Elisabeth , they were eventually released a couple of weeks later. [ 9 ] In 1951, he became a full professor at his alma mater; at 34 years old, he was the youngest ever full professor at the science faculty in Brussels . [ 9 ] In 1959, he was appointed director of the International Solvay Institute in Brussels , Belgium . In that year, he also started teaching at the University of Texas at Austin in the United States , where he later was appointed Regental Professor and Ashbel Smith Professor of Physics and Chemical Engineering. From 1961 until 1966 he was affiliated with the Enrico Fermi Institute at the University of Chicago and was a visiting professor at Northwestern University . 
[ 12 ] [ 13 ] In Austin, in 1967, he co-founded the Center for Thermodynamics and Statistical Mechanics, now the Center for Complex Quantum Systems . [ 14 ] In that year, he also returned to Belgium , where he became director of the Center for Statistical Mechanics and Thermodynamics. He was a member of numerous scientific organizations, and received numerous awards, prizes and 53 honorary degrees. In 1955, Prigogine was awarded the Francqui Prize for Exact Sciences. For his study in irreversible thermodynamics , he received the Rumford Medal in 1976, and in 1977, the Nobel Prize in Chemistry "for his contributions to non-equilibrium thermodynamics, particularly the theory of dissipative structures ". [ 6 ] In 1989, he was awarded the title of viscount in the Belgian nobility by the King of the Belgians . Until his death, he was president of the International Academy of Science, Munich and was in 1997, one of the founders of the International Commission on Distance Education (CODE), a worldwide accreditation agency. [ 15 ] [ 16 ] Prigogine received an Honorary Doctorate from Heriot-Watt University in 1985 [ 17 ] and in 1998 he was awarded an honoris causa doctorate by the UNAM in Mexico City . Prigogine was first married to belgian poet Hélène Jofé (as an author also known as Hélène Prigogine) and in 1945 they had a son Yves. After their divorce, he married Polish-born chemist Maria Prokopowicz (also known as Maria Prigogine) in 1961. In 1970 they had a son, Pascal. [ 18 ] In 2003 he was one of 22 Nobel Laureates who signed the Humanist Manifesto . [ 19 ] Prigogine defined dissipative structures and their role in thermodynamic systems far from equilibrium , [ 1 ] [ 2 ] [ 5 ] a discovery that won him the Nobel Prize in Chemistry in 1977. [ 6 ] In summary, Ilya Prigogine discovered that importation and dissipation of energy into chemical systems could result in the emergence of new structures (hence dissipative structures) due to internal self reorganization. [ 20 ] [ 21 ] [ 22 ] In his 1955 text, Prigogine drew connections between dissipative structures and the Rayleigh-Bénard instability and the Turing mechanism . [ 23 ] And his 1977 work on self-reorganization was recognized as relevant for psychology. [ 24 ] Dissipative structure theory led to pioneering research in self-organizing systems , as well as philosophical inquiries into the formation of complexity in biological entities and the quest for a creative and irreversible role of time in the natural sciences . With professor Robert Herman , he also developed the basis of the two fluid model , [ 25 ] a traffic model in traffic engineering for urban networks, analogous to the two fluid model in classical statistical mechanics, [ 25 ] [ 26 ] a common problem that had attracted Prigogine's attention some years before. [ 27 ] Prigogine's formal concept of self-organization was used also as a "complementary bridge" between general systems theory and thermodynamics , conciliating the cloudiness of some important systems theory concepts such as entropy instead of molecular disorder, [ 28 ] [ which? ] and emergence , fluctuations and irreversibility instead of “birth and death” [ 3 ] [ 29 ] with scientific rigor. [ 21 ] [ 30 ] In his later years, his work concentrated on the fundamental role of indeterminism in nonlinear systems on both the classical and quantum level. Prigogine and coworkers proposed a Liouville space extension of quantum mechanics. 
[ 31 ] [ 32 ] A Liouville space is the vector space formed by the set of (self-adjoint) linear operators , equipped with an inner product, that act on a Hilbert space . [ 33 ] There exists a mapping of each linear operator into Liouville space, yet not every self-adjoint operator of Liouville space has a counterpart in Hilbert space, and in this sense Liouville space has a richer structure than Hilbert space. [ 34 ] The Liouville space extension proposal by Prigogine and co-workers aimed to solve the arrow of time problem of thermodynamics and the measurement problem of quantum mechanics. [ 32 ] Prigogine co-authored several books with Isabelle Stengers , including The End of Certainty and La Nouvelle Alliance ( Order out of Chaos ). In his 1996 book, La Fin des certitudes , written in collaboration with Isabelle Stengers and published in English in 1997 as The End of Certainty: Time, Chaos, and the New Laws of Nature , Prigogine contends that determinism is no longer a viable scientific belief: "The more we know about our universe, the more difficult it becomes to believe in determinism." This is a major departure from the approach of Newton , Einstein and Schrödinger , all of whom expressed their theories in terms of deterministic equations. According to Prigogine, determinism loses its explanatory power in the face of irreversibility and instability . Prigogine traces the dispute over determinism back to Darwin , whose attempt to explain individual variability according to evolving populations inspired Ludwig Boltzmann to explain the behavior of gases in terms of populations of particles rather than individual particles. [ 35 ] This led to the field of statistical mechanics and the realization that gases undergo irreversible processes . In deterministic physics, all processes are time-reversible, meaning that they can proceed backward as well as forward through time. As Prigogine explains, determinism is fundamentally a denial of the arrow of time . With no arrow of time, there is no longer a privileged moment known as the "present," which follows a determined "past" and precedes an undetermined "future." All of time is simply given, with the future as determined or as undetermined as the past. With irreversibility, the arrow of time is reintroduced to physics. Prigogine notes numerous examples of irreversibility, including diffusion , radioactive decay , solar radiation , weather and the emergence and evolution of life . Like weather systems, organisms are unstable systems existing far from thermodynamic equilibrium . Instability resists standard deterministic explanation. Instead, due to sensitivity to initial conditions, unstable systems can only be explained statistically, that is, in terms of probability . Prigogine asserts that Newtonian physics has now been "extended" three times: [ citation needed ] first with the introduction of spacetime in general relativity , then with the use of the wave function in quantum mechanics , and finally with the recognition of indeterminism in the study of unstable systems ( chaos theory ). The Ilya Prigogine Prize for Thermodynamics was initialized in 2001 and patronized by Ilya Prigogine himself until his death in 2003. It is awarded on a biennial basis during the Joint European Thermodynamics Conference (JETC) and considers all branches of thermodynamics (applied, theoretical, and experimental as well as quantum thermodynamics and classical thermodynamics).
https://en.wikipedia.org/wiki/Ilya_Prigogine
In scientific visualization , image-based flow visualization (or visualisation ) is a computer modelling technique developed by Jarke van Wijk [ 1 ] to visualize two-dimensional flows of fluids such as water and air, like the wind movement of a tornado . Compared with integration techniques it has the advantage of producing a whole image at every step, as the technique relies upon graphical computing methods for frame-by-frame capture of the model of advective transport of a decaying dye. It is a method from the texture advection family. The core idea is to create a noise texture on a regular grid and then bend this grid according to the flow (the vector field). The bent grid is then sampled at the original grid locations. Thus, the output is a version of the noise that is displaced according to the flow. The advantage of this approach is that it can be accelerated on modern graphics hardware, thus allowing for real-time or almost real-time simulation of 2D flow data. This is particularly handy if one wants to visualise multiple scaled versions of the vector field to first gain an overview and then concentrate on the details. [ 2 ]
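A minimal sketch of the core idea (noise advected by a vector field) is shown below. This is an illustration written for this text rather than van Wijk's actual IBFV algorithm, which additionally blends successive frames and runs the warping on graphics hardware; the blending weights and the rotation field are arbitrary example choices.

```python
import numpy as np

def advect_noise(noise, vx, vy, dt=1.0):
    """Displace a noise texture along a 2D vector field (one advection step).

    noise, vx, vy: 2D arrays of identical shape; vx/vy give the flow in pixels.
    Returns the noise sampled at the back-traced grid positions (nearest neighbour).
    """
    h, w = noise.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Trace each grid point backwards along the flow and sample the noise there.
    src_x = np.clip(np.round(xs - dt * vx), 0, w - 1).astype(int)
    src_y = np.clip(np.round(ys - dt * vy), 0, h - 1).astype(int)
    return noise[src_y, src_x]

# Example: a rigid rotation field on a 256x256 grid.
h = w = 256
ys, xs = np.mgrid[0:h, 0:w].astype(float)
vx, vy = -(ys - h / 2) * 0.05, (xs - w / 2) * 0.05
frame = np.random.rand(h, w)
for _ in range(20):   # repeated advection smears the noise along streamlines
    frame = 0.9 * advect_noise(frame, vx, vy) + 0.1 * np.random.rand(h, w)
```

The 0.9/0.1 blend plays the role of the decaying, continually injected dye mentioned above.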
https://en.wikipedia.org/wiki/Image-based_flow_visualization
In mathematics , for a function f : X → Y {\displaystyle f:X\to Y} , the image of an input value x {\displaystyle x} is the single output value produced by f {\displaystyle f} when passed x {\displaystyle x} . The preimage of an output value y {\displaystyle y} is the set of input values that produce y {\displaystyle y} . More generally, evaluating f {\displaystyle f} at each element of a given subset A {\displaystyle A} of its domain X {\displaystyle X} produces a set, called the " image of A {\displaystyle A} under (or through) f {\displaystyle f} ". Similarly, the inverse image (or preimage ) of a given subset B {\displaystyle B} of the codomain Y {\displaystyle Y} is the set of all elements of X {\displaystyle X} that map to a member of B . {\displaystyle B.} The image of the function f {\displaystyle f} is the set of all output values it may produce, that is, the image of X {\displaystyle X} . The preimage of f {\displaystyle f} , that is, the preimage of Y {\displaystyle Y} under f {\displaystyle f} , always equals X {\displaystyle X} (the domain of f {\displaystyle f} ); therefore, the former notion is rarely used. Image and inverse image may also be defined for general binary relations , not just functions. The word "image" is used in three related ways. In these definitions, f : X → Y {\displaystyle f:X\to Y} is a function from the set X {\displaystyle X} to the set Y . {\displaystyle Y.} If x {\displaystyle x} is a member of X , {\displaystyle X,} then the image of x {\displaystyle x} under f , {\displaystyle f,} denoted f ( x ) , {\displaystyle f(x),} is the value of f {\displaystyle f} when applied to x . {\displaystyle x.} f ( x ) {\displaystyle f(x)} is alternatively known as the output of f {\displaystyle f} for argument x . {\displaystyle x.} Given y , {\displaystyle y,} the function f {\displaystyle f} is said to take the value y {\displaystyle y} or take y {\displaystyle y} as a value if there exists some x {\displaystyle x} in the function's domain such that f ( x ) = y . {\displaystyle f(x)=y.} Similarly, given a set S , {\displaystyle S,} f {\displaystyle f} is said to take a value in S {\displaystyle S} if there exists some x {\displaystyle x} in the function's domain such that f ( x ) ∈ S . {\displaystyle f(x)\in S.} However, f {\displaystyle f} takes [all] values in S {\displaystyle S} and f {\displaystyle f} is valued in S {\displaystyle S} means that f ( x ) ∈ S {\displaystyle f(x)\in S} for every point x {\displaystyle x} in the domain of f {\displaystyle f} . Throughout, let f : X → Y {\displaystyle f:X\to Y} be a function. The image under f {\displaystyle f} of a subset A {\displaystyle A} of X {\displaystyle X} is the set of all f ( a ) {\displaystyle f(a)} for a ∈ A . {\displaystyle a\in A.} It is denoted by f [ A ] , {\displaystyle f[A],} or by f ( A ) {\displaystyle f(A)} when there is no risk of confusion. Using set-builder notation , this definition can be written as [ 1 ] [ 2 ] f [ A ] = { f ( a ) : a ∈ A } . {\displaystyle f[A]=\{f(a):a\in A\}.} This induces a function f [ ⋅ ] : P ( X ) → P ( Y ) , {\displaystyle f[\,\cdot \,]:{\mathcal {P}}(X)\to {\mathcal {P}}(Y),} where P ( S ) {\displaystyle {\mathcal {P}}(S)} denotes the power set of a set S ; {\displaystyle S;} that is the set of all subsets of S . {\displaystyle S.} See § Notation below for more. The image of a function is the image of its entire domain , also known as the range of the function. 
[ 3 ] This last usage should be avoided because the word "range" is also commonly used to mean the codomain of f . {\displaystyle f.} If R {\displaystyle R} is an arbitrary binary relation on X × Y , {\displaystyle X\times Y,} then the set { y ∈ Y : x R y for some x ∈ X } {\displaystyle \{y\in Y:xRy{\text{ for some }}x\in X\}} is called the image, or the range, of R . {\displaystyle R.} Dually, the set { x ∈ X : x R y for some y ∈ Y } {\displaystyle \{x\in X:xRy{\text{ for some }}y\in Y\}} is called the domain of R . {\displaystyle R.} Let f {\displaystyle f} be a function from X {\displaystyle X} to Y . {\displaystyle Y.} The preimage or inverse image of a set B ⊆ Y {\displaystyle B\subseteq Y} under f , {\displaystyle f,} denoted by f − 1 [ B ] , {\displaystyle f^{-1}[B],} is the subset of X {\displaystyle X} defined by f − 1 [ B ] = { x ∈ X : f ( x ) ∈ B } . {\displaystyle f^{-1}[B]=\{x\in X\,:\,f(x)\in B\}.} Other notations include f − 1 ( B ) {\displaystyle f^{-1}(B)} and f − ( B ) . {\displaystyle f^{-}(B).} [ 4 ] The inverse image of a singleton set , denoted by f − 1 [ { y } ] {\displaystyle f^{-1}[\{y\}]} or by f − 1 ( y ) , {\displaystyle f^{-1}(y),} is also called the fiber or fiber over y {\displaystyle y} or the level set of y . {\displaystyle y.} The set of all the fibers over the elements of Y {\displaystyle Y} is a family of sets indexed by Y . {\displaystyle Y.} For example, for the function f ( x ) = x 2 , {\displaystyle f(x)=x^{2},} the inverse image of { 4 } {\displaystyle \{4\}} would be { − 2 , 2 } . {\displaystyle \{-2,2\}.} Again, if there is no risk of confusion, f − 1 [ B ] {\displaystyle f^{-1}[B]} can be denoted by f − 1 ( B ) , {\displaystyle f^{-1}(B),} and f − 1 {\displaystyle f^{-1}} can also be thought of as a function from the power set of Y {\displaystyle Y} to the power set of X . {\displaystyle X.} The notation f − 1 {\displaystyle f^{-1}} should not be confused with that for inverse function , although it coincides with the usual one for bijections in that the inverse image of B {\displaystyle B} under f {\displaystyle f} is the image of B {\displaystyle B} under f − 1 . {\displaystyle f^{-1}.} The traditional notations used in the previous section do not distinguish the original function f : X → Y {\displaystyle f:X\to Y} from the image-of-sets function f : P ( X ) → P ( Y ) {\displaystyle f:{\mathcal {P}}(X)\to {\mathcal {P}}(Y)} ; likewise they do not distinguish the inverse function (assuming one exists) from the inverse image function (which again relates the powersets). Given the right context, this keeps the notation light and usually does not cause confusion. 
But if needed, an alternative [ 5 ] is to give explicit names for the image and preimage as functions between power sets: For every function f : X → Y {\displaystyle f:X\to Y} and all subsets A ⊆ X {\displaystyle A\subseteq X} and B ⊆ Y , {\displaystyle B\subseteq Y,} the following properties hold: Also: For functions f : X → Y {\displaystyle f:X\to Y} and g : Y → Z {\displaystyle g:Y\to Z} with subsets A ⊆ X {\displaystyle A\subseteq X} and C ⊆ Z , {\displaystyle C\subseteq Z,} the following properties hold: For function f : X → Y {\displaystyle f:X\to Y} and subsets A , B ⊆ X {\displaystyle A,B\subseteq X} and S , T ⊆ Y , {\displaystyle S,T\subseteq Y,} the following properties hold: The results relating images and preimages to the ( Boolean ) algebra of intersection and union work for any collection of subsets, not just for pairs of subsets: (Here, S {\displaystyle S} can be infinite, even uncountably infinite .) With respect to the algebra of subsets described above, the inverse image function is a lattice homomorphism , while the image function is only a semilattice homomorphism (that is, it does not always preserve intersections). This article incorporates material from Fibre on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License .
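For finite sets, the image and preimage operations can be written directly from the definitions above; the following is an illustrative sketch, where the function f and the sets are arbitrary examples.

```python
def image(f, A):
    """f[A] = { f(a) : a in A }."""
    return {f(a) for a in A}

def preimage(f, B, X):
    """f^{-1}[B] = { x in X : f(x) in B }, where X is the domain of f."""
    return {x for x in X if f(x) in B}

f = lambda x: x * x
X = {-2, -1, 0, 1, 2}

print(image(f, X))                        # {0, 1, 4}: the image (range) of f on X
print(preimage(f, {4}, X))                # {-2, 2}: the fiber over 4
print(image(f, preimage(f, {4, 5}, X)))   # {4}: f[f^-1[B]] is a subset of B
```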
https://en.wikipedia.org/wiki/Image_(mathematics)
Image Share is a service for sharing images between users during a mobile phone call. It has been specified for use in a 3GPP -compliant cellular network by the GSM Association in the PRD IR.79 Image Share Interoperability Specification. [1] According to the specification, "The terminal interoperable Image Share service allows users to share Images between them over PS connection with ongoing CS call, thus enhancing and enriching end-users voice communication." An Image Share session begins with the end-users setting up a normal circuit switched (CS) voice call. After the voice call is set up, the terminals register with an IMS core system over a packet switched (PS) connection. Then, after successful capability negotiation between the terminals, the end-user is presented with an option in the terminal UI offering the possibility of sharing one or several images. If this is selected, these images are transferred between the Image Share software clients located in the mobile phones using the PS connection, and the recipient is able to see the images. Throughout this process the normal CS voice session continues uninterrupted. Image Share can be seen as a kind of spin-off from the Video Share mobile phone service. Video Share has been commercially launched, for example by AT&T in the USA, but Image Share is not yet available from any mobile operator/service provider. According to a GSMA press release, [3] interoperability between different Image Share clients was successfully tested in a multi-vendor trial in May 2007, including interworking between multiple networks. No mobile operator has launched Image Share so far (as of March 2008).
https://en.wikipedia.org/wiki/Image_Share
In 3D computer graphics , the image plane is that plane in the world which is identified with the plane of the display monitor used to view the image that is being rendered. It is also referred to as screen space . If one makes the analogy of taking a photograph to rendering a 3D image, the surface of the film is the image plane. In this case, the viewing transformation is a projection that maps the world onto the image plane. A rectangular region of this plane, called the viewing window or viewport , maps to the monitor. This establishes the mapping between pixels on the monitor and points (or rather, rays) in the 3D world. The plane is not usually an actual geometric object in a 3D scene , but instead is usually a collection of target coordinates or dimensions that are used during the rasterization process so the final output can be displayed as intended on the physical screen. In optics , the image plane is the plane that contains the object's projected image, and lies beyond the back focal plane . [ 1 ]
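As an illustration of the photographic analogy, the sketch below maps a camera-space point onto the image plane with a simple perspective projection and then into viewport pixels. It is not tied to any particular graphics API, and the focal length, window bounds and viewport size are assumed example values.

```python
def project_to_image_plane(point, focal=1.0, width=640, height=480):
    """Project a 3D camera-space point onto the image plane z = focal,
    then map the result into pixel coordinates of a viewport."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    # Perspective projection onto the image plane.
    u = focal * x / z
    v = focal * y / z
    # Map a [-1, 1] x [-1, 1] viewing window on the image plane to pixels.
    px = (u + 1.0) * 0.5 * width
    py = (1.0 - (v + 1.0) * 0.5) * height   # flip y: screen origin is top-left
    return px, py

print(project_to_image_plane((0.5, 0.25, 2.0)))   # a point 2 units in front of the camera
```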
https://en.wikipedia.org/wiki/Image_plane
Image rectification is a transformation process used to project images onto a common image plane. This process has several degrees of freedom and there are many strategies for transforming images to the common plane. Image rectification is used in computer stereo vision to simplify the problem of finding matching points between images (i.e. the correspondence problem ), and in geographic information systems (GIS) to merge images taken from multiple perspectives into a common map coordinate system. Computer stereo vision takes two or more images with known relative camera positions that show an object from different viewpoints. For each pixel it then determines the corresponding scene point's depth (i.e. distance from the camera) by first finding matching pixels (i.e. pixels showing the same scene point) in the other image(s) and then applying triangulation to the found matches to determine their depth. Finding matches in stereo vision is restricted by epipolar geometry : Each pixel's match in another image can only be found on a line called the epipolar line. If two images are coplanar, i.e. they were taken such that the right camera is only offset horizontally compared to the left camera (not being moved towards the object or rotated), then each pixel's epipolar line is horizontal and at the same vertical position as that pixel. However, in general settings (the camera does move towards the object or rotate) the epipolar lines are slanted. Image rectification warps both images such that they appear as if they have been taken with only a horizontal displacement and as a consequence all epipolar lines are horizontal, which slightly simplifies the stereo matching process. Note however, that rectification does not fundamentally change the stereo matching process: It searches on lines, slanted ones before and horizontal ones after rectification. Image rectification is also an equivalent (and more often used [ 1 ] ) alternative to perfect camera coplanarity. Even with high-precision equipment, image rectification is usually performed because it may be impractical to maintain perfect coplanarity between cameras. Image rectification can only be performed with two images at a time and simultaneous rectification of more than two images is generally impossible. [ 2 ] If the images to be rectified are taken from camera pairs without geometric distortion , this calculation can easily be made with a linear transformation . X & Y rotation puts the images on the same plane, scaling makes the image frames be the same size and Z rotation & skew adjustments make the image pixel rows directly line up [ citation needed ] . The rigid alignment of the cameras needs to be known (by calibration) and the calibration coefficients are used by the transform. [ 3 ] In performing the transform, if the cameras themselves are calibrated for internal parameters, an essential matrix provides the relationship between the cameras. The more general case (without camera calibration) is represented by the fundamental matrix . If the fundamental matrix is not known, it is necessary to find preliminary point correspondences between stereo images to facilitate its extraction. [ 3 ] There are three main categories for image rectification algorithms: planar rectification, [ 4 ] cylindrical rectification [ 1 ] and polar rectification. 
[ 5 ] [ 6 ] [ 7 ] All rectified images satisfy the following two properties: [ 8 ] In order to transform the original image pair into a rectified image pair, it is necessary to find a projective transformation H . Constraints are placed on H to satisfy the two properties above. For example, constraining the epipolar lines to be parallel with the horizontal axis means that epipoles must be mapped to the infinite point [1,0,0] T in homogeneous coordinates . Even with these constraints, H still has four degrees of freedom. [ 9 ] It is also necessary to find a matching H' to rectify the second image of an image pair. Poor choices of H and H' can result in rectified images that are dramatically changed in scale or severely distorted. There are many different strategies for choosing a projective transform H for each image from all possible solutions. One advanced method is minimizing the disparity or least-square difference of corresponding points on the horizontal axis of the rectified image pair. [ 9 ] Another method is separating H into a specialized projective transform, similarity transform, and shearing transform to minimize image distortion. [ 8 ] One simple method is to rotate both images to look perpendicular to the line joining their collective optical centers, twist the optical axes so the horizontal axis of each image points in the direction of the other image's optical center, and finally scale the smaller image to match for line-to-line correspondence. [ 2 ] This process is demonstrated in the following example. Our model for this example is based on a pair of images that observe a 3D point P , which corresponds to p and p' in the pixel coordinates of each image. O and O' represent the optical centers of each camera, with known camera matrices M = K [ I 0 ] {\displaystyle M=K[I~0]} and M ′ = K ′ [ R T ] {\displaystyle M'=K'[R~T]} (we assume the world origin is at the first camera). We will briefly outline and depict the results for a simple approach to find a H and H' projective transformation that rectify the image pair from the example scene. First, we compute the epipoles, e and e' in each image: Second, we find a projective transformation H 1 that rotates our first image to be parallel to the baseline connecting O and O' (row 2, column 1 of 2D image set). This rotation can be found by using the cross product between the original and the desired optical axes. [ 2 ] Next, we find the projective transformation H 2 that takes the rotated image and twists it so that the horizontal axis aligns with the baseline. If calculated correctly, this second transformation should map the e to infinity on the x axis (row 3, column 1 of 2D image set). Finally, define H = H 2 H 1 {\displaystyle H=H_{2}H_{1}} as the projective transformation for rectifying the first image. Third, through an equivalent operation, we can find H' to rectify the second image (column 2 of 2D image set). Note that H' 1 should rotate the second image's optical axis to be parallel with the transformed optical axis of the first image. One strategy is to pick a plane parallel to the line where the two original optical axes intersect to minimize distortion from the reprojection process. [ 10 ] In this example, we simply define H' using the rotation matrix R and initial projective transformation H as H ′ = H R T {\displaystyle H'=HR^{T}} . Finally, we scale both images to the same approximate resolution and align the now horizontal epipoles for easier horizontal scanning for correspondences (row 4 of 2D image set). 
Note that it is possible to perform this and similar algorithms without having the camera parameter matrices M and M' . All that is required is a set of seven or more image-to-image correspondences to compute the fundamental matrices and epipoles. [ 9 ] Image rectification in GIS converts images to a standard map coordinate system. This is done by matching ground control points (GCP) in the mapping system to points in the image. These GCPs are used to calculate the necessary image transforms. [ 11 ] Several practical difficulties can arise in the process. The maps that are used with rectified images are non-topographical; however, the images to be used may contain distortion from terrain. Image orthorectification additionally removes these effects. [ 11 ] Image rectification is a standard feature available with GIS software packages.
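The uncalibrated case just described (a fundamental matrix estimated from point correspondences, then a pair of rectifying homographies) can be sketched with OpenCV. This is an illustrative outline rather than the specific algorithm of the worked example above, and the variable names and parameter choices are assumptions.

```python
import cv2
import numpy as np

def rectify_uncalibrated(img_left, img_right, pts_left, pts_right):
    """Rectify an image pair from point correspondences only (no camera matrices).

    pts_left, pts_right: Nx2 float arrays of matching pixel coordinates
    (N >= 8 is safest for the RANSAC-based fundamental matrix estimate used here).
    Returns the two warped (rectified) images.
    """
    h, w = img_left.shape[:2]
    # Estimate the fundamental matrix F from the correspondences.
    F, mask = cv2.findFundamentalMat(pts_left, pts_right, cv2.FM_RANSAC)
    inl_l = pts_left[mask.ravel() == 1]
    inl_r = pts_right[mask.ravel() == 1]
    # Compute the rectifying homographies H and H' for the two images.
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(inl_l, inl_r, F, (w, h))
    if not ok:
        raise RuntimeError("rectification failed")
    rect_left = cv2.warpPerspective(img_left, H1, (w, h))
    rect_right = cv2.warpPerspective(img_right, H2, (w, h))
    return rect_left, rect_right
```

After this step, corresponding points lie on the same image row, so stereo matching can scan horizontally as described above.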
https://en.wikipedia.org/wiki/Image_rectification
Image response (or more correctly, image response rejection ratio , or IMRR ) is a measure of performance of a radio receiver that operates on the superheterodyne principle. [ 1 ] In such a radio receiver, a local oscillator (LO) is used to heterodyne or "beat" against the incoming radio frequency (RF), generating sum and difference frequencies . One of these will be at the intermediate frequency (IF), and will be selected and amplified. The radio receiver is responsive to any signal at its designed IF frequency, including unwanted signals. For example, with an LO tuned to 110 MHz, there are two incoming signal frequencies that can generate a 10 MHz IF frequency. A signal broadcast at 100 MHz (the wanted signal), and mixed with the 110 MHz LO will create the sum frequency of 210 MHz (ignored by the receiver), and the difference frequency at the desired 10 MHz. However, a signal broadcast at 120 MHz (the unwanted signal), and mixed with the 110 MHz LO will create a sum frequency of 230 MHz (ignored by the receiver), and the difference frequency also at 10 MHz. The signal at 120 MHz is called the image of the wanted signal at 100 MHz. The ability of the receiver to reject this image gives the image rejection ratio (IMRR) of the system. The image rejection ratio , or image frequency rejection ratio , is the ratio of the intermediate- frequency (IF) signal level produced by the desired input frequency to that produced by the image frequency . The image rejection ratio is usually expressed in dB . When the image rejection ratio is measured, the input signal levels of the desired and image frequencies must be equal for the measurement to be meaningful. IMRR is measured in dB , giving the ratio of wanted to unwanted signal levels that yield the same output from the receiver. In a good design, ratios of >60 dB are achievable. Note that IMRR is not a measurement of the performance of the IF stages or IF filtering ( selectivity ); the signal yields a perfectly valid IF frequency. Rather, it is the measure of the bandpass characteristics of the stages preceding the IF amplifier, which will consist of RF bandpass filters and usually an RF amplifier stage or two. The image frequency rejection ratio (IRR) is determined mainly by the receiver's RF filtering, and can be estimated from the relative response of a parallel tuned circuit . [ 2 ] I R R = 1 + ρ 2 Q 2 {\displaystyle IRR={\sqrt {1+\rho ^{2}Q^{2}}}} where ρ = f I M A G E f R F − f R F f I M A G E {\displaystyle \rho ={\frac {f_{IMAGE}}{f_{RF}}}-{\frac {f_{RF}}{f_{IMAGE}}}} and Q is the quality factor. The image rejection ratio for a given value of gain imbalance γ , ( ϵ = γ − 1 ) {\displaystyle \gamma ,(\epsilon =\gamma -1)} and phase imbalance ϕ {\displaystyle \phi } is given by [ 3 ] I M R R = γ 2 + 1 − 2 γ c o s ( ϕ ) γ 2 + 1 + 2 γ c o s ( ϕ ) ≈ ϵ 2 + ϕ 2 4 {\displaystyle IMRR={\frac {\gamma ^{2}+1-2\gamma cos(\phi )}{\gamma ^{2}+1+2\gamma cos(\phi )}}\approx {\frac {\epsilon ^{2}+\phi ^{2}}{4}}} This article incorporates public domain material from Federal Standard 1037C . General Services Administration . Archived from the original on 2022-01-22. (in support of MIL-STD-188 ).
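A small sketch of the arithmetic above, using the 100 MHz / 110 MHz example from the text; the Q value and the gain/phase imbalance figures are arbitrary example inputs.

```python
import math

def image_frequency(f_rf, f_lo):
    """Image frequency for a superheterodyne receiver: the other input
    frequency that produces the same |f_lo - f| intermediate frequency."""
    f_if = abs(f_lo - f_rf)
    return f_lo + f_if if f_lo > f_rf else f_lo - f_if

def irr_tuned_circuit(f_rf, f_image, q):
    """IRR of a single parallel tuned circuit: sqrt(1 + rho^2 * Q^2)."""
    rho = f_image / f_rf - f_rf / f_image
    return math.sqrt(1 + (rho * q) ** 2)

def image_to_wanted_ratio(gain_imbalance, phase_imbalance_rad):
    """Approximate image-to-wanted power ratio from gain/phase imbalance: (eps^2 + phi^2)/4."""
    eps = gain_imbalance - 1.0
    return (eps ** 2 + phase_imbalance_rad ** 2) / 4.0

f_rf, f_lo = 100e6, 110e6                  # the 100 MHz / 110 MHz example above
f_img = image_frequency(f_rf, f_lo)        # 120 MHz
irr = irr_tuned_circuit(f_rf, f_img, q=50)
print(f"image at {f_img/1e6:.0f} MHz, single tuned circuit IRR = {20*math.log10(irr):.1f} dB")

ratio = image_to_wanted_ratio(10 ** (1 / 20), math.radians(2))   # 1 dB gain, 2 deg phase error
print(f"image-to-wanted ratio: {10*math.log10(ratio):.1f} dB")   # negative dB = rejection
```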
https://en.wikipedia.org/wiki/Image_response
In model theory , a branch of mathematics , an imaginary element of a structure is roughly a definable equivalence class . These were introduced by Shelah (1990) , and elimination of imaginaries was introduced by Poizat (1983) .
https://en.wikipedia.org/wiki/Imaginary_element
Imaginary time is a mathematical representation of time that appears in some approaches to special relativity and quantum mechanics . It finds uses in certain cosmological theories. Mathematically, imaginary time is real time which has undergone a Wick rotation so that its coordinates are multiplied by the imaginary unit i . Imaginary time is not imaginary in the sense that it is unreal or made-up; it is simply expressed in terms of imaginary numbers . In mathematics, the imaginary unit i {\displaystyle i} is − 1 {\displaystyle {\sqrt {-1}}} , such that i 2 {\displaystyle i^{2}} is defined to be − 1 {\displaystyle -1} . A number which is a direct multiple of i {\displaystyle i} is known as an imaginary number . [ 1 ] : Chp 4 A number that is the sum of an imaginary number and a real number is known as a complex number. In certain physical theories, periods of time are multiplied by i {\displaystyle i} in this way. Mathematically, an imaginary time period τ {\textstyle \tau } may be obtained from real time t {\textstyle t} via a Wick rotation by π / 2 {\textstyle \pi /2} in the complex plane : τ = i t {\textstyle \tau =it} . [ 1 ] : 769 Stephen Hawking popularized the concept of imaginary time in his book The Universe in a Nutshell . "One might think this means that imaginary numbers are just a mathematical game having nothing to do with the real world. From the viewpoint of positivist philosophy , however, one cannot determine what is real. All one can do is find which mathematical models describe the universe we live in. It turns out that a mathematical model involving imaginary time predicts not only effects we have already observed but also effects we have not been able to measure yet nevertheless believe in for other reasons. So what is real and what is imaginary? Is the distinction just in our minds?" In fact, the terms " real " and " imaginary " for numbers are just a historical accident, much like the terms " rational " and " irrational ": "...the words real and imaginary are picturesque relics of an age when the nature of complex numbers was not properly understood." In the Minkowski spacetime model adopted by the theory of relativity , spacetime is represented as a four-dimensional surface or manifold . Its four-dimensional equivalent of a distance in three-dimensional space is called an interval . Assuming that a specific time period is represented as a real number in the same way as a distance in space, an interval d {\displaystyle d} in relativistic spacetime is given by the usual formula but with time negated: d 2 = x 2 + y 2 + z 2 − t 2 {\displaystyle d^{2}=x^{2}+y^{2}+z^{2}-t^{2}} where x {\displaystyle x} , y {\displaystyle y} and z {\displaystyle z} are distances along each spatial axis and t {\displaystyle t} is a period of time or "distance" along the time axis (Strictly, the time coordinate is ( c t ) 2 {\displaystyle (ct)^{2}} where c {\displaystyle c} is the speed of light , however we conventionally choose units such that c = 1 {\displaystyle c=1} ). Mathematically this is equivalent to writing d 2 = x 2 + y 2 + z 2 + ( i t ) 2 {\displaystyle d^{2}=x^{2}+y^{2}+z^{2}+(it)^{2}} In this context, i {\displaystyle i} may be either accepted as a feature of the relationship between space and real time, as above, or it may alternatively be incorporated into time itself, such that the value of time is itself an imaginary number , denoted by τ {\displaystyle \tau } . 
The equation may then be rewritten in normalised form: $d^2 = x^2 + y^2 + z^2 + \tau^2$. Similarly its four-vector may then be written as $(x_0, x_1, x_2, x_3)$, where distances are represented as $x_n$ and $x_0 = ict$, where $c$ is the speed of light and time is imaginary. Hawking noted the utility of rotating time intervals into an imaginary metric in certain situations in 1971. [4] In physical cosmology, imaginary time may be incorporated into certain models of the universe which are solutions to the equations of general relativity. In particular, imaginary time can help to smooth out gravitational singularities, where known physical laws break down, to remove the singularity and avoid such breakdowns (see Hartle–Hawking state). The Big Bang, for example, appears as a singularity in ordinary time but, when modelled with imaginary time, the singularity can be removed and the Big Bang functions like any other point in four-dimensional spacetime. Any boundary to spacetime is a form of singularity, where the smooth nature of spacetime breaks down. [1]: 769–772 With all such singularities removed from the Universe, it thus can have no boundary, and Stephen Hawking speculated that "the boundary condition to the Universe is that it has no boundary". [2]: 85 However, the unproven nature of the relationship between actual physical time and the imaginary time incorporated into such models has raised criticisms. [5] Roger Penrose has noted that there needs to be a transition from the Riemannian metric (often referred to as "Euclidean" in this context) with imaginary time at the Big Bang to a Lorentzian metric with real time for the evolving Universe. Also, modern observations suggest that the Universe is open and will never shrink back to a Big Crunch. If this proves true, then the end-of-time boundary still remains. [1]: 769–772
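The effect of the substitution can be checked numerically. The following is a minimal Python sketch, with arbitrary illustrative event coordinates and units chosen so that c = 1; it verifies that replacing t with τ = it turns the time-negated interval into the normalised form without changing its value.

```python
# Minimal sketch: substituting imaginary time tau = i*t turns the interval
# d^2 = x^2 + y^2 + z^2 - t^2 into the form d^2 = x^2 + y^2 + z^2 + tau^2.
# Coordinates below are arbitrary illustrative values; units chosen so c = 1.

def interval_real_time(x, y, z, t):
    """Squared interval with real time (time term negated)."""
    return x**2 + y**2 + z**2 - t**2

def interval_imaginary_time(x, y, z, tau):
    """Squared interval with imaginary time tau = i*t."""
    return x**2 + y**2 + z**2 + tau**2

x, y, z, t = 1.0, 2.0, 2.0, 3.0
tau = 1j * t                        # Wick rotation by pi/2 in the complex plane

print(interval_real_time(x, y, z, t))            # 0.0
print(interval_imaginary_time(x, y, z, tau).real)  # 0.0, since tau^2 = (i*t)^2 = -t^2
```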
https://en.wikipedia.org/wiki/Imaginary_time
The imaginary unit or unit imaginary number (i) is a mathematical constant that is a solution to the quadratic equation $x^2 + 1 = 0$. Although there is no real number with this property, i can be used to extend the real numbers to what are called complex numbers, using addition and multiplication. A simple example of the use of i in a complex number is $2 + 3i$. Imaginary numbers are an important mathematical concept; they extend the real number system $\mathbb{R}$ to the complex number system $\mathbb{C}$, in which at least one root for every nonconstant polynomial exists (see Algebraic closure and Fundamental theorem of algebra). Here, the term imaginary is used because there is no real number having a negative square. There are two complex square roots of −1: $i$ and $-i$, just as there are two complex square roots of every real number other than zero (which has one double square root). In contexts in which use of the letter i is ambiguous or problematic, the letter j is sometimes used instead. For example, in electrical engineering and control systems engineering, the imaginary unit is normally denoted by j instead of i, because i is commonly used to denote electric current. [1] Square roots of negative numbers are called imaginary because in early-modern mathematics, only what are now called real numbers, obtainable by physical measurements or basic arithmetic, were considered to be numbers at all – even negative numbers were treated with skepticism – so the square root of a negative number was previously considered undefined or nonsensical. The name imaginary is generally credited to René Descartes, and Isaac Newton used the term as early as 1670. [2][3] The i notation was introduced by Leonhard Euler. [4] A unit is an undivided whole, and unity or the unit number is the number one (1). The imaginary unit i is defined solely by the property that its square is −1: \[ i^2 = -1. \] With i defined this way, it follows directly from algebra that $i$ and $-i$ are both square roots of −1. Although the construction is called imaginary, and although the concept of an imaginary number may be intuitively more difficult to grasp than that of a real number, the construction is valid from a mathematical standpoint. Real number operations can be extended to imaginary and complex numbers by treating i as an unknown quantity while manipulating an expression (and using the definition to replace any occurrence of $i^2$ with −1). Higher integral powers of i are thus \[ i^3 = i^2 i = (-1)i = -i, \quad i^4 = i^3 i = (-i)i = 1, \quad i^5 = i^4 i = (1)i = i, \] and so on, cycling through the four values $1$, $i$, $-1$, and $-i$. As with any non-zero real number, $i^0 = 1$. As a complex number, i can be represented in rectangular form as $0 + 1i$, with a zero real component and a unit imaginary component. In polar form, i can be represented as $1 \times e^{\pi i/2}$ (or just $e^{\pi i/2}$), with an absolute value (or magnitude) of 1 and an argument (or angle) of $\tfrac{\pi}{2}$ radians. (Adding any integer multiple of $2\pi$ to this angle works as well.) In the complex plane, which is a special interpretation of a Cartesian plane, i is the point located one unit from the origin along the imaginary axis (which is perpendicular to the real axis).
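The defining property and the four-value power cycle can be illustrated with Python's built-in complex type, in which the imaginary unit is written 1j; a minimal sketch (printed values are subject to ordinary floating-point rounding):

```python
import cmath

i = 1j                           # Python's notation for the imaginary unit

print(i ** 2)                    # approximately -1: the defining property i^2 = -1

# Higher integer powers cycle through the four values 1, i, -1, -i with period 4.
for n in range(8):
    print(n, i ** n)

# Polar form: i has magnitude 1 and argument pi/2.
print(abs(i), cmath.phase(i))    # 1.0, ~1.5708
```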
Being a quadratic polynomial with no multiple root, the defining equation $x^2 = -1$ has two distinct solutions, which are equally valid and which happen to be additive and multiplicative inverses of each other. Although the two solutions are distinct numbers, their properties are indistinguishable; there is no property that one has that the other does not. One of these two solutions is labelled $+i$ (or simply $i$) and the other is labelled $-i$, though it is inherently ambiguous which is which. The only differences between $+i$ and $-i$ arise from this labelling. For example, by convention $+i$ is said to have an argument of $+\tfrac{\pi}{2}$ and $-i$ is said to have an argument of $-\tfrac{\pi}{2}$, related to the convention of labelling orientations in the Cartesian plane relative to the positive x-axis with positive angles turning anticlockwise in the direction of the positive y-axis. Also, despite the signs written with them, neither $+i$ nor $-i$ is inherently positive or negative in the sense that real numbers are. [5] A more formal expression of this indistinguishability of $+i$ and $-i$ is that, although the complex field is unique (as an extension of the real numbers) up to isomorphism, it is not unique up to a unique isomorphism. That is, there are two field automorphisms of the complex numbers $\mathbb{C}$ that keep each real number fixed, namely the identity and complex conjugation. For more on this general phenomenon, see Galois group. Using the concepts of matrices and matrix multiplication, complex numbers can be represented in linear algebra. The real unit 1 and imaginary unit i can be represented by any pair of matrices I and J satisfying $I^2 = I$, $IJ = JI = J$, and $J^2 = -I$. Then a complex number $a + bi$ can be represented by the matrix $aI + bJ$, and all of the ordinary rules of complex arithmetic can be derived from the rules of matrix arithmetic. The most common choice is to represent 1 and i by the 2 × 2 identity matrix I and the matrix J, \[ I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}. \] Then an arbitrary complex number $a + bi$ can be represented by \[ aI + bJ = \begin{pmatrix} a & -b \\ b & a \end{pmatrix}. \] More generally, any real-valued 2 × 2 matrix with a trace of zero and a determinant of one squares to $-I$, so could be chosen for J. Larger matrices could also be used; for example, 1 could be represented by the 4 × 4 identity matrix and i could be represented by any of the Dirac matrices for spatial dimensions. Polynomials (weighted sums of the powers of a variable) are a basic tool in algebra. Polynomials whose coefficients are real numbers form a ring, denoted $\mathbb{R}[x]$, an algebraic structure with addition and multiplication and sharing many properties with the ring of integers. The polynomial $x^2 + 1$ has no real-number roots, but the set of all real-coefficient polynomials divisible by $x^2 + 1$ forms an ideal, and so there is a quotient ring $\mathbb{R}[x]/\langle x^2 + 1 \rangle$. This quotient ring is isomorphic to the complex numbers, and the variable $x$ expresses the imaginary unit.
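A minimal sketch of this matrix representation, using NumPy; the specific complex numbers multiplied at the end are arbitrary illustrative values:

```python
import numpy as np

# Matrix representation described above: 1 -> I, i -> J, and a + bi -> a*I + b*J.
I = np.eye(2)
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

print(J @ J)                     # equals -I, mirroring i^2 = -1

def to_matrix(z: complex) -> np.ndarray:
    """Represent a + bi as the 2x2 real matrix aI + bJ."""
    return z.real * I + z.imag * J

# Matrix multiplication reproduces complex multiplication:
z, w = 2 + 3j, 1 - 4j
lhs = to_matrix(z) @ to_matrix(w)
rhs = to_matrix(z * w)
print(np.allclose(lhs, rhs))     # True
```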
The complex numbers can be represented graphically by drawing the real number line as the horizontal axis and the imaginary numbers as the vertical axis of a Cartesian plane called the complex plane. In this representation, the numbers 1 and i are at the same distance from 0, with a right angle between them. Addition by a complex number corresponds to translation in the plane, while multiplication by a unit-magnitude complex number corresponds to rotation about the origin. Every similarity transformation of the plane can be represented by a complex-linear function $z \mapsto az + b$. In the geometric algebra of the Euclidean plane, the geometric product or quotient of two arbitrary vectors is a sum of a scalar (real number) part and a bivector part. (A scalar is a quantity with no orientation, a vector is a quantity oriented like a line, and a bivector is a quantity oriented like a plane.) The square of any vector is a positive scalar, representing its length squared, while the square of any bivector is a negative scalar. The quotient of a vector with itself is the scalar $1 = u/u$, and when multiplied by any vector leaves it unchanged (the identity transformation). The quotient of any two perpendicular vectors of the same magnitude, $J = u/v$, which when multiplied rotates the divisor a quarter turn into the dividend, $Jv = u$, is a unit bivector which squares to −1, and can thus be taken as a representative of the imaginary unit. Any sum of a scalar and bivector can be multiplied by a vector to scale and rotate it, and the algebra of such sums is isomorphic to the algebra of complex numbers. In this interpretation points, vectors, and sums of scalars and bivectors are all distinct types of geometric objects. [6] More generally, in the geometric algebra of any higher-dimensional Euclidean space, a unit bivector of any arbitrary planar orientation squares to −1, so can be taken to represent the imaginary unit i. The imaginary unit was historically written $\sqrt{-1}$, and still is in some modern works. However, great care needs to be taken when manipulating formulas involving radicals. The radical sign notation $\sqrt{x}$ is reserved either for the principal square root function, which is defined for only real $x \ge 0$, or for the principal branch of the complex square root function. Attempting to apply the calculation rules of the principal (real) square root function to manipulate the principal branch of the complex square root function can produce false results: [7] \[ -1 = i \cdot i = \sqrt{-1} \cdot \sqrt{-1} \mathrel{\overset{\mathrm{fallacy}}{=}} \sqrt{(-1)\cdot(-1)} = \sqrt{1} = 1 \qquad \text{(incorrect).} \] Generally, the calculation rules $\sqrt{x} \cdot \sqrt{y} = \sqrt{x \cdot y}$ and $\sqrt{x}/\sqrt{y} = \sqrt{x/y}$ are guaranteed to be valid only for real, positive values of x and y. [8][9][10] When x or y is real but negative, these problems can be avoided by writing and manipulating expressions like $i\sqrt{7}$, rather than $\sqrt{-7}$. For a more thorough discussion, see the articles Square root and Branch point.
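The failure of the real-square-root rule for negative arguments can be seen directly with the principal complex square root provided by Python's cmath module; a minimal illustration (outputs are approximate):

```python
import cmath

# The principal complex square root does not satisfy sqrt(x)*sqrt(y) = sqrt(x*y)
# for negative arguments, which is exactly the fallacy described above.
lhs = cmath.sqrt(-1) * cmath.sqrt(-1)   # i * i
rhs = cmath.sqrt((-1) * (-1))           # sqrt(1)

print(lhs)   # approximately -1
print(rhs)   # 1 -- the "rule" breaks down for negative inputs
```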
As a complex number, the imaginary unit follows all of the rules of complex arithmetic. When the imaginary unit is repeatedly added or subtracted, the result is some integer times the imaginary unit, an imaginary integer; any such numbers can be added and the result is also an imaginary integer: $ai + bi = (a+b)i$. Thus, the imaginary unit is the generator of a group under addition, specifically an infinite cyclic group. The imaginary unit can also be multiplied by any arbitrary real number to form an imaginary number. These numbers can be pictured on a number line, the imaginary axis, which as part of the complex plane is typically drawn with a vertical orientation, perpendicular to the real axis which is drawn horizontally. Integer sums of the real unit 1 and the imaginary unit i form a square lattice in the complex plane called the Gaussian integers. The sum, difference, or product of Gaussian integers is also a Gaussian integer: \[ (a+bi)+(c+di) = (a+c)+(b+d)i, \qquad (a+bi)(c+di) = (ac-bd)+(ad+bc)i. \] When multiplied by the imaginary unit i, any arbitrary complex number in the complex plane is rotated by a quarter turn ($\tfrac{1}{2}\pi$ radians or 90°) anticlockwise. When multiplied by $-i$, any arbitrary complex number is rotated by a quarter turn clockwise. In polar form, \[ i\,re^{\varphi i} = re^{(\varphi+\pi/2)i}, \qquad -i\,re^{\varphi i} = re^{(\varphi-\pi/2)i}. \] In rectangular form, \[ i(a+bi) = -b+ai, \qquad -i(a+bi) = b-ai. \] The powers of i repeat in a cycle expressible with the following pattern, where n is any integer: \[ i^{4n} = 1, \quad i^{4n+1} = i, \quad i^{4n+2} = -1, \quad i^{4n+3} = -i. \] Thus, under multiplication, i is a generator of a cyclic group of order 4, a discrete subgroup of the continuous circle group of the unit complex numbers under multiplication. Written as a special case of Euler's formula for an integer n, \[ i^n = \exp\bigl(\tfrac{1}{2}\pi i\bigr)^n = \exp\bigl(\tfrac{1}{2} n \pi i\bigr) = \cos\bigl(\tfrac{1}{2} n\pi\bigr) + i \sin\bigl(\tfrac{1}{2} n\pi\bigr). \] With a careful choice of branch cuts and principal values, this last equation can also apply to arbitrary complex values of n, including cases like $n = i$. [citation needed] Just like all nonzero complex numbers, $i = e^{\pi i/2}$ has two distinct square roots which are additive inverses. In polar form, they are \[ \sqrt{i} = \exp\bigl(\tfrac{1}{2}\pi i\bigr)^{1/2} = \exp\bigl(\tfrac{1}{4}\pi i\bigr), \qquad -\sqrt{i} = \exp\bigl(\tfrac{1}{4}\pi i - \pi i\bigr) = \exp\bigl(-\tfrac{3}{4}\pi i\bigr). \] In rectangular form, they are [a] \[ \sqrt{i} = \frac{1+i}{\sqrt{2}} = \tfrac{\sqrt{2}}{2} + \tfrac{\sqrt{2}}{2} i, \qquad -\sqrt{i} = -\frac{1+i}{\sqrt{2}} = -\tfrac{\sqrt{2}}{2} - \tfrac{\sqrt{2}}{2} i. \] Squaring either expression yields \[ \left(\pm\frac{1+i}{\sqrt{2}}\right)^2 = \frac{1+2i-1}{2} = \frac{2i}{2} = i. \] The three cube roots of i are [12] \[ \sqrt[3]{i} = \exp\bigl(\tfrac{1}{6}\pi i\bigr) = \tfrac{\sqrt{3}}{2} + \tfrac{1}{2} i, \qquad \exp\bigl(\tfrac{5}{6}\pi i\bigr) = -\tfrac{\sqrt{3}}{2} + \tfrac{1}{2} i, \qquad \exp\bigl(-\tfrac{1}{2}\pi i\bigr) = -i. \] For a general positive integer n, the n-th roots of i are, for k = 0, 1, ..., n − 1, \[ \exp\left(2\pi i\,\frac{k+\tfrac{1}{4}}{n}\right) = \cos\left(\frac{4k+1}{2n}\pi\right) + i \sin\left(\frac{4k+1}{2n}\pi\right). \] The value associated with k = 0 is the principal n-th root of i. The set of roots equals the corresponding set of roots of unity rotated by the principal n-th root of i. These are the vertices of a regular polygon inscribed within the complex unit circle. The complex exponential function relates complex addition in the domain to complex multiplication in the codomain. Real values in the domain represent scaling in the codomain (multiplication by a real scalar), with 1 representing multiplication by e, while imaginary values in the domain represent rotation in the codomain (multiplication by a unit complex number), with i representing a rotation by 1 radian. The complex exponential is thus a periodic function in the imaginary direction, with period $2\pi i$ and image 1 at points $2k\pi i$ for all integers k, a real multiple of the lattice of imaginary integers. The complex exponential can be broken into even and odd components, the hyperbolic functions cosh and sinh or the trigonometric functions cos and sin: \[ \exp z = \cosh z + \sinh z = \cos(-iz) + i\sin(-iz). \] Euler's formula decomposes the exponential of an imaginary number representing a rotation: \[ \exp i\varphi = \cos\varphi + i\sin\varphi. \] This fact can be used to demonstrate, among other things, the apparently counterintuitive result that $i^i$ is a real number. [13] The quotient $\coth z = \cosh z / \sinh z$, with appropriate scaling, can be represented as an infinite partial fraction decomposition as the sum of reciprocal functions translated by imaginary integers: [14] \[ \pi \coth \pi z = \lim_{n\to\infty} \sum_{k=-n}^{n} \frac{1}{z+ki}. \] Other functions based on the complex exponential are well-defined with imaginary inputs. For example, a number raised to the $ni$ power is \[ x^{ni} = \cos(n \ln x) + i \sin(n \ln x). \] Because the exponential is periodic, its inverse the complex logarithm is a multi-valued function, with each complex number in the domain corresponding to multiple values in the codomain, separated from each other by any integer multiple of $2\pi i$.
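The periodicity in the imaginary direction, and the resulting multi-valuedness of the logarithm, can be seen numerically with Python's cmath module, which returns principal values; a minimal sketch with an arbitrary sample point:

```python
import cmath

z = 0.3 + 0.7j

# The complex exponential is periodic with period 2*pi*i ...
print(cmath.exp(z))
print(cmath.exp(z + 2j * cmath.pi))      # same value (up to rounding)

# ... so its inverse is multi-valued; cmath.log returns only the principal value.
w = cmath.exp(z + 2j * cmath.pi)
print(cmath.log(w))                      # recovers z, not z + 2*pi*i

# Euler's formula: exp(i*phi) = cos(phi) + i*sin(phi).
phi = 1.0
print(cmath.exp(1j * phi))
print(cmath.cos(phi) + 1j * cmath.sin(phi))
```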
One way of obtaining a single-valued function is to treat the codomain as a cylinder, with complex values separated by any integer multiple of $2\pi i$ treated as the same value; another is to take the domain to be a Riemann surface consisting of multiple copies of the complex plane stitched together along the negative real axis as a branch cut, with each branch in the domain corresponding to one infinite strip in the codomain. [15] Functions depending on the complex logarithm therefore depend on careful choice of branch to define and evaluate clearly. For example, if one chooses any branch where $\ln i = \tfrac{1}{2}\pi i$, then when x is a positive real number, \[ \log_i x = -\frac{2i \ln x}{\pi}. \] The factorial of the imaginary unit i is most often given in terms of the gamma function evaluated at $1 + i$: [16] \[ i! = \Gamma(1+i) = i\,\Gamma(i) \approx 0.4980 - 0.1549\,i. \] The magnitude and argument of this number are: [17] \[ |\Gamma(1+i)| = \sqrt{\frac{\pi}{\sinh \pi}} \approx 0.5216, \qquad \arg\Gamma(1+i) \approx -0.3016. \]
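A minimal numerical check of a few of these values: that i^i is real, that the principal-branch logarithm to base i matches the closed form above, and that Γ(1 + i) ≈ 0.4980 − 0.1549i. The gamma function here comes from the third-party mpmath package, which is assumed to be installed.

```python
import cmath
import mpmath   # third-party package; assumed installed

# i**i is real: exp(i * ln i) = exp(i * (i*pi/2)) = exp(-pi/2) ~ 0.2079.
print(1j ** 1j)
print(cmath.exp(-cmath.pi / 2))

# With the principal branch, log base i of a positive real x equals -2i*ln(x)/pi.
x = 5.0
print(cmath.log(x) / cmath.log(1j))      # log_i(x)
print(-2j * cmath.log(x) / cmath.pi)     # matches the closed form

# The factorial of i, taken as Gamma(1 + i).
print(mpmath.gamma(1 + 1j))              # ~ (0.4980 - 0.1549j)
```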
https://en.wikipedia.org/wiki/Imaginary_unit
Imagine Cup is an annual competition sponsored and hosted by Microsoft Corp. which brings together student developers worldwide to help resolve some of the world's toughest challenges. [1] It has been described as an "Olympics of Technology" and is considered one of the top competitions and awards related to technology and software design. All Imagine Cup competitors create projects that address the Imagine Cup theme: "Imagine a world where technology helps solve the toughest problems". Started in 2003, it has steadily grown, with more than 2 million competitors representing 150 countries in 2022. The 2023 Imagine Cup World Championship was held in Seattle, United States. [2][3] The Imagine Cup was first introduced in 2003 as part of Microsoft's initiative to encourage the next generation of technologists and entrepreneurs. Its goal was to empower students to use Microsoft technologies to create impactful projects in fields such as education, health, sustainability, and accessibility. Over the years, the Imagine Cup has grown in scope and scale, attracting tens of thousands of participants annually, and the competition has been shaped by technological trends and global challenges. The competition is divided into multiple stages. Throughout its history, the Imagine Cup has spotlighted innovative ideas and entrepreneurial spirit, and it has become a catalyst for innovation, providing students with resources, mentorship, and exposure to industry leaders. Many participants have gone on to turn their projects into startups, gaining recognition and funding. There are a number of competitions and challenges within the Imagine Cup; the Software Design category is the primary competition, whose winners take home the Imagine Cup trophy. Winning teams over the years have come from categories including Software Design, Games, Innovation, and Learning and Education. The Imagine Cup Innovation Accelerator was a program that, between 2006 and 2008, provided Imagine Cup Software Design teams with direction on the next stage of developing their innovative ideas into a business. Each year between 2006 and 2008, six teams were selected for the Innovation Accelerator program. Participants in the Innovation Accelerator program travelled to the Microsoft Mountain View campus in Silicon Valley and received technical support and business coaching to create the must-have technology and communications applications of the future. In 2010, Microsoft began inviting every Imagine Cup team to participate in its new program for startups: Microsoft BizSpark.
[41] With this program, startups receive access to current, full-featured software development tools and platforms. [42] A three-year, $3 million competitive grant program was established by Microsoft in 2011 to support a select number of winning teams in taking their solutions to market and realizing their potential to solve critical global problems. The inaugural grant recipients were announced at the World Economic Forum in Davos, Switzerland, on January 27, 2012. The grant packages include US$75,000 for each team, as well as software, cloud computing services, solution provider support, premium Microsoft BizSpark account benefits and access to local resources such as the Microsoft Innovation Centers. Microsoft also connects grant recipients with its network of investors, nongovernmental organization partners and business partners. The grant recipients for the 2012 edition of the competition were announced in December 2012. Imagine Cup participants from around the world who won their regional competitions in 2010 have been recognized by their government leaders. [43] In October 2010, two Imagine Cup 2010 United States finalists (Wilson To [44] from the Mobilife team and Christian Hood from BeastWare) [citation needed] were invited to participate in the White House Science Fair. New Zealand's Prime Minister, John Key, sent Team OneBeep from New Zealand a personal letter congratulating them on their third-place finish. Team Skeek from Thailand, winners of the 2010 Software Design competition, met Dr. Khunying Kalaya Sophonpanich, a member of Parliament and Secretary General of The Rajapruek Institute Foundation. Microsoft Poland and members of the European Parliament hosted the "Pushing the Boundaries of Innovation" conference in Brussels; Imagine Cup teams from Poland (fteams and Mutants), Serbia (TFZR), Germany (Mediator), and Belgium (Nom Nom Productions) were in attendance. Greek Imagine Cup winners Giorgos Karakatsiotis and Vangos Pterneas of Megadodo [citation needed] met with the Prime Minister of Greece, George Papandreou, and demonstrated their project, which creates personalized descriptions of museum exhibits based on the user's needs. Teams Xormis and Educ8 from Jamaica were honored with a special luncheon hosted by the Government of Jamaica that included an address from the prime minister, Hon. Bruce Golding. Team Think Green had the opportunity to meet with Ivo Josipović, President of Croatia. [45]
https://en.wikipedia.org/wiki/Imagine_Cup
The Imagineering Foundation is a British charity organisation that encourages schoolchildren aged 8–16 to engage with engineering. It was formed in 1999 by a group of Midlands engineers who were concerned that a perceived decline in schoolchildren's interest in engineering activities was leading to a skills shortage in the STEM subjects. It was launched as an educational charity on 31 July 2001. The website was developed in 2004, with educational material for the clubs. The charity is based in Warwickshire. It also has a substantial online presence, with a website built so that tutors can access project material. [1] It currently organises 147 engineering clubs, mostly in England but with two in Wales and five in Scotland. Most of the clubs are in the West Midlands and, to a lesser extent, the East Midlands (Northamptonshire). Each week, around 1,800 children take part in the clubs. It publishes a magazine called The Imagineer. It hosts children's engineering fairs at the Royal Bath and West Show, the Royal International Air Tattoo at RAF Fairford, and the International Air Day at RNAS Yeovilton. These events are often attended by people (parents) who are already interested in engineering, the organisation's target audience.
https://en.wikipedia.org/wiki/Imagineering_Foundation
An imaging biomarker is a biologic feature, or biomarker, detectable in an image. [1] In medicine, an imaging biomarker is a feature of an image relevant to a patient's diagnosis. For example, a number of biomarkers are frequently used to determine the risk of lung cancer. First, a simple lesion in the lung detected by X-ray, CT, or MRI can lead to the suspicion of a neoplasm. The lesion itself serves as a biomarker, but the minute details of the lesion serve as biomarkers as well, and can collectively be used to assess the risk of neoplasm. Some of the imaging biomarkers used in lung nodule assessment include size, spiculation, calcification, cavitation, location within the lung, rate of growth, and rate of metabolism. Each piece of information from the image represents a probability: spiculation increases the probability of the lesion being cancer, while a slow rate of growth indicates benignity. These variables can be added to the patient's history, physical exam, laboratory tests, and pathology to reach a proposed diagnosis. Imaging biomarkers can be measured using several techniques, such as CT, PET, SPECT, ultrasound, electroencephalography, magnetoencephalography, and MRI. Imaging biomarkers are as old as the X-ray itself: features of a radiograph that represent some kind of pathology were first called "Roentgen signs", after Wilhelm Röntgen, the discoverer of the X-ray. [2] As the field of medical imaging developed and expanded to include numerous imaging modalities, imaging biomarkers have grown as well, in both quantity and complexity, extending ultimately into chemical imaging. A quantitative imaging biomarker (QIB) is an objective characteristic derived from an in vivo image, measured on a ratio or interval scale, as an indicator of normal biological processes, pathogenic processes or a response to a therapeutic intervention. [3] An advantage of QIBs over qualitative imaging biomarkers is that they are better suited to the follow-up of patients or to use in clinical trials. Early examples of frequently used QIBs are the RECIST criteria, which measure the evolution of tumor size to assess treatment response in patients with cancer, the nuchal scan used for prenatal screening, and the assessment of lesion load and brain atrophy in patients with multiple sclerosis. Subsequent QIBs have focused on physical measurands or dimensionless quantities derived from them (e.g., z-scores). Example QIBs in this vein include the apparent diffusion coefficient, [4] temperature, magnetic susceptibility, standard uptake value (SUV), [5] and shear wave speed. These newer QIBs allow for metrological traceability, raising the bar for measurement accuracy and precision. Clinical trials are known to be one of the most valuable sources of data in evidence-based medicine. For a pharmaceutical, device, or procedure to be approved for regular use in the U.S., it must be rigorously tested in clinical trials and demonstrate sufficient efficacy. Unfortunately, clinical trials are also extremely expensive and time-consuming. End-points, such as morbidity and mortality, are used as measures to compare groups within a clinical trial. The most basic endpoint used in clinical trials, mortality, requires years and sometimes decades of follow-up to assess sufficiently. Morbidity, although potentially faster to measure than mortality, can also be a very difficult endpoint to measure clinically, as it is often very subjective.
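As an illustration of how a simple QIB is computed, the following is a minimal sketch of the body-weight-normalised standard uptake value (SUV) used in PET; the function name and example values are illustrative only, and real implementations also handle decay correction and other acquisition details.

```python
def standard_uptake_value(tissue_activity_kbq_per_ml: float,
                          injected_dose_mbq: float,
                          body_weight_kg: float) -> float:
    """Body-weight-normalised SUV: tissue activity concentration divided by
    injected dose per unit body weight (assuming ~1 g/mL tissue density)."""
    dose_kbq = injected_dose_mbq * 1000.0      # MBq -> kBq
    body_weight_g = body_weight_kg * 1000.0    # kg -> g
    return tissue_activity_kbq_per_ml / (dose_kbq / body_weight_g)

# Illustrative values only: 5 kBq/mL lesion uptake, 370 MBq injected, 70 kg patient.
print(standard_uptake_value(5.0, 370.0, 70.0))   # ~0.95
```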
The expense and difficulty of measuring such endpoints are among the reasons why biomarkers have been increasingly used in clinical trials to detect subtle changes in physiology and pathology before they can be detected clinically. The biomarkers act as surrogate endpoints. The use of surrogate endpoints has been shown to significantly decrease the time and resources used in clinical trials. Because surrogate end-points allow researchers to assess a marker rather than the patient, participants can act as their own controls, and in many cases blinding is easier. In addition to serving as surrogate endpoints, imaging biomarkers can be used as predictive classifiers, to assist in selecting appropriate candidates for a particular treatment. Predictive classifiers are frequently used in molecular imaging in order to ensure enzymatic response to treatment. The United States Congress and the Food and Drug Administration have acknowledged the value of imaging biomarkers, as evidenced by recent actions that encourage their use. The FDA Modernization Act of 1997 was instituted to improve the regulatory process for medical products. Section 112 of the Act gives explicit authority to grant expedited approval for drugs that treat serious conditions, as long as they have been shown to have an effect on a surrogate end-point that reasonably indicates a clinical benefit. Other provisions enable monitoring of the products following market approval to ensure the efficacy of the surrogate end-points, and require the FDA to establish a program that promotes the development and use of surrogate end-points for serious diseases. Although the act does not specifically mention the use of surrogate end-points for medical devices, section 205 requires that the "least burdensome means necessary" be used in their approval. [6] The wording is much more general than the provision for pharmaceuticals, but it is generally accepted that surrogate endpoints will often qualify as the "least burdensome means". Developing an understanding of the clinical significance of specific biomarkers can be a difficult process. There are two steps of certification for a surrogate endpoint to be fully established: qualification and validation. For a biomarker to become qualified, it must go through a somewhat formal qualification process. A request must be submitted to the IPRG to qualify an imaging biomarker for a specific use. The Biomarker Qualification Review Team, recruited from nonclinical and clinical review divisions, assesses the context and available data regarding the biomarker. They also evaluate the qualification study strategy, methods and results, and ultimately make a decision to accept or reject. After qualification, a biomarker may have limited use as a surrogate endpoint: it may be used in phase I and II clinical trials, but can only be used in phase III trials for early futility analyses. There are two steps to validation: probable validation and known validation. "Probable validation" requires widespread agreement in the medical or scientific community as to the marker's efficacy. "Known validation" requires a scientific framework or body of evidence that appears to elucidate the marker's efficacy. [7] For full validation, a biomarker must demonstrate that the treatment-versus-control differences are similar to the treatment-versus-control differences for the clinical outcome; it is not sufficient to simply demonstrate that biomarker responders survive longer than biomarker non-responders.
Three measures of quality are used to determine the strength of a biomarker for use in clinical trials. [8] Because compiling a library of validated biomarkers requires an enormous amount of resources, the FDA has encouraged the creation of consortia between public and private organizations in order to facilitate the sharing of data for the qualification and validation of biomarkers. The Biomarkers Consortium was created by the Foundation for the National Institutes of Health, the National Institutes of Health, the Food and Drug Administration, and the Pharmaceutical Research and Manufacturers of America. It is a public-private biomedical research partnership that aims to provide grants for the generation of data for clinical biomarker qualification. The Predictive Safety Testing Consortium was created by the Critical Path Institute and the Food and Drug Administration to develop the framework needed for data sharing between its members in order to make biomarker qualification easier; it is also working with regulatory agencies to replace the currently unstructured qualification process. In 2001, the Radiology department at Massachusetts General Hospital founded the MGH Center for Biomarkers in Imaging, a center dedicated to encouraging the development and use of imaging biomarkers. Its initial project was to catalogue the known biomarkers in order to make them readily available to scientists, regulators, and industry representatives (now available on its website). The catalogue includes the pathology specific to each biomarker, the investigator(s) involved in creating and using the biomarker, and the modalities used in its detection. The International Cancer Biomarker Consortium was created to assist in the discovery of biomarkers by facilitating coordinated research and by leveraging resources. Each international team chooses a cancer site (or sites) for study, functions independently, and secures its own funding. The president of the organization, Leland Hartwell, won the Nobel Prize in Physiology or Medicine in 2001. Uniform Protocols for Imaging in Clinical Trials (UPICT) was created by the American College of Radiology. Imaging Response Assessment Teams were created by the National Cancer Institute and the AACI to advance the role of imaging in the assessment of response to therapy and to increase the application of quantitative, anatomic, functional, and molecular imaging endpoints in clinical therapeutic trials; the initiative aims to strengthen clinical collaboration between imaging scientists and oncologic investigators. The Oncology Biomarker Qualification Initiative was created by the Food and Drug Administration and the National Cancer Institute to qualify new cancer biomarkers; its first project involves PET imaging in non-Hodgkin lymphoma.
https://en.wikipedia.org/wiki/Imaging_biomarker
An imaging cycler microscope (ICM) is a fully automated (epi)fluorescence microscope which overcomes the spectral resolution limit, resulting in parameter- and dimension-unlimited fluorescence imaging. The principle and robotic device were described by Walter Schubert in 1997 [1] and have been further developed with his co-workers within the human toponome project. [2][3][4][5] The ICM runs robotically controlled repetitive incubation–imaging–bleaching cycles with dye-conjugated probe libraries recognizing target structures in situ (biomolecules in fixed cells or tissue sections). This allows an arbitrarily large number of distinct pieces of biological information to be transmitted by re-using the same fluorescence channel: after bleaching, the same dye conjugated to another specific probe transmits another piece of biological information, and so on. Thereby noise-reduced quasi-multichannel fluorescence images with reproducible physical, geometrical, and biophysical stabilities are generated. The resulting power of combinatorial molecular discrimination (PCMD) per data point is given by 65,536^k, where 65,536 is the number of grey-value levels (output of a 16-bit CCD camera) and k is the number of co-mapped biomolecules and/or subdomains per biomolecule. High PCMD has been shown for k = 100, [3][5] and in principle can be extended to much higher values of k. In contrast to traditional multichannel, few-parameter fluorescence microscopy (panel a in the figure), high PCMDs in an ICM lead to high functional and spatial resolution (panel b in the figure). Systematic ICM analysis of biological systems reveals the supramolecular segregation law that describes the principle of order of large, hierarchically organized biomolecular networks in situ (toponome). [6] The ICM is the core technology for the systematic mapping of the complete protein network code in tissues (human toponome project). [2] The original ICM method [1] includes any modification of the bleaching step. Corresponding modifications have been reported for antibody retrieval [7] and chemical dye-quenching, [8] which have recently been debated. [9][10] The Toponome Imaging Systems (TIS) and multi-epitope-ligand cartographs (MELC) represent different stages of the ICM technological development. Imaging cycler microscopy received the American ISAC best paper award in 2008 for the three symbol code of organized proteomes. [11]
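The combinatorial discrimination power defined above can be illustrated with a short calculation; the value of k below is an arbitrary example.

```python
# Minimal sketch: PCMD per data point = (grey levels) ** (co-mapped biomolecules).
# A 16-bit CCD camera yields 2**16 = 65,536 grey-value levels; k = 3 is illustrative.

grey_levels = 2 ** 16
k = 3

pcmd = grey_levels ** k
print(pcmd)   # 281474976710656 distinguishable combinations for k = 3
```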
https://en.wikipedia.org/wiki/Imaging_cycler_microscopy
Imaging genetics refers to the use of anatomical or physiological imaging technologies as phenotypic assays to evaluate genetic variation. Scientists who first used the term imaging genetics were interested in how genes influence psychopathology and used functional neuroimaging to investigate genes that are expressed in the brain (neuroimaging genetics). [1] Imaging genetics uses research approaches in which genetic information and fMRI data in the same subjects are combined to define neuro-mechanisms linked to genetic variation. [2] With the images and genetic information, it can be determined how individual differences in single nucleotide polymorphisms, or SNPs, lead to differences in brain wiring, structure, and intellectual function. [3] Imaging genetics allows the direct observation of the link between genes and brain activity; the overall idea is that common variants in SNPs lead to common diseases. [4] A neuroimaging phenotype is attractive because it is closer to the biology of genetic function than illnesses or cognitive phenotypes. [5] By combining the outputs of polygenic and neuroimaging data within a linear model, it has been shown that genetic information provides additive value in the task of predicting Alzheimer's disease (AD). [6] AD traditionally has been considered a disease marked by neuronal cell loss and widespread gray matter atrophy, and the apolipoprotein E allele (APOE4) is a widely confirmed genetic risk factor for late-onset AD. [7] Another gene risk variant associated with Alzheimer's is the CLU gene risk variant, which showed a distinct profile of lower white matter integrity that may increase vulnerability to developing AD later in life. [7] Each CLU-C allele was associated with lower fractional anisotropy (FA) in frontal, temporal, parietal, occipital, and subcortical white matter. [7] Brain regions with lower FA included corticocortical pathways previously demonstrated to have lower FA in AD patients and APOE4 carriers. [7] The CLU-C-related variability found here might create a local vulnerability important for disease onset. [7] These effects are remarkable as they already exist early in life and are associated with a risk gene that is very prevalent (~36% of Caucasians carry two copies of the risk-conferring genetic variant CLU-C). [7] Quantitative mapping of structural brain differences in those at genetic risk of AD is crucial for evaluating treatment and prevention strategies. If the risk for AD is identified, appropriate changes in lifestyle may help limit the onset of AD; exercise and body mass index have an effect on brain structure and the level of brain atrophy. [7] If suitable biomarkers are found and applied in clinical use, we will be able to diagnose the AD spectrum at an even earlier stage. [8] In the proposal, the AD spectrum is divided into three stages: (i) the preclinical stage; (ii) mild cognitive impairment; and (iii) clinical AD. [8] In the preclinical stage, only changes in a specific biomarker are observed, with neither cognitive impairment nor clinical signs of AD. The mild cognitive impairment stage may include those showing biomarker changes as well as mild cognitive decline but no clinical signs and symptoms of AD. AD is diagnosed in patients with biomarker changes and clinical signs and symptoms of AD.
This concept will promote understanding of the continuous transition from the preclinical stage to AD via mild cognitive impairment, in which biomarkers can be utilized to discriminate and clearly define each stage of the AD spectrum. The new criteria will promote earlier detection of subjects who will develop AD in later life, and the initiation of interventions aimed at the prevention of AD. Imaging genetics must develop methods that allow the effects of a large number of genetic variants to be related to equally multi-dimensional neuroimaging phenotypes. [9] Additionally, the field of imaging epigenetics is emerging, with particular relevance, for example, to the understanding of intergenerational transmission of trauma-related psychopathology and related disturbances of maternal care. [10] Medication, hospitalization history, or associated behaviors such as smoking can affect imaging. [9]
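The kind of linear model mentioned earlier, in which genetic information is added to imaging-derived features, can be sketched schematically. The example below uses synthetic data and hypothetical feature names with scikit-learn; it is not the model from the cited study, only an illustration of why an added polygenic term can improve prediction when it carries independent signal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins: imaging-derived features (e.g. regional atrophy measures)
# and one polygenic risk score per subject. Purely illustrative values.
imaging = rng.normal(size=(n, 5))
polygenic = rng.normal(size=(n, 1))
risk = imaging[:, 0] + 0.8 * polygenic[:, 0] + rng.normal(scale=1.0, size=n)
diagnosis = (risk > 0).astype(int)

imaging_only = cross_val_score(LogisticRegression(), imaging, diagnosis, cv=5).mean()
combined = cross_val_score(LogisticRegression(),
                           np.hstack([imaging, polygenic]), diagnosis, cv=5).mean()

# If the genetic term carries independent signal, the combined model scores higher.
print(imaging_only, combined)
```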
https://en.wikipedia.org/wiki/Imaging_genetics
Imaging informatics, also known as radiology informatics or medical imaging informatics, is a subspecialty of biomedical informatics that aims to improve the efficiency, accuracy, usability and reliability of medical imaging services within the healthcare enterprise. [1] It is devoted to the study of how information about and contained within medical images is retrieved, analyzed, enhanced, and exchanged throughout the medical enterprise. As radiology is an inherently data-intensive and technology-driven specialty, those in this branch of medicine have become leaders in imaging informatics. However, with the proliferation of digitized images across the practice of medicine to include fields such as cardiology, ophthalmology, dermatology, surgery, gastroenterology, obstetrics, gynecology and pathology, the advances in imaging informatics are also being tested and applied in other areas of medicine. Various industry players and vendors involved with medical imaging, along with IT experts and other biomedical informatics professionals, are contributing to and getting involved in this expanding field. Imaging informatics exists at the intersection of several broad fields. Owing to the diversity of the industry players and broad professional fields involved, demand grew for new standards and protocols; these include DICOM (Digital Imaging and Communications in Medicine), Health Level 7 (HL7), International Organization for Standardization (ISO) standards, and artificial intelligence protocols. Current research in imaging informatics focuses on artificial intelligence (AI) and machine learning (ML). These technologies are being used to develop automation methods, disease classification, advanced visualization techniques, and improvements in diagnostic accuracy; however, AI and ML integration faces several challenges with data management and security. While the field of imaging informatics is built on the power of modern computing, its roots trace back to the dawn of the 20th century. On November 8, 1895, the German physicist Wilhelm Conrad Röntgen observed a new kind of radiation during his experiments, which he named "X-rays". This discovery led to the creation of the medical imaging field. [2] X-rays stood as the only medical imaging technology for several decades after their discovery, but the mid 20th century brought an expansion of the field. The new modalities included computed tomography (CT), which visualizes soft tissue with a high degree of resolution; magnetic resonance imaging (MRI), a modern standard for soft tissue imaging; ultrasound, which uses sound waves to create less expensive visualizations; and nuclear imaging and hybrid scanners, which provide functional imaging and, by combining multiple modalities, imaging with higher spatial resolution. [3] As these imaging techniques became more sophisticated, the amount of information that medical imaging professionals were expected to process also increased, and the digital revolution of the mid to late 20th century further increased the data these techniques could gather. As a result, the main limiting factor for the medical imaging field became the human inability to accurately interpret large amounts of data. [4] Thus the need arose for computerized assistance with complex digital image analysis, storage and manipulation; modern imaging informatics was developed to fulfill these needs.
Imaging informatics is a broad field with numerous areas of interest, and its development is a culmination of the development of various individual technologies. Several key innovations stand out. The development of PACS popularized the use of image storage and retrieval systems in medical practices. [5] Moreover, this new technology demanded the development of others: given the impact PACS had on the medical community, it quickly became clear that digital imaging standards would need to be put in place. In response, the American College of Radiology (ACR) and the National Electrical Manufacturers Association (NEMA) created the Digital Imaging and Communications Standards Committee (which later became DICOM). [6] The digital age's impact on radiology also produced a large influx of data that needed to be managed. To handle it, information technology was incorporated in the form of systems such as the Radiology Information System (RIS) [7] and the Hospital Information System (HIS). These systems work in tandem with PACS and other imaging technology to streamline patient data management. [8] The idea of computer-aided detection (CAD) and computer-aided diagnosis (CADx) is that the analysis and interpretation of medical image data could be automated, with a potentially higher degree of accuracy than human detection and diagnosis. Interest in this subject dates back to 1966, when radiology imaging first became digitized. [9] The first successful implementation of a CAD system was in 1994 at the University of Chicago, for use in mammography. This was followed by the first commercial CAD system, ImageChecker M1000, in 1998. [6] Since the start of the 21st century, machine learning techniques have been used to implement versions of CAD and CADx systems. [10] The further development of these technologies is attractive because it offers a way around human limitations in medical image processing. [4] Although a highly accurate and fully automated CAD system has yet to be realized, recent advances in artificial intelligence may allow for functioning implementations. [11] In imaging informatics it is important to keep information about industry standards and data-sharing protocols up to date; the rapid advancement of the field requires ongoing attention to maintain consistency, foster interoperability, and guarantee the effective exchange of imaging data. Several aspects deserve particular consideration. The Digital Imaging and Communications in Medicine (DICOM) standard defines a structural schema that integrates medical imaging data with the relevant patient identifiers into unified data sets, analogous to the metadata embedded in JPEG images. DICOM objects consist of many attributes, notably including the pixel data, which in some imaging modalities corresponds to a single image or, alternatively, to an array of frames representing kinetic or volumetric data, as in cine loops or multi-dimensional scans in nuclear medicine. This architecture allows complex, multi-faceted data to be stored in a single DICOM file. The standard supports a range of pixel-data compression algorithms, including JPEG and JPEG 2000, and also allows compression of the whole data set.
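A minimal sketch of reading such a file with the third-party pydicom library; the file name below is a placeholder, and only a few of the many DICOM attributes are shown.

```python
# Requires the third-party pydicom package; the file path is a placeholder.
import pydicom

ds = pydicom.dcmread("example_ct_slice.dcm")

# Patient identifiers and acquisition metadata are stored alongside the image data.
print(ds.PatientName)
print(ds.Modality)             # e.g. "CT"

# The pixel data element holds the image itself, exposed as a NumPy array.
pixels = ds.pixel_array
print(pixels.shape, pixels.dtype)
```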
DICOM specifies three encodings for data elements, with explicit value representations preferred except for specific exceptions detailed in Part 5 of the DICOM standard. Applied uniformly across diverse applications, a DICOM file customarily includes a header containing essential attributes and data on the originating application. One proposed workflow integrates DICOM Structured Reporting (SR), in which essential measurements are encoded as DICOM SR objects. These objects are then used to fill a predefined SR template, producing a standardized report composed of discrete data elements. The report is subsequently transmitted to the electronic medical record (EMR) system. The discrete data extracted from these reports support longitudinal monitoring of individual patient metrics, can be forwarded to data registries, or can be used for clinical research. [12] DDInteract was designed to support collaboration between healthcare practitioners and patients in identifying the therapeutic approach that minimizes the risks posed by potential drug-drug interactions. The DDInteract user interface is organized into four distinct segments. Medication data can be represented across a variety of Fast Healthcare Interoperability Resources (FHIR) resource types, which DDInteract must analyze carefully. Specifically, MedicationRequest is used for medications prescribed to the patient; MedicationDispense covers medications that have been physically provided to the patient; and MedicationStatement pertains to medications that the patient reports having taken or currently taking. A single medication may be represented in multiple resource forms, and potential redundancies are merged into a single record based on the most recent date and a defined hierarchy among the resource types. To keep data retrieval from the FHIR server efficient, not every medication instance is considered: only resources that are currently active or were active within the past 100 days are included, in line with the prevalent U.S. practice of dispensing medication for no more than three months at a time. A quality management system (QMS) encompasses the organizational structure, resources, personnel expertise, and the documents and procedures that together assure and improve the quality of an organization's products and services. It defines a set of systematically coordinated activities for governing and optimizing quality. The ISO 9000 family is the most widely adopted framework for QMS implementations, while the ISO 15189 standard provides a specialized framework designed for clinical laboratory settings. [13]
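The FHIR retrieval and filtering pattern described above for DDInteract can be sketched against a generic FHIR server. The base URL and patient ID below are placeholders, the date fields consulted are simplified, and this is not DDInteract's actual code, only the general FHIR REST pattern it describes.

```python
# Sketch only: placeholder server and patient, simplified date handling.
from datetime import date, timedelta
import requests

FHIR_BASE = "https://fhir.example.org"   # placeholder FHIR server
PATIENT_ID = "12345"                     # placeholder patient
CUTOFF = (date.today() - timedelta(days=100)).isoformat()

def fetch(resource_type: str) -> list:
    """Fetch all resources of one type for the patient as a list of dicts."""
    resp = requests.get(f"{FHIR_BASE}/{resource_type}",
                        params={"patient": PATIENT_ID},
                        headers={"Accept": "application/fhir+json"})
    resp.raise_for_status()
    return [e["resource"] for e in resp.json().get("entry", [])]

medications = []
for rtype in ("MedicationRequest", "MedicationDispense", "MedicationStatement"):
    for res in fetch(rtype):
        # Keep resources that are active, or dated within the past 100 days.
        # The relevant date field differs by resource type; this is simplified.
        when = (res.get("authoredOn") or res.get("whenHandedOver")
                or res.get("effectiveDateTime") or "")
        if res.get("status") == "active" or when[:10] >= CUTOFF:
            medications.append((rtype, res.get("id")))

print(medications)
```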
A systematic review critically assessed the design, reporting standards, risk of bias, and validity of claims in studies that compare the performance of diagnostic deep learning algorithms in medical imaging against the expertise of clinicians. Drawing on data from major databases covering 2010 to June 2019, the review specifically targeted studies involving convolutional neural networks (CNNs), which are notable for their capacity to learn the features relevant for image classification in medical contexts. The review found a notable shortage of randomized clinical trials on the subject, identifying only ten such studies, of which merely two had been published; these exhibited low risk of bias and good adherence to reporting protocols. Among the 81 non-randomized studies located, only a minority were prospective or validated in real clinical settings, and the majority showed a high risk of bias, poor compliance with reporting norms, and a pronounced lack of access to data and code. The review underscores the need for more prospective studies and randomized trials, with reduced bias, greater clinical relevance, improved transparency, and more tempered conclusions in the growing field of applying deep learning to medical imaging. [14] The exponential growth in digital data, alongside enhanced computing capabilities, has markedly accelerated advances in artificial intelligence (AI), which are progressively being incorporated into healthcare. These AI applications aim to refine diagnosis, treatment, and prognosis through classification and prediction models. Nevertheless, the evolution of these technologies is impeded by a lack of rigorous reporting standards covering data sourcing, model architecture, and the methodologies used for model evaluation and validation. In response, MINIMAR (Minimum Information for Medical AI Reporting) has been proposed, an initiative designed to establish the critical parameters needed to understand AI-driven predictions, the populations targeted, inherent biases, and the generalizability of these technologies. Its authors urge the adoption of standardized protocols so that AI implementations in healthcare are reported accurately and responsibly, facilitating the development and deployment of associated clinical decision-support tools while addressing concerns about precision and bias. [15] As a foundation, the proposed standard should meet several criteria. First, it should include comprehensive details about the population from which the training data are derived, describing the data sources and the methods used for cohort selection. Second, the demographics of the training data should be explicitly documented to allow a meaningful comparison with the demographic characteristics of the population on which the model is intended to operate. Third, the model's architecture and development process should be fully disclosed to allow a clear interpretation of the model's intended purpose, comparison with similar models, and exact replication. Fourth, the process of model evaluation, optimization, and validation must be transparently reported, to explain how local model optimization is achieved and to support replication and the sharing of resources. [15] In summary, while AI offers significant opportunities for advancing imaging informatics, realizing them fully requires stringent validation, adherence to robust reporting frameworks, and a commitment to addressing ethical considerations. These steps are pivotal in ensuring that AI-driven tools fulfill their promise of enhancing efficiency and effectiveness in medical diagnostics.
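For readers unfamiliar with the models the review examined, the following is a generic, minimal sketch of a convolutional neural network for two-class image classification; the architecture, shapes and data are purely illustrative and do not correspond to any published medical model.

```python
# Generic sketch of a small CNN image classifier; values are illustrative only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)      # learned feature maps
        x = x.flatten(1)          # flatten for the linear classifier
        return self.classifier(x)

model = TinyCNN()
batch = torch.randn(4, 1, 64, 64)  # four synthetic single-channel 64x64 images
logits = model(batch)
print(logits.shape)                # torch.Size([4, 2])
```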
Imaging informatics is most prominent within the field of radiology, where AI-based tools can reduce the time radiologists spend analyzing images. A study published in Current Medical Imaging found that AI-assisted CT imaging reduced radiologists' reading time for detecting lung nodules and pleural effusions by more than 44%. [ 18 ] In cardiology, imaging informatics aids in the molecular phenotyping of cardiovascular (CV) diseases and in the unification of CV knowledge. [ 19 ] Data extraction, imaging, and machine learning analysis of these data and images allow researchers to categorize diseases based on the characteristics or features discovered, and this classification lets researchers bring CV information together on a single platform for continued analysis and information retrieval. In pathology, imaging informatics supports a wide range of disease detection and analysis, most prominently the detection and analysis of different forms of cancer. Diagnosing cancer manually is a painstaking and subjective process that can involve examining millions of cells. Through clinical decision support systems (CDSS), professionals can reduce the manual labor of tissue region selection, using whole-slide imaging (WSI) tools to maximize the information analyzed. Several predictive models aim to identify regions of interest within WSI and require training before use. Unsupervised models are also being introduced, although they are currently less prominent. One example detects tissue folds by clustering the pixels of an image according to the difference between saturation and intensity values at every pixel (a simplified sketch of this approach is given below). Being unsupervised, the method has limitations: it has low sensitivity for different types of tissue folds within an image and low specificity for images without tissue folds. [ 20 ] In the US and some other countries, radiologists who wish to pursue sub-specialty training in this field can undergo fellowship training in imaging informatics. Medical Imaging Informatics Fellowships are undertaken after completion of Board Certification in Diagnostic Radiology, and may be pursued concurrently with other sub-specialty radiology fellowships. The American Board of Imaging Informatics (ABII) also administers a certification examination for Imaging Informatics Professionals. PARCA (PACS Administrators Registry and Certification Association) certifications also exist for imaging informatics professionals. [ 21 ] The American Board of Preventive Medicine (ABPM) offers a certification examination in Clinical Informatics for physicians who have primary board certification with the American Board of Medical Specialties, a medical license, and a medical degree. There are two pathways to be eligible to sit for the examination: the Practice Pathway (open through 2022), for those who have not completed ACGME-accredited fellowship training in Clinical Informatics, and the ACGME-Accredited Fellowship Pathway, which requires a fellowship of at least 24 months. [ 22 ] The expansion of DICOM standards facilitated the widespread adoption of Picture Archiving and Communication Systems (PACS), marking a milestone in the digital transformation of imaging informatics.
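The unsupervised tissue-fold detection approach described earlier in this section can be sketched in a few lines. This is a minimal, hypothetical illustration rather than the published method: the choice of k-means with two clusters, the colour-space conversion, and the rule for picking the "fold" cluster are assumptions made for the example.

```python
import numpy as np
from skimage.color import rgb2hsv       # any RGB-to-HSV conversion would do
from sklearn.cluster import KMeans

def candidate_fold_mask(rgb_tile):
    """Cluster pixels on (saturation - intensity) and return a boolean mask of the
    cluster with the larger feature value, which tends to contain folded tissue."""
    hsv = rgb2hsv(rgb_tile)              # channels: hue, saturation, value (intensity)
    feature = (hsv[..., 1] - hsv[..., 2]).reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feature)
    # The cluster whose mean saturation-minus-intensity is higher is flagged as folds.
    fold_label = int(feature[labels == 1].mean() > feature[labels == 0].mean())
    return (labels == fold_label).reshape(rgb_tile.shape[:2])

if __name__ == "__main__":
    tile = np.random.rand(64, 64, 3)     # synthetic stand-in for a whole-slide image tile
    print("flagged pixels:", int(candidate_fold_mask(tile).sum()))
```

In practice such a mask would only flag candidate regions for review, reflecting the sensitivity and specificity limitations noted above.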
DICOM standardization, which began to take hold in the late 1990s and was established by the early 2000s, has enhanced the ability to store, retrieve, and share medical images across different systems, improving the efficiency of medical imaging practices. [ 23 ] The adoption of structured reporting aimed to standardize reports to be concise and uniform, influencing patient care. The introduction of BI-RADS (Breast Imaging–Reporting and Data System) is a notable example, which has led to improved consistency across mammography reports. This milestone spans several years as these systems were refined and more widely adopted throughout the early 2010s. [ 23 ] The realization that graphics processing units (GPUs) could be used to accelerate neural networks occurred around 2012. This advancement led to the rapid development of deep learning techniques, speeding up tasks like image segmentation, feature recognition, and algorithm creation from large datasets of annotated images. This era of AI has enabled high-performance algorithms capable of assisting in hundreds of diagnostic tasks. [ 23 ] The field of radiomics, which involves extracting quantitative features from medical images that are invisible to the human eye, saw significant growth towards the late 2010s. This approach has enabled a deeper analysis of imaging data, which can be correlated with genomic patterns and other medical data to enhance diagnostic and predictive accuracy. [ 23 ] The development and FDA clearance of photon-counting detectors (PCD) for computed tomography (CT) scans in 2022 was an important innovation. These detectors offer a more efficient process for converting X-rays to electrical signals, allowing for better material differentiation and potentially reducing the radiation dose for patients. [ 24 ] Current research in imaging informatics is primarily focused on the integration and advancement of artificial intelligence (AI) and machine learning (ML) within medical imaging technologies. Efforts are concentrated on enhancing diagnostic precision, improving predictive analytics, and automating image analysis processes. Deep learning, a subset of ML, is particularly pivotal in transforming radiological imaging, with algorithms increasingly being developed for tasks such as tumor detection, organ segmentation, and anomaly identification. These advancements not only aim to increase the efficiency and accuracy of diagnoses but also strive to reduce the workload on radiologists by automating routine tasks. [ 11 ] [ 25 ] Looking ahead, the future directions of imaging informatics are expected to further embrace interdisciplinary approaches, incorporating genetics, pathology, and data from wearable devices to offer more holistic views of patient health. The concept of "radiogenomics," linking imaging features with genomic data, is an area of growing interest, potentially leading to more personalized and precise medical treatments. Additionally, the ongoing development of interoperability standards and secure data exchange protocols will be crucial in enabling the seamless integration of imaging data across different healthcare platforms, enhancing collaborative research and clinical practice globally.
[ 23 ] [ 24 ] There are several challenges in the field of imaging informatics. Addressing these challenges requires a coordinated effort among technology developers, healthcare providers, regulatory bodies, and other stakeholders. Advances in technology must be balanced with considerations of practicality, ethics, and equity to ensure that imaging informatics can fulfill its promise to enhance patient care and treatment outcomes. Recent years have seen significant advancements in software technologies relevant to imaging informatics. One notable development is the integration of machine learning algorithms into imaging software, enabling automated analysis and interpretation of medical images. For instance, Rajpurkar et al. (2017) demonstrated the effectiveness of deep learning algorithms in pneumonia detection on chest X-rays, showcasing the potential of machine learning in medical imaging analysis. [ 27 ] These algorithms have shown promising results in tasks such as lesion detection, disease classification, and treatment response assessment. Moreover, the implementation of natural language processing (NLP) techniques has facilitated the extraction of valuable insights from unstructured radiology reports, enhancing the efficiency of data analysis and decision-making processes. Advances in hardware technology have also played a pivotal role in shaping the landscape of imaging informatics. The evolution of imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), has led to improvements in image resolution, acquisition speed, and diagnostic accuracy. [ 28 ] Additionally, the miniaturization of imaging devices has enabled point-of-care imaging, allowing for real-time assessment of patients in various clinical settings. For example, the development of handheld ultrasound devices has revolutionized point-of-care imaging by providing clinicians with portable and easy-to-use tools for bedside examinations (Smith, 2018). The rise of wearable devices and mobile health applications has further expanded the scope of imaging informatics, facilitating remote imaging and patient monitoring using sensors and cameras. [ 28 ] Along with technological innovations, methodological advancements have expanded the capabilities of imaging informatics. One development is the integration of multimodal imaging techniques, which combine data from multiple imaging modalities to provide complementary information about anatomical and physiological structures. For instance, recent studies have demonstrated the effectiveness of combining MRI, CT, and ultrasound data for improved diagnosis and treatment planning in oncology patients (Gupta et al., 2020). [ 29 ] By fusing data from these sources, clinicians can obtain a more comprehensive understanding of a patient's condition, leading to more accurate diagnoses and personalized treatment plans.
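As a concrete illustration of the kind of machine-learning integration described above, the sketch below applies a pretrained convolutional network to a single radiograph. It is a generic, hypothetical example: it is not the model of Rajpurkar et al., the DenseNet-121 backbone carries generic ImageNet weights rather than weights trained for pneumonia detection, and the single-logit classification head is untrained, so the score is meaningful only as a template for how such a pipeline is wired together.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing for a single radiograph.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# DenseNet-121 backbone with its 1000-class head swapped for one "finding" logit.
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = torch.nn.Linear(model.classifier.in_features, 1)
model.eval()

def finding_score(path):
    """Return an illustrative probability-like score for one image file."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)        # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logit = model(batch)
    return torch.sigmoid(logit).item()
```

In a real deployment the head would be trained and validated on labelled radiographs, and the reporting concerns discussed earlier (data provenance, demographics, evaluation) would apply to that training process.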
https://en.wikipedia.org/wiki/Imaging_informatics
Imaging particle analysis is a technique for making particle measurements using digital imaging, one of the techniques defined by the broader term particle size analysis. The measurements that can be made include particle size, particle shape (morphology or shape analysis), and grayscale or color, as well as distributions (graphs) of statistical population measurements. Imaging particle analysis uses the techniques common to image analysis or image processing for the analysis of particles. Particles are defined here per particle size analysis as particulate solids, and thereby not including atomic or sub-atomic particles. Furthermore, this article is limited to real images (optically formed), as opposed to "synthetic" (computed) images (computed tomography, confocal microscopy, SIM and other super resolution microscopy techniques, etc.). Given the above, the primary method for imaging particle analysis is optical microscopy. While optical microscopes have been around and used for particle analysis since the 1600s, [ 1 ] the "analysis" in the past has been accomplished by humans using the human visual system. As such, much of this analysis is subjective, or qualitative in nature. Even when some sort of quantitative tools are available, such as a measuring reticle in the microscope, it has still required a human to determine and record those measurements. Beginning in the late 1800s [ 2 ] with the availability of photographic plates, it became possible to capture microscope images permanently on film or paper, making measurements easier to acquire by simply using a scaled ruler on the hard copy image. While this significantly sped up the acquisition of particle measurements, it was still a tedious, labor-intensive process, which not only made it difficult to measure statistically significant particle populations, but also still introduced some degree of human error to the process. Finally, beginning roughly in the late 1970s, CCD digital sensors for capturing images and computers that could process those images began to revolutionize the process through digital imaging. Although the actual algorithms for performing digital image processing had been around for some time, it was not until the significant computing power needed to perform these analyses became available at reasonable prices that digital imaging techniques could be brought to bear in the mainstream. The first dynamic imaging particle analysis system was patented in 1982. [ 3 ] As faster computing resources became available at lower cost, the task of making measurements from microscope images of particles could now be performed automatically by machine without human intervention, making it possible to measure significantly larger numbers of particles in much less time. The basic process of imaging particle analysis consists of acquiring a digital image of the particles, isolating (segmenting) the individual particles from the background using image processing, and measuring them (a simplified sketch of these steps is given later in this article). Imaging particle analyzers can be subdivided into two distinct types, static and dynamic, based upon the image acquisition methods. While the basic principles are the same, the methods of image acquisition are different in nature, and each has advantages and disadvantages. Static image acquisition is the most common form. Almost all microscopes can be easily adapted to accept a digital camera via a C-mount adaptor. This type of set-up is often referred to as a digital microscope, although many systems using that name are used only for displaying an image on a monitor. The sample is prepared on a microscope slide which is placed on the microscope stage.
Once the sample has been focused, an image can be acquired and displayed on the monitor. If a digital camera is used or a frame grabber is present, the image can be saved in digital format, and image processing algorithms can be used to isolate particles in the field of view and measure them. [ 11 ] [ 12 ] In static image acquisition only one field-of-view image is captured at a time. If the user wishes to image other portions of the same sample on the slide, they can use the X-Y positioning hardware (typically composed of two linear stages) on the microscope to move to a different area of the slide. Care must be taken to ensure that two images do not overlap, so as not to count and measure the same particles more than once. The major drawback to static image acquisition is that it is time consuming, both in sample preparation (getting the sample onto the slide with proper dilution if necessary) and in the multiple movements of the stage needed to acquire a statistically significant number of particles to count and measure. Computer-controlled X-Y positioning stages are sometimes used in these systems to speed the process up and to reduce the amount of operator intervention, but it is still a time consuming process, and the motorized stages can be expensive due to the level of precision required when working at high magnification. [ 13 ] The major advantages of static particle imaging systems are the use of standard microscope systems and simplicity of depth-of-field considerations. Since these systems can be made from any standard optical microscope, they may be a lower cost approach for people who already have microscopes. More important, though, is that microscope-based systems generally have fewer depth-of-field issues than dynamic imaging systems. This is because the sample is placed on a microscope slide, and then usually covered with a cover slip, thus limiting the plane containing the particles relative to the optical axis. This means that more particles will be in acceptable focus at high magnifications. [ 13 ] In dynamic image acquisition, large amounts of sample are imaged by moving the sample past the microscope optics and using high-speed flash illumination to effectively "freeze" the motion of the sample. The flash is synchronized with a high shutter speed in the camera to further prevent motion blur. In a dry particle system, the particles are dispensed from a shaker table and fall by gravity past the optical system. In fluid imaging particle analysis systems, the liquid is passed across the optical axis by use of a narrow flow cell. The flow cell is characterized by its depth perpendicular to the optical axis. In order to keep the particles in focus, the flow depth is restricted so that the particles remain in a plane of best focus perpendicular to the optical axis. This is similar in concept to the effect of the microscope slide plus cover slip in a static imaging system. Since depth of field decreases exponentially with increasing magnification, the depth of the flow cell must be narrowed significantly at higher magnifications. The major drawback to dynamic image acquisition is that the flow cell depth must be limited as described above. This means that, in general, particles larger in size than the flow cell depth cannot be allowed in the sample being processed, because they will probably clog the system.
The sample will therefore typically have to be filtered to remove particles larger than the flow cell depth prior to being evaluated. If it is desired to look at a very wide range of particle sizes, this may mean that the sample has to be fractionated into smaller size-range components and run with different magnification/flow cell combinations. [ 13 ] The major advantage of dynamic image acquisition is that it enables acquiring and measuring particles at significantly higher speed, typically on the order of 10,000 particles per minute or greater. This means that statistically significant populations can be analyzed in far shorter time periods than previously possible with manual microscopy or even static imaging particle analysis. In this sense, dynamic imaging particle analysis systems combine the speed typical of particle counters with the discriminatory capabilities of microscopy. [ 13 ] Dynamic imaging particle analysis is used in aquatic microorganism research to analyze phytoplankton, zooplankton, and other aquatic microorganisms ranging from 2 μm to 5 mm in size. Dynamic imaging particle analysis is also used in biopharmaceutical research to characterize and analyze particles ranging from 300 nm to 5 mm in size. Micro-flow imaging (MFI) is a particle analysis technique that uses flow microscopy to quantify particles contained in a solution based on size. This technique is used in the biopharmaceutical industry to characterize subvisible particles from approximately 1 μm to >50 μm. [ 14 ]
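The acquire-isolate-measure sequence referred to earlier can be sketched with standard image-processing tools. This is a minimal illustration under stated assumptions (particles darker than the background, Otsu thresholding, a made-up pixel calibration), not a description of any particular commercial analyzer.

```python
import numpy as np
from skimage import filters, measure

def measure_particles(gray_image, pixels_per_micron):
    """Segment dark particles on a bright background and report simple size/shape
    metrics for each detected particle."""
    threshold = filters.threshold_otsu(gray_image)    # global threshold (assumption)
    binary = gray_image < threshold                    # particles assumed darker than background
    labelled = measure.label(binary)                   # isolate connected particle regions
    results = []
    for region in measure.regionprops(labelled):
        if region.area < 5:                            # ignore single-pixel noise
            continue
        diameter_px = np.sqrt(4.0 * region.area / np.pi)   # circle-equivalent diameter
        results.append({
            "equivalent_diameter_um": diameter_px / pixels_per_micron,
            "area_um2": region.area / pixels_per_micron ** 2,
            "eccentricity": float(region.eccentricity),     # 0 = circle, towards 1 = elongated
        })
    return results
```

A population-level size distribution of the kind described above is then simply a histogram over the per-particle measurements returned by such a routine.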
https://en.wikipedia.org/wiki/Imaging_particle_analysis
An imaging spectrometer is an instrument used in hyperspectral imaging and imaging spectroscopy to acquire a spectrally-resolved image of an object or scene, usually to support analysis of the composition of the object being imaged. [ 1 ] [ 2 ] The spectral data produced are often referred to as a datacube due to the three-dimensional representation of the data: two axes of the image correspond to vertical and horizontal distance, and the third to wavelength. The principle of operation is the same as that of the simple spectrometer, but special care is taken to avoid optical aberrations for better image quality. Example imaging spectrometer types include: filtered camera, whiskbroom scanner, pushbroom scanner, integral field spectrograph (or related dimensional reformatting techniques), wedge imaging spectrometer, Fourier transform imaging spectrometer, computed tomography imaging spectrometer (CTIS), image replicating imaging spectrometer (IRIS), coded aperture snapshot spectral imager (CASSI), and image mapping spectrometer (IMS). In 1704, Sir Isaac Newton demonstrated that white light could be split up into component colours. The subsequent history of spectroscopy led to precise measurements and provided the empirical foundations for atomic and molecular physics (Born & Wolf, 1999). Significant achievements in imaging spectroscopy are attributed to airborne instruments, particularly arising in the early 1980s and 1990s (Goetz et al., 1985; Vane et al., 1984). However, it was not until 1999 that the first imaging spectrometer was launched into space (the NASA Moderate-resolution Imaging Spectroradiometer, or MODIS). Terminology and definitions evolve over time. At one time, more than ten spectral bands sufficed to justify the term imaging spectrometer, but presently the term is seldom defined by a minimum number of spectral bands and instead implies a contiguous (or redundant) set of spectral bands. Imaging spectrometers are used specifically for the purpose of measuring the spectral content of light and other electromagnetic radiation. The spectral data gathered give the operator insight into the sources of the radiation. Prism spectrometers use a classical method of dispersing radiation by means of a prism as the refracting element. The imaging spectrometer works by imaging a radiation source onto a "slit" by means of a source imager. A collimator collimates the beam, which is dispersed by a refracting prism and re-imaged onto a detection system by a re-imager. Special care is taken to produce the best possible image of the source onto the slit, and the purpose of the collimator and re-imaging optics is to produce the best possible image of the slit. An area array of detector elements fills the detection system at this stage. Every point of the source image is re-imaged as a line spectrum onto a detector-array column, so the detector array signals supply data on the spectral content of spatially resolved source points within the source area. These source points are imaged onto the slit and then re-imaged onto the detector array. The system thus simultaneously provides spectral information about the source area along a line of spatially resolved points; the line is then scanned to build up spectral information for the whole scene. [ 3 ] In imaging spectroscopy (also hyperspectral imaging or spectral imaging) each pixel of an image acquires many bands of light intensity data from the spectrum, instead of just the three bands of the RGB color model.
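The datacube structure described above is straightforward to represent in code. The sketch below builds a toy cube with arbitrary dimensions and wavelength values (assumptions for illustration only) and shows that a single-pixel spectrum and a single-band image are just different slices of the same three-dimensional array.

```python
import numpy as np

# Toy datacube: two spatial axes (rows, columns) and one spectral axis (bands).
rows, cols, bands = 100, 120, 50
cube = np.random.rand(rows, cols, bands)            # stand-in for measured radiance
wavelengths_nm = np.linspace(400, 2500, bands)      # illustrative band centres

pixel_spectrum = cube[42, 57, :]    # full spectrum at one spatial location
band_image = cube[:, :, 10]         # monochromatic image at one wavelength

print(f"band 10 is centred near {wavelengths_nm[10]:.0f} nm")
print("spectrum samples per pixel:", pixel_spectrum.size)
```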
Imaging spectroscopy is the simultaneous acquisition of spatially coregistered images in many spectrally contiguous bands. Some spectral images contain only a few image planes of a spectral data cube, while others are better thought of as full spectra at every location in the image. For example, solar physicists use the spectroheliograph to make images of the Sun built up by scanning the slit of a spectrograph, to study the behavior of surface features on the Sun; such a spectroheliogram may have a spectral resolution (λ/Δλ) of over 100,000 and be used to measure local motion (via the Doppler shift) and even the magnetic field (via the Zeeman splitting or Hanle effect) at each location in the image plane. The multispectral images collected by the Opportunity rover, in contrast, have only four wavelength bands and hence are only a little more than 3-color images. Hyperspectral data are often used to determine what materials are present in a scene. Materials of interest could include roadways, vegetation, and specific targets (e.g. pollutants or hazardous materials). In principle, each pixel of a hyperspectral image could be compared to a material database to determine the type of material making up the pixel. However, many hyperspectral imaging platforms have low resolution (>5 m per pixel), causing each pixel to be a mixture of several materials. The process of unmixing one of these 'mixed' pixels is called hyperspectral image unmixing or simply hyperspectral unmixing. A solution to hyperspectral unmixing is to reverse the mixing process. Generally, two models of mixing are assumed: linear and nonlinear. Linear mixing models the ground as being flat, with incident sunlight causing the materials to radiate some amount of the incident energy back to the sensor. Each pixel is then modeled as a linear sum of the radiated energy curves of the materials making up the pixel, so each material contributes to the sensor's observation in a positive linear fashion. Additionally, a conservation of energy constraint is often observed, forcing the weights of the linear mixture to sum to one in addition to being positive. The model can be written as p = Ax, where p represents a pixel observed by the sensor, A is a matrix of material reflectance signatures (each signature is a column of the matrix), and x is the vector of proportions of the materials present in the observed pixel. This type of model is also referred to as a simplex, with x satisfying two constraints: 1. Abundance Nonnegativity Constraint (ANC) - each element of x is nonnegative. 2. Abundance Sum-to-one Constraint (ASC) - the elements of x must sum to one. Non-linear mixing results from multiple scattering, often due to non-flat surfaces such as buildings and vegetation. There are many algorithms to unmix hyperspectral data, each with its own strengths and weaknesses. Many algorithms assume that pure pixels (pixels which contain only one material) are present in a scene. Non-linear unmixing algorithms also exist, using for example support vector machines or artificial neural networks, and probabilistic methods have been applied to unmix pixels through Monte Carlo unmixing algorithms.
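Under the linear model just described, estimating the abundance vector x for a pixel reduces to a constrained least-squares problem. The sketch below is a simplified illustration that assumes the endmember signatures in A are already known: the ANC is enforced with non-negative least squares, while the ASC is only approximated by renormalising the result (dedicated fully constrained least squares, FCLS, solvers impose the sum-to-one constraint inside the optimisation).

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(endmembers, pixel):
    """Estimate material abundances for one pixel under the linear model p = A x."""
    abundances, _ = nnls(endmembers, pixel)   # ANC: abundances >= 0
    total = abundances.sum()
    return abundances / total if total > 0 else abundances   # crude ASC approximation

if __name__ == "__main__":
    # Three made-up endmember signatures over five spectral bands (columns of A).
    A = np.array([[0.10, 0.80, 0.30],
                  [0.20, 0.75, 0.35],
                  [0.60, 0.40, 0.30],
                  [0.70, 0.20, 0.25],
                  [0.65, 0.15, 0.20]])
    true_x = np.array([0.5, 0.3, 0.2])
    p = A @ true_x                            # noiseless synthetic mixed pixel
    print(np.round(unmix_pixel(A, p), 3))     # recovers approximately [0.5, 0.3, 0.2]
```

Applying such a routine to every pixel yields the per-material abundance maps discussed next.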
Once the fundamental materials of a scene are determined, it is often useful to construct an abundance map of each material, which displays the fractional amount of that material present at each pixel. Linear programming or similar constrained optimisation is often used to enforce the ANC and ASC. A major practical application of imaging spectrometers is observation of the planet Earth from orbiting satellites. The spectrometer records spectral information for every point in an image, and it is pointed at specific parts of the Earth's surface to record data. The advantages of spectral content data include vegetation identification, physical condition analysis, mineral identification for the purpose of potential mining, and the assessment of polluted waters in oceans, coastal zones and inland waterways. Prism spectrometers are well suited to Earth observation because they can measure wide spectral ranges competently; they can be set to cover a range from 400 nm to 2,500 nm, which is of interest to scientists observing Earth from aircraft and satellites. The spectral resolution of a prism spectrometer is, however, insufficient for many scientific applications, so its use is largely specific to recording the spectral content of areas with greater spatial variation. [ 3 ] Venus Express, orbiting Venus, carried a number of imaging spectrometers covering the NIR–visible–UV range. One application is spectral geophysical imaging, which allows quantitative and qualitative characterization of the surface and of the atmosphere, using radiometric measurements. These measurements can then be used for unambiguous direct and indirect identification of surface materials and atmospheric trace gases, the measurement of their relative concentrations, the assignment of the proportional contribution of mixed pixel signals (the spectral unmixing problem), the derivation of their spatial distribution (the mapping problem), and finally their study over time (multi-temporal analysis). The Moon Mineralogy Mapper on Chandrayaan-1 was a geophysical imaging spectrometer. [ 5 ] The lenses of the prism spectrometer are used for both collimation and re-imaging; however, the imaging spectrometer is limited in its performance by the image quality provided by the collimators and re-imagers. The resolution of the slit image at each wavelength limits the spatial resolution; likewise, the resolution of the optics across the slit image at each wavelength limits the spectral resolution. Moreover, distortion of the slit image at each wavelength can complicate the interpretation of the spectral data. The refracting lenses used in the imaging spectrometer limit performance through the axial chromatic aberrations of the lens. These chromatic aberrations are problematic because they create differences in focus that prevent good resolution; however, if the spectral range is restricted, good resolution can be achieved. Furthermore, chromatic aberrations can be corrected by using two or more refracting materials over the full visible range. It is harder to correct chromatic aberrations over wider spectral ranges without further optical complexity. [ 3 ] Spectrometers intended for very wide spectral ranges are best made as all-mirror systems, which have no chromatic aberrations and are therefore preferable. Spectrometers with single-point or linear-array detection systems require relatively simple mirror systems, whereas spectrometers using area-array detectors need more complex mirror systems to provide good resolution.
It is conceivable that a collimator could be made that would prevent all aberrations; however, such a design is expensive because it requires the use of aspherical mirrors. Smaller two-mirror systems can correct aberrations, but they are not well suited to imaging spectrometers. Three-mirror systems are compact and also correct aberrations, but they require at least two aspherical components. Systems with more than four mirrors tend to be large and considerably more complex. Catadioptric systems are also used in imaging spectrometers and are compact as well; however, the collimator or imager is then made up of two curved mirrors and three refracting elements, making the system very complex. Optical complexity is unfavorable because every optical surface scatters light and adds stray reflections. Scattered radiation can enter the detector and cause errors in the recorded spectra; such stray radiation is referred to as stray light. Limiting the total number of surfaces that can contribute to scatter limits the amount of stray light introduced. Imaging spectrometers are meant to produce well-resolved images; to achieve this, they should be made with few optical surfaces and no aspherical optical surfaces. [ 3 ]
https://en.wikipedia.org/wiki/Imaging_spectroscopy
Imbibition is a special type of diffusion that takes place when a liquid is absorbed by solid colloids, causing an increase in volume. Water moves along a concentration gradient, and some dry materials absorb water readily. A water potential gradient between the absorbent and the imbibed liquid is essential for imbibition, and for a substance to imbibe a liquid there must first be some attraction between them. Imbibition occurs when a wetting fluid displaces a non-wetting fluid, the opposite of drainage, in which a non-wetting phase displaces the wetting fluid; the two processes are governed by different mechanisms. Seeds and other such dry materials contain almost no water and hence absorb water easily. One example of imbibition in nature is the absorption of water by hydrophilic colloids. Matrix potential contributes significantly to water in such substances. Dry seeds germinate in part by imbibition. Imbibition can also control circadian rhythms in Arabidopsis thaliana and (probably) other plants. The Amott test employs imbibition. Proteins have high imbibition capacities, so proteinaceous pea seeds swell more than starchy wheat seeds. Imbibition of water increases the volume of the imbibant, which results in imbibitional pressure (IP). The magnitude of such pressure can be demonstrated by the splitting of rocks by inserting dry wooden stakes in their crevices and soaking them in water, a technique used by early Egyptians to cleave stone blocks. [ 1 ] [ 2 ] Skin grafts (split thickness and full thickness) receive oxygenation and nutrition via imbibition, maintaining cellular viability until the processes of inosculation and revascularisation have re-established a new blood supply within these tissues. Examples include the absorption of water by seeds [ 3 ] and dry wood. Without the pressure generated by imbibition, seedlings would not be able to emerge from the soil. The radicle is the first part of a seedling (a growing plant embryo) to emerge from the seed during the process of germination. [ 4 ] The radicle is the embryonic root of the plant, and grows downward in the soil (the shoot emerges from the plumule), where it absorbs more water. Most of the seed is stored energy, so nutrients are not essential during the first days for the seedling.
https://en.wikipedia.org/wiki/Imbibition
The Imd pathway is a broadly-conserved NF-κB immune signalling pathway of insects and some arthropods [ 1 ] that regulates a potent antibacterial defence response. The pathway is named after the discovery of a mutation causing severe immune deficiency (the gene was named "Imd" for "immune deficiency"). The Imd pathway was first discovered in 1995 using Drosophila fruit flies by Bruno Lemaitre and colleagues, who also later discovered that the Drosophila Toll gene regulated defence against Gram-positive bacteria and fungi. [ 2 ] [ 3 ] Together the Toll and Imd pathways have formed a paradigm of insect immune signalling; as of September 2, 2019, these two landmark discovery papers had been cited collectively over 5,000 times on Google Scholar. [ 4 ] [ 5 ] The Imd pathway responds to signals produced by Gram-negative bacteria. Peptidoglycan recognition proteins (PGRPs) sense DAP-type peptidoglycan, which activates the Imd signalling cascade. This culminates in the translocation of the NF-κB transcription factor Relish, leading to production of antimicrobial peptides and other effectors. [ 6 ] Insects lacking Imd signalling, either naturally or by genetic manipulation, are extremely susceptible to infection by a wide variety of pathogens and especially bacteria. The Imd pathway bears a number of similarities to mammalian TNFR signalling, though many of the intracellular regulatory proteins of Imd signalling also bear homology to different signalling cascades of human Toll-like receptors. [ 6 ] A number of genes are analogous or homologous between Drosophila melanogaster Imd signalling and human TNFR1 signalling. [ 7 ] [ 8 ] While the exact epistasis of Imd pathway signalling components is continually scrutinized, the mechanistic order of many key components of the pathway is well-established. The following sections discuss Imd signalling as it is found in Drosophila melanogaster, where it is exceptionally well-characterized. [ 6 ] Imd signalling is activated by a series of steps from recognition of a bacterial substance (e.g. peptidoglycan) to the transmission of that signal leading to activation of the NF-κB transcription factor Relish. [ 7 ] Activated Relish then forms dimers that move into the nucleus and bind to DNA, leading to the transcription of antimicrobial peptides and other effectors. The sensing of bacterial signals is performed by peptidoglycan recognition protein LC (PGRP-LC), a transmembrane protein with an intracellular domain. Binding of bacterial peptidoglycan leads to dimerization of PGRP-LC, which generates the conformation needed to bind and activate the Imd protein. However, alternative isoforms of PGRP-LC can also be expressed with different functions: PGRP-LCx recognizes polymeric peptidoglycan, while PGRP-LCa does not bind peptidoglycan directly but acts alongside PGRP-LCx to bind monomeric peptidoglycan fragments (called tracheal cytotoxin or "TCT"). Another PGRP (PGRP-LE) also acts intracellularly to bind TCT that has crossed the cell membrane or is derived from an intracellular infection. PGRP-LA promotes the activation of Imd signalling in epithelial cells, but the mechanism is still unknown. [ 6 ] [ 7 ] Other PGRPs can inhibit the activation of Imd signalling by binding bacterial signals or inhibiting host signalling proteins: PGRP-LF is a transmembrane PGRP that lacks an intracellular domain and does not bind peptidoglycan. Instead, PGRP-LF forms dimers with PGRP-LC, preventing PGRP-LC dimerization and consequently activation of Imd signalling.
A number of secreted PGRPs have amidase activity that downregulates the Imd pathway by digesting peptidoglycan into short, non-immunogenic fragments. These include PGRP-LB, PGRP-SC1A, PGRP-SC1B, and PGRP-SC2. Additionally, PGRP-LB is the major regulator in the gut. [ 9 ] The principal intracellular signalling protein is Imd, a death domain-containing protein that binds with FADD and Dredd to form a complex. Dredd is activated following ubiquitination by the Iap2 complex (involving Iap2, UEV1a, bend, and eff), which allows Dredd to cleave the 30-residue N-terminus of Imd, allowing Imd to also be ubiquitinated by Iap2. [ 7 ] Following this, the TAK1/TAB2 complex binds to the activated form of Imd and subsequently activates the IKKγ/Ird5 complex through phosphorylation. This IKKγ complex activates Relish by phosphorylation, leading to cleavage of Relish and thereby producing both N-terminal and C-terminal Relish fragments. The N-terminal Relish fragments dimerize, leading to their translocation into the nucleus where these dimers bind to Relish-family NF-κB binding sites. Binding of Relish promotes the transcription of effectors such as antimicrobial peptides. [ 6 ] [ 7 ] While Relish is integral for transcription of Imd pathway effectors, there is additional cooperation with other pathways such as Toll and JNK. The TAK1/TAB2 complex is key to propagating intracellular signalling of not only the Imd pathway, but also the JNK pathway. As a result, mutants for JNK signalling have severely reduced expression of Imd pathway antimicrobial peptides. [ 10 ] Imd signalling regulates a number of effector peptides and proteins that are produced en masse following immune challenge. [ 11 ] This includes many of the major antimicrobial peptide genes of Drosophila, particularly: Diptericin, Attacin, Drosocin, Cecropin, and Defensin. [ 12 ] The Imd pathway regulates hundreds of genes after infection; however, the antimicrobial peptides play one of the most essential roles of Imd signalling in defence. Flies lacking multiple antimicrobial peptide genes succumb to infections by a broad suite of Gram-negative bacteria. [ 13 ] [ 14 ] Classical thinking suggested that antimicrobial peptides worked as a generalist cocktail in defence, where each peptide provided a small and somewhat redundant contribution. [ 15 ] [ 6 ] However, Hanson and colleagues found that single antimicrobial peptide genes displayed an unexpectedly high degree of specificity for defence against specific microbes. [ 13 ] The fly Diptericin A gene is essential for defence against the bacterium Providencia rettgeri (as also suggested by an earlier evolutionary study [ 16 ] ). A second specificity is encoded by Diptericin B, which defends flies against Acetobacter bacteria of the fly microbiome. [ 17 ] A third specificity is encoded by the gene Drosocin. Flies lacking Drosocin are highly susceptible to Enterobacter cloacae infection. [ 13 ] [ 14 ] [ 18 ] The Drosocin gene itself encodes two peptides (named Drosocin and Buletin), wherein it is specifically the Drosocin peptide that is responsible for defence against E. cloacae, while the Buletin peptide instead mediates a specific defence against another bacterium, Providencia burhodogranariea. [ 18 ] These works accompany others on antimicrobial peptides and effectors regulated by the Drosophila Toll pathway, which also display a specific importance in defence against certain fungi or bacteria.
[ 19 ] [ 20 ] [ 21 ] This work on Drosophila immune antimicrobial peptides and effectors has greatly revised the former view that such peptides are generalist molecules. The modern interpretation is now that specific molecules might provide a somewhat redundant layer of defence, but also single peptides can have critical importance, individually, against relevant microbes. [ 22 ] [ 23 ] [ 24 ] [ 25 ] The Imd pathway appears to have evolved in the last common ancestor of centipedes and insects. [ 1 ] However certain lineages of insects have since lost core components of Imd signalling. The first-discovered and most famous example is the pea aphid Acyrthosiphon pisum . It is thought that plant-feeding aphids have lost Imd signalling as they bear a number of bacterial endosymbionts , including both nutritional symbionts that would be disrupted by aberrant expression of antimicrobial peptides, and defensive symbionts that cover for some of the immune deficiency caused by loss of Imd signalling. [ 26 ] It has also been suggested that antimicrobial peptides, the downstream components of Imd signalling, may be detrimental to fitness and lost by insects with exclusively plant-feeding ecologies. [ 27 ] While the Toll and Imd signalling pathways of Drosophila are commonly depicted as independent for explanatory purposes, the underlying complexity of Imd signalling involves a number of likely mechanisms wherein Imd signalling interacts with other signalling pathways including Toll and JNK . [ 6 ] While the paradigm of Toll and Imd as largely independent provides a useful context for the study of immune signalling, the universality of this paradigm as it applies to other insects has been questioned. In Plautia stali stinkbugs , suppression of either Toll or Imd genes simultaneously leads to reduced activity of classic Toll and Imd effectors from both pathways. [ 28 ]
https://en.wikipedia.org/wiki/Imd_pathway
Imidapril , sold under the brand name Tanatril among others, is an ACE inhibitor used as an antihypertensive drug and for the treatment of chronic heart failure . [ 1 ] It was patented in 1982 and approved for medical use in 1993. [ 2 ] Contraindications are hypersensitivity against ACE inhibitors, especially if it has resulted in angioedema ; idiopathic or hereditary angioedema ; kidney failure ; the second and third trimesters in pregnancy; and combination with the drug aliskiren in people with diabetes . [ 3 ] [ 4 ] Common adverse effects are similar to other antihypertensive drugs and include headache, vertigo , and drowsiness . A dry cough is common as with all ACE inhibitors. [ 3 ] [ 4 ] Other possible adverse effects are described at ACE inhibitor#Adverse effects . No interaction studies have been conducted except with digoxin , which slightly decreases imidapril levels, possibly because it reduces its absorption from the gut. Other potential interactions are not well studied: Rifampicin reduces the activation of imidapril to its active metabolite imidaprilat. Like other ACE inhibitors, imidapril increases potassium levels in the blood and can therefore cause hyperkalaemia , especially when combined with potassium-sparing diuretics or potassium substitution. Other diuretics , vasodilators , tricyclic antidepressants and antipsychotics can add to the antihypertensive effect of imidapril. Lithium can reach toxic levels when combined with imidapril. The effect of antidiabetic drugs can be increased, potentially causing hypoglycaemia (low blood glucose levels). [ 3 ] [ 4 ] About 70% of the ingested imidapril is absorbed quickly from the gut; this percentage is reduced significantly when taken with a fatty meal. It reaches highest blood plasma concentrations after two hours and has a biological half-life of two hours. The substance is a prodrug and is activated to imidaprilat, which reaches highest plasma concentrations after 7 hours, has an initial half-life of 7 to 9 hours and a terminal half-life of more than 24 hours. The absolute bioavailability of imidaprilat is 42%. [ 3 ] [ 4 ] About 40% of the drug is excreted via the urine and 50% via the bile and faeces. [ 3 ] [ 4 ]
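The half-lives quoted above can be related to the fraction of drug remaining by simple first-order (exponential) elimination. The sketch below is only an arithmetical illustration of that relationship; it uses the terminal half-life figure alone and ignores the biphasic kinetics, absorption, and dosing considerations that a real pharmacokinetic model would include.

```python
def remaining_fraction(hours_elapsed, half_life_hours):
    """Fraction remaining under first-order elimination: f = 0.5 ** (t / t_half)."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# Using a terminal half-life of roughly 24 hours for imidaprilat (figure quoted above):
print(f"{remaining_fraction(24, 24):.2f}")   # 0.50 after one day
print(f"{remaining_fraction(48, 24):.2f}")   # 0.25 after two days
```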
https://en.wikipedia.org/wiki/Imidapril
Imidazole-1-sulfonyl azide is an organic azide compound that can be used as an alternative to trifluoromethanesulfonyl azide as an organic synthesis reagent. It is an explosive colorless liquid, but some of its organic-soluble salts can be safely handled and stored as solids. The hydrochloride salt of this compound is also available commercially, but can degrade to release explosive byproducts. [ 1 ] Like trifluoromethanesulfonyl azide, this compound generally converts primary amines or ammonium salts to azides when catalyzed by copper(II), nickel(II), zinc(II), or cobalt(II) salts. [ 2 ] This reaction is effectively the reverse of the Staudinger reaction. Similarly, it is able to transfer the diazo group (=N₂) under basic conditions. [ 2 ] As with all organic azides, this compound is potentially explosive both in use and in preparation. The hydrochloride salt was initially reported to be insensitive to impact, vigorous grinding, and prolonged heating at 80 °C, although heating above 150 °C resulted in violent decomposition. Later impact studies indicated otherwise, showing the sensitivity to be similar to that of RDX. [ 3 ] Subsequent reports noted that the hydrochloride salt is hygroscopic and upon prolonged storage is hydrolyzed to produce hydrazoic acid, which makes the material sensitive. [ 2 ] [ 3 ] Synthesis of the HCl salt has led to a significant explosion, with the expected explosive byproducts sulfonyl diazide or hydrazoic acid being present. [ 4 ] More recent studies have shown the hydrogen sulfate salt to be significantly less hazardous to handle, with a decomposition temperature of 131 °C, insensitivity to impact, and low electrostatic discharge and friction sensitivities. [ 3 ] Further improvements have allowed its synthesis with increased safety, making the hydrogen sulfate salt a relatively safe diazo-transfer reagent to both synthesize and handle. [ 5 ]
https://en.wikipedia.org/wiki/Imidazole-1-sulfonyl_azide
Imidazole-4-acetaldehyde is a metabolite of histamine in biological species. Histamine is inactivated through oxidative deamination of its primary amino group, a reaction catalyzed by the enzyme diamine oxidase (DAO); the metabolite produced by this reaction is imidazole-4-acetaldehyde. [ 1 ] [ 2 ] Imidazole-4-acetaldehyde is then further oxidized by an NAD-dependent aldehyde dehydrogenase, leading to imidazole-4-acetic acid. [ 1 ] Under prebiotic conditions, imidazole-4-acetaldehyde can be synthesized from erythrose, formamidine, formaldehyde, and ammonia. [ 3 ] In a study of the coupling reaction between a fungal amine oxidase and a bacterial aldehyde oxidase for histamine elimination, imidazole-4-acetaldehyde was not detected in the reaction mixture. Its absence implies that the coupling reaction likely converted histamine directly to imidazole-4-acetic acid, with an apparent yield of 100%, without intermediate formation of imidazole-4-acetaldehyde. [ 4 ] In a 2022 observational study aiming to identify preoperative serum metabolites that could predict postoperative opioid consumption, imidazole-4-acetaldehyde was one of the metabolites that showed different trends between gastric cancer patients with high postoperative opioid consumption and those with low consumption. The results suggest that imidazole-4-acetaldehyde, along with other metabolites, differed significantly between the two groups and may serve as a potential biomarker for predicting postoperative opioid consumption in gastric cancer patients, although the results of this study are inconclusive. [ 5 ]
https://en.wikipedia.org/wiki/Imidazole-4-acetaldehyde
Imidazoline is a heterocycle formally derived from imidazole by the reduction of one of the two double bonds. Three isomers are known: 2-imidazolines, 3-imidazolines, and 4-imidazolines. The 2- and 3-imidazolines contain an imine center, whereas the 4-imidazolines contain an alkene group. The 2-imidazoline group occurs in several drugs. [ 1 ]
https://en.wikipedia.org/wiki/Imidazoline
In organic chemistry, an imide is a functional group consisting of two acyl groups bound to nitrogen. [ 1 ] The compounds are structurally related to acid anhydrides, although imides are more resistant to hydrolysis. In terms of commercial applications, imides are best known as components of high-strength polymers, called polyimides. Inorganic imides are also known as solid state or gaseous compounds, and the imido group (=NH) can also act as a ligand. A simple example is diacetamide, with the formula HN(COCH₃)₂, formally the diacetylated derivative of ammonia. Commonly encountered imides, however, are cyclic, being derived from dicarboxylic acids. A common example is succinimide, derived from succinic acid and ammonia. The names of these cyclic imides reflect the parent acid. [ 2 ] Many imides are derived from primary amines as opposed to ammonia. These are indicated by the N-substituent in the prefix. For example, N-ethylsuccinimide is derived from succinic acid and ethylamine. Being highly polar, imides exhibit good solubility in polar organic solvents. Unlike the structurally related acid anhydrides, they resist hydrolysis and some can even be recrystallized from boiling water. The N–H center of imides derived from ammonia is acidic and can participate in hydrogen bonding. The N–H group is weakly acidic, as indicated in the case of maleimide, with a pKa estimated at 10. [ 4 ] Many high-strength or electrically conductive polymers contain imide subunits, i.e., the polyimides. One example is Kapton, where the repeat unit consists of two imide groups derived from aromatic tetracarboxylic acids. [ 5 ] Another example of a polyimide is polyglutarimide, typically made from polymethylmethacrylate (PMMA) and ammonia or a primary amine by aminolysis and cyclization of the PMMA at high temperature and pressure, typically in an extruder. This technique is called reactive extrusion. A commercial polyglutarimide product based on the methylamine derivative of PMMA, called Kamax, was produced by the Rohm and Haas company. The toughness of these materials reflects the rigidity of the imide functional group. Interest in the bioactivity of imide-containing compounds was sparked by the early discovery of the high bioactivity of cycloheximide as an inhibitor of protein biosynthesis in certain organisms. Thalidomide, famous for its adverse effects, is one result of this research. A number of fungicides and herbicides contain the imide functionality. Examples include captan, which is considered carcinogenic under some conditions, and procymidone. [ 6 ] In the 21st century new interest arose in thalidomide's immunomodulatory effects, leading to the class of immunomodulators known as immunomodulatory imide drugs (IMiDs). Most common imides are prepared by heating dicarboxylic acids or their anhydrides with ammonia or primary amines. The result is a condensation reaction. [ 7 ] These reactions proceed via the intermediacy of amides. The intramolecular reaction of a carboxylic acid with an amide is far faster than the intermolecular reaction, which is rarely observed. Imides may also be produced via the oxidation of amides, particularly when starting from lactams. [ 8 ] Certain imides can also be prepared in the isoimide-to-imide Mumm rearrangement. For imides derived from ammonia, the N–H center is weakly acidic. Thus, alkali metal salts of imides can be prepared with conventional bases such as potassium hydroxide. The conjugate base of phthalimide is potassium phthalimide.
These anions can be alkylated to give N-alkylimides, which in turn can be degraded to release the primary amine. Strong nucleophiles, such as potassium hydroxide or hydrazine, are used in the release step. Treatment of imides with halogens and base gives the N-halo derivatives. Examples that are useful in organic synthesis are N-chlorosuccinimide and N-bromosuccinimide, which serve as sources of "Cl⁺" and "Br⁺", respectively. Isoimides are isomeric with imides and have the formula RC(O)OC(NR′)R″. They are often intermediates that convert to the more symmetrical imides; upon heating, isoimides rearrange to imides. [ 8 ] Organic compounds called carbodiimides have the formula RN=C=NR. They are unrelated to imides.
https://en.wikipedia.org/wiki/Imide
In organic chemistry, imidic acids are organic compounds with the structure RC(−OH)=NR′. [ 1 ] They are tautomeric with non-tertiary amides ( RC(=O)−NHR′ ) and isomeric with oximes ( RR′C=N−OH ). The term "imino acid" is an obsolete term for this group that should not be used in this context because it has a different molecular structure. [ 2 ] Imidic acids can be formed by metal-catalyzed dehydrogenation of geminal amino alcohols. For example, methanolamine, the parent compound of the amino alcohols, can be dehydrogenated to methanimidic acid, the parent compound of the imidic acids; geminal amino alcohols with side chains similarly form imidic acids with the same side chains. Another way to form imidic acids is the reaction of carboxylic acids with azanone; this can be written for carbamic acid and, in general form, for substituted imidic acids. A further mechanism is the reaction of carboxylic acids with diazene or other azo compounds, forming azanone. Imidic acids tautomerize to amides by a hydrogen shift from the oxygen to the nitrogen atom. Amides are more stable in an environment with oxygen or water, whereas imidic acids dominate the equilibrium in solution with ammonia or methane.
https://en.wikipedia.org/wiki/Imidic_acid
In chemistry, imidines are a rare functional group, being the nitrogen analogues of anhydrides and imides. They were first reported by Adolf Pinner in 1883, [ 1 ] but did not see significant investigation until the 1950s, when Patrick Linstead and John Arthur Elvidge developed a number of compounds. [ 2 ] [ 3 ] Imidines may be prepared in a modified Pinner reaction, by passing hydrogen chloride into an alcoholic solution of the corresponding dinitriles (i.e. succinonitrile, glutaronitrile, adiponitrile) to give imino ethers, which then condense when treated with ammonia. As a result, most structures are cyclic. The compounds are highly moisture sensitive and can be converted into imides upon exposure to water. [ 2 ]
https://en.wikipedia.org/wiki/Imidine
Imidogen is an inorganic compound with the chemical formula NH; it is also known as azanylene, azanylidene, or nitrene. [ 2 ] Like other simple radicals, it is highly reactive and consequently short-lived except as a dilute gas. Its behavior depends on its spin multiplicity. Imidogen can be generated by electrical discharge in an atmosphere of ammonia. [ 3 ] Imidogen has a large rotational splitting and a weak spin–spin interaction; it is therefore less likely to undergo collision-induced Zeeman transitions. [ 3 ] Ground-state imidogen can be magnetically trapped using buffer-gas loading from a molecular beam. [ 3 ] The ground state of imidogen is a triplet, with a singlet excited state only slightly higher in energy. [ 4 ] The first excited state (a¹Δ) has a long lifetime, as its relaxation to the ground state (X³Σ⁻) is spin-forbidden. [ 5 ] Imidogen undergoes collision-induced intersystem crossing. [ 4 ] Ignoring hydrogen atoms, imidogen is isoelectronic with carbene (CH₂) and oxygen (O) atoms, and it exhibits comparable reactivity. [ 5 ] The first excited state can be detected by laser-induced fluorescence (LIF). [ 5 ] LIF methods allow for detection of the depletion, production, and chemical products of NH. It reacts with nitric oxide (NO) via two channels; the more favorable channel has a ΔH⁰ of −408 ± 2 kJ/mol, compared with −147 ± 2 kJ/mol for the other. [ 6 ] The trivial name nitrene is the preferred IUPAC name. The systematic names λ¹-azane and hydridonitrogen, also valid IUPAC names, are constructed according to the substitutive and additive nomenclatures, respectively. In appropriate contexts, imidogen can be viewed as ammonia with two hydrogen atoms removed, and as such azylidene may be used as a context-specific systematic name, according to substitutive nomenclature. By default, this name pays no regard to the radical character of the imidogen molecule, although in a more specific context it can also name the non-radical state, whereas the diradical state is named azanediyl. Interstellar NH was identified in the diffuse clouds toward ζ Persei and HD 27778 from high-resolution, high signal-to-noise spectra of the NH A³Π→X³Σ⁻ (0,0) absorption band near 3358 Å. [ 7 ] A temperature of about 30 K (−243 °C) favored an efficient production of CN from NH within the diffuse cloud. [ 8 ] [ 9 ] [ 7 ] Within diffuse clouds, H⁻ + N → NH + e⁻ is a major formation mechanism. Near chemical equilibrium, important NH formation mechanisms are recombinations of NH₂⁺ and NH₃⁺ ions with electrons. Depending on the radiation field in the diffuse cloud, NH₂ can also contribute. NH is destroyed in diffuse clouds by photodissociation and photoionization. In dense clouds NH is destroyed by reactions with atomic oxygen and nitrogen. O⁺ and N⁺ form OH and NH in diffuse clouds. NH is involved in creating N₂, OH, H, CN⁺, CH, N, NH₂⁺, and NH⁺ in the interstellar medium. NH has been reported in the diffuse interstellar medium but not in dense molecular clouds. [ 12 ] The purpose of detecting NH is often to obtain a better estimate of the rotational constants and vibrational levels of NH. [ 13 ] Detections are also needed to confirm theoretical predictions of N and NH abundances in stars that produce these species and in stars containing leftover trace amounts of them.
[ 14 ] Using current values for the rotational constants and vibrations of NH, as well as those of OH and CH, permits studying carbon, nitrogen and oxygen abundances without resorting to a full spectrum synthesis with a 3D model atmosphere. [ 15 ]
https://en.wikipedia.org/wiki/Imidogen
Imidoyl chlorides are organic compounds that contain the functional group RC(NR')Cl. A double bond exists between the R'N and the carbon centre. These compounds are analogues of acyl chlorides. Imidoyl chlorides tend to be highly reactive and are more commonly found as intermediates in a wide variety of synthetic procedures. Such procedures include the Gattermann aldehyde synthesis, the Houben–Hoesch ketone synthesis, and the Beckmann rearrangement. Their chemistry is related to that of enamines and their tautomers when the α hydrogen is next to the C=N bond. [ 1 ] Many chlorinated N-heterocycles are formally imidoyl chlorides, e.g. 2-chloropyridine and 2-, 4-, and 6-chloropyrimidines. Imidoyl halides are synthesized by combining amides and halogenating agents. The structure of the carboxylic acid amide plays a role in the outcome of the synthesis. Imidoyl chlorides can be prepared by treating a monosubstituted carboxylic acid amide with phosgene. [ 1 ] Thionyl chloride is also used. [ 2 ] Imidoyl chlorides are generally colorless liquids or low-melting solids that are sensitive to both heat and, especially, moisture. In their IR spectra these compounds exhibit a characteristic ν(C=N) band near 1650–1689 cm⁻¹. Although both the syn and anti configurations are possible, most imidoyl chlorides adopt the anti configuration. [ 1 ] Imidoyl chlorides react readily with water, hydrogen sulfide, amines, and hydrogen halides. Treating imidoyl chlorides with water forms the corresponding amide. Aliphatic imidoyl chlorides are more sensitive toward hydrolysis than aryl derivatives. Electron-withdrawing substituents decrease the reaction rate. Imidoyl chlorides react with hydrogen sulfide to produce thioamides. [ 1 ] When amines are treated with imidoyl chlorides, amidines are obtained. [ 1 ] When R' ≠ R", two isomers are possible. Upon heating, imidoyl chlorides also undergo dehydrohalogenation to form nitriles. [ 1 ] Treatment of imidoyl chlorides with hydrogen halides, such as HCl, forms the corresponding iminium chloride cations. [ 1 ] Imidoyl chlorides are useful intermediates in the syntheses of several compounds, including imidates, thioimidates, amidines, and imidoyl cyanides. Most of these syntheses involve replacing the chloride with alcohols, thiols, amines, and cyanates, respectively. [ 1 ] Imidoyl chlorides can also undergo Friedel–Crafts reactions to install an imine group on aromatic substrates. If the nitrogen of the imidoyl chloride has two substituents, the resulting chloroiminium ion is vulnerable to attack by aromatic rings without the need for a Lewis acid to remove the chloride first. This reaction is called the Vilsmeier–Haack reaction, and the chloroiminium ion is referred to as the Vilsmeier reagent. [ 4 ] [ 5 ] [ 6 ] After the iminium ion is attached to the ring, the functional group can later be hydrolyzed to a carbonyl for further modification. The Vilsmeier–Haack reaction can be a useful technique for adding functional groups to an aromatic ring if the ring contains electron-withdrawing groups, which make the alternative Friedel–Crafts reaction difficult. Imidoyl chlorides can be easily halogenated at the α carbon position. [ 1 ] Treating imidoyl chlorides with a hydrogen halide causes all α hydrogens to be replaced with the halide; this method can be an effective way to halogenate many substances. Imidoyl chlorides can also be used to form peptide bonds by first creating amidines and then allowing them to be hydrolyzed to the amide.
This approach may prove to be a useful route to synthetic proteins. [ 1 ] Imidoyl chlorides can be difficult to handle. They react readily with water, which makes any attempt to isolate and store them for long periods difficult. Further, imidoyl chlorides with an α CH group tend to undergo self-condensation at higher temperatures. At even higher temperatures, the chlorine of the imidoyl chloride tends to be eliminated, leaving the nitrile. Because of these complications, imidoyl chlorides are typically prepared and used immediately. More stable intermediates are being sought, with substances such as imidoylbenzotriazoles being suggested. [ 7 ]
https://en.wikipedia.org/wiki/Imidoyl_chloride
In organic chemistry, an imine ( /ɪˈmiːn/ or /ˈɪmɪn/ ) is a functional group or organic compound containing a carbon–nitrogen double bond ( C=N ). The nitrogen atom can be attached to a hydrogen or an organic group (R). The carbon atom has two additional single bonds. [ 1 ] [ 2 ] Imines are common in synthetic and naturally occurring compounds, and they participate in many reactions. [ 3 ] Distinction is sometimes made between aldimines and ketimines, derived from aldehydes and ketones, respectively. In imines the five core atoms (C 2 C=NX, ketimine; and C(H)C=NX, aldimine; X = H or C) are coplanar. Planarity results from the sp 2 -hybridization of the mutually double-bonded carbon and nitrogen atoms. The C=N distance is 1.29–1.31 Å for nonconjugated imines and 1.35 Å for conjugated imines. By contrast, C−N distances in amines and nitriles are 1.47 and 1.16 Å, respectively. [ 4 ] Rotation about the C=N bond is slow. Using NMR spectroscopy, both E and Z isomers of aldimines have been detected. Owing to steric effects, the E isomer is favored. [ 5 ] The term "imine" was coined in 1883 by the German chemist Albert Ladenburg. [ 6 ] Usually imines refer to compounds with the general formula R 2 C=NR, as discussed below. [ 7 ] In the older literature, imine refers to the aza-analogue of an epoxide. Thus, ethylenimine is the three-membered ring species aziridine C 2 H 4 NH. [ 8 ] The relationship of imines to amines, having double and single C–N bonds respectively, can be correlated with that of imides to amides, as in succinimide vs acetamide. Imines are related to ketones and aldehydes by replacement of the oxygen with an NR group. When R = H, the compound is a primary imine; when R is hydrocarbyl, the compound is a secondary imine. If this group is not a hydrogen atom, then the compound can sometimes be referred to as a Schiff base. [ 9 ] When R 3 is OH, the imine is called an oxime, and when R 3 is NH 2 the imine is called a hydrazone. A primary imine in which C is attached to both a hydrocarbyl and an H (derived from an aldehyde) is called a primary aldimine; a secondary imine with such groups is called a secondary aldimine. [ 10 ] A primary imine in which C is attached to two hydrocarbyls (derived from a ketone) is called a primary ketimine; a secondary imine with such groups is called a secondary ketimine. [ 11 ] N-Sulfinyl imines are a special class of imines having a sulfinyl group attached to the nitrogen atom. Imines are typically prepared by the condensation of primary amines and aldehydes. [ 12 ] [ 13 ] Ketones undergo similar reactions, but less commonly than aldehydes. In terms of mechanism, such reactions proceed via nucleophilic addition giving a hemiaminal −C(OH)(NR 2 )− intermediate, followed by an elimination of water to yield the imine (see alkylimino-de-oxo-bisubstitution for a detailed mechanism). The equilibrium in this reaction usually favors the carbonyl compound and amine, so that azeotropic distillation or use of a dehydrating agent, such as molecular sieves or magnesium sulfate, is required to favor imine formation. In recent years, several reagents such as tris(2,2,2-trifluoroethyl) borate [B(OCH 2 CF 3 ) 3 ], [ 14 ] pyrrolidine [ 15 ] or titanium ethoxide [Ti(OEt) 4 ] [ 16 ] have been shown to catalyse imine formation. The use of ammonia to give a primary imine is rarer than the use of primary amines. [ 17 ] In the case of hexafluoroacetone, the hemiaminal intermediate can be isolated.
[ 18 ] Primary ketimines can be synthesized via a Grignard reaction with a nitrile. This method is known as the Moureu–Mignonac ketimine synthesis. [ 19 ] [ 20 ] [ 21 ] For example, benzophenone imine can be synthesized by addition of phenylmagnesium bromide to benzonitrile followed by careful hydrolysis (lest the imine be hydrolyzed). [ 22 ] Several other methods exist for the synthesis of imines. The chief reaction of imines, often undesirable, is their hydrolysis back to the amine and the carbonyl precursor. Imines are widely used as intermediates in the synthesis of heterocycles. Somewhat like the parent amines, imines are mildly basic and reversibly protonate to give iminium salts. Alternatively, primary imines are sufficiently acidic to allow N-alkylation, as illustrated with benzophenone imine. [ 28 ] Imines are common ligands in coordination chemistry. Particularly popular examples are found with Schiff base ligands derived from salicylaldehyde, the salen ligands. Metal-catalyzed reactions of imines proceed through such complexes. In classical coordination complexes, imines bind metals through nitrogen. For low-valent metals, η 2 -imine ligands are observed. Very analogously to ketones and aldehydes, primary imines are susceptible to attack by carbanion equivalents. The method allows for the synthesis of secondary amines. [ 29 ] [ 30 ] This can be expanded to include enolisable carbons in the Mannich reaction, which is a straightforward and commonly used approach for producing β-amino-carbonyl compounds. [ 31 ] Imines are reduced via reductive amination. An imine can be reduced to an amine via hydrogenation, for example in a synthesis of m-tolylbenzylamine. [ 32 ] Other reducing agents are lithium aluminium hydride and sodium borohydride. [ 33 ] The asymmetric reduction of imines has been achieved by hydrosilylation using a rhodium–DIOP catalyst. [ 34 ] Many systems have since been investigated. [ 35 ] [ 36 ] Owing to their enhanced electrophilicity, iminium derivatives are particularly susceptible to reduction to the amines. Such reductions can be achieved by transfer hydrogenation or by the stoichiometric action of sodium cyanoborohydride. Since imines derived from unsymmetrical ketones are prochiral, their reduction defines a route to chiral amines. Unhindered aldimines tend to cyclize, as illustrated by the condensation of methylamine and formaldehyde, which gives the hexahydro-1,3,5-triazine. Imine polymers (polyimines) can be synthesised from multivalent aldehydes and amines. [ 37 ] The polymerisation reaction proceeds directly when the aldehyde and amine monomers are mixed together at room temperature. In most cases, (small) amounts of solvent may still be required. Polyimines are particularly interesting materials because of their application as vitrimers. Owing to the dynamic covalent nature of the imine bonds, polyimines can be recycled relatively easily. Furthermore, polyimines are known for their self-healing behaviour. [ 38 ] [ 39 ] Akin to pinacol couplings, imines are susceptible to reductive coupling leading to 1,2-diamines. [ 40 ] Imines are oxidized with meta-chloroperoxybenzoic acid (mCPBA) to give oxaziridines. Imines are intermediates in the alkylation of amines with formic acid in the Eschweiler–Clarke reaction. A rearrangement in carbohydrate chemistry involving an imine is the Amadori rearrangement. A methylene transfer reaction of an imine by an unstabilised sulphonium ylide can give an aziridine system.
Imines react with dialkyl phosphites in the Pudovik reaction and the Kabachnik–Fields reaction. Imines are common in nature. [ 41 ] [ 42 ] The pyridoxal phosphate-dependent enzymes (PLP enzymes) catalyze myriad reactions involving aldimines (or Schiff bases). [ 43 ] Cyclic imines are also substrates for many imine reductase enzymes. [ 44 ]
https://en.wikipedia.org/wiki/Imine
The aza-Diels–Alder reaction is a modification of the Diels–Alder reaction wherein a nitrogen atom replaces an sp 2 carbon. [ 1 ] The nitrogen atom can be part of the diene or the dienophile. The aza-Diels–Alder reaction may occur either by a concerted or a stepwise process. The lowest-energy transition state for the concerted process places the imine lone pair (or coordinated Lewis acid) in an exo position. Thus, (E) imines, in which the lone pair and the larger imine carbon substituent are cis, tend to give exo products. [ 2 ] When the imine nitrogen is protonated or coordinated to a strong Lewis acid, the mechanism shifts to a stepwise, Mannich–Michael pathway. [ 3 ] Attaching an electron-withdrawing group to the imine nitrogen increases the rate. The exo isomer usually predominates (particularly when cyclic dienes are used), although selectivities vary. [ 4 ] In many cases, cyclic dienes give higher diastereoselectivities than acyclic dienes. Use of amino-acid-based chiral auxiliaries, for instance, leads to good diastereoselectivities in reactions of cyclopentadiene, but not in reactions of acyclic dienes. [ 5 ] Chiral auxiliaries have been employed on either the imino nitrogen [ 6 ] or the imino carbon [ 7 ] to effect diastereoselection. In the enantioselective Diels–Alder reaction of an aniline, formaldehyde and a cyclohexenone catalyzed by (S)-proline, even the diene is masked. [ 8 ] The imine is often generated in situ from an amine and formaldehyde. An example is the reaction of cyclopentadiene with benzylamine to give an azanorbornene. [ 9 ] The catalytic cycle starts with the reaction of the aromatic amine with formaldehyde to give the imine and the reaction of the ketone with proline to give the diene. The second step, an endo trig cyclisation, is driven to one of the two possible enantiomers (99% ee) because the imine nitrogen atom forms a hydrogen bond with the carboxylic acid group of proline on the Si face. Hydrolysis of the final complex releases the product and regenerates the catalyst. Tosylimines may be generated in situ from tosyl isocyanate and aldehydes. Cycloadditions of these intermediates with dienes give single constitutional isomers, but proceed with moderate stereoselectivity. [ 10 ] Lewis-acid-catalyzed reactions of sulfonyl imines also exhibit moderate stereoselectivity. [ 11 ] Simple unactivated imines react with hydrocarbon dienes only with the help of a Lewis acid; however, both electron-rich and electron-poor dienes react with unactivated imines when heated. Vinylketenes, for instance, afford dihydropyridones upon [4+2] cycloaddition with imines. Regio- and stereoselectivity are unusually high in reactions of this class of dienes. [ 12 ] Vinylallenes react similarly in the presence of a Lewis acid, often with high diastereoselectivity. [ 13 ] Acyliminium ions also participate in cycloadditions. These cations are generated by removal of chloride from chloromethylated amides. [ 14 ] The resulting acyliminium cations can serve as heterodienes as well as dienophiles. The aza-Diels–Alder reaction has been applied to the synthesis of a number of alkaloid natural products. Danishefsky's diene is used to form a six-membered ring en route to phyllanthine. [ 15 ]
https://en.wikipedia.org/wiki/Imine_Diels–Alder_reaction
In organic chemistry, an iminium cation is a polyatomic ion with the general structure [R 1 R 2 C=NR 3 R 4 ] + . [ 1 ] They are common in synthetic chemistry and biology. Iminium cations adopt alkene-like geometries: the central C=N unit is nearly coplanar with all four substituents. Unsymmetrical iminium cations can exist as cis and trans isomers. The C=N bonds, which are near 129 picometers in length, are shorter than C−N single bonds. The C=N distance is slightly shorter in iminium cations than in the parent imines, and computational studies indicate that the C=N bonding is also stronger, even though the distance contracts only slightly. These results indicate that the barrier for rotation about the C=N bond is higher than in the parent imines. [ 3 ] [ 4 ] Iminium cations are obtained by protonation and alkylation of imines. They are also generated by the condensation of secondary amines with ketones or aldehydes; this rapid, reversible reaction is one step in "iminium catalysis". [ 5 ] More exotic routes to iminium cations are known, e.g. from ring-opening reactions of pyridine. [ 6 ] Iminium derivatives are common in biology. Pyridoxal phosphate reacts with amino acids to give iminium derivatives. Many iminium salts are encountered in synthetic organic chemistry. Iminium salts hydrolyse to give the corresponding ketone or aldehyde. [ 8 ] Iminium cations are reduced to the amines, e.g. by sodium cyanoborohydride. Iminium cations are intermediates in the reductive amination of ketones and aldehydes. Unsymmetrical iminium cations undergo cis–trans isomerization. The isomerization is catalyzed by nucleophiles, which add to the unsaturated carbon, breaking the C=N double bond. [ 3 ] Iminylium ions have the general structure R 2 C=N + . They form a subclass of nitrenium ions. [ 10 ]
https://en.wikipedia.org/wiki/Iminium
Imipenem (trade name Primaxin, among others) is a synthetic β-lactam antibiotic belonging to the carbapenem chemical class. It was developed by Merck scientists Burton Christensen, William Leanza, and Kenneth Wildonger in the mid-1970s. [ 1 ] Carbapenems are highly resistant to the β-lactamase enzymes produced by many multiple-drug-resistant Gram-negative bacteria, [ 2 ] and thus play a key role in the treatment of infections not readily treated with other antibiotics. [ 3 ] It is usually administered through intravenous injection. Imipenem was patented in 1975 and approved for medical use in 1985. [ 4 ] It was developed via a lengthy trial-and-error search for a more stable version of the natural product thienamycin, which is produced by the bacterium Streptomyces cattleya. Thienamycin has antibacterial activity, but is unstable in aqueous solution and thus of practically no medicinal use. [ 5 ] Imipenem has a broad spectrum of activity against aerobic and anaerobic, Gram-positive and Gram-negative bacteria. [ 6 ] It is particularly important for its activity against Pseudomonas aeruginosa and Enterococcus species. However, it is not active against MRSA. Acinetobacter anitratus, Acinetobacter calcoaceticus, Actinomyces odontolyticus, Aeromonas hydrophila, Bacteroides distasonis, Bacteroides uniformis, and Clostridium perfringens are generally susceptible to imipenem, while Acinetobacter baumannii, some Acinetobacter spp., Bacteroides fragilis, and Enterococcus faecalis have developed resistance to imipenem to varying degrees. Few species are resistant to imipenem, exceptions being Pseudomonas aeruginosa (Oman) and Stenotrophomonas maltophilia. [ 7 ] Imipenem is rapidly degraded by the renal enzyme dehydropeptidase 1 when administered alone, and is almost always coadministered with cilastatin to prevent this inactivation. [ 8 ] Common adverse drug reactions are nausea and vomiting. People who are allergic to penicillin and other β-lactam antibiotics should take caution if taking imipenem, as cross-reactivity rates are high. At high doses, imipenem is seizurogenic. [ 9 ] Imipenem acts as an antimicrobial by inhibiting cell wall synthesis of various Gram-positive and Gram-negative bacteria. It remains very stable in the presence of β-lactamase (both penicillinase and cephalosporinase) produced by some bacteria, and is a strong inhibitor of β-lactamases from some Gram-negative bacteria that are resistant to most β-lactam antibiotics. [ citation needed ]
https://en.wikipedia.org/wiki/Imipenem
ISWI (Imitation SWItch) is one of the five major DNA chromatin remodeling complex types, or subfamilies, found in most eukaryotic organisms. [ 1 ] ISWI remodeling complexes place nucleosomes along segments of DNA at regular intervals. The placement of nucleosomes by ISWI protein complexes typically results in the silencing of the DNA, because the nucleosome placement prevents transcription of the DNA. [ 2 ] ISWI, like the closely related SWI/SNF subfamily, is an ATP-dependent chromatin remodeler. However, the chromatin remodeling activities of ISWI and SWI/SNF are distinct and mediate the binding of non-overlapping sets of DNA transcription factors. [ 3 ] The protein ISWI was the first ATPase subunit to be isolated in the ISWI chromatin remodeling family, in the fruit fly Drosophila. This protein shows a high level of similarity to the SWI/SNF chromatin remodeling family in the ATPase domain. Outside the ATPase domain, ISWI loses this similarity with members of the SWI/SNF family, possessing a SANT domain instead of the bromodomain. The protein ISWI can interact with several proteins, giving three different chromatin-remodeling complexes in Drosophila melanogaster: NURF (nucleosome remodeling factor), CHRAC (chromatin remodeling and assembly complex) and ACF (ATP-utilising chromatin remodeling and assembly factor). In vitro, the ISWI protein alone can assemble nucleosomes on linear DNA and can move nucleosomes on linear DNA from the center to the extremities. Inside the CHRAC complex, ISWI catalyzes the inverse reaction, moving nucleosomes from the extremities to the center. [ 4 ] A single-molecule study using atomic force microscopy (AFM) and tethered particle motion (TPM) observed that ISWI can bind naked DNA in the absence of ATP, wrapping DNA around the protein. In the presence of ATP, the protein generates DNA loops while simultaneously generating negative supercoils in the template. [ 5 ] The first figure in that paper shows three AFM images in which single DNA molecules interacting with ISWI were deposited on mica surfaces. In the center image, a single ISWI is bound near the end of a dsDNA template. The right image shows two DNA loops generated by ISWI; these loops contain supercoils. The TPM study showed that the duration of the loops formed by ISWI was ATP-dependent.
https://en.wikipedia.org/wiki/Imitation_SWI
In mathematics, the immanant of a matrix was defined by Dudley E. Littlewood and Archibald Read Richardson as a generalisation of the concepts of determinant and permanent. Let λ = (λ 1 , λ 2 , …) be a partition of an integer n and let χ λ be the corresponding irreducible representation-theoretic character of the symmetric group S n . The immanant of an n × n matrix A = ( a ij ) associated with the character χ λ is defined as the expression {\displaystyle \operatorname {Imm} _{\lambda }(A)=\sum _{\sigma \in S_{n}}\chi _{\lambda }(\sigma )\prod _{i=1}^{n}a_{i\sigma (i)}.} The determinant is the special case of the immanant where χ λ is the alternating character sgn of S n , defined by the parity of a permutation. The permanent is the case where χ λ is the trivial character, which is identically equal to 1. For example, for 3 × 3 matrices, there are three irreducible representations of S 3 , as shown in the character table: the trivial character χ 1 equals 1 on every permutation; the alternating (sign) character χ 2 equals 1 on the identity and the two 3-cycles and −1 on the three transpositions; and the standard character χ 3 equals 2 on the identity, 0 on the transpositions, and −1 on the 3-cycles. As stated above, χ 1 produces the permanent and χ 2 produces the determinant, but χ 3 produces the operation that maps {\displaystyle A\mapsto 2a_{11}a_{22}a_{33}-a_{12}a_{23}a_{31}-a_{13}a_{21}a_{32}.} The immanant shares several properties with the determinant and permanent. In particular, the immanant is multilinear in the rows and columns of the matrix; and the immanant is invariant under simultaneous permutations of the rows and columns by the same element of the symmetric group. Littlewood and Richardson studied the relation of the immanant to Schur functions in the representation theory of the symmetric group. The necessary and sufficient conditions for the immanant of a Gram matrix to be 0 are given by Gamas's theorem.
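As a concrete illustration of the definition above, here is a minimal brute-force sketch in Python (the helper names `immanant` and `sign` are illustrative, not from the source). It sums the character-weighted diagonal products over all n! permutations, so passing the sign character recovers the determinant and passing the constant character 1 recovers the permanent.

```python
from itertools import permutations

def immanant(matrix, character):
    """Brute-force immanant: sum over permutations sigma of
    character(sigma) * a[0][sigma[0]] * ... * a[n-1][sigma[n-1]]."""
    n = len(matrix)
    total = 0
    for sigma in permutations(range(n)):
        term = character(sigma)
        for i in range(n):
            term *= matrix[i][sigma[i]]
        total += term
    return total

def sign(sigma):
    """Parity of a permutation given as a tuple (the alternating character).
    A cycle of length L contributes a factor (-1)**(L - 1)."""
    sgn, seen = 1, set()
    for start in range(len(sigma)):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = sigma[j]
            length += 1
        sgn *= (-1) ** (length - 1)
    return sgn

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(immanant(A, sign))             # the determinant of A
print(immanant(A, lambda s: 1))      # the permanent of A
```

The same function accepts any class function on permutations, so the standard character of S 3 described above could be supplied in the same way to compute the third 3 × 3 immanant.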
https://en.wikipedia.org/wiki/Immanant
An immature ovum is a cell that goes through the process of oogenesis to become an ovum. It can be an oogonium, an oocyte, or an ootid. An oocyte, in turn, can be either primary or secondary, depending on how far it has come in its process of meiosis. Oogonia are the cells that turn into primary oocytes in oogenesis. [ 1 ] They are diploid. Oogonia are created in early embryonic life; all have turned into primary oocytes by late fetal age. The primary oocyte is defined by its process of ootidogenesis, which is meiosis. [ 2 ] It has duplicated its DNA, so that each chromosome has two chromatids, i.e. 92 chromatids in all (4C). When meiosis I is completed, one secondary oocyte and one polar body are created. Primary oocytes are created in late fetal life. This is the stage where immature ova spend most of their lifetime, more specifically in the diplotene of prophase I of meiosis; the halt is called dictyate. Most degenerate by atresia, but a few go through ovulation, and that is the trigger for the next step. Thus, an immature ovum can spend up to about 55 years as a primary oocyte (until the last ovulation before menopause). The secondary oocyte is the cell that is formed by meiosis I in oogenesis. [ 3 ] Thus, it has only one of each pair of homologous chromosomes; in other words, it is haploid. However, each chromosome still has two chromatids, making a total of 46 chromatids (1N but 2C). The secondary oocyte continues the second stage of meiosis (meiosis II), and the daughter cells are one ootid and one polar body. Secondary oocytes are the immature ova from shortly after ovulation until fertilization, when they turn into ootids. Thus, the time spent as a secondary oocyte is measured in days. An ootid is the haploid result of ootidogenesis. [ 4 ] In oogenesis, it does not really have any significance in itself, since it is very similar to the ovum. However, it fills the purpose as the female counterpart of the male spermatid in spermatogenesis. Each chromosome is split between the two ootids, leaving only one chromatid per chromosome. Thus, there are 23 chromatids in total (1N). The ootid matures into an ovum.
https://en.wikipedia.org/wiki/Immature_ovum
The term immediately dangerous to life or health ( IDLH ) is defined by the US National Institute for Occupational Safety and Health (NIOSH) as exposure to airborne contaminants that is "likely to cause death or immediate or delayed permanent adverse health effects or prevent escape from such an environment." Examples include smoke or other poisonous gases at sufficiently high concentrations. It is calculated using the LD50 or LC50. [ 1 ] The Occupational Safety and Health Administration (OSHA) regulation (1910.134(b)) defines the term as "an atmosphere that poses an immediate threat to life, would cause irreversible adverse health effects, or would impair an individual's ability to escape from a dangerous atmosphere." [ 2 ] IDLH values are often used to guide the selection of breathing apparatus made available to workers or firefighters in specific situations. [ 1 ] The NIOSH definition does not include oxygen deficiency (below 19.5%), although atmosphere-supplying breathing apparatus is also required in that case. [ 3 ] Examples include high altitudes and unventilated, confined spaces. The OSHA definition is arguably broad enough to include oxygen-deficient circumstances in the absence of "airborne contaminants", as well as many other chemical, thermal, or pneumatic hazards to life or health (e.g., pure helium, super-cooled or super-heated air, hyperbaric, hypobaric, or submerged chambers, etc.). It also uses the broader term "impair", rather than "prevent", with respect to the ability to escape. For example, blinding but non-toxic smoke could be considered IDLH under the OSHA definition if it would impair the ability to escape a "dangerous" but not life-threatening atmosphere (such as tear gas). The OSHA definition is part of a legal standard, which is the minimum legal requirement. Users or employers are encouraged to apply proper judgment to avoid taking unnecessary risks, even if the only immediate hazard is "reversible", such as temporary pain, disorientation, nausea, or non-toxic contamination. If the concentration of harmful substances is at or above the IDLH value, the worker must use the most reliable respirators. Such respirators should not use cartridges or canisters with sorbent, as their service life is too poorly predicted. In addition, the respirator must maintain positive pressure under the mask during inhalation, as this prevents the leakage of unfiltered air through the gaps which sometimes occur between the edges of the mask and the face. The NIOSH textbook [ 4 ] recommends for use in IDLH conditions only a pressure-demand self-contained breathing apparatus with a full facepiece, or a pressure-demand supplied-air respirator equipped with a full facepiece in combination with an auxiliary pressure-demand self-contained breathing apparatus.
https://en.wikipedia.org/wiki/Immediately_dangerous_to_life_or_health
In computational complexity theory , the Immerman–Szelepcsényi theorem states that nondeterministic space complexity classes are closed under complementation. It was proven independently by Neil Immerman and Róbert Szelepcsényi in 1987, for which they shared the 1995 Gödel Prize . In its general form the theorem states that NSPACE ( s ( n )) = co-NSPACE( s ( n )) for any function s ( n ) ≥ log n . The result is equivalently stated as NL = co-NL; although this is the special case when s ( n ) = log n , it implies the general theorem by a standard padding argument . [ 1 ] The result solved the second LBA problem . In other words, if a nondeterministic machine can solve a problem, another machine with the same resource bounds can solve its complement problem (with the yes and no answers reversed) in the same asymptotic amount of space. No similar result is known for the time complexity classes, and indeed it is conjectured that NP is not equal to co-NP . The principle used to prove the theorem has become known as inductive counting . It has also been used to prove other theorems in computational complexity, including the closure of LOGCFL under complementation and the existence of error-free randomized logspace algorithms for USTCON . [ 2 ] We prove here that NL = co-NL. The theorem is obtained from this special case by a padding argument . The st-connectivity problem asks, given a digraph G and two vertices s and t , whether there is a directed path from s to t in G . This problem is NL-complete, therefore its complement st-non-connectivity is co-NL-complete. It suffices to show that st-non-connectivity is in NL. This proves co-NL ⊆ NL, and by complementation, NL ⊆ co-NL. We fix a digraph G , a source vertex s , and a target vertex t . We denote by R k the set of vertices which are reachable from s in at most k steps. Note that if t is reachable from s , it is reachable in at most n-1 steps, where n is the number of vertices, therefore we are reduced to testing whether t ∉ R n-1 . We remark that R 0 = { s }, and R k +1 is the set of vertices v which are either in R k , or the target of an edge w → v where w is in R k . This immediately gives an algorithm to decide t ∈ R n , by successively computing R 1 , …, R n . However, this algorithm uses too much space to solve the problem in NL, since storing a set R k requires one bit per vertex. The crucial idea of the proof is that instead of computing R k +1 from R k , it is possible to compute the size of R k +1 from the size of R k , with the help of non-determinism. We iterate over vertices and increment a counter for each vertex that is found to belong to R k +1 . The problem is how to determine whether v ∈ R k +1 for a given vertex v , when we only have the size of R k available. To this end, we iterate over vertices w , and for each w , we non-deterministically guess whether w ∈ R k . If we guess w ∈ R k , and v = w or there is an edge w → v , then we determine that v belongs to R k +1 . If this fails for all vertices w , then v does not belong to R k +1 . Thus, the computation that determines whether v belongs to R k +1 splits into branches for the different guesses of which vertices belong to R k . A mechanism is needed to make all of these branches abort (reject immediately), except the one where all the guesses were correct. For this, when we have made a “yes-guess” that w ∈ R k , we check this guess, by non-deterministically looking for a path from s to w of length at most k . If this check fails, we abort the current branch. 
If it succeeds, we increment a counter of “yes-guesses”. On the other hand, we do not check the “no-guesses” that w ∉ R k (this would require solving st-non-connectivity, which is precisely the problem that we are solving in the first place). However, at the end of the loop over w, we check that the counter of “yes-guesses” matches the size of R k , which we know. If there is a mismatch, we abort. Otherwise, all the “yes-guesses” were correct, and there was exactly the right number of them, thus all “no-guesses” were correct as well. This concludes the computation of the size of R k+1 from the size of R k . Iteratively, we compute the sizes of R 1 , R 2 , …, R n-2 . Finally, we check whether t ∈ R n-1 , which is possible from the size of R n-2 by the sub-algorithm that is used inside the computation of the size of R k+1 . The procedure is summarized in the code sketch below. As a corollary, in the same article, Immerman proved that, using descriptive complexity 's equality between NL and FO(Transitive Closure), the logarithmic hierarchy, i.e. the languages decided by an alternating Turing machine in logarithmic space with a bounded number of alternations, is the same class as NL.
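The pseudocode itself is not reproduced here, so the following is a small, purely deterministic Python sketch of the inductive-counting idea (an illustration under assumptions, not the nondeterministic log-space algorithm itself): the nondeterministic "guess that w ∈ R k and verify it by guessing a path" step is replaced by an explicit bounded-length reachability check, and the counter check that certifies the "no-guesses" appears as an assertion.

```python
def reachable_within(graph, s, w, k):
    """Is there a path from s to w of length at most k?
    Deterministic stand-in for the nondeterministic "guess and verify a path" step."""
    frontier, seen = {s}, {s}
    for _ in range(k):
        frontier = {v for u in frontier for v in graph.get(u, [])} - seen
        seen |= frontier
    return w in seen

def st_non_connectivity(graph, vertices, s, t):
    """Inductive counting: carry |R_k| from one round to the next and use it to
    certify the membership answers for R_{k+1}."""
    count = 1                      # |R_0| = |{s}|
    n = len(vertices)
    for k in range(n - 1):         # compute |R_{k+1}| from |R_k|
        new_count = 0
        for v in vertices:
            in_next = False
            yes_guesses = 0
            for w in vertices:
                if reachable_within(graph, s, w, k):      # verified "yes-guess" that w is in R_k
                    yes_guesses += 1
                    if w == v or v in graph.get(w, []):
                        in_next = True
            # In the nondeterministic algorithm this comparison is what makes the
            # unverified "no-guesses" trustworthy; here it always succeeds.
            assert yes_guesses == count
            if in_next:
                new_count += 1
        count = new_count          # now |R_{k+1}|
    return not reachable_within(graph, s, t, n - 1)

# Example: vertex 3 has no incoming edges, so it is not reachable from 0.
g = {0: [1], 1: [2], 2: [0]}
print(st_non_connectivity(g, [0, 1, 2, 3], 0, 3))   # True
print(st_non_connectivity(g, [0, 1, 2, 3], 0, 2))   # False
```

The sketch uses linear space and is therefore only a reading aid for the structure of the proof; the point of the theorem is that the same counting discipline can be carried out nondeterministically with only a logarithmic number of bits live at any time.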
https://en.wikipedia.org/wiki/Immerman–Szelepcsényi_theorem
In virtual reality (VR), immersion is the perception of being physically present in a non-physical world. The perception is created by surrounding the user of the VR system with images, sound or other stimuli that provide an engrossing total environment. The name is a metaphoric use of the experience of submersion applied to representation, fiction or simulation. Immersion can also be defined as the state of consciousness where a "visitor" ( Maurice Benayoun ) or "immersant" ( Char Davies ) has their awareness of their physical self transformed by being surrounded by an artificial environment. The term is used to describe partial or complete suspension of disbelief, enabling action or reaction to stimulations encountered in a virtual or artistic environment. The greater the suspension of disbelief, the greater the degree of presence achieved. According to Ernest W. Adams , [ 2 ] immersion can be separated into three main categories: tactical, strategic and narrative immersion. Staffan Björk and Jussi Holopainen, in Patterns In Game Design , [ 3 ] divide immersion into similar categories, but call them sensory-motoric immersion, cognitive immersion and emotional immersion, respectively. In addition to these, they add a new category: spatial immersion, which occurs when a player feels the simulated world is perceptually convincing. The player feels that he or she is really "there" and that a simulated world looks and feels "real". Presence, a term derived from the shortening of the original " telepresence ", is a phenomenon enabling people to interact with and feel connected to the world outside their physical bodies via technology. It is defined as a person's subjective sensation of being there in a scene depicted by a medium, usually virtual in nature. [ 4 ] Most designers focus on the technology used to create a high-fidelity virtual environment; however, the human factors involved in achieving a state of presence must be taken into account as well. It is the subjective perception, although generated by and/or filtered through human-made technology, that ultimately determines the successful attainment of presence. [ 5 ] Virtual reality glasses can produce a visceral feeling of being in a simulated world, a form of spatial immersion called presence. According to Oculus VR , the technology requirements to achieve this visceral reaction are low latency and precise tracking of movements. [ 6 ] [ 7 ] [ 8 ] Michael Abrash gave a talk on VR at Steam Dev Days in 2014. [ 9 ] According to the VR research team at Valve , a specific set of technical requirements must be met to establish presence. Immersive media is a term applied to a group of concepts, [ 10 ] variously defined, which may have application in fields such as engineering, media, healthcare, education and retail. [ 11 ] Immersive virtual reality is a technology that aims to completely immerse the user inside the computer-generated world, giving the impression that they have "stepped inside" the synthetic world. [ 13 ] This is achieved using either head-mounted display (HMD) technology or multiple projections. An HMD allows VR to be projected right in front of the eyes and allows users to focus on it without any distraction. [ 14 ] The earliest attempts at developing immersive technology date back to the 1800s. Without these early attempts, immersive technology would never have reached the advanced state it has today.
The many elements that surround the realm of immersive technology come together in different ways to create different types of immersive technology, including virtual reality and pervasive gaming. [ 15 ] While immersive technology has already had an immense impact on our world, its progressive growth and development will continue to make lasting impacts on our technological culture. One of the first devices that was designed to look like and function as a virtual reality headset was called a stereoscope. It was invented in the 1830s, during the early days of photography, and it used a slightly different image in each eye to create a kind of 3D effect. [ 16 ] As photography continued to develop in the late 1800s, stereoscopes became increasingly obsolete. Immersive technology became more widely available in 1957, when Morton Heilig invented the Sensorama cinematic experience, which included speakers, fans, smell generators, and a vibrating chair to immerse the viewer in the movie. [ 14 ] Today's VR headsets owe much to The Sword of Damocles, invented in 1968, which allowed the headset to be connected to a computer rather than a camera. In 1991, Sega launched the Sega VR headset, which was made for arcade and home use, but only the arcade version was released due to technical difficulties. [ 14 ] Augmented reality began to develop rapidly in the 1990s, when Louis Rosenberg created Virtual Fixtures, the first fully immersive augmented reality system, developed for the Air Force. The invention enhanced operator performance of manual tasks in remote locations by using two robot controls in an exoskeleton. [ 14 ] Augmented reality was first displayed to a live audience in 1998, when the NFL displayed a virtual yellow line to represent the line of scrimmage and first down. In 1999, Hirokazu Kato developed the ARToolkit, an open source library for the development of AR applications, which allowed people to experiment with AR and release new and improved applications. [ 14 ] Later, in 2009, Esquire magazine was the first to use a QR code on its front cover to provide additional content. When the Oculus came out in 2012, it revolutionized virtual reality; the company eventually raised 2.4 million dollars and began releasing pre-production models to developers. Facebook purchased Oculus for 2 billion dollars in 2014, which demonstrated the upward trajectory of VR. [ 14 ] In 2013, Google announced its plans to develop its first AR headset, Google Glass. Production stopped in 2015 due to privacy concerns, but the device was relaunched in 2017 exclusively for enterprise use. In 2016, Pokémon Go took the world by storm and became one of the most downloaded apps of all time. It was the first augmented reality game that was accessible through one's phone. A full immersive technology experience happens when all elements of sight, sound, and touch come together. A true immersive experience requires either virtual reality or augmented reality, as these two types utilize all of these elements. [ 17 ] Interactivity and connectivity are the entire focus of immersive technology. It is not about simply placing someone in an entirely different environment; rather, the user is virtually presented with a new environment and given the opportunity to learn how to optimally live and interact with it.
Virtual reality is the primary form of immersive technology; it allows the user to be completely immersed in a fully digital environment that replicates another reality. [ 18 ] Users must use a headset, hand controls, and headphones in order to have a fully immersive experience in which they can make use of movements and reflexes. [ 15 ] There are also pervasive games, which utilize real-world locations within game play. [ 18 ] This is when the user's interactions in a virtual game lead to interactions in real life. Some of these games may require users to physically meet up in order to complete stages. [ 18 ] The gaming world has developed a series of popular virtual reality video games, such as Vader Immortal, Trover Saves the Universe, and No Man's Sky. [ 19 ] The world of immersive technology has many facets that will continue to develop and expand over time. Immersive technology has grown immensely in the past few decades and is continuing to progress. VR has even been described as the learning aid of the 21st century. [ 20 ] Head-mounted displays (HMDs) are what allow users to get the full immersive experience. The HMD market is expected to be worth over 25 billion USD by the year 2022. [ 20 ] The technologies of VR and AR received a boost in attention when Mark Zuckerberg, founder of Facebook, bought Oculus for 2 billion USD in 2014. [ 21 ] Recently, the Oculus Quest was released, which is wireless and allows users to move more freely. It costs around 400 USD, which is around the same price as previous-generation headsets with cables. [ 20 ] Other massive corporations, such as Sony, Samsung, and HTC, are also making huge investments in VR and AR. [ 21 ] With regard to education, many researchers are currently exploring the benefits and applications of virtual reality in the classroom. [ 20 ] However, little systematic work currently exists on how researchers have applied immersive VR for higher education purposes using HMDs. [ 20 ] The most popular use of immersive technology is in the world of video games. By completely immersing users in their favorite game, HMDs have allowed individuals to experience the realm of video games in an entirely new light. [ 22 ] Current video games such as Star Wars: Squadrons, Half-Life: Alyx, and No Man's Sky give users the ability to experience every aspect of the digital world in their game. [ 22 ] While there is still a lot to learn about immersive technology and what it has to offer, it has come a long way from its beginnings in the early 1800s. Hardware technologies are developed to stimulate one or more of the senses to create perceptually real sensations. Some vision technologies are 3D displays, fulldomes, head-mounted displays, and holography. Some auditory technologies are 3D audio effects, high-resolution audio, and surround sound. Haptic technology simulates tactile responses. Various technologies provide the ability to interact and communicate with the virtual environment, including brain–computer interfaces, gesture recognition, omnidirectional treadmills, and speech recognition. Software interacts with the hardware technology to render the virtual environment and process the user input to provide dynamic, real-time response. To achieve this, software often integrates components of artificial intelligence and virtual worlds.
This is done differently depending on the technology and environment, and on whether the software needs to create a fully immersive environment or display a projection on the already existing environment the user is looking at. Many universities have programs that research and develop immersive technology. Examples are Stanford's Virtual Human Interaction Lab, USC's Computer Graphics and Immersive Technologies Lab, Iowa State's Virtual Reality Applications Center, the University of Buffalo's VR Lab, Teesside University's Intelligent Virtual Environments Lab, Liverpool John Moores University's Immersive Story Lab, the University of Michigan Ann Arbor, Oklahoma State University and the University of Southern California. [ 23 ] All of these universities and more are researching the advancement of the technology along with the different uses to which VR could be applied. [ 24 ] As well as universities, the video game industry has received a massive boost from immersive technology, specifically augmented reality. The company Epic Games, known for its popular game Fortnite, generated 1.25 billion dollars in a round of investment in 2018, as it has a leading 3D development platform for AR apps. [ 25 ] The U.S. Government requests information for immersive technology development [ 26 ] and funds specific projects, [ 27 ] with a view to implementation in government branches in the future. Immersive technology is applied in several areas, including retail and e-commerce, [ 28 ] the adult industry, [ 29 ] art, [ 30 ] entertainment and video games and interactive storytelling, the military, education, [ 31 ] [ 32 ] and medicine. [ 33 ] It is also growing in the non-profit sector, in fields such as disaster relief and conservation, due to its ability to put a user in a situation that elicits more of a real-world experience than a picture, giving them a stronger emotional connection to the situation they are viewing. As immersive technology becomes more mainstream, it will likely pervade other industries. With the legalization of cannabis proceeding worldwide, the cannabis industry has also seen large growth in the immersive technology market, using virtual tours of facilities to engage potential customers and investors. The potential perils of immersive technology have often been portrayed in science fiction and entertainment. Movies such as eXistenZ, The Matrix, and the short film Play by David Kaplan and Eric Zimmerman [ 34 ] raise questions about what may happen if we are unable to distinguish the physical world from the digital world. There has been debate on the issue of virtual crime, and whether it is ethical to permit illegal behavior such as rape in a simulated environment. [ 35 ] Immersive virtual reality is a hypothetical future technology that exists today, for the most part, as virtual reality art projects. [ 36 ] It consists of immersion in an artificial environment where the user feels just as immersed as they usually feel in everyday life. The most considered method would be to induce the sensations that make up the virtual reality in the nervous system directly. In functionalism and conventional biology, we interact with everyday life through the nervous system; thus we receive all input from all the senses as nerve impulses. It gives your neurons a feeling of heightened sensation.
It would involve the user receiving inputs as artificially stimulated nerve impulses; the system would receive the CNS outputs (natural nerve impulses) and process them, allowing the user to interact with the virtual reality. Natural impulses between the body and central nervous system would need to be prevented. This could be done by blocking out natural impulses using nanorobots which attach themselves to the brain wiring while receiving the digital impulses that describe the virtual world, which could then be sent into the wiring of the brain. A feedback system between the user and the computer which stores the information would also be needed. Considering how much information would be required for such a system, it is likely that it would be based on hypothetical forms of computer technology. A comprehensive understanding of which nerve impulses correspond to which sensations, and which motor impulses correspond to which muscle contractions, would be required. This would allow the correct sensations to be produced in the user, and the corresponding actions to occur in the virtual reality. The Blue Brain Project is currently the most promising research effort aimed at understanding how the brain works by building very large-scale computer models. An immersive digital environment is an artificial, interactive, computer-created scene or "world" within which a user can immerse themselves. [ 37 ] Immersive digital environments could be thought of as synonymous with virtual reality, but without the implication that actual "reality" is being simulated. An immersive digital environment could be a model of reality, but it could also be a complete fantasy user interface or abstraction, as long as the user of the environment is immersed within it. The definition of immersion is wide and variable, but here it is assumed to mean simply that the user feels like they are part of the simulated "universe". The success with which an immersive digital environment can actually immerse the user is dependent on many factors such as believable 3D computer graphics, surround sound, and interactive user input, as well as other factors such as simplicity, functionality and potential for enjoyment. New technologies are currently under development which claim to bring realistic environmental effects to the players' environment – effects like wind, seat vibration and ambient lighting. To create a sense of full immersion, the 5 senses (sight, sound, touch, smell, taste) must perceive the digital environment to be physically real. Immersive technology can perceptually fool the senses in a number of ways. Once the senses reach a sufficient belief that the digital environment is real (it is interaction and involvement which can never be real), the user must then be able to interact with the environment in a natural, intuitive manner. Various immersive technologies such as gestural controls, motion tracking, and computer vision respond to the user's actions and movements. Brain–computer interfaces (BCI) respond to the user's brainwave activity. Training and rehearsal simulations run the gamut from part-task procedural training (often buttonology, for example: which button do you push to deploy a refueling boom) through situational simulation (such as crisis response or convoy driver training) to full-motion simulations which train pilots, soldiers and law enforcement in scenarios that are too dangerous to train in actual equipment using live ordnance.
Applications include video games, from simple arcade games to massively multiplayer online games, and training programs such as flight and driving simulators; entertainment environments such as motion simulators that immerse the riders or players in a virtual digital environment enhanced by motion, visual and aural cues; and reality simulators, such as one of the Virunga Mountains in Rwanda that takes the user on a trip through the jungle to meet a tribe of mountain gorillas, [ 38 ] or training versions such as one which simulates a ride through human arteries and the heart to witness the buildup of plaque and thus learn about cholesterol and health. [ 39 ] In parallel with scientists, artists like Knowbotic Research, Donna Cox, Rebecca Allen, Robbie Cooper, Maurice Benayoun, Char Davies, and Jeffrey Shaw use the potential of immersive virtual reality to create physiologic or symbolic experiences and situations. Other examples of immersion technology include physical environments and immersive spaces with surrounding digital projections and sound, such as the CAVE, and the use of virtual reality headsets for viewing movies, with head-tracking and computer control of the image presented, so that the viewer appears to be inside the scene. Additionally, immersion technology can include audio with head-tracking and precise directivity of sound, such as the Nokia OZO technology. The next generation is VIRTSIM, which achieves total immersion through motion capture and wireless head-mounted displays for teams of up to thirteen immersants, enabling natural movement through space and interaction in both the virtual and physical space simultaneously. New fields of study linked to immersive virtual reality emerge every day. Researchers see great potential in virtual reality tests serving as complementary interview methods in psychiatric care. [ 40 ] In studies, immersive virtual reality has also been used as an educational tool in which the visualization of psychotic states is used to gain an increased understanding of patients with similar symptoms. [ 41 ] New treatment methods are available for schizophrenia, [ 42 ] and other newly developed research areas where immersive virtual reality is expected to bring improvement include the teaching of surgical procedures, [ 43 ] rehabilitation programs after injuries and surgeries, [ 44 ] and the reduction of phantom limb pain. [ 45 ]
[ 50 ] For such use cases, the difference in space-navigation performance between virtual reality headsets and 2D desktop screens has been investigated in various studies, with some suggesting significant improvement with virtual reality headsets [ 52 ] [ 53 ] while others indicate no significant difference. [ 54 ] [ 55 ] Architects and building engineers can also use immersive design tools to model various building elements in virtual reality CAD interfaces, [ 56 ] [ 57 ] and apply property modifications to building information modeling (BIM) files through such environments. [ 49 ] [ 58 ] In the building construction phase, immersive environments are used to improve site preparations, on-site communication and collaboration of team members, safety [ 59 ] [ 60 ] and logistics. [ 61 ] For training of construction workers, virtual environments have been shown to be highly effective in skill transfer, with studies showing performance results similar to training in real environments. [ 62 ] Moreover, virtual platforms are also used in the operation phase of buildings to interact with and visualize data from Internet of Things (IoT) devices available in buildings, and for process improvement and resource management. [ 63 ] [ 64 ] Occupant and end-user studies are performed through immersive environments. [ 65 ] [ 66 ] Virtual immersive platforms engage future occupants in the building design process by providing a sense of presence to users, integrating pre-construction mock-ups and BIM models for the evaluation of alternative design options in the building model in a timely and cost-efficient manner. [ 67 ] Studies conducting human experiments have shown that users perform similarly in daily office activities (object identification, reading speed and comprehension) within immersive virtual environments and benchmarked physical environments. [ 65 ] In the field of lighting, virtual reality headsets have been used to investigate the influence of façade patterns on the perceptual impressions and satisfaction of a simulated daylit space. [ 68 ] Moreover, artificial lighting studies have implemented immersive virtual environments to evaluate end-users' lighting preferences for simulated virtual scenes by controlling the blinds and artificial lights in the virtual environment. [ 66 ] For structural engineering and analysis, immersive environments enable the user to focus on structural investigations without being distracted by operating and navigating the simulation tool. [ 69 ] Virtual and augmented reality applications have been designed for finite element analysis of shell structures. Using a stylus and data gloves as input devices, the user can create and modify meshes and specify boundary conditions. For a simple geometry, real-time color-coded results are obtained by changing loads on the model. [ 70 ] Studies have used artificial neural networks (ANN) or approximation methods to achieve real-time interaction for complex geometries, and to simulate their impact via haptic gloves. [ 71 ] Simulations of large-scale structures and bridges have also been achieved in immersive virtual environments. The user can move the loads acting on the bridge, and finite element analysis results are updated immediately using an approximation module. [ 72 ] Simulation sickness, or simulator sickness, is a condition where a person exhibits symptoms similar to motion sickness caused by playing computer, simulation or video games (Oculus is working to solve simulator sickness).
[ 73 ] Motion sickness due to virtual reality is very similar to simulation sickness and motion sickness due to films. In virtual reality, however, the effect is made more acute because all external reference points are blocked from vision, the simulated images are three-dimensional and, in some cases, accompanied by stereo sound that may also give a sense of motion. Studies have shown that exposure to rotational motions in a virtual environment can cause significant increases in nausea and other symptoms of motion sickness. [ 74 ] Other behavioural changes, such as stress, addiction, isolation and mood changes, are also discussed as possible side-effects of immersive virtual reality. [ 75 ]
https://en.wikipedia.org/wiki/Immersion_(virtual_reality)
Immersion chillers work by circulating a cooling fluid (usually tap water from a garden hose or faucet) through a copper or stainless steel coil that is placed directly in the hot wort. As the cooling fluid runs through the coil it absorbs and carries away heat until the wort has cooled to the desired temperature. The advantage of using a copper or stainless steel immersion chiller is the lower risk of contamination versus other methods when used in an amateur or homebrewing environment. The clean chiller is placed directly in the still-boiling wort and thus sanitized before the cooling process begins. [ 1 ]
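The cooling behaviour can be approximated with a simple lumped heat balance. The sketch below is illustrative only and not from the cited source; the batch size, coolant flow rate and chiller effectiveness are assumed values.

```python
# Minimal sketch (illustrative assumptions): lumped-parameter estimate of how long an
# immersion chiller takes to cool a batch of wort, treating the coil as a heat
# exchanger with a constant effectiveness and the wort as well mixed.
def cooling_time(wort_volume_l=20.0, wort_start_c=100.0, target_c=25.0,
                 coolant_in_c=15.0, coolant_flow_l_per_min=4.0, effectiveness=0.6):
    """Return an approximate cooling time in minutes.

    Each time step, the coolant stream leaves at
    T_out = T_in + effectiveness * (T_wort - T_in),
    and the heat it carries away lowers the wort temperature.
    Water properties are assumed for both streams (4.186 kJ/kg.K, ~1 kg/L).
    """
    cp = 4.186                 # kJ/(kg*K)
    m_wort = wort_volume_l     # kg, assuming ~1 kg per litre
    t_wort = wort_start_c
    minutes, dt = 0.0, 0.1     # time step in minutes
    while t_wort > target_c and minutes < 240:
        m_coolant = coolant_flow_l_per_min * dt                        # kg this step
        q = m_coolant * cp * effectiveness * (t_wort - coolant_in_c)   # kJ removed
        t_wort -= q / (m_wort * cp)
        minutes += dt
    return minutes

print(f"Estimated cooling time: {cooling_time():.0f} min")
```

Under these assumed numbers the wort cools roughly exponentially, reaching pitching temperature in a few tens of minutes, which is consistent with typical homebrewing experience.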
https://en.wikipedia.org/wiki/Immersion_chiller
Immersion zinc plating is an electroless (non-electrolytic) coating process that deposits a thin layer of zinc on a less electronegative metal by immersion in a solution containing zinc or zincate ions, Zn(OH)₄²⁻. A typical use is plating aluminum with zinc prior to electrolytic or electroless nickel plating. Immersion zinc plating involves the displacement of zinc from zincate by the underlying metal. [ 1 ]
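For plating on aluminum, the overall displacement reaction is commonly written in the following form (shown here as an illustration of the chemistry, not necessarily the exact equation given in the cited source):

3 Zn(OH)₄²⁻ + 2 Al → 3 Zn + 2 Al(OH)₄⁻ + 4 OH⁻

The aluminum surface dissolves as aluminate while zinc from the zincate ions is reduced and deposited as a thin metallic film.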
https://en.wikipedia.org/wiki/Immersion_zinc_plating
Immersive design (experiential design) describes design work which ranges in levels of interaction and leads users to be fully absorbed in an experience. This form of design involves the use of virtual reality (VR), augmented reality (AR), and mixed reality (MR) to create the illusion that the user is physically interacting with a realistic digital environment. [ 1 ] [ 2 ] Alex McDowell coined the phrase 'immersive design' in 2007 in order to frame a discussion around a design discipline that addresses story-based media within the context of digital and virtual technologies. [ 3 ] [ 4 ] Together, McDowell and museum director Chris Scoates co-directed the 5D | The Future of Immersive Design conference in Long Beach in 2008, laying some groundwork for immersive design to become a distinct design philosophy. 5D has become a forum and community representing a broad range of cross-media designers, with its intent based in education, cross-pollination and the development of an expanding knowledge base. [ 5 ] In recent years, immersive design has been promoted as a design philosophy, where it has been appropriated for the purposes of describing design for narrative media and the process of worldbuilding. [ citation needed ] [ 6 ] Immersive design has been applied to a variety of topics and discussions, including mental health and personal medicine, gaming, journalism, and education, [ 7 ] [ 8 ] [ 9 ] [ 10 ] and offers potential benefits to the future of technology in these areas. Although immersive design is still maturing, it has been of great benefit to these fields, providing a unique learning experience for those involved. In order for an experience to be considered 'immersive', it needs to incorporate multiple characteristics that help generate the altered, illusory experience. [ 11 ] [ 12 ]
https://en.wikipedia.org/wiki/Immersive_design
Immittance is a term used within electrical engineering and acoustics, specifically bioacoustics and the study of the ear, to describe the combined measure of electrical or acoustic admittance and electrical or acoustic impedance. The term was coined by H. W. Bode in 1945, and was first used to describe the electrical admittance or impedance of either a nodal or a mesh network. Bode also suggested the name "adpedence"; however, the current name was more widely adopted. In bioacoustics, immittance is typically used to help define the characteristics of noise reverberation within the middle ear and assist with differential diagnosis of middle-ear disease. [ 1 ] Immittance is typically a complex number which can represent either or both the impedance and the admittance of a system (the ratio of voltage to current, or vice versa, in electrical circuits, or of volume velocity to sound pressure, or vice versa, in acoustical systems). [ 3 ] Immittance does not have an associated unit because it applies both to impedance, which is measured in ohms ( Ω ) or acoustic ohms, and to admittance, which is commonly measured in siemens ( S ) and historically has also been measured in mhos ( ℧ ), the reciprocal of ohms. In audiology, tympanometry is sometimes referred to as immittance testing. Tympanometry is especially effective when both the impedance and the admittance of the middle ear are accounted for. Immittance allows for the analysis of both, and therefore is crucial to multiple-component, multiple-frequency tympanometry. Clinically, few cases require the use of this technique for accurate diagnosis; but for the fewer than 20% of cases which do require it, the technique is a necessity. Multiple-component, multiple-frequency tympanometry is invaluable for the differential diagnosis of fixation of the lateral ossicular chain from fixation of the stapes, profound mixed hearing losses, clinical otosclerosis from disruption of the ossicular chain, hypermobility of the incudostapedial joint, and congenital ossicular fixation in children. [ 4 ] In electronics, an immittance Smith chart can be created by overlaying both the impedance and admittance grids, which is useful for cascading series-connected with parallel-connected electric circuits. This allows for the visualization of changes in impedance or admittance in the system caused by components of either the series or parallel circuit. [ 5 ]
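As a simple electrical illustration of the impedance/admittance duality that immittance covers, the sketch below computes the same series RLC branch either as a complex impedance Z or as a complex admittance Y = 1/Z. The component values and frequency are arbitrary assumptions, not taken from the cited sources.

```python
# Minimal sketch: the immittance of a series RLC branch, expressed either as an
# impedance Z (ohms) or as an admittance Y = 1/Z (siemens). Values are illustrative.
import math

def immittance(r_ohm, l_henry, c_farad, freq_hz, as_admittance=False):
    """Return the complex immittance of a series RLC branch at the given frequency."""
    w = 2 * math.pi * freq_hz
    z = complex(r_ohm, w * l_henry - 1.0 / (w * c_farad))  # Z = R + j(wL - 1/(wC))
    return 1.0 / z if as_admittance else z

z = immittance(50.0, 1e-3, 1e-6, 1000.0)
y = immittance(50.0, 1e-3, 1e-6, 1000.0, as_admittance=True)
print("Z =", z, "ohm")   # impedance form
print("Y =", y, "S")     # admittance form of the same immittance
```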
https://en.wikipedia.org/wiki/Immittance
Immobilized cell technology is a method of air filtration and purification that uses whole cell immobilization. [ 1 ] It is a process whereby microfine particulate matter is removed from the air by attracting charged particulates to a bio-reactive mass, or bioreactor, which enzymatically renders them inert. Almost all airborne particulate matter carries a charge, either negative or positive. As air moves through an immobilized cell technology system, those charged particles are attracted to a water cascade that functions as a neutral ground and pulls the particulates into the bioreactor. (These airborne particulates consist of inert materials [dust, pollen, etc.], volatile organic compounds, and hazardous air pollutants.) Once inside the bioreactor, the particulates are oxidized in a solution of water, enzymes, and bacteria, breaking them down into base elements. Oxidation through immobilized cell technology is 12 times more efficient than natural oxidation under the same conditions. The key difference between traditional air filtration systems and those that employ immobilized cell technology is that the latter do not employ mesh filters. The technology requires water, electrical power, and biomass for the bioreactor, and the inert base elements it produces need to be removed periodically.
https://en.wikipedia.org/wiki/Immobilized_Cell_Technology
An immobilized enzyme is an enzyme, with restricted mobility, attached to an inert, insoluble material, such as calcium alginate (produced by reacting a mixture of sodium alginate solution and enzyme solution with calcium chloride). This can provide increased resistance to changes in conditions such as pH or temperature. It also lets enzymes be held in place throughout the reaction, after which they are easily separated from the products and may be used again, a far more efficient process that is therefore widely used in industry for enzyme-catalysed reactions. An alternative to enzyme immobilization is whole cell immobilization. [ 1 ] [ 2 ] Immobilized enzymes are easy to handle, are simply separated from their products, and can be reused. [ 3 ] Enzymes are biocatalysts which play an essential role in accelerating chemical reactions in cells without being permanently modified or consumed and without shifting the equilibrium of those reactions. Although the characteristics of enzymes are unique, their utility in industry is limited by a lack of reusability and stability and by high production costs. [ 4 ] The first synthetic immobilized enzymes were made in the 1950s, by including enzymes in polymeric matrices or binding them onto carrier substances. Cross-linking procedures were also applied, cross-linking the protein alone or together with added inert materials. [ 3 ] Over the last decade various immobilization methods have been developed. Binding the enzyme to a previously synthesized carrier material, for example, has so far been the most widely preferred method. More recently, cross-linking of enzyme crystals has also been considered a promising alternative. The utilization of immobilized enzymes is growing constantly. [ 5 ] Before performing any immobilization technique, several factors should be kept in mind. It is necessary to understand the chemical and physical effects on an enzyme following immobilization. Enzyme stability and kinetic characteristics can be altered by changes in the microenvironment of the enzyme after entrapment or attachment to a support material, or by the products of enzymatic action, for instance. Additionally, it is important to maintain the tertiary structure of the enzyme prior to immobilization in order to obtain a functional enzyme. Similarly, the active site is another region crucial to the functionality of an enzyme and should be preserved while the enzyme is being attached to a surface; a selective attachment method is therefore required so that the result is not an immobilized but dysfunctional enzyme. [ 3 ] Consequently, there are three foundational factors to consider for the production of functional immobilized enzymes: the selection of the immobilization support, and the conditions and methods of immobilization. [ 6 ] For a support material to be ideal, it must be hydrophilic, inert towards enzymes, biocompatible, resistant to microbial attack and compression, and affordable. [ 7 ] [ 8 ] Support materials can be organic or inorganic, and synthetic or natural, depending on their composition, since they are ultimately types of biomaterials. There is no universal support material suitable for the immobilization of all enzymes. However, there are some commonly used supports such as silica-based carriers, acrylic resins, synthetic polymers, active membranes and exchange resins.
[ 6 ] One of the hardest steps before the immobilization process itself is the selection of the support material, since it depends on the enzyme type, the reaction medium, safety requirements, and the hydrodynamic and reaction conditions. [ 3 ] [ 8 ] Different types of support confer different physical and chemical properties that affect enzyme function, such as hydrophilicity/hydrophobicity, surface chemistry, and pore size. Enzymes can be immobilized by physical or chemical methods. Affinity-tag binding is an immobilization method combining physical and chemical approaches, in which enzymes may be immobilized to a surface, e.g. in a porous material, using non-covalent or covalent protein tags. This technology has been established for protein purification purposes. The technique is generally applicable and can be performed without prior enzyme purification, with a pure preparation as the result. Porous glass and derivatives thereof are used, where the porous surface can be adapted in terms of hydrophobicity to suit the enzyme in question. [ 9 ] Numerous enzymes of biotechnological importance have been immobilized on various supports (inorganic, organic, composite and nanomaterials) via random multipoint attachment. However, immobilization via random chemical modification results in a heterogeneous protein population in which more than one of the side chains (amino, carboxyl, thiol, etc.) present in the protein is linked to the support, with a potential reduction in activity due to restriction of substrate access to the active site. [ 12 ] In contrast, in site-directed enzyme immobilization, the support can be linked to a single specific amino acid (generally the N- or C-terminus) in the protein molecule away from the active site. In this way, maximal enzyme activity is retained because the substrate has free access to the active site. These strategies are mainly chemical but may additionally require genetic and enzymatic methods to generate functional groups (that are absent from the protein) on the support and the enzyme. [ 12 ] The choice of site-directed chemical modification (SDCM) method depends on many factors, such as the type of enzyme (a less stable psychrophilic or a more stable thermophilic homologue), the pH stability of the enzyme, the accessibility of the N- or C-terminus to the reagent, non-interference of the enzyme terminus with the enzyme activity, the type of catalytic amino acid residue, and the availability, price and ease of preparation of reagents. For example, the generation of complementary clickable functionalities (alkyne and azide) on the support and the enzyme is one of the most convenient ways of immobilizing enzymes via site-directed chemical modification. [ 13 ] Another widely used application of the immobilization approach has been enzymatic reactions on immobilized substrates. This approach facilitates the analysis of enzyme activities and mimics the performance of enzymes on, e.g., cell walls. [ 14 ] Immobilized enzymes have important applications, as they reduce costs and improve the outcome of the reactions they catalyze. In the past, biological washing powders and detergents contained many proteases and lipases that broke down dirt; however, when the cleaning products contacted human skin, they created allergic reactions. This is one reason why immobilization of enzymes is important for many application fields. Immobilized enzymes are used in various applications, including the food, chemical, pharmaceutical, and medical industries.
In the food industry, for example, immobilized enzymes are used for the manufacture of several types of zero-calorie sweeteners. Allulose, for instance, is an epimer of fructose with a different structure, with the result that it is not absorbed by the human body when ingested. Another example of an immobilized-enzyme-based sweetener is tagatose (produced using immobilized β-galactosidase). [ 16 ] In the chemical (cosmetics) industry as well, immobilized enzymes are used for the production of emollient esters, utilizing the immobilized CalB enzyme; the first company to use this method was Evonik, in 2000. The lipase CalB in its immobilized state is also used in pharmaceutical applications, for the production of odanacatib and sofosbuvir. [ 16 ]
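The reusability argument made earlier in this article can be illustrated with a toy calculation. The sketch below is not from the cited sources; the preparation costs, number of reuse cycles and activity-retention factor are all assumed values, chosen only to show how spreading the cost of an immobilized preparation over many batches lowers the effective enzyme cost per batch.

```python
# Minimal illustrative sketch (assumed numbers): why reusability lowers the enzyme
# cost per batch. An immobilized enzyme that survives N reaction cycles spreads its
# (higher) preparation cost over N batches, while a free enzyme is discarded with
# the product after each batch.
def enzyme_cost_per_batch(prep_cost, reuse_cycles, activity_retained_per_cycle=1.0):
    """Average enzyme cost per batch-equivalent over the preparation's usable lifetime.

    activity_retained_per_cycle < 1 models gradual activity loss; each cycle is
    weighted by the activity actually delivered in that cycle.
    """
    delivered = sum(activity_retained_per_cycle ** i for i in range(reuse_cycles))
    return prep_cost / delivered

free_enzyme = enzyme_cost_per_batch(prep_cost=100.0, reuse_cycles=1)
immobilized = enzyme_cost_per_batch(prep_cost=300.0, reuse_cycles=20,
                                    activity_retained_per_cycle=0.97)
print(f"free enzyme:        {free_enzyme:.1f} per batch-equivalent")
print(f"immobilized enzyme: {immobilized:.1f} per batch-equivalent")
```

With these assumed figures the immobilized preparation, although three times more expensive to make, works out roughly five times cheaper per batch.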
https://en.wikipedia.org/wiki/Immobilized_enzyme
The immobilized whole cell system is an alternative to enzyme immobilization. Unlike enzyme immobilization, where the enzyme is attached to a solid support (such as calcium alginate, activated PVA or activated PEI), in immobilized whole cell systems the target cell is immobilized. Such methods may be implemented when the enzymes required are difficult or expensive to extract, an example being intracellular enzymes. [ 1 ] [ 2 ] Also, if a series of enzymes is required in the reaction, whole cell immobilization may be used for convenience. This is only done on a commercial basis when demand for the product justifies it. A single immobilized cell can supply multiple enzymes to the reaction, eliminating the need to immobilize several enzymes separately. Furthermore, intracellular enzymes need not be extracted prior to the reaction; they may be used directly. However, some of the enzymes may be used for the metabolic needs of the cell, leading to a reduced yield.
https://en.wikipedia.org/wiki/Immobilized_whole_cell
The immortal DNA strand hypothesis posits that adult stem cells replicate their DNA asymmetrically to minimize mutations in their genomes . [ 1 ] It was proposed in 1975 by John Cairns as a mechanism that would benefit organisms by reducing cancer incidence. For decades, evidence for the hypothesis was sparse, contradictory and inconclusive. [ 2 ] [ 3 ] Since the 2010s, evidence from a variety of species, including, mice , flies and humans , strongly suggests that DNA is randomly segregated in stem cells, thereby refuting the immortal strand hypothesis. [ 4 ] [ 5 ] [ 6 ] [ 7 ] Instead, multiple processes (xenobiotic metabolism, efflux, DNA repair and quiescence) are in place to minimize mutations, detrimental mutations are negatively selected and, in some cases, cells with driver mutations are kept in check by other cells by direct competition. [ 8 ] Mechanisms such as these keep cancer rates low in humans, despite the fact that somatic mutagenesis is widespread and errors accumulate with age. [ 9 ] In some stem cell compartments, such as the esophagus [ 10 ] and hematopoietic stem cells , [ 11 ] [ 12 ] driver mutations are found in the majority of cells in older individuals. According to Cairns, adult stem cells could divide their DNA asymmetrically, instead of segregating their DNA during mitosis in a random manner. In this manner, stem cells would retain a distinct template set of DNA strands (parental strands) in each division. By retaining the same set of template DNA strands, adult stem cells would pass mutations arising from errors in DNA replication on to non-stem cell daughters that soon terminally differentiate (end mitotic divisions and become a functional cell). Passing on these replication errors would allow adult stem cells to reduce their rate of accumulation of mutations that could lead to serious genetic disorders such as cancer . After Cairns first proposed the immortal DNA strand mechanism, the theory has undergone several updated refinements. In 2002, he proposed that in addition to using immortal DNA strand mechanisms to segregate DNA, when the immortal DNA strands of adult stem cells undergo damage, they will choose to die (apoptose) rather than use DNA repair mechanisms that are normally used in non-stem cells. [ 13 ] Emmanuel David Tannenbaum and James Sherley developed a quantitative model describing how repair of point mutations might differ in adult stem cells. [ 14 ] They found that in adult stem cells, repair was most efficient if they used an immortal DNA strand mechanism for segregating DNA, rather than a random segregation mechanism. This method would be beneficial because it avoids wrongly fixing DNA mutations in both DNA strands and propagating the mutation. The complete proof of a concept generally requires a plausible mechanism that could mediate the effect. Although controversial, there is a suggestion that this could be provided by the dynein motor. [ 15 ] This paper is accompanied by a comment summarizing the findings and background. [ 16 ] However, this work has highly respected biologists among its detractors as exemplified by a further comment on a paper by the same authors from 2006. [ 17 ] The authors have rebutted the criticism. [ 18 ] Prior to the development of technologies such as next-generation sequencing , advanced lineage tracing and mass spectrometry imaging , two main assays were used to detect immortal DNA strand segregation: label-retention and label-release pulse/chase assays. 
In the label-retention assay, the goal is to mark 'immortal' or parental DNA strands with a DNA label such as tritiated thymidine or bromodeoxyuridine (BrdU). These types of DNA labels will incorporate into the newly synthesized DNA of dividing cells during S phase . A pulse of DNA label is given to adult stem cells under conditions where they have not yet delineated an immortal DNA strand. During these conditions, the adult stem cells are either dividing symmetrically (thus with each division a new 'immortal' strand is determined and in at least one of the stem cells the immortal DNA strand will be marked with DNA label), or the adult stem cells have not yet been determined (thus their precursors are dividing symmetrically, and once they differentiate into adult stem cells and choose an 'immortal' strand, the 'immortal strand' will already have been marked). Experimentally, adult stem cells are undergoing symmetric divisions during growth and after wound healing, and are not yet determined at neonatal stages. Once the immortal DNA strand is labelled and the adult stem cell has begun or resumed asymmetric divisions, the DNA label is chased out. In symmetric divisions (most mitotic cells), DNA is segregating randomly and the DNA label will be diluted out to levels below detection after five divisions. If, however, cells are using an immortal DNA strand mechanism, then all the labeled DNA will continue to co-segregate with the adult stem cell, and after five (or more) divisions will still be detected within the adult stem cell. These cells are sometimes called label-retaining cells (LRCs). In the label-release assay, the goal is to mark the newly synthesized DNA that is normally passed on to the daughter (non-stem) cell. A pulse of DNA label is given to adult stem cells under conditions where they are dividing asymmetrically. Under conditions of homeostasis , adult stem cells should be dividing asymmetrically so that the same number of adult stem cells is maintained in the tissue compartment. After pulsing for long enough to label all the newly replicated DNA, the DNA label is chased out (each DNA replication now incorporates unlabeled nucleotides) and the adult stem cells are assayed for loss of the DNA label after two cell divisions. If cells are using a random segregation mechanism, then enough DNA label should remain in the cell to be detected. If, however, the adult stem cells are using an immortal DNA strand mechanism, they are obligated to retain the unlabeled 'immortal' DNA, and will release all the newly synthesized labeled DNA to their differentiating daughter cells in two divisions. Some scientists have combined the two approaches, [ 19 ] [ 20 ] by first using one DNA label to label the immortal strands, allowing to adult stem cells to begin dividing asymmetrically, and then using a different DNA label to label the newly synthesized DNA. Thus, the adult stem cells will retain one DNA label and release the other within two divisions. One of the earliest studies by Karl Lark et al. demonstrated co-segregation of DNA in the cells of plant root tips. [ 21 ] Plant root tips labeled with tritiated thymidine tended to segregate their labeled DNA to the same daughter cell. Though not all the labeled DNA segregated to the same daughter, the amount of thymidine-labeled DNA seen in the daughter with less label corresponded to the amount that would have arisen from sister-chromatid exchange. [ 21 ] Later studies by Christopher Potten et al. 
(2002), [ 19 ] using pulse/chase experiments with tritiated thymidine, found long-term label-retaining cells in the small intestinal crypts of neonatal mice. These researchers hypothesized that long-term incorporation of tritiated thymidine occurred because neonatal mice have undeveloped small intestines, and that pulsing tritiated thymidine soon after the birth of the mice allowed the 'immortal' DNA of adult stem cells to be labeled during their formation. These long-term label-retaining cells were shown to be actively cycling, as demonstrated by incorporation and release of BrdU. [ 19 ] Since these cells were cycling but continued to contain the BrdU label in their DNA, the researchers reasoned that they must be segregating their DNA using an immortal DNA strand mechanism. Joshua Merok et al. from the lab of James Sherley engineered mammalian cells with an inducible p53 gene that controls asymmetric divisions. [ 22 ] BrdU pulse/chase experiments with these cells demonstrated that chromosomes segregated non-randomly only when the cells were induced to divide asymmetrically like adult stem cells. These asymmetrically dividing cells provide an in vitro model for demonstration and investigation of immortal strand mechanisms. Scientists have strived to demonstrate that this immortal DNA strand mechanism exists in vivo in other types of adult stem cells. In 1996, Nik Zeps published the first paper demonstrating that label-retaining cells were present in the mouse mammary gland, [ 23 ] and this was confirmed in 2005 by Gilbert Smith, who also published evidence that a subset of mouse mammary epithelial cells could retain and release DNA label in a manner consistent with the immortal DNA strand mechanism. [ 20 ] Soon after, scientists from the laboratory of Derek van der Kooy showed that mice have neural stem cells that are BrdU-retaining and continue to be mitotically active. [ 24 ] Asymmetric segregation of DNA was shown using real-time imaging of cells in culture. In 2006, scientists in the lab of Shahragim Tajbakhsh presented evidence that muscle satellite cells, which are proposed to be adult stem cells of the skeletal muscle compartment, exhibited asymmetric segregation of BrdU-labelled DNA when put into culture. They also presented evidence, from juvenile mice and from mice with muscle regeneration induced by freezing, that BrdU release kinetics consistent with an immortal DNA strand mechanism were operating in vivo. [ 25 ] These experiments supporting the immortal strand hypothesis, however, are not conclusive. While the Lark experiments demonstrated co-segregation, the co-segregation may have been an artifact of radiation from the tritium. Although Potten identified the cycling, label-retaining cells as adult stem cells, these cells are difficult to identify unequivocally as adult stem cells. While the engineered cells provide an elegant model for co-segregation of chromosomes, studies with these cells were done in vitro; some features may not be present in vivo, or may be absent in vitro. In May 2007, evidence in support of the immortal DNA strand theory was reported by Michael Conboy et al., [ 26 ] using the muscle stem/satellite cell model during tissue regeneration, where there is tremendous cell division during a relatively brief period of time.
Using two BrdU analogs to label template and newly synthesized DNA strands, they saw that about half of the dividing cells in regenerating muscle sorted the older "immortal" DNA to one daughter cell and the younger DNA to the other. In keeping with the stem cell hypothesis, the more undifferentiated daughter typically inherited the chromatids with the older DNA, while the more differentiated daughter inherited the younger DNA. Early experimental evidence against the immortal strand hypothesis was similarly sparse, limited by the techniques of the time. [ 2 ] [ 3 ] In one study, researchers incorporated tritiated thymidine into dividing murine epidermal basal cells. [ 27 ] They followed the release of tritiated thymidine after various chase periods, but the pattern of release was not consistent with the immortal strand hypothesis. Although they found label-retaining cells, these were not within the putative stem cell compartment. With increasing lengths of time for the chase periods, the label-retaining cells were located farther from the putative stem cell compartment, suggesting that they had moved. DNA template strand segregation was also studied in the developing zebrafish. [ 28 ] During larval development there was rapid depletion of older DNA template strands from stem cell niches in the retina, brain and intestine. [ 28 ] Using high-resolution microscopy, the researchers found no evidence of asymmetric template strand segregation (in over 100 cell pairs), making it improbable that asymmetric DNA segregation avoids mutational burden in the developing zebrafish as proposed by the immortal strand hypothesis. [ 28 ]
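The dilution logic behind the label-retention assay described above can be made concrete with a small simulation. The sketch below is purely illustrative and not taken from the cited studies; the chromosome number, trial count and the idealized all-or-nothing segregation rules are assumptions.

```python
# Minimal simulation sketch (illustrative assumptions): how a DNA label dilutes over
# divisions under random chromatid segregation versus an idealized immortal-strand
# mechanism, which is the logic behind label-retention pulse/chase assays.
import random

def labeled_fraction(n_divisions, immortal_strand, n_chromosomes=40, trials=2000):
    """Average fraction of labeled chromatids retained by the 'stem' daughter."""
    total = 0.0
    for _ in range(trials):
        labeled = n_chromosomes  # one labeled (parental) strand per chromosome after the pulse
        for _ in range(n_divisions):
            if immortal_strand:
                pass  # the stem daughter keeps every parental (labeled) strand
            else:
                # each labeled chromatid has a 50% chance of going to the stem daughter
                labeled = sum(1 for _ in range(labeled) if random.random() < 0.5)
        total += labeled / n_chromosomes
    return total / trials

for divisions in (1, 3, 5):
    rand = labeled_fraction(divisions, immortal_strand=False)
    imm = labeled_fraction(divisions, immortal_strand=True)
    print(f"after {divisions} divisions: random ~{rand:.2f}, immortal strand ~{imm:.2f}")
```

Under random segregation the retained label halves per division on average (about 3% after five divisions, i.e. near the detection limit), whereas an idealized immortal-strand mechanism retains all of it, which is why label-retaining cells after many divisions were taken as evidence for the hypothesis.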
https://en.wikipedia.org/wiki/Immortal_DNA_strand_hypothesis
An immortalised cell line is a population of cells from a multicellular organism that would normally not proliferate indefinitely but, due to mutation , have evaded normal cellular senescence and instead can keep undergoing division. The cells can therefore be grown for prolonged periods in vitro . The mutations required for immortality can occur naturally or be intentionally induced for experimental purposes. Immortal cell lines are a very important tool for research into the biochemistry and cell biology of multicellular organisms. Immortalised cell lines have also found uses in biotechnology . An immortalised cell line should not be confused with stem cells , which can also divide indefinitely, but form a normal part of the development of a multicellular organism. There are various immortal cell lines. Some of them are normal cell lines (e.g. derived from stem cells). Other immortalised cell lines are the in vitro equivalent of cancerous cells. Cancer occurs when a somatic cell that normally cannot divide undergoes mutations that cause deregulation of the normal cell cycle controls, leading to uncontrolled proliferation. Immortalised cell lines have undergone similar mutations, allowing a cell type that would normally not be able to divide to be proliferated in vitro . The origins of some immortal cell lines – for example, HeLa human cells – are from naturally occurring cancers. HeLa, the first immortal human cell line on record to be successfully isolated and proliferated by a laboratory, was taken from Henrietta Lacks in 1951 at Johns Hopkins Hospital in Baltimore , Maryland. [ 1 ] Immortalised cell lines are widely used as a simple model for more complex biological systems – for example, for the analysis of the biochemistry and cell biology of mammalian (including human ) cells. [ 2 ] The main advantage of using an immortal cell line for research is its immortality; the cells can be grown indefinitely in culture. This simplifies analysis of the biology of cells that may otherwise have a limited lifetime. [ citation needed ] Immortalised cell lines can also be cloned, giving rise to a clonal population that can, in turn, be propagated indefinitely. This allows an analysis to be repeated many times on genetically identical cells, which is desirable for repeatable scientific experiments. The alternative, performing an analysis on primary cells from multiple tissue donors, does not have this advantage. [ citation needed ] Immortalised cell lines find use in biotechnology, where they are a cost-effective way of growing cells similar to those found in a multicellular organism in vitro . The cells are used for a wide variety of purposes, from testing toxicity of compounds or drugs to production of eukaryotic proteins. [ citation needed ] While immortalised cell lines often originate from a well-known tissue type, they have undergone significant mutations to become immortal. This can alter the biology of the cell and must be taken into consideration in any analysis. Further, cell lines can change genetically over multiple passages, leading to phenotypic differences among isolates and potentially different experimental results depending on when and with what strain isolate an experiment is conducted. [ 3 ] Many cell lines that are widely used for biomedical research have been contaminated and overgrown by other, more aggressive cells. 
For example, supposed thyroid lines were actually melanoma cells, supposed prostate tissue was actually bladder cancer, and supposed normal uterine cultures were actually breast cancer. [ 4 ] There are several methods for generating immortalised cell lines. [ 5 ] There are also many examples of immortalised cell lines, each with different properties. Most immortalised cell lines are classified by the cell type they originated from or are most similar to biologically.
https://en.wikipedia.org/wiki/Immortalised_cell_line
The immune-related response criteria (irRC) are a set of published rules that define when tumors in cancer patients improve ("respond"), stay the same ("stabilize"), or worsen ("progress") during treatment, where the compound being evaluated is an immuno-oncology drug. Immuno-oncology, part of the broader field of cancer immunotherapy, involves agents which harness the body's own immune system to fight cancer. Traditionally, patient responses to new cancer treatments have been evaluated using two sets of criteria, the WHO criteria and the response evaluation criteria in solid tumors (RECIST). The immune-related response criteria, first published in 2009, [ 1 ] arose out of observations that immuno-oncology drugs would fail in clinical trials that measured responses using the WHO or RECIST criteria, because these criteria could not account for the time gap in many patients between initial treatment and the apparent action of the immune system to reduce the tumor burden. Part of the process of determining the effectiveness of anti-cancer agents in clinical trials involves measuring the amount of tumor shrinkage such agents can generate. The WHO criteria, developed in the 1970s by the International Union Against Cancer and the World Health Organization, represented the first generally agreed specific criteria for the codification of tumor response evaluation. These criteria were first published in 1981. [ 2 ] The RECIST criteria, first published in 2000, [ 3 ] revised the WHO criteria primarily to clarify differences that remained between research groups. Under RECIST, tumour size was measured unidimensionally rather than bidimensionally, fewer lesions were measured, and the definition of 'progression' was changed so that it was no longer based on the isolated increase of a single lesion. RECIST also adopted different shrinkage thresholds for the definitions of tumour response and progression. For the WHO criteria these had been >50% tumour shrinkage for a Partial Response and >25% tumour increase for Progressive Disease; for RECIST they were >30% shrinkage for a Partial Response and >20% increase for Progressive Disease. One outcome of all these revisions was that more patients who would have been considered 'progressors' under the old criteria became 'responders' or 'stable' under the new criteria. [ 4 ] RECIST and its successor, RECIST 1.1 from 2009, [ 5 ] are now the standard measurement protocol for measuring response in cancer trials. The key driver in the development of the irRC was the observation that, in studies of various cancer therapies derived from the immune system, such as cytokines and monoclonal antibodies, the looked-for Complete and Partial Responses as well as Stable Disease only occurred after an increase in tumor burden that the conventional RECIST criteria would have dubbed 'Progressive Disease'. Essentially, RECIST failed to take account of the delay between dosing and an observed anti-tumour T cell response, so that otherwise 'successful' drugs - that is, drugs which ultimately prolonged life - failed in clinical trials. [ 6 ] This led various researchers and drug developers interested in cancer immunotherapy, such as Axel Hoos at Bristol-Myers Squibb (BMS), to start discussing whether a new set of response criteria ought to be developed specifically for immuno-oncology drugs.
Their ideas, first flagged in a key 2007 paper in the Journal of Immunotherapy, [ 1 ] evolved into the immune-related response criteria (irRC), which were published in late 2009 in the journal Clinical Cancer Research. [ 1 ] The developers of the irRC based their criteria on the WHO criteria but modified them. The initial evidence cited by the creators of the irRC that their criteria were useful lay in the two Phase II melanoma trials described in the Clinical Cancer Research paper. The drug being trialled was a monoclonal antibody called ipilimumab, then under development at BMS with Axel Hoos as the medical lead. The drug targeted an immune checkpoint called CTLA-4, known as a key negative regulator of T cell activity. By blocking CTLA-4, ipilimumab was designed to potentiate antitumor T-cell responses. In the Phase II trials, which encompassed 227 treated patients evaluated using the irRC, around 10% of patients would have been deemed to have Progressive Disease by the WHO criteria but actually experienced immune-related partial responses (irPR) or immune-related stable disease (irSD), consistent with a response to ipilimumab. The Phase III clinical failure of Pfizer's tremelimumab, an anti-CTLA-4 monoclonal antibody which competed with ipilimumab, provided the first large-scale evidence of the utility of the irRC. The Pfizer study used conventional response criteria, and an early interim analysis found no survival advantage for the treated patients, leading to the termination of the trial in April 2008. [ 7 ] [ 8 ] However, within a year of this development, Pfizer's investigators were beginning to notice a separation of survival curves between treatment and control groups. [ 9 ] Tremelimumab's competitor, ipilimumab, which was trialled in Phase III using the irRC, went on to gain FDA approval in 2011, indicated for unresectable stage III or IV melanoma, after a 676-patient study that compared ipilimumab plus an experimental vaccine called gp100 with the vaccine alone. The median overall survival for the ipilimumab-plus-vaccine group was 10 months versus only 6.4 months for the vaccine alone. [ 10 ] Marketed as Yervoy, ipilimumab subsequently became a blockbuster for BMS. The 2009 paper which described the new irRC had twelve authors, all associated with the ipilimumab clinical trials used as examples: Jedd Wolchok of Memorial Sloan Kettering Cancer Center, Axel Hoos and Rachel Humphrey of Bristol-Myers Squibb, Steven O'Day and Omid Hamid of the Angeles Clinic in Santa Monica, Ca., Jeffrey Weber of the University of South Florida, Celeste Lebbé of Hôpital Saint-Louis in Paris, Michele Maio of the University Hospital of Siena, Michael Binder of the Medical University of Vienna, Oliver Bohnsack of a Berlin-based clinical informatics firm called Perceptive Informatics, Geoffrey Nichol of the antibody engineering company Medarex (which had originally developed ipilimumab) and Stephen Hodi of the Dana–Farber Cancer Institute in Boston.
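As a rough illustration of how the shrinkage and growth thresholds quoted earlier translate into response categories, the following sketch classifies a follow-up measurement against a baseline. It is a much-simplified toy model, not the published criteria: it treats tumour burden as a single number and ignores target-lesion selection, new lesions, confirmation scans and the bidimensional measurement used by the WHO criteria.

```python
# Much-simplified sketch of the response thresholds quoted above (illustrative only).
def classify_response(baseline, current, criteria="RECIST"):
    """Return CR/PR/SD/PD from a baseline and a follow-up tumour-burden measurement."""
    thresholds = {
        "WHO":    {"pr": -0.50, "pd": 0.25},   # >50% shrinkage = PR, >25% increase = PD
        "RECIST": {"pr": -0.30, "pd": 0.20},   # >30% shrinkage = PR, >20% increase = PD
    }[criteria]
    if current == 0:
        return "Complete Response"
    change = (current - baseline) / baseline
    if change < thresholds["pr"]:
        return "Partial Response"
    if change > thresholds["pd"]:
        return "Progressive Disease"
    return "Stable Disease"

# A burden that rises 22% before any later shrinkage is scored PD under RECIST
# but remains SD under the WHO thresholds at that time point.
print(classify_response(100, 122, "RECIST"))  # Progressive Disease
print(classify_response(100, 122, "WHO"))     # Stable Disease
```

The delayed responses seen with immuno-oncology drugs motivated the irRC precisely because, under cut-offs like these, an early transient increase in tumour burden ends a patient's trial participation as 'progression'.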
https://en.wikipedia.org/wiki/Immune-related_response_criteria
Immune Therapy Holdings AB, or ITH, is a Swedish biotechnology R&D holding company headquartered at the Karolinska Institutet and Karolinska University Hospital in Stockholm. ITH's research is primarily focused on its Tailored Leukapheresis (TLA) treatment for immune mediated inflammatory diseases (IMIDs). [ 1 ] Tailored Leukapheresis (TLA) is an apheresis immunotherapy for the selective extracorporeal removal of disease-causing pro-inflammatory cells, which has therapeutic application in various IMIDs that are caused and maintained by inflammation. TLA utilises the natural affinity of chemokines and chemokine receptors to selectively attract, bind, and deplete circulating pro-inflammatory cells en route to the site of inflammation. It is described as the first, [ citation needed ] and hitherto novel, apheresis technology with a demonstrated efficacy in targeting and removing selected leukocytes while leaving all other blood cells unaffected. [ 2 ] The immunotherapy has been evaluated in a Phase I/II placebo-controlled clinical trial, in which all primary and secondary clinical endpoints were met and the treatment showed no side effects of clinical significance. [ 3 ] TLA received Dagens Medicin's Athena Prize (Sweden, 2013) [ 4 ] and the Universal Biotech Innovation Prize (France, 2012). [ 5 ] During 2014, TLA was selected by the Swedish Institute for its Innovative Sweden exhibition, which highlights Swedish innovativeness worldwide. [ 6 ] TLA has received competitive research funding from a number of sources. The global anti-inflammatory therapeutics market was $57.8 billion in 2010 and is estimated to increase at a CAGR of 5.8% between 2010 and 2017 to a total value of $85.9 billion. [ 10 ]
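The market projection above is internally consistent: compounding the 2010 figure at the stated growth rate for seven years reproduces the 2017 estimate, as the short check below shows.

```python
# Quick consistency check of the quoted figures: $57.8 billion in 2010 growing at a
# 5.8% compound annual growth rate (CAGR) over the seven years to 2017.
start_2010 = 57.8                      # USD billions
cagr = 0.058
years = 2017 - 2010
projected_2017 = start_2010 * (1 + cagr) ** years
print(f"Projected 2017 market: ${projected_2017:.1f} billion")  # ~ $85.8 billion
```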
https://en.wikipedia.org/wiki/Immune_Therapy_Holdings
Immune adherence was described by Nelson (1953) [ 1 ] as an in vitro immunological reaction between normal erythrocytes and a wide variety of microorganisms sensitized with their individually specific antibody and complement; erythrocytes were observed to adhere to the microorganisms. [ 1 ] It was later recognized to occur in vivo. [ 2 ] The phenomenon is now understood to be a complement-dependent binding reaction of erythrocytes to microorganisms in which specific antibodies are engaged. [ 3 ] The reaction process is as follows: microorganisms are bound by their specific antibodies, if such antibodies have been produced, and these activate the classical pathway of the complement system. The cascade proceeds from C1 to C3b via C4b, with C3b being further converted to iC3b (an inactive derivative of C3b); all components from C4b onward remain bound to the surface of the microbe. Because primate erythrocytes express complement receptor 1 (CR1) on their surface, which has binding specificity for C4b, C3b, and iC3b, erythrocytes accumulate on the microbe via CR1-complement binding. [ 3 ] [ 4 ] Human erythrocytes express 100 to 1,000 CR1 per cell, with the average number of approximately 300 being an inherited characteristic. [ 5 ] Immune complexes bound to erythrocytes are effectively removed from the circulation, which is presumed to prevent their deposition at tissue sites, for example the renal glomerulus. Erythrocytes bearing immune complexes traverse the sinusoids of the liver and spleen, where they encounter fixed phagocytes. Phagocytes expressing CR1, CR3, and Fcγ receptors effect a transfer of the immune complexes to their own surface. The erythrocytes then leave the liver and spleen free of immune complexes and can take part in the next round of immune complex binding and transfer. [ 5 ] [ 6 ] [ 7 ] [ 8 ]
https://en.wikipedia.org/wiki/Immune_adherence
An immune complex , sometimes called an antigen-antibody complex or antigen-bound antibody , is a molecule formed from the binding of multiple antigens to antibodies . [ 1 ] The bound antigen and antibody act as a unitary object, effectively an antigen of its own with a specific epitope . After an antigen-antibody reaction, the immune complexes can be subject to any of a number of responses, including complement deposition, opsonization , [ 2 ] phagocytosis , or processing by proteases . Red blood cells carrying CR1 -receptors on their surface may bind C3b -coated immune complexes and transport them to phagocytes , mostly in liver and spleen , and return to the general circulation. The ratio of antigen to antibody determines size and shape of immune complex. [ 3 ] This, in turn, determines the effect of the immune complex. Many innate immune cells have FcRs , which are membrane-bound receptors that bind the constant regions of antibodies. Most FcRs on innate immune cells have low affinity for a singular antibody, and instead need to bind to an immune complex containing multiple antibodies in order to begin their intracellular signaling pathway and pass along a message from outside to inside of the cell. [ 3 ] Additionally, the grouping and binding together of multiple immune complexes allows for an increase in the avidity, or strength of binding, of the FcRs. This allows innate immune cells to get multiple inputs at once and prevents them from being activated early. [ 3 ] Immune complexes may themselves cause illness when they are deposited in organs, for example, in certain forms of vasculitis . This is the third form of hypersensitivity in the Gell-Coombs classification, called type III hypersensitivity . [ 4 ] Such hypersensitivity progressing to disease states produces the immune complex diseases. Immune complex deposition is a prominent feature of several autoimmune diseases, including rheumatoid arthritis , scleroderma and Sjögren's syndrome . [ 5 ] [ 6 ] An inability to degrade immune complexes in the lysosome and subsequent accumulation on the surface of immune cells has been associated with systemic lupus erythematosus . [ 7 ] [ 8 ] Immune complexes can also play a role in the regulation of antibody production. B cells express B-cell receptors (BCRs) on their surfaces and antigen binding to these receptors begins a signaling cascade that leads to activation. B cells also express FcγRIIb , low affinity receptors specific to the constant region of IgG, on their surfaces. IgG immune complexes are the ligand for these receptors and immune complex binding to these receptors induces apoptosis , or cell death. After B cells are activated, they differentiate into plasma cells and cease to express BCR but continue to express FcγRIIb, which allows IgG immune complexes to regulate IgG production via negative feedback and prevent uncontrolled IgG production. [ 9 ] Immune complexes, particularly those made of IgG, also play a variety of roles in the activation and regulation of phagocytes, which include dendritic cells (DCs) and macrophages . Immune complexes are better at inducing DC maturation than an antigen on its own. [ 10 ] Again, the low affinity of many FcγR for IgG means that only immune complexes, not single antibodies, can induce the FcγR’s signaling cascade. 
When compared with single antibodies binding to FcγRs, immune complexes binding to FcγRs cause significant changes in the internalization and processing of antigen, the maturation of the vesicles containing the internalized antigen, and activation in DCs and macrophages. [ 11 ] There are multiple classes of macrophages and DCs that express different FcγRs, which have differing affinities for single antibodies and immune complexes. [ 11 ] This allows the response of the DC or macrophage to be tuned precisely, subsequently tuning the level of IgG. These diverse FcγRs cause different responses in their DCs or macrophages by initiating different signaling pathways that can either activate or inhibit cellular functions. [ 11 ] The binding of the immune complex to the DC's membrane-bound receptor, and internalization of the immune complex and receptor, begins the process of antigen presentation, which allows the DC to activate T cells. Via this process, immune complexes cause enhanced T cell activation. [ 11 ] Activation of type I FcγRs begins a cascade of reactions to eliminate the IgG-opsonized target. Type I FcγRs are another type of IgG constant-region receptor, which can bind IgG immune complexes and lead to the elimination of the opsonized complex. Although both activating and inhibitory type I FcγRs can mediate phagocytosis, the internalization of IgG-opsonized targets through activating FcγRs is more effective. Immune complexes bind to multiple type I FcγRs, which cluster on the cell surface and begin the immunoreceptor tyrosine-based activation motif (ITAM) signaling pathway. [ 12 ] An ITAM is composed of a tyrosine separated from a leucine or isoleucine by two other amino acids, and is located in the cytoplasmic tail of the molecule. Following clustering by IgG complexes, the ITAM is phosphorylated as a result of FcγR crosslinking. This phosphorylation of the ITAM leads to pro-inflammatory signaling that mediates cellular activation, inducing a signaling cascade that eventually leads to elimination of the opsonized immune complex. [ 13 ]
https://en.wikipedia.org/wiki/Immune_complex
Immune dysregulation is any proposed or confirmed breakdown or maladaptive change in the molecular control of immune system processes. For example, dysregulation is a component in the pathogenesis of autoimmune diseases and some cancers. Immune system dysfunction, as seen in IPEX syndrome, leads to an unrestrained or unregulated immune response. [ 1 ] IPEX (immune dysregulation, polyendocrinopathy, enteropathy, X-linked syndrome) typically presents during the first few months of life with diabetes mellitus, intractable diarrhea, failure to thrive, eczema, and hemolytic anemia. It is caused by a genetic mutation in the FOXP3 gene, [ 2 ] [ 3 ] [ 4 ] which encodes a major transcription factor of regulatory T cells (Tregs). Such a mutation leads to dysfunctional Tregs and, as a result, autoimmune diseases. The classic clinical manifestations are enteropathy, type I diabetes mellitus and eczema. Various other autoimmune diseases or hypersensitivities are common in individuals with IPEX syndrome. [ 2 ] In addition to autoimmune diseases, affected individuals experience heightened immune reactivity (e.g. chronic dermatitis) and susceptibility to infections, and they develop autoimmune diseases at a young age. [ 4 ] Autoimmune polyendocrinopathy-candidiasis-ectodermal dystrophy (APECED) is a syndrome caused by a mutation in AIRE (autoimmune regulator). Typical manifestations of APECED are mucocutaneous candidiasis and multiple endocrine autoimmune diseases. APECED causes loss of central immune tolerance. [ 5 ] Omenn syndrome manifests as a GVHD (graft versus host disease)-like autoimmune disease, and the immune dysregulation is accompanied by increased IgE production. The syndrome is caused by mutations in the RAG1, RAG2, IL2RG, IL7RA or RMRP genes. The number of immune cells is usually normal in this syndrome, but their functionality is reduced. [ 6 ] Wiskott-Aldrich syndrome is caused by a mutation in the WAS gene. It manifests itself as a higher susceptibility to infections, eczema, and more frequent development of autoimmune hemolytic anemia, neutropenia and arthritis. [ 6 ] Partial T cell immunodeficiency is characterized by an incomplete reduction in T cell number or activity. In contrast to severe T cell immunodeficiency, some of the T cells' ability to respond to infections is maintained. T-cell immunodeficiencies tend to be associated with autoimmune diseases or hyperreactivity and increased IgE production. The mutations tend to be in genes for cytokines (such as IL-7), TCRs, or proteins important for somatic recombination and antigen presentation. [ 6 ] Additional T cell-associated immune dysregulation may be due to a mutation in CTLA-4. CTLA-4 is essential for the negative regulation of the immune response, and its loss leads to dysregulation and autoimmune diseases. The disease is characterized by hypogammaglobulinemia, frequent infections and the occurrence of autoimmune diseases. It may manifest differently in different individuals: in some cases there is only a partial reduction in the number of Tregs, while in others the ability to bind the CTLA-4 ligand is reduced, resulting in disrupted homeostasis of effector T and B cells. The inheritance of this syndrome is autosomal dominant with incomplete penetrance. [ 7 ] Chronic stress at various stages of life can lead to chronic inflammation and immune dysregulation. Individuals with high stress in childhood (abuse, neglect, etc.)
are at higher risk of cardiovascular disease, type II diabetes, osteoporosis, rheumatoid arthritis and other problems associated with immune dysregulation in adulthood. [ 8 ] [ 9 ] Overall, higher childhood stress increases the risk of chronic inflammation in adulthood. Higher levels of IL-6 and TNF-α are noted in stressed individuals. Chronic stress in childhood also promotes the development of proinflammatory types of monocytes and macrophages, which in addition develop resistance to anti-inflammatory agents (e.g. cortisol). Traumatized individuals also have higher antibody titers to viruses such as Herpes simplex virus, Epstein–Barr virus, or Cytomegalovirus than individuals without chronic stress. [ 8 ] [ 10 ] Dysregulation of the immune system is also associated with immunosenescence, which arises with aging. Immunosenescence is manifested by a decrease in reactivity to vaccination or infection, an impaired ability of T and B lymphocytes to activate and proliferate, and a lower capacity for antigen presentation by dendritic cells. In immunosenescence, memory and effector T cells accumulate at the expense of naïve T cells. The lack of naïve T lymphocytes is the cause of the low plasticity of the immune system in the elderly. [ 11 ] Aging of the immune system also brings a decrease in central tolerance and an increase in the number of autoreactive T cells. [ 12 ] B cells likewise show a decreased repertoire of naïve cells and an increase in memory B cells, [ 13 ] and they show reduced production of antibodies against antigens. In immunosenescence there is also a change in the individual subtypes of immunoglobulins: IgM and IgD levels decrease while IgG1, IgG2, and IgG3 levels increase. IgA is higher in the form of monomers in serum but lower as a dimer on the mucosal surface. [ 11 ] The overall accumulation of both effector T and B cells is due to the presence of chronic inflammation caused by long-term exposure to antigens. Immunosenescence also brings a reduced capacity for apoptosis, which promotes the survival of memory cells. [ 11 ] In old age, innate immune cells are also affected: activated cells have a lower ability to return to a quiescent state, and their effector functions decrease. [ 12 ] Elderly people show poor NK cell reactivity and an impaired capacity for antigen presentation by dendritic cells. [ 14 ] In macrophages, the capacity for phagocytosis is reduced and the M2 (alternatively activated) phenotype of macrophages is promoted. [ 13 ] Immunosenescence also results in increased production of some immune mediators, such as proinflammatory IL-6 [ 14 ] or IL-1. There may also be higher production of anti-inflammatory IL-10 or IL-4. [ 12 ] In old age, the ability to heal wounds also decreases, leading to a susceptibility to further infections at the site of injury. [ 14 ] The aging of the immune system is also promoted by chronic infections, oxidative stress, and the production and accumulation of reactive oxygen species (ROS). The increase in the proportion of memory cells is also affected by cytomegalovirus infection. [ 11 ] A chronic pro-inflammatory condition in an aging organism is also referred to as inflammaging: a long-term, low-grade systemic inflammation present even in the absence of infection. [ 13 ] Immune dysregulation can also be caused by toxins. For example, in environmental workers, increased exposure to pesticides (such as DDT, organophosphates, amides, phthalamides, etc.) disrupts immune system responses.
The resulting damage depends on the individual's age and on the dose and duration of toxin exposure. In young children and adolescents there are significant negative effects even at lower doses of toxins. However, the ability to break down toxic substances, and hence the resulting impact on the organism, is also related to the metabolism and genetic makeup of the individual. Toxins can act directly on the cellular components of immunity, or through their metabolites, or they can promote reactive oxygen species (ROS) in the body, deplete antioxidants, or cause oxidative stress. The most common clinical manifestations are immunosuppression, hypersensitivity and autoimmune diseases, but also promotion of the Th2 response and the development of allergies, or promotion of chronic inflammation. [ 15 ] Conventional toxins and irritants in the environment, such as the salivary enzymes of blood-feeding parasites, insect venoms, or irritants in plants, can also cause allergic reactions. These substances can disrupt cell membranes, activate cell receptors, aggregate or degrade certain proteins, or disrupt the mucosal surface layer. The immune system often responds to these substances with reactions that lead to the removal of the irritant substance from the body, such as itching, coughing, sneezing, or vomiting. [ 16 ] Combining the action of several toxins at the same time can increase the negative effects, but in some cases the effects of the toxins can cancel each other out. [ 15 ] Allergic reactions are misdirected reactions of the immune system to substances commonly found in the environment. Allergens elicit a Th2 immune response, involving IgE, mast cells, type 2 innate lymphoid cells (ILC2), eosinophils, and basophils. Allergy symptoms are often related to the body's efforts to expel the allergen and to protect itself from further exposure to the allergen. [ 17 ] Allergic reactions increase the production of mucus by goblet cells on the mucosa; mucus production is promoted by IL-13 from ILC2 and Th2 cells. Higher mucus production then creates stronger barrier protection and contributes to runny nose, coughing, or sneezing. Removal of the allergen from the body by sneezing, coughing, vomiting, or diarrhea is enabled by the activation of peristalsis and contractions of the smooth muscles of the digestive and respiratory systems. Activation of smooth muscle occurs after the action of histamine, which is released by mast cells. Manifestations of allergies generally aim to eliminate the allergen from the body; this is also reflected in the flushing of antigens from the eyes by tears, or in attempts at mechanical removal from the surface of the organism. [ 16 ] Allergies can be caused by genetic and environmental factors. Some theories support the view that allergies arose as protection against environmental substances that can harm the body, such as insect venom. Another possible trigger of an allergic reaction is the similarity of some allergens to the molecular patterns of parasites, against which the immune system also uses a type 2 immune response. [ 16 ] The hygiene hypothesis relates to changes in lifetime exposure to pathogens in developed countries. With insufficient exposure to pathogens and insufficient stimulation of the Th1 response during an individual's development, the balance between Th1 and Th2 type responses may shift towards the proallergic Th2 type.
The theory is supported by the more frequent occurrence of allergies in developed countries compared to developing countries, and by the higher incidence of allergies in cities compared to villages, where individuals are more likely to come into contact with pathogens from farm animals. Children from small families are also more likely to have allergies than children from families with more children, in which there is more frequent contact with pathogens carried by siblings. [ 17 ] Another environmental factor that may promote a predisposition to allergies is a reduction in the diversity of the microbiome – this is influenced by the individual's diet, the mother's diet during pregnancy, the method of delivery, breastfeeding, antibiotic use, and the presence of domestic or farm animals in the individual's everyday life. [ 18 ]
https://en.wikipedia.org/wiki/Immune_dysregulation
Certain sites of the mammalian body have immune privilege (no immune response), meaning they are able to tolerate the introduction of antigens without eliciting an inflammatory immune response. Tissue grafts are normally recognised as foreign antigens by the body and attacked by the immune system. However, in immune privileged sites, tissue grafts can survive for extended periods of time without rejection occurring. [ 1 ] Immunologically privileged sites include: It is thought that immune privilege also occurs to some extent, or is able to be induced, in articular cartilage. [ 2 ] [ 3 ] [ 4 ] It was once thought that, theoretically, it could also occur (or be inducible) in the brain, but this is now known to be incorrect, as it has been shown that immune cells of the central nervous system contribute to the maintenance of neurogenesis and spatial learning abilities in adulthood. [ 5 ] Immune privilege is thought to be an evolutionary adaptation to protect vital structures from the potentially lethal effects of an inflammatory immune response in those regions. Inflammation in the brain or eye could cause the loss of organ function, while immune responses directed against a fetus could cause miscarriage. [ 6 ] Immune privilege allows doctors to perform cornea transplants [ 7 ] and knee meniscal transplantation. [ 8 ] Antigens from immune privileged regions have been found to interact with T cells in an unusual way: inducing tolerance of normally rejected stimuli. [ 9 ] Immune privilege has emerged as an active rather than a passive process. [ citation needed ] Physical structures surrounding privileged sites cause a lack of lymphatic drainage, limiting the immune system's ability to enter the site. Other factors that contribute to the maintenance of immune privilege include: The isolation of immunologically privileged sites from the rest of the body's immune system can cause them to become targets of autoimmune diseases or conditions, including sympathetic ophthalmia in the eye. As well as the mechanisms that limit immune cell entry and induce immune suppression, the eye contains active immune cells that act upon the detection of foreign antigens. These cells interact with the immune system to induce an unusual suppression of the systemic immune response to an antigen introduced into the eye. This is known as anterior chamber associated immune deviation (ACAID). [ 12 ] [ 13 ] Sympathetic ophthalmia is a rare disease which results from the isolation of the eye from the systemic immune system. Usually, trauma to one eye induces the release of eye antigens, which are recognized and picked up by local antigen presenting cells (APC) such as macrophages and dendritic cells. These APC carry the antigen to local lymph nodes to be sampled by T cells and B cells. Entering the systemic immune system, these antigens are recognized as foreign, and an immune response is mounted against them. The result is the sensitization of immune cells against a self-protein, causing an autoimmune attack on both the damaged and the non-damaged eye. [ 9 ] In this manner, the immune-privileged property has served to work against the eye instead. T cells normally encounter self-antigens during their development, when they move to the tissue-draining lymph nodes. Anergy is induced in T cells which bind to self-antigens, deactivating them and preventing an autoimmune response in the future.
However, the physical isolation of eye antigens results in the body's T cells never having encountered them at any time during development. Studies in mice have shown that the lack of presentation of eye self-antigens to specific T cells fails to induce a sufficient amount of anergy to the self-antigens. While the lack of antigen presentation (due to the physical barriers) is sufficient to prevent the activation of autoreactive immune cells against the eye, the failure to induce sufficient anergy in T cells has detrimental results. In the case of damage or chance presentation to the immune system, antigen presentation and the immune response will occur at elevated rates. [ 14 ] The mother's immune system is able to provide protection from microbial infections without mounting an immune response against fetal tissues expressing paternally inherited alloantigens. A better understanding of the immunology of pregnancy may lead to the discovery of reasons for miscarriage. [ citation needed ] Regulatory T cells (Tregs) appear to be important in the maintenance of tolerance to fetal antigens. Increased numbers of Tregs are found during normal pregnancy. In both mouse models and humans, diminished numbers of Tregs were associated with immunological rejection of the fetus and miscarriage. Experiments in mice involving the transfer of CD4+/CD25+ Treg cells from normal pregnant mice into abortion-prone animals resulted in the prevention of abortion. [ 15 ] This confirmed the importance of these cells in maintaining immune privilege in the womb. [ citation needed ] A number of theories exist as to the exact mechanism by which fetal tolerance is maintained. It has been proposed in recent literature [ 16 ] that a tolerant microenvironment is created at the interface between the mother and fetus by regulatory T-cells producing "tolerant molecules". These molecules, including heme oxygenase 1 (HO-1), leukaemia inhibitory factor (LIF), transforming growth factor β (TGF-β) and interleukin 10 (IL-10), have all been implicated in the induction of immune tolerance. Foxp3 and neuropilin are markers expressed by the regulatory T-cells by which they are identified. [ citation needed ] Sperm are immunogenic – that is, they will cause an autoimmune reaction if transplanted from the testis into a different part of the body. This has been demonstrated in experiments using rats by Landsteiner (1899) and Metchnikoff (1900), [ 17 ] [ 18 ] mice [ 19 ] and guinea pigs. [ 20 ] The likely reason for their immunogenicity, or rather antigenicity, is that sperm first mature at puberty, after central tolerance has been established; the body therefore recognizes them as foreign and mounts an immune reaction against them. [ 21 ] Therefore, mechanisms for their protection must exist in this organ to prevent any autoimmune reaction. The blood–testis barrier is likely to contribute to the survival of sperm. However, it is believed in the field of testicular immunology that the blood–testis barrier cannot account for all immune suppression in the testis, due to (1) its incompleteness at a region called the rete testis [ 18 ] and (2) the presence of immunogenic molecules outside the blood–testis barrier, on the surface of spermatogonia. [ 17 ] [ 18 ] The Sertoli cells play a crucial role in the protection of sperm from the immune system. They create the Sertoli cell barrier, which complements the blood–testis barrier. [ 21 ] The protection is ensured by tight junctions, which form between two neighboring Sertoli cells.
[ 22 ] Another mechanism which is likely to protect sperm is the suppression of immune responses in the testis. [ 23 ] [ 24 ] The central nervous system (CNS), which includes the brain and spinal cord, is a sensitive system with limited capacity for regeneration. In that regard, the concept of "immune privilege" within the CNS was once thought to be critical in limiting inflammation. The blood–brain barrier plays an important role in maintaining the separation of the CNS from the systemic immune system, but the presence of the blood–brain barrier does not, on its own, provide immune privilege. [ 25 ] It is thought that immune privilege within the CNS varies throughout the different compartments of the system, being most pronounced in the parenchymal tissue or "white matter". [ 25 ] The concept of the CNS as an "immune-privileged" organ system, however, has been overwhelmingly challenged and re-evaluated over the last two decades. Current data not only indicate the presence of resident CNS macrophages (known as microglia) within the CNS, but there is also a wide body of evidence suggesting active interaction of the CNS with peripheral immune cells. [ 26 ] Generally, in normal (uninjured) tissue, antigens are taken up by antigen presenting cells (dendritic cells) and subsequently transported to the lymph nodes. Alternatively, soluble antigens can drain into the lymph nodes. In contrast, in the CNS, dendritic cells are not thought to be present in normal parenchymal tissue or perivascular space, although they are present in the meninges and choroid plexus. [ 25 ] Thus, the CNS is thought to be limited in its capacity to deliver antigens to local lymph nodes and cause T-cell activation. [ 27 ] Although there is no conventional lymphatic system in the CNS, the drainage of antigens from CNS tissue into the cervical lymph nodes has been demonstrated. The response elicited in the lymph nodes to CNS antigens is skewed towards B cells. Dendritic cells from cerebrospinal fluid have been found to migrate to B-cell follicles of cervical lymph nodes. [ 28 ] The skewing of the response to antigen from the CNS towards a humoral response means that a more dangerous inflammatory T-cell response can be avoided. The induction of systemic tolerance to an antigen introduced into the CNS has previously been shown. [ 29 ] This was seen in the absence of the T-cell mediated inflammatory "delayed type hypersensitivity reaction" (DTH) when the antigen was reintroduced in another part of the body. This response is analogous to ACAID in the eye. [ citation needed ] There is great potential for use of the molecular mechanisms present in immune privileged sites in transplantation, especially allotransplantation. Compared to skin allografts, which are rejected in almost 100% of cases, corneal allografts survive long-term in 50–90% of cases. Immune privileged allografts survive even without immunosuppression, which is routinely applied to other tissue and organ recipients. [ 30 ] Research suggests that the exploitation of anterior chamber-associated immune deviation (ACAID), aqueous humor and its anti-inflammatory properties, and the induction of regulatory T cells (Treg) may lead to increased survival of allotransplants. [ 31 ] Another option for exploiting immune privilege is gene therapy. Sertoli cells have already been used in research to produce insulin in live diabetic mice. The Sertoli cells were genetically engineered using recombinant lentivirus to produce insulin and then transplanted into mice.
Even though the results were only short-term, the research team established that it is possible to use genetically engineered Sertoli cells in cell therapy. [ 32 ] Sertoli cells have also been exploited in experiments for their immunosuppressive function. They were used to protect and nurture insulin-producing islets in the treatment of type I diabetes. The exploitation of Sertoli cells significantly increased the survival of transplanted islets. However, more experiments must be conducted before this method can be tested in human medicine as part of clinical trials. [ 33 ] In another study on type II diabetic and obese mice, the transplantation of microencapsulated Sertoli cells into the subcutaneous abdominal fat depot led to the return of normal glucose levels in 60% of the animals. [ 34 ] The existence of immune privileged regions of the eye was recognized as early as the late 19th century and was later investigated by Peter Medawar. [ 35 ] The original explanation of this phenomenon was that physical barriers around the immune privileged site enabled it to avoid detection by the immune system altogether, preventing the immune system from responding to any antigens present. More recent investigations have revealed a number of different mechanisms by which immune privileged sites interact with the immune system.
https://en.wikipedia.org/wiki/Immune_privilege
The immune repertoire encompasses the different sub-types of immunoglobulins or T-cell receptors that an organism's immune system produces. These help recognise pathogens in most vertebrates. The sub-types, all differing slightly from each other, can number in the tens of thousands, or even millions, in a given organism. Such a wide variety increases the odds of having a sub-type that recognises one of the many pathogens an organism may encounter. With too few sub-types, a pathogen can evade the immune system unchallenged, leading to disease. Lymphocytes generate the immune repertoire by recombining the genes encoding immunoglobulins and T cell receptors through V(D)J recombination. Although there are only a few of these genes, their possible combinations can result in a wide variety of immune repertoire proteins. Through selection, cells with autoreactive proteins (which could otherwise cause autoimmunity) are removed, while cells that can actually detect an invading organism are kept. The immune repertoire is affected by several factors: Due to technical difficulties, measuring the immune repertoire has seldom been attempted. Estimates depend on the precise type or 'compartment' of immune cells and the protein studied, but the expected billions of combinations may be an over-estimation. The genetic spatio-temporal rule governing TCR locus rearrangements implies that V(D)J rearrangements are not random, hence resulting in a smaller V(D)J diversity. [ 2 ] Next-generation sequencing may have a large impact. [ 5 ] It can obtain thousands of DNA sequences from different genes quickly, in parallel, and relatively cheaply. It may thus be possible to take a large sample of cells from a person's immune system and quickly survey the range of sub-types present in the sample. The ability to obtain data quickly from tens or hundreds of thousands of cells, one cell at a time, should provide a good idea of the size of the person's immune repertoire. These large-scale adaptive immune receptor repertoire sequencing (AIRR-seq) data require specialized bioinformatics pipelines to be analyzed effectively. [ 6 ] Many computational tools are being developed for this purpose, including: The AIRR Community is a community-driven organization that organizes and coordinates stakeholders in the use of next-generation sequencing technologies to study immune repertoires. [ 11 ] In 2017, the AIRR Community published recommendations for a minimal set of metadata that should be used to describe an AIRR-seq data set when published and deposited in a public repository. [ 12 ]
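The combinatorial scale of V(D)J recombination can be illustrated with a back-of-the-envelope calculation. The sketch below is purely illustrative: the segment counts are assumed round numbers rather than exact figures for any species, and it ignores junctional diversity, pairing constraints, and the non-random rearrangement noted above, all of which change the realized repertoire size considerably.

```python
# Illustrative estimate of combinatorial V(D)J diversity.
# Segment counts are assumed round numbers for a hypothetical
# heavy/light chain pair, not exact figures for any organism.

HEAVY = {"V": 50, "D": 25, "J": 6}    # assumed segment counts
LIGHT = {"V": 40, "J": 5}             # assumed counts (no D segment)

def combinations(segments: dict) -> int:
    """Number of ways to pick one segment of each type."""
    total = 1
    for count in segments.values():
        total *= count
    return total

heavy = combinations(HEAVY)           # 50 * 25 * 6 = 7,500
light = combinations(LIGHT)           # 40 * 5     = 200
paired = heavy * light                # 1,500,000 receptor combinations

print(f"Heavy-chain combinations: {heavy:,}")
print(f"Light-chain combinations: {light:,}")
print(f"Paired combinations:      {paired:,}")
```

Even under these simplified assumptions the count reaches millions of receptor combinations, which is one reason why repertoire size is estimated from AIRR-seq data rather than by naïvely counting gene segments.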
https://en.wikipedia.org/wiki/Immune_repertoire
An immune response is a physiological reaction which occurs within an organism in the context of inflammation for the purpose of defending against exogenous factors. These include a wide variety of different toxins, viruses, intra- and extracellular bacteria, protozoa, helminths, and fungi, which could cause serious problems to the health of the host organism if not cleared from the body. [ 1 ] In addition, there are other forms of immune response. For example, harmless exogenous factors (such as pollen and food components) can trigger allergy; latex and metals are also known allergens. A transplanted tissue (for example, blood) or organ can cause graft-versus-host disease. A type of immune reactivity known as Rh disease can be observed in pregnant women. These special forms of immune response are classified as hypersensitivity. Another special form of immune response is antitumor immunity. In general, there are two branches of the immune response, the innate and the adaptive, which work together to protect against pathogens. Both branches engage humoral and cellular components. The innate branch, the body's first reaction to an invader, is a non-specific and quick response to any sort of pathogen. Components of the innate immune response include physical barriers like the skin and mucous membranes, immune cells such as neutrophils, macrophages, and monocytes, and soluble factors including cytokines and complement. [ 2 ] On the other hand, the adaptive branch is the body's immune response that is tailored to specific antigens, and it therefore takes longer to activate the components involved. The adaptive branch includes cells such as dendritic cells, T cells, and B cells, as well as antibodies (also known as immunoglobulins), which directly interact with antigen and are a very important component of a strong response against an invader. [ 1 ] The first contact that an organism has with a particular antigen results in the production of effector T and B cells, which are activated cells that defend against the pathogen. The production of these effector cells as a result of first-time exposure is called a primary immune response. Memory T and memory B cells are also produced, in case the same pathogen enters the organism again. If the organism is re-exposed to the same pathogen, a secondary immune response kicks in and the immune system is able to respond in both a fast and strong manner because of the memory cells from the first exposure. [ 3 ] Vaccines introduce a weakened, killed, or fragmented microorganism in order to evoke a primary immune response, so that if exposure to the real pathogen occurs, the body can rely on the secondary immune response to quickly defend against it. [ 4 ] The innate immune response is an organism's first response to foreign invaders. This immune response is evolutionarily conserved across many different species, with all multi-cellular organisms having some variation of an innate response. [ 5 ] The innate immune system consists of physical barriers such as skin and mucous membranes, various cell types like neutrophils, macrophages, and monocytes, and soluble factors including cytokines and complement. [ 2 ] In contrast to the adaptive immune response, the innate response is not specific to any one foreign invader and, as a result, works quickly to rid the body of pathogens.
[ citation needed ] Pathogens are recognized and detected via pattern recognition receptors (PRR). These receptors are structures on the surface of macrophages which are capable of binding foreign invaders and thus initiating cell signaling within the immune cell. Specifically, the PRRs identify pathogen-associated molecular patterns (PAMPs) which are integral structural components of pathogens. Examples of PAMPs include the peptidoglycan cell wall or lipopolysaccharides (LPS), both of which are essential components of bacteria and are therefore evolutionarily conserved across many different bacterial species. [ 6 ] When a foreign pathogen bypasses the physical barriers and enters an organism, the PRRs on macrophages will recognize and bind to specific PAMPs. This binding results in the activation of a signaling pathway which allows for the transcription factor NF-κB to enter the nucleus of the macrophage and initiate the transcription and eventual secretion of various cytokines such as IL-8 , IL-1 , and TNFα . [ 5 ] Release of these cytokines is necessary for the entry of neutrophils from the blood vessels to the infected tissue. Once neutrophils enter the tissue, like macrophages, they are able to phagocytize and kill any pathogens or microbes. [ citation needed ] Complement , another component of the innate immune system, consists of three pathways that are activated in distinct ways. The classical pathway is triggered when IgG or IgM is bound to its target antigen on either the pathogen cell membrane or an antigen-bound antibody. The alternative pathway is activated by foreign surfaces such as viruses, fungi, bacteria, parasites, etc., and is capable of autoactivation due to “tickover” of C3. The lectin pathway is triggered when mannose-binding lectin (MBL) or ficolin aka specific pattern recognition receptors bind to pathogen-associated molecular patterns on the surface of invading microorganisms such as yeast , bacteria, parasites, and viruses. [ 7 ] Each of the three pathways ensures that complement will still be functional if one pathway ceases to work or a foreign invader is able to evade one of these pathways ( defense in depth principle). [ 5 ] Though the pathways are activated differently, the overall role of the complement system is to opsonize pathogens and induce a series of inflammatory responses that help to combat infection . [ citation needed ] The adaptive immune response is the body's second line of defense . The cells of the adaptive immune system are extremely specific because during early developmental stages the B and T cells develop antigen receptors that are specific to only certain antigens . This is extremely important for B and T cell activation. B and T cells are extremely dangerous cells, and if they are able to attack without undergoing a rigorous process of activation, a faulty B or T cell can begin exterminating the host's own healthy cells. [ 8 ] Activation of naïve helper T cells occurs when antigen-presenting cells (APCs) present foreign antigen via MHC class II molecules on their cell surface. These APCs include dendritic cells , B cells , and macrophages which are specially equipped not only with MHC class II but also with co-stimulatory ligands which are recognized by co-stimulatory receptors on helper T cells. Without the co-stimulatory molecules, the adaptive immune response would be inefficient and T cells would become anergic . 
Several T cell subgroups can be activated by specific APCs, and each T cell is specially equipped to deal with each unique microbial pathogen. The type of T cell activated and the type of response generated depends, in part, on the context in which the APC first encountered the antigen. [ 9 ] Once helper T cells are activated, they are able to activate naïve B cells in the lymph node . However, B cell activation is a two-step process. Firstly, B cell receptors, which are just Immunoglobulin M (IgM) and Immunoglobulin D (IgD) antibodies specific to the particular B cell, must bind to the antigen which then results in internal processing so that it is presented on the MHC class II molecules of the B cell. Once this happens a T helper cell which is able to identify the antigen bound to the MHC interacts with its co-stimulatory molecule and activates the B cell. As a result, the B cell becomes a plasma cell which secretes antibodies that act as an opsonin against invaders. [ citation needed ] Specificity in the adaptive branch is due to the fact that every B and T cell is different. Thus there is a diverse community of cells ready to recognize and attack a full range of invaders. [ 8 ] The trade-off, however, is that the adaptive immune response is much slower than the body's innate response because its cells are extremely specific and activation is required before it is able to actually act. In addition to specificity, the adaptive immune response is also known for immunological memory . After encountering an antigen, the immune system produces memory T and B cells which allow for a speedier, more robust immune response in the case that the organism ever encounters the same antigen again. [ 8 ] Depending on exogenous demands, several types of immune response (IR) are distinguished. In this paradigm, immune system (both innate and adaptive) and non-immune system cellular and molecular components are organized to optimally respond to distinct exposome challenges. Currently, several types of IR are classified. [ 10 ] [ 11 ] Type 1 IR is elicited by viruses, intracellular bacteria, parasites. The actors here are group 1 innate lymphoid cells (ILC1), NK cells, Th1 cells, macrophages, opsonizing IgG isotypes. Type 2 IR is caused by toxins and multicellular parasites. ILC2, epithelial cells , Th2 lymphocytes, eosinophils, basophils, mast cells, IgE are key players here. Type 3 IR targets extracellular bacteria and fungi by recruiting ILC3, Th17, neutrophils, opsonizing IgG isotypes. Additional types of IR can be observed in noninfectious pathologies. [ 12 ] All types of IR have sensor (ILCs, NK cells), adaptive (T and B cells), and effector ( neutrophils , eosinophils , basophils , mast cells ) parts. [ 11 ]
https://en.wikipedia.org/wiki/Immune_response
Immune tolerance , also known as immunological tolerance or immunotolerance , refers to the immune system 's state of unresponsiveness to substances or tissues that would otherwise trigger an immune response . It arises from prior exposure to a specific antigen [ 1 ] [ 2 ] and contrasts the immune system's conventional role in eliminating foreign antigens. Depending on the site of induction, tolerance is categorized as either central tolerance , occurring in the thymus and bone marrow , or peripheral tolerance , taking place in other tissues and lymph nodes . Although the mechanisms establishing central and peripheral tolerance differ, their outcomes are analogous, ensuring immune system modulation. Immune tolerance is important for normal physiology and homeostasis . Central tolerance is crucial for enabling the immune system to differentiate between self and non-self antigens, thereby preventing autoimmunity . Peripheral tolerance plays a significant role in preventing excessive immune reactions to environmental agents, including allergens and gut microbiota . Deficiencies in either central or peripheral tolerance mechanisms can lead to autoimmune diseases , with conditions such as systemic lupus erythematosus , [ 3 ] rheumatoid arthritis , type 1 diabetes , [ 4 ] autoimmune polyendocrine syndrome type 1 (APS-1), [ 5 ] and immunodysregulation polyendocrinopathy enteropathy X-linked syndrome (IPEX) [ 6 ] as examples. Furthermore, disruptions in immune tolerance are implicated in the development of asthma , atopy , [ 7 ] and inflammatory bowel disease . [ 4 ] In the context of pregnancy , immune tolerance is vital for the gestation of genetically distinct offspring , as it moderates the alloimmune response sufficiently to prevent miscarriage . However, immune tolerance is not without its drawbacks. It can permit the successful infection of a host by pathogenic microbes that manage to evade immune elimination. [ 8 ] Additionally, the induction of peripheral tolerance within the local microenvironment is a strategy employed by many cancers to avoid detection and destruction by the host's immune system. [ 9 ] The phenomenon of immune tolerance was first described by Ray D. Owen in 1945, who noted that dizygotic twin cattle sharing a common placenta also shared a stable mixture of each other's red blood cells (though not necessarily 50/50), and retained that mixture throughout life. [ 1 ] Although Owen did not use the term immune tolerance, his study showed the body could be tolerant of these foreign tissues. This observation was experimentally validated by Leslie Brent, Rupert E. Billingham and Peter Medawar in 1953, who showed by injecting foreign cells into fetal or neonatal mice, they could become accepting of future grafts from the same foreign donor. However, they were not thinking of the immunological consequences of their work at the time: as Medawar explains: [ citation needed ] However, these discoveries, and the host of allograft experiments and observations of twin chimerism they inspired, were seminal for the theories of immune tolerance formulated by Sir Frank McFarlane Burnet and Frank Fenner , who were the first to propose the deletion of self-reactive lymphocytes to establish tolerance, now termed clonal deletion . [ 10 ] Burnet and Medawar were ultimately credited for "the discovery of acquired immune tolerance" and shared the Nobel Prize in Physiology or Medicine in 1960. 
[ 1 ] In their Nobel Lecture, Medawar and Burnet define immune tolerance as "a state of indifference or non-reactivity towards a substance that would normally be expected to excite an immunological response." [ 1 ] Other more recent definitions have remained more or less the same. The 8th edition of Janeway's Immunobiology defines tolerance as "immunologically unresponsive...to another's tissues.". [ 2 ] Immune tolerance encompasses the range of physiological mechanisms by which the body reduces or eliminates an immune response to particular agents. It is used to describe the phenomenon underlying discrimination of self from non-self, suppressing allergic responses, allowing chronic infection instead of rejection and elimination, and preventing attack of fetuses by the maternal immune system. Typically, a change in the host, not the antigen, is implied. [ 1 ] Though some pathogens can evolve to become less virulent in host-pathogen coevolution, [ 11 ] tolerance does not refer to the change in the pathogen but can be used to describe the changes in host physiology. Immune tolerance also does not usually refer to artificially induced immunosuppression by corticosteroids, lymphotoxic chemotherapy agents, sublethal irradiation, etc. Nor does it refer to other types of non-reactivity such as immunological paralysis. [ 12 ] In the latter two cases, the host's physiology is handicapped but not fundamentally changed. Immune tolerance is formally differentiated into central or peripheral; [ 2 ] however, alternative terms such as "natural" or "acquired" tolerance have at times been used to refer to establishment of tolerance by physiological means or by artificial, experimental, or pharmacological means. [ 13 ] These two methods of categorization are sometimes confused, but are not equivalent—central or peripheral tolerance may be present naturally or induced experimentally. This difference is important to keep in mind. [ citation needed ] Central tolerance refers to the tolerance established by deleting autoreactive lymphocyte clones before they develop into fully immunocompetent cells. It occurs during lymphocyte development in the thymus [ 14 ] [ 15 ] and bone marrow for T and B lymphocytes , respectively. In these tissues, maturing lymphocytes are exposed to self-antigens presented by medullary thymic epithelial cells and thymic dendritic cells , or bone marrow cells . Self-antigens are present due to endogenous expression, importation of antigen from peripheral sites via circulating blood, and in the case of thymic stromal cells, expression of proteins of other non-thymic tissues by the action of the transcription factor AIRE . [ citation needed ] Those lymphocytes that have receptors that bind strongly to self-antigens are removed by induction of apoptosis of the autoreactive cells, or by induction of anergy , a state of non-activity. [ 16 ] Weakly autoreactive B cells may also remain in a state of immunological ignorance where they simply do not respond to stimulation of their B cell receptor. Some weakly self-recognizing T cells are alternatively differentiated into natural regulatory T cells (nTreg cells), which act as sentinels in the periphery to calm down potential instances of T cell autoreactivity (see peripheral tolerance below). [ 2 ] The deletion threshold is much more stringent for T cells than for B cells since T cells alone can cause direct tissue damage. 
Furthermore, it is more advantageous for the organism to let its B cells recognize a wider variety of antigen so it can produce antibodies against a greater diversity of pathogens. Since the B cells can only be fully activated after confirmation by more self-restricted T cells that recognize the same antigen, autoreactivity is held in check. [ 16 ] This process of negative selection ensures that T and B cells that could initiate a potent immune response to the host's own tissues are eliminated while preserving the ability to recognize foreign antigens. It is the step in lymphocyte education that is key for preventing autoimmunity (entire process detailed here ). Lymphocyte development and education is most active in fetal development but continues throughout life as immature lymphocytes are generated, slowing as the thymus degenerates and the bone marrow shrinks in adult life. [ citation needed ] Peripheral tolerance develops after T and B cells mature and enter the peripheral tissues and lymph nodes. [ 2 ] It is established by a number of partly overlapping mechanisms that mostly involve control at the level of T cells, especially CD4+ helper T cells, which orchestrate immune responses and give B cells the confirmatory signals they need in order to produce antibodies. Inappropriate reactivity toward normal self-antigen that was not eliminated in the thymus can occur, since the T cells that leave the thymus are relatively but not completely safe. Some will have receptors ( TCRs ) that can respond to self-antigens that: Those self-reactive T cells that escape intrathymic negative selection in the thymus can inflict cell injury unless they are deleted or effectively muzzled in the peripheral tissue chiefly by nTreg cells (see central tolerance above). [ citation needed ] Appropriate reactivity toward certain antigens can also be quieted by induction of tolerance after repeated exposure, or exposure in a certain context. In these cases, there is a differentiation of naïve CD4+ helper T cells into induced Treg cells (iTreg cells) in the peripheral tissue or nearby lymphoid tissue (lymph nodes, mucosal-associated lymphoid tissue, etc.). This differentiation is mediated by IL-2 produced upon T cell activation, and TGF-β from any of a variety of sources, including tolerizing dendritic cells (DCs), other antigen presenting cells , or in certain conditions surrounding tissue. [ 8 ] Treg cells are not the only cells that mediate peripheral tolerance. Other regulatory immune cells include T cell subsets similar to but phenotypically distinct from Treg cells, including TR1 cells that make IL-10 but do not express Foxp3 , TGF-β -secreting TH3 cells, as well as other less well-characterized cells that help establish a local tolerogenic environment. [ 17 ] B cells also express CD22 , a non-specific inhibitor receptor that dampens B cell receptor activation. A subset of B regulatory cells that makes IL-10 and TGF-β also exists. [ 18 ] Some DCs can make Indoleamine 2,3-dioxygenase (IDO) that depletes the amino acid tryptophan needed by T cells to proliferate and thus reduce responsiveness. DCs also have the capacity to directly induce anergy in T cells that recognize antigen expressed at high levels and thus presented at steady-state by DCs. [ 19 ] In addition, FasL expression by immune privileged tissues can result in activation-induced cell death of T cells. 
[ 20 ] The involvement of T cells, later classified as Treg cells , in immune tolerance was recognized in 1995 when animal models showed that CD4+ CD25+ T cells were necessary and sufficient for the prevention of autoimmunity in mice and rats. [ 17 ] Initial observations showed removal of the thymus of a newborn mouse resulted in autoimmunity, which could be rescued by transplantation of CD4+ T cells. A more specific depletion and reconstitution experiment established the phenotype of these cells as CD4+ and CD25+. Later in 2003, experiments showed that Treg cells were characterized by the expression of the Foxp3 transcription factor, which is responsible for the suppressive phenotype of these cells. [ 17 ] It was assumed that, since the presence of the Treg cells originally characterized was dependent on the neonatal thymus, these cells were thymically derived. By the mid-2000s, however, evidence was accruing of conversion of naïve CD4+ T cells to Treg cells outside of the thymus. [ 8 ] These were later defined as induced or iTreg cells to contrast them with thymus-derived nTreg cells. Both types of Treg cells quieten autoreactive T cell signaling and proliferation by cell-contact-dependent and -independent mechanisms including: [ 21 ] nTreg cells and iTreg cells, however, have a few important distinguishing characteristics that suggest they have different physiological roles: [ 8 ] Immune recognition of non-self-antigens typically complicates transplantation and engrafting of foreign tissue from an organism of the same species ( allografts ), resulting in graft reaction. However, there are two general cases in which an allograft may be accepted. One is when cells or tissue are grafted to an immune-privileged site that is sequestered from immune surveillance (like in the eye or testes) or has strong molecular signals in place to prevent dangerous inflammation (like in the brain). The second is when a state of tolerance has been induced, either by previous exposure to the antigen of the donor in a manner that causes immune tolerance rather than sensitization in the recipient, or after chronic rejection. Long-term exposure to a foreign antigen from fetal development or birth may result in establishment of central tolerance, as was observed in Medawar's mouse-allograft experiments. [ 1 ] In usual transplant cases, however, such early prior exposure is not possible. Nonetheless, a few patients can still develop allograft tolerance upon cessation of all exogenous immunosuppressive therapy, a condition referred to as operational tolerance. [ 22 ] [ 23 ] CD4+ Foxp3+ Treg cells, as well as CD8+ CD28- regulatory T cells that dampen cytotoxic responses to grafted organs, are thought to play a role. [ 16 ] In addition, genes involved in NK cell and γδT cell function associated with tolerance have been implicated for liver transplant patients. [ 23 ] The unique gene signatures of these patients implies their physiology may be predisposed toward immune tolerance. [ citation needed ] The fetus has a different genetic makeup than the mother, as it also translates its father's genes, and is thus perceived as foreign by the maternal immune system. Women who have borne multiple children by the same father typically have antibodies against the father's red blood cell and major histocompatibility complex (MHC) proteins. [ 2 ] However, the fetus usually is not rejected by the mother, making it essentially a physiologically tolerated allograft. 
It is thought that the placental tissues which interface with maternal tissues not only try to escape immunological recognition by downregulating identifying MHC proteins but also actively induce a marked peripheral tolerance. Placental trophoblast cells express a unique Human Leukocyte Antigen (HLA-G) that inhibits attack by maternal NK cells. These cells also express IDO, which represses maternal T cell responses by amino acid starvation. Maternal T cells specific for paternal antigens are also suppressed by tolerogenic DCs and activated iTregs or cross-reacting nTregs. [ 24 ] Some maternal Treg cells also release soluble fibrinogen-like protein 2 (sFGL2), which suppresses the function of DCs and macrophages involved in inflammation and antigen presentation to reactive T cells. [ 24 ] These mechanisms altogether establish an immune-privileged state in the placenta that protects the fetus. A break in this peripheral tolerance results in miscarriage and fetal loss. [ 25 ] (For more information, see Immune tolerance in pregnancy.) The skin and digestive tract of humans and many other organisms are colonized with an ecosystem of microorganisms that is referred to as the microbiome. Though in mammals a number of defenses exist to keep the microbiota at a safe distance, including constant sampling and presentation of microbial antigens by local DCs, most organisms do not react against commensal microorganisms and tolerate their presence. Reactions are mounted, however, to pathogenic microbes and to microbes that breach physiological barriers (epithelial barriers). Peripheral mucosal immune tolerance, in particular, mediated by iTreg cells and tolerogenic antigen-presenting cells, is thought to be responsible for this phenomenon. In particular, specialized gut CD103+ DCs that produce both TGF-β and retinoic acid efficiently promote the differentiation of iTreg cells in the gut lymphoid tissue. [ 8 ] Foxp3- TR1 cells that make IL-10 are also enriched in the intestinal lining. [ 2 ] A break in this tolerance is thought to underlie the pathogenesis of inflammatory bowel diseases like Crohn's disease and ulcerative colitis. [ 4 ] Oral tolerance refers to a specific type of peripheral tolerance induced by antigens given by mouth and exposed to the gut mucosa and its associated lymphoid tissues. [ 13 ] The intestine harbours many non-self-antigens that are able to induce an immune reaction. The immune system in the gut needs to refrain from responding to these antigens to prevent constant inflammation. On the other hand, the thin intestinal wall is vulnerable to pathogenic penetration, so the immune system must maintain its responsiveness to pathogenic antigens to prevent infections. The immune system has therefore developed mechanisms by which orally ingested antigens can suppress subsequent immune responses on a local and systemic level. [ 26 ] Oral tolerance may have evolved to prevent hypersensitivity reactions to food proteins. [ 27 ] The soluble antigens in the lumen of the intestine are transported to dendritic cells in the lamina propria. After receiving an antigen, these dendritic cells migrate to the mesenteric lymph nodes. Here they interact with naïve T cells and induce differentiation into regulatory T cells. The newly differentiated regulatory T cells travel to the lamina propria, where they suppress the immune reaction against the recognized antigens. [ citation needed ] Dendritic cells play a crucial role in establishing oral tolerance for food antigens.
The dendritic cells in the intestines cannot directly sample the antigens, as they are located behind the epithelial wall. There are different mechanisms by which the dendritic cells come into contact with the food antigens. Dissolved antigens can be taken up by enterocytes. The antigens are then partially degraded in the lysosomes. The partially degraded antigens are presented on MHCII after the lysosome merges with MHCII-carrying endosomes. The MHCII-carrying vesicles are released on the basolateral surface of the enterocytes, where dendritic cells can interact with the presented antigens. [ 28 ] [ 29 ] Another pathway of soluble antigen transport occurs through goblet cells. Goblet cell-associated antigen passages (GAP) transfer low molecular weight soluble antigens to CD103+ dendritic cells. CD103+ dendritic cells are associated with tolerance induction. [ 30 ] CX3CR1+ macrophages extend in between enterocytes and directly take up antigens from the intestinal lumen. These macrophages are not capable of traveling to the mesenteric lymph nodes. They form gap junctions with CD103+ dendritic cells and transfer antigens to the dendritic cells. [ 31 ] After antigen interaction, the CD103+ dendritic cells travel to the mesenteric lymph nodes, where they interact with their T cell population. Within the mesenteric lymph nodes, the CD103+ dendritic cells induce differentiation of the naïve T cell population into Foxp3+ regulatory T cells (iTregs). Under inflammatory conditions, CD103+ dendritic cells induce Th1 cells instead; the local microenvironment determines whether CD103+ dendritic cells act in a tolerogenic or an immunogenic manner. [ 32 ] The differentiation into regulatory T cells is dependent on TGFβ and retinoic acid. Retinoic acid also programs the T cells to stay in the gut environment by inducing CCR9 and α4β7 expression. [ 33 ] The mesenteric lymph node stromal cells also release retinoic acid and are required for gut localisation of the mesenteric lymph node T cell population. [ 34 ] The differentiated regulatory T cells subsequently migrate to the lamina propria, where they multiply. CX3CR1+ macrophages present in this environment secrete IL-10, which is required for the expansion of the regulatory T cell population. [ 35 ] In the lamina propria, the regulatory T cell population creates a tolerogenic environment to food antigens. It is known that tolerance to food antigens is systemic. The mechanism that establishes this systemic tolerance is not yet fully understood. [ 26 ] Oral tolerance is also established by inducing anergy or deletion of antigen-specific T cells. This process can take place in the liver. The liver is exposed to many food antigens through the portal vein and is therefore also a site of food tolerance induction. Upon high antigen exposure, plasmacytoid dendritic cells from the liver and mesenteric lymph node can induce anergy or deletion of antigen-specific T cells. Anergic T cells are hyporesponsive to their specific antigen. [ 36 ] The hypo-responsiveness induced by oral exposure is systemic and can reduce hypersensitivity reactions in certain cases. Records from 1829 indicate that American Indians would reduce contact hypersensitivity from poison ivy by consuming leaves of related Rhus species; however, contemporary attempts to use oral tolerance to ameliorate autoimmune diseases like rheumatoid arthritis and other hypersensitivity reactions have been mixed.
[ 13 ] The systemic effects of oral tolerance may be explained by the extensive recirculation of immune cells primed in one mucosal tissue to other mucosal tissues, allowing the extension of mucosal immunity. [ 37 ] The same probably occurs for cells mediating mucosal immune tolerance. [ citation needed ] Allergy and hypersensitivity reactions in general are traditionally thought of as misguided or excessive reactions by the immune system, possibly due to broken or underdeveloped mechanisms of peripheral tolerance. Usually, Treg cells, TR1, and Th3 cells at mucosal surfaces suppress type 2 CD4 helper cells, mast cells, and eosinophils, which mediate the allergic response. Deficits in Treg cells or in their localization to mucosa have been implicated in asthma and atopic dermatitis. [ 38 ] Attempts have been made to reduce hypersensitivity reactions by oral tolerance and other means of repeated exposure. Repeated administration of the allergen in slowly increasing doses, subcutaneously or sublingually, appears to be effective for allergic rhinitis. [ 39 ] Repeated administration of antibiotics, which can form haptens that cause allergic reactions, can also reduce antibiotic allergies in children. [ 40 ] Immune tolerance is an important means by which growing tumors, which have mutated proteins and altered antigen expression, prevent elimination by the host immune system. It is well recognized that tumors are a complex and dynamic population of cells composed of transformed cells as well as stromal cells, blood vessels, tissue macrophages, and other immune infiltrates. [ 9 ] [ 41 ] These cells and their interactions all contribute to the changing tumor microenvironment, which the tumor largely manipulates to be immunotolerant so as to avoid elimination. There is an accumulation of metabolic enzymes that suppress T cell proliferation and activation, including IDO and arginase, and high expression of tolerance-inducing ligands like FasL, PD-1, CTLA-4, and B7. [ 9 ] [ 20 ] Pharmacologic monoclonal antibodies targeted against some of these ligands have been effective in treating cancer. [ 42 ] Tumor-derived vesicles known as exosomes have also been implicated in promoting the differentiation of iTreg cells and myeloid-derived suppressor cells (MDSCs), which also induce peripheral tolerance. [ 9 ] [ 43 ] In addition to promoting immune tolerance, other aspects of the microenvironment aid in immune evasion and in the induction of tumor-promoting inflammation; e.g., tumors with low expression of distinguishing antigens can directly cause the creation of tolerized CD8+ T cells, thereby leading to immunotherapy resistance. [ 44 ] Though the exact evolutionary rationale behind the development of immunological tolerance is not completely known, it is thought to allow organisms to adapt to antigenic stimuli that will consistently be present, instead of expending considerable resources fighting them off repeatedly. Tolerance in general can be thought of as an alternative defense strategy that focuses on minimizing the impact of an invader on host fitness, instead of on destroying and eliminating the invader. [ 45 ] Such efforts may have a prohibitive cost on host fitness. In plants, where the concept was originally used, tolerance is defined as a reaction norm of host fitness over a range of parasite burdens, and can be measured from the slope of the line fitting these data. [ 46 ] Immune tolerance may constitute one aspect of this defense strategy, though other types of tissue tolerance have been described.
[ 45 ] The advantages of immune tolerance, in particular, may be seen in experiments with mice infected with malaria, in which more tolerant mice have higher fitness at greater pathogen burdens. In addition, development of immune tolerance would have allowed organisms to reap the benefits of having a robust commensal microbiome, such as increased nutrient absorption and decreased colonization by pathogenic bacteria. Though it seems that the existence of tolerance is mostly adaptive, allowing an adjustment of the immune response to a level appropriate for the given stressor, it comes with important evolutionary disadvantages. Some infectious microbes take advantage of existing mechanisms of tolerance to avoid detection and/or elimination by the host immune system. Induction of regulatory T cells , for instance, has been noted in infections with Helicobacter pylori , Listeria monocytogenes , Brugia malayi , and other worms and parasites. [ 8 ] Another important disadvantage of the existence of tolerance may be susceptibility to cancer progression. Treg cells inhibit anti-tumor NK cells . [ 47 ] The injection of Treg cells specific for a tumor antigen also can reverse experimentally-mediated tumor rejection based on that same antigen. [ 48 ] The prior existence of immune tolerance mechanisms due to selection for its fitness benefits facilitates its utilization in tumor growth. Immune tolerance contrasts with resistance. Upon exposure to a foreign antigen, either the antigen is eliminated by the standard immune response (resistance), or the immune system adapts to the pathogen, promoting immune tolerance instead. Resistance typically protects the host at the expense of the parasite, while tolerance reduces harm to the host without having any direct negative effects on the parasite. [ 46 ] Each strategy has its unique costs and benefits for host fitness: [ 45 ] Evolution works to optimize host fitness, so whether elimination or tolerance occurs depends on which would benefit the organism most in a given scenario. If the antigen is from a rare, dangerous invader, the costs of tolerating its presence are high and it is more beneficial to the host to eliminate it. Conversely, if experience (of the organism or its ancestors) has shown that the antigen is innocuous, then it would be more beneficial to tolerate the presence of the antigen rather than pay the costs of inflammation. Despite having mechanisms for both immune resistance and tolerance, any one organism may be overall more skewed toward a tolerant or resistant phenotype depending on individual variation in both traits due to genetic and environmental factors. [ 46 ] In mice infected with malaria, different genetic strains of mice fall neatly along a spectrum of being more tolerant but less resistant or more resistant but less tolerant. [ 49 ] Patients with autoimmune diseases also often have a unique gene signature and certain environmental risk factors that predispose them to disease. [ 2 ] This may have implications for current efforts to identify why certain individuals may be disposed to or protected against autoimmunity , allergy , inflammatory bowel disease , and other such diseases.
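Since tolerance in this ecological sense is defined as the reaction norm of host fitness over parasite burden and is measured from the slope of the fitted line, it can be estimated from paired burden and fitness measurements. The sketch below shows one way such a slope could be computed; the burden and fitness values are invented for illustration only, and a slope closer to zero indicates a more tolerant host.

```python
# Estimating tolerance as the slope of host fitness versus parasite burden.
# The burden and fitness values below are invented for illustration only.
import numpy as np

parasite_burden = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # e.g. parasites per host
fitness_tolerant = np.array([1.00, 0.95, 0.91, 0.88, 0.84])
fitness_sensitive = np.array([1.00, 0.80, 0.62, 0.41, 0.22])

def tolerance_slope(burden: np.ndarray, fitness: np.ndarray) -> float:
    """Slope of the least-squares line of fitness against burden."""
    slope, _intercept = np.polyfit(burden, fitness, deg=1)
    return slope

print(f"Tolerant host slope:  {tolerance_slope(parasite_burden, fitness_tolerant):.3f}")
print(f"Sensitive host slope: {tolerance_slope(parasite_burden, fitness_sensitive):.3f}")
# The more tolerant host loses less fitness per unit of burden (slope nearer zero).
```

Comparing such slopes across hosts is what allows, for example, the malaria-infected mouse strains mentioned above to be placed along a spectrum from more tolerant to more resistant.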
https://en.wikipedia.org/wiki/Immune_tolerance
Immune tolerance in pregnancy or maternal immune tolerance is the immune tolerance shown towards the fetus and placenta during pregnancy . This tolerance counters the immune response that would normally result in the rejection of something foreign in the body, as can happen in cases of spontaneous abortion . [ 1 ] [ 2 ] It is studied within the field of reproductive immunology . The placenta functions as an immunological barrier between the mother and the fetus , creating an immunologically privileged site . For this purpose, it uses several mechanisms: Still, the placenta does allow maternal immunoglobulin G (IgG) to pass to the fetus to protect it against infections. However, these antibodies do not target fetal cells, unless any fetal material has escaped across the placenta where it can come in contact with maternal B cells and make those B cells start to produce antibodies against fetal targets. The mother does produce antibodies against foreign ABO blood types , where the fetal blood cells are possible targets, but these preformed antibodies are usually of the immunoglobulin M type, [ 7 ] and therefore usually do not cross the placenta. Still, rarely, ABO incompatibility can give rise to IgG antibodies that cross the placenta, and are caused by sensitization of mothers (usually of blood type O) to antigens in foods or bacteria that are homologous to A and B antigens. [ 8 ] Still, the placental barrier is not the sole means to evade the immune system, as foreign fetal cells also persist in the maternal circulation, on the other side of the placental barrier. [ 9 ] The placenta does not block maternal IgG antibodies, which thereby may pass through the human placenta, providing immune protection to the fetus against infectious diseases. One model for the induction of tolerance during the very early stages of pregnancy is the eutherian fetoembryonic defense system (eu-FEDS) hypothesis. [ 10 ] The basic premise of the eu-FEDS hypothesis is that both soluble and cell surface associated glycoproteins , present in the reproductive system and expressed on gametes , suppress any potential immune responses, and inhibit rejection of the fetus. [ 10 ] The eu-FEDS model further suggests that specific carbohydrate sequences ( oligosaccharides ) are covalently linked to these immunosuppressive glycoproteins and act as “functional groups” that suppress the immune response. The major uterine and fetal glycoproteins that are associated with the eu-FEDS model in the human include alpha-fetoprotein , CA125 , and glycodelin-A (also known as placental protein 14). Regulatory T cells also likely play a role. [ 11 ] Also, a shift from cell-mediated immunity toward humoral immunity is believed to occur. [ 12 ] Many cases of spontaneous abortion may be described in the same way as maternal transplant rejection , [ 2 ] and a chronic insufficient tolerance may cause infertility . Other examples of insufficient immune tolerance in pregnancy are Rh disease and pre-eclampsia : Pregnancies resulting from egg donation , where the carrier is less genetically similar to the fetus than a biological mother, are associated with a higher incidence of pregnancy-induced hypertension and placental pathology . [ 15 ] The local and systemic immunologic changes are also more pronounced than in normal pregnancies, so it has been suggested that the higher frequency of some conditions in egg donation may be caused by reduced immune tolerance from the mother. 
[ 15 ] Immunological responses could be the cause in many cases of infertility and miscarriage . Some immunological factors that contribute to infertility are reproductive autoimmune failure syndrome, the presence of antiphospholipid antibodies , and antinuclear antibodies . Antiphospholipid antibodies are targeted toward the phospholipids of the cell membrane. Studies have shown that antibodies against phosphatidylserine , phosphatidylcholine , phosphatidylglycerol , phosphatidylinositol and phosphatidylethanolamine target the pre-embryo. Antibodies against phosphatidylserine and phosphatidylethanolamine are against the trophoblast. [ 16 ] These phospholipids are essential in enabling the cells of the fetus to remain attached to the cells of the uterus with implantation. If a female has antibodies against these phospholipids, they will be destroyed through the immune response and ultimately the fetus will not be able to remain bound to the uterus. These antibodies also jeopardize the health of the uterus by altering the blood flow to the uterus. [ 16 ] Antinuclear antibodies cause an inflammation in the uterus that does not allow it to be a suitable host for implantation of the embryo. Natural killer cells misinterpret the fetal cells as cancer cells and attack them. An individual that presents with reproductive autoimmune failure syndrome has unexplained infertility, endometriosis , and repetitive miscarriages due to elevated levels of antinuclear antibodies circulating. [ 16 ] Both the presence of antiphospholipids antibodies and antinuclear antibodies have toxic effects on the implantation of embryos. This does not apply to anti-thyroid antibodies . Elevated levels do not have a toxic effect , but they are indicative of a risk of miscarriage. Elevated anti-thyroid antibodies act as a marker for females who have T-lymphocyte dysfunction because these levels indicate T cells that are secreting high levels of cytokines that induce inflammation in the uterine wall. [ 16 ] Still, there is currently no drug that has evidence of preventing miscarriage by inhibition of maternal immune responses; aspirin has no effect in this case. [ 17 ] The increased immune tolerance is believed to be a major contributing factor to an increased susceptibility and severity of infections in pregnancy. [ 18 ] Pregnant women are more severely affected by, for example, influenza , hepatitis E , herpes simplex and malaria . [ 18 ] The evidence is more limited for coccidioidomycosis , measles , smallpox , and varicella . [ 18 ] Pregnancy does not appear to alter the protective effects of vaccination . [ 18 ] If the mechanisms of rejection-immunity of the fetus could be understood, it might lead to interspecific pregnancy , having, for example, pigs carry human fetuses to term as an alternative to a human surrogate mother . [ 19 ]
https://en.wikipedia.org/wiki/Immune_tolerance_in_pregnancy
In biology , immunity is the state of being insusceptible or resistant to a noxious agent or process, especially a pathogen or infectious disease . Immunity may occur naturally or be produced by prior exposure or immunization . The immune system has innate and adaptive components. Innate immunity is present in all metazoans , [ 1 ] and mounts immune responses that include inflammatory responses and phagocytosis . [ 2 ] The adaptive component, on the other hand, involves more advanced lymphatic cells that can distinguish between specific "non-self" substances in the presence of "self". The reaction to foreign substances is etymologically described as inflammation while the non-reaction to self substances is described as immunity. The two components of the immune system create a dynamic biological environment where "health" can be seen as a physical state where the self is immunologically spared, and what is foreign is inflammatorily and immunologically eliminated. "Disease" can arise when what is foreign cannot be eliminated or what is self is not spared. [ 3 ] Innate immunity, also known as native immunity, is a semi-specific and widely distributed form of immunity. It is defined as the first line of defense against pathogens, representing a critical systemic response to prevent infection and maintain homeostasis, contributing to the activation of an adaptive immune response. [ 4 ] It does not adapt to a specific external stimulus or a prior infection, but relies on genetically encoded recognition of particular patterns. [ 5 ] Adaptive or acquired immunity is the active component of the host immune response, mediated by antigen-specific lymphocytes . Unlike innate immunity, acquired immunity is highly specific to a particular pathogen, including the development of immunological memory . [ 6 ] Like the innate system, the acquired system includes both humoral immunity components and cell-mediated immunity components. [ citation needed ] Adaptive immunity can be acquired either 'naturally' (by infection) or 'artificially' (through deliberate actions such as vaccination). Adaptive immunity can also be classified as 'active' or 'passive'. Active immunity is acquired through exposure to a pathogen, which triggers the production of antibodies by the immune system. [ 7 ] Passive immunity is acquired through the transfer of antibodies or activated T-cells derived from an immune host, either artificially or through the placenta; it is short-lived, requiring booster doses for continued immunity. The diagram below summarizes these divisions of immunity. Adaptive immunity recognizes more diverse patterns; unlike innate immunity, it is associated with memory of the pathogen. [ 5 ] For thousands of years mankind has been intrigued by the causes of disease and the concept of immunity. The prehistoric view was that disease was caused by supernatural forces, and that illness was a form of theurgic punishment for "bad deeds" or "evil thoughts" visited upon the soul by the gods or by one's enemies. [ 8 ] In Classical Greek times, Hippocrates , who is regarded as the Father of Medicine, attributed diseases to an alteration or imbalance in one of the four humors (blood, phlegm, yellow bile or black bile). 
[ 9 ] The first written descriptions of the concept of immunity may have been made by the Athenian Thucydides who, in 430 BC, described that when the plague hit Athens : "the sick and the dying were tended by the pitying care of those who had recovered, because they knew the course of the disease and were themselves free from apprehensions. For no one was ever attacked a second time, or not with a fatal result". [ 10 ] Active immunotherapy may have begun with Mithridates VI of Pontus (120–63 BC) [ 11 ] who, to induce active immunity to snake venom, recommended using a method similar to modern toxoid serum therapy : drinking the blood of animals which fed on venomous snakes. [ 11 ] He is thought to have assumed that those animals acquired some detoxifying property, so that their blood would contain transformed components of the snake venom that could induce resistance to it instead of exerting a toxic effect. Mithridates reasoned that, by drinking the blood of these animals, he could acquire a similar resistance. [ 11 ] Fearing assassination by poison, he took daily sub-lethal doses of venom to build tolerance. He is also said to have sought to create a 'universal antidote' to protect himself from all poisons. [ 9 ] [ 12 ] For nearly 2000 years, poisons were thought to be the proximate cause of disease, and a complicated mixture of ingredients, called Mithridate , was used to cure poisoning during the Renaissance . [ 13 ] [ 9 ] An updated version of this cure, Theriacum Andromachi , was used well into the 19th century. The term "immunes" is also found in the epic poem " Pharsalia ", written around 60 AD by the poet Marcus Annaeus Lucanus , to describe a North African tribe's resistance to snake venom . [ 9 ] The first clinical description of immunity which arose from a specific disease-causing organism is probably A Treatise on Smallpox and Measles ("Kitab fi al-jadari wa-al-hasbah", translated 1848 [ 14 ] [ 15 ] ) written by the Islamic physician Al-Razi in the 9th century. In the treatise, Al-Razi describes the clinical presentation of smallpox and measles and goes on to indicate that exposure to these specific agents confers lasting immunity (although he does not use this term). [ 9 ] Until the 19th century, the miasma theory was also widely accepted. The theory viewed diseases such as cholera or the Black Death as being caused by a miasma, a noxious form of "bad air". [ 8 ] If someone was exposed to the miasma in a swamp, in evening air, or breathing air in a sickroom or hospital ward, they could catch a disease. During the 19th century, communicable diseases came to be viewed as being caused by germs/microbes. The modern word "immunity" derives from the Latin immunis , meaning exemption from military service, tax payments or other public services. [ 10 ] The first scientist to develop a full theory of immunity was Ilya Mechnikov , [ 16 ] who described phagocytosis in 1882. With Louis Pasteur 's germ theory of disease , the fledgling science of immunology began to explain how bacteria caused disease, and how, following infection, the human body gained the ability to resist further infections. [ 10 ] In 1888 Emile Roux and Alexandre Yersin isolated diphtheria toxin , and following the 1890 discovery by Behring and Kitasato of antitoxin-based immunity to diphtheria and tetanus , the antitoxin became the first major success of modern therapeutic immunology. [ 9 ] In Europe , the induction of active immunity emerged in an attempt to contain smallpox . 
Immunization has existed in various forms for at least a thousand years, without the terminology. [ 10 ] The earliest use of immunization is unknown, but, about 1000 AD, the Chinese began practicing a form of immunization by drying and inhaling powders derived from the crusts of smallpox lesions. [ 10 ] Around the 15th century in India , the Ottoman Empire , and east Africa , the practice of inoculation (poking the skin with powdered material derived from smallpox crusts) was quite common. [ 10 ] This practice was first introduced into the west in 1721 by Lady Mary Wortley Montagu . [ 10 ] In 1798, Edward Jenner introduced the far safer method of deliberate infection with cowpox virus ( smallpox vaccine ), which caused a mild infection that also induced immunity to smallpox. By 1800, the procedure was referred to as vaccination . To avoid confusion, smallpox inoculation was increasingly referred to as variolation , and it became common practice to use this term without regard for chronology. The success and general acceptance of Jenner's procedure would later drive the general nature of vaccination developed by Pasteur and others towards the end of the 19th century. [ 9 ] In 1891, Pasteur widened the definition of vaccine in honour of Jenner, and it then became essential to qualify the term by referring to polio vaccine , measles vaccine , etc. Passive immunity is the immunity acquired by the transfer of ready-made antibodies from one individual to another. Passive immunity can occur naturally, such as when maternal antibodies are transferred to the foetus through the placenta, and can also be induced artificially, when high levels of human (or horse ) antibodies specific for a pathogen or toxin are transferred to non-immune individuals. Passive immunization is used when there is a high risk of infection and insufficient time for the body to develop its own immune response, or to reduce the symptoms of ongoing or immunosuppressive diseases. [ 17 ] Passive immunity provides immediate protection, but the body does not develop memory; therefore the patient is at risk of being infected by the same pathogen later. [ 18 ] A fetus naturally acquires passive immunity from its mother during pregnancy. Maternal passive immunity is antibody -mediated immunity. The mother's antibodies (MatAb) are passed through the placenta to the fetus by an FcRn receptor on placental cells. This occurs around the third month of gestation . IgG is the only antibody isotype that can pass through the placenta. Passive immunity is also provided through the transfer of IgA antibodies found in breast milk that are transferred to the gut of a nursing infant, protecting against bacterial infections until the newborn can synthesize its own antibodies. Colostrum present in mother's milk is an example of passive immunity. [ 18 ] Artificially acquired passive immunity is a short-term immunization induced by the transfer of antibodies, which can be administered in several forms: as human or animal blood plasma, as pooled human immunoglobulin for intravenous ( IVIG ) or intramuscular (IG) use, and in the form of monoclonal antibodies (MAb). Passive transfer is used prophylactically in the case of immunodeficiency diseases, such as hypogammaglobulinemia . [ 19 ] It is also used in the treatment of several types of acute infection, and to treat poisoning . 
[ 17 ] Immunity derived from passive immunization lasts for only a short period of time, and there is also a potential risk for hypersensitivity reactions and serum sickness , especially from gamma globulin of non-human origin. [ 18 ] The artificial induction of passive immunity has been used for over a century to treat infectious disease, and before the advent of antibiotics , was often the only specific treatment for certain infections. Immunoglobulin therapy continued to be a first-line therapy in the treatment of severe respiratory diseases until the 1930s, even after sulfonamide antibiotics were introduced. [ 19 ] Passive or " adoptive transfer " of cell-mediated immunity is conferred by the transfer of "sensitized" or activated T-cells from one individual into another. It is rarely used in humans because it requires histocompatible (matched) donors, which are often difficult to find. In unmatched donors this type of transfer carries severe risks of graft versus host disease . [ 17 ] It has, however, been used to treat certain diseases including some types of cancer and immunodeficiency . This type of transfer differs from a bone marrow transplant , in which (undifferentiated) hematopoietic stem cells are transferred. [ citation needed ] When B cells and T cells are activated by a pathogen, memory B cells and T cells develop, and the primary immune response results. Throughout the lifetime of an animal, these memory cells will "remember" each specific pathogen encountered, and can mount a strong secondary response if the pathogen is detected again. The primary and secondary responses were first described in 1921 by English immunologist Alexander Glenny , [ 20 ] although the mechanism involved was not discovered until later. This type of immunity is both active and adaptive because the body's immune system prepares itself for future challenges. Active immunity often involves both the cell-mediated and humoral aspects of immunity as well as input from the innate immune system . Naturally acquired active immunity occurs as the result of surviving an infection. When a person is exposed to a live pathogen and develops a primary immune response , this leads to immunological memory. [ 17 ] Many disorders of immune system function can affect the formation of active immunity, such as immunodeficiency [ 21 ] (both acquired and congenital forms) and immunosuppression . Artificially acquired active immunity can be induced by a vaccine , a substance that contains antigen. A vaccine stimulates a primary response against the antigen without causing symptoms of the disease. [ 17 ] The term vaccination was coined by Richard Dunning, a colleague of Edward Jenner , and adapted by Louis Pasteur for his pioneering work in vaccination. The method Pasteur used entailed treating the infectious agents for those diseases so they lost the ability to cause serious disease. Pasteur adopted the name vaccine as a generic term in honor of Jenner's discovery, which Pasteur's work built upon. In 1807, Bavaria became the first state to require its military recruits to be vaccinated against smallpox, as the spread of smallpox was linked to combat. [ 22 ] Subsequently, the practice of vaccination would increase with the spread of war. There are four types of traditional vaccines . [ 23 ] In addition, there are some newer types of vaccines in use, and a variety of vaccine types are under development; see Experimental Vaccine Types . 
Most vaccines are given by hypodermic or intramuscular injection as they are not absorbed reliably through the gut. Live attenuated polio and some typhoid and cholera vaccines are given orally in order to produce immunity based in the bowel . Hybrid immunity is the combination of natural immunity and artificial immunity. Studies of hybrid-immune people found that their blood was better able to neutralize the Beta and other variants of SARS-CoV-2 than never-infected, vaccinated people. [ 30 ] Moreover, on 29 October 2021, the Centers for Disease Control and Prevention (CDC) concluded that "Multiple studies in different settings have consistently shown that infection with SARS-CoV-2 and vaccination each result in a low risk of subsequent infection with antigenically similar variants for at least 6 months. Numerous immunologic studies and a growing number of epidemiologic studies have shown that vaccinating previously infected individuals significantly enhances their immune response and effectively reduces the risk of subsequent infection, including in the setting of increased circulation of more infectious variants. ..." [ 31 ] Immunity is determined genetically. Genomes in humans and animals encode the antibodies and numerous other immune response genes. While many of these genes are generally required for active and passive immune responses (see sections above), there are also many genes that appear to be required for very specific immune responses. For instance, Tumor Necrosis Factor (TNF) is required for defense of tuberculosis in humans. Individuals with genetic defects in TNF may get recurrent and life-threatening infections with tuberculosis bacteria ( Mycobacterium tuberculosis ) but are otherwise healthy. They also seem to respond to other infections more or less normally. The condition is therefore called Mendelian susceptibility to mycobacterial disease (MSMD) and variants of it can be caused by other genes related to interferon production or signaling (e.g. by mutations in the genes IFNG , IL12B , IL12RB1 , IL12RB2 , IL23R , ISG15 , MCTS1 , RORC , TBX21 , TYK2 , CYBB , JAK1 , IFNGR1 , IFNGR2 , STAT1 , USP18 , IRF1 , IRF8 , NEMO , SPPL2A ). [ 32 ]
https://en.wikipedia.org/wiki/Immunity_(medicine)
An immunity passport , [ 1 ] immunity certificate , [ 2 ] health pass or release certificate [ 3 ] (among other names used by various local authorities) is a document, whether in paper or digital format , attesting that its bearer has a degree of immunity to a contagious disease . [ 4 ] Public certification is an action that governments can take to mitigate an epidemic. [ 5 ] When it takes into account natural immunity or very recent negative test results, an immunity passport cannot be reduced to a vaccination record or vaccination certificate , which proves someone has received certain vaccines verified by the medical records of the clinic where the vaccines were given, [ 6 ] such as the Carte Jaune ("yellow card") issued by the World Health Organization (WHO), which works as an official vaccination record. The concept of immunity passports received much attention during the COVID-19 pandemic as a potential way to contain the pandemic and permit faster economic recovery. [ 7 ] Reliable serological testing for antibodies against the SARS-CoV-2 virus can be used to certify people as relatively immune to COVID-19 and to issue immunity documentation. [ 8 ] Quarantine has been used since ancient times as a method of limiting the spread of infectious disease. Consequently, there has also been a need for documents attesting that a person has completed quarantine or is otherwise known not to be infectious. One of the oldest known immunity passports, issued in 1578 in Venice, was found by Jacek Partyka , [ 10 ] and since the 1600s, various Italian states issued fedi di sanità to exempt their bearers from quarantine. [ 11 ] The International Certificate of Vaccination ( Carte Jaune ) is a certificate of vaccination and prophylaxis, not immunity. The document has remained largely unchanged since it was adopted by the International Sanitary Convention of 1944. [ 12 ] The certificate is most commonly associated with Yellow Fever , but it is also used to track vaccination against other illnesses. [ citation needed ] An immunity certificate is a legal document issued by a testing authority following a serology test demonstrating that the bearer has antibodies making them relatively immune to a disease. [ citation needed ] These antibodies can either be produced naturally by recovering from the disease, or triggered through vaccination or another medical procedure. [ citation needed ] Reliable immunity certificates can be used to exempt holders from quarantine and social distancing restrictions, permitting them to travel and work in most areas, including high-risk occupations such as medical care. In the COVID-19 context, it has been argued that such certificates are of practical use to society only if all of the following conditions can be satisfied: [ 13 ] [ 14 ] [ 15 ] [ 16 ] However, some long-standing vaccines recommended by the World Health Organization , such as the Meningococcal vaccine , are less than 100% effective and their protection is not everlasting. [ 17 ] In 2021, as COVID-19 vaccines became more publicly accessible, some governments began to authorize health credentials either as a document or in a digital form. These "vaccine passports" are used to control public access to indoor venues (like bars, restaurants, spas, and casinos) and very large gatherings (like concerts, festivals, and sporting events) and not just to facilitate travel. 
Depending upon the requirements of the issuing authority, an applicant would need to provide either proof of vaccination(s), a negative COVID-19 test, proof of a recovery from the virus, or some combination of these. [ 18 ] Their usage and implementation has been controversial and has raised various scientific, medical, ethical, legal, discrimination, privacy, civil rights, and human rights concerns. [ 19 ]
https://en.wikipedia.org/wiki/Immunity_passport
Immunization , or immunisation , is the process by which an individual's immune system becomes fortified against an infectious agent (known as the immunogen ). When this system is exposed to molecules that are foreign to the body, called non-self , it will orchestrate an immune response, and it will also develop the ability to quickly respond to a subsequent encounter because of immunological memory . This is a function of the adaptive immune system . Therefore, by exposing a human, or an animal, to an immunogen in a controlled way, its body can learn to protect itself: this is called active immunization. The most important elements of the immune system that are improved by immunization are the T cells , B cells , and the antibodies B cells produce. Memory B cells and memory T cells are responsible for a swift response to a second encounter with a foreign molecule. Passive immunization is the direct introduction of these elements into the body, instead of production of these elements by the body itself. Immunization happens in various ways, both naturally and through human efforts in health care . Natural immunity is gained by those organisms whose immune systems succeed in fighting off a previous infection, if the relevant pathogen is one for which immunization is even possible. Natural immunity can have degrees of effectiveness (partial rather than absolute) and may fade over time (within months, years, or decades, depending on the pathogen). In health care, the main technique of artificial induction of immunity is vaccination , [ 1 ] which is a major form of prevention of disease , whether by prevention of infection (the pathogen fails to mount sufficient reproduction in the host), prevention of severe disease (infection still happens but is not severe), or both. Vaccination against vaccine-preventable diseases greatly reduces disease burden even though it usually cannot eradicate a disease. Vaccines against microorganisms that cause diseases can prepare the body's immune system, thus helping to fight or prevent an infection . The fact that mutations can cause cancer cells to produce proteins or other molecules that the immune system can recognize as foreign forms the theoretical basis for therapeutic cancer vaccines . Other molecules can be used for immunization as well, for example in experimental vaccines against nicotine ( NicVAX ) or the hormone ghrelin in experiments to create an obesity vaccine. Immunization is widely regarded as less risky, and an easier way to become immune to a particular disease, than risking the disease itself. Immunizations are important for both adults and children in that they can protect against many diseases. Immunization not only protects children against deadly diseases but also helps in developing children's immune systems. [ 2 ] Through the use of immunizations, some infections and diseases have been almost completely eradicated throughout the world. One example is polio . Thanks to dedicated health care professionals and the parents of children who were vaccinated on schedule, polio has been eliminated in the U.S. since 1979. Polio is still found in other parts of the world, so certain people could still be at risk of contracting it. This includes people who have never had the vaccine, those who did not receive all doses of the vaccine, and those traveling to areas of the world where polio is still prevalent. Active immunization/vaccination has been named one of the "Ten Great Public Health Achievements in the 20th Century". 
Before the introduction of vaccines, people could only become immune to an infectious disease by contracting the disease and surviving it. Smallpox ( variola ) was prevented in this way by inoculation , which produced a milder effect than the natural disease. The first clear reference to smallpox inoculation was made by the Chinese author Wan Quan (1499–1582) in his Douzhen xinfa (痘疹心法) published in 1549. [ 3 ] In China, powdered smallpox scabs were blown up the noses of the healthy. The patients would then develop a mild case of the disease and from then on were immune to it. The technique did have a 0.5–2.0% mortality rate, but that was considerably less than the 20–30% mortality rate of the disease itself. Two reports on the Chinese practice of inoculation were received by the Royal Society in London in 1700; one by Dr. Martin Lister , who received a report from an employee of the East India Company stationed in China, and another by Clopton Havers . [ 4 ] According to Voltaire (1742), the Turks derived their use of inoculation from neighbouring Circassia . Voltaire does not speculate on where the Circassians derived their technique from, though he reports that the Chinese have practiced it "these hundred years". [ 5 ] It was introduced into England from Turkey by Lady Mary Wortley Montagu in 1721 and used by Zabdiel Boylston in Boston the same year. In 1798 Edward Jenner introduced inoculation with cowpox ( smallpox vaccine ), a much safer procedure. This procedure, referred to as vaccination , gradually replaced smallpox inoculation, now called variolation to distinguish it from vaccination. Until the 1880s vaccine/vaccination referred only to smallpox, but Louis Pasteur developed immunization methods for chicken cholera and anthrax in animals and for human rabies, and suggested that the terms vaccine/vaccination should be extended to cover the new procedures. This can cause confusion if care is not taken to specify which vaccine is used, e.g. measles vaccine or influenza vaccine. Immunization can be achieved in an active or passive manner: vaccination is an active form of immunization. Active immunization can occur naturally when a person comes in contact with, for example, a microbe. The immune system will eventually create antibodies and other defenses against the microbe. The next time, the immune response against this microbe can be very efficient; this is the case in many of the childhood infections that a person only contracts once, but then is immune. Artificial active immunization is where the microbe, or parts of it, are injected into the person before they are able to take it in naturally. If whole microbes are used, they are pre-treated. The importance of immunization is so great that the American Centers for Disease Control and Prevention has named it one of the "Ten Great Public Health Achievements in the 20th Century". [ 6 ] Live attenuated vaccines have decreased pathogenicity. Their effectiveness depends on the attenuated pathogen's ability to replicate and elicit a response similar to natural infection. It is usually effective with a single dose. Examples of live, attenuated vaccines include measles , mumps , rubella , MMR , yellow fever , varicella , rotavirus , and influenza (LAIV). Passive immunization is where pre-synthesized elements of the immune system are transferred to a person so that the body does not need to produce these elements itself. Currently, antibodies can be used for passive immunization. 
This method of immunization begins to work very quickly, but it is short-lasting, because the antibodies are naturally broken down, and if there are no B cells to produce more antibodies, they will disappear. Passive immunization occurs physiologically when antibodies are transferred from mother to fetus during pregnancy , to protect the fetus before and shortly after birth. Artificial passive immunization is normally administered by injection and is used if there has been a recent outbreak of a particular disease or as an emergency treatment for toxicity, as in treatment for tetanus . The antibodies can be produced in animals ("serum therapy"), although there is a high chance of anaphylactic shock because of immunity against the animal serum itself. Thus, humanized antibodies produced in vitro by cell culture are used instead if available. Immunizations impose what is known as a positive consumer externality on society. In addition to providing the individual with protection against certain antigens, immunization adds greater protection to all other individuals in society through herd immunity . Because this extra protection is not accounted for in the market transactions for immunizations, we see an undervaluing of the marginal benefit of each immunization. This market failure is caused by individuals making decisions based on their private marginal benefit instead of the social marginal benefit. Society's undervaluing of immunizations means that through normal market transactions we end up at a quantity that is lower than what is socially optimal. [ 7 ] For example, if individual A values their own immunity to an antigen at $100 but the immunization costs $150, individual A will decide against receiving immunization. However, if the added benefit of herd immunity means person B values person A's immunity at $70, then the total social marginal benefit of their immunization is $170. Individual A's private marginal benefit being lower than the social marginal benefit leads to an under-consumption of immunizations. Having private marginal benefits lower than social marginal benefits will always lead to an under-consumption of any good. The size of the disparity is determined by the value that society places on each different immunization. Often, immunizations do not reach a socially optimal quantity high enough to eradicate the disease. Instead, the socially optimal quantity may still allow for some number of sick individuals. Most of the commonly immunized diseases in the United States still see a small presence with occasional larger outbreaks. Measles is a good example of a disease whose social optimum leaves enough room for outbreaks in the United States that often lead to the deaths of a handful of individuals. [ 8 ] There are also examples of illnesses so dangerous that the social optimum ended with the eradication of the virus, such as smallpox . In these cases, the social marginal benefit is so large that society is willing to pay the cost to reach a level of immunization that makes the spread and survival of the disease impossible. Despite the severity of certain illnesses, the cost of immunization versus the social marginal benefit means that total eradication is not always the end goal of immunization. Though it is hard to tell exactly where the socially optimal outcome is, we know that it is not the eradication of all disease for which an immunization exists. In order to internalize the positive externality imposed by immunizations, payments equal to the external marginal benefit must be made. 
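A minimal sketch of the externality arithmetic described above may help. The dollar figures mirror the article's hypothetical $100/$70/$150 example; the function names and the simple threshold decision rule are illustrative assumptions, not part of any standard economic model or real policy tool.

```python
# Minimal sketch of the vaccination-externality arithmetic described above.
# The dollar figures mirror the article's hypothetical example; the function
# names and decision rule are illustrative, not a real policy model.

def private_decision(private_benefit: float, cost: float) -> bool:
    """An individual vaccinates only if their own benefit covers the cost."""
    return private_benefit >= cost

def social_decision(private_benefit: float, external_benefit: float, cost: float) -> bool:
    """Society prefers vaccination if total (private + external) benefit covers the cost."""
    return private_benefit + external_benefit >= cost

def corrective_subsidy(private_benefit: float, external_benefit: float, cost: float) -> float:
    """Smallest payment that makes the private decision match the social one."""
    if social_decision(private_benefit, external_benefit, cost) and not private_decision(private_benefit, cost):
        return cost - private_benefit
    return 0.0

# Individual A values immunity at $100, the dose costs $150, and the
# herd-immunity spillover to person B is worth $70.
print(private_decision(100, 150))        # False -> A declines on their own
print(social_decision(100, 70, 150))     # True  -> vaccination is socially worthwhile
print(corrective_subsidy(100, 70, 150))  # 50.0  -> a $50 subsidy internalizes the externality
```

Under these assumed numbers, a $50 subsidy lowers A's effective cost to $100, which is exactly the point at which the private and social decisions coincide; larger external benefits would justify larger subsidies.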
In countries like the United States these payments usually come in the form of subsidies from the government. Before 1962, immunization programs in the United States were run at the local and state levels of government. The inconsistency in subsidies led to some regions of the United States reaching the socially optimal quantity while other regions were left without subsidies and remained at the private marginal benefit level of immunizations. Since 1962 and the Vaccination Assistance Act , the United States as a whole has been moving towards the socially optimal outcome on a larger scale. [ 9 ] Despite government subsidies, it is difficult to tell when the social optimum has been achieved. In addition to the difficulty of determining the true social marginal benefit of immunizations, cultural movements have shifted private marginal benefit curves. Vaccine controversies have changed the way some private citizens view the marginal benefit of being immunized. If individual A believes that there is a large health risk associated with immunization, possibly larger than the risk of the disease itself, they will not be willing to pay for or receive immunization. With fewer willing participants and a widening gap between private and social marginal benefit, a social optimum becomes more difficult for governments to achieve through subsidies. Outside of government intervention through subsidies, non-profit organizations can also move a society towards the socially optimal outcome by providing free immunizations to developing regions. Without the ability to afford the immunizations to begin with, developing societies will not be able to reach even the quantity determined by private marginal benefits. By running immunization programs, organizations are able to move privately under-immunized communities towards the social optimum. In the United States, race and ethnicity are strong determinants of utilization of preventive and therapeutic health services as well as health outcomes. [ 10 ] Rates of infant mortality and most of the leading causes of overall mortality have been higher in African Americans than in European Americans. A recent analysis of mortality from influenza and pneumonia revealed that African Americans died of these causes at higher rates than European Americans in 1999–2018. [ 11 ] Contributing to these racial disparities are lower rates of immunization against influenza and pneumococcal pneumonia. [ 10 ] During the COVID-19 pandemic, death rates have been higher in African Americans than European Americans, and vaccination rates have lagged in African Americans during the roll-out. [ 12 ] Among Hispanics, immunization rates are lower than those in non-Hispanic whites. [ 13 ]
https://en.wikipedia.org/wiki/Immunization
Immunization during pregnancy is the administration of a vaccine to a pregnant individual. [ 1 ] This may be done either to protect the individual from disease or to induce an antibody response, such that the antibodies cross the placenta and provide passive immunity to the infant after birth. In many countries, including the US, [ 2 ] Canada, [ 3 ] UK, [ 4 ] Australia [ 5 ] [ 6 ] and New Zealand, [ 7 ] vaccination against influenza , COVID-19 and whooping cough is routinely offered during pregnancy. Other vaccines may be offered during pregnancy where travel-related or occupational exposure to disease-causing organisms warrant this. However, certain vaccines are contra-indicated in pregnancy. These include vaccines that include live attenuated organisms, such as the MMR and BCG vaccines, since there is a potential risk that these could infect the fetus. Newborns are at increased risk of infection, particularly before they receive their first infant vaccinations. For this reason, certain vaccinations are offered during pregnancy in order to induce an antibody response, resulting in the passage of antibody across the placenta and into the fetus : this confers passive immunity on the newborn. As early as 1879, it was noted that infants born following smallpox vaccination in pregnancy were themselves protected against smallpox. [ 8 ] However, the original smallpox vaccination was never widely used during pregnancy because, as a live vaccine, its use is contraindicated. [ citation needed ] Tetanus is a bacterial infection caused by Clostridium tetani . Newborns can be infected via their unhealed umbilical stump, particularly when the umbilical cord is cut with a non-sterile instrument, and suffer a generalised infection. The tetanus toxoid vaccine was first licensed for use in 1938 and, during the 1960s, it was noted that tetanus vaccination in pregnancy could prevent neonatal tetanus. [ 9 ] Subsequent trials showed that vaccination of pregnant women reduces infant deaths from tetanus by 94%. [ 10 ] [ 11 ] In 1988, the World Health Assembly passed a resolution to use maternal vaccination to eliminate neonatal tetanus by the year 2000. Although neonatal tetanus has not yet been eliminated, by 2017 there were an estimated 31,000 annual infant deaths from tetanus, down from 787,000 in 1987. [ 12 ] Whooping cough , or pertussis, is a contagious respiratory disease caused by the bacteria Bordetella pertussis . It is fatal in an estimated 0.5% of infants in the USA. [ 13 ] The first vaccine against whooping cough was developed in the 1930s, and in the 1940s a study found that vaccination in pregnancy protected infants against developing whooping cough. [ 14 ] The tetanus and whooping cough vaccinations are generally administered in combination during pregnancy, for example as the DTaP vaccine (which also protects against diphtheria) or the 4-in-1 vaccine (which also protects against diphtheria and polio). [ citation needed ] Influenza is a respiratory infection caused by influenza viruses . Pregnant women are disproportionately affected by influenza: in the 1918 pandemic, mortality rates as high as 27% were reported in this population and in the 1957 pandemic, nearly 20% of deaths in pregnancy were attributed to influenza. In the 2009 pandemic, even with medical advances, pregnant women accounted for a disproportionately high percentage of deaths. [ 15 ] The influenza vaccine was first used in the US military from 1938, and then in the civilian population from the 1940s. 
Given the increased risk of influenza during pregnancy, public health bodies in the USA recommended that pregnant women should be prioritised for influenza vaccination from the 1960s, [ 16 ] with the CDC endorsing the recommendation from 1997. [ 17 ] However, it was not until 2005 that a randomised clinical trial formally demonstrated the efficacy of influenza vaccination in pregnancy. [ 18 ] Following the 2009 pandemic, both Australia and the UK added influenza vaccination to the recommended schedule for pregnant women. [ 19 ] COVID-19 is a respiratory infection caused by the SARS-CoV-2 virus. Before COVID-19 vaccines were available, pregnant women who caught the disease were at increased risk of needing intensive care, invasive ventilation or ECMO, but not at increased risk of death. [ 20 ] Infection significantly increased the risk of preterm birth, stillbirth and pre-eclampsia. [ 21 ] COVID-19 vaccination during pregnancy is safe and is associated with lower risk of stillbirth , premature birth and admission of the newborn to intensive care . Vaccination can prevent COVID-19 infection during pregnancy, although these immunity benefits are not passed on to the child. [ 22 ] mRNA COVID-19 vaccines were first rolled out in December 2020. In recognition of the risks posed by COVID-19 disease in pregnancy, the US and Israel offered the vaccines to all pregnant women shortly afterwards, and the first safety and effectiveness data therefore came from these vaccines and these nations. [ 23 ] Rubella , or German measles, is an infection caused by the rubella virus . In childhood, it usually causes a mild disease, but infection in pregnancy can result in fetal infection, or congenital rubella syndrome , which causes neonatal deaths, deafness, blindness and intellectual disabilities. The first rubella vaccine was licensed for use in 1969, with its development largely spurred by the heavy burden of congenital rubella experienced in the 1960s. [ 24 ] Because the rubella vaccine is a live attenuated vaccine, there is a theoretical risk that it could cause fetal infection, although this has never been seen to occur. Therefore, rubella vaccination is usually avoided during pregnancy. Rather, vaccination is offered to children, to reduce the prevalence of rubella virus in circulation, and/or to adolescent girls, to boost their immunity before they are likely to conceive. [ 25 ] [ 26 ]
https://en.wikipedia.org/wiki/Immunization_during_pregnancy
An immunization registry or immunization information system ( IIS ) is an information system that collects vaccination data about all persons within a geographic area. It consolidates the immunization records from multiple sources for each person living in its jurisdiction. Immunization information systems (IIS) are an important tool to increase and sustain high vaccination coverage by consolidating vaccination records of children and adults from multiple providers, forecasting next doses past due, due, and next due to support generating reminder and recall vaccination notices for each individual, and providing official vaccination forms and vaccination coverage assessments. One of the national health objectives is to increase to 95% the proportion of children aged <6 years who participate in fully operational population-based IIS. [ citation needed ] A "fully operational" IIS includes 95% enrollment or higher of all catchment area children less than 6 years of age with 2 or more immunization encounters administered according to ACIP recommendations. [ citation needed ] In a population-based IIS, children are entered into the IIS at birth, often through a linkage with electronic birth records . An IIS record also can be initiated by a health care provider at the time of a child's first immunization. If an IIS includes all children in a given geographical area and all providers are reporting immunization information, it can provide a single data source for all community immunization partners. Such a population-based IIS can make it easier to carry out the demonstrably effective immunization strategies (e.g., reminder/recall, AFIX , and WIC linkages) and thereby decrease the resources needed to achieve and maintain high levels of coverage. IIS can also be used to enhance adult immunization services and coverage. Pharmacy immunizations are reported to state IIS, allowing for a complete lifetime immunization history. [ citation needed ] The concept of IIS is not new. Many individual practices and health plans administer immunizations to their patients. Records of these immunizations often are based on computerized information systems designed for other purposes, such as billing . There is also a growing movement toward the development of totally computerized patient medical records . Although an IIS includes all immunizations administered by health care providers participating in it, only population-based IIS are capable of providing information on all children and all adult doses of vaccines administered by all providers. [ citation needed ]
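As a rough illustration of the record consolidation and due-dose forecasting an IIS performs, here is a minimal sketch. The field names, the toy schedule, and the same-day de-duplication rule are invented for illustration; they do not reflect any real IIS data model or the actual ACIP schedule.

```python
# Rough sketch of IIS-style record consolidation and due-dose forecasting.
# Field names, the toy schedule, and the de-duplication rule are illustrative
# assumptions, not a real IIS data model or the ACIP schedule.

from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DoseReport:
    person_id: str
    vaccine: str
    date_given: date
    provider: str

# Toy schedule: vaccine -> number of doses considered complete.
TOY_SCHEDULE = {"DTaP": 5, "polio": 4}

def consolidate(reports: list[DoseReport]) -> dict[str, dict[str, set[date]]]:
    """Merge reports from multiple providers, de-duplicating same-day doses."""
    merged: dict[str, dict[str, set[date]]] = {}
    for r in reports:
        merged.setdefault(r.person_id, {}).setdefault(r.vaccine, set()).add(r.date_given)
    return merged

def doses_still_due(history: dict[str, set[date]]) -> dict[str, int]:
    """Compare a consolidated history against the toy schedule."""
    return {v: needed - len(history.get(v, set()))
            for v, needed in TOY_SCHEDULE.items()
            if len(history.get(v, set())) < needed}

reports = [
    DoseReport("child-1", "DTaP", date(2023, 3, 1), "clinic A"),
    DoseReport("child-1", "DTaP", date(2023, 3, 1), "pharmacy B"),  # duplicate report
    DoseReport("child-1", "DTaP", date(2023, 5, 1), "clinic A"),
]
merged = consolidate(reports)
print(doses_still_due(merged["child-1"]))  # {'DTaP': 3, 'polio': 4}
```

The point of the sketch is only that a population-based registry merges reports from many providers into one longitudinal record per person and then compares that record against a schedule to drive reminder and recall notices.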
https://en.wikipedia.org/wiki/Immunization_registry
Immuno-psychiatry, according to Pariante, is a discipline that studies the connection between the brain and the immune system. It differs from psychoneuroimmunology by postulating that behaviors and emotions are governed by peripheral immune mechanisms. Depression , for instance, is seen as malfunctioning of the immune system . Since the late 1800s, scientists and physicians have noted a possible link between the immune system and psychiatric disorders. [ 1 ] [ 2 ] [ 3 ] In 1876 Alexandar Rosenblum, and later in the 1880s Dr. Julius Wagner-Jauregg , observed that patients with neurosyphilis (syphilis that had spread to the nervous system) had decreased symptoms of psychosis after contracting malaria. [ 3 ] From the 1920s, Karl Menninger noted how many patients recovering or recovered from influenza had psychosis similar to that seen in patients with schizophrenia. [ 1 ] Moritz Tramer then reported that schizophrenia is associated with a child being born in the winter or spring months (when influenza is most commonly contracted). [ 1 ] Later, in the 1980s, much research was conducted associating increased rates of schizophrenia with a history of prenatal or postnatal infection, and especially childhood central nervous system infections. [ 1 ] [ 4 ] In modern medicine, William Osler provided an early documentation of the association between inflammation and changes in mood and motivation. In his 1892 book, "The Principles and Practice of Medicine," he observed that clinical patients with progressive septicemia showed "early delirium and marked mental prostration and apathy." [ 2 ] In 1988, while studying animals, Benjamin Hart coined the term "sickness behavior" to describe the "sleepy or depressed or inactive" state and decreased motivation to move about that sick animals displayed. [ 2 ] A previous study from 1979, by M.J. Murry, found increased mortality when animals were force-fed after reducing their food intake in response to bacterial infection, suggesting that these changes played an essential role in fighting off infection. [ 2 ] Beginning in the mid-1990s, investigation into the similarity between this animal "sickness behavior" and depression in humans led to more and more studies showing elevated levels of pro-inflammatory cytokines among persons with depression. Many of these early studies of sickness behavior showed significant differences in many pro-inflammatory cytokines, reviving interest in the role that the immune system plays in psychiatric disorders. [ 2 ] Modern immuno-psychiatry theory now focuses on some variation of the following model of how the environment leads to biological changes which affect the peripheral immune system and later affect the mind, mood, behavior, and response to psychiatric treatment. [ 3 ] Stress is processed by the sympathetic nervous system , which releases catecholamines (dopamine and norepinephrine) that increase the number of monocytes; these monocytes respond to inflammatory signals ( DAMPs/MAMPs ) by releasing pro-inflammatory cytokines , which then reach the brain and lead to changes in neurotransmitter metabolism, neuronal signaling, and ultimately behavior. 
Pro-inflammatory cytokines alter the metabolism of neurotransmitters and have been documented to decrease levels of serotonin, increase indolamine-2,3-dioxygenase (IDO) activity (which catabolizes tryptophan and consequently decreases serotonin synthesis), increase levels of kynurenine (leading to decreased glutamate and dopamine release), decrease dopamine as well as expression of tyrosine hydroxylase (which is required to make dopamine), and increase levels of quinolinic acid , leading to more NMDA receptor activation and oxidative stress and ultimately to excitotoxicity and neurodegeneration. [ 6 ] Additionally, the cytokines interferon-alpha and IL-6 can cause reversible reductions in brain levels of tetrahydrobiopterin (used in the serotonin, dopamine, and norepinephrine synthesis pathways). However, inhibition of nitric oxide synthase , one of the downstream effects of interferon-alpha, can lead to a reversal of this decrease in tetrahydrobiopterin. [ 6 ] Microglia produce the most cytokines of all cells in the brain, respond to stress, and are likely important in the stress response, as they are found to be increased in density (yet decreased in overall number) in different parts of the brain of persons with major depressive disorder, bipolar disorder, or schizophrenia who had died by suicide. [ 6 ] On a molecular level, cytokines affect the glutamate metabolism of the nervous system and can lead to structural changes involving microglia similar to those seen in depressed patients. [ 6 ] TNF-alpha and IL-1, through oxidative stress via increased release of reactive oxygen and nitrogen species, impair re-uptake and transport of glutamate by glial cells, increasing release of glutamate by astrocytes and microglia and leading to an excitotoxic state. The resulting loss of oligodendrocytes is a key marker in structural analysis of the brains of depressed patient populations. [ 6 ] The hippocampus helps regulate the HPA-axis' secretion of cortisol and has the largest number of glucocorticoid receptors in the brain. [ 8 ] This makes it especially sensitive to stress and stress-related increases in cortisol. Additionally, the neuroendocrine response by the HPA-axis is affected by the regulation of glucocorticoid receptor expression in the different regions of the brain. Multiple studies have shown altered HPA stress responsivity to be associated with increased risk of psychopathology; for example, in one study of post-mortem human brain tissue, mRNA harvested from patients who had died by suicide, with or without a history of early childhood stress, revealed significant epigenetic changes in glucocorticoid receptor expression. Patients with chronically elevated levels of inflammatory cytokines (such as those with chronic hepatitis C and others undergoing injections of interferon-alpha ) show changes in glucocorticoid receptors and cortisol release similar to those of patients with major depression. Both exhibit a loss of the normal rhythm of cortisol secretion throughout the day, and both show a loss of functional glucocorticoid receptors which would otherwise decrease the inflammation in the body. 
[ 6 ] Following studies of patients with significant chronic inflammation, such as those undergoing interferon-alpha therapy for hepatitis C, showing an association with depressive symptoms not unlike Osler's "sickness behavior", more studies into major depressive disorder and its link to inflammation have been done. [ 2 ] There have been many studies inferring a link between inflammation and major depressive disorder from correlating levels of cytokines in the blood, correlating genes linked to inflammation with treatment response, and tracking changes in cytokines during antidepressant therapy. Many studies investigating the role of the immune system in patients with major depressive disorder found that such patients had decreased immune cell activity of natural killer cells and lymphocytes despite reliably having elevated levels of pro-inflammatory cytokines (IL-6, TNF-alpha, and C-reactive protein). [ 2 ] [ 6 ] Depression is also associated with a decrease in regulatory T cells , which secrete anti-inflammatory IL-10 and TGF-beta. [ 6 ] Different studies have shown that persons with depression also have lower circulating levels of IL-10 and TGF-beta, in addition to the previously mentioned elevated levels of pro-inflammatory IL-6 in their blood stream. [ 6 ] Antidepressants have been used to infer a link between inflammation and major depressive disorder. Human studies of the link between inflammation and depression found that giving antidepressants prior to an expected inflammatory insult decreased the observed severity of depression. For example, giving paroxetine (an antidepressant) prior to treatment for malignant melanoma or hepatitis C was found to decrease depressive symptoms compared to persons not given paroxetine. Giving an antidepressant prior to injection of endotoxin (a substance known to cause systemic inflammation) was likewise found to reduce self-reported symptoms of depression. [ 6 ] In studies of antidepressant use, some persons show a return to normal cytokine levels with depression treatment. Patients with major depressive disorder treated with antidepressants have an increase in regulatory T cells and a decrease in inflammatory IL-1 beta. [ 6 ] An even more strongly replicated finding is that patients with increased levels of pro-inflammatory cytokines, or even genes tied to increased pro-inflammatory activity, are more likely to have antidepressant-resistant depression. [ 6 ] [ 9 ] Across all these studies there seems to be a slight difference in symptoms of major depressive disorder with and without inflammation: inflammation-related depression tends to involve less guilt/self-negativity and more slowness and lack of appetite compared to depression in persons without increased levels of systemic inflammation. [ 6 ] Levels of IL-6 in the blood as well as in the cerebrospinal fluid of patients with schizophrenia have been tied to episodes of psychosis, to persons at risk for schizophrenia, to the severity of schizophrenia, and to antipsychotic therapy. [ 1 ] Following studies revealing that kynurenic acid is the NMDA receptor's only endogenous (naturally occurring in the body) antagonist, and the fact that psychosis can be elicited by NMDA receptor antagonism, multiple studies investigated and confirmed that changed levels of kynurenic acid may be related to psychosis. Later drug studies found that COX1 inhibition, which increases kynurenic acid, can cause psychotic symptoms. 
COX2-selective inhibitors like celecoxib, which reduce kynurenic acid, were found to reduce the clinical severity of schizophrenia in non-randomized, unblinded clinical trials. [ 1 ] [ 10 ] While encouraging, these results remain to be confirmed in randomized clinical trials before such drugs are even considered for off-label use. Recent research has shown that patients with OCD have six times the amount of a protein called Immuno-moodulin, or Imood, compared to individuals who do not contend with OCD. In addition to OCD, Imood was also found to increase symptoms of anxiety and stress, both mental health areas that have already been linked to OCD. [ 11 ] The overall results of the many clinical trials of combinations of NSAIDs and antidepressants, proposed to more thoroughly treat standard major depressive disorder and treatment-resistant major depressive disorder, show that the current degree of importance of addressing the inflammatory component of mood disorders is unclear. Mixed results showing some or no improvement in such studies, the relative lack of studies recruiting sufficient numbers of patients with treatment-resistant depression, the lack of studies of patients with both chronic inflammation and treatment-resistant depression, and the lack of a standardized definition of an elevated chronic inflammatory state mean that more studies are needed to further the understanding of inflammation and psychiatric disorders. [ 3 ]
https://en.wikipedia.org/wiki/Immuno-psychiatry
ImmunoGen, Inc. was a biotechnology company focused on the development of antibody-drug conjugate (ADC) therapeutics for the treatment of cancer. ImmunoGen was founded in 1981 and was headquartered in Waltham, Massachusetts . [ 2 ] An ImmunoGen ADC contains a manufactured antibody that binds to a target found on cancer cells, with one of the company's potent cell-killing agents attached as a "payload". The antibody serves to deliver the cell-killing agent specifically to cancer cells bearing its target, and the payload serves to kill these cells. In some cases, the antibody also has anticancer activity. In November 2023, AbbVie , an American pharmaceutical company, announced it was buying ImmunoGen for $10.1 billion. [ 3 ] [ 4 ] Currently approved ADCs with ImmunoGen technology employ one of the company's maytansinoid cell-killing agents, either DM1 or DM4, or one of the company's DNA-acting IGN payloads. ImmunoGen also developed isatuximab , a monoclonal antibody without linkage to a toxin. ImmunoGen uses its ADC technology to develop its own product candidates. Mirvetuximab, which has been submitted for FDA approval, is being developed as a monotherapy, or standalone treatment, for ovarian cancer. [ 7 ] [ 8 ] Other product candidates are also in clinical-stage development. [ 9 ] The company also selectively out-licenses limited use of its technology to other companies. Companies licensing ImmunoGen's technology include Amgen , Bayer HealthCare , Biotest, Genentech / Roche , Eli Lilly , Novartis , Sanofi , and Takeda . [ 9 ] Roche's Kadcyla (ado-trastuzumab emtansine) utilizes ImmunoGen's ADC technology. It has been approved and launched in a number of countries, including the US, where it is marketed by Genentech, a member of the Roche Group. [ 14 ] [ 15 ] In October 2015, the company disclosed that Kadcyla had failed to meet its primary endpoint in the Phase II/III GATSBY trial investigating the second-line treatment of HER2 -positive advanced gastric cancer. [ 16 ]
https://en.wikipedia.org/wiki/ImmunoGen
Immunoadsorption is a procedure that removes specific blood group antibodies from the blood. [ 1 ] It is used to remove pathogenic antibodies . [ 2 ] [ 3 ] [ 4 ] The procedure generally takes about three to four hours. [ 5 ] Immunoadsorption was developed in the 1990s as a method of extracorporeal removal of molecules from the blood , in particular molecules of the immune system . A number of different devices/columns exist on the market, each with a different active component to which the molecule of interest attaches, allowing for selectivity in the molecules removed. [ citation needed ] Immunoadsorption may be used as an alternative to plasma exchange in certain conditions. [ 6 ] Evidence of benefit is lacking in those with kidney problems. Concerns include that it is expensive. [ 7 ] The procedure involves two steps: first, the separation of plasma from the blood cells, and second, passage of the plasma over the immunoadsorption column. Blood first passes through a plasma filter; the plasma then passes on to an immunoadsorption column before returning to the patient. As the plasma is passing through one column, the second column is being regenerated. Once the first column is saturated, the flow switches to the second column while the first is regenerated. Treatment prescriptions for immunoadsorption are based on plasma volumes, with different recommendations for each condition; depending on the condition being treated, sessions can be daily or intermittent. [ citation needed ] Immunoadsorption can be used in various autoimmune-mediated neurological diseases in order to remove autoimmune antibodies and other pathological constituents from the patient's blood. It is increasingly recognized as a more specific alternative to plasma exchange and is generally appreciated for its potentially advantageous safety profile. [ 7 ] Immunoadsorption is also used in kidney transplantation for either the preparation of the ABO -incompatible or the highly sensitized kidney transplant candidate before transplantation , or the treatment of antibody-mediated rejection after transplantation . [ 8 ] The most frequently encountered complication of immunoadsorption is an allergic reaction to the filter or adsorption column. Medication may be given before the procedure to minimize the risk. Other side effects during the treatment could be dizziness, nausea or feeling cold. [ 5 ] The usage of immunoadsorption as a medical procedure is still limited in some countries of the world, especially in North America. The additional costs for immunoadsorption are balanced by the reduced length of stay as well as the reduced need for plasma-substituting solutions and handling of side effects. [ 9 ]
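The alternating two-column operation described above can be illustrated with a small simulation sketch. The plasma volumes, the column capacity, and the aliquot size are illustrative assumptions only; real devices determine saturation and switching according to the manufacturer's protocol.

```python
# Minimal sketch of the two-column alternation described above: plasma flows
# through one adsorption column while the other is regenerated, and the roles
# swap whenever the active column saturates. Capacities, volumes, and the
# saturation rule are illustrative assumptions, not clinical parameters.

def run_session(total_plasma_volume_ml: float,
                column_capacity_ml: float,
                step_ml: float = 50.0) -> list[str]:
    """Simulate which column treats each aliquot of plasma during a session."""
    log = []
    active, standby = "column A", "column B"
    processed_on_active = 0.0
    treated = 0.0
    while treated < total_plasma_volume_ml:
        if processed_on_active >= column_capacity_ml:
            # Active column is saturated: switch flow and regenerate it.
            active, standby = standby, active
            processed_on_active = 0.0
            log.append(f"switch flow to {active}; regenerating {standby}")
        volume = min(step_ml, total_plasma_volume_ml - treated)
        treated += volume
        processed_on_active += volume
        log.append(f"{volume:.0f} ml treated on {active} ({treated:.0f} ml total)")
    return log

for line in run_session(total_plasma_volume_ml=2500, column_capacity_ml=600):
    print(line)
```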
https://en.wikipedia.org/wiki/Immunoadsorption
An immunoassay ( IA ) is a biochemical test that measures the presence or concentration of a macromolecule or a small molecule in a solution through the use of an antibody (usually) or an antigen (sometimes). The molecule detected by the immunoassay is often referred to as an " analyte " and is in many cases a protein , although it may be other kinds of molecules, of different sizes and types, as long as the proper antibodies that have the required properties for the assay are developed. Analytes in biological liquids such as serum or urine are frequently measured using immunoassays for medical and research purposes. [ 1 ] Immunoassays come in many different formats and variations. Immunoassays may be run in multiple steps with reagents being added and washed away or separated at different points in the assay. Multi-step assays are often called separation immunoassays or heterogeneous immunoassays. Some immunoassays can be carried out simply by mixing the reagents and samples and making a physical measurement. Such assays are called homogeneous immunoassays, or less frequently non-separation immunoassays. Calibrators are often employed in immunoassays. Calibrators are solutions that are known to contain the analyte in question, and the concentration of that analyte is generally known. Comparison of an assay's response to a real sample against the assay's response produced by the calibrators makes it possible to interpret the signal strength in terms of the presence or concentration of analyte in the sample. Immunoassays rely on the ability of an antibody to recognize and bind a specific macromolecule in what might be a complex mixture of macromolecules. In immunology the particular macromolecule bound by an antibody is referred to as an antigen and the area on an antigen to which the antibody binds is called an epitope . In some cases, an immunoassay may use an antigen to detect the presence of antibodies, which recognize that antigen, in a solution. In other words, in some immunoassays, the analyte may be an antibody rather than an antigen. In addition to the binding of an antibody to its antigen, the other key feature of all immunoassays is a means to produce a measurable signal in response to the binding. Most, though not all, immunoassays involve chemically linking antibodies or antigens with some kind of detectable label. A large number of labels exist in modern immunoassays, and they allow for detection through different means. Many labels are detectable because they either emit radiation, produce a color change in a solution, fluoresce under light, or can be induced to emit light. Rosalyn Sussman Yalow and Solomon Berson are credited with the development of the first immunoassays in the 1950s. Yalow accepted the Nobel Prize for her work in immunoassays in 1977, becoming the second American woman to have won the award. [ 2 ] Immunoassays became considerably simpler to perform and more popular when techniques for chemically linking enzymes to antibodies were demonstrated in the late 1960s. [ 3 ] In 1983, Professor Anthony Campbell [ 4 ] at Cardiff University replaced radioactive iodine used in immunoassay with an acridinium ester that makes its own light: chemiluminescence . This type of immunoassay is now used in around 100 million clinical tests every year worldwide, enabling clinicians to measure a wide range of proteins, pathogens and other molecules in blood samples. [ 5 ]
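The calibrator-based quantitation described above is usually implemented in practice by fitting a standard curve to the calibrator responses and then reading unknown samples off that curve. The sketch below is only illustrative: the four-parameter logistic (4PL) model is a commonly used choice rather than anything specified by this article, and the concentrations, signal values, and function names are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, a, b, c, d):
    """Four-parameter logistic: a = response at zero dose, d = response at
    saturating dose, c = inflection point (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

# Hypothetical calibrator concentrations (ng/mL) and measured signals
calib_conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
calib_signal = np.array([0.05, 0.12, 0.35, 0.80, 1.40, 1.75])

# Fit the standard curve to the calibrator data
params, _ = curve_fit(four_pl, calib_conc, calib_signal,
                      p0=[0.02, 1.0, 3.0, 2.0], maxfev=10000)

def interpolate_concentration(signal, a, b, c, d):
    """Invert the 4PL curve to estimate concentration from a measured signal."""
    return c * ((a - d) / (signal - d) - 1.0) ** (1.0 / b)

# Estimate the analyte concentration of an unknown sample from its signal
unknown_signal = 0.60
print(interpolate_concentration(unknown_signal, *params))
```

For a noncompetitive (sandwich-type) assay the fitted curve rises with concentration, as assumed here; for a competitive format the same machinery applies but the response falls as analyte increases.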
By 2012, the commercial immunoassay industry earned US$17 billion and was thought to have prospects of slow annual growth in the 2 to 3 percent range. [ 6 ] Immunoassays employ a variety of different labels to allow for detection of antibodies and antigens. Labels are typically chemically linked or conjugated to the desired antibody or antigen. Among the most popular labels used in immunoassays are enzymes . Immunoassays which employ enzymes are referred to as enzyme immunoassays (EIAs), of which enzyme-linked immunosorbent assays (ELISAs) and the enzyme multiplied immunoassay technique (EMIT) are the most common types. Enzymes used in ELISAs include horseradish peroxidase (HRP), alkaline phosphatase (AP) or glucose oxidase . These enzymes allow for detection often because they produce an observable color change in the presence of certain reagents. In some cases these enzymes are exposed to reagents which cause them to produce light or chemiluminescence. There are several types of ELISA: direct, indirect, sandwich, and competitive. [ 7 ] Radioactive isotopes can be incorporated into immunoassay reagents to produce a radioimmunoassay (RIA). Radioactivity emitted by bound antibody-antigen complexes can be easily detected using conventional methods. RIAs were some of the earliest immunoassays developed, but have fallen out of favor largely due to the difficulty and potential dangers presented by working with radioactivity. [ 8 ] [ 9 ] A newer approach to immunoassays involves combining real-time quantitative polymerase chain reaction (RT qPCR) and traditional immunoassay techniques. Called real-time immunoquantitative PCR (iqPCR), these assays use a DNA probe as the label. [ 10 ] [ 11 ] Fluorogenic reporters like phycoerythrin are used in a number of modern immunoassays. [ 12 ] Protein microarrays are a type of immunoassay that often employ fluorogenic reporters. [ 13 ] Some labels work via electrochemiluminescence (ECL), in which the label emits detectable light in response to electric current. [ 14 ] [ 15 ] While some kind of label is generally employed in immunoassays, there are certain kinds of assays which do not rely on labels, but instead employ detection methods that do not require modification or labeling of the components of the assay. Surface plasmon resonance is an example of a technique that can detect binding between an unlabeled antibody and antigens. [ 16 ] Another demonstrated label-free immunoassay involves measuring the change in resistance on an electrode as antigens bind to it. [ 17 ] Immunoassays can be run in a number of different formats. Generally, an immunoassay will fall into one of several categories depending on how it is run. [ 18 ] In a competitive, homogeneous immunoassay, unlabelled analyte in a sample competes with labelled analyte to bind an antibody. The amount of labelled, unbound analyte is then measured. In theory, the more analyte in the sample, the more labelled analyte gets displaced and then measured; hence, the amount of labelled, unbound analyte is proportional to the amount of analyte in the sample. In a competitive, heterogeneous immunoassay, as in the homogeneous format, unlabelled analyte in a sample competes with labelled analyte to bind an antibody; here, however, the labelled, unbound analyte is separated or washed away, and the remaining labelled, bound analyte is measured. In a noncompetitive format, the sample is mixed with labelled antibodies, which bind the targeted analyte. 
The unbound, labelled antibodies are washed away, and the bound, labelled antibodies are measured. The intensity of the signal is directly proportional to the amount of analyte in the sample. The analyte in the unknown sample is bound to the antibody site, then the labelled antibody is bound to the analyte. The amount of labelled antibody on the site is then measured. It will be directly proportional to the concentration of the analyte because the labelled antibody will not bind if the analyte is not present in the unknown sample. This type of immunoassay is also known as a sandwich assay as the analyte is "sandwiched" between two antibodies. A wide range of medical tests are immunoassays, called immunodiagnostics in this context. Many home pregnancy tests are immunoassays, which detect the pregnancy marker human chorionic gonadotropin . [ 20 ] More specifically, they are qualitative tests that detect whether hCG is present, using a lateral flow setup. [ 21 ] The COVID-19 rapid antigen test is also a qualitative, lateral-flow test. [ 22 ] Other clinical immunoassays are quantitative; they measure amounts. Immunoassays can measure levels of CK-MB to assess heart disease, insulin to assess hypoglycemia , prostate-specific antigen to detect prostate cancer , and some are also used for the detection and/or quantitative measurement of some pharmaceutical compounds (see Enzyme multiplied immunoassay technique for more details). [ 23 ] Drug testing also starts with a quick qualitative immunoassay. [ 24 ] Immunoassays are used in sports anti-doping laboratories to test athletes' blood samples for prohibited recombinant human growth hormone (rhGH, rGH, hGH, GH). [ 25 ] The photoacoustic immunoassay measures low-frequency acoustic signals generated by metal nanoparticle tags. Illuminated by modulated light at a plasmon resonance wavelength, the nanoparticles generate a strong acoustic signal, which can be measured using a microphone. [ 26 ] The photoacoustic immunoassay can be applied to lateral flow tests, which use colloidal nanoparticles. [ 27 ] "The Immunoassay Handbook", 3rd Edition, David Wild, Ed., Elsevier, 2008
https://en.wikipedia.org/wiki/Immunoassay
Immunocapitalism describes the ways in which disease outbreaks and the acquisition of immunity are leveraged for economic and political gain. The concept highlights the intersection of health, capitalism , and power , demonstrating how social and economic inequalities are exacerbated by epidemics. In some cases, individuals actively attempt to contract a disease in order to become immune to it, because of the resulting benefits to their socioeconomic status. [ 1 ] Santa Clara University anthropologist Mythri Jegathesan states the term was first coined by Stanford historian Kathryn Olivarius . [ 1 ] In her paper, "Immunity, Capital, and Power in Antebellum New Orleans," published in The American Historical Review in 2019, Olivarius examined 19th-century New Orleans , where yellow fever outbreaks were rampant. She further explored the concept in her 2022 book, Necropolis: Disease, Power, and Capitalism in the Cotton Kingdom . In both works, Olivarius argues that white New Orleans elites exploited the disease to their advantage, creating a system where immunity, or the lack thereof, became a form of capital. Those who were 'acclimated' to yellow fever, having survived the disease, were granted a form of social and economic capital, while the 'unacclimated', often marginalized groups, were exploited and seen as expendable. This historical context provides a framework for understanding how health inequalities can be exploited for economic and political gain. [ 2 ] [ 3 ] The term was further used during the COVID-19 pandemic , with Pete Mills, Assistant Director of the Britain-based Nuffield Council on Bioethics, declaring in a June 2020 report on the "Ethics of immunity testing" that "Economic incentives invite 'immunocapitalism'". [ 4 ] Mills highlights the economic factors that prioritize COVID-19 immunity status, explaining that employers might favor workers who are believed to be immune, as these individuals are perceived to be less likely to contract the virus or transmit it to colleagues or customers. [ 4 ]
https://en.wikipedia.org/wiki/Immunocapitalism
Immunochemistry is the study of the chemistry of the immune system . [ 1 ] This involves the study of the properties, functions, interactions and production of the chemical components of the immune system. It also includes the study of immune responses and the determination of immune materials/products by immunochemical assays. In addition, immunochemistry is the study of the identities and functions of the components of the immune system. Immunochemistry is also used to describe the application of immune system components, in particular antibodies, to chemically labelled antigen molecules for visualization. Various methods in immunochemistry have been developed and refined, and used in scientific study, from virology to molecular evolution . Immunochemical techniques include: enzyme-linked immunosorbent assay, immunoblotting (e.g., the Western blot assay), precipitation and agglutination reactions, immunoelectrophoresis, immunophenotyping, immunochromatographic assay and flow cytometry. One of the earliest examples of immunochemistry is the Wassermann test to detect syphilis . Svante Arrhenius was also one of the pioneers in the field; he published Immunochemistry in 1907, which described the application of the methods of physical chemistry to the study of the theory of toxins and antitoxins . Immunochemistry is also studied from the aspect of using antibodies to label epitopes of interest in cells ( immunocytochemistry ) or tissues ( immunohistochemistry ).
https://en.wikipedia.org/wiki/Immunochemistry
In immunology , immunocompetence is the ability of the body to produce a normal immune response following exposure to an antigen . Immunocompetence is the opposite of immunodeficiency (also known as immuno-incompetence or being immuno-compromised ). In reference to lymphocytes , immunocompetence means that a B cell or T cell is mature and can recognize antigens and allow a person to mount an immune response. In order for lymphocytes such as T cells to become immunocompetent, which refers to the ability of lymphocyte cell receptors to recognize MHC molecules, they must undergo positive selection. [ 1 ] Adaptive immunocompetence is regulated by growth hormone (GH), prolactin (PRL), and vasopressin (VP) – hormones secreted by the pituitary gland. [ 2 ]
https://en.wikipedia.org/wiki/Immunocompetence
Immunoconjugates are antibodies conjugated (joined) to a second molecule, usually a toxin , radioisotope or label . [ 1 ] These conjugates are used in immunotherapy [ citation needed ] and to develop monoclonal antibody therapy as a targeted form of chemotherapy , [ 2 ] in which case they are often known as antibody-drug conjugates . When the conjugates include a radioisotope, see radioimmunotherapy ; when they include a toxin, see immunotoxin .
https://en.wikipedia.org/wiki/Immunoconjugate
Immunocytochemistry ( ICC ) is a common laboratory technique that is used to anatomically visualize the localization of a specific protein or antigen in cells by use of a specific primary antibody that binds to it. The primary antibody allows visualization of the protein under a fluorescence microscope when it is bound by a secondary antibody that has a conjugated fluorophore . ICC allows researchers to evaluate whether or not cells in a particular sample express the antigen [ 1 ] in question. In cases where an immunopositive signal is found, ICC also allows researchers to determine which sub-cellular compartments are expressing the antigen. Immunocytochemistry differs from immunohistochemistry [ 2 ] in that the former is performed on samples of intact cells that have had most, if not all, of their surrounding extracellular matrix removed. [ citation needed ] This includes individual cells that have been isolated from a block of solid tissue, cells grown within a culture , cells deposited from suspension , or cells taken from a smear . In contrast, immunohistochemical samples are sections of biological tissue , where each cell is surrounded by tissue architecture and other cells normally found in the intact tissue. Immunocytochemistry is a technique used to assess the presence of a specific protein or antigen in cells (cultured cells, cell suspensions) by use of a specific antibody, which binds to it, thereby allowing visualization and examination under a microscope. It is a valuable tool for the determination of cellular contents from individual cells. Samples that can be analyzed include blood smears, aspirates, swabs, cultured cells, and cell suspensions. There are many ways to prepare cell samples for immunocytochemical analysis. Each method has its own strengths and unique characteristics, so the right method can be chosen for the desired sample and outcome. Cells to be stained can be attached to a solid support to allow easy handling in subsequent procedures. This can be achieved by several methods: adherent cells may be grown on microscope slides, coverslips, or an optically suitable plastic support. Suspension cells can be centrifuged onto glass slides ( cytospin ), bound to solid support using chemical linkers, or in some cases handled in suspension. Concentrated cellular suspensions that exist in a low-viscosity medium make good candidates for smear preparations. Dilute cell suspensions in a dilute medium are best suited for the preparation of cytospins through cytocentrifugation. Cell suspensions that exist in a high-viscosity medium are best suited to be tested as swab preparations. The constant among these preparations is that the whole cell is present on the slide surface. For any intracellular reaction to take place, immunoglobulin must first traverse the cell membrane, which is intact in these preparations. Reactions taking place in the nucleus can be more difficult, and the extracellular fluids can create unique obstacles in the performance of immunocytochemistry. In this situation, permeabilizing cells using detergent (Triton X-100 or Tween-20) or choosing organic fixatives (acetone, methanol, or ethanol) becomes necessary. Antibodies are an important tool for demonstrating both the presence and the subcellular localization of an antigen. Cell staining is a very versatile technique and, if the antigen is highly localized, can detect as few as a thousand antigen molecules in a cell. 
In some circumstances, cell staining may also be used to determine the approximate concentration of an antigen, especially by an image analyzer. There are many methods to obtain immunological detection on tissues, including those tied directly to primary antibodies or antisera. A direct method involves the use of a detectable tag (e.g., a fluorescent molecule, gold particles, etc.) attached directly to the antibody, [ 3 ] which is then allowed to bind to the antigen (e.g., protein) in a cell. Alternatively, there are many indirect methods . In one such method, the antigen is bound by a primary antibody whose signal is then amplified by use of a secondary antibody which binds to the primary antibody. Next, a tertiary reagent containing an enzymatic moiety is applied and binds to the secondary antibody. When the quaternary reagent, or substrate, is applied, the enzymatic end of the tertiary reagent converts the substrate into a pigment reaction product, which produces a color (many colors are possible: brown, black, red, etc.) in the same location where the original primary antibody recognized the antigen of interest. Some examples of substrates used (also known as chromogens) are AEC (3-Amino-9-EthylCarbazole) and DAB ( 3,3'-Diaminobenzidine ). Use of one of these reagents after exposure to the necessary enzyme (e.g., horseradish peroxidase conjugated to an antibody reagent) produces a positive immunoreaction product. Immunocytochemical visualization of specific antigens of interest can be used when a less specific stain like H&E (Hematoxylin and Eosin) cannot be used for a diagnosis to be made or to provide additional predictive information regarding treatment (in some cancers, for example). Alternatively, the secondary antibody may be covalently linked to a fluorophore ( FITC and Rhodamine are the most common), which is detected in a fluorescence or confocal microscope. The location of fluorescence will vary according to the target molecule: external for membrane proteins and internal for cytoplasmic proteins. In this way, immunofluorescence is a powerful technique when combined with confocal microscopy for studying the location of proteins and dynamic processes ( exocytosis , endocytosis , etc.).
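The approximate quantification "by an image analyzer" mentioned above typically amounts to segmenting the stained cells in a digital image and averaging the label intensity within each cell. The following is a minimal sketch of that idea, assuming a single-channel fluorescence image already loaded as a NumPy array and using the scikit-image library; the threshold choice, size filter, and synthetic example image are illustrative assumptions, not a procedure described in this article.

```python
import numpy as np
from skimage import filters, measure, morphology

def per_cell_mean_intensity(image: np.ndarray) -> list[float]:
    """Segment stained cells by global thresholding and report the mean
    fluorescence intensity of each segmented object."""
    # Otsu threshold separates stained objects from background
    threshold = filters.threshold_otsu(image)
    mask = image > threshold
    # Remove small speckles that are unlikely to be whole cells
    mask = morphology.remove_small_objects(mask, min_size=50)
    # Label connected components (one label per putative cell)
    labels = measure.label(mask)
    regions = measure.regionprops(labels, intensity_image=image)
    return [region.mean_intensity for region in regions]

# Example with a synthetic image: two bright "cells" on a dark background
img = np.zeros((100, 100), dtype=float)
img[10:30, 10:30] = 0.8   # hypothetical strongly stained cell
img[60:80, 60:80] = 0.4   # hypothetical weakly stained cell
print(per_cell_mean_intensity(img))
```

In practice, per-cell intensities obtained this way are compared against control samples rather than interpreted as absolute antigen concentrations.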
https://en.wikipedia.org/wiki/Immunocytochemistry
Immunodermatology studies the skin as an organ of immunity in health and disease. Several areas receive special attention, such as photo-immunology (the effects of UV light on skin defense), inflammatory diseases such as hidradenitis suppurativa , allergic contact dermatitis and atopic eczema , presumably autoimmune skin diseases such as vitiligo and psoriasis , and finally the immunology of microbial skin diseases such as retrovirus infections and leprosy . New therapies in development for the immunomodulation of common immunological skin diseases include biologicals aimed at neutralizing TNF-alpha and chemokine receptor inhibitors. [ citation needed ] A number of universities currently offer programs in immunodermatology.
https://en.wikipedia.org/wiki/Immunodermatology
Immunodiffusion is a laboratory technique used to detect and quantify antigens and antibodies by observing their interactions within a gel medium. [ 1 ] This technique involves the diffusion of antigens and antibodies through a gel, usually agar , resulting in the formation of a visible precipitate when they interact. [ 1 ] [ 2 ] Immunodiffusion techniques are widely used in immunology for a variety of purposes. [ 1 ] [ 2 ] In single radial immunodiffusion, antibodies are uniformly distributed in an agar gel, and the antigen sample is placed in wells cut into the gel. As the antigen diffuses radially, it forms a precipitation ring with the antibody. The diameter of this ring corresponds to the concentration of the antigen in the solution. [ 3 ] [ 2 ] In double immunodiffusion, both antigen and antibody diffuse through the gel from separate wells, forming precipitation lines where they meet and react. [ 4 ]
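For single radial immunodiffusion, the statement that the ring diameter "corresponds to the concentration of the antigen" is classically made quantitative by the end-point (Mancini) relation, in which the square of the ring diameter varies approximately linearly with antigen concentration. The expression below is a standard textbook form rather than something taken from this article, and the symbols are illustrative:

$$d^{2} \approx k\,c + d_{0}^{2}$$

where $c$ is the antigen concentration, $d$ the measured ring diameter, $d_{0}$ the well diameter, and $k$ a constant determined from calibrators run on the same plate. Plotting $d^{2}$ against the known calibrator concentrations gives a standard line from which unknown samples can be read by interpolation.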
https://en.wikipedia.org/wiki/Immunodiffusion
Immunodominance is the immunological phenomenon in which immune responses are mounted against only a few of the antigenic peptides out of the many produced. [ 1 ] That is, despite multiple allelic variations of MHC molecules and multiple peptides presented on antigen presenting cells, the immune response is skewed to only specific combinations of the two. [ 1 ] Immunodominance is evident for both antibody-mediated immunity and cell-mediated immunity. [ 2 ] [ 3 ] Epitopes that are not targeted, or targeted to a lower degree, during an immune response are known as subdominant epitopes. [ 1 ] [ 2 ] The impact of immunodominance is immunodomination, where immunodominant epitopes will curtail immune responses against non-dominant epitopes. [ 4 ] Antigen-presenting cells, such as dendritic cells, can have up to six different types of MHC molecules for antigen presentation. [ 1 ] There is a potential for generation of hundreds to thousands of different peptides from the proteins of pathogens. [ 1 ] Yet, the effector cell population that is reactive against the pathogen is dominated by cells that recognize only a certain class of MHC bound to only certain pathogen-derived peptides presented by that MHC class. [ 1 ] Antigens from a particular pathogen can be of variable immunogenicity, with the antigen that stimulates the strongest response being the immunodominant one. The different levels of immunogenicity amongst antigens form what is known as a dominance hierarchy. [ 2 ] The mechanisms of immunodominance are very poorly understood. [ 1 ] [ 2 ] What determines cytotoxic T lymphocyte (CTL) immunodominance can be a number of factors, many of which are debated. [ 1 ] Of these, one in particular focuses on the timing of CTL clonal expansion. [ 2 ] [ 3 ] The dominant CTLs that arise were activated sooner and therefore proliferate faster than subdominant CTLs that were activated later, resulting in a greater number of CTLs for the immunodominant epitope. [ 2 ] This is consistent with an additional theory, which states that immunodominance may depend on the affinity of the T-cell receptor (TCR) for the immunodominant epitope. [ 4 ] That is, T cells with a TCR that has high affinity for its antigen are most likely to be immunodominant. [ 4 ] High affinity of the peptide for the TCR contributes to the T cell's survival and proliferation, allowing for more clonal selection of the immunodominant T cells over the subdominant T cells. [ 4 ] Immunodominant T cells also curtail subdominant T cells by outcompeting them for cytokine sources from antigen-presenting cells. [ 4 ] This leads to a greater expansion of the T cells that recognize a high-affinity epitope and is favoured since these cells are likely to clear the infection much more quickly and effectively than their subdominant counterparts. It is important to note, however, that immunodominance is a relative term. If subdominant epitopes are introduced without the dominant epitope, the immune response will be focused on that subdominant epitope. [ 4 ] Meanwhile, if the dominant epitope is introduced with the subdominant epitope, the immune response will be directed against the dominant epitope while silencing the response against the subdominant epitope. [ 4 ] The mechanism of immunodominance in B cell activation focuses on the affinity of epitope binding to the B-cell receptor (BCR). 
If an epitope binds very strongly to a B cell's BCR, it will subsequently bind with high affinity to the resultant antibodies produced by that B cell upon activation. These antibodies then out-compete the BCR for the epitope, and thus that B cell lineage will be unavailable for subsequent stimulation. [ 2 ] At the opposite end of the scale, where BCRs have low affinity for their epitopes, these B cells are outcompeted for stimulation by B cells with BCRs that have higher affinities for their respective epitopes. [ 2 ] Insufficient T cell stimulation by these B cells also leads to suppression of these B cells by the T cells. [ 2 ] The immunodominant epitope will therefore be the one bound by a BCR with a particular 'goldilocks' level of affinity, determined by the equilibrium binding affinity. [ 2 ] This leads to an initial IgM response directed at the strongly binding epitope and a subsequent IgG response focused on the immunodominant epitope. [ 2 ] That is, B cells within the 'goldilocks zone' of affinity will be available for subsequent T helper stimulation, allowing for class switching and affinity maturation and thus resulting in immunodominance of that particular epitope. [ 2 ] Having the immune response focused on a specific immunodominant epitope is useful because it allows the strongest immune response against a certain pathogen to dominate, thus eliminating the pathogen quickly and effectively. [ 4 ] However, it can also be a hindrance because of potential pathogen escape. [ 4 ] In the case of HIV, immunodominance can be unfavourable because of the high mutation rate of HIV. [ 1 ] The immunodominant epitope can be mutated in the virus, thus allowing HIV to avoid the adaptive immune response when reintroduced from latency. [ 1 ] This is why the disease perpetuates, as the virus mutates to avoid the antibodies and T cells specific for the immunodominant epitope that is no longer expressed by the virus. [ 1 ] Immunodominance can also have implications in cancer immunotherapy. Similar to HIV escape, cancer can escape the immune system's detection by antigenic variation . [ 4 ] As the immunodominant epitope is mutated and/or lost in the cancer, the immune response no longer has an effective target, allowing the tumour to evade detection. Immunodominance also has implications in vaccine development. Immunodominant epitopes vary from person to person. [ 5 ] This phenomenon is due to the variability of HLA types, which make up the MHC molecules that present the immunodominant epitopes. [ 5 ] Therefore, people with different alleles may respond to different epitopes of the same pathogen. In vaccine development, particularly for subunit-based and recombinant vaccines, this may lead to some individuals with a different HLA haplotype not responding while others do. [ 5 ] Immunodominance is influenced by evolutionary pressures that shape the immune system's capacity to respond effectively to diverse pathogens. Pathogen-host coevolution drives the selection of immune responses that are most effective at clearing infections, which may result in certain epitopes being preferentially targeted by the immune system. [ 6 ] Pathogens can also exploit immunodominance by evolving escape mutations in dominant epitopes to evade immune detection. This phenomenon has been well documented in chronic viral infections such as HIV and hepatitis C virus (HCV), where mutations frequently arise in immunodominant epitopes recognized by cytotoxic T lymphocytes (CTLs). 
[ 7 ] Conversely, the immune system may maintain dominance hierarchies that are evolutionarily conserved due to their effectiveness against common pathogens. For example, studies have shown that certain epitopes of influenza virus are consistently recognized by the immune system across diverse human populations, suggesting that these epitopes are evolutionarily important targets. [ 8 ] Furthermore, the evolutionary pressure exerted by the immune system on pathogens can drive antigenic variation and epitope diversification, a common strategy employed by rapidly evolving pathogens such as HIV and influenza. [ 9 ] The phenomenon of immunodominance also has implications for vaccine design. Pathogens may preferentially mutate immunodominant epitopes, making it essential to consider both dominant and subdominant epitopes when designing broad and effective vaccines.
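The role that receptor affinity plays in the dominance hierarchies discussed above can be made concrete with the standard equilibrium-binding relation; the expression below is a textbook formula and is not given in this article. For a receptor with dissociation constant $K_d$ and free antigen (or peptide–MHC) concentration $[\mathrm{Ag}]$, the fraction of receptor occupied at equilibrium is

$$\theta = \frac{[\mathrm{Ag}]}{K_d + [\mathrm{Ag}]},$$

so at a given antigen dose, clones whose receptors have a lower $K_d$ (higher affinity) are occupied, and therefore stimulated, to a greater extent. For B cells, the 'goldilocks' argument above adds the caveat that affinity can also be too high, since the secreted antibody then outcompetes the BCR for the same epitope.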
https://en.wikipedia.org/wiki/Immunodominance