In mathematics , the remainder is the amount "left over" after performing some computation. In arithmetic , the remainder is the integer "left over" after dividing one integer by another to produce an integer quotient ( integer division ). In algebra of polynomials, the remainder is the polynomial "left over" after dividing one polynomial by another. The modulo operation is the operation that produces such a remainder when given a dividend and divisor.
Alternatively, a remainder is also what is left after subtracting one number from another, although this is more precisely called the difference . This usage can be found in some elementary textbooks; colloquially it is replaced by the expression "the rest" as in "Give me two dollars back and keep the rest." [ 1 ] However, the term "remainder" is still used in this sense when a function is approximated by a series expansion , where the error expression ("the rest") is referred to as the remainder term .
Given an integer a and a non-zero integer d , it can be shown that there exist unique integers q and r , such that a = qd + r and 0 ≤ r < | d | . The number q is called the quotient , while r is called the remainder .
(For a proof of this result, see Euclidean division . For algorithms describing how to calculate the remainder, see Division algorithm .)
The remainder, as defined above, is called the least positive remainder or simply the remainder . [ 2 ]
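For readers who want to experiment, here is a minimal Python sketch (an illustration, not part of the article's sources) that returns the quotient and least positive remainder for any non-zero divisor, including negative ones:

```python
# Euclidean division: return (q, r) with a == q*d + r and 0 <= r < |d|.
# Python's built-in divmod gives a remainder with the sign of the divisor,
# so a small correction is needed when d is negative.
def euclid_divmod(a: int, d: int) -> tuple[int, int]:
    q, r = divmod(a, d)
    if r < 0:                 # can only happen when d < 0
        q, r = q + 1, r - d   # shift r into the range [0, |d|)
    return q, r

assert euclid_divmod(43, 5) == (8, 3)      # 43 == 8*5 + 3
assert euclid_divmod(43, -5) == (-8, 3)    # 43 == (-8)*(-5) + 3
```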
On some occasions, it is convenient to carry out the division so that a is as close to an integral multiple of d as possible, that is, we can write a = kd + s, with |s| ≤ |d|/2, for some integer k.
In this case, s is called the least absolute remainder . [ 3 ] As with the quotient and remainder, k and s are uniquely determined, except in the case where d = 2 n and s = ± n . For this exception, we have: a = kd + n = ( k + 1) d − n .
A unique remainder can be obtained in this case by some convention—such as always taking the positive value of s .
In the division of 43 by 5, we have: 43 = 8 × 5 + 3,
so 3 is the least positive remainder. We also have that: 43 = 9 × 5 − 2,
and −2 is the least absolute remainder.
These definitions are also valid if d is negative, for example, in the division of 43 by −5, 43 = (−8) × (−5) + 3,
and 3 is the least positive remainder, while 43 = (−9) × (−5) − 2,
and −2 is the least absolute remainder.
In the division of 42 by 5, we have: 42 = 8 × 5 + 2,
and since 2 < 5/2, 2 is both the least positive remainder and the least absolute remainder.
In these examples, the (negative) least absolute remainder is obtained from the least positive remainder by subtracting 5, which is d . This holds in general. When dividing by d , either both remainders are positive and therefore equal, or they have opposite signs. If the positive remainder is r 1 , and the negative one is r 2 , then r 2 = r 1 − d .
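A short Python sketch (again illustrative only) computes the least absolute remainder and shows the relation just stated; abs(d) is used so that the same code also covers negative divisors:

```python
def least_absolute_remainder(a: int, d: int) -> int:
    r1 = a % abs(d)                    # least positive remainder
    r2 = r1 - abs(d)                   # the corresponding negative remainder
    return r1 if r1 <= -r2 else r2     # on a tie, take the positive value

assert least_absolute_remainder(43, 5) == -2   # 43 == 9*5 - 2
assert least_absolute_remainder(42, 5) == 2    # 2 < 5/2, so r1 is kept
```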
When a and d are floating-point numbers , with d non-zero, a can be divided by d without remainder, with the quotient being another floating-point number. If the quotient is constrained to being an integer, however, the concept of remainder is still necessary. It can be proved that there exists a unique integer quotient q and a unique floating-point remainder r such that a = qd + r with 0 ≤ r < | d | .
Extending the definition of remainder for floating-point numbers, as described above, is not of theoretical importance in mathematics; however, many programming languages implement this definition (see Modulo operation ).
While there are no difficulties inherent in the definitions, there are implementation issues that arise when negative numbers are involved in calculating remainders, and different programming languages have adopted different conventions for the sign of the result.
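Python alone can illustrate two of these conventions (the behavior shown is documented Python semantics; other languages differ): the % operator gives the remainder the sign of the divisor, while math.fmod, like C's fmod, gives it the sign of the dividend.

```python
import math

print(-7 % 3)             # 2    (floored division: remainder takes the divisor's sign)
print(7 % -3)             # -2
print(math.fmod(-7, 3))   # -1.0 (truncated division: remainder takes the dividend's sign)
print(math.fmod(7, -3))   # 1.0
```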
Euclidean division of polynomials is very similar to Euclidean division of integers and leads to polynomial remainders. Its existence is based on the following theorem: Given two univariate polynomials a ( x ) and b ( x ) (where b ( x ) is a non-zero polynomial) defined over a field (in particular, the reals or complex numbers ), there exist two polynomials q ( x ) (the quotient ) and r ( x ) (the remainder ) which satisfy: [ 7 ] a ( x ) = b ( x ) q ( x ) + r ( x ),
where r ( x ) = 0 or deg( r ) < deg( b ),
where "deg(...)" denotes the degree of the polynomial (the degree of the constant polynomial whose value is always 0 can be defined to be negative, so that this degree condition will always be valid when this is the remainder). Moreover, q ( x ) and r ( x ) are uniquely determined by these relations.
This differs from the Euclidean division of integers in that, for the integers, the degree condition is replaced by the bounds on the remainder r (non-negative and less than the divisor, which ensures that r is unique). The similarity between Euclidean division for integers and that for polynomials motivates the search for the most general algebraic setting in which Euclidean division is valid. The rings for which such a theorem exists are called Euclidean domains , but in this generality, uniqueness of the quotient and remainder is not guaranteed. [ 8 ]
Polynomial division leads to a result known as the polynomial remainder theorem : If a polynomial f ( x ) is divided by x − k , the remainder is the constant r = f ( k ) . [ 9 ] [ 10 ] | https://en.wikipedia.org/wiki/Remainder |
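As a quick sanity check of the polynomial remainder theorem just stated, the following self-contained Python sketch performs synthetic division by x − k via Horner's scheme; the cubic used is an arbitrary illustrative example.

```python
def divide_by_linear(coeffs, k):
    """Divide the polynomial with the given coefficients (highest degree
    first) by (x - k); return (quotient coefficients, remainder)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + k * out[-1])   # Horner's scheme
    return out[:-1], out[-1]

f = [1, -3, 0, 5]                     # f(x) = x^3 - 3x^2 + 5
quotient, r = divide_by_linear(f, 2)
assert r == 2**3 - 3 * 2**2 + 5       # remainder equals f(2) = 1
```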
Remanence or remanent magnetization or residual magnetism is the magnetization left behind in a ferromagnetic material (such as iron ) after an external magnetic field is removed. [ 1 ] Colloquially, when a magnet is "magnetized", it has remanence. [ 2 ] The remanence of magnetic materials provides the magnetic memory in magnetic storage devices, and is used as a source of information on the past Earth's magnetic field in paleomagnetism . The word remanence is from remanent + -ence, meaning "that which remains". [ 3 ]
The equivalent term residual magnetization is generally used in engineering applications. In transformers , electric motors and generators a large residual magnetization is not desirable (see also electrical steel ) as it is an unwanted contamination, for example, a magnetization remaining in an electromagnet after the current in the coil is turned off. Where it is unwanted, it can be removed by degaussing .
Sometimes the term retentivity is used for remanence measured in units of magnetic flux density . [ 4 ]
The default definition of magnetic remanence is the magnetization remaining in zero field after a large magnetic field is applied (enough to achieve saturation ). [ 1 ] The effect of a magnetic hysteresis loop is measured using instruments such as a vibrating sample magnetometer ; and the zero-field intercept is a measure of the remanence. In physics this measure is converted to an average magnetization (the total magnetic moment divided by the volume of the sample) and denoted in equations as M r . If it must be distinguished from other kinds of remanence, then it is called the saturation remanence or saturation isothermal remanence (SIRM) and denoted by M rs .
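In practice, M r is read off as the zero-field intercept of the measured loop. A minimal Python sketch of that step follows; the field and magnetization arrays are invented placeholder numbers, not real measurements.

```python
import numpy as np

# Descending branch of a (made-up) hysteresis loop: field H and magnetization M.
H = np.array([800.0, 400.0, 100.0, -100.0, -400.0])
M = np.array([950.0, 930.0, 880.0,  820.0,  600.0])

# np.interp needs ascending x, so sort by field, then interpolate M at H = 0.
order = np.argsort(H)
M_r = np.interp(0.0, H[order], M[order])
print(M_r)   # 850.0 here: the remanent magnetization of this toy loop
```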
In engineering applications the residual magnetization is often measured using a B-H analyzer , which measures the response to an AC magnetic field (as in Fig. 1). This is represented by a flux density B r . This value of remanence is one of the most important parameters characterizing permanent magnets ; it measures the strongest magnetic field they can produce. Neodymium magnets , for example, have a remanence approximately equal to 1.3 Tesla .
Often a single measure of remanence does not provide adequate information on a magnet. For example, magnetic tapes contain a large number of small magnetic particles (see magnetic storage ), and these particles are not identical. Magnetic minerals in rocks may have a wide range of magnetic properties (see rock magnetism ). One way to look inside these materials is to add or subtract small increments of remanence: first demagnetize the magnet in an AC field, and then apply a field H and remove it. This remanence, denoted by M r ( H ), depends on the field. [ 5 ] It is called the initial remanence [ 6 ] or the isothermal remanent magnetization (IRM) . [ 7 ]
Another kind of IRM can be obtained by first giving the magnet a saturation remanence in one direction and then applying and removing a magnetic field in the opposite direction. [ 5 ] This is called demagnetization remanence or DC demagnetization remanence and is denoted by symbols like M d ( H ), where H is the magnitude of the field. [ 8 ] Yet another kind of remanence can be obtained by demagnetizing the saturation remanence in an ac field. This is called AC demagnetization remanence or alternating field demagnetization remanence and is denoted by symbols like M af ( H ).
If the particles are noninteracting single-domain particles with uniaxial anisotropy , there are simple linear relations between the remanences, [ 5 ] such as the Wohlfarth relation M d ( H ) = M rs − 2 M r ( H ).
Another kind of laboratory remanence is anhysteretic remanence or anhysteretic remanent magnetization (ARM) . This is induced by exposing a magnet to a large alternating field plus a small DC bias field. The amplitude of the alternating field is gradually reduced to zero to get an anhysteretic magnetization , and then the bias field is removed to get the remanence. The anhysteretic magnetization curve is often close to an average of the two branches of the hysteresis loop , [ 9 ] and is assumed in some models to represent the lowest-energy state for a given field. [ 10 ] There are several ways for experimental measurement of the anhysteretic magnetization curve, based on fluxmeters and DC biased demagnetization. [ 11 ] ARM has also been studied because of its similarity to the write process in some magnetic recording technology [ 12 ] and to the acquisition of natural remanent magnetization in rocks. [ 13 ] | https://en.wikipedia.org/wiki/Remanence |
Remarks on the Foundations of Mathematics ( German : Bemerkungen über die Grundlagen der Mathematik ) is a book of Ludwig Wittgenstein 's notes on the philosophy of mathematics . It was translated from German to English by G.E.M. Anscombe , edited by G.H. von Wright and Rush Rhees , [ 1 ] and first published in 1956. The text has been produced from passages in various sources by selection and editing. The notes were written during the years 1937–1944, and a few passages are incorporated in the Philosophical Investigations , which was composed later.
When the book appeared it received many negative reviews [ 2 ] mostly from working logicians and mathematicians, among them Michael Dummett , Paul Bernays , and Georg Kreisel . [ 3 ] Kreisel's scathing review received particular attention although he later distanced himself from it. [ 4 ]
In later years however it received more positive reviews. [ 5 ] [ 6 ] Today Remarks on the Foundations of Mathematics is read mostly by philosophers sympathetic to Wittgenstein and they tend to adopt a more positive stance. [ 7 ]
Wittgenstein's philosophy of mathematics is expounded chiefly through simple examples on which further skeptical comments are made. The text offers an extended analysis of the concept of mathematical proof and an exploration of Wittgenstein's contention that philosophical considerations introduce false problems in mathematics. Wittgenstein in the Remarks adopts an attitude of doubt in opposition to much orthodoxy in the philosophy of mathematics.
Particularly controversial in the Remarks was Wittgenstein's "notorious paragraph", which contained an unusual commentary on Gödel's incompleteness theorems . Multiple commentators read Wittgenstein as misunderstanding Gödel . In 2000 Juliet Floyd and Hilary Putnam suggested that the majority of commentary misunderstands Wittgenstein, but their interpretation [ 8 ] has not been met with approval. [ 9 ] [ 10 ]
Wittgenstein wrote:
I imagine someone asking my advice; he says: "I have constructed a proposition (I will use 'P' to designate it) in Russell's symbolism, and by means of certain definitions and transformations it can be so interpreted that it says: 'P is not provable in Russell's system'. Must I not say that this proposition on the one hand is true, and on the other hand unprovable? For suppose it were false; then it is true that it is provable. And that surely cannot be! And if it is proved, then it is proved that it is not provable. Thus it can only be true, but unprovable."
Just as we can ask, " 'Provable' in what system?," so we must also ask, "'True' in what system?" "True in Russell's system" means, as was said, proved in Russell's system, and "false" in Russell's system means the opposite has been proved in Russell's system.—Now, what does your "suppose it is false" mean? In the Russell sense it means, "suppose the opposite is proved in Russell's system"; if that is your assumption you will now presumably give up the interpretation that it is unprovable. And by "this interpretation" I understand the translation into this English sentence.—If you assume that the proposition is provable in Russell's system, that means it is true in the Russell sense, and the interpretation "P is not provable" again has to be given up. If you assume that the proposition is true in the Russell sense, the same thing follows. Further: if the proposition is supposed to be false in some other than the Russell sense, then it does not contradict this for it to be proved in Russell's system. (What is called "losing" in chess may constitute winning in another game.) [ 11 ]
The debate has centered on the so-called Key Claim : if one assumes that P is provable in PM, then one should give up the "translation" of P by the English sentence "P is not provable".
Wittgenstein does not mention Kurt Gödel by name; Gödel was a member of the Vienna Circle during the period in which Wittgenstein's early ideal language philosophy and the Tractatus Logico-Philosophicus dominated the circle's thinking. Multiple writings of Gödel in his Nachlass express antipathy for Wittgenstein and the belief that Wittgenstein wilfully misread the theorems. [ 12 ] Some commentators, such as Rebecca Goldstein , have hypothesized that Gödel developed his logical theorems in opposition to Wittgenstein. [ 12 ] | https://en.wikipedia.org/wiki/Remarks_on_the_Foundations_of_Mathematics |
A remaster is a change in the sound or image quality of previously created forms of media, whether audiophonic , cinematic , or videographic . The resulting product is said to be remastered . The terms digital remastering and digitally remastered are also used.
In a wider sense, remastering a product may involve other, typically smaller, inclusions or changes to the content itself. Remasters are distinguished from remakes , which are new productions based on the original.
A master recording is the definitive recording version that will be replicated for the end user, commonly into other formats (e.g. LP records , tapes , CDs , DVDs , Blu-rays , etc.).
A batch of copies is often made from a single original master recording, which might itself be based on previous recordings. For example, sound effects (e.g. a door opening, punching sounds, falling down the stairs, a bell ringing) might have been added from copies of sound effect tapes similar to modern sampling to make a radio play for broadcast.
Problematically, several different levels of masters often exist for any one audio release. As an example, consider the way a typical music album from the 1960s was created. Musicians and vocalists were recorded on multi-track tape . This tape was mixed to create a stereo or mono master. A further master tape would likely be created from this original master recording, incorporating equalization and other adjustments and improvements intended to make the audio sound better on record players, for example.
More master recordings would be duplicated from the equalized master for regional copying purposes (for example to send to several pressing plants). Pressing masters for vinyl recordings would be created. Often these interim recordings were referred to as mother tapes . All vinyl records would derive from one of the master recordings.
Thus, mastering refers to the process of creating a master. This might be as simple as copying a tape for further duplication purposes or might include the actual equalization and processing steps used to fine-tune material for release. The latter example usually requires the work of mastering engineers .
With the advent of digital recording in the late 1970s, many mastering ideas changed. Previously, creating new masters meant incurring an analog generational loss; in other words, copying a tape to a tape reduced the signal-to-noise ratio , that is, the proportion of the original intended "good" information relative to the faults added to the recording by the technical limitations of the equipment (noise, e.g. tape hiss , static, etc.). Although noise reduction techniques exist, they also increase other audio distortions such as azimuth shift, wow and flutter , print-through and stereo image shift.
With digital recording, masters could be created and duplicated without incurring the usual generational loss. As CDs were a digital format, digital masters created from original analog recordings became a necessity.
Remastering is the process of making a new master for an album, [ 1 ] film, or any other creation. It tends to refer to the process of porting a recording from an analog medium to a digital one, but this is not always the case. [ citation needed ]
For example, a vinyl LP – originally pressed from a worn-out pressing master many tape generations removed from the "original" master recording – could be remastered and re-pressed from a better-condition tape. All CDs created from analog sources are technically digitally remastered.
The process of creating a digital transfer of an analog tape remasters the material in the digital domain, even if no equalization, compression , or other processing is done to the material. Ideally, because of their higher resolution, a CD or DVD (or even higher quality like high-resolution audio or hi-def video ) release should come from the best source possible, with the most care taken during its transfer. [ citation needed ]
Additionally, the earliest days of the CD era found digital technology in its infancy, which sometimes resulted in poor-sounding digital transfers. The early DVD era was not much different, with copies of films frequently being produced from worn prints, with low bitrates and muffled audio. [ citation needed ] When the first CD remasters turned out to be bestsellers, companies soon realized that new editions of back-catalog items could compete with new releases as a source of revenue. Back-catalog values skyrocketed, and today it is not unusual to see expanded and remastered editions of relatively modern albums.
Master tapes, or something close to them, can be used to make CD releases. Better processing choices can be used. Better prints can be utilized, with sound elements remixed to 5.1 surround sound and obvious print flaws digitally corrected. The modern era gives publishers almost unlimited ways to touch up, doctor, and "improve" their media, and as each release promises improved sound, video, extras, and other features, producers hope these upgrades will entice people into making a purchase.
Remastering music for CD or digital distribution starts with locating the original analog version. [ 2 ] The next step involves digitizing the track or tracks so they can be edited using a computer. Then the track order is chosen. Engineers often worry about this step because if the track order is not right, the album may seem sonically unbalanced. [ 2 ]
When the remastering starts, engineers use software tools such as a limiter, an equalizer, and a compressor. Compressors and limiters are tools for controlling the loudness of a track. [ 2 ] This is not to be confused with the volume of a track, which is controlled by the listener during playback.
The dynamic range of an audio track is measured by calculating the variation between the loudest and the quietest parts of a track. [ 2 ] In recording studios, loudness is measured in negative decibels, with zero designating the loudest recordable sound. A limiter works by placing a cap on the loudest parts; if that cap is exceeded, the level is automatically reduced by a ratio preset by the engineer. [ 2 ]
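A minimal sketch of such a limiter in Python follows. It is a simplification: the threshold and ratio are arbitrary illustrative parameters, and real limiters also apply attack and release smoothing rather than acting per sample.

```python
import numpy as np

def limit(samples, threshold_db=-6.0, ratio=10.0):
    # per-sample level in dB relative to full scale (0 dB = loudest sample)
    level_db = 20 * np.log10(np.maximum(np.abs(samples), 1e-12))
    over = np.maximum(level_db - threshold_db, 0.0)   # dB above the cap
    gain_db = -over * (1.0 - 1.0 / ratio)             # pull peaks down by the ratio
    return samples * 10 ** (gain_db / 20)

x = np.array([0.05, 0.3, 0.9, -1.0])
print(limit(x))   # quiet samples pass unchanged; loud peaks are attenuated
```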
Remastered audio has been the subject of criticism. [ 3 ] [ 4 ] Many remastered CDs from the late 1990s onwards have been affected by the " loudness war ", where the average volume of the recording is increased and dynamic range is compressed at the expense of clarity, making the remastered version sound louder at regular listening volume and more distorted than an uncompressed version. [ 3 ] [ 4 ] Some have also criticized the overuse of noise reduction in the remastering process, as it affects not only the noise, but the signal too, and can leave audible artifacts. [ 5 ] [ 6 ] Equalisation can change the character of a recording noticeably. As EQ decisions are a matter of taste to some degree, they are often the subject of criticism. Mastering engineers such as Steve Hoffman have noted that using flat EQ on a mastering allows listeners to adjust the EQ on their equipment to their own preference, but mastering a release with a certain EQ means that it may not be possible to get a recording to sound right on high-end equipment. [ 3 ] [ 4 ] Additionally, from an artistic point of view, original mastering involved the original artist, but remastering often does not. Therefore, a remastered record may not sound how the artist originally intended. [ citation needed ]
To remaster a film digitally for DVD and Blu-ray , digital restoration operators must scan in the film frame by frame at a resolution of at least 2,048 pixels across (referred to as 2K resolution ). [ 7 ] Some films are scanned at 4K , 6K , or even 8K resolution to be ready for higher resolution devices. [ 7 ] Scanning a film at 4K—a resolution of 4096 × 3092 for a full frame of film—generates at least 12 terabytes of data before any editing is done. [ 7 ]
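The data figure is easy to sanity-check with rough arithmetic; the bit depth, channel count, and running time below are assumptions chosen for illustration, not values from the article.

```python
# Hypothetical storage estimate for a 4K full-frame scan of a two-hour film,
# assuming 3 color channels stored at 16 bits (2 bytes) per channel.
bytes_per_frame = 4096 * 3092 * 3 * 2      # about 76 MB per frame
frames = 2 * 60 * 60 * 24                  # two hours at 24 frames per second
print(bytes_per_frame * frames / 1e12)     # ~13 TB, consistent with "at least 12"
```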
Digital restoration operators then use specialist software such as MTI's Digital Restoration System (DRS) to remove scratches and dust from damaged film. Restoring the film to its original color is also included in this process. [ 2 ]
As well as remastering the video aspect, the audio is also remastered using such software as Pro Tools to remove background noise and boost dialogue volumes so when actors are speaking they are easier to understand and hear. [ 2 ] Audio effects are also added or enhanced, as well as surround sound , which allows the soundtrack elements to be spread among multiple speakers for a more immersive experience. [ 2 ]
An example of a restored film is the 1939 film The Wizard of Oz . [ 8 ] The color portions of Oz were shot in the three-strip Technicolor process , which in the 1930s yielded three black and white negatives created from red, green and blue light filters which were used to print the cyan, magenta and yellow portions of the final printed color film answer print . [ 8 ] These three negatives were scanned individually into a computer system, where the digital images were tinted and combined using proprietary software. [ 8 ]
The cyan, magenta, and yellow records had suffered from shrinkage over the decades, and the software used in the restoration morphed all three records into the correct alignment. [ 8 ] The software was also used to remove dust and scratches from the film by copying data, for example, from the cyan and yellow records to fix a blemish in the magenta record. [ 8 ] Restoring the film made it possible to see precise visual details not visible on earlier home releases: for example, when the Scarecrow says "I have a brain", burlap is noticeable on his cheeks. It was also not possible to see a rivet between the Tin Man 's eyes prior to the restoration. [ 8 ]
Shows that were shot and edited entirely on film, such as Star Trek: The Original Series , are able to be re-released in HD through re-scanning the original film negatives; the remastering process for the show additionally enabled Paramount to digitally update certain special effects. [ 9 ] [ unreliable source? ] Shows that were made between the early 1980s and the early 2000s were generally shot on film, then transferred to and edited on standard-definition videotape, making high-definition transfers impossible without re-editing the product from scratch, such as with the HD release of Star Trek: The Next Generation , which cost Paramount over $12 million to produce. Because of this release's commercial failure, Paramount chose not to give Deep Space Nine or Voyager the same treatment. [ 10 ] In 2014, Pee-wee's Playhouse was digitally remastered from the original film and audio tracks. [ 11 ]
Remastered films have been the subject of criticism. When the Arnold Schwarzenegger film Predator was remastered, it was felt by some critics that the process was overdone, resulting in Schwarzenegger's skin looking waxy. [ 12 ] As well as complaints about the way the picture looks, there have been other complaints about digital fixing. [ 13 ] One notable complaint is from the 2002 remastered version of E.T. the Extra-Terrestrial (1982), where director Steven Spielberg replaced guns in the hands of police and federal agents with walkie-talkies . A later 30th-anniversary edition released in 2012 saw the return of the original scene. [ 13 ]
With regard to animation, both for television and film, "remastering" can take on a different meaning, up to and including altering the original images to extremes.
For traditionally animated projects, completed on cels and printed to film, remastering can be as simple as touching up a film negative. There have been times where these revisions have been controversial: boxed DVD sets of animated properties like Looney Tunes from the early 2000s saw extensive criticism from fans and historians due to the aggressive use of digital video noise reduction (DVNR). The process was designed to automatically remove dust or specks from the image, but would mistake stray ink lines or smudges on the cel for damage, as well as removing natural imperfections. [ 14 ] Disney went a step further with its remastering of its canon catalog in the early 21st century: for its cel-animated films, teams meticulously reconstructed scenes from original cel setups and background paintings to create new images free of film artifacts (jitter, grain, etc.). While complex and revolutionary, this process was criticized by some for essentially removing the films from their era and medium, making them indistinguishable in age. [ 15 ] [ 16 ] Later remasters, including a 4K restoration of Cinderella in 2023, prioritized a filmic look, with period-appropriate grain and weave. [ 17 ]
Remastering other animated projects can vary in scope based on their art style. In the case of natively digital images, including computer-animated films , remastering can be a simple matter of going back to the original files and re-rendering them at a desired resolution. Some modern software, like Toon Boom Harmony , utilizes lossless vector shapes, [ 18 ] allowing an artist to re-render work at different resolutions with ease. This can prove tricky at times when files have become corrupted or unreadable ; a 3D reissue of Toy Story , the first CG film, was fraught with difficulties due to the unreadability of the file format on modern systems. [ 19 ] In television, South Park is an example of a program that was natively digital from its start: its construction-paper style was made up of digital images manipulated in software like Maya . This allowed its creative team to completely re-render episodes in a higher resolution than their original broadcast; in some instances shots were re-framed to fit a 16:9 aspect ratio. [ 20 ]
Another issue in remastering is upscaling projects completed in the early days of digital ink and paint . Animation industries across the globe gradually switched from cels to digital coloring around the turn of the millennium, and projects that pre-date the advent of higher-resolution formats have proved challenging to remaster. [ 21 ] Remasters of films that used early digipaint processes are typically struck from filmout 35mm prints, as the computer files were never properly archived. For projects composited on lower-resolution formats like videotape, going back to the original elements is impractical due to their inferior resolution. Some studios have utilized artificial intelligence to professionally upscale the material; boutique label Discotek has released seasons of the anime Digimon using a specialized tool called AstroRes. [ 22 ]
Remastering a video game is more difficult than remastering a film or music recording because the video game's graphics show their age, even when the source code is used. [ 23 ] This can be due to a number of factors, notably lower resolutions and less complicated rendering engines at the time of release. A video game remaster typically has ambience and design updated to the capabilities of a more powerful console, while a video game remake is also updated but with recreated models. [ 24 ]
Modern computer monitors and high-definition televisions tend to have higher display resolutions and different aspect ratios than the monitors/televisions available when the video game was released. [ 23 ] Because of this, classic games that are remastered typically have their graphics re-rendered at higher resolutions. [ 23 ] An example of a game that has had its original graphics re-rendered at higher resolutions is Hitman HD Trilogy , which contains two games with high-resolution graphics: Hitman 2: Silent Assassin and Hitman: Contracts . Both were originally released on PC , PlayStation 2 , and Xbox . [ 25 ] The original resolution was 480p on Xbox, while the remastered resolution is displayed at 720p on Xbox 360 . [ 25 ] There is some debate regarding whether graphics of an older game at higher resolutions make a video game look better or worse than the original artwork, with comparisons made to colorizing black-and-white-films. [ 23 ]
More significant than low resolution is the age of the original game engine and the simplicity of the original 3D models. Older computers and video game consoles had limited 3D rendering speed, which required simple 3D object geometry, such as human hands being modeled as mittens rather than with individual fingers, while maps had a distinctly chunky appearance with no smoothly curving surfaces. Older computers also had less texture memory for 3D environments, requiring low-resolution bitmap images that look visibly pixelated or blurry when viewed at high resolution. Some early 3D games, such as the 1993 version of Doom , also simply used an animated two-dimensional image that is rotated to always face the player character, rather than attempting to render highly complex scenery objects or enemies in full 3D. As a result, depending on the age of the original game, if the original assets are not compatible with the new technology for a remaster, it is often considered necessary to remake or remodel the graphical assets. An example of a game that has had its graphics redesigned is Halo: Combat Evolved Anniversary , [ 23 ] in which the core character and level information is exactly the same as in Halo: Combat Evolved . [ 23 ] [ 26 ] [ 27 ] | https://en.wikipedia.org/wiki/Remaster |
Remestemcel , sold under the brand name Ryoncil , is an allogeneic bone marrow-derived mesenchymal stromal cell therapy used for the treatment of graft-versus-host disease . [ 1 ] [ 2 ] Remestemcel contains mesenchymal stromal cells, which are a type of cell that can have various roles in the body and can differentiate into multiple other types of cells. [ 3 ] These mesenchymal stromal cells are isolated from the bone marrow of healthy adult human donors. [ 3 ]
The most common adverse reactions include viral infectious disorders, bacterial infectious disorders, infection – pathogen unspecified, pyrexia, hemorrhage, edema, abdominal pain, and hypertension. [ 4 ]
Remestemcel was approved for medical use in the United States in December 2024. [ 2 ] [ 3 ] [ 4 ] Remestemcel is the first mesenchymal stromal cell therapy approved by the US Food and Drug Administration . [ 3 ] [ 4 ]
Remestemcel is indicated for the treatment of steroid-refractory acute graft-versus-host disease . [ 1 ] [ 2 ]
The safety and effectiveness of remestemcel were evaluated in a multicenter, single-arm study in 54 pediatric study participants with steroid-refractory acute graft-versus-host disease after undergoing allogeneic hematopoietic (blood) stem cell transplantation. [ 4 ] Study participants received intravenous infusion of remestemcel twice weekly for four consecutive weeks, for a total of eight infusions. [ 4 ] Each study participant's condition at baseline was analyzed using the international blood and marrow transplantation registry severity index criteria (IBMTR) to evaluate which organs have been affected and the overall severity of the disease. [ 4 ] The effectiveness of remestemcel was based primarily on the rate and duration of response to treatment 28 days after initiating remestemcel. [ 4 ] Study participants who had a partial or mixed response to treatment—meaning that there was improved condition in one organ with either no change (partial) or worsening condition (mixed) in another organ—received additional infusions once weekly for an additional four weeks. [ 4 ] Sixteen study participants (30%) had a complete response to treatment 28 days after receiving remestemcel, while 22 study participants (41%) had a partial response. [ 4 ]
The US Food and Drug Administration (FDA) granted the application for remestemcel fast track , orphan drug , and priority review designations. [ 3 ] [ 4 ] The FDA granted approval of Ryoncil to Mesoblast, Inc. [ 3 ]
Remestemcel was approved for medical use in the United States in December 2024. [ 2 ] [ 5 ]
Remestemcel is the international nonproprietary name . [ 6 ]
Remestemcel-L is the United States Adopted Name . [ 7 ] | https://en.wikipedia.org/wiki/Remestemcel |
The Remez algorithm or Remez exchange algorithm , published by Evgeny Yakovlevich Remez in 1934, is an iterative algorithm used to find simple approximations to functions, specifically, approximations by functions in a Chebyshev space that are the best in the uniform norm L ∞ sense. [ 1 ] It is sometimes referred to as Remes algorithm or Reme algorithm . [ citation needed ]
A typical example of a Chebyshev space is the subspace of Chebyshev polynomials of order n in the space of real continuous functions on an interval , C [ a , b ]. The polynomial of best approximation within a given subspace is defined to be the one that minimizes the maximum absolute difference between the polynomial and the function. In this case, the form of the solution is made precise by the equioscillation theorem .
The Remez algorithm starts with the function f to be approximated and a set X of n + 2 sample points x_1, x_2, ..., x_{n+2} in the approximation interval, usually the extrema of the Chebyshev polynomial linearly mapped to the interval. The steps are:
1. Solve the linear system of equations b_0 + b_1 x_i + ... + b_n x_i^n + (−1)^i E = f(x_i) (for i = 1, 2, ..., n + 2) for the unknowns b_0, ..., b_n and E.
2. Use the b_j as coefficients to form a polynomial P_n.
3. Find the set M of points of local maximum error |P_n(x) − f(x)|.
4. If the errors at every m ∈ M are of equal magnitude and alternate in sign, then P_n is the minimax approximation polynomial. If not, replace X with M and repeat the steps above.
The result is called the polynomial of best approximation or the minimax approximation .
A review of technicalities in implementing the Remez algorithm is given by W. Fraser. [ 2 ]
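A minimal numerical sketch of the iteration (Python with NumPy) may make the loop concrete. It is illustrative only: it assumes f is smooth, replaces the local-extremum search of Step 3 with a dense-grid scan, and uses a generic O(n³) dense solver instead of the O(n²) Newton-form approach described later in this article.

```python
import numpy as np

def remez(f, n, a=-1.0, b=1.0, iters=50, grid=4000):
    k = np.arange(n + 2)
    # initial nodes: the n+2 Chebyshev extrema, linearly mapped to [a, b]
    x = np.sort(0.5 * (a + b) - 0.5 * (b - a) * np.cos(np.pi * k / (n + 1)))
    xs = np.linspace(a, b, grid)
    for _ in range(iters):
        # Step 1: solve b_0 + b_1*x_i + ... + b_n*x_i^n + (-1)^i * E = f(x_i)
        A = np.hstack([np.vander(x, n + 1, increasing=True),
                       ((-1.0) ** k)[:, None]])
        *coeffs, E = np.linalg.solve(A, f(x))
        coeffs = np.array(coeffs)
        # Step 2: error of the trial polynomial on a dense grid
        err = np.polyval(coeffs[::-1], xs) - f(xs)
        # Step 3: one extremum of |err| per sign-constant piece of err
        cuts = np.where(np.diff(np.sign(err)) != 0)[0] + 1
        ext = [p[np.argmax(np.abs(err[p]))] for p in np.split(np.arange(grid), cuts)]
        if len(ext) != n + 2:   # sketch-level fallback; real code is more careful
            break
        x = xs[ext]
        # Step 4: stop once the nodal errors (nearly) equioscillate
        amp = np.abs(err[ext])
        if amp.max() - amp.min() <= 1e-12 * max(amp.max(), 1.0):
            break
    return coeffs, abs(E)

coeffs, E = remez(np.exp, 5)   # degree-5 minimax approximation of exp on [-1, 1]
print(E)                       # roughly 4.5e-5: the equioscillating error
```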
The Chebyshev nodes are a common choice for the initial approximation because of their role in the theory of polynomial interpolation. For the initialization of the optimization problem for function f by the Lagrange interpolant L_n(f), it can be shown that this initial approximation is bounded by ‖f − L_n(f)‖_∞ ≤ (1 + ‖L_n‖_∞) inf_{p ∈ P_n} ‖f − p‖_∞,
with the norm or Lebesgue constant of the Lagrange interpolation operator L_n of the nodes (t_1, ..., t_{n+1}) being Λ̄_n(T) = ‖L_n‖_∞ = max_{−1 ≤ x ≤ 1} λ_n(T; x),
T being the zeros of the Chebyshev polynomials, and the Lebesgue functions being λ_n(T; x) = Σ_{j=1}^{n+1} |l_j(x)|, where l_j(x) is the j-th Lagrange fundamental polynomial for the nodes T.
Theodore A. Kilgore, [ 3 ] Carl de Boor, and Allan Pinkus [ 4 ] proved that there exists a unique t_i for each L_n, although not known explicitly for (ordinary) polynomials. Similarly, Λ̲_n(T) = min_{−1 ≤ x ≤ 1} λ_n(T; x), and the optimality of a choice of nodes can be expressed as Λ̄_n − Λ̲_n ≥ 0.
For Chebyshev nodes, which provide a suboptimal but analytically explicit choice, the asymptotic behavior is known as [ 5 ] Λ̄_n(T) = (2/π) log(n + 1) + (2/π)(γ + log(8/π)) + α_{n+1}
(γ being the Euler–Mascheroni constant ) with the remainder term satisfying 0 < α_n < π / (72 n²)
and upper bound [ 6 ] Λ̄_n(T) ≤ (2/π) log(n + 1) + 1.
Lev Brutman [ 7 ] obtained the following bound for n ≥ 3, with T̂ being the zeros of the expanded Chebyshev polynomials:
Rüdiger Günttner [ 8 ] obtained a sharper estimate for n ≥ 40:
This section provides more information on the steps outlined above. In this section, the index i runs from 0 to n +1.
Step 1: Given x_0, x_1, ..., x_{n+1}, solve the linear system of n + 2 equations b_0 + b_1 x_i + ... + b_n x_i^n + (−1)^i E = f(x_i) (for i = 0, 1, ..., n + 1), with the unknowns b_0, ..., b_n and E.
It should be clear that (−1)^i E in this equation makes sense only if the nodes x_0, ..., x_{n+1} are ordered , either strictly increasing or strictly decreasing. Then this linear system has a unique solution. (As is well known, not every linear system has a solution.) Also, the solution can be obtained with only O(n²) arithmetic operations, while a standard solver from a library would take O(n³) operations. Here is the simple proof:
Compute the standard n -th degree interpolant p_1(x) to f(x) at the first n + 1 nodes, and also the standard n -th degree interpolant p_2(x) to the ordinates (−1)^i. To this end, use each time Newton's interpolation formula with the divided differences of order 0, ..., n and O(n²) arithmetic operations.
The polynomial p_2(x) has its i -th zero between x_{i−1} and x_i, i = 1, ..., n, and thus no further zeroes between x_n and x_{n+1}: p_2(x_n) and p_2(x_{n+1}) have the same sign (−1)^n.
The linear combination p(x) := p_1(x) − p_2(x)·E is also a polynomial of degree n , and p(x_i) − f(x_i) = −E·(−1)^i, for i = 0, ..., n. This is the same as the equation above for i = 0, ..., n and for any choice of E .
The same equation for i = n + 1 is p(x_{n+1}) − f(x_{n+1}) = −E·(−1)^{n+1}; solving it for the unknown E gives E = (p_1(x_{n+1}) − f(x_{n+1})) / (p_2(x_{n+1}) + (−1)^n).
As mentioned above, the two terms in the denominator have the same sign, so E , and thus p(x) ≡ b_0 + b_1 x + … + b_n x^n, are always well-defined.
The error at the given n + 2 ordered nodes is positive and negative in turn because p(x_i) − f(x_i) = −E·(−1)^i, i = 0, ..., n + 1.
The theorem of de La Vallée Poussin states that under this condition no polynomial of degree n exists with error less than E . Indeed, if such a polynomial existed, call it p̃(x), then the difference p(x) − p̃(x) = (p(x) − f(x)) − (p̃(x) − f(x)) would still be positive/negative at the n + 2 nodes x_i and therefore have at least n + 1 zeros, which is impossible for a polynomial of degree n .
Thus, this E is a lower bound for the minimum error which can be achieved with polynomials of degree n .
Step 2 changes the notation from b_0 + b_1 x + ... + b_n x^n to p(x).
Step 3 improves upon the input nodes x_0, ..., x_{n+1} and their errors ±E as follows.
In each P-region, the current node x_i is replaced with the local maximizer x̄_i, and in each N-region x_i is replaced with the local minimizer. (Expect x̄_0 at A , the x̄_i near x_i, and x̄_{n+1} at B .) No high precision is required here; the standard line search with a couple of quadratic fits should suffice. (See [ 9 ] )
Let z_i := p(x̄_i) − f(x̄_i). Each amplitude |z_i| is greater than or equal to E . The theorem of de La Vallée Poussin and its proof also apply to z_0, ..., z_{n+1}, with min{|z_i|} ≥ E as the new lower bound for the best error possible with polynomials of degree n . Moreover, max{|z_i|} comes in handy as an obvious upper bound for that best possible error.
Step 4: With min{|z_i|} and max{|z_i|} as lower and upper bound for the best possible approximation error, one has a reliable stopping criterion: repeat the steps until max{|z_i|} − min{|z_i|} is sufficiently small or no longer decreases. These bounds indicate the progress.
Some modifications of the algorithm are present in the literature. [ 10 ] | https://en.wikipedia.org/wiki/Remez_algorithm |
In mathematics , the Remez inequality , discovered by the Soviet mathematician Evgeny Yakovlevich Remez ( Remez 1936 ), gives a bound on the sup norms of certain polynomials , the bound being attained by the Chebyshev polynomials .
Let σ be an arbitrary fixed positive number. Define the class of polynomials π_n(σ) to be those polynomials p of degree n for which |p(x)| ≤ 1
on some set of measure ≥ 2 contained in the closed interval [−1, 1+ σ ]. Then the Remez inequality states that ‖p‖_∞ ≤ ‖T_n‖_∞,
where T n ( x ) is the Chebyshev polynomial of degree n , and the supremum norm is taken over the interval [−1, 1+ σ ].
Observe that T_n is increasing on [1, +∞), hence ‖T_n‖_∞ = T_n(1 + σ).
The Remez inequality, combined with an estimate on Chebyshev polynomials, implies the following corollary : If J ⊂ ℝ is a finite interval, and E ⊂ J is an arbitrary measurable set , then sup_J |p| ≤ (4 mes J / mes E)^n · sup_E |p|   ( ⁎ )
for any polynomial p of degree n .
Inequalities similar to ( ⁎ ) have been proved for different classes of functions , and are known as Remez-type inequalities. One important example is Nazarov 's inequality for exponential sums ( Nazarov 1993 ):
In the special case when λ k are pure imaginary and integer, and the subset E is itself an interval, the inequality was proved by Pál Turán and is known as Turán's lemma.
This inequality also extends to L^p(𝕋), 0 ≤ p ≤ 2, in the following way
for some A > 0 independent of p , E , and n . When
a similar inequality holds for p > 2. For p = ∞ there is an extension to multidimensional polynomials.
Proof: Applying Nazarov's lemma to E = E_λ = {x : |p(x)| ≤ λ}, λ > 0, leads to
thus
Now fix a set E and choose λ such that mes E_λ ≤ ½ mes E , that is
Note that this implies:
Now
which completes the proof.
One of the corollaries of the Remez inequality is the Pólya inequality , which was proved by George Pólya ( Pólya 1928 ), and states that the Lebesgue measure of a sub-level set of a polynomial p of degree n is bounded in terms of the leading coefficient LC( p ) as follows: mes {x ∈ ℝ : |p(x)| ≤ a} ≤ 4 (a / (2 LC(p)))^{1/n}, for a > 0. | https://en.wikipedia.org/wiki/Remez_inequality |
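A numeric sanity check of the inequality as stated above (an illustration, not a proof) uses the monic Chebyshev polynomial 2^(1−n) T_n, for which the sub-level set at its sup on [−1, 1] is exactly [−1, 1] and the bound is attained:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

n = 6
p = Chebyshev([0.0] * n + [2.0 ** (1 - n)])   # 2^(1-n) * T_n: monic, LC(p) = 1
a = 2.0 ** (1 - n)                            # sup of |p| on [-1, 1]
xs = np.linspace(-2.0, 2.0, 400_001)
measure = np.count_nonzero(np.abs(p(xs)) <= a) * (xs[1] - xs[0])
bound = 4 * (a / 2) ** (1 / n)                # 4 * (a / (2*LC(p)))^(1/n)
print(measure, bound)                         # both are ~2: the bound is attained
```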
Remind (previously Remind101 ) is a private mobile messaging platform used for communication between teachers, parents, and students in K–12 schools. [ 1 ]
Remind101 was founded in 2011 by brothers Brett and David Kopf. [ 2 ] The platform was based on a prototype messaging system developed by David to help Brett, who was diagnosed with attention deficit disorder and dyslexia , to keep track of tests while attending Michigan State University . [ 3 ] [ 4 ] The two decided to found a company based on the messaging platform, and it became part of the first class at the Imagine K12 incubator in Palo Alto, California . [ 5 ]
In 2014, the company added John Doerr , a venture capitalist at Kleiner Perkins, to its board. [ 6 ] As of 2014, the company had raised $59 million in funding, [ 7 ] and had over 20 million monthly active users across the United States. [ 8 ] On June 16, 2014, the company changed its name from Remind101 to Remind. [ 9 ]
In 2016, Brian Grey, former CEO of Bleacher Report , became CEO of Remind. [ 10 ]
As of September 2016, the platform was used in more than 50% of the public schools in the U.S. [ 11 ] [ 3 ]
Remind was purchased by ParentSquare in November 2023. Both companies' platforms and executive staff were merged. Existing Remind products kept their names. [ 12 ] | https://en.wikipedia.org/wiki/Remind |
In biogeochemistry , remineralisation (or remineralization ) refers to the breakdown or transformation of organic matter (those molecules derived from a biological source) into its simplest inorganic forms. These transformations form a crucial link within ecosystems as they are responsible for liberating the energy stored in organic molecules and recycling matter within the system to be reused as nutrients by other organisms . [ 1 ]
Remineralisation is normally viewed as it relates to the cycling of the major biologically important elements such as carbon , nitrogen and phosphorus . While crucial to all ecosystems, the process receives special consideration in aquatic settings, where it forms a significant link in the biogeochemical dynamics and cycling of aquatic ecosystems.
The term "remineralization" is used in several contexts across different disciplines. The term is most commonly used in the medicinal and physiological fields, where it describes the development or redevelopment of mineralized structures in organisms such as teeth or bone. In the field of biogeochemistry , however, remineralization is used to describe a link in the chain of elemental cycling within a specific ecosystem. In particular, remineralization represents the point where organic material constructed by living organisms is broken down into basal inorganic components that are not obviously identifiable as having come from an organic source. This differs from the process of decomposition which is a more general descriptor of larger structures degrading to smaller structures.
Biogeochemists study this process across all ecosystems for a variety of reasons. This is done primarily to investigate the flow of material and energy in a given system, which is key to understanding the productivity of that ecosystem along with how it recycles material versus how much is entering the system. Understanding the rates and dynamics of organic matter remineralization in a given system can help in determining how or why some ecosystems might be more productive than others.
While it is important to note that the process of remineralization is a series of complex biochemical pathways within microbes, it can often be simplified as a series of one-step processes for ecosystem-level models and calculations. A generic form of these reactions is shown by: organic matter + oxidant → CO 2 + H 2 O + inorganic nutrients (such as nitrate or phosphate) + energy.
The above generic equation starts with two reactants: some piece of organic matter (composed of organic carbon) and an oxidant. Most organic carbon exists in a reduced form which is then oxidized by the oxidant (such as O 2 ) into CO 2 and energy that can be harnessed by the organism. This process generally produces CO 2 , water and a collection of simple nutrients like nitrate or phosphate that can then be taken up by other organisms. The above general form, when considering O 2 as the oxidant, is the equation for respiration. In this context specifically, the above equation represents bacterial respiration though the reactants and products are essentially analogous to the short-hand equations used for multi-cellular respiration.
The degradation of organic matter through respiration in the modern ocean is facilitated by different electron acceptors, whose favorability is determined by Gibbs free energy and the laws of thermodynamics . [ 2 ] This redox chemistry is the basis for life in deep-sea sediments and determines the energy obtainable by the organisms that live there. From the water interface moving toward deeper sediments, the order of these acceptors is oxygen , nitrate , manganese , iron , and sulfate . Moving downward from the surface through these deep-ocean sediments, acceptors are used and depleted; once one is depleted, the next acceptor of lower favorability takes its place. Thermodynamically, oxygen represents the most favorable electron acceptor, but it is quickly used up at the water–sediment interface, and O 2 concentrations extend only millimeters to centimeters down into the sediment in most locations of the deep sea. This favorability indicates an organism's ability to obtain higher energy from the reaction, which helps it compete with other organisms. [ 3 ] In the absence of these acceptors, organic matter can also be degraded through methanogenesis, but the net oxidation of this organic matter is not fully represented by this process. Each pathway has a characteristic reaction stoichiometry. [ 3 ]
Due to this quick depletion of O 2 in the surface sediments, a majority of microbes use anaerobic pathways to metabolize other electron acceptors such as manganese, iron, and sulfate. [ 4 ] It is also important to factor in bioturbation and the constant mixing of this material, which can change the relative importance of each respiration pathway. For the microbial perspective, please reference the electron transport chain .
A quarter of all organic material that exits the photic zone makes it to the seafloor without being remineralised, and 90% of that remaining material is remineralised in the sediments themselves. [ 1 ] Once in the sediment, organic remineralisation may occur through a variety of reactions. [ 5 ] The following reactions are the primary ways in which organic matter is remineralised; in them, general organic matter (OM) is often represented by the shorthand: (CH 2 O) 106 (NH 3 ) 16 (H 3 PO 4 ) .
Aerobic respiration is the most preferred remineralisation reaction due to its high energy yield; however, oxygen is quickly depleted in the sediments and is generally exhausted within centimeters of the sediment–water interface.
In instances in which the environment is suboxic or anoxic , organisms will prefer to utilize denitrification to remineralise organic matter, as it provides the second-largest amount of energy. At depths below where denitrification is favored, reactions such as manganese reduction, iron reduction, sulfate reduction, and methanogenesis become favored, in that order. This favorability is governed by Gibbs free energy (ΔG). In a water body, sediment seabed, or soil, the sorting of these chemical reactions with depth in order of energy provided is called a redox gradient .
Redox zonation refers to how the processes that transfer terminal electrons as a result of organic matter degradation vary depending on time and space. [ 6 ] Certain reactions will be favored over others due to their energy yield as detailed in the energy acceptor cascade detailed above. [ 7 ] In oxic conditions, in which oxygen is readily available, aerobic respiration will be favored due to its high energy yield. Once the use of oxygen through respiration exceeds the input of oxygen due to bioturbation and diffusion, the environment will become anoxic and organic matter will be broken down via other means, such as denitrification and manganese reduction. [ 8 ]
In most open ocean ecosystems only a small fraction of organic matter reaches the seafloor. Biological activity in the photic zone of most water bodies tends to recycle material so well that only a small fraction of organic matter ever sinks out of that top photosynthetic layer. Remineralisation within this top layer occurs rapidly and due to the higher concentrations of organisms and the availability of light, those remineralised nutrients are often taken up by autotrophs just as rapidly as they are released.
The fraction that escapes varies depending on the location of interest. For example, in the North Sea, values of carbon deposition are ~1% of primary production, [ 9 ] while that value is <0.5% in the open oceans on average. [ 10 ] Therefore, most nutrients remain in the water column, recycled by the biota . Heterotrophic organisms utilize the materials produced by the autotrophic (and chemotrophic ) organisms and via respiration remineralise the compounds from the organic form back to inorganic, making them available for primary producers again.
For most areas of the ocean, the highest rates of carbon remineralisation occur at depths between 100–1,200 m (330–3,940 ft) in the water column, decreasing down to about 1,200 m, where remineralisation rates remain roughly constant at 0.1 μmol kg −1 yr −1 . [ 11 ] As a result of this, the pool of remineralised carbon (which generally takes the form of carbon dioxide) tends to increase in the photic zone.
Most remineralisation is done with dissolved organic carbon (DOC). Studies have shown that it is larger sinking particles that transport matter down to the sea floor [ 12 ] while suspended particles and dissolved organics are mostly consumed by remineralisation. [ 13 ] This happens in part due to the fact that organisms must typically ingest nutrients smaller than they are, often by orders of magnitude. [ 14 ] With the microbial community making up 90% of marine biomass, [ 15 ] it is particles smaller than the microbes (on the order of 10 −6 [ 16 ] ) that will be taken up for remineralisation. | https://en.wikipedia.org/wiki/Remineralisation |
A remnant natural area , also known as remnant habitat , is an ecological community containing native flora and fauna that has not been significantly disturbed by destructive activities such as agriculture , logging , pollution , development , fire suppression, or non-native species invasion . [ 1 ] The more disturbed an area has been, the less characteristic it becomes of remnant habitat. Remnant areas are also described as " biologically intact " or "ecologically intact." [ 2 ]
Remnant natural areas are often used as reference ecosystems in ecological restoration projects. [ 3 ]
A remnant natural area can be described in terms of its natural quality or biological integrity , which is the extent to which it has the internal biodiversity and abiotic elements to replicate itself over time. [ 4 ] Another definition of biological integrity is "the capability of supporting and maintaining a balanced, integrated, adaptive community of organisms having a species composition, diversity , and functional organization comparable to that of the natural habitat of the region." [ 5 ] Abiotic elements determining the quality of a natural area may include factors such as hydrologic connectivity or fire. In areas that have been dredged , drained , or dammed , the altered hydrology can destroy a remnant natural area. Similarly, too much or too little fire can degrade or destroy a remnant natural area. [ 4 ]
Remnant natural areas are characterized by the presence of " conservative " plants and animals—organisms that are restricted to or highly characteristic of areas that have not been disturbed by humans. [ 6 ] Tools to measure aspects of natural areas quality in remnant areas include Floristic Quality Assessment and the Macroinvertebrate Community Index .
In the upper Midwestern United States , remnant natural areas date prior to European settlement , going back to the end of the Wisconsinian Glaciation approximately 15,000 years ago. [ 1 ] Diverse remnant plant community examples in that region include tallgrass prairie , beech-maple forest , savannas , bogs , and fens . [ 7 ] Remnant natural areas in Illinois have largely been classified by the Illinois Natural Areas Inventory as Category I "high quality terrestrial or wetland natural communities." [ 8 ]
In Australia , remnant habitats are sometimes called " bushland ," and include communities such as forest, woodland, grasslands, mallee , coastal heathland , and rainforest . [ 9 ] | https://en.wikipedia.org/wiki/Remnant_natural_area |
The Remote Activation Munition System (RAMS) is a radio frequency controlled system that is used to remotely detonate demolition charges . It can also be used to remotely operate electronic equipment such as beacons , laser markers, and radios. [ 1 ]
RAMS was developed by a team of researchers led by James Chopak at the Army Research Laboratory from 1996 to 2000. [ 2 ] [ 3 ] The system consists of a transmitter and two different types of receivers , one to initiate blasting caps and one to initiate C4 directly. [ 4 ]
RAMS was designed to serve as a more portable and convenient alternative to conventional remote activation systems like the model XM-122, which was considered too big, heavy, and fragile for efficient use. In addition, the XM-122 was limited in its range (about 1 km) and relied on very large high capacity batteries.
In contrast, the RAMS weighed only a couple of pounds, and its microprocessor -based transmitter was powered by seven standard 9-volt batteries. The device was capable of reaching a range of up to 2 kilometers, and the combination of the crystal filter in the receivers and the FM detector circuit made it possible to maintain high signal sensitivity at a low rate of power consumption.
In addition, the RAMS was operational in harsh environments with temperatures as low as −25 °F (−32 °C) and as high as 140 °F (60 °C). It was also capable of functioning when submerged in saltwater , at depths of up to 66 feet (20 m). [ 5 ] However, testing performed by the Army Research Laboratory has found that, due to the low power level of the RAMS receiver's electrical output signals, the system demonstrates a noticeable level of unreliability beyond a certain distance. [ 6 ]
More modern versions of the RAMS can weigh as little as 3 pounds (1.4 kg) and can reach a range of more than 5 kilometers, allowing operators to stand further away from the blast at a safer distance. [ 7 ] | https://en.wikipedia.org/wiki/Remote_Activation_Munition_System |
The Remote Telescope Markup Language (RTML) is an XML dialect for controlling remote and/or robotic telescopes . It is used to describe various telescope parameters (such as coordinates and exposure time ) to facilitate observation of selected targets. RTML instructions were designed to be displayed in a more human-readable way; they are then processed and executed by telescopes through local parsers. [ 1 ] [ 2 ] [ 3 ]
It was created by UC Berkeley 's Hands-On Universe project in 1999. [ 4 ] Because of its XML structure and consequent flexibility and readability, it is now widely used, and has become an international standard for astronomical imaging. [ 1 ] [ 5 ]
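To illustrate the kind of document an RTML parser consumes, the sketch below assembles a minimal observation request with Python's standard library. The element and attribute names ("Request", "Target", "Exposure") are illustrative assumptions for demonstration, not the actual RTML schema.

```python
# Minimal sketch of building a telescope observation request as XML.
# Element names here are assumptions, not the real RTML schema.
import xml.etree.ElementTree as ET

def build_request(name: str, ra: str, dec: str, exposure_s: float) -> str:
    """Serialize a hypothetical observation request for a remote telescope."""
    root = ET.Element("Request")
    target = ET.SubElement(root, "Target", attrib={"name": name})
    ET.SubElement(target, "RightAscension").text = ra
    ET.SubElement(target, "Declination").text = dec
    ET.SubElement(root, "Exposure", attrib={"unit": "seconds"}).text = str(exposure_s)
    return ET.tostring(root, encoding="unicode")

print(build_request("M31", "00:42:44", "+41:16:09", 120.0))
```

A receiving observatory would validate such a document against the schema and translate it into mount and camera commands via its local parser.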
| https://en.wikipedia.org/wiki/Remote_Telescope_Markup_Language
A remote control , also known colloquially as a remote or clicker , [ 1 ] is an electronic device used to operate another device from a distance, usually wirelessly . In consumer electronics , a remote control can be used to operate devices such as a television set , DVD player or other digital home media appliance. A remote control can allow operation of devices that are out of convenient reach for direct operation of controls. They function best when used from a short distance. This is primarily a convenience feature for the user. In some cases, remote controls allow a person to operate a device that they otherwise would not be able to reach, as when a garage door opener is triggered from outside.
Early television remote controls (1956–1977) used ultrasonic tones. Present-day remote controls are commonly consumer infrared devices which send digitally-coded pulses of infrared radiation. They control functions such as power, volume, channels, playback, track change, energy, fan speed, and various other features. Remote controls for these devices are usually small wireless handheld objects with an array of buttons. They are used to adjust various settings such as television channel , track number, and volume . The remote control code, and thus the required remote control device, is usually specific to a product line. However, there are universal remotes , which emulate the remote control made for most major brand devices.
Remote controls in the 2000s include Bluetooth or Wi-Fi connectivity, motion sensor -enabled capabilities and voice control . [ 2 ] [ 3 ] Remote controls for 2010s onward Smart TVs may feature a standalone keyboard on the rear side to facilitate typing, and be usable as a pointing device. [ 4 ]
Wired and wireless remote control was developed in the latter half of the 19th century to meet the need to control unmanned vehicles (for the most part military torpedoes). [ 5 ] These included a wired version by German engineer Werner von Siemens in 1870, radio-controlled ones by British engineers Ernest Wilson and C. J. Evans (1897), [ 6 ] [ 7 ] and a prototype that inventor Nikola Tesla demonstrated in New York in 1898. [ 8 ] In 1903 Spanish engineer Leonardo Torres Quevedo introduced a radio-based control system called the " Telekino " at the Paris Academy of Sciences , [ 9 ] which he hoped to use to control a dirigible airship of his own design. Unlike previous "on/off" techniques, the Telekino was able to execute a set of different mechanical actions over a single communication channel . [ 10 ] [ 11 ] From 1904 to 1906, Torres conducted Telekino tests on a three-wheeled land vehicle with an effective range of 20 to 30 meters, and by guiding a manned electrically powered boat , which demonstrated a standoff range of 2 kilometers. [ 12 ] The first remote-controlled model airplane flew in 1932, [ citation needed ] and the use of remote control technology for military purposes was worked on intensively during the Second World War , one result being the German Wasserfall missile .
By the late 1930s, several radio manufacturers offered remote controls for some of their higher-end models. [ 13 ] Most of these were connected to the set being controlled by wires, but the Philco Mystery Control (1939) was a battery-operated low-frequency radio transmitter, [ 14 ] thus making it the first wireless remote control for a consumer electronics device. Using pulse-count modulation, this also was the first digital wireless remote control.
One of the first remotes intended to control a television was developed by Zenith Radio Corporation in 1950. The remote, called Lazy Bones, [ 15 ] was connected to the television by a wire. A wireless remote control, the Flash-Matic , [ 15 ] [ 16 ] was developed in 1955 by Eugene Polley . It worked by shining a beam of light onto one of four photoelectric cells , [ 17 ] but the cells could not distinguish between light from the remote and light from other sources. [ 18 ] The Flash-Matic also had to be pointed very precisely at one of the sensors in order to work. [ 18 ] [ 19 ]
In 1956, Robert Adler developed Zenith Space Command, a wireless remote. [ 15 ] [ 20 ] [ 21 ] It was mechanical and used ultrasound to change the channel and volume. [ 22 ] [ 21 ] When the user pushed a button on the remote control, it struck a bar and clicked, hence the remotes were commonly called "clickers"; the mechanism was similar to a pluck . [ 21 ] [ 23 ] Each of the four bars emitted a different fundamental frequency with ultrasonic harmonics, and circuits in the television detected these sounds and interpreted them as channel-up, channel-down, sound-on/off, and power-on/off. [ 24 ]
Later, the rapid decrease in price of transistors made possible cheaper electronic remotes that contained a piezoelectric crystal that was fed by an oscillating electric current at a frequency near or above the upper threshold of human hearing , though still audible to dogs . The receiver contained a microphone attached to a circuit that was tuned to the same frequency. Some problems with this method were that the receiver could be triggered accidentally by naturally occurring noises or deliberately by metal against glass, for example, and some people could hear the lower ultrasonic harmonics.
In 1970, RCA introduced an all-electronic remote control that used digital signals and metal–oxide–semiconductor field-effect transistor (MOSFET) memory . This was widely adopted for color television , replacing motor-driven tuning controls. [ 25 ]
The impetus for a more complex type of television remote control came in 1973, with the development of the Ceefax teletext service by the BBC . Most commercial remote controls at that time had a limited number of functions, sometimes as few as three: next channel, previous channel, and volume/off. This type of control did not meet the needs of Teletext sets, where pages were identified with three-digit numbers. A remote control that selects Teletext pages would need buttons for each numeral from zero to nine, as well as other control functions, such as switching from text to picture, and the normal television controls of volume, channel, brightness, color intensity, etc. Early Teletext sets used wired remote controls to select pages, but the continuous use of the remote control required for Teletext quickly indicated the need for a wireless device. So BBC engineers began talks with one or two television manufacturers, which led to early prototypes in around 1977–1978 that could control many more functions. ITT was one of the companies and later gave its name to the ITT protocol of infrared communication. [ 26 ]
In 1980, the most popular remote control was the Starcom Cable TV Converter (from Jerrold Electronics , a division of General Instrument ) [ 15 ] [ failed verification ] which used 40-kHz sound to change channels. Then, a Canadian company, Viewstar, Inc., was formed by engineer Paul Hrivnak and started producing a cable TV converter with an infrared remote control. The product was sold through Philips for approximately $190 CAD . The Viewstar converter was an immediate success, the millionth converter being sold on March 21, 1985, with 1.6 million sold by 1989. [ 27 ] [ 28 ]
The Blab-off was a wired remote control created in 1952 that turned a television's sound on or off so that viewers could avoid hearing commercials. [ 29 ] In the 1980s Steve Wozniak of Apple started a company named CL 9 . The purpose of this company was to create a remote control that could operate multiple electronic devices. The CORE unit (Controller Of Remote Equipment) was introduced in the fall of 1987. The advantage of this remote controller was that it could "learn" remote signals from different devices. It had the ability to perform specific or multiple functions at various times with its built-in clock. It was the first remote control that could be linked to a computer and loaded with updated software code as needed. The CORE unit never made a huge impact on the market. It was much too cumbersome for the average user to program, but it received rave reviews from those who could. [ citation needed ] These obstacles eventually led to the demise of CL 9, but two of its employees continued the business under the name Celadon. This was one of the first computer-controlled learning remote controls on the market. [ 30 ]
In the 1990s, cars were increasingly sold with electronic remote control door locks. These remotes transmit a signal to the car that locks or unlocks the doors or opens the trunk. An aftermarket device sold in some countries is the remote starter, which enables a car owner to remotely start their car. This feature is most associated with countries with winter climates, where users may wish to run the car for several minutes before they intend to use it, so that the car heater and defrost systems can remove ice and snow from the windows.
By the early 2000s, the number of consumer electronic devices in most homes greatly increased, along with the number of remotes to control those devices. According to the Consumer Electronics Association , an average US home has four remotes. [ citation needed ] To operate a home theater as many as five or six remotes may be required, including one for cable or satellite receiver, VCR or digital video recorder (DVR/PVR), DVD player , TV and audio amplifier . Several of these remotes may need to be used sequentially for some programs or services to work properly. However, as there are no accepted interface guidelines, the process is increasingly cumbersome. One solution used to reduce the number of remotes that have to be used is the universal remote , a remote control that is programmed with the operation codes for most major brands of TVs, DVD players, etc. In the early 2010s, many smartphone manufacturers began incorporating infrared emitters into their devices, thereby enabling their use as universal remotes via an included or downloadable app . [ 31 ]
The main technology used in home remote controls is infrared (IR) light. The signal between a remote control handset and the device it controls consists of pulses of infrared light, which is invisible to the human eye but can be seen through a digital camera, video camera or phone camera. The transmitter in the remote control handset sends out a stream of pulses of infrared light when the user presses a button on the handset. A transmitter is often a light-emitting diode (LED) which is built into the pointing end of the remote control handset. The infrared light pulses form a pattern unique to that button. The receiver in the device recognizes the pattern and causes the device to respond accordingly. [ 32 ]
Most remote controls for electronic appliances use a near infrared diode to emit a beam of light that reaches the device. A 940 nm wavelength LED is typical. [ 33 ] This infrared light is not visible to the human eye but picked up by sensors on the receiving device. Video cameras see the diode as if it produces visible purple light. With a single channel (single-function, one-button) remote control the presence of a carrier signal can be used to trigger a function. For multi-channel (normal multi-function) remote controls more sophisticated procedures are necessary: one consists of modulating the carrier with signals of different frequencies. After the receiver demodulates the received signal, it applies the appropriate frequency filters to separate the respective signals. One can often hear the signals being modulated on the infrared carrier by operating a remote control in very close proximity to an AM radio not tuned to a station. Today, IR remote controls almost always use a pulse width modulated code, encoded and decoded by a digital computer: a command from a remote control consists of a short train of pulses of carrier-present and carrier-not-present of varying widths.
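A hedged sketch of how a receiver might classify such a pulse-width modulated train: it measures the carrier-off gaps between marks and maps short versus long gaps to 0 and 1 bits. The timing constants below are illustrative assumptions loosely modeled on NEC-style timings, not any specific product's firmware.

```python
# Illustrative IR decoder sketch. Thresholds are assumptions loosely
# based on NEC-style timings (short/long spaces encode 0/1 bits).
SHORT_SPACE_US = 562.5   # nominal gap meaning bit 0 (assumed)
LONG_SPACE_US = 1687.5   # nominal gap meaning bit 1 (assumed)

def decode_spaces(space_durations_us):
    """Map measured carrier-off gap durations to a list of bits."""
    bits = []
    for gap in space_durations_us:
        # Classify each gap by whichever nominal duration it is closer to.
        bits.append(0 if abs(gap - SHORT_SPACE_US) < abs(gap - LONG_SPACE_US) else 1)
    return bits

# A noisy 8-gap train decodes to the bit string 01100001.
print(decode_spaces([570, 1650, 1700, 540, 560, 555, 590, 1690]))
```

Real receivers typically also validate a leading header mark and redundancy fields, and the carrier itself is filtered out by the demodulating photodiode module before the microcontroller measures the gaps.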
Different manufacturers of infrared remote controls use different protocols to transmit the infrared commands. The RC-5 protocol, which has its origins within Philips, uses, for instance, a total of 14 bits for each button press. The bit pattern is modulated onto a carrier frequency that, again, can differ between manufacturers and standards; in the case of RC-5, the carrier is 36 kHz. Other consumer infrared protocols include the various versions of SIRCS used by Sony, the RC-6 from Philips, the Ruwido R-Step, and the NEC TC101 protocol.
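The 14-bit RC-5 frame (two start bits, a toggle bit that flips on each new key press, five address bits and six command bits, Manchester-coded onto the 36 kHz carrier) can be sketched as below. This is a simplified illustration of the bit packing under one common description of the protocol, not production firmware.

```python
# Simplified RC-5 sketch: 2 start bits + 1 toggle + 5 address + 6 command
# bits = 14 bits per button press, then Manchester (bi-phase) coded.
def rc5_frame(address: int, command: int, toggle: int) -> list:
    assert 0 <= address < 32 and 0 <= command < 64 and toggle in (0, 1)
    bits = [1, 1, toggle]                                    # start + toggle bits
    bits += [(address >> i) & 1 for i in range(4, -1, -1)]   # address, MSB first
    bits += [(command >> i) & 1 for i in range(5, -1, -1)]   # command, MSB first
    return bits

def manchester(bits):
    """Bi-phase code each bit: 1 -> low-then-high, 0 -> high-then-low."""
    out = []
    for b in bits:
        out += [0, 1] if b else [1, 0]
    return out

frame = rc5_frame(address=0, command=12, toggle=1)  # example address/command
print(frame)
print(manchester(frame))  # half-bit levels keyed onto the 36 kHz carrier
```

Each half-bit lasts 889 µs, so a full frame takes roughly 25 ms; the toggle bit lets the receiver distinguish a held key from repeated presses.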
Since infrared (IR) remote controls use light, they require line of sight to operate the destination device. The signal can, however, be reflected by mirrors, just like any other light source. If operation is required where no line of sight is possible, for instance when controlling equipment in another room or installed in a cabinet, many brands of IR extenders are available for this on the market. Most of these have an IR receiver, picking up the IR signal and relaying it via radio waves to the remote part, which has an IR transmitter mimicking the original IR control. Infrared receivers also tend to have a more or less limited operating angle, which mainly depends on the optical characteristics of the phototransistor . However, it is easy to increase the operating angle using a matte transparent object in front of the receiver.
Radio remote control (RF remote control) is used to control distant objects using a variety of radio signals transmitted by the remote control device. As a complementary method to infrared remote controls, the radio remote control is used with electric garage door or gate openers, automatic barrier systems, burglar alarms and industrial automation systems. Standards used for RF remotes are: Bluetooth AVRCP , Zigbee (RF4CE), Z-Wave . Most remote controls use their own coding, transmitting from 8 to 100 or more pulses, with a fixed or rolling code , using OOK or FSK modulation. Also, transmitters or receivers can be universal , meaning they are able to work with many different codings. In this case, the transmitter is normally called a universal remote control duplicator because it is able to copy existing remote controls, while the receiver is called a universal receiver because it works with almost any remote control on the market.
A radio remote control system commonly has two parts: transmit and receive. The transmitter part is divided into two parts, the RF remote control and the transmitter module, which allows the transmitter module to be used as a component in a larger application. The transmitter module is small but requires detailed knowledge to use on its own; combined with the RF remote control, it is much simpler to use.
The receiver is generally one of two types: a super-regenerative receiver or a superheterodyne . The super-regenerative receiver works like an intermittent oscillation detection circuit. The superheterodyne works like the one in a radio receiver and is used because of its stability, high sensitivity, relatively good anti-interference ability, small package and lower price.
Remote control is also used for controlling substations, pumped-storage power stations and HVDC plants. These systems often use PLC systems working in the longwave range.
A subset of power-line communication sends remote control signals over energized AC power lines. This was used to remotely control home automation devices before the advent of Wi-Fi-connected smart switches.
Garage and gate remote controls, also called clickers or openers, are very common, especially in countries such as the US, Australia, and the UK, where garage doors, gates and barriers are widely used. Such remotes are very simple by design, usually having only one button, though some have more buttons to control several gates from one control. Such remotes can be divided into two categories by the encoder type used: fixed code and rolling code . A remote containing DIP switches is likely to use fixed code, an older technology that was once widely used. However, fixed codes have been criticized for their lack of security, so rolling code has become increasingly common in later installations.
Remotely operated torpedoes were demonstrated in the late 19th century in the form of several types of remotely controlled torpedoes . The early 1870s saw remotely controlled torpedoes by John Ericsson ( pneumatic ), John Louis Lay (electric wire guided), and Victor von Scheliha (electric wire guided). [ 34 ]
The Brennan torpedo , invented by Louis Brennan in 1877 was powered by two contra-rotating propellers that were spun by rapidly pulling out wires from drums wound inside the torpedo . Differential speed on the wires connected to the shore station allowed the torpedo to be guided to its target, making it "the world's first practical guided missile". [ 35 ] In 1898 Nikola Tesla publicly demonstrated a "wireless" radio-controlled torpedo that he hoped to sell to the U.S. Navy . [ 36 ] [ 37 ]
Archibald Low was known as the "father of radio guidance systems" for his pioneering work on guided rockets and planes during the First World War . In 1917, he demonstrated a remote-controlled aircraft to the Royal Flying Corps and in the same year built the first wire-guided rocket. As head of the secret RFC experimental works at Feltham , A. M. Low was the first person to use radio control successfully on an aircraft, an "Aerial Target" . It was "piloted" from the ground by future world aerial speed record holder Henry Segrave . [ 38 ] Low's systems encoded the command transmissions as a countermeasure to prevent enemy intervention. [ 39 ] By 1918 the secret D.C.B. Section of the Royal Navy's Signals School, Portsmouth under the command of Eric Robinson V.C. used a variant of the Aerial Target's radio control system to control from ‘mother’ aircraft different types of naval vessels including a submarine. [ 40 ]
The military also developed several early remote control vehicles. In World War I , the Imperial German Navy employed FL-boats (Fernlenkboote) against coastal shipping. These were driven by internal combustion engines and controlled remotely from a shore station through several miles of wire wound on a spool on the boat. An aircraft was used to signal directions to the shore station. EMBs carried a high explosive charge in the bow and traveled at speeds of thirty knots. [ 41 ] The Soviet Red Army used remotely controlled teletanks during the 1930s in the Winter War against Finland and the early stages of World War II . A teletank is controlled by radio from a control tank at a distance of 500 to 1,500 meters, the two constituting a telemechanical group . The Red Army fielded at least two teletank battalions at the beginning of the Great Patriotic War . There were also remotely controlled cutters and experimental remotely controlled planes in the Red Army.
Remote controls in military usage employ jamming and countermeasures against jamming. Jammers are used to disable or sabotage the enemy's use of remote controls. The distances for military remote controls also tend to be much longer, up to intercontinental distance satellite-linked remote controls used by the U.S. for their unmanned airplanes (drones) in Afghanistan, Iraq, and Pakistan. Remote controls are used by insurgents in Iraq and Afghanistan to attack coalition and government troops with roadside improvised explosive devices , and terrorists in Iraq are reported in the media to use modified TV remote controls to detonate bombs. [ 42 ]
In the winter of 1971, the Soviet Union explored the surface of the Moon with the lunar vehicle Lunokhod 1 , the first roving remote-controlled robot to land on another celestial body; the Lunokhod vehicles were remote-controlled from the ground. Many space exploration rovers can be remotely controlled, though the vast distance to a vehicle results in a long time delay between the transmission and receipt of a command.
Existing infrared remote controls can be used to control PC applications. [ 43 ] Any application that supports shortcut keys can be controlled via infrared remote controls from other home devices (TV, VCR, AC). [ 44 ] This is widely used [ citation needed ] with multimedia applications for PC-based home theater systems. For this to work, one needs a device that decodes IR remote control data signals and a PC application that communicates with this device. A connection can be made via a serial port, USB port or motherboard IrDA connector. Such devices are commercially available but can be homemade using low-cost microcontrollers. [ citation needed ] LIRC (Linux IR Remote Control) and WinLIRC (for Windows) are software packages developed for the purpose of controlling a PC using a TV remote, and can also be used with homebrew remotes with little modification.
Remote controls are used in photography, in particular to take long-exposure shots. Many action cameras such as the GoPros [ 45 ] as well as standard DSLRs including Sony's Alpha series [ 46 ] incorporate Wi-Fi based remote control systems. These can often be accessed and even controlled via cell-phones and other mobile devices. [ 47 ]
Video game consoles had not used wireless controllers until recently, [ when? ] mainly because of the difficulty involved in playing the game while keeping the infrared transmitter pointed at the console. Early wireless controllers were cumbersome and, when powered by alkaline batteries, lasted only a few hours before they needed replacement. Some wireless controllers were produced by third parties, in most cases using a radio link instead of infrared. Even these were very inconsistent, and in some cases had transmission delays, making them virtually useless. Some examples include the Double Player for the NES , the Master System Remote Control System and the Wireless Dual Shot for the PlayStation .
The first official wireless game controller made by a first-party manufacturer was the CX-42 for the Atari 2600 . The Philips CD-i 400 series also came with a remote control, and the WaveBird was produced for the GameCube . In the seventh generation of gaming consoles, wireless controllers became standard. Some wireless controllers, such as those of the PlayStation 3 and Wii , use Bluetooth . Others, like the Xbox 360 , use proprietary wireless protocols.
To be turned on by a wireless remote, the controlled appliance must always be partly on, consuming standby power . [ 48 ]
Hand- gesture recognition has been researched as an alternative to remote controls for television sets. [ 49 ] | https://en.wikipedia.org/wiki/Remote_control |
Remote control animals are animals that are controlled remotely by humans. Some applications require electrodes to be implanted in the animal's nervous system connected to a receiver which is usually carried on the animal's back. The animals are controlled by the use of radio signals. The electrodes do not move the animal directly, as if controlling a robot; rather, they signal a direction or action desired by the human operator and then stimulate the animal's reward centres if the animal complies. These are sometimes called bio-robots or robo-animals . They can be considered to be cyborgs as they combine electronic devices with an organic life form and hence are sometimes also called cyborg-animals or cyborg-insects .
Because of the surgery required, and the moral and ethical issues involved, there has been criticism of the use of remote control animals, especially regarding animal welfare and animal rights , particularly when relatively intelligent, complex animals are used. Non-invasive applications may include stimulation of the brain with ultrasound to control the animal. Some applications (used primarily for dogs) use vibrations or sound to control the movements of the animals.
Several species of animals have been successfully controlled remotely. These include moths , [ 1 ] [ 2 ] beetles , [ 3 ] cockroaches , [ 4 ] [ 5 ] [ 6 ] rats , [ 7 ] dogfish sharks , [ 8 ] mice [ 9 ] and pigeons . [ 9 ]
Remote control animals can be directed and used as working animals for search and rescue operations, covert reconnaissance, data-gathering in hazardous areas, or various other uses.
Several studies have examined the remote control of rats using micro-electrodes implanted into their brains and rely on stimulating the reward centre of the rat. Three electrodes are implanted; two in the ventral posterolateral nucleus of the thalamus which conveys facial sensory information from the left and right whiskers, and a third in the medial forebrain bundle which is involved in the reward process of the rat. This third electrode is used to give a rewarding electrical stimulus to the brain when the rat makes the correct move to the left or right. During training, the operator stimulates the left or right electrode of the rat making it "feel" a touch to the corresponding set of whiskers, as though it had come in contact with an obstacle. If the rat then makes the correct response, the operator rewards the rat by stimulating the third electrode. [ 7 ]
In 2002, a team of scientists at the State University of New York remotely controlled rats from a laptop up to 500 m away. The rats could be instructed to turn left or right, climb trees and ladders, navigate piles of rubble, and jump from different heights. They could even be commanded into brightly lit areas, which rats usually avoid. It has been suggested that the rats could be used to carry cameras to people trapped in disaster zones. [ 7 ] [ 10 ] [ 11 ]
In 2013, researchers reported the development of a radio-telemetry system to remotely control free-roaming rats with a range of 200 m. The backpack worn by the rat includes the mainboard and an FM transmitter-receiver, which can generate biphasic microcurrent pulses. All components in the system are commercially available and are fabricated from surface mount devices to reduce the size (25 x 15 x 2 mm) and weight (10 g with battery). [ 12 ]
Concerns have been raised about the ethics of such studies. Even one of the pioneers in this area of study, Sanjiv Talwar , said "There's going to have to be a wide debate to see whether this is acceptable or not" and "There are some ethical issues here which I can't deny." [ 13 ] Elsewhere he was quoted as saying "The idea sounds a little creepy." [ 7 ] Some oppose the idea of placing living creatures under direct human command. "It's appalling, and yet another example of how the human species instrumentalises other species," says Gill Langley of the Dr Hadwen Trust based in Hertfordshire (UK), which funds alternatives to animal-based research. [ 7 ] Gary Francione, an expert in animal welfare law at Rutgers University School of Law, says "The animal is no longer functioning as an animal," as the rat is operating under someone's control. [ 7 ] And the issue goes beyond whether or not the stimulations are compelling or rewarding the rat to act. "There's got to be a level of discomfort in implanting these electrodes," he says, which may be difficult to justify. Talwar stated that the animal's "native intelligence" can stop it from performing some directives but with enough stimulation, this hesitation can sometimes be overcome, but occasionally cannot. [ 14 ]
Researchers at Harvard University have created a brain-to-brain interface (BBI) between a human and a Sprague-Dawley rat. Simply by thinking the appropriate thought, the BBI allows the human to control the rat's tail. The human wears an EEG -based brain-to-computer interface (BCI), while the anesthetised rat is equipped with a focused ultrasound (FUS) computer-to-brain interface (CBI). FUS is a technology that allows the researchers to excite a specific region of neurons in the rat's brain using an ultrasound signal (350 kHz ultrasound frequency, tone burst duration of 0.5 ms, pulse repetition frequency of 1 kHz, given for 300 ms duration). The main advantage of FUS is that, unlike most brain-stimulation techniques, it is non-invasive. Whenever the human looks at a specific pattern (strobe light flicker) on a computer screen, the BCI communicates a command to the rat's CBI, which causes ultrasound to be beamed into the region of the rat's motor cortex responsible for tail movement. The researchers report that the human BCI has an accuracy of 94%, and that it generally takes around 1.5 s from the human looking at the screen to movement of the rat's tail. [ 15 ] [ 16 ]
Another system that non-invasively controls rats uses ultrasonic , epidermal and LED photic stimulators mounted on the back. The system receives commands to deliver specified electrical stimulations to the hearing, pain and visual senses of the rat respectively, and the three stimuli work in combination to navigate the rat. [ 17 ]
Other researchers have dispensed with human remote control of rats and instead use a general regression neural network algorithm to analyse and model the control operations of human operators. [ 18 ]
Dogs are often used in disaster relief, at crime scenes and on the battlefield, but it is not always easy for them to hear the commands of their handlers. A command module containing a microprocessor , wireless radio, GPS receiver and an attitude and heading reference system (essentially a gyroscope ) can be fitted to a dog. The command module delivers vibration or sound commands (sent by the handler over the radio) to guide the dog in a certain direction or to have it perform certain actions. The overall success rate of the control system is 86.6%. [ 10 ]
Researchers responsible for developing remote control of a pigeon using brain implants conducted a similar successful experiment on mice in 2005. [ 9 ]
In 1967, Franz Huber pioneered electrical stimulation to the brain of insects and showed that mushroom body stimulation elicits complex behaviours, including the inhibition of locomotion. [ 19 ]
The US-based company Backyard Brains released the "RoboRoach", a remote-controlled cockroach kit that they refer to as "The world's first commercially available cyborg". The project started as a University of Michigan biomedical engineering student senior design project in 2010 [ 20 ] and was launched as an available beta product on 25 February 2011. [ 21 ] The RoboRoach was officially released into production via a TED talk at the TED Global conference [ 22 ] and via the crowdsourcing website Kickstarter in 2013. [ 23 ] The kit allows students to use microstimulation to momentarily control the movements of a walking cockroach (left and right) using a Bluetooth -enabled smartphone as the controller. The RoboRoach was the first kit available to the general public for the remote control of an animal and was funded by the United States ' National Institute of Mental Health as a device to serve as a teaching aid to promote an interest in neuroscience . [ 22 ] This funding was due to the similarities between the RoboRoach microstimulation and the microstimulation used in the treatments of Parkinson's disease ( Deep Brain Stimulation ) and deafness ( Cochlear implants ) in humans. Several animal welfare organizations, including the RSPCA [ 24 ] and PETA , [ 25 ] have expressed concerns about the ethics and welfare of animals in this project.
Another group at North Carolina State University has developed a remote control cockroach. Researchers at NCSU have programmed a path for cockroaches to follow while tracking their location with an Xbox Kinect . The system automatically adjusted the cockroach's movements to ensure it stayed on the prescribed path. [ 26 ]
In 2022, researchers led by RIKEN scientists reported the development of remote-controlled cyborg cockroaches that remain functional provided they are moved (or move) into sunlight to recharge. They could be used, for example, to inspect hazardous areas or to quickly find humans underneath hard-to-access rubble at disaster sites . [ 27 ] [ 6 ]
In 2009, remote control of the flight movements of Cotinus texana and the much larger Mecynorrhina torquata beetles was achieved during experiments funded by the Defense Advanced Research Projects Agency (DARPA). The weight of the electronics and battery meant that only Mecynorrhina was strong enough to fly freely under radio control. A specific series of pulses sent to the optic lobes of the insect encouraged it to take flight. The average length of flights was just 45 seconds, although one lasted for more than 30 minutes. A single pulse caused the beetle to land again. Stimulation of the basilar flight muscles allowed the controller to direct the insect left or right, although this was successful on only 75% of stimulations. After each maneuver, the beetles quickly righted themselves and continued flying parallel to the ground. In 2015, researchers were able to fine-tune beetle steering in flight by changing the pulse train applied to the wing-folding muscle. [ 30 ] [ 31 ] More recently, scientists from Nanyang Technological University, Singapore , have demonstrated graded turning and backward walking in a small darkling beetle ( Zophobas morio ), which is 2 cm to 2.5 cm long and weighs only 1 g including the electronic backpack and battery. [ 28 ] [ 32 ] It has been suggested that the beetles could be used for search and rescue missions; however, it has been noted that currently available batteries, solar cells and piezoelectrics that harvest energy from movement cannot provide enough power to run the electrodes and radio transmitters for very long. [ 3 ] [ 33 ]
Work using Drosophila has dispensed with stimulating electrodes and developed a 3-part remote control system that evokes action potentials in pre-specified Drosophila neurons using a laser beam. The central component of the remote control system is a Ligand-gated ion channel gated by ATP . When ATP is applied, uptake of external calcium is induced and action potentials generated. The remaining two parts of the remote control system include chemically caged ATP, which is injected into the central nervous system through the fly's simple eye, and laser light capable of uncaging the injected ATP. The giant fibre system in insects consists of a pair of large interneurons in the brain which can excite the insect flight and jump muscles. A 200 ms pulse of laser light elicited jumping, wing flapping, or other flight movements in 60%–80% of the flies. Although this frequency is lower than that observed with direct electrical stimulation of the giant fibre system, it is higher than that elicited by natural stimuli, such as a light-off stimulus. [ 19 ]
Spiny dogfish sharks have been remotely controlled by implanting electrodes deep in the shark's brain and connecting them to a remote control device outside the tank. When an electric current is passed through the wire, it stimulates the shark's sense of smell and the animal turns, just as it would move toward blood in the ocean. Stronger electrical signals—mimicking stronger smells—cause the shark to turn more sharply. One study was funded by a $600,000 grant from the Defense Advanced Research Projects Agency (DARPA). [ 34 ] It has been suggested that such sharks could search hostile waters with sensors that detect explosives, or cameras that record intelligence photographs. Outside the military, similar sensors could detect oil spills or gather data on the behaviour of sharks in their natural habitat. Scientists working with remote control sharks admit they are not sure exactly which neurons they are stimulating, and therefore they cannot always control the shark's direction reliably. The sharks only respond after some training, and some sharks do not respond at all. The research has prompted protests from bloggers who allude to remote controlled humans or horror films featuring maniacal cyborg sharks on a feeding frenzy. [ 8 ]
An alternative technique was to use small gadgets attached to the shark's noses that released squid juice on demand. [ 10 ]
South Korean researchers have remotely controlled the movements of a turtle using a completely non-invasive steering system. Red-eared terrapins ( Trachemys scripta elegans ) were made to follow a specific path by manipulating the turtles' natural obstacle avoidance behaviour. If these turtles detect something is blocking their path in one direction, they move to avoid it. The researchers attached a black half cylinder to the turtle. The "visor" was positioned around the turtle's rear end, but was pivoted around using a microcontroller and a servo motor to either the left or right to partially block the turtle's vision on one side. This made the turtle believe there was an obstacle it needed to avoid on that side and thereby encouraged the turtle to move in the other direction. [ 10 ]
Some animals have had parts of their bodies remotely controlled, rather than their entire bodies. Researchers in China stimulated the mesencephalon of geckos ( G. gecko ) via micro stainless steel electrodes and observed the gecko's responses during stimulation. Locomotion responses such as spinal bending and limb movements could be elicited in different depths of mesencephalon. Stimulation of the periaqueductal gray area elicited ipsilateral spinal bending while stimulation of the ventral tegmental area elicited contralateral spinal bending. [ 35 ]
In 2007, researchers at east China's Shandong University of Science and Technology implanted micro electrodes in the brain of a pigeon so they could remotely control it to fly right or left, or up or down. [ 9 ]
Remote-controlled animals are considered to have several potential uses, replacing the need for humans in some dangerous situations. Their application is further widened if they are equipped with additional electronic devices. Small creatures fitted with cameras and other sensors have been proposed as being useful when searching for survivors after a building has collapsed, with cockroaches or rats being small and maneuverable enough to go under rubble. [ 5 ] [ 7 ]
There have been a number of suggested military uses of remote controlled animals, particularly in the area of surveillance. [ 7 ] [ 8 ] Remote-controlled dogfish sharks have been likened to the studies into the use of military dolphins . [ 8 ] It has also been proposed that remote-controlled rats could be used for the clearing of land mines. [ 7 ] Other suggested fields of application include pest control, the mapping of underground areas, and the study of animal behaviour. [ 7 ] [ 8 ]
Development of robots that are capable of performing the same actions as controlled animals is often technologically difficult and cost-prohibitive. [ 7 ] Flight is very difficult to replicate while having an acceptable payload and flight duration. Harnessing insects and using their natural flying ability gives significant improvements in performance. [ 33 ] The availability of "inexpensive, organic substitutes" therefore allows for the development of small, controllable robots that are otherwise currently unavailable. [ 7 ]
Some animals are remotely controlled, but rather than being directed to move left or right, the animal is prevented from moving forward, or its behaviour is modified in other ways.
Shock collars deliver electrical shocks of varying intensity and duration to the neck or other area of a dog's body via a radio-controlled electronic device incorporated into a dog collar. Some collar models also include a tone or vibration setting, as an alternative to or in conjunction with the shock. Shock collars are now readily available and have been used in a range of applications, including behavioural modification, obedience training, and pet containment, as well as in military, police and service training. While similar systems are available for other animals, the most common are the collars designed for domestic dogs.
The use of shock collars is controversial and scientific evidence for their safety and efficacy is mixed. [ citation needed ] A few countries have enacted bans or controls on their use. Some animal welfare organizations warn against their use or actively support a ban on their use or sale. [ citation needed ] Some want restrictions placed on their sale. Some professional dog trainers and their organizations oppose their use and some support them. Support for their use or calls for bans from the general public is mixed.
In 2007, it was reported that scientists at the Commonwealth Scientific and Industrial Research Organisation had developed a prototype "invisible fence" using the Global Positioning System (GPS) in a project nicknamed Bovines Without Borders. The system uses battery-powered collars that emit a sound to warn cattle when they are approaching a virtual boundary; if a cow continues toward the boundary, it receives an electric shock of 250 milliwatts . The boundaries are drawn by GPS and exist only as a line on a computer; there are no wires or fixed transmitters at all. The cattle took less than an hour to learn to back off when they heard the warning noise. The scientists indicated that commercial units were up to 10 years away. [ 36 ]
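The escalation logic of such a virtual fence can be illustrated with a short sketch: compute the collar's GPS distance from the fence line and step from silence to a warning tone to a brief shock. The circular boundary, coordinates and thresholds below are invented purely for illustration.

```python
# Illustrative virtual-fence sketch: the paddock boundary is modeled as a
# circle around a center point, with distances from the haversine formula.
# All coordinates, radii and thresholds are invented for demonstration.
import math

CENTER = (-35.3081, 149.1244)   # hypothetical paddock center (lat, lon)
BOUNDARY_M = 500.0              # virtual fence radius in meters
WARN_BAND_M = 20.0              # start the warning tone this close to the fence

def haversine_m(p, q):
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371000.0 * 2 * math.asin(math.sqrt(a))

def collar_action(position):
    d = haversine_m(CENTER, position)
    if d >= BOUNDARY_M:
        return "shock"          # the animal has crossed the virtual line
    if d >= BOUNDARY_M - WARN_BAND_M:
        return "warning_tone"   # approaching the boundary
    return "silent"

print(collar_action(CENTER))    # "silent": the animal is at the center
```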
Another type of invisible fence uses a buried wire that sends radio signals to activate shock collars worn by animals that are "fenced" in. The system works with three signals. The first is visual (white plastic flags spaced at intervals around the perimeter of the fenced-in area), the second is audible (the collar emits a sound when the animal wearing it approaches the buried cable), and the third is an electric shock indicating that the animal has reached the fence. [ 37 ]
Other invisible fences are wireless. Rather than using a buried wire, they emit a radio signal from a central unit, and activate when the animal travels beyond a certain radius from the unit. | https://en.wikipedia.org/wiki/Remote_control_animal |
Remote error indication ( REI ) or formerly far end block error (FEBE) is an alarm signal used in synchronous optical networking (SONET). It indicates to the transmitting node that the receiver has detected a block error. [ 1 ]
REI or FEBE errors are most often seen on DS3 circuits; however, they also occur on other circuit types (SONET, T1, etc.).
Each terminating device ( router or otherwise) monitors the incoming signal for CP-bit path errors. If an error is detected on the incoming DS3, the terminating elements transmit a FEBE bit on the outgoing direction of the DS3. Network monitoring equipment located anywhere along the path then measures these FEBEs in each direction to gauge the quality of the circuit while in service.
Consider, for example, a DS3 running from New York to Atlanta with a problem within one of the central offices in Virginia. The errors are generated by a device in that central office and detected by the terminating device (a NID, M13 mux or router). The terminating device then sends the FEBE error signal outbound to alert other devices that there were problems.
Errors are thus generated on the incoming side of the loop, picked up by the device terminating that end, and reported as FEBE errors on the outgoing side. This indirect style of error reporting is a common source of confusion for technicians trying to perform repairs.
Technical jargon:
An error detected by extracting the 4-bit FEBE field from the path status byte (G1). The legal range for the 4-bit field is between 0000 and 1000, representing zero to eight errors. Any other value is interpreted as zero errors.
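That interpretation rule can be sketched in a few lines. The placement of the REI/FEBE count in the four most significant bits of the G1 byte follows a common description of the SONET path overhead; treat it as an assumption here rather than a normative reference.

```python
# Sketch: interpret the 4-bit REI/FEBE count carried in the G1 path
# status byte. Placing the field in the high nibble is an assumption
# based on common descriptions of SONET path overhead.
def febe_count(g1_byte: int) -> int:
    field = (g1_byte >> 4) & 0x0F      # extract the 4-bit FEBE field
    # Legal values 0..8 report that many block errors; any other
    # value is interpreted as zero errors.
    return field if field <= 8 else 0

assert febe_count(0b00110000) == 3     # three block errors reported
assert febe_count(0b11110000) == 0     # out-of-range value -> zero errors
```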
The DS-3 M-frame uses P bits to check the line parity. The M-subframe uses C bits in a format called C-bit parity, which copies the result of the P bits at the source and checks the result at the destination. An ATM interface reports detected C-bit parity errors back to the source via a far-end block error (FEBE).
An indication sent to a transmitting node that a flawed block has been detected at the receiving node. (DS3) A FEBE in C-bit parity is a parity violation detected at the far-end terminal and transmitted back to the near-end terminal.
In ATM, a maintenance cell indicates that an error occurred with a data block at the far end of the link; a message is then sent back to the near end. | https://en.wikipedia.org/wiki/Remote_error_indication
A remote keyless system ( RKS ), also known as remote keyless entry ( RKE ) or remote central locking , is an electronic lock that controls access to a building or vehicle by using an electronic remote control (activated by a handheld device or automatically by proximity). [ 1 ] RKS largely and quickly superseded keyless entry , a budding technology that restrictively bound locking and unlocking functions to vehicle-mounted keypads.
Widely used in automobiles, an RKS performs the functions of a standard car key without physical contact. When within a few yards of the car, pressing a button on the remote can lock or unlock the doors, and may perform other functions.
A remote keyless system can include both remote keyless entry (RKE), which unlocks the doors, and remote keyless ignition (RKI), which starts the engine.
Numerous manufacturers have offered entry systems that use door- or pillar-mounted keypad entry systems ; touchless passive entry / smart key systems that allow a key to remain pocketed; and PAAK (Phone as a Key) systems.
Remote keyless entry was patented in 1981 by Paul Lipschutz, who worked for Nieman (a supplier of security components to the car industry) and had developed a number of automotive security devices. His electrically actuated lock system, submitted in 1979 and patented in 1981, was controlled by a handheld fob that streamed infrared data using a "coded pulse signal generator and battery-powered infra-red radiation emitter." In some geographic areas, the system is called a PLIP system, or Plipper, after Lipschutz. Infrared technology was superseded in 1995 when a European frequency was standardised. [ 2 ] [ 3 ]
Remote keyless systems using a handheld transmitter first appeared on the French-made Renault Fuego in 1982, [ 4 ] and as an option on several American Motors vehicles in 1983, including the Renault Alliance . The feature gained its first widespread availability in the U.S. on several General Motors vehicles in 1989. [ citation needed ]
Keyless remotes contain a short-range radio transmitter , and must be within a certain range, usually 5–20 meters, of the car to work. When a button is pushed, it sends a coded signal by radio waves to a receiver unit in the car, which locks or unlocks the door. Most RKEs operate at a frequency of 315 MHz for North America-made cars and at 433.92 MHz for European, Japanese and Asian cars. Modern systems since the mid-1990s implement encryption as well as rotating entry codes to prevent car thieves from intercepting and spoofing the signal. [ 5 ] Earlier systems used infrared instead of radio signals to unlock the vehicle, such as systems found on Mercedes-Benz, [ 6 ] BMW [ 7 ] and other manufacturers.
The system confirms locked or unlocked status through discreet signaling by the lights, horn, or both. A vehicle might use a chirp system: two beeps on driver-door unlocking, four beeps on unlocking of all doors, a long beep for the trunk or power tailgate, or a short beep on locking and arming of the alarm.
The functions of a remote keyless entry system are contained on a key fob or built into the ignition key handle itself. Buttons are dedicated to locking or unlocking the doors and opening the trunk or tailgate. On some minivans, the power sliding doors can be opened/closed remotely. Some cars will also close any open windows and roof when remotely locking the car. Some remote keyless fobs also feature a red panic button which activates the car alarm as a standard feature. Further adding to the convenience, some cars with remote keyless ignition systems can have their engines started by the push of a button on the key fob (useful in cold weather), and convertible tops can be raised and lowered from outside the vehicle while it is parked.
On cars where the trunk release is electronically operated, it can be triggered to open by a button on the remote. Conventionally, the trunk springs open with the help of hydraulic struts or torsion springs , and thereafter must be lowered manually. Premium models, such as SUVs and estates with tailgates, may have a motorized assist that can both open and close the tailgate for easy access and remote operation.
For offices, or residences, the system can also be coupled with the security system, garage door opener or remotely activated lighting devices.
Remote keyless entry fobs emit a radio frequency with a designated, distinct digital identity code. Inasmuch as "programming" fobs is a proprietary technical process, it is typically performed by the automobile manufacturer. In general, the procedure is to put the car computer in 'programming mode'. This usually entails engaging the power in the car several times while holding a button or lever. It may also include opening doors, or removing fuses . The procedure varies amongst various makes, models, and years. Once in 'programming mode' one or more of the fob buttons is depressed to send the digital identity code to the car's onboard computer. The computer saves the code and the car is then taken out of programming mode.
As RKS fobs have become more prevalent in the automobile industry a secondary market of unprogrammed devices has sprung up. Some websites sell steps to program fobs for individual models of cars as well as accessory kits to remotely activate other car devices.
Early (1998–2012) keyless entry remotes could be individually programmed by the user, by pressing a button on the remote and starting the vehicle. Newer (2013+) keyless entry remotes, however, require dealership or locksmith programming via a computer with special software . Infrared keyless entry systems generally offered user programming, while radio frequency keyless entry systems mostly require dealer programming.
Some cars feature a passive keyless entry system. Their primary distinction is the ability to lock/unlock (and later iterations allow starting) the vehicle without any input from the user.
General Motors pioneered this technology with the Passive Keyless Entry (PKE) system in the 1993 Chevrolet Corvette. It featured passive locking/unlocking, but traditional keyed starting of the vehicle.
Today, passive systems are commonly found on a variety of vehicles, and although the exact method of operation differs between makes and models, their operation is generally similar: a vehicle can be unlocked without the driver needing to physically push a button on the key fob to lock or unlock the car. Additionally, some are able to start or stop the vehicle without physically having to insert a key.
Keyless ignition does not by default provide better security. In October 2014, it was found that some insurers in the United Kingdom would not insure certain vehicles with keyless ignition unless there were additional mechanical locks in place due to weaknesses in the keyless system. [ 8 ]
A security concern with any remote entry system is a spoofing technique called a replay attack , in which a thief records the signal sent by the key fob using a specialized receiver called a code grabber, and later replays it to open the door. To prevent this, the key fob does not use the same unlock code each time but a rolling code system: it contains a pseudorandom number generator which transmits a different code on each use. [ 9 ] The car's receiver has another pseudorandom number generator, synchronized to the fob, to recognise the code. To prevent a thief from simulating the pseudorandom number generator, the fob encrypts the code.
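A toy sketch of this synchronization logic is shown below, assuming a shared secret and an incrementing counter with a receiver-side look-ahead window (as in KeeLoq-style schemes). The HMAC here is a stand-in for demonstration; real fobs use dedicated lightweight ciphers, and the secret, window size and code length are invented.

```python
# Toy rolling-code sketch: fob and car derive each code from a shared
# secret and an incrementing counter; the receiver accepts codes within
# a look-ahead window to tolerate button presses made out of range.
# HMAC-SHA256 is a stand-in, not the cipher real fobs use.
import hashlib
import hmac

SECRET = b"paired-at-manufacture"   # hypothetical shared pairing secret
WINDOW = 16                         # receiver look-ahead window

def code_for(counter: int) -> bytes:
    msg = counter.to_bytes(8, "big")
    return hmac.new(SECRET, msg, hashlib.sha256).digest()[:4]

class Receiver:
    def __init__(self):
        self.counter = 0
    def try_unlock(self, code: bytes) -> bool:
        for c in range(self.counter + 1, self.counter + 1 + WINDOW):
            if hmac.compare_digest(code, code_for(c)):
                self.counter = c        # resynchronize on success
                return True
        return False                    # replayed or out-of-window code

car = Receiver()
press = code_for(1)                     # fob's first button press
assert car.try_unlock(press) is True    # accepted and counter advanced
assert car.try_unlock(press) is False   # replaying the same code fails
```

Because the receiver never accepts a code at or below its current counter, a grabbed code becomes useless once the legitimate owner's next press is heard, which is the property the jam-and-replay attack described below tries to circumvent.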
News media have reported cases where it is suspected that criminals managed to open cars by using radio repeaters to trick vehicles into thinking that their keyless entry fobs were close by even when they were far away ( relay attack ), [ 10 ] though they have not reported that any such devices have been found. The articles speculate that keeping fobs in aluminum foil or a freezer when not in use can prevent criminals from exploiting this vulnerability. [ 11 ]
In 2015, it was reported that Samy Kamkar had built an inexpensive electronic device about the size of a wallet that could be concealed on or near a locked vehicle to capture a single keyless entry code to be used at a later time to unlock the vehicle. The device transmits a jamming signal to block the vehicle's reception of rolling code signals from the owner's fob, while recording these signals from both of his two attempts needed to unlock the vehicle. The recorded first code is sent to the vehicle only when the owner makes the second attempt, while the recorded second code is retained for future use. Kamkar stated that this vulnerability had been widely known for years to be present in many vehicle types but was previously undemonstrated. [ 12 ] A demonstration was done during DEF CON 23. [ 13 ]
Actual thefts targeting luxury cars based on the above exploit have been reported when the key fob is near the front of the home. Several workarounds can prevent such exploits, including placing the key fob in a tin box. [ 14 ] [ 15 ] A criminal ring stole about 100 vehicles using this technique in Southern and Eastern Ontario. [ 16 ]
Prior to remote keyless systems (RKS), several manufacturers offered keypad systems which did not allow "remote entry" per se, but allowed a user to enter a vehicle without a key by entering a code on a multi-button keypad on the driver door or pillar to unlock the driver door. Subsequent code presses could unlock all doors or the trunk, or lock the vehicle from the outside.
A keypad system can enable tiered or time-restricted permissions, e.g., a code allowing access to the vehicle but not starting of the engine. The code, while factory programmed, can subsequently be user programmed and, if shared, can easily be changed to prevent subsequent vehicle access. The system also allows a user to leave the ignition key in the vehicle for later retrieval, including by another user sharing a unique entry code. Two hikers, for example, can leave the keys in the glove box, lock the door, and either hiker can return later to access the vehicle via their own code. The keypad also allows a user to walk away from a running vehicle, e.g., to warm up the vehicle in cold weather, returning to unlock the vehicle via the keypad.
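How such tiered, user-changeable codes might be modeled is sketched below; the code values, categories and permission rules are invented for illustration and do not describe any manufacturer's implementation.

```python
# Illustrative sketch of tiered keypad permissions: a factory code with
# full access and a shareable user code that opens doors only. All
# values and rules here are invented for demonstration.
from dataclasses import dataclass

@dataclass
class KeypadCode:
    digits: str
    allow_entry: bool = True
    allow_start: bool = False   # e.g. a shared code: doors only

codes = {
    "13579": KeypadCode("13579", allow_start=True),   # factory master code
    "24680": KeypadCode("24680"),                     # user-set, entry-only
}

def request(action: str, entered: str) -> bool:
    code = codes.get(entered)
    if code is None:
        return False                 # unknown code: deny everything
    return code.allow_entry if action == "unlock" else code.allow_start

assert request("unlock", "24680") is True
assert request("start", "24680") is False   # entry-only code cannot start
assert request("start", "13579") is True
```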
Ford introduced its proprietary keypad system with physical buttons for model year 1980, on the Ford Thunderbird , Mercury Cougar , Lincoln Continental Mark VI , and Lincoln Town Car , marketed initially as the Keyless Entry System, later as SecuriCode, and most recently as the SecuriCode Invisible, in which a capacitive touch pad replaces the physical buttons and illuminates on contact, remaining otherwise hidden. Because of its unique access features and its popularity, Ford offered a keypad system on 90% of its vehicles as of 2019 [ 17 ] and continues to offer it as of 2025, more than forty years after its introduction. Notably, Ford's other systems have not displaced the keypad system; rather, Ford continues to offer it alongside its fob-operated RKS, its passive entry/smart key systems, [ 18 ] and most recently its app-driven "Phone as a Key" systems.
The sixth generation Buick Electra (1985-1991) featured a sill-mounted keypad for model years 1985-1988, superseded in 1989 by GM's remote keyless system.
Nissan offered a keypad system on the 1984 Maxima , Fairlady , Gloria and Cedric , using essentially the same approach as Ford. In addition, keypads installed on the door handles of both the driver's and front passenger's doors could roll the windows down and open the optional moonroof from outside the vehicle, as well as roll the windows up, close the moonroof and lock the vehicle. | https://en.wikipedia.org/wiki/Remote_keyless_system |
Remote laboratory (also known as online laboratory or remote workbench ) is the use of telecommunications to remotely conduct real (as opposed to virtual ) experiments at the physical location of the operating technology, while the scientist uses that technology from a separate geographical location. A remote laboratory comprises one or more remote experiments . [ 1 ]
The benefits of remote laboratories are predominantly in engineering education : [ 2 ]
The disadvantages differ depending on the type of remote laboratory and the topic area.
The general disadvantages compared to a proximal (hands-on) laboratory are:
Current system capabilities include:
For India's virtual labs project, see Virtual Labs (India). For the online project "Virtual Laboratory. Essays and Resources on the Experimentalization of Life, 1830-1930," see Virtual Laboratory. | https://en.wikipedia.org/wiki/Remote_laboratory |
A remote plasma (also downstream plasma or afterglow plasma ) is a plasma processing method in which the plasma and material interaction occurs at a location remote from the plasma in the plasma afterglow . [ 1 ] [ 2 ]
| https://en.wikipedia.org/wiki/Remote_plasma |
Remote recording , also known as location recording , is the act of making a high-quality complex audio recording of a live concert performance, or any other location recording that uses multitrack recording techniques outside of a recording studio . [ 1 ] The multitrack recording is then carefully mixed , and the finished result is called a remote recording or a live album . This is in contrast to a field recording which uses few microphones, recorded onto the same number of channels as the intended product. Remote recording is not the same as remote broadcast for which multiple microphones are mixed live and broadcast during the performance, typically to stereo. Remote recording and remote broadcast may be carried out simultaneously by the same crew using the same microphones.
One important benefit of a remote recording is that the performers will respond to the audience; they will not be as distracted by the recording process. [ 2 ] Another reason for a remote recording is to capture an artist in a different acoustic space such as a church, ballroom or meeting hall. [ 3 ]
To make a remote recording, studio-quality recording equipment is trucked to the concert venue and connected to the concert microphones with a bank of microphone splitters . Other microphones may be added. The individual microphone signals are routed to separate tracks. [ 4 ]
A remote recording is often made using a specially built remote truck : a rolling recording studio carrying a mixing console , studio monitors and multitrack recorders. Beginning modestly in 1958, recording engineer Wally Heider developed and popularized the use of a remote truck in California in the mid-1960s and throughout the 1970s. [ 5 ]
Remote recording developed out of the practice of making field recordings with high-quality equipment. The earliest such recordings were crude, undertaken in the 1920s and 1930s, beginning with Ralph Peer in 1923. Peer carried a disc-cutting machine and recorded musicians directly to disc. From 1941 Alan Lomax became known for the field recordings he made of the various musical traditions carried to or created in the United States. In the 1950s, advances in microphones, mixers and tape recorders allowed more sophisticated equipment to be carried to a concert location, including more microphones, tape recorders with more tracks, and possibly a mixing console to mix multiple microphones down to fewer recorded tracks.
Not all remote recordings were well received by the public. For instance, in 1963 Chess Records hauled their monaural tape recorder to Myrtle Beach, South Carolina , to capture the Fourth of July weekend concerts including Bo Diddley 's electrifying performance in front of 2,000 excited fans. The resulting album, Bo Diddley's Beach Party , did not sell well in the U.S. [ 6 ]
In 1958, American recording engineer Wally Heider mounted recording equipment in a truck, reportedly the first to do so. The next year, engineer Reice Hamel did the same. Both men used new techniques, bringing many microphones to a concert and mixing the performance as it happened—in the manner of a remote broadcast—recording onto stereo tape recorders for release as stereo and mono records. Hamel's first truck grew from simple to more complex in the first seven years. He started with stereo, obtained a three-track machine on which he taped a Barbra Streisand concert, then in 1965 he configured the truck as a complete recording studio. In 1966 he installed a four-track machine, then moved to eight-track, and by 1971 was recording on sixteen tracks. [ 7 ]
Many of Heider's recordings became hits or critical successes. One of them is the classic album Live in Concert by Ray Charles , captured in 1964 at the Shrine Auditorium in Los Angeles. [ 8 ] Heider recorded the Monterey Pop Festival in 1967; [ 9 ] [ 10 ] [ 11 ] its many musical acts and the increasing importance of high quality sound for a concert film signaled a major shift in scale and importance for the remote truck operator. After that, other recording studios assembled their own remote recording trucks and more concerts were saved on multitrack tapes. The Woodstock Festival was recorded on 12-track by a remote truck and then mixed at the Record Plant studio in New York. [ 12 ] In August 1971, the Record Plant used its first remote truck to make its first remote recording, The Concert for Bangladesh held at Madison Square Garden . [ 13 ] In preparation for the concert, Record Plant co-founder Chris Stone said that remote recording had several key advantages over studio recording: "It is really not as expensive as studio time when one considers that the concert is two hours long, perhaps twice a night for two days. It is a spontaneous music that is recorded live. This makes it more flavorable [ sic ]. And it is usually easier on the musician, who gets paid for the concert and gets the recording done for his next LP at the same time. Everyone wins." [ 7 ] | https://en.wikipedia.org/wiki/Remote_recording |
Remote service software is used by equipment manufacturers to remotely monitor, access and repair products in use at customer sites. It is a secure, auditable gateway for service teams to troubleshoot problems, perform proactive maintenance, assist with user operations and monitor performance. This technology is typically implemented in mission-critical environments like hospitals or IT data centers – where equipment downtime is intolerable.
Remote service software helps to:
Manufacturers are using aftermarket service as a competitive differentiator. Remote service software provides a platform for manufacturers to offer and meet stringent service level agreements (SLAs) without increasing the size of their service teams. | https://en.wikipedia.org/wiki/Remote_service_software |
RemoveDEBRIS was a satellite research project intending to demonstrate various space debris removal technologies. The mission was led by the Surrey Space Centre from the University of Surrey with the satellite's platform manufactured by Surrey Satellite Technology Ltd (SSTL). Partners on the project included Airbus , ArianeGroup , Swiss Center for Electronics and Microtechnology , Inria , Innovative Solutions In Space , Surrey Space Centre , and Stellenbosch University .
Rather than engaging in active debris removal (ADR) of real space debris, the RemoveDEBRIS mission plan was to test the efficacy of several ADR technologies on mock targets in low Earth orbit . In order to complete its planned experiments the platform was equipped with a net, a harpoon, a laser ranging instrument, a dragsail, and two CubeSats (miniature research satellites). [ 3 ]
The experiments were as follows:
The RemoveDEBRIS platform was based on a SSTL X50 bus that had been customised for deployment from the International Space Station. The platform hosted all the experimental payloads as well as providing power, data and control for the mission. A high degree of autonomy was built in using time-tagged commands to allow experiments to be run out of sight of the groundstation. [ 5 ]
The DebrisSat 1 (DS-1, aka REMDEB-NET, COSPAR 1998-067PM) was built by engineers and students at the University of Surrey and was based on a 2U CubeSat measuring 100 × 100 × 227 mm. 1U of the satellite contained the power and avionics needed to operate the payload. The payload contained an inflatable structure designed to provide a large target area for the net experiment. A Cold Gas Generator (CGG) was used to inflate six aluminium booms to provide a frame. Small aluminium sails attached to the ends of the booms then deployed during the inflation. [ 5 ] DebrisSat 1 decayed from orbit on 2 March 2019. [ 6 ]
The DebrisSat 2 (DS-2, aka REMDEB-DS2, COSPAR: 1998-067PR) was also based on a 2U CubeSat, with two deployable solar panels and communications. The spacecraft contained a GPS receiver as well as an inter-satellite link to provide location and attitude data back to the platform to assess the VBN camera performance. The avionics were based on the QB50 avionics stack developed by the Surrey Space Centre and the Electronic Systems Laboratory (ESL) at Stellenbosch University. In addition, the spacecraft tested a low-cost UART camera which was able to beam pictures back to the platform as it separated. [ 5 ] DebrisSat 2 deorbited on 30 May 2020. [ 7 ]
After final end-to-end system and environmental testing, the RemoveDebris spacecraft was shipped to Nanoracks in Houston and then on to the launch site at the Kennedy Space Center in Florida. The spacecraft was placed in an ISS cargo transfer bag and stowed in the pressurised section of the CRS-14 SpaceX Dragon 1 spacecraft. The Dragon resupply mission with RemoveDEBRIS onboard was launched 2 April 2018, arriving at the ISS on 4 April. [ 8 ]
The RemoveDebris spacecraft was unloaded from the capsule. NASA Astronauts Drew Feustel and Ricky Arnold removed the platform handling panels, completed final preparation and loaded the satellite into the Japanese Experiment Module (JEM) airlock on 6 June 2018. An airlock cycle was performed on 19 June 2018 and RemoveDEBRIS moved outside the JEM via the airlock slide table. The spacecraft was grasped by the Kaber interface on the Mobile Servicing System Special Purpose Dexterous Manipulator (MSS SPDM) and placed in the deployment position. [ 9 ]
Deployment of the satellite from the station's Kibo module via robotic Canadarm-2 took place on 20 June 2018. [ 4 ] [ 10 ] At approximately 100 kg, RemoveDEBRIS was the largest satellite to have ever been deployed from the ISS. [ 11 ] The platform contained two CubeSat deployers from ISISPACE . The full lifespan of the mission from launch to re-entry was estimated at 1.5 years. [ 12 ]
On 16 September 2018, it demonstrated its ability to use a net to capture a deployed simulated target. [ 13 ] [ 14 ]
On 28 October 2018, DebrisSat 2 was deployed at 06:15 UTC. The VBN camera on the platform took 361 images of the spacecraft, crucial to determining the performance of the camera system. Position and attitude data from DebrisSat 2 were transmitted back to the platform, providing ground truth for the experiment. DebrisSat 2 also forwarded low-resolution photos of the deployment to the platform from its own vantage point. [ 15 ]
On 8 February 2019, SSTL demonstrated the RemoveDEBRIS harpoon which was fired at a speed of 20 metres per second penetrating a simulated target extended from the satellite on a 1.5 m (4 ft 11 in) boom. [ 16 ]
The deployment of the dragsail was targeted for 4 March 2019. After the deploy command had been sent, none of the expected changes in spacecraft behaviour were detected. After an investigation it was determined that the most likely cause was a partial or failed deployment of the inflatable boom, which prevented the sail from deploying. Lessons learnt from this attempt were put into practice for two new dragsails that were deployed on the Spaceflight SSO-A mission. [ 15 ] | https://en.wikipedia.org/wiki/RemoveDEBRIS |
Remove before flight is a safety warning often seen on removable aircraft and spacecraft components, typically in the form of a red ribbon, to indicate that a device, such as a protective cover or a pin to prevent the movement of mechanical parts, is only used when the aircraft is on the ground (parked or taxiing ). On small general aviation aircraft, this may include a pitot tube cover or a control lock. The warning appears in English only. Other ribbons labelled "pull to arm" or similar are found on missiles and other weapon systems that are not mounted on aircraft.
Remove-before-flight components are often referred to as "red tag items". Typically, the ground crew will have a checklist of remove-before-flight items. Some checklists will require the ribbon or tag to be attached to the checklist to verify it has been removed. Non-removal of a labelled part has caused airplane crashes , like that of Aeroperú Flight 603 and, in 1975, a Royal Nepal Airlines Pilatus PC-6 Porter carrying the wife and daughter of Sir Edmund Hillary . [ 1 ]
Red tag items typically include:
| https://en.wikipedia.org/wiki/Remove_before_flight |
In computing, rename refers to changing the name of a file. This can be done manually by using a shell command such as ren or mv , or by using batch renaming software that can automate the renaming process.
The C standard library provides a function called rename which performs this action. [ 1 ] In POSIX , which extends the C standard, the rename function will fail if the old and new names are on different mounted file systems . [ 2 ]
In SQL , renames are performed with ALTER TABLE statements, for example using the CHANGE specification in MySQL.
In POSIX , a successful call to rename is guaranteed to have been atomic from the point of view of the current host (i.e., another program would only see the file with the old name or the file with the new name, not both or neither of them). This aspect is often used during a file save operation to avoid any possibility of the file contents being lost if the save operation is interrupted.
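As a concrete illustration of this save pattern, here is a minimal Python sketch; the file names are arbitrary. Python's os.replace() is implemented with rename() on POSIX systems.

```python
import os

def atomic_save(path: str, data: bytes) -> None:
    """Write data so readers see the old file or the new one, never a mix."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # push the new contents to stable storage first
    os.replace(tmp, path)     # the atomic rename; replaces any existing file

atomic_save("settings.conf", b"volume=11\n")
```

If the process crashes before the final rename, the original file is untouched and only a stray temporary file remains. On Windows, CPython implements os.replace() with MoveFileEx rather than the C library's rename, which is why it can still replace an existing destination there.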
The rename function from the C library in Windows does not implement the POSIX atomic behaviour; instead it fails if the destination file already exists. However, other calls in the Windows API do implement the atomic behaviour [ citation needed ] .
| https://en.wikipedia.org/wiki/Rename_(computing) |
Renard series are a system of preferred numbers dividing an interval from 1 to 10 into 5, 10, 20, or 40 steps. [ 1 ] This set of preferred numbers was proposed ca. 1877 by French army engineer Colonel Charles Renard [ 2 ] [ 3 ] [ 4 ] and reportedly published in an 1886 instruction for captive balloon troops, thus receiving its current name in the 1920s. [ 5 ] His system was adopted by the ISO in 1949 [ 6 ] to form the ISO Recommendation R3 , first published in 1953 [ 7 ] or 1954, which evolved into the international standard ISO 3 . [ 1 ] The factor between two consecutive numbers in a Renard series is approximately constant (before rounding), namely the 5th, 10th, 20th, or 40th root of 10 (approximately 1.58, 1.26, 1.12, and 1.06, respectively), which leads to a geometric sequence . This way, the maximum relative error is minimized if an arbitrary number is replaced by the nearest Renard number multiplied by the appropriate power of 10. One application of the Renard series of numbers is the current rating of electric fuses . Another common use is the voltage rating of capacitors (e.g. 100 V, 160 V, 250 V, 400 V, 630 V).
The most basic R5 series consists of five rounded numbers, which are powers of the fifth root of 10 rounded to two significant digits: 1, 1.6, 2.5, 4 and 6.3. The Renard numbers are not always the closest three-digit rounding of the theoretical geometric sequence.
If a finer resolution is needed, another five numbers are added to the series, one after each of the original R5 numbers, and one ends up with the R10 series. These are rounded to a multiple of 0.05. Where an even finer grading is needed, the R20, R40, and R80 series can be applied. The R20 series is usually rounded to a multiple of 0.05, and the R40 and R80 values interpolate between the R20 values, rather than being powers of the 80th root of 10 rounded correctly. In the table below, the additional R80 values are written to the right of the R40 values in the column named "R80 add'l". The R40 numbers 3.00 and 6.00 are higher than they "should" be by interpolation, in order to give rounder numbers.
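As a sketch of where the unrounded values come from, the i-th value of an Rn series is 10^(i/n), with the ISO rounding then applied per series. The snippet below is illustrative and not part of ISO 3; it prints the raw R5 and R10 terms and, for comparison, the exactly doubling binary variant discussed at the end of this article.

```python
def renard_raw(n: int) -> list[float]:
    """Theoretical (unrounded) Rn values: successive n-th roots of 10."""
    return [10 ** (i / n) for i in range(n)]

print([round(x, 3) for x in renard_raw(5)])
# [1.0, 1.585, 2.512, 3.981, 6.31]   -> rounded in R5 to 1, 1.6, 2.5, 4, 6.3

print([round(x, 3) for x in renard_raw(10)])
# [1.0, 1.259, 1.585, 1.995, 2.512, 3.162, 3.981, 5.012, 6.31, 7.943]

# Binary variant: (2 ** (1/3)) ** i tracks R10 closely, and doubling any
# member yields another member exactly; ten steps reach 10.08 instead of 10.
print([round(2 ** (i / 3), 3) for i in range(10)])
# [1.0, 1.26, 1.587, 2.0, 2.52, 3.175, 4.0, 5.04, 6.35, 8.0]
```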
In some applications more rounded values are desirable, either because the numbers from the normal series would imply an unrealistically high accuracy, or because an integer value is needed (e.g., the number of teeth in a gear). For these needs, more rounded versions of the Renard series have been defined in ISO 3. In the table below, rounded values that differ from their less rounded counterparts are shown in bold.
As the Renard numbers repeat after every 10-fold change of the scale, they are particularly well-suited for use with SI units. It makes no difference whether the Renard numbers are used with metres or millimetres . But one would need to use an appropriate number base to avoid ending up with two incompatible sets of nicely spaced dimensions if, for instance, they were applied with both inches and feet . In the case of inches and feet a root of 12 would be desirable, that is, 12^(1/n), where n is the desired number of divisions within the major step size of twelve. Similarly, a base of two, eight, or sixteen would fit nicely with the binary units commonly found in computer science.
Each of the Renard sequences can be reduced to a subset by taking every n th value in a series, which is designated by adding the number n after a slash. [ 4 ] For example, "R″10/3 (1…1000)" designates a series consisting of every third value in the R″10 series from 1 to 1000, that is, 1, 2, 4, 8, 15, 30, 60, 120, 250, 500, 1000.
Narrowing the series in this way suggests the opposite idea: deepening the series and redefining it by a strict, simple formula. Since the subset {1, 2, 4, 8, ...} seen above is binary, the R10 series can be reformulated as R10 ≈ bR3 = (³√2)^n, generated from the three values of its first period by periodicity alone. Rounding is thereby eliminated: the three values of the first period repeat, each multiplied by 2, so any member of the series multiplied by 2 is again a member. The usual drawback is that the product of ten such steps is shifted slightly: instead of the decadic 1000, the binary 1024 appears, as is classic in computing. Doubling is possible within the standard R10 series as well, but because its values are rounded it holds only approximately (for example, 2 × 1.6 = 3.2, whereas R10 contains 3.15). | https://en.wikipedia.org/wiki/Renard_series |
Rendezvous delay is a term in mobile wireless networking that pertains to the hand-off of a mobile device from one base station to a new base station. It is the amount of time that elapses between a mobile networked device ending its link with its old base station and attaching to the new base station. The nature of this delay depends on the type of wireless network and the protocols used. | https://en.wikipedia.org/wiki/Rendezvous_delay |
Renewable energy commercialization involves the deployment of three generations of renewable energy technologies dating back more than 100 years. First-generation technologies, which are already mature and economically competitive, include biomass , hydroelectricity , geothermal power and heat. Second-generation technologies are market-ready and are being deployed at the present time; they include solar heating , photovoltaics , wind power , solar thermal power stations , and modern forms of bioenergy . Third-generation technologies require continued R&D efforts in order to make large contributions on a global scale and include advanced biomass gasification , hot-dry-rock geothermal power, and ocean energy . [ 7 ] In 2019, nearly 75% of new installed electricity generation capacity used renewable energy [ 8 ] and the International Energy Agency (IEA) has predicted that by 2025, renewable capacity will meet 35% of global power generation. [ 9 ]
Public policy and political leadership help to "level the playing field" and drive the wider acceptance of renewable energy technologies. [ 10 ] [ 11 ] [ 12 ] Countries such as Germany, Denmark, and Spain have led the way in implementing innovative policies, which have driven most of the growth over the past decade. As of 2014, Germany has a commitment to the " Energiewende " transition to a sustainable energy economy, and Denmark has a commitment to 100% renewable energy by 2050. There are now 144 countries with renewable energy policy targets.
Renewable energy continued its rapid growth in 2015, providing multiple benefits. There were new records set for installed wind and photovoltaic capacity (64 GW and 57 GW, respectively) and a new high of US$329 billion for global renewables investment. A key benefit of this investment growth is growth in jobs. [ 13 ] The top countries for investment in recent years were China, Germany, Spain, the United States, Italy, and Brazil. [ 11 ] [ 14 ] Renewable energy companies include BrightSource Energy , First Solar , Gamesa , GE Energy , Goldwind , Sinovel , Targray , Trina Solar , Vestas , and Yingli . [ 15 ] [ 16 ]
Climate change concerns [ 17 ] [ 18 ] [ 19 ] are also driving increasing growth in the renewable energy industries. [ 20 ] [ 21 ] According to a 2011 projection by the IEA, solar power generators may produce most of the world's electricity within 50 years, reducing harmful greenhouse gas emissions . [ 22 ]
Climate change , pollution, and energy insecurity are significant problems, and addressing them requires major changes to energy infrastructures. [ 24 ] Renewable energy technologies are essential contributors to the energy supply portfolio, as they contribute to world energy security , reduce dependency on fossil fuels , and some also provide opportunities for mitigating greenhouse gases . [ 7 ] Climate-disrupting fossil fuels are being replaced by clean, climate-stabilizing, non-depletable sources of energy:
...the transition from coal, oil, and gas to wind, solar, and geothermal energy is well under way. In the old economy, energy was produced by burning something — oil, coal, or natural gas — leading to the carbon emissions that have come to define our economy. The new energy economy harnesses the energy in wind, the energy coming from the sun, and heat from within the earth itself. [ 25 ]
In international public opinion surveys there is strong support for a variety of methods for addressing the problem of energy supply. These methods include promoting renewable sources such as solar power and wind power, requiring utilities to use more renewable energy, and providing tax incentives to encourage the development and use of such technologies. It is expected that renewable energy investments will pay off economically in the long term. [ 26 ]
EU member countries have shown support for ambitious renewable energy goals. In 2010, Eurobarometer polled the twenty-seven EU member states about the target "to increase the share of renewable energy in the EU by 20 percent by 2020". Most people in all twenty-seven countries either approved of the target or called for it to go further. Across the EU, 57 percent thought the proposed goal was "about right" and 16 percent thought it was "too modest." In comparison, 19 percent said it was "too ambitious". [ 27 ]
As of 2011, new evidence has emerged that there are considerable risks associated with traditional energy sources, and that major changes to the mix of energy technologies are needed:
Several mining tragedies globally have underscored the human toll of the coal supply chain. New EPA initiatives targeting air toxics, coal ash, and effluent releases highlight the environmental impacts of coal and the cost of addressing them with control technologies. The use of fracking in natural gas exploration is coming under scrutiny, with evidence of groundwater contamination and greenhouse gas emissions. Concerns are increasing about the vast amounts of water used at coal-fired and nuclear power plants, particularly in regions of the country facing water shortages. Events at the Fukushima nuclear plant have renewed doubts about the ability to operate large numbers of nuclear plants safely over the long term. Further, cost estimates for "next generation" nuclear units continue to climb, and lenders are unwilling to finance these plants without taxpayer guarantees. [ 28 ]
The 2014 REN21 Global Status Report says that renewable energies are no longer just energy sources, but ways to address pressing social, political, economic and environmental problems:
Today, renewables are seen not only as sources of energy, but also as tools to address many other pressing needs, including: improving energy security; reducing the health and environmental impacts associated with fossil and nuclear energy; mitigating greenhouse gas emissions; improving educational opportunities; creating jobs; reducing poverty; and increasing gender equality... Renewables have entered the mainstream. [ 29 ]
In 2008 for the first time, more renewable energy than conventional power capacity was added in both the European Union and United States, demonstrating a "fundamental transition" of the world's energy markets towards renewables, according to a report released by REN21 , a global renewable energy policy network based in Paris. [ 33 ] In 2010, renewable power made up about a third of the newly built power generation capacity. [ 34 ]
By the end of 2011, total renewable power capacity worldwide exceeded 1,360 GW, up 8%. Renewables producing electricity accounted for almost half of the 208 GW of capacity added globally during 2011. Wind and solar photovoltaics (PV) accounted for almost 40% and 30%, respectively. [ 35 ] Based on REN21 's 2014 report, renewables contributed 19 percent of our energy consumption and 22 percent of our electricity generation in 2012 and 2013, respectively. This energy consumption divides into 9% from traditional biomass, 4.2% heat energy (non-biomass), 3.8% hydroelectricity and 2% electricity from wind, solar, geothermal, and biomass. [ 36 ]
During the five years from the end of 2004 through 2009, worldwide renewable energy capacity grew at rates of 10–60 percent annually for many technologies, while actual production grew 1.2% overall. [ 37 ] [ 38 ] In 2011, UN under-secretary general Achim Steiner said: "The continuing growth in this core segment of the green economy is not happening by chance. The combination of government target-setting, policy support and stimulus funds is underpinning the renewable industry's rise and bringing the much needed transformation of our global energy system within reach." He added: "Renewable energies are expanding both in terms of investment, projects and geographical spread. In doing so, they are making an increasing contribution to combating climate change, countering energy poverty and energy insecurity". [ 39 ]
According to a 2011 projection by the International Energy Agency, solar power plants may produce most of the world's electricity within 50 years, significantly reducing the emissions of greenhouse gases that harm the environment. The IEA has said: "Photovoltaic and solar-thermal plants may meet most of the world's demand for electricity by 2060 – and half of all energy needs – with wind, hydropower and biomass plants supplying much of the remaining generation". "Photovoltaic and concentrated solar power together can become the major source of electricity". [ 22 ]
In 2013, China led the world in renewable energy production, with a total capacity of 378 GW , mainly from hydroelectric and wind power . As of 2014, China leads the world in the production and use of wind power, solar photovoltaic power and smart grid technologies, generating almost as much water, wind and solar energy as all of France and Germany's power plants combined. China's renewable energy sector is growing faster than its fossil fuels and nuclear power capacity. Since 2005, production of solar cells in China has expanded 100-fold. As Chinese renewable manufacturing has grown, the costs of renewable energy technologies have dropped. Innovation has helped, but the main driver of reduced costs has been market expansion. [ 53 ]
See also renewable energy in the United States for US figures.
Renewable energy technologies are getting cheaper, through technological change and through the benefits of mass production and market competition. A 2011 IEA report said: "A portfolio of renewable energy technologies is becoming cost-competitive in an increasingly broad range of circumstances, in some cases providing investment opportunities without the need for specific economic support," and added that "cost reductions in critical technologies, such as wind and solar, are set to continue." [ 56 ] As of 2011, there have been substantial reductions in the cost of solar and wind technologies:
The price of PV modules per MW has fallen by 60 percent since the summer of 2008, according to Bloomberg New Energy Finance estimates, putting solar power for the first time on a competitive footing with the retail price of electricity in a number of sunny countries. Wind turbine prices have also fallen – by 18 percent per MW in the last two years – reflecting, as with solar, fierce competition in the supply chain. Further improvements in the levelised cost of energy for solar, wind and other technologies lie ahead, posing a growing threat to the dominance of fossil fuel generation sources in the next few years. [ 39 ]
Hydro-electricity and geothermal electricity produced at favourable sites are now the cheapest way to generate electricity. Renewable energy costs continue to drop, and the levelised cost of electricity (LCOE) is declining for wind power, solar photovoltaic (PV), concentrated solar power (CSP) and some biomass technologies. [ 57 ]
Renewable energy is also the most economic solution for new grid-connected capacity in areas with good resources. As the cost of renewable power falls, the scope of economically viable applications increases. Renewable technologies are now often the most economic solution for new generating capacity. Where "oil-fired generation is the predominant power generation source (e.g. on islands, off-grid and in some countries) a lower-cost renewable solution almost always exists today". [ 57 ] As of 2012, renewable power generation technologies accounted for around half of all new power generation capacity additions globally. In 2011, additions included 41 gigawatt (GW) of new wind power capacity, 30 GW of PV, 25 GW of hydro-electricity, 6 GW of biomass, 0.5 GW of CSP, and 0.1 GW of geothermal power. [ 57 ]
Renewable energy includes a number of sources and technologies at different stages of commercialization. The International Energy Agency (IEA) has defined three generations of renewable energy technologies, reaching back over 100 years:
First-generation technologies are well established, second-generation technologies are entering markets, and third-generation technologies heavily depend on long-term research and development commitments, where the public sector has a role to play. [ 7 ]
First-generation technologies are widely used in locations with abundant resources. Their future use depends on the exploration of the remaining resource potential, particularly in developing countries, and on overcoming challenges related to the environment and social acceptance.
Biomass , the burning of organic materials for heat and power, is a fully mature technology . Unlike most renewable sources, biomass (and hydropower) can supply stable base load power generation. [ 58 ]
Biomass produces CO 2 emissions on combustion, and the issue of whether biomass is carbon neutral is contested. [ 59 ] Material directly combusted in cook stoves produces pollutants, leading to severe health and environmental consequences. Improved cook stove programs are alleviating some of these effects.
The industry remained relatively stagnant over the decade to 2007, but demand for biomass (mostly wood) continues to grow in many developing countries , as well as Brazil and Germany .
The economic viability of biomass is dependent on regulated tariffs, due to high costs of infrastructure and ingredients for ongoing operations. [ 58 ] Biomass does offer a ready disposal mechanism by burning municipal, agricultural, and industrial organic waste products. First-generation biomass technologies can be economically competitive, but may still require deployment support to overcome public acceptance and small-scale issues. [ 7 ] As part of the food vs. fuel debate, several economists from Iowa State University found in 2008 "there is no evidence to disprove that the primary objective of biofuel policy is to support farm income." [ 60 ]
Hydroelectricity is the term referring to electricity generated by hydropower : the production of electrical power through the use of the gravitational force of falling or flowing water. In 2015 hydropower generated 16.6% of the world's total electricity and 70% of all renewable electricity, [ 61 ] and is expected to increase about 3.1% each year for the next 25 years. Hydroelectric plants have the advantage of being long-lived, and many existing plants have operated for more than 100 years.
Hydropower is produced in 150 countries, with the Asia-Pacific region generating 32 percent of global hydropower in 2010. China is the largest hydroelectricity producer, with 721 terawatt-hours of production in 2010, representing around 17 percent of domestic electricity use. There are now three hydroelectricity plants larger than 10 GW: the Three Gorges Dam in China, Itaipu Dam across the Brazil/Paraguay border, and Guri Dam in Venezuela. [ 62 ] The cost of hydroelectricity is low, making it a competitive source of renewable electricity. The average cost of electricity from a hydro plant larger than 10 megawatts is 3 to 5 U.S. cents per kilowatt-hour. [ 62 ]
Geothermal power plants can operate 24 hours per day, providing baseload capacity. Estimates for the world potential capacity for geothermal power generation vary widely, ranging from 40 GW by 2020 to as much as 6,000 GW. [ 63 ] [ 64 ]
Geothermal power capacity grew from around 1 GW in 1975 to almost 10 GW in 2008. [ 64 ] The United States is the world leader in terms of installed capacity, representing 3.1 GW. Other countries with significant installed capacity include the Philippines (1.9 GW), Indonesia (1.2 GW), Mexico (1.0 GW), Italy (0.8 GW), Iceland (0.6 GW), Japan (0.5 GW), and New Zealand (0.5 GW). [ 64 ] [ 65 ] In some countries, geothermal power accounts for a significant share of the total electricity supply, such as in the Philippines, where geothermal represented 17 percent of the total power mix at the end of 2008. [ 66 ]
Geothermal (ground source) heat pumps represented an estimated 30 GWth of installed capacity at the end of 2008, with other direct uses of geothermal heat (i.e., for space heating, agricultural drying and other uses) reaching an estimated 15 GWth. As of 2008, at least 76 countries use direct geothermal energy in some form. [ 67 ]
Second-generation technologies have gone from being a passion for the dedicated few to a major economic sector in countries such as Germany, Spain, the United States, and Japan. Many large industrial companies and financial institutions are involved and the challenge is to broaden the market base for continued growth worldwide. [ 7 ] [ 18 ]
Solar heating systems are a well known second-generation technology and generally consist of solar thermal collectors , a fluid system to move the heat from the collector to its point of usage, and a reservoir or tank for heat storage. The systems may be used to heat domestic hot water, swimming pools, or homes and businesses. [ 68 ] The heat can also be used for industrial process applications or as an energy input for other uses such as cooling equipment. [ 69 ]
In many warmer climates, a solar heating system can provide a very high percentage (50 to 75%) of domestic hot water energy. As of 2009, China has 27 million rooftop solar water heaters. [ 70 ]
Photovoltaic (PV) cells, also called solar cells , convert light into electricity. In the 1980s and early 1990s, most photovoltaic modules were used to provide remote-area power supply , but from around 1995, industry efforts have focused increasingly on developing building integrated photovoltaics and photovoltaic power stations for grid connected applications.
Many plants are integrated with agriculture and some use innovative tracking systems that follow the sun's daily path across the sky to generate more electricity than conventional fixed-mounted systems. There are no fuel costs or emissions during operation of the power stations.
Some of the second-generation renewables, such as wind power, have high potential and have already realised relatively low production costs. [ 73 ] [ 74 ] Wind power could become cheaper than nuclear power. [ 75 ] Global wind power installations increased by 35,800 MW in 2010, bringing total installed capacity up to 194,400 MW, a 22.5% increase on the 158,700 MW installed at the end of 2009. The increase for 2010 represents investments totalling €47.3 billion (US$65 billion), and for the first time more than half of all new wind power was added outside of the traditional markets of Europe and North America, mainly driven by the continuing boom in China, which accounted for nearly half of all installations at 16,500 MW. China now has 42,300 MW of wind power installed. [ 76 ] Wind power accounts for approximately 19% of electricity generated in Denmark , 9% in Spain and Portugal , and 6% in Germany and the Republic of Ireland. [ 77 ] In the Australian state of South Australia, wind power, championed by Premier Mike Rann (2002–2011), now comprises 26% of the state's electricity generation, edging out coal-fired power. At the end of 2011 South Australia, with 7.2% of Australia's population, had 54% of the nation's installed wind power capacity. [ 78 ]
Wind power's share of worldwide electricity usage at the end of 2014 was 3.1%. [ 79 ]
The wind industry is able to produce more power at lower cost by using taller wind turbines with longer blades, capturing the faster winds at higher elevations. This has opened up new opportunities and in Indiana, Michigan, and Ohio, the price of power from wind turbines built 300 feet to 400 feet above the ground can now compete with conventional fossil fuels like coal. Prices have fallen to about 4 cents per kilowatt-hour in some cases and utilities have been increasing the amount of wind energy in their portfolio, saying it is their cheapest option. [ 80 ]
Solar thermal power stations include the 354 megawatt (MW) Solar Energy Generating Systems power plant in the US, Solnova Solar Power Station (Spain, 150 MW), Andasol solar power station (Spain, 100 MW), Nevada Solar One (USA, 64 MW), PS20 solar power tower (Spain, 20 MW), and the PS10 solar power tower (Spain, 11 MW). The 370 MW Ivanpah Solar Power Facility , located in California's Mojave Desert , is the world's largest solar-thermal power plant project currently under construction. [ 81 ] Many other plants are under construction or planned, mainly in Spain and the USA. [ 82 ] In developing countries, three World Bank projects for integrated solar thermal/combined-cycle gas-turbine power plants in Egypt , Mexico , and Morocco have been approved. [ 82 ]
Global ethanol production for transport fuel tripled between 2000 and 2007 from 17 billion to more than 52 billion litres, while biodiesel expanded more than tenfold from less than 1 billion to almost 11 billion litres. Biofuels provide 1.8% of the world's transport fuel and recent estimates indicate a continued high growth. The main producing countries for transport biofuels are the US, Brazil, and the EU. [ 83 ]
Brazil has one of the largest renewable energy programs in the world, involving production of ethanol fuel from sugar cane , and ethanol now provides 18 percent of the country's automotive fuel. As a result of this and the exploitation of domestic deep water oil sources, Brazil, which for years had to import a large share of the petroleum needed for domestic consumption, recently reached complete self-sufficiency in liquid fuels. [ 84 ] [ 85 ]
Nearly all the gasoline sold in the United States today is mixed with 10 percent ethanol, a mix known as E10, [ 86 ] and motor vehicle manufacturers already produce vehicles designed to run on much higher ethanol blends. Ford , DaimlerChrysler , and GM are among the automobile companies that sell flexible-fuel cars, trucks, and minivans that can use gasoline and ethanol blends ranging from pure gasoline up to 85% ethanol (E85). The challenge is to expand the market for biofuels beyond the farm states where they have been most popular to date. The Energy Policy Act of 2005 , which calls for 7.5 billion US gallons (28,000,000 m 3 ) of biofuels to be used annually by 2012, will also help to expand the market. [ 87 ]
The growing ethanol and biodiesel industries are providing jobs in plant construction, operations, and maintenance, mostly in rural communities. According to the Renewable Fuels Association, "the ethanol industry created almost 154,000 U.S. jobs in 2005 alone, boosting household income by $5.7 billion. It also contributed about $3.5 billion in tax revenues at the local, state, and federal levels". [ 87 ]
Third-generation renewable energy technologies are still under development and include advanced biomass gasification , biorefinery technologies, hot-dry-rock geothermal power, and ocean energy . Third-generation technologies are not yet widely demonstrated or have limited commercialization. Many are on the horizon and may have potential comparable to other renewable energy technologies, but still depend on attracting sufficient attention and research and development funding. [ 7 ]
According to the International Energy Agency, cellulosic ethanol biorefineries could allow biofuels to play a much bigger role in the future than organizations such as the IEA previously thought. [ 90 ] Cellulosic ethanol can be made from plant matter composed primarily of inedible cellulose fibers that form the stems and branches of most plants. Crop residues (such as corn stalks , wheat straw and rice straw), wood waste, and municipal solid waste are potential sources of cellulosic biomass. Dedicated energy crops, such as switchgrass , are also promising cellulose sources that can be sustainably produced in many regions. [ 91 ]
Ocean energy is all forms of renewable energy derived from the sea including wave energy, tidal energy, river current, ocean current energy, offshore wind, salinity gradient energy and ocean thermal gradient energy. [ 92 ]
The Rance Tidal Power Station (240 MW) is the world's first tidal power station. The facility is located on the estuary of the Rance River , in Brittany , France. Opened on 26 November 1966, it is currently operated by Électricité de France , and was for decades the largest tidal power station in the world in terms of installed capacity.
First proposed more than thirty years ago, systems to harvest utility-scale electrical power from ocean waves have recently been gaining momentum as a viable technology. The potential for this technology is considered promising, especially on west-facing coasts with latitudes between 40 and 60 degrees: [ 93 ]
In the United Kingdom, for example, the Carbon Trust recently estimated the extent of the economically viable offshore resource at 55 TWh per year, about 14% of current national demand. Across Europe, the technologically achievable resource has been estimated to be at least 280 TWh per year. In 2003, the U.S. Electric Power Research Institute (EPRI) estimated the viable resource in the United States at 255 TWh per year (6% of demand). [ 93 ]
There are currently nine projects, completed or in development, off the coasts of the United Kingdom, United States, Spain and Australia to harness the rise and fall of waves by Ocean Power Technologies . The current maximum power output is 1.5 MW ( Reedsport, Oregon ), with development underway for 100 MW ( Coos Bay, Oregon ). [ 94 ]
As of 2008, geothermal power development was under way in more than 40 countries, partially attributable to the development of new technologies, such as enhanced geothermal systems. [ 67 ] The development of binary cycle power plants and improvements in drilling and extraction technology may enable enhanced geothermal systems over a much greater geographical range than "traditional" geothermal systems. Demonstration EGS projects are operational in the US, Australia, Germany, France, and the United Kingdom. [ 95 ]
Beyond the already established solar photovoltaics and solar thermal power technologies are such advanced solar concepts as the solar updraft tower or space-based solar power. These concepts have yet to be commercialized, if they ever will be.
The solar updraft tower (SUT) is a renewable-energy power plant for generating electricity from low temperature solar heat. Sunshine heats the air beneath a very wide greenhouse-like roofed collector structure surrounding the central base of a very tall chimney tower. The resulting convection causes a hot air updraft in the tower by the chimney effect . This airflow drives wind turbines placed in the chimney updraft or around the chimney base to produce electricity . Plans for scaled-up versions of demonstration models will allow significant power generation, and may allow development of other applications, such as water extraction or distillation, and agriculture or horticulture. [ 96 ]
A more advanced version of a similarly themed technology is the atmospheric vortex engine (AVE), which aims to replace large physical chimneys with a vortex of air created by a shorter, less expensive structure.
Space-based solar power ( SBSP ) is the concept of collecting solar power in space (using an "SPS", that is, a "solar-power satellite" or a "satellite power system") for use on Earth . It has been in research since the early 1970s. SBSP would differ from current solar collection methods in that the means used to collect energy would reside on an orbiting satellite instead of on Earth's surface. Some projected benefits of such a system are a higher collection rate and a longer collection period due to the lack of a diffusing atmosphere and night time in space .
Total investment in renewable energy reached $211 billion in 2010, up from $160 billion in 2009. The top countries for investment in 2010 were China, Germany, the United States, Italy, and Brazil. [ 14 ] Continued growth for the renewable energy sector is expected, and promotional policies helped the industry weather the 2009 economic crisis better than many other sectors. [ 97 ]
As of 2010, Vestas (from Denmark) is the world's top wind turbine manufacturer in terms of percentage of market volume, and Sinovel (from China) is in second place. Together Vestas and Sinovel delivered 10,228 MW of new wind power capacity in 2010, and their market share was 25.9 percent. GE Energy (USA) was in third place, closely followed by Goldwind , another Chinese supplier. German Enercon ranks fifth in the world, and is followed in sixth place by Indian-based Suzlon . [ 98 ]
The solar PV market has been growing for the past few years. According to solar PV research company PVinsights, worldwide shipment of solar modules in 2011 was around 25 GW, with year-over-year shipment growth of around 40%. The top five solar module suppliers in 2011 were, in order, Suntech, First Solar, Yingli, Trina, and Sungen; together they held 51.3% of the solar module market, according to PVinsights' market intelligence report.
The PV industry has seen drops in module prices since 2008. In late 2011, factory-gate prices for crystalline-silicon photovoltaic modules dropped below the $1.00/W mark. The $1.00/W installed cost is often regarded in the PV industry as marking the achievement of grid parity for PV. These reductions have taken many stakeholders, including industry analysts, by surprise, and perceptions of current solar power economics often lag behind reality. Some stakeholders still have the perspective that solar PV remains too costly on an unsubsidized basis to compete with conventional generation options. Yet technological advancements, manufacturing process improvements, and industry re-structuring mean that further price reductions are likely in coming years. [ 101 ]
Many energy markets, institutions, and policies have been developed to support the production and use of fossil fuels. [ 103 ] Newer and cleaner technologies may offer social and environmental benefits, but utility operators often reject renewable resources because they are trained to think only in terms of big, conventional power plants. [ 104 ] Consumers often ignore renewable power systems because they are not given accurate price signals about electricity consumption. Intentional market distortions (such as subsidies), and unintentional market distortions (such as split incentives) may work against renewables. [ 104 ] Benjamin K. Sovacool has argued that "some of the most surreptitious, yet powerful, impediments facing renewable energy and energy efficiency in the United States are more about culture and institutions than engineering and science". [ 105 ]
The obstacles to the widespread commercialization of renewable energy technologies are primarily political, not technical, [ 106 ] and there have been many studies which have identified a range of "non-technical barriers" to renewable energy use. [ 107 ] [ 17 ] [ 108 ] [ 109 ] These barriers are impediments which put renewable energy at a marketing, institutional, or policy disadvantage relative to other forms of energy. Key barriers include: [ 108 ] [ 109 ]
"National grids are usually tailored towards the operation of centralised power plants and thus favour their performance. Technologies that do not easily fit into these networks may struggle to enter the market, even if the technology itself is commercially viable. This applies to distributed generation as most grids are not suited to receive electricity from many small sources. Large-scale renewables may also encounter problems if they are sited in areas far from existing grids." [ 110 ]
With such a wide range of non-technical barriers, there is no "silver bullet" solution to drive the transition to renewable energy. Ideally, several different types of policy instruments are needed to complement each other and overcome different types of barriers. [ 109 ] [ 112 ]
A policy framework must be created that will level the playing field and redress the imbalance of traditional approaches associated with fossil fuels. The policy landscape must keep pace with broad trends within the energy sector, as well as reflecting specific social, economic and environmental priorities. [ 113 ] Some resource-rich countries struggle to move away from fossil fuels and have failed thus far to adopt regulatory frameworks necessary for developing renewable energy (e.g. Russia). [ 114 ]
Public policy has a role to play in renewable energy commercialization because the free market system has some fundamental limitations. As the Stern Review points out: "In a liberalised energy market, investors, operators and consumers should face the full cost of their decisions. But this is not the case in many economies or energy sectors. Many policies distort the market in favour of existing fossil fuel technologies." [ 110 ] The International Solar Energy Society has stated that "historical incentives for the conventional energy resources continue even today to bias markets by burying many of the real societal costs of their use". [ 115 ]
Fossil-fuel energy systems have different production, transmission, and end-use costs and characteristics than do renewable energy systems, and new promotional policies are needed to ensure that renewable systems develop as quickly and broadly as is socially desirable. [ 103 ] Lester Brown states that the market "does not incorporate the indirect costs of providing goods or services into prices, it does not value nature's services adequately, and it does not respect the sustainable-yield thresholds of natural systems". [ 116 ] It also favors the near term over the long term, thereby showing limited concern for future generations. [ 116 ] Tax and subsidy shifting can help overcome these problems, [ 117 ] though it is also problematic to combine different international normative regimes regulating this issue. [ 118 ]
Tax shifting has been widely discussed and endorsed by economists. It involves lowering income taxes while raising levies on environmentally destructive activities, in order to create a more responsive market. For example, a tax on coal that included the increased health care costs associated with breathing polluted air, the costs of acid rain damage, and the costs of climate disruption would encourage investment in renewable technologies. Several Western European countries are already shifting taxes in a process known there as environmental tax reform. [ 116 ]
In 2001, Sweden launched a new 10-year environmental tax shift designed to convert 30 billion kroner ($3.9 billion) of income taxes to taxes on environmentally destructive activities. Other European countries with significant tax reform efforts are France, Italy, Norway, Spain, and the United Kingdom. Asia's two leading economies, Japan and China, are considering carbon taxes. [ 116 ]
Just as there is a need for tax shifting, there is also a need for subsidy shifting. Subsidies are not an inherently bad thing, as many technologies and industries emerged through government subsidy schemes. The Stern Review explains that of 20 key innovations from the past 30 years, only one was funded entirely by the private sector and nine were totally publicly funded. [ 119 ] In terms of specific examples, the Internet was the result of publicly funded links among computers in government laboratories and research institutes. And the combination of the federal tax deduction and a robust state tax deduction in California helped to create the modern wind power industry. [ 117 ] At the same time, specifically US tax credit systems for renewable energy have been described as an "opaque" financial instrument dominated by large investors to reduce their tax payments, while greenhouse gas reduction targets are treated as a side effect. [ 120 ]
Lester Brown has argued that "a world facing the prospect of economically disruptive climate change can no longer justify subsidies to expand the burning of coal and oil. Shifting these subsidies to the development of climate-benign energy sources such as wind, solar, biomass, and geothermal power is the key to stabilizing the earth's climate." [ 117 ] The International Solar Energy Society advocates "leveling the playing field" by redressing the continuing inequities in public subsidies of energy technologies and R&D, in which fossil fuels and nuclear power receive the largest share of financial support. [ 121 ]
Some countries are eliminating or reducing climate-disrupting subsidies; Belgium, France, and Japan have phased out all subsidies for coal. Germany is reducing its coal subsidy. The subsidy dropped from $5.4 billion in 1989 to $2.8 billion in 2002, and in the process Germany lowered its coal use by 46 percent. China cut its coal subsidy from $750 million in 1993 to $240 million in 1995 and more recently has imposed a high-sulfur coal tax. [ 117 ] However, the United States has been increasing its support for the fossil fuel and nuclear industries. [ 117 ]
In November 2011, an IEA report entitled Deploying Renewables 2011 said "subsidies in green energy technologies that were not yet competitive are justified in order to give an incentive to investing into technologies with clear environmental and energy security benefits". The IEA's report disagreed with claims that renewable energy technologies are only viable through costly subsidies and not able to produce energy reliably to meet demand. [ 56 ]
A fair and efficient imposition of subsidies for renewable energy aimed at sustainable development, however, requires coordination and regulation at a global level, as subsidies granted in one country can easily disrupt industries and policies of others, thus underlining the relevance of this issue at the World Trade Organization. [ 122 ]
Setting national renewable energy targets can be an important part of a renewable energy policy and these targets are usually defined as a percentage of the primary energy and/or electricity generation mix. For example, the European Union has prescribed an indicative renewable energy target of 12 percent of the total EU energy mix and 22 percent of electricity consumption by 2010. National targets for individual EU Member States have also been set to meet the overall target. Other developed countries with defined national or regional targets include Australia, Canada, Israel, Japan, Korea, New Zealand, Norway, Singapore, Switzerland, and some US States. [ 123 ]
National targets are also an important component of renewable energy strategies in some developing countries . Developing countries with renewable energy targets include China, India, Indonesia, Malaysia, the Philippines, Thailand, Brazil, Egypt, Mali, and South Africa. The targets set by many developing countries are quite modest when compared with those in some industrialized countries. [ 123 ]
Renewable energy targets in most countries are indicative and nonbinding but they have assisted government actions and regulatory frameworks. The United Nations Environment Program has suggested that making renewable energy targets legally binding could be an important policy tool to achieve higher renewable energy market penetration. [ 123 ]
The IEA has identified three actions which will allow renewable energy and other clean energy technologies to "more effectively compete for private sector capital".
In response to the Great Recession , major governments made "green stimulus" programs one of their main policy instruments for supporting economic recovery. Some US$188 billion in green stimulus funding had been allocated to renewable energy and energy efficiency, to be spent mainly in 2010 and in 2011. [ 130 ]
Public policy determines the extent to which renewable energy (RE) is to be incorporated into a developed or developing country's generation mix. Energy sector regulators implement that policy—thus affecting the pace and pattern of RE investments and connections to the grid. Energy regulators often have authority to carry out a number of functions that have implications for the financial feasibility of renewable energy projects. Such functions include issuing licenses, setting performance standards, monitoring the performance of regulated firms, determining the price level and structure of tariffs, establishing uniform systems of accounts, arbitrating stakeholder disputes (like interconnection cost allocations), performing management audits, developing agency human resources (expertise), reporting sector and commission activities to government authorities, and coordinating decisions with other government agencies. Thus, regulators make a wide range of decisions that affect the financial outcomes associated with RE investments. In addition, the sector regulator is in a position to give advice to the government regarding the full implications of focusing on climate change or energy security. The energy sector regulator is the natural advocate for efficiency and cost-containment throughout the process of designing and implementing RE policies. Since policies are not self-implementing, energy sector regulators become a key facilitator (or blocker) of renewable energy investments. [ 131 ]
The Energiewende (German for energy transition) is the transition by Germany to a low carbon, environmentally sound, reliable, and affordable energy supply. [ 132 ] The new system will rely heavily on renewable energy (particularly wind, photovoltaics, and biomass), energy efficiency, and energy demand management. Most if not all existing coal-fired generation will need to be retired. [ 133 ] The phase-out of Germany's fleet of nuclear reactors, to be complete by 2022, is a key part of the program. [ 134 ]
Legislative support for the Energiewende was passed in late 2010 and includes greenhouse gas (GHG) reductions of 80–95% by 2050 (relative to 1990) and a renewable energy target of 60% by 2050. [ 135 ] These targets are ambitious. [ 136 ] The Berlin-based policy institute Agora Energiewende noted that "while the German approach is not unique worldwide, the speed and scope of the Energiewende are exceptional". [ 137 ] The Energiewende also seeks greater transparency in relation to national energy policy formation. [ 138 ]
Germany has made significant progress on its GHG emissions reduction target, achieving a 27% decrease between 1990 and 2014. However, Germany will need to maintain an average GHG emissions abatement rate of 3.5% per annum to reach its Energiewende goal, a rate equal to the highest value achieved historically. [ 139 ]
Germany spends €1.5 billion per annum on energy research (2013 figure) in an effort to solve the technical and social issues raised by the transition. [ 140 ] This includes a number of computer studies that have confirmed the feasibility of the Energiewende at a cost similar to business-as-usual, provided that carbon is adequately priced.
These initiatives go well beyond European Union legislation and the national policies of other European states. The policy objectives have been embraced by the German federal government and have resulted in a huge expansion of renewables, particularly wind power. Germany's share of renewables increased from around 5% in 1999 to 22.9% in 2012, surpassing the OECD average of 18%. [ 141 ] Producers have been guaranteed a fixed feed-in tariff for 20 years, guaranteeing a fixed income. Energy co-operatives have been created, and efforts were made to decentralize control and profits. The large energy companies have a disproportionately small share of the renewables market. However, in some cases poor investment designs have caused bankruptcies and low returns, and unrealistic promises have been shown to be far from reality. [ 142 ] Nuclear power plants were closed, and the remaining nine plants will close earlier than planned, in 2022.
One factor that has inhibited efficient deployment of new renewable energy has been the lack of an accompanying investment in power infrastructure to bring the power to market. An estimated 8,300 km of power lines must be built or upgraded. [ 141 ] The different German states have varying attitudes to the construction of new power lines. Industry has had its rates frozen, so the increased costs of the Energiewende have been passed on to consumers, who have faced rising electricity bills.
Voluntary markets, also referred to as green power markets, are driven by consumer preference. Voluntary markets allow a consumer to choose to do more than policy decisions require and reduce the environmental impact of their electricity use. Voluntary green power products must offer a significant benefit and value to buyers to be successful. Benefits may include zero or reduced greenhouse gas emissions, other pollution reductions, or other environmental improvements at power stations. [ 143 ]
The driving factors behind voluntary green electricity within the EU are the liberalised electricity markets and the RES Directive. According to the directive, the EU Member States must ensure that the origin of electricity produced from renewables can be guaranteed, and therefore a "guarantee of origin" must be issued (article 15). Environmental organisations are using the voluntary market to create new renewables and to improve the sustainability of existing power production. In the US, the main tool to track and stimulate voluntary actions is the Green-e program, managed by the Center for Resource Solutions . [ 144 ] A globally available voluntary tool used by NGOs to promote sustainable electricity production is the EKOenergy label. [ 145 ]
A number of events in 2006 pushed renewable energy up the political agenda, including the US mid-term elections in November, which confirmed clean energy as a mainstream issue. Also in 2006, the Stern Review [ 19 ] made a strong economic case for investing in low carbon technologies now, and argued that economic growth need not be incompatible with cutting energy consumption. [ 146 ] According to a trend analysis from the United Nations Environment Programme , climate change concerns [ 18 ] coupled with recent high oil prices [ 147 ] and increasing government support are driving increasing rates of investment in the renewable energy and energy efficiency industries. [ 20 ] [ 148 ]
Investment capital flowing into renewable energy reached a record US$77 billion in 2007, with the upward trend continuing in 2008. [ 21 ] The OECD still dominates, but there is now increasing activity from companies in China, India and Brazil. Chinese companies were the second largest recipient of venture capital in 2006 after the United States. In the same year, India was the largest net buyer of companies abroad, mainly in the more established European markets. [ 148 ]
New government spending, regulation, and policies helped the industry weather the 2009 economic crisis better than many other sectors. [ 97 ] Most notably, U.S. President Barack Obama 's American Recovery and Reinvestment Act of 2009 included more than $70 billion in direct spending and tax credits for clean energy and associated transportation programs. This policy-stimulus combination represents the largest federal commitment in U.S. history for renewables, advanced transportation, and energy conservation initiatives. Based on these new rules, many more utilities strengthened their clean-energy programs. [ 97 ] Clean Edge suggests that the commercialization of clean energy will help countries around the world deal with the current economic malaise. [ 97 ] The once-promising solar energy company Solyndra became involved in a political controversy involving U.S. President Barack Obama's administration 's authorization of a $535 million loan guarantee to the company in 2009, as part of a program to promote alternative energy growth. [ 149 ] [ 150 ] The company ceased all business activity, filed for Chapter 11 bankruptcy, and laid off nearly all of its employees in early September 2011. [ 151 ] [ 152 ]
In his 24 January 2012, State of the Union address, President Barack Obama restated his commitment to renewable energy. Obama said that he "will not walk away from the promise of clean energy." Obama called for a commitment by the Defense Department to purchase 1,000 MW of renewable energy. He also mentioned the long-standing Interior Department commitment to permit 10,000 MW of renewable energy projects on public land in 2012. [ 153 ]
As of 2012, renewable energy plays a major role in the energy mix of many countries globally. Renewables are becoming increasingly economic in both developing and developed countries. Prices for renewable energy technologies, primarily wind power and solar power, continued to drop, making renewables competitive with conventional energy sources. Without a level playing field, however, high market penetration of renewables is still dependent on robust promotional policies. Fossil fuel subsidies, which are far higher than those for renewable energy, remain in place and need to be phased out quickly. [ 154 ]
United Nations' Secretary-General Ban Ki-moon has said that "renewable energy has the ability to lift the poorest nations to new levels of prosperity". [ 155 ] In October 2011, he "announced the creation of a high-level group to drum up support for energy access, energy efficiency and greater use of renewable energy. The group is to be co-chaired by Kandeh Yumkella, the chair of UN Energy and director general of the UN Industrial Development Organisation, and Charles Holliday, chairman of Bank of America". [ 156 ]
Worldwide use of solar power and wind power continued to grow significantly in 2012. Solar electricity consumption increased by 58 percent, to 93 terawatt-hours (TWh). Use of wind power in 2012 increased by 18.1 percent, to 521.3 TWh. [ 157 ] Global solar and wind energy installed capacities continued to expand even though new investments in these technologies declined during 2012. Worldwide investment in solar power in 2012 was $140.4 billion, an 11 percent decline from 2011, and wind power investment was down 10.1 percent, to $80.3 billion. But due to lower production costs for both technologies, total installed capacities grew sharply. [ 157 ] This investment decline, but growth in installed capacity, may again occur in 2013. [ 158 ] [ 159 ] Analysts expect the market to triple by 2030. [ 160 ] In 2015, investment in renewables exceeded fossils. [ 161 ]
The drive to use 100% renewable energy for electricity, transport, or even total primary energy supply globally has been motivated by global warming and other ecological as well as economic concerns. In the Intergovernmental Panel on Climate Change 's reviews of scenarios of energy usage that would keep global warming to approximately 1.5 degrees, the proportion of primary energy supplied by renewables increases from 15% in 2020 to 60% in 2050 (median values across all published pathways). [ 163 ] The proportion of primary energy supplied by biomass increases from 10% to 27%, [ 164 ] with effective controls on whether land use is changed in the growing of biomass. [ 165 ] The proportion from wind and solar increases from 1.8% to 21%. [ 164 ]
At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply.
Mark Z. Jacobson , professor of civil and environmental engineering at Stanford University and director of its Atmosphere and Energy Program says producing all new energy with wind power , solar power , and hydropower by 2030 is feasible and existing energy supply arrangements could be replaced by 2050. Barriers to implementing the renewable energy plan are seen to be "primarily social and political, not technological or economic". Jacobson says that energy costs with a wind, solar, water system should be similar to today's energy costs. [ 166 ]
Renewable projects often must be sited in distant locations, whether because of high land prices in urban areas or because of where the renewable resource itself is found, which adds transmission construction costs. [ 167 ]
Similarly, in the United States, the independent National Research Council has noted that "sufficient domestic renewable resources exist to allow renewable electricity to play a significant role in future electricity generation and thus help confront issues related to climate change, energy security, and the escalation of energy costs … Renewable energy is an attractive option because renewable resources available in the United States, taken collectively, can supply significantly greater amounts of electricity than the total current or projected domestic demand." [ 168 ]
The most significant barriers to the widespread implementation of large-scale renewable energy and low carbon energy strategies are primarily political and not technological. According to the 2013 Post Carbon Pathways report, which reviewed many international studies, the key roadblocks are: climate change denial , the fossil fuels lobby , political inaction, unsustainable energy consumption, outdated energy infrastructure, and financial constraints. [ 169 ]
Moving towards energy sustainability will require changes not only in the way energy is supplied, but in the way it is used, and reducing the amount of energy required to deliver various goods or services is essential. Opportunities for improvement on the demand side of the energy equation are as rich and diverse as those on the supply side, and often offer significant economic benefits. [ 170 ]
A sustainable energy economy requires commitments to both renewables and efficiency. Renewable energy and energy efficiency are said to be the "twin pillars" of sustainable energy policy. The American Council for an Energy-Efficient Economy has explained that both resources must be developed in order to stabilize and reduce carbon dioxide emissions: [ 171 ]
Efficiency is essential to slowing the energy demand growth so that rising clean energy supplies can make deep cuts in fossil fuel use. If energy use grows too fast, renewable energy development will chase a receding target. Likewise, unless clean energy supplies come online rapidly, slowing demand growth will only begin to reduce total emissions; reducing the carbon content of energy sources is also needed. [ 171 ]
The IEA has stated that renewable energy and energy efficiency policies are complementary tools for the development of a sustainable energy future, and should be developed together instead of being developed in isolation. [ 172 ]
The renin–angiotensin system ( RAS ), or renin–angiotensin–aldosterone system ( RAAS ), is a hormone system that regulates blood pressure , fluid , and electrolyte balance, and systemic vascular resistance . [ 2 ] [ 3 ]
When renal blood flow is reduced, juxtaglomerular cells in the kidneys convert the precursor prorenin (already present in the blood) into renin and secrete it directly into the circulation . Plasma renin then carries out the conversion of angiotensinogen , released by the liver , to angiotensin I , which has no biological function on its own. [ 4 ] Angiotensin I is subsequently converted to the active angiotensin II by the angiotensin-converting enzyme (ACE) found on the surface of vascular endothelial cells, predominantly those of the lungs . [ 5 ] Angiotensin II is short-lived, persisting only about 1 to 2 minutes before being rapidly degraded into angiotensin III by angiotensinases, which are present in red blood cells and the vascular beds of many tissues.
Angiotensin III increases blood pressure and stimulates aldosterone secretion from the adrenal cortex ; it has 100% of the adrenocortical stimulating activity and 40% of the vasopressor activity of angiotensin II. Angiotensin IV also has adrenocortical and vasopressor activities.
Angiotensin II is a potent vasoconstrictive peptide that causes blood vessels to narrow, resulting in increased blood pressure. [ 6 ] Angiotensin II also stimulates the secretion of the hormone aldosterone [ 6 ] from the adrenal cortex . Aldosterone causes the renal tubules to increase the reabsorption of sodium which in consequence causes the reabsorption of water into the blood, while at the same time causing the excretion of potassium (to maintain electrolyte balance). This increases the volume of extracellular fluid in the body, which also increases blood pressure.
If the RAS is abnormally active, blood pressure will be too high. Several classes of drugs, including ACE inhibitors , angiotensin II receptor blockers (ARBs), and renin inhibitors , interrupt different steps in this system to improve blood pressure. These drugs are one of the primary ways to control high blood pressure , heart failure , kidney failure , and harmful effects of diabetes . [ 7 ] [ 8 ]
The system can be activated when there is a loss of blood volume or a drop in blood pressure (such as in hemorrhage or dehydration ). This loss of pressure is interpreted by baroreceptors in the carotid sinus . It can also be activated by a decrease in the filtrate sodium chloride (NaCl) concentration or a decreased filtrate flow rate that will stimulate the macula densa to signal the juxtaglomerular cells to release renin. [ citation needed ]
Angiotensin I may have some minor activity, but angiotensin II is the major bioactive product. Angiotensin II has a variety of effects on the body: [ citation needed ]
These effects directly act together to increase blood pressure and are opposed by atrial natriuretic peptide (ANP).
Locally expressed renin–angiotensin systems have been found in a number of tissues, including the kidneys , adrenal glands , the heart , vasculature and nervous system , and have a variety of functions, including local cardiovascular regulation , in association or independently of the systemic renin–angiotensin system, as well as non-cardiovascular functions. [ 9 ] [ 11 ] [ 12 ] Outside the kidneys, renin is predominantly picked up from the circulation but may be secreted locally in some tissues; its precursor prorenin is highly expressed in tissues and more than half of circulating prorenin is of extrarenal origin, but its physiological role besides serving as precursor to renin is still unclear. [ 13 ] Outside the liver, angiotensinogen is picked up from the circulation or expressed locally in some tissues; with renin they form angiotensin I, and locally expressed angiotensin-converting enzyme , chymase or other enzymes can transform it into angiotensin II. [ 13 ] [ 14 ] [ 15 ] This process can be intracellular or interstitial. [ 9 ]
In the adrenal glands, it is likely involved in the paracrine regulation of aldosterone secretion; in the heart and vasculature, it may be involved in remodeling or vascular tone; and in the brain , where it is largely independent of the circulatory RAS, it may be involved in local blood pressure regulation. [ 9 ] [ 12 ] [ 16 ] In addition, both the central and peripheral nervous systems can use angiotensin for sympathetic neurotransmission. [ 17 ] Other places of expression include the reproductive system, the skin and digestive organs. Medications aimed at the systemic system may affect the expression of those local systems, beneficially or adversely. [ 9 ]
In the fetus , the renin–angiotensin system is predominantly a sodium-losing system, [ citation needed ] as angiotensin II has little or no effect on aldosterone levels. Renin levels are high in the fetus, while angiotensin II levels are significantly lower; this is due to the limited pulmonary blood flow, preventing ACE (found predominantly in the pulmonary circulation) from having its maximum effect. [ citation needed ]
The Renner–Teller effect is a phenomenon in molecular spectroscopy where a pair of electronic states that become degenerate at linearity are coupled by rovibrational motion. [ 1 ]
The Renner–Teller effect is observed in the spectra of molecules that have electronic states that allow vibration through a linear configuration. For such molecules electronic states that are doubly degenerate at linearity (Π, Δ, ..., etc.) will split into two close-lying nondegenerate states for non-linear configurations. As part of the Renner–Teller effect, the rovibronic levels of such a pair of states will be strongly Coriolis coupled by the rotational kinetic energy operator causing a breakdown of the Born–Oppenheimer approximation . This is to be contrasted with the Jahn–Teller effect which occurs for polyatomic molecules in electronic states that allow vibration through a symmetric nonlinear configuration, where the electronic state is degenerate, and which further involves a breakdown of the Born–Oppenheimer approximation but here caused by the vibrational kinetic energy operator.
In its original formulation, the Renner–Teller effect was discussed for a triatomic molecule in an electronic state that is a linear Π-state at equilibrium. The 1934 article by Rudolf Renner [ 1 ] was one of the first that considered dynamic effects that go beyond the Born–Oppenheimer approximation, in which the nuclear and electronic motions in a molecule are uncoupled. Renner chose an electronically excited state of the carbon dioxide molecule ( CO 2 ) that is a linear Π-state at equilibrium for his studies. The products of purely electronic and purely nuclear rovibrational states served as the zeroth-order (no rovibronic coupling) wave functions in Renner's study. The rovibronic coupling acts as a perturbation.
Renner is the only author of the 1934 paper that first described the effect, so it can be called simply the Renner effect . Renner did this work as a PhD student under the supervision of Edward Teller and presumably Teller was perfectly happy not to be a coauthor. However, in 1933 Gerhard Herzberg and Teller had recognized that the potential of a triatomic linear molecule in a degenerate electronic state at linearity splits into two when the molecule is bent. [ 2 ] A year later this effect was worked out in detail by Renner. [ 1 ] Herzberg refers to this as the "Renner–Teller" effect in one of his influential books, [ 3 ] and this name is most commonly used.
While Renner's theoretical study concerns an excited electronic state of carbon dioxide that is linear at equilibrium, the first observation of the Renner–Teller effect was in an electronic state of the NH 2 molecule that is bent at equilibrium. [ 4 ]
Much has been published about the Renner–Teller effect since its first experimental observation in 1959; see the bibliography on pages 412-413 of the textbook by Bunker and Jensen. [ 5 ] Section 13.4 of this textbook discusses both the Renner–Teller effect (called the Renner effect) and the Jahn–Teller effect.
In theoretical physics , the renormalization group ( RG ) is a formal apparatus that allows systematic investigation of the changes of a physical system as viewed at different scales . In particle physics , it reflects the changes in the underlying physical laws (codified in a quantum field theory ) as the energy (or mass) scale at which physical processes occur varies.
A change in scale is called a scale transformation . The renormalization group is intimately related to scale invariance and conformal invariance , symmetries in which a system appears the same at all scales ( self-similarity ); [ a ] at a fixed point of the renormalization group flow, the field theory is conformally invariant.
As the scale varies, it is as if one were decreasing the magnifying power of a notional microscope viewing the system (only decreasing: the RG is a semi-group and has no well-defined inverse operation). In so-called renormalizable theories, the system at one scale will generally consist of self-similar copies of itself when viewed at a smaller scale, with different parameters describing the components of the system. The components, or fundamental variables, may relate to atoms, elementary particles, atomic spins, etc. The parameters of the theory typically describe the interactions of the components. These may be variable couplings which measure the strength of various forces, or mass parameters themselves. The components themselves may appear to be composed of more of the self-same components as one goes to shorter distances.
For example, in quantum electrodynamics (QED), an electron appears to be composed of electron and positron pairs and photons, as one views it at higher resolution, at very short distances. The electron at such short distances has a slightly different electric charge than does the dressed electron seen at large distances, and this change, or running , in the value of the electric charge is determined by the renormalization group equation.
The idea of scale transformations and scale invariance is old in physics: Scaling arguments were commonplace for the Pythagorean school , Euclid , and up to Galileo . [ 1 ] They became popular again at the end of the 19th century, perhaps the first example being the idea of enhanced viscosity of Osborne Reynolds , as a way to explain turbulence.
The renormalization group was initially devised in particle physics, but nowadays its applications extend to solid-state physics , fluid mechanics , physical cosmology , and even nanotechnology . An early article [ 2 ] by Ernst Stueckelberg and André Petermann in 1953 anticipates the idea in quantum field theory . Stueckelberg and Petermann opened the field conceptually. They noted that renormalization exhibits a group of transformations which transfers quantities from the bare terms to the counter terms. They introduced a function h ( e ) in quantum electrodynamics (QED) , which is now known as the beta function (see below).
Murray Gell-Mann and Francis E. Low restricted the idea to scale transformations in QED in 1954, [ 3 ] which are the most physically significant, and focused on asymptotic forms of the photon propagator at high energies. They determined the variation of the electromagnetic coupling in QED, by appreciating the simplicity of the scaling structure of that theory. They thus discovered that the coupling parameter $g(\mu)$ at the energy scale $\mu$ is effectively given by the (one-dimensional translation) group equation

$$g(\mu) = G^{-1}\!\left(\left(\frac{\mu}{M}\right)^{d} G(g(M))\right),$$

or equivalently $G(g(\mu)) = G(g(M))\,(\mu/M)^{d}$, for an arbitrary function $G$ (known as Wegner 's scaling function) and a constant $d$, in terms of the coupling $g(M)$ at a reference scale $M$.
Gell-Mann and Low realized in these results that the effective scale can be arbitrarily taken as $\mu$, and can vary to define the theory at any other scale:

$$g(\kappa) = G^{-1}\!\left(\left(\frac{\kappa}{\mu}\right)^{d} G(g(\mu))\right) = G^{-1}\!\left(\left(\frac{\kappa}{M}\right)^{d} G(g(M))\right).$$
The gist of the RG is this group property: as the scale μ varies, the theory presents a self-similar replica of itself, and any scale can be accessed similarly from any other scale, by group action, a formal transitive conjugacy of couplings [ 4 ] in the mathematical sense ( Schröder's equation ).
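This composition law can be checked directly. Below is a minimal numerical sketch in Python, with an arbitrarily chosen invertible scaling function $G$ and exponent $d$ (both illustrative assumptions, not derived from QED): running from the reference scale $M$ to $\kappa$ in one step agrees with running via any intermediate scale $\mu$.

```python
import numpy as np

# Gell-Mann-Low composition law: g(kappa) computed directly from the reference
# scale M agrees with g(kappa) computed via an intermediate scale mu.
# G, its inverse, and d are arbitrary illustrative choices.
def G(g):
    return g / (1.0 + g)

def G_inv(y):
    return y / (1.0 - y)

def run(g_ref, scale_ref, scale_new, d=0.5):
    """g(scale_new) = G^{-1}((scale_new/scale_ref)^d * G(g(scale_ref)))."""
    return G_inv((scale_new / scale_ref) ** d * G(g_ref))

g_M, M, mu, kappa = 0.3, 1.0, 0.1, 0.01   # run downward in scale
direct = run(g_M, M, kappa)               # M -> kappa in one step
via_mu = run(run(g_M, M, mu), mu, kappa)  # M -> mu -> kappa
print(direct, via_mu)                     # agree up to rounding
```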
On the basis of this (finite) group equation and its scaling property, Gell-Mann and Low could then focus on infinitesimal transformations, and invented a computational method based on a mathematical flow function $\psi(g) = G\,d/(\partial G/\partial g)$ of the coupling parameter $g$, which they introduced. Like the function $h(e)$ of Stueckelberg-Petermann, their function determines the differential change of the coupling $g(\mu)$ with respect to a small change in energy scale $\mu$ through a differential equation, the renormalization group equation :

$$\frac{\partial g}{\partial \ln\mu} = \psi(g) = \beta(g).$$

The modern name is also indicated, the beta function , introduced by C. Callan and K. Symanzik in 1970. [ 5 ] Since it is a mere function of $g$, integration in $g$ of a perturbative estimate of it permits specification of the renormalization trajectory of the coupling, that is, its variation with energy, effectively the function $G$ in this perturbative approximation. The renormalization group prediction (cf. Stueckelberg–Petermann and Gell-Mann–Low works) was confirmed 40 years later at the LEP accelerator experiments: the fine structure "constant" of QED was measured [ 6 ] to be about 1/127 at energies close to 200 GeV, as opposed to the standard low-energy physics value of 1/137. [ b ]
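The running just quoted can be reproduced with a one-loop numerical integration of this equation. The sketch below is illustrative only: it uses the one-loop beta function with crude step-function fermion thresholds (precision determinations treat the hadronic contributions dispersively), yet it captures the rise of $1/\alpha$ from about 137 at low energies toward the value measured at LEP.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-loop QED running:  d(alpha)/d(ln mu) = (2 alpha^2 / 3 pi) * N(mu),
# where N(mu) sums N_c * Q^2 over charged fermions lighter than mu.
# Entries: (mass in GeV, charge Q, color factor N_c); quark thresholds crude.
fermions = [
    (0.000511, -1.0, 1), (0.1057, -1.0, 1), (1.777, -1.0, 1),   # e, mu, tau
    (0.0022, 2/3, 3), (0.0047, -1/3, 3), (0.096, -1/3, 3),      # u, d, s
    (1.27, 2/3, 3), (4.18, -1/3, 3),                            # c, b
]

def n_eff(mu):
    return sum(nc * q * q for m, q, nc in fermions if mu > m)

def rge(log_mu, alpha):
    return 2.0 * alpha**2 / (3.0 * np.pi) * n_eff(np.exp(log_mu))

alpha_me = 1 / 137.036                      # coupling at the electron-mass scale
span = (np.log(0.000511), np.log(200.0))    # run from m_e up to 200 GeV
sol = solve_ivp(rge, span, [alpha_me], rtol=1e-10)
print(f"1/alpha(200 GeV) ~ {1 / sol.y[0, -1]:.1f}")  # ~125-130; measured ~127
```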
The renormalization group emerges from the renormalization of the quantum field variables, which normally has to address the problem of infinities in a quantum field theory. [ c ] This problem of systematically handling the infinities of quantum field theory to obtain finite physical quantities was solved for QED by Richard Feynman , Julian Schwinger and Shin'ichirō Tomonaga , who received the 1965 Nobel prize for these contributions. They effectively devised the theory of mass and charge renormalization, in which the infinity in the momentum scale is cut off by an ultra-large regulator , Λ. [ d ]
The dependence of physical quantities, such as the electric charge or electron mass, on the scale Λ is hidden, effectively swapped for the longer-distance scales at which the physical quantities are measured, and, as a result, all observable quantities end up being finite instead, even for an infinite Λ. Gell-Mann and Low thus realized in these results that, infinitesimally, while a tiny change in g is provided by the above RG equation given ψ( g ), the self-similarity is expressed by the fact that ψ( g ) depends explicitly only upon the parameter(s) of the theory, and not upon the scale μ . Consequently, the above renormalization group equation may be solved for ( G and thus) g ( μ ).
A deeper understanding of the physical meaning and generalization of the renormalization process, which goes beyond the dilation group of conventional renormalizable theories, considers methods where widely different scales of lengths appear simultaneously. It came from condensed matter physics : Leo P. Kadanoff 's paper in 1966 proposed the "block-spin" renormalization group. [ 8 ] The "blocking idea" is a way to define the components of the theory at large distances as aggregates of components at shorter distances.
This approach covered the conceptual point and was given full computational substance in the extensive important work of Kenneth Wilson . The power of Wilson's ideas was demonstrated by a constructive iterative renormalization solution of a long-standing problem, the Kondo problem , in 1975, [ 9 ] as well as the preceding seminal developments of his new method in the theory of second-order phase transitions and critical phenomena in 1971. [ 10 ] [ 11 ] [ 12 ] He was awarded the Nobel prize for these decisive contributions in 1982. [ 13 ]
Meanwhile, the RG in particle physics had been reformulated in more practical terms by Callan and Symanzik in 1970. [ 5 ] [ 14 ] The above beta function, which describes the "running of the coupling" parameter with scale, was also found to amount to the "canonical trace anomaly", which represents the quantum-mechanical breaking of scale (dilation) symmetry in a field theory. [ e ] Applications of the RG to particle physics exploded in number in the 1970s with the establishment of the Standard Model .
In 1973, [ 15 ] [ 16 ] it was discovered that a theory of interacting colored quarks, called quantum chromodynamics , had a negative beta function. This means that the coupling grows as the energy scale decreases, blowing up (diverging) at a special value of $\mu$. This special value is the scale of the strong interactions , $\mu = \Lambda_{\text{QCD}}$, and occurs at about 200 MeV. Conversely, the coupling becomes weak at very high energies ( asymptotic freedom ), and the quarks become observable as point-like particles, in deep inelastic scattering , as anticipated by Feynman–Bjorken scaling. QCD was thereby established as the quantum field theory controlling the strong interactions of particles.
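At one loop this can be made explicit. For $n_f$ quark flavors, the QCD beta function and its solution are (a standard one-loop result, quoted here for orientation):

$$\frac{\partial \alpha_s}{\partial \ln\mu} = -\frac{b_0}{2\pi}\,\alpha_s^2, \qquad b_0 = 11 - \frac{2n_f}{3}, \qquad \alpha_s(\mu) = \frac{2\pi}{b_0 \ln(\mu/\Lambda_{\text{QCD}})}.$$

Since $b_0 > 0$ for $n_f \leq 16$, the coupling shrinks logarithmically at high $\mu$ (asymptotic freedom) and diverges as $\mu \to \Lambda_{\text{QCD}}$ from above, which is how $\Lambda_{\text{QCD}}$ marks the breakdown of perturbation theory.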
Momentum space RG also became a highly developed tool in solid state physics, but was hindered by the extensive use of perturbation theory, which prevented the theory from succeeding in strongly correlated systems. [ f ]
Conformal symmetry is associated with the vanishing of the beta function. This can occur naturally if a coupling constant is attracted, by running, toward a fixed point at which β ( g ) = 0. In QCD, the fixed point occurs at short distances where g → 0 and is called a ( trivial ) ultraviolet fixed point . For heavy quarks, such as the top quark , the coupling to the mass-giving Higgs boson runs toward a fixed non-zero (non-trivial) infrared fixed point , first predicted by Pendleton and Ross (1981), [ 17 ] and C. T. Hill . [ 18 ] The top quark Yukawa coupling lies slightly below the infrared fixed point of the Standard Model suggesting the possibility of additional new physics, such as sequential heavy Higgs bosons. [ citation needed ]
In string theory , conformal invariance of the string world-sheet is a fundamental symmetry: β = 0 is a requirement. Here, β is a function of the geometry of the space-time in which the string moves. This determines the space-time dimensionality of the string theory and enforces Einstein's equations of general relativity on the geometry. The RG is of fundamental importance to string theory and theories of grand unification .
It is also the modern key idea underlying critical phenomena in condensed matter physics. [ 19 ] Indeed, the RG has become one of the most important tools of modern physics. [ 20 ] It is often used in combination with the Monte Carlo method . [ 21 ]
This section introduces pedagogically a picture of RG which may be easiest to grasp: the block spin RG, devised by Leo P. Kadanoff in 1966. [ 8 ]
Consider a 2D solid, a set of atoms in a perfect square array, as depicted in the figure.
Assume that atoms interact among themselves only with their nearest neighbours, and that the system is at a given temperature $T$. The strength of their interaction is quantified by a certain coupling $J$. The physics of the system will be described by a certain formula, say the Hamiltonian $H(T, J)$.
Now proceed to divide the solid into blocks of 2×2 squares; we attempt to describe the system in terms of block variables , i.e., variables which describe the average behavior of the block. Further assume that, by some lucky coincidence, the physics of block variables is described by a formula of the same kind , but with different values for $T$ and $J$: $H(T', J')$. (This isn't exactly true, in general, but it is often a good first approximation.)
Perhaps the initial problem was too hard to solve, since there were too many atoms. Now, in the renormalized problem we have only one fourth of them. But why stop now? Another iteration of the same kind leads to $H(T'', J'')$, and only one sixteenth of the atoms. We are increasing the observation scale with each RG step.
Of course, the best idea is to iterate until there is only one very big block. Since the number of atoms in any real sample of material is very large, this is more or less equivalent to finding the long range behavior of the RG transformation which took $(T, J) \to (T', J')$ and $(T', J') \to (T'', J'')$. Often, when iterated many times, this RG transformation leads to a certain number of fixed points .
To be more concrete, consider a magnetic system (e.g., the Ising model ), in which the $J$ coupling denotes the tendency of neighbor spins to align. The configuration of the system is the result of the tradeoff between the ordering $J$ term and the disordering effect of temperature.
For many models of this kind there are three fixed points: a zero-temperature (infinite-coupling) fixed point where the spins are fully ordered, an infinite-temperature (zero-coupling) fixed point where they are fully disordered, and a nontrivial fixed point in between, at critical values $T_c$ and $J_c$, corresponding to the phase transition.
So, if we are given a certain material with given values of T and J , all we have to do in order to find out the large-scale behaviour of the system is to iterate the pair until we find the corresponding fixed point.
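A minimal concrete instance of this procedure is the one-dimensional Ising model, where decimating every other spin can be done exactly and yields the recursion $\tanh K' = \tanh^2 K$ for the dimensionless coupling $K = J/k_B T$. The sketch below iterates this map: every finite starting coupling flows to the trivial fixed point $K^* = 0$, reflecting the absence of a finite-temperature phase transition in one dimension (the ordered fixed point $K^* = \infty$ is unstable).

```python
import numpy as np

# Exact real-space RG step for the 1D Ising model: decimating every other
# spin gives tanh(K') = tanh(K)^2 for the dimensionless coupling K = J/(kB*T).
def rg_step(K):
    return np.arctanh(np.tanh(K) ** 2)

K = 2.0   # a strong initial coupling (low temperature)
for step in range(8):
    print(f"step {step}: K = {K:.6f}")
    K = rg_step(K)   # flows monotonically toward the fixed point K* = 0
```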
In more technical terms, let us assume that we have a theory described by a certain function $Z$ of the state variables $\{s_i\}$ and a certain set of coupling constants $\{J_k\}$. This function may be a partition function , an action , a Hamiltonian , etc. It must contain the whole description of the physics of the system.
Now we consider a certain blocking transformation of the state variables $\{s_i\} \to \{\tilde{s}_i\}$; the number of $\tilde{s}_i$ must be lower than the number of $s_i$. Now let us try to rewrite the $Z$ function only in terms of the $\tilde{s}_i$. If this is achievable by a certain change in the parameters, $\{J_k\} \to \{\tilde{J}_k\}$, then the theory is said to be renormalizable .
Most fundamental theories of physics such as quantum electrodynamics , quantum chromodynamics and electro-weak interaction, but not gravity, are exactly renormalizable. Also, most theories in condensed matter physics are approximately renormalizable, from superconductivity to fluid turbulence.
The change in the parameters is implemented by a certain beta function: $\{\tilde{J}_k\} = \beta(\{J_k\})$, which is said to induce a renormalization group flow (or RG flow ) on the $J$-space. The values of $J$ under the flow are called running couplings .
As was stated in the previous section, the most important features of the RG flow are its fixed points . The possible macroscopic states of the system, at a large scale, are given by this set of fixed points. If these fixed points correspond to a free field theory, the theory is said to exhibit quantum triviality , possessing what is called a Landau pole , as in quantum electrodynamics. For a $\varphi^4$ interaction, Michael Aizenman proved that this theory is indeed trivial, for space-time dimension $D \geq 5$. [ 22 ] For $D = 4$, the triviality has yet to be proven rigorously, but lattice computations have provided strong evidence for this. This fact is important as quantum triviality can be used to bound or even predict parameters such as the Higgs boson mass in asymptotic safety scenarios. Numerous fixed points appear in the study of lattice Higgs theories , but the nature of the quantum field theories associated with these remains an open question. [ 23 ]
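The Landau-pole behavior can be made concrete with the one-loop running of the $\varphi^4$ coupling (using the normalization $V = \lambda\varphi^4/4!$, for which the one-loop beta function is $\beta(\lambda) = 3\lambda^2/16\pi^2$):

$$\lambda(\mu) = \frac{\lambda(M)}{1 - \dfrac{3\lambda(M)}{16\pi^2}\,\ln(\mu/M)},$$

which diverges at the finite scale $\mu_{\text{Landau}} = M\,e^{16\pi^2/3\lambda(M)}$. Demanding that the pole be absent at all scales forces $\lambda(M) \to 0$, which is the triviality statement at this one-loop level.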
Since the RG transformations in such systems are lossy (i.e., the number of variables decreases; see lossy data compression for an analogous notion in a different context), there need not be an inverse for a given RG transformation. Thus, in such lossy systems, the renormalization group is, in fact, a semigroup , as lossiness implies that there is no unique inverse for each element.
Consider a certain observable $A$ of a physical system undergoing an RG transformation. The behavior of the observable as the length scale of the system goes from small to large determines its importance for the scaling law: if its magnitude grows, it is called relevant ; if it shrinks, irrelevant ; and if it stays unchanged, marginal .
A relevant observable is needed to describe the macroscopic behaviour of the system; irrelevant observables are not needed. Marginal observables may or may not need to be taken into account. A remarkable broad fact is that most observables are irrelevant , i.e., the macroscopic physics is dominated by only a few observables in most systems .
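In practice, the classification is read off from a linearization of the RG flow around a fixed point: eigenvalues of the Jacobian with positive real part mark relevant directions, negative ones irrelevant, and vanishing ones marginal. The sketch below does this for a schematic two-coupling flow; the beta functions are illustrative assumptions chosen to keep the algebra transparent, not derived from any particular model.

```python
import numpy as np

# Schematic two-coupling RG flow (illustrative assumption):
#   dt/dl = 2*t + u,   du/dl = u - u**2
# Fixed points: (t, u) = (0, 0) and the nontrivial point (-1/2, 1).
t_star, u_star = -0.5, 1.0

# Jacobian of the beta functions at the nontrivial fixed point.
J = np.array([[2.0, 1.0],
              [0.0, 1.0 - 2.0 * u_star]])

for lam in np.linalg.eigvals(J):
    kind = "relevant" if lam.real > 0 else "irrelevant"
    print(f"eigenvalue {lam.real:+.2f}: {kind} direction")
# One relevant direction (like temperature, which must be tuned to reach
# criticality) and one irrelevant direction (forgotten at large scales).
```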
As an example, in microscopic physics, to describe a system consisting of a mole of carbon-12 atoms we need of the order of $10^{23}$ (the Avogadro number ) variables, while to describe it as a macroscopic system (12 grams of carbon-12) we only need a few.
Before Wilson's RG approach, there was an astonishing empirical fact to explain: The coincidence of the critical exponents (i.e., the exponents of the reduced-temperature dependence of several quantities near a second order phase transition ) in very disparate phenomena, such as magnetic systems, superfluid transition ( Lambda transition ), alloy physics, etc. So in general, thermodynamic features of a system near a phase transition depend only on a small number of variables , such as the dimensionality and symmetry, but are insensitive to details of the underlying microscopic properties of the system.
This coincidence of critical exponents for ostensibly quite different physical systems, called universality , is easily explained using the renormalization group, by demonstrating that the differences in phenomena among the individual fine-scale components are determined by irrelevant observables , while the relevant observables are shared in common. Hence many macroscopic phenomena may be grouped into a small set of universality classes , specified by the shared sets of relevant observables. [ g ]
Renormalization groups, in practice, come in two main "flavors". The Kadanoff picture explained above refers mainly to the so-called real-space RG .
Momentum-space RG on the other hand, has a longer history despite its relative subtlety. It can be used for systems where the degrees of freedom can be cast in terms of the Fourier modes of a given field. The RG transformation proceeds by integrating out a certain set of high-momentum (large-wavenumber) modes. Since large wavenumbers are related to short-length scales, the momentum-space RG results in an essentially analogous coarse-graining effect as with real-space RG.
Momentum-space RG is usually performed on a perturbation expansion. The validity of such an expansion is predicated upon the actual physics of a system being close to that of a free field system. In this case, one may calculate observables by summing the leading terms in the expansion.
This approach has proved successful for many theories, including most of particle physics, but fails for systems whose physics is very far from any free system, i.e., systems with strong correlations.
As an example of the physical meaning of RG in particle physics, consider an overview of charge renormalization in quantum electrodynamics (QED). Suppose we have a point positive charge of a certain true (or bare ) magnitude. The electromagnetic field around it has a certain energy, and thus may produce some virtual electron-positron pairs (for example). Although virtual particles annihilate very quickly, during their short lives the electron will be attracted by the charge, and the positron will be repelled. Since this happens uniformly everywhere near the point charge, where its electric field is sufficiently strong, these pairs effectively create a screen around the charge when viewed from far away. The measured strength of the charge will depend on how close our measuring probe can approach the point charge, bypassing more of the screen of virtual particles the closer it gets. Hence the measured value of a certain coupling constant (here, the electric charge) depends on the distance scale .
Momentum and length scales are related inversely, according to the de Broglie relation : The higher the energy or momentum scale we may reach, the lower the length scale we may probe and resolve. Therefore, the momentum-space RG practitioners sometimes claim to integrate out high momenta or high energy from their theories.
An exact renormalization group equation ( ERGE ) is one that takes irrelevant couplings into account. There are several formulations.
The Wilson ERGE is the simplest conceptually, but is practically impossible to implement. Fourier transform into momentum space after Wick rotating into Euclidean space . Insist upon a hard momentum cutoff , $p^2 \leq \Lambda^2$, so that the only degrees of freedom are those with momenta less than $\Lambda$. The partition function is

$$Z = \int_{p^2 \leq \Lambda^2} \mathcal{D}\varphi \,\exp\left[-S_\Lambda[\varphi]\right].$$
For any positive $\Lambda'$ less than $\Lambda$, define $S_{\Lambda'}$ (a functional over field configurations $\varphi$ whose Fourier transform has momentum support within $p^2 \leq \Lambda'^2$) as

$$\exp\left(-S_{\Lambda'}[\varphi]\right) \ \stackrel{\mathrm{def}}{=}\ \int_{\Lambda' \leq p \leq \Lambda} \mathcal{D}\varphi\, \exp\left[-S_\Lambda[\varphi]\right].$$
If $S_\Lambda$ depends only on $\varphi$ and not on derivatives of $\varphi$, this may be rewritten as

$$\exp\left(-S_{\Lambda'}[\varphi]\right) \ \stackrel{\mathrm{def}}{=}\ \prod_{\Lambda' \leq p \leq \Lambda} \int d\varphi(p)\, \exp\left[-S_\Lambda[\varphi(p)]\right],$$

in which it becomes clear that, since only functions $\varphi$ with support between $\Lambda'$ and $\Lambda$ are integrated over, the left hand side may still depend on $\varphi$ with support outside that range. Obviously,

$$Z = \int_{p^2 \leq \Lambda'^2} \mathcal{D}\varphi\, \exp\left[-S_{\Lambda'}[\varphi]\right].$$
In fact, this transformation is transitive . If you compute $S_{\Lambda'}$ from $S_\Lambda$ and then compute $S_{\Lambda''}$ from $S_{\Lambda'}$, this gives you the same Wilsonian action as computing $S_{\Lambda''}$ directly from $S_\Lambda$.
The Polchinski ERGE involves a smooth UV regulator cutoff . [ 24 ] Basically, the idea is an improvement over the Wilson ERGE. Instead of a sharp momentum cutoff, it uses a smooth cutoff: contributions from momenta greater than $\Lambda$ are heavily suppressed. The smoothness of the cutoff, however, allows us to derive a functional differential equation in the cutoff scale $\Lambda$. As in Wilson's approach, we have a different action functional for each cutoff energy scale $\Lambda$. Each of these actions is supposed to describe exactly the same model, which means that their partition functionals have to match exactly.
In other words (for a real scalar field; generalizations to other fields are obvious),

$$Z_\Lambda[J] = \int \mathcal{D}\varphi\, \exp\left(-S_\Lambda[\varphi] + J\cdot\varphi\right) = \int \mathcal{D}\varphi\, \exp\left(-\tfrac{1}{2}\varphi\cdot R_\Lambda\cdot\varphi - S_{\operatorname{int}\,\Lambda}[\varphi] + J\cdot\varphi\right)$$

and $Z_\Lambda$ is really independent of $\Lambda$! We have used the condensed deWitt notation here. We have also split the bare action $S_\Lambda$ into a quadratic kinetic part and an interacting part $S_{\operatorname{int}\,\Lambda}$. This split most certainly isn't clean. The "interacting" part can very well also contain quadratic kinetic terms. In fact, if there is any wave function renormalization , it most certainly will. This can be somewhat reduced by introducing field rescalings. $R_\Lambda$ is a function of the momentum $p$, and the second term in the exponent is

$$\frac{1}{2}\int \frac{d^dp}{(2\pi)^d}\, \tilde{\varphi}^*(p)\, R_\Lambda(p)\, \tilde{\varphi}(p)$$

when expanded.
When $p \ll \Lambda$, $R_\Lambda(p)/p^2$ is essentially 1. When $p \gg \Lambda$, $R_\Lambda(p)/p^2$ grows very large, approaching infinity. $R_\Lambda(p)/p^2$ is always greater than or equal to 1 and is smooth. Basically, this leaves the fluctuations with momenta less than the cutoff $\Lambda$ unaffected but heavily suppresses contributions from fluctuations with momenta greater than the cutoff. This is obviously a huge improvement over Wilson's sharp cutoff.
The condition that

$$\frac{d}{d\Lambda} Z_\Lambda = 0$$

can be satisfied by (but not only by)

$$\frac{d}{d\Lambda} S_{\operatorname{int}\,\Lambda} = \frac{1}{2}\,\frac{\delta S_{\operatorname{int}\,\Lambda}}{\delta\varphi} \cdot \left(\frac{d}{d\Lambda} R_\Lambda^{-1}\right) \cdot \frac{\delta S_{\operatorname{int}\,\Lambda}}{\delta\varphi} - \frac{1}{2}\,\operatorname{Tr}\left[\frac{\delta^2 S_{\operatorname{int}\,\Lambda}}{\delta\varphi\,\delta\varphi} \cdot \frac{d}{d\Lambda} R_\Lambda^{-1}\right].$$
Jacques Distler claimed without proof that this ERGE is not correct nonperturbatively . [ 25 ]
The effective average action ERGE involves a smooth IR regulator cutoff.
The idea is to take all fluctuations right up to an IR scale $k$ into account. The effective average action (EAA) will be accurate for fluctuations with momenta larger than $k$. As the parameter $k$ is lowered, the effective average action approaches the effective action which includes all quantum and classical fluctuations. In contrast, for large $k$ the effective average action is close to the "bare action". So, the effective average action interpolates between the "bare action" and the effective action .
For a real scalar field , one adds an IR cutoff

$$\frac{1}{2}\int \frac{d^dp}{(2\pi)^d}\, \tilde{\varphi}^*(p)\, R_k(p)\, \tilde{\varphi}(p)$$

to the action $S$, where $R_k$ is a function of both $k$ and $p$ such that for $p \gg k$, $R_k(p)$ is very tiny and approaches 0, while for $p \ll k$, $R_k(p) \gtrsim k^2$. $R_k$ is both smooth and nonnegative. Its large value for small momenta leads to a suppression of their contribution to the partition function, which is effectively the same thing as neglecting large-scale fluctuations.
One can use the condensed deWitt notation

$$\frac{1}{2}\,\varphi \cdot R_k \cdot \varphi$$

for this IR regulator.
So,

$$\exp\left(W_k[J]\right) = Z_k[J] = \int \mathcal{D}\varphi\, \exp\left(-S[\varphi] - \tfrac{1}{2}\varphi\cdot R_k\cdot\varphi + J\cdot\varphi\right)$$

where $J$ is the source field . The Legendre transform of $W_k$ ordinarily gives the effective action . However, the action that we started off with is really $S[\varphi] + \tfrac{1}{2}\varphi\cdot R_k\cdot\varphi$, and so, to get the effective average action, we subtract off $\tfrac{1}{2}\varphi\cdot R_k\cdot\varphi$. In other words,

$$\varphi[J; k] = \frac{\delta W_k}{\delta J}[J]$$

can be inverted to give $J_k[\varphi]$, and we define the effective average action $\Gamma_k$ as

$$\Gamma_k[\varphi] \ \stackrel{\mathrm{def}}{=}\ \left(-W\left[J_k[\varphi]\right] + J_k[\varphi]\cdot\varphi\right) - \tfrac{1}{2}\,\varphi\cdot R_k\cdot\varphi.$$
Hence,

$$\begin{aligned}\frac{d}{dk}\Gamma_k[\varphi] &= -\frac{d}{dk}W_k[J_k[\varphi]] - \frac{\delta W_k}{\delta J}\cdot\frac{d}{dk}J_k[\varphi] + \frac{d}{dk}J_k[\varphi]\cdot\varphi - \tfrac{1}{2}\,\varphi\cdot\frac{d}{dk}R_k\cdot\varphi \\ &= -\frac{d}{dk}W_k[J_k[\varphi]] - \tfrac{1}{2}\,\varphi\cdot\frac{d}{dk}R_k\cdot\varphi \\ &= \tfrac{1}{2}\left\langle \varphi\cdot\frac{d}{dk}R_k\cdot\varphi\right\rangle_{J_k[\varphi];\,k} - \tfrac{1}{2}\,\varphi\cdot\frac{d}{dk}R_k\cdot\varphi \\ &= \tfrac{1}{2}\operatorname{Tr}\left[\left(\frac{\delta J_k}{\delta\varphi}\right)^{-1}\cdot\frac{d}{dk}R_k\right] \\ &= \tfrac{1}{2}\operatorname{Tr}\left[\left(\frac{\delta^2\Gamma_k}{\delta\varphi\,\delta\varphi} + R_k\right)^{-1}\cdot\frac{d}{dk}R_k\right]\end{aligned}$$

Thus

$$\frac{d}{dk}\Gamma_k[\varphi] = \tfrac{1}{2}\operatorname{Tr}\left[\left(\frac{\delta^2\Gamma_k}{\delta\varphi\,\delta\varphi} + R_k\right)^{-1}\cdot\frac{d}{dk}R_k\right]$$

is the ERGE, which is also known as the Wetterich equation. [ 26 ]
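In practice the Wetterich equation is solved within truncations. The simplest is the local potential approximation (LPA), where $\Gamma_k[\varphi] = \int d^dx\,\left[\tfrac{1}{2}(\partial\varphi)^2 + U_k(\varphi)\right]$; with Litim's optimized regulator $R_k(q) = (k^2 - q^2)\,\theta(k^2 - q^2)$, the momentum trace can be done in closed form, giving (a standard result, quoted here without derivation)

$$k\,\frac{\partial U_k(\varphi)}{\partial k} = A_d\,\frac{k^{d+2}}{k^2 + U_k''(\varphi)}, \qquad A_d = \frac{S_d}{d\,(2\pi)^d}, \qquad S_d = \frac{2\pi^{d/2}}{\Gamma(d/2)},$$

a closed partial differential equation for the running potential. Its scaling (fixed-point) solutions in $d = 3$ reproduce, for example, Wilson–Fisher critical exponents to reasonable accuracy.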
As shown by Morris, the effective average action $\Gamma_{k}$ is in fact simply related to Polchinski's effective action $S_{\text{int}}$ via a Legendre transform relation. [ 27 ]
As there are infinitely many choices of $R_{k}$ , there are also infinitely many different interpolating ERGEs.
Generalization to other fields like spinorial fields is straightforward.
Although the Polchinski ERGE and the effective average action ERGE look similar, they are based upon very different philosophies. In the effective average action ERGE, the bare action is left unchanged (and the UV cutoff scale—if there is one—is also left unchanged) but the IR contributions to the effective action are suppressed whereas in the Polchinski ERGE, the QFT is fixed once and for all but the "bare action" is varied at different energy scales to reproduce the prespecified model. Polchinski's version is certainly much closer to Wilson's idea in spirit. Note that one uses "bare actions" whereas the other uses effective (average) actions.
The renormalization group can also be used to compute effective potentials at orders higher than one loop. This kind of approach is particularly interesting for computing corrections to the Coleman–Weinberg [ 28 ] mechanism. To do so, one must write the renormalization group equation in terms of the effective potential. For the case of the $\varphi^{4}$ model:
$$\left(\mu\frac{\partial}{\partial\mu} + \beta_{\lambda}\frac{\partial}{\partial\lambda} + \varphi\,\gamma_{\varphi}\frac{\partial}{\partial\varphi}\right)V_{\text{eff}} = 0 .$$
In order to determine the effective potential, it is useful to write $V_{\text{eff}}$ as
$$V_{\text{eff}} = \frac{1}{4}\varphi^{4}\,S_{\text{eff}}\big(\lambda, L(\varphi)\big),$$
where $S_{\text{eff}}$ is a power series in $L(\varphi) = \log\frac{\varphi^{2}}{\mu^{2}}$ :
$$S_{\text{eff}} = A + BL + CL^{2} + DL^{3} + \cdots .$$
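As a sketch of the perturbative procedure described next, one can feed this ansatz into the RG equation symbolically and fix the leading-log coefficient B. The one-loop $\beta_{\lambda} = 3\lambda^{2}/(16\pi^{2})$ used below is an assumed textbook value (not given in the text), and the anomalous dimension $\gamma_{\varphi}$ is dropped.

```python
import sympy as sp

lam, L = sp.symbols('lambda L')
B = sp.Function('B')

# One-loop beta function of the quartic coupling (an assumed standard value);
# the anomalous dimension gamma_phi is neglected here.
beta = 3 * lam**2 / (16 * sp.pi**2)

# Ansatz truncated at first order in L = log(phi^2/mu^2), with A = lambda:
S = lam + B(lam) * L

# Since mu * dL/dmu = -2, the RG equation for S_eff reads
# -2 * dS/dL + beta * dS/dlambda = 0.
rge = sp.expand(-2 * sp.diff(S, L) + beta * sp.diff(S, lam))

# The O(L^0) piece fixes the leading-log coefficient:
print(sp.solve(sp.Eq(rge.coeff(L, 0), 0), B(lam)))
# [3*lambda**2/(32*pi**2)]
```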
Using the above ansatz , it is possible to solve the renormalization group equation perturbatively and find the effective potential up to the desired order. A pedagogical explanation of this technique is given in reference [ 29 ]. | https://en.wikipedia.org/wiki/Renormalization_group
Rensch's rule is a biological rule on allometrics , concerning the relationship between the extent of sexual size dimorphism and which sex is larger. Across species within a lineage , size dimorphism increases with increasing body size when the male is the larger sex, and decreases with increasing average body size when the female is the larger sex. The rule was proposed by the evolutionary biologist Bernhard Rensch in 1950. [ 1 ]
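The rule is commonly tested quantitatively by regressing log male size on log female size across species, with a slope greater than 1 taken as consistent with Rensch's rule; the sketch below illustrates this framing with invented body masses.

```python
import numpy as np

# Hypothetical mean body masses for five related species (invented numbers).
female = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
male = np.array([10.5, 22.0, 46.0, 96.0, 200.0])

# Slope of log(male) on log(female); > 1 means dimorphism grows with body
# size in a male-larger lineage, as Rensch's rule predicts.
slope, intercept = np.polyfit(np.log(female), np.log(male), 1)
print(f"allometric slope = {slope:.3f}")   # ~1.06 here
```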
After controlling for confounding factors such as evolutionary history, an increase in average body size makes the difference in body size larger if the species has larger males, and smaller if it has larger females. [ 2 ] Some studies propose that this is due to sexual bimaturism , which causes male traits to diverge faster and develop for a longer period of time. [ 3 ] The correlation between sexual size dimorphism and body size is hypothesized to be a result of an increase in male-male competition in larger species, [ 4 ] a result of limited environmental resources, fuelling aggression between males over access to breeding territories [ 5 ] and mating partners. [ 2 ]
Phylogenetic lineages that appear to follow this rule include primates , pinnipeds , and artiodactyls . [ 6 ]
This rule has rarely been tested on parasites. A 2019 study showed that ectoparasitic philopterid and menoponid lice comply with it, while ricinid lice exhibit a reversed pattern. [ 7 ] | https://en.wikipedia.org/wiki/Rensch's_rule |
Utilization is the primary method by which tool rental companies measure asset performance . In its most basic form it measures the actual revenue earned by assets against the potential revenue they could have earned. [ 1 ]
Rental utilization is divided into a number of different calculations, and not all companies work in precisely the same way. In general terms, however, there are two key calculations: the physical utilization of the asset, measured as the number of days (or, for certain types of equipment, hours) the asset was actually rented against the number of days it was available for rental; and the financial utilization of the asset (referred to in North America as $ Utilization), measured as the rental revenue achieved over a period of time against the potential revenue that could have been achieved based on a target or standard, non-discounted rate. Physical utilization is also sometimes referred to as spot utilization , where a rental company looks at its current utilization of assets at a single moment in time (e.g. now, or 9 am today).
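A minimal sketch of the two calculations follows; the function names, field names, and figures are illustrative, not industry-standard definitions.

```python
def physical_utilization(days_on_rent: int, days_available: int) -> float:
    """Days actually rented against days available for rental."""
    return days_on_rent / days_available

def financial_utilization(actual_revenue: float,
                          target_daily_rate: float,
                          days_in_period: int) -> float:
    """Revenue achieved against potential revenue at the target,
    non-discounted rate ('$ Utilization' in North America)."""
    return actual_revenue / (target_daily_rate * days_in_period)

# An asset available for 30 days, rented for 21 of them at discounted rates:
print(f"physical:  {physical_utilization(21, 30):.0%}")              # 70%
print(f"financial: {financial_utilization(5040.0, 300.0, 30):.0%}")  # 56%
```

Note how the two measures can diverge: the asset is on rent 70% of the time, but discounting drags the financial utilization down to 56%.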
Utilization calculations may be varied based on many different factors.
Utilization in this context is heavily linked to profitability . [ 3 ] Low physical utilization may be mitigated by keeping rental rates high, while high physical utilization normally justifies keeping rental rates lower. [ 4 ] Different types of equipment may also alter the relationship between rates and utilization. [ 5 ]
| https://en.wikipedia.org/wiki/Rental_utilization
Rentiapril is an ACE inhibitor . [ 1 ]
| https://en.wikipedia.org/wiki/Rentiapril
The Renwu incident was a soil pollution event at the Formosa Plastics Corporation 's Renwu Plant in Kaohsiung , Taiwan .
In 2009, the Taiwanese Environmental Protection Administration (EPA) found that the soil and the groundwater in the area close to Formosa Plastics' Renwu Plant had been polluted by benzene , chloroform , dichloromethane , 1,1,2-trichloroethane , 1,1-dichloroethylene , tetrachloroethylene , trichloroethylene , and vinyl chloride . The pollutants were all present at levels over 20 times the government standard; most strikingly, the levels of 1,2-dichloroethane were 30,000 times higher than the standard. [ 1 ]
Formosa Plastics' Renwu Plant had already discovered the soil pollution in 2006, and it had tried to reinforce the structure of its wastewater pit, but the reinforcement work was never completed. [ 2 ]
Since the pollution was severe, residents of the nearby area and an elected representative lodged protests, hoping that the Renwu plant would be shut down. [ 3 ] In April 2010, the EPA proposed fining Formosa Plastics' Renwu Plant NT$150 million (US$4.7 million) for causing soil and groundwater pollution. [ 4 ]
| https://en.wikipedia.org/wiki/Renwu_incident
René Descartes ( /deɪˈkɑːrt/ day- KART , also UK : /ˈdeɪkɑːrt/ DAY -kart; French: [ʁəne dekaʁt] ; [ note 3 ] [ 11 ] 31 March 1596 – 11 February 1650) [ 12 ] [ 13 ] : 58 was a French philosopher, scientist , and mathematician , widely considered a seminal figure in the emergence of modern philosophy and science . Mathematics was paramount to his method of inquiry, and he connected the previously separate fields of geometry and algebra into analytic geometry . Descartes spent much of his working life in the Dutch Republic , initially serving the Dutch States Army , and later becoming a central intellectual of the Dutch Golden Age . [ 14 ] Although he served a Protestant state and was later counted as a deist by critics, Descartes was Roman Catholic . [ 15 ] [ 16 ]
Many elements of Descartes's philosophy have precedents in late Aristotelianism , the revived Stoicism of the 16th century, or in earlier philosophers like Augustine . In his natural philosophy , he differed from the schools on two major points. First, he rejected the splitting of corporeal substance into matter and form; second, he rejected any appeal to final ends , divine or natural, in explaining natural phenomena. [ 17 ] In his theology, he insists on the absolute freedom of God's act of creation . Refusing to accept the authority of previous philosophers, Descartes frequently set his views apart from the philosophers who preceded him. In the opening section of the Passions of the Soul , an early modern treatise on emotions, Descartes goes so far as to assert that he will write on this topic "as if no one had written on these matters before." His best known philosophical statement is " cogito, ergo sum " ("I think, therefore I am"; French: Je pense, donc je suis ), found in Discourse on the Method (1637, in French and Latin, 1644) and Principles of Philosophy (1644, in Latin, 1647 in French). [ note 4 ] The statement has either been interpreted as a logical syllogism or as an intuitive thought. [ 18 ]
Descartes has often been called the father of modern philosophy, and is largely seen as responsible for the increased attention given to epistemology in the 17th century. [ 19 ] [ note 5 ] He laid the foundation for 17th-century continental rationalism , later advocated by Spinoza and Leibniz , and was later opposed by the empiricist school of thought consisting of Hobbes , Locke , Berkeley , and Hume . The rise of early modern rationalism—as a systematic school of philosophy in its own right for the first time in history—exerted an influence on modern Western thought in general, with the birth of two rationalistic philosophical systems of Descartes ( Cartesianism ) and Spinoza ( Spinozism ). It was the 17th-century arch-rationalists like Descartes, Spinoza, and Leibniz who have given the " Age of Reason " its name and place in history. Leibniz, Spinoza, [ 20 ] and Descartes were all well-versed in mathematics as well as philosophy, with Descartes and Leibniz additionally contributing to a variety of scientific disciplines. [ 21 ] Although only Leibniz is extensively recognized as a polymath , all three rationalists integrated disparate domains of knowledge into their respective works. [ 22 ]
Descartes's Meditations on First Philosophy (1641) continues to be a standard text at most university philosophy departments. Descartes's influence in mathematics is equally apparent, being the namesake of the Cartesian coordinate system . He is credited as the father of analytic geometry—used in the discovery of infinitesimal calculus and analysis . Descartes was also one of the key figures in the Scientific Revolution .
René Descartes was born in La Haye en Touraine , Province of Touraine (now Descartes , Indre-et-Loire ), France, on 31 March 1596. [ 23 ] In May 1597, his mother, Jeanne Brochard, died a few days after giving birth to a stillborn child. [ 24 ] [ 23 ] Descartes's father, Joachim, was a member of the Parlement of Rennes at Rennes . [ 25 ] : 22 René lived with his grandmother and with his great-uncle. Although the Descartes family was Roman Catholic, the Poitou region was controlled by the Protestant Huguenots . [ 26 ] In 1607, late because of his fragile health, he entered the Jesuit Collège Royal Henry-Le-Grand at La Flèche , [ 27 ] [ 28 ] where he was introduced to mathematics and physics, including Galileo 's work. [ 29 ] [ 30 ] While there, Descartes first encountered hermetic mysticism. After graduation in 1614, he studied for two years (1615–16) at the University of Poitiers , earning a Baccalauréat and Licence in canon and civil law in 1616, [ 29 ] in accordance with his father's wishes that he should become a lawyer. [ 31 ] From there, he moved to Paris.
In Discourse on the Method , Descartes recalls: [ 32 ] : 20–21
I entirely abandoned the study of letters. Resolving to seek no knowledge other than that which could be found in myself or else in the great book of the world, I spent the rest of my youth traveling, visiting courts and armies, mixing with people of diverse temperaments and ranks, gathering various experiences, testing myself in the situations which fortune offered me, and at all times reflecting upon whatever came my way to derive some profit from it.
In 1618, in accordance with his ambition to become a professional military officer, Descartes joined, as a mercenary , the Protestant Dutch States Army in Breda under the command of Maurice of Nassau , [ 29 ] and undertook a formal study of military engineering , as established by Simon Stevin . [ 33 ] Descartes, therefore, received much encouragement in Breda to advance his knowledge of mathematics. [ 29 ] In this way, he became acquainted with Isaac Beeckman , [ 29 ] the principal of a Dordrecht school, for whom he wrote the Compendium of Music (written 1618, published 1650). [ 34 ]
While in the service of the Catholic Duke Maximilian of Bavaria from 1619, [ 35 ] Descartes was present at the Battle of the White Mountain near Prague , in November 1620. [ 36 ] [ 37 ]
According to Adrien Baillet , on the night of 10–11 November 1619 ( St. Martin's Day ), while stationed in Neuburg an der Donau , Descartes shut himself in a room with an "oven" (probably a cocklestove ) [ 38 ] to escape the cold. While within, he had three dreams, [ 39 ] and believed that a divine spirit revealed to him a new philosophy. However, it is speculated that what Descartes considered to be his second dream was actually an episode of exploding head syndrome . [ 40 ] Upon exiting, he had formulated analytic geometry and the idea of applying the mathematical method to philosophy. He concluded from these visions that the pursuit of science would prove to be, for him, the pursuit of true wisdom and a central part of his life's work. [ 41 ] [ 42 ] Descartes also saw very clearly that all truths were linked with one another, so that finding a fundamental truth and proceeding with logic would open the way to all science. Descartes arrived at this basic truth quite soon: his famous " I think, therefore I am ." [ 43 ]
In 1620, Descartes left the army. He visited Basilica della Santa Casa in Loreto, then visited various countries before returning to France, and during the next few years, he spent time in Paris. It was there that he composed his first essay on method: Regulae ad Directionem Ingenii ( Rules for the Direction of the Mind ). [ 43 ] He arrived in La Haye in 1623, selling all of his property to invest in bonds , which provided a comfortable income for the rest of his life. [ 44 ] [ 45 ] : 94 Descartes was present at the siege of La Rochelle by Cardinal Richelieu in 1627 as an observer. [ 45 ] : 128 There, he was interested in the physical properties of the great dike that Richelieu was building and studied mathematically everything he saw during the siege. He also met French mathematician Girard Desargues . [ 46 ] In the autumn of that year, in the residence of the papal nuncio Guidi di Bagno , where he came with Mersenne and many other scholars to listen to a lecture given by the alchemist, Nicolas de Villiers, Sieur de Chandoux, on the principles of a supposed new philosophy, [ 47 ] Cardinal Bérulle urged him to write an exposition of his new philosophy in some location beyond the reach of the Inquisition. [ 48 ]
Descartes returned to the Dutch Republic in 1628. [ 39 ] In April 1629, he joined the University of Franeker , studying under Adriaan Metius , either living with a Catholic family or renting the Sjaerdemaslot . The next year, under the name "Poitevin", he enrolled at Leiden University , which at the time was a Protestant University. [ 49 ] He studied both mathematics with Jacobus Golius , who confronted him with Pappus's hexagon theorem , and astronomy with Martin Hortensius . [ 50 ] In October 1630, he had a falling-out with Beeckman, whom he accused of plagiarizing some of his ideas. In Amsterdam, he had a relationship with a servant girl, Helena Jans van der Strom, with whom he had a daughter, Francine , who was born in 1635 in Deventer . She was baptized a Protestant [ 51 ] [ 52 ] and died of scarlet fever at the age of 5.
Unlike many moralists of the time, Descartes did not deprecate the passions but rather defended them; [ 53 ] he wept upon Francine's death in 1640. [ 54 ] According to a recent biography by Jason Porterfield, "Descartes said that he did not believe that one must refrain from tears to prove oneself a man." [ 55 ] Russell Shorto speculates that the experience of fatherhood and losing a child formed a turning point in Descartes's work, changing its focus from medicine to a quest for universal answers. [ 56 ]
Despite frequent moves, [ note 6 ] he wrote all of his major work during his 20-plus years in the Netherlands, initiating a revolution in mathematics and philosophy. [ note 7 ] In 1633, Galileo was condemned by the Italian Inquisition , and Descartes abandoned plans to publish Treatise on the World , his work of the previous four years. Nevertheless, in 1637, he published parts of this work in three essays: [ 57 ] "Les Météores" (The Meteors), " La Dioptrique " (Dioptrics) and La Géométrie ( Geometry ), preceded by an introduction, his famous Discours de la méthode ( Discourse on the Method ). [ 57 ] In it, Descartes lays out four rules of thought, meant to ensure that our knowledge rests upon a firm foundation: [ 58 ]
The first was never to accept anything for true which I did not know to be such; that is to say, carefully to avoid precipitancy and prejudice, and to comprise nothing more in my judgment than what was presented to my mind so clearly and distinctly as to exclude all ground of doubt.
In La Géométrie , Descartes exploited the discoveries he made with Pierre de Fermat . This later became known as Cartesian geometry . [ 59 ]
Descartes continued to publish works concerning both mathematics and philosophy for the rest of his life. In 1641, he published a metaphysics treatise, Meditationes de Prima Philosophia ( Meditations on First Philosophy ), written in Latin and thus addressed to the learned. It was followed in 1644 by Principia Philosophiae ( Principles of Philosophy ), a kind of synthesis of the Discourse on the Method and Meditations on First Philosophy . In 1643, Cartesian philosophy was condemned at the University of Utrecht , and Descartes was obliged to flee to the Hague, settling in Egmond-Binnen .
Between 1643 and 1649, Descartes lived at an inn in Egmond-Binnen with his partner. [ 60 ] Descartes became friendly with Anthony Studler van Zurck, lord of Bergen , and participated in the design of his mansion and estate. [ 61 ] [ 62 ] [ 63 ] He also met Dirck Rembrantsz van Nierop , a mathematician and surveyor . [ 64 ] He was so impressed by Van Nierop's knowledge that he even brought him to the attention of Constantijn Huygens and Frans van Schooten. [ 65 ]
Christia Mercer suggested that Descartes may have been influenced by Spanish author and Roman Catholic nun Teresa of Ávila , who, fifty years earlier, published The Interior Castle , concerning the role of philosophical reflection in intellectual growth. [ 66 ] [ 67 ]
Descartes began (through Alfonso Polloti, an Italian general in Dutch service) a six-year correspondence with Princess Elisabeth of Bohemia , devoted mainly to moral and psychological subjects. [ 68 ] Connected with this correspondence, in 1649 he published Les Passions de l'âme ( The Passions of the Soul ), which he dedicated to the Princess. A French translation of Principia Philosophiae , prepared by Abbot Claude Picot, was published in 1647. This edition was also dedicated to Princess Elisabeth. In the preface to the French edition , Descartes praised true philosophy as a means to attain wisdom. He identifies four ordinary sources to reach wisdom and finally says that there is a fifth, better and more secure, consisting in the search for first causes. [ 69 ]
By 1649, Descartes had become one of Europe's most famous philosophers and scientists. [ 57 ] That year, Queen Christina of Sweden invited him to her court to organize a new scientific academy and tutor her in his ideas about love. [ 70 ] Descartes accepted, and moved to the Swedish Empire in the middle of winter. [ 71 ] Christina was interested in and stimulated Descartes to publish The Passions of the Soul . [ 72 ]
He was a guest at the house of Pierre Chanut , living on Västerlånggatan , less than 500 meters from Castle Tre Kronor in Stockholm . There, Chanut and Descartes made observations with a Torricellian mercury barometer. [ 70 ] Challenging Blaise Pascal , Descartes took the first set of barometric readings in Stockholm to see if atmospheric pressure could be used in forecasting the weather. [ 73 ]
Descartes arranged to tutor Queen Christina after her birthday, three times a week at 5 am, in her cold and draughty castle. However, by 15 January 1650 the Queen had actually met with Descartes only four or five times. [ 70 ] It soon became clear they did not like each other; she did not care for his mechanical philosophy , nor did he share her interest in Ancient Greek language and literature . [ 70 ] On 1 February 1650, he contracted pneumonia, and he died on 11 February at Chanut's residence. [ 74 ]
"Yesterday morning about four o'clock a.m. has deceased here at the house of His Excellency Mr. Chanut, French ambassador, Mr. Descartes. As I have been informed, he had been ill for a few days with pleurisy. But as he did not want to take or use medicines, a hot fever appears to have arisen as well. Thereupon, he had himself bled three times in one day, but without operation of losing much blood. Her Majesty much bemoaned his decease, because he was such a learned man. He has been cast in wax. It was not his intention to die here, as he had resolved shortly before his death to return to Holland at the first occasion. Etc." [ 75 ]
The cause of death was pneumonia according to Chanut, but peripneumonia according to Christina's physician Johann van Wullen who was not allowed to bleed him. [ 76 ] (The winter seems to have been mild, [ 77 ] except for the second half of January which was harsh as described by Descartes himself; however, "this remark was probably intended to be as much Descartes's take on the intellectual climate as it was about the weather.") [ 72 ]
E. Pies has questioned this account, based on a letter by the Doctor van Wullen; however, Descartes had refused his treatment, and more arguments against its veracity have been raised since. [ 78 ] In a 2009 book, German philosopher Theodor Ebert argues that Descartes was poisoned by Jacques Viogué, a Catholic missionary who opposed his religious views. [ 79 ] [ 80 ] As evidence, Ebert suggests that Catherine Descartes , the niece of René Descartes, made a veiled reference to the act of poisoning when her uncle was administered "communion" two days before his death, in her Report on the Death of M. Descartes, the Philosopher (1693). [ 81 ]
His last words were reported to have been:
My soul, thou hast long been held captive. The hour has now come for thee to quit thy prison, to leave the trammels of this body. Suffer, then, this separation with joy and courage! [ 82 ]
As a Catholic [ 83 ] [ 84 ] [ 85 ] in a Protestant nation, he was interred in the churchyard of what was to become Adolf Fredrik Church in Stockholm, where mainly orphans had been buried. His manuscripts came into the possession of Claude Clerselier , Chanut's brother-in-law, and "a devout Catholic who has begun the process of turning Descartes into a saint by cutting, adding and publishing his letters selectively." [ 86 ] [ 87 ] : 137–154 In 1663, the Pope placed Descartes's works on the Index of Prohibited Books . In 1666, sixteen years after his death, his remains were taken to France and buried in Saint-Étienne-du-Mont . In 1671, Louis XIV prohibited all lectures in Cartesianism . Although the National Convention in 1792 had planned to transfer his remains to the Panthéon , he was reburied in the Abbey of Saint-Germain-des-Prés in 1819, missing a finger and the skull. [ note 8 ] His alleged skull is in the Musée de l'Homme in Paris, [ 88 ] but research published in 2020 suggests that it may be a forgery. The original skull was probably divided into pieces in Sweden and given to private collectors; one of those pieces arrived at the University of Lund in 1691, where it is still preserved. [ 89 ]
In his Discourse on the Method , he attempts to arrive at a fundamental set of principles that one can know as true without any doubt. To achieve this, he employs a method called hyperbolical/metaphysical doubt, also sometimes referred to as methodological skepticism or Cartesian doubt : he rejects any ideas that can be doubted and then re-establishes them in order to acquire a firm foundation for genuine knowledge. [ 90 ] Descartes built his ideas from scratch, as he does in the Meditations on First Philosophy . He relates this to architecture: the topsoil is taken away to create a new building or structure; Descartes calls his doubt the soil and the new knowledge the buildings. To Descartes, Aristotle's foundationalism is incomplete, and his own method of doubt enhances foundationalism. [ 91 ]
Initially, Descartes arrives at only a single first principle: he thinks. This is expressed in the Latin phrase in the Discourse on Method " Cogito, ergo sum " (English: "I think, therefore I am"). [ 92 ] Descartes concluded that if he doubted, then something or someone must be doing the doubting; therefore, the very fact that he doubted proved his existence. "The simple meaning of the phrase is that if one is skeptical of existence, that is in and of itself proof that he does exist." [ 93 ] These two first principles—I think and I exist—were later confirmed by Descartes's clear and distinct perception (delineated in his Third Meditation from The Meditations ): that he clearly and distinctly perceives these two principles, Descartes reasoned, ensures their indubitability.
Descartes concludes that he can be certain that he exists because he thinks. But in what form? He perceives his body through the use of the senses; however, these have previously been unreliable. So Descartes determines that the only indubitable knowledge is that he is a thinking thing . Thinking is what he does, and his power must come from his essence. Descartes defines "thought" ( cogitatio ) as "what happens in me such that I am immediately conscious of it, insofar as I am conscious of it". Thinking is thus every activity of a person of which the person is immediately conscious . [ 94 ] He gave reasons for thinking that waking thoughts are distinguishable from dreams , and that one's mind cannot have been "hijacked" by an evil demon placing an illusory external world before one's senses. [ 91 ]
And so something that I thought I was seeing with my eyes is grasped solely by the faculty of judgment which is in my mind. [ 95 ] : 109
In this manner, Descartes proceeds to construct a system of knowledge, discarding perception as unreliable and, instead, admitting only deduction as a method. [ 96 ]
Descartes, influenced by the automatons on display at the Château de Saint-Germain-en-Laye near Paris, investigated the connection between mind and body, and how they interact. [ 97 ] His main influences for dualism were theology and physics . [ 98 ] The theory on the dualism of mind and body is Descartes's signature doctrine and permeates other theories he advanced. Known as Cartesian dualism (or mind–body dualism), his theory on the separation between the mind and the body went on to influence subsequent Western philosophies. [ 99 ] In Meditations on First Philosophy , Descartes attempted to demonstrate the existence of God and the distinction between the human soul and the body. Humans are a union of mind and body; [ 100 ] thus Descartes's dualism embraced the idea that mind and body are distinct but closely joined. While many contemporary readers of Descartes found the distinction between mind and body difficult to grasp, he thought it was entirely straightforward. Descartes employed the concept of modes , which are the ways in which substances exist. In Principles of Philosophy , Descartes explained, "we can clearly perceive a substance apart from the mode which we say differs from it, whereas we cannot, conversely, understand the mode apart from the substance". To perceive a mode apart from its substance requires an intellectual abstraction, [ 101 ] which Descartes explained as follows:
The intellectual abstraction consists in my turning my thought away from one part of the contents of this richer idea the better to apply it to the other part with greater attention. Thus, when I consider a shape without thinking of the substance or the extension whose shape it is, I make a mental abstraction. [ 101 ]
According to Descartes, two substances are really distinct when each of them can exist apart from the other. Thus, Descartes reasoned that God is distinct from humans, and the body and mind of a human are also distinct from one another. [ 102 ] He argued that the great differences between body (an extended thing) and mind (an un-extended, immaterial thing) make the two ontologically distinct. According to Descartes's indivisibility argument, the mind is utterly indivisible: because "when I consider the mind, or myself in so far as I am merely a thinking thing, I am unable to distinguish any part within myself; I understand myself to be something quite single and complete." [ 103 ]
Moreover, in The Meditations , Descartes discusses a piece of wax and exposes the single most characteristic doctrine of Cartesian dualism: that the universe contained two radically different kinds of substances—the mind or soul defined as thinking , and the body defined as matter and unthinking. [ 104 ] The Aristotelian philosophy of Descartes's day held that the universe was inherently purposeful or teleological. Everything that happened, be it the motion of the stars or the growth of a tree , was supposedly explainable by a certain purpose, goal or end that worked its way out within nature. Aristotle called this the "final cause", and these final causes were indispensable for explaining the ways nature operated. Descartes's theory of dualism supports the distinction between traditional Aristotelian science and the new science of Kepler and Galileo, which denied the role of a divine power and "final causes" in its attempts to explain nature. Descartes's dualism provided the philosophical rationale for the latter by expelling the final cause from the physical universe (or res extensa ) in favor of the mind (or res cogitans ). Therefore, while Cartesian dualism paved the way for modern physics , it also held the door open for religious beliefs about the immortality of the soul . [ 105 ]
Descartes's dualism of mind and matter implied a concept of human beings. A human was, according to Descartes, a composite entity of mind and body. Descartes gave priority to the mind and argued that the mind could exist without the body, but the body could not exist without the mind. In The Meditations , Descartes even argues that while the mind is a substance, the body is composed only of "accidents". [ 106 ] But he did argue that mind and body are closely joined: [ 107 ]
Nature also teaches me, by the sensations of pain, hunger, thirst and so on, that I am not merely present in my body as a pilot in his ship, but that I am very closely joined and, as it were, intermingled with it, so that I and the body form a unit. If this were not so, I, who am nothing but a thinking thing, would not feel pain when the body was hurt, but would perceive the damage purely by the intellect, just as a sailor perceives by sight if anything in his ship is broken. [ 107 ]
Descartes's discussion on embodiment raised one of the most perplexing problems of his dualism philosophy: What exactly is the relationship of union between the mind and the body of a person? [ 107 ] Therefore, Cartesian dualism set the agenda for philosophical discussion of the mind–body problem for many years after Descartes's death. [ 108 ] Descartes was also a rationalist and believed in the power of innate ideas . [ 109 ] Descartes argued for the theory of innate knowledge, holding that all humans were born with knowledge through the higher power of God. It was this theory of innate knowledge that was later combated by philosopher John Locke (1632–1704), an empiricist. [ 110 ] Empiricism holds that all knowledge is acquired through experience.
In The Passions of the Soul , published in 1649, [ 111 ] Descartes discussed the common contemporary belief that the human body contained animal spirits. These animal spirits were believed to be light and roaming fluids circulating rapidly around the nervous system between the brain and the muscles. These animal spirits were believed to affect the human soul, or passions of the soul. Descartes distinguished six basic passions: wonder, love, hatred, desire, joy and sadness. All of these passions, he argued, represented different combinations of the original spirit, and influenced the soul to will or want certain actions. He argued, for example, that fear is a passion that moves the soul to generate a response in the body. In line with his dualist teachings on the separation between the soul and the body, he hypothesized that some part of the brain served as a connector between the soul and the body and singled out the pineal gland as connector. [ 112 ] Descartes argued that signals passed from the ear and the eye to the pineal gland, through animal spirits. Thus different motions in the gland cause various animal spirits. He argued that these motions in the pineal gland are based on God's will and that humans are supposed to want and like things that are useful to them. But he also argued that the animal spirits that moved around the body could distort the commands from the pineal gland, thus humans had to learn how to control their passions. [ 113 ]
Descartes advanced a theory on automatic bodily reactions to external events, which influenced 19th-century reflex theory. He argued that external motions, such as touch and sound, reach the endings of the nerves and affect the animal spirits. For example, heat from fire affects a spot on the skin and sets in motion a chain of reactions, with the animal spirits reaching the brain through the central nervous system, and in turn, animal spirits are sent back to the muscles to move the hand away from the fire. [ 113 ] Through this chain of reactions, the automatic reactions of the body do not require a thought process. [ 109 ]
Above all, he was among the first scientists who believed that the soul should be subject to scientific investigation. He challenged the views of his contemporaries that the soul was divine , and thus religious authorities regarded his books as dangerous. [ 114 ] Descartes's writings went on to form the basis for theories on emotions and how cognitive evaluations were translated into affective processes. Descartes believed the brain resembled a working machine and that mathematics and mechanics could explain complicated processes in it. [ 115 ] In the 20th century, Alan Turing advanced computer science based on mathematical biology as inspired by Descartes. His theories on reflexes also served as the foundation for advanced physiological theories , more than 200 years after his death. The physiologist Ivan Pavlov was a great admirer of Descartes. [ 116 ]
Descartes denied that animals had reason or intelligence. [ 117 ] He argued that animals did not lack sensations or perceptions, but these could be explained mechanistically. [ 118 ] Whereas humans had a soul, or mind, and were able to feel pain and anxiety , animals by virtue of not having a soul could not feel pain or anxiety. If animals showed signs of distress then this was to protect the body from damage, but the innate state needed for them to suffer was absent. [ 119 ] Although Descartes's views were not universally accepted, they became prominent in Europe and North America, allowing humans to treat animals with impunity. The view that animals were quite separate from humanity and merely machines allowed for the maltreatment of animals , and was sanctioned in law and societal norms until the middle of the 19th century. [ 120 ] : 180–214 The publications of Charles Darwin would eventually erode the Cartesian view of animals. [ 121 ] : 37 Darwin argued that the continuity between humans and other species suggested the possibility of animal suffering. [ 122 ] : 177
For Descartes, ethics was a science, the highest and most perfect of them. Like the rest of the sciences, ethics had its roots in metaphysics. [ 96 ] In this way, he argues for the existence of God, investigates the place of man in nature, formulates the theory of mind–body dualism, and defends free will . However, as he was a convinced rationalist, Descartes clearly states that reason is sufficient in the search for the goods that individuals should seek, and virtue consists in the correct reasoning that should guide their actions. Nevertheless, the quality of this reasoning depends on knowledge and mental condition. For this reason, he said that a complete moral philosophy should include the study of the body. [ 123 ] : 189 He discussed this subject in the correspondence with Princess Elisabeth of Bohemia , and as a result wrote his work The Passions of the Soul , that contains a study of the psychosomatic processes and reactions in man, with an emphasis on emotions or passions. [ 124 ] His works about human passion and emotion would be the basis for the philosophy of his followers (see Cartesianism ), and would have a lasting impact on ideas concerning what literature and art should be, specifically how it should invoke emotion. [ 125 ]
Descartes and Zeno both identified the sovereign good with virtue. For Epicurus , the sovereign good was pleasure, and Descartes says that, in fact, this is not in contradiction with Zeno's teaching, because virtue produces a spiritual pleasure that is better than bodily pleasure. Regarding Aristotle 's opinion that happiness (eudaimonia) depends on both moral virtue and also on the goods of fortune such as a moderate degree of wealth, Descartes does not deny that fortune contributes to happiness, but remarks that it is in great proportion outside one's own control, whereas one's mind is under one's complete control. [ 124 ] The moral writings of Descartes came in the last part of his life, but earlier, in his Discourse on the Method , he adopted three maxims to be able to act while he put all his ideas into doubt. Those maxims are known as his "Provisional Morals" .
In the third and fifth Meditation , Descartes offers proofs of a benevolent God (the trademark argument and the ontological argument respectively). Descartes has faith in the account of reality his senses provide him, since he believed that God provided him with a working mind and sensory system and does not desire to deceive him. From this supposition, however, Descartes finally establishes the possibility of acquiring knowledge about the world based on deduction and perception. Regarding epistemology , therefore, Descartes can be said to have contributed such ideas as a conception of foundationalism and the possibility that reason is the only reliable method of attaining knowledge. Descartes, however, was very much aware that experimentation was necessary to verify and validate theories. [ 96 ]
Descartes invokes his causal adequacy principle [ 126 ] to support his trademark argument for the existence of God, quoting Lucretius in defence: "Ex nihilo nihil fit" , meaning " Nothing comes from nothing ". [ 127 ] Oxford Reference summarises the argument as follows: "that our idea of perfection is related to its perfect origin (God), just as a stamp or trademark is left in an article of workmanship by its maker." [ 128 ] In the fifth Meditation, Descartes presents a version of the ontological argument which is founded on the possibility of thinking the "idea of a being that is supremely perfect and infinite," and suggests that "of all the ideas that are in me, the idea that I have of God is the most true, the most clear and distinct." [ 129 ]
Descartes considered himself to be a devout Catholic, [ 83 ] [ 84 ] [ 85 ] and one of the purposes of the Meditations was to defend the Catholic faith. His attempt to ground theological beliefs on reason encountered intense opposition in his time. Pascal regarded Descartes as a rationalist and mechanist, and accused him of deism : "I cannot forgive Descartes; in all his philosophy, Descartes did his best to dispense with God. But Descartes could not avoid prodding God to set the world in motion with a snap of his lordly fingers; after that, he had no more use for God," while a powerful contemporary, Martin Schoock , accused him of atheist beliefs, though Descartes had provided an explicit critique of atheism in his Meditations . The Catholic Church prohibited his books in 1663. [ 130 ] [ 131 ] [ 132 ] : 274
Descartes also wrote a response to external world skepticism . Through this method of skepticism, he does not doubt for the sake of doubting but in order to achieve concrete and reliable information; in other words, certainty. He argues that sensory perceptions come to him involuntarily, and are not willed by him. They are external to his senses, and according to Descartes, this is evidence of the existence of something outside of his mind, and thus, an external world. Descartes goes on to argue that the things in the external world are material by arguing that God would not deceive him as to the ideas that are being transmitted, and that God has given him the "propensity" to believe that such ideas are caused by material things. Descartes also believes a substance is something that does not need any assistance to function or exist. Descartes further explains how only God can be a true "substance". But minds are substances too, meaning they need only God in order to function. The mind is a thinking substance, and the means of its thinking stem from ideas. [ 133 ]
Descartes steered clear of theological questions, restricting his attention to showing that there is no incompatibility between his metaphysics and theological orthodoxy. He avoided trying to demonstrate theological dogmas metaphysically. When challenged that he had not established the immortality of the soul merely in showing that the soul and the body are distinct substances, he replied, "I do not take it upon myself to try to use the power of human reason to settle any of those matters which depend on the free will of God." [ 134 ]
Descartes "invented the convention of representing unknowns in equations by x , y , and z , and knowns by a , b , and c ". He also "pioneered the standard notation" that uses superscripts to show the powers or exponents; for example, the 2 used in x 2 to indicate x squared. [ 135 ] [ 136 ] : 19
One of Descartes's most enduring legacies was his development, together with Pierre de Fermat, of Cartesian or analytic geometry , which uses algebra to describe geometry; the Cartesian coordinate system is named after him. [ 137 ] He was the first to assign a fundamental place to algebra in the system of knowledge, using it as a method to automate or mechanize reasoning, particularly about abstract, unknown quantities. [ 138 ] : 91–114 Both Descartes and Fermat were, in their versions of analytic geometry, inspired by the works of the Ancient Greek mathematicians Pappus of Alexandria and Apollonius of Perga , especially by their techniques of analysis. [ 139 ] Crucial for their work was also the symbolic algebra of François Viète . European mathematicians had previously viewed geometry as a more fundamental form of mathematics, serving as the foundation of algebra. Algebraic rules were given geometric proofs by mathematicians such as Pacioli , Cardano , Tartaglia and Ferrari . Equations of degree higher than the third were regarded as unreal, because a three-dimensional form, such as a cube, occupied the largest dimension of reality. Descartes professed that the abstract quantity a² could represent length as well as an area. This was in opposition to the teachings of mathematicians such as Viète, who insisted that a second power must represent an area. Although Descartes did not pursue the subject, he preceded Gottfried Wilhelm Leibniz in envisioning a more general science of algebra or "universal mathematics", as a precursor to symbolic logic , that could encompass logical principles and methods symbolically, and mechanize general reasoning. [ 140 ] : 280–281
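A small modern instance of the algebra-for-geometry trade described above: finding where a line meets a circle purely by manipulating equations in Cartesian coordinates. The particular curve and line are made up for illustration.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

circle = sp.Eq(x**2 + y**2, 1)   # the unit circle
line = sp.Eq(y, x)               # the line y = x

# The geometric question "where do they intersect?" becomes pure algebra:
print(sp.solve([circle, line], [x, y]))
# [(-sqrt(2)/2, -sqrt(2)/2), (sqrt(2)/2, sqrt(2)/2)]
```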
It is commonly held that Descartes had the most influence of anyone on the young Isaac Newton . Descartes's influence came not directly from his original French edition of La Géométrie , however, but rather from Frans van Schooten 's expanded second Latin edition of the work. [ 141 ] : 100 Newton continued Descartes's work on cubic equations , which freed the subject from the fetters of the Greek perspectives. The most important concept was his very modern treatment of single variables. [ 142 ] : 109–129
Descartes's work provided the basis for the calculus developed by Leibniz and Newton , who applied the infinitesimal calculus to the tangent line problem , thus permitting the evolution of that branch of modern mathematics. [ 143 ] His rule of signs is also a commonly used method to determine the number of positive and negative roots of a polynomial.
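A short sketch of the rule of signs in action: count sign changes in the coefficient sequence for the positive-root bound, and in the coefficients of p(-x) for the negative-root bound (the rule gives the exact count or overcounts by an even number). The example polynomial is chosen arbitrarily.

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence, ignoring zeros."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# p(x) = x^3 - 3x^2 - x + 3 = (x - 1)(x + 1)(x - 3), highest degree first.
p = [1, -3, -1, 3]
n = len(p) - 1
p_neg = [c * (-1)**(n - i) for i, c in enumerate(p)]  # coefficients of p(-x)

print(sign_changes(p))      # 2 -> at most 2 positive roots (here exactly 2: 1, 3)
print(sign_changes(p_neg))  # 1 -> exactly 1 negative root (here: -1)
```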
Descartes is often regarded as the first thinker to emphasize the use of reason to develop the natural sciences . [ 144 ] For him, philosophy was a thinking system that embodied all knowledge, as he related in a letter to a French translator: [ 96 ]
Thus, all Philosophy is like a tree, of which Metaphysics is the root, Physics the trunk, and all the other sciences the branches that grow out of this trunk, which are reduced to three principal ones, namely, Medicine, Mechanics, and Ethics. By the science of Morals, I understand the highest and most perfect which, presupposing an entire knowledge of the other sciences, is the last degree of wisdom.
The beginning of Descartes's interest in physics is credited to the amateur scientist and mathematician Isaac Beeckman , whom he met in 1618, and who was at the forefront of a new school of thought known as mechanical philosophy . With this foundation of reasoning, Descartes formulated many of his theories on mechanical and geometric physics . [ 145 ] It is said that they met when both were looking at a placard that was set up in the Breda marketplace, detailing a mathematical problem to be solved. Descartes asked Beeckman to translate the problem from Dutch to French. [ 146 ] In their following meetings Beeckman interested Descartes in his corpuscularian approach to mechanical theory, and convinced him to devote his studies to a mathematical approach to nature. [ 147 ] [ 146 ] In 1628, Beeckman also introduced him to many of Galileo 's ideas. [ 147 ] Together, they worked on free fall , catenaries , conic sections , and fluid statics . Both believed that it was necessary to create a method that thoroughly linked mathematics and physics. [ 43 ]
Although the concept of work (in physics) was not formally used until 1826, similar concepts existed before then. [ 148 ] In 1637, Descartes wrote: [ 149 ]
Lifting 100 lb one foot twice over is the same as lifting 200 lb one foot, or 100 lb two feet.
In Principles of Philosophy ( Principia Philosophiae ) from 1644 Descartes outlined his views on the universe. In it he describes his three laws of motion . [ 150 ] ( Newton's own laws of motion would later be modeled on Descartes's exposition.) [ 145 ] Descartes defined "quantity of motion" ( Latin : quantitas motus ) as the product of size and speed, [ 151 ] and claimed that the total quantity of motion in the universe is conserved. [ 151 ]
If x is twice the size of y, and is moving half as fast, then there's the same amount of motion in each.
[God] created matter, along with its motion ... merely by letting things run their course, he preserves the same amount of motion ... as he put there in the beginning.
Descartes had discovered an early form of the law of conservation of momentum . [ 152 ] He envisioned quantity of motion as pertaining to motion in a straight line, as opposed to perfect circular motion, as Galileo had envisioned it. [ 145 ] [ 152 ] Descartes's discovery should not be seen as the modern law of conservation of momentum, since it had no concept of mass as distinct from weight or size, and since he believed that it is speed rather than velocity that is conserved. [ 153 ] [ 154 ] [ 155 ]
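The distinction matters in practice, as a head-on collision shows: Descartes's unsigned size-times-speed tally differs from the signed mass-times-velocity that is actually conserved. The bodies below are invented for illustration.

```python
def cartesian_motion(size, speed):
    """Descartes's 'quantity of motion': size times (unsigned) speed."""
    return size * abs(speed)

def momentum(mass, velocity):
    """The modern conserved quantity: mass times (signed) velocity."""
    return mass * velocity

# The example from the text: x twice the size of y, moving half as fast.
print(cartesian_motion(2.0, 0.5) == cartesian_motion(1.0, 1.0))   # True

# Two equal bodies approaching head-on: Descartes's tally is 2, but the
# signed momentum -- the quantity conserved through the collision -- is 0.
print(cartesian_motion(1.0, +1.0) + cartesian_motion(1.0, -1.0))  # 2.0
print(momentum(1.0, +1.0) + momentum(1.0, -1.0))                  # 0.0
```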
Descartes's vortex theory of planetary motion was later rejected by Newton in favor of his law of universal gravitation , and most of the second book of Newton's Principia is devoted to his counterargument.
Descartes proposed a theory to explain magnetism and the observations in De Magnete by William Gilbert . Descartes considered that 'effluvia' were emitted by a magnet; the effluvia rarefied the air, creating pressure differences and thus forces. [ 156 ]
In 1644, Descartes provided one of the earliest microscopic theories of glass . He considered that glass was formed from particles frozen in motion after being heated. He also provided one of the earliest understandings of the role of stress and its relief by annealing . [ 157 ]
Descartes also made contributions to the field of optics . He showed by using geometric construction and the law of refraction (also known as Descartes's law in France, or more commonly Snell's law elsewhere) that the angular radius of a rainbow is 42 degrees (i.e., the angle subtended at the eye by the edge of the rainbow and the ray passing from the sun through the rainbow's centre is 42°). [ 158 ] He also independently discovered the law of reflection , and his essay on optics was the first published mention of this law. [ 159 ]
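The 42-degree figure can be checked numerically from the law of refraction alone: for one internal reflection in a spherical drop, a ray entering at incidence angle i (with sin i = n sin r) is deviated in total by D(i) = 2i - 4r + pi, and the rainbow sits where D is stationary. The water refractive index n = 1.333 used below is an assumed standard value.

```python
import numpy as np

n = 1.333                                # refractive index of water (assumed)
i = np.linspace(0.0, np.pi / 2, 100001)  # angle of incidence
r = np.arcsin(np.sin(i) / n)             # Snell's law: sin(i) = n*sin(r)
D = 2 * i - 4 * r + np.pi                # deviation after one internal bounce

rainbow_radius = np.pi - D.min()         # angular radius of the primary bow
print(np.degrees(rainbow_radius))        # ~42 degrees
```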
Within Discourse on the Method , there is an appendix in which Descartes discusses his theories on meteorology, known as Les Météores . He first proposed the idea that the elements were made up of small particles that join together imperfectly, thus leaving small spaces in between. These spaces were then filled with smaller, much quicker "subtile matter". [ 160 ] These particles were different based on what element they constructed; for example, Descartes believed that particles of water were "like little eels, which, though they join and twist around each other, do not, for all that, ever knot or hook together in such a way that they cannot easily be separated." [ 160 ] In contrast, the particles that made up the more solid material were constructed in a way that generated irregular shapes. The size of the particle also mattered: if the particle was smaller, not only was it faster and constantly moving, it was also more easily agitated by the larger particles, which were slow but had more force. The different qualities, such as combinations and shapes, gave rise to different secondary qualities of materials, such as temperature. [ 161 ] This first idea is the basis for the rest of Descartes's theory on meteorology.
While rejecting most of Aristotle 's theories on meteorology, he still kept some of the terminology that Aristotle used, such as vapors and exhalations. These "vapors" would be drawn into the sky by the sun from "terrestrial substances" and would generate wind. [ 160 ] Descartes also theorized that falling clouds would displace the air below them, also generating wind. Falling clouds could also generate thunder. He theorized that when a cloud rests above another cloud and the air around the top cloud is hot, it condenses the vapor around the top cloud and causes the particles to fall. When the particles falling from the top cloud collided with the bottom cloud's particles, it would create thunder. [ 161 ] He compared his theory on thunder to his theory on avalanches: Descartes believed that the booming sound that avalanches created was due to snow that was heated, and therefore heavier, falling onto the snow that was below it. [ 161 ] This theory was supported by experience: "It follows that one can understand why it thunders more rarely in winter than in summer; for then not enough heat reaches the highest clouds, in order to break them up." [ 161 ]
Another theory that Descartes had was on the production of lightning. Descartes believed that lightning was caused by exhalations trapped between the two colliding clouds. He believed that in order to make these exhalations viable to produce lightning, they had to be made "fine and inflammable" by hot and dry weather. [ 161 ] Whenever the clouds would collide, it would cause them to ignite, creating lightning; if the cloud above was heavier than the bottom cloud, it would also produce thunder.
Descartes also believed that clouds were made up of drops of water and ice, and that rain would fall whenever the air could no longer support them. Rain would fall as snow if the air was not warm enough to melt the raindrops, and as hail when melted cloud drops were refrozen by cold air. [ 160 ] [ 161 ]
Descartes did not use mathematics or instruments (as none suitable were available at the time) to back up his theories on meteorology, and instead used qualitative reasoning to deduce his hypotheses. [ 160 ]
Descartes has often been dubbed the father of modern Western philosophy , the thinker whose approaches have profoundly changed the course of Western philosophy and set the basis for modernity . [ 19 ] [ 162 ] The first two of his Meditations on First Philosophy , those that formulate the famous methodic doubt, represent the portion of Descartes's writings that most influenced modern thinking. [ 163 ] It has been argued that Descartes himself did not realize the extent of this revolutionary move. [ 164 ] In shifting the debate from "what is true" to "of what can I be certain?", Descartes arguably shifted the authoritative guarantor of truth from God to humanity (even though Descartes himself claimed he received his visions from God)—while the traditional concept of "truth" implies an external authority, "certainty" instead relies on the judgment of the individual.
In an anthropocentric revolution, the human being is now raised to the level of a subject, an agent, an emancipated being equipped with autonomous reason. This was a revolutionary step that established the basis of modernity, the repercussions of which are still being felt: the emancipation of humanity from Christian revelational truth and Church doctrine ; humanity making its own law and taking its own stand. [ 165 ] [ 166 ] [ 167 ] In modernity, the guarantor of truth is not God anymore but human beings, each of whom is a "self-conscious shaper and guarantor" of their own reality. [ 168 ] [ 169 ] In that way, each person is turned into a reasoning adult, a subject and agent, [ 168 ] as opposed to a child obedient to God. This change in perspective was characteristic of the shift from the Christian medieval period to the modern period, a shift that had been anticipated in other fields, and which was now being formulated in the field of philosophy by Descartes. [ 168 ] [ 170 ]
This anthropocentric perspective of Descartes's work, establishing human reason as autonomous, provided the basis for the Enlightenment 's emancipation from God and the Church. According to Martin Heidegger , the perspective of Descartes's work also provided the basis for all subsequent anthropology . [ 171 ] Descartes's philosophical revolution is sometimes said to have sparked modern anthropocentrism and subjectivism . [ 19 ] [ 172 ] [ 173 ] [ 174 ]
In commercial terms, The Discourse appeared during Descartes's lifetime in a single edition of 500 copies, 200 of which were set aside for the author. Sharing a similar fate was the only French edition of The Meditations , which had not managed to sell out by the time of Descartes's death. A concomitant Latin edition of the latter was, however, eagerly sought out by Europe's scholarly community and proved a commercial success for Descartes. [ 175 ] : xliii–xliv
Although Descartes was well known in academic circles towards the end of his life, the teaching of his works in schools was controversial. Henri de Roy ( Henricus Regius , 1598–1679), Professor of Medicine at the University of Utrecht, was condemned by the Rector of the university, Gijsbert Voet (Voetius), for teaching Descartes's physics. [ 176 ]
According to philosophy professor John Cottingham , Descartes's Meditations on First Philosophy is considered to be "one of the key texts of Western philosophy". Cottingham said that the Meditations is the "most widely studied of all Descartes' writings". [ 177 ] : 50
According to Anthony Gottlieb , a former senior editor of The Economist and the author of The Dream of Reason and The Dream of Enlightenment , one of the reasons Descartes and Thomas Hobbes continue to be debated in the second decade of the twenty-first century is that they still have something to say to us that remains relevant on questions such as, "What does the advance of science entail for our understanding of ourselves and our ideas of God?" and "How is government to deal with religious diversity?" [ 178 ]
In her 2018 interview with Tyler Cowen, Agnes Callard described Descartes's thought experiment in the Meditations , where he encouraged a complete, systematic doubting of everything that you believe, to "see what you come to". She said, "What Descartes comes to is a kind of real truth that he can build upon inside of his own mind." [ 179 ] She said that Hamlet 's monologues—"meditations on the nature of life and emotion"—were similar to Descartes's thought experiment. Hamlet and Descartes were "apart from the world", as if they were "trapped" in their own heads. [ 179 ] Cowen asked Callard whether Descartes actually found any truths through his thought experiment or whether it was just "an earlier version of the contemporary argument that we're living in a simulation, where the evil demon is the simulation rather than Bayesian reasoning ". Callard agreed that this argument can be traced to Descartes, who had said that he had refuted it. She clarified that in Descartes's reasoning, you do "end up back in the mind of God"—in a "universe God has created" that is the "real world": "The whole question is about being connected to reality as opposed to being a figment. If you're living in the world God created, God can create real things. So you're living in a real world." [ 179 ]
Descartes's membership in the Rosicrucians is debated. [ 180 ]
The initials of his name have been linked to the R.C. acronym widely used by Rosicrucians. [ 181 ] Furthermore, in 1619 Descartes moved to Ulm, which was a renowned international center of the Rosicrucian movement. [ 181 ] During his journey through Germany, he met Johannes Faulhaber , who had previously expressed his personal commitment to joining the brotherhood. [ 182 ]
Descartes dedicated the work titled The Mathematical Treasure Trove of Polybius, Citizen of the World to "learned men throughout the world and especially to the distinguished B.R.C. (Brothers of the Rosy Cross) in Germany". The work was not completed and its publication is uncertain. [ 183 ]
In January 2010, a previously unknown letter from Descartes, dated 27 May 1641, was found by the Dutch philosopher Erik-Jan Bos when browsing through Google . Bos found the letter mentioned in a summary of autographs kept by Haverford College in Haverford, Pennsylvania . The college was unaware that the letter had never been published. This was the third letter by Descartes found in the last 25 years. [ 187 ] [ 188 ]
The Descartes most familiar to twentieth-century philosophers is the Descartes of the first two Meditations , someone preoccupied with hyperbolic doubt of the material world and the certainty of knowledge of the self that emerges from the famous cogito argument.
For up to Descartes...a particular sub-iectum ...lies at the foundation of its own fixed qualities and changing circumstances. The superiority of a sub-iectum ...arises out of the claim of man to a...self-supported, unshakeable foundation of truth, in the sense of certainty. Why and how does this claim acquire its decisive authority? The claim originates in that emancipation of man in which he frees himself from obligation to Christian revelational truth and Church doctrine to a legislating for himself that takes its stand upon itself.
With the interpretation of man as subiectum , Descartes creates the metaphysical presupposition for future anthropology of every kind and tendency.
... the kind of anthropocentric subjectivism which has emerged from the Cartesian revolution.
When, with the beginning of modern times, religious belief was becoming more and more externalized as a lifeless convention, men of intellect were lifted by a new belief: their great belief in an autonomous philosophy and science. [...] in philosophy, the Meditations were epoch-making in a quite unique sense, and precisely because of their going back to the pure ego cogito . Descartes's work, in fact, inaugurates an entirely new kind of philosophy. Changing its total style, philosophy takes a radical turn: from naïve objectivism to transcendental subjectivism. | https://en.wikipedia.org/wiki/René_Descartes |
René Marcelin (12 June 1885 – 24 September 1914) was a French physical chemist, who died in World War I at a young age. He was a pupil of Jean Baptiste Perrin at the Faculty of Sciences in Paris and performed theoretical studies in the field of chemical kinetics . [ 1 ] [ 2 ]
René Marcelin developed the first theoretical treatment of the rate of chemical reactions that goes beyond a simple empirical description. He showed that the expression of the rate constant given by the Arrhenius equation had to be composed of two terms: in addition to the activation energy term, he considered that there had to be an activation entropy term. In 1910, Marcelin introduced the concept of the standard Gibbs energy of activation. In 1912, he treated the progress of a chemical reaction as the motion of a point in phase space . Using Gibbs' statistical-mechanical methods, he obtained an expression similar to the one he had obtained earlier from thermodynamic considerations. In 1913, he was also the first to use the term potential energy surface . [ 3 ] [ 4 ] He theorized that the progress of a chemical reaction could be described as the motion of a point on a potential energy surface whose coordinates are atomic momenta and distances.
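The two-term structure Marcelin identified is easiest to see in modern transition-state notation, which postdates his work; the following sketch uses the standard symbols for the Gibbs energy, enthalpy and entropy of activation with a generic prefactor A, and is not a reproduction of Marcelin's original formulas:

% Since \Delta G^{\ddagger} = \Delta H^{\ddagger} - T\Delta S^{\ddagger},
% the rate constant splits into an entropic and an energetic factor:
k = A\,e^{-\Delta G^{\ddagger}/RT}
  = A\,e^{\Delta S^{\ddagger}/R}\;e^{-\Delta H^{\ddagger}/RT}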
In his PhD thesis, [ 5 ] which he defended in 1914, he developed a general theory of absolute reaction rates, in which he used concepts of both thermodynamic [ 6 ] and kinetic [ 7 ] origin, describing activation-dependent phenomena as the movement of representative points in space. [ 8 ] His 1915 publication, [ 9 ] which appeared shortly after his death, describes a chemical reaction between N atomic species in a 2N-dimensional phase space, using statistical mechanics to formally obtain the pre-exponential factor before the exponential term containing the Gibbs free energy of activation. The foundations of his theoretical treatment were correct, but Marcelin was not able to evaluate the remaining integrals in his expressions, as the solution of these equations was not achievable at that time.
Marcelin also developed the dividing surface approach to study rates of transport in Hamiltonian systems. These results were published after his death by his brother André in 1918. [ 10 ] | https://en.wikipedia.org/wiki/René_Marcelin |
René John Soetens (born 7 September 1948) was a member of the House of Commons of Canada from 1988 to 1993. His background was in business and sales.
Soetens was elected to the Town of Ajax Council in 1980 and re-elected in 1982 and 1985.
He was elected in the 1988 federal election in the Ontario electoral district for the Progressive Conservative party . He served in the 34th Canadian Parliament but lost to Dan McTeague of the Liberal Party in the 1993 federal election .
Soetens also made an unsuccessful bid to return to Parliament in the 2004 federal election in the Ajax—Pickering electoral district .
He is president and owner of Con-Test, a national certification company specializing in the testing of controlled environments found in research facilities, hospitals, pharmaceutical drug manufacturing and associated cleanrooms.
| https://en.wikipedia.org/wiki/René_Soetens |
In chemistry , a reoxidant is a reagent that regenerates a catalyst by oxidation. In some cases reoxidants are used stoichiometrically; in other cases only small amounts are required.
Reoxidants are commonly used in reactions catalyzed by osmium tetroxide , which is a primary oxidant converting alkenes to glycols . The spent catalyst is an osmium(VI) complex, which reacts with a reoxidant to regenerate Os(VIII). Typical reoxidants for this application include pyridine-N-oxide , ferricyanide/water, and N-methylmorpholine N-oxide . [ 1 ]
As catalysts for the polymerization of dienes, vanadium complexes are activated with alkylaluminium chlorides, e.g. diethylaluminium chloride . The organoaluminium reagent installs alkyl groups on the V(III) precatalyst. During catalysis or during catalyst activation , some vanadium(III) is reduced to inactive vanadium(II) derivatives. To correct for this reduction, reoxidants such as methyl trichloroacetate are added. The alkyl chloride functions as a source of a chlorine radical, which adds to the inactive V(II) species. In some cases, the reoxidants are called rejuvenators . [ 2 ]
(2,2,6,6-Tetramethylpiperidin-1-yl)oxyl, commonly known as TEMPO , is an expensive but effective oxidant for converting alcohols to carbonyls. With iodine as the reoxidant, TEMPO-H is oxidized back to TEMPO, which then functions catalytically: [ 3 ] | https://en.wikipedia.org/wiki/Reoxidant |
RepRap (a contraction of replicating rapid prototyper ) is a project to develop low-cost 3D printers that can print most of their own components. As open designs , all of the designs produced by the project are released under a free software license , the GNU General Public License . [ 1 ]
Due to the ability of these machines to make some of their own parts, authors envisioned the possibility of cheap RepRap units, enabling the manufacture of complex products without the need for extensive industrial infrastructure. [ 2 ] [ 3 ] [ 4 ] They intended for the RepRap to demonstrate evolution in this process as well as for it to increase in number exponentially. [ 5 ] [ 6 ] A preliminary study claimed that using RepRaps to print common products results in economic savings. [ 7 ]
The RepRap project started in England in 2005 as a University of Bath initiative, but it is now made up of hundreds of collaborators worldwide. [ 5 ]
RepRap was founded in 2005 by Adrian Bowyer , a Senior Lecturer in mechanical engineering at the University of Bath in England. Funding was obtained from the Engineering and Physical Sciences Research Council .
On 13 September 2006, the RepRap 0.2 prototype printed the first part identical to one of its own, which was then substituted for the original part, which had been made by a commercial 3D printer. On 9 February 2008, RepRap 1.0 "Darwin" made at least one instance of over half its rapid-prototyped parts. On 14 April 2008, RepRap made an end-user item: a clamp to hold an iPod to the dashboard of a Ford Fiesta car. By September that year, at least 100 copies had been produced in various countries. [ 8 ] On 29 May 2008, Darwin achieved self-replication by making a complete copy of all its rapid-prototyped parts [ 9 ] (which represent 48% of all the parts, excluding fasteners). A couple of hours later the "child" machine had made its first part: a timing-belt tensioner.
In April 2009, electronic circuit boards were produced automatically with a RepRap, using an automated control system and a swappable head system capable of printing both plastic and conductive solder. On 2 October 2009, the second generation design, called Mendel, printed its first part. Mendel's shape resembles a triangular prism rather than a cube. Mendel was completed in October 2009. On 27 January 2010, the Foresight Institute announced the "Kartik M. Gada Humanitarian Innovation Prize" for the design and construction of an improved RepRap. [ 10 ]
On 31 August 2010, the third generation design was named Huxley. It was a miniature of Mendel, with 30% of the original print volume. Within two years, RepRap and RepStrap building and use were widespread in the technology, gadget and engineering communities. [ 11 ]
In 2012, Rostock, the first successful delta design, introduced a radically different geometry. Later iterations used OpenBeams and wires (typically Dyneema or Spectra fishing line) instead of belts, reflecting some of the latest trends in RepRap design. [ citation needed ]
In early January 2016, RepRapPro (short for "RepRap Professional", and one commercial arm of the RepRap project in the UK) announced that it would cease trading on 15 January 2016. The reason given was congestion of the market for low-cost 3D printers and the inability to expand in that market. RepRapPro China continues to operate. [ 12 ]
As the project was designed by Bowyer to encourage evolution, many variations have been created. [ 13 ] [ 14 ] As an open source project, designers are free to make modifications and substitutions, but they must allow any of their potential improvements to be reused by others.
There are many RepRap printer designs including:
RepRap was conceived as a complete replication system rather than simply a piece of hardware. To this end the system includes computer-aided design (CAD) in the form of a 3D modeling system and computer-aided manufacturing (CAM) software and drivers that convert RepRap users' designs into a set of instructions to the RepRap to create physical objects.
Initially, two CAM tool chains were developed for RepRap. The first, called "RepRap Host", was written in Java by lead RepRap developer Adrian Bowyer. The second, "Skeinforge", [ 15 ] was written by Enrique Perez. Both are complete systems for translating 3D computer models into G-code , the machine language that commands the printer.
Later, other programs like Slic3r and Cura were created. Recently, the Franklin firmware was created to allow RepRap printers to be used for other purposes such as milling and fluid handling. [ 16 ]
Free and open-source 3-D modeling programs like Blender , OpenSCAD , and FreeCAD are preferred in the RepRap community, but almost any CAD or 3D modeling program can be used with the RepRap, as long as it can produce STL files (Slic3r also supports .obj and .amf files). Thus, content creators can use whatever tools they are familiar with, whether commercial CAD programs such as SolidWorks , Autodesk AutoCAD , Autodesk Inventor , Tinkercad , and SketchUp , or libre software .
RepRaps print objects from ABS , Polylactic acid (PLA), Nylon (though possibly not with all extruders), HDPE , TPE and similar thermoplastics .
The mechanical properties of RepRap-printed PLA and ABS have been tested and are equivalent to the tensile strengths of parts made by proprietary printers. [ 17 ]
Unlike with most commercial machines, RepRap users are encouraged to experiment with materials and methods, and to publish their results. Methods for printing novel materials (such as ceramics) have been developed this way. In addition, several RecycleBots have been designed and fabricated to convert waste plastic, such as shampoo containers and milk jugs, into inexpensive RepRap filament. [ 18 ] There is some evidence that using this approach of distributed recycling is better for the environment [ 19 ] [ 20 ] [ 21 ] and can be useful for creating " fair trade filament". [ 22 ]
In addition, 3D printing products at the point of consumption has also been shown to be better for the environment. [ 23 ]
The RepRap project has identified polyvinyl alcohol (PVA) as a potentially suitable support material to complement its printing process, although massive overhangs can be made by extruding thin layers of the primary printing media as support (these are mechanically removed afterwards).
Printing electronics is a major goal of the RepRap project so that it can print its own circuit boards. Several methods have been proposed:
Using a MIG welder as a print head, a RepRap deltabot stage can be used to print metals such as steel . [ 26 ] [ 27 ]
The RepRap concept can also be applied to a milling machine [ 28 ] and to laser welding . [ 29 ]
Although the aim of the project is for RepRap to eventually be able to construct many of its own mechanical components autonomously using fairly low-level resources, several components such as sensors, stepper motors and microcontrollers cannot yet be made using the RepRap's 3D printing technology and so have to be produced independently. The plan is to approach 100% replication over a series of versions. For example, from the onset of the project, the RepRap team has explored a variety of approaches to integrating electrically conductive media into the product. This would allow inclusion of connective wiring , printed circuit boards , and possibly motors in RepRapped products. Variations in the nature of the extruded, electrically conductive media could produce electrical components with functions beyond pure conductive traces, similar to the 1940s sprayed-circuit process Electronic Circuit Making Equipment (ECME) by John Sargrove . A related approach is printed electronics . Another non-replicable component is the threaded rods used for linear motion. A current research area is the use of replicated Sarrus linkages to replace them. [ 30 ]
The "Core team" of the project [ 31 ] has included:
The stated goal of the RepRap project is to produce a pure self-replicating device not for its own sake, but rather to put in the hands of individuals anywhere on the planet, for a minimal outlay of capital, a desktop manufacturing system that would enable the individual to manufacture many of the artifacts used in everyday life. [ 5 ] From a theoretical viewpoint, the project aims to prove the hypothesis that " rapid prototyping and direct writing technologies are sufficiently versatile to allow them to be used to make a von Neumann universal constructor ". [ 34 ]
RepRap technology has great potential in educational applications, according to some scholars. [ 35 ] [ 36 ] [ 37 ] RepRaps have already been used for an educational mobile robotics platform. [ 38 ] Some authors have claimed that RepRaps offer an unprecedented "revolution" in STEM education. [ 39 ] The evidence comes from both the low cost of rapid prototyping by students, and the fabrication of low-cost high-quality scientific equipment from open hardware designs forming open-source labs . [ 3 ] [ 4 ] | https://en.wikipedia.org/wiki/RepRap |
The RepRap Morgan is an open-source fused deposition modeling 3D printer . The Morgan is part of the RepRap project and has an unusual SCARA arm design. [ 1 ] The first Morgan printer was designed by Quentin Harley, a South African engineer (working for Siemens at the time) at the House4Hack Makerspace in Centurion . [ 2 ] The SCARA arm design was developed due to the lack of access to components of existing 3D printer designs in South Africa and their relatively high cost. [ 2 ] In 2013 the Morgan won the HumanityPlus Uplift Personal Manufacturing Prize and third place in the Gauteng Accelerator Program. [ 1 ] [ 3 ] [ 4 ]
The Morgan name comes from the RepRap convention of naming printers after famous deceased biologists. The Morgan printer was named after Thomas Hunt Morgan , who worked on the genome of the common fruit fly with his wife, Lilian Vaughan Morgan . Their names were used as the development codenames for the first two generations of Morgan 3D printers.
Morgan printers are now manufactured full-time by the inventor in a small workshop factory in the House4Hack makerspace.
There are four versions of the RepRap Morgan, the Morgan v1 (codenamed Thomas), Morgan Pro, Morgan Mega and Morgan Pro 2 (codenamed Lilian). [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/RepRap_Morgan |
The RepRap Ormerod is an open-source fused deposition modeling 3D printer and is part of the RepRap project . Named after the English entomologist Eleanor Anne Ormerod , it was designed by RepRapPro. [ 1 ] [ 2 ] There have been two versions of the Ormerod: the Ormerod 1, released in December 2013, and the Ormerod 2, released in December 2014. [ 1 ] [ 2 ] [ 3 ]
The RepRap Ormerod has a 200 mm × 200 mm × 200 mm build volume and uses a Bowden extruder ; it also has a micro SD card slot and USB and Ethernet connections, allowing it to be connected to a network. [ 1 ] [ 2 ] The printer was praised for its simplicity of construction and its low cost. [ 2 ] | https://en.wikipedia.org/wiki/RepRap_Ormerod |
The RepRap Snappy is an open-source fused deposition modeling 3D printer and part of the RepRap project ; it is described as the most self-replicating 3D printer in the world. [ 1 ] [ 2 ]
The RepRap Snappy is designed to address the core goal of the RepRap project of creating a ' general-purpose self-replicating manufacturing machine '. [ 2 ] [ 3 ] The RepRap Snappy is able to create 73% of its own parts by volume, with a design that eliminates as many non-3D-printed parts as possible, including belts and bearings, which are replaced with a rack and pinion system. [ 1 ] [ 4 ] The name Snappy comes from the snap-fit connectors used to join the small printed parts into larger pieces; this both cuts down on the use of non-3D-printed parts and means a smaller build volume is needed on the machine producing the parts. [ 1 ] [ 4 ] The only non-self-replicating parts on the printer are the motors, electronics, a glass build plate and one 686 bearing; the 3D-printed parts take around 150 hours to create. [ 4 ] The RepRap Snappy received an honourable mention in the Uplift Prize Grand Personal Manufacturing Prize. [ 5 ] | https://en.wikipedia.org/wiki/RepRap_Snappy |
A repair kit or service kit is a set of items used to repair a device, commonly comprising both tools and spare parts. Many kits are designed for vehicles, such as cars, boats, airplanes, motorbikes, and bicycles, and may be kept with the vehicle in order to make on-the-spot repairs. Some are considered essential safety equipment, and may be included in survival kits . In the military, personnel crossing large water bodies in aircraft may be equipped with a raft and raft repair kit. Other kits, such as those for watch repair or specific engine components, are used by professionals. Depending on the type, a repair kit may be included when buying a product, or may be purchased separately.
Road vehicles often include basic tools and spare parts which commonly fail. A bicycle repair kit, for example, normally contains tools as well as patches and glue to repair a punctured tire. Other kits that include patches and glue are used to fix holes in fabric items such as inflatable boats and tents.
Watercraft normally contain both safety equipment and repair kits as part of their emergency equipment.
Some automobiles, such as the Mitsubishi i-MiEV , have an optional repair kit available. The Mercedes-Benz OM604 engine has an optional repair kit to help replace seals. The 1905 Gale Model A came with a repair kit.
In aerospace, kits have been developed for repairing the thermal protection tiles on the Space Shuttle and to fix space suits. [ 1 ]
Professionals who repair and maintain electronic equipment may have a kit containing a soldering iron, wire, and components such as transistors and resistors.
In medicine, a repair kit consisting of a plug and plastic mesh may be used during inguinal hernia surgery .
A particular trade may use a repair kit as part of normal work, such as in watch repair and at automobile repair shops.
A wide variety of tools and replacement parts may be included in a repair kit. Some common examples include a screwdriver , a spare tire , jumper cables , and duct tape .
A tire mobility kit is a package of equipment and accessories used for repairing vehicle tires; such kits are also called tire repair kits or tire inflators. The idea of a mobility kit is to provide a small set of tools that are compact and easy to use, taking up much less room in a trunk than even a temporary spare tire. An air compressor with a connected hose and a container of sealant are generally included in the package. These kits typically weigh only 1–2 pounds, so they are easy to carry. [ 2 ] | https://en.wikipedia.org/wiki/Repair_kit |
In computer science , repeat-accumulate codes ( RA codes ) are a low complexity class of error-correcting codes . They were devised so that their ensemble weight distributions are easy to derive. RA codes were introduced by Divsalar et al.
In an RA code, an information block of length $N$ is repeated $q$ times, scrambled by an interleaver of size $qN$, and then encoded by a rate-1 accumulator . The accumulator can be viewed as a truncated rate-1 recursive convolutional encoder with transfer function $1/(1+D)$, but Divsalar et al. prefer to think of it as a block code whose input block $(z_1,\ldots,z_n)$ and output block $(x_1,\ldots,x_n)$ are related by the formula $x_1 = z_1$ and $x_i = x_{i-1} + z_i$ for $i > 1$. The encoding time for RA codes is linear and their rate is $1/q$. They are nonsystematic.
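A minimal Python sketch of the encoding pipeline just described. The seeded random permutation stands in for the interleaver purely for illustration; a real design would use a carefully chosen permutation shared with the decoder, which is not shown:

import random

def ra_encode(bits, q, seed=0):
    # Repeat each information bit q times.
    repeated = [b for b in bits for _ in range(q)]
    # Interleaver of size qN: here an arbitrary seeded permutation.
    perm = list(range(len(repeated)))
    random.Random(seed).shuffle(perm)
    scrambled = [repeated[i] for i in perm]
    # Rate-1 accumulator: x_1 = z_1, x_i = x_{i-1} + z_i (mod 2).
    out, acc = [], 0
    for z in scrambled:
        acc ^= z
        out.append(acc)
    return out

print(ra_encode([1, 0, 1], q=3))  # 9 coded bits for 3 information bits: rate 1/3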
Irregular repeat accumulate (IRA) codes build on the ideas of RA codes. IRA replaces the outer code in the RA code with a low-density generator matrix code. [ 1 ] An IRA code first repeats different information bits different numbers of times, and then accumulates subsets of these repeated bits to generate parity bits. The irregular degree profile on the information nodes, together with the degree profile on the check nodes, can be designed using density evolution.
Systematic IRA codes are considered a form of LDPC code. Litigation over whether the DVB-S2 LDPC code is a form of IRA code is ongoing. [ 2 ] US patents 7,116,710; 7,421,032; 7,916,781; and 8,284,833 are at issue. [ citation needed ] | https://en.wikipedia.org/wiki/Repeat-accumulate_code |
A repeat unit or repeating unit , or mer , is a part of a polymer whose repetition would produce the complete polymer chain (except for the end groups ) by linking the repeat units together successively along the chain, like the beads of a necklace. [ 1 ] [ 2 ]
constitutional unit : an atom or group of atoms (with pendant atoms or groups, if any) comprising a part of the essential structure of a macromolecule, an oligomer molecule, a block or a chain. [ 3 ]
constitutional repeating unit : the smallest constitutional unit whose repetition constitutes a regular macromolecule, a regular oligomer molecule, a regular block or a regular chain. [ 4 ]
A repeat unit is sometimes called a mer (or mer unit) in polymer chemistry . "Mer" originates from the Greek word meros , which means "a part"; the word polymer derives from this, meaning "many mers". The mer is not the same thing as a monomer —a mer is a repeating unit within a larger molecule, whereas a monomer is an actual molecule that exists independently, either prior to polymerization or after decomposition. [ 5 ]
One of the simplest repeat units is that of the addition polymer polyvinyl chloride , -[CH 2 -CHCl] n -, whose repeat unit is -[CH 2 -CHCl]-.
In this case the repeat unit has the same atoms as the monomer vinyl chloride CH 2 =CHCl. When the polymer is formed, the C=C double bond in the monomer is replaced by a C-C single bond in the polymer repeat unit, which links by two new bonds to adjoining repeat units.
In condensation polymers (see examples below), the repeat unit contains fewer atoms than the monomer or monomers from which it is formed.
The subscript "n" denotes the degree of polymerisation , that is, the number of units linked together. The molecular mass of the repeat unit, M R , is simply the sum of the atomic masses of the atoms within the repeat unit. The molecular mass of the chain is just the product nM R . Other than monodisperse polymers, there is normally a molar mass distribution caused by chains of different length.
In copolymers there are two or more types of repeat unit, which may be arranged in alternation, or at random, or in other more complex patterns.
Polyethylene may be considered either as -[CH 2 -CH 2 -] n - with a repeat unit of -[CH 2 -CH 2 ]-, or as [-CH 2 -] n -, with a repeat unit of -[CH 2 ]-. Chemists tend to consider the repeat unit as -[CH 2 -CH 2 ]- since this polymer is made from the monomer ethylene (CH 2 =CH 2 ).
More complex repeat units can occur in vinyl polymers -[CH 2 -CHR] n -, if one hydrogen in the ethylene repeat unit is substituted by a larger fragment R. Polypropylene -[CH 2 -CH(CH 3 )] n - has the repeat unit -[CH 2 -CH(CH 3 )]. Polystyrene has a chain where the substituent R is a phenyl group (C 6 H 5 ), corresponding to a benzene ring minus one hydrogen: -[CH 2 -CH(C 6 H 5 )] n -, so the repeat unit is -[CH 2 -CH(C 6 H 5 )]-.
In many condensation polymers , the repeat unit contains two structural units related to the comonomers which have been polymerized. For example, in polyethylene terephthalate (PET or "polyester"), the repeat unit is -CO-C 6 H 4 -CO-O-CH 2 -CH 2 -O-. The polymer is formed by the condensation reaction of the two monomers terephthalic acid (HOOC-C 6 H 4 -COOH) and ethylene glycol (HO-CH 2 -CH 2 -OH), or their chemical derivatives . The condensation involves loss of water, as an H is lost from each HO- group in the glycol, and an OH from each HOOC- group in the acid. The two structural units in the polymer are then considered to be -CO-C 6 H 4 -CO- and -O-CH 2 -CH 2 -O-. | https://en.wikipedia.org/wiki/Repeat_unit |
In telecommunications , a repeater is an electronic device that receives a signal and retransmits it. Repeaters are used to extend transmissions so that the signal can cover longer distances or be received on the other side of an obstruction. Some types of repeaters broadcast an identical signal, but alter its method of transmission, for example, on another frequency or baud rate .
There are several different types of repeaters; a telephone repeater is an amplifier in a telephone line , an optical repeater is an optoelectronic circuit that amplifies the light beam in an optical fiber cable ; and a radio repeater is a radio receiver and transmitter that retransmits a radio signal.
A broadcast relay station is a repeater used in broadcast radio and television .
When an information-bearing signal passes through a communication channel , it is progressively degraded due to loss of power. For example, when a telephone call passes through a wire telephone line , some of the power in the electric current which represents the audio signal is dissipated as heat in the resistance of the copper wire. The longer the wire, the more power is lost, and the smaller the amplitude of the signal at the far end. So with a long enough wire the call will not be audible at the other end. Similarly, the greater the distance between a radio station and a receiver , the weaker the radio signal , and the poorer the reception. A repeater is an electronic device in a communication channel that increases the power of a signal and retransmits it, allowing it to travel further. Since it amplifies the signal, it requires a source of electric power .
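To make the power-loss picture concrete, here is a small Python calculation of the signal power remaining along a lossy line; the 2 dB/km attenuation figure is illustrative only, not data for any particular cable:

def power_after(p_in_mw, loss_db_per_km, km):
    # Exponential attenuation: each dB of loss scales power by 10**(-1/10).
    return p_in_mw * 10 ** (-loss_db_per_km * km / 10)

# 1 mW launched into a line losing 2 dB/km (illustrative numbers).
for km in (0, 10, 20, 30):
    print(f"{km:2d} km: {power_after(1.0, 2.0, km):.6f} mW")
# A repeater placed before the signal falls below the receiver's
# sensitivity amplifies it back up, extending the usable distance.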
The term "repeater" originated with telegraphy in the 19th century, and referred to an electromechanical device (a relay ) used to regenerate telegraph signals. [ 1 ] [ 2 ]
Use of the term has continued in telephony and data communications .
In computer networking , because repeaters work with the actual physical signal, and do not attempt to interpret the data being transmitted, they operate on the physical layer , the first layer of the OSI model ; a multiport Ethernet repeater is usually called a hub .
This is used to increase the range of telephone signals in a telephone line.
They are most frequently used in trunklines that carry long distance calls. In an analog telephone line consisting of a pair of wires, it consists of an amplifier circuit made of transistors which use power from a DC current source to increase the power of the alternating current audio signal on the line. Since the telephone is a duplex (bidirectional) communication system, the wire pair carries two audio signals , one going in each direction. So telephone repeaters have to be bilateral, amplifying the signal in both directions without causing feedback, which complicates their design considerably. Telephone repeaters were the first type of repeater and were some of the first applications of amplification. The development of telephone repeaters between 1900 and 1915 made long-distance phone service possible. Now, most telecommunications cables are fiber-optic cables which use optical repeaters (below).
Before the invention of electronic amplifiers, mechanically coupled carbon microphones were used as amplifiers in telephone repeaters. After the turn of the 20th century it was found that negative resistance mercury lamps could amplify, and they were used. [ 3 ] The invention of audion tube repeaters around 1916 made transcontinental telephony practical. In the 1930s vacuum tube repeaters using hybrid coils became commonplace, allowing the use of thinner wires. In the 1950s negative impedance gain devices were more popular, and a transistorized version called the E6 repeater was the final major type used in the Bell System before the low cost of digital transmission made all voiceband repeaters obsolete. Frequency frogging repeaters were commonplace in frequency-division multiplexing systems from the middle to late 20th century.
This is a type of telephone repeater used in underwater submarine telecommunications cables .
This is used to increase the range of signals in a fiber-optic cable . Digital information travels through a fiber-optic cable in the form of short pulses of light. The light is made up of particles called photons , which can be absorbed or scattered in the fiber. An optical communications repeater usually consists of a phototransistor which converts the light pulses to an electrical signal, an amplifier to increase the power of the signal, an electronic filter which reshapes the pulses, and a laser which converts the electrical signal to light again and sends it out the other fiber. However, optical amplifiers are being developed for repeaters to amplify the light itself, without the need to convert it to an electrical signal first.
This is used to extend the range of coverage of a radio signal. The history of radio relay repeaters began in 1898 with a publication by Johann Mattausch in the Austrian journal Zeitschrift für Electrotechnik (v. 16, pp. 35–36). [ 2 ] [ 4 ] But his proposed "Translator" was primitive and not suitable for use. The first relay system with radio repeaters that really functioned was invented in 1899 by Emile Guarini-Foresio. [ 2 ]
A radio repeater usually consists of a radio receiver connected to a radio transmitter. The received signal is amplified and retransmitted, often on another frequency, to provide coverage beyond the obstruction. Usage of a duplexer can allow the repeater to use one antenna for both receive and transmit at the same time.
Radio repeaters improve communication coverage in systems using frequencies that typically have line-of-sight propagation . Without a repeater, these systems are limited in range by the curvature of the Earth and the blocking effect of terrain or high buildings. A repeater on a hilltop or tall building can allow stations that are out of each other's line-of-sight range to communicate reliably. [ 5 ]
Radio repeaters may also allow translation from one set of radio frequencies to another, for example to allow two different public service agencies to interoperate (say, police and fire services of a city, or neighboring police departments). They may provide links to the public switched telephone network as well, [ 6 ] [ 7 ] or satellite network ( BGAN , INMARSAT , MSAT ) as an alternative path from source to the destination. [ 8 ]
Typically a repeater station listens on one frequency, A, and transmits on a second, B. All mobile stations listen for signals on channel B and transmit on channel A. The difference between the two frequencies may be relatively small compared to the frequency of operation, say 1%. Often the repeater station will use the same antenna for transmission and reception; highly selective filters called "duplexers" separate the faint incoming received signal from the billions of times more powerful outbound transmitted signal. Sometimes separate transmitting and receiving locations are used, connected by a wire line or a radio link. While the repeater station is designed for simultaneous reception and transmission, mobile units need not be equipped with the bulky and costly duplexers, as they only transmit or receive at any time.
Mobile units in a repeater system may be provided with a "talkaround" channel that allows direct mobile-to-mobile operation on a single channel. This may be used if out of reach of the repeater system, or for communications not requiring the attention of all mobiles. The "talkaround" channel may be the repeater output frequency; the repeater will not retransmit any signals on its output frequency. [ 9 ]
The designer of an engineered radio communication system will analyze the coverage area desired and select repeater locations, elevations, antennas, operating frequencies and power levels to permit a predictable level of reliable communication over the designed coverage area.
Repeaters can be divided into two types depending on the type of data they handle:
This type is used in channels that transmit data in the form of an analog signal in which the voltage or current is proportional to the amplitude of the signal, as in an audio signal. They are also used in trunklines that transmit multiple signals using frequency division multiplexing (FDM). Analog repeaters are composed of a linear amplifier, and may include electronic filters to compensate for frequency and phase distortion in the line.
The digital repeater is used in channels that transmit data by binary digital signals , in which the data is in the form of pulses with only two possible values, representing the binary digits 1 and 0. A digital repeater amplifies the signal, and may also retime, resynchronize, and reshape the pulses. A repeater that performs the retiming or resynchronizing functions may be called a regenerator .
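A toy Python illustration of the reshaping step of a regenerator: noisy received levels are thresholded back to clean binary values (retiming and resynchronization are omitted for brevity):

def regenerate(samples, threshold=0.5):
    # Reshape noisy binary pulse samples into clean 0/1 levels.
    return [1 if s > threshold else 0 for s in samples]

# Attenuated, noisy received levels for the bit pattern 0 1 1 0 0 1.
noisy = [0.10, 0.90, 0.75, 0.20, -0.05, 0.55]
print(regenerate(noisy))  # [0, 1, 1, 0, 0, 1]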
In coding theory , the repetition code is one of the most basic linear error-correcting codes . In order to transmit a message over a noisy channel that may corrupt the transmission in a few places, the idea of the repetition code is to just repeat the message several times. The hope is that the channel corrupts only a minority of these repetitions. This way the receiver will notice that a transmission error occurred since the received data stream is not the repetition of a single message, and moreover, the receiver can recover the original message by looking at the received message in the data stream that occurs most often.
Because of their poor error-correction performance coupled with their low code rate (ratio between useful information symbols and actual transmitted symbols), other error-correcting codes are preferred in most cases. The chief attraction of the repetition code is its ease of implementation.
In the case of a binary repetition code, there exist two code words, all ones and all zeros, which have a length of $n$. Therefore, the minimum Hamming distance of the code equals its length $n$. This gives the repetition code an error-correcting capacity of $\tfrac{n-1}{2}$ (i.e. it will correct up to $\tfrac{n-1}{2}$ errors in any code word).
If the length of a binary repetition code is odd, then it is a perfect code . [ 1 ] The binary repetition code of length $n$ is equivalent to the $(n,1)$- Hamming code . An $(n,1)$ BCH code is also a repetition code.
Consider a binary repetition code of length 3. The user wants to transmit the information bits 101 . The encoding maps each bit either to the all-ones or the all-zeros code word, so we get 111 000 111 , which will be transmitted.
Suppose three errors corrupt the transmitted bits, so that the received sequence is 111 010 100 . Decoding is usually done by a simple majority decision for each code word. This leads to 100 as the decoded information bits, because the first and second code words each contain at most one error, so the majority of their bits are correct. But in the third code word two bits are corrupted, which results in an erroneous information bit, since two errors exceed the error-correcting capacity of one error for length 3.
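A minimal Python sketch of this encode/majority-decode scheme, reproducing the example above:

def encode(bits, n=3):
    # Repetition encoding: each bit becomes a run of n copies.
    return [b for b in bits for _ in range(n)]

def decode(received, n=3):
    # Majority vote over each block of n received bits.
    return [1 if sum(received[i:i + n]) > n // 2 else 0
            for i in range(0, len(received), n)]

sent = encode([1, 0, 1])                # [1,1,1, 0,0,0, 1,1,1]
received = [1, 1, 1, 0, 1, 0, 1, 0, 0]  # the three errors from the example
print(decode(received))                 # [1, 0, 0] -- the third bit is wrong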
Despite their poor performance as stand-alone codes, use in Turbo code -like iteratively decoded concatenated coding schemes, such as repeat-accumulate (RA) and accumulate-repeat-accumulate (ARA) codes, allows for surprisingly good error correction performance.
Repetition codes are one of the few known codes whose code rate can be automatically adjusted to varying channel capacity , by sending more or less parity information as required to overcome the channel noise, and it is the only such code known for non- erasure channels . Practical adaptive codes for erasure channels have been invented only recently, and are known as fountain codes .
Some UARTs , such as the ones used in the FlexRay protocol, use a majority filter to ignore brief noise spikes. This spike-rejection filter can be seen as a kind of repetition decoder. | https://en.wikipedia.org/wiki/Repetition_code |
In surveying , the repetition method is used to improve the precision and accuracy of measurements of horizontal angles. The same angle is measured multiple times, with the survey instrument rotated so that systematic errors tend to cancel. The arithmetic mean of these observations gives the true value of the angle. The precision of the measurement can exceed the least count of the instrument used.
The repetition method is used when high accuracy is required. For rough or approximate survey work, the ordinary method of measuring horizontal angles is used as it is less time consuming. [ 1 ] | https://en.wikipedia.org/wiki/Repetition_method |
Repiping means replacing the pipes in a building, oil or gas well, or centrifuge . [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Repiping |
ReplayGain is a proposed technical standard published by David Robinson in 2001 to measure and normalize the perceived loudness of audio in computer audio formats such as MP3 and Ogg Vorbis . It allows media players to normalize loudness for individual tracks or albums. This avoids the common problem of having to manually adjust volume levels between tracks when playing audio files from albums that have been mastered at different loudness levels.
Although this de facto standard is now formally known as ReplayGain, [ 1 ] it was originally known as Replay Gain and is sometimes abbreviated RG .
ReplayGain is supported in a large number of media software and portable devices .
ReplayGain works by first performing a psychoacoustic analysis of an entire audio track or album to measure peak level and perceived loudness. Equal-loudness contours are used to compensate for frequency effects, and statistical analysis is used to account for effects related to time. The difference between the measured perceived loudness and the desired target loudness is calculated; this is considered the ideal replay gain value. Typically, the replay gain and peak level values are then stored as metadata in the audio file. ReplayGain-capable audio players use the replay gain metadata to automatically attenuate or amplify the signal on a per-track or per-album basis so that tracks or albums play at a similar loudness level. The peak level metadata can be used to prevent gain adjustments from inducing clipping in the playback device. [ 2 ]
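The psychoacoustic analysis itself (equal-loudness filtering and statistical processing) is beyond a short example, but the playback side is simple. Here is a Python sketch of how a player might turn the stored metadata into a linear scale factor with clipping prevention; the function name and the numbers are illustrative, not taken from any particular implementation:

def playback_scale(track_gain_db, peak, prevent_clip=True):
    # Convert the stored ReplayGain value (dB) to a linear scale factor.
    scale = 10 ** (track_gain_db / 20)
    # If the scaled peak would exceed full scale, cap the gain instead.
    if prevent_clip and peak * scale > 1.0:
        scale = 1.0 / peak
    return scale

# A quiet track: +6.5 dB suggested gain, but its peak is 0.98 of full
# scale, so clipping prevention limits the boost.
print(playback_scale(6.5, 0.98))  # ~1.02 instead of ~2.11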
The original ReplayGain proposal specified an 8- byte field in the header of any file. Most implementations now use tags for ReplayGain information. FLAC and Ogg Vorbis use the REPLAYGAIN_* Vorbis comment fields. MP3 files usually use ID3v2 . Other formats such as AAC and WMA use their native tag formats with a specially formatted tag entry listing the track's replay gain and peak loudness.
ReplayGain utilities usually add metadata to the audio files without altering the original audio data. Alternatively, a tool can amplify or attenuate the data itself and save the result to another, gain-adjusted audio file; this is not perfectly reversible in most cases. Some lossy audio formats, such as MP3, are structured in a way that they encode the volume of each compressed frame in a stream, and tools such as MP3Gain take advantage of this for directly applying the gain adjustment to MP3 files, adding undo information so that the process is reversible.
The target loudness is specified as the loudness of a stereo pink noise signal played back at 89 dB sound pressure level or −14 dB relative to full scale . [ 3 ] This is based on SMPTE recommendation RP 200:2002, which specifies a similar method for calibrating playback levels in movie theaters using a reference level 6 dB lower (83 dB SPL, −20 dBFS). [ note 1 ]
ReplayGain analysis can be performed on individual tracks so that all tracks will be of equal volume on playback. Analysis can also be performed on a per-album basis. In album-gain analysis an additional peak-value and gain-value, which will be shared by the whole album, is calculated. Using the album-gain values during playback will preserve the volume differences among tracks on an album.
On playback, listeners may decide if they want all tracks to sound equally loud or if they want all albums to sound equally loud with different tracks having different loudness. In album-gain mode, when album-gain data is missing, players should use track-gain data instead. | https://en.wikipedia.org/wiki/ReplayGain |
Replica cluster move in condensed matter physics refers to a family of non-local cluster algorithms used to simulate spin glasses . [ 1 ] [ 2 ] [ 3 ] It is an extension of the Swendsen-Wang algorithm in that it generates non-trivial spin clusters informed by the interaction states on two (or more) replicas instead of just one. It is different from the replica exchange method (or parallel tempering), as it performs a non-local update on a fraction of the sites between two replicas at the same temperature, while parallel tempering directly exchanges all the spins between two replicas at different temperatures. However, the two are often used together to achieve state-of-the-art efficiency in simulating spin-glass models. [ 1 ]
The Chayes-Machta-Redner (CMR) representation is a graphical representation of the Ising spin glass [ 2 ] which extends the standard FK representation . It is based on the observation that the total Hamiltonian of two independent Ising replicas α and β,
$$H = -\sum_{\langle ij\rangle} J_{ij}\left(\sigma_i^{\alpha}\sigma_j^{\alpha} + \sigma_i^{\beta}\sigma_j^{\beta}\right),$$
can be written as the Hamiltonian of a 4-state clock model . [ 4 ] To see this, we define the following mapping
$$(\sigma^{\alpha},\sigma^{\beta})\to\theta:\quad \{(+1,+1),\,(+1,-1),\,(-1,-1),\,(-1,+1)\}\mapsto \{0,\,\tfrac{\pi}{2},\,\pi,\,\tfrac{3\pi}{2}\},$$
where $\theta$ is the orientation of the 4-state clock; [ 5 ] then the total Hamiltonian can be represented as
$$H = -2\sum_{\langle ij\rangle} J_{ij}\cos(\theta_j - \theta_i).$$
In the graphical representation of this model, there are two types of bonds that can be open , referred to as blue and red . [ 2 ] To generate the bonds on the lattice, the following rules are imposed:
Under these rules, it can be checked that a cycle of open bonds can only contain an even number of red bonds. [ 6 ] [ 7 ] A cluster formed with blue bonds is referred to as a blue cluster , and a super-cluster formed with both blue and red bonds is referred to as a grey cluster .
Once the clusters are generated, there are two types of non-local updates that can be made independently to the clock states in the clusters (and thus to the spin states in both replicas). First, for every blue cluster, we can flip (i.e. rotate by $180^{\circ}$) the clock states with some arbitrary probability. Following this, for every grey cluster (blue clusters connected with red bonds), we can rotate all the clock states simultaneously by a random angle.
It can be shown that both updates are consistent with the bond-formation rules, and satisfy detailed balance . [ 2 ] Therefore, an algorithm based on this CMR representation will be correct when used in conjunction with other ergodic algorithms. However, the algorithm is not necessarily efficient, as a giant grey cluster will tend to span the entire lattice at sufficiently low temperatures (e.g. even at paramagnetic phases of spin-glass models).
The Houdayer cluster move is a simpler cluster algorithm based on a site percolation process on sites with negative spin overlap. It was introduced by Jerome Houdayer in 2001. [ 8 ] For two independent Ising replicas, we can define the spin overlap as
$$q_i = \sigma_i^{\alpha}\sigma_i^{\beta},$$
and a cluster is formed by randomly selecting a site and percolating through the adjacent sites with $q = -1$ (with a percolation probability of 1) until the maximal cluster is formed. The spins in the cluster are then exchanged between the two replicas. It can be shown that the exchange update is isoenergetic , meaning that the total energy is conserved in the update. This gives an acceptance ratio of 1 as calculated from the Metropolis-Hastings rule; in other words, the update is rejection-free.
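A minimal Python sketch of a single Houdayer move on an arbitrary graph. This is an illustration only: the lattice, couplings, and the surrounding Monte Carlo sweeps are assumed to be handled elsewhere, and all names are hypothetical:

import random
from collections import deque

def houdayer_move(s_a, s_b, neighbors, rng=random):
    # s_a, s_b: lists of +1/-1 spins for the two replicas.
    # neighbors: adjacency list; neighbors[i] gives the sites adjacent to i.
    seeds = [i for i in range(len(s_a)) if s_a[i] * s_b[i] == -1]
    if not seeds:
        return set()  # overlap is +1 everywhere; nothing to do
    # Grow the q = -1 cluster from a random seed with percolation probability 1.
    start = rng.choice(seeds)
    cluster, frontier = {start}, deque([start])
    while frontier:
        i = frontier.popleft()
        for j in neighbors[i]:
            if j not in cluster and s_a[j] * s_b[j] == -1:
                cluster.add(j)
                frontier.append(j)
    # Exchange the cluster's spins between the replicas
    # (isoenergetic, hence rejection-free).
    for i in cluster:
        s_a[i], s_b[i] = s_b[i], s_a[i]
    return cluster

# Tiny example: a 4-site ring with one negative-overlap site.
nbrs = {0: (1, 3), 1: (0, 2), 2: (1, 3), 3: (2, 0)}
a, b = [1, 1, -1, 1], [1, -1, -1, 1]
print(houdayer_move(a, b, nbrs), a, b)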
The efficiency of this algorithm is highly sensitive to the site percolation threshold of the underlying lattice. If the percolation threshold is too small, then a giant cluster will likely span the entire lattice, resulting in the trivial update of exchanging nearly all the spins between the replicas. This is why the original algorithm only performs well in low-dimensional settings [ 8 ] [ 9 ] (where the site percolation threshold is sufficiently high). To extend this algorithm efficiently to higher dimensions, one has to perform certain algorithmic interventions.
For instance, one can restrict the cluster moves to low-temperature replicas, where one expects only a small number of negative-overlap sites to appear [ 1 ] (so that the algorithm does not percolate supercritically). In addition, one can perform a global spin-flip in one of the two replicas when the number of negative-overlap sites exceeds half the lattice size, in order to further suppress the percolation process.
The Jorg cluster move [ 10 ] is another way to reduce the sizes of the Houdayer clusters. In each Houdayer cluster, the algorithm forms open bonds with probability $1 - e^{-4\beta|J_{ij}|}$, similar to the Swendsen-Wang algorithm . This forms sub-clusters that are smaller than the Houdayer clusters, and the spins in these sub-clusters can then be exchanged between replicas in the same fashion as in a Houdayer cluster move. | https://en.wikipedia.org/wiki/Replica_cluster_move |
Replica plating is a microbiological technique in which one or more secondary Petri plates containing different solid ( agar -based) selective growth media (lacking nutrients or containing chemical growth inhibitors such as antibiotics ) are inoculated with the same colonies of microorganisms from a primary plate (or master dish), reproducing the original spatial pattern of colonies. The technique involves pressing a velveteen -covered disk onto the colonies of the primary plate and then using it to imprint the secondary plates with the cells picked up by the fabric. Generally, large numbers of colonies (roughly 30–300) are replica plated, owing to the difficulty of streaking each one out individually onto a separate plate.
The purpose of replica plating is to be able to compare the master plate and any secondary plates, typically to screen for a desired phenotype . For example, when a colony that was present on the primary plate (or master dish), fails to appear on a secondary plate, it shows that the colony was sensitive to a substance on that particular secondary plate. Common screenable phenotypes include auxotrophy and antibiotic resistance .
Replica plating is especially useful for " negative selection ". However, it is more correct to refer to "negative screening" instead of using the term 'selection'. For example, if one wanted to select colonies that were sensitive to ampicillin , the primary plate could be replica plated on a secondary Amp + agar plate. The sensitive colonies on the secondary plate would die but the colonies could still be deduced from the primary plate since the two have the same spatial patterns from ampicillin resistant colonies. The sensitive colonies could then be picked off from the primary plate. Frequently the last plate will be non-selective. In the figure, a nonselective plate will be replica plated after the Amp+ plate to confirm that the absence of growth on the selective plate is due to the selection itself and not a problem with transferring cells. If one sees growth on the third (nonselective) plate but not the second one, the selective agent is responsible for the lack of growth. If the non-selective plate shows no growth, one cannot say whether viable cells were transferred at all, and no conclusions can be made about the presence or absence of growth on selective media. This is particularly useful if there are questions about the age or viability of the cells on the original plate.
By increasing the variety of secondary plates with different selective growth media , it is possible to rapidly screen a large number of individual isolated colonies for as many phenotypes as there are secondary plates.
The development of replica plating required two steps. The first step was to define the problem: a method of identifiably duplicating colonies. The second step was to devise a means to reliably implement the first step. Replica plating was first described by Esther Lederberg and Joshua Lederberg in 1952. [ 1 ] Lederberg sought to use a fabric that could be sterilized and had a vertical pile, akin to a two-dimensional analog of the wire brushes that had classically been used to transfer colonies. Paper was unsatisfactory, as "its lateral capillarity and its compression of the colonies distorted and broke up the original growth pattern", and nylon velvet was too expensive and its stiffer fibers caused problems, leading to the choice and eventual standardization of cotton velveteen. [ 2 ] While first demonstrated with bacteria , velveteen-based replica plating has also become a standard technique in the microbiology of eukaryotes , such as yeast . [ 3 ] | https://en.wikipedia.org/wiki/Replica_plating |
In the statistical physics of spin glasses and other systems with quenched disorder , the replica trick is a mathematical technique based on the application of the formula $$\ln Z = \lim_{n\to 0}\frac{Z^n - 1}{n}$$ or $$\ln Z = \lim_{n\to 0}\frac{\partial Z^n}{\partial n},$$ where $Z$ is most commonly the partition function , or a similar thermodynamic function.
It is typically used to simplify the calculation of $\overline{\ln Z}$, the expected value of $\ln Z$, reducing the problem to calculating the disorder average $\overline{Z^n}$, where $n$ is assumed to be an integer. This is physically equivalent to averaging over $n$ copies, or replicas , of the system, hence the name.
The crux of the replica trick is that while the disorder averaging is done assuming $n$ to be an integer, to recover the disorder-averaged logarithm one must send $n$ continuously to zero. This apparent contradiction at the heart of the replica trick has never been formally resolved; however, in all cases where the replica method can be compared with other exact solutions, the methods lead to the same results. (A natural sufficient rigorous proof that the replica trick works would be to check that the assumptions of Carlson's theorem hold, especially that the ratio $(Z^n - 1)/n$ is of exponential type less than $\pi$.)
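The identity itself is elementary: $Z^n = e^{n\ln Z} = 1 + n\ln Z + O(n^2)$, so $(Z^n - 1)/n \to \ln Z$ as $n\to 0$. A quick symbolic check in Python (illustrative only):

import sympy as sp

Z, n = sp.symbols("Z n", positive=True)
# Limit form of the trick: (Z**n - 1)/n -> log(Z) as n -> 0.
print(sp.limit((Z**n - 1) / n, n, 0))  # log(Z)
# Derivative form: d(Z**n)/dn evaluated at n = 0 is also log(Z).
print(sp.diff(Z**n, n).subs(n, 0))     # log(Z)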
It is occasionally necessary to require the additional property of replica symmetry breaking (RSB), which is associated with the breakdown of ergodicity , in order to obtain physical results.
The trick is generally used for computations involving analytic functions (functions that can be expanded in power series). The idea is to expand f(z) using its power series into powers of z, in other words replicas of z, and to perform, on those powers of z, the same computation that is to be done on f(z).
A particular case which is of great use in physics is in averaging the thermodynamic free energy,

F[J_{ij}] = -k_{B}T \ln Z[J_{ij}],

over values of J_{ij} with a certain probability distribution, typically Gaussian. [ 1 ]

The partition function is then given by

Z[J_{ij}] = \operatorname{Tr}\, e^{-\beta H[J_{ij}]},

where H[J_{ij}] is the Hamiltonian for a given realization of the disorder and \beta = 1/k_{B}T.
Notice that if we were calculating just \overline{Z[J_{ij}]} (or, more generally, the disorder average of any power of Z[J_{ij}]) rather than the average of its logarithm, the resulting integral (assuming a Gaussian distribution of the couplings) is of the form

\int \prod_{ij} dJ_{ij}\; e^{-J_{ij}^{2}/2\sigma^{2}}\, Z[J_{ij}],

a standard Gaussian integral which can be easily computed (e.g. by completing the square).
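To make the "completing the square" step explicit, here is the one-variable version of the calculation; the source term a stands in for whatever spin-dependent factor multiplies the coupling (a notational assumption used purely for illustration):

```latex
\int_{-\infty}^{\infty} \frac{dJ}{\sqrt{2\pi\sigma^{2}}}\,
    e^{-J^{2}/2\sigma^{2} + aJ}
  = e^{a^{2}\sigma^{2}/2}
    \int_{-\infty}^{\infty} \frac{dJ}{\sqrt{2\pi\sigma^{2}}}\,
    e^{-(J - a\sigma^{2})^{2}/2\sigma^{2}}
  = e^{a^{2}\sigma^{2}/2}
```

In the spin-glass case a is proportional to \beta s_{i} s_{j} (summed over replicas when computing \overline{Z^{n}}), and the resulting squared term is what generates effective couplings between the replicas.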
To calculate the free energy, we use the replica trick \ln Z = \lim_{n\to 0} (Z^{n}-1)/n, which reduces the complicated task of averaging the logarithm to solving a relatively simple Gaussian integral, provided n is an integer. [ 2 ] The replica trick postulates that if Z^{n} can be calculated for all positive integers n, then this may be sufficient to allow the limiting behavior as n → 0 to be calculated.
Clearly, such an argument poses many mathematical questions, and the resulting formalism for performing the limit n → 0 {\displaystyle n\to 0} typically introduces many subtleties. [ 3 ]
When using mean-field theory to perform one's calculations, taking this limit often requires introducing extra order parameters, a property known as " replica symmetry breaking ", which is closely related to ergodicity breaking and slow dynamics within disordered systems.
The replica trick is used in determining ground states of statistical mechanical systems, in the mean-field approximation . Typically, for systems in which the determination of the ground state is easy, one can analyze fluctuations near the ground state. Otherwise one uses the replica method. [ papers on spin glasses 1 ] An example is the case of quenched disorder in a system like a spin glass with different types of magnetic links between spins, leading to many different configurations of spins having the same energy.
In the statistical physics of systems with quenched disorder, any two states with the same realization of the disorder (or in case of spin glasses, with the same distribution of ferromagnetic and antiferromagnetic bonds) are called replicas of each other. [ papers on spin glasses 2 ] For systems with quenched disorder, one typically expects that macroscopic quantities will be self-averaging , whereby any macroscopic quantity for a specific realization of the disorder will be indistinguishable from the same quantity calculated by averaging over all possible realizations of the disorder. Introducing replicas allows one to perform this average over different disorder realizations.
In the case of a spin glass, we expect the free energy per spin (or any self-averaging quantity) in the thermodynamic limit to be independent of the particular values of the ferromagnetic and antiferromagnetic couplings between individual sites across the lattice. So, we explicitly find the free energy as a function of the disorder parameter (in this case, the parameters of the distribution of ferromagnetic and antiferromagnetic bonds) and average the free energy over all realizations of the disorder (all values of the coupling between sites, each with its corresponding probability given by the distribution function). The averaged free energy takes the form

F = -k_{B}T\, \overline{\ln Z[J_{ij}]},

where J_{ij} describes the disorder (for spin glasses, the nature of the magnetic interaction between the individual sites i and j) and the average is taken over all values of the couplings in J, weighted with the given distribution. To perform the averaging over the logarithm function, the replica trick comes in handy: the logarithm is replaced by its limit form mentioned above, and the quantity \overline{Z^{n}} represents the joint partition function of n identical systems.
The random energy model (REM) is one of the simplest models of statistical mechanics of disordered systems , and probably the simplest model to show the meaning and power of the replica trick at the level of one-step replica symmetry breaking . The model is especially suitable for this introduction because an exact result is known from a different procedure, so the replica trick can be shown to work by cross-checking results.
The cavity method is an alternative method, often of simpler use than the replica method, for studying disordered mean-field problems. It has been devised to deal with models on locally tree-like graphs .
Another alternative method is the supersymmetric method . The use of the supersymmetry method provides a mathematically rigorous alternative to the replica trick, but only in non-interacting systems. See for example the book: [ other approaches 1 ]
Also, it has been demonstrated [ other approaches 2 ] that the Keldysh formalism provides a viable alternative to the replica approach.
The first of the above identities is easily understood via Taylor expansion : Z^{n} = e^{n \ln Z} = 1 + n \ln Z + O(n^{2}), so that (Z^{n}-1)/n = \ln Z + O(n) \to \ln Z as n \to 0. For the second identity, one simply uses the definition of the derivative: \lim_{n\to 0} \partial Z^{n}/\partial n = \lim_{n\to 0} \partial e^{n\ln Z}/\partial n = \lim_{n\to 0} Z^{n} \ln Z = \ln Z. | https://en.wikipedia.org/wiki/Replica_trick |
Hundreds of replicas of the Statue of Liberty ( Liberty Enlightening the World ) have been created worldwide. The original Statue of Liberty, designed by sculptor Frédéric Auguste Bartholdi , is 151 feet tall and stands on a pedestal that is 154 feet tall, making the height of the entire sculpture 305 feet. The design for the original Statue of Liberty began in 1865, with final installation in 1886. [ 1 ]
On the occasion of the Exposition Universelle of 1900 , sculptor Frédéric Bartholdi crafted a 1/16 scale, 2.74-metre (9 ft) version [ 2 ] of his Liberty Enlightening the World. It was cast in 1889 and he subsequently gave it to the Musée du Luxembourg . In 1906, the statue was placed outside the museum in the Jardin du Luxembourg , where it stood for over a century, until 2011. Since 2012 it has stood within the entrance hall to the Musée d'Orsay , and a newly constructed bronze replica stands in its place in the Jardin du Luxembourg. [ 3 ]
This statue was given in 1889 to France by U.S. citizens [ 4 ] living in Paris, only three years after the main statue in New York was inaugurated, to celebrate the centennial of the French Revolution . Originally, the statue was turned towards the east in order to face the Eiffel Tower . In 1937 it was turned towards the west so that it would be facing the original statue in New York. It is one of three replicas in Paris.
The statue is near the Pont de Grenelle on the Île aux Cygnes , [ 5 ] a man-made island in the Seine ( 48°51′0″N 2°16′47″E / 48.85000°N 2.27972°E / 48.85000; 2.27972 ). It is a quarter-scale version (11.50 metres (37 feet 9 inches) high), and was one of the working models used during construction of the actual Statue of Liberty. [ 6 ] The statue is a short distance away from the Eiffel Tower and Alexandre-Gustave Eiffel designed its interior. [ 5 ] This model weighs 14 tons. [ 7 ] It was inaugurated on 4 July 1889. Its tablet bears two dates: "IV JUILLET 1776" (4 July 1776: the United States Declaration of Independence ) like the New York statue, and "XIV JUILLET 1789" (14 July 1789: the storming of the Bastille ) associated with an equal sign. This statue is shown in the film National Treasure: Book of Secrets as a historic location.
The 2.86-metre (9.4 ft) tall original plaster maquette finished in 1878 by Auguste Bartholdi that was used to make the statue in New York is in the Musée des Arts et Métiers in Paris. [ 8 ] [ 9 ] This original plaster model was bequeathed by the artist's widow in 1907, [ 10 ] together with part of the artist's estate.
On the square outside the entrance of the Musée des Arts et Métiers stood a bronze copy made from the plaster maquette, number 1 from an original edition of 12, made by the museum and cast by Susse Fondeur Paris. It was this replica that was shipped to the U.S. under a joint effort by the Embassy of France in the United States , the Conservatoire national des arts et métiers and the shipping company CMA CGM Group. [ 11 ] After spending time on Ellis Island for Independence Day 2021, it now resides at the French ambassador's residence in Washington, D.C. [ 12 ]
A life-size copy of the torch, Flame of Liberty , can be seen above the entrance to the Pont de l'Alma tunnel near the Champs-Élysées in Paris. It was given to the city as a return gift in honour of the centennial celebration of the statue's dedication. In 1997, the torch became an unofficial memorial to Diana, Princess of Wales after she was killed in a car accident in the Pont de l'Alma tunnel. [ 13 ]
There is a 13.5 m (44 feet) polyester replica in the northwest of France, in the small town of Barentin near Rouen . It was made for the 1969 French film, Le Cerveau ("The Brain"), directed by Gérard Oury and featuring actors Jean-Paul Belmondo and Bourvil . [ 14 ]
There is a 2.5 m (8.2 ft) replica of the statue in the city of Bordeaux . The first Bordeaux statue was seized and melted down by the Nazis in World War II. The statue was replaced in 2000 and a plaque was added to commemorate the victims of the 11 September terrorist attacks . On the night of 25 March 2003, unknown vandals poured red paint and gasoline on the replica and set it on fire. The vandals also cracked the pedestal of the plaque. The mayor of Bordeaux, former prime minister Alain Juppé , condemned the attack. [ 15 ]
A 12 m (39 ft 4 in) replica of the Statue of Liberty in Colmar , the city of Bartholdi's birth, was dedicated on 4 July 2004, to commemorate the 100th anniversary of his death. It stands at the north entrance of the city. [ 16 ] The Bartholdi Museum in Colmar contains numerous models of various sizes made by Bartholdi during the process of designing the statue. [ 17 ]
Frédéric Bartholdi donated a copy of the Statue of Liberty to the town square of Saint-Cyr-sur-Mer .
Other Liberty Enlightening the World statues are displayed in Poitiers and Lunel . The Musée des beaux-arts de Lyon owns a terracotta version.
Near Chaumont, Haute Marne , is a miniature replica in the flag plaza of the former Chaumont Air Base . This was the home of the US 48th Tactical Fighter Wing, now based at Lakenheath, England, with its own statue at the flag plaza. The 48th TFW is the only USAF wing with a name: "The Statue of Liberty Wing".
Another example is a Liberty Enlightening the World replica in Châteauneuf-la-Forêt , near the city of Limoges in the area of Haute-Vienne , Limousin . There is another "original" Bartholdi replica at Roybon , near Grenoble.
There is a small replica on Promenade des Anglais in Nice . This is one of Bartholdi's first models and overlooks the Mediterranean Sea on the Quai des Etats-Unis (the promenade of the United States). [ 5 ]
In Minimundus , a miniature park located at the Wörthersee in Carinthia , Austria, is another replica of the Statue of Liberty. [ 18 ]
In Graz , between the Opera House and the NextLiberty Theater, stands a structure built out of steel beams that depicts the Statue of Liberty at its original size, as it appeared before the plates of its final form were put into place. Instead of a torch, this depiction holds a sword in its extended left arm and, in its right arm, a sphere representing the world.
A small replica made of Lego is on display at the original Legoland in Billund . The replica is built from 400,000 Lego bricks, and similar models stand in other Lego theme parks. [ 5 ]
A 35 m (115 ft) copy is in the German Heidepark Soltau theme park, located on a lake with cruising Mississippi steamboats. It weighs 28 metric tons (31 short tons), is made of plastic foam on a steel frame with polyester cladding, and was designed by the Dutch artist Gerla Spee. [ 19 ]
A green painted replica of the Statue of Liberty can be found near Mulnamina More, County Donegal , Ireland. [ 20 ]
A replica stands atop the Hotel Victory in Pristina , Kosovo. [ 21 ] Today the hotel is closed and the building is used by the Kosovo Police.
A 33 ft (10 m) replica has its temporary location in the Dutch city of Assen . The statue bears characteristic features that represent the culture and landscape of the region, like a can of beans instead of the original torch. The replica, by sculptor Natasja Bennink, was on display for the duration of an exhibition on American realism in the Drents Museum until 27 May 2018.
A smaller replica is in the Norwegian village of Visnes , where the copper used in the original statue was mined. [ 22 ] A replica is also on the facade of a pub in Bleik , in the county of Nordland . [ 23 ] [ 24 ]
There is a 2.5 m tall statue in Boldeşti-Scăeni , in Prahova County.
There is a unique "sitting" Statue of Liberty in the Ukrainian city of Lviv . It is a sculpture on a dome of the house (15, Liberty Avenue [ uk ] ) built by architect Yuriy Zakharevych and decorated by sculptor Leandro Marconi in 1874–1891.
A 17 ft (5.2 m), 9,200 kg (9.2 tons) replica stood atop the Liberty Shoe factory in Leicester , England, until 2002 when the building was demolished. The statue was put into storage while the building was replaced. The statue, which dates back to the 1920s, was initially going to be put back on the replacement building, but was too heavy, so in December 2008 following restoration, it was placed on a pedestal near Liberty Park Halls of Residence on a traffic island, "Liberty Circus", close to where it originally stood. [ 25 ] [ 26 ]
There used to be a 10-foot-high (3.0 m) replica in the stairwell of bowling alley LA Bowl, in Warrington , England. Prior to that it was above the entrance of Liberty Street, a nearby restaurant. It is thought that this is now situated approximately 4 miles away on Mustard Lane in Croft . [ citation needed ]
There is also a small replica located at RAF Lakenheath , England, at the base flag plaza, made from leftover copper from the original. [ 27 ]
In Coquitlam , British Columbia a small replica stands on Delestre Avenue just east of North Road. The statue was removed in 2019 when the hotel behind it was demolished. [ 28 ]
In Buenos Aires there is a small cast-iron replica in Barrancas de Belgrano Park, located near the intersection of La Pampa and Arribeños streets. It was cast by Bartholdi from the same mould as those cast in Paris, although it is much smaller (3 meters tall). It was inaugurated on October 3, 1886, 25 days before the statue in New York. Its base bears the inscription "Val d'Osne – 8 Rue Voltaire, Paris", the name of the French workshop, and "1884", probably the year of its creation. Another replica was bought by the government and placed in a school, Colegio Nacional Sarmiento, at about the same time. [ 82 ] There is another replica in Plaza Libertad (Liberty Square) in the city of Villa Aberastain , San Juan ; it was installed on the city square in 1931. [ 83 ] There are also two cheaper non-metallic replicas: one is 6 m tall, located in the "New York" Casino in San Luis , [ 84 ] and the other crowns a commercial gallery, "Galería de Fabricantes", in Munro , a city in the northeastern suburbs of Buenos Aires. [ 85 ] [ 86 ]
In Bangu, Rio de Janeiro , there is a nickel replica made by Bartholdi in 1899. Bartholdi was commissioned by José Paranhos, Baron of Rio Branco , to make the replica in order to celebrate the 10th anniversary of the Republic of Brazil . Until 1940 the statue was the property of the Paranhos family; in 1940 it passed to Guanabara State . On 20 January 1964, Carlos Lacerda , governor of Guanabara State, placed the statue in Miami Square, Bangu . [ 87 ] 22°51′24″S 43°29′33″W / 22.856681°S 43.492577°W / -22.856681; -43.492577
A small-scale cast metal replica can be found in Maceió , the capital of Alagoas State, in northeast Brazil. The replica stands in front of a building constructed in 1869 as the seat of the Conselho Provincial (Provincial Council), which today houses the Museu da Imagem e do Som de Alagoas (Museum of Image and Sound of Alagoas). This replica is possibly a casting produced by the Fundição Val d'Osne [ pt ] [ 88 ] in France: in the Praça Lavenere Machado (formerly Praça Dois Leões), on the opposite side of the museum, there are four somewhat larger-than-life-size cast metal statues of wild animals, at least one of which is embossed with the name of that foundry. These castings and the replica all appear to be made of similar material and to be of similar age, and they are probably near contemporaries of the actual Statue of Liberty. 9°40′22″S 35°43′20″W / 9.6728563°S 35.7223114°W / -9.6728563; -35.7223114
A large modern replica stands in front of the New York City Center, a shopping center constructed in 1999 in Barra da Tijuca in the State of Rio de Janeiro . 22°59′57″S 43°21′38″W / 22.9992837°S 43.360481°W / -22.9992837; -43.360481
The Havan department store chain has replicas at many of its stores. The largest of these, 57 meters tall, [ 89 ] is reportedly at the Barra Velha branch, in the state of Santa Catarina . 26°38′09″S 48°41′55″W / 26.635870°S 48.698708°W / -26.635870; -48.698708 There is another large replica in the parking area of a Havan department store on the outskirts of Curitiba , in the State of Paraná , opened in 2000. 25°27′50″S 49°15′08″W / 25.4639912°S 49.2521676°W / -25.4639912; -49.2521676
Also, there is a small replica of the statue in Belém , in front of a Belém Importados store, near the city's port.
In Guayaquil , a little replica gives the name of "New York" to a neighborhood in the Valle Alto area.
In Lima the New York Casino in the Jesús María District has a small replica in the main entrance. The casino is a tribute to the state of New York and the USA.
In Arequipa , the Plaza Las Américas in the Cerro Colorado district has a small replica in the monument in the middle of the square on top of a globe pointing to the American continent.
A small replica can be found in Vardhaman Fantasy , an amusement park in Mira Road, Mumbai along with six other wonders of the world .
Replicas of seven wonders of the world, among them the Statue of Liberty, have been built in Eco Park, Kolkata , West Bengal.
Another small replica can be found in Seven Wonders Park , a park in Kotri, Kota , Rajasthan along with six other wonders of the world .
A large replica can be found in Genting Highlands in the state of Pahang .
A small replica can be found in Haw Par Villa , a theme park.
A replica sits on top of the memorial tomb of the "72 Martyrs of Huanghuagang" in Guangzhou (see Huanghuagang Uprising ). The current statue was rebuilt in 1981.
During the Tiananmen Square protest of 1989 , Chinese student demonstrators in Beijing built a 10 m (33 ft) image called the Goddess of Democracy , which sculptor Tsao Tsing-yuan said was intentionally "dissimilar" to the Statue of Liberty to avoid being "too openly pro-American." (See article for a list of replicas of that statue.)
A replica can be found in Window of the World Park.
A 15-foot-high (4.6 m) replica of the Statue of Liberty stands at the western entrance of the village of Arraba in Israel, near a local restaurant.
At a highway intersection in Jerusalem called "New York Square," there is an abstract skeletal replica of the statue.
A 13-foot-high (4 m) replica, custom-made by a commercial sculpture company, was installed in front of the yard of a newly constructed private residence in Israel. ("Custom Made Large Statue of Liberty". YouFine. Retrieved August 7, 2024.)
The French Statue of Liberty from the Île aux Cygnes came to Odaiba , Tokyo , from April 1998 to May 1999 in commemoration of "The French year in Japan". [ 7 ] Because of its popularity, in 2000 a replica of the French Statue of Liberty was erected at the same place. The Tokyo Bay statue is about 1/7th the size of the statue in New York Harbor. [ 5 ] In Japan, a small Statue of Liberty is in the Amerika-mura ( American Village ) shopping district in Osaka , Japan. Another replica is in Oirase [ 90 ] near the town of Shimoda south of Misawa in Aomori Prefecture , where the United States has an 8,000-person U.S. Air Force base. A replica of the Statue of Liberty in Ishinomaki, Miyagi Prefecture , was damaged by the 2011 Tōhoku earthquake and tsunami . [ 91 ] There is also a replica in Oyabe , Toyama . [ 92 ]
There are replicas of the Statue of Liberty in Bahria Town , Lahore , and also in Bahria Town Phase 8 , Islamabad .
As early as January 1945, there was already news of a campaign to help erect a Statue of Liberty replica in the Philippines . The monument was to be sponsored by The Chicago Daily Times , whose goal was "to commemorate one of the great epics in the struggle for human freedom–the liberation of the Philippines."
In 1950, the Boy Scouts of America marked its 40th anniversary. Jack P. Whitaker, the Scout Commissioner of the Kansas City Area Council at the time, had previously proposed the idea of creating and distributing replicas of the Statue of Liberty to every U.S. state and territory, as well as the Philippines.
The eight-foot statues, which were cast in bronze, were distributed all over the U.S. and the world from 1949 to 1951. Almost 200 replicas were delivered to the 39 states of the U.S. and countries such as Panama and Puerto Rico. The Boy Scouts of the Philippines , on the other hand, received its own replica in the early part of 1950.
The statues were donated by the Boy Scouts of America as "an expression of scout brotherhood and goodwill." Their 40th anniversary theme was "Strengthen the Arm of Liberty."
Miniature versions of the statue were also given as gifts. The Philippines became the first independent nation to receive one of the 4,000 eight-inch statues from the Boy Scouts of America. In April 1950, the said statue was officially given by Chief Scout Executive Arthur A. Shuck to Carlos P. Romulo , then chief of the Philippine Mission to the United Nations.
In the Philippines, several places were suggested as the site where the eight-foot bronze replica would be erected. The task of choosing the perfect site was delegated to the National Urban Planning Commission, and among those it considered were “Engineer Island, atop the proposed reviewing stand on the Rizal Park , and on the center island rotunda between the Old Legislative building and Manila City Hall .”
In the end, the Boy Scouts of the Philippines (BSP) erected the statue just outside Intramuros . As the icon of the United States, the replica of Lady Liberty would survive several attacks by student protesters in the 1960s. It remained standing until the early 1970s, when the BSP decided to transfer it to the Scout Reservation in Mt. Makiling which would serve as the statue's home for two decades or so.
In a 2002 article published by the Philippine Star, then BSP PR head Nixon Canlapan revealed that the Statue of Liberty was eventually moved and stored at the BSP headquarters on Concepcion Street (now Natividad Almeda-Lopez) in Ermita, Manila.
As it turns out, the U.S.-sponsored replica was not the first Lady Liberty in Manila . In the 1930s, one of Manila's biggest shopping stores at the time became the talk of the town not just for its products but also for its unique multi-story building. Located on Juan Luna Street, the L.R. Aguinaldo's Emporium had an Art Deco facade featuring two contrasting statues: Andres Bonifacio on the right and the Statue of Liberty on the left.
Founded by the Philippine retailing pioneer Leopoldo R. Aguinaldo, the establishment would eventually be recognized as Aguinaldo's Department Store. Following the war, Leopoldo's son, Francisco, assumed control of the business, and subsequently, the store relocated to Echague.
The Echague branch in the 1950s was known for introducing its customers to quality products both from the Philippines and abroad. It also commissioned young interior designers to update the store's furniture section. Thus, the store catapulted the careers of famous designers like Myra Cruz, Edgar Ramirez, and Bonnie Ramos, among others. Aguinaldo's succumbed to the competition and closed in the 1960s. The original building in Juan Luna Street still stands, along with both the Bonifacio and the Liberty statues.
Since the creation of the Liberty statues in Intramuros and on Juan Luna Street, other Philippine provinces have followed suit. Statue of Liberty replicas can be found in Pangasinan and as far away as the Camp John Hay amphitheater in Baguio .
The Mini Siam and Mini Europe model village , in Pattaya , has a miniature Statue of Liberty amongst others.
There are at least two Statue of Liberty replicas (greater than 30 feet in height) in Taiwan . These two statues are in the cities of Keelung and Taipei . [ 93 ]
From 1887 to 1945, Hanoi was home to another copy of the statue. Measuring 2.85 m (9 ft 4 in) tall, it was erected by the French colonial government after being sent from France for an exhibition. It was known to locals unaware of its history as Tượng Bà đầm xòe [ vi ] ( Statue of the Western lady wearing dress ). When the French lost control of French Indochina during World War II, the statue was toppled on 1 August 1945, after being deemed a vestige of the colonial government along with other statues erected by the French. [ 94 ]
A 30-foot replica was once found at the Westfield Marion shopping complex in Adelaide , South Australia. The statue was demolished in 2019. [ citation needed ] | https://en.wikipedia.org/wiki/Replicas_of_the_Statue_of_Liberty |
In the biological sciences , replicates are experimental units that are treated identically. Replicates are an essential component of experimental design because they provide an estimate of between-sample error. Without replicates, scientists cannot assess whether observed treatment effects are due to the experimental manipulation or to random error. There are also analytical replicates, in which an exact copy of a sample (such as a cell, organism or molecule) is analyzed using exactly the same procedure, in order to check for analytical error; in the absence of such error, analytical replicates should yield the same result. However, analytical replicates are not independent and cannot be used in hypothesis tests because they are still the same sample. [ 1 ] [ 2 ]
This biology article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Replicate_(biology) |
Replication , in metallography , is the use of thin plastic films to nondestructively duplicate the microstructure of a component. The film is then examined at high magnifications .
Replication is a method of copying the topography of a surface by casting or impressing material onto the surface. In metrology it is the commonly used technique for duplicating surfaces that are inaccessible to other forms of nondestructive testing . Replicas can be used in biology as well:
The replicas may be imaged in the light microscope or coated with heavy metals, the replicating film melted away, and the heavy metal replica imaged in a Transmission Electron Microscope (TEM).
The same materials, cellulose acetate films , are used for creating replicas of biological materials such as bacteria.
Field Metallurgical Replication (FMR) , in field metallography , is the use of metallurgical preparation on surfaces in the field: the surface is polished to a mirror finish, and acetate or other thin plastic films are then applied to nondestructively duplicate the microstructure of a part or structure in situ. The FMR replica is then transferred to a glass slide for examination by optical microscopy, electron microscopy, and other methods.
This industry -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Replication_(microscopy) |
In mathematics , the replicator equation is a type of dynamical system used in evolutionary game theory to model how the frequency of strategies in a population changes over time. It is a deterministic , monotone , non-linear , and non-innovative dynamic that captures the principle of natural selection in strategic interactions. [ 1 ]
The replicator equation describes how strategies with higher-than-average fitness increase in frequency, while less successful strategies decline. Unlike other models of replication—such as the quasispecies model —the replicator equation allows the fitness of each type to depend dynamically on the distribution of population types, making the fitness function an endogenous component of the system. This allows it to model frequency-dependent selection , where the success of a strategy depends on its prevalence relative to others.
Another key difference from the quasispecies model is that the replicator equation does not include mechanisms for mutation or the introduction of new strategies, and is thus considered non-innovative . It assumes all strategies are present from the outset and models only the relative growth or decline of their proportions over time.
Replicator dynamics have been widely applied in fields such as biology (to study evolution and population dynamics), economics (to analyze bounded rationality and strategy evolution), and machine learning (particularly in multi-agent systems and reinforcement learning ).
The most general continuous form of the replicator equation is given by the differential equation :
\dot{x}_{i} = x_{i}\left[f_{i}(x) - \phi(x)\right], \qquad \phi(x) = \sum_{j=1}^{n} x_{j} f_{j}(x)
where x_{i} is the proportion of type i in the population, x = (x_{1}, \ldots, x_{n}) is the vector of the distribution of types in the population, f_{i}(x) is the fitness of type i (which is dependent on the population), and \phi(x) is the average population fitness (given by the weighted average of the fitness of the n types in the population). Since the elements of the population vector x sum to unity by definition, the equation is defined on the n-dimensional simplex .
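As a concrete illustration of these definitions, the following sketch numerically integrates the replicator equation for a two-strategy Hawk–Dove game with linear (payoff matrix) fitness; the payoff values V = 2 and C = 4, the initial condition, and the forward-Euler integrator are all illustrative assumptions, not part of the equation itself:

```python
import numpy as np

# Hawk-Dove payoff matrix with V = 2, C = 4:
# Hawk vs Hawk: (V - C)/2, Hawk vs Dove: V, Dove vs Hawk: 0, Dove vs Dove: V/2.
A = np.array([[(2 - 4) / 2, 2.0],
              [0.0,         1.0]])

x = np.array([0.1, 0.9])  # initial proportions of Hawks and Doves
dt = 0.01

for step in range(5000):
    f = A @ x                    # fitness of each type: f_i = (Ax)_i
    phi = x @ f                  # average population fitness: x^T A x
    x = x + dt * x * (f - phi)   # replicator update (forward Euler)

print(x)  # approaches the mixed equilibrium x_Hawk = V/C = 0.5
```

With these payoffs the interior fixed point is x_Hawk = V/C = 0.5, which the trajectory approaches from any interior starting point.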
The replicator equation assumes a uniform population distribution; that is, it does not incorporate population structure into the fitness. The fitness landscape does incorporate the population distribution of types, in contrast to other similar equations, such as the quasispecies equation.
In application, populations are generally finite, making the discrete version more realistic. The analysis is more difficult and computationally intensive in the discrete formulation, so the continuous form is often used, although there are significant properties that are lost due to this smoothing. Note that the continuous form can be obtained from the discrete form by a limiting process.
To simplify analysis, fitness is often assumed to depend linearly upon the population distribution, which allows the replicator equation to be written in the form

\dot{x}_{i} = x_{i}\left[(Ax)_{i} - x^{T}Ax\right]

where the payoff matrix A holds all the fitness information for the population: the expected payoff of type i can be written as (Ax)_{i} and the mean fitness of the population as a whole can be written as x^{T}Ax. It can be shown that the change in the ratio of two proportions x_{i}/x_{j} with respect to time is

\frac{d}{dt}\left(\frac{x_{i}}{x_{j}}\right) = \frac{x_{i}}{x_{j}}\left[f_{i}(x) - f_{j}(x)\right].

In other words, the change in the ratio is driven entirely by the difference in fitness between types.
Suppose that the number of individuals of type i is N_{i} and that the total number of individuals is N. Define the proportion of each type to be x_{i} = N_{i}/N. Assume that the change in each type is governed by geometric Brownian motion :

dN_{i} = f_{i} N_{i}\, dt + \sigma_{i} N_{i}\, dW_{i}

where f_{i} is the fitness associated with type i, and the average fitness of the types is \phi = x^{T} f. The Wiener processes W_{i} are assumed to be uncorrelated. For x_{i}(N_{1}, \ldots, N_{m}), Itô's lemma then gives:

dx_{i} = \frac{\partial x_{i}}{\partial N_{j}} dN_{j} + \frac{1}{2} \frac{\partial^{2} x_{i}}{\partial N_{j} \partial N_{k}} dN_{j}\, dN_{k} = \frac{\partial x_{i}}{\partial N_{j}} dN_{j} + \frac{1}{2} \frac{\partial^{2} x_{i}}{\partial N_{j}^{2}} (dN_{j})^{2}.

The partial derivatives are:

\frac{\partial x_{i}}{\partial N_{j}} = \frac{1}{N}\delta_{ij} - \frac{x_{i}}{N}, \qquad \frac{\partial^{2} x_{i}}{\partial N_{j}^{2}} = -\frac{2}{N^{2}}\delta_{ij} + \frac{2 x_{i}}{N^{2}},

where \delta_{ij} is the Kronecker delta function. These relationships imply that

dx_{i} = \frac{dN_{i}}{N} - x_{i}\sum_{j}\frac{dN_{j}}{N} - \frac{(dN_{i})^{2}}{N^{2}} + x_{i}\sum_{j}\frac{(dN_{j})^{2}}{N^{2}}.

Each of the components in this equation may be calculated as:

\frac{dN_{i}}{N} = f_{i} x_{i}\, dt + \sigma_{i} x_{i}\, dW_{i}

-x_{i}\sum_{j}\frac{dN_{j}}{N} = -x_{i}\left(\phi\, dt + \sum_{j}\sigma_{j} x_{j}\, dW_{j}\right)

-\frac{(dN_{i})^{2}}{N^{2}} = -\sigma_{i}^{2} x_{i}^{2}\, dt

x_{i}\sum_{j}\frac{(dN_{j})^{2}}{N^{2}} = x_{i}\left(\sum_{j}\sigma_{j}^{2} x_{j}^{2}\right) dt.

Then the stochastic replicator dynamics equation for each type is given by:

dx_{i} = x_{i}\left(f_{i} - \phi - \sigma_{i}^{2} x_{i} + \sum_{j}\sigma_{j}^{2} x_{j}^{2}\right) dt + x_{i}\left(\sigma_{i}\, dW_{i} - \sum_{j}\sigma_{j} x_{j}\, dW_{j}\right).

Assuming that the \sigma_{i} terms are identically zero, the deterministic replicator dynamics equation is recovered.
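The effect of the noise terms can be explored by simulating this stochastic equation directly; the following Euler–Maruyama sketch (the constant fitnesses, volatilities, step size, and random seed are illustrative assumptions) integrates the two-type case:

```python
import numpy as np

rng = np.random.default_rng(0)

f = np.array([1.0, 1.2])      # constant fitnesses of the two types
sigma = np.array([0.3, 0.3])  # volatilities of the geometric Brownian motions
x = np.array([0.5, 0.5])      # initial proportions
dt = 0.001

for step in range(20000):
    phi = x @ f
    dW = rng.normal(0.0, np.sqrt(dt), size=2)  # Wiener process increments
    drift = x * (f - phi - sigma**2 * x + np.sum(sigma**2 * x**2))
    noise = x * (sigma * dW - np.sum(sigma * x * dW))
    # The exact dynamics preserve sum(x) = 1; the clip and renormalization
    # below only guard against discretization error.
    x = np.clip(x + drift * dt + noise, 1e-12, None)
    x = x / x.sum()

print(x)  # the higher-fitness type tends to dominate, though noise can delay this
```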
The analysis differs in the continuous and discrete cases: in the former, methods from differential equations are utilized, whereas in the latter the methods tend to be stochastic. Since the replicator equation is non-linear, an exact solution is difficult to obtain (even in simple versions of the continuous form) so the equation is usually analyzed in terms of stability. The replicator equation (in its continuous and discrete forms) satisfies the folk theorem of evolutionary game theory which characterizes the stability of equilibria of the equation. The solution of the equation is often given by the set of evolutionarily stable states of the population.
In general nondegenerate cases, there can be at most one interior evolutionarily stable state (ESS), though there can be many equilibria on the boundary of the simplex. All the faces of the simplex are forward-invariant, which corresponds to the lack of innovation in the replicator equation: once a strategy becomes extinct, there is no way to revive it.
Phase portrait solutions for the continuous linear-fitness replicator equation have been classified in the two and three dimensional cases. Classification is more difficult in higher dimensions because the number of distinct portraits increases rapidly.
The continuous replicator equation on n types is equivalent to the Generalized Lotka–Volterra equation in n − 1 dimensions. [ 2 ] [ 3 ] The transformation is made by the change of variables

x_{i} = \frac{y_{i}}{1 + \sum_{j=1}^{n-1} y_{j}} \quad (i = 1, \ldots, n-1), \qquad x_{n} = \frac{1}{1 + \sum_{j=1}^{n-1} y_{j}},

where y_{i} is the Lotka–Volterra variable (equivalently, y_{i} = x_{i}/x_{n}). The continuous replicator dynamic is also equivalent to the Price equation . [ 4 ]
When one considers an unstructured infinite population with non-overlapping generations, one should work with the discrete forms of the replicator equation. Mathematically, two simple phenomenological versions,

x_{i}' = x_{i} + x_{i}\left[f_{i}(x) - \phi(x)\right] \quad \text{(type I)} \qquad \text{and} \qquad x_{i}' = x_{i}\,\frac{f_{i}(x)}{\phi(x)} \quad \text{(type II)},

are consistent with the Darwinian tenet of natural selection or any analogous evolutionary phenomena; here the prime stands for the next time step. However, the discrete nature of the equations puts bounds on the payoff-matrix elements. [ 5 ] Interestingly, for the simple case of two-player two-strategy games, the type I replicator map is capable of showing period-doubling bifurcation leading to chaos (illustrated in the sketch below), and it also gives a hint on how to generalize [ 6 ] the concept of the evolutionarily stable state to accommodate the periodic solutions of the map.
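As a sanity check on that claim, one can iterate the two-strategy type I map directly. The anti-coordination payoff matrix [[0, b], [b, 0]] and the parameter values below are illustrative assumptions (the map form is the type I form given above, not necessarily the exact parametrization of the cited work):

```python
import numpy as np

def type1_map(x, b):
    # Two-strategy type I replicator map with payoff matrix [[0, b], [b, 0]]:
    # f1 - f2 = b(1 - 2x), so x' = x + b * x * (1 - x) * (1 - 2x).
    return x + b * x * (1.0 - x) * (1.0 - 2.0 * x)

for b in [3.0, 5.0, 6.5, 7.8]:
    x = 0.3
    for _ in range(2000):          # discard the transient
        x = type1_map(x, b)
    orbit = set()
    for _ in range(64):            # sample the long-run orbit
        x = type1_map(x, b)
        orbit.add(round(x, 4))
    # The number of distinct long-run points grows as b increases,
    # tracing out a period-doubling route toward chaos.
    print(f"b = {b}: {len(orbit)} distinct long-run point(s)")
```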
A generalization of the replicator equation which incorporates mutation is given by the replicator-mutator equation , which takes the following form in the continuous version: [ 7 ]

\dot{x}_{i} = \sum_{j=1}^{n} x_{j} f_{j}(x)\, Q_{ji} - \phi(x)\, x_{i},

where the matrix Q gives the transition probabilities for the mutation of type j to type i, f_{i} is the fitness of the i-th type, and \phi is the mean fitness of the population. This equation is a simultaneous generalization of the replicator equation and the quasispecies equation, and is used in the mathematical analysis of language.
The discrete version of the replicator-mutator equation likewise has two simple types, in line with the two replicator maps written above.
The replicator equation or the replicator-mutator equation can be extended [ 8 ] to include the effect of delay, corresponding either to delayed information about the population state or to a delay in realizing the effect of interactions among players. The replicator equation can also easily be generalized to asymmetric games . A recent generalization that incorporates population structure is used in evolutionary graph theory . [ 9 ] | https://en.wikipedia.org/wiki/Replicator_equation |
The Repligen Award in Chemistry of Biological Processes was established in 1985 and consists of a silver medal and honorarium. Its purpose is to acknowledge and encourage outstanding contributions to the understanding of the chemistry of biological processes, with particular emphasis on structure, function, and mechanism. The Award is administered by the Division of Biological Chemistry of the American Chemical Society . [ 1 ]
The award was suspended in 2018 before being reestablished in 2022 as the Abeles and Jencks Award in Chemistry of Biological Processes. With the help of multiple financial donors, the award was endowed to honor the legacies of Professors Robert H. Abeles and William P. Jencks . [ 2 ]
Source: ACS - Division of Biological Chemistry | https://en.wikipedia.org/wiki/Repligen_Corporation_Award_in_Chemistry_of_Biological_Processes |
Replikins are a group of peptides whose increase in concentration in the proteins of a virus or other organism is associated with rapid replication. Replikin concentration is often measured as the number of replikins per 100 amino acids. This group of peptides has been found to play a significant role in predicting both the infectivity and the lethality of various viral strains; in particular, it allowed the prediction of the A/H1N1 pandemic almost one year before onset. [ 1 ]
A method for identifying replikins was patented by Samuel and Elenore S. Bogoch in 2001. [ 2 ] The peptide group was first identified by a proprietary company called Replikins, which has trademarked the name "Replikin Count".
This biochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Replikins |
The replisome is a complex molecular machine that carries out replication of DNA . The replisome first unwinds double stranded DNA into two single strands. For each of the resulting single strands, a new complementary sequence of DNA is synthesized. The total result is formation of two new double stranded DNA sequences that are exact copies of the original double stranded DNA sequence. [ 1 ]
In terms of structure, the replisome is composed of two replicative polymerase complexes, one of which synthesizes the leading strand , while the other synthesizes the lagging strand . The replisome is composed of a number of proteins including helicase , RFC , PCNA , gyrase / topoisomerase , SSB / RPA , primase , DNA polymerase III , RNAse H , and DNA ligase .
For prokaryotes , each dividing nucleoid (the region containing genetic material, which is not enclosed in a nucleus) requires two replisomes for bidirectional replication . The two replisomes continue replication at both forks in the middle of the cell, and as the termination site replicates, the two replisomes separate from the DNA. Throughout replication, the replisomes remain at a fixed midcell location, attached to the membrane , and the template DNA is fed through them.
For eukaryotes , numerous replication bubbles form at origins of replication throughout the chromosome . As with prokaryotes, two replisomes are required, one at each replication fork located at the terminus of the replication bubble. Because of significant differences in chromosome size, and the associated complexities of highly condensed chromosomes, various aspects of the DNA replication process in eukaryotes, including the terminal phases, are less well-characterised than for prokaryotes.
The replisome is a system in which various factors work together to solve the structural and chemical challenges of DNA replication. Chromosome size and structure vary between organisms, but since DNA molecules are the reservoir of genetic information for all forms of life, many replication challenges and solutions are the same for different organisms. As a result, the replication factors that solve these problems are highly conserved in terms of structure, chemistry, functionality, or sequence. The general structural and chemical challenges, discussed in the sections below, include unwinding of the duplex, stabilisation of the exposed single strands, priming of synthesis, processivity, and replication accuracy.
In general, the challenges of DNA replication involve the structure of the molecules, the chemistry of the molecules, and, from a systems perspective, the underlying relationships between the structure and the chemistry.
Many of the structural and chemical problems associated with DNA replication are managed by molecular machinery that is highly conserved across organisms. This section discusses how replisome factors solve the structural and chemical challenges of DNA replication.
DNA replication begins at sites called origins of replication. In organisms with small genomes and simple chromosome structure, such as bacteria, there may be only a few origins of replication on each chromosome. Organisms with large genomes and complex chromosome structure, such as humans, may have hundreds, or even thousands, of origins of replication spread across multiple chromosomes.
DNA structure varies with time, space, and sequence, and it is thought that these variations, in addition to their role in gene expression, also play active roles in replisome assembly during DNA synthesis. Replisome assembly at an origin of replication is roughly divided into three phases.
In bacteria, the initiator protein DnaA binds the origin (oriC), the DnaB helicase is loaded onto each strand with the help of DnaC, and primase and the replicative polymerase are then recruited. In eukaryotes, the origin recognition complex (ORC), together with Cdc6 and Cdt1, loads the Mcm2-7 helicase onto the origin during licensing; the helicase is subsequently activated, and the replicative polymerases are recruited.
For both bacteria and eukaryotes, the next stage is generally referred to as 'elongation', and it is during this phase that the majority of DNA synthesis occurs.
DNA is a duplex formed by two anti-parallel strands. As demonstrated by Meselson and Stahl , DNA replication is semi-conservative: during replication the original DNA duplex is separated into two strands (referred to as the leading and lagging strand templates), and each becomes part of a new DNA duplex. Factors generically referred to as helicases unwind the duplex.
Helicase is an enzyme that breaks the hydrogen bonds between the base pairs in the middle of the DNA duplex. Its doughnut-like structure wraps around DNA and separates the strands ahead of DNA synthesis. In eukaryotes, the Mcm2-7 complex acts as the helicase, though exactly which subunits are required for helicase activity is not entirely clear. [ 2 ] This helicase translocates in the same direction as the DNA polymerase (3' to 5' with respect to the template strand). In prokaryotic organisms, the helicases are better identified and include DnaB , which moves 5' to 3' on the strand opposite the DNA polymerase.
As helicase unwinds the double helix, topological changes induced by the rotational motion of the helicase lead to supercoil formation ahead of the helicase (similar to what happens when you twist a piece of thread).
Gyrase (a form of topoisomerase ) relaxes and undoes the supercoiling caused by helicase. It does this by cutting the DNA strands, allowing it to rotate and release the supercoil, and then rejoining the strands. Gyrase is most commonly found upstream of the replication fork, where the supercoils form.
Single-stranded DNA is highly unstable and can form hydrogen bonds with itself that are referred to as 'hairpins' (or the single strand can improperly bond to the other single strand). To counteract this instability, single-strand binding proteins (SSB in prokaryotes and Replication protein A in eukaryotes) bind to the exposed bases to prevent improper ligation.
Considering each strand as a dynamic, stretchy string makes the structural potential for improper ligation apparent; the underlying chemistry of the problem is the potential for hydrogen bond formation between unrelated base pairs. Binding proteins stabilise the single strand and protect it from damage caused by unlicensed chemical reactions. The combination of a single strand and its binding proteins serves as a better substrate for replicative polymerases than a naked single strand (binding proteins provide an extra thermodynamic driving force for the polymerisation reaction); strand binding proteins are removed by replicative polymerases as synthesis proceeds.
From both a structural and chemical perspective, a single strand of DNA by itself (and the associated single strand binding proteins) is not suitable for polymerisation. This is because the chemical reactions catalysed by replicative polymerases require a free 3' OH in order to initiate nucleotide chain elongation. In terms of structure, the conformation of replicative polymerase active sites (which is highly related to the inherent accuracy of replicative polymerases) means these factors cannot start chain elongation without a pre-existing chain of nucleotides, because no known replicative polymerase can start chain elongation de novo.
Priming enzymes (which are DNA-dependent RNA polymerases ) solve this problem by creating an RNA primer on the leading and lagging strands. The leading strand is primed once, and the lagging strand is primed approximately every 1000 (± 200) base pairs (one primer for each Okazaki fragment on the lagging strand). Each RNA primer is approximately 10 bases long.
The primer–template junction presents a free 3' OH that is chemically suitable for the reaction catalysed by replicative polymerases, and its "overhang" configuration is structurally suitable for chain elongation by a replicative polymerase. Thus, replicative polymerases can begin chain elongation at the primer.
In prokaryotes, the primase creates an RNA primer at the beginning of the newly separated leading and lagging strands.
In eukaryotes, DNA polymerase alpha creates an RNA primer at the beginning of the newly separated leading and lagging strands, and, unlike primase, DNA polymerase alpha also synthesizes a short chain of deoxynucleotides after creating the primer.
Processivity refers to both speed and continuity of DNA replication, and high processivity is a requirement for timely replication. High processivity is in part ensured by ring-shaped proteins referred to as 'clamps' that help replicative polymerases stay associated with the leading and lagging strands. There are other variables as well: from a chemical perspective, strand binding proteins stimulate polymerisation and provide extra thermodynamic energy for the reaction. From a systems perspective, the structure and chemistry of many replisome factors (such as the AAA+ ATPase features of the individual clamp loading sub-units, along with the helical conformation they adopt), and the associations between clamp loading factors and other accessory factors, also increases processivity.
To this point, according to research by Kuriyan et al., [ 3 ] due to their role in recruiting and binding other factors such as priming enzymes and replicative polymerases, clamp loaders and sliding clamps are at the heart of the replisome machinery. Research has found that clamp loading and sliding clamp factors are absolutely essential to replication, which explains the high degree of structural conservation observed for clamp loading and sliding clamp factors. This architectural and structural conservation is seen in organisms as diverse as bacteria, phages, yeast, and humans. That such a significant degree of structural conservation is observed without sequence homology further underpins the significance of these structural solutions to replication challenges.
Clamp loader is a generic term that refers to replication factors called gamma (bacteria) or RFC (eukaryotes). The combination of template DNA and primer RNA is referred to as ' A-form DNA ', and it is thought that clamp-loading replication proteins (helical heteropentamers) preferentially associate with A-form DNA because of its shape (the structure of the major/minor groove) and chemistry (patterns of hydrogen bond donors and acceptors). [ 3 ] [ 4 ] Thus, clamp-loading proteins associate with the primed region of the strand, which triggers hydrolysis of ATP and provides the energy to open the clamp and attach it to the strand. [ 3 ] [ 4 ]
Sliding clamp is a generic term that refers to ring-shaped replication factors called beta (bacteria) or PCNA (eukaryotes and archaea). Clamp proteins attract and tether replicative polymerases, such as DNA polymerase III, in order to extend the amount of time that a replicative polymerase stays associated with the strand. From a chemical perspective, the clamp has a slightly positive charge at its centre that is a near perfect match for the slightly negative charge of the DNA strand.
In some organisms, the clamp is a dimer, and in other organisms the clamp is a trimer. Regardless, the conserved ring architecture allows the clamp to enclose the strand.
Replicative polymerases form an asymmetric dimer at the replication fork by binding to sub-units of the clamp loading factor. This asymmetric conformation is capable of simultaneously replicating the leading and lagging strands, and the collection of factors that includes the replicative polymerases is generally referred to as a holoenzyme . However, significant challenges remain: the leading and lagging strands are anti-parallel. This means that nucleotide synthesis on the leading strand naturally occurs in the 5' to 3' direction. However, the lagging strand runs in the opposite direction and this presents quite a challenge since no known replicative polymerases can synthesise DNA in the 3' to 5' direction.
The dimerisation of the replicative polymerases solves the problems related to efficient synchronisation of leading and lagging strand synthesis at the replication fork, but the tight spatial-structural coupling of the replicative polymerases, while solving the difficult issue of synchronisation, creates another challenge: dimerisation of the replicative polymerases at the replication fork means that nucleotide synthesis for both strands must take place at the same spatial location, despite the fact that the lagging strand must be synthesised backwards relative to the leading strand. Lagging strand synthesis takes place after the helicase has unwound a sufficient quantity of the lagging strand, and this "sufficient quantity of the lagging strand" is polymerised in discrete nucleotide chains called Okazaki fragments.
Consider the following: the helicase continuously unwinds the parental duplex, but the lagging strand must be polymerised in the opposite direction. This means that, while polymerisation of the leading strand proceeds, polymerisation of the lagging strand only occurs after enough of the lagging strand has been unwound by the helicase. At this point, the lagging strand replicative polymerase associates with the clamp and primer in order to start polymerisation. During lagging strand synthesis, the replicative polymerase sends the lagging strand back toward the replication fork. The replicative polymerase disassociates when it reaches an RNA primer. Helicase continues to unwind the parental duplex, the priming enzyme affixes another primer, and the replicative polymerase reassociates with the clamp and primer when a sufficient quantity of the lagging strand has unwound.
Collectively, leading and lagging strand synthesis is referred to as being 'semidiscontinuous'.
Prokaryotic and eukaryotic organisms use a variety of replicative polymerases, some of which are well-characterised:
DNA polymerase III synthesizes leading and lagging strand DNA in bacteria.
DNA polymerase delta synthesizes lagging strand DNA in eukaryotes. [ 5 ] (It is thought to form an asymmetric dimer with DNA polymerase epsilon.) [ 6 ]
DNA polymerase epsilon synthesizes leading strand DNA in eukaryotes. [ 7 ] (It is thought to form an asymmetric dimer with DNA polymerase delta.) [ 5 ]
Although rare, incorrect base pairing polymerisation does occur during chain elongation. (The structure and chemistry of replicative polymerases mean that errors are unlikely, but they do occur.) Many replicative polymerases contain an "error correction" mechanism in the form of a 3' to 5' exonuclease domain that is capable of removing base pairs from the exposed 3' end of the growing chain. Error correction is possible because base pair errors distort the position of the magnesium ions in the polymerisation sub-unit, and the structural-chemical distortion of the polymerisation unit effectively stalls the polymerisation process by slowing the reaction. [ 8 ] Subsequently, the chemical reaction in the exonuclease unit takes over and removes nucleotides from the exposed 3' end of the growing chain. [ 9 ] Once an error is removed, the structure and chemistry of the polymerisation unit returns to normal and DNA replication continues. Working collectively in this fashion, the polymerisation active site can be thought of as the "proof-reader", since it senses mismatches, and the exonuclease is the "editor", since it corrects the errors.
Base pair errors distort the polymerase active site for between 4 and 6 nucleotides, which means, depending on the type of mismatch, there are up to six chances for error correction. [ 8 ] The error sensing and error correction features, combined with the inherent accuracy that arises from the structure and chemistry of replicative polymerases, contribute to an error rate of approximately 1 base pair mismatch in 10^8 to 10^10 base pairs.
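To put this error rate in perspective, a back-of-the-envelope calculation multiplies genome size by the per-base error rate; the genome sizes below are rough textbook figures used purely for illustration:

```python
# Expected mismatches per replication = genome size (bp) x error rate (per bp).
genomes = {
    "E. coli (~4.6e6 bp)": 4.6e6,
    "human (~6.4e9 bp, diploid)": 6.4e9,
}

for name, size in genomes.items():
    for rate in (1e-8, 1e-10):
        print(f"{name} at {rate:.0e} errors/bp: "
              f"{size * rate:.3g} expected mismatches per replication")
```

At the more permissive end of the range this predicts tens of uncorrected mismatches per human genome replication, which is why downstream mismatch repair pathways remain important.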
Errors can be classified in three categories: purine-purine mismatches, pyrimidine-pyrimidine mismatches, and pyrimidine-purine mismatches. The chemistry of each mismatch varies, and so does the behaviour of the replicative polymerase with respect to its mismatch sensing activity.
The replication of bacteriophage T4 DNA upon infection of E. coli is a well-studied DNA replication system. During the period of exponential DNA increase at 37 °C, the rate of elongation is 749 nucleotides per second. [ 11 ] The mutation rate during replication is 1.7 mutations per 10^8 base pairs. [ 12 ] Thus DNA replication in this system is both very rapid and highly accurate.
There are two problems after leading and lagging strand synthesis: RNA remains in the duplex and there are nicks between each Okazaki fragment in the lagging duplex. These problems are solved by a variety of DNA repair enzymes that vary by organism, including: DNA polymerase I, DNA polymerase beta, RNAse H, ligase, and DNA2. This process is well-characterised in bacteria and much less well-characterised in many eukaryotes.
In general, DNA repair enzymes complete the Okazaki fragments through a variety of means, including: base pair excision and 5' to 3' exonuclease activity that removes the chemically unstable ribonucleotides from the lagging duplex and replaces them with stable deoxynucleotides. This process is referred to as 'maturation of Okazaki fragments', and ligase (see below) completes the final step in the maturation process.
Primer removal and nick ligation can be thought of as DNA repair processes that produce a chemically stable, error-free duplex. On this point: with respect to the chemistry of an RNA-DNA duplex, in addition to the presence of uracil, the presence of ribose (which has a reactive 2' OH) makes the duplex much less chemically stable than a duplex containing only deoxyribose (which has a non-reactive 2' H).
DNA polymerase I is an enzyme that repairs DNA.
RNAse H is an enzyme that removes RNA from an RNA-DNA duplex.
After DNA repair factors replace the ribonucleotides of the primer with deoxynucleotides, a single nick remains in the sugar-phosphate backbone between each Okazaki fragment in the lagging duplex. An enzyme called DNA ligase seals each nick by forming a phosphodiester bond between the adjoining Okazaki fragments. The structural and chemical aspects of this overall primer-replacement process, generally referred to as 'nick translation', exceed the scope of this article.
Replication stress can result in a stalled replication fork. One type of replicative stress results from DNA damage such as inter-strand cross-links (ICLs). An ICL can block replicative fork progression due to failure of DNA strand separation. In vertebrate cells, replication of an ICL-containing chromatin template triggers recruitment of more than 90 DNA repair and genome maintenance factors. [ 13 ] These factors include proteins that perform sequential incisions and homologous recombination .
Katherine Lemon and Alan Grossman showed using Bacillus subtilis that replisomes do not move like trains along a track but DNA is actually fed through a stationary pair of replisomes located at the cell membrane. In their experiment, the replisomes in B. subtilis were each tagged with green fluorescent protein, and the location of the complex was monitored in replicating cells using fluorescence microscopy . If the replisomes moved like a train on a track, the polymerase-GFP protein would be found at different positions in each cell. Instead, however, in every replicating cell, replisomes were observed as distinct fluorescent foci located at or near midcell. Cellular DNA stained with a blue fluorescent dye (DAPI) clearly occupied most of the cytoplasmic space. [ 14 ] | https://en.wikipedia.org/wiki/Replisome |
In neuroscience , repolarization refers to the change in membrane potential that returns it to a negative value just after the depolarization phase of an action potential which has changed the membrane potential to a positive value. The repolarization phase usually returns the membrane potential back to the resting membrane potential . The efflux of potassium (K + ) ions results in the falling phase of an action potential. The ions pass through the selectivity filter of the K + channel pore.
Repolarization typically results from the movement of positively charged K + ions out of the cell. The repolarization phase of an action potential initially results in hyperpolarization , attainment of a membrane potential, termed the afterhyperpolarization , that is more negative than the resting potential . Repolarization usually takes several milliseconds. [ 1 ]
Repolarization is a stage of an action potential in which the cell experiences a decrease of voltage due to the efflux of potassium (K + ) ions along its electrochemical gradient. This phase occurs after the cell reaches its highest voltage from depolarization. After repolarization, the cell hyperpolarizes as it reaches resting membrane potential (−70 mV in a neuron). Sodium (Na + ) and potassium ions inside and outside the cell are moved by a sodium potassium pump, ensuring that electrochemical equilibrium remains unreached to allow the cell to maintain a state of resting membrane potential. [ 2 ] In the graph of an action potential, the hyperpolarization section looks like a downward dip that goes lower than the line of resting membrane potential. In this afterhyperpolarization (the downward dip), the cell sits at a more negative potential than rest (about −80 mV) due to the slow inactivation of voltage gated K + delayed rectifier channels, which are the primary K + channels associated with repolarization. [ 3 ] At these low voltages, all of the voltage gated K + channels close, and the cell returns to resting potential within a few milliseconds. A cell which is experiencing repolarization is said to be in its absolute refractory period. Other voltage gated K + channels which contribute to repolarization include A-type channels and Ca 2+ -activated K + channels . [ 4 ] Protein transport molecules are responsible for moving Na + out of the cell and K + into the cell to restore the original resting ion concentrations. [ 5 ]
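The ~−80 mV floor of the afterhyperpolarization is essentially the K⁺ equilibrium (Nernst) potential. A quick numerical check, using typical textbook ion concentrations (illustrative values, not taken from this article):

```python
# Nernst potential for K+ at body temperature; concentrations are typical
# textbook values, not measurements from this article.
import math

R, T, F, z = 8.314, 310.0, 96485.0, 1    # J/(mol K), K, C/mol, valence
K_out, K_in = 5.0, 140.0                 # extracellular / intracellular [K+], mM

E_K_mV = 1000.0 * (R * T) / (z * F) * math.log(K_out / K_in)
print(f"E_K ≈ {E_K_mV:.1f} mV")          # ≈ -89 mV, below the -70 mV resting potential
```

With these values the K⁺ equilibrium potential comes out near −89 mV, which is why a membrane dominated by open K⁺ channels settles below the resting potential.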
Blockages in repolarization can arise due to modifications of the voltage-gated K + channels. This is demonstrated by selectively blocking voltage gated K + channels with the antagonist tetraethylammonium (TEA). By blocking the channel, repolarization is effectively stopped. [ 6 ] Dendrotoxins are another example of a selective pharmacological blocker for voltage gated K + channels. The lack of repolarization means that the neuron stays at a high voltage, which slows sodium channel deactivation to a point where there is not enough inward Na + current to depolarize and sustain firing. [ 7 ]
The voltage gated K + channel comprises six transmembrane helices spanning the lipid bilayer . The voltage sensitivity of this channel is mediated by four of these transmembrane domains (S1–S4) – the voltage sensing domain. The other two domains (S5, S6) form the pore through which ions traverse. [ 8 ] Activation and deactivation of the voltage gated K + channel is triggered by conformational changes in the voltage sensing domain. Specifically, the S4 domain moves such that it activates and deactivates the pore. During activation, there is outward S4 motion, causing tighter VSD-pore linkage. Deactivation is characterized by inward S4 motion. [ 9 ]
The switch from depolarization into repolarization is dependent on the kinetic mechanisms of both voltage gated K + and Na + channels . Although both voltage gated Na + and K + channels activate at roughly the same voltage (−50 mV), Na + channels have faster kinetics and activate/inactivate much more quickly. [ 10 ] Repolarization occurs as the influx of Na + decreases (the channels inactivate) and the efflux of K + ions increases as its channels open. [ 11 ] The decreased conductance of sodium ions and increased conductance of potassium ions cause the cell's membrane potential to return very quickly to, and then past, the resting membrane potential; the resulting hyperpolarization arises because the potassium channels close slowly, allowing potassium to continue flowing out after the resting membrane potential has been reached. [ 10 ]
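The kinetics described above are captured quantitatively by the classic Hodgkin–Huxley model. The sketch below — a minimal forward-Euler integration with the standard textbook squid-axon parameters, assumed here rather than taken from this article — shows the delayed-rectifier term n⁴ rising as Na⁺ inactivation (h) falls, which is exactly the hand-off from depolarization to repolarization described in this section:

```python
# Minimal Hodgkin-Huxley simulation (standard squid-axon parameters; forward
# Euler). Illustrates K+ conductance (n^4) taking over from Na+ as h inactivates.
import math

C_m = 1.0                                # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3        # peak conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.387    # reversal potentials, mV

def a_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * math.exp(-(V + 65.0) / 80.0)
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * math.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))

V, dt = -65.0, 0.01                      # start at rest (mV); step in ms
n = a_n(V) / (a_n(V) + b_n(V))           # gates at their resting steady states
m = a_m(V) / (a_m(V) + b_m(V))
h = a_h(V) / (a_h(V) + b_h(V))

for step in range(int(20.0 / dt)):       # 20 ms of simulated time
    t = step * dt
    I_ext = 10.0 if 1.0 <= t < 2.0 else 0.0          # brief stimulus, uA/cm^2
    I_Na = g_Na * m ** 3 * h * (V - E_Na)            # fast inward current
    I_K = g_K * n ** 4 * (V - E_K)                   # delayed-rectifier outward
    I_L = g_L * (V - E_L)
    V += dt * (I_ext - I_Na - I_K - I_L) / C_m
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    if step % 200 == 0:                              # print every 2 ms
        print(f"t={t:5.1f} ms  V={V:8.2f} mV  n^4={n**4:.3f}  h={h:.3f}")
```

Running the loop shows the spike, the rapid fall back through rest, and the brief afterhyperpolarization as n decays slowly back to its resting value.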
Following the action potential, characteristically generated by the influx of Na + through voltage gated Na + channels, there is a period of repolarization in which the Na + channels are inactivated while K + channels are activated. Further study of K + channels shows that there are four types which influence the repolarization of the cell membrane to re-establish the resting potential. The four types are K v 1, K v 2, K v 3 and K v 4. The K v 1 channel primarily influences the repolarization of the axon. The K v 2 channel is characteristically activated more slowly. The K v 4 channels are characteristically activated rapidly. When K v 2 and K v 4 channels are blocked, the action potential predictably widens. [ 12 ] The K v 3 channels open at a more positive membrane potential and deactivate 10 times faster than the other K v channels. These properties allow for the high-frequency firing that mammalian neurons require. Areas with dense K v 3 channels include the neocortex , basal ganglia , brain stem and hippocampus , as these regions create microsecond-scale action potentials that require quick repolarization. [ 13 ]
Utilizing voltage-clamp data from experiments based on rodent neurons, the K v 4 channels are associated with the primary repolarization conductance following the depolarization period of a neuron. When the K v 4 channel is blocked, the action potential becomes broader, resulting in an extended repolarization period, delaying the neuron from being able to fire again. The rate of repolarization closely regulates the amount of Ca 2+ ions entering the cell. When large quantities of Ca 2+ ions enter the cell due to extended repolarization periods, the neuron may die, leading to the development of stroke or seizures. [ 12 ]
The K v 1 channels are found to contribute to repolarization of pyramidal neurons , likely associated with an upregulation of the K v 4 channels. The K v 2 channels were not found to contribute to repolarization rate as blocking these channels did not result in changes in neuron repolarization rates. [ 12 ]
Another type of K + channel that helps to mediate repolarization in the human atria is the SK channel , a K + channel activated by increases in Ca 2+ concentration. "SK channel" stands for small conductance calcium-activated potassium channel; these channels are found in the heart. SK channels act specifically in the right atrium of the heart, and have not been found to be functionally important in the ventricles of the human heart. The channels are active during repolarization as well as during the atrial diastole phase, when the current undergoes hyperpolarization. [ 14 ] Specifically, these channels are activated when Ca 2+ binds to calmodulin (CaM), because the N-lobe of CaM interacts with the channel's S4/S5 linker to induce a conformational change. [ 15 ] When these K + channels are activated, K + ions rush out of the cell during the peak of its action potential, causing the cell to repolarize as the continuing efflux of K + ions exceeds the influx of Ca 2+ ions. [ 16 ]
In the human ventricles , repolarization can be seen on an ECG ( electrocardiogram ) via the J-wave (Osborn), ST segment , T wave and U wave . Due to the complexity of the heart, specifically how it contains three layers of cells ( endocardium , myocardium and epicardium ), there are many physiological changes affecting repolarization that will also affect these waves. [ 17 ] Apart from changes in the structure of the heart that affect repolarization, many pharmaceuticals have the same effect.
Repolarization is also altered by the location and duration of the initial action potential . In action potentials stimulated on the epicardium, it was found that the duration of the action potential needed to be 40–60 msec to give a normal, upright T-wave, whereas a duration of 20–40 msec would give an isoelectric wave, and anything under 20 msec would result in a negative T-wave. [ 18 ]
Early repolarization is a phenomenon seen in ECG recordings of ventricular cells where there is an elevated ST segment, also known as a J wave. The J wave is prominent when there is a larger outward current in the epicardium compared to the endocardium. [ 19 ] It has historically been considered a normal variant in cardiac rhythm, but recent studies show that it is related to an increased risk of cardiac arrest. Early repolarization occurs mainly in males and is associated with a larger potassium current caused by the hormone testosterone . Additionally, although the degree of risk is unknown, African American individuals seem to exhibit early repolarization more often. [ 20 ]
As mentioned in the previous section, early repolarization appears as elevated wave segments on ECGs. Recent studies have shown a connection between early repolarization and sudden cardiac death , identified as early repolarization syndrome. The condition manifests as ventricular fibrillation in the absence of other structural heart defects, together with an early repolarization pattern, which can be seen on ECG. [ 21 ]
The primary root of early repolarization syndrome stems from malfunctions of electrical conductance in ion channels, which may be due to genetic factors. Malfunctions of the syndrome include fluctuating sodium, potassium, and calcium currents. Changes in these currents may result in overlap of myocardial regions undergoing different phases of the action potential simultaneously, leading to risk of ventricular fibrillation and arrhythmias . [ 22 ]
Upon being diagnosed, most individuals do not need immediate intervention, as early repolarization on an ECG does not indicate any life-threatening medical emergency. [ 23 ] Three to thirteen percent of healthy individuals have been observed to have early repolarization on an ECG. [ 21 ] However, for patients who display early repolarization after surviving an event of early repolarization syndrome (an aborted sudden cardiac death), an implantable cardioverter-defibrillator (ICD) is strongly recommended. [ 23 ] In addition, a patient may be more prone to atrial fibrillation if the individual has early repolarization syndrome and is under sixty years of age. [ 21 ]
Patients who suffer from obstructive sleep apnea can experience impaired cardiac repolarization, increasing the morbidity and mortality of the condition greatly. Especially at higher altitudes, patients are much more susceptible to repolarization disturbances. This can be somewhat mitigated through the use of medications such as acetazolamide , but the drugs do not provide sufficient protection. Acetazolamide and similar drugs are known to be able to improve the oxygenation and sleep apnea for patients in higher altitudes, but the benefits of the drug have been observed only when traveling at altitudes temporarily, not for people who remain at a higher altitude for a longer time. [ 24 ] | https://en.wikipedia.org/wiki/Repolarization |
Reporter genes are molecular tools widely used in molecular biology , genetics , and biotechnology to study gene function, expression patterns, and regulatory mechanisms. These genes encode proteins that produce easily detectable signals, such as fluorescence , luminescence , or enzymatic activity, allowing researchers to monitor cellular processes in real-time. Reporter genes are often fused to regulatory sequences of genes of interest, enabling scientists to analyze promoter activity, transcriptional regulation, and signal transduction pathways. Common reporter gene systems include green fluorescent protein (GFP), β-galactosidase (lacZ), luciferase , and chloramphenicol acetyltransferase (CAT), each offering distinct advantages depending on the experimental application. [ 1 ] Their versatility makes reporter genes invaluable in fields such as drug discovery, gene therapy , and synthetic biology . [ 1 ]
To introduce a reporter gene into an organism, scientists place the reporter gene and the gene of interest in the same DNA construct to be inserted into the cell or organism. For bacteria or prokaryotic cells in culture, this is usually in the form of a circular DNA molecule called a plasmid . For viruses , this is known as a viral vector . It is important to use a reporter gene that is not natively expressed in the cell or organism under study, since the expression of the reporter is being used as a marker for successful uptake of the gene of interest. [ 1 ]
Commonly used reporter genes that induce visually identifiable characteristics usually involve fluorescent and luminescent proteins. Examples include the gene that encodes jellyfish green fluorescent protein (GFP), which causes cells that express it to glow green under blue or ultraviolet light, the enzyme luciferase , which catalyzes a reaction with luciferin to produce light, [ 2 ] and the red fluorescent protein from the gene dsRed . [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] The GUS gene has been commonly used in plants, but luciferase and GFP are becoming more common. [ 8 ] [ 9 ]
A common reporter in bacteria is the E. coli lacZ gene, which encodes the protein beta-galactosidase . [ 1 ] This enzyme causes bacteria expressing the gene to appear blue when grown on a medium that contains the substrate analog X-gal . An example of a selectable marker, which is also a reporter in bacteria, is the chloramphenicol acetyltransferase (CAT) gene, which confers resistance to the antibiotic chloramphenicol . [ 10 ]
Many methods of transfection and transformation – two ways of expressing a foreign or modified gene in an organism – are effective in only a small percentage of a population subjected to the techniques. Thus, a method for identifying those few successful gene uptake events is necessary. Reporter genes used in this way are normally expressed under their own promoter (a DNA region that initiates gene transcription) independent from that of the introduced gene of interest; the reporter gene can be expressed constitutively ("always on") or inducibly . This independence is advantageous when the gene of interest is expressed under specific or hard-to-access conditions. [ 1 ]
Reporter genes employ diverse mechanisms to visualize or quantify gene activity:
In the case of selectable-marker reporters such as CAT , the transfected population can be grown on a chloramphenicol-containing substrate. Only cells with the CAT gene survive, confirming successful transformation. [ 10 ]
Reporter genes can be used to assay for the expression of a gene of interest that is otherwise difficult to assay quantitatively. [ 1 ] Reporter genes ideally produce a protein that has little obvious or immediate effect on the cell culture or organism, and ideally are not present in the native genome, so that reporter expression can be attributed solely to the expression of the gene of interest. [ 1 ] [ 16 ]
To activate reporter genes, they can be expressed constitutively , where they are directly attached to the gene of interest to create a gene fusion . [ 17 ] This method is an example of using cis -acting elements where the two genes are under the same promoter elements and are transcribed into a single messenger RNA molecule. The mRNA is then translated into protein. It is important that both proteins be able to properly fold into their active conformations and interact with their substrates despite being fused. In building the DNA construct, a segment of DNA coding for a flexible polypeptide linker region is usually included so that the reporter and the gene product will only minimally interfere with one another. [ 18 ] [ 19 ] Reporter genes can also be expressed by induction during growth. In these cases, trans -acting elements, such as transcription factors are used to express the reporter gene. [ 20 ] [ 21 ]
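A toy illustration of the in-frame fusion idea (all sequences here are placeholders; the (Gly₄Ser)₃ linker is a commonly used flexible linker, assumed here rather than taken from the text):

```python
# Sketch of assembling an in-frame reporter fusion: gene of interest + flexible
# linker + reporter, dropping the upstream stop codon so translation reads through.
LINKER = "GGTGGAGGCGGTTCA" * 3          # encodes (Gly Gly Gly Gly Ser) x 3

def make_fusion(gene_orf, reporter_orf):
    stop_codons = {"TAA", "TAG", "TGA"}
    if gene_orf[-3:] in stop_codons:    # remove the stop codon of the first gene
        gene_orf = gene_orf[:-3]
    fusion = gene_orf + LINKER + reporter_orf
    assert len(fusion) % 3 == 0, "fusion must stay in frame"
    return fusion

print(make_fusion("ATGGCTGCA" + "TAA", "ATGAGTAAAGGAGAA" + "TAA"))
```

The single assertion makes the key design constraint explicit: both coding regions and the linker must keep the reading frame intact so both fused proteins are translated correctly.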
Reporter gene assays have been increasingly used in high throughput screening (HTS) to identify small molecule inhibitors and activators of protein targets and pathways for drug discovery and chemical biology . Because the reporter enzymes themselves (e.g. firefly luciferase ) can be direct targets of small molecules and confound the interpretation of HTS data, novel coincidence reporter designs incorporating artifact suppression have been developed. [ 22 ] [ 23 ]
Reporter genes can be used to assay for the activity of a particular promoter in a cell or organism. [ 24 ] In this case there is no separate "gene of interest"; the reporter gene is simply placed under the control of the target promoter and the reporter gene product's activity is quantitatively measured. The results are normally reported relative to the activity under a "consensus" promoter known to induce strong gene expression. [ 25 ]
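A minimal sketch of how such promoter-activity readouts are commonly quantified — a generic dual-reporter normalization, with all names and numbers illustrative rather than drawn from this article:

```python
# Normalize the test reporter to a co-transfected control reporter (to correct
# for transfection efficiency), then express activity relative to a strong
# reference ("consensus") promoter construct. Values are invented.
def relative_promoter_activity(test, test_control, ref, ref_control):
    return (test / test_control) / (ref / ref_control)

# e.g. firefly luciferase driven by the target promoter, renilla as control:
activity = relative_promoter_activity(12_500, 4_800, 95_000, 5_100)
print(f"activity = {activity:.2%} of the reference promoter")
```

The double ratio is the point: dividing by the co-transfected control cancels well-to-well differences in uptake, and dividing by the reference construct gives the relative activity figure the text describes.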
While reporter gene technology has become an essential component of molecular biology, its application still has limitations. One primary concern is the influence of genomic context on reporter expression. Reporter genes integrated into the genome can be subject to position-effect variegation , where the surrounding chromatin structure influences transcriptional activity. This can lead to inconsistent expression and complicate the interpretation of results, especially in stable cell lines and transgenic organisms. [ 26 ] Additionally, reporter expression may not always accurately reflect the activity of the endogenous gene of interest due to differences in post-transcriptional regulation , mRNA stability, or translational efficiency . [ 27 ]
Another common limitation is the cellular burden that reporter expression may impose. High levels of reporter protein production, such as fluorescent proteins or luciferases, can divert cellular resources, potentially impacting normal metabolism or physiology. This is particularly problematic in sensitive systems like stem cells or primary cell cultures , where even subtle changes in metabolism can influence cell behavior. [ 28 ] Additionally, some reporter systems, like luciferase assays, require the addition of exogenous substrates (e.g., luciferin), which adds complexity and may reduce reproducibility, particularly in live animal models where substrate availability can vary. [ 28 ]
To address these challenges, several innovations have improved the reliability and flexibility of reporter gene technologies. One advancement involves the use of the 2A peptide , which allows the co-expression of multiple proteins from a single transcript without requiring a direct fusion. This approach enables the simultaneous expression of a gene of interest and a reporter while preserving the function of both. [ 29 ] Additionally, split-reporter systems, which produce a functional signal only when two proteins of interest interact, have become widely used in studies of protein–protein interactions due to their low background activity and high specificity. [ 30 ]
Common Research Applications
The most common application of reporter genes has been the identification of cis- and trans-acting elements. Through fusion to the promoter region of candidate elements, the change in fluorescence is measured, allowing transcriptional activity to be tracked. [ 31 ] This provides useful information for understanding the pathways these elements are involved in and their regulatory roles in cell development and growth . [ 31 ] Immune responses are also a common application of reporter genes and have benefited greatly from their use. They have allowed further understanding of cell proliferation and differentiation into B-cells and T-cells during immune responses, and have contributed to understanding activation through tracking of cytokine signaling pathways. [ 32 ]
Reporter cell lines have also emerged with the discovery and use of reporter genes. [ 31 ] These cell lines are labelled with reporter genes to allow fluorescent detection, helping to identify proteins used in cellular pathways and to determine protein localization . [ 30 ] This provides a simple way to study protein behaviour that does not require further experimentation to introduce and fuse a reporter gene, since the reporter gene is already present in the cell line.
Common Medical Applications
Tracking expression has allowed multiple investigations into the progression of diseased cells . [ 33 ] Reporter genes have been shown to provide critical insight into genes upregulated in cancer regulatory pathways, as well as into the identification of oncogenes and tumor suppressor genes . These have been used in further research into the development of therapeutics to stop disease progression and metastasis. [ 33 ] Gene therapy has also been tracked through the use of reporter genes. This allows gene therapy vectors to be monitored to see whether they are achieving the intended results, as well as to monitor patient safety over short and long term periods. [ 34 ] Therapeutics have also benefited from reporter genes, such as a dual-reporter system developed for CRISPR/Cas9 models to monitor the progression and success of gene editing. [ 35 ]
A more complex use of reporter genes on a large scale is in two-hybrid screening , which aims to identify proteins that natively interact with one another in vivo. The yeast two-hybrid (Y2H) system , developed in the late 1980s and early 1990s, was an immense advancement in the use of reporter genes to study protein-protein interactions in vivo . [ 36 ] This technique takes advantage of transcription factors' modular nature, which often consists of separate DNA-binding and activation domains . By genetically fusing two proteins of interest to these domains, researchers can detect physical interactions between them through the activation of a downstream reporter gene. Due to the simple genetic nature of the Y2H system, this technique significantly increased the accessibility of protein-protein interaction studies without the requirement of protein purification or complex biochemical assays . Experimental Y2H data have played a pivotal role in building large-scale synthetic human interactomes and in dissecting mechanisms in human disease. [ 37 ] [ 38 ]
However, there are still some limitations. Y2H sometimes detects interactions that do not occur naturally or fails to detect weak or transient interactions, and because it takes place in an artificial setting, Y2H misses key factors like post-translational modifications and compartmentalization . For example, Y2H has been shown to generate false positives due to indirect interactions mediated by host proteins, as demonstrated in studies of cyanobacterial PipX interactions, where the self-interaction of PipX was found to depend on PII homologues from the host organism rather than on a direct interaction. [ 39 ]
Massively parallel reporter assays (MPRAs) and machine learning are newer approaches to studying gene regulation that utilize reporter genes. One major use is in synthetic biology and gene therapy, where researchers can design better regulatory elements to control gene expression. [ 40 ] For example, deep learning models trained on MPRA data have been used to optimize 5' untranslated regions (UTRs) for mRNA translation, enabling tailored designs that enhance gene-editing efficiency in the therapeutic context. This could make mRNA-based treatments more effective, as MPRAs also help identify how genetic variants affect gene expression, which is used in precision medicine and in developing personalized treatments. [ 40 ]
Machine learning models trained on MPRA data can predict how different sequences impact gene activity, making it easier to design reporter genes that respond in specific ways. Combining MPRAs with next-gen sequencing also makes reporter gene experiments faster and more scalable. These advances could even improve mRNA-based vaccines and therapeutics by optimizing untranslated regions (UTRs) to boost stability and translation. For instance, modular MPRAs have uncovered context-specific regulatory sequences linked to type 2 diabetes, revealing enhancer-promoter interactions dependent on cell-specific transcription factors like HNF1. [ 41 ] Similarly, MPRA screens of cardiac enhancer variants have pinpointed functional noncoding sequences influencing QT interval variability, directly linking genetic variation to disease-associated gene dysregulation. [ 42 ] | https://en.wikipedia.org/wiki/Reporter_gene |
Reporter virus particles (RVPs) are replication -incompetent virus particles engineered to express one or more reporter genes upon infecting susceptible cells. [ 1 ] [ 2 ] [ 3 ] Since the RVP genome lacks genes essential for viral replication, RVPs are capable of only a single round of infection. [ 4 ] [ 5 ] [ 6 ] [ 7 ] Thus they are safe to work with under BSL-2 conditions, enabling the study of highly pathogenic viruses using standard laboratory facilities. [ 4 ] [ 5 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] Expression of a reporter such as luciferase can provide a quantitative readout of infection. [ 4 ] [ 5 ] [ 10 ] With proper design and quality control, RVPs remain stable under common assay conditions and yield reproducible results that correlate with those obtained from live virus. [ 4 ] [ 5 ] These qualities make RVPs a safer and faster alternative to plaque assays, and especially well-suited for high-throughput applications. [ 4 ] [ 5 ] [ 9 ] [ 11 ] [ 12 ] RVPs offer flexibility for different uses, as they are antigenically identical to wild-type virus, and can be engineered with various proteins or express mutant envelopes to study infectivity or antigenicity. [ 12 ] [ 13 ]
RVPs are most commonly used in neutralization assays, which measure the ability of serum or antibodies to prevent virus infectivity in vitro, with applications in vaccine development, antibody discovery, and serological testing. [ 14 ] A related assay tests for antibody-dependent enhancement (ADE), a phenomenon where non-neutralizing antibodies against viruses can increase infectivity through their binding to the cellular Fc receptor, aiding entry of the virus into host cells. [ 15 ]
Depending on the virus of interest and the desired application, RVPs can be pseudotypes, containing a heterologous self-assembling core (typically of lentiviral origin) together with native envelope proteins corresponding to the studied virus. [ 2 ] [ 3 ] [ 16 ] This type of RVP facilitates exceptional reliability and reproducibility of neutralization assay results, while maintaining antigenicity and safety. [ 16 ] Alternatively, for structurally complex viruses such as dengue and Zika viruses, RVPs are engineered to be antigenically identical to wild-type virus, using all of the structural proteins of the native virus. [ 1 ] [ 4 ] [ 5 ]
RVP production requires optimization of several elements, such as expression constructs, cell lines, and processing steps, to reach a yield sufficient for downstream applications and reproducibility across production lots. [ 2 ] Although ideal for studying the effects of the immune response on virus entry, RVPs are replication-incompetent, and therefore typically do not allow study of the later stages of the viral life cycle. RVPs formed by pseudotyping contain the native form of the viral Envelope (or Spike) protein, but may not contain other structural elements from the original virus. Results obtained with RVPs are often compared to those obtained with live virus. [ 17 ] | https://en.wikipedia.org/wiki/Reporter_virus_particles |
As part of the Apollo 12 mission in November 1969, the camera from the Surveyor 3 probe was brought back from the Moon to Earth. On analyzing the camera, it was found that the common bacterium Streptococcus mitis was alive on the camera. NASA reasoned that the camera was not sterilized on Earth before the space probe 's launch in April 1967, two and a half years earlier. [ 1 ] However, later study showed that the scientists analysing the camera on return to Earth used procedures that were inadequate to prevent recontamination after return to Earth, for instance with their arms exposed, not covering their entire bodies as modern scientists would do. There may also have been possibilities for contamination during the return mission as the camera was returned in a porous bag rather than the airtight containers used for lunar sample return. [ 2 ] As a result, the source of the contamination remains controversial.
Since the Apollo Program, there has been at least one independent investigation into the validity of the NASA claim. Leonard D. Jaffe , a Surveyor program scientist and custodian of the Surveyor 3 parts brought back from the Moon, stated in a letter to the Planetary Society that a member of his staff reported that a "breach of sterile procedure" took place at just the right time to produce a false positive result. One of the implements being used to scrape samples off the Surveyor parts was laid down on a non-sterile laboratory bench, and then was used to collect surface samples for culturing. Jaffe wrote, "It is, therefore, quite possible that the microorganisms were transferred to the camera after its return to Earth, and that they had never been to the Moon." [ 3 ] In 2007, NASA funded an archival study that sought the film of the camera-body microbial sampling, to confirm the report of a breach in sterile technique.
The bacterial test is now non-repeatable because the parts were subsequently taken out of quarantine and fully re-exposed to terrestrial conditions (the Surveyor 3 camera is now on display in the Smithsonian Air and Space Museum in Washington, D.C. ).
The Surveyor 3 camera was returned from the Moon in a nylon duffel bag , and was not in the type of sealed airtight metal container used to return lunar samples in the early Apollo missions. It is therefore possible that it was contaminated by the astronauts and the environment in the Apollo 12 capsule itself. [ 1 ]
In March 2011, three researchers co-authored a paper titled "A Microbe on the Moon? Surveyor III and Lessons Learned for Future Sample Return Missions" that assessed the validity of claims that the S. mitis samples found on the camera had indeed survived for nearly three years on the Moon. The paper concluded that the presence of microbes could more likely be attributed to poor clean room conditions rather than the survival of bacteria for three years in the harsh environment of the Moon. The paper also discussed the implication this incident would have for contamination control in future space missions. [ 2 ] [ 4 ]
Countervailing evidence against the secondary contamination hypothesis is the fact that, according to Lieutenant Colonel Fred Mitchell, lead author of the original 1971 paper, [ 5 ] there was a significant delay before the sampled culture began growing; this is consistent with the sampled bacteria consisting of dormant cells, but not if the sampled culture was the result of fresh contamination. In addition, according to Mitchell, the microbes clung exclusively to the foam during culturing, which would not have happened had there been contamination. [ 2 ] Furthermore, if fresh contamination had occurred, millions of individual bacteria and "a representation of the entire microbial population would be expected"; instead, only a few individual bacteria were sampled, and only from a single species. [ 5 ]
This subject was covered in the 2008 Discovery Channel documentary series When We Left Earth and in the Science Channel series NASA's Unexplained Files episode "Return of the Moon Bugs".
This article incorporates public domain material from Life Sciences Data Archive (LSDA) – Experiment: Surveyor 3 Streptococcus Mitis (APSTREPMIT) . National Aeronautics and Space Administration . | https://en.wikipedia.org/wiki/Reports_of_Streptococcus_mitis_on_the_Moon |
Walter Julius Reppe (29 July 1892 in Göringen – 26 July 1969 in Heidelberg ) was a German chemist . He is notable for his contributions to the chemistry of acetylene .
Walter Reppe began his study of the natural sciences at the University of Jena in 1911. Interrupted by the First World War , he obtained his doctorate in Munich in 1920.
In 1921, Reppe worked for BASF 's main laboratory. From 1923, he worked on the catalytic dehydration of formamide to prussic acid in the indigo laboratory, developing this procedure for industrial use. In 1924, he left research for 10 years, only resuming it in 1934.
Reppe began his interest in acetylene in 1928. Acetylene is a gas which can take part in many chemical reactions . However, it is explosive and accidents often occurred. Because of this danger, small quantities of acetylene were used at a time, and always without high pressures. In fact, it was forbidden to compress acetylene over 1.5 bar at BASF.
To work with acetylene safely, Reppe designed special test tubes, the so-called "Reppe glasses" — stainless steel spheres with screw-type cap, which permitted high pressure experiments. The efforts ended finally with a large number of interrelated reactions, known as Reppe chemistry .
The high pressure reactions catalysed by heavy metal acetylides , especially copper acetylide , or metal carbonyls are called Reppe chemistry . Reactions can be classified into four large classes:
This simple synthesis was used to prepare acrylic acid derivatives for the production of acrylic glass .
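For instance, the Reppe carbonylation (hydrocarboxylation) of acetylene — given here as the standard textbook equation over a nickel carbonyl catalyst, rather than quoted from this article — reads:

$$\mathrm{HC{\equiv}CH} + \mathrm{CO} + \mathrm{H_2O} \xrightarrow{\ \mathrm{Ni(CO)_4}\ } \mathrm{CH_2{=}CH{-}COOH}$$

With alcohols in place of water, the same reaction yields acrylic esters directly.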
If a competing ligand such as triphenylphosphine is present in sufficient proportion to occupy one coordination site, then room is left for only three acetylene molecules, and these come together to form benzene.
This reaction provided an unusual route to benzene and especially to cyclooctatetraene , which was difficult to prepare otherwise.
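The two cyclisations can be summarised as follows (standard textbook equations; the nickel cyanide catalyst for cyclooctatetraene is the classic Reppe condition, stated here from general chemistry knowledge):

$$3\,\mathrm{HC{\equiv}CH} \longrightarrow \mathrm{C_6H_6}\ \text{(benzene)}, \qquad 4\,\mathrm{HC{\equiv}CH} \xrightarrow{\ \mathrm{Ni(CN)_2}\ } \mathrm{C_8H_8}\ \text{(cyclooctatetraene)}$$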
Products from these four reaction types proved to be versatile intermediates from which lacquers, adhesives, foam materials, textile fibers, and pharmaceuticals could now be produced.
After the Second World War , Reppe led the research of BASF from 1949 up to his retirement in 1957. From 1952 to 1966, he also sat on the supervisory board. He was also a professor at the University of Mainz and TH Darmstadt from 1951 and 1952 respectively. Together with Otto Bayer and Karl Ziegler he received the Werner von Siemens Ring in 1960 for expanding the scientific knowledge on and for the technical development of new synthetic high-molecular materials.
Most of the industrial processes that were developed by Reppe and coworkers have been superseded, largely because the chemical industry has shifted from coal as feedstock to oil. Alkenes from thermal cracking are readily available, but acetylene is not.
Together with his contemporaries Otto Roelen , Karl Ziegler , Hans Tropsch , and Franz Fischer , Reppe was a leader in demonstrating the utility of metal-catalyzed reactions in large scale synthesis of organic compounds. The economic benefits demonstrated by this research motivated the eventual flowering of organometallic chemistry and its close connection to industry. | https://en.wikipedia.org/wiki/Reppe_synthesis |
In mathematics , a representation is a very general relationship that expresses similarities (or equivalences) between mathematical objects or structures . Roughly speaking, a collection Y of mathematical objects may be said to represent another collection X of objects, provided that the properties and relationships existing among the representing objects y i conform, in some consistent way, to those existing among the corresponding represented objects x i . More specifically, given a set Π of properties and relations , a Π -representation of some structure X is a structure Y that is the image of X under a homomorphism that preserves Π . The label representation is sometimes also applied to the homomorphism itself (such as group homomorphism in group theory ). [ 1 ] [ 2 ]
Perhaps the most well-developed example of this general notion is the subfield of abstract algebra called representation theory , which studies the representing of elements of algebraic structures by linear transformations of vector spaces . [ 2 ]
Although the term representation theory is well established in the algebraic sense discussed above, there are many other uses of the term representation throughout mathematics.
An active area of graph theory is the exploration of isomorphisms between graphs and other structures.
A key class of such problems stems from the fact that, like adjacency in undirected graphs , intersection of sets
(or, more precisely, non-disjointness ) is a symmetric relation .
This gives rise to the study of intersection graphs for innumerable families of sets. [ 3 ] One foundational result here, due to Paul Erdős and his colleagues, is that every n - vertex graph may be represented in terms of intersection among subsets of a set of size no more than n²/4. [ 4 ]
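To make "representation by intersections" concrete, here is a deliberately simple sketch (names illustrative): assign each edge its own token and give every vertex the set of tokens of its incident edges, so two vertex sets intersect exactly when the vertices are adjacent. This naive construction uses up to n(n−1)/2 tokens rather than the tighter n²/4 of the Erdős bound, which requires covering the edges by cliques:

```python
# Naive intersection representation: one token per edge; a vertex's set
# holds the tokens of its incident edges, so two vertex sets share a token
# exactly when the two vertices are adjacent.
def intersection_representation(vertices, edges):
    sets = {v: set() for v in vertices}
    for token, (u, v) in enumerate(edges):
        sets[u].add(token)
        sets[v].add(token)
    return sets

sets = intersection_representation("abcd", [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")])
print(sets["a"] & sets["b"])   # non-empty: a and b are adjacent
print(sets["a"] & sets["d"])   # empty set: a and d are not adjacent
```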
Representing a graph by such algebraic structures as its adjacency matrix and Laplacian matrix gives rise to the field of spectral graph theory . [ 5 ]
Dual to the observation above that every graph is an intersection graph is the fact that every partially ordered set (also known as poset) is isomorphic to a collection of sets ordered by the inclusion (or containment) relation ⊆.
Some posets that arise as the inclusion orders for natural classes of objects include the Boolean lattices and the orders of dimension n . [ 6 ]
Many partial orders arise from (and thus can be represented by) collections of geometric objects. Among them are the n -ball orders. The 1-ball orders are the interval-containment orders, and the 2-ball orders are the so-called circle orders —the posets representable in terms of containment among disks in the plane. A particularly nice result in this field is the characterization of the planar graphs , as those graphs whose vertex-edge incidence relations are circle orders. [ 7 ]
There are also geometric representations that are not based on inclusion. Indeed, one of the best studied classes among these are the interval orders , [ 8 ] which represent the partial order in terms of what might be called disjoint precedence of intervals on the real line : each element x of the poset is represented by an interval [x₁, x₂], such that for any y and z in the poset, y is below z if and only if y₂ < z₁.
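The defining condition translates directly into code; a small sketch with hypothetical intervals:

```python
# The interval-order relation: y is below z exactly when y's interval ends
# strictly before z's interval begins. Intervals here are illustrative.
def precedes(y, z):
    (y1, y2), (z1, z2) = y, z
    return y2 < z1

a, b, c = (0.0, 1.0), (2.0, 5.0), (3.0, 4.0)
print(precedes(a, b), precedes(a, c))   # True True: a ends before b and c begin
print(precedes(b, c), precedes(c, b))   # False False: b and c overlap, so incomparable
```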
In logic , the representability of algebras as relational structures is often used to prove the equivalence of algebraic and relational semantics . Examples of this include Stone's representation of Boolean algebras as fields of sets , [ 9 ] Esakia's representation of Heyting algebras as Heyting algebras of sets, [ 10 ] and the study of representable relation algebras and representable cylindric algebras . [ 11 ]
Under certain circumstances, a single function f : X → Y is at once an isomorphism from several mathematical structures on X . Since each of those structures may be thought of, intuitively, as a meaning of the image Y (one of the things that Y is trying to tell us), this phenomenon is called polysemy —a term borrowed from linguistics . Some examples of polysemy include: | https://en.wikipedia.org/wiki/Representation_(mathematics) |
In the mathematical field of representation theory , a representation of a Lie superalgebra is an action of a Lie superalgebra L on a Z 2 -graded vector space V , such that if A and B are any two pure elements of L and X and Y are any two pure elements of V , then

(c₁A + c₂B)[X] = c₁A[X] + c₂B[X]
A[c₁X + c₂Y] = c₁A[X] + c₂A[Y]
[A, B][X] = A[B[X]] − (−1)^{|A||B|} B[A[X]]

and A[X] is always pure, with grade |A| + |X| (mod 2).
Equivalently, a representation of L is a Z 2 -graded representation of the universal enveloping algebra of L which respects the third equation above.
A * Lie superalgebra is a complex Lie superalgebra equipped with an involutive antilinear map * such that * respects the grading and

[a, b]* = [b*, a*].
A unitary representation of such a Lie algebra is a Z 2 graded Hilbert space which is a representation of a Lie superalgebra as above together with the requirement that self-adjoint elements of the Lie superalgebra are represented by Hermitian transformations.
This is a major concept in the study of supersymmetry , together with representation of a Lie superalgebra on an algebra. Say A is a *-algebra carrying a representation of the Lie superalgebra (with the additional requirement that * respects the grading and L[a]* = −(−1)^{|L||a|} L*[a*]), H is the unitary representation above, and H is also a unitary representation of A.
These three representations are all compatible if, for pure elements a in A, |ψ⟩ in H and L in the Lie superalgebra,

L[a|ψ⟩] = (L[a])|ψ⟩ + (−1)^{|L||a|} a(L[|ψ⟩]).
Sometimes, the Lie superalgebra is embedded within A in the sense that there is a homomorphism from the universal enveloping algebra of the Lie superalgebra to A. In that case, the equation above reduces to

L[a] = La − (−1)^{|L||a|} aL.
This approach avoids working directly with a Lie supergroup, and hence avoids the use of auxiliary Grassmann numbers .
| https://en.wikipedia.org/wiki/Representation_of_a_Lie_superalgebra |
In mathematics , a representation theorem is a theorem that states that every abstract structure with certain properties is isomorphic to another (abstract or concrete) structure. | https://en.wikipedia.org/wiki/Representation_theorem |
In nonrelativistic quantum mechanics , an account can be given of the existence of mass and spin (normally explained in Wigner's classification of relativistic mechanics) in terms of the representation theory of the Galilean group , which is the spacetime symmetry group of nonrelativistic quantum mechanics.
In 3 + 1 dimensions, this is the subgroup of the affine group on ( t, x, y, z ), whose linear part leaves invariant both the metric $g_{\mu\nu} = \mathrm{diag}(1, 0, 0, 0)$ and the (independent) dual metric $g^{\mu\nu} = \mathrm{diag}(0, 1, 1, 1)$. A similar definition applies for n + 1 dimensions.
We are interested in projective representations of this group, which are equivalent to unitary representations of the nontrivial central extension of the universal covering group of the Galilean group by the one-dimensional Lie group R , cf. the article Galilean group for the central extension of its Lie algebra . The method of induced representations will be used to survey these.
We focus on the (centrally extended, Bargmann) Lie algebra here, because it is simpler to analyze and we can always extend the results to the full Lie group through the Frobenius theorem .
E is the generator of time translations ( Hamiltonian ), P i is the generator of translations ( momentum operator ), C i is the generator of Galilean boosts, and L ij stands for a generator of rotations ( angular momentum operator ).
The central charge M is a Casimir invariant .
The mass-shell invariant $ME - {\tfrac {1}{2}}P^{2}$ is an additional Casimir invariant.
In 3 + 1 dimensions, a third Casimir invariant is $W^2$, where $W_i = ML_i + \epsilon_{ijk} P_j C_k$ (with $L_i = {\tfrac {1}{2}}\epsilon_{ijk} L_{jk}$),
somewhat analogous to the Pauli–Lubanski pseudovector of relativistic mechanics.
More generally, in n + 1 dimensions, invariants will be a function of
and
as well as of the above mass-shell invariant and central charge.
Using Schur's lemma , in an irreducible unitary representation, all these Casimir invariants are multiples of the identity. Call these coefficients $m$, $mE_0$ and (in the case of 3 + 1 dimensions) $w$, respectively. Recalling that we are considering unitary representations here, we see that these eigenvalues have to be real numbers .
Thus, the cases are $m > 0$, $m = 0$ and $m < 0$. (The last case is similar to the first.) In 3 + 1 dimensions, when $m > 0$, we can write $w = ms$ for the third invariant, where $s$ represents the spin, or intrinsic angular momentum. More generally, in n + 1 dimensions, the generators $L$ and $C$ will be related, respectively, to the total angular momentum and center-of-mass moment by
From a purely representation-theoretic point of view, one would have to study all of the representations; but, here, we are only interested in applications to quantum mechanics. There, E represents the energy , which has to be bounded below, if thermodynamic stability is required. Consider first the case where m is nonzero.
Considering the $(E, \vec{P})$ space with the constraint $mE = mE_0 + {\tfrac {1}{2}}P^2$, we see that the Galilean boosts act transitively on this hypersurface. In fact, treating the energy $E$ as the Hamiltonian, differentiating with respect to $P$, and applying Hamilton's equations, we obtain the mass-velocity relation $m\vec{v} = \vec{P}$.
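Spelled out, this is a one-line computation on the mass shell above (nothing beyond the stated constraint is assumed):

$$\vec{v} = \frac{\partial E}{\partial \vec{P}} = \frac{\partial}{\partial \vec{P}}\left(E_0 + \frac{P^2}{2m}\right) = \frac{\vec{P}}{m}, \qquad\text{hence}\qquad m\vec{v} = \vec{P}.$$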
The hypersurface is parametrized by this velocity $\vec{v}$. Consider the stabilizer of a point on the orbit , $(E_0, \vec{0})$, where the velocity is $\vec{0}$. Because of transitivity, we know the unitary irrep contains a nontrivial linear subspace with these energy-momentum eigenvalues. (This subspace only exists in a rigged Hilbert space , because the momentum spectrum is continuous.)
The subspace is spanned by $E$, $\vec{P}$, $M$ and $L_{ij}$. We already know how the subspace of the irrep transforms under all operators but the angular momentum . Note that the rotation subgroup is Spin(3) . We have to look at its double cover , because we are considering projective representations. This is called the little group , a name given by Eugene Wigner . His method of induced representations specifies that the irrep is given by the direct sum of all the fibers in a vector bundle over the $mE = mE_0 + P^2/2$ hypersurface, whose fibers are a unitary irrep of Spin(3) .
Spin(3) is none other than SU(2) . (See representation theory of SU(2) , where it is shown that the unitary irreps of SU(2) are labeled by s , a non-negative integer multiple of one half. This is called spin , for historical reasons.)
When $m = 0$, the mass-shell invariant reduces to $-{\tfrac {1}{2}}P^2$, which is nonpositive. Suppose it is zero. Here, it is the boosts as well as the rotations that constitute the little group. Any unitary irrep of this little group also gives rise to a projective irrep of the Galilean group. As far as we can tell, only the case which transforms trivially under the little group has any physical interpretation, and it corresponds to the no-particle state, the vacuum .
The case where the invariant is negative requires additional comment. This corresponds to the representation class for m = 0 and non-zero P → . Extending the bradyon , luxon , tachyon classification from the representation theory of the Poincaré group to an analogous classification, here, one may term these states as synchrons . They represent an instantaneous transfer of non-zero momentum across a (possibly large) distance. Associated with them, by above, is a "time" operator
which may be identified with the time of transfer. These states are naturally interpreted as the carriers of instantaneous action-at-a-distance forces.
N.B. In the 3 + 1 -dimensional Galilei group, the boost generator may be decomposed into
with W → playing a role analogous to helicity . | https://en.wikipedia.org/wiki/Representation_theory_of_the_Galilean_group |
Representational oligonucleotide microarray analysis ( ROMA ) is a technique that was developed by Michael Wigler and Rob Lucito at the Cold Spring Harbor Laboratory (CSHL) in 2003. [ citation needed ] Wigler and Lucito currently run laboratories at CSHL using ROMA to explore genomic copy number variation in cancer and other genetic diseases.
In this technique, two genomes are compared for their differences in copy number on a microarray. The ROMA technology emerged from a previous method called representational difference analysis (RDA). ROMA, in comparison to other comparative genomic hybridization (CGH) techniques, has the advantage of reducing the complexity of a genome with a restriction enzyme which highly increases the efficiency of genomic fragment hybridization to a microarray.
In ROMA, a genome is digested with a restriction enzyme, ligated with adapters specific to the restriction fragment sticky ends and amplified by PCR. After the PCR step, representations of the entire genome (restriction fragments) are amplified to pronounce relative increases, decreases or preserve equal copy number in the two genomes. The representations of the two different genomes are labeled with different fluorophores and co-hybridized to a microarray with probes specific to locations across the entire human genome. After analysis of the ROMA microarray image is completed, a copy number profile of the entire human genome is generated. This allows researchers to detect with high accuracy amplifications (amplicons) and deletions that occur across the entire genome.
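A sketch of the downstream analysis step described here — per-probe log2 ratios of the two fluorophore intensities flag relative copy-number gains and losses. The function name, threshold and data are illustrative assumptions, not the published ROMA pipeline:

```python
# Per-probe log2 ratios of test vs. reference intensities; large positive
# ratios suggest amplification, large negative ratios suggest deletion.
import math

def copy_number_calls(test_intensities, reference_intensities, threshold=0.3):
    calls = []
    for test, ref in zip(test_intensities, reference_intensities):
        ratio = math.log2(test / ref)
        if ratio > threshold:
            calls.append(("amplification", ratio))
        elif ratio < -threshold:
            calls.append(("deletion", ratio))
        else:
            calls.append(("neutral", ratio))
    return calls

print(copy_number_calls([980, 2100, 240], [1000, 1000, 1000]))
```

In practice the per-probe ratios are also normalized and segmented along the chromosome before calls are made; the threshold here is purely for illustration.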
In cancer, the genome becomes very unstable, resulting in specific regions that may be deleted (if they contain a tumor suppressor) or amplified (if they contain an oncogene ). Amplifications and deletions have also been observed in the normal human population and are referred to as Copy Number Polymorphisms (CNPs). Jonathan Sebat was one of the first researchers to report in the journal 'Science' in 2004 that these CNPs give rise to human genomic variation and may contribute to our phenotypic differences. [ 1 ] [ citation needed ] Tremendous research efforts are being conducted now to understand the role of CNPs in normal human variation and neurological diseases such as autism. By understanding which regions of the genome have undergone copy number polymorphisms in disease, scientists can ultimately identify genes that are overexpressed or deleted and design drugs to compensate for these genes to cure genetic diseases. | https://en.wikipedia.org/wiki/Representational_oligonucleotide_microarray_analysis |
The concept of the representative layer came about through the work of Donald Dahm, with the assistance of Kevin Dahm and Karl Norris, to describe spectroscopic properties of particulate samples, especially as applied to near-infrared spectroscopy . [ 1 ] [ 2 ] A representative layer has the same void fraction as the sample it represents, and each particle type in the sample has the same volume fraction and surface area fraction as does the sample as a whole. The spectroscopic properties of a representative layer can be derived from the spectroscopic properties of particles, which may be determined by a wide variety of ways. [ 3 ] While a representative layer could be used in any theory that relies on the mathematics of plane parallel layers, there is a set of definitions and mathematics, some old and some new, which have become part of representative layer theory .
Representative layer theory can be used to determine the spectroscopic properties of an assembly of particles from those of the individual particles in the assembly. [ 4 ] The sample is modeled as a series of layers, each of which is parallel to each other and perpendicular to the incident beam. The mathematics of plane parallel layers is then used to extract the desired properties from the data, most notably that of the linear absorption coefficient which behaves in the manner of the coefficient in Beer’s law. The representative layer theory gives a way of performing the calculations for new sample properties by changing the properties of a single layer of the particles, which doesn’t require reworking the mathematics for a sample as a whole.
The first attempt to account for transmission and reflection of a layered material was carried out by George G. Stokes in about 1860 [ 5 ] and led to some very useful relationships. John W. Strutt (Lord Rayleigh) [ 6 ] and Gustav Mie [ 7 ] developed the theory of single scatter to a high degree, but Arthur Schuster [ 8 ] was the first to consider multiple scatter. He was concerned with the cloudy atmospheres of stars, and developed a plane-parallel layer model in which the radiation field was divided into forward and backward components. This same model was used much later by Paul Kubelka and Franz Munk , whose names are usually attached to it by spectroscopists.
Following WWII, the field of reflectance spectroscopy was heavily researched, both theoretically and experimentally. The remission function $F(R_\infty)$, following Kubelka-Munk theory, was the leading contender as the metric of absorption analogous to the absorbance function in transmission absorption spectroscopy.
The form of the K-M solution originally was $F(R_\infty) \equiv \frac{(1-R_\infty)^2}{2R_\infty} = \frac{a_0}{r_0}$, but it was rewritten in terms of linear coefficients by some authors, becoming $F(R_\infty) \equiv \frac{(1-R_\infty)^2}{2R_\infty} = \frac{k}{s}$, taking $k$ and $s$ as being equivalent to the linear absorption and scattering coefficients as they appear in the Bouguer-Lambert law, even though sources who derived the equations preferred the symbolism $\frac{K}{S}$ and usually emphasized that $K = 2k$ and that $S$ was a remission or back-scattering parameter, which for the case of diffuse scatter should properly be taken as an integral. [ 9 ]
In 1966, in a book entitled Reflectance Spectroscopy, Harry Hecht had pointed out that the formulation $F(R_\infty) = \frac{k}{s}$ led to $\log F(R_\infty) = \log k - \log s$, which enabled plotting $F(R_\infty)$ "against the wavelength or wave-number for a particular sample" giving a curve corresponding "to the real absorption determined by transmission measurements, except for a displacement by $-\log s$ in the ordinate direction." However, in data presented, "the marked deviation in the remission function ... in the region of large extinction is obvious." He listed various reasons given by other authors for this "failure ... to remain valid in strongly absorbing materials", including: "incomplete diffusion in the scattering process"; failure to use "diffuse illumination"; "increased proportion of regular reflection"; but concluded that "notwithstanding the above mentioned difficulties, ... the remission function should be a linear function of the concentration at a given wavelength for a constant particle size", though stating that "this discussion has been restricted entirely to the reflectance of homogeneous powder layers", though "equation systems for combination of inhomogeneous layers cannot be solved for the scattering and absorbing properties even in the simple case of a dual combination of sublayers. ... This means that the (Kubelka-Munk) theory fails to include, in an explicit manner, any dependence of reflection on particle size or shape or refractive index".
The field of Near infrared spectroscopy (NIR) got its start in 1968, when Karl Norris and co-workers with the Instrumentation Research Lab of the U.S. Department of Agriculture first applied the technology to agricultural products. [ 11 ] The USDA discovered how to use NIR empirically, based on available sources, gratings, and detector materials. Even the wavelength range of NIR was set empirically, based on the operational range of a PbS detector. Consequently, it was not seen as a rigorous science: it had not evolved in the usual way, from research institutions to general usage. [ 12 ] Even though the Kubelka-Munk theory provided a remission function that could have been used as the absorption metric, Norris selected {\textstyle \log(1/R_{\infty })} for convenience. [ 13 ] He believed that the problem of non-linearity between the metric and concentration was due to particle size (a theoretical concern) and stray light (an instrumental effect). In qualitative terms, he would explain differences in spectra of different particle size as changes in the effective path length that the light traveled through the sample. [ 14 ]
In 1976, Hecht [ 15 ] published an exhaustive evaluation of the various theories which were considered to be fairly general. In it, he presented his derivation of the Hecht finite difference formula by replacing the fundamental differential equations of the Kubelka-Munk theory with finite difference equations, and obtained: {\textstyle F(R_{\infty })=a{\biggl (}{\frac {1}{r}}-1{\biggr )}-{\frac {a^{2}}{2r}}}. He noted that "it is well known that a plot of {\textstyle F(R_{\infty })} versus {\displaystyle K} deviates from linearity for high values of {\displaystyle K}, and it appears that (this equation) can be used to explain the deviations in part", and that the formula "represents an improvement in the range of validity and shows the need to consider the particulate nature of scattering media in developing a more precise theory by which absolute absorptivities can be determined."
In 1982, Gerry Birth convened a meeting of experts in several areas that impacted NIR Spectroscopy , with emphasis on diffuse reflectance spectroscopy, no matter which portion of the electromagnetic spectrum might be used. This was the beginning of the International Diffuse Reflectance Conference. At this meeting was Harry Hecht, who may have at the time been the world's most knowledgeable person in the theory of diffuse reflectance. Gerry himself took many photographs illustrating various aspects of diffuse reflectance, many of which were not explainable with the best available theories. In 1987, Birth and Hecht wrote a joint article in a new handbook, [ 16 ] which pointed a direction for future theoretical work.
In 1994, Donald and Kevin Dahm began using numerical techniques to calculate remission and transmission from samples of varying numbers of plane parallel layers, starting from absorption and remission fractions for a single layer. Using this entirely independent approach, they found a function that was independent of the number of layers in the sample. This function, called the Absorption/Remission function and nicknamed the ART function, is defined as: [ 17 ] {\textstyle A(R,T)\equiv {\frac {(1-R_{n})^{2}-T_{n}^{2}}{R_{n}}}={\frac {(2-a-2r)a}{r}}={\frac {a(1+t-r)}{r}}=2F(R_{\infty })={\frac {2a_{0}}{r_{0}}}}. Besides the relationships displayed here, the formulas obtained for the general case are entirely consistent with the Stokes formulas , the equations of Benford , and Hecht's finite difference formula . For the special cases of infinitesimal or infinitely dilute particles, it gives results consistent with the Schuster equation for isotropic scattering and the Kubelka–Munk equation . These equations are all for plane parallel layers using two light streams. This cumulative mathematics was tested on data collected using directed radiation on plastic sheets, a system that precisely matches the physical model of a series of plane parallel layers, and found to conform. [ 2 ] The mathematics provided: 1) a method to use plane parallel mathematics to separate absorption and remission coefficients for a sample; 2) an Absorption/Remission function that is constant for all sample thicknesses; and 3) equations relating the absorption and remission of one thickness of sample to that of any other thickness.
Using simplifying assumptions, the spectroscopic parameters (absorption, remission, and transmission fractions) of a plane parallel layer can be built from the refractive index of the material making up the layer, the linear absorption coefficient (absorbing power) of the material, and the thickness of the layer. While other assumptions could be made, those most often used are those of normal incidence of a directed beam of light, with internal and external reflection from the surface being the same.
For the special case where the incident radiation is normal (perpendicular) to a surface and the absorption is negligible, the intensity of the reflected and transmitted beams can be calculated from the refractive indices {\displaystyle \eta _{1}} and {\displaystyle \eta _{2}} of the two media, where r is the fraction of the incident light reflected and t is the fraction transmitted:
{\displaystyle r={\frac {(\eta _{2}-\eta _{1})^{2}}{(\eta _{2}+\eta _{1})^{2}}}} , {\displaystyle t={\frac {4\eta _{1}\eta _{2}}{(\eta _{2}+\eta _{1})^{2}}}} , with the fraction absorbed taken as zero ({\displaystyle a=0}).
For a beam of light traveling in air with an approximate index of refraction of 1.0, and encountering the surface of a material having an index of refraction of 1.5:
{\displaystyle r={\frac {(\eta _{2}-\eta _{1})^{2}}{(\eta _{2}+\eta _{1})^{2}}}={\frac {0.5^{2}}{2.5^{2}}}=0.04} , {\displaystyle t={\frac {4\eta _{1}\eta _{2}}{(\eta _{2}+\eta _{1})^{2}}}={\frac {(4)(1)(1.5)}{2.5^{2}}}=0.96}
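These are the normal-incidence Fresnel relations, and the arithmetic above is simple enough to check directly. A minimal sketch (Python; the function name is illustrative):

```python
def fresnel_normal(n1, n2):
    """Reflected and transmitted fractions at normal incidence,
    for negligible absorption (a = 0), from the refractive indices."""
    r = (n2 - n1) ** 2 / (n2 + n1) ** 2
    t = 4 * n1 * n2 / (n2 + n1) ** 2
    return r, t

r, t = fresnel_normal(1.0, 1.5)
print(r, t)  # 0.04 0.96, matching the air-to-glass example above
```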
There is a simplified special case for the spectroscopic parameters of a sheet. This sheet consists of three plane parallel layers (1: front surface, 2: interior, 3: rear surface) in which the surfaces both have the same remission fraction when illuminated from either direction, regardless of the relative refractive indices of the two media on either side of the surface. For the case of zero absorption in the interior, the total remission and transmission from the layer can be determined from the infinite series, where {\displaystyle r_{0}} is the remission from the surface:
These formulas can be modified to account for absorption. [ 15 ] [ 18 ] Alternatively, the spectroscopic parameters of a sheet (or slab) can be built up from the spectroscopic parameters of the individual pieces that compose the layer: surface, interior, surface. This can be done using an approach developed by Kubelka for treatment of inhomogeneous layers . Using the example from the previous section: {\displaystyle A_{1}=0,\ R_{1}=0.04,\ T_{1}=0.96} and {\displaystyle A_{3}=0,\ R_{3}=0.04,\ T_{3}=0.96}.
We will assume the interior of the sheet is composed of a material that has a Napierian absorption coefficient k of 0.5 cm−1, and that the sheet is 1 mm thick (d = 1 mm = 0.1 cm). For this case, on a single trip through the interior, according to the Bouguer-Lambert law, {\textstyle T=\exp(-kd)}, which according to our assumptions yields {\textstyle T=\exp(-0.5\ {\text{cm}}^{-1}\cdot 0.1\ {\text{cm}})=0.95} and {\textstyle A=0.05}. Thus {\displaystyle A_{2}=0.05,\ R_{2}=0,\ T_{2}=0.95}.
Then one of Benford's equations [ 19 ] can be applied. If A x , R x , and T x are known for layer x, and A y , R y , and T y are known for layer y, the ART fractions for a sample composed of layer x and layer y are:
Step 1: We take layer 1 as x and layer 2 as y. By our assumptions in this case, {\displaystyle R_{1}=R_{(-1)}=0.04,\ R_{2}=0,\ T_{1}=0.96,\ T_{2}=0.95}.
{\displaystyle T_{x+y}={\frac {T_{x}T_{y}}{1-R_{(-x)}R_{y}}}={\frac {(0.96)(0.95)}{1-(0.04)(0)}}=0.912}
{\displaystyle R_{x+y}=R_{x}+{\frac {T_{x}^{2}R_{y}}{1-R_{(-x)}R_{y}}}=0.04+{\frac {(0.96^{2})(0)}{1-(0.04)(0)}}=0.04}
{\displaystyle R_{y+x}=R_{y}+{\frac {T_{y}^{2}R_{x}}{1-R_{(-y)}R_{x}}}=0+{\frac {(0.95^{2})(0.04)}{1-(0)(0.04)}}=0.0361}
Step 2: We take the result from step 1 as the value for new x [ x is old x+y; (-x) is old y+x ], and the value for layer 3 as new y.
{\displaystyle T_{x+y}={\frac {T_{x}T_{y}}{1-R_{(-x)}R_{y}}}={\frac {(0.912)(0.96)}{1-(0.0361)(0.04)}}=0.877=T_{123}}
{\displaystyle R_{x+y}=R_{x}+{\frac {T_{x}^{2}R_{y}}{1-R_{(-x)}R_{y}}}=0.04+{\frac {(0.912^{2})(0.04)}{1-(0.0361)(0.04)}}=0.0733=R_{123}}
{\displaystyle A_{123}=1-R_{123}-T_{123}=1-0.877-0.073=0.050}
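A short numerical sketch of this two-step combination (Python). The combine function transcribes the equations above; each layer is described by its transmission and by its remission when illuminated from above and from below, since the combined layer is not symmetric:

```python
from math import exp

def combine(x, y):
    """Benford's equations for a layer x stacked on top of a layer y.
    Each layer is (T, R_top, R_bottom): transmission, remission when
    illuminated from above, and remission when illuminated from below."""
    Tx, Rxt, Rxb = x
    Ty, Ryt, Ryb = y
    denom = 1 - Rxb * Ryt            # multiple reflections between x and y
    T = Tx * Ty / denom
    R_top = Rxt + Tx ** 2 * Ryt / denom
    R_bottom = Ryb + Ty ** 2 * Rxb / denom
    return (T, R_top, R_bottom)

surface = (0.96, 0.04, 0.04)               # from the Fresnel example
interior = (exp(-0.5 * 0.1), 0.0, 0.0)     # Bouguer-Lambert, k d = 0.05
sheet = combine(combine(surface, interior), surface)

T123, R123, _ = sheet
print(T123, R123, 1 - R123 - T123)
# ≈ 0.878, 0.073, 0.049; the text rounds the interior T to 0.95,
# which gives the quoted 0.877, 0.0733, 0.050
```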
Dahm has shown that for this special case, the total amount of light absorbed by the interior of the sheet (considering surface remission) is the same as that absorbed in a single trip (independent of surface remission). [ 21 ] This is borne out by the calculations.
The decadic absorbance of the sheet is given by: {\displaystyle \mathrm {Ab} _{10}=-\log(1-A_{123})=0.0222}
The Stokes Formulas can be used to calculate the ART fractions for any number of layers. Alternatively, they can be calculated by successive application of Benford's equation for "one more layer".
If A 1 , R 1 , and T 1 are known for the representative layer of a sample, and A n , R n and T n are known for a layer composed of n representative layers, the ART fractions for a layer with thickness of n + 1 are:
In the above example, {\displaystyle A_{1}=0.05,\ R_{1}=0.0733,\ T_{1}=0.877}. The Table shows the results of repeated application of the above formulas.
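Continuing the sketch above, the "one more layer" recursion is just the combine function applied repeatedly, with the whole (symmetric) sheet treated as one layer:

```python
layers = sheet                        # one sheet, from the sketch above
for n in range(2, 15):                # build up 2..14 sheets
    layers = combine(layers, sheet)   # Benford's "one more layer"

T14, R14, _ = layers
A14 = 1 - R14 - T14
print(A14)  # sample absorption fraction for 14 sheets,
            # close to the 0.466 quoted from the Table in the text
```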
Within a homogeneous medium such as a solution, there is no scatter. For this case, the absorbance is linear with both the concentration of the absorbing species and the path-length. Additionally, the contributions of individual absorbing species are additive. For samples which scatter light, absorbance is defined as "the negative logarithm of one minus absorptance (absorption fraction: {\displaystyle \alpha }) as measured on a uniform sample". [ 22 ] For decadic absorbance , [ 23 ] this may be symbolized as: {\displaystyle \mathrm {A} _{10}=-\log _{10}(1-\alpha )}. Even though this absorbance function is useful with scattering samples, the function does not have the same desirable characteristics as it does for non-scattering samples. There is, however, a property called absorbing power which may be estimated for these samples. The absorbing power of a single unit thickness of material making up a scattering sample is the same as the absorbance of the same thickness of the material in the absence of scatter. [ 24 ]
Suppose that we have a sample consisting of 14 of the sheets described above, each of which has an absorbance of 0.0222. If we are able to estimate the absorbing power (the absorbance of a sample of the same thickness, but having no scatter) from the sample without knowing how many sheets are in the sample (as would be the general case), it would have the desirable property of being proportional to the thickness. In this case, we know that the absorbing power (scatter-corrected absorbance) should be 14 times the absorbance of a single sheet: {\displaystyle (14)(0.0222)=0.312}. This is the value we should have for the sample if the absorbance is to follow the law of Bouguer (often referred to as Beer's law).
In the Table below, we see that the sample has the A, R, T values for the case of 14 sheets in the Table above. Because of the presence of scatter, the measured absorbance of the sample would be: {\displaystyle \mathrm {Ab} _{10}=-\log(1-A_{S})=-\log(1-0.466)=-\log(0.534)=0.2728}. Then we calculate this for half the sample thickness using another of Benford's equations. If A d , R d , and T d are known for a layer with thickness d, the ART fractions for a layer with thickness d/2 are:
In the line for the half sample [S/2], we see values which are the same as those for 7 layers in the Table above, as we expect. Note that {\displaystyle -\log(1-A_{S/2})=-\log(1-0.292)=0.150}. We desire the absorbance to be linear with sample thickness, but when we multiply this value by 2, we get {\displaystyle (2)(0.150)=0.300}, a significant departure from the previous estimate of the absorbing power.
The next iteration of the formula produces the estimate for A, R, T for a quarter sample: {\displaystyle -\log(1-0.162)\times 4=0.307}. Note that this time the calculation corresponds to three and a half layers, a thickness of sample that cannot exist physically.
Continuing for sequentially higher powers of two, we see a monotonically increasing estimate. Eventually the numbers will start jumping because of round-off error, but one can stop upon obtaining a constant value to a specified number of significant figures. In this case, the estimate becomes constant to 4 significant figures at 0.3105, which is our estimate for the absorbing power of the sample. This is consistent with our target value of 0.312 determined above.
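A sketch of this successive-halving procedure, continuing the code above. Since the text's halving equation is not reproduced here, the halve function below uses the algebraic inverse of Benford's doubling relations for a symmetric layer, an assumed but standard form:

```python
from math import log10, sqrt

def halve(T, R):
    """ART fractions for half the thickness of a symmetric layer,
    obtained by inverting Benford's doubling relations (assumed form)."""
    r = R / (1 + T)
    t = sqrt(T * (1 - r * r))
    return t, r

T, R, factor, prev = T14, R14, 1.0, None
for _ in range(40):
    est = -log10(R + T) * factor   # -log10(1 - A), scaled to full thickness
    if prev is not None and abs(est - prev) < 1e-4:
        break                      # constant to ~4 figures: stop
    prev = est
    T, R = halve(T, R)
    factor *= 2

print(est)  # ≈ 0.30 with the exact exp(-0.05) inputs used earlier;
            # the text's rounded inputs converge to 0.3105
```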
If one wants to use a theory based on plane parallel layers, optimally the samples would be describable as layers. But a particulate sample often looks like a jumbled maze of particles of various sizes and shapes, showing no structured pattern of any kind, and is certainly not literally divided into distinct, identical layers. Even so, it is a tenet of representative layer theory that, for spectroscopic purposes, we may treat the complex sample as if it were a series of layers, each one representative of the sample as a whole.
To be representative, the layer must meet the following criteria: [ 25 ]
• The volume fraction of each type of particle is the same in the representative layer as in the sample as a whole.
• The surface area fraction of each type of particle is the same in the representative layer as in the sample as a whole.
• The void fraction of the representative layer is the same as in the sample.
• The representative layer is nowhere more than one particle thick. Note this means the “thickness” of the representative layer is not uniform. This criterion is imposed so that we can assume that a given photon of light has only one interaction with the layer. It might be transmitted, remitted, or absorbed as a result of this interaction, but it is assumed not to interact with a second particle within the same layer.
In the above discussion, when we talk about a “type” of particle, we must clearly distinguish between particles of different composition. In addition, however, we must distinguish between particles of different sizes. Recall that scattering is envisioned as a surface phenomenon and absorption is envisioned as occurring at the molecular level throughout the particle. Consequently, our expectation is that the contribution of a “type” of particle to absorption will be proportional to the volume fraction of that particle in the sample, and the contribution of a “type” of particle to scattering will be proportional to the surface area fraction of that particle in the sample. This is why our “representative layer” criteria above incorporate both volume fraction and surface area fraction. Since small particles have larger surface area-to-volume ratios than large particles, it is necessary to distinguish between them.
Under these criteria, we can propose a model for the fractions of incident light that are absorbed ({\displaystyle A_{1}}), remitted ({\displaystyle R_{1}}), and transmitted ({\displaystyle T_{1}}) by one representative layer. [ 26 ]
{\displaystyle A_{1}=\sum S_{j}(1-\exp(-K_{j}d_{j}))} , {\displaystyle R_{1}=\sum S_{j}b_{j}d_{j}} , {\displaystyle T_{1}=1-A_{1}-R_{1}}
in which:
• {\displaystyle S_{j}} is the fraction of cross-sectional surface area that is occupied by particles of type {\displaystyle j}.
• {\displaystyle K_{j}} is the effective absorption coefficient for particles of type {\displaystyle j}.
• {\displaystyle b_{j}} is the remission coefficient for particles of type {\displaystyle j}.
• {\displaystyle d_{j}} is the thickness of a particle of type {\displaystyle j} in the direction of the incident beam.
• The summation is carried out over all of the distinct “types” of particle.
In effect, {\displaystyle S_{j}} represents the fraction of light that will interact with a particle of type {\displaystyle j}, and {\displaystyle K_{j}} and {\displaystyle b_{j}} quantify the likelihood of that interaction resulting in absorption and remission, respectively.
Surface area fractions and volume fractions for each type of particle can be defined as follows:
{\displaystyle v_{i}={\frac {w_{i}/\rho _{i}}{\sum w_{j}/\rho _{j}}}} , {\displaystyle s_{i}={\frac {w_{i}/(\rho _{i}d_{i})}{\sum w_{j}/(\rho _{j}d_{j})}}} , {\displaystyle V_{i}=(1-v_{0})v_{i}} , {\displaystyle S_{i}=(1-v_{0})s_{i}}
in which:
• {\displaystyle w_{i}} is the mass fraction of particles of type i in the sample.
• {\displaystyle v_{i}} is the fraction of occupied volume composed of particles of type i.
• {\displaystyle s_{i}} is the fraction of particle surface area that is composed of particles of type i.
• {\displaystyle V_{i}} is the fraction of total volume composed of particles of type i.
• {\displaystyle S_{i}} is the fraction of cross-sectional surface area that is composed of particles of type i.
• {\displaystyle \rho _{i}} is the density of particles of type i.
• {\displaystyle v_{0}} is the void fraction of the sample.
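A sketch of how these definitions combine into the single-layer model above (Python; the two particle types and all numeric values are hypothetical, chosen only to exercise the formulas):

```python
from math import exp

def representative_layer(particles, v0):
    """A1, R1, T1 for one representative layer. Each particle type is
    (w, rho, d, K, b): mass fraction, density, thickness along the beam,
    effective absorption coefficient, and remission coefficient."""
    area = sum(w / (rho * d) for w, rho, d, K, b in particles)
    A1 = R1 = 0.0
    for w, rho, d, K, b in particles:
        S = (1 - v0) * (w / (rho * d)) / area  # cross-sectional fraction S_i
        A1 += S * (1 - exp(-K * d))            # Beer's-law-type absorption
        R1 += S * b * d                        # remission
    return A1, R1, 1 - A1 - R1

# two hypothetical particle types, 40% void fraction
print(representative_layer([(0.7, 1.5, 0.010, 5.0, 2.0),
                            (0.3, 1.2, 0.005, 8.0, 3.0)], 0.40))
```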
This is a logical way of relating the spectroscopic behavior of a “representative layer” to the properties of the individual particles that make up the layer. The values of the absorption and remission coefficients represent a challenge in this modeling approach. Absorption is calculated from the fraction of light striking each type of particle and a “Beer’s law”-type calculation of the absorption by each type of particle, so the values of {\displaystyle K_{i}} used should ideally model the ability of the particle to absorb light, independent of other processes (scattering, remission) that also occur. We referred to this as the absorbing power in the section above.
Where a given letter is used in both capital and lower case form ( r , R and t , T ) the capital letter refers to the macroscopic observable and the lower case letter to the corresponding variable for an individual particle or layer of the material. Greek symbols are used for properties of a single particle. | https://en.wikipedia.org/wiki/Representative_layer_theory |
In social sciences and other domains, representative sequences are whole sequences that best characterize or summarize a set of sequences. [ 1 ] In bioinformatics, representative sequences also designate substrings of a sequence that characterize the sequence. [ 2 ] [ 3 ]
In Sequence analysis in social sciences , representative sequences are used to summarize sets of sequences describing, for example, the family life course or professional career of several thousand individuals. [ 4 ]
The identification of representative sequences [ 1 ] [ 4 ] proceeds from the pairwise dissimilarities between sequences. One typical solution is the medoid sequence, i.e., the observed sequence that minimizes the sum of its distances to all other sequences in the set. Another solution is the densest observed sequence, i.e., the sequence with the greatest number of other sequences in its neighborhood. When the diversity of the sequences is large, a single representative is often insufficient to characterize the set efficiently. In such cases, one searches for a set of representative sequences, as small as possible, that covers a given percentage of all sequences (i.e., each covered sequence lies in the neighborhood of at least one representative).
A solution also considered is to select the medoids of relative frequency groups. More specifically, the method consists in sorting the sequences (for example, according to the first principal coordinate of the pairwise dissimilarity matrix), splitting the sorted list into equal sized groups (called relative frequency groups), and selecting the medoids of the equal sized groups. [ 5 ]
The methods for identifying representative sequences described above have been implemented in the R package TraMineR . [ 6 ]
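A minimal sketch of the two selection rules described above (Python with NumPy; the dissimilarity matrix is a toy example, and TraMineR itself is an R package):

```python
import numpy as np

def medoid(D):
    """Index of the medoid: minimizes the sum of dissimilarities to all others."""
    return int(np.argmin(D.sum(axis=1)))

def densest(D, radius):
    """Index of the sequence with the most neighbors within `radius`
    (the sequence itself is counted, which does not change the argmax)."""
    return int(np.argmax((D <= radius).sum(axis=1)))

# toy symmetric pairwise dissimilarity matrix for four sequences
D = np.array([[0., 1., 2., 5.],
              [1., 0., 1., 4.],
              [2., 1., 0., 3.],
              [5., 4., 3., 0.]])
print(medoid(D), densest(D, radius=2.0))  # 1 0
```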
Representative sequences are short regions within protein sequences that can be used to approximate the evolutionary relationships of those proteins, or the organisms from which they come. Representative sequences are contiguous subsequences (typically 300 residues ) from ubiquitous , conserved proteins, such that each orthologous family of representative sequences taken alone gives a distance matrix in close agreement with the consensus matrix. [ 7 ]
Protein sequences can provide data about the biological function and evolution of proteins and protein domains . Grouping and interrelating protein sequences can therefore provide information about both human biological processes, and the evolutionary development of biological processes on earth; such sequence clusters allow for the effective coverage of sequence space. Sequence clusters can reduce a large database of sequences to a smaller set of sequence representatives , each of which should represent its cluster at the sequence level. Sequence representatives allow the effective coverage of the original database with fewer sequences. The database of sequence representatives is called non-redundant , as similar (or redundant) sequences have been removed at a certain similarity threshold.
Sequence analysis in social sciences
Sequence analysis in bioinformatics | https://en.wikipedia.org/wiki/Representative_sequences |
The representativeness heuristic is used when making judgments about the probability of an event being representational in character and essence of a known prototypical event. [ 1 ] It is one of a group of heuristics (simple rules governing judgment or decision-making) proposed by psychologists Amos Tversky and Daniel Kahneman in the early 1970s as "the degree to which [an event] (i) is similar in essential characteristics to its parent population, and (ii) reflects the salient features of the process by which it is generated". [ 1 ] The representativeness heuristic works by comparing an event to a prototype or stereotype that we already have in mind. For example, if we see a person who is dressed in eccentric clothes and reading a poetry book, we might be more likely to think that they are a poet than an accountant. This is because the person's appearance and behavior are more representative of the stereotype of a poet than an accountant.
The representativeness heuristic can be a useful shortcut in some cases, but it can also lead to errors in judgment. For example, if we only see a small sample of people from a particular group, we might overestimate the degree to which they are representative of the entire group.
Heuristics are described as "judgmental shortcuts that generally get us where we need to go – and quickly – but at the cost of occasionally sending us off course." [ 2 ] Heuristics are useful because they use effort-reduction and simplification in decision-making. [ 3 ]
When people rely on representativeness to make judgments, they are likely to judge wrongly because the fact that something is more representative does not actually make it more likely. [ 4 ] The representativeness heuristic is simply described as assessing similarity of objects and organizing them based around the category prototype (e.g., like goes with like, and causes and effects should resemble each other). [ 2 ] This heuristic is used because it is an easy computation. [ 4 ] The problem is that people overestimate its ability to accurately predict the likelihood of an event. [ 5 ] Thus, it can result in neglect of relevant base rates and other cognitive biases . [ 6 ] [ 7 ]
The representativeness heuristic is more likely to be used when the judgement or decision to be made has certain factors.
When judging the representativeness of a new stimulus/event, people usually pay attention to the degree of similarity between the stimulus/event and a standard/process. [ 1 ] It is also important that those features be salient. [ 1 ] Nilsson, Juslin, and Olsson (2008) found this to be influenced by the exemplar account of memory (concrete examples of a category are stored in memory) so that new instances were classified as representative if highly similar to a category as well as if frequently encountered. [ 8 ] Several examples of similarity have been described in the representativeness heuristic literature. This research has focused on medical beliefs. [ 2 ] People often believe that medical symptoms should resemble their causes or treatments. For example, people have long believed that ulcers were caused by stress, due to the representativeness heuristic, when in fact bacteria cause ulcers. [ 2 ] In a similar line of thinking, in some alternative medicine beliefs patients have been encouraged to eat organ meat that corresponds to their medical disorder. Use of the representativeness heuristic can be seen in even simpler beliefs, such as the belief that eating fatty foods makes one fat. [ 2 ] Even physicians may be swayed by the representativeness heuristic when judging similarity , in diagnoses, for example. [ 9 ] The researcher found that clinicians use the representativeness heuristic in making diagnoses by judging how similar patients are to the stereotypical or prototypical patient with that disorder. [ 9 ]
Irregularity and local representativeness affect judgments of randomness. Things that do not appear to have any logical sequence are regarded as representative of randomness and thus more likely to occur. For example, THTHTH as a series of coin tosses would not be considered representative of randomly generated coin tosses as it is too well ordered. [ 1 ]
Local representativeness is an assumption wherein people rely on the law of small numbers, whereby small samples are perceived to represent their population to the same extent as large samples ( Tversky & Kahneman 1971 ). A small sample which appears randomly distributed would reinforce the belief, under the assumption of local representativeness, that the population is randomly distributed. Conversely, a small sample with a skewed distribution would weaken this belief. If a coin toss is repeated several times and the majority of the results consists of "heads", the assumption of local representativeness will cause the observer to believe the coin is biased toward "heads". [ 1 ]
In a study done in 1973, [ 10 ] Kahneman and Tversky divided their participants into three groups:
The judgments of likelihood were much closer for the judgments of similarity than for the estimated base rates. The findings supported the authors' predictions that people make predictions based on how representative something is (similar), rather than based on relative base rate information. For example, more than 95% of the participants said that Tom would be more likely to study computer science than education or humanities, when there were much higher base rate estimates for education and humanities than computer science. [ 10 ]
In another study done by Tversky and Kahneman, subjects were given the following problem: [ 4 ]
A cab was involved in a hit and run accident at night. Two cab companies, the Green and the Blue, operate in the city. 85% of the cabs in the city are Green and 15% are Blue. [ 4 ]
A witness identified the cab as Blue. The court tested the reliability of the witness under the same circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colours 80% of the time and failed 20% of the time. [ 4 ]
What is the probability that the cab involved in the accident was Blue rather than Green knowing that this witness identified it as Blue? [ 4 ]
Most subjects gave probabilities over 50%, and some gave answers over 80%. The correct answer, found using Bayes' theorem , is lower than these estimates, at about 41%: [ 4 ]
This result can be achieved by Bayes' theorem which states:
P ( B | I ) = P ( I | B ) P ( B ) P ( I ) . {\displaystyle P(B|I)={\frac {P(I|B)\,P(B)}{P(I)}}.}
where:
P(x) - a probability of x,
B - the cab was blue,
I - the cab is identified by the witness as blue,
B | I - the cab that is identified as blue, was blue,
I | B - the cab that was blue, is identified by the witness as blue.
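The computation itself is a one-line application of the theorem; a sketch (Python, function name illustrative):

```python
def p_blue_given_identified_blue(prior_blue=0.15, hit_rate=0.80):
    """P(cab was Blue | witness says Blue), per Bayes' theorem."""
    says_blue_and_blue = hit_rate * prior_blue                # correct ID of a Blue cab
    says_blue_and_green = (1 - hit_rate) * (1 - prior_blue)   # misidentified Green cab
    return says_blue_and_blue / (says_blue_and_blue + says_blue_and_green)

print(p_blue_given_identified_blue())  # 0.4137...: about 41%, well below most answers
```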
Representativeness is cited in the similar effect of the gambler's fallacy , the regression fallacy and the conjunction fallacy . [ 4 ]
The use of the representativeness heuristic will likely lead to violations of Bayes' Theorem : [ 11 ]
However, judgments by representativeness only look at the resemblance between the hypothesis and the data, thus inverse probabilities are equated: [ 11 ]
{\displaystyle P(H|D)=P(D|H)}
As can be seen, the base rate P(H) is ignored in this equation, leading to the base rate fallacy . A base rate is a phenomenon's basic rate of incidence. The base rate fallacy describes how people do not take the base rate of an event into account when solving probability problems. [ 12 ] This was explicitly tested by Dawes, Mirels, Gold and Donahue (1993) who had people judge both the base rate of people who had a particular personality trait and the probability that a person who had a given personality trait had another one. For example, participants were asked how many people out of 100 answered true to the question "I am a conscientious person" and also, given that a person answered true to this question, how many would answer true to a different personality question. They found that participants equated inverse probabilities (e.g., {\displaystyle P(\mathrm {conscientious} |\mathrm {neurotic} )=P(\mathrm {neurotic} |\mathrm {conscientious} )}) even when it was obvious that they were not the same (the two questions were answered immediately after each other). [ 11 ]
A medical example is described by Axelsson. Say a doctor performs a test that is 99% accurate, and the patient tests positive for the disease. However, the incidence of the disease is 1/10,000. The patient's actual risk of having the disease is about 1%, because the population of healthy people is so much larger than the population with the disease. This statistic often surprises people, due to the base rate fallacy, as many people do not take the basic incidence into account when judging probability. [ 12 ] Research by Maya Bar-Hillel (1980) suggests that perceived relevancy of information is vital to base-rate neglect: base rates are only included in judgments if they seem equally relevant to the other information. [ 13 ]
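Axelsson's numbers can be checked the same way; a sketch assuming the test's false-positive rate equals its false-negative rate (1%):

```python
def p_disease_given_positive(accuracy=0.99, incidence=1e-4):
    """Posterior probability of disease after a positive test result."""
    true_pos = accuracy * incidence
    false_pos = (1 - accuracy) * (1 - incidence)
    return true_pos / (true_pos + false_pos)

print(p_disease_given_positive())  # ~0.0098, i.e., about 1%
```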
Some research has explored base rate neglect in children, as there was a lack of understanding about how these judgment heuristics develop. [ 14 ] [ 15 ] The authors of one such study wanted to understand the development of the heuristic, whether it differs between social judgments and other judgments, and whether children use base rates when they are not using the representativeness heuristic. The authors found that the use of the representativeness heuristic as a strategy begins early on and is consistent. The authors also found that children use idiosyncratic strategies to make social judgments initially, and use base rates more as they get older, but that the use of the representativeness heuristic in the social arena also increases as they get older. The authors found that, among the children surveyed, base rates were more readily used in judgments about objects than in social judgments. [ 15 ] After that research was conducted, Davidson (1995) was interested in exploring how the representativeness heuristic and the conjunction fallacy in children related to children's stereotyping. Consistent with previous research, children based their responses to problems on base rates when the problems contained nonstereotypic information or when the children were older. There was also evidence that children commit the conjunction fallacy. Finally, as students get older, they used the representativeness heuristic on stereotyped problems, and so made judgments consistent with stereotypes. There is evidence that even children use the representativeness heuristic, commit the conjunction fallacy, and disregard base rates. [ 14 ]
Research suggests that use or neglect of base rates can be influenced by how the problem is presented, which reminds us that the representativeness heuristic is not a "general, all purpose heuristic", but may have many contributing factors. [ 16 ] Base rates may be neglected more often when the information presented is not causal. [ 17 ] Base rates are used less if there is relevant individuating information. [ 18 ] Groups have been found to neglect base rate more than individuals do. [ 19 ] Use of base rates differs based on context. [ 20 ] Research on use of base rates has been inconsistent, with some authors suggesting a new model is necessary. [ 21 ]
A group of undergraduates were provided with a description of Linda, modelled to be representative of an active feminist. Participants were then asked to evaluate the probability of her being a feminist, the probability of her being a bank teller, and the probability of her being both a bank teller and a feminist. Probability theory dictates that the probability of being both a bank teller and a feminist (the conjunction of two sets) must be less than or equal to the probability of being either a feminist or a bank teller alone. A conjunction cannot be more probable than one of its constituents. However, participants judged the conjunction (bank teller and feminist) as being more probable than being a bank teller alone. [ 22 ] Some research suggests that the conjunction error may partially be due to subtle linguistic factors, such as inexplicit wording or semantic interpretation of "probability". [ 23 ] [ 24 ] The authors argue that both logic and language use may relate to the error, and it should be more fully investigated. [ 24 ]
From probability theory the disjunction of two events is at least as likely as either of the events individually. For example, the probability of being either a physics or biology major is at least as likely as being a physics major, if not more likely. However, when a personality description (data) seems to be very representative of a physics major (e.g., having a pocket protector ) over a biology major, people judge that it is more likely for this person to be a physics major than a natural sciences major (which is a superset of physics). [ 22 ]
Evidence that the representativeness heuristic may cause the disjunction fallacy comes from Bar-Hillel and Neter (1993). They found that people judge a person who is highly representative of being a statistics major (e.g., highly intelligent, does math competitions) as being more likely to be a statistics major than a social sciences major (superset of statistics), but they do not think that he is more likely to be a Hebrew language major than a humanities major (superset of Hebrew language). Thus, only when the person seems highly representative of a category is that category judged as more probable than its superordinate category. These incorrect appraisals remained even in the face of losing real money in bets on probabilities. [ 22 ]
The representativeness heuristic is also employed when subjects estimate the probability of a specific parameter of a sample. If the parameter highly represents the population, the parameter is often given a high probability. This estimation process usually ignores the impact of the sample size.
A concept proposed by Tversky and Kahneman provides an example of this bias in a problem about two hospitals of differing size. [ 25 ]
Approximately 45 babies are born each day in the large hospital while 15 babies are born each day in the small hospital. Half (50%) of all babies born in general are boys. However, the percentage changes from one day to another. For a 1-year period, each hospital recorded the days on which more than 60% of the babies born were boys. The question posed is: Which hospital do you think recorded more such days?
The values shown in parentheses are the number of students choosing each answer. [ 25 ]
The results show that more than half the respondents selected the wrong answer (third option). This is due to the respondents ignoring the effect of sample size. The respondents selected the third option most likely because the same statistic represents both the large and small hospitals. According to statistical theory, a small sample size allows the statistical parameter to deviate considerably compared to a large sample. [ 25 ] Therefore, the large hospital would have a higher probability to stay close to the nominal value of 50%.
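The exact probabilities follow from the binomial distribution; a sketch (Python, assuming independent births with p = 0.5):

```python
from math import comb

def p_more_than_60pct_boys(n, p=0.5):
    """Probability that strictly more than 60% of n births are boys."""
    k_min = 3 * n // 5 + 1   # smallest k with k/n > 0.6, in integer arithmetic
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

print(p_more_than_60pct_boys(15))  # ~0.15: the small hospital
print(p_more_than_60pct_boys(45))  # ~0.07: the large hospital
```

The small hospital sees such days roughly twice as often, reflecting the larger sampling variability of small samples.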
The gambler's fallacy , also known as the Monte Carlo fallacy or the fallacy of the maturity of chances, is the belief that, if an event (whose occurrences are independent and identically distributed ) has occurred less frequently than expected, it is more likely to happen again in the future (or vice versa). The fallacy is commonly associated with gambling , where it may be believed, for example, that the next dice roll is more likely to be six than is usually the case because there have recently been fewer than the expected number of sixes. | https://en.wikipedia.org/wiki/Representativeness_heuristic |
The repressilator is a genetic regulatory network consisting of at least one feedback loop with at least three genes, each expressing a protein that represses the next gene in the loop. [ 1 ] In biological research, repressilators have been used to build cellular models and understand cell function. There are both artificial and naturally-occurring repressilators. Recently, the naturally-occurring repressilator clock gene circuit in Arabidopsis thaliana ( A. thaliana ) and mammalian systems have been studied.
Artificial repressilators were first engineered by Michael Elowitz and Stanislas Leibler in 2000, [ 2 ] complementing other research projects studying simple systems of cell components and function. In order to understand and model the design and cellular mechanisms that confer a cell's function, Elowitz and Leibler created an artificial network consisting of a loop of three transcriptional repressors . This network was designed from scratch to exhibit a stable oscillation that acts like an electrical oscillator system with fixed time periods. The network was implemented in Escherichia coli ( E. coli ) via recombinant DNA transfer. It was then verified that the engineered colonies did indeed exhibit the desired oscillatory behavior.
The repressilator consists of three genes connected in a feedback loop , such that each gene represses the next gene in the loop and is repressed by the previous gene. In the synthetic insertion into E. coli , green fluorescent protein (GFP) was used as a reporter so that the behavior of the network could be observed using fluorescence microscopy .
The design of the repressilator was guided by biological and circuit principles with discrete and stochastic models of analysis. Six differential equations were used to model the kinetics of the repressilator system based on protein and mRNA concentrations, as well as appropriate parameter and Hill coefficient values. In the study, Elowitz and Leibler generated figures showing oscillations of repressor proteins, using integration and typical parameter values as well as a stochastic version of the repressilator model using similar parameters. These models were analyzed to determine the values of various rates that would yield a sustained oscillation. It was found that these oscillations were favored by promoters coupled to efficient ribosome binding sites , cooperative transcriptional repressors, and comparable protein and mRNA decay rates.
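The dimensionless form of these six equations from the original paper can be integrated numerically; a sketch (Python with SciPy; the parameter values are illustrative, not the fitted ones):

```python
from scipy.integrate import solve_ivp

def repressilator(t, y, alpha=216.0, alpha0=0.216, beta=5.0, n=2.0):
    """Three mRNAs (m1..m3) and three proteins (p1..p3); each protein
    represses transcription of the next gene in the loop."""
    m1, m2, m3, p1, p2, p3 = y
    return [
        -m1 + alpha / (1 + p3**n) + alpha0,   # gene 1 repressed by protein 3
        -m2 + alpha / (1 + p1**n) + alpha0,   # gene 2 repressed by protein 1
        -m3 + alpha / (1 + p2**n) + alpha0,   # gene 3 repressed by protein 2
        -beta * (p1 - m1),                    # protein tracks its mRNA
        -beta * (p2 - m2),
        -beta * (p3 - m3),
    ]

sol = solve_ivp(repressilator, (0, 100), [1, 2, 3, 0, 0, 0], max_step=0.1)
print(sol.y[3:, -1])  # plotting sol.y[3:] against sol.t shows sustained oscillations
```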
This analysis motivated two design features which were engineered into the genes. First, promoter regions were replaced with a more efficient hybrid promoter which combined the E. coli phage lambda PL (λ PL) promoter with lac repressor ( Lacl ) and Tet repressor ( TetR ) operator sequences. Second, to reduce the disparity between the lifetimes of the repressor proteins and the mRNAs, a carboxy terminal tag based on the ssrA-RNA sequence was added at the 3' end of each repressor gene. This tag is recognized by proteases which target the protein for degradation. The design was implemented using a low-copy plasmid encoding the repressilator and a higher-copy reporter, which were used to transform a culture of E. coli .
Circadian circuits in plants feature a transcriptional regulatory feedback loop called the repressilator. In the core oscillator loop (outlined in gray) in A. thaliana , light is first sensed by two cryptochromes and five phytochromes . Two transcription factors, Circadian Clock Associated 1 (CCA1) and Late Elongated Hypocotyl (LHY), repress genes associated with evening expression like Timing of CAB expression 1 ( TOC1 ) and activate genes associated with morning expression by binding to their promoters. TOC1 , an evening gene, positively regulates CCA1 and LHY via an unknown mechanism. [ 3 ] Evening-phased transcription factor CCA1 Hiking Expedition (CHE) and histone demethylase jumonji C domain-containing 5 (JMJD5) directly repress CCA1 . Other components have been found to be expressed throughout the day and either directly or indirectly inhibit or activate a consequent element in the circadian circuit, thereby creating a complex, robust and flexible network of feedback loops. [ 3 ]
The morning-phase expression loop refers to the genes and proteins that regulate rhythms during the day in A. thaliana . The two main genes are LHY and CCA1, which encode the LHY and CCA1 transcription factors. [ 4 ] These proteins form heterodimers that enter the nucleus and bind to the TOC1 gene promoter, repressing the production of TOC1 protein. When TOC1 protein is expressed, it serves to regulate LHY and CCA1 by inhibition of their transcription. This was later supported in 2012 by Alexandra Pokhilko, who used computational analyses to show that TOC1 served this role as an inhibitor of LHY and CCA1 expression. [ 5 ] The morning loop serves to inhibit hypocotyl elongation, in contrast with the evening-phase loop, which promotes hypocotyl elongation. The morning-phase loop has been shown to be incapable of supporting circadian oscillation when evening-phase expression genes have been mutated, [ 5 ] suggesting the interdependency of each component in this naturally-occurring repressilator.
Early Flowering 3 ( ELF3 ), Early Flowering 4 ( ELF4 ) and Phytoclock1 ( LUX ) are the key elements in evening-phased clock gene expression in A. thaliana. They form the evening complex, in which LUX binds to the promoters of Phytochrome Interacting Factor 4 ( PIF4 ) and Phytochrome Interacting Factor 5 ( PIF5 ) and inhibits them. [ 3 ] As a result, hypocotyl elongation is repressed in the early-evening. When the inhibition is alleviated late at night, the hypocotyl elongates. Photoperiod flowering is controlled by output gene Gigantea ( GI ). GI is activated at night and activates the expression of Constans ( CO ), which activates the expression of Flowering Locus T ( FT ). FT then causes flowering in long-days. [ 3 ]
Mammals evolved an endogenous timing mechanism to coordinate both physiology and behavior to the 24-hour period. [ 6 ] In 2016, researchers identified a sequence of three subsequent inhibitions within this mechanism that they identified as a repressilator, which is now believed to serve as a major core element of this circadian network. The necessity of this system was established through a series of knockouts of cryptochrome ( Cry ), period ( Per ), and Rev-erb , core mammalian clock genes whose loss leads to arrhythmicity. [ 6 ] The model that these researchers generated includes Bmal1 as a driver of E-box mediated transcription, Per2 and Cry1 as early and late E-box repressors, respectively, as well as the D-box regulator Dbp and the nuclear receptor Rev-erb-α . The sequential inhibitions by Rev-erb , Per and Cry1 can generate sustained oscillations, and by clamping all other components except for this repressilator, oscillations persisted with similar amplitudes and periods. [ 6 ] All oscillating networks seem to involve some combination of these three core genes, as demonstrated in various schematics released by researchers.
The repressilator model has been used to model and study other biological pathways and systems, and extensive work on the repressilator's modeling capacities has since been performed. In 2003, the repressilator's representation and validation as a biological model with many variables was carried out using the Simpathica system, which verified that the model does indeed oscillate with all of its complexities.
As stated in Elowitz and Leibler’s original work, the ultimate goal for repressilator research is to build an artificial circadian clock that mirrors its natural, endogenous counterpart. This would involve developing an artificial clock with reduced noise and temperature compensation in order to better understand circadian rhythms that can be found in every domain of life. [ 7 ] Disruption of circadian rhythms may lead to loss of rhythmicity in metabolic and transcriptional processes, and even quicken the onset of certain neurodegenerative diseases such as Alzheimer's disease . [ 8 ] In 2017, oscillators that generated circadian rhythms and were not influenced much by temperature were created in a laboratory. [ 6 ]
Pathologically , the repressilator model can be used to model cell growth and abnormalities that may arise, such as those present in cancer cells. [ 9 ] In doing so, new treatments may be developed based on circadian activity of cancerous cells. Additionally, in 2016, a research team improved upon the previous design of the repressilator. Following noise (signal processing) analysis, the authors moved the GFP reporter construct onto the repressilator plasmid and removed the ssrA degradation tags from each repressor protein. This extended the period and improved the regularity of the oscillations of the repressilator. [ 10 ]
In 2019, a study furthered Elowitz and Leibler's model by improving the repressilator system by achieving a model with a unique steady state and new rate function. This experiment expanded the current knowledge of repression and gene regulation . [ 11 ]
Artificial repressilators were first realized by implanting a synthetic inhibition loop into E. coli . This represented the first implementation of synthetic oscillations in an organism. Further implications of this include the possibility of synthetically rescuing mutated components of oscillations in model organisms. [ 7 ]
The artificial repressilator is a milestone of synthetic biology which shows that genetic regulatory networks can be designed and implemented to perform novel functions. However, it was found that the cells' oscillations drifted out of phase after a period of time and the artificial repressilator's activity was influenced by cell growth. The initial experiment [ 7 ] therefore gave new appreciation to the circadian clock found in many organisms, as endogenous repressilators are significantly more robust than implanted artificial repressilators. New investigations at the RIKEN Quantitative Biology Center have found that chemical modifications to a single protein molecule could form a temperature independent, self-sustainable oscillator . [ 12 ]
Artificial repressilators could potentially aid research and treatments in fields ranging from circadian biology to endocrinology. They are increasingly able to demonstrate the synchronization inherent to natural biological systems and the factors that affect them. [ 13 ]
A better understanding of the naturally-occurring repressilator in model organisms with endogenous, circadian timings, like A. thaliana, has applications in agriculture, especially in regards to plant rearing and livestock management. [ 14 ] | https://en.wikipedia.org/wiki/Repressilator |
In molecular genetics , a repressor is a DNA- or RNA-binding protein that inhibits the expression of one or more genes by binding to the operator or associated silencers . A DNA-binding repressor blocks the attachment of RNA polymerase to the promoter , thus preventing transcription of the genes into messenger RNA . An RNA-binding repressor binds to the mRNA and prevents translation of the mRNA into protein. This blocking or reducing of expression is called repression.
If an inducer , a molecule that initiates the gene expression, is present, then it can interact with the repressor protein and detach it from the operator. RNA polymerase then can transcribe the message (expressing the gene). A co-repressor is a molecule that can bind to the repressor and make it bind to the operator tightly, which decreases transcription.
A repressor that binds with a co-repressor is termed an aporepressor or inactive repressor . One type of aporepressor is the trp repressor , an important metabolic protein in bacteria. The above mechanism of repression is a type of a feedback mechanism because it only allows transcription to occur if a certain condition is present: the presence of specific inducer (s). In contrast, an active repressor binds directly to an operator to repress gene expression.
While repressors are commonly found in prokaryotes, they are rare in eukaryotes. Furthermore, most known eukaryotic repressors are found in simple organisms (e.g., yeast), and act by interacting directly with activators. [ 1 ] This contrasts with prokaryotic repressors, which can also alter DNA or RNA structure.
Within the eukaryotic genome are regions of DNA known as silencers . These are DNA sequences that bind to repressors to partially or fully repress a gene. Silencers can be located several bases upstream or downstream from the actual promoter of the gene. Repressors can also have two binding sites: one for the silencer region and one for the promoter . This causes chromosome looping, allowing the promoter region and the silencer region to come in proximity of each other.
The lacZYA operon houses genes encoding proteins needed for lactose breakdown. [ 2 ] The lacI gene codes for a protein called "the repressor" or "the lac repressor", which functions as a repressor of the lac operon. [ 2 ] The gene lacI is situated immediately upstream of lacZYA but is transcribed from its own lacI promoter. [ 2 ] The lacI gene synthesizes the LacI repressor protein, which represses lacZYA by binding to the operator sequence lacO . [ 2 ]
The lac repressor is constitutively expressed and usually bound to the operator region of the promoter , which interferes with the ability of RNA polymerase (RNAP) to begin transcription of the lac operon. [ 2 ] In the presence of the inducer allolactose , the repressor changes conformation, reducing its DNA binding strength, and dissociates from the operator DNA sequence in the promoter region of the lac operon. RNAP is then able to bind to the promoter and begin transcription of the lacZYA genes. [ 2 ]
An example of a repressor protein is the methionine repressor MetJ. MetJ interacts with DNA bases via a ribbon-helix-helix (RHH) motif. [ 3 ] MetJ is a homodimer consisting of two monomers , which each provides a beta ribbon and an alpha helix . Together, the beta ribbons of each monomer come together to form an antiparallel beta-sheet which binds to the DNA operator ("Met box") in its major groove. Once bound, the MetJ dimer interacts with another MetJ dimer bound to the complementary strand of the operator via its alpha helices. AdoMet binds to a pocket in MetJ that does not overlap the site of DNA binding.
The Met box has the DNA sequence AGACGTCT, a palindrome (it shows dyad symmetry ) allowing the same sequence to be recognized on either strand of the DNA. The junction between C and G in the middle of the Met box contains a pyrimidine-purine step that becomes positively supercoiled forming a kink in the phosphodiester backbone. This is how the protein checks for the recognition site as it allows the DNA duplex to follow the shape of the protein. In other words, recognition happens through indirect readout of the structural parameters of the DNA, rather than via specific base sequence recognition.
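The dyad symmetry of the Met box is easy to verify: the sequence equals its own reverse complement. A short sketch:

```python
def reverse_complement(seq):
    """Reverse complement of a DNA sequence."""
    pair = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pair[b] for b in reversed(seq))

met_box = "AGACGTCT"
print(reverse_complement(met_box) == met_box)  # True: same on both strands
```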
Each MetJ dimer contains two binding sites for the cofactor S-Adenosyl methionine (SAM) which is a product in the biosynthesis of methionine. When SAM is present, it binds to the MetJ protein, increasing its affinity for its cognate operator site, which halts transcription of genes involved in methionine synthesis. When SAM concentration becomes low, the repressor dissociates from the operator site, allowing more methionine to be produced.
The L-arabinose operon houses genes coding for arabinose-digesting enzymes. These function to break down arabinose as an alternative source of energy when glucose is low or absent. [ 4 ] The operon consists of a regulatory repressor gene (araC), four control sites (araO2, araO1, araI1, and araI2), two promoters (Parac/ParaBAD) and three structural genes (araBAD). Once produced, AraC acts as a repressor by binding to the araI region to form a loop, which prevents polymerases from binding to the promoter and transcribing the structural genes into proteins.
In the absence of arabinose and AraC (the repressor), loop formation is not initiated and structural gene expression will be lower. In the absence of arabinose but presence of AraC, AraC proteins form dimers and bind to bring the araO2 and araI1 domains closer by loop formation. [ 5 ] In the presence of both arabinose and AraC, AraC binds the arabinose and acts as an activator. With this conformational change, AraC can no longer form a loop, and the linear gene segment promotes RNA polymerase recruitment to the structural araBAD region. [ 4 ]
The FLC operon is a conserved eukaryotic locus that is negatively associated with flowering via repression of genes needed for the development of the meristem to switch to a floral state in the plant species Arabidopsis thaliana . FLC expression has been shown to be regulated by the presence of FRIGIDA , and negatively correlates with decreases in temperature, resulting in the prevention of vernalization . [ 6 ] The degree to which expression decreases depends on the temperature and exposure time as seasons progress. After the downregulation of FLC expression, the potential for flowering is enabled. The regulation of FLC expression involves both genetic and epigenetic factors such as histone methylation and DNA methylation . [ 7 ] Furthermore, a number of cofactor genes act as negative transcription factors for FLC genes. [ 8 ] FLC genes also have a large number of homologues across species that allow for specific adaptations in a range of climates. [ 9 ] | https://en.wikipedia.org/wiki/Repressor
Reproductive biology includes both sexual and asexual reproduction. [ 1 ] [ 2 ]
Reproductive biology includes a wide range of fields:
Human reproductive biology is primarily controlled through hormones , which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands , and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands. [ 3 ]
Internal and external organs are included in the reproductive system. There are two reproductive systems, the male and the female, which contain different organs from one another. These systems work together in order to produce offspring. [ 4 ]
The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth. [ 3 ]
These structures include:
Estrogen is one of the sexual reproductive hormones that support the female reproductive system. [ 2 ]
The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia. [ 3 ]
Testosterone , an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in the testes and capable of synthesizing estrogens from androgens. [ 2 ] Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. [ 5 ] Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract. [ 6 ]
Animal reproduction occurs by two modes of action: sexual and asexual reproduction. [ 1 ] In asexual reproduction the generation of new organisms does not require the fusion of sperm with an egg. [ 1 ] In sexual reproduction, however, new organisms are formed by the fusion of haploid sperm and eggs, resulting in what is known as the zygote . [ 1 ] Although animals exhibit both sexual and asexual reproduction, the vast majority of animals reproduce by sexual reproduction. [ 1 ]
In many species, relatively little is known about the conditions needed for successful breeding. Such information may be critical to preventing widespread extinction as species are increasingly affected by climate change and other threats. [ 7 ] [ 8 ] In the case of some species of frogs, such as the Mallorcan midwife toad and the Kihansi spray toad , it has been possible to repopulate areas where wild populations had been lost. [ 9 ]
Gametogenesis is the formation of gametes , or reproductive cells.
Spermatogenesis is the production of sperm cells in the testis. In mature testes, primordial germ cells divide mitotically to form the spermatogonia, which in turn generate spermatocytes by mitosis. [ 10 ] Each spermatocyte then gives rise to four spermatids through meiosis. [ 10 ] Spermatids are haploid and undergo differentiation into sperm cells. [ 10 ] Later in reproduction the sperm will fuse with a female oocyte to form the zygote.
Oogenesis is the formation of a cell that will produce one ovum and three polar bodies . [ 10 ] Oogenesis begins in the female embryo with the production of oogonia from primordial germ cells. As in spermatogenesis, the primordial germ cells undergo mitotic division to form the cells that will later undergo meiosis, but meiosis is halted at the prophase I stage. [ 10 ] This cell is known as the primary oocyte. Human females are born with all the primary oocytes they will ever have. [ 10 ] Starting at puberty, the process of meiosis can complete, resulting in the secondary oocyte and the first polar body. [ 10 ] The secondary oocyte can later be fertilized by the male sperm.
Sexual reproduction provides two principal adaptive benefits in contrast to asexual reproduction . These are genetic recombination and outcrossing . [ 11 ] Genetic recombination during meiosis provides the benefit of recombinational repair of damage in the germline DNA passed on to progeny. [ 11 ] Outcrossing provides the benefit of genetic complementation , that is, the masking of expression of deleterious recessive alleles in progeny (see heterosis , hybrid vigor). [ 11 ] Genetic variation among progeny produced as a byproduct of sexual reproduction may also have adaptive consequences by providing infrequent beneficial variants that contribute to long-term evolutionary success. | https://en.wikipedia.org/wiki/Reproductive_biology |
Reproductive immunology refers to a field of medicine that studies interactions (or the absence of them) between the immune system and components related to the reproductive system , such as maternal immune tolerance towards the fetus, or immunological interactions across the blood-testis barrier . The concept has been used by fertility clinics to explain fertility problems , recurrent miscarriages and pregnancy complications observed when this state of immunological tolerance is not successfully achieved. Immunological therapy is a method for treating many cases of previously unexplained infertility or recurrent miscarriage. [ 1 ]
The immunological system of the mother plays an important role in pregnancy, considering that the embryo's tissue is half foreign and, unlike a mismatched organ transplant , is not normally rejected. During pregnancy, immunological events that take place within the body of the mother are crucial in determining the health of both mother and fetus. The mother must develop immunotolerance to her fetus since both organisms live in an intimate symbiotic situation. Progesterone-induced-blocking factor 1 ( PIBF1 ) is one of several known immunomodulatory factors contributing to immunotolerance during pregnancy. [ 2 ]
The placenta also plays an important part in protecting the embryo from immune attack by the mother's system. Secretory molecules produced by placental trophoblast cells and maternal uterine immune cells, within the decidua , work together to develop a functioning placenta. [ 3 ] Studies have proposed that proteins in semen may help a person's immune system prepare for conception and pregnancy. For example, there is substantial evidence for exposure to a partner's semen as prevention for pre-eclampsia , a pregnancy disorder, largely due to the absorption of several immune modulating factors present in seminal fluid, such as transforming growth factor beta (TGFβ). [ 4 ] [ 5 ]
An insufficiency in the maternal immune system, in which the fetus is treated as a foreign substance in the body, can lead to many pregnancy-related complications.
The maternal immune system, specifically within the uterus , changes to allow for implantation and protect a pregnancy from immune attack. While natural killer cells (NK cells), part of the innate immune system, are cytotoxic and responsible for attacking pathogens and infected cells, one subtype, uterine natural killer (uNK) cells, is modified during pregnancy. [ 12 ] Despite the fetus containing foreign paternal antigens, uNK cells do not recognize it as "non-self", [ 12 ] so their cytotoxic effects do not target the developing fetus. [ 12 ] The number and type of uNK cells and receptors change during a healthy pregnancy; the uNK profile differs in an abnormal pregnancy. In the first trimester of pregnancy, uNK cells are among the most abundant leukocytes present, but their number slowly declines up until term. [ 13 ] It has even been proposed that uNK cells contribute to the protection of extravillous trophoblast (EVT), important cells that contribute to the growth and development of a fetus. [ 14 ] [ 15 ] The uNK cells secrete transforming growth factor-β (TGF-β), which is believed to have an immunosuppressive effect through modulation of the leukocyte response to trophoblasts. [ 14 ]
Killer-cell immunoglobulin-like receptors (KIRs) are expressed by the uNK cells of the mother. Both polymorphic maternal KIRs and fetal human leukocyte antigen (HLA)-C molecules are variable and specific to a particular pregnancy. In any pregnancy, the maternal KIR genotype could be AA (no activating KIRs), AB, or BB (1–10 activating KIRs) and the HLA-C ligands for KIRs are divided into two groups: HLA-C1 and HLA-C2. Studies have shown that there is poor compatibility between specifically maternal KIR AA and fetal HLA-C2, which leads to recurrent miscarriage, preeclampsia and implantation failures. In assisted reproduction, these new insights could have an impact on the selection of single embryo transfer, oocyte, or sperm donor selection according to KIRs and HLA in patients with recurrent miscarriages. [ 16 ]
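The genotype combinatorics described above can be made concrete with a small sketch. The function below simply encodes the one pairing the text singles out as poorly compatible; the function name and risk labels are hypothetical, and a real clinical assessment would weigh many more factors.

```python
# Hypothetical sketch of the KIR/HLA-C pairing described above.
# Genotype categories follow the text; function name and risk labels are illustrative.

def kir_hla_risk(maternal_kir: str, fetal_hla_c: str) -> str:
    """Return an illustrative label for a maternal KIR / fetal HLA-C pairing.

    maternal_kir: "AA" (no activating KIRs), "AB", or "BB" (1-10 activating KIRs)
    fetal_hla_c:  "C1" or "C2" (the two HLA-C ligand groups for KIRs)
    """
    if maternal_kir not in {"AA", "AB", "BB"}:
        raise ValueError("maternal KIR genotype must be AA, AB, or BB")
    if fetal_hla_c not in {"C1", "C2"}:
        raise ValueError("fetal HLA-C group must be C1 or C2")
    # The text singles out maternal KIR AA with fetal HLA-C2 as poorly compatible,
    # associated with recurrent miscarriage, preeclampsia, and implantation failure.
    if maternal_kir == "AA" and fetal_hla_c == "C2":
        return "poor compatibility (elevated risk reported)"
    return "no elevated risk reported for this pairing"

print(kir_hla_risk("AA", "C2"))  # poor compatibility (elevated risk reported)
print(kir_hla_risk("BB", "C1"))  # no elevated risk reported for this pairing
```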
In both cancer and pregnancy, cells grow and divide at fast rates without being effectively targeted by the human immune system. There is a parallel immunomodulatory mechanism in pregnancy and cancer: T helper cell expression differs based on cytokine levels; in pregnancy, Type 1 (Th1) is up-regulated, whereas in cancer Type 2 (Th2) is up-regulated. [ 17 ] [ 18 ] In pregnancy, regulatory T cells (Treg cells, or Tregs) allow the body to accept the fetus. [ 17 ] Tregs perform a similar task with tumors. Cancer treatment aims to lower Treg activity, while treatment for pregnancy complications aims to increase Treg activity. This can cause complications in a person with cancer who is pregnant, since the goal is to decrease Tregs to eliminate the cancer, while that could also harm the fetus. Careful use of Treg-modifying immunotherapy is required to ensure the safety of the pregnant person and the fetus. [ 19 ]
Common in women of reproductive age, with an incidence approaching 80% by age 50, uterine fibroids are benign (non-cancerous) smooth muscle tumours. They are generally asymptomatic, although they can cause pain, sometimes severe, especially if large, subjected to torsion (twisting, which may occur when a fibroid is pedunculated , with a "stem" or "stalk"), or subjected to impaction (compression, which may be more likely in pregnancy). Between 10 and 30% of women with fibroids develop complications during pregnancy. While their relationship to adverse outcomes is unclear, fibroids are associated with early pregnancy bleeding and loss, premature rupture of membranes and labor, and caesarean sections . [ 20 ]
Pregnancy-related anatomical and physiological changes affect the pharmacokinetics (absorption, distribution, metabolism, and excretion) of many drugs, which may require drug regimen adjustment. Gastrointestinal motility is affected by delayed gastric emptying and increased gastric pH during pregnancy, which may alter drug absorption. [ 21 ] [ 22 ] Changes in body composition during pregnancy may change a drug's volume of distribution due to increased body weight and fat, increased total plasma volume, and decreased albumin. [ 22 ] Drugs subject to hepatic elimination are influenced by the increased production of estrogen and progesterone. [ 21 ] In addition, changes in hepatic enzyme activity may increase or decrease drug metabolism depending on the drug; however, most hepatic enzymes increase both metabolism and elimination during pregnancy. [ 21 ] Pregnancy also increases glomerular filtration, renal plasma flow, and the activity of transporters, which may require increased drug dosage. [ 21 ]
The FDA established labeling requirements for drugs and biological products with medication risks, allowing informed decision-making for pregnant and breastfeeding women and their health care providers. [ 23 ] A pregnancy category was required on the drug label for systemically absorbed medications with a risk of fetal injury; this has now been replaced with the pregnancy and lactation labeling rule (PLLR). [ 24 ] In addition to the pregnancy category requirements covering pregnancy, labor and delivery, and nursing mothers, the PLLR also includes information on females and males of reproductive potential. [ 24 ] The labeling changes were effective starting June 30, 2015. [ 24 ] The labeling requirements of over-the-counter (OTC) medicines were not affected. [ 24 ]
The change in medication exposure during pregnancy should concern both mother and fetus independently. For example, among antibiotics, penicillin may be used during pregnancy, whereas tetracycline is not recommended due to the potential risk to the fetus of a wide range of adverse effects. [ 25 ]
Some studies have shown that maternal exposure to sulfonamides during pregnancy may carry an increased risk of congenital malformations . [ 26 ] There has been no evidence that particular types of sulfonamides or doses administered increase or decrease the risk; exposure to sulfonamides has been the only direct connection. [ 26 ]
A threatened miscarriage is when signs or symptoms of miscarriage , most often bleeding occurring in the first 20 weeks of a pregnancy, are present. [ 27 ]
As the hormone progesterone is essential for the maintenance of pregnancy – among its important effects are maternal immune modulation and suppression of inflammatory responses – it is often used to prevent a threatened miscarriage from completing. Treatment with exogenous progesterone can lower the incidence of miscarriage; overall, though, the research suggests it does not alter the rate of pre-term births or live births . [ 28 ] However, one review suggested live birth rates were improved for a subgroup of women treated with micronized vaginal progesterone. The improved outcome was seen in the group at higher risk of miscarriage, namely people who had had three or more miscarriages and were also currently experiencing bleeding. [ 29 ]
The use of low-dose aspirin may be linked to increased rates of live births and fewer pregnancy losses for people who have had one or two miscarriages. [ 30 ] Based on this research, the National Institutes of Health (NIH) revised their 2014 advice on using low-dose aspirin, stating in 2021 that "low-dose aspirin therapy before conception and during early pregnancy may increase pregnancy chances and live births among women who have experienced one or two prior miscarriages." [ 31 ] [ a ]
Some studies have found that using both aspirin and heparin can increase the rate of live birth in a person with antiphospholipid syndrome . [ 32 ] Using heparin and aspirin together was also found to increase birth weight and gestational age. [ 32 ] People with antiphospholipid syndrome also had an increased live birth rate when low-molecular-weight heparin was substituted for heparin and co-administered with aspirin. [ 33 ]
The presence of anti-sperm antibodies in infertile men was first reported in 1954 by Rumke and Wilson. It has been noticed that the number of cases of sperm autoimmunity is higher in the infertile population, leading to the idea that autoimmunity could be a cause of infertility. Anti-sperm antibodies have been described as three immunoglobulin isotypes (IgG, IgA, IgM), each of which targets a different part of the spermatozoon. If more than 10% of the sperm are bound to anti-sperm antibodies (ASA), then infertility is suspected. The blood-testis barrier separates the immune system and the developing spermatozoa. The tight junctions between the Sertoli cells form the blood-testis barrier, but it is usually breached by physiological leakage. Not all sperm cells are protected by the barrier, because spermatogonia and early spermatocytes are located below the junction. These are protected by other means, such as immunologic tolerance and immunomodulation .
Infertility after anti-sperm antibody binding can be caused by autoagglutination , sperm cytotoxicity , blockage of sperm-ovum interaction, and inadequate motility. Which of these occurs depends on the binding site of the ASA.
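The 10% screening threshold mentioned above lends itself to a minimal worked example. The sketch below flags a sample when the ASA-bound fraction exceeds that threshold; all names are hypothetical, and an actual work-up would also consider the antibody isotype and binding site described above.

```python
# Minimal sketch of the ASA screening threshold described above; all names are hypothetical.

def asa_infertility_suspected(bound_sperm: int, total_sperm: int,
                              threshold: float = 0.10) -> bool:
    """Flag suspected immune-mediated infertility when the fraction of sperm
    bound by anti-sperm antibodies (ASA) exceeds the 10% threshold."""
    if total_sperm <= 0:
        raise ValueError("total_sperm must be positive")
    return bound_sperm / total_sperm > threshold

# 15 of 100 sperm ASA-bound -> 15% exceeds 10%, so infertility is suspected.
print(asa_infertility_suspected(15, 100))  # True
print(asa_infertility_suspected(5, 100))   # False
```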
Immunocontraceptive vaccines with a variety of proposed intervention strategies have been in development and under investigation since the 1970s. [ 34 ] Population-level use in wildlife for ecological management has accelerated, with research less constrained by possible outcomes which would be considered unacceptable in humans, such as permanent sterility. Experience and research in the non-human animal context informs the human research that is ongoing, albeit with slower progress. [ 35 ]
One approach is a vaccine designed to inhibit the binding of spermatozoa to the zona pellucida (ZP). Normally in fertilisation, spermatozoa bind to the zona pellucida surrounding the mature oocyte ; the resulting acrosome reaction breaks down the ovum's tough coating so that the sperm and ovum unite. A vaccine targeting this process has been tested in animals with a view to developing an effective contraceptive for humans. This DNA-based vaccine uses cloned ZP cDNA. It results in the production of antibodies against the ZP, which stop the sperm from binding to the zona pellucida and ultimately from fertilizing the ovum. [ 36 ]
Another vaccine that has been investigated is one against human chorionic gonadotropin (hCG). In phase I and early phase II human clinical trials, an experimental vaccine consisting of a dimer of β-hCG, with the tetanus toxoid (TT) as an adjuvant , produced antibodies against hCG in the small group of women immunized. The anti-hCG antibodies generated were capable of neutralizing the biological activity of hCG. Without active hCG, maintenance of the uterus in a condition receptive for implantation is not possible, thereby forestalling pregnancy. As only 80% of the women in the study had a level of circulating anti-hCG sufficient to prevent pregnancy, further development of this approach is needed to enhance the immunogenicity of the vaccine, in order that it produces a reliable and consistent immune response in a higher proportion of women. Towards this goal, vaccine variations using a peptide of β-hCG that is uniquely specific to hCG, while absent in other hormones – luteinizing hormone (LH), follicle-stimulating hormone (FSH), and thyroid-stimulating hormone (TSH) – are under investigation in animal models, for their possible enhancement of responses. [ 36 ]
Challenges to a fuller understanding of human reproductive immunology, including in pregnancy, include research limitations in existing in vitro and in vivo tools, and ethical concerns. Direct human research in this field mostly relies on stem cell culture and technological advancements that allow scientists to conduct research on organoids instead of living human subjects. In 2018, a review study concluded that organoids can be used to model organ development and disease. [ 37 ] Other studies have concluded that with further technological advancements, it is possible to create a detailed 3D organoid model of the female reproductive tract, which would introduce a more efficient method to conduct research and collect data in the fields of drug discovery , basic research and, in particular, reproductive immunology. [ 38 ]
The maternal-fetal interface has the ability to protect against pathogens by providing reproductive immunity. Simultaneously, it remodels the tissues needed for placentation . This unique feature of the maternal-fetal interface underscores that the decidual immunome , or the immune function of the female reproductive tract, is not yet fully understood. [ 38 ] [ 39 ]
In order to have a better understanding of reproductive immunology, more data needs to be collected and analyzed. Technological advances allow reproductive immunologists to collect increasingly complex data at cellular resolution. Polychromatic flow cytometry allows for greater resolution in identifying novel cell types by surface and intracellular proteins. [ 39 ] Two examples of methods in data acquisition include:
Reproductive immunology remains an open area of research, as the available data are not yet sufficient to support definitive findings. [ 38 ]
Maternal immune activation can be assessed by measuring multiple cytokines (cytokine profiling) in serum or plasma . This method is safe for the fetus since it only requires a peripheral blood sample from the mother, and it has been used to map maternal immune development throughout normal pregnancies as well as to study the relationship between immune activation and pregnancy complications or abnormal development of the fetus. Unfortunately, the method itself is unable to determine the sources and the targets of the cytokines and only shows systemic immune activation (as long as peripheral blood is analyzed), and the cytokine profile may vary rapidly as cytokines are short-lived proteins. It is also difficult to establish the exact relation between a cytokine profile and the underlying immunological processes.
The impact of unfavorable immune activation on fetal development and the risk of pregnancy complications is an active field of research. Many studies have reported an association between cytokine levels, especially of inflammatory cytokines, and the risk of developing preeclampsia, although the findings are mixed. [ 40 ] Decreased cytokine levels in early pregnancy, however, have been associated with impaired fetal growth . [ 41 ] Increased maternal cytokine levels have also been found to increase the risk of neurodevelopmental disorders such as autism spectrum disorders [ 42 ] and depression [ 43 ] in the offspring. However, more research is needed before these associations are fully understood. | https://en.wikipedia.org/wiki/Reproductive_immunology |
Reproductive interference is the interaction between individuals of different species during mate acquisition that leads to a reduction of fitness in one or more of the individuals involved. The interactions occur when individuals make mistakes or are unable to recognise their own species, labelled as 'incomplete species recognition'. Reproductive interference has been found within a variety of taxa, including insects , mammals , birds , amphibians , marine organisms , and plants . [ 1 ]
There are seven causes of reproductive interference, namely signal jamming, heterospecific rivalry , misdirected courtship , heterospecific mating attempts, erroneous female choice , heterospecific mating, and hybridisation . All types impose fitness costs on the participating individuals, generally through a reduction in reproductive success, a waste of gametes , and the expenditure of energy and nutrients . These costs are variable and dependent on numerous factors, such as the cause of reproductive interference, the sex of the parent, and the species involved. [ 1 ]
Reproductive interference occurs between species that occupy the same habitat and can play a role in influencing the coexistence of these species. It differs from competition as reproductive interference does not occur due to a shared resource . [ 1 ] Reproductive interference can have ecological consequences, such as the segregation of species both spatially and temporally. [ 2 ] It can also have evolutionary consequences; for example, it can impose a selective pressure on the affected species to evolve traits that better distinguish themselves from other species. [ 3 ]
Reproductive interference can occur at different stages of mating, from locating a potential mate, to the fertilisation of an individual of a different species. There are seven causes of reproductive interference that each have their own consequences on the fitness of one or both of the involved individuals. [ 1 ]
Signal jamming refers to the interference of one signal by another. [ 1 ] Jamming can occur by signals emitted from environmental sources (e.g. noise pollution), or from other species. In the context of reproductive interference, signal jamming only refers to the disruption of the transmission or retrieval of signals by another species. [ 1 ] The process of mate attraction and acquisition involves signals to aid in locating and recognising potential mates. Signals can also give the receiver an indication of the quality of a potential mate. [ 4 ] Signal jamming can occur in different types of communication. Auditory signal jamming, otherwise labelled as auditory masking, is when a noisy environment created by heterospecific signals causes difficulties in identifying conspecifics. [ 5 ] Likewise in chemical signals, pheromones that are meant to attract conspecifics and drive off others may overlap with heterospecific pheromones, leading to confusion. [ 6 ] Difficulties in recognising and locating conspecifics can result in a reduction of encounters with potential mates and a decrease in mating frequencies. [ 6 ]
Vibrational signalling in the American grapevine leafhopper - Individuals of the American grapevine leafhopper communicate with each other through vibrational signals that they transmit through the host plant. American grapevine leafhoppers are receptive to signals within their receptor's sensitivity range of 50 to 1000 Hz. The vibrations can be used to identify and locate potential female mates. To communicate successfully, a duet is performed between the male and female American grapevine leafhopper. The female replies within a specific timeframe after the male signal, and the male may use the timing of her reply to identify her. However, vibrational signals are prone to disruption and masking by heterospecific signals, conspecific signals, and background noise within their species-specific sensitivity range. The interference of the duet between a male and female American grapevine leafhopper can reduce the male's success in identifying and locating the female, which can reduce the frequency of mating; a toy model of this duet is sketched after the signal-jamming examples below. [ 7 ]
Auditory signalling in the gray treefrog ( Hyla versicolor ) and the Cope's gray treefrog ( Hyla chrysoscelis ) – The success of reproduction is dependent on a female's ability to correctly identify and respond to the advertisement call of a potential mate. At a breeding site with high densities of males, the male's chorus may overlap with heterospecific calls, making it difficult for the female to successfully locate a mate. When the advertisement calls of the male gray treefrog and male Cope's gray treefrog overlap, female gray treefrogs make mistakes and choose the heterospecific call. The number of errors the female makes depends on the amount of overlap between signals. Female Cope's gray treefrogs can better differentiate the signals and are only significantly affected when heterospecifics completely overlap conspecific male signals. However, female Cope's gray treefrogs prefer conspecific male signals that have less overlap (i.e. less interference). Furthermore, females have longer response times to overlapped calls, taking longer to choose a mate. Signal jamming can affect both males and females, as difficulties in identifying and locating a mate reduce their mating frequencies. Females may incur more costs if they mate with a male of lower quality, and may be exposed to a higher risk of predation within the breeding site if they take longer to choose and locate a male. Heterospecific mating between the gray treefrog and Cope's gray treefrog can also produce an infertile hybrid, which is highly costly to both parents due to the wastage of gametes. [ 8 ]
Chemical signalling in ticks – Female ticks produce a pheromone that is a species-specific signal to attract conspecific males that are attached to the host. Female ticks also produce a pheromone that is not species-specific and can attract males in close proximity to her. Pheromones emitted by closely related species can mix and lead to interference. Three species of ticks ( Aponomma hydrosauri , Amblyomma albolimbatum , and Amblyomma limbatum ) are closely related and can interfere with one another when attached to the same host. When two of the tick species are attached to the same host, males have difficulties locating a female of the same species, potentially due to the mixing of pheromones. The pheromone that is not species-specific is also capable of attracting males of all three species when they are in close proximity to the female. The presence of a heterospecific female can also reduce the time a male spends with conspecific females, leading to a reduction of reproductive success. Furthermore, when Amblyomma albolimbatum males attach to Aponomma hydrosauri females to mate, despite being unsuccessful, they remain attached, which physically inhibits subsequent males from mating. [ 9 ]
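As a toy model of the leafhopper duet described in the first example above: a male accepts a vibration as a conspecific reply only if it falls within his receptor's sensitivity range and arrives within a species-specific time window. Only the 50–1000 Hz range comes from the text; the reply window and all names are assumptions for illustration.

```python
# Toy model of the leafhopper duet; only the 50-1000 Hz sensitivity range comes
# from the text, while the reply window and all names are assumptions.

SENSITIVITY_HZ = (50.0, 1000.0)   # receptor sensitivity range from the text
REPLY_WINDOW_S = (0.1, 0.5)       # assumed species-specific reply timeframe

def male_accepts_reply(reply_freq_hz: float, reply_delay_s: float) -> bool:
    """A male treats a vibration as a conspecific female reply only if it is
    within his receptor's range and correctly timed after his own signal."""
    lo_f, hi_f = SENSITIVITY_HZ
    lo_t, hi_t = REPLY_WINDOW_S
    return lo_f <= reply_freq_hz <= hi_f and lo_t <= reply_delay_s <= hi_t

print(male_accepts_reply(300.0, 0.2))   # True: in band, correctly timed
print(male_accepts_reply(300.0, 2.0))   # False: too late, e.g. masked by noise
print(male_accepts_reply(1500.0, 0.2))  # False: outside the 50-1000 Hz range
```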
Heterospecific rivalry occurs between males, when a male of a different species is mistaken for a rival for mates (i.e. mistaken for a conspecific male). [ 1 ] In particular, heterospecific rivalry is hard to differentiate from other interspecific interactions, such as competition over food and other resources. [ 1 ] Costs to the mistaken males can include the wastage of time and energy, and a higher risk of injury and predation if they leave their mating territory to pursue the heterospecific male. [ 10 ] Males that chase off a heterospecific male may also leave females exposed to following intruders, whether conspecific or heterospecific. [ 10 ]
Eastern amberwing dragonfly ( Perithemis tenera ) – Male Eastern amberwing dragonflies are territorial as they defend mating territories from rival conspecific males. The male will perch around their territory and pursue conspecifics that fly near the perch. When the male is approached by a species of horsefly and butterfly , they are similarly pursued. The horsefly and butterfly do not compete over a common resource with the Eastern amberwing dragonfly, have not been seen interfering with the mating within the territory, and are neither a predator nor prey of the Eastern amberwing dragonfly. Instead, they are pursued potentially due to being mistaken for a rival conspecific as they share similar characteristics in size, colour, and flight height. The similar characteristics may be cues used by the male Eastern amberwing dragonfly to identify conspecifics. The heterospecific pursuit is costly for the male as they waste energy and time, have a higher risk of injury, and may lose opportunities to defend their territory against subsequent intruders. [ 11 ]
Misdirected courtship occurs when males display courtship towards individuals of a different species of either sex. [ 1 ] The misdirection is caused by a mistake during species recognition, or by an attraction towards heterospecifics that possess desirable traits. [ 1 ] Such desirable traits are those traits that normally are an indicator of conspecific mate quality, such as body size. [ 12 ] Costs associated with misdirecting courtship for males include the wasted energy investment in the attempt to court heterospecifics, and a decrease in mating frequency within species. [ 13 ]
Waxbill – Waxbills are monogamous , where an individual only has one partner. Parents also display biparental care , where both the mother and father contribute to the care of the offspring. The combination of monogamy and biparental investment suggests that both male and female waxbills should be 'choosy' and have strong preferences to reduce the chances of mating with a heterospecific. Males of the three species of waxbill – blue breast ( Uraeginthus angolensis ), red cheek ( Uraeginthus bengalus ), and blue cap ( Uraeginthus cyanocephalus ) – have differing strengths of preference for conspecific females when also presented with a heterospecific female. The differing preferences are affected by the body size of the females, potentially because body size is an indicator of fecundity , the ability to produce offspring. Blue breast males prefer conspecifics over red cheek females that are smaller; however, they have a weaker preference for conspecifics over heterospecific females that are only slightly smaller. Red cheek males have no preference for conspecifics in the presence of a larger blue breast female or blue cap female. Blue cap males prefer conspecifics over red cheek females; however, they have no preference for conspecifics in the presence of a larger blue breast female. [ 14 ]
Atlantic salmon ( Salmo salar ) – Atlantic salmon that were once native to Lake Ontario were reintroduced to the lake to study their spawning interactions with other species of fish, including the chinook salmon , coho salmon , and brown trout . Chinook salmon interacted with Atlantic salmon the most, where male chinooks attempted to court female Atlantic salmon. Male chinooks also chased away, and in some interactions, behaved aggressively towards other Atlantic salmon that approached female Atlantic salmon. A male brown trout was also observed to court a female Atlantic salmon. Misdirected courtship towards the Atlantic salmon can cause problems in waters that the Atlantic salmon currently occupy, and towards conservation efforts to reintroduce the Atlantic salmon to Lake Ontario. Implications of misdirected courtship on the Atlantic salmon can cause the delay or prevention of spawning, and the hybridisation of the Atlantic salmon with other species. [ 15 ]
Heterospecific mating attempts occur when males attempt to mate with females of a different species, regardless of whether courtship occurs. [ 1 ] During each mating attempt, sperm transfer may or may not occur. [ 1 ] Both sexes have costs when a heterospecific attempts to mate. Costs associated with heterospecific mating attempts include wasted energy, time, and potentially gametes if sperm transfer occurs. [ 16 ] There is also a risk of injury and increased risk of predation for both sexes. [ 16 ]
Cepero's grasshopper ( Tetrix ceperoi ) and the slender groundhopper ( Tetrix subulata ) – Naturally, the distributions of the Cepero's grasshopper and slender groundhopper overlap; however, they rarely co-exist. The reproductive success of the Cepero's grasshopper decreases when housed within the same enclosure as high numbers of the slender groundhopper. The reduction of reproductive success stems from an increase in mating attempts by the Cepero's grasshopper towards the slender groundhopper, which may be due to their larger body size. However, these mating attempts are generally unsuccessful, as the mate recognition of female slender groundhoppers is reliable, which may be due to the different courtship displays of the two species. The reduced reproductive success can cause the displacement of one of the species, potentially a factor in why the species rarely co-exist despite sharing similar habitat preferences. [ 17 ]
Italian agile frog ( Rana latastei ) - The distributions of the Italian agile frog and the agile frog ( Rana dalmatina ) overlap naturally in ponds and drainage ditches. In the areas of overlap, the abundance of agile frogs is higher than that of Italian agile frogs. When agile frogs are more abundant, mating between Italian agile frogs suffers interference. Male agile frogs attempt to displace male Italian agile frogs during amplexus , a type of mating position in which the male grasps onto the female. The Italian agile frog and agile frog have been seen in amplexus when co-existing. The mating attempts by the agile frog reduce the reproductive success of the Italian agile frog. The Italian agile frog also produces a lower number of viable eggs in the presence of the agile frog, potentially due to sperm competition between the male Italian agile frog and the agile frog. [ 18 ]
Species and sex-recognition errors among true toads are very well studied. [ 19 ] [ 20 ] Toads are known to have amplexus with species from other genera in the same family, [ 21 ] and species belonging to other families. [ 22 ] Hybridization cases have also been reported among toads. [ 23 ]
Erroneous female choice refers to mistakes made by females when differentiating males of the same species from males of a different species. [ 1 ] Female choice may occur at different stages of mating, including male courtship, copulation , or after copulation. [ 24 ] Female choice can depend on the availability of appropriate males. [ 25 ] When fewer conspecific males are available, females may make more mistakes as they become less 'choosy'. [ 25 ]
Striped ground cricket ( Allonemobius fasciatus ) and Southern ground cricket ( Allonemobius socius ) - The striped ground cricket and the Southern ground cricket are closely related species with an overlapping distribution. Both crickets use calling songs in order to identify and locate potential mates. The songs of the two species have a different frequency and period. Females of both species show little preference between the songs of conspecific and heterospecific males. The minor preference disappears if the intensity of the calls is altered. The inability to differentiate between the two songs can result in erroneous female choice. Erroneous female choice has costs, including energy wastage and an increased predation risk when searching for a conspecific. Additionally, it is highly costly when the mistake leads to heterospecific mating, which involves the wastage of gametes. However, the cost of erroneous female choice may be small for the striped ground and Southern ground cricket due to their high abundance. The inability to differentiate between the calling songs is proposed to be due to weak selective pressure on the females. [ 26 ]
Heterospecific mating is when two individuals from different species mate. After the male transfers his sperm to the heterospecific female, different processes can occur that may change the outcome of the copulation. Heterospecific mating may result in the production of a hybrid in some pairings. Costs associated with heterospecific mating include the wastage of time, energy, and gametes. [ 1 ]
Spider mites – Two closely related Panonychus mites, Panonychus citri and Panonychus mori , are generally geographically segregated and on occasion co-exist. However, the co-existence is not stable, as Panonychus mori is eventually excluded. The exclusion is a result of reproductive interference and of the higher reproductive rate of Panonychus citri . Heterospecific mating occurs between the two species, which can produce infertile eggs or infertile hybrid females. Furthermore, females are not able to produce female offspring after mating with a heterospecific. In addition to the wastage of energy, time, and gametes, the inability to produce female offspring after heterospecific mating skews the sex ratio of the co-existing populations. The high costs associated with heterospecific mating, along with the higher reproductive rate of Panonychus citri , lead to the displacement of Panonychus mori . [ 27 ]
Black-legged meadow katydid ( Orchelimum nigripes ) and the handsome meadow katydid ( Orchelimum pulchellum ) – The two closely related species of katydid have the same habitat preferences and co-exist along the Potomac River . Females of both species that mate heterospecifically have a large reduction in fecundity compared to conspecific pairings. Heterospecific mating either produces no eggs or male hybrids that may be sterile. Both individuals suffer a large fitness cost from the wastage of energy, time, and gametes, as they unsuccessfully pass on their genes. However, females may be able to offset this cost through multiple mating, as they receive nutritional benefits from consuming a nuptial food gift from the male, otherwise known as the spermatophylax . [ 28 ]
Hybridisation, in the context of reproductive interference, is defined as the mating between individuals of different species that can lead to a hybrid, an inviable egg, or an inviable offspring. [ 29 ] The frequency of hybridisation increases if it is hard to recognise potential mates, especially when heterospecifics share similarities, such as body size, [ 30 ] colouration, [ 31 ] and acoustic signals. [ 32 ] Costs associated with hybridisation are dependent on the level of parental investment and on the product of the pairing (hybrid). [ 1 ] Hybrids have the potential to become invasive if they develop traits that make them more successful than their parent species in surviving within new and changing habitats, otherwise known as hybrid vigor or heterosis . [ 33 ] Compared to each individual parent species, they hold a different combination of characteristics that can be more adaptable and 'fit' within particular environments. [ 34 ] If an inviable product is produced, both parents suffer from the cost of unsuccessfully passing on their genes. [ 1 ]
California Tiger Salamanders ( Ambystoma californiense ) x Barred Tiger Salamanders ( Ambystoma mavortium ) - California tiger salamanders are native to California, and were geographically isolated from Barred tiger salamanders. [ 35 ] Barred tiger salamanders were then introduced by humans to California, and the mating between these two species led to the formation of a population of hybrids. [ 35 ] The hybrids have since established in their parent habitat and spread into human-modified environments. [ 35 ] Within hybrids, the survivability of individuals with a mixed ancestry is higher than that of individuals with a highly native or highly introduced genetic background. [ 36 ] Stable populations can form as populations with a large native ancestry become mixed with more introduced genes, and vice versa. [ 36 ] Hybrids pose both ecological and conservation consequences as they threaten the population viability of the native California tiger salamanders, currently listed as an endangered species. [ 37 ] The hybrids may also affect the viability of other native organisms within the invaded regions, as they consume large quantities of aquatic invertebrates and tadpoles. [ 36 ]
Red deer ( Cervus elaphus ) x sika deer ( Cervus nippon ) - The sika deer were originally introduced by humans to Britain and have since established and spread through deliberate reintroductions and escape. The red deer are native to Britain and hybridise with the sika deer in areas in which they co-exist. Heterospecific mating between the red deer and sika deer can produce viable hybrids. Sika deer and the hybrids may outcompete and displace native deer from dense woodland. As the complete eradication of sika and the hybrids is impractical, management efforts are directed at minimising spread by not planting vegetation that would facilitate their spread into regions where the red deer still persist. [ 38 ] | https://en.wikipedia.org/wiki/Reproductive_interference |
The mechanisms of reproductive isolation are a collection of evolutionary mechanisms, behaviors and physiological processes critical for speciation . They prevent members of different species from producing offspring , or ensure that any offspring are sterile. These barriers maintain the integrity of a species by reducing gene flow between related species. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
The mechanisms of reproductive isolation have been classified in a number of ways. Zoologist Ernst Mayr classified the mechanisms of reproductive isolation in two broad categories: pre-zygotic for those that act before fertilization (or before mating in the case of animals ) and post-zygotic for those that act after it. [ 5 ] The mechanisms are genetically controlled and can appear in species whose geographic distributions overlap ( sympatric speciation ) or are separate ( allopatric speciation ).
Pre-zygotic isolation mechanisms are the most economical in terms of the natural selection of a population, as resources are not wasted on the production of a descendant that is weak, non-viable or sterile. These mechanisms include physiological or systemic barriers to fertilization.
Any of the factors that prevent potentially fertile individuals from meeting will reproductively isolate the members of distinct species. The types of barriers that can cause this isolation include: different habitats , physical barriers, and a difference in the time of sexual maturity or flowering. [ 6 ] [ 7 ]
An example of the ecological or habitat differences that impede the meeting of potential pairs occurs in two fish species of the family Gasterosteidae (sticklebacks). One species lives all year round in fresh water , mainly in small streams. The other species lives in the sea during winter, but in spring and summer individuals migrate to river estuaries to reproduce. The members of the two populations are reproductively isolated due to their adaptations to distinct salt concentrations. [ 6 ] An example of reproductive isolation due to differences in the mating season is found in the toad species Bufo americanus and Bufo fowleri . The members of these species can be successfully crossed in the laboratory, producing healthy, fertile hybrids. However, mating does not occur in the wild even though the geographical distribution of the two species overlaps. The reason for the absence of inter-species mating is that B. americanus mates in early summer and B. fowleri in late summer. [ 6 ] Certain plant species, such as Tradescantia canaliculata and T. subaspera , are sympatric throughout their geographic distribution, yet they are reproductively isolated as they flower at different times of the year. In addition, one species grows in sunny areas and the other in deeply shaded areas. [ 3 ] [ 7 ]
The different mating rituals of animal species create extremely powerful reproductive barriers, termed sexual or behavioral isolation, that isolate apparently similar species in the majority of the groups of the animal kingdom. In dioecious species, males and females have to search for a partner, be in proximity to each other, carry out the complex mating rituals and finally copulate or release their gametes into the environment in order to breed. [ 8 ] [ 9 ] [ 10 ]
Mating dances, the songs of males to attract females or the mutual grooming of pairs, are all examples of typical courtship behavior that allows both recognition and reproductive isolation. This is because each of the stages of courtship depend on the behavior of the partner. The male will only move onto the second stage of the exhibition if the female shows certain responses in her behavior. He will only pass onto the third stage when she displays a second key behavior. The behaviors of both interlink, are synchronized in time and lead finally to copulation or the liberation of gametes into the environment. No animal that is not physiologically suitable for fertilization can complete this demanding chain of behavior. In fact, the smallest difference in the courting patterns of two species is enough to prevent mating (for example, a specific song pattern acts as an isolation mechanism in distinct species of grasshopper of the genus Chorthippus [ 11 ] ).
Even where there are minimal morphological differences between species, differences in behavior can be enough to prevent mating. For example, Drosophila melanogaster and D. simulans , which are considered twin species due to their morphological similarity, do not mate even if they are kept together in a laboratory. [ 3 ] [ 12 ] Drosophila ananassae and D. pallidosa are twin species from Melanesia . In the wild they rarely produce hybrids, although in the laboratory it is possible to produce fertile offspring. Studies of their sexual behavior show that the males court the females of both species but the females show a marked preference for mating with males of their own species. A different regulator region has been found on chromosome II of both species that affects the selection behavior of the females. [ 12 ]
Pheromones play an important role in the sexual isolation of insect species. [ 13 ] These compounds serve to identify individuals of the same species and of the same or different sex. Evaporated molecules of volatile pheromones can serve as a wide-reaching chemical signal. In other cases, pheromones may be detected only at a short distance or by contact.
In species of the melanogaster group of Drosophila , the pheromones of the females are mixtures of different compounds, and there is a clear dimorphism in the type and/or quantity of compounds present for each sex. In addition, there are differences in the quantity and quality of constituent compounds between related species, and it is assumed that the pheromones serve to distinguish between individuals of each species. An example of the role of pheromones in sexual isolation is found in 'corn borers' in the genus Ostrinia . There are two twin species in Europe that occasionally cross. The females of both species produce pheromones that contain a volatile compound which has two isomers , E and Z; 99% of the compound produced by the females of one species is in the E isomer form, while the females of the other produce 99% isomer Z. The production of the compound is controlled by just one locus , and the interspecific hybrid produces an equal mix of the two isomers. The males, for their part, almost exclusively detect the isomer emitted by the females of their own species, such that hybridization, although possible, is scarce. The perception of the males is controlled by one gene , distinct from the one for the production of isomers; heterozygous males show a moderate response to the odour of either type. In this case, just two loci produce the effect of ethological isolation between species that are genetically very similar. [ 12 ]
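The two-locus Ostrinia system above can be summarized in a short sketch: one locus determines the E:Z blend a female emits, a separate gene determines which isomer a male detects, and heterozygotes blend or respond moderately. The 99:1 ratios follow the text; the genotype encodings and response values are illustrative assumptions.

```python
# Sketch of the two-locus Ostrinia pheromone system described above.
# The 99:1 ratios come from the text; genotype encodings are assumptions.

def female_blend(production_genotype: str) -> dict:
    """Fractions of the E and Z isomers a female emits, by production-locus genotype."""
    return {
        "EE": {"E": 0.99, "Z": 0.01},   # E-strain female
        "ZZ": {"E": 0.01, "Z": 0.99},   # Z-strain female
        "EZ": {"E": 0.50, "Z": 0.50},   # interspecific hybrid: equal mix
    }[production_genotype]

def male_response(perception_genotype: str, blend: dict) -> float:
    """Illustrative response strength (0..1) of a male to a given blend."""
    if perception_genotype == "EZ":      # heterozygous male: moderate response to either
        return 0.5
    preferred = "E" if perception_genotype == "EE" else "Z"
    return blend[preferred]              # males almost exclusively detect one isomer

# An E-strain male responds strongly to an E-strain female, barely to a Z-strain one,
# so hybridization remains possible but scarce:
print(male_response("EE", female_blend("EE")))  # 0.99
print(male_response("EE", female_blend("ZZ")))  # 0.01
```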
Sexual isolation between two species can be asymmetrical. This can happen when the mating that produces descendants only allows one of the two species to function as the female progenitor and the other as the male, while the reciprocal cross does not occur. For instance, half of the wolves tested in the Great Lakes area of America show mitochondrial DNA sequences of coyotes , while mitochondrial DNA from wolves is never found in coyote populations. This probably reflects an asymmetry in inter-species mating due to the difference in size of the two species as male wolves take advantage of their greater size in order to mate with female coyotes, while female wolves and male coyotes do not mate. [ 14 ] [a]
Mating pairs may not be able to couple successfully if their genitals are not compatible. The relationship between the reproductive isolation of species and the form of their genital organs was first noted in 1844 by the French entomologist Léon Dufour . [ 15 ] Insects' rigid carapaces act in a manner analogous to a lock and key, as they will only allow mating between individuals with complementary structures, that is, males and females of the same species (termed co-specifics ).
Evolution has led to the development of genital organs with increasingly complex and divergent characteristics, which will cause mechanical isolation between species. Certain characteristics of the genital organs will often have converted them into mechanisms of isolation. However, numerous studies show that organs that are anatomically very different can be functionally compatible, indicating that other factors also determine the form of these complicated structures. [ 16 ]
Mechanical isolation also occurs in plants and this is related to the adaptation and coevolution of each species in the attraction of a certain type of pollinator (where pollination is zoophilic ) through a collection of morphophysiological characteristics of the flowers (called pollination syndrome ), in such a way that the transport of pollen to other species does not occur. [ 17 ]
The synchronous spawning of many species of coral in marine reefs means that inter-species hybridization can take place as the gametes of hundreds of individuals of tens of species are liberated into the same water at the same time. Approximately a third of all the possible crosses between species are compatible, in the sense that the gametes will fuse and lead to individual hybrids. This hybridization apparently plays a fundamental role in the evolution of coral species. [ 18 ] However, the other two-thirds of possible crosses are incompatible. It has been observed that in sea urchins of the genus Strongylocentrotus the concentration of sperm that allows 100% fertilization of the ovules of the same species is only able to fertilize 1.5% of the ovules of other species. This inability to produce hybrid offspring, despite the fact that the gametes are found at the same time and in the same place, is due to a phenomenon known as gamete incompatibility , which is often found between marine invertebrates, and whose physiological causes are not fully understood. [ 19 ] [ 20 ]
In some Drosophila crosses, the swelling of the female's vagina has been noted following insemination. This has the effect of preventing the fertilization of the ovule by sperm of a different species. [ 21 ] [ clarification needed ]
In plants the pollen grains of a species can germinate in the stigma and grow in the style of other species. However, the growth of the pollen tubes may be detained at some point between the stigma and the ovules, in such a way that fertilization does not take place. This mechanism of reproductive isolation is common in the angiosperms and is called cross-incompatibility or incongruence . [ 22 ] [ 23 ] A relationship exists between self-incompatibility and the phenomenon of cross-incompatibility. In general, crosses between individuals of a self-compatible (SC) species and individuals of a self-incompatible (SI) species give hybrid offspring. On the other hand, a reciprocal cross (SI x SC) will not produce offspring, because the pollen tubes will not reach the ovules. This is known as unilateral incompatibility , which also occurs when two SC or two SI species are crossed. [ 24 ]
A number of mechanisms which act after fertilization, preventing successful inter-population crossing, are discussed below.
A type of incompatibility that is found as often in plants as in animals occurs when the egg or ovule is fertilized but the zygote does not develop, or it develops and the resulting individual has a reduced viability. [ 3 ] This is the case for crosses between species of the frog order, where widely differing results are observed depending upon the species involved. In some crosses there is no segmentation of the zygote (or it may be that the hybrid is extremely non-viable and changes occur from the first mitosis ). In others, normal segmentation occurs in the blastula but gastrulation fails. Finally, in other crosses, the initial stages are normal but errors occur in the final phases of embryo development . This indicates differentiation of the embryo development genes (or gene complexes) in these species and these differences determine the non-viability of the hybrids. [ 25 ]
Similar results are observed in mosquitoes of the genus Culex , but the differences are seen between reciprocal crosses , from which it is concluded that the same effect occurs in the interaction between the genes of the cell nucleus (inherited from both parents) as occurs in the genes of the cytoplasmic organelles which are inherited solely from the female progenitor through the cytoplasm of the ovule. [ 3 ]
In Angiosperms, the successful development of the embryo depends on the normal functioning of its endosperm . [ 26 ]
The failure of endosperm development and its subsequent abortion has been observed in many interploidal crosses (that is, those between populations with a particular degree of intra- or interspecific ploidy ), [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] and in certain crosses in species with the same level of ploidy. [ 30 ] [ 31 ] [ 32 ] The collapse of the endosperm, and the subsequent abortion of the hybrid embryo, is one of the most common post-fertilization reproductive isolation mechanisms found in angiosperms .
A hybrid may have normal viability but is typically deficient in terms of reproduction or is sterile. This is demonstrated by the mule and many other well-known hybrids. In all of these cases sterility is due to the interaction between the genes of the two species involved; to chromosomal imbalances due to the different number of chromosomes in the parent species; or to nucleus-cytoplasmic interactions such as in the case of Culex described above. [ 3 ]
Hinnies and mules are hybrids resulting from a cross between a stallion and a female donkey or between a male donkey and a mare, respectively. These animals are nearly always sterile due to the difference in the number of chromosomes between the two parent species. Both horses and donkeys belong to the genus Equus , but Equus caballus has 64 chromosomes, while Equus asinus only has 62. A cross will produce offspring (mule or hinny) with 63 chromosomes, which cannot form complete pairs and therefore do not divide in a balanced manner during meiosis . In the wild, horses and donkeys ignore each other and do not cross. In order to obtain mules or hinnies it is necessary to train the progenitors to accept copulation between the species, or to create them through artificial insemination .
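The chromosome arithmetic behind this sterility is easy to verify: each parent contributes a gamete carrying half its chromosome count, so the hybrid receives 32 + 31 = 63 chromosomes, an odd number that cannot form complete homologous pairs. A minimal check:

```python
# Worked arithmetic for the mule/hinny example above.

HORSE_CHROMOSOMES = 64   # Equus caballus
DONKEY_CHROMOSOMES = 62  # Equus asinus

# Each gamete carries half the parental count; the hybrid receives one of each.
hybrid = HORSE_CHROMOSOMES // 2 + DONKEY_CHROMOSOMES // 2
print(hybrid)            # 63
print(hybrid % 2 == 1)   # True: an odd count leaves a chromosome without a pairing
                         # partner, so meiosis is unbalanced and the hybrid is sterile
```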
The sterility of many interspecific hybrids in angiosperms has been widely recognised and studied. [ 33 ] Interspecific sterility of hybrids in plants has multiple possible causes. These may be genetic, related to the genomes, or the interaction between nuclear and cytoplasmic factors, as will be discussed in the corresponding section. Nevertheless, in plants, hybridization is a stimulus for the creation of new species – in contrast to the situation in animals. [ 34 ] Although the hybrid may be sterile, it can continue to multiply in the wild by asexual reproduction , whether vegetative propagation or apomixis or the production of seeds. [ 35 ] [ 36 ] [ clarification needed ] Indeed, interspecific hybridization can be associated with polyploidy and, in this way, the origin of new species that are called allopolyploids . Rosa canina , for example, is the result of multiple hybridizations. [ 37 ] Common wheat ( Triticum aestivum ) is an allohexaploid (allopolyploid with six chromosome sets) that contains the genomes of three different species. [ 38 ] [ 39 ]
In general, the barriers that separate species do not consist of just one mechanism. The twin species of Drosophila , D. pseudoobscura and D. persimilis , are isolated from each other by habitat ( persimilis generally lives in colder regions at higher altitudes), by the timing of the mating season ( persimilis is generally more active in the morning and pseudoobscura at night) and by behavior during mating (the females of both species prefer the males of their respective species). In this way, although the distribution of these species overlaps in wide areas of the western United States, these isolation mechanisms are sufficient to keep the species separated, such that, among the thousands of individuals analyzed, only a few fertile hybrid females have been found. However, when hybrids are produced between both species, the gene flow between the two will continue to be impeded as the hybrid males are sterile. Also, in contrast with the great vigor shown by the sterile males, the descendants of the backcrosses of the hybrid females with the parent species are weak and notoriously non-viable. This last mechanism restricts the genetic interchange between the two species of fly in the wild even further. [ 3 ]
Haldane's rule states that when one of the two sexes is absent, rare, or sterile in interspecific hybrids, that sex is the heterozygous (or heterogametic) sex. [ 40 ] In mammals, at least, there is growing evidence to suggest that this is due to high rates of mutation of the genes determining masculinity on the Y chromosome . [ 40 ] [ 41 ] [ 42 ]
It has been suggested that Haldane's rule simply reflects the fact that the male sex is more sensitive than the female when the sex-determining genes are included in a hybrid genome . But there are also organisms in which the heterozygous sex is the female (birds and butterflies), and the rule is followed in these organisms as well. Therefore, it is not a problem related to sexual development, nor to the sex chromosomes as such. Haldane proposed that the stability of hybrid individual development requires the full gene complement of each parent species, so that the hybrid of the heterozygous sex is unbalanced, missing at least one chromosome from one of the parental species. For example, the hybrid male obtained by crossing D. melanogaster females with D. simulans males, which is non-viable, lacks the X chromosome of D. simulans . [ 12 ]
The genetics of ethological isolation barriers will be discussed first. Pre-copulatory isolation occurs when the genes necessary for the sexual reproduction of one species differ from the equivalent genes of another species, such that if a male of species A and a female of species B are placed together they are unable to copulate. Study of the genetics involved in this reproductive barrier tries to identify the genes that govern distinct sexual behaviors in the two species. The males of Drosophila melanogaster and those of D. simulans conduct an elaborate courtship with their respective females, which differs for each species, but the differences between the species are more quantitative than qualitative. In fact the simulans males are able to hybridize with the melanogaster females. Although some lines of the latter species cross easily, others are hardly able to. Using this difference, it is possible to assess the minimum number of genes involved in pre-copulatory isolation between the melanogaster and simulans species and their chromosomal location. [ 12 ]
In experiments, flies of a D. melanogaster line that hybridizes readily with simulans were crossed with flies of another line that hybridizes rarely or not at all. The females of the segregated populations obtained by this cross were placed next to simulans males and the percentage of hybridization was recorded, which is a measure of the degree of reproductive isolation. It was concluded from this experiment that 3 of the 8 chromosomes of the diploid complement of D. melanogaster carry at least one gene that affects isolation, such that substituting a chromosome from a line of low isolation with one from a line of high isolation reduces the hybridization frequency. In addition, interactions between chromosomes are detected, so that certain combinations of the chromosomes have a multiplicative effect. [ 12 ] Cross incompatibility or incongruence in plants is also determined by major genes that are not associated with the self-incompatibility ( S ) locus . [ 43 ] [ 44 ] [ 45 ]
Reproductive isolation between species appears, in certain cases, a long time after fertilization and the formation of the zygote, as happens, for example, in the twin species Drosophila pavani and D. gaucha . The hybrids between the two species are not sterile, in the sense that they produce viable gametes, ovules and spermatozoa. However, they cannot produce offspring as the sperm of the hybrid male do not survive in the semen receptors of the females, be they hybrids or from the parent lines. In the same way, the sperm of the males of the two parent species do not survive in the reproductive tract of the hybrid female. [ 12 ] This type of post-copulatory isolation appears to be one of the most efficient systems for maintaining reproductive isolation in many species. [ 46 ]
The development of a zygote into an adult is a complex and delicate process of interactions between genes and the environment that must be carried out precisely. If the usual process is altered, by the absence of a necessary gene or the presence of a different one, normal development can be arrested, causing the non-viability of the hybrid or its sterility. It should be borne in mind that half of the chromosomes and genes of a hybrid come from one species and the other half from the other. If the two species are genetically different, there is little possibility that the genes from both will act harmoniously in the hybrid. From this perspective, only a few genes would be required in order to bring about post-copulatory isolation, as opposed to the situation described previously for pre-copulatory isolation. [ 12 ] [ 47 ]
In many species where pre-copulatory reproductive isolation does not exist, hybrids are produced but they are of only one sex. This is the case for the hybridization between females of Drosophila simulans and Drosophila melanogaster males: the hybrid females die early in their development, so that only males are seen among the offspring. However, populations of D. simulans have been recorded with genes that permit the development of adult hybrid females; that is, the viability of the females is "rescued". It is assumed that the normal activity of these speciation genes is to "inhibit" the expression of the genes that allow the growth of the hybrid. Regulator genes are also presumed to be involved. [ 12 ]
A number of these genes have been found in the melanogaster species group. The first to be discovered was "Lhr" (Lethal hybrid rescue), located on chromosome II of D. simulans . This dominant allele allows the development of hybrid females from the cross between simulans females and melanogaster males. [ 48 ] A different gene, also located on chromosome II of D. simulans , is "Shfr", which likewise allows the development of female hybrids, its activity being dependent on the temperature at which development occurs. [ 49 ] Other similar genes have been located in distinct populations of species of this group. In short, only a few genes are needed for an effective post-copulatory isolation barrier mediated through the non-viability of the hybrids.
As important as identifying an isolation gene is knowing its function. The Hmr gene, linked to the X chromosome and implicated in the viability of male hybrids between D. melanogaster and D. simulans , is a gene from the proto-oncogene family myb that codes for a transcriptional regulator. Two variants of this gene function perfectly well in each separate species, but in the hybrid they do not function correctly, possibly due to the different genetic background of each species. Examination of the allele sequence of the two species shows that non-synonymous (amino-acid-changing) substitutions are more abundant than synonymous substitutions , suggesting that this gene has been subject to intense natural selection. [ 50 ]
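This comparison underlies the dN/dS (Ka/Ks) test: an excess of non-synonymous over synonymous changes, relative to the neutral expectation, is the signature of positive selection. The following Python sketch illustrates the idea with a crude codon-by-codon count on made-up toy sequences; it is only an illustration and omits the normalisation by synonymous and non-synonymous sites used by full methods such as Nei–Gojobori.

```python
from Bio.Seq import Seq  # Biopython, used for codon translation

def codon_differences(cds_a: str, cds_b: str):
    """Crude count of synonymous vs. non-synonymous codon differences
    between two aligned coding sequences (no site normalisation)."""
    syn = nonsyn = 0
    for i in range(0, min(len(cds_a), len(cds_b)) - 2, 3):
        ca, cb = cds_a[i:i + 3], cds_b[i:i + 3]
        if ca == cb:
            continue
        # A changed codon encoding the same amino acid is synonymous;
        # one encoding a different amino acid is non-synonymous.
        if str(Seq(ca).translate()) == str(Seq(cb).translate()):
            syn += 1
        else:
            nonsyn += 1
    return syn, nonsyn

# Toy aligned sequences, invented purely for illustration.
a = "ATGGCTCTGAAA"  # codons: M  A  L  K
b = "ATGGCCCTCAGA"  # codons: M  A  L  R
syn, nonsyn = codon_differences(a, b)
print(f"synonymous: {syn}, non-synonymous: {nonsyn}")  # 2 and 1 here
```

A real analysis of a gene such as Hmr would further normalise these counts by the numbers of synonymous and non-synonymous sites and test whether the resulting ratio exceeds 1.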
The Dobzhansky–Muller model proposes that reproductive incompatibilities between species are caused by the interaction of the genes of the respective species. It has been demonstrated recently that Lhr has functionally diverged in D. simulans and will interact with Hmr which, in turn, has functionally diverged in D. melanogaster , to cause the lethality of the male hybrids. Lhr is located in a heterochromatic region of the genome and its sequence has diverged between these two species in a manner consistent with the mechanisms of positive selection. [ 51 ] An important unanswered question is whether the genes detected correspond to old genes that initiated speciation by favoring hybrid non-viability, or to modern genes that have appeared post-speciation by mutation, that are not shared by the different populations, and that suppress the effect of the primitive non-viability genes. The OdsH (abbreviation of Odysseus ) gene causes partial sterility in the hybrid between Drosophila simulans and a related species, D. mauritiana , which is encountered only on Mauritius and is of recent origin. This gene shows monophyly in both species and also has been subject to natural selection. It is thought to be a gene that intervenes in the initial stages of speciation, while other genes that differentiate the two species show polyphyly . OdsH originated by duplication in the genome of Drosophila and has evolved at very high rates in D. mauritiana , while its paralogue , unc-4 , is nearly identical between the species of the melanogaster group. [ 52 ] [ 53 ] [ 54 ] [ 55 ] Seemingly, all these cases illustrate the manner in which speciation mechanisms originated in nature; they are therefore collectively known as "speciation genes": gene sequences with a normal function within the populations of a species that diverge rapidly in response to positive selection, thereby forming reproductive isolation barriers with other species. In general, all these genes have functions in the transcriptional regulation of other genes. [ 56 ]
The Nup96 gene is another example of the evolution of the genes implicated in post-copulatory isolation. It regulates the production of one of the approximately 30 proteins required to form a nuclear pore . In each of the species of the Drosophila simulans group, the protein from this gene interacts with the protein from another, as yet undiscovered, gene on the X chromosome in order to form a functioning pore. However, in a hybrid the pore that is formed is defective and causes sterility. The differences in the sequences of Nup96 have been subject to adaptive selection, similar to the other examples of speciation genes described above. [ 57 ] [ 58 ]
Post-copulatory isolation can also arise between chromosomally differentiated populations due to chromosomal translocations and inversions . [ 59 ] If, for example, a reciprocal translocation is fixed in a population, the hybrid produced between this population and one that does not carry the translocation will not have a complete meiosis . This will result in the production of unequal gametes containing unequal numbers of chromosomes, with reduced fertility as a consequence. In certain cases, complex translocations exist that involve more than two chromosomes, so that the meiosis of the hybrids is irregular and their fertility is zero or nearly zero. [ 60 ] Inversions can also give rise to abnormal gametes in heterozygous individuals, but this effect is of little importance compared to translocations. [ 59 ] An example of chromosomal changes causing sterility in hybrids comes from the study of Drosophila nasuta and D. albomicans , which are twin species from the Indo-Pacific region. There is no sexual isolation between them and the F1 hybrid is fertile. However, the F2 hybrids are relatively infertile and leave few descendants, which show a skewed sex ratio. The reason is that the X chromosome of albomicans is translocated and linked to an autosome, which causes abnormal meiosis in hybrids. Robertsonian translocations are variations in the numbers of chromosomes that arise either from the fusion of two acrocentric chromosomes into a single chromosome with two arms, reducing the haploid number, or, conversely, from the fission of one chromosome into two acrocentric chromosomes, increasing the haploid number. The hybrids of two populations with differing numbers of chromosomes can experience a certain loss of fertility, and therefore a poor adaptation, because of irregular meiosis.
A large variety of mechanisms have been demonstrated to reinforce reproductive isolation between closely related plant species that either historically lived or currently live in sympatry . This phenomenon is driven by strong selection against hybrids, which typically suffer reduced fitness. Such negative fitness consequences have been proposed to be the result of negative epistasis in hybrid genomes and can also result from the effects of hybrid sterility . [ 61 ] In such cases, selection gives rise to population-specific isolating mechanisms that prevent either fertilization by interspecific gametes or the development of hybrid embryos.
Because many sexually reproducing species of plants are exposed to a variety of interspecific gametes , natural selection has given rise to a variety of mechanisms to prevent the production of hybrids. [ 62 ] These mechanisms can act at different stages in the developmental process and are typically divided into two categories, pre-fertilization and post-fertilization, indicating at which point the barrier acts to prevent either zygote formation or development. In the case of angiosperms and other pollinated species, pre-fertilization mechanisms can be further subdivided into two more categories, pre-pollination and post-pollination, the difference between the two being whether or not a pollen tube is formed. (Typically when pollen encounters a receptive stigma, a series of changes occur which ultimately lead to the growth of a pollen tube down the style, allowing for the formation of the zygote.) Empirical investigation has demonstrated that these barriers act at many different developmental stages and species can have none, one, or many barriers to hybridization with interspecifics.
A well-documented example of a pre-fertilization isolating mechanism comes from study of Louisiana iris species. These iris species were fertilized with interspecific and conspecific pollen loads, and measurements of hybrid progeny success demonstrated that differences in pollen-tube growth between interspecific and conspecific pollen led to a lower fertilization rate by interspecific pollen. [ 63 ] This demonstrates how a specific point in the reproductive process is acted on by a particular isolating mechanism to prevent hybrid formation.
Another well-documented example of a pre-fertilization isolating mechanism in plants comes from study of two wind-pollinated birch species. Study of these species led to the discovery that mixed conspecific and interspecific pollen loads still result in 98% conspecific fertilization rates, highlighting the effectiveness of such barriers. [ 64 ] In this example, pollen tube incompatibility and slower generative mitosis have been implicated in the post-pollination isolation mechanism.
Crosses between diploid and tetraploid species of Paspalum provide evidence of a post-fertilization mechanism preventing hybrid formation when pollen from tetraploid species was used to fertilize a female of a diploid species. [ 65 ] There were signs of fertilization and even endosperm formation, but subsequently this endosperm collapsed. This demonstrates an early post-fertilization isolating mechanism, in which the early hybrid embryo is detected and selectively aborted. [ 66 ] This process can also occur later during development, in which developed hybrid seeds are selectively aborted. [ 67 ]
Plant hybrids often suffer from an autoimmune syndrome known as hybrid necrosis. In the hybrids, specific gene products contributed by one of the parents may be inappropriately recognized as foreign and pathogenic, and thus trigger pervasive cell death throughout the plant. [ 68 ] In at least one case, a pathogen receptor, encoded by the most variable gene family in plants, was identified as being responsible for hybrid necrosis. [ 69 ]
In the brewer's yeast Saccharomyces cerevisiae , chromosomal rearrangements are a major mechanism reproductively isolating different strains. Hou et al. [ 70 ] showed that reproductive isolation acts postzygotically and could be attributed to chromosomal rearrangements. These authors crossed 60 natural isolates sampled from diverse niches with the reference strain S288c, identified 16 cases of reproductive isolation with reduced offspring viability, and found reciprocal chromosomal translocations in a large fraction of the isolates. [ 70 ]
In addition to the genetic causes of reproductive isolation between species, there is another factor that can cause postzygotic isolation: the presence of microorganisms in the cytoplasm of certain species. The presence of these organisms in one species and their absence in another causes the non-viability of the corresponding hybrid. For example, in the semi-species of the group D. paulistorum the hybrid females are fertile but the males are sterile; this is due to the presence of Wolbachia [ 71 ] in the cytoplasm, which alters spermatogenesis leading to sterility. Notably, incompatibility or isolation can also arise at an intraspecific level. Populations of D. simulans have been studied that show hybrid sterility according to the direction of the cross. The factor determining sterility has been found to be the presence or absence of the microorganism Wolbachia and the populations' tolerance or susceptibility to these organisms. This inter-population incompatibility can be eliminated in the laboratory through the administration of a specific antibiotic to kill the microorganism. Similar situations are known in a number of insects, as around 15% of species show infections caused by this symbiont . It has been suggested that, in some cases, the speciation process has taken place because of the incompatibility caused by these bacteria. Two wasp species, Nasonia giraulti and N. longicornis , carry two different strains of Wolbachia . Crosses between an infected population and one free from infection produce a nearly total reproductive isolation between the two species. However, if both species are free from the bacteria or both are treated with antibiotics, there is no reproductive barrier. [ 72 ] [ 73 ] Wolbachia also induces incompatibility due to the weakness of the hybrids in populations of spider mites ( Tetranychus urticae ), [ 74 ] between Drosophila recens and D. subquinaria , [ 75 ] and between species of Diabrotica (beetle) and Gryllus (cricket). [ 76 ]
In 1950 K. F. Koopman reported results from experiments designed to examine the hypothesis that selection can increase reproductive isolation between populations. He used D. pseudoobscura and D. persimilis in these experiments. When the flies of these species are kept at 16 °C, approximately a third of the matings are interspecific. In the experiment, equal numbers of males and females of both species were placed in containers suitable for their survival and reproduction. The progeny of each generation were examined in order to determine if there were any interspecific hybrids. These hybrids were then eliminated. An equal number of males and females of the resulting progeny were then chosen to act as progenitors of the next generation. As the hybrids were destroyed in each generation, the flies that mated solely with members of their own species produced more surviving descendants than the flies that mated with individuals of the other species. The number of hybrids decreased continuously in each generation, until by the tenth generation hardly any interspecific hybrids were produced. [ 77 ] It is evident that selection against the hybrids was very effective in increasing reproductive isolation between these species: from the third generation, the proportion of hybrids was less than 5%. This confirmed that selection acts to reinforce the reproductive isolation of two genetically divergent populations if the hybrids formed by these species are less well adapted than their parents.
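The logic of the experiment can be captured in a few lines of code. The sketch below is a minimal simulation with hypothetical parameters (it is not a fit to Koopman's data): each fly carries a heritable "preference", its probability of mating conspecifically, and heterospecific matings produce hybrids that are eliminated and leave no descendants.

```python
import random

N = 500           # flies per generation (hypothetical)
GENERATIONS = 10

def next_generation(prefs):
    """Conspecific maters leave offspring; heterospecific matings
    produce hybrids, which are removed from the experiment."""
    parents = [p for p in prefs if random.random() < p]
    if not parents:                      # vanishingly unlikely guard
        return prefs, 1.0
    hybrid_fraction = 1 - len(parents) / len(prefs)
    # Offspring inherit a parent's preference plus a little noise.
    offspring = [min(1.0, max(0.0,
                 random.choice(parents) + random.gauss(0, 0.05)))
                 for _ in range(len(prefs))]
    return offspring, hybrid_fraction

prefs = [random.uniform(0.4, 0.9) for _ in range(N)]  # initial variation
for gen in range(1, GENERATIONS + 1):
    prefs, hybrids = next_generation(prefs)
    print(f"generation {gen}: interspecific matings ~ {hybrids:.1%}")
```

Because flies with stronger conspecific preference are more likely to leave surviving descendants, the mean preference rises and the interspecific mating rate falls generation by generation, mirroring the downward trend Koopman observed.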
These discoveries allowed certain assumptions to be made regarding the origin of reproductive isolation mechanisms in nature. Namely, if selection reinforces the degree of reproductive isolation that exists between two species due to the poor adaptive value of the hybrids, it is expected that the populations of two species located in the same area will show a greater reproductive isolation than populations that are geographically separated (see reinforcement ). This mechanism for "reinforcing" hybridization barriers in sympatric populations is also known as the " Wallace effect ", as it was first proposed by Alfred Russel Wallace at the end of the 19th century, and it has been experimentally demonstrated in both plants and animals. [ 78 ] [ 79 ] [ 80 ] [ 81 ] [ 82 ] [ 83 ]
The sexual isolation between Drosophila miranda and D. pseudoobscura , for example, is more or less pronounced according to the geographic origin of the flies being studied. Flies from regions where the distributions of the two species overlap show greater sexual isolation than flies from populations originating in distant regions.
On the other hand, interspecific hybridization barriers can also arise as a result of the adaptive divergence that accompanies allopatric speciation . This mechanism was experimentally demonstrated by Diane Dodd using D. pseudoobscura . A single population of flies was divided into two, with one of the populations fed starch -based food and the other maltose -based food. Each subpopulation thus became adapted to its food type over a number of generations. After the populations had diverged over many generations, the groups were again mixed; it was observed that the flies would mate only with others from their adapted population. This indicates that mechanisms of reproductive isolation can arise even though the interspecific hybrids are not selected against. [ 84 ]
a. ^ The DNA of the mitochondria and chloroplasts is inherited from the maternal line, i.e. all the progeny derived from a particular cross possess the same cytoplasm (and genetic factors located in it) as the female progenitor. This is because the zygote possesses the same cytoplasm as the ovule, although its nucleus comes equally from the father and the mother. [ 3 ] | https://en.wikipedia.org/wiki/Reproductive_isolation |
Reproductive success is an individual's production of offspring per breeding event or lifetime. [ 1 ] It is not limited to the number of offspring produced by one individual, but also encompasses the reproductive success of those offspring themselves.
Reproductive success is different from fitness in that individual success is not necessarily a determinant of the adaptive strength of a genotype, since the effects of chance and the environment on survival and reproduction do not reflect those specific genes. [ 1 ] Reproductive success becomes a part of fitness when the offspring are actually recruited into the breeding population. If offspring quantity is correlated with quality, this holds up; if not, reproductive success must be adjusted by traits that predict juvenile survival in order to be measured effectively. [ 1 ]
Balancing quality and quantity involves finding the right trade-off between reproduction and maintenance. The disposable soma theory of aging holds that a longer lifespan comes at the cost of reproduction, so that longevity is not always correlated with high fecundity. [ 2 ] [ 3 ]
Parental investment is a key factor in reproductive success, since taking better care of offspring often gives them a fitness advantage later in life. [ 4 ] This makes mate choice and sexual selection important factors in reproductive success, which is another reason why reproductive success differs from fitness: individual choices and outcomes matter more than genetic differences. [ 5 ] As reproductive success is measured over generations, longitudinal studies are the preferred study type, as they follow a population or an individual over a long period of time in order to monitor its progression. These long-term studies are preferable because they smooth out the effects of variation in any single year or breeding season.
Nutrition is one of the factors that influences reproductive success, for example through the amounts consumed and, more specifically, carbohydrate-to-protein ratios. In some cases, the amounts or ratios of intake are more influential during certain stages of the lifespan. For example, in the Mexican fruit fly , male protein intake is critical only at eclosion. Intake at this time provides longer-lasting reproductive ability. After this developmental stage, protein intake has no effect and is not necessary for reproductive success. [ 6 ] In addition, Ceratitis capitata males were experimented on to see how protein intake during the larval stage affects mating success. Males were fed either a high-protein diet, which consisted of 6.5 g/100 mL, or a no-protein diet during the larval stage. Males that were fed protein had more copulations than those that were not, which ultimately correlates with higher mating success. [ 7 ] Protein-deprived black blow fly males have been seen to exhibit fewer oriented mounts and to inseminate fewer females than well-fed males. [ 8 ] In still other instances, prey deprivation or an inadequate diet has been shown to lead to a partial or complete halt in male mating activity. [ 9 ] Copulation lasted longer for sugar-fed males than for protein-fed flies, showing that carbohydrates were more important for a longer copulation duration. [ 10 ]
In mammals, amounts of protein, carbohydrates, and fats are seen to influence reproductive success. This was evaluated among 28 female black bears by measuring the number of cubs born. Using different fall foods, including corn, herbaceous plants, red oak, beech, and cherry, the protein, carbohydrate, and fat contents were noted, as each food varied in percent composition. Seventy percent of the bears on high-fat, high-carbohydrate diets produced cubs. Conversely, none of the 10 females on low-carbohydrate diets produced cubs, indicating that carbohydrates are a critical factor for reproductive success, whereas fat was not a hindrance. [ 11 ]
Adequate nutrition in pre-mating time periods has been shown to have the greatest effect on various reproductive processes in mammals. Increased nutrition, in general, during this time was most beneficial for oocyte and embryo development. As a result, offspring number and viability were also improved. Thus, proper timing of nutrition during the pre-mating period is key for the development and long-term benefit of the offspring. [ 12 ] Two different diets were fed to Florida scrub-jays and the effects on breeding performance were noted. One diet consisted of high protein and high fat, the other of high fat alone. The significant result was that the birds on the high-protein, high-fat diet laid heavier eggs than the birds on the fat-rich diet. There was a difference in the amount of water inside the eggs, which accounted for the different weights. It is hypothesized that the added water resulting from the protein-rich and fat-rich diet may contribute to the development and survival of the chick, therefore aiding reproductive success. [ 13 ]
Dietary intake also improves egg production, which can likewise be considered to help create viable offspring. Post-mating changes are seen in organisms in response to the conditions necessary for development. This is depicted in the two-spotted cricket, where female feeding was examined. It was found that mated females exhibited more overall consumption than unmated females. Observations of female crickets showed that after laying their eggs, their protein intake increased towards the end of the second day. The female crickets therefore require a larger consumption of protein to nourish the development of subsequent eggs and even further mating. More specifically, using geometrical framework analysis, mated females fed on a more protein-rich diet after mating. Unmated and mated female crickets were found to prefer protein-to-carbohydrate ratios of 2:1 and 3.5:1, respectively. [ 14 ] In the Japanese quail, the influence of diet quality on egg production was studied. The diets differed in percent protein composition, with the high-protein diet containing 20% and the low-protein diet 12%. It was found that both the number of eggs produced and their size were greater on the high-protein diet than on the low. Maternal antibody transmission, however, was unaffected; the immune response was not compromised since there was still a source of protein, although a low one. This means that the bird is able to compensate for the lack of protein in the diet, for example from protein reserves. [ 15 ]
Higher concentrations of protein in the diet have also been positively correlated with gamete production across various animals. The formation of oothecae in brown-banded cockroaches based on protein intake was tested. A protein intake of 5% proved too low, as it delayed mating, while an extreme of 65% protein directly killed the cockroaches. Oothecae production was optimal on a 25% protein diet. [ 16 ]
Although there is a trend of protein and carbohydrates being essential for various reproductive functions, including copulation success, egg development, and egg production, the ratio and amounts of each are not fixed. These values vary across a span of animals, from insects to mammals. For example, many insects may need a diet consisting of both protein and carbohydrates with a slightly higher protein ratio for reproductive success. On the other hand, a mammal like a black bear needs a higher amount of carbohydrates and fats, but not necessarily protein. Different types of animals have different requirements based on their physiology, so results cannot be generalized across different types of animals, still less across different species.
Evolutionarily, humans are socially well adapted to their environment and coexist with one another in a way that benefits the entire species. Cooperative breeding , the ability of humans to invest in and help raise others' offspring, is an example of a characteristic that sets them apart from non-human primates, even though some of the latter practice this system at a low frequency. [ 17 ] One of the reasons why humans require significantly more non-parental investment than other species is that they remain dependent on adults to take care of them throughout most of their juvenile period. [ 17 ] Cooperative breeding can be expressed through economic support, in which humans financially invest in someone else's offspring, or through social support, which may require active investment of energy and time. [ 17 ] This parenting system ultimately aids people in increasing their survival rate and reproductive success as a whole. [ 17 ] Hamilton's rule and kin selection are used to explain why this altruistic behavior has been naturally selected and what non-parents gain by investing in offspring that are not their own. [ 17 ] Hamilton's rule states that rb > c, where r = relatedness, b = benefit to the recipient, and c = cost to the helper. [ 17 ] This formula describes the relationship that has to hold among the three variables for kin selection to take place. If the helper's genetic relatedness to the offspring is close and the benefit is greater than the cost to the helper, kin selection is likely to be favored. [ 17 ] Even though kin selection does not directly benefit individuals who invest in relatives' offspring, it still greatly increases the reproductive success of a population by ensuring genes are passed down to the next generation. [ 17 ]
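As a worked illustration of the inequality (the numbers are hypothetical, chosen only to show how it is applied): suppose a helper aids a full sibling, so r = 0.5, and the help yields the sibling b = 3 additional offspring at a cost to the helper of c = 1 offspring of its own. Then

\[ rb = 0.5 \times 3 = 1.5 > 1 = c , \]

so Hamilton's rule is satisfied and helping can be favored by kin selection. For a first cousin (r = 0.125) the same help would fail the test, since 0.125 × 3 = 0.375 < 1.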
Some research has suggested that, historically, women have had a far higher reproductive success rate than men. The psychologist Roy Baumeister has suggested that the modern human has twice as many female ancestors as male ancestors. [ 18 ] [ 19 ] [ 20 ] [ 21 ]
Males and females should be considered separately in reproductive success because of their different limitations in producing the maximum number of offspring. Females face limitations such as gestation time (typically 9 months), followed by lactation, which suppresses ovulation and lowers the chance of becoming pregnant again quickly. [ 22 ] In addition, a female's ultimate reproductive success is limited by her ability to distribute her time and energy towards reproduction. Peter T. Ellison states, "The metabolic task of converting energy from the environment into viable offspring falls to the female, and the rate at which she can produce offspring is limited by the rate at which she can direct metabolic energy to the task". [ 22 ] Energy transferred to one category is taken away from the others. For example, if a female has not yet reached menarche , she will direct her energy only into growth and maintenance, because she cannot yet put energy towards reproducing. However, once a female is ready to begin putting energy into reproduction, she will have less energy to put towards overall growth and maintenance.
Females have a constraint on the amount of energy they need to put into reproduction: because females go through gestation, they have a set obligation of energy output. Males do not have this constraint and could therefore potentially produce more offspring, as their obligatory commitment of energy to reproduction is less than a female's; in particular, they are not constrained by the time and energy costs of gestation or lactation. All things considered, men and women are constrained for different reasons, and differ in the number of offspring they can produce. Females are also reliant on the genetic quality of their mate. This refers to the sperm quality of the male and the compatibility of the sperm's antigens with the female's immune system. [ 22 ] Humans in general look for phenotypic traits that display health and body symmetry. The pattern of constraints on female reproduction is consistent with human life-history and across all populations.
A difficulty in studying human reproductive success is its high variability. [ 23 ] Every person, male or female, is different, especially when it comes to reproductive success and fertility. Reproductive success is determined not only by behavior (choices), but also by physiological variables that cannot be controlled. [ 23 ]
In human males of advanced age (≥40 years), infertility is associated with a high prevalence of sperm DNA damage as measured by DNA fragmentation. [ 24 ] DNA fragmentation was also found to be inversely correlated with sperm motility . [ 24 ] These factors likely contribute to reduced reproductive success by males of advanced age.
The Blurton Jones 'backload model' "tested a hypothesis that the length of the birth intervals of !Kung hunter-gatherers allowed women to balance optimally the energetic demands of child bearing and foraging in a society where women had to carry small children and foraged substantial distances". [ 23 ] Behind this hypothesis is the fact that spacing birth intervals allowed for a better chance of child survival and ultimately promoted evolutionary fitness. [ 23 ] This hypothesis fits the evolutionary pattern of dividing an individual's energy among three areas: growth, maintenance, and reproduction. It is useful for gaining an understanding of "individual-level variation in fertility in small-scale, high fertility, societies (sometimes referred to by demographers as 'natural-fertility' populations)". [ 23 ] Reproductive success is hard to study, as many variables are involved and much depends on each condition and environment.
To supplement a complete understanding of reproductive success, or biological fitness, it is necessary to understand the theory of natural selection . Darwin's theory of natural selection explains how the change of genetic variation over time within a species allows some individuals to be better suited than others to withstand environmental pressures, find suitable mates, or find food sources. Over time those individuals pass their genetic makeup on to their offspring, and the frequency of the advantageous trait or gene therefore increases within that population.
The converse may also be true. If an individual is born with a genetic makeup that makes it less suited to its environment, it may have less chance of surviving and passing on its genes, and these disadvantageous traits may therefore decrease in frequency. [ 25 ] This is one example of how reproductive success, as well as biological fitness, is a main component of the theory of natural selection and evolution.
Throughout evolutionary history, an advantageous trait or gene will often continue to increase in frequency within a population only at the price of a loss or decrease in the functionality of another trait. This is known as an evolutionary trade-off, and is related to the concept of pleiotropy , where changes to a single gene have multiple effects. As one Oxford Academic review puts it, "The resulting 'evolutionary tradeoffs' reflect necessary compromises among the functions of multiple traits". [ 26 ] A variety of limitations, such as energy availability, resource allocation during biological development or growth, and constraints of the genetic makeup itself, mean that there is a balance between traits: an increase in the effectiveness of one trait may lead to a decrease in the effectiveness of others.
This is important to understand because, if certain individuals within a population have a trait that raises their reproductive fitness, that trait may have developed at the expense of others. Changes in genetic makeup through natural selection are not necessarily purely beneficial or purely deleterious; they may be both. For example, an evolutionary change that results in higher reproductive success at younger ages might ultimately result in a decreased life expectancy for those with that particular trait. [ 27 ] | https://en.wikipedia.org/wiki/Reproductive_success |
Reproductive synchrony is a term used in evolutionary biology and behavioral ecology . Reproductive synchrony—sometimes termed "ovulatory synchrony"—may manifest itself as "breeding seasonality". Where females undergo regular menstruation, " menstrual synchrony " is another possible term.
Reproduction is said to be synchronised when fertile matings across a population are temporally clustered, resulting in multiple conceptions (and consequent births) within a restricted time window. In marine and other aquatic contexts, the phenomenon may be referred to as mass spawning . Mass spawning has been observed and recorded in a large number of phyla, including in coral communities within the Great Barrier Reef . [ 1 ] [ 2 ]
In primates , reproductive synchrony usually takes the form of conception and birth seasonality. [ 3 ] The regulatory "clock", in this case, is the sun's position in relation to the tilt of the earth. In nocturnal or partly nocturnal primates (for example, owl monkeys), the periodicity of the moon may also come into play. [ 4 ] [ 5 ] Synchrony in general is an important variable for primates in determining the extent of "paternity skew", defined as the extent to which fertile matings can be monopolised by a fraction of the population of males. The greater the precision of female reproductive synchrony, and hence the greater the number of ovulating females who must be guarded simultaneously, the harder it is for any dominant male to succeed in monopolising a harem all to himself. This is simply because, by attending to any one fertile female, the male unavoidably leaves the others at liberty to mate with his rivals. The outcome is to distribute paternity more widely across the total male population, reducing paternity skew. [ 6 ]
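The guarding logic can be made concrete with a toy simulation (all parameters hypothetical): each female is fertile on a single day, the tightness of synchrony is set by the spread of those days around a seasonal peak, and the dominant male can guard, and so sire offspring with, at most one fertile female per day.

```python
import random

def dominant_share(n_females=20, season_days=120, spread=30, trials=2000):
    """Average share of paternities captured by a dominant male who can
    guard one fertile female per day (illustrative model only)."""
    total = 0.0
    for _ in range(trials):
        centre = season_days // 2
        # Each female's single fertile day, clustered around the peak.
        days = [max(0, min(season_days - 1,
                           round(random.gauss(centre, spread))))
                for _ in range(n_females)]
        guarded = len(set(days))   # one guardable female per distinct day
        total += guarded / n_females
    return total / trials

for spread in (30, 5, 1, 0.1):    # smaller spread = tighter synchrony
    print(f"spread {spread:>4}: dominant male's share "
          f"~ {dominant_share(spread=spread):.2f}")
```

With a wide spread of fertile days the dominant male's share approaches 1 (maximum skew); as the spread shrinks towards perfect synchrony, his share falls towards 1/N, the egalitarian limit.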
Reproductive synchrony can never be perfect. Nevertheless, theoretical models predict that group-living species will tend to synchronise wherever females can benefit by maximising the number of males offered chances of paternity, minimising reproductive skew. [ 7 ] For example, the cichlid fish Variabilichromis moorii spawns in the days leading up to each full moon (lunar synchrony), [ 8 ] and broods often exhibit multiple paternity. [ 9 ] The same models predict that female primates, including evolving humans, will tend to synchronise wherever fitness benefits can be gained by securing access to multiple males. Conversely, group-living females who need to restrict paternity to a single dominant harem-holder should assist him by avoiding synchrony. [ 10 ] [ 11 ]
In the human case, evolving females with increasingly heavy childcare burdens would have done best by resisting attempts at harem-holding by locally dominant males. No human female needs a partner who will get her pregnant only to disappear, abandoning her in favour of his next sexual partner. [ 12 ] To any local group of females, the more such philandering can be successfully resisted—and the greater the proportion of previously excluded males who can be included in the breeding system and persuaded to invest effort—the better. [ 13 ] Hence scientists would expect reproductive synchrony—whether seasonal, lunar or a combination of the two—to be central to evolving human strategies of reproductive levelling, reducing paternity skew and culminating in the predominantly monogamous egalitarian norms illustrated by extant hunter-gatherers . [ 14 ] Divergent climate regimes differentiating Neanderthal reproductive strategies from those of modern Homo sapiens have recently been analysed in these terms. [ 15 ] | https://en.wikipedia.org/wiki/Reproductive_synchrony |
Reproductive technology encompasses all current and anticipated uses of technology in human and animal reproduction, including assisted reproductive technology (ART), [ 1 ] contraception and others. The term assisted reproductive technology covers an array of devices and procedures that enable safer, improved and healthier reproduction . While this is not true of everyone, for many married couples the ability to have children is vital. Through such technology , infertile couples have been provided with options that allow them to conceive children. [ 2 ]
Assisted reproductive technology (ART) is the use of reproductive technology to treat low fertility or infertility . Modern technology can provide infertile couples with assisted reproductive options; the natural method of reproduction has become only one of many techniques used today. Millions of couples cannot reproduce on their own because of infertility and must therefore resort to these techniques. The main causes of infertility are hormonal malfunctions and anatomical abnormalities. [ 3 ] ART is currently the only form of assistance for individuals who, for the time being, can only conceive through surrogacy methods. [ 4 ] Examples of ART include in vitro fertilization (IVF) and its possible expansions.
In 1981, Elizabeth Carr became the first baby in the United States to be conceived through in vitro fertilization (IVF). Her birth gave hope to many couples struggling with infertility. Dr. Howard Jones brought together the leading practitioners of the five US-based IVF programs (Norfolk, Vanderbilt , the University of Texas at Houston , the University of Southern California and Yale) to discuss the establishment of a national registry for in vitro fertilization attempts and outcomes. In 1985 the Society for Assisted Reproductive Technology (SART) was founded as a special-interest entity within the American Fertility Society. [ 5 ] SART has not only informed the evolution of infertility care but also improved the success of assisted reproductive technology. [ 6 ]
Reproductive technology can inform family planning by providing individual prognoses regarding the likelihood of pregnancy. It facilitates the monitoring of ovarian reserve , follicular dynamics and associated biomarkers in females, [ 7 ] as well as semen analysis in males. [ 8 ]
Contraception , also known as birth control , is a form of reproductive technology that enables people to prevent pregnancy . [ 9 ] There are many forms of contraception, but the term covers any method or device which is intended to prevent pregnancy in a sexually active woman. Methods are intended to "prevent the fertilization of an egg or implantation of a fertilized egg in the uterus ." [ 10 ] Different forms of birth control have been around since ancient times, but widely available effective and safe methods only became available during the mid-1900s. [ 11 ]
The following reproductive techniques are not currently in routine clinical use; most are still undergoing development:
Research is currently investigating the possibility of same-sex procreation, which would produce offspring with equal genetic contributions from either two females or two males. [ 12 ] This form of reproduction has become a possibility through the creation of either female sperm (containing the genetic material of a female) or male eggs (containing the genetic material of a male). Same-sex procreation would remove the need for lesbian and gay couples to rely on a third-party donation of sperm or an egg for reproduction. [ 13 ] The first significant development occurred in 1991, in a patent application filed by University of Pennsylvania scientists to repair male sperm by extracting some sperm, correcting a genetic defect in vitro, and injecting the sperm back into the male's testicles. [ 14 ] While the vast majority of the patent application dealt with male sperm, one line suggested that the procedure would work with XX cells, i.e., cells from an adult woman, to make female sperm.
In the two decades that followed, the idea of female sperm became more of a reality. In 1997, scientists partially confirmed such techniques by creating female sperm in chickens in a similar manner. [ 15 ] They did so by injecting blood stem cells from an adult female chicken into a male chicken's testicles. In 2004, Japanese scientists created two female offspring by combining the eggs of two adult mice. [ 16 ] [ 17 ]
In 2008, research specifically addressed methods of creating human female sperm using artificial or natural Y chromosomes and testicular transplantation. [ 18 ] A UK-based group predicted they would be able to create human female sperm within five years. So far no conclusive successes have been achieved. [ 3 ]
In 2018, Chinese research scientists produced 29 viable mouse offspring from two mother mice by creating sperm-like structures from haploid embryonic stem cells , using gene editing to alter imprinted regions of DNA. They were unable to obtain viable offspring from two fathers. Experts noted that there was little chance of these techniques being applied to humans in the near future. [ 19 ] [ 20 ]
Recent technological advances in fertility treatments introduce ethical problems, such as the affordability of the various procedures: exorbitant prices can limit who has access. [ 12 ] The cost of performing ART per live birth varies among countries. [ 21 ] The average cost per IVF cycle in the United States is USD 9,266. [ 22 ] However, the cost per live birth for autologous ART treatment cycles in the United States, Canada, and the United Kingdom ranged from approximately USD 33,000 to 41,000, compared to USD 24,000 to 25,000 in Scandinavia, Japan, and Australia. [ 23 ]
The funding structure for IVF/ART is highly variable among nations. For example, no federal government reimbursement exists for IVF in the United States, although certain states have insurance mandates for ART. [ 24 ]
Many issues of reproductive technology have given rise to bioethical issues, since technology often alters the assumptions that lie behind existing systems of sexual and reproductive morality . Other ethical considerations arise with the application of ART to women of advanced maternal age, who have higher chances of medical complications (including pre-eclampsia ), and possibly in the future with its application to post-menopausal women. [ 25 ] [ 26 ] [ 27 ] Ethical issues of human enhancement also arise when reproductive technology evolves into a potential technology not only for reproductively inhibited people but also for otherwise reproductively healthy people. [ 28 ]
Cryopreservation of Oocytes
Cryopreservation techniques have significantly evolved in recent decades, enabling the long-term storage of human oocytes and embryos for fertility preservation. The introduction of vitrification , a rapid-freezing method that prevents the formation of ice crystals, has markedly improved post-thaw survival rates and oocyte viability. This method employs high concentrations of cryoprotectants to ensure cellular integrity while maintaining spindle structure and chromosomal alignment.
Ovarian Reserve Assessment
Accurate assessment of ovarian reserve has become a cornerstone of individualized reproductive treatment plans. Anti-Müllerian hormone (AMH) and antral follicle count (AFC) are the primary markers used to evaluate the remaining oocyte pool. AMH, secreted by granulosa cells of preantral and small antral follicles, offers a cycle-independent, minimally invasive method for predicting ovarian response in assisted reproductive technology (ART).
Ethical Considerations
The use of reproductive technologies, particularly for non-medical fertility preservation, has raised ethical questions. Critics argue that societal pressures may drive unnecessary interventions, while proponents highlight the empowerment of individuals in making reproductive choices. Balancing accessibility and ethical integrity remains a key challenge for the field. | https://en.wikipedia.org/wiki/Reproductive_technology |
Reproductive toxicity refers to the potential risk that a given chemical, physical or biologic agent will adversely affect both male and female fertility as well as offspring development . [ 1 ] Reproductive toxicants may adversely affect sexual function and fertility, cause ovarian failure, and cause developmental toxicity in the offspring. [ 2 ] [ 3 ] Lowered effective fertility related to reproductive toxicity concerns males and females alike and is reflected in decreased sperm counts, reduced semen quality and ovarian failure.
Infertility is medically defined as a failure of a couple to conceive over the course of one year of unprotected intercourse. [ 4 ] Primary infertility indicates that a person has never been able to achieve pregnancy, while secondary infertility refers to a person who has had at least one pregnancy before. [ 5 ] As many as 20% of couples experience infertility. [ 4 ] Infertility may be caused by an issue at any part of the process from fertilizing an egg through the birth of the child. This can include: the release of the egg, the ability of the sperm to fertilize the egg, the implantation of the egg in the uterine wall, and the ability of the fetus to complete development without miscarriage. [ 6 ] Among males, oligospermia is defined as a paucity of viable spermatozoa in the semen , whereas azoospermia refers to the complete absence of viable spermatozoa in the semen. [ 4 ] Males may also experience issues of sperm motility and morphology , meaning the sperm are less likely to reach the egg or to be able to fertilize it. [ 6 ] Female infertility can result from an issue with the uterus, ovaries, or fallopian tubes and can be affected by various diseases, endocrine/hormone disruption, or reproductive toxicants. [ 6 ] [ 5 ]
The Globally Harmonized System of Classification and Labelling of Chemicals (GHS) separates reproductive toxicity from germ cell mutagenicity and carcinogenicity , even though both these hazards may also affect fertility. [ 7 ]
Many drugs can affect the human reproductive system . Their effects can vary.
However, most studies of reproductive toxicity have focused on occupational or environmental exposure to chemicals and their effects on reproduction. Both consumption of alcohol and tobacco smoking are known to be "toxic for reproduction" in the sense used here.
One well-known group of substances which are toxic for reproduction are teratogens – substances which cause birth defects . ( S )-thalidomide is possibly the most notorious of these. [ 8 ]
Another group of substances which have received much attention (and prompted some controversy) as possibly toxic for reproduction are the so-called endocrine disruptors . [ 8 ] Endocrine disruptors change how hormones are produced and how they interact with their receptors. [ 9 ] Endocrine disruptors are classified as estrogenic, anti-estrogenic, androgenic or anti-androgenic. Each category includes pharmaceutical compounds and environmental compounds. Estrogenic or androgenic compounds cause the same hormonal responses as the sex steroids (estrogen and testosterone). However, anti-estrogenic and anti-androgenic compounds bind to a receptor and block the hormones from binding to their receptors, thus preventing their function. A few examples of the many types of endocrine disruptors are trenbolone (androgenic), flutamide (anti-androgenic), diethylstilbestrol (estrogenic), bisphenol A (estrogenic) and tributyltin (anti-estrogenic). [ 10 ] [ 11 ]
However, many substances which are toxic for reproduction do not fall into any of these groups: lead compounds, for example, are considered to be toxic for reproduction [ 10 ] [ 11 ] given their adverse effects on the normal intellectual and psychomotor development of human babies and children.
Lead is a heavy metal that can exist in both organic and inorganic forms and is associated with adverse effects on male libido, erectile dysfunction, premature ejaculation and poor sperm quality. [ 12 ] Lead is also associated with negative effects on the female reproductive system, particularly for pregnant people. [ 13 ] Elevated blood lead levels can increase the risk of preeclampsia and miscarriage and can lead to birth defects. [ 14 ] [ 15 ] Lead is believed to affect male reproduction predominantly through the disruption of hormones, which reduces sperm production in the seminiferous tubules . It has also been proposed that lead causes poor semen quality by promoting the generation of reactive oxygen species such as hydrogen peroxide through lipid peroxidation , which can cause cellular damage. [ 16 ] [ 17 ] Lead can be found in contaminated soil and water, as well as in manufactured goods like jewelry, toys, and paint. [ 18 ] Common routes of exposure are inhalation and ingestion, though dermal exposure can occur, albeit less frequently. [ 18 ] Occupational exposure remains a high risk, particularly in industries such as battery and electronics recycling, construction, mining, smelting, and welding, or any other industry that works with lead. [ 13 ] Families and cohabitants of such workers may be at risk of take-home exposure and may need to take precautions to avoid reproductive impacts. [ 19 ]
Cadmium is a heavy metal used in jewelry making, electronics, welding and the galvanizing of steel. [ 20 ] The human route of exposure is primarily inhalational or oral; among the non-occupationally exposed, environmental exposure can occur through cigarette smoking. [ 20 ] Oral exposure can occur through ingesting plants and shellfish that have taken up cadmium from water and soil. [ 20 ] Exposure to cadmium adversely affects male fertility, decreasing spermatogenesis, semen quality and sperm motility, and impairing hormonal synthesis. [ 21 ] Likewise, exposure to cadmium impairs female fertility in terms of menstrual cycle regularity and reproductive hormonal balance. [ 21 ] Cadmium exposure can negatively impact fetal development throughout gestation, as well as ovulation and implantation. [ 22 ]
Hexavalent chromium ( Cr VI) is used in the electronics industry and for metal plating. [ 23 ] Chromium exposure occurs primarily through inhalation or ingestion. [ 24 ] Human and animal studies show that exposure to hexavalent chromium decreases semen quality and sperm counts. [ 25 ]
Elemental mercury ( Hg 0 ) is a metal that exists in liquid form at room temperature and is commonly found in thermometers, blood pressure cuffs and dental amalgams. In terms of exposure, the route of absorption is primarily inhalation of mercury vapor, which can in turn lead to mercury poisoning . [ 26 ] Occupational exposure to inorganic mercury can occur in industries such as dentistry, fluorescent lamp production, and chloralkali production. [ 27 ] Data among female dental technicians exposed to mercury vapors have demonstrated decreased fertility among those who were exposed and practiced poor industrial hygiene while handling dental amalgams. [ 26 ] [ 28 ] Elemental and organic mercury can cross the blood brain barrier , like many other heavy metals, making it particularly significant for pregnant people as it can impact fetal development and birth outcomes. [ 27 ] Among female workers in mercury smelting plants, an increase in spontaneous abortions has been reported. [ 28 ]
Dibromochloropropane (DBCP) is used as a pesticide against nematodes in the agricultural industry. [ 29 ] DBCP is one of the best-known reproductive toxicants causing testicular toxicity. [ 12 ] Workers in chemical factories exposed to dibromochloropropane have been shown to develop dose-dependent oligospermia and azoospermia . [ 12 ] Additional studies also demonstrated that DBCP-exposed workers in banana and pineapple plantations in Central America and other countries developed oligospermia and azoospermia. [ 30 ] In 1977, the United States Environmental Protection Agency banned the use of DBCP in agriculture due to its effect on male fertility. [ 31 ] Despite being banned from use in agriculture, DBCP is still used as an intermediate in chemical manufacturing as well as a reagent in research. [ 31 ]
Ethylene dibromide
Ethylene dibromide (EDB) is a fumigant that was originally used to protect citrus fruits, grains and vegetables from insects. [ 32 ] Use of EDB in the United States was banned by the United States Environmental Protection Agency in 1984; however, EDB is still used in the United States as a fumigant to treat timber logs for beetles and termites. [ 32 ] Likewise, it is still used as an intermediate in chemical manufacturing. [ 32 ] Exposure to EDB has been shown to adversely affect male fertility, leading to decreased sperm counts, decreased numbers of viable sperm and increased abnormal sperm morphology. [ 33 ] [ 34 ] The primary route of exposure is through inhalation. [ 32 ]
Solvent exposure is common among men and women working in industrial settings. Specific solvents, including xylene , perchloroethylene , toluene and methylene chloride , have been shown to be associated with an elevated risk of spontaneous abortion. [ 35 ]
Ionizing radiation in the form of alpha, beta and gamma emissions is well known to adversely affect male and female fertility, as well as fetal development. [ 36 ] [ 37 ] Exposure to low doses of ionizing radiation can occur naturally in the environment or due to medical treatment or diagnosis; however, higher exposures may be associated with occupation. [ 36 ] Occupations with documented risk include healthcare workers who interact with radioactive material, workers in certain manufacturing processes, and airline personnel. [ 36 ] Exposure in the range of 0.1 to 1.2 Gy is associated with spermatogonial injury, whereas at 4–6 Gy reductions in sperm counts have been reported. [ 37 ] Ionizing radiation is considered a hazard particularly in pregnancy, due to its potential impact on gestational development. [ 36 ] More specifically, ionizing radiation is associated with an increased risk of miscarriage and stillbirth. [ 38 ] Recent studies suggest that routine medical examinations that expose a pregnant person to ionizing radiation are not associated with an increased risk of miscarriage or stillbirth. [ 39 ]
Radio frequency electromagnetic fields, such as those generated by mobile phone devices, have been shown to decrease semen quality in experimental animal models; however, human data are still equivocal at best. [ 40 ] [ 41 ] The International Agency for Research on Cancer (IARC) classifies radio frequency electromagnetic fields as Group 2B, or possibly carcinogenic to humans. [ 42 ]
Lipid-soluble compounds that can cross the cell lipid bilayer and bind cytoplasmic steroid hormone receptors can translocate to the nucleus and act as estrogen agonists. [ 43 ] Diethylstilbestrol (DES), a synthetic estrogen, is one such endocrine disruptor and acts as an estrogen agonist. Diethylstilbestrol was used from 1938 to 1971 to prevent spontaneous abortions. [ 43 ] Diethylstilbestrol causes cancer and mutations by producing highly reactive metabolites , which also cause DNA adducts to form. Exposure to diethylstilbestrol in the womb can cause atypical reproductive tract formation. Specifically, females exposed to diethylstilbestrol in utero during the first trimester are more likely to develop clear cell vaginal carcinoma, and males have an increased risk of hypospadias . [ 44 ]
Bisphenol A (BPA) is used in polycarbonate plastic consumer goods and aluminum can liners. [ 45 ] BPA is an example of an endocrine disruptor which negatively affects reproductive development by acting as an estrogen mimicker ( xenoestrogen ) and a likely androgen mimicker. [ 46 ] Bisphenol A exposure in fetal female rats leads to altered mammary gland morphogenesis , increased formation of ovarian tumors , and increased risk of developing mammary gland neoplasia in adult life. In lab animal models, BPA is considered to be both an ovarian and uterine toxicant, as it impairs endometrial proliferation, decreases uterine receptivity and decreases the chances of successful implantation of the embryo. [ 47 ] The adverse reproductive toxicological impacts of bisphenol A have been better studied in females than in males. [ 48 ] [ 49 ] [ 47 ]
Antineoplastic drugs, commonly known as chemotherapy drugs, are considered hazardous drugs by the CDC, including hazardous to reproductive health. [ 50 ] Exposure to chemotherapy drugs most often occurs through treatment for cancer; however, unintentional occupational exposure may occur for workers involved in pharmaceutical production, pharmacists or technicians preparing the drugs, and nurses or other healthcare professionals administering medication to patients. [ 51 ] Other hospital staff, particularly custodial workers, who interact with or handle antineoplastic drugs in any capacity may also be at risk of exposure. [ 51 ] Exposure can occur through inhalation, skin contact, ingestion, or injection. [ 51 ]
Work schedule can become a reproductive toxicant when working hours fall during the employee's typical sleeping hours (night shift), when a worker has an irregular work schedule ( shift work ), or when working hours are long. [ 52 ] The reproductive toxicity of work schedules is primarily a result of their impact on the regularity, quality, and rhythm of sleep. [ 52 ] Shift work is associated with menstrual disorders, which can in turn impact fertility. [ 52 ] [ 53 ] An irregular work schedule, long working hours, and night shift work are associated with an increased risk of miscarriage and pre-term birth. [ 52 ] Many occupations involve shift work, including rotating work schedules, long hours, or night shifts. Occupations that frequently involve shift work include first responders, airline personnel, healthcare workers, and service workers. [ 52 ] The CDC estimates that 15 million Americans engage in shift work and that 30% of them get less than six hours of sleep. [ 52 ]
Physical demands can include bending, lifting, and standing. Physical demands are considered a reproductive toxicant because they can increase the risk of adverse outcomes during pregnancy. [ 54 ] Bending, lifting, and standing are chiefly a concern as occupational responsibilities, as the risk is minimal unless the physical activity is prolonged. [ 54 ] Standing and walking for more than three hours a day are associated with an increased risk of pre-term birth, while standing for six to eight hours a day is associated with an increased risk of miscarriage. [ 55 ] [ 56 ] The weight and frequency of lifting are also associated with increased risk of miscarriage and preterm birth, with estimates of loads over 10 kg, or a cumulative 100 kg lifted per day. [ 56 ] [ 57 ]
Noise is considered a reproductive toxicant due to its potential impact on fetal development during pregnancy. While pregnant people may be able to use proper hearing protection to conserve their own hearing, after the 20th week of development the fetus's ears are susceptible to hearing loss . [ 58 ] Pregnant people past 20 weeks of gestation should consider avoiding noise above 85 decibels, both at work and during recreational activities. [ 58 ] | https://en.wikipedia.org/wiki/Reproductive_toxicity
Reproductive value is a concept in demography and population genetics that represents the discounted number of future female children that will be born to a female of a specific age. Ronald Fisher first defined reproductive value in his 1930 book The Genetical Theory of Natural Selection , where he proposed that future offspring be discounted at the rate of growth of the population; this implies that reproductive value measures the contribution of an individual of a given age to the future growth of the population . [ 1 ] [ 2 ]
Consider a species with a life history table with survival and reproductive parameters given by ℓ x {\displaystyle \ell _{x}} and m x {\displaystyle m_{x}} , where ℓ x {\displaystyle \ell _{x}} is the probability of surviving to age x {\displaystyle x} and m x {\displaystyle m_{x}} is the expected number of offspring produced by an individual of age x {\displaystyle x} .
In a population with a discrete set of age classes, Fisher's reproductive value is calculated as v x = λ x ℓ x ∑ y = x ∞ λ − y − 1 ℓ y m y {\displaystyle v_{x}={\frac {\lambda ^{x}}{\ell _{x}}}\sum _{y=x}^{\infty }\lambda ^{-y-1}\ell _{y}m_{y}}
where λ {\displaystyle \lambda } is the long-term population growth rate given by the dominant eigenvalue of the Leslie matrix . When age classes are continuous, v ( x ) = e r x ℓ ( x ) ∫ x ∞ e − r y ℓ ( y ) m ( y ) d y {\displaystyle v(x)={\frac {e^{rx}}{\ell (x)}}\int _{x}^{\infty }e^{-ry}\ell (y)m(y)\,dy}
where r {\displaystyle r} is the intrinsic rate of increase or Malthusian growth rate . | https://en.wikipedia.org/wiki/Reproductive_value_(population_genetics) |
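In the discrete case, the reproductive values v_x are (up to normalization) the entries of the left eigenvector of the Leslie matrix associated with its dominant eigenvalue λ, so they are easy to obtain numerically. A minimal Python sketch, using made-up survival and fecundity values:

```python
import numpy as np

# Hypothetical life history: three age classes with fecundities m_x
# (top row of the Leslie matrix) and survival probabilities (sub-diagonal).
A = np.array([[0.0, 1.5, 1.0],    # m_1, m_2, m_3
              [0.8, 0.0, 0.0],    # survival from class 1 to class 2
              [0.0, 0.5, 0.0]])   # survival from class 2 to class 3

# The dominant eigenvalue lambda is the long-term growth rate; the associated
# left eigenvector of A (a right eigenvector of A^T) gives the reproductive
# values v_x up to normalization.
eigvals, left_vecs = np.linalg.eig(A.T)
i = np.argmax(eigvals.real)
lam = eigvals[i].real
v = np.abs(left_vecs[:, i].real)
v /= v[0]                          # conventional normalization: v_1 = 1

print(f"lambda = {lam:.4f}")
print("reproductive values v_x:", np.round(v, 4))
```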
In biology, reprogramming refers to erasure and remodeling of epigenetic marks, such as DNA methylation , during mammalian development or in cell culture. [ 1 ] Such control is also often associated with alternative covalent modifications of histones .
Reprogrammings that are both large scale (10% to 100% of epigenetic marks) and rapid (hours to a few days) occur at three life stages of mammals. Almost 100% of epigenetic marks are reprogrammed in two short periods early in development after fertilization of an ovum by a sperm . In addition, almost 10% of DNA methylations in neurons of the hippocampus can be rapidly altered during formation of a strong fear memory.
After fertilization in mammals, DNA methylation patterns are largely erased and then re-established during early embryonic development. Almost all of the methylations from the parents are erased, first during early embryogenesis , and again in gametogenesis , with demethylation and remethylation occurring each time. Demethylation during early embryogenesis occurs in the preimplantation period. After a sperm fertilizes an ovum to form a zygote , rapid DNA demethylation of the paternal DNA and slower demethylation of the maternal DNA occurs until formation of a morula , which has almost no methylation. After the blastocyst is formed, methylation can begin, and with formation of the epiblast a wave of methylation then takes place until the implantation stage of the embryo. Another period of rapid and almost complete demethylation occurs during gametogenesis within the primordial germ cells (PGCs). Other than the PGCs, in the post-implantation stage, methylation patterns in somatic cells are stage- and tissue -specific with changes that presumably define each individual cell type and last stably over a long time. [ 2 ]
The mouse sperm genome is 80–90% methylated at its CpG sites in DNA, amounting to about 20 million methylated sites. [ citation needed ] After fertilization , the paternal chromosome is almost completely demethylated in six hours by an active process, before DNA replication (blue line in Figure). In the mature oocyte , about 40% of its CpG sites are methylated. Demethylation of the maternal chromosome largely takes place by blockage of the methylating enzymes from acting on maternal-origin DNA and by dilution of the methylated maternal DNA during replication (red line in Figure). The morula (at the 16 cell stage), has only a small amount of DNA methylation (black line in Figure). Methylation begins to increase at 3.5 days after fertilization in the blastocyst , and a large wave of methylation then occurs on days 4.5 to 5.5 in the epiblast , going from 12% to 62% methylation, and reaching maximum level after implantation in the uterus. [ 3 ] By day seven after fertilization, the newly formed primordial germ cells (PGC) in the implanted embryo segregate from the remaining somatic cells . At this point the PGCs have about the same level of methylation as the somatic cells.
The newly formed primordial germ cells (PGC) in the implanted embryo devolve from the somatic cells. At this point the PGCs have high levels of methylation. These cells migrate from the epiblast toward the gonadal ridge . Now the cells are rapidly proliferating and beginning demethylation in two waves. In the first wave, demethylation is by replicative dilution, but in the second wave demethylation is by an active process. The second wave leads to demethylation of specific loci . At this point the PGC genomes display the lowest levels of DNA methylation of any cells in the entire life cycle [at embryonic day 13.5 (E13.5), see the second figure in this section]. [ 4 ]
After fertilization some cells of the newly formed embryo migrate to the germinal ridge and will eventually become the germ cells (sperm and oocytes) of the next generation. Due to the phenomenon of genomic imprinting , maternal and paternal genomes are differentially marked and must be properly reprogrammed every time they pass through the germline. Therefore, during the process of gametogenesis the primordial germ cells must have their original biparental DNA methylation patterns erased and re-established based on the sex of the transmitting parent.
After fertilization, the paternal and maternal genomes are demethylated in order to erase their epigenetic signatures and acquire totipotency . There is asymmetry at this point: the male pronucleus undergoes a quick and active demethylation, while the female pronucleus is demethylated passively during consecutive cell divisions. The process of DNA demethylation involves base excision repair and likely other DNA-repair-based mechanisms. [ 5 ] Despite the global nature of this process, there are certain sequences that avoid it, such as differentially methylated regions (DMRs) associated with imprinted genes, retrotransposons and centromeric heterochromatin . Remethylation is needed again to differentiate the embryo into a complete organism. [ 6 ]
In vitro manipulation of pre-implantation embryos has been shown to disrupt methylation patterns at imprinted loci [ 7 ] and plays a crucial role in cloned animals. [ 8 ]
Learning and memory have levels of permanence, differing from other mental processes such as thought, language, and consciousness, which are temporary in nature. Learning and memory can be accumulated either slowly (multiplication tables) or rapidly (touching a hot stove), but once attained, can be recalled into conscious use for a long time. Rats subjected to one instance of contextual fear conditioning create an especially strong long-term memory. At 24 hours after training, 9.17% of the genes in the genomes of rat hippocampus neurons were found to be differentially methylated . This included more than 2,000 differentially methylated genes at 24 hours after training, with over 500 genes being demethylated. [ 9 ] The hippocampus region of the brain is where contextual fear memories are first stored (see figure of the brain, this section), but this storage is transient and does not remain in the hippocampus. In rats, contextual fear conditioning is abolished when the hippocampus is subjected to hippocampectomy just one day after conditioning, but rats retain a considerable amount of contextual fear when a long delay (28 days) is imposed between the time of conditioning and the time of hippocampectomy. [ 10 ]
Three molecular stages are required for reprogramming the DNA methylome . Stage 1: Recruitment. The enzymes needed for reprogramming are recruited to genome sites that require demethylation or methylation. Stage 2: Implementation. The initial enzymatic reactions take place. In the case of methylation, this is a short step that results in the methylation of cytosine to 5-methylcytosine . Stage 3: Base excision DNA repair . The intermediate products of demethylation are catalysed by specific enzymes of the base excision DNA repair pathway that finally restore cytosine in the DNA sequence.
The Figure in this section indicates the central roles of ten-eleven translocation methylcytosine dioxygenases (TETs) in the demethylation of 5-methylcytosine to form cytosine. [ 12 ] As reviewed in 2018, [ 12 ] 5mC is very often initially oxidized by TET dioxygenases to generate 5-hydroxymethylcytosine (5hmC). In successive steps (see Figure) TET enzymes further hydroxylate 5hmC to generate 5-formylcytosine (5fC) and 5-carboxylcytosine (5caC). Thymine-DNA glycosylase (TDG) recognizes the intermediate bases 5fC and 5caC and excises the glycosidic bond resulting in an apyrimidinic site (AP site). In an alternative oxidative deamination pathway, 5hmC can be oxidatively deaminated by APOBEC (AID/APOBEC) deaminases to form 5-hydroxymethyluracil (5hmU) or 5mC can be converted to thymine (Thy). 5hmU can be cleaved by TDG, SMUG1 , NEIL1 , or MBD4 . AP sites and T:G mismatches are then repaired by base excision repair (BER) enzymes to yield cytosine (Cyt).
The isoforms of the TET enzymes include at least two isoforms of TET1, one of TET2 and three isoforms of TET3 . [ 13 ] [ 14 ] The full-length canonical TET1 isoform appears virtually restricted to early embryos, embryonic stem cells and primordial germ cells (PGCs). The dominant TET1 isoform in most somatic tissues, at least in the mouse, arises from alternative promoter usage which gives rise to a short transcript and a truncated protein designated TET1s. The isoforms of TET3 are the full length form TET3FL, a short form splice variant TET3s, and a form that occurs in oocytes and neurons designated TET3o. TET3o is created by alternative promoter use and contains an additional first N-terminal exon coding for 11 amino acids . TET3o only occurs in oocytes and neurons and was not expressed in embryonic stem cells or in any other cell type or adult mouse tissue tested. Whereas TET1 expression can barely be detected in oocytes and zygotes, and TET2 is only moderately expressed, the TET3 variant TET3o shows extremely high levels of expression in oocytes and zygotes, but is nearly absent at the 2-cell stage. It is possible that TET3o, high in neurons, oocytes and zygotes at the one cell stage, is the major TET enzyme utilized when very large scale rapid demethylations occur in these cells.
The TET enzymes do not specifically bind to 5-methylcytosine except when recruited. Without recruitment or targeting, TET1 predominantly binds to high CG promoters and CpG islands (CGIs) genome-wide by its CXXC domain that can recognize un-methylated CGIs. [ 15 ] TET2 does not have an affinity for 5-methylcytosine in DNA. [ 16 ] The CXXC domain of the full-length TET3, which is the predominant form expressed in neurons, binds most strongly to CpGs where the C was converted to 5-carboxycytosine (5caC). However, it also binds to un-methylated CpGs . [ 14 ]
For a TET enzyme to initiate demethylation, it must first be recruited to a methylated CpG site in DNA. Two of the proteins shown to recruit a TET enzyme to a methylated cytosine in DNA are OGG1 (see figure "Initiation of DNA demethylation") [ 17 ] and EGR1 . [ 18 ]
Oxoguanine glycosylase (OGG1) catalyses the first step in base excision repair of the oxidatively damaged base 8-OHdG . OGG1 finds 8-OHdG very rapidly, sliding along linear DNA at 1,000 base pairs in 0.1 seconds; [ 19 ] OGG1 proteins bind to oxidatively damaged DNA with a half-maximum time of about 6 seconds. [ 20 ] When OGG1 finds 8-OHdG, it changes conformation and complexes with 8-OHdG in the binding pocket of OGG1. [ 21 ] OGG1 does not immediately act to remove the 8-OHdG. Half-maximum removal of 8-OHdG takes about 30 minutes in HeLa cells in vitro , [ 22 ] or about 11 minutes in the livers of irradiated mice. [ 23 ] DNA oxidation by reactive oxygen species preferentially occurs at a guanine in a methylated CpG site, because of a lowered ionization potential of guanine bases adjacent to 5-methylcytosine. [ 24 ] TET1 binds (is recruited to) the OGG1 bound to 8-OHdG (see figure). [ 17 ] This likely allows TET1 to demethylate an adjacent methylated cytosine. When human mammary epithelial cells (MCF-10A) were treated with H 2 O 2 , 8-OHdG increased in DNA by 3.5-fold, and this caused large-scale demethylation of 5-methylcytosine to about 20% of its initial level in DNA. [ 17 ]
The gene early growth response protein 1 ( EGR1 ) is an immediate early gene (IEG). The defining characteristic of IEGs is the rapid and transient up-regulation—within minutes—of their mRNA levels independent of protein synthesis. [ 25 ] EGR1 can rapidly be induced by neuronal activity. [ 26 ] In adulthood, EGR1 is expressed widely throughout the brain, maintaining baseline expression levels in several key areas of the brain including the medial prefrontal cortex , striatum , hippocampus and amygdala . [ 25 ] This expression is linked to control of cognition, emotional response, social behavior and sensitivity to reward. [ 25 ] EGR1 binds to DNA at sites with the motifs 5′-GCGTGGGCG-3′ and 5'-GCGGGGGCGG-3′ and these motifs occur primarily in promoter regions of genes. [ 26 ] The short isoform TET1s is expressed in the brain. EGR1 and TET1s form a complex mediated by the C-terminal regions of both proteins, independently of association with DNA. [ 26 ] EGR1 recruits TET1s to genomic regions flanking EGR1 binding sites. [ 26 ] In the presence of EGR1, TET1s is capable of locus-specific demethylation and activation of the expression of downstream genes regulated by EGR1. [ 26 ]
The first person to successfully demonstrate reprogramming was John Gurdon , who in 1962 demonstrated that differentiated somatic cells could be reprogrammed back into an embryonic state when he managed to obtain swimming tadpoles following the transfer of differentiated intestinal epithelial cells into enucleated frog eggs. [ 27 ] For this achievement he received the 2012 Nobel Prize in Physiology or Medicine alongside Shinya Yamanaka . [ 28 ] Yamanaka was the first to demonstrate (in 2006) that this somatic cell nuclear transfer or oocyte-based reprogramming process (see below), which Gurdon discovered, could be recapitulated (in mice) by defined factors ( Oct4 , Sox2 , Klf4 , and c-Myc ) to generate induced pluripotent stem cells (iPSCs). [ 29 ] Other combinations of genes have also been used, including LIN28 [ 30 ] and Homeobox protein NANOG . [ 30 ] [ 31 ]
With the discovery that cell fate could be altered, the question arose of what progression of events occurs as a cell undergoes reprogramming. As the final product of iPSC reprogramming was similar to embryonic stem cells in morphology , proliferation, gene expression , pluripotency , and telomerase activity, genetic and morphological markers were used as a way to determine which phase of reprogramming was occurring. [ 32 ] Reprogramming is divided into three phases: initiation, maturation, and stabilization. [ 33 ]
The initiation phase is associated with the downregulation of cell type specific genes and the upregulation of pluripotent genes. [ 33 ] As the cells move towards pluripotency, the telomerase activity is reactivated to extend telomeres . The cell morphology can directly affect the reprogramming process as the cell is modifying itself to prepare for the gene expression of pluripotency. [ 34 ] The main indicator that the initiation phase has completed is that the first genes associated with pluripotency are expressed. This includes the expression of Oct-4 or Homeobox protein NANOG , while undergoing a mesenchymal–epithelial transition (MET), and the loss of apoptosis and senescence . [ 35 ]
If the cell is directly reprogrammed from one somatic cell type to another, the genes associated with each cell type begin to be upregulated and downregulated accordingly. [ 33 ] This can occur either through direct cell reprogramming or by creating an intermediate, such as an iPSC, and differentiating it into the desired cell type. [ 35 ]
The initiation phase is completed through one of three pathways: nuclear transfer , cell fusion , or defined factors ( microRNA , transcription factor , epigenetic markers, and other small molecules). [ 30 ] [ 35 ]
An oocyte can reprogram an adult nucleus into an embryonic state after somatic cell nuclear transfer , so that a new organism can be developed from such a cell. [ 36 ]
Reprogramming is distinct from development of a somatic epitype , [ 37 ] as somatic epitypes can potentially be altered after an organism has left the developmental stage of life. [ 38 ] During somatic cell nuclear transfer, the oocyte turns off tissue-specific genes in the somatic cell nucleus and turns back on embryonic-specific genes. This process has been shown through cloning, as seen in John Gurdon's tadpole experiments [ 27 ] and Dolly the sheep . [ 39 ] Notably, these events have shown that cell fate is a reversible process.
Cell fusion is used to create a multinucleated cell called a heterokaryon . [ 35 ] The fused cells allow otherwise silenced genes to become reactivated and expressed. As the genes are reactivated, the cells can re-differentiate. There are instances where transcription factors, such as the Yamanaka factors, are still needed to aid in heterokaryon cell reprogramming. [ 40 ]
Unlike nuclear transfer and cell fusion, defined factors do not require a full genome, only reprogramming factors. These reprogramming factors include microRNA , transcription factors , epigenetic markers, and other small molecules. [ 35 ] The original transcription factors that lead to iPSC development, discovered by Yamanaka, are Oct4 , Sox2 , Klf4 , and c-Myc (OSKM factors). [ 29 ] [ 32 ] Although the OSKM factors have been shown to induce and aid in pluripotency, other transcription factors such as Homeobox protein NANOG , [ 41 ] LIN28, [ 30 ] TRA-1-60, [ 41 ] and C/EBPα [ 42 ] aid in the efficiency of reprogramming. The use of microRNA and other small molecule-driven processes has been utilized as a means of increasing the efficiency of the differentiation from somatic cells to pluripotency. [ 35 ]
The maturation phase begins at the end of the initiation phase, when the first pluripotent genes are expressed. [ 33 ] The cell is preparing itself to become independent of the defined factors that started the reprogramming process. The first genes to be detected in iPSCs are Oct4 , Homeobox protein NANOG , and Esrrb, followed later by Sox2 . [ 35 ] In the later stages of maturation, transgene silencing marks the start of the cell becoming independent of the induced transcription factors . Once the cell is independent, the maturation phase ends and the stabilization phase begins.
As reprogramming has proven to be a variable, low-efficiency process, not all cells complete the maturation phase and achieve pluripotency . [ 42 ] Some cells that undergo reprogramming still undergo apoptosis at the beginning of the maturation stage, owing to oxidative stress brought on by the change in gene expression. The use of microRNA , proteins, and different combinations of the OSKM factors has begun to yield higher reprogramming efficiency.
The stabilization phase refers to the processes in the cell that occur after the cell reaches pluripotency . One genetic marker is the expression of Sox2 and X chromosome reactivation , while epigenetic changes include telomerase extending the telomeres [ 30 ] and the loss of the cell's epigenetic memory. [ 33 ] The epigenetic memory of a cell is reset by changes in DNA methylation, [ 43 ] using activation-induced cytidine deaminase (AID), TET enzymes (TET), and DNA methyltransferases (DNMTs), starting in the maturation phase and continuing into the stabilization stage. [ 33 ] Once the epigenetic memory of the cell is lost, the possibility of differentiation into the three germ layers is achieved. [ 32 ] This is considered a fully reprogrammed cell, as it can be passaged without reverting to its original somatic cell type. [ 35 ]
Reprogramming can also be induced artificially through the introduction of exogenous factors, usually transcription factors . In this context, it often refers to the creation of induced pluripotent stem cells from mature cells such as adult fibroblasts . This allows the production of stem cells for biomedical research , such as research into stem cell therapies , without the use of embryos. It is carried out by the transfection of stem-cell associated genes into mature cells using viral vectors such as retroviruses .
One of the first transacting factors discovered to change a cell was found in a myoblast when the complementary DNA (cDNA) coding for MyoD was expressed and converted a fibroblast to a myoblast. Another transacting factor that directly transformed a lymphoid cell into a myeloid cell was C/EBPα. MyoD and C/EBPα are examples of a small number of single factors that can transform cells. More often, a combination of transcription factors work in conjunction to reprogram a cell.
The OSKM factors ( Oct4 , Sox2 , Klf4 , and c-Myc ) were initially discovered by Yamanaka in 2006, by the induction of a mouse fibroblast into an induced pluripotent stem cell (iPSCs). [ 29 ] Within the following year, these factors were used to induce human fibroblasts into iPSCs. [ 32 ]
Oct4 is part of the core regulatory genes needed for pluripotency, as it is seen in both embryonic stem cells and tumors. [ 44 ] Even small increases in Oct4 allow the start of differentiation into pluripotency. Oct4 works in conjunction with Sox2 for the expression of FGF4 , which could aid in differentiation.
Sox2 is a gene used in maintaining pluripotency in stem cells. Oct4 and Sox2 work together to regulate hundreds of genes utilized in pluripotency. [ 44 ] However, Sox2 is not the only Sox family member able to participate in gene regulation with Oct4 – Sox4 , Sox11 , and Sox15 also participate, as Sox proteins are redundant throughout the stem cell genome .
Klf4 is a transcription factor used in proliferation , differentiation , apoptosis , and somatic cell reprogramming. When being utilized in cellular reprogramming, Klf4 prevents cell division of damaged cells using its apoptotic ability, and aids in histone acetyltransferase activity. [ 32 ]
c-Myc is also known as an oncogene , and under certain conditions can become cancer-causing. [ 45 ] In cellular reprogramming, c-Myc is used for cell cycle progression, apoptosis , and cellular transformation for further differentiation.
Homeobox protein NANOG (NANOG) is a transcription factor used to aid in the efficiency of generating iPSCs by maintaining pluripotency [ 46 ] and suppressing cell determination factors . [ 47 ] NANOG works by promoting chromatin accessibility through repression of histone markers, such as H3K27me3 . NANOG aids recruitment of Oct4 , Sox2 , and Esrrb used in transcription , while also recruiting Brahma-related gene-1 (BRG1) for chromatin accessibility.
C/EBPα is a commonly used factor when reprogramming cells, not only into iPSCs but also into other cell types. C/EBPα has been shown to act as a single transacting factor during direct reprogramming of a lymphoid cell into a myeloid cell. [ 42 ] C/EBPα is considered a 'path breaker' that helps prepare the cell for uptake of the OSKM factors and specific transcription events. [ 41 ] C/EBPα has also been shown to increase the efficiency of reprogramming events. [ 33 ]
The properties of cells obtained after reprogramming can vary significantly, in particular among iPSCs. [ 48 ] Factors leading to variation in the performance of reprogramming and functional features of end products include genetic background, tissue source, reprogramming factor stoichiometry and stressors related to cell culture. [ 48 ] | https://en.wikipedia.org/wiki/Reprogramming |
The reprojection error is a geometric error corresponding to the image distance between a projected point and a measured one. It is used to quantify how closely an estimate of a 3D point X ^ {\displaystyle {\hat {\mathbf {X} }}} recreates the point's true projection x {\displaystyle \mathbf {x} } . More precisely, let P {\displaystyle \mathbf {P} } be the projection matrix of a camera and x ^ {\displaystyle {\hat {\mathbf {x} }}} be the image projection of X ^ {\displaystyle {\hat {\mathbf {X} }}} , i.e. x ^ = P X ^ {\displaystyle {\hat {\mathbf {x} }}=\mathbf {P} \,{\hat {\mathbf {X} }}} . The reprojection error of X ^ {\displaystyle {\hat {\mathbf {X} }}} is given by d ( x , x ^ ) {\displaystyle d(\mathbf {x} ,\,{\hat {\mathbf {x} }})} , where d ( x , x ^ ) {\displaystyle d(\mathbf {x} ,\,{\hat {\mathbf {x} }})} denotes the Euclidean distance between the image points represented by vectors x {\displaystyle \mathbf {x} } and x ^ {\displaystyle {\hat {\mathbf {x} }}} .
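For a concrete sense of the definition, the error can be evaluated directly once a projection matrix is fixed. A minimal Python sketch, in which the camera intrinsics, the measured image point and the 3D estimate are all made-up values:

```python
import numpy as np

# Hypothetical pinhole camera: P = K [R | t] with identity pose,
# focal length 800 px and principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

def project(P, X):
    """Project an inhomogeneous 3D point X to 2D image coordinates."""
    x = P @ np.append(X, 1.0)   # homogeneous image point
    return x[:2] / x[2]         # dehomogenize

x_measured = np.array([500.0, 300.0])   # measured image point x
X_hat = np.array([0.46, 0.15, 2.0])     # current 3D estimate

x_hat = project(P, X_hat)                      # projection of the estimate
error = np.linalg.norm(x_measured - x_hat)     # Euclidean distance d(x, x-hat)
print(f"reprojection error: {error:.2f} px")   # -> 4.00 px
```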
Minimizing the reprojection error can be used for estimating the error from point correspondences between two images. Suppose we are given imperfect 2D-to-2D point correspondences { x i ↔ x i ′ } {\displaystyle \{\mathbf {x_{i}} \leftrightarrow \mathbf {x_{i}} '\}} . We wish to find a homography H ^ {\displaystyle {\hat {\mathbf {H} }}} and pairs of perfectly matched points x i ^ {\displaystyle {\hat {\mathbf {x_{i}} }}} and x ^ i ′ {\displaystyle {\hat {\mathbf {x} }}_{i}'} , i.e. points that satisfy x i ^ ′ = H ^ x ^ i {\displaystyle {\hat {\mathbf {x_{i}} }}'={\hat {H}}\mathbf {{\hat {x}}_{i}} } , that minimize the reprojection error function given by ∑ i d ( x i , x ^ i ) 2 + d ( x i ′ , x ^ i ′ ) 2 {\displaystyle \sum _{i}d(\mathbf {x} _{i},{\hat {\mathbf {x} }}_{i})^{2}+d(\mathbf {x} _{i}',{\hat {\mathbf {x} }}_{i}')^{2}}
So the correspondences can be interpreted as imperfect images of a world point, and the reprojection error quantifies their deviation from the true image projections x i ^ , x i ^ ′ {\displaystyle {\hat {\mathbf {x_{i}} }},{\hat {\mathbf {x_{i}} }}'} . | https://en.wikipedia.org/wiki/Reprojection_error
The Reprringer is a 3D printed pepperbox firearm, [ 1 ] [ 2 ] [ 3 ] [ 4 ] made public around September 2013. [ 2 ] It is a 5-shot, single-action, manually-indexed .22 CB Cap revolver. [ 2 ] [ 3 ]
Unlike many early 3D-printed firearm designs, which are usually massively overbuilt in order to withstand the pressures and strain on the material from modern gunpowder cartridges, the Reprringer is small and only slightly larger than a gun made from steel. [ 2 ] It is chambered for .22 CB Cap , which is considered the least powerful commercially produced cartridge on the market. [ 2 ] The barrels are not rifled; the resulting lack of accuracy is considered a non-issue in a small gun with no sights. [ 2 ] | https://en.wikipedia.org/wiki/Reprringer
Reptar is a CPU vulnerability discovered in late 2023, affecting a number of recent families of Intel x86 CPUs. According to The Register , the following CPU families are vulnerable: Alder Lake , Raptor Lake and Sapphire Rapids . [ 1 ]
The Reptar vulnerability relates to processing of x86 instruction prefixes in ways that lead to unexpected behavior. It was discovered by Google's security team. [ 2 ] [ 3 ] The vulnerability can be exploited in a number of ways, potentially leading to information leakage , denial of service , or privilege escalation . [ 4 ] [ 5 ]
It has been assigned the CVE ID CVE-2023-23583. [ 5 ] Intel has released new microcode in an out-of-band patch to mitigate the vulnerability, which it refers to as a "redundant prefix" issue. [ 1 ] [ 6 ]
| https://en.wikipedia.org/wiki/Reptar_(vulnerability)
Reptation is a peculiarity of the thermal motion of very long linear macromolecules in entangled polymer melts or concentrated polymer solutions. [ 1 ] Derived from the word reptile , reptation suggests the movement of entangled polymer chains as being analogous to snakes slithering through one another. [ 2 ] Pierre-Gilles de Gennes introduced (and named) the concept of reptation into polymer physics in 1971 to explain the dependence of the mobility of a macromolecule on its length. Reptation is used as a mechanism to explain viscous flow in an amorphous polymer. [ 3 ] [ 4 ] Sir Sam Edwards and Masao Doi later refined reptation theory. [ 5 ] [ 6 ] Similar phenomena also occur in proteins. [ 7 ]
Two closely related concepts are reptons and entanglement . A repton is a mobile point residing in the cells of a lattice, connected by bonds. [ 8 ] [ 9 ] Entanglement means the topological restriction of molecular motion by other chains. [ 10 ]
Reptation theory describes the effect of polymer chain entanglements on the relationship between molecular mass and chain relaxation time . The theory predicts that, in entangled systems, the relaxation time τ is proportional to the cube of molecular mass, M : τ ∝ M 3 . The prediction of the theory can be arrived at by a relatively simple argument. First, each polymer chain is envisioned as occupying a tube of length L , through which it may move with snake-like motion (creating new sections of tube as it moves). Furthermore, if we consider a time scale comparable to τ , we may focus on the overall, global motion of the chain. Thus, we define the tube mobility as μ tube = v f {\displaystyle \mu _{\text{tube}}={\frac {v}{f}}}
where v is the velocity of the chain when it is pulled by a force , f . μ tube will be inversely proportional to the degree of polymerization (and thus also inversely proportional to chain weight).
The diffusivity of the chain through the tube may then be written as D tube = μ tube k T {\displaystyle D_{\text{tube}}=\mu _{\text{tube}}kT}
By then recalling that in one dimension the mean squared displacement due to Brownian motion is given by s ( t ) 2 = 2 D tube t {\displaystyle s(t)^{2}=2D_{\text{tube}}t} ,
we obtain s ( t ) = 2 D tube t {\displaystyle s(t)={\sqrt {2D_{\text{tube}}t}}} .
The time necessary for a polymer chain to displace the length of its original tube is then t = L 2 2 D tube {\displaystyle t={\frac {L^{2}}{2D_{\text{tube}}}}} .
By noting that this time is comparable to the relaxation time, we establish that τ ∝ L 2 / μ tube . Since the length of the tube is proportional to the degree of polymerization, and μ tube is inversely proportional to the degree of polymerization, we observe that τ ∝ ( DP n ) 3 (and so τ ∝ M 3 ).
From the preceding analysis, we see that molecular mass has a very strong effect on relaxation time in entangled polymer systems. Indeed, this is significantly different from the untangled case, where relaxation time is observed to be proportional to molecular mass. This strong effect can be understood by recognizing that, as chain length increases, the number of tangles present will dramatically increase. These tangles serve to reduce chain mobility. The corresponding increase in relaxation time can result in viscoelastic behavior, which is often observed in polymer melts. Note that the polymer's zero-shear viscosity gives an approximation of the actual observed dependency, τ ∝ M 3.4 , [ 11 ] slightly stronger than the predicted M 3 dependence.
Entangled polymers are characterized by an effective internal scale, commonly known as the length of macromolecule between adjacent entanglements M e {\displaystyle M_{\text{e}}} .
Entanglements with other polymer chains restrict polymer chain motion to a thin virtual tube passing through the restrictions. [ 12 ] Without breaking polymer chains to allow the restricted chain to pass through it, the chain must be pulled or flow through the restrictions. The mechanism for movement of the chain through these restrictions is called reptation.
In the blob model, [ 13 ] the polymer chain is made up of n {\displaystyle n} Kuhn lengths of individual length l {\displaystyle l} . The chain is assumed to form blobs between each entanglement, containing n e {\displaystyle n_{\text{e}}} Kuhn length segments in each. The mathematics of random walks can show that the average end-to-end distance of a section of a polymer chain made up of n e {\displaystyle n_{\text{e}}} Kuhn lengths is d = l n e {\displaystyle d=l{\sqrt {n_{\text{e}}}}} . Therefore if there are n {\displaystyle n} total Kuhn lengths, and A {\displaystyle A} blobs on a particular chain: A = n n e {\displaystyle A={\frac {n}{n_{\text{e}}}}}
The total end-to-end length of the restricted chain L {\displaystyle L} is then: L = A d = n l n e {\displaystyle L=Ad={\frac {nl}{\sqrt {n_{\text{e}}}}}}
This is the average length a polymer molecule must diffuse to escape from its particular tube, and so the characteristic time for this to happen can be calculated using diffusive equations. A classical derivation gives the reptation time t {\displaystyle t} : t = μ l 2 n 3 k T n e {\displaystyle t={\frac {\mu l^{2}n^{3}}{kTn_{\text{e}}}}}
where μ {\displaystyle \mu } is the coefficient of friction on a particular polymer chain, k {\displaystyle k} is the Boltzmann constant, and T {\displaystyle T} is the absolute temperature.
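A quick numerical check of the cubic scaling is straightforward; in this sketch the friction coefficient, Kuhn length, temperature and entanglement number are arbitrary illustrative values:

```python
kB = 1.380649e-23    # Boltzmann constant, J/K

def reptation_time(n, n_e=50, l=1e-9, mu=1e-9, T=300.0):
    """Blob-model reptation time t = mu * l^2 * n^3 / (n_e * k * T).
    n   : number of Kuhn segments in the chain
    n_e : Kuhn segments per entanglement (arbitrary choice here)
    l   : Kuhn length in metres (arbitrary)
    mu  : friction coefficient in N*s/m (arbitrary)
    T   : absolute temperature in kelvin
    """
    return mu * l**2 * n**3 / (n_e * kB * T)

# Doubling the chain length multiplies t by 2^3 = 8, reproducing the
# tau ~ M^3 prediction of reptation theory.
for n in (1000, 2000, 4000):
    print(f"n = {n:5d}  t = {reptation_time(n):.3e} s")
print("ratio t(2n)/t(n):", reptation_time(2000) / reptation_time(1000))  # 8.0
```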
Linear macromolecules reptate if the length of the macromolecule M {\displaystyle M} is greater than the critical entanglement molecular weight M c {\displaystyle M_{\text{c}}} . M c {\displaystyle M_{\text{c}}} is 1.4 to 3.5 times M e {\displaystyle M_{\text{e}}} . [ 14 ] There is no reptation motion for polymers with M < M c {\displaystyle M<M_{\text{c}}} , so that the point M c {\displaystyle M_{\text{c}}} is a point of dynamic phase transition.
Due to the reptation motion, the coefficient of self-diffusion and the conformational relaxation times of macromolecules depend on the length of the macromolecule as M − 2 {\displaystyle M^{-2}} and M 3 {\displaystyle M^{3}} , respectively. [ 15 ] [ 16 ] The conditions for the existence of reptation in the thermal motion of macromolecules of complex architecture (macromolecules in the form of branches, stars, combs and others) have not yet been established.
The dynamics of shorter chains or of long chains at short times is usually described by the Rouse model . | https://en.wikipedia.org/wiki/Reptation |
Reptation Monte Carlo is a quantum Monte Carlo method.
It is similar to Diffusion Monte Carlo , except that it works with paths rather than points. This has some advantages relating to calculating certain properties of the system under study that diffusion Monte Carlo has difficulty with.
In both diffusion Monte Carlo and reptation Monte Carlo, the method first aims to solve the time-dependent Schrödinger equation in the imaginary time direction. Propagating the Schrödinger equation in real time yields the dynamics of the system under study; propagating it in imaginary time yields a state that tends towards the ground state of the system.
When substituting i t {\displaystyle it} in place of t {\displaystyle t} , the Schrödinger equation becomes identical to a diffusion equation . Diffusion equations can be solved by imagining a huge population of particles (sometimes called "walkers"), each diffusing in a way that solves the original equation. This is how diffusion Monte Carlo works.
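Written out for the free-particle case (the potential term is omitted here for brevity), the substitution τ = it reads:

```latex
i\hbar\,\frac{\partial\psi}{\partial t}
   = -\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi
\qquad\xrightarrow{\ \tau\,=\,it\ }\qquad
\frac{\partial\psi}{\partial\tau}
   = \frac{\hbar}{2m}\,\nabla^{2}\psi
```

The right-hand side is an ordinary diffusion equation with diffusion coefficient ħ/2m; a potential V adds a rate term −(V/ħ)ψ, which walker-based methods implement as a birth/death process.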
Reptation Monte Carlo works in a very similar way, but is focused on the paths that the walkers take, rather than the density of walkers.
In particular, a path may be mutated using a Metropolis algorithm which tries a change (normally at one end of the path) and then accepts or rejects the change based on a probability calculation.
The update step in diffusion Monte Carlo would be moving the walkers slightly, and then duplicating and removing some of them. By contrast, the update step in reptation Monte Carlo mutates a path, and then accepts or rejects the mutation.
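The following toy sketch illustrates the reptation move for a single particle in a one-dimensional harmonic well. It is only a bare-bones illustration of the path update (primitive action, no trial wavefunction, no importance sampling), not the production algorithm described in the literature:

```python
import math
import random

# Toy reptation move for one particle in a 1D harmonic well, hbar = m = omega = 1.
dt = 0.05        # imaginary-time step per link
n_beads = 60     # beads in the open path (the "reptile")

def V(x):
    return 0.5 * x * x

random.seed(42)
path = [0.0] * n_beads
accepted = 0
n_steps = 50_000

for _ in range(n_steps):
    # Choose a direction at random; reversing the path makes head and tail
    # moves identical.
    if random.random() < 0.5:
        path.reverse()
    # Grow a new head bead from the free-particle (kinetic) Gaussian and drop
    # the tail bead.  With this proposal the kinetic terms cancel in the
    # Metropolis ratio, leaving only the potential of the added/removed links.
    new_head = path[-1] + random.gauss(0.0, math.sqrt(dt))
    dS = dt * 0.5 * (V(new_head) + V(path[-1])) \
       - dt * 0.5 * (V(path[0]) + V(path[1]))
    if dS <= 0 or random.random() < math.exp(-dS):
        path = path[1:] + [new_head]
        accepted += 1

print("acceptance ratio:", accepted / n_steps)
# Beads near the middle of the path sample the ground state; <x^2> should
# approach 0.5 for this well (a single final path gives a noisy estimate;
# real calculations average over many sweeps).
mid = path[n_beads // 3 : 2 * n_beads // 3]
print("<x^2> estimate:", sum(x * x for x in mid) / len(mid))
```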
| https://en.wikipedia.org/wiki/Reptation_Monte_Carlo
Repulsive guidance molecules ( RGMs ) are members of a three gene family (in vertebrates) composed of RGMa , RGMb , and RGMc (also called hemojuvelin).
RGMa has been implicated to play an important role in the developing brain and in the scar tissue that forms after a brain injury . For example, RGMa helps guide retinal ganglion cell (RGC) axons to the tectum in the midbrain . It has also been demonstrated that after induced spinal cord injury RGMa accumulates in the scar tissue around the lesion. Further research has shown that RGMa is an inhibitor of axonal outgrowth. Taken together, these findings highlight the importance of RGMa in axonal guidance and outgrowth. [ 1 ] | https://en.wikipedia.org/wiki/Repulsive_guidance_molecule |
In quantum mechanics , a repulsive state is an electronic state of a molecule for which there is no minimum in the potential energy . This means that the state is unstable and unbound since the potential energy smoothly decreases with the interatomic distance and the atoms repel one another. In such a state there are no discrete vibrational energy levels ; instead, these levels form a continuum. This should not be confused with an excited state , which is a metastable electronic state containing a minimum in the potential energy, and may be short or long-lived.
When a molecule is excited, for example by the absorption of ultraviolet or visible ( UV/VIS ) light, it can undergo a molecular electronic transition : if such a transition brings the molecule into a repulsive state, it will spontaneously dissociate . This condition is also known as predissociation , since the chemical bond is broken at an energy lower than what might be expected. In electronic spectroscopy, this often appears as a strong, continuous feature in the absorption or emission spectrum , making repulsive states easy to detect.
For example, triatomic hydrogen has a repulsive ground state, which means it can only exist in an excited state: if it drops down to the ground state, it will immediately break up into one of the several possible dissociation products.
| https://en.wikipedia.org/wiki/Repulsive_state
Request price quotation or RPQ is a long-standing IBM designation for a product or component that is potentially available, but that is not on the "standard" price list. [ 1 ] [ 2 ] Typical RPQ offerings are custom interfaces, hardware modifications, research or experimental systems, or variable-cost items. In the days of IBM's large mainframes, e.g. the System/360 and System/370 series, many unusual features were flagged as "RPQ".
A special-order software item is known as a Programming Request Price Quotation or PRPQ .
The standard punched card code for the groupmark character on the IBM 1401 computer system used punches in rows 12, 7, and 8 of a card column (written as 12-7-8). The older IBM 705 computer used 12-5-8 for this character. A compatibility RPQ was available for the 1401 that allowed the system to read or punch the 705 code rather than the standard code. Since not all 1401 users would need this feature, it was marketed as an RPQ. [ 3 ]
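As an illustration of what such a compatibility feature substitutes, punch codes can be modelled as tuples of punched rows; this sketch is purely hypothetical and does not emulate real 1401 hardware:

```python
# Group-mark punch codes, written as tuples of punched rows.
GROUPMARK_1401 = (12, 7, 8)   # standard 1401 code
GROUPMARK_705 = (12, 5, 8)    # older 705 code

def translate(punches, to_705):
    """Swap the group-mark code between 1401 and 705 conventions."""
    if to_705 and punches == GROUPMARK_1401:
        return GROUPMARK_705
    if not to_705 and punches == GROUPMARK_705:
        return GROUPMARK_1401
    return punches

print(translate((12, 7, 8), to_705=True))   # -> (12, 5, 8)
```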
The features used by the Compatible Time-Sharing System to support time-sharing on the IBM 7090 and IBM 7094 were offered as RPQs. [ 1 ] [ 4 ]
| https://en.wikipedia.org/wiki/Request_price_quotation
A requirement diagram is a diagram used specifically in SysML , in which requirements, the relations between them, and their relationships to other model elements are shown, as discussed in the following paragraphs.
If a requirement is derived from another requirement, their relation is named "derive requirement relationship".
If a requirement is contained in another requirement, their relation is named "namespace containment".
If a requirement is satisfied by a design element, their relation is named "satisfy relationship".
If a requirement is a copy of another requirement, their relation is named "copy relationship".
If there exists a relation between a requirement and a test case verifying this requirement, their relation is named "verify relationship".
A test case is defined by a flow checking whether the system under consideration satisfies a requirement.
If a requirement is refined by other requirements / model elements, the relation is named "refine relationship".
If there exists a relation between a requirement and an arbitrary model element traced by this requirement, their relation is named "trace relationship". | https://en.wikipedia.org/wiki/Requirement_diagram |
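These relationships lend themselves to a simple traceability model in code. The sketch below is illustrative only; the classes and fields are not part of any SysML tool's API:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    rid: str
    text: str
    derived_from: list["Requirement"] = field(default_factory=list)  # derive relationship
    satisfied_by: list[str] = field(default_factory=list)            # satisfy (design elements)
    verified_by: list[str] = field(default_factory=list)             # verify (test cases)
    refined_by: list[str] = field(default_factory=list)              # refine relationship

r1 = Requirement("R1", "The vehicle shall stop within 50 m from 100 km/h.")
r2 = Requirement("R2", "The brake controller shall engage within 100 ms.",
                 derived_from=[r1])
r2.satisfied_by.append("BrakeController design block")
r2.verified_by.append("TC-017: brake latency test")

# Simple traceability check: flag requirements lacking a verify relationship.
for req in (r1, r2):
    if not req.verified_by:
        print(f"{req.rid} has no verifying test case")
```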
In systems engineering and software engineering , requirements analysis focuses on the tasks that determine the needs or conditions to be met by a new or altered product or project, taking account of the possibly conflicting requirements of the various stakeholders , and analyzing , documenting , validating , and managing software or system requirements . [ 2 ]
Requirements analysis is critical to the success or failure of systems or software projects . [ 3 ] The requirements should be documented, actionable, measurable, testable, [ 4 ] traceable, [ 4 ] related to identified business needs or opportunities, and defined to a level of detail sufficient for system design .
Conceptually, requirements analysis includes three types of activities: eliciting requirements (gathering them from stakeholders and other sources), analyzing requirements (determining whether they are clear, complete, consistent and unambiguous, and resolving any apparent conflicts), and recording requirements (documenting them in forms such as natural-language documents, use cases, user stories, or process specifications). [ citation needed ]
Requirements analysis can be a long and tiring process during which many delicate psychological skills are involved. New systems change the environment and relationships between people, so it is important to identify all the stakeholders, take into account all their needs, and ensure they understand the implications of the new systems. Analysts can employ several techniques to elicit the requirements from the customer. These may include the development of scenarios (represented as user stories in agile methods ), the identification of use cases , the use of workplace observation or ethnography , holding interviews , or focus groups (more aptly named in this context as requirements workshops, or requirements review sessions) and creating requirements lists. Prototyping may be used to develop an example system that can be demonstrated to stakeholders. Where necessary, the analyst will employ a combination of these methods to establish the exact requirements of the stakeholders, so that a system that meets the business needs is produced. [ 5 ] [ 6 ] Requirements quality can be improved through these and other methods:
See Stakeholder analysis for a discussion of people or organizations (legal entities such as companies, and standards bodies) that have a valid interest in the system. They may be affected by it either directly or indirectly.
A major new emphasis in the 1990s was a focus on the identification of stakeholders . It is increasingly recognized that stakeholders are not limited to the organization employing the analyst. Other stakeholders will include:
Requirements often have cross-functional implications that are unknown to individual stakeholders and often missed or incompletely defined during stakeholder interviews. These cross-functional implications can be elicited by conducting Joint Requirements Development (JRD) sessions in a controlled environment, facilitated by a trained facilitator (Business Analyst), wherein stakeholders participate in discussions to elicit requirements, analyze their details, and uncover cross-functional implications. A dedicated scribe should be present to document the discussion, freeing the Business Analyst to lead the discussion in a direction that generates appropriate requirements which meet the session objective.
JRD Sessions are analogous to Joint Application Design Sessions. In the former, the sessions elicit requirements that guide design, whereas the latter elicit the specific design features to be implemented in satisfaction of elicited requirements.
One traditional way of documenting requirements has been contract-style requirement lists. In a complex system such requirements lists can run hundreds of pages long.
An appropriate metaphor would be an extremely long shopping list. Such lists are very much out of favor in modern analysis, as they have proved spectacularly unsuccessful at achieving their aims; [ citation needed ] but they are still seen to this day.
As an alternative to requirement lists, Agile Software Development uses User stories to suggest requirements in everyday language.
Best practices take the composed list of requirements merely as clues and repeatedly ask "why?" until the actual business purposes are discovered. Stakeholders and developers can then devise tests to measure what level of each goal has been achieved thus far. Such goals change more slowly than the long list of specific but unmeasured requirements. Once a small set of critical, measured goals has been established, rapid prototyping and short iterative development phases may proceed to deliver actual stakeholder value long before the project is half over.
A prototype is a computer program that exhibits a part of the properties of another computer program, allowing users to visualize an application that has not yet been constructed. A popular form of prototype is a mockup , which helps future users and other stakeholders get an idea of what the system will look like. Prototypes make it easier to make design decisions because aspects of the application can be seen and shared before the application is built. Major improvements in communication between users and developers were often seen with the introduction of prototypes. Early views of applications led to fewer changes later and hence reduced overall costs considerably. [ citation needed ]
Prototypes can be flat diagrams (often referred to as wireframes ) or working applications using synthesized functionality. Wireframes are made in a variety of graphic design documents, and often remove all color from the design (i.e. use a greyscale color palette) in instances where the final software is expected to have a graphic design applied to it. This helps to prevent confusion as to whether the prototype represents the final visual look and feel of the application. [ citation needed ]
A use case is a structure for documenting the functional requirements for a system, usually involving software, whether that is new or being changed. Each use case provides a set of scenarios that convey how the system should interact with a human user or another system, to achieve a specific business goal. Use cases typically avoid technical jargon, preferring instead the language of the end-user or domain expert . Use cases are often co-authored by requirements engineers and stakeholders.
Use cases are deceptively simple tools for describing the behavior of software or systems. A use case contains a textual description of how users are intended to work with the software or system. Use cases should not describe the internal workings of the system, nor should they explain how that system will be implemented. Instead, they show the steps needed to perform a task without sequential assumptions.
Requirements specification is the synthesis of discovery findings regarding current state business needs and the assessment of these needs to determine, and specify, what is required to meet the needs within the solution scope in focus. Discovery, analysis, and specification move the understanding from a current as-is state to a future to-be state. Requirements specification can cover the full breadth and depth of the future state to be realized, or it could target specific gaps to fill, such as priority software system bugs to fix and enhancements to make. Given that any large business process almost always employs software and data systems and technology, requirements specification is often associated with software system builds, purchases, cloud computing strategies, embedded software in products or devices, or other technologies. The broader definition of requirements specification includes or focuses on any solution strategy or component, such as training, documentation guides, personnel, marketing strategies, equipment, supplies, etc.
Requirements are categorized in several ways. The following are common categorizations of requirements that relate to technical management: [ 1 ]
Statements of business level goals, without reference to detailed functionality. These are usually high-level (software and/or hardware) capabilities that are needed to achieve a business outcome.
Statements of fact and assumptions that define the expectations of the system in terms of mission objectives, environment, constraints, and measures of effectiveness and suitability (MOE/MOS). The customers are those that perform the eight primary functions of systems engineering, with special emphasis on the operator as the key customer. Operational requirements will define the basic need and, at a minimum, answer the questions posed in the following listing: [ 1 ]
Architectural requirements explain what has to be done by identifying the necessary systems architecture of a system .
Behavioral requirements explain what has to be done by identifying the necessary behavior of a system.
Functional requirements explain what has to be done by identifying the necessary task, action or activity that must be accomplished. The identified functional requirements are then used as the top-level functions for functional analysis. [ 1 ]
Non-functional requirements are requirements that specify criteria that can be used to judge the operation of a system, rather than specific behaviors.
The extent to which a mission or function must be executed, generally measured in terms of quantity, quality, coverage, timeliness, or readiness. During requirements analysis, performance requirements (how well the function must be done) are developed interactively across all identified functions, based on system life cycle factors; they are characterized in terms of the degree of certainty in their estimate, the degree of criticality to system success, and their relationship to other requirements. [ 1 ]
The "build to", "code to", and "buy to" requirements for products and "how to execute" requirements for processes are expressed in technical data packages and technical manuals. [ 1 ]
Requirements that are implied or transformed from higher-level requirements. For example, a requirement for long range or high speed may result in a design requirement for low weight. [ 1 ]
A requirement is established by dividing or otherwise allocating a high-level requirement into multiple lower-level requirements. Example: A 100-pound item that consists of two subsystems might result in weight requirements of 70 pounds and 30 pounds for the two lower-level items. [ 1 ]
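As a minimal sketch of such an allocation (hypothetical Python; the fractions are assumptions chosen to reproduce the example above):

    def allocate(parent_budget, fractions):
        # Split a higher-level requirement (here, a weight budget in pounds)
        # into lower-level allocations; the fractions must sum to 1.
        assert abs(sum(fractions.values()) - 1.0) < 1e-9
        return {name: parent_budget * f for name, f in fractions.items()}

    # The 100-pound item from the example, split between two subsystems.
    print(allocate(100, {"subsystem_a": 0.7, "subsystem_b": 0.3}))
    # {'subsystem_a': 70.0, 'subsystem_b': 30.0}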
Well-known requirements categorization models include FURPS and FURPS+, developed at Hewlett-Packard .
Steve McConnell, in his book Rapid Development , details a number of ways users can inhibit requirements gathering:
This may lead to the situation where user requirements keep changing even when system or product development has been started.
Possible problems caused by engineers and developers during requirements analysis are:
One attempted solution to communications problems has been to employ specialists in business or system analysis.
Techniques introduced in the 1990s like prototyping , Unified Modeling Language (UML), use cases , and agile software development are also intended as solutions to problems encountered with previous methods.
Also, a new class of application simulation or application definition tools has entered the market. These tools are designed to bridge the communication gap between business users and the IT organization, and to allow applications to be 'test marketed' before any code is produced. The best of these tools offer:
[ 1 ] | https://en.wikipedia.org/wiki/Requirements_analysis |
Requirements engineering ( RE ) [ 1 ] is the process of defining, documenting, and maintaining requirements [ 2 ] in the engineering design process . It is a common role in systems engineering and software engineering .
The first use of the term requirements engineering was probably in 1964 in the conference paper "Maintenance, Maintainability, and System Requirements Engineering", [ 3 ] but it did not come into general use until the late 1990s with the publication of an IEEE Computer Society tutorial [ 4 ] in March 1997 and the establishment of a conference series on requirements engineering that has evolved into the International Requirements Engineering Conference .
In the waterfall model , [ 5 ] requirements engineering is presented as the first phase of the development process. Later development methods, including the Rational Unified Process (RUP) for software, assume that requirements engineering continues through a system's lifetime.
Requirements management , which is a sub-function of Systems Engineering practices, is also indexed in the International Council on Systems Engineering (INCOSE) manuals.
The activities involved in requirements engineering vary widely, depending on the type of system being developed and the organization's specific practice(s) involved. [ 6 ] These may include:
These are sometimes presented as chronological stages although, in practice, there is considerable interleaving of these activities.
Requirements engineering has been shown to clearly contribute to software project successes. [ 8 ]
One limited study in Germany presented possible problems in implementing requirements engineering and asked respondents whether they agreed that they were actual problems. The results were not presented as being generalizable but suggested that the principal perceived problems were incomplete requirements, moving targets, and time boxing, with lesser problems being communications flaws, lack of traceability, terminological problems, and unclear responsibilities. [ 9 ]
Problem structuring, a key aspect of requirements engineering, has been speculated to reduce design performance. [ 10 ] Some research suggests that, if deficiencies in the requirements engineering process leave a situation in which requirements do not actually exist, software requirements may nevertheless be created, as an illusion that misrepresents design decisions as requirements. [ 11 ] | https://en.wikipedia.org/wiki/Requirements_engineering
Resazurin (7-Hydroxy-3 H -phenoxazin-3-one 10-oxide) is a phenoxazine dye that is weakly fluorescent , nontoxic , cell-permeable, and redox‐sensitive. [ 2 ] [ 3 ] Resazurin has a blue to purple color above pH 6.5 and an orange color below pH 3.8. [ 4 ] It is used in microbiological , cellular , and enzymatic assays because it can be irreversibly reduced to the pink -colored and highly fluorescent resorufin (7-Hydroxy-3 H -phenoxazin-3-one). At circum-neutral pH, resorufin can be detected by visual observation of its pink color or by fluorimetry , with an excitation maximum at 530-570 nm and an emission maximum at 580-590 nm. [ 5 ]
When a solution containing resorufin is subjected to reducing conditions (E h < -110 mV), almost all resorufin is reversibly reduced to the translucent, non-fluorescent dihydroresorufin (also known as hydroresorufin) and the solution loses its color (the redox potential of the resorufin/dihydroresorufin pair is -51 mV vs. the standard hydrogen electrode at pH 7.0). When the E h of this same solution is increased, dihydroresorufin is oxidized back to resorufin, and this reversible reaction can be used to monitor whether the redox potential of a culture medium remains at a sufficiently low level for anaerobic organisms .
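As a minimal sketch of that monitoring logic (Python; the sharp cutoff is a simplification, since the real transition is gradual around the -51 mV midpoint):

    def expected_dye_state(eh_mv):
        # Map a measured redox potential (mV vs. SHE at pH 7.0) to the
        # dominant form of the dye, using the threshold quoted above.
        if eh_mv < -110:
            return "dihydroresorufin (non-fluorescent; suitable for anaerobes)"
        return "resorufin (pink, fluorescent; E_h has risen too high)"

    print(expected_dye_state(-150))  # reduced: anaerobic conditions maintained
    print(expected_dye_state(-20))   # oxidized: warns that the medium is too oxidizing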
Resazurin solution has one of the highest known values of Kreft's dichromaticity index . [ 6 ] This means that it shows a large change in perceived color hue when the thickness or concentration of the observed sample increases or decreases.
Usually, resazurin is available commercially as the sodium salt .
Resazurin is reduced to resorufin by aerobic respiration of metabolically active cells, and it can be used as an indicator of cell viability. It was first used to quantify bacterial content in milk by Pesch and Simmert in 1929. [ 7 ] It can be used to detect the presence of viable cells in mammalian cell cultures. [ 8 ] It was initially introduced commercially under the Alamar Blue trademark (Trek Diagnostic Systems, Inc), and is now also available under other names such as AB assay, Vybrant ( Molecular Probes ) and UptiBlue ( Interchim ).
Resazurin based assays show excellent correlation to reference viability assays such as formazan -based assays ( MTT /XTT) and tritiated thymidine based techniques. [ 9 ] [ 10 ] The low toxicity makes it suitable for longer studies, and it has been applied for animal cells, bacteria, and fungi [ 10 ] for cell culture assays such as cell counting, cell survival, and cell proliferation . [ 11 ] In antimicrobial assays, resazurin is commonly utilized to assess the minimum inhibitory concentration (MIC) or minimum bactericidal concentration (MBC) of antimicrobial agents. [ 12 ]
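A minimal sketch of how such plate readings might be processed (Python; the percent-reduction formula is a simplified, background-corrected fluorescence form, and the 10% threshold is an assumption, not a standard):

    def percent_reduction(f_sample, f_blank, f_fully_reduced):
        # Resazurin reduction relative to a fully reduced control
        # (e.g. an autoclaved well), with a cell-free blank as background.
        return 100.0 * (f_sample - f_blank) / (f_fully_reduced - f_blank)

    def mic(wells, threshold=10.0):
        # Lowest antimicrobial concentration whose well shows essentially no
        # reduction (resazurin stays blue); assumes a monotonic dose response.
        # `wells` maps concentration -> percent reduction.
        inhibited = [c for c, r in sorted(wells.items()) if r < threshold]
        return inhibited[0] if inhibited else None

    readings = {0.5: 95.0, 1.0: 80.0, 2.0: 8.0, 4.0: 3.0}  # hypothetical data
    print(mic(readings))  # 2.0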
To take the place of a standard live/dead assay, resazurin can also be multiplexed with chemiluminescent assays, such as cytokine assays, caspase assays to measure apoptosis, or reporter assays to measure gene or protein expression. [ 10 ]
The irreversible reaction of resazurin to resorufin is proportional to aerobic respiration. [ 13 ]
Resazurin can be used as one of a series of rapid tests to determine the quality of a milk sample. In this test, resazurin is added as a violet redox dye which turns mauvish-pink due to conversion to resorufin and then to colourless dihydroresorufin. This happens due to the lowering of the oxidation-reduction potential in the milk sample, caused by the presence of bacteria which utilize the available oxygen in the milk for aerobic respiration . The rate of the colour change is used as an index of the number of bacteria present in the milk sample. [ 14 ]
Resazurin is effectively reduced in mitochondria , making it useful also to assess mitochondrial metabolic activity.
Usually, in the presence of NADPH dehydrogenase or NADH dehydrogenase as the enzyme, NADPH or NADH is the reductant that converts resazurin to resorufin. Hence the resazurin/diaphorase/NADPH system can be used to detect NADH, NADPH, or diaphorase level, and any biochemical or enzyme activity that is involved in a biochemical reaction generating NADH or NADPH. [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ]
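For illustration, a minimal standard-curve sketch (Python with NumPy; all values are hypothetical, and the signal is assumed linear in NADH over the range used):

    import numpy as np

    nadh_standards_uM = np.array([0.0, 1.0, 2.0, 4.0, 8.0])    # known concentrations
    fluorescence = np.array([3.0, 52.0, 101.0, 198.0, 405.0])  # measured signals

    slope, intercept = np.polyfit(nadh_standards_uM, fluorescence, 1)

    def nadh_from_signal(f):
        # Interpolate an unknown sample's NADH concentration (uM)
        # from the fitted line.
        return (f - intercept) / slope

    print(round(nadh_from_signal(150.0), 2))  # roughly 3 uM for these values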
Resazurin can be used to assay L-Glutamate , achieving a sensitivity of 2.0 pmol per well in a 96 well plate. [ 20 ]
Resazurin can also be used to measure the aerobic biodegradation of organic matter found in effluents. [ 21 ]
Resazurin is used to measure the amount of aerobic respiration in streams. [ 22 ] Since most aerobic respiration occurs in the stream bed, the conversion of resazurin to resorufin is also a measure of the amount of exchange between the water column and the stream bed.
Resazurin is prepared by acid-catalyzed condensation between resorcinol and 4-nitrosoresorcinol followed by oxidation of the intermediate with manganese(IV) oxide :
Treatment of the crude reaction product with excess sodium carbonate yields the sodium salt of resazurin, which is typically the commercial form of the dye. Running the condensation step in alcohols is possible but results in lower yields of the product; in pure water or acetic acid, the reaction does not proceed satisfactorily. [ 23 ]
10-Acetyl-3,7-dihydroxyphenoxazine (also known as Amplex Red), structurally related to resazurin, reacts with H 2 O 2 in a 1:1 stoichiometry to produce the same product, resorufin; it is used in many assays in combination with, for example, horseradish peroxidase (HRP) or enzymes that use NADH or NADPH. [ 24 ]
7-Ethoxyresorufin is a compound used as the substrate in the measurement of cytochrome P450 ( CYP1A1 ) induction, using the ethoxyresorufin-O-deethylase (EROD) assay system in cell culture and environmental samples; CYP1A1 is produced in response to exposure to aryl hydrocarbons . The enzyme converts the compound into the same fluorescent product, resorufin. [ 25 ] [ 26 ]
1,3-Dichloro-7-hydroxy-9,9-dimethylacridin-2(9 H )-one (DDAO dye) is a fluorescent dye used for oligonucleotide labeling. [ 27 ] | https://en.wikipedia.org/wiki/Resazurin
Rescue fusion hybridization is a process used to manufacture some therapeutic cancer vaccines in which individual tumor cells obtained through biopsy are fused with an antibody-secreting cell to form a heterohybridoma . This cell then secretes the unique idiotype, or immunoglobulin antigen characteristic of the individual tumor, which is purified for use as the vaccine. [ 1 ] It is used to produce the BiovaxID vaccine for follicular lymphoma . [ citation needed ]
| https://en.wikipedia.org/wiki/Rescue_fusion_hybridization
The research-based design process is a research process proposed by Teemu Leinonen, [ 1 ] [ 2 ] inspired by several design theories. [ 3 ] [ 4 ] [ 5 ] It is strongly oriented towards the building of prototypes and it emphasizes creative solutions, exploration of various ideas and design concepts, continuous testing and redesign of the design solutions.
The method is firmly influenced by the Scandinavian participatory design approach. Therefore, most of the activities take place in a close dialogue with the community that is expected to use the tools or services designed.
The process can be divided into four major phases: contextual inquiry, participatory design, product design, and prototype as hypothesis. These phases happen concurrently and side by side; at different times in the research, researchers are asked to put more effort into different phases, while the continuous iteration asks them to keep all the phases alive all the time.
Contextual inquiry refers to the exploration of the socio-cultural context of the design. The aim is to understand the environment, situation, and culture where the design takes place. The results of the contextual inquiry are a better understanding of the context and a recognition of the possible challenges and design opportunities within it. In this phase, design researchers use rapid ethnographic methods, such as participatory observation , note-taking, sketching, informal conversations, and interviews. At the same time as the field work, the design researchers conduct a focused review of the literature, benchmark existing solutions, and analyze trends in the area in order to understand and recognize design challenges.
Throughout the contextual inquiry, design researchers start to develop preliminary design ideas, which are then developed during the next stage, participatory design, in workshops with different stakeholders. The participatory design sessions tend to take place with small groups of 4 to 6 participants. A common practice is to present the results as scenarios made by the design researchers containing challenges and design opportunities. In the workshop, the participants are invited to come up with design solutions to the challenges and to bring new challenges and solutions into the discussion.
Since one of the main features of research-based design is its participatory nature, the user's involvement is an integral part of the process. In this regard, participatory design workshops are organized during the different stages in order to validate initial ideas and discuss the prototypes at different stages of development.
The results of the participatory design are analysed in a design studio by the design researchers who use the materials from the contextual inquiry and participatory design sessions to redefine the design problems and redesign the prototypes. By keeping a distance from the stakeholders, in the product design phase the design researchers will get a chance to analyse the results of the participatory design, categorize them, use specific design language related to implementation of the prototypes, and finally make design decisions.
Ultimately, the prototypes are developed to be functional at a level where they can be tested with real people in their everyday situations. The prototypes are still considered to be a hypothesis (prototypes as hypothesis), because they are expected to form part of the solutions to the challenges defined and redefined during the research. It remains for the stakeholders to decide whether they support the assertions made by the design researchers. The first prototypes brought into use by real people can therefore also be considered minimum viable products .
Research-based design is not to be confused with design-based research or educational design research. [ 6 ] [ 7 ] [ 8 ] [ 1 ] [ 9 ] [ 10 ] In research-based design, which builds on art and design tradition, the focus is on the artifacts, the end-results of the design. The way the artifacts are, the affordances and features they have or do not have, form an important part of the research argumentation. As such, research-based design as a methodological approach includes research, design, and design interventions that are all intertwined. | https://en.wikipedia.org/wiki/Research-based_design |
The REsearch Consortium On Nearby Stars ( RECONS ) is an international group of astronomers founded in 1994 to investigate the stars nearest to the Solar System , with a focus on those within 10 parsecs (32.6 light years ); as of 2012 the horizon was extended to 25 parsecs. In part, the project hopes that a more accurate survey of local star systems will give a better picture of the star systems in the Galaxy as a whole.
The Consortium claims authorship of the series The Solar Neighborhood in The Astronomical Journal , which began in 1994. [ 1 ] This series now numbers nearly 40 papers and submissions. The following discoveries are from this series:
RECONS is listed explicitly as an author on papers submitted to the Bulletin of the American Astronomical Society since 2004. [ 5 ]
The RECONS web page includes the frequently referenced "List of the 100 nearest star systems", [ 6 ] which is updated as discoveries are made. A list of all RECONS parallaxes [ 7 ] is available, as are all papers in The Solar Neighborhood series [ 8 ] and a page [ 9 ] that illustrates data from the RECONS 25 Parsec Database .
Key astronomers involved in the project include | https://en.wikipedia.org/wiki/Research_Consortium_On_Nearby_Stars |
The Research Defence Society was a British scientific society and lobby group founded by Stephen Paget in 1908 to fight against the anti-vivisectionist "enemies of reason" at the beginning of the 20th century. At the end of 2008, after being active for 100 years, it merged with the communications group Coalition for Medical Progress to form the advocacy group Understanding Animal Research . [ 1 ]
The Research Defence Society's aim was to disseminate information about, and to defend the use of, research involving animals , including animal testing . It represented the interests of 5,000 researchers and institutions. Its sources of funding changed over the hundred years that the society was active, and included individuals, government, the pharmaceutical industry, and universities. [ 2 ] The organisation's literature stated that it was funded by its members, including medical scientists, doctors, veterinarians, pharmaceutical companies, research institutes, universities, and charities that support medical research. [ 3 ]
Its last executive director was Dr. Simon Festing , who became CEO of Understanding Animal Research. [ 4 ]
One campaign to demonstrate the support for animal research within the scientific and medical community was the co-signing of a petition in support of the use of animals in research called the Declaration on Animals in Medical Research . [ 5 ] The declaration was signed in 1990, and a modified version in 2005. Over 700 scientists, of whom 500 were British, signed the declaration in the first month, including three Nobel laureates , 190 Fellows of the Royal Society and the Medical Royal Colleges, and over 250 academic professors.
| https://en.wikipedia.org/wiki/Research_Defence_Society