Dataset columns: id (int64, 39 to 79M); url (string, lengths 32 to 168); text (string, lengths 7 to 145k); source (string, lengths 2 to 105); categories (list, lengths 1 to 6); token_count (int64, 3 to 32.2k); subcategories (list, lengths 0 to 27).
24,919,125
https://en.wikipedia.org/wiki/Fosbretabulin
Fosbretabulin (also known as combretastatin A-4 phosphate or CA4P) is an experimental microtubule-destabilizing drug and a type of vascular-targeting agent, i.e. a drug designed to damage the vasculature (blood vessels) of cancer tumours and cause central necrosis. It is a derivative of combretastatin and is formulated as the salts fosbretabulin disodium and fosbretabulin tromethamine. Fosbretabulin is a prodrug: in vivo, it is dephosphorylated to its active metabolite, combretastatin A-4. In July 2007, the pharmaceutical company OXiGENE initiated a 180-patient phase III clinical trial of fosbretabulin in combination with carboplatin for the treatment of anaplastic thyroid cancer, a form of cancer for which there is currently no fully FDA-approved treatment. By 2017, fosbretabulin had completed multiple clinical trials (e.g. for solid tumours and non-small cell lung cancer), with more in progress. See also Combretastatin, e.g. for the chemical synthesis References Prodrugs Experimental cancer drugs Phosphate esters
Fosbretabulin
[ "Chemistry" ]
259
[ "Chemicals in medicine", "Prodrugs" ]
24,925,594
https://en.wikipedia.org/wiki/Fermi%E2%80%93Pustyl%27nikov%20model
The Fermi–Pustyl'nikov model, named after Enrico Fermi and Lev Pustyl'nikov, is a model of the Fermi acceleration mechanism. A point mass falls vertically with constant acceleration onto an infinitely heavy horizontal wall, which moves vertically according to an analytic law that is periodic in time. The point interacts with the wall through elastic collisions. For this model it was proved that, under some general conditions, the velocity and energy of the point at the moments of collision with the wall tend to infinity for an open set of initial data of infinite Lebesgue measure. The model was introduced in 1968 and studied by L. D. Pustyl'nikov in connection with the justification of the Fermi acceleration mechanism (see also the references therein). References Dynamical systems
Fermi–Pustyl'nikov model
[ "Physics", "Mathematics" ]
172
[ "Mechanics", "Dynamical systems" ]
24,926,819
https://en.wikipedia.org/wiki/Replication%20terminator%20Tus%20family
Tus, also known as terminus utilization substance, is a protein that binds to terminator sequences and acts as a counter-helicase when it comes into contact with an advancing helicase. The bound Tus protein effectively halts DNA polymerase movement. Tus helps end DNA replication in prokaryotes. In E. coli, Tus binds to 10 closely related sites encoded in the chromosome, although only 6 are likely to be involved in replication termination. Each site is 23 base pairs long. The sites are called Ter sites, and are designated TerA, TerB, ..., TerG. These binding sites are asymmetric, such that when a Tus-Ter complex (Tus protein bound to a Ter site) is encountered by a replication fork from one direction, the complex is dissociated and replication continues (permissive). But when encountered from the other direction, the Tus-Ter complex provides a much larger kinetic barrier and halts replication (non-permissive). The multiple Ter sites in the chromosome are oriented such that the two oppositely moving replication forks are both stalled in the desired termination region. Bacillus subtilis utilizes replication terminator protein (RTP) instead of Tus. Protein domains The Tus protein contains two domains. The N-terminal domain is composed of alpha helices, a beta sheet, and three loops. The C-terminal domain is made of two alpha helices and one beta sheet. Further reading "Interaction of the Escherichia coli replication terminator protein (Tus) with DNA: a model derived from DNA-binding studies of mutant proteins by surface plasmon resonance." "Replication termination in Escherichia coli: structure and antihelicase activity of the Tus-Ter complex." "A molecular mousetrap determines polarity of termination of DNA replication in E. coli." "Isolation and characterization of mutants of Tus, the replication arrest protein of Escherichia coli." "Biophysical characteristics of Tus, the replication arrest protein of Escherichia coli." "Structure of a replication-terminator protein complexed with DNA." Structure at protein data bank References DNA replication
Replication terminator Tus family
[ "Biology" ]
455
[ "Genetics techniques", "Protein classification", "DNA replication", "Molecular genetics", "Protein families" ]
24,927,248
https://en.wikipedia.org/wiki/Liquid%20resistor
A liquid resistor is an electrical resistor in which the resistive element is a solution. Fixed-value liquid resistors are typically used where very high power dissipation is required. They are used in the rotor circuits of large slip ring induction motors to control starting current and torque and to limit large electrical fault currents (while other protection systems operate to clear or isolate the fault). They typically have electrodes made of welded steel plate (galvanised to reduce corrosion), suspended by insulated connections in a conductive chemical solution held in a tank, which may be open or enclosed. The tank body is normally solidly grounded or earthed. A typical unit can be rated for continuous use, or for short periods when used for current limitation in protection systems. Liquid neutral earthing resistor A common use in the electrical power generating and distribution industry is as a fault current limiter in the common neutral leg of large three-phase transformers and generators. In the UK they are known as Liquid Neutral Earthing Resistors (LNERs). A rating of 0.5 megawatt for 30 seconds would not be unusual to protect the winding of a 660 MW generator. In this application, where a fault causes a current imbalance between the generator or transformer windings and a current flows in the neutral leg, that current is limited in magnitude by the resistor. A permanently installed current transformer (CT) fixed around the neutral leg feeder to the LNER senses the current. The CT sends a signal current to an external sensing circuit (the protection system) which then sends a signal to the relevant circuit breaker to open and disconnect the supply or isolate the generator (or transformer) from the fault. The ohmic value of the resistor is calculated as a function of the permissible fault current and the system voltage to earth. Electrolyte in power industry LNERs The electrolyte is normally sodium carbonate (soda ash) and/or sodium bicarbonate (baking soda). This salt solution is alkaline, and electrode corrosion effects on galvanised steel tanks are manageable if re-galvanisation is possible. The resistance of the bulk electrolyte, Rθ, is a function of the temperature θ (in degrees Celsius) and of the concentration of the salt, relative to its value R0 at a standardised or datum calibration temperature. A specialised technique is required to accurately calibrate an LNER electrolyte bulk resistance value and it is normally carried out by experienced power engineers and technicians. This task must be done off-load at routine intervals (roughly 2–4 years depending on results from routine monitoring) to maintain the resistance value within tolerance and facilitate inspection of the LNER. See also Liquid rheostat References Electric power Resistive components
Liquid resistor
[ "Physics", "Engineering" ]
572
[ "Physical quantities", "Resistive components", "Power (physics)", "Electric power", "Electrical engineering", "Electrical resistance and conductance" ]
24,927,616
https://en.wikipedia.org/wiki/Disjoining%20pressure
In surface chemistry, disjoining pressure (symbol Π) according to an IUPAC definition arises from an attractive interaction between two surfaces. For two flat and parallel surfaces, the value of the disjoining pressure (i.e., the force per unit area) can be calculated as the derivative of the Gibbs energy of interaction per unit area with respect to distance (in the direction normal to that of the interacting surfaces). There is also a related concept of disjoining force, which can be viewed as disjoining pressure times the surface area of the interacting surfaces. The concept of disjoining pressure was introduced by Derjaguin (1936) as the difference between the pressure in a region of a phase adjacent to a surface confining it, and the pressure in the bulk of this phase. Description Disjoining pressure can be expressed as Π = −(1/A)(∂G/∂x)_{T,V,A}, where (in SI units): Π is the disjoining pressure (N/m2), A is the surface area of the interacting surfaces (m2), G is the total Gibbs energy of the interaction of the two surfaces (J), and x is the distance (m); the indices T, V, and A signify that the temperature, volume, and the surface area remain constant in the derivative. Using the concept of the disjoining pressure, the pressure in a film can be viewed as P = P0 + Π, where: P is the pressure in the film (Pa) and P0 is the pressure in the bulk of the same phase as that of the film (Pa). Disjoining pressure is interpreted as a sum of several interactions: dispersion forces, electrostatic forces between charged surfaces, interactions due to layers of neutral molecules adsorbed on the two surfaces, and the structural effects of the solvent. Classic theory predicts the disjoining pressure of a thin liquid film on a flat surface as Π(h) = −A_H/(6πh³), where: A_H is the Hamaker constant (J) and h is the liquid film thickness (m). For a solid-liquid-vapor system where the solid surface is structured, the disjoining pressure is also affected by the solid surface profile and by the meniscus shape, and it depends on the solid-liquid potential (J/m6). The meniscus shape can be obtained by minimization of the total system free energy (the surface excess energy plus the free energy due to solid-liquid interactions, J/m2) with respect to the meniscus shape (m) and its slope (dimensionless). In the theory of liquid drops and films, the disjoining pressure can be shown to be related to the equilibrium liquid-solid contact angle θe through the relation cos θe = 1 + (1/γ) ∫_{h0}^{∞} Π(h) dh, where γ is the liquid-vapor surface tension and h0 is the precursor film thickness. See also Capillary condensation Capillary pressure Hamaker constant Thin-film equation References Surface science
Disjoining pressure
[ "Physics", "Chemistry", "Materials_science" ]
566
[ "Condensed matter physics", "Surface science" ]
24,929,135
https://en.wikipedia.org/wiki/Chained%20volume%20series
A chained volume series is a series of economic data (such as GDP, GNP or similar kinds of data) from successive years, put in real (or constant, i.e. inflation- and deflation-adjusted) terms by computing the aggregate value of the measure (e.g. GDP or GNP) for each year using the prices of the preceding year, and then 'chain linking' the data together to obtain a time-series of figures from which the effects of price changes (i.e., monetary inflation or deflation) have, at least in theory, been removed. In other words, from the raw (i.e. nominal) GDP or GNP data, which reflect changes in both production volume and prices, a series is obtained where the changes between years reflect only changes in production volume (and not changes in price). The year-by-year chain linking method differs from some other techniques for compensating for monetary inflation and deflation that are used in economics, such as the consumer price index. The consumer price index uses the observed price of a set 'market basket' of goods and services in any two given years to determine the relative prices in those two years; it does not rely on a cumulative accounting of changes in the intervening years. The consumer price index is thus an example of a fixed-weight compensation method; fixed weight methods relate prices in all years to some single base year. The problem is that the compensation factor derived from any index depends on the weights given to the various items in the market-basket — or the proportions of each item in whatever aggregate amount is being looked at — and the weights that were correct for one time may not be correct for other times. For example, in 1850 the price of horse-fodder would have been an important component of overall price levels, but now it is not. If one is comparing 1850 price levels to present ones, then, the question arises, what weight to give to horse-fodder? It is difficult to know. The chain linking method attempts to avoid this conundrum by never making large leaps in time. The United Kingdom presently uses chain linking to put its national accounts aggregates (e.g., GDP, GNP) in constant-price terms. From the GDP figures thus obtained can be derived an implicit GDP deflator which gives a good indication of inflation or deflation in the economy as a whole. The United States switched to using chained volume series in 1996 as its featured method of putting GDP in constant-price terms. Before that it had used the Laspeyres index, a fixed-weight method. Notes National accounts Index numbers
Chained volume series
[ "Mathematics" ]
545
[ "Index numbers", "Mathematical objects", "Numbers" ]
40,368,781
https://en.wikipedia.org/wiki/Ancyronyx
Ancyronyx, commonly known as spider water beetles or spider riffle beetles, is a genus of aquatic riffle beetles from North America, South Asia, China, and Southeast Asia. They are small beetles with extremely long legs ending in strong claws. Both the adults and the larvae are found underwater in the shallow riffles of streams and rivers, clinging to rocks or submerged wood. They feed on algae and decaying wood tissue. The genus contains twenty-one species, eleven of which are endemic to the Philippines. Taxonomy The genus Ancyronyx was established in 1847 by the German entomologist Wilhelm Ferdinand Erichson based on the type species Macronychus variegatus, first described in 1824 by the German coleopterologist Ernst Friedrich Germar. The genus was regarded as monotypic until the French entomologist Antoine Henri Grouvelle described a second species, A. acaroides, in 1896. It is the sole member of the tribe Ancyronychini, and is classified under the subfamily Elminae of the riffle beetle family, Elmidae. Description Members of Ancyronyx superficially resemble spiders and are aquatic, hence their common name, "spider water beetles". They are typically very small. They are characterized by extremely long legs (longer than the body length). The legs have widely separated coxae, with the procoxae (coxae attached to the prothorax) usually visible dorsally. The legs are tipped with large claws, each with two or three basal teeth. The distal teeth are the largest. The antennae are typically 11-segmented. Most, though not all, of the species possess brightly colored patterns on their elytra. The elytra also possess eight to eleven grooves (elytral striae), as well as small depressions (elytral punctures) of varying depth and number. The pronotum possesses a transverse groove and a more or less straight front margin, with pronotal carinae absent or weakly present. Spider water beetles can be divided into two species groups, based on morphological and ecological adaptation patterns. The Ancyronyx variegatus species group are larger in size, with very long legs, stout coxites on the ovipositor, and a transverse prosternal process. Their larvae are also larger, depressed in cross-section, and possess large side-pointing projections on the sides of the abdomen. The Ancyronyx patrolus species group have small and slender bodies, with comparatively shorter legs, long and slender coxites on the ovipositor, and a squarish prosternal process. Their larvae are smaller, with a more vaulted cross-section, and backwards-pointing projections from the sides of the abdomen. Ancyronyx is closely related to the genus Podelmis, but can be distinguished from the latter by the more or less straight and slender last segment of the ovipositor (versus the conical sideways-bent terminal segment of the ovipositor of Podelmis), and the absence of an anterior process on the prosternum. Ecology Like almost all riffle beetles, spider water beetles are aquatic, feeding on algae and decaying wood tissue. However, they cannot actively swim. They can be found crawling along or clinging with their claws on boulders or submerged wood in lotic riffles of streams and rivers. The larvae are exclusively aquatic. They breathe by means of tracheal gills. Spider water beetle adults, like all members of the subfamily Elminae, can also remain indefinitely underwater by means of a plastron, a thin film of gas trapped by hydrophobic bristles (setae) on their body. 
As the insect breathes, the oxygen concentration in the gas film drops in comparison to the surrounding water, causing fresh oxygen to diffuse into the plastron from the water. Because of their reliance on the plastron for breathing, spider water beetles are restricted to highly oxygenated environments in moderate to fast-moving permanent running water. They are therefore extremely sensitive to water pollution and are potentially valuable bioindicators for measuring the health of river ecosystems. Members of the Ancyronyx variegatus species group are mostly found in slightly to moderately polluted (mesosaprobic) rivers, almost always on submerged wood (with the exception of Ancyronyx yunju, which was collected from sand and grass roots). Members of the Ancyronyx patrolus species group, meanwhile, are only found in clean permanent streams, usually among rocks. Ancyronyx malickyi has been caught using light traps, which might indicate phototaxis. Distribution The genus was previously only known from two species from highly disjunct localities – Ancyronyx variegatus from North America (described in 1824) and Ancyronyx acaroides from Palembang in Sumatra (described in 1896). This strange distribution pattern (and the fact that no new specimens of A. acaroides had been recovered) initially led 20th century specialists to question whether A. acaroides was indeed collected from Sumatra, or whether it had been correctly assigned to the genus as originally described. However, in 1991, new specimens of A. acaroides were rediscovered from Sumatra by the Austrian coleopterologists Manfred A. Jäch and S. Schödl, confirming its type locality. In addition, during 1992 and 1993 they discovered the species elsewhere in Southeast Asia, including West Malaysia, Sarawak and Bali. The species was also subsequently found in Thailand, Indonesia and the Philippines. Since then, nineteen new species of the genus have been described from Southeast Asia and China. Species There are twenty-one species currently classified under Ancyronyx. Eleven of these are endemic to the Philippines, which may indicate that the country is a center of diversity for the genus. Most of the species have highly restricted distributions, often being found on only a single island. 
Ancyronyx acaroides Grouvelle, 1896 – Southeast Asia Ancyronyx acaroides acaroides Grouvelle, 1896 – Myanmar, Vietnam, Malaysia (peninsular Malaysia, Borneo), Brunei, Indonesia (Sumatra, Java) Ancyronyx acaroides cursor Jäch, 1994 – Indonesia (Bali) Ancyronyx buhid Freitag, 2013 – Philippines (Mindoro) Ancyronyx helgeschneideri Freitag & Jäch, 2007 – Philippines (Palawan, Busuanga) Ancyronyx hjarnei Jäch, 2003 – Indonesia (Sulawesi) Ancyronyx jaechi Freitag, 2012 – Sri Lanka Ancyronyx johanni Jäch, 1994 – Indonesia (Siberut) Ancyronyx malickyi Jäch, 1994 – Laos, southern Thailand, Malaysia (peninsular Malaysia, Borneo), Indonesia (Sumatra) Ancyronyx minerva Freitag & Jäch, 2007 – Philippines (Palawan, Busuanga, Mindoro) Ancyronyx minutulus Freitag & Jäch, 2007 – Philippines (Palawan) Ancyronyx montanus Freitag & Balke, 2011 – Philippines (Palawan) Ancyronyx patrolus Freitag & Jäch, 2007 – Philippines (Palawan, Busuanga) Ancyronyx procerus Jäch, 1994 – Vietnam, Malaysia (Borneo), Brunei, Philippines (Busuanga) Ancyronyx pseudopatrolus Freitag & Jäch, 2007 – Philippines (Palawan) Ancyronyx punkti Freitag & Jäch, 2007 – Philippines (Palawan) Ancyronyx raffaelacatharina Jäch, 2004 – Indonesia (Sulawesi) Ancyronyx sarawacensis Jäch, 1994 – Malaysia (Borneo) Ancyronyx schillhammeri Jäch, 1994 – Philippines (Mindoro) Ancyronyx sophiemarie Jäch, 2004 – Philippines (Sibuyan) Ancyronyx tamaraw Freitag, 2013 – Philippines (Mindoro) Ancyronyx variegatus (Germar, 1824) – United States of America Ancyronyx yunju Bian, Guo, & Ji, 2012 – China (Jiangxi) References External links Description at SringerImages Classification at Animal Diversity Web Discover Life Taxonomic profile of Ancyronyx variegatus at ITIS Report Taxonomy at BioLib Taxonomy at the Taxonomicon Information at Bug Guide Encyclopedia of Life Byrrhoidea genera Bioindicators Beetles of North America Beetles of Asia
Ancyronyx
[ "Chemistry", "Environmental_science" ]
1,767
[ "Bioindicators", "Environmental chemistry" ]
40,370,137
https://en.wikipedia.org/wiki/International%20Oil%20and%20Gas%20University
Yagshygeldi Kakayev International Oil and Gas University () is a university located in Ashgabat and the main university of the Turkmenistan oil and gas community. It was founded on May 25, 2012 as the Turkmen State Institute of Oil and Gas and became an international university on August 10, 2013. A branch of the university operates in Balkanabat. History The university was created in order to improve the diversification of exports of Turkmen mineral resources to the world market and to implement high-quality development programmes for the oil and gas industry. Decree № PP-6081 on the establishment of the Turkmen State Institute of Oil and Gas was signed by the President of Turkmenistan, and the institute was placed under the supervision of the Ministry of Education of Turkmenistan. On August 10, 2013, "in order to radically improve the training of highly qualified specialists for the oil and gas industry", it was renamed the International University of Oil and Gas. On February 12, 2019, by a resolution of the Assembly of Turkmenistan, the International University of Oil and Gas was named after the political and public figure Yagshygeldi Kakayev. As of June 2022, the school's rector and administrative leader is Atamanov Bayrammyrat Yaylymovich. Education The institute offers about twenty specialties in eight areas: geology, exploration and mining, chemical engineering, computer technology, construction, architecture, manufacturing machinery and equipment, energy, economics and management in industry, and management. The institute has seven faculties and 27 departments. Courses are taught by some 250 teachers, including six doctors of sciences (five of them professors) and 33 candidates of sciences (14 of them professors). Faculties Geology Exploration and development of mineral resources Chemical Engineering Computer Technology Engineering and Architecture Technological machinery and equipment Energy Economics and management in the industry Management Campus The university building was built in the southern part of Ashgabat, where a new business and cultural centre of the Turkmen capital is being developed. The 17-storey office building was constructed by the Turkish company Renaissance; the project started in 2010. The opening ceremony of the buildings took place on September 1, 2012 with the participation of President of Turkmenistan Gurbanguly Berdimuhamedow. The building symbolically resembles an oil rig. The complex of university buildings covers an area of 30 hectares and consists of a main 18-storey building and five academic buildings; its 86 classrooms can accommodate up to 3,000 students at a time. The institute houses assembly and meeting rooms, a museum, an archive, a library (with 250 seats) with reading rooms equipped with multimedia equipment, a Center for Information Technology, a cafe, a clinic, and grocery and department stores. Classrooms and laboratories are equipped with modern equipment. The university museum maintains an archival collection documenting oil and gas production, oil and gas development, and the national economy of Turkmenistan. In 2012 the building was recognized as the best building of the CIS by the International Union of Architects' Association of the CIS. Dormitories Six dormitories were constructed, one for each faculty, each designed for 230 residents. Rooms are doubles, and the kitchens are equipped with household appliances. Sport complex The university operates an indoor sports complex with a boxing ring, gym and swimming pool. The multi-purpose sports hall has fields for football, basketball, volleyball, tennis and other sports. A gymnasium is located separately, and showers are provided. Classes are also held in the open air on a sports field with natural grass. References Buildings and structures in Ashgabat Universities in Turkmenistan Universities and colleges established in 2012 Petroleum engineering schools 2012 establishments in Turkmenistan
International Oil and Gas University
[ "Engineering" ]
715
[ "Petroleum engineering", "Petroleum engineering schools", "Engineering universities and colleges" ]
40,378,214
https://en.wikipedia.org/wiki/DMol3
DMol3 is a commercial (and academic) software package which uses density functional theory with a numerical radial function basis set to calculate the electronic properties of molecules, clusters, surfaces and crystalline solid materials from first principles. DMol3 can use either gas-phase boundary conditions or 3D periodic boundary conditions for solids or simulations of lower-dimensional periodicity. It also pioneered the use of the conductor-like screening model (COSMO) for quantum simulations of solvated molecules and, more recently, of wetted surfaces. DMol3 permits geometry optimisation and saddle point searches with and without geometry constraints, as well as calculation of a variety of derived properties of the electronic configuration. DMol3 development started in the early eighties with B. Delley, then associated with A. J. Freeman and D. E. Ellis at Northwestern University. In 1989 it appeared as DMol, the first commercial density functional package for industrial use, marketed by Biosym Technologies (now Accelrys). Delley's 1990 publication has been cited more than 3000 times. See also Quantum chemistry computer programs External links DMol3 datasheet developers page Materials Studio References Computational chemistry software Density functional theory software Physics software
DMol3
[ "Physics", "Chemistry" ]
258
[ "Quantum chemistry stubs", "Quantum chemistry", "Computational chemistry software", "Chemistry software", "Theoretical chemistry stubs", "Computational physics", "Computational chemistry", "Density functional theory software", "Physical chemistry stubs", "Physics software" ]
40,378,553
https://en.wikipedia.org/wiki/Story-driven%20modeling
Story-driven modeling is an object-oriented modeling technique. Other forms of object-oriented modeling focus on class diagrams. Class diagrams describe the static structure of a program, i.e. the building blocks of a program and how they relate to each other. Class diagrams also model data structures, but with an emphasis on rather abstract concepts like types and type features. Instead of abstract static structures, story-driven modeling focuses on concrete example scenarios and on how the steps of the example scenarios may be represented as object diagrams and how these object diagrams evolve during scenario execution. Software development approach Story-driven modeling proposes the following software development approach: Textual scenarios: For the feature you want to implement, develop a textual scenario description for the most common case. Look at only one example at a time. Try to use specific terms and individual names instead of general terms such as role names: Scenario Go-Dutch barbecue Start: This Sunday Peter, Putri, and Peng meet at the park for a go-Dutch barbecue. They use the Group Account app to do the accounting. Step 1: Peter brings the meat for $12. Peter adds this item to the Group Account app. Step 2: Putri brings salad for $9. Peter adds this item, too. The app shows that by now the average share is $7 and that Peng still has to bring these $7 while Peter gets $5 out and Putri gets $2 out. Step 3: ... GUI mock-ups: To illustrate the graphical user interface (GUI) for the desired feature, you may add some wireframe models or GUI mock-ups to your scenario: Scenario Go-Dutch barbecue Start: This Sunday Peter, Putri, and Peng meet at the park for a go-Dutch barbecue. They use the Group Account app to do the accounting. Step 1: Peter brings the meat for $12. Peter adds this item to the Group Account app. Step 2: Putri brings salad for $9. Peter adds this item, too. The app shows that by now the average share is $7 and that Peng still has to bring these $7 while Peter gets $5 out and Putri gets $2 out: Step 3: ... Storyboarding: Next, you think about how a certain situation, i.e. a certain step of a scenario, may be represented within a computer by a runtime object structure. This is done by adding object diagrams to the scenario. In story-driven modeling, a scenario with object diagrams is also called a storyboard. Scenario Go-Dutch barbecue Start: This Sunday Peter, Putri, and Peng meet at the park for a go-Dutch barbecue. They use the Group Account app to do the accounting. Step 1: Peter brings the meat for $12. Peter adds this item to the Group Account app. Step 2: Putri brings salad for $9. Peter adds this item, too. The app shows that by now the average share is $7 and that Peng still has to bring these $7 while Peter gets $5 out and Putri gets $2 out: Step 3: ... Class diagram derivation: Now it is fairly straightforward to derive a class diagram from the object diagrams used in the storyboards. Note that the class diagram serves as a common reference for all object diagrams. This ensures that the same types and attributes are used throughout. Using a UML tool, you may generate a first implementation from this class diagram. Algorithm design: So far you have modeled and implemented the object structures that are deployed in your application. Now you need to add behavior, i.e. algorithms and method bodies. Programming the behavior of an application is a demanding task. To facilitate it, you should first outline the behavior in pseudocode notation. 
You might do this, e.g. with an object game. For example, to update the saldo attributes of all persons, you look at the object structure and, from the point of view of the GroupAccount object, do the following: Update the saldo of all persons: visit each item: for each item, add the value to the total value and add 1 to the number of items; compute the average share of each person by dividing the total value by the number of persons; visit each person: for each person, reset the saldo; for each person, visit each item bought by this person: for each item, add the value to the saldo of the current person; for each person, subtract the share from the saldo. Behavior implementation: Once you have refined your algorithm pseudocode down to the level of operations on object structures, it is straightforward to derive source code that executes the same operations on your object model implementation. Testing: Finally, the scenarios may be used to derive automatic JUnit tests. The pseudocode for a test for our example might look like: Test update the saldo of all persons: create a group account object; add a person object with name Peter, a person object with name Putri, and a person object with name Peng to the group account object; add an item object with buyer Peter, description Meat, and value $12 to the group account object; add an item object with buyer Putri, description Salad, and value $9 to the group account object; call the method update the saldo of all persons on the group account object; ensure that the saldo of the Peter object is $5; ensure that the saldo of the Putri object is $2; ensure that the saldo of the Peng object is -$7; ensure that the sum of all saldos is $0. Such automatic tests ensure that, in the example situation, the behavior implementation actually does what is outlined in the storyboard. While these tests are fairly simple and may not identify all kinds of bugs, they are very useful for documenting the desired behavior and the usage of the new features, and they ensure that the corresponding functionality is not lost through future changes. Summary Story-driven modeling has proven to work very well for cooperation with non-IT experts. People from other domains usually have difficulty describing their needs in general terms (i.e. classes) and general rules (pseudocode). Similarly, they often have trouble understanding pseudocode or judging whether their needs are properly addressed. However, these people know their business very well, and with the help of concrete examples and scenarios it is easy for them to spot problematic cases and to judge whether their needs have been addressed properly. Story-driven modeling has matured since its beginning in 1997. As of 2013 it is used for teaching at, for example, Kassel University, Paderborn University, Tartu University, Antwerp University, Nazarbayev University Astana, the Hasso Plattner Institute Potsdam, and the University of Victoria. See also Agile modeling Entity–control–boundary Agile software development Class-responsibility-collaboration card Object-oriented analysis and design Object-oriented modeling Test-driven development Unified Modeling Language References Object-oriented programming Software design
Story-driven modeling
[ "Engineering" ]
1,405
[ "Design", "Software design" ]
40,379,651
https://en.wikipedia.org/wiki/IBM
International Business Machines Corporation (using the trademark IBM), nicknamed Big Blue, is an American multinational technology company headquartered in Armonk, New York, and present in over 175 countries. It is a publicly traded company and one of the 30 companies in the Dow Jones Industrial Average. IBM is the largest industrial research organization in the world, with 19 research facilities across a dozen countries, having held the record for most annual U.S. patents generated by a business for 29 consecutive years from 1993 to 2021. IBM was founded in 1911 as the Computing-Tabulating-Recording Company (CTR), a holding company of manufacturers of record-keeping and measuring systems. It was renamed "International Business Machines" in 1924 and soon became the leading manufacturer of punch-card tabulating systems. During the 1960s and 1970s, the IBM mainframe, exemplified by the System/360, was the world's dominant computing platform, with the company producing 80 percent of computers in the U.S. and 70 percent of computers worldwide. IBM debuted in the microcomputer market in 1981 with the IBM Personal Computer (its DOS software provided by Microsoft), which became the basis for the majority of personal computers to the present day. The company later also found success in the portable space with the ThinkPad. Since the 1990s, IBM has concentrated on computer services, software, supercomputers, and scientific research; it sold its microcomputer division to Lenovo in 2005. IBM continues to develop mainframes, and its supercomputers have consistently ranked among the most powerful in the world in the 21st century. As one of the world's oldest and largest technology companies, IBM has been responsible for several technological innovations, including the Automated Teller Machine (ATM), Dynamic Random-Access Memory (DRAM), the floppy disk, the hard disk drive, the magnetic stripe card, the relational database, the SQL programming language, and the Universal Product Code (UPC) barcode. The company has made inroads in advanced computer chips, quantum computing, artificial intelligence, and data infrastructure. IBM employees and alumni have won various recognitions for their scientific research and inventions, including six Nobel Prizes and six Turing Awards. History 1910s–1950s IBM originated with several technological innovations developed and commercialized in the late 19th century. Julius E. Pitrap patented the computing scale in 1885; Alexander Dey invented the dial recorder (1888); Herman Hollerith patented the Electric Tabulating Machine (1889); and Willard Bundy invented a time clock to record workers' arrival and departure times on a paper tape (1889). On June 16, 1911, their four companies were amalgamated in New York State by Charles Ranlett Flint, forming a fifth company, the Computing-Tabulating-Recording Company (CTR), based in Endicott, New York. The five companies had 1,300 employees and offices and plants in Endicott and Binghamton, New York; Dayton, Ohio; Detroit, Michigan; Washington, D.C.; and Toronto, Canada. Collectively, the companies manufactured a wide array of machinery for sale and lease, ranging from commercial scales and industrial time recorders, meat and cheese slicers, to tabulators and punched cards. Thomas J. Watson, Sr., fired from the National Cash Register Company by John Henry Patterson, called on Flint and, in 1914, was offered a position at CTR. 
Watson joined CTR as general manager and then, 11 months later, was made President when antitrust cases relating to his time at NCR were resolved. Having learned Patterson's pioneering business practices, Watson proceeded to put the stamp of NCR onto CTR's companies. He implemented sales conventions, "generous sales incentives, a focus on customer service, an insistence on well-groomed, dark-suited salesmen and had an evangelical fervor for instilling company pride and loyalty in every worker". His favorite slogan, "THINK", became a mantra for each company's employees. During Watson's first four years, revenues reached $9 million and the company's operations expanded to Europe, South America, Asia and Australia. Watson never liked the clumsy hyphenated name "Computing-Tabulating-Recording Company" and chose to replace it with the more expansive title "International Business Machines", which had previously been used as the name of CTR's Canadian Division; the name was changed on February 14, 1924. By 1933, most of the subsidiaries had been merged into one company, IBM. The Nazis made extensive use of Hollerith punch card and alphabetical accounting equipment, and IBM's majority-owned German subsidiary, Deutsche Hollerith Maschinen GmbH (Dehomag), supplied this equipment from the early 1930s. This equipment was critical to Nazi efforts to categorize citizens of both Germany and other nations that fell under Nazi control through ongoing censuses. These census data were used to facilitate the round-up of Jews and other targeted groups, and to catalog their movements through the machinery of the Holocaust, including internment in the concentration camps. Nazi concentration camps operated a Hollerith department called Hollerith Abteilung, which had IBM machines, including calculating and sorting machines. As a military contractor, IBM produced 6% of the M1 Carbine rifles used in World War II, about 346,500 of them, between August 1943 and May 1944. IBM built the Automatic Sequence Controlled Calculator, an electromechanical computer, during World War II. It offered its first commercial stored-program computer, the vacuum-tube-based IBM 701, in 1952. The IBM 305 RAMAC introduced the hard disk drive in 1956. The company switched to transistorized designs with the 7000 and 1400 series, beginning in 1958. IBM regarded the 1400 series as the "Model T" of computing, as it was the company's first computer line to achieve more than ten thousand sales. In 1956, the company demonstrated the first practical example of artificial intelligence when Arthur L. Samuel of IBM's Poughkeepsie, New York, laboratory programmed an IBM 704 not merely to play checkers but to "learn" from its own experience. In 1957, the FORTRAN scientific programming language was developed. 1960s–1980s In 1961, IBM developed the SABRE reservation system for American Airlines and introduced the highly successful Selectric typewriter. Also in 1961, IBM used the IBM 7094 to generate the first song sung completely by a computer, using speech synthesis; the song was "Daisy Bell" ("Bicycle Built for Two"). In 1963, IBM employees and computers helped NASA track the orbital flights of the Mercury astronauts. A year later, it moved its corporate headquarters from New York City to Armonk, New York. The latter half of the 1960s saw IBM continue its support of space exploration, participating in the 1965 Gemini flights, 1966 Saturn flights, and 1969 lunar mission. 
IBM also developed and manufactured the Saturn V's Instrument Unit and Apollo spacecraft guidance computers. On April 7, 1964, IBM launched the first computer system family, the IBM System/360. It spanned the complete range of commercial and scientific applications from large to small, allowing companies for the first time to upgrade to models with greater computing capability without having to rewrite their applications. It was followed by the IBM System/370 in 1970. Together the 360 and 370 made the IBM mainframe the dominant mainframe computer and the dominant computing platform in the industry throughout this period and into the early 1980s. They and the operating systems that ran on them such as OS/VS1 and MVS, and the middleware built on top of those such as the CICS transaction processing monitor, had a near-monopoly-level market share and became the thing IBM was most known for during this period. In 1969, the United States of America alleged that IBM violated the Sherman Antitrust Act by monopolizing or attempting to monopolize the general-purpose electronic digital computer system market, specifically computers designed primarily for business, and subsequently alleged that IBM violated the antitrust laws in IBM's actions directed against leasing companies and plug-compatible peripheral manufacturers. Shortly after, IBM unbundled its software and services in what many observers believed was a direct result of the lawsuit, creating a competitive market for software. In 1982, the Department of Justice dropped the case as "without merit". Also in 1969, IBM engineer Forrest Parry invented the magnetic stripe card that would become ubiquitous for credit/debit/ATM cards, driver's licenses, rapid transit cards and a multitude of other identity and access control applications. IBM pioneered the manufacture of these cards, and for most of the 1970s, the data processing systems and software for such applications ran exclusively on IBM computers. In 1974, IBM engineer George J. Laurer developed the Universal Product Code. IBM and the World Bank first introduced financial swaps to the public in 1981, when they entered into a swap agreement. IBM entered the microcomputer market in the 1980s with the IBM Personal Computer (IBM 5150). The computer, which spawned a long line of successors, had a profound influence on the development of the personal computer market and became one of IBM's best selling products of all time. Due to a lack of foresight by IBM, the PC was not well protected by intellectual property laws. As a consequence, IBM quickly began losing its market dominance to emerging, compatible competitors in the PC market. In 1985, IBM collaborated with Microsoft to develop a new operating system, which was released as OS/2. Following a dispute, Microsoft severed the collaboration and IBM continued development of OS/2 on its own but it failed in the marketplace against Microsoft's Windows during the mid-1990s. 1990s–2000s In 1991 IBM began spinning off its many divisions into autonomous subsidiaries (so-called "Baby Blues") in an attempt to make the company more manageable and to streamline IBM by having other investors finance those companies. These included AdStar, dedicated to disk drives and other data storage products; IBM Application Business Systems, dedicated to mid-range computers; IBM Enterprise Systems, dedicated to mainframes; Pennant Systems, dedicated to mid-range and large printers; Lexmark, dedicated to small printers; and more. 
Lexmark was acquired by Clayton & Dubilier in a leveraged buyout shortly after its formation. In September 1992, IBM completed the spin-off of its various non-mainframe and non-midrange personal computer manufacturing divisions, combining them into an autonomous wholly owned subsidiary known as the IBM Personal Computer Company (IBM PC Co.). This corporate restructuring came after IBM reported a sharp drop in profit margins during the second quarter of fiscal year 1992; market analysts attributed the drop to a fierce price war in the personal computer market over the summer of 1992. The corporate restructuring was one of the largest and most expensive in history up to that point. By the summer of 1993, the IBM PC Co. had divided into multiple business units itself, including Ambra Computer Corporation and the IBM Power Personal Systems Group, the former an attempt to design and market "clone" computers of IBM's own architecture and the latter responsible for IBM's PowerPC-based workstations. The IBM PC Co. introduced the ThinkPad line, which IBM would heavily market and which would eventually become one of the best-selling series of notebook computers. In 1993, IBM posted an $8 billion loss – at the time the biggest in American corporate history. Lou Gerstner was hired as CEO from RJR Nabisco to turn the company around. In 1995, IBM purchased Lotus Software, best known for its Lotus 1-2-3 spreadsheet software. During the decade, IBM was working on a new operating system project, Workplace OS. Despite a large amount of money spent on the project, it was cancelled in 1996. In 1998, IBM merged the enterprise-oriented Personal Systems Group of the IBM PC Co. into IBM's own Global Services personal computer consulting and customer service division. The resulting merged business units then became known simply as IBM Personal Systems Group. A year later, IBM stopped selling its computers at retail outlets after its market share in this sector had fallen considerably behind competitors Compaq and Dell. Immediately afterwards, the IBM PC Co. was dissolved and merged into IBM Personal Systems Group. In 2002 IBM acquired PwC Consulting, the consulting arm of PwC, which was merged into IBM Global Services. On September 14, 2004, LG and IBM announced that their business alliance in the South Korean market would end at the end of that year. Both companies stated that it was unrelated to the charges of bribery earlier that year. Xnote was originally part of the joint venture and was sold by LG in 2012. Continuing a trend started in the 1990s of downsizing its operations and divesting from commodity production, IBM sold all of its personal computer business to Chinese technology company Lenovo in 2005 and, in 2009, acquired software company SPSS Inc. Later in 2009, IBM's Blue Gene supercomputing program was awarded the National Medal of Technology and Innovation by U.S. President Barack Obama. 2010s–present In 2011, IBM gained worldwide attention for its artificial intelligence program Watson, which was exhibited on Jeopardy!, where it won against game-show champions Ken Jennings and Brad Rutter. The company also celebrated its 100th anniversary on June 16 of the same year. In 2012, IBM announced it had agreed to buy Kenexa and Texas Memory Systems, and a year later it also acquired SoftLayer Technologies, a web hosting service, in a deal worth around $2 billion. Also that year, the company designed a video surveillance system for Davao City. 
In 2014 IBM announced it would sell its x86 server division to Lenovo for $2.1 billion, while continuing to offer Power ISA-based servers. Also that year, IBM began announcing several major partnerships with other companies, including Apple Inc., Twitter, Facebook, Tencent, Cisco, Under Armour, Box, Microsoft, VMware, CSC, Macy's, Sesame Workshop, the parent company of Sesame Street, and Salesforce.com. In 2015, its chip division transitioned to a fabless model, retaining semiconductor design while offloading manufacturing to GlobalFoundries. In 2015, IBM announced three major acquisitions: Merge Healthcare for $1 billion, data storage vendor Cleversafe, and all digital assets from The Weather Company, including Weather.com and The Weather Channel mobile app. Also that year, IBM employees created the film A Boy and His Atom, the first film made by moving individual molecules to tell a story. In 2016, IBM acquired video conferencing service Ustream and formed a new cloud video unit. In April 2016, it posted a 14-year low in quarterly sales. The following month, Groupon sued IBM accusing it of patent infringement, two months after IBM accused Groupon of patent infringement in a separate lawsuit. IBM bought the digital part of The Weather Company in 2015 and Truven Health Analytics for $2.6 billion in 2016, and in October 2018 IBM announced its intention to acquire Red Hat for $34 billion, a purchase completed on July 9, 2019. In February 2020, IBM's John Kelly III joined Brad Smith of Microsoft to sign a pledge with the Vatican to ensure the ethical use and practice of Artificial Intelligence (AI). IBM announced in October 2020 that it would divest the Managed Infrastructure Services unit of its Global Technology Services division into a new public company. The new company, Kyndryl, would have 90,000 employees and 4,600 clients in 115 countries, with a backlog of $60 billion. IBM's spin-off was greater than any of its previous divestitures and was welcomed by investors. IBM appointed Martin Schroeter, who had been IBM's CFO from 2014 through the end of 2017, as CEO of Kyndryl. In 2021, IBM announced the acquisition of the enterprise software company Turbonomic for $1.5 billion. In January 2022, IBM announced it would sell Watson Health to private equity firm Francisco Partners. On March 7, 2022, a few days after the start of the Russian invasion of Ukraine, IBM CEO Arvind Krishna posted a Ukrainian flag and announced that "we have suspended all business in Russia". All Russian articles were also removed from the IBM website. On June 7, Krishna announced that IBM would carry out an "orderly wind-down" of its operations in Russia. In late 2022, IBM started a collaboration with new Japanese manufacturer Rapidus, which led GlobalFoundries to file a lawsuit against IBM the following year. In 2023, IBM acquired Manta Software Inc. for an undisclosed amount to complement its data and AI governance capabilities. On November 16, 2023, IBM suspended ads on Twitter after ads were found next to pro-Nazi content. In August 2023, IBM agreed to sell The Weather Company to Francisco Partners for an undisclosed sum. The sale was finalized on February 1, 2024, and the cost was disclosed as $1.1 billion, with $750 million in cash, $100 million deferred over seven years, and $250 million in contingent consideration. In December 2023, IBM announced it would acquire Software AG's StreamSets and webMethods platforms for €2.13 billion ($2.33 billion). 
Corporate affairs Business trends IBM's market capitalization was valued at over $153 billion as of May 2024. Despite its relative decline within the technology sector, IBM remains the seventh-largest technology company by revenue and the 67th-largest company overall by revenue in the United States. IBM ranked No. 38 on the 2020 Fortune 500 rankings of the largest United States corporations by total revenue. In 2014, IBM was accused of using "financial engineering" to hit its quarterly earnings targets rather than investing for the longer term. Board and shareholders The company's 15-member board of directors is responsible for overall corporate management and includes the current or former CEOs of Anthem, Dow Chemical, Johnson and Johnson, Royal Dutch Shell, UPS, and Vanguard as well as the president of Cornell University and a retired U.S. Navy admiral. Vanguard Group is the largest shareholder of IBM and as of March 31, 2023, held 15.7% of total shares outstanding. In 2011, IBM became the first technology company Warren Buffett's holding company Berkshire Hathaway invested in. Initially he bought 64 million shares costing $10.5 billion. Over the years, Buffett increased his IBM holdings, but by the end of 2017 had reduced them by 94.5% to 2.05 million shares; by May 2018, he was completely out of IBM. Headquarters and offices IBM is headquartered in Armonk, New York, a community north of Midtown Manhattan. A nickname for the company is the "Colossus of Armonk". Its principal building, referred to as CHQ, is a glass and stone edifice on a parcel amid a 432-acre former apple orchard the company purchased in the mid-1950s. There are two other IBM buildings within walking distance of CHQ: the North Castle office, which previously served as IBM's headquarters; and the Louis V. Gerstner, Jr., Center for Learning (formerly known as IBM Learning Center (ILC)), a resort hotel and training center, which has 182 guest rooms, 31 meeting rooms, and various amenities. IBM operates in 174 countries, with mobility centers in smaller market areas and major campuses in the larger ones. In New York City, IBM has several offices besides CHQ, including the IBM Watson headquarters at Astor Place in Manhattan. Outside of New York, major campuses in the United States include Austin, Texas; Research Triangle Park (Raleigh-Durham), North Carolina; Rochester, Minnesota; and Silicon Valley, California. IBM's real estate holdings are varied and globally diverse. Towers occupied by IBM include 1250 René-Lévesque (Montreal, Canada) and One Atlantic Center (Atlanta, Georgia, US). In Beijing, China, IBM occupies Pangu Plaza, the city's seventh-tallest building, overlooking Beijing National Stadium ("Bird's Nest"), home to the 2008 Summer Olympics. IBM India Private Limited, the Indian subsidiary of IBM, is headquartered in Bangalore, Karnataka. It has facilities in Coimbatore, Chennai, Kochi, Ahmedabad, Delhi, Kolkata, Mumbai, Pune, Gurugram, Noida, Bhubaneshwar, Surat, Visakhapatnam, Hyderabad, Bangalore and Jamshedpur. 
Other notable buildings include the IBM Rome Software Lab (Rome, Italy), Hursley House (Winchester, UK), 330 North Wabash (Chicago, Illinois, United States), the Cambridge Scientific Center (Cambridge, Massachusetts, United States), the IBM Toronto Software Lab (Toronto, Canada), the IBM Building, Johannesburg (Johannesburg, South Africa), the IBM Building (Seattle) (Seattle, Washington, United States), the IBM Hakozaki Facility (Tokyo, Japan), the IBM Yamato Facility (Yamato, Japan), the IBM Canada Head Office Building (Ontario, Canada) and the Watson IoT Headquarters (Munich, Germany). Defunct IBM campuses include the IBM Somers Office Complex (Somers, New York), Spango Valley (Greenock, Scotland), and Tour Descartes (Paris, France). The company's contributions to industrial architecture and design include works by Marcel Breuer, Eero Saarinen, Ludwig Mies van der Rohe, I.M. Pei and Ricardo Legorreta. Van der Rohe's building in Chicago was recognized with the 1990 Honor Award from the National Building Museum. Products IBM has a large and diverse portfolio of products and services. These offerings fall into the categories of cloud computing, artificial intelligence, commerce, data and analytics, Internet of things (IoT), IT infrastructure, mobile, digital workplace and cybersecurity. Hardware Mainframe computers IBM has sold mainframe computers since 1954, the latest being the IBM z series. The most recent model, the IBM z16, was released in 2022. Microprocessors In 1990, IBM released the Power microprocessors, which were designed into many console gaming systems, including Xbox 360, PlayStation 3, and Nintendo's Wii U. IBM Secure Blue is encryption hardware that can be built into microprocessors, and in 2014, the company revealed TrueNorth, a neuromorphic CMOS integrated circuit, and announced a $3 billion investment over the following five years to design a neural chip that mimics the human brain, with 10 billion neurons and 100 trillion synapses, but that uses just 1 kilowatt of power. In 2016, the company launched all-flash arrays designed for small and midsized companies, which include software for data compression, provisioning, and snapshots across various systems. Quantum Computing In January 2019, IBM introduced its first commercial quantum computer: IBM Q System One. In March 2020, it was announced that IBM would build Europe's first quantum computer in Ehningen, Germany. The center, to be operated by the Fraunhofer Society, was still under construction as of 2023, with cloud access planned in 2024. Software IBM has owned SPSS, a software package used for statistical analysis in the social sciences, since 2009. IBM also owned The Weather Company, which provides weather forecasting and includes weather.com and Weather Underground; it was sold in 2024. Cloud services IBM Cloud includes infrastructure as a service (IaaS), software as a service (SaaS) and platform as a service (PaaS) offered through public, private and hybrid cloud delivery models. For instance, the IBM Bluemix PaaS enables developers to quickly create complex websites on a pay-as-you-go model. IBM SoftLayer is a dedicated server, managed hosting and cloud computing provider, which in 2011 reported hosting more than 81,000 servers for more than 26,000 customers. IBM also provides Cloud Data Encryption Services (ICDES), using cryptographic splitting to secure customer data. 
In May 2022, IBM announced the company had signed a multi-year Strategic Collaboration Agreement with Amazon Web Services to make a wide variety of IBM software available as a service on AWS Marketplace. Additionally, the deal includes both companies making joint investments that make it easier for companies to consume IBM's offerings and integrate them with AWS, including developer training and software development for select markets. Artificial intelligence IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data. Watson made its debut in 2011 on the American game show Jeopardy!, where it competed against champions Ken Jennings and Brad Rutter in a three-game tournament and won. Watson has since been applied to business, healthcare, developers, and universities. For example, IBM has partnered with Memorial Sloan Kettering Cancer Center to assist with considering treatment options for oncology patients and for doing melanoma screenings. Several companies use Watson for call centers, either replacing or assisting customer service agents. IBM also provides infrastructure for the New York City Police Department through its IBM Cognos Analytics to perform data visualizations of CompStat crime data. In June 2020, IBM announced that it was exiting the facial recognition business. In a letter to Congress, IBM's Chief Executive Officer Arvind Krishna told lawmakers, "now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies." In May 2023, IBM revealed Watsonx, a generative AI toolkit that is powered by IBM's own Granite models, with the option to use other publicly available LLMs. Watsonx has multiple services for training and fine-tuning models based on confidential data. A year later, IBM open-sourced Granite code models and put them on Hugging Face for public use. In October 2024, IBM introduced Granite 3.0, an open-source large language model designed for enterprise AI applications. Consulting With 160,000 consultants globally as of 2024, IBM's consulting arm is one of the ten largest consulting companies in the world, with capabilities spanning strategy and management consulting, experience design, technology and systems integration, and operations. IBM's consulting business was valued at $20 billion, as of 2024. Research Research has been part of IBM since its founding, and its organized efforts trace their roots back to 1945, when the Watson Scientific Computing Laboratory was founded at Columbia University in New York City, converting a renovated fraternity house on Manhattan's West Side into IBM's first laboratory. Now, IBM Research constitutes the largest industrial research organization in the world, with 12 labs on 6 continents. IBM Research is headquartered at the Thomas J. Watson Research Center in New York, and facilities include the Almaden lab in California, Austin lab in Texas, Australia lab in Melbourne, Brazil lab in São Paulo and Rio de Janeiro, China lab in Beijing and Shanghai, Ireland lab in Dublin, Haifa lab in Israel, India lab in Delhi and Bangalore, Tokyo lab, Zurich lab and Africa lab in Nairobi. In terms of investment, IBM's R&D expenditure totals several billion dollars each year. In 2012, that expenditure was approximately $6.9 billion.
Recent allocations have included $1 billion to create a business unit for Watson in 2014, and $3 billion to create a next-gen semiconductor along with $4 billion towards growing the company's "strategic imperatives" (cloud, analytics, mobile, security, social) in 2015. IBM has been a leading proponent of the Open Source Initiative, and began supporting Linux in 1998. The company invests billions of dollars in services and software based on Linux through the IBM Linux Technology Center, which includes over 300 Linux kernel developers. IBM has also released code under different open-source licenses, such as the platform-independent software framework Eclipse (worth approximately $40 million at the time of the donation), the three-sentence International Components for Unicode (ICU) license, and the Java-based relational database management system (RDBMS) Apache Derby. IBM's open source involvement has not been trouble-free, however (see SCO v. IBM). Famous inventions and developments by IBM include: the automated teller machine (ATM), Dynamic Random Access Memory (DRAM), the electronic keypunch, the financial swap, the floppy disk, the hard disk drive, the magnetic stripe card, the relational database, RISC, the SABRE airline reservation system, SQL, the Universal Product Code (UPC) bar code, and the virtual machine. Additionally, in 1990 company scientists used a scanning tunneling microscope to arrange 35 individual xenon atoms to spell out the company acronym, marking the first structure assembled one atom at a time. A major part of IBM research is the generation of patents. Since its first patent for a traffic signaling device, IBM has been one of the world's most prolific patent sources. In 2021, the company held the record for most patents generated by a business for 29 consecutive years. Patents As of 2021, IBM holds the record for most annual U.S. patents generated by a business for 29 consecutive years. In 2001, IBM became the first company to generate more than 3,000 patents in one year, beating this record in 2008 with over 4,000 patents. As of 2022, the company held 150,000 patents. IBM has also been criticized as being a patent troll. Brand and reputation IBM is nicknamed Big Blue partly due to its blue logo and color scheme, and also in reference to its former de facto dress code of white shirts with blue suits. The company logo has undergone several changes over the years, with its current "8-bar" logo designed in 1972 by graphic designer Paul Rand. It was a general replacement for a 13-bar logo, since period photocopiers did not render narrow (as opposed to tall) stripes well. Aside from the logo, IBM used Helvetica as a corporate typeface for 50 years, until it was replaced in 2017 by the custom-designed IBM Plex. IBM has a valuable brand as a result of over 100 years of operations and marketing campaigns. Since 1996, IBM has been the exclusive technology partner for the Masters Tournament, one of the four major championships in professional golf, with IBM creating the first Masters.org (1996), the first course cam (1998), the first iPhone app with live streaming (2009), and first-ever live 4K Ultra High Definition feed in the United States for a major sporting event (2016). As a result, IBM CEO Ginni Rometty became the third female member of the Masters' governing body, the Augusta National Golf Club. IBM is also a major sponsor in professional tennis, with engagements at the U.S. Open, Wimbledon, the Australian Open, and the French Open.
The company also sponsored the Olympic Games from 1960 to 2000, and the National Football League from 2003 to 2012. In Japan, IBM employees also have an American football team complete with pro stadium, cheerleaders and televised games, competing in the Japanese X-League as the "Big Blue". Environmental In 2004, concerns were raised about IBM's contribution in its early days to pollution at its original location in Endicott, New York. IBM reported its total CO2e emissions (direct and indirect) for the twelve months ending December 31, 2020, at 621 kilotons (a reduction of 324 kilotons, or 34.3%, year-on-year). In February 2021, IBM committed to achieve net zero greenhouse gas emissions by the year 2030. People and culture Employees It is among the world's largest employers, with over 297,900 employees worldwide in 2022, about 160,000 of them tech consultants. IBM's leadership programs include Extreme Blue, an internship program, and the IBM Fellow award, offered since 1963 based on technical achievement. Notable current and former employees Many IBM employees have achieved notability outside of work and after leaving IBM. In business, former IBM employees include Apple Inc. CEO Tim Cook, former EDS CEO and politician Ross Perot, Microsoft chairman John W. Thompson, SAP co-founder Hasso Plattner, Gartner founder Gideon Gartner, Advanced Micro Devices (AMD) CEO Lisa Su, Cadence Design Systems CEO Anirudh Devgan, former Citizens Financial Group CEO Ellen Alemany, former Yahoo! chairman Alfred Amoroso, former AT&T CEO C. Michael Armstrong, former Xerox Corporation CEOs David T. Kearns and G. Richard Thoman, former Fair Isaac Corporation CEO Mark N. Greene, Citrix Systems co-founder Ed Iacobucci, ASOS.com chairman Brian McBride, former Lenovo CEO Steve Ward, and former Teradata CEO Kenneth Simonds. In government, Patricia Roberts Harris served as United States Secretary of Housing and Urban Development, the first African American woman to serve in the United States Cabinet. Samuel K. Skinner served as U.S. Secretary of Transportation and as the White House Chief of Staff. Alumni also include U.S. Senators Mack Mattingly and Thom Tillis; Wisconsin governor Scott Walker; former U.S. Ambassadors Vincent Obsitnik (Slovakia), Arthur K. Watson (France), and Thomas Watson Jr. (Soviet Union); and former U.S. Representatives Todd Akin, Glenn Andrews, Robert Garcia, Katherine Harris, Amo Houghton, Jim Ross Lightfoot, Thomas J. Manton, Donald W. Riegle Jr., and Ed Zschau. Other former IBM employees include NASA astronaut Michael J. Massimino, Canadian astronaut and former Governor General Julie Payette, noted musician Dave Matthews, Harvey Mudd College president Maria Klawe, Western Governors University president emeritus Robert Mendenhall, former University of Kentucky president Lee T. Todd Jr., former University of Iowa president Bruce Harreld, NFL referee Bill Carollo, former Rangers F.C. chairman John McClelland, and recipient of the Nobel Prize in Literature J. M. Coetzee. Thomas Watson Jr. also served as the 11th national president of the Boy Scouts of America. Five IBM employees have received the Nobel Prize: Leo Esaki, of the Thomas J. Watson Research Center in Yorktown Heights, N.Y., in 1973, for work in semiconductors; Gerd Binnig and Heinrich Rohrer, of the Zurich Research Center, in 1986, for the scanning tunneling microscope; and Georg Bednorz and Alex Müller, also of Zurich, in 1987, for research in superconductivity.
Six IBM employees have won the Turing Award, including the first female recipient, Frances E. Allen. Ten National Medals of Technology (USA) and five National Medals of Science (USA) have been awarded to IBM employees. Workplace culture Employees are often referred to as "IBMers". IBM's culture has evolved significantly over its century of operations. In its early days, a dark (or gray) suit, white shirt, and a "sincere" tie constituted the public uniform for IBM employees. During IBM's management transformation in the 1990s, CEO Louis V. Gerstner Jr. relaxed these codes, normalizing the dress and behavior of IBM employees. The company's culture has also given rise to different plays on the company acronym (IBM), with some saying it stands for "I've Been Moved" due to relocations and layoffs, others saying it stands for "I'm By Myself" pursuant to a prevalent work-from-anywhere norm, and others saying it stands for "I'm Being Mentored" due to the company's open door policy and encouragement for mentoring at all levels. The company has traditionally resisted labor union organizing, although unions represent some IBM workers outside the United States. See also List of electronics brands List of largest Internet companies List of largest manufacturing companies by revenue Tech companies in the New York City metropolitan region Top 100 US Federal Contractors Quantum Energy Teleportation using IBM superconducting computers Notes References Further reading External links 1911 establishments in New York (state) Technology companies established in 1911 American companies established in 1911 Cloud computing providers Collier Trophy recipients Companies based in Westchester County, New York Companies in the Dow Jones Industrial Average Companies in the Dow Jones Global Titans 50 Companies in the S&P 500 Dividend Aristocrats Companies listed on the New York Stock Exchange Computer companies of the United States Computer hardware companies Computer systems companies Data companies Data quality companies Display technology companies Electronics companies of the United States Information technology consulting firms of the United States Multinational companies headquartered in the United States National Medal of Technology recipients Outsourcing companies Point of sale companies Software companies based in New York (state) Storage Area Network companies Software companies of the United States International information technology consulting firms
IBM
[ "Technology" ]
7,561
[ "Computer hardware companies", "Computer systems companies", "Computers", "Computer systems" ]
40,380,701
https://en.wikipedia.org/wiki/Annals%20of%20Clinical%20Biochemistry
Annals of Clinical Biochemistry is a bimonthly peer-reviewed scientific journal covering all aspects of clinical biochemistry. The editor-in-chief is Michael J Murphy (University of Dundee). It was established in 1960 and is published by SAGE Publications on behalf of The Association for Clinical Biochemistry and Laboratory Medicine. Abstracting and indexing The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, its 2012 impact factor is 1.922, ranking it 12th out of 31 journals in the category "Medical Laboratory Technology". References External links SAGE Publishing academic journals English-language journals Bimonthly journals Laboratory medicine journals Medicinal chemistry journals Academic journals established in 1960
Annals of Clinical Biochemistry
[ "Chemistry" ]
134
[ "Biochemistry journal stubs", "Medicinal chemistry journals", "Medicinal chemistry stubs", "Biochemistry stubs", "Medicinal chemistry" ]
29,572,828
https://en.wikipedia.org/wiki/Quality%20control%20system%20for%20paper%2C%20board%20and%20tissue%20machines
A quality control system (QCS) refers to a system used to measure and control the quality of moving sheet processes on-line, as in the paper produced by a paper machine. Generally, a control system is concerned with measurement and control of one or multiple properties in time in a single dimension. A QCS is designed to continuously measure and control the material properties of the moving sheet in two dimensions: in the machine direction (MD) and the cross-machine direction (CD). The ultimate goal is maintaining a good and homogeneous quality and meeting users' economic goals. A basic quality measurement system generally includes basis weight and moisture profile measurements and in addition average basis weight of the paper web and moisture control related to these variables. Caliper is also one of the basic measurements. Other commonly used continuous measurements include: ash content, color, brightness, smoothness and gloss, coat weight, formation, porosity, fiber orientation, and surface properties (topography). QCS is used in paper machines, board machines, tissue machines, pulp drying machines, and other plastic or metal film processes. In modern systems, QCS applications can be embedded in distributed control systems. Sensor platform Sensors measuring the paper quality (online meters) are attached to a sensor platform that moves across the web guided by the scanner beam. A typical crossing time for a sensor platform is 10–30 s (an 8 m web, 60 cm/s). The sensor platform scans across the paper web and continuously measures paper characteristics from edge to edge. It can also be directed and stopped at a specific, fixed point on the web to measure the machine-direction (MD) variation at a single point. Scanner beam The QCS scanner beam is an essential part of a QCS system. Wide machines and accurate profile calculations require beam stability and accuracy of mechanical movement. As high accuracy in demanding and variable conditions is required, the sensitive sensors must be securely fastened. The most important goal is maintaining the exact respective position of the upper and lower measurement platforms in relation to their distance from each other, in MD and in CD. This is achieved through a robust construction, by reducing the effects of temperature and other environmental factors, and through a moving mechanism with minimized backlash of the measurement platform. Usually the scanner beam also contains all the cables and the air, cooling liquid and protection gas pipes. The base of the scanner beam contains elements that dampen vertical vibration. Variables to be measured A basic quality measurement system generally includes basis weight and moisture profile measurements and in addition average basis weight of the paper web and moisture control related to these variables. Caliper is also one of the basic measurements. Other commonly used continuous measurements include: ash content, color, brightness, smoothness and gloss, coat weight, formation, porosity, fiber orientation, and surface properties (topography). Measurement process Online sensors are set on the scanner beam to scan across the web. Typical crossing time of the web in new systems is 10–30 s (8 m web, 60 cm/s). If the web speed is 1200 m/min and web width 8.5 m, the web moves 280 m during a scan, and the sensor moves the same distance diagonally across the web. The measurements are taken on the diagonal line and act as basis for profile (machine and cross-direction) and variation calculations.
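The scan geometry and the data-box averaging described here can be sketched numerically. The snippet below is an illustrative calculation using the example figures quoted in the text; the data-box width and the helper function are assumptions made for the example, not part of any particular QCS product.

```python
# Geometry of one diagonal scan, using the example figures from the text.
web_speed = 1200 / 60        # machine-direction web speed, m/s (1200 m/min)
web_width = 8.5              # cross-direction web width, m
platform_speed = 0.60        # sensor platform speed across the web, m/s

crossing_time = web_width / platform_speed        # roughly 14 s edge to edge
md_travel = web_speed * crossing_time             # roughly 280 m of web passes during one scan
print(f"crossing time {crossing_time:.1f} s, web travel {md_travel:.0f} m per scan")

# Averaging the raw samples into one value per data box gives the 'raw profile'.
def raw_profile(values, cd_positions, box_width=0.01, width=web_width):
    """values: raw sensor readings; cd_positions: cross-direction position (m) of each reading."""
    n_boxes = int(round(width / box_width))
    sums, counts = [0.0] * n_boxes, [0] * n_boxes
    for v, x in zip(values, cd_positions):
        i = min(int(x / width * n_boxes), n_boxes - 1)
        sums[i] += v
        counts[i] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]
```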
Each scanned value is subject to integration depending on the machine speed. If the measurement signal sampling frequency is on the order of 2000 samples per second, then the smallest measurement element is about 0.2 cm in the cross direction. The measurement data is integrated to eliminate small-scale formation variation from the measurement result. The measurement value is averaged so that each sensor gives one measurement value per data box, which is typically 5 mm to one centimeter of web width. For a 1-meter-wide web, for instance, 100–200 measurement values are taken. These measurement values from a single scan (profile points) are called 'raw profiles'. In modern quality control systems, the width of these data boxes can be changed, and accurate profiles can be formed using several thousand profile data boxes. Typically, the sensor's output is the instantaneous value, the profile average value and the complete profile. Sensor requirements The requirements for an ideal paper machine online sensor include the following: the sensor is calibrated to a natural constant during the measurement; the sensor and the related electronics include fault diagnostics; digital processing of the signal is possible from the start without destroying the possibility of analyzing large frequency components; the sensor system does not disturb the production; the measurements are performed real-time and can be adjusted without delays; the measurement concerns the entire production, not just small sample values. It must be possible to distinguish between the machine-directional and cross-directional deviation and the residual deviation, as the control system handles these three deviations separately and in different ways. Earlier systems calculated a long-term average profile to filter the profile. As several quality profiles can be adjusted automatically, it is important to get the right profile data with high resolution quickly to the control system. This is especially important during changes, after breaks and during grade changes. In advanced systems, algorithms are used to calculate the profile data. See also Distributed Control System Programmable logic controller (PLC) Safety instrumented system (SIS) Industrial control systems References Control engineering Applications of distributed computing Industrial automation Papermaking
Quality control system for paper, board and tissue machines
[ "Engineering" ]
1,083
[ "Control engineering", "Industrial automation", "Automation", "Industrial engineering" ]
29,576,059
https://en.wikipedia.org/wiki/Indium%20trihydride
Indium trihydride is an inorganic compound with the chemical formula InH3. It has been observed in matrix isolation and laser ablation experiments, and gas-phase stability has been predicted. The infrared spectrum was obtained in the gas phase by laser ablation of indium in the presence of hydrogen gas. The compound is of no practical importance. Chemical properties For solid indium trihydride, a three-dimensional network polymeric structure, in which In atoms are connected by In-H-In bridging bonds, is suggested to account for the growth of broad infrared bands when samples of InH3 produced in a solid hydrogen matrix are warmed. Such a structure is known for solid aluminium trihydride (alane). When heated, indium trihydride decomposes to produce indium–hydrogen alloy and elemental hydrogen. As of 2013, the only known method of synthesising indium trihydride is the autopolymerisation of indane at low temperature. Other indium hydrides Several compounds with In-H bonds have been reported, including complexes in which two of the hydride ligands are replaced by other ligands. Although InH3 itself is labile, adducts with one or two donor ligands (n = 1 or 2) are known. 1:1 amine adducts are made by the reaction of LiInH4 (lithium tetrahydridoindate(III)) with a trialkylammonium salt. The trimethylamine complex is only stable below −30 °C or in dilute solution. The 1:1 and 1:2 complexes with tricyclohexylphosphine have been characterised crystallographically; the average In-H bond length is 168 pm. Indium hydride is also known to form adducts with N-heterocyclic carbenes (NHCs). References Indium compounds Metal hydrides
Indium trihydride
[ "Chemistry" ]
361
[ "Metal hydrides", "Inorganic compounds", "Reducing agents" ]
29,576,130
https://en.wikipedia.org/wiki/Vegetative%20treatment%20system
A Vegetative Treatment System (VTS) is a combination of treatment steps for managing runoff. It treats runoff by settling, infiltration, and nutrient use. Individual components of a VTS include a settling structure, an outlet structure, a distribution system, and a Vegetative Treatment Area (VTA). All these components when used together are considered to be a Vegetative Treatment System. Introduction A Vegetative Treatment System (VTS) is a new alternative treatment option for treating the runoff from an animal feeding operation in an effort to protect water quality in South Dakota (SD). A VTS consists of a sediment basin to settle the solids from the feedlot, and uses controlled release of the liquids to a vegetated treatment area (VTA). A VTA is commonly confused with vegetative buffer (or filter) strips. A buffer strip is a narrow strip of vegetation (usually 30–60 feet wide) between cropland and a water source, such as a river, lake, or stream. In contrast, a VTA is a specifically sized area of perennial vegetation to which runoff from a barnyard or feedlot is applied uniformly. The VTA utilizes the water holding capacity of the soil to store the runoff water until the nutrients and water can be used by the vegetation. Therefore, the application of the runoff to the VTA must be at a rate that prevents deep percolation below the root zone, and does not allow the flow to extend past the end of the VTA. A VTS can be an economical alternative to runoff retention (holding) ponds for controlling runoff from an open lot feeding production system (feedlots). A Vegetative Treatment Area (VTA) is an area of perennial vegetation, such as a grass or a forage. The VTA is used to treat runoff from a feedlot or barnyard. It treats runoff by settling, infiltration, and nutrient use. Runoff passes through buffers with some "filtering" of pollutants, but no attempt is made to control solids or flow. A VTS, however, collects runoff from a barnyard or feedlot, separates the solids from the liquids, and uniformly distributes the liquid over the vegetated area. Little or no runoff should leave a VTA. Runoff is first collected from an open lot or barnyard area in a sediment settling structure, usually a basin. Such basins are very effective for removing most solids. The runoff then flows into a VTA where the soil treats and stores the runoff. Once the runoff is in the soil, natural processes allow plants to use the nutrients. The general idea behind VTS technology is that the plants will take up the nutrients contained in the runoff and that natural processes will eliminate undesirable components such as pathogens. There are many different types of VTAs, such as level, infiltration basin, sloped, sprinkler, and dual and multiple systems. A Vegetative Treatment System can be used to manage runoff from open lots of both AFOs and CAFOs. VTS systems for large CAFOs can be permitted under the National Pollutant Discharge Elimination System (NPDES) in the US.
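The sizing principle described above, that runoff from a design event must fit within the water-holding capacity of the VTA root zone, can be expressed as a simple water balance. The sketch below is illustrative only; every parameter value is a hypothetical placeholder added for this text, not design guidance from the source.

```python
# Illustrative water balance for sizing a VTA: the design runoff volume must be
# storable in the root zone so it neither percolates below the roots nor runs off
# the end of the VTA. All values are hypothetical.
feedlot_area_m2 = 20_000          # drained lot area (assumed)
design_runoff_m = 0.025           # runoff depth from the design storm, 25 mm (assumed)
available_water_capacity = 0.15   # plant-available water, m of water per m of soil (assumed)
root_zone_depth_m = 0.9           # effective rooting depth of the forage (assumed)

runoff_volume_m3 = feedlot_area_m2 * design_runoff_m
storage_per_m2 = available_water_capacity * root_zone_depth_m   # m^3 of water per m^2 of VTA

required_vta_area_m2 = runoff_volume_m3 / storage_per_m2
print(f"runoff {runoff_volume_m3:.0f} m^3 -> VTA of at least {required_vta_area_m2:.0f} m^2")
```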
Advantages May provide lower initial investment and operating costs More aesthetically acceptable than large ponds No long-term storage of runoff required, such as holding or evaporation ponds Fewer safety issues Land designated for VTA can produce usable forage Disadvantages A VTA may not be a "closed" system; saturated soils from previous rains could allow a discharge Special management required during runoff events The VTAs can be damaged by a lack of maintenance and attention - gullies, erosion, and poor vegetation stands dramatically reduce their effectiveness Cannot currently be permitted in SD by the Department of Environment and Natural Resources The VTAs may not provide the same level of water quality improvement as a total runoff containment system, such as holding or evaporation ponds, provides References Introduction to Vegetative Treatment Systems Need a Vegetative Treatment System for Your Barnyard or Lot? a Small Farms Fact Sheet from the Livestock and Poultry Environmental Learning Center External links Introduction to Vegetative Treatment Systems VTS guidance document from the Heartland Regional Water Coordination Initiative Animal Feeding Operation Information from the University of Nebraska - Lincoln Environmental soil science Hydrology
Vegetative treatment system
[ "Chemistry", "Engineering", "Environmental_science" ]
874
[ "Hydrology", "Environmental soil science", "Environmental engineering" ]
29,578,326
https://en.wikipedia.org/wiki/Cell%20encapsulation
Cell encapsulation is a possible solution to graft rejection in tissue engineering applications. Cell microencapsulation technology involves immobilization of cells within a polymeric semi-permeable membrane. It permits the bidirectional diffusion of molecules such as the influx of oxygen, nutrients, growth factors etc. essential for cell metabolism and the outward diffusion of waste products and therapeutic proteins. At the same time, the semi-permeable nature of the membrane prevents immune cells and antibodies from destroying the encapsulated cells, regarding them as foreign invaders. Cell encapsulation could reduce the need for long-term use of immunosuppressive drugs after an organ transplant to control side effects. History In 1933 Vincenzo Bisceglie made the first attempt to encapsulate cells in polymer membranes. He demonstrated that tumor cells in a polymer structure transplanted into pig abdominal cavity remained viable for a long period without being rejected by the immune system. Thirty years later in 1964, the idea of encapsulating cells within ultra thin polymer membrane microcapsules so as to provide immunoprotection to the cells was then proposed by Thomas Chang who introduced the term "artificial cells" to define this concept of bioencapsulation. He suggested that these artificial cells produced by a drop method not only protected the encapsulated cells from immunorejection but also provided a high surface-to-volume relationship enabling good mass transfer of oxygen and nutrients. Twenty years later, this approach was successfully put into practice in small animal models when alginate-polylysine-alginate (APA) microcapsules immobilizing xenograft islet cells were developed. The study demonstrated that when these microencapsulated islets were implanted into diabetic rats, the cells remained viable and controlled glucose levels for several weeks. Human trials utilising encapsulated cells were performed in 1998. Encapsulated cells expressing a cytochrome P450 enzyme to locally activate an anti-tumour prodrug were used in a trial for advanced, non-resectable pancreatic cancer. Approximately a doubling of survival time compared to historic controls was demonstrated. Cell microencapsulation as a tool for tissue engineering and regenerative medicine Questions could arise as to why the technique of encapsulation of cells is even required when therapeutic products could just be injected at the site. An important reason for this is that the encapsulated cells would provide a source of sustained continuous release of therapeutic products for longer durations at the site of implantation. Another advantage of cell microencapsulation technology is that it allows the loading of non-human and genetically modified cells into the polymer matrix when the availability of donor cells is limited. Microencapsulation is a valuable technique for local, regional and oral delivery of therapeutic products as it can be implanted into numerous tissue types and organs. For prolonged drug delivery to the treatment site, implantation of these drug loaded artificial cells would be more cost effective in comparison to direct drug delivery. Moreover, the prospect of implanting artificial cells with similar chemical composition in several patients irrespective of their leukocyte antigen could again allow reduction in costs. 
Key parameters of cell microencapsulation technology The potential of using cell microencapsulation in successful clinical applications can be realized only if several requirements encountered during the development process are optimized, such as the use of an appropriate biocompatible polymer to form the mechanically and chemically stable semi-permeable matrix, production of uniformly sized microcapsules, use of an appropriate immune-compatible polycation cross-linked to the encapsulation polymer to stabilize the capsules, and selection of a suitable cell type depending on the situation. Biomaterials The use of the best biomaterial depending on the application is crucial in the development of drug delivery systems and tissue engineering. The polymer alginate is very commonly used due to its early discovery, easy availability and low cost, but other materials such as cellulose sulphate, collagen, chitosan, gelatin and agarose have also been employed. Alginate Several groups have extensively studied natural and synthetic polymers with the goal of developing the most suitable biomaterial for cell microencapsulation. Extensive work has been done using alginates, which are regarded as the most suitable biomaterials for cell microencapsulation due to their abundance, excellent biocompatibility and biodegradability properties. Alginate is a natural polymer which can be extracted from seaweed and bacteria with numerous compositions based on the isolation source. Alginate is not free from all criticism. Some researchers believe that alginates with high-M content could produce an inflammatory response and an abnormal cell growth, while some have demonstrated that alginate with high-G content leads to an even higher cell overgrowth and inflammatory reaction in vivo as compared to intermediate-G alginates. Even ultrapure alginates may contain endotoxins and polyphenols, which could compromise the biocompatibility of the resultant cell microcapsules. It has been shown that even though purification processes successfully lower endotoxin and polyphenol content in the processed alginate, it is difficult to lower the protein content, and the purification processes could in turn modify the properties of the biomaterial. Thus it is essential that an effective purification process is designed so as to remove all the contaminants from alginate before it can be successfully used in clinical applications. Modification and functionalization of alginate Researchers have also been able to develop alginate microcapsules with an altered form of alginate with enhanced biocompatibility and higher resistance to osmotic swelling. Another approach to increasing the biocompatibility of the membrane biomaterial is through surface modification of the capsules using peptide and protein molecules, which in turn controls the proliferation and rate of differentiation of the encapsulated cells. One group that has been working extensively on coupling the amino acid sequence Arg-Gly-Asp (RGD) to alginate hydrogels demonstrated that the cell behavior can be controlled by the RGD density coupled on the alginate gels. Alginate microparticles loaded with myoblast cells and functionalized with RGD allowed control over the growth and differentiation of the loaded cells. Another vital factor that controls the use of cell microcapsules in clinical applications is the development of a suitable immune-compatible polycation to coat the otherwise highly porous alginate beads and thus impart stability and immune protection to the system.
Poly-L-lysine is the most commonly used polycation, but its low biocompatibility restricts the successful clinical use of these PLL-formulated microcapsules, which attract inflammatory cells, thus inducing necrosis of the loaded cells. Studies have also shown that alginate-PLL-alginate (APA) microcapsules demonstrate low mechanical stability and short-term durability. Thus several research groups have been looking for alternatives to PLL and have demonstrated promising results with poly-L-ornithine and poly(methylene-co-guanidine) hydrochloride by fabricating durable microcapsules with high and controlled mechanical strength for cell encapsulation. Several groups have also investigated the use of chitosan, a naturally derived polycation, as a potential replacement for PLL to fabricate alginate-chitosan (AC) microcapsules for cell delivery applications. However, studies have also shown that the stability of this AC membrane is again limited, and one group demonstrated that modification of these alginate-chitosan microcapsules with genipin, a naturally occurring iridoid glucoside from gardenia fruits, to form genipin cross-linked alginate-chitosan (GCAC) microcapsules could augment stability of the cell-loaded microcapsules. Collagen Collagen, a major protein component of the ECM, provides support to tissues like skin, cartilage, bones, blood vessels and ligaments, and is thus considered a model scaffold or matrix for tissue engineering due to its properties of biocompatibility, biodegradability and ability to promote cell binding. This ability allows chitosan to control distribution of cells inside the polymeric system. Thus, Type-I collagen obtained from animal tissues is now successfully being used commercially as tissue engineered biomaterial for multiple applications. Collagen has also been used in nerve repair and bladder engineering. Immunogenicity has limited the applications of collagen. Gelatin has been considered as an alternative for that reason. Gelatin Gelatin is prepared from the denaturation of collagen, and many desirable properties such as biodegradability, biocompatibility, non-immunogenicity in physiological environments, and easy processability make this polymer a good choice for tissue engineering applications. It is used in engineering tissues for the skin, bone and cartilage and is used commercially for skin replacements. Chitosan Chitosan is a polysaccharide composed of randomly distributed β-(1-4)-linked D-glucosamine (deacetylated unit) and N-acetyl-D-glucosamine (acetylated unit). It is derived from the N-deacetylation of chitin and has been used for several applications such as drug delivery, space-filling implants and in wound dressings. However, one drawback of this polymer is its weak mechanical properties, and it is thus often combined with other polymers such as collagen to form a polymer with stronger mechanical properties for cell encapsulation applications. Agarose Agarose is a polysaccharide derived from seaweed and used for nanoencapsulation of cells; the cell/agarose suspension can be modified to form microbeads by reducing the temperature during preparation. However, one drawback with the microbeads so obtained is the possibility of cellular protrusion through the polymeric matrix wall after formation of the capsules. Cellulose Sulphate Cellulose sulphate is derived from cotton and, once processed appropriately, can be used as a biocompatible base in which to suspend cells.
When the poly-anionic cellulose sulphate solution is immersed in a second, poly-cationic solution (e.g. pDADMAC), a semi-permeable membrane is formed around the suspended cells as a result of gelation between the two poly-ions. Both mammalian cell lines and bacterial cells remain viable and continue to replicate within the capsule membrane in order to fill out the capsule. As such, in contrast to some other encapsulation materials, the capsules can be used to grow cells and thus act like a mini-bioreactor. The biocompatible nature of the material has been demonstrated by observation during studies using the cell-filled capsules themselves for implantation as well as isolated capsule material. Capsules formed from cellulose sulphate have been successfully used, showing safety and efficacy, in clinical and pre-clinical trials in both humans and animals, primarily as anti-cancer treatments, but also exploring possible uses for gene therapy or antibody therapies. Using cellulose sulphate, it has been possible to manufacture encapsulated cells as a pharmaceutical product at large scale, fulfilling current Good Manufacturing Practice (cGMP) standards. This was achieved by the company Austrianova in 2007. Biocompatibility The use of an ideal high quality biomaterial with the inherent properties of biocompatibility is the most crucial factor that governs the long-term efficiency of this technology. An ideal biomaterial for cell encapsulation should be one that is totally biocompatible, does not trigger an immune response in the host and does not interfere with cell homeostasis so as to ensure high cell viability. However, one major limitation has been the inability to reproduce the different biomaterials and the requirement for a better understanding of the chemistry and biofunctionality of the biomaterials and the microencapsulation system. Several studies demonstrate that surface modification of these cell-containing microparticles allows control over the growth and cellular differentiation of the encapsulated cells. One study proposed the use of zeta potential, which measures the electric charge of the microcapsule, as a means to predict the interfacial reaction between microcapsule and the surrounding tissue and in turn the biocompatibility of the delivery system. Microcapsule permeability A fundamental criterion that must be established while developing any device with a semi-permeable membrane is to adjust the permeability of the device in terms of entry and exit of molecules. It is essential that the cell microcapsule is designed with uniform thickness and allows control over both the rate of molecules entering the capsule necessary for cell viability and the rate of therapeutic products and waste material exiting the capsule membrane. Immunoprotection of the loaded cell is the key issue that must be kept in mind while working on the permeability of the encapsulation membrane, as not only immune cells but also antibodies and cytokines should be prevented from entering the microcapsule, which in fact depends on the pore size of the biomembrane. Since different cell types have different metabolic requirements, the permeability of the membrane has to be optimized for the cell type encapsulated.
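An order-of-magnitude feel for why capsule dimensions matter for permeability can be obtained from the characteristic diffusion time t ≈ r²/(6D). The sketch below is a back-of-the-envelope illustration added here, not a calculation from the cited studies; the diffusion coefficient is an assumed typical value for a small solute in water, and the extra resistance of the membrane itself is neglected.

```python
# Characteristic time for a small solute to diffuse to the centre of a spherical
# capsule, t ~ r^2 / (6 D). Illustrative only; D is an assumed value for a small
# nutrient (e.g. glucose) in water, and membrane resistance is ignored.
def diffusion_time(radius_m: float, diff_coeff: float = 6.7e-10) -> float:
    """Return the characteristic diffusion time in seconds for a sphere of given radius."""
    return radius_m ** 2 / (6 * diff_coeff)

for diameter_um in (350, 450, 1500):
    r = 0.5 * diameter_um * 1e-6
    print(f"{diameter_um} um capsule: ~{diffusion_time(r):.0f} s to reach the centre")
```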
Several groups have been dedicated towards the study of membrane permeability of cell microcapsules, and although the role of permeability of certain essential elements like oxygen has been demonstrated, the permeability requirements of each cell type are yet to be determined. Sodium citrate is used for degradation of alginate beads after encapsulation of cells, in order to determine viability of the cells or for further experimentation. Concentrations of approximately 25 mM are used to dissolve the alginate spheres, and the solution is spun down using a centrifuge so the sodium citrate can be removed and the cells can be collected. Mechanical strength and durability It is essential that the microcapsules have adequate membrane strength (mechanical stability) to endure physical and osmotic stress such as during the exchange of nutrients and waste products. The microcapsules should be strong enough and should not rupture on implantation, as this could lead to an immune rejection of the encapsulated cells. For instance, in the case of xenotransplantation, a tighter, more stable membrane would be required in comparison to allotransplantation. Also, while investigating the potential of using APA microcapsules loaded with bile salt hydrolase (BSH) overproducing active Lactobacillus plantarum 80 cells, in a simulated gastrointestinal tract model for oral delivery applications, the mechanical integrity and shape of the microcapsules was evaluated. It was shown that APA microcapsules could potentially be used in the oral delivery of live bacterial cells. However, further research proved that the GCAC microcapsules possess a higher mechanical stability as compared to APA microcapsules for oral delivery applications. Martoni et al. were experimenting with bacteria-filled capsules that would be taken by mouth to reduce serum cholesterol. The capsules were pumped through a series of vessels simulating the human GI tract to determine how well the capsules would survive in the body. Extensive research into the mechanical properties of the biomaterial to be used for cell microencapsulation is necessary to determine the durability of the microcapsules during production and especially for in vivo applications where a sustained release of the therapeutic product over long durations is required. van der Wijngaart et al. grafted a solid, but permeable, shell around the cells to provide increased mechanical strength. Methods for testing mechanical properties of microcapsules A rheometer is a machine used to test shear rate, shear strength, consistency coefficient and flow behavior index; a viscometer is used for shear strength testing. Microcapsule Generation Microfluidics Droplet-based microfluidics can be used to generate microparticles of repeatable size, through controlled manipulation of the alginate solution as the microcapsules are created. Electrospraying Techniques Electrospraying is used to create alginate spheres by pumping an alginate solution through a needle. A source of high voltage, usually provided by a clamp attached to the needle, is used to generate an electric potential, with the alginate falling from the needle tip into a solution that contains a ground.
Calcium chloride is used as the cross-linking solution into which the generated capsules drop and where they harden after approximately 30 minutes. Beads are formed from the needle due to charge and surface tension. The size of the beads depends on the height of the device from the needle to the calcium chloride solution, the voltage of the clamp on the needle, and the alginate concentration. Microcapsule size The diameter of the microcapsules is an important factor that influences both the immune response towards the cell microcapsules and the mass transport across the capsule membrane. Studies show that the cellular response to smaller capsules is much weaker than to larger capsules, and in general the diameter of the cell-loaded microcapsules should be between 350 and 450 μm so as to enable effective diffusion across the semi-permeable membrane. Cell choice The cell type chosen for this technique depends on the desired application of the cell microcapsules. The cells put into the capsules can be from the patient (autologous cells), from another donor (allogeneic cells) or from other species (xenogeneic cells). The use of autologous cells in microencapsulation therapy is limited by the availability of these cells, and even though xenogeneic cells are easily accessible, the danger of possible transmission of viruses, especially porcine endogenous retrovirus, to the patient restricts their clinical application; after much debate, several groups have concluded that studies should involve the use of allogeneic instead of xenogeneic cells. Depending on the application, the cells can be genetically altered to express any required protein. However, enough research has to be carried out to validate the safety and stability of the expressed gene before these types of cells can be used. This technology has not received approval for clinical trial because of the high immunogenicity of cells loaded in the capsules. They secrete cytokines and produce a severe inflammatory reaction at the implantation site around the capsules, in turn leading to a decrease in viability of the encapsulated cells. One promising approach being studied is the administration of anti-inflammatory drugs to reduce the immune response produced due to administration of the cell-loaded microcapsules. Another approach which is now the focus of extensive research is the use of stem cells such as mesenchymal stem cells for long-term cell microencapsulation and cell therapy applications in hopes of reducing the immune response in the patient after implantation. Another issue which compromises long-term viability of the microencapsulated cells is the use of fast-proliferating cell lines, which eventually fill up the entire system and lead to a decrease in the diffusion efficiency across the semi-permeable membrane of the capsule. A solution to this could be the use of cell types such as myoblasts, which do not proliferate after the microencapsulation procedure. Non-therapeutic applications Probiotics are increasingly being used in numerous dairy products such as ice cream, milk powders, yoghurts, frozen dairy desserts and cheese due to their important health benefits. However, low viability of probiotic bacteria in the food still remains a major hurdle. The pH, dissolved oxygen content, titratable acidity, storage temperature, species and strains of associative fermented dairy product organisms and concentration of lactic and acetic acids are some of the factors that greatly affect the probiotic viability in the product.
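The viability question can be framed as a simple log-reduction budget: the count at manufacture, minus the losses during storage and gastric transit, must still exceed the regulatory target described below. The sketch is purely illustrative; the loss figures are hypothetical assumptions, not measured values from the source.

```python
# Illustrative log-reduction budget for encapsulated probiotics. All loss values
# are hypothetical; encapsulation aims to reduce them.
initial_cfu_per_g = 1e9        # viable count at manufacture (assumed)
log_loss_storage = 1.5         # log10 reduction over shelf life (assumed)
log_loss_transit = 1.0         # log10 reduction through the upper GI tract (assumed)
target_cfu_per_g = 1e6         # lower end of the commonly cited 10^6-10^7 cfu/g target

surviving = initial_cfu_per_g / 10 ** (log_loss_storage + log_loss_transit)
print(f"surviving ~{surviving:.1e} cfu/g; meets target: {surviving >= target_cfu_per_g}")
```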
As set by the Food and Agriculture Organization (FAO) of the United Nations and the World Health Organization (WHO), in order to be considered a health food with probiotic addition, the product should contain at least 10^6–10^7 cfu of viable probiotic bacteria per gram. It is necessary that the bacterial cells remain stable and healthy in the manufactured product, are sufficiently viable while moving through the upper digestive tract and are able to provide positive effects upon reaching the intestine of the host. Cell microencapsulation technology has successfully been applied in the food industry for the encapsulation of live probiotic bacteria cells to increase viability of the bacteria during processing of dairy products and for targeted delivery to the gastrointestinal tract. Apart from dairy products, microencapsulated probiotics have also been used in non-dairy products, such as Theresweet, a sweetener. It can be used as a convenient vehicle for delivery of encapsulated Lactobacillus to the intestine although it is not itself a dairy product. Therapeutic applications Diabetes The potential of using a bioartificial pancreas for treatment of diabetes mellitus, based on encapsulating islet cells within a semi-permeable membrane, is being extensively studied by scientists. These devices could eliminate the need for immunosuppressive drugs in addition to finally solving the problem of shortage of organ donors. The use of microencapsulation would protect the islet cells from immune rejection as well as allow the use of animal cells or genetically modified insulin-producing cells. It is hoped that development of these islet-loaded microcapsules could eliminate the need for the insulin injections required several times a day by type 1 diabetic patients. The Edmonton protocol involves implantation of human islets extracted from cadaveric donors and has shown improvements towards the treatment of type 1 diabetics who are prone to hypoglycemic unawareness. However, the two major hurdles faced in this technique are the limited availability of donor organs and the need for immunosuppressants to prevent an immune response in the patient's body. Several studies have been dedicated towards the development of bioartificial pancreas involving the immobilization of islets of Langerhans inside polymeric capsules. The first attempt towards this aim was demonstrated in 1980 by Lim et al., where xenograft islet cells were encapsulated inside alginate polylysine microcapsules and showed significant in vivo results for several weeks. It is envisaged that the implantation of these encapsulated cells would help to overcome the use of immunosuppressive drugs and also allow the use of xenograft cells, thus obviating the problem of donor shortage. The polymers used for islet microencapsulation are alginate, chitosan, polyethylene glycol (PEG), agarose, sodium cellulose sulfate and water-insoluble polyacrylates, with alginate and PEG being commonly used polymers. With successful in vitro studies being performed using this technique, significant work in clinical trials using microencapsulated human islets is being carried out. In 2003, the use of alginate/PLO microcapsules containing islet cells for pilot phase-1 clinical trials was permitted to be carried out at the University of Perugia by the Italian Ministry of Health. In another study, the potential of clinical application of PEGylation and low doses of the immunosuppressant cyclosporine A was evaluated.
The trial which began in 2005 by Novocell, now forms the phase I/II of clinical trials involving implantation of islet allografts into the subcutaneous site. However, there have been controversial studies involving human clinical trials where Living Cell technologies Ltd demonstrated the survival of functional xenogeneic cells transplanted without immunosuppressive medication for 9.5 years. However, the trial received harsh criticism from the International Xenotransplantation Association as being risky and premature. However, even though clinical trials are under way, several major issues such as biocompatibility and immunoprotection need to be overcome. Potential alternatives to encapsulating isolated islets (of either allo- or xenogeneic origin) are also being explored. Using sodium cellulose sulphate technology from Austrianova Singapore an islet cell line was encapsulated and it was demonstrated that the cells remain viable and release insulin in response to glucose. In pre-clinical studies, implanted, encapsulated cells were able to restore blood glucose levels in diabetic rats over a period of 6 months. Cancer The use of cell encapsulated microcapsules towards the treatment of several forms of cancer has shown great potential. One approach undertaken by researchers is through the implantation of microcapsules containing genetically modified cytokine secreting cells. An example of this was demonstrated by Cirone et al. when genetically modified IL-2 cytokine secreting non-autologous mouse myoblasts implanted into mice showed a delay in the tumor growth with an increased rate of survival of the animals. However, the efficiency of this treatment was brief due to an immune response towards the implanted microcapsules. Another approach to cancer suppression is through the use of angiogenesis inhibitors to prevent the release of growth factors which lead to the spread of tumors. The effect of implanting microcapsules loaded with xenogenic cells genetically modified to secrete endostatin, an antiangiogenic drug which causes apoptosis in tumor cells, has been extensively studied. However, this method of local delivery of microcapsules was not feasible in the treatment of patients with many tumors or in metastasis cases and has led to recent studies involving systemic implantation of the capsules. In 1998, a murine model of pancreatic cancer was used to study the effect of implanting genetically modified cytochrome P450 expressing feline epithelial cells encapsulated in cellulose sulfate polymers for the treatment of solid tumors. The approach demonstrated for the first time the application of enzyme expressing cells to activate chemotherapeutic agents. On the basis of these results, an encapsulated cell therapy product, NovaCaps, was tested in a phase I and II clinical trial for the treatment of pancreatic cancer in patients and has recently been designated by the European medicines agency (EMEA) as an orphan drug in Europe. A further phase I/II clinical trial using the same product confirmed the results of the first trial, demonstrating an approximate doubling of survival time in patients with stage IV pancreatic cancer. In all of these trials using cellulose sulphate, in addition to the clear anti-tumour effects, the capsules were well tolerated and there were no adverse reactions seen such as immune response to the capsules, demonstrating the biocompatible nature of the cellulose sulphate capsules. In one patient the capsules were in place for almost 2 years with no side effects. 
These studies show the promising potential application of cell microcapsules towards the treatment of cancers. However, solutions to issues such as immune response leading to inflammation of the surrounding tissue at the site of capsule implantation have to be researched in detail before more clinical trials are possible. Heart Diseases Numerous studies have been dedicated towards the development of effective methods to enable cardiac tissue regeneration in patients after ischemic heart disease. An emerging approach to answer the problems related to ischemic tissue repair is through the use of stem cell-based therapy. However, the actual mechanism due to which this stem cell-based therapy has generative effects on cardiac function is still under investigation. Even though numerous methods have been studied for cell administration, the efficiency of the number of cells retained in the beating heart after implantation is still very low. A promising approach to overcome this problem is through the use of cell microencapsulation therapy which has shown to enable a higher cell retention as compared to the injection of free stem cells into the heart. Another strategy to improve the impact of cell based encapsulation technique towards cardiac regenerative applications is through the use of genetically modified stem cells capable of secreting angiogenic factors such as vascular endothelial growth factor (VEGF) which stimulate neovascularization and restore perfusion in the damaged ischemic heart. An example of this is shown in the study by Zang et al. where genetically modified xenogeneic CHO cells expressing VEGF were encapsulated in alginate-polylysine-alginate microcapsules and implanted into rat myocardium. It was observed that the encapsulation protected the cells from an immunoresponse for three weeks and also led to an improvement in the cardiac tissue post-infarction due to increased angiogenesis. Monoclonal Antibody Therapy The use of monoclonal antibodies for therapy is now widespread for treatment of cancers and inflammatory diseases. Using cellulose sulphate technology, scientists have successfully encapsulated antibody producing hybridoma cells and demonstrated subsequent release of the therapeutic antibody from the capsules. The capsules containing the hybridoma cells were used in pre-clinical studies to deliver neutralising antibodies to the mouse retrovirus FrCasE, successfully preventing disease. Other conditions Many other medical conditions have been targeted with encapsulation therapies, especially those involving a deficiency in some biologically derived protein. One of the most successful approaches is an external device that acts similarly to a dialysis machine, only with a reservoir of pig hepatocytes surrounding the semipermeable portion of the blood-infused tubing. This apparatus can remove toxins from the blood of patients suffering severe liver failure. Other applications that are still in development include cells that produce ciliary-derived neurotrophic factor for the treatment of ALS and Huntington's disease, glial-derived neurotrophic factor for Parkinson's disease, erythropoietin for anemia, and HGH for dwarfism. In addition, monogenic diseases such as haemophilia, Gaucher's disease and some mucopolysaccharide disorders could also potentially be targeted by encapsulated cells expressing the protein that is otherwise lacking in the patient. References Biomaterials Biomedical engineering Tissue engineering Regenerative biomedicine Drug delivery devices
Cell encapsulation
[ "Physics", "Chemistry", "Engineering", "Biology" ]
6,459
[ "Biomaterials", "Pharmacology", "Biological engineering", "Biomedical engineering", "Cloning", "Chemical engineering", "Drug delivery devices", "Materials", "Tissue engineering", "Matter", "Medical technology" ]
47,926,105
https://en.wikipedia.org/wiki/Open%20Energy%20Modelling%20Initiative
The Open Energy Modelling Initiative (openmod) is a grassroots community of energy system modellers from universities and research institutes across Europe and elsewhere. The initiative promotes the use of open-source software and open data in energy system modelling for research and policy advice. The Open Energy Modelling Initiative documents a variety of open-source energy models and addresses practical and conceptual issues regarding their development and application. The initiative runs an email list, an internet forum, and a wiki, and hosts occasional academic workshops. A statement of aims is available. Context The application of open-source development to energy modelling dates back to around 2003. This section provides some background for the growing interest in open methods. Growth in open energy modelling Just two active open energy modelling projects were cited in a 2011 paper: OSeMOSYS and TEMOA. Balmorel was also public at that time, having been made available on a website in 2001. The openmod wiki now lists 24 such undertakings, and the Open Energy Platform lists 17 open energy frameworks and about 50 open energy models. Academic literature A 2012 paper presents the case for using "open, publicly accessible software and data as well as crowdsourcing techniques to develop robust energy analysis tools". The paper claims that these techniques can produce high-quality results and are particularly relevant for developing countries. There is an increasing call for the energy models and datasets used for energy policy analysis and advice to be made public in the interests of transparency and quality. A 2010 paper concerning energy efficiency modeling argues that "an open peer review process can greatly support model verification and validation, which are essential for model development". One 2012 study argues that the source code and datasets used in such models should be placed under publicly accessible version control to enable third parties to run and check specific models. Another 2014 study argues that the public trust needed to underpin a rapid transition in energy systems can only be built through the use of transparent open-source energy models. The UK TIMES project (UKTM) is open source, according to a 2014 presentation, because "energy modelling must be replicable and verifiable to be considered part of the scientific process" and because this fits with the "drive towards clarity and quality assurance in the provision of policy insights". In 2016, the Deep Decarbonization Pathways Project (DDPP) was seeking to improve its modelling methodologies, a key motivation being "the intertwined goals of transparency, communicability and policy credibility." A 2016 paper argues that model-based energy scenario studies, wishing to influence decision-makers in government and industry, must become more comprehensible and more transparent. To these ends, the paper provides a checklist of transparency criteria that should be completed by modelers. The authors note however that they "consider open source approaches to be an extreme case of transparency that does not automatically facilitate the comprehensibility of studies for policy advice." An editorial from 2016 opines that closed energy models providing public policy support "are inconsistent with the open access movement [and] funded research". A 2017 paper lists the benefits of open data and models and the reasons that many projects nonetheless remain closed.
The paper makes a number of recommendations for projects wishing to transition to a more open approach. The authors also conclude that, in terms of openness, energy research has lagged behind other fields, most notably physics, biotechnology, and medicine. A one-page opinion piece in Nature News from 2017 advances the case for using open energy data and modeling to build public trust in policy analysis. The article also argues that scientific journals have a responsibility to require that data and code be submitted alongside text for scrutiny; currently only Energy Economics makes this practice mandatory within the energy domain. Copyright and open energy data Issues surrounding copyright remain at the forefront with regard to open energy data. Most energy datasets are collated and published by official or semi-official sources, for example, national statistics offices, transmission system operators, and electricity market operators. The doctrine of open data requires that these datasets be available under free licenses or be in the public domain. But most published energy datasets carry proprietary licenses, limiting their reuse in numerical and statistical models, open or otherwise. Measures to enforce market transparency have not helped because the associated information is normally licensed to preclude downstream usage. Recent transparency measures include the 2013 European energy market transparency regulation 543/2013 and a 2016 amendment to the German Energy Industry Act to establish a national energy information platform, slated to launch on 1 July 2017. Energy databases may also be protected under general database law, irrespective of the copyright status of the information they hold. In December 2017, participants from the Open Energy Modelling Initiative and allied research communities made a written submission to the European Commission on the re-use of public sector information. The document provides a comprehensive account of the data issues faced by researchers engaged in open energy system modeling and energy market analysis and quotes extensively from a German legal opinion. In May 2020, participants from the Open Energy Modelling Initiative made a further submission on the European strategy for data. In mid-2021, participants made two written submissions on a proposed Data Act legislative work-in-progress intended primarily to improve public interest business-to-government (B2G) information transfers within the European Economic Area (EEA). More specifically, the two Data Act submissions drew attention to restrictive but nonetheless compliant public disclosure reporting practices deployed by the European Energy Exchange (EEX). Public policy support In May 2016, the European Union announced that "all scientific articles in Europe must be freely accessible as of 2020". This is a step in the right direction, but the new policy makes no mention of open software and its importance to the scientific process. In August 2016, the United States government announced a new federal source code policy which mandates that at least 20% of custom source code developed by or for any agency of the federal government be released as open-source software (OSS). The US Department of Energy (DOE) is participating in the program. The project is hosted on a dedicated website and subject to a three-year pilot. Open-source campaigners are using the initiative to advocate that European governments adopt similar practices.
In 2017 the Free Software Foundation Europe (FSFE) issued a position paper calling for free software and open standards to be central to European science funding, including the flagship EU program Horizon 2020. The position paper focuses on open data and open data processing; the question of open modeling is not addressed per se. Adoption by regulators and industry generally A trend evident by 2023 is adoption by regulators within the European Union and North America. Fairley (2023), writing in IEEE Spectrum, provides an overview. As one example, the Canada Energy Regulator is using the PyPSA framework for systems analysis. Workshops The Open Energy Modelling Initiative participants take turns to host regular academic workshops. The Open Energy Modelling Initiative also holds occasional specialist meetings. See also Crowdsourcing Energy modeling Energy system – the interpretation of the energy sector in system terms Free Software Foundation Europe – a non-profit organization advocating for free software in Europe Open data Open energy system models – a review of energy system models that are also open source Open energy system databases – database projects which collect, clean, and republish energy-related datasets Notes Further reading GenerationR open science blog on the openmod community Introductory video on open energy system modeling using the python language as an example Introductory video on the Open Energy Outlook (OEO) project specific to the United States External links Related to openmod Open Energy Modelling Initiative website Open Energy Modelling Initiative wiki Open Energy Modelling Initiative discussion forum Open Energy Modelling Initiative email list archive Open Energy Modelling Initiative YouTube channel Open Energy Modelling Initiative GitHub account Open Energy Modelling Initiative twitter feed Open Energy Modelling Initiative manifesto written in 2014 Open energy data Open Energy Platform – a collaborative versioned database for storing open energy system model datasets – a semantic wiki-site and database covering energy systems data worldwide Energypedia – a wiki-based collaborative knowledge exchange covering sustainable energy topics in developing countries Open Power System Data project – triggered by the work of the Open Energy Modelling Initiative OpenEI – a US-based open energy data portal Similar initiatives soundsoftware.ac.uk – an open modelling community for acoustic and music software Other REEEM – a scientific project modeling sustainable energy futures for Europe EERAdata – a project exploring FAIR energy data for Europe References Economics models Energy development Energy models Energy organizations Energy policy Free and open-source software organizations Mathematical modeling Open data Open science Simulation
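The PyPSA framework mentioned above is one of the open frameworks catalogued by the initiative. The sketch below is a minimal, illustrative capacity-expansion model, not a real study: the bus, generator, and cost figures are invented for the example, and it assumes a recent PyPSA release in which Network.optimize() is available (older releases used Network.lopf()).

```python
import pypsa

# Minimal single-bus capacity-expansion example with invented numbers.
n = pypsa.Network()
n.set_snapshots(range(4))  # four representative hours

n.add("Bus", "grid")
# Variable wind generator with an assumed hourly availability profile.
n.add("Generator", "wind", bus="grid",
      p_nom_extendable=True, capital_cost=1000,
      marginal_cost=0, p_max_pu=[0.3, 0.6, 0.1, 0.5])
# Dispatchable gas backup.
n.add("Generator", "gas", bus="grid",
      p_nom_extendable=True, capital_cost=500, marginal_cost=50)
n.add("Load", "demand", bus="grid", p_set=100)  # constant 100 MW load

n.optimize()                   # solve the joint dispatch/investment LP
print(n.generators.p_nom_opt)  # optimised capacity per generator
```

A model of this shape, with open input data, is the kind of artefact the initiative argues should be published alongside policy studies so that third parties can re-run and check the results.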
Open Energy Modelling Initiative
[ "Mathematics", "Engineering", "Environmental_science" ]
1,750
[ "Mathematical modeling", "Applied mathematics", "Energy organizations", "Energy policy", "Environmental social science" ]
47,929,502
https://en.wikipedia.org/wiki/IRONCAD
IRONCAD is a 3D and 2D CAD (computer-aided design) software product, focused mainly on the mechanical design market, that runs on Microsoft Windows. It is developed by Atlanta, GA based IronCAD LLC. History IRONCAD was originally developed by Visionary Design Systems (VDS), based in Santa Clara, CA. The product launched in 1998. In 2001 the development team, led by Dr. Tao-Yan Han, split from VDS (now known as Alventive) to form IronCAD LLC and continue the development of the IRONCAD product. IronCAD descended from a product called Trispectives, developed by 3D Eye, an Atlanta-based company that was acquired by VDS. Release History Solid Modeling Methodology IRONCAD's primary focus is 3D CAD design using solid modeling technology. IRONCAD uses both the Parasolid and ACIS modeling kernels to provide computational methods for solving geometric calculations such as calculating blends and shells. Users create designs in 3D by dragging and dropping shapes and components from 3D catalogs to build parts and assemblies. They then use those designs to communicate with other users in the design process using both 3D models and 2D drawings. The drawings remain associative to the 3D model, so that as the model is updated the drawings reflect the changes. IRONCAD also supports direct face editing and allows features and direct face edits to be combined within the same part. Many users have written books on IronCAD's products to help customers learn the software. References External links Computer-aided design software Windows-only proprietary software Computer-aided engineering software Solid mechanics Product design Computer-aided design software for Windows
IRONCAD
[ "Physics", "Engineering" ]
355
[ "Product design", "Solid mechanics", "Design", "Mechanics" ]
47,936,656
https://en.wikipedia.org/wiki/Offensive%20Security
Offensive Security (also known as OffSec) is an American international company working in information security, penetration testing and digital forensics. Operating from around 2007, the company created open source projects, advanced security courses, the ExploitDB vulnerability database, and the Kali Linux distribution. The company was started by Mati Aharoni, and employs security professionals with experience in security penetration testing and system security evaluation. The company has provided security counseling and training to many technology companies. The company also provides training courses and certifications. Background and history Mati Aharoni, Offensive Security's co-founder, started the business around 2006 with his wife Iris. Offensive Security LLC was formed in 2008. The company was structured as Offensive Security Services, LLC in 2012 in North Carolina. In September 2019 the company received its first venture capital investment, from Spectrum Equity, and CEO Ning Wang replaced Joe Steinbach, the previous CEO for four years, who ran the business from the Philippines. Jim O'Gorman, the company's chief strategy officer, also gives training and writes books. Customers include Cisco, Wells Fargo, Booz Allen Hamilton, and defense-related U.S. government agencies. The company gives training sessions at the annual Black Hat hacker conference. In 2019, J.M. Porup of CSO online wrote "few infosec certifications have developed the prestige in recent years of the Offensive Security Certified Professional (OSCP)," and said it has "a reputation for being one of the most difficult," because it requires students to hack into a test network during a difficult "24-hour exam." He also summarized accusations of cheating, and Offensive Security's responses, concluding that hiring based only on credentials was a mistake and that an applicant's skills should be validated. In 2020, cybersecurity professional Matt Day of Start a Cyber Career, writing a detailed review and comparison of OSCP and CompTIA PenTest+, said OSCP was "well known in the pentesting community, and therefore well known by the managers that hire them." Projects In addition to their training and security services, the company also founded open source projects, online exploit databases and security information teaching aids. Kali Linux The company is known for developing Kali Linux, a Debian-based Linux distribution modeled after BackTrack. It succeeds BackTrack Linux, and is designed for security information needs, such as penetration testing and digital forensics. Kali NetHunter is Offensive Security's project for the ARM architecture and Android devices. Kali Linux contains over 600 security programs. The release of the second version (2.0) received wide coverage in digital media. Offensive Security provides a book, Kali Linux Revealed, and makes the first edition available for free download. Users and employees have been inspired to have careers in social engineering. In 2019, in a detailed review, Cyberpunk called Offensive Security's Kali Linux, formerly known as BackTrack, the "best penetration testing distribution." BackTrack BackTrack Linux was an open source GNU General Public License Linux distribution developed by programmers from around the world with assistance, coordination, and funding from Offensive Security. The distribution was originally developed under the names Whoppix, IWHAX, and Auditor. It was designed to delete any trace of its usage. The distribution was widely known and used by security experts.
ExploitDB Exploit Database is an archive of vulnerable software and exploits that have been made public by the information security community. The database is designed to help penetration testers test small projects easily by sharing information with each other. The database also contains proofs of concept (POCs), helping information security professionals learn about new exploit variations. In Ethical Hacking and Penetration Testing Guide, Rafay Baloch said Exploit-db had over 20,000 exploits, and was available in BackTrack Linux by default. In CEH v10 Certified Ethical Hacker Study Guide, Ric Messier called exploit-db a "great resource," and stated it was available within Kali Linux by default, or could be added to other Linux distributions. Metasploit Metasploit Unleashed is a charity project created by Offensive Security in support of Hackers for Charity, which was started by Johnny Long. The project teaches Metasploit and is designed especially for people considering a career in penetration testing. Google Hacking Database Google Hacking Database was created by Johnny Long and is now hosted by Offensive Security. The project was created as a part of Hackers for Charity. The database helps security professionals determine whether a given application or website is compromised. The database uses Google search to establish whether usernames and passwords have been compromised. See also Offensive Security Certified Professional Kali Linux Kali NetHunter BackTrack Linux List of computer security certifications References External links Offensive Security Official Website Kali Linux Official Website Digital forensics software Computer security procedures Computer network security Software testing Data security Security Crime prevention National security Cryptography Information governance
Offensive Security
[ "Mathematics", "Engineering" ]
1,006
[ "Cybersecurity engineering", "Cryptography", "Software testing", "Applied mathematics", "Computer networks engineering", "Software engineering", "Computer network security", "Data security", "Computer security procedures" ]
39,082,112
https://en.wikipedia.org/wiki/Tubercle%20effect
The tubercle effect is a phenomenon in which tubercles, or large 'bumps', on the leading edge of an airfoil can improve its aerodynamics. The effect, though already known, was analyzed extensively by Frank E. Fish et al. from the early 2000s onwards. The tubercle effect works by channeling flow over the airfoil into narrower streams, creating higher velocities. A further side effect of these channels is a reduction of the flow moving over the wingtip, resulting in less induced drag from wingtip vortices. Using computational modeling, it was determined that the presence of tubercles delays stall to a higher angle of attack, thereby increasing maximum lift and decreasing drag. Fish first discovered this effect when looking at the fins of humpback whales. These whales are the only known organisms to take advantage of the tubercle effect. It is believed that this effect allows them to be much more manoeuvrable in the water: the tubercles on their fins enable the aquatic maneuvers the whales use to capture prey. The tiny hooklets on the fore edge of an owl's wing have a similar effect that contributes to its aerodynamic manoeuvrability and stealth. Science behind the effect The tubercle effect is a phenomenon in which tubercles, or large raised bumps on the leading edge of a wing, blade, or sail, increase its aerodynamic or hydrodynamic performance. Research on this topic was inspired by the work of marine biologists on the behavior of humpback whales. Despite their large size, these whales are agile and are able to perform rolls and loops underwater. Research on humpback whales indicated that the presence of these tubercles on the leading edge of whale fins reduced stall and increased lift, while reducing noise in the post-stall regime. Researchers were motivated by these positive results to apply these concepts to aircraft wings as well as industrial and wind turbines. Early research on this topic was performed by Watts & Fish, followed by further experiments both in water and wind tunnels. Watts & Fish determined that the presence of tubercles on the leading edge of an airfoil increased lift by 4.8%. Further numerical computations confirmed this result and indicated that the presence of tubercles can decrease the effects of drag by 40%. Leading-edge tubercles have been found to reduce the point of maximum lift and increase the region of post-stall lift. In the post-stall regime, foils with tubercles experienced a gradual loss of lift, as opposed to foils without tubercles, which experienced a sudden loss of lift. The geometry of tubercles must also be considered, as the amplitude and wavelength of tubercles have an effect on flow control. Tubercles can be thought of as small delta wings with a curved apex, since they create a vortex on the upper edge of the tubercle. These vortical structures impose a downward deflection of the airflow (downwash) over the crests of tubercles. This downward deflection delays stall on the airfoil. By contrast, in the troughs of these structures there is a net upward deflection of airflow (upwash). Localized upwash is associated with higher effective angles of attack, and thus increased lift, while flow separation occurs in the troughs and stays confined there. The vortex created by the tubercle delays flow separation toward the trailing edge of the wing, thus reducing the effects of drag.
However, in water, due to the crest/trough structure, cavitation is possible, and is undesirable. Cavitation occurs in areas of high flow velocity and low pressure, such as the trough of a tubercled structure. In water, air bubbles or pockets form on the upper side of the tubercle. These bubbles reduce lift and increase drag, while increasing noise in the flow when the bubbles collapse. However, tubercles can be modified to manipulate the location of cavitation. The amplitude of tubercles has a more significant impact on post-stall performance than their wavelength. Higher tubercle amplitude has been linked to more gradual stall and higher post-stall lift, as well as a lower pre-stall lift slope. The wavelength and amplitude can both be optimized to increase post-stall performance. Experiments on the effects of leading-edge tubercles have primarily focused on rigid bodies, and more research is needed in order to apply the knowledge of the tubercle effect to industrial, aircraft, or energy applications. Biological occurrences of tubercles Tubercles are a morphological feature that occurs in multiple organisms, including the humpback whale, hammerhead sharks, scallops, and some extinct chondrichthyans. One organism in which tubercles are notable is the humpback whale, where they are located on the leading edge of the flippers. The tubercles allow the very large whales to execute tight turns underwater and swim efficiently, a capability imperative for the humpback whale's feeding. The tubercles on the flippers help to maintain lift, prevent stall, and decrease the drag coefficient during turning maneuvers. Tubercles on the humpback whale are considered passive flow control because they are structural. Tubercles develop in the fetus of the humpback whale. Typically 9–11 tubercles are present on each flipper, decreasing in size as they near the tip of the flipper. The largest tubercles are the first and fourth from the shoulder of the whale. A similar anatomical structure is found on the pectoral fins of other large, primarily predatory fish species. Modern applications in industry Leading-edge tubercles are up and coming in manufacturing. Wind turbine performance relies on blade aerodynamics, where similar flow characteristics are observed. Modern turbines have twisted blades to account for the angle of attack at specific design conditions. However, in practical application, turbines often operate at off-design conditions where stall occurs, causing a decrease in performance and efficiency. To look for possible improvements in turbine energy efficiency, the influence of leading-edge tubercles must be investigated in more depth. Tubercles provide a bio-inspired design that offers commercial viability in the design of watercraft, aircraft, ventilation fans, and windmills. Passive flow control through tubercle designs has the advantage of eliminating complex, costly, high-maintenance control mechanisms while improving the performance of lifting bodies in air and water. One issue that remains today is the difference in the scale of structure and operation between each of these bio-inspired technologies. New techniques are being implemented in order to develop methods of delaying stall in flow applications.
For example, jet aircraft with leading-edge tubercle designs can carry greater payloads at faster speeds and higher altitudes, allowing for greater economic efficiency in the aeronautical field. While these effects are found in many aquatic animals and birds, scaling these designs up to industrial application brings forward another set of issues regarding the high stresses associated with machinery. In airplanes, for example, designs are much more limited than the complex kinematics and structures of the joints in the wings of birds, which produce agile turning maneuvers. This problem can be addressed by further research into the overlap in size and performance between biological structures and engineering applications. It was also observed in turbine design that leading-edge tubercles have the ability to improve power generation by up to 20%. In the aeronautical engineering field, leading-edge tubercles placed on turbine blades can increase the generation of energy. Blades with tubercles were also found to generate power effectively at both high and low wind speeds: compared with blades with smooth leading edges, blades with leading-edge tubercles demonstrated enhanced performance. The utility of tubercles in improving the performance of engineering systems comes directly from the examination of biological structures. The versatility of designs with bio-inspired properties offers promise in many flow-design applications. As these designs become more and more advanced, biomimetic technologies become crucial to the next development of high-performance machinery and equipment as different methods of improving efficiency are explored. See also Biomimicry References External links Other examples of biomimicry Aerodynamics
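Since the studies above parametrize tubercles by amplitude and wavelength, the geometry itself is easy to sketch numerically. The following snippet generates a sinusoidally scalloped leading-edge profile of the kind used in such experiments; the chord, amplitude, and wavelength values are arbitrary illustrative numbers, not taken from any of the cited studies.

```python
import math

def tubercled_leading_edge(span=1.0, chord=0.2, amplitude=0.012,
                           wavelength=0.1, n_points=11):
    """Return (spanwise position, leading-edge offset, local chord)
    samples for a sinusoidally scalloped leading edge."""
    rows = []
    for i in range(n_points):
        y = span * i / (n_points - 1)
        # Leading edge oscillates about the nominal straight edge:
        # crests (positive offset) lengthen the local chord,
        # troughs shorten it.
        x_le = amplitude * math.cos(2 * math.pi * y / wavelength)
        rows.append((y, x_le, chord + x_le))
    return rows

for y, x_le, c in tubercled_leading_edge():
    print(f"y={y:.2f} m  offset={x_le:+.4f} m  local chord={c:.4f} m")
```

Sweeping the amplitude and wavelength parameters of such a profile is how the trade-off described above, gradual stall and higher post-stall lift versus a lower pre-stall lift slope, is typically explored.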
Tubercle effect
[ "Chemistry", "Engineering" ]
1,732
[ "Aerospace engineering", "Aerodynamics", "Fluid dynamics" ]
39,082,282
https://en.wikipedia.org/wiki/Bony%E2%80%93Brezis%20theorem
In mathematics, the Bony–Brezis theorem, due to the French mathematicians Jean-Michel Bony and Haïm Brezis, gives necessary and sufficient conditions for a closed subset of a manifold to be invariant under the flow defined by a vector field, namely at each point of the closed set the vector field must have non-positive inner product with any exterior normal vector to the set. A vector is an exterior normal at a point of the closed set if there is a real-valued continuously differentiable function maximized locally at the point with that vector as its derivative at the point. If the closed subset is a smooth submanifold with boundary, the condition states that the vector field should not point outside the subset at boundary points. The generalization to non-smooth subsets is important in the theory of partial differential equations. The theorem had in fact been previously discovered by Mitio Nagumo in 1942 and is also known as the Nagumo theorem. Statement Let F be a closed subset of a C² manifold M and let X be a vector field on M which is Lipschitz continuous. The following conditions are equivalent: Every integral curve of X starting in F remains in F. (X(m), v) ≤ 0 for every exterior normal vector v at a point m in F. Proof To prove that the first condition implies the second, let c(t) be an integral curve with c(0) = x in F and dc/dt = X(c). Let g have a local maximum on F at x. Then g(c(t)) ≤ g(c(0)) for t small and positive. Differentiating, this implies that g′(x)⋅X(x) ≤ 0. To prove the reverse implication, since the result is local, it is enough to check it in Rⁿ. In that case X locally satisfies a Lipschitz condition |X(a) − X(b)| ≤ C·|a − b|. If F is closed, the squared distance function D(x) = d(x,F)² has the following differentiability property: D(x + h) = D(x) + min_z 2(x − z)⋅h + o(|h|), where the minimum is taken over the closest points z to x in F. To check this, let f_ε(h) = min 2(x − z)⋅h, where the minimum is taken over z in F such that d(x,z) ≤ d(x,F) + ε. Since f_ε is homogeneous in h and increases uniformly to f_0 on any sphere, f_0(h) ≥ f_ε(h) ≥ f_0(h) − C(ε)|h|, with a constant C(ε) tending to 0 as ε tends to 0. The differentiability property follows from this because, taking z to be a closest point to x, D(x + h) ≤ |x + h − z|² = D(x) + 2(x − z)⋅h + |h|², and similarly, if |h| ≤ ε, taking z to be a closest point to x + h, D(x + h) ≥ D(x) + f_{2ε}(h) ≥ D(x) + f_0(h) − C(2ε)|h|. The differentiability property implies that the derivative from the right of D(c(t)) equals min 2(c(t) − z)⋅X(c(t)), minimized over closest points z to c(t). For any such z, 2(c(t) − z)⋅X(c(t)) = 2(c(t) − z)⋅X(z) + 2(c(t) − z)⋅(X(c(t)) − X(z)). Since −|y − c(t)|² has a local maximum on F at y = z, c(t) − z is an exterior normal vector at z. So the first term on the right hand side is non-positive. The Lipschitz condition for X implies the second term is bounded above by 2C⋅D(c(t)). Thus the derivative from the right of e^(−2Ct)·D(c(t)) is non-positive, so it is a non-increasing function of t. Thus if c(0) lies in F, D(c(0)) = 0 and hence D(c(t)) = 0 for t > 0, i.e. c(t) lies in F for t > 0. References Literature , Theorem 8.5.11 See also Barrier certificate Ordinary differential equations Dynamical systems Manifolds
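A quick numerical illustration of the theorem (a sketch with invented data, not part of the original statement): take F to be the closed unit disc in R² and X(x, y) = (−y − x/10, x − y/10). At a boundary point, every exterior normal is a positive multiple of (x, y), and X⋅(x, y) = −(x² + y²)/10 ≤ 0, so the theorem predicts the disc is invariant. Forward-Euler integration confirms trajectories starting in F stay in F.

```python
def X(x, y):
    """Inward-spiralling rotation field; inner product with the
    outward normal (x, y) is -(x**2 + y**2)/10 <= 0."""
    return (-y - 0.1 * x, x - 0.1 * y)

def stays_in_disc(x, y, t_max=20.0, dt=1e-3):
    """Crude forward-Euler check that the trajectory from (x, y)
    never leaves the closed unit disc (up to integrator tolerance)."""
    for _ in range(int(t_max / dt)):
        vx, vy = X(x, y)
        x, y = x + dt * vx, y + dt * vy
        if x * x + y * y > 1.0 + 1e-6:
            return False
    return True

print(stays_in_disc(0.5, 0.5))  # True: spirals inward from the interior
print(stays_in_disc(0.0, 0.9))  # True: stays inside near the boundary
```

The exterior-normal condition is exactly what makes the squared-distance argument in the proof work: along the flow, D(c(t)) cannot increase faster than the Lipschitz correction allows.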
Bony–Brezis theorem
[ "Physics", "Mathematics" ]
709
[ "Theorems in differential geometry", "Space (mathematics)", "Topological spaces", "Topology", "Mechanics", "Manifolds", "Theorems in geometry", "Dynamical systems" ]
39,082,552
https://en.wikipedia.org/wiki/Barsoum%20elements
Barsoum elements are a finite element analysis technique used in fracture analysis to determine the stress intensity factor of a crack. The technique was introduced by R. Barsoum in 1976. Technique In this method, the usual isoparametric six-node triangular or eight-node quadrilateral elements are employed, with the mid-side nodes on two adjacent sides shifted towards the corner node to the quarter-point location. For these locations of the mid-side nodes, the Jacobian becomes singular at the corner node, thus making the displacement derivatives infinite, so that the stresses and strains become infinite as well. It can be shown that the variation of stresses along the two sides of the element then follows the 1/√r crack-tip singularity. On the other hand, if all three nodes on the side of an eight-node quadrilateral element are collapsed to one node (given the same node number), then the stress or strain varies as 1/√r along any radial line emanating from the crack tip, provided all the mid-side nodes adjacent to the crack tip are at quarter-point locations. From the displacement field solution, the stress intensity factor K1 for a mode I crack can be calculated from the displacements VB and VC in the y direction at nodes behind the crack tip. It has been demonstrated that K1 found by this method is within 2% of theoretical solutions. The accuracy of the finite element calculation can be improved further if the neighboring elements are also modeled to contain the terms describing the stresses for a crack whose tip lies outside the element. References Finite element method Fracture mechanics
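A commonly quoted plane-strain form of this quarter-point displacement-correlation relation in the fracture-mechanics literature is given below; the notation and constants here are supplied as an assumption, since conventions vary between references and the article's own expression is not reproduced above.

```latex
K_I \;=\; \frac{2G}{\kappa + 1}\,\sqrt{\frac{2\pi}{L}}\;\bigl(4V_B - V_C\bigr),
\qquad \kappa = 3 - 4\nu \ \text{(plane strain)}
```

Here G is the shear modulus, ν Poisson's ratio, L the length of the quarter-point element along the crack face, V_B the y-displacement at the quarter-point node, and V_C the y-displacement at the corner node behind the crack tip.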
Barsoum elements
[ "Materials_science", "Engineering" ]
301
[ "Structural engineering", "Materials degradation", "Materials science", "Fracture mechanics" ]
39,087,496
https://en.wikipedia.org/wiki/Crack%20growth%20resistance%20curve
In fracture mechanics, a crack growth resistance curve shows the energy required for crack extension as a function of crack length in a given material. For materials that can be modeled with linear elastic fracture mechanics (LEFM), crack extension occurs when the applied energy release rate G exceeds the material's resistance to crack extension R. Conceptually, G can be thought of as the energetic gain associated with an additional infinitesimal increment of crack extension, while R can be thought of as the energetic penalty of an additional infinitesimal increment of crack extension. At any moment in time, if G > R then crack extension is energetically favorable. A complication to this process is that in some materials, R is not a constant value during the crack extension process. A plot of crack growth resistance R versus crack extension a is called a crack growth resistance curve, or R-curve. A plot of energy release rate G versus crack extension for a particular loading configuration is called the driving force curve. The nature of the applied driving force curve relative to the material's R-curve determines the stability of a given crack. The usage of R-curves in fracture analysis is a more complex, but more comprehensive, failure criterion compared to the common criterion that fracture occurs when G ≥ G_c, where G_c is simply a constant value called the critical energy release rate. An R-curve-based failure analysis takes into account the notion that a material's resistance to fracture is not necessarily constant during crack growth. R-curves can alternatively be discussed in terms of stress intensity factors K rather than energy release rates G, in which case the R-curve can be expressed as the fracture toughness K_R as a function of crack length a. Types of R-Curves Flat R-Curves The simplest case of a material's crack resistance curve is a material which exhibits a "flat R-curve" (R is constant with respect to a). The shape of the R-curve depends on the material properties and, more importantly, on the crack-tip plasticity; the reduction in stress intensity factor due to large deformation can result in a flat R-curve. In materials with flat R-curves, as a crack propagates, the resistance to further crack propagation remains constant, and the common failure criterion G ≥ G_c is largely valid. In these materials, if G increases as a function of a (which is the case in many loading configurations and crack geometries), then as soon as the applied G exceeds R the crack will grow unstably to failure without ever halting. Physically, the independence of R from a indicates that in these materials the phenomena which are energetically costly during crack propagation do not evolve during crack propagation. This tends to be an accurate model for perfectly brittle materials such as ceramics, in which the principal energetic cost of fracture is the development of new free surfaces on the crack faces. The character of the energetic cost of the creation of new surfaces remains largely unchanged regardless of how far the crack has propagated from its initial length. Rising R-Curves Another category of R-curve that is common in real materials is a "rising R-curve" (R increases as a increases). In materials with rising R-curves, as a crack propagates, the resistance to further crack propagation increases, and a higher and higher applied G is required to achieve each subsequent increment of crack extension Δa.
As such, it can be technically challenging in practice to define a single value that quantifies resistance to fracture in these materials (i.e. G_c or K_c), as the resistance to fracture rises continuously as any given crack propagates. Materials with rising R-curves can also more easily exhibit stable crack growth than materials with flat R-curves, even if G strictly increases as a function of a. If at some moment in time a crack exists with initial length a₀ and an applied energy release rate G which infinitesimally exceeds the R-curve at this crack length, then this material would immediately fail if it exhibited flat R-curve behavior. If instead it exhibits rising R-curve behavior, then the crack has an added criterion for crack growth: the instantaneous slope of the driving force curve must be greater than the instantaneous slope of the crack resistance curve, or else it is energetically unfavorable to grow the crack further. If G is infinitesimally greater than R but the slope of the driving force curve is less than the slope of the R-curve, then the crack will grow by an infinitesimally small increment Δa such that G = R again, and crack growth will arrest. If the applied crack driving force were gradually increased over time (through increasing the applied force, for example), this would lead to stable crack growth in this material as long as the instantaneous slope of the driving force curve continued to be less than the slope of the crack resistance curve. Physically, the dependence of R on a indicates that in rising R-curve materials, the phenomena which are energetically costly during crack propagation evolve as the crack grows, in such a way as to accelerate energy dissipation during crack growth. This tends to be the case in materials which undergo ductile fracture, as it can be observed that the plastic zone at the crack tip increases in size as the crack propagates, indicating that an increasing amount of energy must be dissipated in plastic deformation for the crack to continue to grow. A rising R-curve can also sometimes be observed in situations where a material's fracture surface becomes significantly rougher as the crack propagates, leading to additional energy dissipation as additional free surface area is generated. In theory, R does not continue to increase to infinity as a increases, and instead will asymptotically approach some steady-state value after a finite amount of crack growth. It is usually not feasible to reach this steady-state condition, as it often requires very long crack extensions, and thus would require large testing specimen geometries (and thus high applied forces) to observe. As such, most materials with rising R-curves are treated as if R continually rises until failure. Falling R-Curves While far less common, some materials can exhibit falling R-curves (R decreases as a increases). In some cases, the material may initially exhibit rising R-curve behavior, reach a steady-state condition, and then transition into falling R-curve behavior. In a falling R-curve regime, as a crack propagates, the resistance to further crack propagation drops, and less and less applied G is required to achieve each subsequent increment of crack extension Δa. Materials experiencing these conditions would exhibit highly unstable crack growth as soon as any initial crack began to propagate.
Polycrystalline graphite has been reported to demonstrate falling R-curve behavior after initially exhibiting rising R-curve behavior, which is postulated to be due to the gradual development of a microcracking damage zone in front of the crack tip which eventually dominates after the phenomena leading to the initial rising R-curve behavior reach steady state. Effect of size and shape Size and geometry also play a role in determining the shape of the R curve. A crack in a thin sheet tends to produce a steeper R curve than a crack in a thick plate because there is a low degree of stress triaxiality at the crack tip in the thin sheet, while the material near the tip of the crack in the thick plate may be in plane strain. The R curve can also change at free boundaries in the structure. Thus, a wide plate may exhibit a somewhat different crack growth resistance behavior than a narrow plate of the same material. Ideally, the R curve, as well as other measures of fracture toughness, is a property only of the material and does not depend on the size or shape of the cracked body. Much of fracture mechanics is predicated on the assumption that fracture toughness is a material property. Testing ASTM developed a standard practice for determining R-curves to accommodate the widespread need for this type of data. While the materials to which this standard practice can be applied are not restricted by strength, thickness or toughness, the test specimens must be of sufficient size to remain predominantly elastic throughout the test. The size requirement is to ensure the validity of the linear elastic fracture mechanics calculations. Specimens of standard proportions are required, but size is variable, adjusted for the yield strength and toughness of the material considered. ASTM Standard E561 covers the determination of R-curves using middle cracked tension panel [M(T)], compact tension [C(T)], and crack-line-wedge-loaded [C(W)] specimens. While the C(W) specimen has gained substantial popularity for collecting KR curve data, many organizations still conduct wide panel, center cracked tension tests to obtain fracture toughness data. As with the plane-strain fracture toughness standard, ASTM E399, the planar dimensions of the specimens are sized to ensure that nominal elastic conditions are met. For the M(T) specimen, the width (W) and half crack size (a) must be chosen so that the remaining ligament is below net section yielding at failure. External links References Fracture mechanics
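The stability discussion above can be condensed into standard criteria. Writing G(a) for the applied driving force curve at fixed loading and R(a) for the resistance curve, this restates in symbols exactly the conditions described in the text rather than adding new ones:

```latex
\text{crack growth:}\quad G \ge R, \qquad
\text{stable growth:}\quad \left.\frac{\partial G}{\partial a}\right|_{\text{load}} < \frac{dR}{da}, \qquad
\text{unstable growth:}\quad \left.\frac{\partial G}{\partial a}\right|_{\text{load}} \ge \frac{dR}{da}.
```

For a flat R-curve, dR/da = 0, so any configuration in which G rises with a is unstable as soon as G reaches R; a rising R-curve permits an interval of stable growth until the tangency condition is met.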
Crack growth resistance curve
[ "Materials_science", "Engineering" ]
1,815
[ "Structural engineering", "Materials degradation", "Materials science", "Fracture mechanics" ]
39,090,181
https://en.wikipedia.org/wiki/Isotopes%20in%20medicine
A medical isotope is an isotope used in medicine. The first uses of isotopes in medicine were in radiopharmaceuticals, and this is still the most common use. However more recently, separated stable isotopes have come into use. Radioactive isotopes Radioactive isotopes are used in medicine for both treatment and diagnostic scans. The most common isotope used in diagnostic scans is Technetium-99m, used in approximately 85% of all nuclear medicine diagnostic scans worldwide. It is used for diagnoses involving a large range of body parts and diseases such as cancers and neurological problems. Another well-known radioactive isotope used in medicine is Iodine-131, which is used as a radioactive label for some radiopharmaceutical therapies or the treatment of some types of thyroid cancer. Non-radioactive isotopes Examples of non-radioactive medical isotopes are: Deuterium in deuterated drugs Carbon-13 used in liver function and metabolic tests References External links Radionuclide production simulator – IAEA Medicinal radiochemistry Chemicals in medicine
Isotopes in medicine
[ "Physics", "Chemistry" ]
216
[ "Medicinal radiochemistry", "Isotope stubs", "Isotopes", "Nuclear chemistry stubs", "Nuclear and atomic physics stubs", "Medicinal chemistry", "Nuclear physics", "Chemicals in medicine", "Medical isotopes" ]
39,090,245
https://en.wikipedia.org/wiki/Transient%20modelling
Transient modelling is a way of looking at a process with the primary criterion of time, observing the pattern of changes in the subject being studied over time. Its counterpart is steady state analysis, in which you might know only the starting and ending figures but do not understand the process by which they were derived. Transient models reveal the pattern of a process, which might be sinusoidal or another shape, and this understanding helps in designing a better system to manage that process. Transient models can be built in a spreadsheet with the ability to generate charts, or with any software that can handle input and output data and generate some sort of display. Transient modelling does not need a computer. It is a methodology that has worked for centuries, with observers noting patterns of change against time, analysing the result, and proposing improved design solutions. A simple example is a garden water tank. This is topped up by rainfall from the roof, but when the tank is full, the remaining water goes to the drain. When the gardener draws water off, the level falls. If the garden is large and the summer is hot, a steady state will occur in which the tank is nearly always empty in summer. If the season is wet, the garden is getting water from the sky and the tank is not being emptied sufficiently, so in steady state it will be observed to be always full. If the gardener has a way of observing the level of water in the tank, a record of daily rainfall and temperatures, and a precise measure of the amount of water being drawn off every day, the numbers and the dates can be recorded in a spreadsheet at daily intervals. After enough samples are taken, a chart can be developed to model the rise and fall pattern over a year, or over two years. With a better understanding of the process, it might emerge that a 200-litre water tank would run out 20–25 days a year, but a 400-litre water tank would never run out, and a 300-litre tank would run out only 1–2 days a year; that would be an acceptable risk, making the 300-litre tank the most economical solution. One of the best examples of transient modelling is transient climate simulation: the analysis of ice cores in glaciers to understand climate change. Ice cores have thousands of layers, each of which represents a winter season of snowfall, and trapped in these are bubbles of air, particles of space dust and pollen which reveal climatic data of the time. By mapping these to a time scale, scientists can analyse the fluctuations over time and make predictions for the future. Transient modelling is the basis of weather forecasting, of managing ecosystems, rail timetabling, managing the electricity grid, setting the national budget, floating currency, understanding traffic flows on a freeway, solar gains on glass-fronted buildings, or even of checking the day-to-day transactions of one's monthly bank statement. With the transient modelling approach, you understand the whole process better when the inputs and outputs are graphed against time. References GroundwaterSoftware.com - Steady State vs. Transient Modeling and FEFLOW Well Test by Design: Transient Modelling to Predict Behaviour in Extreme Wells Conceptual modelling
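The water-tank example lends itself to a minimal daily-balance simulation. The sketch below is illustrative only: the rainfall statistics, roof area, and daily draw-off are hypothetical numbers invented for the example, not figures from the text, and a real model would use the gardener's recorded data instead.

```python
import random

def simulate_tank(capacity_l, days=365, roof_m2=50.0, draw_l=60.0, seed=1):
    """Daily water balance for a garden tank fed by roof runoff.

    Returns the number of days on which the tank ran dry.
    """
    rng = random.Random(seed)
    level = capacity_l  # assume the tank starts full
    dry_days = 0
    for day in range(days):
        # Hypothetical seasonal factor: drier in summer, wetter otherwise.
        if 150 < day < 270:
            season = 1.0 - 0.8 * rng.random()
        else:
            season = 1.0 + 0.8 * rng.random()
        rain_mm = max(0.0, rng.gauss(2.0, 3.0)) * season
        inflow = rain_mm * roof_m2                 # 1 mm over 1 m^2 = 1 litre
        level = min(capacity_l, level + inflow)    # overflow goes to the drain
        level -= draw_l                            # gardener draws water off
        if level < 0:
            dry_days += 1
            level = 0.0
    return dry_days

for size in (200, 300, 400):
    print(f"{size} L tank: dry on {simulate_tank(size)} days")
```

Running the balance for each candidate tank size and charting the level over time is precisely the transient-modelling exercise described above: the time pattern, not just the end state, drives the sizing decision.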
Transient modelling
[ "Physics", "Mathematics", "Engineering" ]
643
[ "Systems engineering", "Mechanics", "Dynamical systems" ]
37,612,594
https://en.wikipedia.org/wiki/Nielsen%E2%80%93Ninomiya%20theorem
In lattice field theory, the Nielsen–Ninomiya theorem is a no-go theorem about placing chiral fermions on a lattice. In particular, under very general assumptions such as locality, hermiticity, and translational symmetry, any lattice formulation of chiral fermions necessarily leads to fermion doubling, where there are the same number of left-handed and right-handed fermions. It was first proved by Holger Bech Nielsen and Masao Ninomiya in 1981 using two methods, one that relied on homotopy theory and another that relied on differential topology. Another proof, provided by Daniel Friedan, uses differential geometry. The theorem was also generalized to any regularization scheme of chiral theories. One consequence of the theorem is that the Standard Model cannot be put on a lattice. Common methods for overcoming the fermion doubling problem include modified fermion formulations such as staggered fermions, Wilson fermions, or Ginsparg–Wilson fermions, among others. Lattice regularization The theorem was originally formulated in the Hamiltonian formulation of lattice field theory, where time is continuous but space has been discretized. Consider a theory with a Hamiltonian that is bilinear in the fermion fields, coupling them through a kernel depending on the lattice sites, together with a charge Q. The Nielsen–Ninomiya theorem states that there is an equal number of left-handed and right-handed fermions for every set of charges if the following assumptions are met: Translational invariance: the kernel depends only on the separation between lattice sites. Locality: the kernel must vanish fast enough at large separations to have a Fourier transform with continuous derivatives. Hermiticity: for the Hamiltonian to be Hermitian, the kernel must also be Hermitian. The charge Q is defined locally through some local charge density. The charge is quantized. The charge is exactly conserved. This theorem trivially holds in odd dimensions, since odd-dimensional theories do not admit chiral fermions due to the absence of a valid chirality operator, that is, an operator that anticommutes with all gamma matrices. This follows from the properties of Dirac algebras in odd dimensions. The Nielsen–Ninomiya theorem has also been proven in the Euclidean formulation. For example, consider a weaker version of the theorem which assumes a less generic action, bilinear in the fermion fields with an inverse propagator acting on the right-handed projection of the fields, where P_R = (1 + γ₅)/2 is the right-handed projection operator, together with three assumptions: Translational invariance: the inverse propagator depends only on the separation between lattice sites. Hermiticity: the inverse propagator must satisfy the conjugation property required for the action to be Hermitian. Locality: the inverse propagator decreases fast enough so that its Fourier transform exists and all its derivatives are continuous. If all these conditions are met then there is once again an equal number of left-handed and right-handed fermions. Proof summary The simplified Euclidean version of the theorem has a much shorter proof, relying on a key theorem from differential topology known as the Poincaré–Hopf theorem. It can be summarized as follows. From the locality assumption, the Fourier transform of the inverse propagator must be a continuous vector field on the Brillouin zone whose isolated zeros correspond to the different species of particles in the theory. Around each zero the vector field behaves either as a saddle singularity or as a sink/source singularity. This is captured by the index of the vector field at the zero, which takes the values −1 and +1 in the two cases. It can be shown that the two cases determine whether the particle is left-handed or right-handed. The Poincaré–Hopf theorem states that the sum of the indices of a vector field on a manifold is equal to the Euler characteristic of that manifold.
In this case, the vector field lives on the Brillouin zone, which is topologically a 4-torus and therefore has Euler characteristic zero. Therefore, there must be an equal number of left-handed and right-handed particles. General regularization schemes The Nielsen–Ninomiya theorem can be generalized to all possible regularization schemes, not just lattice regularization. This general no-go theorem states that no regularized chiral fermion theory can satisfy all of the following conditions: Invariance under at least the global part of the gauge group. Different numbers of left-handed and right-handed Weyl species for a given combination of generators. The correct chiral anomaly. An action bilinear in the Weyl fields. A short proof by contradiction points out that the Noether current obtained from some of the assumptions is conserved, while other assumptions imply that it is not. Every regularization scheme must violate one or more of the conditions. For lattice regularization, the Nielsen–Ninomiya theorem leads to the same result under even weaker assumptions, where the requirement for the correct chiral anomaly is replaced by an assumption of locality of interactions. Dimensional regularization depends on the particular implementation of chirality. If the γ₅ matrix is defined to anticommute with the γ-matrices in all of the continued dimensions, this leads to a vanishing chiral anomaly, while using the 't Hooft–Veltman prescription, in which γ₅ anticommutes only with the four-dimensional γ-matrices, breaks global invariance. Meanwhile, Pauli–Villars regularization breaks global invariance since it introduces a regulator mass. See also Lattice gauge theory References Lattice field theory Fermions Theorems in quantum mechanics No-go theorems
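A standard textbook illustration of the doubling the theorem enforces (supplementary to the article's own argument) is the naive discretization of the free Dirac operator. Replacing derivatives by symmetric differences on a lattice of spacing a gives the momentum-space inverse propagator

```latex
D(p) \;=\; \frac{i}{a}\sum_{\mu}\gamma_{\mu}\,\sin(p_{\mu}a),
\qquad p_{\mu} \in \left(-\tfrac{\pi}{a},\,\tfrac{\pi}{a}\right],
```

which vanishes whenever every component p_μ equals 0 or π/a. On a four-dimensional Brillouin zone this yields 2⁴ = 16 fermion species, half of each chirality, exactly the equal left/right counting that the index argument above requires.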
Nielsen–Ninomiya theorem
[ "Physics", "Materials_science", "Mathematics" ]
1,024
[ "Theorems in quantum mechanics", "No-go theorems", "Equations of physics", "Fermions", "Quantum mechanics", "Theorems in mathematical physics", "Subatomic particles", "Condensed matter physics", "Matter", "Physics theorems" ]
37,613,671
https://en.wikipedia.org/wiki/Nagata%27s%20compactification%20theorem
In algebraic geometry, Nagata's compactification theorem, introduced by Masayoshi Nagata in 1962, implies that every abstract variety can be embedded in a complete variety, and more generally shows that a separated morphism of finite type to a Noetherian scheme S can be factored into an open immersion followed by a proper morphism. Nagata's original proof used the older terminology of Zariski–Riemann spaces and valuation theory, which sometimes made it hard to follow. Deligne showed, in unpublished notes expounded by Conrad, that Nagata's proof can be translated into scheme theory and that the condition that S is Noetherian can be replaced by the much weaker condition that S is quasi-compact and quasi-separated. A further scheme-theoretic proof of Nagata's theorem was later given. An important application of Nagata's theorem is in defining the analogue in algebraic geometry of cohomology with compact support, or more generally higher direct image functors with proper support. The idea is that given a compactifiable morphism f, one defines the functor Rf! by choosing a factorization f = p ∘ j into an open immersion j followed by a proper morphism p, and then setting Rf! = Rp∗ ∘ j!, where j! is the extension by zero functor. One then shows the independence of the definition from the choice of compactification. In the context of étale sheaves, this idea was carried out by Deligne in SGA 4, Exposé XVII. In the context of coherent sheaves, the statements are more delicate since, for an open immersion j, the inverse image functor does not usually admit a left adjoint. Nonetheless, j! exists as a pro-left adjoint, and Deligne was able to define the functor Rf! as valued in the pro-derived category of coherent sheaves. References Stacks Project - Nagata compactification - See Lemma 38.33.8 first, then backtrack Stacks Project - Derived lower shriek via compactifications Stacks Project - Compactly supported cohomology for coherent modules Theorems in algebraic geometry
Nagata's compactification theorem
[ "Mathematics" ]
410
[ "Theorems in algebraic geometry", "Theorems in geometry" ]
37,620,134
https://en.wikipedia.org/wiki/Russian%20Academy%20of%20Engineering
The Russian Academy of Engineering (RAE) is a public academy of sciences which unites leading Russian and foreign scientists, engineers, scientific-research organizations, higher educational institutions and enterprises. The Russian Academy of Engineering is the legal successor of the Engineering Academy of the USSR. Mission uniting the creative potential of Russian scientists and engineers; developing and efficiently implementing intellectual potential in the field of engineering activities; developing and supporting the most important and promising research and innovation programmes; creating and applying fundamentally new types of technology, techniques and materials; accelerating scientific-technical progress in the key areas of development of the Russian economy. Activity The Russian Academy of Engineering (RAE) includes 28 sections, which cover the key branches of industry, and a range of councils on various scientific-technical issues. In December 2020 the RAE Chinese Centre was opened in Beijing in the framework of a joint meeting of the RAE Presidium and the branch of the Chinese Academy of Engineering. Periodicals The Academy is the founder of 23 branch and regional magazines and newspapers. Members The Academy has over a thousand individual members from 40 countries of the world. References External links Official website Russian National Academies National academies of engineering
Russian Academy of Engineering
[ "Engineering" ]
239
[ "National academies of engineering" ]
37,622,210
https://en.wikipedia.org/wiki/Accessible%20housing
Accessible housing refers to the construction or modification of housing (such as through renovation or home modification) to enable independent living for persons with disabilities. Accessibility is achieved through architectural design, but also by integrating accessibility features such as modified furniture, shelves and cupboards, or even electronic devices in the home. Canada In Canada, Flexhousing is a concept that encourages homeowners to make renovations that modify their house over time to meet changing accessibility needs. The concept supports the goals of enabling "homeowners to occupy a dwelling for longer periods of time, perhaps over their entire lifetimes, while adapting to changing circumstances and meeting a wide range of needs"; Universal Housing in the United States and Lifetime Homes in the United Kingdom are similar concepts. United Kingdom Great Britain has the most widespread application of home access requirements to date. In 1999, Parliament passed Part M, an amendment to residential building regulations requiring basic access in all new homes, but even so, in a survey by YouGov in 2019 only 21% of respondents said a wheelchair user would reasonably be able to access all areas of their home. United States In the United States, the 1988 Amendments to the Fair Housing Act added people with disabilities, as well as familial status, to the classes already protected by law from discrimination (race, color, gender, religion, creed, and country of origin). Among the protections for people with disabilities in the 1988 Amendments are seven construction requirements for all multifamily buildings of more than four units first occupied after March 13, 1991. These seven requirements are as follows: An accessible building entrance on an accessible route, Accessible common and public use areas, Doors usable by a person in a wheelchair, Accessible route into and through the dwelling unit, Light switches, electrical outlets, thermostats and other environmental controls in accessible locations, Reinforced walls in bathrooms for later installation of grab bars, and Usable kitchens and bathrooms. Access is typically defined within the limits of what a person sitting in a wheelchair is able to reach with arm movement only, with minimal shifting of the legs and torso. Lighting and thermostat controls should not be above, and power outlets should not be below, the reach of a person in a wheelchair. Sinks and cooking areas typically need to be designed without cupboards below them, to permit a wheelchair user's legs to fit underneath, and countertops may be of reduced height to accommodate a sitting rather than standing user. In some cases two food preparation areas may be combined into a single kitchen to permit both standing and wheelchair users. In spite of these advancements, the housing types where most people in the United States reside – single-family homes – are not covered by the Americans with Disabilities Act, the Fair Housing Act, or any other federal law, with the exception of the small percentage of publicly funded homes impacted by Section 504 of the Rehabilitation Act. As a result, the great majority of new single-family homes replicate the barriers in existing homes. Renovations for accessibility Homeowners may be challenged by the need to find renovators familiar with accessible design issues.
The federal government of Canada and the provincial governments work jointly to share the cost of offering reimbursement programs for homeowners in need of house renovations for accessibility. These programs improve the ability of homeowners to fund modifications to their existing houses. Adaptations and accommodations Many ranch style homes and manufactured homes use a main floor slightly raised above ground level, but have an overall flat layout with either a crawlspace or slightly raised basement below for plumbing, electrical, and heating systems. These homes can be relatively easily modified to accommodate wheelchairs and walkers, with the installation of a long low-rise ramp outside the building, up to the house entrance, placed over the existing stairway. This ramp can then be removed at a later time, reverting to the stairway entrance if the accessible entrance is no longer necessary. Split level homes tend to be designed with multiple internal stairways and half-floor landings inside the building. There may be an entrance area inside the building at ground level, with stairs inside the entrance that immediately go up and down from the ground level. These homes are difficult to accommodate inexpensively since there is often no space available inside the structure to install long sloping wheelchair ramps to access the various floors. It may be possible to retrofit stair lifts into the stairwells or wheelchair lifts into balconies near the stairwell. Multi-story homes can sometimes be accommodated by installing a private residential elevator, which is usually much less expensive and has fewer design and layout requirements than a full commercial elevator. Homebuilders can in some cases plan for a future residential elevator by designing closet spaces on each floor stacked vertically with the same dimensions and location. At a later time the closet floors and ceilings are removed and the elevator equipment is installed into the open shaft. Aging in place and accessibility A growing trend among senior citizens is to "age in place", reflecting a desire to retain independence for as long as possible. Adaptations for seniors take into account the most common physical impairments affecting the elderly. For example, a common cause of serious injury for seniors is falling inside the home. According to the U.S. Centers for Disease Control and Prevention, falls are the leading cause of fatal injury and the most common cause of nonfatal trauma-related hospital admissions among older adults. Falls result in more than 2.8 million injuries treated in emergency departments annually, including over 800,000 hospitalizations and more than 27,000 deaths. In 2015, the total cost of fall injuries was $50 billion; Medicare and Medicaid shouldered 75% of these costs. The financial toll of older adult falls is expected to increase as the population ages and may reach $67.7 billion by 2020. Bathrooms are commonly believed to be a particularly hazardous location. Adding handrails and grab bars throughout the home, particularly in bathrooms and along stairways, helps reduce the risk of falling. Other adaptations that improve accessibility for seniors include: easy-to-reach work and storage areas in the kitchen; reaching devices to grab objects on high shelves; lever handles on doors; accessible toilets; toilet seat risers; walk-in showers; and bathtub and shower seats.
Alzheimer disease and housing adaptations Alzheimer disease presents specific challenges for caregivers, who need to make the home as accessible as possible to the elderly resident, while keeping safety features in mind. Removable stove switch knobs, locks on kitchen cabinets, electric kettles with automatic shut-off, and adding lighting to eliminate shadows in the house can all help caregivers to reduce dangers to the person with Alzheimer disease. Other features that can improve the well-being of the elderly person include marking doors with conspicuous and distinct signs or objects, such as ribbons or wreaths, which can assist memory. Adding a cot or bed to the main floor of the house, to allow the elderly person to rest without climbing stairs to a bedroom, can be helpful to the Alzheimer patient. Furniture and clutter can also be removed to make the house safer for an elderly person inclined to pace or wander. See also Accessible bathtub Accessible toilet Americans with Disabilities Act of 1990 Bathroom emergency pullstring Grab bar Transfer bench Unisex public toilet Visitability References Accessibility Architecture Housing
Accessible housing
[ "Engineering" ]
1,475
[ "Construction", "Accessibility", "Design", "Architecture" ]
28,195,097
https://en.wikipedia.org/wiki/Admiralty%20scaffolding
Admiralty scaffolding, also known as Obstacle Z.1 or sometimes simply given as beach scaffolding or anti-tank scaffolding, was a British design of anti-tank and anti-boat obstacle made of tubular steel. It was widely deployed on beaches of southern England, eastern England and South West England during the invasion crisis of 1940-1941. Scaffolding was also used, though more sparingly, inland. Design and use Of a number of similar designs, by far the most common was designated obstacle Z.1. This design comprised upright tubes high and apart; these were connected by up to four horizontal tubes. Each upright was braced by a pair of diagonal tubes, at about 45°, to the rear. wide sections were assembled and then carried to the sea to be placed in position at the half tide mark as an obstacle to boats. However, trials found that a 250-ton barge at or an 80-ton trawler at would pass through the obstacle as if it were not there, and a trawler easily pulled out one bay with an attached wire rope. Tests in October 1940 confirmed that tanks could only break through with difficulty; as a result, Z.1 was adopted as an anti-tank barrier for beaches thought suitable for landing tanks. As an anti-tank barrier it was placed at or just above the high water point, where it would be difficult for tanks to get enough momentum to break through the barrier. In some places, two sets of scaffolding were set up, one in the water against boats and one at high water against tanks. The problem of securing the barriers on sand was overcome by the development of the "sword picket" by Stewarts & Lloyds – this device was later known at the Admiralty as the "Wallace Sword". Barriers varying in length from a couple of hundred feet to three miles were constructed, consuming 50% of Britain's production of scaffolding steel at an estimated cost of £6,600 per mile (equivalent to £ today). Despite this, many miles of Admiralty scaffolding were erected using more than of scaffolding tube. After the war, the scaffolding got in the way of swimmers and was subsequently removed for scrap; remaining traces are very rare, but are occasionally revealed by storms. See also British anti-invasion preparations of World War II British hardened field defences of World War II References Notes General references Collections Further reading Anti-tank obstacles Area denial weapons United Kingdom home front during World War II
Admiralty scaffolding
[ "Engineering" ]
503
[ "Anti-tank obstacles", "Area denial weapons", "Military engineering" ]
28,200,503
https://en.wikipedia.org/wiki/Cruachan%20Power%20Station
The Cruachan Power Station (also known as the Cruachan Dam) is a pumped-storage hydroelectric power station in Argyll and Bute, Scotland, UK. The scheme can provide 440MW of power and produced 705GWh in 2009. The turbine hall is located inside Ben Cruachan, and the scheme moves water between Cruachan Reservoir and Loch Awe, a height difference of . It is one of only four pumped storage power stations in the United Kingdom, and is capable of providing a black start capability to the National Grid. Construction began in 1959 to coincide with the Hunterston A nuclear power station in Ayrshire. Cruachan uses cheap electricity generated at night to pump water to the higher reservoir, which can then be released during the day to provide power as necessary. The power station is open to visitors, and around 50,000 tourists visit it each year. Location The power station is on the A85 road, about west of Dalmally, on a branch of Loch Awe leading to the River Awe, which is the outflow from the loch, at its north west corner. There is a seasonally open Falls of Cruachan railway station nearby. History Construction commenced in 1959, and the power station was opened by Queen Elizabeth II on 15 October 1965. The concept was designed by Sir Edward MacColl, who died before it opened. The civil engineering design of the scheme was carried out by James Williamson & Partners of Glasgow, and the main project contractors were William Tawse of Aberdeen and Edmund Nuttall of Camberley. Consulting electrical engineers were Merz & McLellan of Newcastle upon Tyne. At the peak of the construction, there were around 4,000 people working on the project. Thirty-six men died in the construction of the power station and dam. The cost of the scheme was . Cruachan was one of the first reversible pumped-storage systems, where the same turbines are used as both pumps and generators. Previous pumped-storage systems used separate pumps with a network of pipes to return water to the upper reservoir, making them more expensive to build than conventional hydroelectric systems. Cruachan is pre-dated by the smaller Lünerseewerk (Austria, 1958) and the Ffestiniog Power Station (Wales, 1963). It is one of four pumped storage schemes in the United Kingdom. Its construction was linked to that of Hunterston A nuclear power station, to store surplus night-time nuclear-generated electrical energy. The power station was originally operated by the North of Scotland Hydro-Electric Board, before being transferred to the South of Scotland Electricity Board. It was owned by ScottishPower from the privatisation of Britain's electricity industry in 1990 until Drax Group purchased it along with other ScottishPower assets on 1 January 2019. Maintenance of the penstocks, which formerly required them to be drained, is now done using a remotely operated underwater vehicle. To commemorate the 50th anniversary of the station's opening, a 2015 BBC radio documentary Inside the Rock described its construction. Design The Cruachan station temporarily stores energy at times of low demand, and releases it at times of high demand, when electricity prices are higher, reducing the maximum power that must be provided by other power stations. It is also used to cope with sudden surges in the demand for electricity, such as at the end of popular television programmes. Despite the use of some rainwater, Cruachan is not a net generator of electricity: it uses more energy for pumping water and spinning its turbines than it generates.
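To make the storage arithmetic concrete, the recoverable energy of any pumped-storage scheme follows from the gravitational potential energy E = ρ·g·h·V of the raised water. The sketch below is purely illustrative: the head, usable volume and efficiency in it are hypothetical placeholder values, not Cruachan's actual figures.

```python
# Illustrative pumped-storage energy estimate; the inputs are hypothetical,
# not the specifications of Cruachan.
RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def stored_energy_gwh(head_m, usable_volume_m3, turbine_efficiency=0.9):
    """Electrical energy recoverable from a given head and usable water volume."""
    energy_joules = RHO * G * head_m * usable_volume_m3 * turbine_efficiency
    return energy_joules / 3.6e12  # 1 GWh = 3.6e12 J

# Hypothetical example: 350 m head and 10 million cubic metres of usable water.
print(f"{stored_energy_gwh(350, 10e6):.1f} GWh recoverable")  # about 8.6 GWh
```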
Water is pumped from Loch Awe to the upper reservoir, above, during periods of low energy use (most often at night), and then released when needed. The upper reservoir also receives rainwater, supplemented by a network of of tunnels. Around 10% of the energy from the station is generated from rainwater; the rest is from the water pumped up from Loch Awe. The station is capable of generating of electricity from four turbines, two of and two of capacity, after two units were upgraded in 2005. It can go from standby to full production in two minutes, or 30 seconds if compressed air is used to start the turbines spinning. When the top reservoir is full, Cruachan can operate for 22 hours before the supply of water is exhausted. At full power, the turbines can pump at per second and generate at per second. The power station is required to keep a 12-hour water supply in order to provide a black start capability to the National Grid, to enable utilities to be restarted without access to external power. It began supplying grid inertia in 2020. In June 2021, Drax applied to build a further 600MW pumped storage system using the same reservoir, to a combined 1GW for seven hours of storage. Approval was granted in July 2023, and Drax intended to complete the project in 2030. Several financing modes are possible for the £500M project. Seismic surveys began in June 2024. Turbine hall There are four Francis turbines, which operate as both pumps and generators. These are housed in a cavern within Ben Cruachan, which is long, wide and high, with an adjacent transformer hall. The chamber is at a depth of around , and is within a hard granite intrusion. Construction of the power station required the removal of of rock. Access to the hall is gained by a road tunnel long, high and wide, which is warm and humid enough to allow tropical plants to grow. The transformers step up the voltage from 16kV to 275kV for transmission. Six oil-filled cables carry the electric current up a cable shaft to a point in front of the dam, and from there it is carried on pylons to Dalmally to the east. The staircase in the cable shaft has 1,420 steps, making it the tallest in Britain. After passing through the turbines, the water enters a surge chamber designed to balance fluctuations in the level of water before entering the tailrace tunnel to Loch Awe, which is in diameter and long. Reservoir The Cruachan Reservoir is above Loch Awe, and is contained by a dam long. The reservoir has a catchment area of , and is capable of holding of energy. Environmental restrictions meant that the dam had to have a "clean" appearance, so the operational equipment is housed within the dam wall. The penstocks are a pair of tunnels, long and inclined at 56° from the horizontal with a diameter, which then bifurcate into four steel lined long, diameter shafts. The penstocks underwent a major inspection and refurbishment in 2003. Tourist attraction The power station was listed by the conservation organisation DoCoMoMo as one of the sixty key monuments of post-war Scottish architecture. In November 2012, the power station received the Institution of Mechanical Engineers' Engineering Heritage Award. A visitor centre, refurbished in 2009, is sited by the outflow to Loch Awe and receives around 50,000 visitors a year. The power station houses a three-section modernist mural in wood, plastic and gold leaf by English artist Elizabeth Falconer. The mural includes Celtic crosses, pylons, mythical beasts, and men of industry.
The first section depicts the mythical Cailleach Bheur, who guarded the spring underneath the mountain. The middle panel commemorates 15 workers killed when the roof of the turbine hall collapsed, and the final section shows the station working. Popular culture In the Disney+ Star Wars series Andor episode six "The Eye", the Cruachan Power Station appeared as the Empire's supply hub on the planet Aldhani. References Sources External links Dams completed in 1965 Energy infrastructure completed in 1965 Pumped-storage hydroelectric power stations in the United Kingdom Hydroelectric power stations in Scotland Buildings and structures in Argyll and Bute Civil engineering
Cruachan Power Station
[ "Engineering" ]
1,589
[ "Construction", "Civil engineering" ]
28,202,032
https://en.wikipedia.org/wiki/Hybrid%20genome%20assembly
In bioinformatics, hybrid genome assembly refers to utilizing various sequencing technologies to achieve the task of assembling a genome from fragmented, sequenced DNA resulting from shotgun sequencing. Genome assembly presents one of the most challenging tasks in genome sequencing as most modern DNA sequencing technologies can only produce reads that are, on average, 25–300 base pairs in length. This is orders of magnitude smaller than the average size of a genome (the genome of the octoploid plant Paris japonica is 149 billion base pairs). This assembly is computationally difficult and has some inherent challenges, one of these challenges being that genomes often contain complex tandem repeats of sequences that can be thousands of base pairs in length. These repeats can be long enough that second generation sequencing reads are not long enough to bridge the repeat, and, as such, determining the location of each repeat in the genome can be difficult. Resolving these tandem repeats can be accomplished by utilizing long third generation sequencing reads, such as those obtained using the PacBio RS DNA sequencer. These sequences are, on average, 10,000–15,000 base pairs in length and are long enough to span most repeated regions. Using a hybrid approach to this process can increase the fidelity of assembling tandem repeats by being able to accurately place them along a linear scaffold and make the process more computationally efficient. Genome Assembly Classical Genome Assembly The term genome assembly refers to the process of taking a large number of DNA fragments that are generated during shotgun sequencing and assembling them into the correct order such as to reconstruct the original genome. Sequencing involves using automated machines to determine the order of nucleic acids in the DNA of interest (the nucleic acids in DNA are adenine, cytosine, guanine and thymine) to conduct genomic analyses involving an organism of interest. The advent of next generation sequencing has presented significant improvements in the speed, accuracy and cost of DNA sequencing and has made the sequencing of entire genomes a feasible process. There are many different sequencing technologies that have been developed by various biotechnology companies, each of which produce different sequencing reads in terms of accuracy and read length. Some of these technologies include Roche 454, Illumina, SOLiD, and IonTorrent. These sequencing technologies produce relatively short reads (50–700 bases) and have a high accuracy (>98%). Third-generation sequencing include technologies as the PacBio RS system which can produce long reads (maximum of 23kb) but have a relatively low accuracy. Genome assembly is normally done by one of two methods: assembly using a reference genome as a scaffold, or de novo assembly. The scaffolding approach can be useful if the genome of a similar organism has been previously sequenced. This process involves assembling the genome of interest by comparing it to a known genome or scaffold. De novo genome assembly is used when the genome to be assembled is not similar to any other organisms whose genomes have been previously sequenced. This process is carried out by assembling single reads into contiguous sequences (contigs) which are then extended in the 3' and 5' directions by overlapping other sequences. The latter is preferred because it allows for the conservation of more sequences. 
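The overlap-and-extend idea behind de novo assembly can be illustrated with a deliberately simplified greedy sketch that repeatedly merges the pair of fragments sharing the longest exact suffix–prefix overlap. Real assemblers work with overlap graphs or De Bruijn graphs and must tolerate sequencing errors; the reads below are invented solely for this example.

```python
def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of a that exactly matches a prefix of b."""
    best = 0
    for k in range(min_len, min(len(a), len(b)) + 1):
        if a.endswith(b[:k]):
            best = k
    return best

def greedy_assemble(reads):
    """Toy greedy assembly: merge the best-overlapping pair until no overlaps remain."""
    reads = list(reads)
    while len(reads) > 1:
        best = None  # (overlap_length, i, j)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j:
                    olen = overlap(a, b)
                    if best is None or olen > best[0]:
                        best = (olen, i, j)
        olen, i, j = best
        if olen == 0:
            break  # nothing overlaps any more; remaining reads stay as separate contigs
        merged = reads[i] + reads[j][olen:]
        reads = [r for k, r in enumerate(reads) if k not in (i, j)] + [merged]
    return reads

# Invented reads covering the sequence "ATTAGACCTGCCGGAATAC"
print(greedy_assemble(["ATTAGACCTG", "CCTGCCGGAA", "GGAATAC"]))
```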
The de novo assembly of DNA sequences is a very computationally challenging process and can fall into the NP-hard class of problems if the Hamiltonian-cycle approach is used. This is because millions of sequences must be assembled to reconstruct a genome. Within genomes, there are often tandem repeats of DNA segments that can be thousands of base pairs in length, which can cause problems during assembly. Although next generation sequencing technology is now capable of producing millions of reads, the assembly of these reads can cause a bottleneck in the entire genome assembly process. As such, extensive research is being done to develop new techniques and algorithms that streamline the genome assembly process, make it more computationally efficient, and increase the accuracy of the process as a whole. Hybrid Genome Assembly One hybrid approach to genome assembly involves supplementing short, accurate second-generation sequencing data (i.e. from IonTorrent, Illumina or Roche 454) with long, less accurate third-generation sequencing data (i.e. from PacBio RS) to resolve complex repeated DNA segments. The main limitation of single-molecule third-generation sequencing that prevents it from being used alone is its relatively low accuracy, which causes inherent errors in the sequenced DNA. Using solely second-generation sequencing technologies for genome assembly can miss or lead to the incomplete assembly of important aspects of the genome. Supplementation of third generation reads with short, high-accuracy second generation sequences can overcome these inherent errors and complete crucial details of the genome. This approach has been used to sequence the genomes of some bacterial species including a strain of Vibrio cholerae. Algorithms specific to this type of hybrid genome assembly have been developed, such as the PacBio corrected Reads algorithm. There are inherent challenges when utilizing sequence reads from various technologies to assemble a sequenced genome; data coming from different sequencers can have different characteristics. An example of this can be seen when using the overlap-layout-consensus (OLC) method of genome assembly, which can be difficult when using reads of substantially different lengths. Currently, this challenge is being overcome by using multiple genome assembly programs. An example of this can be seen in Goldberg et al., where the authors paired 454 reads with Sanger reads. The 454 reads were first assembled using the Newbler assembler (which is optimized to use short reads), generating pseudo reads that were then paired with the longer Sanger reads and assembled using the Celera assembler. Hybrid genome assembly can also be accomplished using the Eulerian path approach. In this approach, the length of the assembled sequences does not matter, as once a k-mer spectrum has been constructed the lengths of the reads are irrelevant. Practical approaches Hybrid error correction and de novo assembly of single-molecule sequencing reads The authors of this study developed a correction algorithm called the PacBio corrected Reads (PBcR) algorithm, which is implemented as part of the Celera assembly program. This algorithm calculates an accurate hybrid consensus sequence by mapping higher accuracy short reads (from second generation sequencing technologies) to individual lower accuracy long reads (from third-generation sequencing technologies).
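A crude way to picture this kind of correction is to let the accurate short reads "vote" on every position of a noisy long read. The sketch below skips the alignment step entirely by assuming the offsets of the short reads on the long read are already known (a real pipeline obtains them from a dedicated aligner) and then takes a simple per-position majority; it illustrates the general idea only and is not the PBcR algorithm itself.

```python
from collections import Counter

def correct_long_read(long_read, aligned_short_reads):
    """
    Toy consensus correction. aligned_short_reads is a list of (offset, sequence) pairs
    giving where each accurate short read sits on the noisy long read (offsets assumed
    known here). Every position keeps the majority base among the long read and the
    short reads covering it.
    """
    corrected = []
    for pos, base in enumerate(long_read):
        votes = Counter({base: 1})
        for offset, short_read in aligned_short_reads:
            if offset <= pos < offset + len(short_read):
                votes[short_read[pos - offset]] += 1
        corrected.append(votes.most_common(1)[0][0])
    return "".join(corrected)

# Invented example: the noisy long read differs from the true sequence
# "ACTTTACGTACGA" at positions 2 (T->G) and 9 (A->T).
noisy_long_read = "ACGTTACGTTCGA"
short_reads = [(0, "ACTTTAC"), (2, "TTTACGTA"), (6, "CGTACGA")]
print(correct_long_read(noisy_long_read, short_reads))  # -> "ACTTTACGTACGA"
```

The essential ingredient, here as in PBcR, is this mapping of accurate short reads onto each error-prone long read.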
This mapping allows for trimming and correction of the long reads to improve the read accuracy from as low as 80% to over 99.9%. In the best example of this application from this paper, the contig size was quintupled when compared to the assemblies using only second-generation reads. This study offers an improvement over the typical programs and algorithms used to assemble uncorrected PacBio reads. ALLPATHS-LG (another program that can assemble PacBio reads) uses the uncorrected PacBio reads to assist in scaffolding and for the closing of gaps in short sequence assemblies. Due to computational limitations, this approach limits assembly to relatively small genomes (maximum of 10Mbp). The PBcR algorithm allows for the assembly of much larger genomes with higher fidelity than using uncorrected PacBio reads. This study also shows that using a lower coverage of corrected long reads is similar to using a higher coverage of shorter reads; 13x PBcR data (corrected using 50x Illumina data) was comparable to an assembly constructed using 100x paired-end Illumina reads. The N50 for the corrected PBcR data was also longer than that of the Illumina data (4.65 Mbp compared to 3.32 Mbp for the Illumina reads). A similar trend was seen in the sequencing of the Escherichia coli JM221 genome: a 25x PBcR assembly had an N50 triple that of a 50x 454 assembly. Automated finishing of bacterial genomes This study employed two different methods for hybrid genome assembly: a scaffolding approach that supplemented currently available sequenced contigs with PacBio reads, as well as an error correction approach to improve the assembly of bacterial genomes. The first approach in this study started with high-quality contigs constructed from sequencing reads from second-generation (Illumina and 454) technology. These contigs were supplemented by aligning them to PacBio long reads to achieve linear scaffolds that were gap-filled using PacBio long reads. These scaffolds were then supplemented again, but using PacBio strobe reads (multiple subreads from a single contiguous fragment of DNA) to achieve a final, high-quality assembly. This approach was used to sequence the genome of a strain of Vibrio cholerae that was responsible for a cholera outbreak in Haiti. This study also used a hybrid approach to error-correction of PacBio sequencing data. This was done by utilizing high-coverage Illumina short reads to correct errors in the low-coverage PacBio reads. BLASR (a long read aligner from PacBio) was used in this process. In areas where the Illumina reads could be mapped, a consensus sequence was constructed using overlapping reads in that region. One area of the genome where the use of the long PacBio reads was especially helpful was the ribosomal operon. This region is usually greater than 5kb in size and occurs seven times throughout the genome with an average identity ranging from 98.04% to 99.94%. Resolving these regions using only short second generation reads would be very difficult, but the use of long third generation reads makes the process much more efficient. Utilization of the PacBio reads allowed for unambiguous placement of the complex repeats along the scaffold. Using only short reads This study employs a hybrid genome assembly approach that only uses sequencing reads generated using SOLiD sequencing (a second-generation sequencing technology). The genome of C. pseudotuberculosis was assembled twice: once using a classical reference genome approach, and once using a hybrid approach.
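Several of the comparisons above are reported as N50 values, the standard summary statistic for contig length distributions: the N50 is the length of the contig at which the running total, taken from the longest contig downwards, first reaches half of the total assembly size. A minimal sketch with invented contig lengths:

```python
def n50(contig_lengths):
    """N50: length L such that contigs of length >= L contain at least half of all assembled bases."""
    lengths = sorted(contig_lengths, reverse=True)
    half = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half:
            return length

# Invented example: 100 kb in total, so half is 50 kb, first reached within the 20 kb contig.
print(n50([40_000, 20_000, 15_000, 10_000, 8_000, 7_000]))  # -> 20000
```

Returning to the SOLiD-only C. pseudotuberculosis study introduced above: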
The hybrid approach consisted of three contiguous steps. Firstly, contigs were generated de novo; secondly, the contigs were ordered and concatenated into supercontigs; and, thirdly, the gaps between contigs were closed using an iterative approach. The initial de novo assembly of contigs was achieved in parallel using Velvet, which assembles contigs by manipulating De Bruijn graphs, and Edena, which is an OLC-based assembler. Comparing the assembly constructed using the hybrid approach to the assembly created using the traditional reference genome approach showed that, even with the availability of a reference genome, it is more beneficial to utilize a hybrid de novo assembly strategy, as it preserves more genome sequences. Using high throughput short and long reads The authors of this paper present Cerulean, a hybrid genome assembly program that differs from traditional hybrid assembly approaches. Normally, hybrid assembly involves mapping short high-quality reads to long low-quality reads, but this still introduces errors in the assembled genomes. This process is also computationally expensive and requires a large amount of running time, even for relatively small bacterial genomes. Cerulean, unlike other hybrid assembly approaches, does not use the short reads directly; instead, it uses an assembly graph that is created in a similar manner to the OLC method or the De Bruijn method. This graph is used to assemble a skeleton graph, which only uses long contigs, with the edges of the graph representing the putative genomic connections between the contigs. The skeleton graph is a simplified version of a typical De Bruijn graph, which means that unambiguous assembly using the skeleton graph is more favourable than with traditional methods. This method was tested by assembling the genome of an Escherichia coli strain. First, short reads were assembled using the ABySS assembler. These reads were then mapped to the long reads using BLASR. The results from the ABySS assembly were used to create the assembly graph, which was used to generate scaffolds using the filtered BLASR data. The advantages of Cerulean are that it requires minimal resources and results in assembled scaffolds with high accuracy. These characteristics make it better suited for up-scaling to larger eukaryotic genomes, but the efficiency of Cerulean when applied to larger genomes remains to be verified. Future perspectives The current challenges in genome assembly are related to the limitations of modern sequencing technologies. Advances in sequencing technology aim to develop systems that are able to produce long sequencing reads with very high fidelity but, at this point, these two things are mutually exclusive. The advent of third-generation sequencing technology is expanding the limits of genomic research as the cost of generating high quality sequencing data is decreasing. The idea of using multiple sequencing technologies to facilitate genome assembly may become an idea of the past as the quality of long sequencing reads (hundreds or thousands of base pairs) approaches and exceeds the quality of current second generation sequencing reads. The computational difficulties that are encountered during genome assembly will also become a concept of the past as computational efficiency and performance increase. The development of more efficient sequencing algorithms and assembly programs is needed to create assembly approaches that can incorporate sequencing reads from multiple technologies in tandem.
Many of the current limitations in genomic research revolve around the ability to produce large amounts of high quality sequencing data and to assemble entire genomes of organisms of interest. Developing more effective hybrid genome assembly strategies is the next step in advancing sequence assembly technology, and these strategies are likely to become more effective as more powerful technologies emerge. References External links Hybrid Error Correction and De Novo Assembly of Single-Molecule Sequencing Reads Virtual Poster: Hybrid Genome Assembly of a Nocturnal Lemur National Center for Biotechnology Information: Genome Assembly Bioinformatics
Hybrid genome assembly
[ "Engineering", "Biology" ]
2,819
[ "Bioinformatics", "Biological engineering" ]
35,152,264
https://en.wikipedia.org/wiki/Shimura%27s%20reciprocity%20law
In mathematics, Shimura's reciprocity law, introduced by , describes the action of ideles of imaginary quadratic fields on the values of modular functions at singular moduli. It forms a part of the Kronecker Jugendtraum, explicit class field theory for such fields. There are also higher-dimensional generalizations. References Theorems in number theory
Shimura's reciprocity law
[ "Mathematics" ]
76
[ "Number theory stubs", "Theorems in number theory", "Mathematical problems", "Mathematical theorems", "Number theory" ]
35,154,091
https://en.wikipedia.org/wiki/Software%20Industry%20Survey
The Software Industry Survey is an annual, publicly available scientific survey of the size, composition, current state and future of the software industry and software companies in Europe, with its origin in Finland. Apart from the published reports, the survey's data are used as input for various scientific studies. Results are also used by media companies such as newspapers as a source for news articles. History The survey organization was led (in 2010 and 2011) by the Software Business Lab research group of the BIT research centre at Aalto University, School of Science and Technology (former Helsinki University of Technology) with the help of several industry partners. Researchers from the Helsinki University of Technology and Centres of Expertise first organized this survey in 1997 to provide an overview of the Finnish software industry, with financing mainly from the National Technology Agency (Tekes) and the Finnish Ministry of Trade and Industry. The 2011 Finnish survey received responses from 506 participants, slightly fewer than in 2010 due to stricter selection criteria. Surveys analysing the industry in European countries other than Finland are run by research partners. In 2011 the survey was implemented in Austria and Germany. The reports published by the survey group, roughly 100 pages long, cover economic impacts on the software industry such as the Great Recession and Nokia's changes regarding Symbian in 2011. Starting with the 2009 report, all included images and tables can be re-used under the free Creative Commons Attribution license version 3.0. Past surveys and key results Software Industry Survey 2022 The 2022 survey was carried out in cooperation between the University of Jyväskylä, Software Finland, University of Vaasa and Technology Industries of Finland. As the pandemic was only beginning to subside, the economy was hit by Russia's invasion of Ukraine. Industries and companies that are able to grow even in difficult circumstances are therefore even more important for the economy and well-being of Finland as a whole. The survey highlights strong growth in Finland's software industry, particularly in the Software-as-a-Service (SaaS) sector. While other industries are struggling, the software industry is making rapid progress, with more than 40% of companies expecting significant growth. SaaS in particular stands out, with 56% of companies predicting significant growth, making it a key export driver for Finland as the world continues to digitalise. Software Industry Survey 2018 The 2018 survey was conducted through interviews, exploring the main directions of business development, challenges, trends in the IT sector, and their impact on business growth for 99 software companies. Software Industry Survey 2017 The software and IT services sector grew by 5.9% in 2016, with growth seen across all company sizes: small, medium, and large. The 2017 survey highlights that companies remain growth-oriented and increasingly eager to expand internationally. Many companies are also driving innovation by experimenting with new technologies and business models. The survey, now in its 20th year, analyzes the development of the Finnish software business and the outlook for software companies. This year, particular attention was paid to the industry's evolving skill requirements. The survey reveals that software companies are struggling to find talent that matches their needs, with a demand for thousands of skilled workers.
The challenge is not just about the number of skilled professionals, but also about the rapidly changing skill requirements. The survey indicates that most software companies find it difficult to hire sufficient software professionals with the right expertise. Around half of the needed roles are related to programming and closely related tasks, but there is also demand for other types of expertise. Software Industry Survey 2015 The survey was conducted in two parts. The first part focused on the overall state of the industry. The second part, a survey study with a short questionnaire, focused on details of software firms' growth and internationalisation. Analysis of publicly listed software and IT services companies reveals a slight revenue decline of 2.2% compared to the previous year. The positive aspect is that this decline is smaller than in previous years, and many companies have still managed to increase their revenues. This decline is attributed to structural changes in the industry, as demand shifts from customer-specific projects to cloud services. Despite this, the outlook remains promising, as companies are adapting and developing new capabilities to meet these industry changes. Software Industry Survey 2014 The 2014 survey found that most companies favor agile methodologies over traditional plan-based approaches, prioritizing customer collaboration, flexibility, and individual empowerment. The study also highlighted differences in agility between software product firms, which tend to be more flexible, and service firms, which are more structured. Smaller firms were found to be more agile, leveraging individual strengths and maintaining openness to customer collaboration, while larger firms standardize their processes. These differences are linked to business models, with startups benefiting from greater agility and a focus on customer-oriented strategies. References External links softwareindustrysurvey.fi/ – the Software Industry Survey group's website in Finland Surveys (human research) Software industry
Software Industry Survey
[ "Technology", "Engineering" ]
1,006
[ "Computer industry", "Software industry", "Software engineering" ]
35,154,335
https://en.wikipedia.org/wiki/Phase-contrast%20X-ray%20imaging
Phase-contrast X-ray imaging or phase-sensitive X-ray imaging is a general term for different technical methods that use information concerning changes in the phase of an X-ray beam that passes through an object in order to create its images. Standard X-ray imaging techniques like radiography or computed tomography (CT) rely on a decrease of the X-ray beam's intensity (attenuation) when traversing the sample, which can be measured directly with the assistance of an X-ray detector. However, in phase contrast X-ray imaging, the beam's phase shift caused by the sample is not measured directly, but is transformed into variations in intensity, which then can be recorded by the detector. In addition to producing projection images, phase contrast X-ray imaging, like conventional transmission, can be combined with tomographic techniques to obtain the 3D distribution of the real part of the refractive index of the sample. When applied to samples that consist of atoms with low atomic number Z, phase contrast X-ray imaging is more sensitive to density variations in the sample than conventional transmission-based X-ray imaging. This leads to images with improved soft tissue contrast. In the last several years, a variety of phase-contrast X-ray imaging techniques have been developed, all of which are based on the observation of interference patterns between diffracted and undiffracted waves. The most common techniques are crystal interferometry, propagation-based imaging, analyzer-based imaging, edge-illumination and grating-based imaging (see below). History X-rays were discovered in 1895 by Wilhelm Conrad Röntgen, who found that they had the ability to penetrate opaque materials. He recorded the first X-ray image, showing the hand of his wife. He was awarded the first Nobel Prize in Physics in 1901 "in recognition of the extraordinary services he has rendered by the discovery of the remarkable rays subsequently named after him". Since then, X-rays have been used as a tool to safely determine the inner structures of different objects, although the information was for a long time obtained by measuring the transmitted intensity of the waves only, and the phase information was not accessible. The principle of phase-contrast imaging was first developed by Frits Zernike during his work with diffraction gratings and visible light. The application of his knowledge to microscopy won him the Nobel Prize in Physics in 1953. Ever since, phase-contrast microscopy has been an important field of optical microscopy. The transfer of phase-contrast imaging from visible light to X-rays took a long time, due to slow progress in improving the quality of X-ray beams and the inaccessibility of X-ray lenses. In the 1970s, it was realized that synchrotron radiation, emitted from charged particles circulating in storage rings constructed for high-energy nuclear physics experiments, could be a more intense and versatile source of X-rays than X-ray tubes; this, combined with progress in the development of X-ray optics, was fundamental for the further advancement of X-ray physics. The pioneering work on implementing the phase-contrast method in X-ray physics was presented in 1965 by Ulrich Bonse and Michael Hart of the Department of Materials Science and Engineering of Cornell University, New York. They presented a crystal interferometer, made from a large and highly perfect single crystal.
No less than 30 years later, the Japanese scientists Atsushi Momose, Tohoru Takeda and co-workers adopted this idea and refined it for application in biological imaging, for instance by increasing the field of view with the assistance of new setup configurations and phase retrieval techniques. The Bonse–Hart interferometer provides several orders of magnitude higher sensitivity in biological samples than other phase-contrast techniques, but it cannot use conventional X-ray tubes because the crystals only accept a very narrow energy band of X-rays (ΔE/E ~ 10^−4). In 2012, Han Wen and co-workers took a step forward by replacing the crystals with nanometric phase gratings. The gratings split and direct X-rays over a broad spectrum, thus lifting the restriction on the bandwidth of the X-ray source. They detected sub-nanoradian refractive bending of X-rays in biological samples with a grating Bonse–Hart interferometer. At the same time, two further approaches to phase-contrast imaging emerged with the aim of overcoming the problems of crystal interferometry. The propagation-based imaging technique was primarily introduced by the group of at the ESRF (European Synchrotron Radiation Facility) in Grenoble, France, and was based on the detection of "Fresnel fringes" that arise under certain circumstances in free-space propagation. The experimental setup consisted of an inline configuration of an X-ray source, a sample and a detector, and did not require any optical elements. It was conceptually identical to the setup of Dennis Gabor's revolutionary work on holography in 1948. An alternative approach called analyzer-based imaging was first explored in 1995 by Viktor Ingal and Elena Beliaevskaya at the X-ray laboratory in Saint Petersburg, Russia, and by Tim Davis and colleagues at the CSIRO (Commonwealth Scientific and Industrial Research Organisation) Division of Material Science and Technology in Clayton, Australia. This method uses a Bragg crystal as an angular filter to reflect only a small part of the beam fulfilling the Bragg condition onto a detector. Important contributions to the progress of this method have been made by a US collaboration of the research teams of Dean Chapman, Zhong Zhong and William Thomlinson, for example the extraction of an additional signal caused by ultra-small-angle scattering and the first CT image made with analyzer-based imaging. An alternative to analyzer-based imaging, which provides equivalent results without requiring the use of a crystal, was developed by Alessandro Olivo and co-workers at the Elettra synchrotron in Trieste, Italy. This method, called "edge-illumination", performs a fine selection of the X-ray direction by using the physical edge of the detector pixels themselves, hence the name. Later on Olivo, in collaboration with Robert Speller at University College London, adapted the method for use with conventional X-ray sources, opening the way to translation into clinical and other applications. Peter Munro (also from UCL) substantially contributed to the development of the lab-based approach, by demonstrating that it imposes practically no coherence requirements and that, notwithstanding this, it is still fully quantitative. The latest approach discussed here is the so-called grating-based imaging, which makes use of the Talbot effect, discovered by Henry Fox Talbot in 1836. This self-imaging effect creates an interference pattern downstream of a diffraction grating.
At a particular distance this pattern resembles exactly the structure of the grating and is recorded by a detector. The position of the interference pattern can be altered by bringing an object into the beam, which induces a phase shift. This displacement of the interference pattern is measured with the help of a second grating, and by certain reconstruction methods, information about the real part of the refractive index is gained. The so-called Talbot–Lau interferometer was initially used in atom interferometry, for instance by John F. Clauser and Shifang Li in 1994. The first X-ray grating interferometers using synchrotron sources were developed by Christian David and colleagues from the Paul Scherrer Institute (PSI) in Villigen, Switzerland, and the group of Atsushi Momose from the University of Tokyo. In 2005, independently from each other, both David's and Momose's groups incorporated computed tomography into grating interferometry, which can be seen as the next milestone in the development of grating-based imaging. In 2006, another great advancement was the transfer of the grating-based technique to conventional laboratory X-ray tubes by Franz Pfeiffer and co-workers, which considerably enlarged the technique's potential for clinical use. About two years later the group of Franz Pfeiffer also succeeded in extracting a supplementary signal from their experiments; this so-called "dark-field signal" was caused by scattering due to the porous microstructure of the sample and provided "complementary and otherwise inaccessible structural information about the specimen at the micrometer and submicrometer length scale". At the same time, Han Wen and co-workers at the US National Institutes of Health arrived at a much simplified grating technique to obtain the scattering ("dark-field") image. They used a single projection of a grid and a new approach for signal extraction named "single-shot Fourier analysis". Recently, a great deal of research has been done to improve the grating-based technique: Han Wen and his team analyzed animal bones and found that the intensity of the dark-field signal depends on the orientation of the grid, owing to the anisotropy of the bone structure. They made significant progress towards biomedical applications by replacing mechanical scanning of the gratings with electronic scanning of the X-ray source. The grating-based phase-contrast CT field was extended by tomographic images of the dark-field signal and time-resolved phase-contrast CT. Furthermore, the first pre-clinical studies using grating-based phase-contrast X-ray imaging were published. Marco Stampanoni and his group examined native breast tissue with "differential phase-contrast mammography", and a team led by Dan Stutman investigated how to use grating-based imaging for the small joints of the hand. Most recently, a significant advance in grating-based imaging occurred due to the discovery of a phase moiré effect by Wen and colleagues. It led to interferometry beyond the Talbot self-imaging range, using only phase gratings and conventional sources and detectors. X-ray phase gratings can be made with very fine periods, thereby allowing imaging at low radiation doses with high sensitivity. Physical principle Conventional X-ray imaging uses the drop in intensity through attenuation caused by an object in the X-ray beam, and the radiation is treated as rays, as in geometrical optics. But when X-rays pass through an object, not only their amplitude but also their phase is altered.
Instead of simple rays, X-rays can also be treated as electromagnetic waves. An object then can be described by its complex refractive index (cf.): . The term is the decrement of the real part of the refractive index, and the imaginary part describes the absorption index or extinction coefficient. Note that, in contrast to optical light, the real part of the refractive index is less than but close to unity; this is "due to the fact that the X-ray spectrum generally lies to the high-frequency side of various resonances associated with the binding of electrons". The phase velocity inside the object is larger than the velocity of light c. This leads to a different behavior of X-rays in a medium compared to visible light (e.g. refractive angles have negative values) but does not contradict the law of relativity, "which requires that only signals carrying information do not travel faster than c. Such signals move with the group velocity, not with the phase velocity, and it can be shown that the group velocity is in fact less than c." The impact of the index of refraction on the behavior of the wave can be demonstrated with a wave propagating in an arbitrary medium with a fixed refractive index . For simplicity, a monochromatic plane wave with no polarization is assumed here. The wave propagates in the direction normal to the surface of the medium, named z in this example (see figure on the right). The scalar wave function in vacuum is . Within the medium, the angular wavenumber changes from to . Now the wave can be described as: , where is the phase shift and is an exponential decay factor decreasing the amplitude of the wave. In more general terms, the total phase shift of the beam propagating a distance z can be calculated by using the integral , where is the wavelength of the incident X-ray beam. This formula means that the phase shift is the projection of the decrement of the real part of the refractive index in the imaging direction. This fulfills the requirement of the tomographic principle, which states that "the input data to the reconstruction algorithm should be a projection of a quantity f that conveys structural information inside a sample. Then, one can obtain a tomogram which maps the value f." In other words, in phase-contrast imaging a map of the real part of the refraction index can be reconstructed with standard techniques like filtered back projection, analogous to conventional X-ray computed tomography, where a map of the imaginary part of the refraction index is retrieved. To get information about the composition of a sample, essentially the density distribution of the sample, one has to relate the measured values for the refractive index to intrinsic parameters of the sample; such a relation is given by the following formulas: , where is the atomic number density, the absorption cross section, the length of the wave vector and , where the phase shift cross section. Far from the absorption edges (peaks in the absorption cross-section due to the enhanced probability for the absorption of a photon that has a frequency close to the resonance frequency of the medium), dispersion effects can be neglected; this is the case for light elements (atomic number Z<40) that are the components of human tissue and X-ray energies above 20 keV, which are typically used in medical imaging.
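In the standard notation of the phase-contrast literature, with δ denoting the refractive index decrement and β the absorption index, the relations referred to above take the following conventional form; this is a restatement in standard notation rather than a quotation of the article's original equations.

```latex
n = 1 - \delta + i\beta, \qquad
\Phi_{\mathrm{vac}}(z) = e^{ikz}, \qquad
\Phi_{\mathrm{med}}(z) = e^{inkz} = e^{ikz}\, e^{-ik\delta z}\, e^{-k\beta z}, \qquad
\phi = \frac{2\pi}{\lambda}\int \delta\, \mathrm{d}z .
```

As a rough numerical check of the statement that the decrement dominates for soft tissue at diagnostic energies, δ can be estimated from the electron density through the standard far-from-edge approximation δ ≈ r_e λ² ρ_e / 2π; the snippet below uses water as a stand-in for tissue at 20 keV.

```python
import math

r_e = 2.818e-15           # classical electron radius, m
E_keV = 20.0              # photon energy in the regime discussed above
lam = 1.2398e-9 / E_keV   # wavelength in m (E[keV] * lambda[nm] ~ 1.2398)
rho_e_water = 3.34e29     # electron density of water, electrons per m^3

delta = r_e * lam**2 * rho_e_water / (2 * math.pi)
print(f"delta ~ {delta:.1e}")  # on the order of 1e-7, roughly 1000 times the corresponding beta
```

These estimates apply in the same light-element, far-from-edge regime described above.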
Assuming these conditions, the absorption cross section is approximately stated by where 0.02 is a constant given in barn, the typical unit of particle interaction cross section area, the length of the wave vector, the length of a wave vector with wavelength of 1 Angstrom and the atomic number. The valid formula under these conditions for the phase shift cross section is: where is the atomic number, the length of the wave vector, and the classical electron radius. This results in the following expressions for the two parts of the complex index of refraction: Inserting typical values of human tissue in the formulas given above shows that is generally three orders of magnitude larger than within the diagnostic X-ray range. This implies that the phase-shift of an X-ray beam propagating through tissue may be much larger than the loss in intensity thus making phase contrast X-ray imaging more sensitive to density variations in the tissue than absorption imaging. Due to the proportionalities , the advantage of phase contrast over conventional absorption contrast even grows with increasing energy. Furthermore, because the phase contrast image formation is not intrinsically linked to the absorption of X-rays in the sample, the absorbed dose can potentially be reduced by using higher X-ray energies. As mentioned above, concerning visible light, the real part of the refractive index n can deviate strongly from unity (n of glass in visible light ranges from 1.5 to 1.8) while the deviation from unity for X-rays in different media is generally of the order of 10−5. Thus, the refraction angles caused at the boundary between two isotropic media calculated with Snell's formula are also very small. The consequence of this is that refraction angles of X-rays passing through a tissue sample cannot be detected directly and are usually determined indirectly by "observation of the interference pattern between diffracted and undiffracted waves produced by spatial variations of the real part of the refractive index." Experimental realisation Crystal interferometry Crystal interferometry, sometimes also called X-ray interferometry, is the oldest but also the most complex method used for experimental realization. It consists of three beam splitters in Laue geometry aligned parallel to each other. (See figure to the right) The incident beam, which usually is collimated and filtered by a monochromator (Bragg crystal) before, is split at the first crystal (S) by Laue diffraction into two coherent beams, a reference beam which remains undisturbed and a beam passing through the sample. The second crystal (T) acts as a transmission mirror and causes the beams to converge one towards another. The two beams meet at the plane of the third crystal (A), which is sometimes called, the analyzer crystal, and create an interference pattern the form of which depends on the optical path difference between the two beams caused by the sample. This interference pattern is detected with an X-ray detector behind the analyzer crystal. By putting the sample on a rotation stage and recording projections from different angles, the 3D-distribution of the refractive index and thus tomographic images of the sample can be retrieved. In contrast to the methods below, with the crystal interferometer the phase itself is measured and not any spatial alternation of it. 
To retrieve the phase shift from the interference patterns, a technique called phase-stepping or fringe scanning is used: a phase shifter (with the shape of a wedge) is introduced in the reference beam. The phase shifter creates straight interference fringes with regular intervals, so-called carrier fringes. When the sample is placed in the other beam, the carrier fringes are displaced. The phase shift caused by the sample corresponds to the displacement of the carrier fringes. Several interference patterns are recorded for different shifts of the reference beam, and by analyzing them the phase information modulo 2π can be extracted. This ambiguity of the phase is called the phase wrapping effect and can be removed by so-called "phase unwrapping techniques". These techniques can be used when the signal-to-noise ratio of the image is sufficiently high and the phase variation is not too abrupt. As an alternative to the fringe scanning method, the Fourier-transform method can be used to extract the phase shift information with only one interferogram, thus shortening the exposure time, but this has the disadvantage of limiting the spatial resolution by the spacing of the carrier fringes. Of the four methods, X-ray interferometry is considered to be the most sensitive to the phase shift, consequently providing the highest density resolution, in the range of mg/cm3. But due to its high sensitivity, the fringes created by a strongly phase-shifting sample may become unresolvable; to overcome this problem a new approach called "coherence-contrast X-ray imaging" has been developed recently, where instead of the phase shift the change in the degree of coherence caused by the sample is relevant for the contrast of the image. A general limitation to the spatial resolution of this method is given by the blurring in the analyzer crystal which arises from dynamical refraction, i.e. the angular deviation of the beam due to the refraction in the sample is amplified about ten thousand times in the crystal, because the beam path within the crystal depends strongly on its incident angle. This effect can be reduced by thinning down the analyzer crystal; e.g. with an analyzer thickness of 40 μm, a resolution of about 6 μm was calculated. Alternatively the Laue crystals can be replaced by Bragg crystals, so the beam doesn't pass through the crystal but is reflected on the surface. Another constraint of the method is the requirement of a very high stability of the setup; the alignment of the crystals must be very precise and the path length difference between the beams should be smaller than the wavelength of the X-rays; to achieve this the interferometer is usually made out of a highly perfect single block of silicon by cutting out two grooves. By the monolithic production the very important spatial lattice coherence between all three crystals can be maintained relatively well, but it limits the field of view to a small size (e.g. 5 cm x 5 cm for a 6-inch ingot), and because the sample is normally placed in one of the beam paths, the size of the sample itself is also constrained by the size of the silicon block. Recently developed configurations, using two crystals instead of one, enlarge the field of view considerably, but are even more sensitive to mechanical instabilities. A further difficulty of the crystal interferometer is that the Laue crystals filter most of the incoming radiation, thus requiring a high beam intensity or very long exposure times.
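The phase-stepping retrieval described above has a simple closed form when the phase shifter is moved in N equal steps over one fringe period, so that the intensity recorded at each pixel follows I_k = a + b·cos(φ + 2πk/N): the wrapped phase is the argument of the first Fourier coefficient of the I_k series. The sketch below assumes exactly this idealised sinusoidal model; real data additionally need the unwrapping and noise handling discussed above.

```python
import numpy as np

def phase_from_steps(intensities):
    """
    intensities: array of shape (N, ...) with one equally spaced phase step per entry,
    assumed to follow I_k = a + b*cos(phi + 2*pi*k/N) at every pixel.
    Returns the wrapped phase phi in (-pi, pi].
    """
    n_steps = intensities.shape[0]
    k = np.arange(n_steps).reshape((-1,) + (1,) * (intensities.ndim - 1))
    s = np.sum(intensities * np.sin(2 * np.pi * k / n_steps), axis=0)
    c = np.sum(intensities * np.cos(2 * np.pi * k / n_steps), axis=0)
    return np.arctan2(-s, c)

# Synthetic check: a known phase map is recovered (up to wrapping).
true_phi = np.linspace(-1.0, 1.0, 5)
steps = np.arange(4).reshape(4, 1)
data = 10 + 3 * np.cos(true_phi + 2 * np.pi * steps / 4)
print(np.allclose(phase_from_steps(data), true_phi))  # True
```

In practice, as noted above, the heavy filtering of the beam by the Laue crystals still demands a very intense beam or long exposures.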
This high intensity requirement limits the use of the method to highly brilliant X-ray sources like synchrotrons. Given these constraints on the setup, the crystal interferometer works best for high-resolution imaging of small samples which cause small or smooth phase gradients. Grating Bonse-Hart interferometry To obtain the superior sensitivity of crystal Bonse-Hart interferometry without some of the basic limitations, the monolithic crystals have been replaced with nanometric x-ray phase-shift gratings. The first such gratings have periods of 200 to 400 nanometers. They can split x-ray beams over the broad energy spectra of common x-ray tubes. The main advantage of this technique is that it uses most of the incoming x-rays that would have been filtered by the crystals. Because only phase gratings are used, grating fabrication is less challenging than for techniques that use absorption gratings. The first grating Bonse-Hart interferometer (gBH) operated at 22.5 keV photon energy and 1.5% spectral bandwidth. The incoming beam is shaped by slits of a few tens of micrometers such that the transverse coherence length is greater than the grating period. The interferometer consists of three parallel and equally spaced phase gratings, and an x-ray camera. The incident beam is diffracted by a first grating of period 2P into two beams. These are further diffracted by a second grating of period P into four beams. Two of the four merge at a third grating of period 2P. Each is further diffracted by the third grating. The multiple diffracted beams are allowed to propagate for a sufficient distance such that the different diffraction orders are separated at the camera. There exists a pair of diffracted beams that co-propagate from the third grating to the camera. They interfere with each other to produce intensity fringes if the gratings are slightly misaligned with each other. The central pair of diffraction paths are always equal in length regardless of the x-ray energy or the angle of the incident beam. The interference patterns from different photon energies and incident angles are therefore locked in phase. The imaged object is placed near the central grating. Absolute phase images are obtained if the object intersects one of a pair of coherent paths. If the two paths both pass through the object at two locations which are separated by a lateral distance d, then a phase difference image of Φ(r) − Φ(r−d) is detected. Phase stepping of one of the gratings is performed to retrieve the phase images. The phase difference image Φ(r) − Φ(r−d) can be integrated to obtain a phase shift image of the object. This technique achieved substantially higher sensitivity than other techniques, with the exception of the crystal interferometer. A basic limitation of the technique is the chromatic dispersion of grating diffraction, which limits its spatial resolution. A tabletop system with a tungsten-target x-ray tube running at 60 kVp will have a limiting resolution of 60 μm. Another constraint is that the x-ray beam is slitted down to only tens of micrometers wide. A potential solution has been proposed in the form of parallel imaging with multiple slits. Analyzer-based imaging Analyzer-based imaging (ABI) is also known as diffraction-enhanced imaging, phase-dispersion introscopy and multiple-image radiography. Its setup consists of a monochromator (usually a single or double crystal that also collimates the beam) in front of the sample and an analyzer crystal positioned in Bragg geometry between the sample and the detector.
(See figure to the right) This analyzer crystal acts as an angular filter for the radiation coming from the sample. When these X-rays hit the analyzer crystal the condition of Bragg diffraction is satisfied only for a very narrow range of incident angles. When the scattered or refracted X-rays have incident angles outside this range they will not be reflected at all and do not contribute to the signal. Refracted X-rays within this range will be reflected depending on the incident angle. The dependency of the reflected intensity on the incident angle is called a rocking curve and is an intrinsic property of the imaging system, i.e. it represents the intensity measured at each pixel of the detector when the analyzer crystal is "rocked" (slightly rotated in angle θ) with no object present, and thus can be easily measured. The typical angular acceptance is from a few microradians to tens of microradians and is related to the full width at half maximum (FWHM) of the rocking curve of the crystal. When the analyzer is perfectly aligned with the monochromator and thus positioned at the peak of the rocking curve, a standard X-ray radiograph with enhanced contrast is obtained because there is no blurring by scattered photons. Sometimes this is referred to as "extinction contrast". If, instead, the analyzer is oriented at a small angle (detuning angle) with respect to the monochromator then X-rays refracted in the sample by a smaller angle will be reflected less, and X-rays refracted by a larger angle will be reflected more. Thus the contrast of the image is based on different refraction angles in the sample. For small phase gradients the refraction angle can be expressed as Δα = (1/k)(∂Φ/∂x), where k is the length of the wave vector of the incident radiation and the second factor on the right hand side is the first derivative of the phase in the diffraction direction. Since the first derivative of the phase front is measured rather than the phase itself, analyzer-based imaging is less sensitive to low spatial frequencies than crystal interferometry but more sensitive than PBI. In contrast to the former methods, analyzer-based imaging usually provides phase information only in the diffraction direction, but is not sensitive to angular deviations on the plane perpendicular to the diffraction plane. This sensitivity to only one component of the phase gradient can lead to ambiguities in phase estimation. By recording several images at different detuning angles, meaning at different positions on the rocking curve, a data set is gained which allows the retrieval of quantitative differential phase information. There are several algorithms to reconstruct information from the rocking curves; some of them provide an additional signal. This signal comes from ultra-small-angle scattering by sub-pixel sample structures and causes angular broadening of the beam and hence a broadening of the shape of the rocking curve. Based on this scattering contrast a new kind of image, called a dark-field image, can be produced. Tomographic imaging with analyzer-based imaging can be done by fixing the analyzer at a specific angle and rotating the sample through 360° while the projection data are acquired. Several sets of projections are acquired from the same sample with different detuning angles and then a tomographic image can be reconstructed.
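The angular scale involved, and how the slope of the rocking curve converts refraction angles into intensity changes, can be illustrated with a short sketch (Python); the Gaussian rocking curve, its width, the photon energy and the phase profile are illustrative assumptions rather than parameters of any particular instrument:

import numpy as np

E_keV = 20.0
lam = 1.2398e-9 / E_keV              # wavelength (m)
k = 2 * np.pi / lam                  # wave number (1/m)

x = np.linspace(-1e-3, 1e-3, 2000)            # position across the sample (m)
phi = 80.0 * np.exp(-(x / 2e-4) ** 2)         # hypothetical phase profile (rad)
alpha = np.gradient(phi, x) / k               # refraction angle = (1/k) dPhi/dx

sigma = 10e-6 / 2.355                          # assumed rocking-curve FWHM of 10 microrad

def R(theta):                                  # assumed Gaussian rocking curve
    return np.exp(-theta**2 / (2 * sigma**2))

detune = sigma                                 # analyzer detuned to the slope of the curve
I_plus = R(detune + alpha)                     # image taken on one side of the rocking curve
I_minus = R(-detune + alpha)                   # image taken on the other side
print(alpha.max(), (I_plus - I_minus).max())   # microradian-scale angles give clear contrast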
The crystals are normally aligned such that the derivative of the refractive index is measured in the direction parallel to the tomographic axis; the resulting "refraction CT image" then shows the pure image of the out-of-plane gradient. For analyzer-based imaging, the stability requirements of the crystals are less strict than for crystal interferometry, but the setup still requires a perfect analyzer crystal that needs to be very precisely controlled in angle, and the size of the analyzer crystal together with the constraint that the beam needs to be parallel also limits the field of view. Additionally, as in crystal interferometry, a general limitation for the spatial resolution of this method is given by the blurring in the analyzer crystal due to dynamical diffraction effects, but this can be improved by using grazing incidence diffraction for the crystal. While the method in principle requires monochromatic, highly collimated radiation and hence is limited to a synchrotron radiation source, it was shown recently that the method remains feasible using a laboratory source with a polychromatic spectrum when the rocking curve is adapted to the K spectral line radiation of the target material. Due to its high sensitivity to small changes in the refractive index this method is well suited to imaging soft tissue samples and has already been applied to medical imaging, especially in mammography for better detection of microcalcifications, and in bone cartilage studies. Propagation-based imaging Propagation-based imaging (PBI) is the most common name for this technique but it is also called in-line holography, refraction-enhanced imaging or phase-contrast radiography. The latter name derives from the fact that the experimental setup of this method is basically the same as in conventional radiography. It consists of an in-line arrangement of an X-ray source, the sample and an X-ray detector, and no other optical elements are required. The only difference is that the detector is not placed immediately behind the sample, but at some distance, so the radiation refracted by the sample can interfere with the unchanged beam. This simple setup and the low stability requirements provide a major advantage of this method over the other methods discussed here. Under spatially coherent illumination and at an intermediate distance between sample and detector an interference pattern with "Fresnel fringes" is created; i.e. the fringes arise from free-space propagation in the Fresnel regime, which means that for the distance between detector and sample the near-field approximation of Kirchhoff's diffraction formula, the Fresnel diffraction equation, is valid. In contrast to crystal interferometry, the recorded interference fringes in PBI are not proportional to the phase itself but to the second derivative (the Laplacian) of the phase of the wavefront. Therefore, the method is most sensitive to abrupt changes in the decrement of the refractive index. This leads to stronger contrast outlining the surfaces and structural boundaries of the sample (edge enhancement) compared with a conventional radiograph. PBI can be used to enhance the contrast of an absorption image, in which case the phase information in the image plane is lost but contributes to the image intensity (edge enhancement of the attenuation image). However, it is also possible to separate the phase and the attenuation contrast, i.e. to reconstruct the distributions of the real and imaginary part of the refractive index separately.
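The edge enhancement produced by free-space propagation can be reproduced with a short numerical sketch (Python); the photon energy, pixel size, propagation distance and the pure-phase step object are all assumed illustration values:

import numpy as np

E_keV = 20.0
lam = 1.2398e-9 / E_keV                   # wavelength (m)
N, dx = 4096, 0.5e-6                      # number of samples and pixel size (m)
x = (np.arange(N) - N / 2) * dx

phase = 1.0 * (x > 0)                     # hypothetical object: a pure phase step of 1 rad
wave = np.exp(1j * phase)                 # no absorption, so uniform intensity at the object

def propagate(u, z):
    # Fresnel (paraxial) free-space propagation over a distance z via its transfer function
    fx = np.fft.fftfreq(N, d=dx)
    H = np.exp(-1j * np.pi * lam * z * fx**2)
    return np.fft.ifft(np.fft.fft(u) * H)

I_contact = np.abs(wave) ** 2                       # at the sample: no contrast at all
I_far = np.abs(propagate(wave, 0.5)) ** 2           # 0.5 m downstream: Fresnel fringes at the edge
print(np.ptp(I_contact), np.ptp(I_far))             # 0 versus a clearly visible modulation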
The unambiguous determination of the phase of the wave front (phase retrieval) can be realized by recording several images at different detector-sample distances and using algorithms based on the linearization of the Fresnel diffraction integral to reconstruct the phase distribution, but this approach suffers from amplified noise for low spatial frequencies and thus slowly varying components may not be accurately recovered. There are several more approaches to phase retrieval and a good overview of them is given in the literature. Tomographic reconstruction of the 3D distribution of the refractive index, or "holotomography", is implemented by rotating the sample and recording for each projection angle a series of images at different distances. A high resolution detector is required to resolve the interference fringes, which practically limits the field of view of this technique or requires larger propagation distances. The achieved spatial resolution is relatively high in comparison to the other methods and, since there are no optical elements in the beam, is mainly limited by the degree of spatial coherence of the beam. As mentioned before, for the formation of the Fresnel fringes, the constraint on the spatial coherence of the used radiation is very strict, which limits the method to small or very distant sources, but in contrast to crystal interferometry and analyzer-based imaging the constraint on the temporal coherence, i.e. on the polychromaticity, is quite relaxed. Consequently, the method can be used not only with synchrotron sources but also with polychromatic laboratory X-ray sources providing sufficient spatial coherence, such as microfocus X-ray tubes. Generally speaking, the image contrast provided by this method is lower than that of the other methods discussed here, especially if the density variations in the sample are small. Due to its strength in enhancing the contrast at boundaries, it is well suited for imaging fiber or foam samples. A very important application of PBI is the examination of fossils with synchrotron radiation, which reveals details of paleontological specimens that would otherwise be inaccessible without destroying the sample. Grating-based imaging Grating-based imaging (GBI) includes shearing interferometry, or X-ray Talbot interferometry (XTI), and polychromatic far-field interferometry (PFI). Since the first X-ray grating interferometer—consisting of two phase gratings and an analyzer crystal—was built, various slightly different setups for this method have been developed; in the following the focus lies on the method that is nowadays standard, consisting of a phase grating and an analyzer grating. (See figure to the right). The XTI technique is based on the Talbot effect or "self-imaging phenomenon", which is a Fresnel diffraction effect and leads to the repetition of a periodic wavefront after a certain propagation distance, called the "Talbot length". This periodic wavefront can be generated by spatially coherent illumination of a periodic structure, like a diffraction grating, and if so the intensity distribution of the wave field at the Talbot length resembles exactly the structure of the grating and is called a self-image. It has also been shown that intensity patterns will be created at certain fractional Talbot lengths.
At half the distance the same intensity distribution appears except for a lateral shift of half the grating period, while at certain smaller fractional Talbot distances the self-images have fractional periods and fractional sizes of the intensity maxima and minima, which become visible in the intensity distribution behind the grating, a so-called Talbot carpet. The Talbot length and the fractional lengths can be calculated by knowing the parameters of the illuminating radiation and the illuminated grating and thus give the exact position of the intensity maxima, which needs to be measured in GBI. While the Talbot effect and the Talbot interferometer were discovered and extensively studied using visible light, they have also been demonstrated for the hard X-ray regime. In GBI a sample is placed before or behind the phase grating (the lines of the grating show negligible absorption but substantial phase shift) and thus the interference pattern of the Talbot effect is modified by absorption, refraction and scattering in the sample. For a phase object with a small phase gradient the X-ray beam is deflected by α = (1/k)(∂Φ/∂x), where k is the length of the wave vector of the incident radiation and the second factor on the right hand side is the first derivative of the phase in the direction perpendicular to the propagation direction and parallel to the alignment of the grating. Since the transverse shift of the interference fringes is linearly proportional to the deviation angle, the differential phase of the wave front is measured in GBI, similarly to ABI. In other words, the angular deviations are translated into changes of locally transmitted intensity. By performing measurements with and without the sample the change in position of the interference pattern caused by the sample can be retrieved. The period of the interference pattern is usually in the range of a few micrometers, which can only be conveniently resolved by a very high resolution detector in combination with a very intense illumination (a source providing a very high flux) and hence limits the field of view significantly. This is the reason why a second grating, typically an absorption grating, is placed at a fractional Talbot length to analyze the interference pattern. The analyzer grating normally has the same period as the interference fringes and thus transforms the local fringe position into a signal intensity variation on the detector, which is placed immediately behind the grating. In order to separate the phase information from other contributions to the signal, a technique called "phase-stepping" is used. One of the gratings is scanned along the transverse direction over one period of the grating, and for different positions of the grating an image is taken. The intensity signal in each pixel in the detector plane oscillates as a function of the grating position. The recorded intensity oscillation can be represented by a Fourier series, and by recording and comparing these intensity oscillations with and without the sample, the separated differential phase shift and absorption signals relative to the reference image can be extracted. As in analyzer-based imaging, an additional signal coming from ultra-small-angle scattering by sub-pixel microstructures of the sample, called dark-field contrast, can also be reconstructed. This method provides high spatial resolution, but also requires long exposure times. An alternative approach, described below, is the retrieval of the differential phase by using moiré fringes.
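A minimal sketch of the per-pixel phase-stepping analysis just described is given below (Python); the number of steps, the fringe visibilities and the sample-induced changes are arbitrary illustration values:

import numpy as np

def analyze(stepping_curve):
    # decompose one pixel's intensity oscillation into mean, first-harmonic phase and visibility
    c = np.fft.rfft(stepping_curve)
    mean = c[0].real / len(stepping_curve)
    phase = np.angle(c[1])                       # fringe position
    visibility = 2 * np.abs(c[1]) / c[0].real    # relative oscillation amplitude
    return mean, phase, visibility

steps = 2 * np.pi * np.arange(8) / 8
reference = 1.0 + 0.4 * np.cos(steps)                    # flat-field stepping curve
sample = 0.7 * (1.0 + 0.25 * np.cos(steps + 0.6))        # hypothetical curve behind the sample

T0, p0, v0 = analyze(reference)
T1, p1, v1 = analyze(sample)
print("transmission:", T1 / T0)           # absorption signal, ~0.7
print("differential phase:", p1 - p0)     # fringe shift, ~0.6 rad
print("dark field:", v1 / v0)             # visibility reduction, ~0.625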
Moiré fringes are created as a superposition of the self-image of G1 and the pattern of G2, by using gratings with the same periodicity and tilting G2 with respect to G1 about the optical axis by a very small angle (≪1). These moiré fringes act as carrier fringes because they have a much larger spacing/period (smaller spatial frequency) than the Talbot fringes, and thus the phase gradient introduced by the sample can be detected as a displacement of the moiré fringes. With a Fourier analysis of the moiré pattern the absorption and dark-field signals can also be extracted. Using this approach, the spatial resolution is lower than that achieved by the phase-stepping technique, but the total exposure time can be much shorter, because a differential phase image can be retrieved with only one moiré pattern. A single-shot Fourier analysis technique was used in early grid-based scattering imaging, similar to the Shack–Hartmann wavefront sensor in optics, which allowed the first live animal studies. A technique to eliminate mechanical scanning of the grating and still retain the maximum spatial resolution is electronic phase stepping. It scans the source spot of the x-ray tube with an electromagnetic field. This causes the projection of the object to move in the opposite direction, and also causes a relative movement between the projection and the moiré fringes. The images are digitally shifted to realign the projections. The result is that the projection of the object is stationary, while the moiré fringes move over it. This technique effectively synthesizes the phase stepping process, but without the costs and delays associated with mechanical movements. With both of these phase-extraction methods tomography is applicable by rotating the sample around the tomographic axis, recording a series of images at different projection angles and using back projection algorithms to reconstruct the 3-dimensional distributions of the real and imaginary part of the refractive index. Quantitative tomographic reconstruction of the dark-field signal has also been demonstrated for the phase-stepping technique and very recently for the moiré pattern approach as well. It has also been demonstrated that dark-field imaging with the grating interferometer can be used to extract orientational information of structural details in the sub-micrometer regime beyond the spatial resolution of the detection system. While the scattering of X-rays in a direction perpendicular to the grating lines provides the dark-field contrast, the scattering in a direction parallel to the grating lines only leads to blurring in the image, which is not visible at the low resolution of the detector. This intrinsic physical property of the setup is utilized to extract orientational information about the angular variation of the local scattering power of the sample by rotating the sample around the optical axis of the set-up and collecting a set of several dark-field images, each measuring the component of the scattering perpendicular to the grating lines for that particular orientation. This can be used to determine the local angle and degree of orientation of bone and could yield valuable information for improving research and diagnostics of bone diseases like osteoporosis or osteoarthritis. The standard configuration as shown in the figure to the right requires spatial coherence of the source and consequently is limited to highly brilliant synchrotron radiation sources.
This problem can be handled by adding a third grating close to the X-ray source, known as a Talbot-Lau interferometer. This source grating, which is usually an absorption grating with transmission slits, creates an "array of individually coherent but mutually incoherent sources". As the source grating can contain a large number of individual apertures, each creating a sufficiently coherent virtual line source, standard X-ray generators with source sizes of a few square millimeters can be used efficiently and the field of view can be significantly increased. Since the position of the interference fringes formed behind the beam-splitter grating is independent of wavelength over a wide energy range of the incident radiation the interferometer in phase-stepping configuration can still be used efficiently with polychromatic radiation. For the Moiré pattern configuration the constraint on the radiation energy is a bit stricter, because a finite bandwidth of energy instead of monochromatic radiation causes a decrease in the visibility of the Moiré fringes and thus the image quality, but a moderate polychromaticity is still allowed. A great advantage of the usage of polychromatic radiation is the shortening of the exposure times and this has recently been exploited by using white synchrotron radiation to realize the first dynamic (time-resolved) Phase contrast tomography. A technical barrier to overcome is the fabrication of gratings with high aspect ratio and small periods. The production of these gratings out of a silicon wafer involves microfabrication techniques like photolithography, anisotropic wet etching, electroplating and molding. A very common fabrication process for X-ray gratings is LIGA, which is based on deep X-ray lithography and electroplating. It was developed in the 1980s for the fabrication of extreme high aspect ratio microstructures by scientists from the Karlsruhe Institute of Technology (KIT). Another technical requirement is the stability and precise alignment and movement of the gratings (typically in the range of some nm), but compared to other methods, e.g. the crystal interferometer the constraint is easy to fulfill. The grating fabrication challenge was eased by the discovery of a phase moiré effect which provides an all-phase-grating interferometer that works with compact sources, called the polychromatic far-field interferometer (see figure on the right). Phase gratings are easier to make when compared with the source and analyzer gratings mentioned above, since the grating depth required to cause phase shift is much less than what is needed to absorb x-rays. Phase gratings of 200 - 400 nanometer periods have been used to improve phase sensitivity in table-top PFI imagers. In PFI a phase grating is used to convert the fine interference fringes into a broad intensity pattern at a distal plane, based on the phase moiré effect. Besides higher sensitivity, another incentive for smaller grating periods is that the lateral coherence of the source needs to be at least one grating period. A disadvantage of the standard GBI setup is the sensitivity to only one component of the phase gradient, which is the direction parallel to the 1-D gratings. This problem has been solved either by recording differential phase contrast images of the sample in both direction x and y by turning the sample (or the gratings) by 90° or by the employment of two-dimensional gratings. 
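For orientation, the length scales involved in such grating interferometers can be estimated from the Talbot condition; the sketch below (Python) uses assumed, representative design values for the grating period and design energy:

# Talbot self-imaging distance z_T = 2 p^2 / lambda for a grating of period p;
# the fractional Talbot distances used in practice are fractions of this length.
E_keV = 28.0                     # assumed design energy
p = 4e-6                         # assumed grating period (m)
lam = 1.2398e-9 / E_keV          # wavelength (m)
z_T = 2 * p**2 / lam
print(z_T)                       # about 0.7 m for these values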
Being a differential phase technique, GBI is not as sensitive as crystal interferometry to low spatial frequencies, but because of the high resistance of the method against mechanical instabilities, the possibility of using detectors with large pixels and a large field of view and, of crucial importance, the applicability to conventional laboratory X-ray tubes, grating-based imaging is a very promising technique for medical diagnostics and soft tissue imaging. First medical applications like a pre-clinical mammography study, show great potential for the future of this technique. Beyond that GBI has applications in a wide field of material science, for instance it could be used to improve security screening. Edge-illumination Edge-illumination (EI) was developed at the Italian synchrotron (Elettra) in the late ‘90s, as an alternative to ABI. It is based on the observation that, by illuminating only the edge of detector pixels, high sensitivity to phase effects is obtained (see figure). Also in this case, the relation between X-ray refraction angle and first derivative of the phase shift caused by the object is exploited: If the X-ray beam is vertically thin and impinges on the edge of the detector, X-ray refraction can change the status of the individual X-ray from "detected" to "undetected" and vice versa, effectively playing the same role as the crystal rocking curve in ABI. This analogy with ABI, already observed when the method was initially developed, was more recently formally demonstrated. Effectively, the same effect is obtained – a fine angular selection on the photon direction; however, while in analyzer-based imaging the beam needs to be highly collimated and monochromatic, the absence of the crystal means that edge-illumination can be implemented with divergent and polychromatic beams, like those generated by a conventional rotating-anode X-ray tube. This is done by introducing two opportunely designed masks (sometimes referred to as “coded-aperture” masks), one immediately before the sample, and one in contact with the detector (see figure). The purpose of the latter mask is simply to create insensitive regions between adjacent pixels, and its use can be avoided if specialized detector technology is employed. In this way, the edge-illumination configuration is simultaneously realized for all pixel rows of an area detector. This plurality of individual beamlets means that, in contrast to the synchrotron implementation discussed above, no sample scanning is required – the sample is placed downstream of the sample mask and imaged in a single shot (two if phase retrieval is performed). Although the set-up perhaps superficially resembles that of a grating interferometer, the underpinning physical mechanism is different. In contrast to other phase contrast X-ray imaging techniques, edge-illumination is an incoherent technique, and was in fact proven to work with both spatially and temporally incoherent sources, without any additional source aperturing or collimation. For example, 100 μm focal spots are routinely used which are compatible with, for example, diagnostic mammography systems. Quantitative phase retrieval was also demonstrated with (uncollimated) incoherent sources, showing that in some cases results analogous to the synchrotron gold standard can be obtained. 
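A toy model may help to illustrate how edge illumination converts microradian-scale refraction into an intensity change: a narrow beamlet grazes the edge of an aperture, and refraction in the sample shifts it laterally, changing the transmitted fraction. In the sketch below (Python) the Gaussian beamlet shape, its width and the sample-to-mask distance are all illustrative assumptions:

from math import erf, sqrt

sigma = 10e-6        # assumed beamlet width at the detector mask (m)
z = 0.4              # assumed sample-to-detector-mask distance (m)

def detected_fraction(alpha):
    # fraction of a Gaussian beamlet passing the edge after a refraction angle alpha;
    # edge at x = 0, beamlet nominally centred on it, so 50% transmission without refraction
    shift = z * alpha
    return 0.5 * (1 + erf(shift / (sigma * sqrt(2))))

for alpha in (-5e-6, 0.0, 5e-6):             # refraction angles of a few microradians
    print(alpha, detected_fraction(alpha))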
The relatively simple edge-illumination set-up results in phase sensitivity at least comparable with other phase contrast X-ray imaging techniques, results in a number of advantages, which include reduced exposure time for the same source power, reduced radiation dose, robustness against environmental vibrations, and easier access to high X-ray energy. Moreover, since their aspect ratio is not particularly demanding, masks are cheap, easy to fabricate (e.g.do not require X-ray lithography) and can already be scaled to large areas. The method is easily extended to phase sensitivity in two directions, for example, through the realization of L-shaped apertures for the simultaneous illumination of two orthogonal edges in each detector pixel. More generally, while in its simplest implementation beamlets match individual pixel rows (or pixels), the method is highly flexible, and, for example, sparse detectors and asymmetric masks can be used and compact and microscopy systems can be built. So far, the method has been successfully demonstrated in areas such as security scanning, biological imaging, material science, paleontology and others; adaptation to 3D (computed tomography) was also demonstrated. Alongside simple translation for use with conventional x-ray sources, there are substantial benefits in the implementation of edge-illumination with coherent synchrotron radiation, among which are high performance at very high X-ray energies and high angular resolutions. Phase-contrast x-ray imaging in medicine Four potential benefits of phase contrast have been identified in a medical imaging context: Phase contrast bears promise to increase the signal-to-noise ratio because the phase shift in soft tissue is in many cases substantially larger than the absorption. Phase contrast has a different energy dependence than absorption contrast, which changes the conventional dose-contrast trade-off and higher photon energies may be optimal with a resulting lower dose (because of lower tissue absorption) and higher output from the x-ray tube (because of the option to use a higher acceleration voltage) Phase contrast is a different contrast mechanism that enhances other target properties than absorption contrast, which may be beneficial in some cases The dark-field signal provided by some phase-contrast realizations offers additional information on the small-angle scattering properties of the target. A quantitative comparison of phase- and absorption-contrast mammography that took realistic constraints into account (dose, geometry, and photon economy) concluded that grating-based phase-contrast imaging (Talbot interferometry) does not exhibit a general signal-difference-to-noise improvement relative to absorption contrast, but the performance is highly task dependent. Such a comparison is yet to be undertaken for all phase contrast methods, however, the following considerations are central to such a comparison: The optimal imaging energy for phase contrast is higher than for absorption contrast and independent of target. Differential phase contrast imaging methods such as, e.g., Analyser Based Imaging, Grating Based Imaging and Edge Illumination intrinsically detect the phase differential, which causes the noise-power spectrum to decrease rapidly with spatial frequency so that phase contrast is beneficial for small and sharp targets, e.g., tumor spicula rather than solid tumors, and for discrimination tasks rather than for detection tasks. 
Phase contrast favors detection of materials that differ in density compared to the background tissue, rather than materials with differences in atomic number. For instance, the improvement for detection / discrimination of calcified structures is less than the improvement for soft tissue. Grating-based imaging is relatively insensitive to spectrum bandwidth. It should also be noted, however, that other techniques such as propagation-based imaging and edge-illumination are even more insensitive, to the extent that they can be considered practically achromatic. In addition, if phase-contrast imaging is combined with an energy sensitive photon-counting detector, the detected spectrum can be weighted for optimal detection performance. Grating-based imaging is sensitive to the source size, which must be kept small; indeed, a "source" grating must be used to enable its implementation with low-brilliance x-ray sources. Similar considerations apply to propagation-based imaging and other approaches. The higher optimal energy in phase-contrast imaging compensates for some of the loss of flux when going to a smaller source size (because a higher acceleration voltage can be used for the x-ray tube), but photon economy remains to be an issue. It should be noted, however, that edge illumination was proven to work with source sizes of up to 100 micron, compatible with some existing mammography sources, without a source grating. Some of the tradeoffs are illustrated in the right-hand figure, which shows the benefit of phase contrast over absorption contrast for detection of different targets of relevance in mammography as a function of target size. Note that these results do not include potential benefits from the dark-field signal. Following preliminary, lab-based studies in e.g. computed tomography and mammography, phase contrast imaging is beginning to be applied in real medical applications, such as lung imaging, imaging of extremities, intra-operative specimen imaging. In vivo applications of phase contrast imaging have been kick-started by the pioneering mammography study with synchrotron radiation undertaken in Trieste, Italy. References External links Diagnostic radiology X-ray instrumentation Imaging Interferometry
Phase-contrast X-ray imaging
[ "Technology", "Engineering" ]
10,790
[ "X-ray instrumentation", "Measuring instruments" ]
35,154,764
https://en.wikipedia.org/wiki/Magnetic%20lattice%20%28accelerator%29
In accelerator physics, a magnetic lattice is a composition of electromagnets at given longitudinal positions around the vacuum tube of a particle accelerator, and thus along the path of the enclosed charged particle beam. The lattice properties have a large influence on the properties of the particle beam, which is shaped by the magnetic fields. Lattices can be closed (for cyclic accelerators such as synchrotrons) or linear (for linac facilities), and are also used at interconnects between different accelerator structures (transfer beamlines). Such a structure is needed for focusing of the particle beam in modern, large-scale facilities. Its basic elements are dipole magnets for deflection, quadrupole magnets for strong focusing, sextupole magnets for correction of chromatic aberration, and sometimes even higher order magnets. Many lattices are composed of identical substructures or cells, which denote a special magnet arrangement that may recur at several positions along the path. While almost all accelerator lattices in use in modern facilities are specifically designed for their particular purpose, lattice development usually starts from an idealized lattice design with high periodicity, mostly using only one base cell. The most widely known are the FODO lattice, which is the simplest possible strong focusing lattice, and the Chasman–Green lattice. References Accelerator physics
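A minimal sketch (Python) of how a single FODO cell can be analysed with thin-lens transfer matrices is given below; the focal length and drift length are arbitrary illustration values, and the cell is stable when the absolute value of the trace of the one-cell matrix is less than 2:

import numpy as np

def fodo_cell(f, L):
    # one thin-lens FODO cell: focusing quad, drift, defocusing quad, drift
    QF = np.array([[1.0, 0.0], [-1.0 / f, 1.0]])
    QD = np.array([[1.0, 0.0], [ 1.0 / f, 1.0]])
    D  = np.array([[1.0, L], [0.0, 1.0]])
    return D @ QD @ D @ QF

M = fodo_cell(f=5.0, L=4.0)            # assumed focal length and drift length (m)
trace = np.trace(M)
print("stable:", abs(trace) < 2)
print("phase advance per cell (deg):", np.degrees(np.arccos(trace / 2)))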
Magnetic lattice (accelerator)
[ "Physics" ]
285
[ "Applied and interdisciplinary physics", "Accelerator physics", "Experimental physics" ]
35,161,367
https://en.wikipedia.org/wiki/Register%E2%80%93memory%20architecture
In computer engineering, a register–memory architecture is an instruction set architecture that allows operations to be performed on (or from) memory, as well as registers. If the architecture allows all operands to be in memory or in registers, or in combinations, it is called a "register plus memory" architecture. In a register–memory approach one of the operands for operations such as the ADD operation may be in memory, while the other is in a register. This differs from a load–store architecture (used by RISC designs such as MIPS) in which both operands for an ADD operation must be in registers before the ADD. An example of register-memory architecture is Intel x86. Examples of register plus memory architecture are: IBM System/360 and its successors, which support memory-to-memory fixed-point decimal arithmetic operations, but not binary integer or floating-point arithmetic operations; PDP-11, which supports memory or register source and destination operands for most two-operand integer operations; VAX, which supports memory or register source and destination operands for binary integer and floating-point arithmetic; Motorola 68000 series, which supports integer arithmetic with a memory source or destination, but not with a memory source and destination. However, the 68000 can move data memory-to-memory with nearly all addressing modes. See also Load–store architecture Addressing mode References Computer architecture
Register–memory architecture
[ "Technology", "Engineering" ]
285
[ "Computers", "Computer engineering", "Computer architecture" ]
21,968,037
https://en.wikipedia.org/wiki/FDU%20materials
FDU materials are a class of regularly structured mesoporous organic materials first synthesized at Fudan University in Shanghai, China (hence FDU). FDU-14, -15 and -16 are formed by polymerizing resol around a lyotropic liquid crystal template and then removing the template by calcination. Notes Dongyuan Zhao et al. Angew. Chem. Int. Ed. 2005, 44, 7053 Materials
FDU materials
[ "Physics" ]
91
[ "Materials stubs", "Materials", "Matter" ]
21,973,091
https://en.wikipedia.org/wiki/Hibernia%20Gravity%20Base%20Structure
The Hibernia Gravity Base Structure is an offshore oil platform on the Hibernia oilfield southeast of St. John's, Newfoundland, Canada. A 600-kilotonne gravity base structure (GBS) built after the Ocean Ranger disaster, it sits in of water directly on the floor of the North Atlantic Ocean off St. John's, Newfoundland at . This GBS is designed to resist iceberg forces and supports a topsides weighing 39,000 tonnes at towout, increasing to 58,000 tonnes in operation. There were significant challenges faced by the engineering firms Doris Development Canada, Morrison Hershfield and Mobil Technology in developing a structural solution with adequate strength which was also constructible. In addition, unusual design situations resulted from the construction methods and the structural components used. Construction The majority of the construction was performed at a site in Bull Arm, Trinity Bay, Newfoundland and Labrador. A new community housing 3,500 workers was constructed, with its own cafeteria, gym and entertainment facilities. Many of the topsides modules were constructed locally, with some sourced internationally. The 550,000-ton slipform concrete GBS was built inside a drydock and mated with the topsides in the nearby deepwater construction site. Kiewit performed outfitting of equipment inside utility shafts and provided construction management services for the gravity base structure. The assembled GBS was towed out on May 23, 1997, and installed in position on June 5. First oil was produced on November 17, 1997, four weeks ahead of schedule. References External links Hibernia Gravity Base Structure Oil platforms off Canada Buildings and structures in Newfoundland and Labrador Petroleum industry in Newfoundland and Labrador
Hibernia Gravity Base Structure
[ "Chemistry" ]
336
[ "Petroleum", "Petroleum stubs" ]
21,974,455
https://en.wikipedia.org/wiki/Faradaic%20current
In electrochemistry, the faradaic current is the electric current generated by the reduction or oxidation of some chemical substance at an electrode. The net faradaic current is the algebraic sum of all the faradaic currents flowing through an indicator electrode or working electrode. Limiting current The limiting current in electrochemistry is the limiting value of a faradaic current that is approached as the rate of charge transfer to an electrode is increased. The limiting current can be approached, for example, by increasing the electric potential or decreasing the rate of mass transfer to the electrode. It is independent of the applied potential over a finite range, and is usually evaluated by subtracting the appropriate residual current from the measured total current. A limiting current can have the character of an adsorption, catalytic, diffusion, or kinetic current, and may include a migration current. Migration current The difference between the current that is actually obtained, at any particular value of the potential of the indicator or working electrode, for the reduction or oxidation of an ionic electroactive substance and the current that would be obtained, at the same potential, if there were no transport of that substance due to the electric field between the electrodes. The sign convention regarding current is such that the migration current is negative for the reduction of a cation or for the oxidation of an anion, and positive for the oxidation of a cation or the reduction of an anion. Hence the migration current may tend to either increase or decrease the total current observed. In any event the migration current approaches zero as the transport number of the electroactive substance is decreased by increasing the concentration of the supporting electrolyte, and hence the conductivity. See also Butler–Volmer equation Gas diffusion electrode References Electrochemistry
Faradaic current
[ "Chemistry" ]
354
[ "Electrochemistry" ]
21,975,254
https://en.wikipedia.org/wiki/L-form%20bacteria
L-form bacteria, also known as L-phase bacteria, L-phase variants or cell wall-deficient bacteria (CWDB), are growth forms derived from different bacteria. They lack cell walls. Two types of L-forms are distinguished: unstable L-forms, spheroplasts that are capable of dividing, but can revert to the original morphology, and stable L-forms, L-forms that are unable to revert to the original bacteria. Discovery and early studies L-form bacteria were first isolated in 1935 by Emmy Klieneberger-Nobel, who named them "L-forms" after the Lister Institute in London where she was working. She first interpreted these growth forms as symbionts related to pleuropneumonia-like organisms (PPLOs, later commonly called mycoplasmas). Mycoplasmas (now in scientific classification called Mollicutes), parasitic or saprotrophic species of bacteria, also lack a cell wall (peptidoglycan/murein is absent). Morphologically, they resemble L-form bacteria. Therefore, mycoplasmas formerly were sometimes considered stable L-forms or, because of their small size, even viruses, but phylogenetic analysis has identified them as bacteria that have lost their cell walls in the course of evolution. Both, mycoplasmas and L-form bacteria are resistant against penicillin. After the discovery of PPLOs (mycoplasmas/Mollicutes) and L-form bacteria, their mode of reproduction (proliferation) became a major subject of discussion. In 1954, using phase-contrast microscopy, continual observations of live cells have shown that L-form bacteria (previously also called L-phase bacteria) and pleuropneumonia-like organisms (PPLOs, now mycoplasmas/Mollicutes) ) do not proliferate by binary fission, but by a uni- or multi-polar budding mechanism. Microphotograph series of growing microcultures of different strains of L-form bacteria, PPLOs and, as a control, a Micrococcus species (dividing by binary fission) have been presented. Additionally, electron microscopic studies have been performed. Appearance and cell division Bacterial morphology is determined by the cell wall. Since the L-form has no cell wall, its morphology is different from that of the strain of bacteria from which it is derived. Typical L-form cells are spheres or spheroids. For example, L-forms of the rod-shaped bacterium Bacillus subtilis appear round when viewed by phase contrast microscopy or by transmission electron microscopy. Although L-forms can develop from Gram-positive as well as from Gram-negative bacteria, in a Gram stain test, the L-forms always colour Gram-negative, due to the lack of a cell wall. The cell wall is important for cell division, which, in most bacteria, occurs by binary fission. This process usually requires a cell wall and components of the bacterial cytoskeleton such as FtsZ. The ability of L-form bacteria and mycoplasmas to grow and divide in the absence of both of these structures is highly unusual, and may represent a form of cell division that was important in early forms of life. This mode of division seems to involve the extension of thin protrusions from the cell's surface and these protrusions then pinching off to form new cells. The lack of cell wall in L-forms means that division is disorganised, giving rise to a variety of cell sizes, from very tiny to very big. Generation in cultures L-forms can be generated in the laboratory from many bacterial species that usually have cell walls, such as Bacillus subtilis or Escherichia coli. 
This is done by inhibiting peptidoglycan synthesis with antibiotics or treating the cells with lysozyme, an enzyme that digests cell walls. The L-forms are generated in a culture medium that is the same osmolarity as the bacterial cytosol (an isotonic solution), which prevents cell lysis by osmotic shock. L-form strains can be unstable, tending to revert to the normal form of the bacteria by regrowing a cell wall, but this can be prevented by long-term culture of the cells under the same conditions that were used to produce them – letting the wall-disabling mutations to accumulate by genetic drift. Some studies have identified mutations that occur, as these strains are derived from normal bacteria. One such point mutation D92E is in an enzyme yqiD/ispA () involved in the mevalonate pathway of lipid metabolism that increased the frequency of L-form formation 1,000-fold. The reason for this effect is not known, but it is presumed that the increase is related to this enzyme's role in making a lipid important in peptidoglycan synthesis. Another methodology of induction relies on nanotechnology and landscape ecology. Microfluidics devices can be built in order to challenge peptidoglycan synthesis by extreme spatial confinement. After biological dispersal through a constricted (sub-micrometre scale) biological corridor connecting adjacent micro habitat patches, L-form-like cells can be derived using a microfluifics-based (synthetic) ecosystem implementing an adaptive landscape selecting for shape-shifting phenotypes similar to L-forms. Significance and applications Some publications have suggested that L-form bacteria might cause diseases in humans, and other animals but, as the evidence that links these organisms to disease is fragmentary and frequently contradictory, this hypothesis remains controversial. The two extreme viewpoints on this question are that L-form bacteria are either laboratory curiosities of no clinical significance or important but unappreciated causes of disease. Research on L-form bacteria is continuing. For example, L-form organisms have been observed in mouse lungs after experimental inoculation with Nocardia caviae, and a recent study suggested that these organisms may infect immunosuppressed patients having undergone bone marrow transplants. The formation of strains of bacteria lacking cell walls has also been proposed to be important in the acquisition of bacterial antibiotic resistance. L-form bacteria may be useful in research on early forms of life, and in biotechnology. These strains are being examined for possible uses in biotechnology as host strains for recombinant protein production. Here, the absence of a cell wall can allow production of large amounts of secreted proteins that would otherwise accumulate in the periplasmic space of bacteria. L-form bacteria are seen as a persister cells, and a source of recurrent infection that has become of medical interest. See also Mycoplasmataceae—lack peptidoglycan but supplement their membranes with sterols for stability. Protoplast Spheroplast Ultramicrobacteria References Further reading External links Errington Group at Newcastle University Scientists explore new window on the origins of life 2009 Newcastle University press release Cell biology Bacteria
L-form bacteria
[ "Biology" ]
1,457
[ "Cell biology", "Microorganisms", "Prokaryotes", "Bacteria" ]
21,977,771
https://en.wikipedia.org/wiki/P-delta%20effect
In structural engineering, the P-Δ or P-delta effect refers to the abrupt changes in ground shear, overturning moment, and/or the axial force distribution at the base of a sufficiently tall structure or structural component when it is subject to a critical lateral displacement. A distinction can be made between P-delta effects on a multi-tiered building, written as P-Δ, and the effects on members deflecting within a tier, written as P-δ. P-delta is a second-order effect on a structure which is loaded laterally. One first-order effect is the initial deflection of the structure in reaction to the lateral load. The magnitude of the P-delta effect depends on the magnitude of this initial deflection. P-delta is a moment found by multiplying the force due to the weight of the structure and applied axial load, P, by the first-order deflection, Δ or δ. Numerical example of the P-delta effect Consider a rigid vertical rod, 1 metre tall, that rotates on a hinge at its base. A 1 newton load acts on the top of the rod, and the hinge has a rotational stiffness of 0.8 newton metres per radian of rotation. When the rod is rotated by an angle θ (in radians), the vertical load acts at a horizontal offset of sin θ metres from the hinge, so moment equilibrium requires 0.8·θ = sin θ, i.e. θ = sin(θ)/0.8. Starting from any non-zero initial rotation and repeatedly applying θ_next = sin(θ_last)/0.8 gives, for a starting value of 0.1 rad, the sequence 0.1, 0.124, 0.156, 0.194, 0.241, 0.300, 0.367, 0.448, 0.542, 0.645, 0.751, 0.853, 0.942, 1.01, 1.06, 1.09, 1.11, 1.12, 1.12, 1.13, 1.13, ..., which converges to about 1.13 radians, where the rod is in stable equilibrium (a short script reproducing this iteration is given below). A P-delta analysis finds the stable final deformed shape of a structure in the same way that the rod iterates to its final deformed position at 1.13 radians: iteratively repeated linear structural analyses can solve a non-linear structural analysis problem, and it takes multiple iterations of a linear analysis to compute the final deformed shape of a structure in which the P-delta effect is significant. To illustrate the effect, consider a case in statics: a perfectly rigid body anchored to the ground and subject to small lateral forces. In this example, a concentrated vertical load applied to the top of the structure and the weight of the structure itself are used to compute the ground reaction force and moment. Real structures are flexible and will bend to the side. The amount of bending is found through a strength of materials analysis. During this side displacement, the top has changed position and the structure is experiencing an additional moment, P×Δ, or near the middle, P×δ. This moment is not accounted for in a basic first-order analysis. By superposition, the structure responds to this moment by additional bending and displacement at the top. In some sense, the P-delta effect is similar to the buckling load of an elastic, small-scale solid column given the boundary conditions of a free end on top and a completely restrained end at the bottom, with the exception that there may exist an invariant vertical load at the top of the column. A rod planted firmly into the ground, given a constant cross-section, can only extend so far up before it buckles under its own weight; in this case the lateral displacement for the solid is an infinitesimal quantity governed by Euler buckling.
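A minimal script (Python) reproducing the fixed-point iteration from the rigid-rod example above:

import math

# theta = sin(theta) / 0.8 is the moment-equilibrium condition for the 1 m rod,
# 1 N load and 0.8 N*m/rad hinge stiffness used in the example.
theta = 0.1                       # initial guess (radians)
for _ in range(50):
    theta = math.sin(theta) / 0.8
print(round(theta, 2))            # converges to about 1.13 rad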
If the lateral displacement and/or the vertical axial loads through the structure are significant then a P-delta analysis should be performed to account for the non-linearities. References Lindeburg, M.R., Baradar, M. Seismic Design of Building Structures : A Professional's Introduction to Earthquake Forces and Design Details (8th ed.). Professional Publications, Inc. Belmont, CA (2001). Comino, P. What is P-Delta Analysis? SkyCiv Engineering. Sydney, Australia (2016). Statics
P-delta effect
[ "Physics" ]
898
[ "Statics", "Classical mechanics" ]
26,330,146
https://en.wikipedia.org/wiki/Apparent%20infection%20rate
Apparent infection rate is an estimate of the rate of progress of a disease, based on proportional measures of the extent of infection at different times. Firstly, a proportional measure of the extent of infection is chosen as the disease extent metric. For example, the metric might be the proportion of leaf area affected by mildew or the proportion of plants in a population showing dieback lesions. Measures of disease extent are then taken over time, and a mathematical model is fitted. The model is based on two assumptions: the progress of the infection is constrained by the amount of tissue that remains to be infected; and if it were not so constrained, the extent of infection would exhibit exponential growth. There is a single model parameter r, which is the apparent infection rate. It can be calculated analytically using the formula r = 1/(t2 − t1) · ln[x2(1 − x1) / (x1(1 − x2))], where r is the apparent infection rate, t1 is the time of the first measurement, t2 is the time of the second measurement, x1 is the proportion of infection measured at time t1 and x2 is the proportion of infection measured at time t2. See also Odds ratio Basic reproduction number References Epidemiology Mathematical modeling
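A minimal sketch (Python) of this calculation, using made-up example measurements:

import math

def apparent_infection_rate(t1, x1, t2, x2):
    # apparent infection rate r of the logistic disease-progress model above
    return (1.0 / (t2 - t1)) * math.log((x2 * (1 - x1)) / (x1 * (1 - x2)))

# hypothetical measurements: 5% of leaf area infected on day 0, 40% on day 14
print(apparent_infection_rate(0, 0.05, 14, 0.40))     # about 0.18 per day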
Apparent infection rate
[ "Mathematics", "Environmental_science" ]
224
[ "Epidemiology", "Applied mathematics", "Mathematical modeling", "Environmental social science" ]
26,330,426
https://en.wikipedia.org/wiki/Failure%20detector
In a distributed computing system, a failure detector is a computer application or a subsystem that is responsible for the detection of node failures or crashes. Failure detectors were first introduced in 1996 by Chandra and Toueg in their paper Unreliable Failure Detectors for Reliable Distributed Systems. The paper depicts the failure detector as a tool to improve consensus (the achievement of reliability) and atomic broadcast (the same sequence of messages) in the distributed system. In other words, failure detectors seek errors in the process, and the system will maintain a level of reliability. In practice, after failure detectors spot crashes, the system will ban the processes that are making mistakes to prevent any further serious crashes or errors. In the 21st century, failure detectors are widely used in distributed computing systems to detect application errors, such as when a software application stops functioning properly. As distributed computing projects (see List of distributed computing projects) become more and more popular, the usage of failure detectors also becomes important and critical. Origin Unreliable failure detector Chandra and Toueg, the co-authors of the paper Unreliable Failure Detectors for Reliable Distributed Systems (1996), approached the concept of detecting failure nodes by introducing the unreliable failure detector. They describe the behavior of an unreliable failure detector in a distributed computing system as follows: each process in the system is equipped with a local failure detector component, and each local component examines a portion of all processes within the system. In addition, each process must also maintain a record of the processes that are currently suspected by failure detectors. Failure detector Chandra and Toueg claimed that an unreliable failure detector can still be reliable in detecting the errors made by the system. They generalize unreliable failure detectors to all forms of failure detectors because unreliable failure detectors and failure detectors share the same properties. Furthermore, Chandra and Toueg point out the important fact that the failure detector does not prevent any crashes in the system, even if the crashed program has been suspected previously. The construction of a failure detector is an essential but very difficult problem in the development of the fault-tolerant components of a distributed computer system. As a result, the failure detector was invented because of the need to detect errors in the massive information transactions of distributed computing systems. Properties The classes of failure detectors are distinguished by two important properties: completeness and accuracy. Completeness means that the failure detector eventually finds the processes that have crashed, whereas accuracy restricts the mistakes (wrong suspicions) that the failure detector can make. Degrees of completeness The degrees of completeness depend on the number of crashed processes suspected by a failure detector in a certain period. Strong completeness: "every faulty process is eventually permanently suspected by every non-faulty process." Weak completeness: "every faulty process is eventually permanently suspected by some non-faulty process." Degrees of accuracy The degrees of accuracy depend on the number of mistakes that a failure detector made in a certain period. Strong accuracy: "no process is suspected (by anybody) before it crashes." Weak accuracy: "some non-faulty process is never suspected."
Eventual strong accuracy: "after some initial period of confusion, no non-faulty process is ever suspected" (the initial period of confusion can be thought of as ending at the time at which the last crash occurs). Eventual weak accuracy: "after some initial period of confusion, some non-faulty process is never suspected." Classification Failure detectors can be categorized into the following eight types: Perfect failure detectors (P) Eventually perfect failure detectors (♦P) Strong failure detectors (S) Eventually strong failure detectors (♦S) Weak failure detectors (W) Eventually weak failure detectors (♦W) Quasi-perfect failure detectors (Q) Eventually quasi-perfect failure detectors (♦Q) The properties of these failure detectors are described below: In a nutshell, the properties of a failure detector depend on how fast it detects actual failures and how well it avoids false detections. A perfect failure detector will find all failures without making any mistakes, whereas a weak failure detector may miss failures and make numerous mistakes. Applications Different types of failure detectors can be obtained by changing the properties of failure detectors. The first example below shows how to increase the completeness of a failure detector, and the second example shows how to change one type of failure detector into another. Boosting completeness The following is an example abstracted from the Department of Computer Science at Yale University; it boosts the completeness of a failure detector (a runnable sketch of this scheme appears below). initially suspects = ∅ do forever: for each process p: if my weak-detector suspects p, then send p to all processes upon receiving p from some process q: suspects := suspects + p - q From the example above, if p crashes, then the weak detector will eventually suspect it. Because every suspicion is broadcast, every failure detector in the system will eventually suspect p. This example also shows that a failure detector with only weak completeness can eventually suspect every crash; the eventual detection of crashed processes therefore does not depend on which degree of completeness the underlying detector provides. Reducing a failure detector W to a failure detector S The following correctness arguments must be satisfied by an algorithm that transforms a failure detector W into a failure detector S. The failure detector W is weak in completeness, and the failure detector S is strong in completeness; they are both weak in accuracy. It transforms weak completeness into strong completeness. It preserves the perpetual accuracy. It preserves the eventual accuracy. If all of the arguments above are satisfied, the reduction of the weak failure detector W to the strong failure detector S is correct within the distributed computing system. See also Distributed computing List of distributed computing projects SWIM Protocol Crash (computing) Fault tolerance Consensus Atomic broadcast References External links http://www.cs.yale.edu/homes/aspnes/pinewiki/FailureDetectors.html http://www.cs.cornell.edu/home/sam/FDpapers.html Distributed computing Fault-tolerant computer systems
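The boosting example above is given only as pseudocode. The following Python sketch simulates a few synchronous rounds of the same broadcast scheme for four processes, one of which has crashed, and shows every correct process ending up suspecting it. It is an illustration written for this article rather than code from the cited Yale notes, and all names in it (Process, local_weak_detection, broadcast_round, and so on) are invented for the example.

# Illustrative sketch (not from the cited sources): boosting weak completeness to
# strong completeness by broadcasting suspicions, simulated in a single program.
class Process:
    def __init__(self, pid):
        self.pid = pid
        self.alive = True
        self.locally_suspected = set()   # output of the local "weak" detector
        self.suspects = set()            # boosted suspect list
        self.inbox = []                  # suspicions received from other processes

    def local_weak_detection(self, crashed, monitored):
        # The weak detector only watches a subset of processes ("monitored"),
        # so on its own it provides only weak completeness.
        self.locally_suspected |= (crashed & monitored)

    def broadcast_round(self, processes):
        # "if my weak-detector suspects p, then send p to all processes"
        for p in self.locally_suspected:
            for other in processes:
                if other.alive:
                    other.inbox.append((p, self.pid))

    def deliver_round(self):
        # "upon receiving p from some process q: suspects := suspects + p - q"
        for p, q in self.inbox:
            self.suspects.add(p)
            self.suspects.discard(q)   # q just sent a message, so q is not suspected
        self.inbox.clear()


def simulate():
    procs = {i: Process(i) for i in range(4)}
    crashed = {3}                      # process 3 crashes
    procs[3].alive = False
    # Each correct process monitors only one other process: weak completeness.
    monitors = {0: {3}, 1: {0}, 2: {1}}
    for _ in range(3):                 # a few rounds of the "do forever" loop
        for i, watched in monitors.items():
            procs[i].local_weak_detection(crashed, watched)
            procs[i].broadcast_round(list(procs.values()))
        for i in monitors:
            procs[i].deliver_round()
    for i in sorted(monitors):
        print(f"process {i} suspects {sorted(procs[i].suspects)}")
    # Every correct process ends up suspecting process 3: strong completeness.


if __name__ == "__main__":
    simulate()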
Failure detector
[ "Technology", "Engineering" ]
1,204
[ "Fault-tolerant computer systems", "Reliability engineering", "Computer systems" ]
26,331,205
https://en.wikipedia.org/wiki/Turbo%20equalizer
In digital communications, a turbo equalizer is a type of receiver used to receive a message corrupted by a communication channel with intersymbol interference (ISI). It approaches the performance of a maximum a posteriori (MAP) receiver via iterative message passing between a soft-in soft-out (SISO) equalizer and a SISO decoder. It is related to turbo codes in that a turbo equalizer may be considered a type of iterative decoder if the channel is viewed as a non-redundant convolutional code. The turbo equalizer is different from classic a turbo-like code, however, in that the 'channel code' adds no redundancy and therefore can only be used to remove non-gaussian noise. History Turbo codes were invented by Claude Berrou in 1990–1991. In 1993, turbo codes were introduced publicly via a paper listing authors Berrou, Glavieux, and Thitimajshima. In 1995 a novel extension of the turbo principle was applied to an equalizer by Douillard, Jézéquel, and Berrou. In particular, they formulated the ISI receiver problem as a turbo code decoding problem, where the channel is thought of as a rate 1 convolutional code and the error correction coding is the second code. In 1997, Glavieux, Laot, and Labat demonstrated that a linear equalizer could be used in a turbo equalizer framework. This discovery made turbo equalization computationally efficient enough to be applied to a wide range of applications. Overview Standard communication system overview Before discussing turbo equalizers, it is necessary to understand the basic receiver in the context of a communication system. This is the topic of this section. At the transmitter, information bits are encoded. Encoding adds redundancy by mapping the information bits to a longer bit vector – the code bit vector . The encoded bits are then interleaved. Interleaving permutes the order of the code bits resulting in bits . The main reason for doing this is to insulate the information bits from bursty noise. Next, the symbol mapper maps the bits into complex symbols . These digital symbols are then converted into analog symbols with a D/A converter. Typically the signal is then up-converted to pass band frequencies by mixing it with a carrier signal. This is a necessary step for complex symbols. The signal is then ready to be transmitted through the channel. At the receiver, the operations performed by the transmitter are reversed to recover , an estimate of the information bits. The down-converter mixes the signal back down to baseband. The A/D converter then samples the analog signal, making it digital. At this point, is recovered. The signal is what would be received if were transmitted through the digital baseband equivalent of the channel plus noise. The signal is then equalized. The equalizer attempts to unravel the ISI in the received signal to recover the transmitted symbols. It then outputs the bits associated with those symbols. The vector may represent hard decisions on the bits or soft decisions. If the equalizer makes soft decisions, it outputs information relating to the probability of the bit being a 0 or a 1. If the equalizer makes hard decisions on the bits, it quantizes the soft bit decisions and outputs either a 0 or a 1. Next, the signal is deinterleaved which is a simple permutation transformation that undoes the transformation the interleaver executed. Finally, the bits are decoded by the decoder. The decoder estimates from . A diagram of the communication system is shown below. 
In this diagram, the channel is the equivalent baseband channel, meaning that it encompasses the D/A, the up converter, the channel, the down converter, and the A/D. Turbo equalizer overview The block diagram of a communication system employing a turbo equalizer is shown below. The turbo equalizer encompasses the equalizer, the decoder, and the blocks in between. The difference between a turbo equalizer and a standard equalizer is the feedback loop from the decoder to the equalizer. Due to the structure of the code, the decoder not only estimates the information bits, but also discovers new information about the coded bits. The decoder is therefore able to output extrinsic information about the likelihood that a certain code bit stream was transmitted. Extrinsic information is new information that is not derived from the information input to the block. This extrinsic information is then mapped back into information about the transmitted symbols for use in the equalizer. These extrinsic symbol likelihoods are fed into the equalizer as a priori symbol probabilities. The equalizer uses this a priori information as well as the input signal to estimate extrinsic probability information about the transmitted symbols. The a priori information fed to the equalizer is initialized to 0, meaning that the initial estimate made by the turbo equalizer is identical to the estimate made by the standard receiver. The equalizer's extrinsic information is then mapped back into information about the coded bits for use by the decoder. The turbo equalizer repeats this iterative process until a stopping criterion is reached. Turbo equalization in practical systems In practical turbo equalization implementations, an additional issue needs to be considered. The channel state information (CSI) that the equalizer operates on comes from some channel estimation technique and is hence unreliable. Firstly, in order to improve the reliability of the CSI, it is desirable to include the channel estimation block in the turbo equalization loop as well, and to perform soft- or hard-decision-directed channel estimation within each turbo equalization iteration. Secondly, incorporating the presence of CSI uncertainty into the turbo equalizer design leads to a more robust approach with significant performance gains in practical scenarios. References Further reading — primer on turbo equalization. Since it was written for the signal processing community in general, it is relatively accessible. — offers a detailed, clear explanation of turbo equalization. See also Equalizer (communications) Telecommunication theory Signal processing
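The receiver chain described above ultimately presents the equalizer with samples from the equivalent discrete-time baseband channel, that is, the transmitted symbols convolved with the channel impulse response plus noise. The following Python/NumPy sketch simulates only that channel model for BPSK symbols and a short ISI channel; it is a minimal illustration of the signal a (turbo) equalizer has to work with, not an implementation of turbo equalization itself, and the channel taps, block length and SNR are arbitrary assumed values.

# Minimal sketch of the equivalent discrete-time baseband ISI channel y = h * x + n.
# The channel taps, block length and SNR below are arbitrary example values.
import numpy as np

rng = np.random.default_rng(0)

n_bits = 16
bits = rng.integers(0, 2, n_bits)          # bits entering the symbol mapper (no coding here)
symbols = 1.0 - 2.0 * bits                  # BPSK mapping: 0 -> +1, 1 -> -1

h = np.array([0.9, 0.4, 0.2])               # example 3-tap ISI channel impulse response
snr_db = 10.0
noise_var = 10 ** (-snr_db / 10)

received = np.convolve(symbols, h, mode="full")[:n_bits]     # ISI
received += np.sqrt(noise_var) * rng.standard_normal(n_bits)  # additive noise

# Without any equalization, simple sign detection suffers from the ISI:
hard_decisions = (received < 0).astype(int)
print("bit errors without equalization:", np.count_nonzero(hard_decisions != bits))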
Turbo equalizer
[ "Technology", "Engineering" ]
1,244
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
26,334,458
https://en.wikipedia.org/wiki/Journal%20of%20Polymer%20Science
Journal of Polymer Science is a peer-reviewed journal of polymer science currently published by John Wiley & Sons. It was originally established as the Journal of Polymer Science in 1946 by Interscience Publishers and the founding editor Herman F. Mark, but it was split into several parts in 1962. The journal has undergone re-organization several times since. In 2020, the journal was consolidated into a single publication. The editor-in-chief is Joseph W. Krumpfer. History Establishment Journal of Polymer Science (1946–1962), First re-organization Journal of Polymer Science Part A: General Papers (1963–1965), Journal of Polymer Science Part A-1: Polymer Chemistry (1966–September 1972), Journal of Polymer Science Part A-2: Polymer Physics (1966–September 1972), Journal of Polymer Science Part B: Polymer Letters (1963–September 1972), Journal of Polymer Science Part C: Polymer Symposia (1963–1972), The coverage of biopolymers was split into a distinct journal, Biopolymers. Second re-organization Journal of Polymer Science: Polymer Physics Edition (October 1972 – 1985), Journal of Polymer Science: Polymer Letters Edition (October 1972 – 1985), Journal of Polymer Science: Polymer Chemistry Edition (1973–1985), Journal of Polymer Science: Polymer Symposia (1973–1986), Third re-organization Journal of Polymer Science Part A: Polymer Chemistry (1986–2019), Journal of Polymer Science Part B: Polymer Physics (1986–2019), Journal of Polymer Science Part C: Polymer Letters (1986–1990), Fourth re-organization Journal of Polymer Science (2020 onwards), References External links Chemistry journals Materials science journals Academic journals established in 1946 Wiley (publisher) academic journals English-language journals
Journal of Polymer Science
[ "Materials_science", "Engineering" ]
365
[ "Materials science journals", "Materials science" ]
26,335,058
https://en.wikipedia.org/wiki/SPARQCode
A SPARQCode is a matrix code (or two-dimensional bar code) encoding standard that is based on the physical QR Code definition created by the Japanese corporation Denso-Wave. Overview The QR Code standard as defined by Denso-Wave in ISO/IEC 18004 covers the physical encoding method of a binary data stream. However, the Denso-Wave standard lacks an encoding standard for interpreting the data stream on the application layer, for example for decoding URLs, phone numbers, and all other data types. NTT Docomo has established de facto standards for encoding some data types, such as URLs and contact information, in Japan, but not all applications in other countries adhere to this convention, as listed by the open-source project "zxing" for QR Code data types. Encoding standards The SPARQCode encoding standard specifies a convention for encoding the following data types. E-mail address Phone Number SMS TEXT MAP URL BIZCARD MeCard vCard BlackBerry PIN Geographic information Google Play link Wifi Network config for Android YouTube URI iCalendar The SPARQCode convention also recommends, but does not require, the inclusion of visual pictograms to denote the type of encoded data. License The use of the SPARQCode standard is free of any license. The term SPARQCode itself is a trademark of MSKYNET, but the company has chosen to make its use royalty-free. References Barcodes Encodings Automatic identification and data capture NTT Docomo
SPARQCode
[ "Technology" ]
304
[ "Members of the Conexus Mobile Alliance", "Data", "NTT Docomo", "Automatic identification and data capture" ]
26,340,005
https://en.wikipedia.org/wiki/Thyristor-controlled%20reactor
In an electric power transmission system, a thyristor-controlled reactor (TCR) is a reactance connected in series with a bidirectional thyristor valve. The thyristor valve is phase-controlled, which allows the value of delivered reactive power to be adjusted to meet varying system conditions. Thyristor-controlled reactors can be used for limiting voltage rises on lightly loaded transmission lines. Another device which used to be used for this purpose is a magnetically controlled reactor (MCR), a type of magnetic amplifier otherwise known as a transductor. In parallel with series connected reactance and thyristor valve, there may also be a capacitor bank, which may be permanently connected or which may use mechanical or thyristor switching. The combination is called a static VAR compensator. Circuit diagram A thyristor controlled reactor is usually a three-phase assembly, normally connected in a delta arrangement to provide partial cancellation of harmonics. Often the main TCR reactor is split into two halves, with the thyristor valve connected between the two halves. This protects the vulnerable thyristor valve from damage due to flashovers, lightning strikes etc. Operating principles The current in the TCR is varied from maximum (determined by the connection voltage and the inductance of the reactor) to almost zero by varying the "Firing Delay Angle", α. α is defined as the delay angle from the point at which the voltage becomes positive to the point at which the thyristor valve is turned on and current starts to flow. Maximum current is obtained when α is 90°, at which point the TCR is said to be in "full conduction" and the rms current is given by: Where: Vsvc is the rms value of the line-to-line busbar voltage to which the SVC is connected Ltcr is the total TCR inductance per phase The current lags 90° behind the voltage in accordance with classical AC circuit theory. As α increases above 90°, up to a maximum of 180°, the current decreases and becomes discontinuous and non-sinusoidal. The TCR current, as a function of time, is then given by: Otherwise, zero. Main equipment A TCR comprises two main items of equipment: the reactor itself, which is usually air-cored (although iron-cored reactors are possible) and the thyristor valve. Depending on the system voltage, an intermediate power transformer may be required to step up from the voltage handled by the thyristors to the transmission system voltage. Thyristor valve The thyristor valve typically consists of 5-20 inverse-parallel-connected pairs of thyristors connected in series. The inverse-parallel connection is needed because most commercially available thyristors can conduct current in only one direction. The series connection is needed because the maximum voltage rating of commercially available thyristors (up to approximately 8.5 kV) is insufficient for the voltage at which the TCR is connected. For some low-voltage applications, it may be possible to avoid the series-connection of thyristors; in such cases the thyristor valve is simply an inverse-parallel connection of two thyristors. In addition to the thyristors themselves, each inverse-parallel pair of thyristors has a resistor - capacitor circuit connected across it, to force the voltage across the valve to divide uniformly amongst the thyristors and to damp the "commutation overshoot" which occurs when the valve turns off. Harmonics A TCR operating with α > 90° generates substantial amounts of harmonic currents, particularly at 3rd, 5th and 7th harmonics. 
By connecting the TCR in delta, the harmonic currents of order 3n ("triplen harmonics") flow only around the delta and do not escape into the connected AC system. However, the 5th and 7th harmonics (and to a lesser extent 11th, 13th, 17th etc.) must be filtered in order to prevent excessive voltage distortion on the AC network. This is usually accomplished by connecting harmonic filters in parallel with the TCR. The filters provide capacitive reactive power which partly offsets the inductive reactive power provided by the TCR. References Electric power Electric power systems components
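The rms-current and instantaneous-current expressions referred to in the Operating principles section above did not survive formatting. As an illustration, the Python sketch below assumes the standard textbook relations for a phase-controlled reactor: an instantaneous current proportional to (cos α − cos ωt) over the conduction interval α ≤ ωt ≤ 2π − α, an identical pulse of opposite sign in the other half of the cycle, and a full-conduction rms value of Vsvc/(2πf·Ltcr) at α = 90°. It numerically confirms that value and shows the rms current falling to zero as α approaches 180°. These relations are an assumption drawn from general SVC theory, not a restoration of the article's own equations, and the voltage and inductance values are arbitrary examples.

# Hedged sketch: TCR rms current versus firing delay angle alpha, using the
# assumed standard phase-control relation
#   i(theta) = (sqrt(2)*V/(w*L)) * (cos(alpha) - cos(theta)),  alpha <= theta <= 2*pi - alpha,
# with a symmetric pulse of opposite sign in the other half-cycle.
# All numerical values below are arbitrary examples, not real plant data.
import numpy as np

V_ll = 11e3          # example line-to-line busbar voltage [V]
f = 50.0             # supply frequency [Hz]
L = 0.2              # example TCR inductance per phase [H]
w = 2.0 * np.pi * f

i_full_conduction = V_ll / (w * L)   # expected rms current at alpha = 90 degrees

def tcr_rms(alpha_deg, n=20001):
    """Numerically average the squared current over one cycle."""
    a = np.radians(alpha_deg)
    theta = np.linspace(a, 2.0 * np.pi - a, n)          # one conduction pulse
    i_pulse = np.sqrt(2.0) * V_ll / (w * L) * (np.cos(a) - np.cos(theta))
    # Two identical (opposite-sign) pulses per cycle -> full-cycle mean square:
    mean_sq = 2.0 * (np.pi - a) * np.mean(i_pulse ** 2) / np.pi
    return np.sqrt(mean_sq)

for alpha in (90, 110, 130, 150, 170, 180):
    print(f"alpha = {alpha:3d} deg : I_rms = {tcr_rms(alpha):8.1f} A")
print(f"full-conduction value V/(2*pi*f*L) = {i_full_conduction:8.1f} A")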
Thyristor-controlled reactor
[ "Physics", "Engineering" ]
875
[ "Power (physics)", "Electrical engineering", "Electric power", "Physical quantities" ]
45,048,610
https://en.wikipedia.org/wiki/CXCR4%20antagonist
A CXCR4 antagonist is a substance which blocks the CXCR4 receptor and prevents its activation. Blocking the receptor stops the receptor's ligand, CXCL12, from binding, which prevents downstream effects. CXCR4 antagonists are especially important for hindering cancer progression because one of the downstream effects initiated by CXCR4 receptor activation is cell movement, which helps the spread of cancer, known as metastasis. The CXCR4 receptor has been targeted by antagonistic substances since it was identified as a co-receptor for HIV and as a contributor to the development of cancer. Macrocyclic ligands have been utilised as CXCR4 antagonists. Plerixafor is an example of a CXCR4 antagonist, and has approvals (e.g. US FDA 2008) for clinical use (to mobilize hematopoietic stem cells). BL-8040 is a CXCR4 antagonist that has undergone clinical trials (e.g. in various leukemias), with one planned for pancreatic cancer (in combination with pembrolizumab). Previously called BKT140, it is a synthetic cyclic 14-residue peptide with an aromatic ring. In a 2018 mouse tumor model study, BL-8040 treatment enhanced the anti-tumor immune response, potentially by increasing the number of CD8+ T-cells in the tumor microenvironment. Mavorixafor (Xolremdi) is a small-molecule drug that targets CXCR4 mutations; it was approved for medical use in the United States in April 2024 for the treatment of WHIM syndrome. References Cancer treatments Receptor antagonists
CXCR4 antagonist
[ "Chemistry" ]
345
[ "Neurochemistry", "Receptor antagonists" ]
45,048,969
https://en.wikipedia.org/wiki/Nanoscale%20secondary%20ion%20mass%20spectrometry
NanoSIMS (nanoscale secondary ion mass spectrometry) is an analytical instrument manufactured by CAMECA which operates on the principle of secondary ion mass spectrometry. The NanoSIMS is used to acquire nanoscale resolution measurements of the elemental and isotopic composition of a sample. The NanoSIMS is able to create nanoscale maps of elemental or isotopic distribution, parallel acquisition of up to seven masses, isotopic identification, high mass resolution, subparts-per-million sensitivity with spatial resolution down to 50 nm. The original design of the NanoSIMS instrument was conceived by Georges Slodzian at the University of Paris Sud in France and at the Office National d'Etudes et de Recherches Aérospatiales. There are currently around 50 NanoSIMS instruments worldwide. How it works The NanoSIMS uses an ion source to produce a primary beam of ions. These primary ions erode the sample surface and produce atomic collisions, some of these collisions result in the release of secondary ion particles. These ions are transmitted through a mass spectrometer, where the masses are measured and identified. The primary ion beam is rastered across the sample surface and a ‘map’ of the element and isotope distribution is created by counting the number of ions that originated from each pixel with at best a 50 nanometer (nm) resolution, 10-50 times greater than conventional SIMS. This is achieved by positioning the primary probe in close proximity to the sample using a coaxial lens assembly. The primary ion beam impacts the sample surface at 90°, with the secondary ions extracted back through the same lens assembly. This allows for the isotopic composition of individual cells to be distinguished at parts per million (ppm) or parts per billion (ppb) range. The main drawback of this set up is that the primary and secondary ion beams must be of opposite polarity which can limit which elements can be detected simultaneously. NanoSIMS can detect minute mass differences between ions at the resolution of M/dM > 5000, where M is the nominal mass of the isotope and dM is the mass difference between the isotopes of interest. The high mass resolution capabilities of NanoSIMS allows for different elements and their isotopes to be identified and spatially mapped in the sample, even if very close in mass. The mass spectrometer is capable of multicollection, meaning up to 5 (NanoSIMS 50) or 7 (NanoSIMS 50 L) masses can be simultaneously detected, from hydrogen to uranium, though with limitations. The relatively large number of masses helps eliminate measurement errors as possible changes in instrumental or sample conditions that may occur in between runs are avoided. The ion beam must either be set to detect negative or positive ions, commonly completed by using a cesium+ or oxygen- beam, respectively. The high mass resolution achievable is particularly relevant to biological applications. For example, nitrogen is one of the most common elements in organisms. However, due to the low electron affinity of the nitrogen atom, the production of secondary ions is rare. Instead, molecules such as CN can be generated and measured. However, due to isotope combinations, such as the isobars 13C14N-, and 12C15N-, nearly identical molecular weights of 27.000 and 27.006 daltons, respectively, will be generated. 
Unlike other imaging techniques, where 13C14N and 12C15N cannot be independently measured due to nearly identical masses, NanoSIMS can safely distinguish the differences between these molecules allowing isotopic spiking experiments to be conducted. The physics of NanoSIMS The magnetic sector mass spectrometer causes a physical separation of ions of a different mass-to-charge ratio. The physical separation of the secondary ions is caused by the Lorentz force when the ions pass through a magnetic field that is perpendicular to the velocity vector of the secondary ions. The Lorentz force states that a particle will experience a force when it maintains a charge q and travels through an electric field E and magnetic field B with a velocity v. The secondary ions that leave the surface of the sample typically have a kinetic energy of a few electron volts (eV), although a rather small portion have been found to have energy of a few keV. An electrostatic field captures the secondary ions that leave the sample surface; these extracted ions are then transferred to a mass spectrometer. In order to achieve precise isotope measurements, there is a need for high transmission and high mass resolution. High transmission refers to the low loss of secondary ions between the sample surface and the detector, and high mass resolution refers to the ability to efficiently separate the secondary ions (or molecules of interest) from other ions and/or ions of similar mass. Primary ions will collide with the surface at a specific frequency per unit of surface area. The collision that occurs causes atoms to sputter from the sample surface, and of these atoms only a small amount will undergo ionization. These become secondary ions, which are then detected after transfer through the mass spectrometer. Each primary ion generates a number of secondary ions of an isotope that will reach the detector to be counted. The count rate is determined by where I(iM)is the count rate of the isotope iM of element M. The counting rate of the isotope is dependent on the concentration, XM and the element's isotopic abundance, denoted Ai. Because the primary ion beam determines the secondary ions, Y, that are sputtered, the density of the primary ion beam, db, which is defined as the amount of ions per second per unit of surface area, will affect a portion of the surface area of the sample, S, with an even distribution of the primary ions. Of the sputtered secondary ions, there is only a fraction that will be ionized, Yi. The probability that any ion will be successfully transferred from mass spectrometer to detector is T. The product of Yi and T determines the amount of isotopes that will be ionized, as well as detected, so it is considered the useful yield. Sample preparation Sample preparation is one of the most critical steps in NanoSIMS analysis, particularly when analysing biological samples. Specific protocols should be developed for individual experiments in order to best preserve not only the structure of the sample but also the true spatial distribution and abundance of molecules within the sample. As the NanoSIMS operates under ultra high vacuum, the sample must be vacuum compatible (i.e., volatile free), flat, which reduces varying ionization trajectories, and conductive, which can be accomplished by sputter coating with Au, Pt, or C. 
Biological samples, such as cells or tissue, can be prepared with chemical fixation or cryo-fixation and embedded in a resin before sectioning into thin slices (100 nm - 1μm), and placed on silicon wafers or slides for analysis. Sample preparation for metallographic samples is generally much simpler but a very good metallographic polish is required to achieve a flat, scratch free surface. Applications NanoSIMS can capture the spatial variability of isotopic and elemental measurements of sub-micron areas, grains or inclusions from geological, materials science and biological samples. This instrument can characterise nanostructured materials with complex composition that are increasingly important candidates for energy generation and storage. Geological applications NanoSIMS has also proved useful in studying cosmochemical issues, where samples of single, micro- or sub-micrometer-sized grains from meteorites as well as microtome sections prepared by the focused ion beam (FIB) technique can be analyzed. NanoSIMS can be combined with transmission electron microscopy (TEM) when using microtome or FIB sections. This combination allows for correlated mineralogical and isotopic studies in situ at a sub-micrometer scale. It is particularly useful in materials research because of its high sensitivity at high mass resolution, which allow for trace element imaging and quantification. Biological applications Initially developed for geochemical and related research, NanoSIMS is now utilized by a wide variety of fields, including biology and microbiology. In biomedical research, NanoSIMS is also referred to as multi-isotope imaging mass spectrometry (MIMS). The 50 nm resolution allows unprecedented resolution of cellular and sub-cellular features (as reference, the model organism E. coli is typically 1,000 to 2,000 nm in diameter). The high resolution that it offers allows intracellular measurement of accumulations and fluxes of molecules containing various stable isotopes. NanoSIMS can be used for pure cultures, co-cultures, and mixed community samples. The first use of NanoSIMS in biology was by Peteranderl and Lechene in 2004, who used a prototype of NanoSIMS to examine and measure carbon and nitrogen isotopes of eukaryotic cells. This study was the first time that carbon and nitrogen isotope ratios were directly measured at a sub-cellular scale in a biological sample. Pharmacology applications The development of NanoSIMS for organo-metallic drugs paved the way for exploring the distribution of biologically active molecules at the subcellular level. Legin et al. combined NanoSIMS with fluorescence confocal laser scanning microscopy to characterize the subcellular distribution of 15N isotopically labeled Pt-bearing cisplatin in human colon cancer cells. Cisplatin appears in the targeted nucleus of the colon cancer cells. 15N and Pt are separated showing subcellular metabolism is in the path of action. The internalization of amiodarone into the lysosomes of macrophages is illustrated in Jiang et al. Thanks to low detection limit, two iodine atoms of 127I in amiodarone molecule enables a label-free imaging by NanoSIMS. Iodine and phosphorus imaging along with plotting the intensity of 127I− vs 31P− indicated a linear relationship between the amount of iodine and phospholipids. These results disclose evidence of amiodarone-induced phospholipidosis. He et al. 
visualized the distribution of therapeutic antisense oligonucleotides labelled with bromine (Br-ASO) in some varieties of cultured cells and importantly mouse tissues (heart, kidney, and Liver) using NanoSIMS data combined with back scattered electron microscopy. They demonstrated that phosphorothioate ASOs associate with filopodia and the inner nuclear membrane of cells. They also documented essential cellular and subcellular heterogeneity in ASO distribution in the mouse tissues. Becquart et al. report absolute concentration of Antisense Oligonucleotide therapeutics in human hepatocytes. Their method built upon work in Thomen et al. where they reported the absolute concentration of the prodrug 13C labeled L-dopa. Materials science applications The NanoSIMS has been used in many different areas of materials science. It is able to map hydrogen and deuterium at microstructurally relevant scales which is important for studies of hydrogen embrittlement in metals although there are significant challenges associated with accurately detecting hydrogen and deuterium. Methods commonly coupled with NanoSIMS Microscopy Other microscopy techniques are commonly used in tandem with NanoSIMS that allow for multiple types of information to be obtained, such as taxonomic information through fluorescence in situ hybridization (FISH) or identification of additional physiological or microstructural features via transmission electron microscopy (TEM) or scanning electron microscopy (SEM). Immunogold labeling Traditional methods that are used to label and identify subcellular features of cells, such as immunogold labeling, can also be used with NanoSIMS analysis. Immunogold labeling uses antibodies to target specific proteins, and subsequently labels the antibodies with gold nano particles. The NanoSIMS instrument can detect the gold particles, providing the location of the labelled proteins at a high scale resolution. Gold-containing or platinum-containing compounds used as anticancer drugs were imaged using NanoSIMS to examine the subcellular distribution in breast cancer and colon cancer cells, respectively. In a separate study, antibody-antigen binding was studied without the need for a fluorescent label to be added to the antibody, allowing for label-free localization and quantitative analysis at a high resolution. Stable isotope labeling Another common technique typically used in NanoSIMS analysis is stable isotope probing. This method involves the introduction of stable isotopically labelled biologically relevant compounds to organisms for consumption and integration into organic matter. When analyzed via NanoSIMS, the technique is referred to as nanoSIP. NanoSIMS can be used to detect which organisms incorporated which molecules, how much of the labeled molecules was incorporated in a semi-quantitative manner, and where in the cell the incorporation occurred. Previous quantitative analysis techniques at a lower resolution than NanoSIMS of stable isotopically labeled molecules was limited to analyzed bulk material, which did not allow for insights about the contributions of individual cells or subcellular compartments to be made. Additionally, the removal of large foreign molecules (such as antibodies or gold particles) from the experimental setup alleviates concerns that tagged molecules required for other microscopy techniques may have different biochemical responses or properties than normal. This technique can be used to study nutrient exchange. 
The mouse gut microbiome was investigated to determine which microbes fed on host-derived compounds. For this, mice were given food enriched in the stable isotopically labelled amino acids and the microbial biomass examined. NanoSIMS allows for the metabolic contributions of individual microbes to be examined. NanoSIMS was used to study and prove for the first time the nitrogen fixing abilities of bacteria and archaea from the deep ocean by supplying 15N nitrogen contain compounds to sediment samples. NanoSIMS can also be used to estimate growth rate of organisms, as the amount of carbon or other substrate accumulated inside the cell allows for estimation of how much biomass is being generated. Measuring natural isotope abundances in organisms Organic material naturally contains stable isotopes at different ratios in the environment, which can provide information on the origin of the food source for the organisms. Different types of organic material of food sources has different amounts of stable isotopes, which is reflected in the composition of the organism that eats these food sources. This type of analysis was first used in 2001 in conjunction with FISH to examine syntrophic relationships between anaerobic methane-oxidizing archaea and sulfate reducing bacteria. Isotopes with naturally low abundances may not be able to be detected with this method. Paleobiology NanoSIMS can also be used to examine the elemental and isotopic composition of microparticles preserved in the rock record. The types of elements and isotopic ratios can help determine if the material is of biological origin. NanoSIMS was first used in this field of paleobiology in 2005 by Robert et al. In this study, microfossils were found to contain carbon, nitrogen, and sulfur elements arranged as ‘globules’ that were reminiscent of cell walls. The ratio of carbon to nitrogen measured also served as an indicator of biological origin, as the rock surrounding the fossils had very different C to N ratios. References Imaging Mass spectrometry Semiconductor analysis
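The count-rate expression in 'The physics of NanoSIMS' section above lost its equation in formatting. Reading the surrounding definitions, the count rate is naturally the product of the quantities listed there (concentration, isotopic abundance, primary beam density, analyzed area, sputter yield, ionization probability and transmission), i.e. I(iM) = X_M · A_i · d_b · S · Y · Y_i · T. The short Python sketch below simply evaluates that product; the reconstructed formula is an assumption based on those definitions rather than a quotation, and all the numerical values are invented examples.

# Hedged sketch: secondary-ion count rate as the product of the factors described
# in the text, I(iM) = X_M * A_i * d_b * S * Y * Y_i * T (reconstructed relation).
# Every numerical value below is an invented example, not a real instrument figure.

X_M  = 0.01     # atomic concentration of element M in the sputtered volume
A_i  = 0.011    # isotopic abundance of the isotope of interest
d_b  = 1.0e12   # primary ion flux density [ions / (s * um^2)]
S    = 1.0      # analyzed (rastered) surface area [um^2]
Y    = 2.0      # sputter yield [atoms per primary ion]
Y_i  = 1.0e-3   # ionization probability of the sputtered atoms
T    = 0.5      # transmission of the mass spectrometer to the detector

useful_yield = Y_i * T                       # the "useful yield" mentioned in the text
count_rate = X_M * A_i * d_b * S * Y * Y_i * T

print(f"useful yield     : {useful_yield:.2e}")
print(f"count rate I(iM) : {count_rate:.3e} counts per second")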
Nanoscale secondary ion mass spectrometry
[ "Physics", "Chemistry" ]
3,118
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "Matter" ]
49,307,795
https://en.wikipedia.org/wiki/Infinite%20sites%20model
The Infinite sites model (ISM) is a mathematical model of molecular evolution first proposed by Motoo Kimura in 1969. Like other mutation models, the ISM provides a basis for understanding how mutation develops new alleles in DNA sequences. Using allele frequencies, it allows for the calculation of heterozygosity, or genetic diversity, in a finite population and for the estimation of genetic distances between populations of interest. The assumptions of the ISM are that (1) there are an infinite number of sites where mutations can occur, (2) every new mutation occurs at a novel site, and (3) there is no recombination. The term ‘site’ refers to a single nucleotide base pair. Because every new mutation has to occur at a novel site, there can be no homoplasy, or back-mutation to an allele that previously existed. All identical alleles are identical by descent. The four gamete rule can be applied to the data to ensure that they do not violate the model assumption of no recombination. The mutation rate () can be estimated as follows, where is the number of mutations found within a randomly selected DNA sequence (per generation), is the effective population size. The coefficient is the product of twice the gene copies in individuals of the population; in the case of diploid, biparentally-inherited genes the appropriate coefficient is 4 whereas for uniparental, haploid genes, such as mitochondrial genes, the coefficient would be 2 but applied to the female effective population size which is, for most species, roughly half of . When considering the length of a DNA sequence, the expected number of mutations is calculated as follows Where k is the length of a DNA sequence and is the probability a mutation will occur at a site. Watterson developed an estimator for mutation rate that incorporates the number of segregating sites (Watterson's estimator). One way to think of the ISM is in how it applies to genome evolution. To understand the ISM as it applies to genome evolution, we must think of this model as it applies to chromosomes. Chromosomes are made up of sites, which are nucleotides represented by either A, C, G, or T. While individual chromosomes are not infinite, we must think of chromosomes as continuous intervals or continuous circles. Multiple assumptions are applied to understanding the ISM in terms of genome evolution: k breaks are made in these chromosomes, which leaves 2k free ends available. The 2k free ends will rejoin in a new manner rearranging the set of chromosomes (i.e. reciprocal translocation, fusion, fission, inversion, circularized incision, circularized excision). No break point is ever used twice. A set of chromosomes can be duplicated or lost. DNA that never existed before can be observed in the chromosomes, such as horizontal gene transfer of DNA or viral integration. If the chromosomes become different enough, evolution can form a new species. Substitutions that alter a single base pair are individually invisible and substitutions occur at a finite rate per site. The substitution rate is the same for all sites in a species, but is allowed to vary between species (i.e. no molecular clock is assumed). Instead of thinking about substitutions themselves, think about the effect of the substitution at each point along the chromosome as a continuous increase in evolutionary distance between the previous version of the genome at that site and the next version of the genome at the corresponding site in the descendant. 
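The estimation formulas in this article lost their symbols in formatting. Two standard population-genetics relations cover the same ground: θ = 4Nμ for a diploid, biparentally inherited locus (with coefficient 2 and the female effective population size for uniparental, haploid loci, as noted above), and Watterson's estimator θ̂ = S / a_n with a_n = Σ 1/i for i from 1 to n−1, which infers θ from the number of segregating sites S in a sample of n sequences. The Python sketch below implements the estimator; these are textbook relations offered as context, not a restoration of the article's own displays, and the example numbers are invented.

# Hedged sketch (standard population-genetics formulas, not quoted from the text):
# Watterson's estimator of theta from the number of segregating sites observed
# under the infinite sites model. The input values are invented examples.

def watterson_theta(segregating_sites: int, sample_size: int) -> float:
    """theta_W = S / a_n, with a_n = sum_{i=1}^{n-1} 1/i."""
    a_n = sum(1.0 / i for i in range(1, sample_size))
    return segregating_sites / a_n

# Example: 42 segregating sites observed in a sample of 10 sequences.
S, n = 42, 10
theta_w = watterson_theta(S, n)
print(f"Watterson's estimate of theta: {theta_w:.2f}")

# Under the infinite sites model theta = 4*N*mu for a diploid locus, so with an
# assumed effective population size the per-sequence mutation rate follows as:
N_e = 10_000
mu_per_sequence = theta_w / (4 * N_e)
print(f"implied mutation rate per sequence per generation: {mu_per_sequence:.2e}")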
References Further reading Molecular evolution Population genetics Mathematical and theoretical biology
Infinite sites model
[ "Chemistry", "Mathematics", "Biology" ]
734
[ "Evolutionary processes", "Molecular evolution", "Mathematical and theoretical biology", "Applied mathematics", "Molecular biology" ]
49,311,943
https://en.wikipedia.org/wiki/IRAP%20PhD%20Program
International relativistic astrophysics PhD program, or IRAP PhD program is the international joint doctorate program in relativistic astrophysics initiated and co-sponsored by ICRANet. During 2010-2017 IRAP PhD takes part in the Erasmus Mundus program of the European Commission. For the first time in Europe the IRAP PhD program grants a joint PhD degree among the participating institutions. Education in relativistic astrophysics In 2005 a new international organization was founded: the International Center for Relativistic Astrophysics Network (ICRANet), dedicated to the theoretical aspects of relativistic astrophysics. Together with the University of Arizona, the University of Stanford and the International Centre for Relativistic Astrophysics (ICRA), such states as Armenia, Brazil, Italy and the Vatican are the founding members of ICRANet. As a first step, the international relativistic astrophysics Ph.D. program (IRAP Ph.D.) was established in 2005. Among the proponents of the IRAP Ph.D. program are Riccardo Giacconi, Roy Kerr and Remo Ruffini. The objectives of the program The IRAP Ph.D. program grants a joint Ph.D. degree among the participating institutions. The core of the program is the international consortium. It assembles expertise of its members for preparation of scientists in the field of relativistic astrophysics, and related fields of general relativity, cosmology and quantum field theory. One of the goals of the program is mobility: every student admitted to the IRAP Ph.D. is part of a team at one of the consortium members, and each year visits the other centers to keep track of developments in the other fields. Within the IRAP Ph.D. program, students are systematically trained in the techniques of research management and in the nature and organization of scientific projects. The consortium Since 2005 ICRANet has co-organized the IRAP Ph.D. program together with: AEI - Albert Einstein Institute - Potsdam (Germany) CBPF - Brazilian Center for Physics Research (Brazil) Indian Center for Space Physics (India) INPE (Instituto Nacional de Pesquisas Espaciais, Brazil) Institut Hautes Etudes Scientifiques - IHES (France) Côte d’Azur Observatory (France) Observatory of Shanghai (China) Observatory of Tartu (Estonia) University of Bremen (Germany) University of Oldenburg (Germany) University of Ferrara (Italy) University of Nice (France) University of Rome "La Sapienza" (Italy) University of Savoy (France) The faculty As of 2015, the faculty of the IRAP Ph.D. program consists of: Erasmus Mundus Joint Doctorate (2010-2017) Launched in 2004 under the Bologna Declaration, the Erasmus Mundus programs have supported academic cooperation and mobility with partner countries to form the joint European Higher Education Area. During 2010-2017 the IRAP Ph.D. has taken part in the Erasmus Mundus program of the European Commission and has enrolled 5 cycles of students with the total number of 44 students. The Nice University was the host organization. CAPES-ICRANet Program (2013-2018) In 2013 CAPES and ICRANet have signed a Memorandum of Understanding regarding the establishment of the CAPES-ICRANet Program. Each year five fellowships for Brazilian students are granted. Each fellowship lasts for three years with the final Ph.D. degree jointly delivered by the academic institutions participating in the program. Schools and seminars Within IRAP Ph.D. program ICRANet organizes Ph.D. schools. 
In particular 15 schools were held in Nice and Les Houches, France, within the EMJD program: February, 1–19, 2010 - Nice (France) March 22–26, 2010 - Ferrara (Italy) September, 6-24, 2010 - Nice (France) March 21–26, 2011 - Pescara (Italy) April 3–8, 2011 - Les Houches (France) May 25 - June 10, 2011 (France) September 5–17, 2011 (France) October 2–7, 2011 - Les Houches (France) October 12–16, 2011 - Beijing (China) September 3–21, 2012 - Nice (France) May 16–31, 2013 - Nice (France) September 2–20, 2013 - Nice (France) February 23 - March 2, 2014 - Nice (France) May 10–16, 2014 - Les Houches (France) September 8–19, 2014 - Nice (France) ICRANet co-organizes a Joint Astrophysics Seminar at the Department of Physics of the University of Rome "La Sapienza" and ICRA in Rome. All institutions collaborating with ICRANet as well as the ICRANet centers participate in these seminars via video conferencing. Statistics The official language of the program is English, but students have the opportunity to learn the language of their host country, following a variety of courses at the partner universities. As of 2015, the IRAP-Ph.D. has seen the enrollment of 111 students: 1 from Albania, 3 from Argentina, 5 from Armenia, 1 from Austria, 2 from Belarus, 16 from Brazil, 5 from China, 9 from Colombia, 2 from Croatia, 5 from France, 5 from Germany, 7 from India, 2 from Iran, 34 from Italy, 2 from Kazakhstan, 1 from Mexico, 1 from Pakistan, 4 from Russia, 1 from Serbia, 1 from Sweden, 1 from Switzerland, 1 from Taiwan, 1 from Turkey, 1 from Ukraine. During 2011–2015, more than 500 applications from 70 countries from all over the world were considered and 44 Ph.D. candidates were selected including roughly 30% female candidates. References Astrophysics Doctoral degrees
IRAP PhD Program
[ "Physics", "Astronomy" ]
1,191
[ "Astronomical sub-disciplines", "Astrophysics" ]
49,313,318
https://en.wikipedia.org/wiki/PEG-PVA
Polyethylene glycol–polyvinyl alcohol (PEG-PVA), brand name Kollicoat IR (BASF), is a multifunctional excipient used as a pill binder as well as a wet binder. A typical formulation is composed of 25% polyethylene glycol (PEG) and 75% polyvinyl alcohol (PVA), in which the vinyl alcohol moieties are grafted onto a polyethylene glycol backbone. See also Polyvinylpolypyrrolidone References Excipients Polymers
PEG-PVA
[ "Chemistry", "Materials_science" ]
122
[ "Pharmacology", "Medicinal chemistry stubs", "Polymer chemistry", "Polymers", "Pharmacology stubs" ]
43,313,234
https://en.wikipedia.org/wiki/Jiles%E2%80%93Atherton%20model
In electromagnetism and materials science, the Jiles–Atherton model of magnetic hysteresis was introduced in 1984 by David Jiles and D. L. Atherton. It is one of the most popular models of magnetic hysteresis. Its main advantage is that its parameters can be connected with the physical parameters of the magnetic material. The Jiles–Atherton model enables the calculation of minor and major hysteresis loops. The original Jiles–Atherton model is suitable only for isotropic materials. However, an extension of this model presented by Ramesh et al. and corrected by Szewczyk enables the modeling of anisotropic magnetic materials. Principles The magnetization of the magnetic material sample in the Jiles–Atherton model is calculated in the following steps for each value of the magnetizing field : the effective magnetic field is calculated considering interdomain coupling and magnetization , the anhysteretic magnetization is calculated for the effective magnetic field , and the magnetization of the sample is calculated by solving an ordinary differential equation taking into account the sign of the derivative of the magnetizing field (which is the source of hysteresis). Parameters The original Jiles–Atherton model considers the following parameters: The extension considering uniaxial anisotropy introduced by Ramesh et al. and corrected by Szewczyk requires additional parameters: Modelling the magnetic hysteresis loops Effective magnetic field The effective magnetic field acting on the magnetic moments within the material may be calculated from the following equation: This effective magnetic field is analogous to the Weiss mean field acting on magnetic moments within a magnetic domain. Anhysteretic magnetization The anhysteretic magnetization can be observed experimentally when the magnetic material is demagnetized under the influence of a constant magnetic field. However, measurements of anhysteretic magnetization are very sophisticated due to the fact that the fluxmeter has to keep the accuracy of integration during the demagnetization process. As a result, experimental verification of the model of anhysteretic magnetization is possible only for materials with a negligible hysteresis loop. The anhysteretic magnetization of a typical magnetic material can be calculated as a weighted sum of the isotropic and anisotropic anhysteretic magnetizations: Isotropic The isotropic anhysteretic magnetization is determined on the basis of the Boltzmann distribution. In the case of isotropic magnetic materials, the Boltzmann distribution can be reduced to the Langevin function connecting the isotropic anhysteretic magnetization with the effective magnetic field : Anisotropic The anisotropic anhysteretic magnetization is also determined on the basis of the Boltzmann distribution. However, in such a case, there is no antiderivative of the Boltzmann distribution function. For this reason, the integration has to be carried out numerically. In the original publication, the anisotropic anhysteretic magnetization is given as: where It should be highlighted that a typing mistake occurred in the original Ramesh et al. publication. As a result, for an isotropic material (where ), the presented form of the anisotropic anhysteretic magnetization is not consistent with the isotropic anhysteretic magnetization given by the Langevin equation. Physical analysis leads to the conclusion that the equation for the anisotropic anhysteretic magnetization has to be corrected to the following form: In the corrected form, the model of anisotropic anhysteretic magnetization was confirmed experimentally for anisotropic amorphous alloys.
Magnetization as a function of magnetizing field In Jiles–Atherton model, M(H) dependence is given in form of following ordinary differential equation: where depends on direction of changes of magnetizing field ( for increasing field, for decreasing field) Flux density as a function of magnetizing field Flux density in the material is given as: where is magnetic constant. Vectorized Jiles–Atherton model Vectorized Jiles–Atherton model is constructed as the superposition of three scalar models one for each principal axis. This model is especially suitable for finite element method computations. Numerical implementation The Jiles–Atherton model is implemented in JAmodel, a MATLAB/OCTAVE toolbox. It uses the Runge-Kutta algorithm for solving ordinary differential equations. JAmodel is open-source is under MIT license. The two most important computational problems connected with the Jiles–Atherton model were identified: numerical integration of the anisotropic anhysteretic magnetization solving the ordinary differential equation for dependence. For numerical integration of the anisotropic anhysteretic magnetization the Gauss–Kronrod quadrature formula has to be used. In GNU Octave this quadrature is implemented as quadgk() function. For solving ordinary differential equation for dependence, the Runge–Kutta methods are recommended. It was observed, that the best performing was 4-th order fixed step method. Further development Since its introduction in 1984, Jiles–Atherton model was intensively developed. As a result, this model may be applied for the modeling of: frequency dependence of magnetic hysteresis loop in conductive materials influence of stresses on magnetic hysteresis loops magnetostriction of soft magnetic materials Moreover, different corrections were implemented, especially: to avoid unphysical states when reversible permeability is negative to consider changes of average energy required to break pinning site Applications Jiles–Atherton model may be applied for modeling: rotating electric machines power transformers magnetostrictive actuators magnetoelastic sensors magnetic field sensors (e. g. fluxgates) It is also widely used for electronic circuit simulation, especially for models of inductive components, such as transformers or chokes. See also Preisach model of hysteresis Stoner–Wohlfarth model References External links Jiles–Atherton model for Octave/MATLAB - open-source software for implementation of Jiles–Atherton model in GNU Octave and Matlab Magnetic hysteresis
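The model equations above lost their symbols in formatting. The Python sketch below integrates one common textbook formulation of the isotropic Jiles–Atherton model (effective field He = H + αM, Langevin anhysteretic magnetization Man = Ms(coth(He/a) − a/He), and dM/dH built from irreversible and reversible parts with δ = sign(dH/dt)) using a simple explicit Euler sweep of a sinusoidal magnetizing field. It is an assumption-laden illustration of how such an implementation can look, not the JAmodel Octave/MATLAB toolbox mentioned in the external links, and the parameter values are arbitrary.

# Hedged sketch of one common formulation of the isotropic Jiles-Atherton model,
# integrated with explicit Euler over a sinusoidal magnetizing field. Illustrative
# only; all parameter values below are arbitrary example numbers.
import math

Ms    = 1.6e6    # saturation magnetization [A/m]
a     = 1100.0   # shape parameter of the anhysteretic curve [A/m]
alpha = 1.6e-3   # interdomain coupling
k     = 400.0    # pinning parameter [A/m]
c     = 0.2      # reversibility coefficient

def m_anhysteretic(h_eff: float) -> float:
    """Langevin anhysteretic magnetization M_an(He) = Ms*(coth(He/a) - a/He)."""
    if abs(h_eff) < 1e-6:
        return 0.0
    x = h_eff / a
    return Ms * (1.0 / math.tanh(x) - 1.0 / x)

def dM_dH(H: float, M: float, delta: int) -> float:
    """Assumed Jiles-Atherton ODE: reversible plus irreversible contributions."""
    He = H + alpha * M
    Man = m_anhysteretic(He)
    # Finite-difference estimate of dMan/dH at fixed M (ignores the dM feedback term).
    dH_small = 1e-3
    dMan_dH = (m_anhysteretic(He + dH_small) - Man) / dH_small
    gap = Man - M
    denom = k * delta - alpha * gap
    if delta * gap <= 0.0 or abs(denom) < 1e-12:
        dMirr_dH = 0.0          # suppress non-physical irreversible change
    else:
        dMirr_dH = gap / denom
    return (1.0 - c) * dMirr_dH + c * dMan_dH

# Sweep H sinusoidally through 1.25 cycles and integrate M(H) step by step.
H_amp, n_steps = 5000.0, 4000
M, prev_H = 0.0, 0.0
for i in range(1, n_steps + 1):
    H = H_amp * math.sin(2.0 * math.pi * 1.25 * i / n_steps)
    delta = 1 if H >= prev_H else -1
    M += dM_dH(H, M, delta) * (H - prev_H)
    prev_H = H
    if i % 500 == 0:
        print(f"H = {H:9.1f} A/m   M = {M:12.1f} A/m")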
Jiles–Atherton model
[ "Physics", "Materials_science" ]
1,241
[ "Physical phenomena", "Hysteresis", "Magnetic hysteresis" ]
36,186,413
https://en.wikipedia.org/wiki/Protein%20Expression%20and%20Purification%20%28journal%29
Protein Expression and Purification is a peer-reviewed scientific journal covering biotechnological research on protein production and isolation based on conventional fractionation as well as techniques employing various molecular biological procedures to increase protein expression. Abstracting and indexing The journal is abstracted and indexed in Biological Abstracts, Chemical Abstracts, Current Contents/Life Sciences, EMBiology, Food Science and Technology Abstracts, MEDLINE, Science Citation Index and Scopus. See also Protein production Protein purification External links Elsevier academic journals Biotechnology journals English-language journals Academic journals established in 1990 Monthly journals
Protein Expression and Purification (journal)
[ "Biology" ]
112
[ "Biotechnology literature", "Biotechnology journals" ]
36,186,626
https://en.wikipedia.org/wiki/Uniformly%20bounded%20representation
In mathematics, a uniformly bounded representation of a locally compact group on a Hilbert space is a homomorphism into the bounded invertible operators which is continuous for the strong operator topology, and such that is finite. In 1947 Béla Szőkefalvi-Nagy established that any uniformly bounded representation of the integers or the real numbers is unitarizable, i.e. conjugate by an invertible operator to a unitary representation. For the integers this gives a criterion for an invertible operator to be similar to a unitary operator: the operator norms of all the positive and negative powers must be uniformly bounded. The result on unitarizability of uniformly bounded representations was extended in 1950 by Dixmier, Day and Nakamura-Takeda to all locally compact amenable groups, following essentially the method of proof of Sz-Nagy. The result is known to fail for non-amenable groups such as SL(2,R) and the free group on two generators. conjectured that a locally compact group is amenable if and only if every uniformly bounded representation is unitarizable. Statement Let G be a locally compact amenable group and let Tg be a homomorphism of G into GL(H), the group of an invertible operators on a Hilbert space such that for every x in H the vector-valued gx on G is continuous; the operator norms of the operators Tg are uniformly bounded. Then there is a positive invertible operator S on H such that S Tg S−1 is unitary for every g in G. As a consequence, if T is an invertible operator with all its positive and negative powers uniformly bounded in operator norm, then T is conjugate by a positive invertible operator to a unitary. Proof By assumption the continuous functions generate a separable unital C* subalgebra A of the uniformly bounded continuous functions on G. By construction the algebra is invariant under left translation. By amenability there is an invariant state φ on A. It follows that is a new inner product on H satisfying where So there is a positive invertible operator P such that By construction Let S be the unique positive square root of P. Then Applying S−1 to x and y, it follows that Since the operators are invertible, it follows that they are unitary. Examples of non-unitarizable representations SL(2,R) The complementary series of irreducible unitary representations of SL(2,R) was introduced by . These representations can be realized on functions on the circle or on the real line: the Cayley transform provides the unitary equivalence between the two realizations. In fact for 0 < σ < 1/2 and f, g continuous functions on the circle define where Since the function kσ is integrable, this integral converges. In fact where the norms are the usual L2 norms. The functions are orthogonal with Since these quantities are positive, (f,g)σ defines an inner product. The Hilbert space completion is denoted by Hσ. For F, G continuous functions of compact support on R, define Since, regarded as distributions, the Fourier transform of |x|2σ – 1 is Cσ|t|−2σ for some positive constant Cσ, the above expression can be rewritten: Hence it is an inner product. Let Hσ denote its Hilbert space completion. The Cayley transform gives rise to an operator U: U extends to an isometry of Hσ onto H 'σ. Its adjoint is given by The Cayley transform exchanges the actions by Möbius transformations of SU(1,1) on S1 and of SL(2, R) on R. The operator U intertwines corresponding actions of SU(1,1) on Hσ and SL(2,R''') on H 'σ. 
For g in SU(1,1) given by with and f continuous, set For g in SL(2,R) given by with ad – bc = 1, set If g ' corresponds to g under the Cayley transform then Polar decomposition shows that SL(2,R) = KAK with K = SO(2) and A the subgroup of positive diagonal matrices. K corresponds to the diagonal matrices in SU(1,1). Since evidently K acts unitarily on Hσ and A acts unitarily on H 'σ, both representations are unitary. The representations are irreducible because the action of the Lie algebra on the basis vectors fm is irreducible. This family of irreducible unitary representations is called the complementary series. constructed an analytic continuation of this family of representations as follows. If s = σ + iτ, g lies in SU(1,1) and f in Hσ, define Similarly if g ' lies in SL(2,R) and F in H 'σ, define As before the unitary U intertwines these two actions. K acts unitarily on Hσ and A by a uniformly bounded representation on H 'σ. The action of the standard basis of the complexification Lie algebra on this basis can be computed: If the representation were unitarizable for τ ≠ 0, then the similarity operator T on Hσ would have to commute with K, since K preserves the original inner product. The vectors Tfm would therefore still be orthogonal for the new inner product and the operators would satisfy the same relations for In this case It is elementary to verify that infinitesimally such a representation cannot exist if τ ≠ 0. Indeed, let v0 = f '0 and set Then for some constant c. On the other hand, Thus c must be real and positive. The formulas above show that so the representation πs is unitarizable only if τ = 0. Free group on two generators The group G = SL(2,R) contains the discrete group Γ = SL(2,Z) as a closed subgroup of finite covolume, since this subgroup acts on the upper half plane with a fundamental domain of finite hyperbolic area. The group SL(2,Z) contains a subgroup of index 12 isomorphic to F2 the free group on two generators. Hence G has a subgroup Γ1 of finite covolume, isomorphic to F2. If L is a closed subgroup of finite covolume in a locally compact group G, and π is non-unitarizable uniformly bounded representation of G on a Hilbert space L, then its restriction to L is uniformly bounded and non-unitarizable. For if not, applying a bounded invertible operator, the inner product can be made invariant under L; and then in turn invariant under G by redefining As in the previous proof, uniform boundedess guarantees that the norm defined by this inner product is equivalent to the original inner product. But then the original representation would be unitarizable on G, a contradiction. The same argument works for any discrete subgroup of G of finite covolume. In particular the surface groups, which are cocompact subgroups, have uniformly bounded representations that are not unitarizable. There are more direct constructions of uniformly bounded representations of free groups that are non-unitarizable: these are surveyed in . The first such examples are described in , where an analogue of the complementary series is constructed. Later gave a related but simpler construction, on the Hilbert space H = 2(F2), of a holomorphic family of uniformly bounded representations πz of F2 for |z| < 1; these are non-unitarizable when 1/√3 < |z| < 1 and z is not real. Let L(g) denote the reduced word length on F2 for a given set of generators a, b. 
Let T be the bounded operator defined on basis elements by where g ' is obtained by erasing the last letter in the expression of g as a reduced word; identifying F2 with the vertices of its Cayley graph, a rooted tree, this corresponds to passing from a vertex to the next closest vertex to the origin or root. For |z| < 1 is well-defined on finitely supported functions. had earlier proved that it extends to a uniformly bounded representation on H satisfying In fact it is easy to check that the operator λ(g)Tλ(g)−1 – T has finite rank, with rangeVg, the finite-dimensional space of functions supported on the set of vertices joining g to the origin. For on any function vanishing on this finite set, T and λ(g)Tλ(g)−1 are equal; and they both leave invariant Vg, on which they acts as contractions and adjoints of each other. Hence if f has finite support and norm 1, For |z| < 1/√3, these representations are all similar to the regular representation λ. If on the other hand 1/√3 < |z| <1, then the operator satisfies where f in H is defined by Thus, if z is not real, D has an eigenvalue which is not real. But then πz cannot be unitarizable, since otherwise D would be similar to a self-adjoint operator. Dixmier problem Jacques Dixmier asked in 1950 whether amenable groups are characterized by unitarizability''', i.e. the property that all their uniformly bounded representations are unitarizable. This problem remains open to this day. An elementary induction argument shows that a subgroup of a unitarizable group remains unitarizable. Therefore, the von Neumann conjecture would have implied a positive answer to Dixmier's problem, had it been true. In any case, it follows that a counter-example to Dixmier's conjecture could only be a non-amenable group without free subgroups. In particular, Dixmier's conjecture is true for all linear groups by the Tits alternative. A criterion due to Epstein and Monod shows that there are also non-unitarizable groups without free subgroups. In fact, even some Burnside groups are non-unitarizable, as shown by Monod and Ozawa. Considerable progress has been made by Pisier who linked unitarizability to a notion of factorization length. This allowed him to solve a modified form of the Dixmier problem. The potential gap between unitarizability and amenability can be further illustrated by the following open problems, all of which become elementary if "unitarizable" were replaced by "amenable": Is the direct product of two unitarizable groups unitarizable? Is a directed union of unitarizable groups unitarizable? If contains a normal amenable subgroup such is unitarizable, does it follow that is unitarizable? (It is elementary that is unitarizable if is so and is amenable.) Notes References Operator theory Functional analysis Representation theory
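Several displayed formulas in the Statement and Proof sections above were lost in extraction. The following LaTeX block is a reconstruction of the standard averaging argument that the proof describes, written in the notation of the Statement; it is a sketch of the usual Dixmier–Day–Nakamura–Takeda construction, not a verbatim restoration of the missing displays. Here c denotes the uniform bound on the operator norms and φ a translation-invariant state supplied by amenability.

\[
  c \;=\; \sup_{g\in G}\|T_g\| \;<\; \infty, \qquad
  (x,y)_\varphi \;=\; \varphi\bigl(g \mapsto \langle T_g x,\,T_g y\rangle\bigr),
\]
\[
  c^{-2}\,\|x\|^2 \;\le\; (x,x)_\varphi \;\le\; c^{2}\,\|x\|^2, \qquad
  (T_h x,\,T_h y)_\varphi \;=\; (x,y)_\varphi \quad (h\in G),
\]
\[
  (x,y)_\varphi \;=\; \langle P x,\,y\rangle \ \text{for a positive invertible } P, \qquad
  S = P^{1/2} \ \Longrightarrow\ \|S\,T_h\,S^{-1}x\| = \|x\| \ \text{for all } x\in H,
\]

so each operator S T_h S^{-1} is invertible and isometric, hence unitary.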
Uniformly bounded representation
[ "Mathematics" ]
2,287
[ "Functions and mappings", "Functional analysis", "Mathematical objects", "Fields of abstract algebra", "Mathematical relations", "Representation theory" ]
36,187,009
https://en.wikipedia.org/wiki/Earle%E2%80%93Hamilton%20fixed-point%20theorem
In mathematics, the Earle–Hamilton fixed point theorem is a result in geometric function theory giving sufficient conditions for a holomorphic mapping of an open domain in a complex Banach space into itself to have a fixed point. The result was proved in 1968 by Clifford Earle and Richard S. Hamilton by showing that, with respect to the Carathéodory metric on the domain, the holomorphic mapping becomes a contraction mapping to which the Banach fixed-point theorem can be applied. Statement Let D be a connected open subset of a complex Banach space X and let f be a holomorphic mapping of D into itself such that: the image f(D) is bounded in norm; the distance between points f(D) and points in the exterior of D is bounded below by a positive constant. Then the mapping f has a unique fixed point x in D and if y is any point in D, the iterates fn(y) converge to x. Proof Replacing D by an ε-neighbourhood of f(D), it can be assumed that D is itself bounded in norm. For z in D and v in X, set where the supremum is taken over all holomorphic functions g on D with |g(z)| < 1. Define the α-length of a piecewise differentiable curve γ:[0,1] D by The Carathéodory metric is defined by for x and y in D. It is a continuous function on D x D for the norm topology. If the diameter of D is less than R then, by taking suitable holomorphic functions g of the form with a in X* and b in C, it follows that and hence that In particular d defines a metric on D. The chain rule implies that and hence f satisfies the following generalization of the Schwarz-Pick inequality: For δ sufficiently small and y fixed in D, the same inequality can be applied to the holomorphic mapping and yields the improved estimate: The Banach fixed-point theorem can be applied to the restriction of f to the closure of f(D) on which d defines a complete metric, defining the same topology as the norm. Other holomorphic fixed point theorems In finite dimensions the existence of a fixed point can often be deduced from the Brouwer fixed point theorem without any appeal to holomorphicity of the mapping. In the case of bounded symmetric domains with the Bergman metric, and showed that the same scheme of proof as that used in the Earle-Hamilton theorem applies. The bounded symmetric domain D = G / K is a complete metric space for the Bergman metric. The open semigroup of the complexification Gc taking the closure of D into D acts by contraction mappings, so again the Banach fixed-point theorem can be applied. Neretin extended this argument by continuity to some infinite-dimensional bounded symmetric domains, in particular the Siegel generalized disk of symmetric Hilbert-Schmidt operators with operator norm less than 1. The Earle-Hamilton theorem applies equally well in this case. References Theorems in complex analysis Fixed-point theorems
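The displayed formulas in the proof above did not survive extraction. The LaTeX block below is a reconstruction, in the notation of the proof, of the Carathéodory-type metric and the contraction estimate being described; it is a sketch rather than a verbatim restoration, and the constant in the final estimate is only indicated up to its general form.

\[
  \alpha(z,v) \;=\; \sup\bigl\{\, |Dg(z)v| \;:\; g\colon D \to \mathbb{C}\ \text{holomorphic},\ |g|<1\ \text{on } D \,\bigr\},
\]
\[
  L_\alpha(\gamma) \;=\; \int_0^1 \alpha\bigl(\gamma(t),\gamma'(t)\bigr)\,dt, \qquad
  d(x,y) \;=\; \inf\bigl\{\, L_\alpha(\gamma) \;:\; \gamma(0)=x,\ \gamma(1)=y \,\bigr\},
\]
\[
  \text{Schwarz--Pick:}\quad \alpha\bigl(f(z),\,Df(z)v\bigr) \;\le\; \alpha(z,v)
  \;\;\Longrightarrow\;\; d\bigl(f(x),f(y)\bigr) \;\le\; d(x,y),
\]
\[
  \text{and, applied to } x \mapsto f(x) + \delta\bigl(f(x)-f(y)\bigr):\qquad
  d\bigl(f(x),f(y)\bigr) \;\le\; \frac{1}{1+\delta}\, d(x,y),
\]

so f is a strict contraction for d on the closure of f(D) and the Banach fixed-point theorem applies.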
Earle–Hamilton fixed-point theorem
[ "Mathematics" ]
642
[ "Theorems in mathematical analysis", "Theorems in complex analysis", "Fixed-point theorems", "Theorems in topology" ]
36,187,349
https://en.wikipedia.org/wiki/Gossard%20perspector
In geometry the Gossard perspector (also called the Zeeman–Gossard perspector) is a special point associated with a plane triangle. It is a triangle center and it is designated as X(402) in Clark Kimberling's Encyclopedia of Triangle Centers. The point was named Gossard perspector by John Conway in 1998 in honour of Harry Clinton Gossard who discovered its existence in 1916. Later it was learned that the point had appeared in an article by Christopher Zeeman published during 1899 – 1902. From 2003 onwards the Encyclopedia of Triangle Centers has been referring to this point as Zeeman–Gossard perspector. Definition Gossard triangle Let ABC be any triangle. Let the Euler line of triangle ABC meet the sidelines BC, CA and AB of triangle ABC at D, E and F respectively. Let AgBgCg be the triangle formed by the Euler lines of the triangles AEF, BFD and CDE, the vertex Ag being the intersection of the Euler lines of the triangles BFD and CDE, and similarly for the other two vertices. The triangle AgBgCg is called the Gossard triangle of triangle ABC. Gossard perspector Let ABC be any triangle and let AgBgCg be its Gossard triangle. Then the lines AAg, BBg and CCg are concurrent. The point of concurrence is called the Gossard perspector of triangle ABC. Properties Let AgBgCg be the Gossard triangle of triangle ABC. The lines BgCg, CgAg and AgBg are respectively parallel to the lines BC, CA and AB. Any triangle and its Gossard triangle are congruent. Any triangle and its Gossard triangle have the same Euler line. The Gossard triangle of triangle ABC is the reflection of triangle ABC in the Gossard perspector. Trilinear coordinates The trilinear coordinates of the Gossard perspector of triangle ABC are ( f ( a, b, c ) : f ( b, c, a ) : f ( c, a, b ) ) where f ( a, b, c ) = p ( a, b, c ) y ( a, b, c ) / a where p ( a, b, c ) = 2a4 − a2b2 − a2c2 − ( b2 − c2 )2 and y ( a, b, c ) = a8 − a6 ( b2 + c2 ) + a4 ( 2b2 − c2 ) ( 2c2 − b2 ) + ( b2 − c2 )2 [ 3a2 ( b2 + c2 ) − b4 − c4 − 3b2c2 ] Generalizations The construction yielding the Gossard triangle of a triangle ABC can be generalised to produce triangles A'B'C'  which are congruent to triangle ABC and whose sidelines are parallel to the sidelines of triangle ABC. Zeeman’s Generalization This result is due to Christopher Zeeman. Let l be any line parallel to the Euler line of triangle ABC. Let l intersect the sidelines BC, CA, AB of triangle ABC at X, Y, Z respectively. Let A'B'C'  be the triangle formed by the Euler lines of the triangles AYZ, BZX and CXY. Then triangle A'B'C'  is congruent to triangle ABC and its sidelines are parallel to the sidelines of triangle ABC. Yiu’s Generalization This generalisation is due to Paul Yiu. Let P be any point in the plane of the triangle ABC different from its centroid G. Let the line PG meet the sidelines BC, CA and AB at X, Y and Z respectively. Let the centroids of the triangles AYZ, BZX and CXY be Ga, Gb and Gc respectively. Let Pa be a point such that YPa is parallel to CP and ZPa is parallel to BP. Let Pb be a point such that ZPb is parallel to AP and XPb is parallel to CP. Let Pc be a point such that XPc is parallel to BP and YPc is parallel to AP. Let A'B'C'  be the triangle formed by the lines GaPa, GbPb and GcPc. Then the triangle A'B'C'  is congruent to triangle ABC and its sides are parallel to the sides of triangle ABC. When P coincides with the orthocenter H of triangle ABC then the line PG coincides with the Euler line of triangle ABC. 
The triangle A'B'C' coincides with the Gossard triangle AgBgCg of triangle ABC. Dao's Generalisation The theorem was further generalized by Dao Thanh Oai. Let ABC be a triangle. Let H and O be two points in the plane, and let the line HO meet BC, CA, AB at A0, B0, C0 respectively. Let AH and AO be two points such that C0AH is parallel to BH and B0AH is parallel to CH, and C0AO is parallel to BO and B0AO is parallel to CO. Define BH, BO, CH, CO cyclically. Then the triangle formed by the lines AHAO, BHBO, CHCO is homothetic and congruent to triangle ABC, and the homothetic center lies on the line OH. Dao Thanh Oai's result is a generalization of all the results above. When HO is the Euler line, Dao's result is the Gossard perspector theorem. When PQ is parallel to the Euler line, Dao's result is Zeeman's generalization. When P is the centroid, Dao's result is Yiu's generalization. In the Encyclopedia of Triangle Centers, the homothetic center is named the Dao–Zeeman perspector of the line OH. See also Central line Encyclopedia of Triangle Centers Triangle centroid Central triangle Euler line References Triangle centers
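The exponents in the trilinear formula quoted above were flattened during extraction (for example "2a4" should be read as 2a^4). The short Python sketch below transcribes that formula under this reading; the function names are illustrative only.

def gossard_p(a, b, c):
    # p(a, b, c) = 2a^4 - a^2 b^2 - a^2 c^2 - (b^2 - c^2)^2
    return 2*a**4 - a**2*b**2 - a**2*c**2 - (b**2 - c**2)**2

def gossard_y(a, b, c):
    # y(a, b, c) as quoted in the article, with the flattened exponents restored
    return (a**8 - a**6*(b**2 + c**2)
            + a**4*(2*b**2 - c**2)*(2*c**2 - b**2)
            + (b**2 - c**2)**2 * (3*a**2*(b**2 + c**2) - b**4 - c**4 - 3*b**2*c**2))

def gossard_perspector_trilinears(a, b, c):
    # ( f(a,b,c) : f(b,c,a) : f(c,a,b) ) with f(a,b,c) = p(a,b,c) * y(a,b,c) / a
    f = lambda a, b, c: gossard_p(a, b, c) * gossard_y(a, b, c) / a
    return (f(a, b, c), f(b, c, a), f(c, a, b))

# Example with a scalene triangle (the Euler line, and hence the point,
# is undefined for an equilateral triangle):
print(gossard_perspector_trilinears(6.0, 9.0, 13.0))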
Gossard perspector
[ "Physics", "Mathematics" ]
1,271
[ "Point (geometry)", "Triangle centers", "Points defined for a triangle", "Geometric centers", "Symmetry" ]
36,187,709
https://en.wikipedia.org/wiki/RELAP5-3D
RELAP5-3D is a simulation tool that allows users to model the coupled behavior of the reactor coolant system and the core for various operational transients and postulated accidents that might occur in a nuclear reactor. RELAP5-3D (Reactor Excursion and Leak Analysis Program) can be used for reactor safety analysis, reactor design, simulator training of operators, and as an educational tool by universities. RELAP5-3D was developed at Idaho National Laboratory to address the pressing need for reactor safety analysis and continues to be developed through the United States Department of Energy and the International RELAP5 Users Group (IRUG) with over $3 million invested annually. The code is distributed through INL's Technology Deployment Office and is licensed to numerous universities, governments, and corporations worldwide. Background RELAP5-3D is an outgrowth of the one-dimensional RELAP5/MOD3 code developed at Idaho National Laboratory (INL) for the U.S. Nuclear Regulatory Commission (NRC). The U.S. Department of Energy (DOE) began sponsoring additional RELAP5 development in the early 1980s to meet its own reactor safety assessment needs. Following the Chernobyl disaster, DOE undertook a re-assessment of the safety of all its test and production reactors throughout the United States. The RELAP5 code was chosen as the thermal-hydraulic analysis tool because of its widespread acceptance. The application of RELAP5 to various reactor designs created the need for new modeling capabilities. In particular, the analysis of the Savannah River reactors necessitated a three-dimensional flow model. Later, under laboratory-discretionary funding, multi-dimensional reactor kinetics were added. Up until the end of 1995, INL maintained NRC and DOE versions of the code in a single source code that could be partitioned before compilation. It became clear by then, however, that the efficiencies realized by the maintenance of a single source were being overcome by the extra effort required to accommodate sometimes conflicting requirements. The code was therefore "split" into two versions—one for NRC and the other for DOE. The DOE version maintained all of the capabilities and validation history of the predecessor code, plus the added capabilities that had been sponsored by the DOE before and after the split. The most prominent attribute that distinguishes the DOE code from the NRC code is the fully integrated, multi-dimensional thermal-hydraulic and kinetic modeling capability in the DOE code. This removes any restrictions on the applicability of the code to the full range of postulated reactor accidents. Other enhancements include a new matrix solver, additional water properties, and improved time advancement for greater robustness. Features Modeling Capability RELAP5-3D has multidimensional thermal hydraulics and neutron kinetic modeling capabilities. The multidimensional component in RELAP5-3D was developed to allow the user to accurately model the multidimensional flow behavior that can be exhibited in any component or region of a nuclear reactor coolant system. There is also two dimensional conductive and radiative heat transfer capability and modeling of plant trips and control systems. RELAP5-3D allows for the simulation of the full range of reactor transients and postulated accidents, including: Trips and controls Component models (pumps, valves, separators, branches, etc.) Operational transients Startup and shutdown Maneuvers (e.g. 
change in power level, starting/tripping pump) Small and large break Loss Of Coolant Accidents (LOCA) Anticipated Transient Without Scram (ATWS) Loss of offsite power Loss of feedwater Loss of flow Light Water Reactors (PWR, BWR, APWR, ABWR, etc.) Heavy Water Reactors (e.g. CANDU reactor) Gas-cooled Reactors (VHTGR, NGNP) Liquid metal cooled reactors Molten-salt cooled reactors Hydrodynamic Model RELAP5-3D is a transient, two-fluid model for flow of a two-phase vapor/gas-liquid mixture that can contain non-condensable components in the vapor/gas phase and/or a soluble component in the liquid phase. The multi-dimensional component in RELAP5-3D was developed to allow the user to more accurately model the multi-dimensional flow behavior that can be exhibited in any component or region of an LWR system. Typically, this will be the lower plenum, core, upper plenum and downcomer regions of an LWR. However, the model is general, and is not restricted to use in the reactor vessel. The component defines a one, two, or three-dimensional array of volumes and the internal junctions connecting them. The geometry can be either Cartesian (x, y, z) or cylindrical (r, q, z). An orthogonal, three-dimensional grid is defined by mesh interval input data in each of the three coordinate directions. The functionality of the multi-dimensional component has been under testing and refinement since it was first applied to study the K reactor at Savannah River in the early 1990s. A set of ten verification test cases with closed form solutions are used to demonstrate the correctness of the numerical formulation for the conservation equations. Recent developments have updated the programming language to FORTRAN 95 and incorporated viscous effects in multi-dimensional hydrodynamic models. Currently, RELAP5-3D contains 27 different working fluids including: Light water (e.g. 1967, 1984, and 1995 steam tables) Heavy water Gases (e.g. helium and carbon dioxide) Molten salts (e.g. FLiBe and FLiNaK) Liquid metals (e.g. sodium and lead-bismuth eutectic) Alternative fluids (e.g. glycerin and ammonia) Refrigerants (e.g. R-134a) Working fluids allow single-phase, two-phase, and supercritical applications. Thermal Model Heat structures provided in RELAP5-3D permit calculation of heat transferred across solid boundaries of hydrodynamic volumes. Modeling capabilities of heat structures are general and include fuel pins or plates with nuclear or electrical heating, heat transfer across steam generator tubes, and heat transfer from pipe and vessel walls. Temperature-dependent and space-dependent thermal conductivities and volumetric heat capacities are provided in tabular or functional form either from built-in or user-supplied data. There is also a radiative/conductive enclosure model, for which the user may supply/view conductance factors. Control System RELAP5-3D allows the user to model a control system typically used in hydrodynamic systems, including other phenomena described by algebraic and ordinary differential equations. Each control system component defines a variable as a specific function of time-advanced quantities; this permits control variables to be developed from components that perform simple, basic operations. Reactor Kinetics There are two options that include a point reactor kinetics model and a multidimensional neutron kinetics model. A flexible neutron cross section model and a control rod model have been implemented to allow for the complete modeling of the reactor core. 
The decay heat model developed as part of the point reactor kinetics model has been modified to compute decay power for point reactor kinetics and multi-dimensional neutron kinetics models. Recent Major Upgrades Accurate Verification Capability Verification ensures the program is built right by: (1) showing it meets its design specifications, (2) comparing its calculations against analytical solutions and method of manufactured solutions. RELAP5-3D Sequential Verification writes a file of extremely accurate representations of primary variables for comparing calculations between code versions to reveal any changes. The test suite of input models exercise code capabilities important for modeling nuclear plants. This verification capability also provides means to test that important code functions such as restart and backup work properly. Moving System Modeling Capability The ability to simulate movement, such as could be encountered in ships, airplanes, or a terrestrial reactor during an earthquake becomes available in the 2013 release of RELAP5-3D. This capability allows the user to simulate motion through input, including translational displacement and rotation about the origin implied by the position of the reference volume. The transient rotation can be input using either Euler or pitch-yaw-roll angles. The movement is simulated using a combination of sine functions and tables of rotational angles and translational displacement. Since the gravitational constant is also an input quantity, this capability is not limited to the surface of the Earth. It allows RELAP5-3D to model reactor systems on space craft, a space station, the moon, or other extraterrestrial bodies. International RELAP5 Users Group There are five different levels of membership available in the International RELAP5 Users Group (IRUG). Each has a different level of benefits, services, and membership fee. Members A full member organization is the highest level of participation possible in the IRUG. Members receive the RELAP5-3D software in source code form. Multiple copy use is allowed. Two levels of membership are available: Regular and "Super User". Regular Member organizations receive up to 40 hours of on-call assistance in areas such as model noding, code usage recommendations, debugging, and interpretations of results from INL RELAP5 technical experts. Super Users receive up to 100 hours of staff assistance. Multi-Use Participants Multi-use participants are organizations that require use of the code but do not need or desire all the benefits of a full member. Participants receive the RELAP5-3D software in executable form only. Multiple copy use is allowed. Participants receive up to 20 hours of staff assistance. Single-Use Participants Single-use participants are restricted to use RELAP5-3D on a single computer, one user at a time. They receive the RELAP5-3D executable code and may receive up to 5 hours of staff assistance. University Participants University Participants may acquire a license to RELAP5-3D for educational purposes. Training Participants Training participants have two main options available: they can receive a 3-month single-use license for the RELAP5-3D code and up to 10 hours of staff assistance, or a 3-month multiple-use license and up to 40 hours of on-call technical assistance. Alternative arrangements can be made based on customers' needs. These levels of participation are designed for those interested in participating in training courses. One set of RELAP5-3D training videos is included. 
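The moving-system paragraph above describes specifying body motion through rotation angles (Euler or pitch-yaw-roll). The sketch below is a generic illustration of composing a rotation matrix from pitch-yaw-roll angles; it is not RELAP5-3D input or source code, and the angle convention shown is an assumption made purely for illustration.

import math

def rotation_from_pitch_yaw_roll(pitch, yaw, roll):
    # Illustrative convention only: R = Rz(yaw) * Ry(pitch) * Rx(roll), angles in radians.
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    cr, sr = math.cos(roll), math.sin(roll)
    rx = [[1, 0, 0], [0, cr, -sr], [0, sr, cr]]
    ry = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]
    rz = [[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]]
    def matmul(m, n):
        return [[sum(m[i][k] * n[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    return matmul(rz, matmul(ry, rx))

# A 30-degree yaw applied to the x unit vector:
r = rotation_from_pitch_yaw_roll(0.0, math.radians(30), 0.0)
print([row[0] for row in r])  # first column = image of the x unit vector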
Major RELAP5-3D Releases Notes References J. A. Findley and G. L. Sozzi, "BWR Refill-Reflood Program – Model Qualification Task Plan," EPRI NP-1527, NUREG/CR-1899, GEAP-24898, October 1981. T. M. Anklam, R. J. Miller, M. D. White, "Experimental Investigation of Uncovered-Bundle Heat Transfer and Two-Phase Mixture Level Swell Under High-Pressure and Low Heat Flux Conditions," NUREG/CR-2456, ORNL-5848, Oak Ridge National Laboratory, March 1982. K. Carlson, R. Riemke, R. Wagner, J. Trapp, "Addition of Three-Dimensional Modeling," RELAP5/TRAC-B International Users Seminar, Baton Rouge, LA, November 4–8, 1991. R. Riemke, "RELAP5 Multi-Dimensional Constitutive Models," RELAP5/TRAC-B International Users Seminar, Baton Rouge, LA, November 4–8, 1991. H. Finnemann and A. Galati, "NEACRP 3-D LWR Core Transient Benchmark – Final Specifications," NEACRP-L-335 (Revision 1), January, 1992. K. Carlson, R. Riemke, R. Wagner, "Theory and Input Requirements for the Multi-Dimensional Component in RELAP5 for Savannah River Site Thermal-Hydraulic Analysis," EGG-EAST-9878, Idaho National Engineering Laboratory, July, 1992. K. Carlson, C. Chou, C. Davis, R. Martin, R. Riemke, R. Wagner, "Developmental Assessment of the Multi-Dimensional Component in RELAP5 for Savannah River Site Thermal-Hydraulic Analysis," EGG-EAST-9803, Idaho National Engineering Laboratory, July, 1992. K. Carlson, C. Chou, C. Davis, R. Martin, R. Riemke, R. Wagner, R. Dimenna, G. Taylor, V. Ransom, J. Trapp, "Assessment of the Multi-Dimensional Component in RELAP5/MOD2.5", Proceedings of the 5th International Topical Meeting on Nuclear Reactor Thermal-Hydraulics, Salt Lake City, Utah, USA, September 21–24, 1992. P. Murray, R. Dimenna, C. Davis, "A Numerical Study of the Three Dimensional Hydrodynamic Component in RELAP5/MOD3", RELAP5 International Users Seminar, Boston, MA, USA, July, 1993. G. Johnsen, "Status and Details of the 3-D Fluid Modeling of RELAP5," Code Application and Maintenance Program Meeting, Santa Fe, NM, October, 1993. H. Finnemann, et al., "Results of LWR Core Transient Benchmarks," Proceedings of the Joint International Conference on Mathematical Methods and Supercomputing in Nuclear Applications, Vol. 2, pg. 243, Kernforschungszentrum, Karlsruhe, Germany, April, 1993. A. S. Shieh, V. H. Ransom, R Krishnamurthy, RELAP5/MOD3 Code Manual Volume 6: Validation of Numerical Techniques in RELAP5/MOD3, NUREG/CR-5535, EGG-2596, October, 1994. C. Davis, "Assessment of the RELAP5 Multi-Dimensional Component Model Using Data from LOFT Test L2-5," INEEL-EXT-97-01325, Idaho National Engineering Laboratory, January, 1998. R. M. Al-Chalabi, et al., "NESTLE: A Nodal Kinetics Code," Transactions of the American Nuclear Society, Volume 68, June, 1993. J. L. Judd, W. L. Weaver, T. Downar, J. G. Joo, "A Three Dimensional Nodal Neutron Kinetics Capability for RELAP5," Proceedings of the 1994 Topical Meeting on Advances in Reactor Physics, Knoxville, TN, April 11–15, 1994, Vol. II, pp 269–280. E. Tomlinson, T. Rens, R. Coffield, "Evaluation of the RELAP5/MOD3 Multidimensional Component Model", RELAP5 International Users Seminar, Baltimore, MD, August 29 – September 1, 1994. K. Carlson, "1D to 3D Connection for the Semi-Implicit Scheme," R5M3BET-001, Idaho National Engineering Laboratory, June, 1997. A. Shieh, "1D to 3D Connection for the Nearly-Implicit Scheme," R5M3BET-002, Idaho National Engineering Laboratory, June, 1997. J. A. Galbraith, G. L. 
Mesina, "RELAP5/RGUI Architectural Framework", Proceedings of the 8th International Conference on Nuclear Energy (ICONE-8), Baltimore, MD, USA, April 2–6, 2000. G. L. Mesina and P. P. Cebull, "Extreme Vectorization in RELAP5-3D," Proceedings of the Cray User Group 2004, Knoxville, TN, USA, May 16–21, 2004. D. P. Guillen, G. L. Mesina, J. M. Hykes, "Restructuring RELAP5-3D for Next Generation Nuclear Plant Analysis," 2006 Transactions of the American Nuclear Society, Vol. 94, June 2006. G. L. Mesina, "Reformulation RELAP5-3D in FORTRAN 95 and Results," Proceedings of the ASME 2010 Joint US-European Fluids Engineering Summer Meeting and 8th International Conference on Nanochannels Microchannels, and Minichannels, FEDSM2010-ICNMM2010, Montreal, Quebec, Canada, Aug 1–5, 2010. The RELAP5-3D Code Development Team, RELAP5-3D Code Manual Volume I: Code Structure, System Models and Solution Methods, INL-EXT-98-00834-V1, Revision 4.2, Idaho National Laboratory, June, 2014. The RELAP5-3D Code Development Team, RELAP5-3D Code Manual Volume II: User's Guide and Input Requirements, INEEL-EXT-98-00834, Revision 4.2, Section 8.7, Idaho National Laboratory, PO Box 1625, Idaho Falls, Idaho 83415, June, 2014. The RELAP5-3D Code Development Team, RELAP5-3D Code Manual Volume II: User's Guide and Input Requirements, Appendix A, INEEL-EXT-98-00834, Revision 4.2, Idaho National Laboratory, PO Box 1625, Idaho Falls, Idaho 83415, June, 2014. The RELAP5-3D Code Development Team, RELAP5-3D Code Manual Volume III: Developmental Assessment, INL-EXT-98-00834, Revision 4.2, June, 2014. The RELAP5-3D Code Development Team, RELAP5-3D Code Manual Volume IV: Models and Correlations, INL-EXT-98-00834, Revision 4.2, June, 2014. The RELAP5-3D Code Development Team, RELAP5-3D Code Manual Volume V: User's Guidelines, INL-EXT-98-00834, Revision 4.2, June, 2014. G. L. Mesina, D. L. Aumiller, F. X. Buschman, "Automated, Highly Accurate Verification of RELAP5-3D," ICONE22-31153, Proceedings of the 22nd International Conference on Nuclear Engineering, Prague, Czech Republic, July 7–11, 2014. See also Thermal-hydraulics Nuclear safety Computational fluid dynamics External links RELAP5-3D Homepage International RELAP5 Users Group Fortran software Physics software Industrial software Computational fluid dynamics Nuclear reactors Idaho National Laboratory
RELAP5-3D
[ "Physics", "Chemistry", "Technology" ]
3,838
[ "Computational fluid dynamics", "Physics software", "Computational physics", "Industrial software", "Industrial computing", "Fluid dynamics" ]
36,189,675
https://en.wikipedia.org/wiki/C22H27FN4O2
The molecular formula C22H27FN4O2 (molar mass: 398.47 g/mol) may refer to: DPA-714, or N,N-diethyl-2-[4-(2-fluoroethoxy)phenyl]-5,7-dimethylpyrazolo[1,5-a]pyrimidine-3-acetamide Sunitinib Molecular formulas
C22H27FN4O2
[ "Physics", "Chemistry" ]
115
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
36,190,447
https://en.wikipedia.org/wiki/Kinetic-segregation%20model%20of%20T%20cell%20activation
Kinetic-segregation is a model proposed for the mechanism of T-cell receptor (TCR) triggering. It offers an explanation for how TCR binding to its ligand triggers T-cell activation, based on size-sensitivity for the molecules involved. Simon J. Davis and Anton van der Merwe, University of Oxford, proposed this model in 1996. According to the model, TCR signalling is initiated by segregation of phosphatases with large extracellular domains from the TCR complex when binding to its ligand, allowing small kinases to phosphorylate intracellular domains of the TCR without inhibition. Its might also be applicable to other receptors of the Non-catalytic tyrosine-phosphorylated receptors family such as CD28. Mechanism On plasma membrane of a T cell there is the T-cell receptor (consists of α,β chains and multiple CD3 adaptor proteins), as well as molecules that induce signalling (tyrosine kinase Lck phosphorylates ITAMs in CD3 complex) and factors that inhibit signalling (tyrosine phosphatases CD45 and CD148). In the resting T-cell, all molecules are repeatedly colliding by means of diffusion. The TCR/CD3 complex is constantly being phosphorylated by Lck. Because of an abundance of CD45 and CD148 in the cell membrane, phosphorylations are readily removed before they can recruit downstream signalling molecules. Overall phosphorylation of the TCR is low and tonic TCR signalling is avoided. The TCR/peptide-MHC complex, formed when a T cell recognises its ligand on an antigen presenting cell (APC) and the T-cell-APC contact occurs, spans a short length. This results in the formation of close contact zones between the membranes of the T cell and antigen presenting cell (~15 nm apart) around the TCR/peptide-MHC complex. Phosphatases CD45 and CD148 with much larger ectodomains than TCR are sterically excluded from the close contact zones, while the region is still accessible for the small kinase Lck. This perturbs the balance of kinase activity to phosphatase activity and ITAM phosphorylation is strongly favoured. Such prolonged phosphorylation of ITAMs by Lck kinase allows time for ZAP-70 recruitment, its activation by phosphorylation and subsequent phosphorylation of adaptor proteins LAT and SLP-76. Full T-cell activation is initiated by multiple triggering events described above. When T-cell and APC membranes separate, the close-contact zone vanishes and large-ectodomain tyrosine phosphatases are allowed to restore the ground state. Supporting evidence During ligand binding, CD45 and CD148 are excluded from the TCR region. It was also shown that both the truncation of CD45 and CD148 (hence are able to enter the close contact zone) and the elongation of the MHC inhibit TCR triggering. Furthermore CAR cell function is affected by the size of the ligand it recognises. Finally, T cells can be activated by pMHC immobilised on a plate surface but not by soluble, monomeric pMHC, providing evidence that TCR triggering depends on restricting width between two membranes. Kinetic segregation as model for other signalling receptors Antibody-induced signaling by CD28 In the resting T-cell there is no net phosphorylation of CD28 (one of the molecules providing co-stimulatory signals required for T-cell activation). Kinetic-segregation model uses here the same explanation as it provides for low net phosphorylation of TCR in the resting T-cell described previously. 
Binding of both conventional and superagonistic (mitogenic) antibodies in suspension does not constrict the dephosphorylation effect of phosphatases acting on CD28. However, when these antibodies are immobilized (either by secondary antibody bound to plastic or by Fc receptors on other cells) considerable steric constraints emerge. It is of note, that the immobilized conventional antibody poses less prominent spatial constraints than the immobilized superagonistic antibody. CD45 phosphatase is not completely excluded from the close-contact zone and thus the signal generated in the case of a conventional antibody is weaker. Immobilized superagonistic antibodies bound to CD28 exclude CD45 phosphatases completely and the signal leading to T-cell activation is stronger. Further applications The tyrosine kinase Lck functions either in conjunction with a co-receptor molecule (CD4 or CD8) or as a free Lck kinase. The kinetic-segregation model might be applied to both co-receptor dependent and co-receptor independent signaling through TCR. See also Non-catalytic tyrosine-phosphorylated receptor T-cell receptor References Cell biology Immunology theories
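As an illustration of the kinase–phosphatase balance described above, the toy calculation below treats net ITAM phosphorylation as the steady state of opposing first-order kinase and phosphatase activities, with "segregation" modeled simply as a drop in the effective phosphatase rate inside a close-contact zone. The rate constants are hypothetical and the model is not taken from the source.

def steady_state_phospho_fraction(k_kinase, k_phosphatase):
    # Steady state of d[pITAM]/dt = k_kinase*(1 - p) - k_phosphatase*p,
    # with total ITAM abundance normalised to 1.
    return k_kinase / (k_kinase + k_phosphatase)

# Resting T cell: abundant CD45/CD148 keeps net phosphorylation low.
print(steady_state_phospho_fraction(k_kinase=1.0, k_phosphatase=20.0))  # ~0.05
# Close-contact zone: large-ectodomain phosphatases are sterically excluded.
print(steady_state_phospho_fraction(k_kinase=1.0, k_phosphatase=0.5))   # ~0.67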
Kinetic-segregation model of T cell activation
[ "Biology" ]
1,040
[ "Cell biology" ]
36,193,702
https://en.wikipedia.org/wiki/De%20Beghinselen%20Der%20Weeghconst
De Beghinselen der Weeghconst ("The Principles of the Art of Weighing") is a book about statics written in Dutch by the Flemish physicist Simon Stevin. It was published in 1586 in a single volume with De Weeghdaet ("The Act of Weighing"), De Beghinselen des Waterwichts ("The Principles of Hydrostatics") and an Anhang (an appendix). A second edition appeared in 1605. Importance The importance of the book was summarized by the Encyclopædia Britannica: Contents The first part consists of two books, which together run to 95 pages and are divided here into 10 sections. Book I Opening matter: panegyrics, a dedication to Rudolf II, the Uytspraeck Vande Weerdicheyt der Duytsche Tael (a discourse on the worth of the Dutch language), a Cortbegryp (summary), and the Bepalinghen and Begheerten (definitions and postulates) Propositions 1 to 4: the law of the lever Propositions 5 to 12: a balance with weights on a column Propositions 13 to 18: continuation, with a lifting weight and two supports Proposition 19: equilibrium on an inclined plane, demonstrated with the "clootcrans" (wreath of spheres) Propositions 20 to 28: a column with oblique weights, hanging bodies Book II Propositions 1 to 6: centres of gravity of plane figures – triangle, rectilinear figures Propositions 7 to 13: trapezium, division, "cut fire" Propositions 14 to 24: centres of gravity of solids – column, pyramid, "burner" The Weeghdaet The Beghinselen des Waterwichts Anhang Byvough (supplement) See also Simon Stevin References Further reading 1586 books Mathematics books Physics books Statics
De Beghinselen Der Weeghconst
[ "Physics" ]
363
[ "Statics", "Classical mechanics" ]
36,194,532
https://en.wikipedia.org/wiki/Fractal%20derivative
In applied mathematics and mathematical analysis, the fractal derivative or Hausdorff derivative is a non-Newtonian generalization of the derivative dealing with the measurement of fractals, defined in fractal geometry. Fractal derivatives were created for the study of anomalous diffusion, by which traditional approaches fail to factor in the fractal nature of the media. A fractal measure t is scaled according to tα. Such a derivative is local, in contrast to the similarly applied fractional derivative. Fractal calculus is formulated as a generalization of standard calculus. Physical background Porous media, aquifers, turbulence, and other media usually exhibit fractal properties. Classical diffusion or dispersion laws based on random walks in free space (essentially the same result variously known as Fick's laws of diffusion, Darcy's law, and Fourier's law) are not applicable to fractal media. To address this, concepts such as distance and velocity must be redefined for fractal media; in particular, scales for space and time are to be transformed according to (xβ, tα). Elementary physical concepts such as velocity are redefined as follows for fractal spacetime (xβ, tα): , where Sα,β represents the fractal spacetime with scaling indices α and β. The traditional definition of velocity makes no sense in the non-differentiable fractal spacetime. Definition Based on above discussion, the concept of the fractal derivative of a function f(t) with respect to a fractal measure t has been introduced as follows: , A more general definition is given by . For a function y(t) on -perfect fractal set F the fractal derivative or -derivative of y(t) at t is defined by . Motivation The derivatives of a function f can be defined in terms of the coefficients ak in the Taylor series expansion: From this approach one can directly obtain: This can be generalized approximating f with functions (xα-(x0)α)k: Note that the lowest order coefficient still has to be b0=f(x0), since it's still the constant approximation of the function f at x0. Again one can directly obtain: The Fractal Maclaurin series of f(t) with fractal support F is as follows: Properties Expansion coefficients Just like in the Taylor series expansion, the coefficients bk can be expressed in terms of the fractal derivatives of order k of f: Proof idea: Assuming exists, bk can be written as One can now use and since Chain rule If for a given function f both the derivative Df and the fractal derivative Dαf exist, one can find an analog to the chain rule: The last step is motivated by the implicit function theorem which, under appropriate conditions, gives us Similarly for the more general definition: Application in anomalous diffusion As an alternative modeling approach to the classical Fick's second law, the fractal derivative is used to derive a linear anomalous transport-diffusion equation underlying anomalous diffusion process, where 0 < α < 2, 0 < β < 1, , and δ(x) is the Dirac delta function. To obtain the fundamental solution, we apply the transformation of variables then the equation (1) becomes the normal diffusion form equation, the solution of (1) has the stretched Gaussian kernel: The mean squared displacement of above fractal derivative diffusion equation has the asymptote: Fractal-fractional calculus The fractal derivative is connected to the classical derivative if the first derivative exists. In this case, . 
However, due to the differentiability property of an integral, fractional derivatives are differentiable, thus the following new concept was introduced by Prof Abdon Atangana from South Africa. The following differential operators were introduced and applied very recently. Supposing that y(t) be continuous and fractal differentiable on (a, b) with order β, several definitions of a fractal–fractional derivative of y(t) hold with order α in the Riemann–Liouville sense: Having power law type kernel: Having exponentially decaying type kernel: , Having generalized Mittag-Leffler type kernel: The above differential operators each have an associated fractal-fractional integral operator, as follows: Power law type kernel: Exponentially decaying type kernel: . Generalized Mittag-Leffler type kernel: . FFM refers to fractal-fractional with the generalized Mittag-Leffler kernel. Fractal non-local calculus Fractal analogue of the right-sided Riemann-Liouville fractional integral of order of f is defined by: . Fractal analogue of the left-sided Riemann-Liouville fractional integral of order of f is defined by: Fractal analogue of the right-sided Riemann-Liouville fractional derivative of order of f is defined by: Fractal analogue of the left-sided Riemann-Liouville fractional derivative of order of f is defined by: Fractal analogue of the right-sided Caputo fractional derivative of order of f is defined by: Fractal analogue of the left-sided Caputo fractional derivative of order of f is defined by: See also Fractional calculus Fractional-order system Multifractal system References Bibliography External links Power Law & Fractional Dynamics Non-Newtonian calculus website Fractals Applied mathematics Non-Newtonian calculus Mathematical analysis
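Several of the displayed definitions in this article were lost in extraction (they appear above as dangling commas and full stops). The LaTeX block below restates the basic Hausdorff (fractal) derivative and its two-index generalization in the form the surrounding text describes; it is a reconstruction of the standard definitions, not a verbatim restoration of the missing displays.

\[
  \frac{\partial f(t)}{\partial t^{\alpha}}
  \;=\; \lim_{t_1 \to t}\frac{f(t_1)-f(t)}{t_1^{\alpha}-t^{\alpha}},
  \qquad \alpha > 0,
\]
\[
  \frac{\partial^{\beta} f(t)}{\partial t^{\alpha}}
  \;=\; \lim_{t_1 \to t}\frac{f^{\beta}(t_1)-f^{\beta}(t)}{t_1^{\alpha}-t^{\alpha}},
  \qquad \alpha > 0,\ \beta > 0,
\]

and the change of variables t' = t^α, x' = x^β is the transformation under which the fractal-derivative diffusion equation of the anomalous-diffusion section reduces to the ordinary diffusion equation.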
Fractal derivative
[ "Mathematics" ]
1,155
[ "Mathematical analysis", "Functions and mappings", "Calculus", "Applied mathematics", "Mathematical objects", "Non-Newtonian calculus", "Fractals", "Mathematical relations" ]
41,808,229
https://en.wikipedia.org/wiki/Tetranitratoborate
Tetranitratoborate is an anion composed of boron with four nitrate groups. It has the formula [B(NO3)4]−. It can form salts with large cations, such as tetramethylammonium nitratoborate or tetraethylammonium tetranitratoborate. The ion was first discovered by C. R. Guibert and M. D. Marshall in 1966 after failed attempts to make neutral (non-ionic) boron nitrate, B(NO3)3, which has resisted synthesis; if it exists, it is unstable above −78 °C. Other related ions are the slightly more stable tetraperchloratoborates, with perchlorate groups instead of nitrate, and tetranitratoaluminate ([Al(NO3)4]−), with the next atom down the periodic table, aluminium, instead of boron. Formation Tetramethylammonium chloride reacts with boron trichloride to make tetramethylammonium tetrachloroborate. The tetrachloroborate is then reacted with a nitrating agent at around −20 °C to form tetramethylammonium tetranitratoborate, along with gaseous by-products. Another route to tetranitratoborate salts is to shake a metal nitrate with boron trichloride in chloroform at 20 °C for several days; trichloronitratoborate is an unstable intermediate. Properties The infrared spectrum of tetramethylammonium nitratoborate includes a prominent line at 1,612 cm−1 with shoulders at 1,582 and 1,626 cm−1, attributed to ν4. Also prominent are bands at 1,297 and 1,311 cm−1, attributed to ν1; these vibrations are due to the nitrate being bonded through one oxygen. The density of tetramethylammonium nitratoborate is 1.555 g·cm−3. It is colourless and crystalline. When heated, tetramethylammonium nitratoborate undergoes a transition between 51 and 62 °C. It decomposes above 75 °C, producing gas. Above 112 °C the decomposition is exothermic, and a solid is left if it is heated to 160 °C. Tetramethylammonium nitratoborate is insoluble in cold water but slightly soluble in hot water. It does not react with water. It also dissolves in liquid ammonia, acetonitrile, methanol, and dimethylformamide. It reacts with liquid sulfur dioxide. At room temperature tetramethylammonium nitratoborate is stable for months. It does not explode on impact. Alkali metal tetranitratoborates are unstable at room temperature and decompose. 1-Ethyl-3-methylimidazolium tetranitratoborate was discovered in 2002. It is an ionic liquid that solidifies at −25 °C. References Nitrates Boron compounds Borates Anions
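The specific reagent used in the low-temperature step is not named in the text above and is not restored there. Purely as an illustration of the kind of chloride-to-nitrate metathesis being described, the equation below shows one plausible overall stoichiometry using dinitrogen tetroxide; this is an assumption, not a statement of the source.

\[
  [\mathrm{N(CH_3)_4}][\mathrm{BCl_4}] \;+\; 4\,\mathrm{N_2O_4}
  \;\longrightarrow\;
  [\mathrm{N(CH_3)_4}][\mathrm{B(NO_3)_4}] \;+\; 4\,\mathrm{NOCl}
  \qquad (\text{assumed stoichiometry})
\]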
Tetranitratoborate
[ "Physics", "Chemistry" ]
592
[ "Matter", "Anions", "Nitrates", "Salts", "Oxidizing agents", "Ions" ]
41,808,834
https://en.wikipedia.org/wiki/National%20Petroleum%20Authority
The National Petroleum Authority (NPA) is a statutory body set up by the Government of Ghana to regulate, oversee and monitor the Ghanaian petroleum industry. The authority was established after calls by the general public for efficiency, growth and stakeholder satisfaction in the industry. Dr. Mustapha Abdul-Hamid currently heads the NPA. History The authority was set up by a Legislative Instrument of the Parliament of Ghana and is empowered by the NPA Act of 2005, also known as Act 691. The authority is headquartered in Accra. Functions The regulatory mandate of the NPA allows it to regularly monitor and adjust the price of petroleum products in Ghana. References Government ministries of Ghana Petroleum organizations Energy in Ghana Energy regulatory authorities Regulation in Ghana
National Petroleum Authority
[ "Chemistry", "Engineering" ]
148
[ "Petroleum", "Petroleum organizations", "Energy organizations" ]
41,813,667
https://en.wikipedia.org/wiki/Tetraperchloratoaluminate
Tetraperchloratoaluminates are salts of the tetraperchloratoaluminate anion, [Al(ClO4)4]−. The anion contains aluminium tetrahedrally surrounded by four perchlorate groups. The perchlorate is covalently bonded to the aluminium, although perchlorate is much better known as a free ion. The covalent bond to aluminium distorts the perchlorate group and renders it unstable. Related species are the haloperchloratoaluminates, in which one perchlorato group is attached to aluminium together with three halogen atoms, such as chlorine (chloroperchloratoaluminates) or bromine (bromoperchloratoaluminates).
Tetraperchloratoaluminate
[ "Physics", "Chemistry" ]
307
[ "Matter", "Anions", "Perchlorates", "Salts", "Ions" ]
29,580,777
https://en.wikipedia.org/wiki/Luxol%20fast%20blue%20stain
Luxol fast blue stain, abbreviated LFB stain or simply LFB, is a commonly used stain for observing myelin under light microscopy, created by Heinrich Klüver and Elizabeth Barrera in 1953. LFB is commonly used to detect demyelination in the central nervous system (CNS), but cannot discern myelination in the peripheral nervous system. Procedure Luxol fast blue is a copper phthalocyanine dye that is soluble in alcohol and is attracted to bases found in the lipoproteins of the myelin sheath. Under the stain, myelin fibers appear blue, neuropil appears pink, and nerve cells appear purple. Tissue sections are treated over an extended period of time (usually overnight) and then differentiated with a lithium carbonate solution. Combination methods The combination of LFB with a variety of common staining methods provides the most useful and reliable approach for the demonstration of pathological processes in the CNS. It is often combined with H&E stain (hematoxylin and eosin), a combination abbreviated H-E-LFB or H&E-LFB. Other common staining methods used in combination include the periodic acid–Schiff, Oil Red O, phosphotungstic acid, and Holmes silver nitrate methods. See also Bielschowsky stain References Staining Histology Histochemistry
Luxol fast blue stain
[ "Chemistry", "Biology" ]
280
[ "Staining", "Histology", "Microbiology techniques", "Microscopy", "Cell imaging" ]
29,583,215
https://en.wikipedia.org/wiki/Cotransformation
Cotransformation is the simultaneous transformation of two or more genes. Only genes in the same chromosomal vicinity can be cotransformed; the closer together the genes lie, the more frequently they will be cotransformed. By contrast, genes sufficiently far apart that they cannot appear together on a fragment of foreign DNA will almost never be cotransformed, because transformation is so inefficient that recipient cells usually take up only a single DNA fragment. Example In one study of natural transformation, investigators isolated B. subtilis bacteria with two mutations—trpC2 and hisB2—that made them Trp−, His− auxotrophs. These double auxotrophs served as the recipients in the study, while wild-type cells (Trp+, His+) were the donors. In this study, the numbers of Trp+ and His+ transformants were equal. Further tests showed that 40 of every 100 Trp+ transformant colonies were also His+. Similarly, tests of the His+ transformants showed that roughly 40% were also Trp+. Thus, in 40% of the analyzed colonies, the trpC+ and hisB+ genes had been cotransformed. Explanation Since, during transformation, donor DNA replaces only a small percentage of the recipient's chromosome, why are the two B. subtilis genes cotransformed with such high frequency? Because the trpC and hisB genes lie very close together on the chromosome and are thus genetically linked. Although the donor chromosome is fragmented into small pieces of about 20 kb during its extraction for the transformation process, the wild-type trpC+ and hisB+ alleles are so close that they often appear on the same donor DNA molecule. Sequence analysis shows that the trpC and hisB genes are only about 7 kb apart. References Genetics
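A small numerical sketch of the calculation implicit in the example above: the cotransformation frequency is simply the fraction of transformants selected for one marker that also carry the second marker, and a high value indicates that the two loci can ride on the same roughly 20 kb donor fragment. Function names are illustrative.

def cotransformation_frequency(double_transformants, single_marker_transformants):
    # Fraction of colonies selected for one marker that also carry the other marker.
    return double_transformants / single_marker_transformants

# Numbers from the B. subtilis example above: 40 of every 100 Trp+ colonies were also His+.
freq = cotransformation_frequency(40, 100)
print(freq)  # 0.4 -> trpC and hisB are tightly linked (about 7 kb apart, well under 20 kb)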
Cotransformation
[ "Biology" ]
379
[ "Genetics" ]
29,584,036
https://en.wikipedia.org/wiki/Deliberative%20agent
A deliberative agent (also known as an intentional agent) is a type of software agent used mainly in multi-agent system simulations. According to Wooldridge's definition, a deliberative agent is "one that possesses an explicitly represented, symbolic model of the world, and in which decisions (for example about what actions to perform) are made via symbolic reasoning". Compared to reactive agents, which are able to reach their goal only by reacting reflexively to external stimuli, a deliberative agent's internal processes are more complex. The difference lies in the fact that a deliberative agent maintains a symbolic representation of the world it inhabits. In other words, it possesses an internal image of the external environment and is thus able to plan its actions. The most commonly used architecture for implementing such behavior is the Belief–Desire–Intention (BDI) software model, where an agent's beliefs about the world (its image of the world), desires (goals) and intentions are internally represented and practical reasoning is applied to decide which action to select. There has been considerable research focused on integrating both reactive and deliberative agent strategies, resulting in a compound design called the hybrid agent, which combines extensive manipulation of nontrivial symbolic structures with reflexive reactive responses to external events. How do deliberative agents work? As already mentioned, a deliberative agent possesses (a) an internal image of the outer world and (b) a goal to achieve, and is thus able to produce a list of actions (a plan) to reach the goal. In unfavorable conditions, when the plan is no longer applicable, the agent is usually able to recompute it. The process of plan computing (or recomputing) is as follows: a sensory input is received by the belief revision function and the agent's beliefs are altered; the option generation function evaluates the altered beliefs and intentions and creates the options available to the agent, constituting the agent's desires; the filter function then considers the current beliefs, desires and intentions and produces new intentions; the action selection function then receives the intentions from the filter function and decides which action to perform. The deliberative agent requires a symbolic representation with compositional semantics (e.g. a data tree) in all of its major functions, for its deliberation is not limited to present facts: it also constructs hypotheses about possible future states and may hold information about the past (i.e. memory). These hypothetical states involve goals, plans, partial solutions, hypothetical states of the agent's beliefs, etc. It is evident that the deliberative process may become considerably complex and computationally demanding. History of the concept Since the early 1970s, the AI planning community has been involved in developing the artificial planning agent (a predecessor of the deliberative agent), which would be able to choose a proper plan leading to a specified goal. These early attempts resulted in the construction of a simple planning system called STRIPS. It soon became obvious that the STRIPS concept needed further improvement, for it was unable to effectively solve problems of even moderate complexity. In spite of considerable effort to raise its efficiency (for example by implementing hierarchical and non-linear planning), the system remained somewhat weak when working in any time-constrained setting. More successful attempts were made in the late 1980s to design planning agents.
For example, the IPEM (Integrated Planning, Execution and Monitoring system) had a sophisticated non-linear planner embedded. Further, Wood's AUTODRIVE simulated a behavior of deliberative agents in a traffic and Cohen's PHOENIX system was construed to simulate a forest fire management. In 1976, Simon and Newell formulated the Physical Symbol System hypothesis, which claims, that both human and artificial intelligence have the same principle - symbol representation and manipulation. According to the hypothesis it follows, that there is no substantial difference between human and machine in intelligence, but just quantitative and structural - machines are much less complex. Such a provocative proposition must have become the object of serious criticism and raised a wide discussion, but the problem itself still remains unsolved in its merit until these days. Further development of classical symbolic AI proved not to be dependent on final verifying the Physical Symbol System hypothesis at all. In 1988, Bratman, Israel and Pollack introduced Intelligent Resource-bounded Machine Architecture (IRMA), the first system implementing the Belief–Desire–Intention (BDI) software model. IRMA exemplifies the standard idea of deliberative agent as it is known today: a software agent embedding the symbolic representation and implementing the BDI. Efficiency of deliberative agents compared to reactive ones Above-mentioned troubles with symbolic AI have led to serious doubts about the viability of such a concept, which resulted in developing a reactive architecture, which is based on wholly different principles. Developers of the new architecture have rejected using symbolic representation and manipulation as a base of any artificial intelligence. Reactive agents achieve their goals simply through reactions on changing environment, which implies reasonable computational modesty. Even though deliberative agents consume much more system resources than their reactive colleagues, their results are significantly better just in few special situations, whereas it is usually possible to replace one deliberative agent with few reactive ones in many cases, without losing a substantial deal of the simulation result's adequacy. It seems that classical deliberative agents may be usable especially where correct action is required, for their ability to produce optimal, domain-independent solution. Deliberative agent often fails in changing environment, for it is unable to re-plan its actions quickly enough. See also Multi-agent system Artificial intelligence (AI) Software agent Intelligent agent Notes External links Reactive vs. Deliberative agents Keyword 'Deliberative agent' at Encyclopedia.com Multi-agent systems
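The plan-computation cycle described above (belief revision, option generation, filtering, action selection) can be summarized in a schematic sketch. The Python below is illustrative only: the class and method names mirror the article's terminology, and the method bodies are deliberately trivial placeholders rather than a real BDI implementation.

class DeliberativeAgent:
    # Minimal BDI-style deliberation loop; every strategy here is a stub.

    def __init__(self):
        self.beliefs = {}        # symbolic model of the world
        self.desires = set()     # options / goals under consideration
        self.intentions = set()  # goals the agent has committed to

    def belief_revision(self, percept):
        # 1. Sensory input alters the agent's beliefs.
        self.beliefs.update(percept)

    def option_generation(self):
        # 2. Altered beliefs (and current intentions) yield the available options.
        self.desires = {fact for fact, holds in self.beliefs.items() if not holds}

    def filter_function(self):
        # 3. Beliefs, desires and intentions are filtered into new intentions.
        self.intentions = set(self.desires)

    def action_selection(self):
        # 4. An action serving some current intention is chosen.
        return next(iter(self.intentions), None)

    def step(self, percept):
        self.belief_revision(percept)
        self.option_generation()
        self.filter_function()
        return self.action_selection()

agent = DeliberativeAgent()
print(agent.step({"door_open": False}))  # -> 'door_open' becomes the goal acted upon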
Deliberative agent
[ "Engineering" ]
1,158
[ "Artificial intelligence engineering", "Multi-agent systems" ]
29,586,267
https://en.wikipedia.org/wiki/Origin%20and%20function%20of%20meiosis
The origin and function of meiosis are currently not well understood scientifically, and would provide fundamental insight into the evolution of sexual reproduction in eukaryotes. There is no current consensus among biologists on the questions of how sex in eukaryotes arose in evolution, what basic function sexual reproduction serves, and why it is maintained, given the basic two-fold cost of sex. It is clear that it evolved over 1.2 billion years ago, and that almost all species which are descendants of the original sexually reproducing species are still sexual reproducers, including plants, fungi, and animals. Meiosis is a key event of the sexual cycle in eukaryotes. It is the stage of the life cycle when a cell gives rise to haploid cells (gametes) each having half as many chromosomes as the parental cell. Two such haploid gametes, ordinarily arising from different individual organisms, fuse by the process of fertilization, thus completing the sexual cycle. Meiosis is ubiquitous among eukaryotes. It occurs in single-celled organisms such as yeast, as well as in multicellular organisms, such as humans. Eukaryotes arose from prokaryotes more than 2.2 billion years ago and the earliest eukaryotes were likely single-celled organisms. To understand sex in eukaryotes, it is necessary to understand (1) how meiosis arose in single celled eukaryotes, and (2) the function of meiosis. Origin of meiosis There are two conflicting theories on how meiosis arose. One is that meiosis evolved from prokaryotic sex (bacterial recombination) as eukaryotes evolved from prokaryotes. The other is that meiosis arose from mitosis. From prokaryotic sex In prokaryotic sex, DNA from one prokaryote is taken up by another prokaryote and its information integrated into the DNA of the recipient prokaryote. In extant prokaryotes the donor DNA can be transferred either by transformation or conjugation. Transformation in which DNA from one prokaryote is released into the surrounding medium and then taken up by another prokaryotic cell may have been the earliest form of sexual interaction. One theory on how meiosis arose is that it evolved from transformation. According to this view, the evolutionary transition from prokaryotic sex to eukaryotic sex was continuous. Transformation, like meiosis, is a complex process requiring the function of numerous gene products. A key similarity between prokaryotic sex and eukaryotic sex is that DNA originating from two different individuals (parents) join up so that homologous sequences are aligned with each other, and this is followed by exchange of genetic information (a process called genetic recombination). After the new recombinant chromosome is formed it is passed on to progeny. When genetic recombination occurs between DNA molecules originating from different parents, the recombination process is catalyzed in prokaryotes and eukaryotes by enzymes that have similar functions and that are evolutionarily related. One of the most important enzymes catalyzing this process in bacteria is referred to as RecA, and this enzyme has two functionally similar counterparts that act in eukaryotic meiosis, RAD51 and DMC1. Support for the theory that meiosis arose from prokaryotic transformation comes from the increasing evidence that early diverging lineages of eukaryotes have the core genes for meiosis. This implies that the precursor to meiosis was already present in the prokaryotic ancestor of eukaryotes. 
For instance, the common intestinal parasite Giardia intestinalis, a simple eukaryotic protozoan, was, until recently, thought to be descended from an early diverging eukaryotic lineage that lacked sex. However, it has since been shown that G. intestinalis contains within its genome a core set of genes that function in meiosis, including five genes that function only in meiosis. In addition, G. intestinalis was recently found to undergo a specialized, sex-like process involving meiosis gene homologs. This evidence, and other similar examples, suggest that a primitive form of meiosis was present in the common ancestor of all eukaryotes, an ancestor that arose from an antecedent prokaryote. From mitosis Mitosis is the normal process of cell division in eukaryotes: chromosomes are duplicated and one of the two copies is segregated into each of the two daughter cells, in contrast with meiosis. The mitosis theory states that meiosis evolved from mitosis. According to this theory, early eukaryotes evolved mitosis first, became established, and only then did meiosis and sexual reproduction arise. Supporting this idea are observations of some features, such as the meiotic spindles that draw chromosome sets into separate daughter cells upon cell division, as well as processes regulating cell division that employ the same or similar molecular machinery. Yet there is no compelling evidence for a period in the early evolution of eukaryotes during which meiosis and accompanying sexual capability did not yet exist. In addition, as noted by Wilkins and Holliday, there are four novel steps needed in meiosis that are not present in mitosis. These are: (1) pairing of homologous chromosomes, (2) extensive recombination between homologs, (3) suppression of sister chromatid separation in the first meiotic division, and (4) avoidance of chromosome replication during the second meiotic division. Although the introduction of these steps seems complicated, Wilkins and Holliday argue that only one new step, homolog synapsis, needed to be newly established in the evolution of meiosis from mitosis. Meanwhile, two of the other novel features could have been simple modifications, and extensive recombination could have evolved later. Coevolution with mitosis If meiosis arose from prokaryotic transformation, during the early evolution of eukaryotes, mitosis and meiosis could have evolved in parallel. Both processes use shared molecular components, where mitosis evolved from the molecular machinery used by prokaryotes for DNA replication and segregation, and meiosis evolved from the prokaryotic sexual process of transformation. However, meiosis also made use of the evolving molecular machinery for DNA replication and segregation. Function Stress-induced sex Abundant evidence indicates that facultative sexual eukaryotes tend to undergo sexual reproduction under stressful conditions. For instance, the budding yeast Saccharomyces cerevisiae (a single-celled fungus) reproduces mitotically (asexually) as diploid cells when nutrients are abundant, but switches to meiosis (sexual reproduction) under starvation conditions. The unicellular green alga Chlamydomonas reinhardtii grows as vegetative cells in nutrient-rich growth medium, but depletion of a source of nitrogen in the medium leads to gamete fusion, zygote formation and meiosis. The fission yeast Schizosaccharomyces pombe, treated with H2O2 to cause oxidative stress, substantially increases the proportion of cells which undergo meiosis.
The simple multicellular eukaryote Volvox carteri undergoes sex in response to oxidative stress or stress from heat shock. These examples, and others, suggest that, in simple single-celled and multicellular eukaryotes, meiosis is an adaptation for responding to stress. Prokaryotic sex also appears to be an adaptation to stress. For instance, transformation occurs near the end of logarithmic growth, when amino acids become limiting in Bacillus subtilis, or in Haemophilus influenzae when cells are grown to the end of logarithmic phase. In Streptococcus mutans and other streptococci, transformation is associated with high cell density and biofilm formation. In Streptococcus pneumoniae, transformation is induced by the DNA-damaging agent mitomycin C. These, and other, examples indicate that prokaryotic sex, like meiosis in simple eukaryotes, is an adaptation to stressful conditions. This observation suggests that the natural selection pressures maintaining meiosis in eukaryotes are similar to the selective pressures maintaining prokaryotic sex. This similarity suggests continuity, rather than a gap, in the evolution of sex from prokaryotes to eukaryotes. Stress is, however, a general concept. What is it specifically about stress that needs to be overcome by meiosis? And what is the specific benefit provided by meiosis that enhances survival under stressful conditions? DNA repair In one theory, meiosis is primarily an adaptation for repairing DNA damage. Environmental stresses often lead to oxidative stress within the cell, which is well known to cause DNA damage through the production of reactive forms of oxygen, known as reactive oxygen species (ROS). DNA damages, if not repaired, can kill a cell by blocking DNA replication or transcription of essential genes. When only one strand of the DNA is damaged, the lost information (nucleotide sequence) can ordinarily be recovered by repair processes that remove the damaged sequence and fill the resulting gap by copying from the opposite intact strand of the double helix. However, ROS also cause a type of damage that is difficult to repair, referred to as double-strand damage. One common example of double-strand damage is the double-strand break. In this case, genetic information (nucleotide sequence) is lost from both strands in the damaged region, and proper information can only be obtained from another intact chromosome homologous to the damaged chromosome. The process that the cell uses to accurately accomplish this type of repair is called recombinational repair. Meiosis is distinct from mitosis in that a central feature of meiosis is the alignment of homologous chromosomes followed by recombination between them. The two chromosomes which pair are referred to as non-sister chromosomes, since they did not arise simply from the replication of a parental chromosome. Recombination between non-sister chromosomes at meiosis is known to be a recombinational repair process that can repair double-strand breaks and other types of double-strand damage. In contrast, recombination between sister chromosomes cannot repair double-strand damages arising prior to the replication which produced them. Thus, on this view, the adaptive advantage of meiosis is that it facilitates recombinational repair of DNA damage that is otherwise difficult to repair, and that occurs as a result of stress, particularly oxidative stress. If left unrepaired, this damage would likely be lethal to gametes and inhibit production of viable progeny.
Even in multicellular eukaryotes, such as humans, oxidative stress is a problem for cell survival. In this case, oxidative stress is a byproduct of oxidative cellular respiration occurring during metabolism in all cells. In humans, on average, about 50 DNA double-strand breaks occur per cell in each cell generation. Meiosis, which facilitates recombinational repair between non-sister chromosomes, can efficiently repair these prevalent damages in the DNA passed on to germ cells and, consequently, prevent loss of fertility in humans. Thus, under the theory that meiosis arose from prokaryotic sex, recombinational repair is the selective advantage of meiosis in both single-celled eukaryotes and multicellular eukaryotes, such as humans. An argument against this hypothesis is that adequate repair mechanisms, including those involving recombination, already exist in prokaryotes. Prokaryotes do have DNA repair mechanisms, including recombinational repair, and the existence of prokaryotic life in severe environments indicates the extreme efficiency of these mechanisms in helping prokaryotes survive many environmentally induced DNA damages. This implies that an additional costly repair process in the form of meiosis would be unnecessary. However, most of these mechanisms cannot be as accurate as meiosis and are possibly more mutagenic than the repair mechanism provided by meiosis. Most importantly, they do not require a second homologous chromosome for recombination, and it is this second homologue that allows the more extensive repair seen in meiosis. Thus, despite the efficiency of recombinational repair involving sister chromatids, such repair still needs to be improved upon, and another type of repair is required. Moreover, owing to the more extensive homologous recombinational repair in meiosis compared with the repair in mitosis, meiosis as a repair mechanism can accurately remove damage arising at any stage of the cell cycle to a greater extent than the mitotic repair mechanism can, and was, therefore, naturally selected. In contrast, the sister chromatid in mitotic recombination could have been exposed to a similar amount of stress, and thus this type of recombination, instead of eliminating the damage, could actually spread the damage and decrease fitness. Prophase I arrest Female mammals and birds are born possessing all the oocytes needed for future ovulations, and these oocytes are arrested at the prophase I stage of meiosis. In humans, as an example, oocytes are formed between three and four months of gestation within the fetus and are therefore present at birth. During this prophase I arrested stage (dictyate), which may last for many years, four copies of the genome are present in the oocytes. The arrest of oocytes at the four-genome-copy stage was proposed to provide the informational redundancy needed to repair damage in the DNA of the germline. The repair process used likely involves homologous recombinational repair. Prophase-arrested oocytes have a high capability for efficient repair of DNA damages. The adaptive function of the DNA repair capability during meiosis appears to be a key quality control mechanism in the female germ line and a critical determinant of fertility. Genetic diversity Another hypothesis to explain the function of meiosis is that stress is a signal to the cell that the environment is becoming adverse. Under this new condition, it may be beneficial to produce progeny that differ from the parent in their genetic make-up. Among these varied progeny, some may be more adapted to the changed condition than their parents.
Meiosis generates genetic variation in the diploid cell, in part by the exchange of genetic information between the pairs of chromosomes after they align (recombination). Thus, on this view, an advantage of meiosis is that it facilitates the generation of genomic diversity among progeny, allowing adaptation to adverse changes in the environment. However, in the presence of a fairly stable environment, individuals surviving to reproductive age have genomes that function well in their current environment. This raises the question of why such individuals should risk shuffling their genes with those of another individual, as occurs during meiotic recombination. Considerations such as this have led many investigators to question whether genetic diversity is a major adaptive advantage of sex. See also DNA repair Giardia Oxidative stress Asexual reproduction, ways to avoid the two-fold cost of sexual reproduction Apomixis Parthenogenesis References Mitosis DNA repair Meiosis
Origin and function of meiosis
[ "Biology" ]
3,171
[ "DNA repair", "Meiosis", "Molecular genetics", "Cellular processes", "Mitosis" ]
29,587,726
https://en.wikipedia.org/wiki/O-linked%20glycosylation
O-linked glycosylation is the attachment of a sugar molecule to the oxygen atom of serine (Ser) or threonine (Thr) residues in a protein. O-glycosylation is a post-translational modification that occurs after the protein has been synthesised. In eukaryotes, it occurs in the endoplasmic reticulum, Golgi apparatus and occasionally in the cytoplasm; in prokaryotes, it occurs in the cytoplasm. Several different sugars can be added to the serine or threonine, and they affect the protein in different ways by changing protein stability and regulating protein activity. O-glycans, which are the sugars added to the serine or threonine, have numerous functions throughout the body, including trafficking of cells in the immune system, allowing recognition of foreign material, controlling cell metabolism and providing cartilage and tendon flexibility. Because of the many functions they have, changes in O-glycosylation are important in many diseases including cancer, diabetes and Alzheimer's. O-glycosylation occurs in all domains of life, including eukaryotes, archaea and a number of pathogenic bacteria including Burkholderia cenocepacia, Neisseria gonorrhoeae and Acinetobacter baumannii. Common types of O-glycosylation O-N-acetylgalactosamine (O-GalNAc) Addition of N-acetylgalactosamine (GalNAc) to a serine or threonine occurs in the Golgi apparatus, after the protein has been folded. The process is performed by enzymes known as GalNAc transferases (GALNTs), of which there are 20 different types. The initial O-GalNAc structure can be modified by the addition of other sugars, or other compounds such as methyl and acetyl groups. These modifications produce 8 core structures known to date. Different cells have different enzymes that can add further sugars, known as glycosyltransferases, and structures therefore change from cell to cell. Common sugars added include galactose, N-acetylglucosamine, fucose and sialic acid. These sugars can also be modified by the addition of sulfates or acetyl groups. Biosynthesis GalNAc is added onto a serine or threonine residue from a precursor molecule, through the activity of a GalNAc transferase enzyme. This precursor is necessary so that the sugar can be transported to where it will be added to the protein. The specific residue onto which GalNAc will be attached is not defined, because there are numerous enzymes that can add the sugar and each one will favour different residues. However, there are often proline (Pro) residues near the threonine or serine. Once this initial sugar has been added, other glycosyltransferases can catalyse the addition of additional sugars. Two of the most common structures formed are Core 1 and Core 2. Core 1 is formed by the addition of a galactose sugar onto the initial GalNAc. Core 2 consists of a Core 1 structure with an additional N-acetylglucosamine (GlcNAc) sugar. A poly-N-acetyllactosamine structure can be formed by the alternating addition of GlcNAc and galactose sugars onto the GalNAc sugar. Terminal sugars on O-glycans are important in recognition by lectins and play a key role in the immune system. Addition of fucose sugars by fucosyltransferases forms Lewis epitopes and the scaffold for blood group determinants. Addition of a fucose alone creates the H-antigen, present in people with blood type O. By adding a galactose onto this structure, the B-antigen of blood group B is created. Alternatively, adding a GalNAc sugar will create the A-antigen for blood group A. 
Functions O-GalNAc sugars are important in a variety of processes, including leukocyte circulation during an immune response, fertilisation, and protection against invading microbes. O-GalNAc sugars are common on membrane glycoproteins, where they help increase rigidity of the region close to the membrane so that the protein extends away from the surface. For example, the low-density lipoprotein (LDL) receptor is projected from the cell surface by a region rigidified by O-glycans. In order for leukocytes of the immune system to move into infected cells, they have to interact with these cells through receptors. Leukocytes express ligands on their cell surface to allow this interaction to occur. P-selectin glycoprotein ligand-1 (PSGL-1) is such a ligand, and contains numerous O-glycans that are necessary for its function. O-glycans near the membrane maintain the elongated structure and a terminal sLex epitope is necessary for interactions with the receptor. Mucins are a group of heavily O-glycosylated proteins that line the gastrointestinal and respiratory tracts to protect these regions from infection. Mucins are negatively charged, which allows them to interact with water and prevent it from evaporating. This is important in their protective function as it lubricates the tracts so bacteria cannot bind and infect the body. Changes in mucins are important in numerous diseases, including cancer and inflammatory bowel disease. Absence of O-glycans on mucin proteins changes their 3D shape dramatically and often prevents correct function. O-N-acetylglucosamine (O-GlcNAc) Addition of N-acetylglucosamine (O-GlcNAc) to serine and threonine residues usually occurs on cytoplasmic and nuclear proteins that remain in the cell, compared to O-GalNAc modifications which usually occur on proteins that will be secreted. O-GlcNAc modifications were only recently discovered, but the number of proteins with known O-GlcNAc modifications is increasing rapidly. It is the first example of glycosylation that does not occur on secretory proteins. O-GlcNAcylation differs from other O-glycosylation processes because there are usually no sugars added onto the core structure and because the sugar can be attached to or removed from a protein several times. This addition and removal occurs in cycles and is performed by two very specific enzymes. O-GlcNAc is added by O-GlcNAc transferase (OGT) and removed by O-GlcNAcase (OGA). Because there are only two enzymes that affect this specific modification, they are very tightly regulated and depend on many other factors. Because O-GlcNAc can be added and removed, it is known as a dynamic modification and has many similarities to phosphorylation. O-GlcNAcylation and phosphorylation can occur on the same threonine and serine residues, suggesting a complex relationship between these modifications that can affect many functions of the cell. The modification affects processes like the cell's response to cellular stress, the cell cycle, protein stability and protein turnover. It may be implicated in neurodegenerative diseases like Parkinson's and late-onset Alzheimer's and has been found to play a role in diabetes. Additionally, O-GlcNAcylation can enhance the Warburg Effect, which is defined as the change that occurs in the metabolism of cancer cells to favour their growth.
Because both O-GlcNAcylation and phosphorylation can affect specific residues and have important functions in regulating signalling pathways, both processes provide interesting targets for cancer therapy. O-Mannose (O-Man) O-mannosylation involves the transfer of a mannose from a dolichol-P-mannose donor molecule onto the serine or threonine residue of a protein. Most other O-glycosylation processes use a sugar nucleotide as a donor molecule. A further difference from other O-glycosylations is that the process is initiated in the endoplasmic reticulum of the cell, rather than the Golgi apparatus. However, further addition of sugars occurs in the Golgi. Until recently, it was believed that the process was restricted to fungi; however, it occurs in all domains of life: eukaryotes, bacteria and archaea. The best-characterised O-mannosylated human protein is α-dystroglycan. O-Man sugars separate two domains of the protein, required to connect the extracellular and intracellular regions to anchor the cell in position. Ribitol, xylose and glucuronic acid can be added to this structure in a complex modification that forms a long sugar chain. This is required to stabilise the interaction between α-dystroglycan and the extracellular basement membrane. Without these modifications, the glycoprotein cannot anchor the cell, which leads to congenital muscular dystrophy (CMD), characterised by severe brain malformations. O-Galactose (O-Gal) O-galactose is commonly found on lysine residues in collagen, which often have a hydroxyl group added to form hydroxylysine. Because of this addition of an oxygen, hydroxylysine can then be modified by O-glycosylation. Addition of a galactose to the hydroxyl group is initiated in the endoplasmic reticulum, but occurs predominantly in the Golgi apparatus and only on hydroxylysine residues in a specific sequence. While this O-galactosylation is necessary for correct function in all collagens, it is especially common in collagen types IV and V. O-Fucose (O-Fuc) Addition of fucose sugars to serine and threonine residues is an unusual form of O-glycosylation that occurs in the endoplasmic reticulum and is catalysed by two fucosyltransferases. These were discovered in Plasmodium falciparum and Toxoplasma gondii. Several different enzymes catalyse the elongation of the core fucose, meaning that different sugars can be added to the initial fucose on the protein. Along with O-glucosylation, O-fucosylation is mainly found on epidermal growth factor (EGF) domains of proteins. O-fucosylation on EGF domains occurs between the second and third conserved cysteine residues in the protein sequence. Once the core O-fucose has been added, it is often elongated by addition of GlcNAc, galactose and sialic acid. Notch is an important protein in development, with several EGF domains that are O-fucosylated. Changes in the elaboration of the core fucose determine what interactions the protein can form, and therefore which genes will be transcribed during development. O-fucosylation might also play a role in protein breakdown in the liver. O-Glucose (O-Glc) Similarly to O-fucosylation, O-glucosylation is an unusual O-linked modification, as it occurs in the endoplasmic reticulum, is catalysed by O-glucosyltransferases, and also requires a defined sequence in order to be added to the protein.
O-glucose is often attached to serine residues between the first and second conserved cysteine residues of EGF domains, for example in clotting factors VII and IX. O-glucosylation also appears to be necessary for the proper folding of EGF domains in the Notch protein. Proteoglycans Proteoglycans consist of a protein with one or more sugar side chains, known as glycosaminoglycans (GAGs), attached to the oxygen of serine and threonine residues. GAGs consist of long chains of repeating sugar units. Proteoglycans are usually found on the cell surface and in the extracellular matrix (ECM), and are important for the strength and flexibility of cartilage and tendons. Absence of proteoglycans is associated with heart and respiratory failure, defects in skeletal development and increased tumor metastasis. Different types of proteoglycans exist, depending on the sugar that is linked to the oxygen atom of the residue in the protein. For example, the GAG heparan sulphate is attached to a protein serine residue through a xylose sugar. The structure is extended with two galactose sugars added onto the xylose, followed by repeating disaccharide units of glucuronic acid (GlcA) and GlcNAc. This process is unusual and requires specific xylosyltransferases. Keratan sulphate attaches to a serine or threonine residue through GalNAc, and is extended with several repeating N-acetyllactosamine (galactose and GlcNAc) sugar units. Type II keratan sulphate is especially common in cartilage. Lipids Galactose or glucose sugars can be attached to a hydroxyl group of ceramide lipids in a different form of O-glycosylation, as it does not occur on proteins. This forms glycosphingolipids, which are important for the localisation of receptors in membranes. Incorrect breakdown of these lipids leads to a group of diseases known as sphingolipidoses, which are often characterised by neurodegeneration and developmental disabilities. Because both galactose and glucose sugars can be added to the ceramide lipid, there are two groups of glycosphingolipids. Galactosphingolipids are generally very simple in structure and the core galactose is not usually modified. Glucosphingolipids, however, are often modified and can become considerably more complex. Biosynthesis of galacto- and glucosphingolipids occurs differently. Glucose is added onto ceramide from its precursor in the endoplasmic reticulum, before further modifications occur in the Golgi apparatus. Galactose, on the other hand, is added to ceramide already in the Golgi apparatus, where the galactosphingolipid formed is often sulfated by addition of sulfate groups. Glycogenin One of the first and, so far, only examples of O-glycosylation on tyrosine, rather than on serine or threonine residues, is the addition of glucose to a tyrosine residue in glycogenin. Glycogenin is a glycosyltransferase that initiates the conversion of glucose to glycogen, present in muscle and liver cells. Clinical significance All forms of O-glycosylation are abundant throughout the body and play important roles in many cellular functions. Lewis epitopes are important in determining blood groups, and allow the generation of an immune response when foreign tissue is detected. Understanding them is important in organ transplantation. Hinge regions of immunoglobulins contain highly O-glycosylated regions between individual domains to maintain their structure, allow interactions with foreign antigens and protect the region from proteolytic cleavage. Alzheimer's may be affected by O-glycosylation.
Tau, the protein that accumulates to cause neurodegeneration in Alzheimer's, contains O-GlcNAc modifications which may be implicated in disease progression. Changes in O-glycosylation are extremely common in cancer. O-glycan structures, and especially the terminal Lewis epitopes, are important in allowing tumor cells to invade new tissues during metastasis. Understanding these changes in O-glycosylation of cancer cells can lead to new diagnostic approaches and therapeutic opportunities. See also Glycosylation N-linked glycosylation References External links GlycoEP: In silico Platform for Prediction of N-, O- and C-Glycosites in Eukaryotic Protein Sequences Post-translational modification
O-linked glycosylation
[ "Chemistry" ]
3,470
[ "Post-translational modification", "Gene expression", "Biochemical reactions" ]
29,589,020
https://en.wikipedia.org/wiki/Ernst%20Fasan
Ernst Leo Albin Fasan (12 August 1926 – 20 July 2021) was an Austrian lawyer and a recognized authority in space law, including metalaw. Biography Fasan was born in Vienna. He grew up in Neunkirchen and attended school in Wiener Neustadt. In 1950 he earned his Doctor of Law at the University of Vienna. Fasan published numerous papers on problems of space law, including metalaw and the scientific Search for Extraterrestrial Intelligence (SETI). In 1970, Fasan published what remains the seminal book on metalaw, Relations with Alien Intelligences: The Scientific Basis of Metalaw. Fasan was a practicing attorney when in 1958 he helped to establish the Permanent Committee on Space Law of the International Astronautical Federation. Two years later his friend, colleague and fellow space law pioneer Andrew G. Haley invited him to join the committee's successor, the International Institute of Space Law. Fasan was elected to the IISL board of directors in 1962 and also subsequently served as an officer of the organization. In 1963 the IISL awarded Fasan the Andrew G. Haley Gold Medal. In June 2008, as an honorary director, Fasan was among those representing the IISL when it was granted observer status before the United Nations Committee on the Peaceful Uses of Outer Space (COPUOS). Fasan remained active in the SETI field as a member of the SETI Permanent Study Group of the International Academy of Astronautics. Fasan died in July 2021 at the age of 94. Further reading Patricia M. Sterns, Leslie I. Tennen: Private Law, Public Law, Metalaw and Public Policy in Space - A Liber Amicorum in Honor of Ernst Fasan. Springer, Cham 2016, . References External links B.P. Besser, Austria's Ascent into Space: A short historical account, Proceedings of the Concluding Workshop of the Extended ESA History Project, 13-13 April 2005 A Short Biography of Dr. Ernst Fasan, by Adam Chase Korbitz Metalaw and SETI 1926 births 2021 deaths 20th-century Austrian lawyers Lawyers from Vienna Space law
Ernst Fasan
[ "Astronomy" ]
429
[ "Space law", "Outer space" ]
30,995,376
https://en.wikipedia.org/wiki/Photoelectrochemical%20reduction%20of%20carbon%20dioxide
Photoelectrochemical reduction of carbon dioxide, also known as photoelectrolysis of carbon dioxide, is a chemical process whereby carbon dioxide is reduced to carbon monoxide or hydrocarbons by the energy of incident light. This process requires catalysts, most of which are semiconducting materials. The feasibility of this chemical reaction was first theorised by Giacomo Luigi Ciamician, an Italian photochemist. Already in 1912 he stated that "[b]y using suitable catalyzers, it should be possible to transform the mixture of water and carbon dioxide into oxygen and methane, or to cause other endo-energetic processes." Furthermore, the reduced species may prove to be a valuable feedstock for other processes. If the incident light utilized is sunlight, then this process also potentially represents an energy route which combines renewable energy with CO2 reduction. Thermodynamics Thermodynamic potentials for the reduction of CO2 to various products, versus NHE at pH = 7, are given in the following table (values as commonly cited in the literature): CO2 + e− → CO2●−, E° = −1.90 V; CO2 + 2H+ + 2e− → HCOOH, E° = −0.61 V; CO2 + 2H+ + 2e− → CO + H2O, E° = −0.53 V; CO2 + 4H+ + 4e− → HCHO + H2O, E° = −0.48 V; CO2 + 6H+ + 6e− → CH3OH + H2O, E° = −0.38 V; CO2 + 8H+ + 8e− → CH4 + 2H2O, E° = −0.24 V. Single-electron reduction of CO2 to the CO2●− radical occurs at E° = −1.90 V versus NHE at pH = 7 in an aqueous solution at 25 °C under 1 atm gas pressure. The highly negative, thermodynamically unfavorable potential of this single-electron reduction reflects the large reorganization energy between the linear molecule and the bent radical anion. Proton-coupled multi-electron steps for CO2 reduction are generally more favorable than single-electron reductions, as thermodynamically more stable molecules are produced. Kinetics Thermodynamically, proton-coupled multi-electron reduction of CO2 is easier than single-electron reduction. Kinetically, however, managing multiple proton-coupled, multi-electron processes is a huge challenge. This leads to a high overpotential for electrochemical heterogeneous reduction of CO2 to hydrocarbons and alcohols. Further heterogeneous reduction of the singly reduced CO2●− radical anion is also difficult, because of the repulsive interaction between the negatively biased electrode and the negatively charged anion. Figure 2 shows that, in the case of a p-type semiconductor/liquid junction, photogenerated electrons are available at the semiconductor/liquid interface under illumination. The reduction of redox species happens at a less negative potential on an illuminated p-type semiconductor compared to a metal electrode, due to the band bending at the semiconductor/liquid interface. Figure 3 shows that, thermodynamically, some of the proton-coupled multi-electron CO2 reductions lie within the band gaps of these semiconductors. This makes it feasible to photo-reduce CO2 on p-type semiconductors. Various p-type semiconductors have been successfully employed for CO2 photoreduction, including p-GaP, p-CdTe, p-Si, p-GaAs, p-InP, and p-SiC. Kinetically, however, these reactions are extremely slow on the given semiconductor surfaces; this leads to significant overpotentials for CO2 reduction on these semiconductor surfaces. Apart from the high overpotential, these systems have a few advantages, including sustainability (nothing is consumed in this system apart from light energy), direct conversion of solar energy to chemical energy, utilization of a renewable energy resource for an energy-intensive process, and stability of the process (semiconductors are highly stable under illumination). A different approach for photoreduction of CO2 involves molecular catalysts, photosensitizers and sacrificial electron donors.
In this process, sacrificial electron donors are consumed during the reaction and photosensitizers degrade under long exposure to illumination. Solvent effect The photoreduction of CO2 on p-type semiconductor photoelectrodes has been achieved in both aqueous and non-aqueous media. The main difference between aqueous and non-aqueous media is the solubility of CO2. The solubility of CO2 in aqueous media at 1 atm of CO2 is approximately 35 mM, whereas the solubility of CO2 in methanol is around 210 mM and in acetonitrile is around 210 mM. Aqueous media Photoreduction of CO2 to formic acid was demonstrated on a p-GaP photocathode in aqueous media. Apart from several other reports of CO2 photoreduction on p-GaP, other p-type semiconductors such as p-GaAs, p-InP, p-CdTe, and p+/p-Si have been successfully used for photoreduction of CO2. The lowest potential for CO2 photoreduction was observed on p-GaP. This may be due to the high photovoltage expected from the higher-band-gap p-GaP (2.2 eV) photocathode. Apart from formic acid, other products observed for CO2 photoreduction are formaldehyde, methanol and carbon monoxide. On p-GaP, p-GaAs and p+/p-Si photocathodes, the main product is formic acid with small amounts of formaldehyde and methanol. However, for p-InP and p-CdTe photocathodes, both carbon monoxide and formic acid are observed in similar quantities. The mechanism proposed by Hori, based on CO2 reduction on metal electrodes, predicts formation of both formic acid (in the case of no adsorption of the singly reduced CO2●− radical anion to the surface) and carbon monoxide (in the case of adsorption of the singly reduced CO2●− radical anion to the surface) in aqueous media. This same mechanism can be invoked to explain the formation of mainly formic acid on p-GaP, p-GaAs and p+/p-Si photocathodes, owing to the lack of adsorption of the singly reduced CO2●− radical anion to the surface. In the case of p-InP and p-CdTe photocathodes, partial adsorption of the CO2●− radical anion leads to formation of both carbon monoxide and formic acid. A low catalytic current density for CO2 photoreduction and competitive hydrogen generation are two major drawbacks of this system. Non-aqueous media The maximum catalytic current density for CO2 reduction that can be achieved in aqueous media is only 10 mA cm−2, based on the solubility of CO2 and diffusion limitations. The integrated maximum photocurrents under Air Mass 1.5 illumination, in the conventional Shockley–Queisser limit for solar energy conversion, for p-Si (1.12 eV), p-InP (1.3 eV), p-GaAs (1.4 eV), and p-GaP (2.3 eV) are 44.0 mA cm−2, 37.0 mA cm−2, 32.5 mA cm−2 and 9.0 mA cm−2, respectively. Therefore, non-aqueous media such as DMF, acetonitrile and methanol have been explored as solvents for CO2 electrochemical reduction. In addition, methanol has been used industrially as a physical absorber of CO2 in the Rectisol method. As in aqueous media, p-Si, p-InP, p-GaAs, p-GaP and p-CdTe have been explored for CO2 photoelectrochemical reduction. Among these, p-GaP has the lowest overpotential, whereas p-CdTe has a moderate overpotential but a high catalytic current density in a DMF with 5% water mixture. The main product of CO2 reduction in non-aqueous media is carbon monoxide. Competitive hydrogen generation is minimized in non-aqueous media. The proposed mechanism for CO2 reduction to CO in non-aqueous media involves single-electron reduction of CO2 to the CO2●− radical anion and adsorption of the radical anion to the surface, followed by a disproportionation reaction between unreduced CO2 and the CO2●− radical anion to form CO32− and CO.
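The 10 mA cm−2 ceiling quoted above for aqueous media can be illustrated with a simple diffusion-limited current estimate. This is an order-of-magnitude sketch; the diffusion coefficient and boundary-layer thickness used here are typical assumed values, not figures taken from the studies described in this article: j(lim) = nFDC/δ ≈ (2 × 96485 C mol−1 × 1.9 × 10−5 cm2 s−1 × 3.5 × 10−5 mol cm−3) / (10−2 cm) ≈ 13 mA cm−2, for a two-electron reduction (n = 2) at the aqueous CO2 saturation concentration of about 35 mM, assuming a CO2 diffusion coefficient D ≈ 1.9 × 10−5 cm2 s−1 and a stagnant boundary layer δ of about 100 μm. Stirring changes the exact figure, but the estimate stays on the order of 10 mA cm−2, well below the Shockley–Queisser photocurrent limits of the narrower-band-gap semiconductors listed above.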
See also Artificial photosynthesis Electrochemical reduction of carbon dioxide Photochemical reduction of carbon dioxide Photoelectrolysis of water Photoelectrochemistry References Carbon dioxide Photoelectrochemistry Chemical processes
Photoelectrochemical reduction of carbon dioxide
[ "Chemistry" ]
1,710
[ "Photoelectrochemistry", "Chemical processes", "Electrochemistry", "nan", "Chemical process engineering", "Greenhouse gases", "Carbon dioxide" ]
30,996,073
https://en.wikipedia.org/wiki/Libfixmath
libfixmath is a platform-independent fixed-point math library aimed at developers wanting to perform fast non-integer math on platforms lacking an FPU (or with a low-performance one). It offers developers an interface similar to the standard math.h functions for use on Q16.16 fixed-point numbers. libfixmath has no external dependencies other than stdint.h and a compiler which supports 64-bit integer arithmetic (such as GCC). Conditional compilation options exist to remove the requirement for a 64-bit capable compiler, as many compilers for microcontrollers and DSPs do not support 64-bit arithmetic. History libfixmath was developed by Ben Brewer and first released publicly as part of the Dingoo SDK. It has since been used to implement a software 3D graphics library called FGL. Q16.16 functions Other functions Performance For the most intensive function (atan2), benchmarks gave the following results: Note: These results were calculated using fixtest with caching optimizations turned off. Licensing libfixmath is released under the MIT License, a permissive free software licence, and is free software. See also Binary scaling Fixed-point arithmetic Floating-point arithmetic Q (number format) References External links Project Page Group Page/Mailing List Numerical software C (programming language) libraries Free computer libraries Free software programmed in C
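A minimal sketch of how Q16.16 arithmetic works may clarify what such a library provides. The names below mirror libfixmath's public fix16_* naming style, but this is an illustrative reimplementation under stated assumptions, not the library's actual source:

#include <stdint.h>

typedef int32_t fix16_t;                 /* Q16.16: 16 integer bits, 16 fractional bits */
#define FIX16_ONE ((fix16_t)0x00010000)  /* the value 1.0 in Q16.16 */

static inline fix16_t fix16_from_int(int a)     { return (fix16_t)(a * 65536); }
static inline float   fix16_to_float(fix16_t a) { return (float)a / 65536.0f; }

/* Multiplication needs a 64-bit intermediate: the raw product of two
   Q16.16 values is in Q32.32, so shift right by 16 to return to Q16.16
   (an arithmetic right shift is assumed for negative values). This is
   the step that requires 64-bit integer arithmetic support. */
static inline fix16_t fix16_mul(fix16_t a, fix16_t b)
{
    int64_t product = (int64_t)a * (int64_t)b;
    return (fix16_t)(product >> 16);
}

/* Division pre-scales the dividend by 2^16 before dividing, so the
   quotient comes out in Q16.16; the caller must ensure b != 0. */
static inline fix16_t fix16_div(fix16_t a, fix16_t b)
{
    return (fix16_t)(((int64_t)a * 65536) / b);
}

For example, fix16_mul(fix16_from_int(3), FIX16_ONE / 2) evaluates to 0x00018000, i.e. 1.5. The real library additionally provides overflow detection, rounding options and transcendental functions such as atan2, along with the conditional-compilation paths mentioned above for compilers without 64-bit arithmetic.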
Libfixmath
[ "Mathematics" ]
289
[ "Numerical software", "Mathematical software" ]
30,997,240
https://en.wikipedia.org/wiki/Ending%20lamination%20theorem
In hyperbolic geometry, the ending lamination theorem, originally conjectured by William Thurston as the eleventh problem of his twenty-four questions, states that hyperbolic 3-manifolds with finitely generated fundamental groups are determined by their topology together with certain "end invariants", which are geodesic laminations on some surfaces in the boundary of the manifold. The ending lamination theorem is a generalization of the Mostow rigidity theorem to hyperbolic manifolds of infinite volume. When the manifold is compact or of finite volume, the Mostow rigidity theorem states that the fundamental group determines the manifold. When the volume is infinite the fundamental group is not enough to determine the manifold: one also needs to know the hyperbolic structure on the surfaces at the "ends" of the manifold, and also the ending laminations on these surfaces. Minsky, and Brock, Canary and Minsky, proved the ending lamination conjecture for Kleinian surface groups. In view of the Tameness theorem this implies the ending lamination conjecture for all finitely generated Kleinian groups, from which the general case of the ending lamination theorem follows. Ending laminations Ending laminations were introduced by Thurston. Suppose that a hyperbolic 3-manifold has a geometrically tame end of the form S×[0,1) for some compact surface S without boundary, so that S can be thought of as the "points at infinity" of the end. The ending lamination of this end is (roughly) a lamination on the surface S, in other words a closed subset of S that is written as the disjoint union of geodesics of S. It is characterized by the following property. Suppose that there is a sequence of closed geodesics on S whose lifts tend to infinity in the end. Then the limit of these geodesics is the ending lamination. See also Kleinian groups References Hyperbolic geometry 3-manifolds Kleinian groups
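Stated schematically (an informal restatement that suppresses the precise hypotheses, added here for orientation): if \(N_1 = \mathbb{H}^3/\Gamma_1\) and \(N_2 = \mathbb{H}^3/\Gamma_2\) are hyperbolic 3-manifolds with finitely generated fundamental groups, and there is a homeomorphism \(N_1 \to N_2\) preserving the end invariants, then that homeomorphism is properly isotopic to an isometry; in particular, \(N_1\) and \(N_2\) are isometric.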
Ending lamination theorem
[ "Mathematics" ]
382
[ "Theorems in differential geometry", "Conjectures that have been proved", "Theorems in geometry", "Mathematical problems", "Mathematical theorems" ]
30,997,525
https://en.wikipedia.org/wiki/Pulvermacher%27s%20chain
The Pulvermacher chain, or, in full as it was sold, the Pulvermacher hydro-electric chain, was a type of voltaic battery sold in the second half of the 19th century for medical applications. Its chief market was amongst the numerous quack practitioners who were taking advantage of the popularity of the relatively new treatment of electrotherapy, or "electrification" as it was then known. Its unique selling point was its construction of numerous linked cells, rendering it mechanically flexible. A variant intended to be worn wrapped on parts of the body for long periods was known as Pulvermacher's galvanic chain or electric belt. The Pulvermacher Company attracted a great deal of antagonism from the medical community due to their use of the names of well-known physicians in their advertising without permission. The nature of their business, in selling to charlatans and promoting quack practices, also made them unpopular with the medical community. Despite this, the Pulvermacher chain was widely reported as a useful source of electricity for medical and scientific purposes, even amongst the most vocal critics of the Pulvermacher Company. Construction Electrically, the machine worked like a voltaic pile, but was constructed completely differently. The electrodes were copper for the cathode and zinc for the anode, with the electrolyte consisting of vinegar or some other weak acid, or a salt solution. Each cell consisted of a wooden dowel with a bifilar winding of copper and zinc wires. The dowels were helically grooved like a screw thread to locate the wires precisely in position. This enabled the copper and zinc wires to be placed very close to each other without coming into electrical contact. Insulated wires could not be used, as this would interfere with the operation of the electrolyte. Copper wires were inserted into the ends of the dowels, to which the copper and zinc windings were soldered. These end wires were either attached to, or formed into, hooks and eyes for attaching to other cells. This arrangement is depicted in figure 2. These attachments provided the electrical connections as well as the mechanical linkages. Each cell was connected to the next, with the copper winding of one being connected to the zinc winding of the next, and so on. The cells could be connected end-to-end, or, for a more compact assembly, side-by-side, in the manner of links in a chain. The voltage delivered by the assembly was controlled by the number of links thus incorporated and could become quite high, even though the current available was no more than from a single cell (to increase the current, the size of the cells must be increased). The shock delivered by such chains was described as "strong" for one chain of 120 links, and as "sharp" for another of 50 links. Prior to use, the chain was soaked in vinegar so that the electrolyte was absorbed into the wooden dowels. The wood of which the dowels were made was chosen to be a very porous type so that the amount of electrolyte absorbed was maximised. The chain would continue to produce a voltage until the dowels dried out; the chain would then have to be resoaked. Typically, the chain would be charged by slowly drawing it through a bowl of vinegar, as shown in figure 4. A special link could be included in the chain which incorporated an interrupter circuit. The purpose of the interrupter is to rapidly connect and disconnect the circuit so that the normally steady current of the battery is turned into a rapidly varying current.
The usual practice in the use of medical electrical batteries was to feed the output of the interrupter to an induction coil in order to increase the voltage applied to the patient by transformer action. In Pulvermacher's patent, however, there is no mention of using induction coils. The Pulvermacher battery could produce large voltages merely by adding more links to the chain. However, the interrupter still had an effect, in that an interrupted current produces a stronger sensation of electric shock in the patient than a steady current. A novel feature of Pulvermacher's interrupter was that it was operated by the action of a vibrating spring kept in motion by the movements of the patient, without the need for any external input. Interrupters of the time typically had to be hand-cranked by the physician, although there were already some in existence using electro-mechanical automatic interrupters. Later versions of the Pulvermacher chain used clockwork-driven interrupters whose rate of interruption could be adjusted, so that the rate of shock to the patient could be controlled. Such a clockwork interrupter is fitted to the chain shown in figure 1. It is wound up by turning the handle at the left end. By 1869 a variant of this chain had appeared. In this, the wooden dowels were dispensed with; instead, a hollow tube of zinc or magnesium was used. The zinc tube itself formed the anode of the cell, and over this was wound the copper wire cathode, or, in yet another version, rings of copper plates. The zinc tube and copper wire were kept apart by stitches of thread. Magnesium was not commonly used by battery manufacturers of the time due to its very high price (unlike today) compared to zinc. However, a cell made with magnesium in place of zinc produces around twice the voltage. More importantly for Pulvermacher, the cell would still output some voltage if the electrolyte was replaced with plain water. Pulvermacher marketed a type of chain designed to be worn wrapped around the limb being treated, which was claimed to operate with body sweat acting as the electrolyte, with no need to charge it with electrolyte from an external source. Pulvermacher also produced a smaller "pocket version" of the chain which had fewer links than the full 120-cell version. Pulvermacher Isaac Lewis Pulvermacher was a physicist and inventor originally concerned with the electric telegraph. He first published details of his chain in August 1850 in German, and in the winter of that same year came to Britain to demonstrate the machine to notable physicians. He visited London and Edinburgh on this trip. He gives his residence as Breslau, Kingdom of Prussia, in his 1853 US patent. Prior to this, however, he had arrived in Britain from Vienna, and all the British sources of the time describe him as "of Vienna". Medical opinion At first, there was a very positive reaction to Pulvermacher. Early in 1851 Pulvermacher gave Golding Bird, a well-known London physician with an interest in electrotherapy, a sample of the machine with which to experiment. Bird was impressed enough with it that he later gave a representative of the Pulvermacher Company a testimonial as a letter of introduction to physicians in Edinburgh. Bird thought that the battery would make a useful source of portable electricity and could be used for treating patients with some forms of paralysis in their homes. Contemporary equipment was not very portable, and, in the case of friction machines, required skilled operators to keep it going.
By October 1851 Bird felt that he had tested the device sufficiently to give it a glowing article in The Lancet. But even at this early stage there were signs of disquiet. Even as he wrote the favourable report in The Lancet, Bird felt the need to level criticism at the Pulvermacher Company's London agent, one C. Meinig, for promoting the device as a "universal panacea" for almost any imaginable complaint in the company's advertisements. Bird was a tireless opponent of quack practitioners, and was particularly quick to criticise medically unqualified electrical treatment, as he felt this was a reason that professional acceptance of his own work in electrotherapy was being held back. The quack practitioner market was the very sector that the Pulvermacher Company's unrestrained claims were aimed at. Nevertheless, Bird was gracious enough to specifically exclude Pulvermacher himself from responsibility for these "injudiciously puffed" claims. By April 1853 the situation had become very acrimonious. Meinig had been using extracts from the testimonial provided by Bird without permission in order to bolster the Company's largely medically unsupported quack advertising claims. Bird threatened a legal injunction, but Meinig refused to desist and tried to imply that Bird was benefitting from the publicity. A letter-writing campaign by one Dr. McIntyre against the Pulvermacher advertisements led to an exchange of letters in the Association Medical Journal. Bird made plain that he had only ever recommended the chain as a convenient source of electricity and did not support any of the claimed curative powers, most especially those that were supposed to produce instant results (a typical course of electrotherapy at the time could last several months). He criticised some of the chains being sold as delivering "too feeble" a current to be of any medical use, and pointed out that the proposed procedure of wrapping the device around an affected limb would make it useless, since a conductive path through the skin across each cell would prevent a useful voltage being developed at the terminals (Pulvermacher even suggests in his patent that contact with the body generates enough electricity to be effective even without electrolyte). This resulted in the Journal removing the Pulvermacher advertisements from its pages. The Association Medical Journal was quickly followed by the Medical Times, and with growing pressure on The Lancet to do the same, this largely ended professional medical support for the device, at least for the time being. Despite this inauspicious start with the medical profession, the Pulvermacher chain continued to be described in scientific and medical journals and books as a useful tool throughout the late 1850s and 1860s, even being mentioned in the proceedings of the Royal Society. Even Bird, at the height of his dispute with the Pulvermacher company, found himself able to say "the battery of Pulvermacher is an ingenious and useful source of electricity..." Although banned from much of the medical press, the Pulvermacher Company did not restrain its advertising claims or its use of notable names. The College of Dentists investigated its possible use as an anaesthetic during tooth extraction, but found no benefit, with the device frequently adding to the pain. In 1869, the Pulvermacher Company again found itself the subject of discussion in the medical press when it was involved in legal proceedings.
This time the company was itself the victim of quacks when its product was pirated with poor-quality imitations, and this was the cause of the court case. The Medical Times was prompted by this to examine the efficacy of the Pulvermacher chain, ending a long period during which the paper had ignored it as a worthless quack instrument. The result was a very positive review of the chain's function, and the reviewer particularly praised the workmanship. Competition and decline Pulvermacher patented the chain battery in the US in 1853. This was soon followed by the wearable chain battery belt, or electric belt. Electric belts became enormously popular in the US, far more so than in Europe. This led to the company headquarters being moved to Cincinnati by the 1880s as the Pulvermacher Galvanic Company, but still calling itself Pulvermacher's of London for the prestige of a European connection. Early models had to be soaked in vinegar before use, as in England, but later models that worked purely by galvanic action with body sweat were introduced. Since the device was being sold essentially as a quack cure, it was only necessary to generate enough electricity that the wearer could feel it, no matter how slightly, and know that it was working. Electric belts were made for every conceivable part of the human anatomy: limbs, abdomen, chest, neck – sometimes all worn at the same time. Pulvermacher even had a model designed to attach to the male genitals in a special sac, which was claimed to cure impotence and erectile dysfunction. Pulvermacher promoted a theory that loss of "male vigour" in later life was a consequence of masturbation in early life and that a limited supply of semen, which provided the vigour, would run out prematurely if wasted. Pulvermacher's device was meant to address this shortcoming. Competition was very intense for this lucrative market and the claimed benefits became ever more extravagant. Amongst Pulvermacher's many competitors in the US were the German Electric Belt Company (actually New York based), Dr Crystal's, Dr. Horn's, Addison's, Edson's, Edison's, Owen's and Heidelberg's. Edison's was founded by Thomas Edison Junior, whose father was the famous Thomas Edison. Owen's was originally New York based but expanded across the country until it was put out of business due to fraud. In Europe too, there were competitors. The Medical Battery Company of England made a popular belt. They attempted (unsuccessfully) to sue the Electrical Review when that paper accused them of quackery in 1892. The Iona Company, an Oregon-based company founded by Henry Gaylord Wilshire, was still selling belts in 1926 and making large profits: $36,000 ($ inflation adjusted) net from 2,445 belts in five months. By the end of the 1920s the electric belt's popularity had severely declined (but not the public's appetite for other quack electric cures), and the scientific market had long since moved on to better electrical generation technology than chain batteries. Popular culture The Pulvermacher chain, especially in the form of one being worn on the body, was very familiar in the late 19th and early 20th centuries and would not have needed to be explained to an audience. For instance, there are references to it in the novel Madame Bovary, when the character Homais, wearing a number of Pulvermacher chains, is described as "more bandaged than a Scythian". References Bibliography Coley, N. G. "The collateral sciences in the work of Golding Bird (1814-1854)", Medical History, iss.4, vol.13, October 1969. Hemat, R. A.
S. Water, Urotext, 2009. Lardner, Dionysius Electricity, Magnetism, and Acoustics, London: Spottiswoode & Co., 1856. Meyer, Moritz Electricity in its Relations to Practical Medicine, New York: D. Appleton and Co., 1869. Peña, Carolyn Thomas de la The Body Electric: How Strange Machines Built the Modern American, New York and London: New York University Press, 2005. Powell, George Denniston The Practice of Medical Electricity, Dublin: Fannin & Co., 1869. Pulvermacher, Isaac Lewis "Improvement in voltaic batteries and apparatus for medical and other purposes", , issued 1 February 1853. Schlesinger, Henry The Battery: How Portable Power Sparked a Technological Revolution, Washington: Smithsonian Books, 2010. Further reading Pulvermacher, J. L. J.L. Pulvermacher's patent portable hydro-electric voltaic chain batteries, New York: Printed by C. Dinsmore and Company, 1853. External links "Pulvermacher's chain", Scientific & Medical Antiques, photograph of a late model. "Pulvermacher's chain belts", The Turn of the Century Electrotherapy Museum, photographs of the pocket chain battery. Electric battery Electrochemistry Electrotherapy History of medicine
Pulvermacher's chain
[ "Chemistry" ]
3,176
[ "Electrochemistry" ]
30,999,091
https://en.wikipedia.org/wiki/Terrestrial%20ecosystem
Terrestrial ecosystems are ecosystems that are found on land. Examples include tundra, taiga, temperate deciduous forest, tropical rain forest, grassland, and desert. Terrestrial ecosystems differ from aquatic ecosystems by the predominant presence of soil rather than water at the surface and by the extension of plants above this soil/water surface in terrestrial ecosystems. There is a wide range of water availability among terrestrial ecosystems (including water scarcity in some cases), whereas water is seldom a limiting factor to organisms in aquatic ecosystems. Because water buffers temperature fluctuations, terrestrial ecosystems usually experience greater diurnal and seasonal temperature fluctuations than do aquatic ecosystems in similar climates. Terrestrial ecosystems are of particular importance in meeting Sustainable Development Goal 15, which targets the conservation, restoration and sustainable use of terrestrial ecosystems. Organisms and processes Organisms in terrestrial ecosystems have adaptations that allow them to obtain water when the entire body is no longer bathed in that fluid, means of transporting the water from limited sites of acquisition to the rest of the body, and means of preventing the evaporation of water from body surfaces. They also have traits that provide body support in the atmosphere, a much less buoyant medium than water, and other traits that render them capable of withstanding the extremes of temperature, wind, and humidity that characterize terrestrial ecosystems. Finally, the organisms in terrestrial ecosystems have evolved many methods of transporting gametes in environments where fluid flow is much less effective as a transport medium. Common types of terrestrial plants The four main groupings of terrestrial plants are the bryophytes, pteridophytes, gymnosperms, and angiosperms; these groups have existed for a very long time and account for much of the plant diversity of terrestrial ecosystems. Size and plants Terrestrial ecosystems occupy 55,660,000 mi2 (144,150,000 km2), or 28.26% of Earth's surface. Major plant taxa in terrestrial ecosystems are members of the division Magnoliophyta (flowering plants), of which there are about 275,000 species, and the division Pinophyta (conifers), of which there are about 500 species. Members of the division Bryophyta (mosses and liverworts), of which there are about 24,000 species, are also important in some terrestrial ecosystems. Major animal taxa in terrestrial ecosystems include the classes Insecta (insects) with about 900,000 species, Aves (birds) with 8,500 species, and Mammalia (mammals) with approximately 4,100 species. See also Aquatic-terrestrial subsidies Colonization of land - history of terrestrial life Soil ecology References Ecosystems
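As a rough consistency check on the coverage figure quoted above (the total surface area of Earth, about 510,100,000 km2, is a standard figure not stated in the article itself): 144,150,000 / 510,100,000 ≈ 0.2826, i.e. about 28.26%, in agreement with the quoted share.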
Terrestrial ecosystem
[ "Biology" ]
525
[ "Symbiosis", "Ecosystems" ]
30,999,216
https://en.wikipedia.org/wiki/Protein%20moonlighting
Protein moonlighting is a phenomenon by which a protein can perform more than one function. It is an excellent example of gene sharing. Ancestral moonlighting proteins originally possessed a single function but, through evolution, acquired additional functions. Many proteins that moonlight are enzymes; others are receptors, ion channels or chaperones. The most common primary function of moonlighting proteins is enzymatic catalysis, but these enzymes have acquired secondary non-enzymatic roles. Some examples of functions of moonlighting proteins secondary to catalysis include signal transduction, transcriptional regulation, apoptosis, motility, and structural. Protein moonlighting occurs widely in nature. Protein moonlighting through gene sharing differs from the use of a single gene to generate different proteins by alternative RNA splicing, DNA rearrangement, or post-translational processing. It is also different from the multifunctionality of the protein, in which the protein has multiple domains, each serving a different function. Protein moonlighting by gene sharing means that a gene may acquire and maintain a second function without gene duplication and without loss of the primary function. Such genes are under two or more entirely different selective constraints. Various techniques have been used to reveal moonlighting functions in proteins. The detection of a protein in unexpected locations within cells, cell types, or tissues may suggest that a protein has a moonlighting function. Furthermore, the sequence or structure homology of a protein may be used to infer both primary functions as well as secondary moonlighting functions of a protein. The most well-studied examples of gene sharing are crystallins. These proteins, when expressed at low levels in many tissues function as enzymes, but when expressed at high levels in eye tissue, become densely packed and thus form lenses. While the recognition of gene sharing is relatively recent—the term was coined in 1988, after crystallins in chickens and ducks were found to be identical to separately identified enzymes—recent studies have found many examples throughout the living world. Joram Piatigorsky has suggested that many or all proteins exhibit gene sharing to some extent, and that gene sharing is a key aspect of molecular evolution. The genes encoding crystallins must maintain sequences for catalytic function and transparency maintenance function. Inappropriate moonlighting is a contributing factor in some genetic diseases, and moonlighting provides a possible mechanism by which bacteria may become resistant to antibiotics. Discovery The first observation of a moonlighting protein was made in the late 1980s by Joram Piatigorsky and Graeme Wistow during their research on crystallin enzymes. Piatigorsky determined that lens crystallin conservation and variance are due to other moonlighting functions outside of the lens. Originally Piatigorsky called these proteins "gene sharing" proteins, but the colloquial description moonlighting was subsequently applied to proteins by Constance Jeffery in 1999 to draw a similarity between multitasking proteins and people who work two jobs. The phrase "gene sharing" is ambiguous since it is also used to describe horizontal gene transfer, hence the phrase "protein moonlighting" has become the preferred description for proteins with more than one function. 
Evolution It is believed that moonlighting proteins came about by means of evolution through which uni-functional proteins gained the ability to perform multiple functions. With alterations, much of the protein's unused space can provide new functions. Many moonlighting proteins are the result of the gene fusion of two single function genes. Alternatively a single gene can acquire a second function since the active site of the encoded protein typically is small compared to the overall size of the protein leaving considerable room to accommodate a second functional site. In yet a third alternative, the same active site can acquire a second function through mutations of the active site. The development of moonlighting proteins may be evolutionarily favorable to the organism since a single protein can do the job of multiple proteins conserving amino acids and energy required to synthesize these proteins. However, there is no universally agreed upon theory that explains why proteins with multiple roles evolved. While using one protein to perform multiple roles seems advantageous because it keeps the genome small, we can conclude that this is probably not the reason for moonlighting because of the large amount of noncoding DNA. Functions Many proteins catalyze a chemical reaction. Other proteins fulfill structural, transport, or signaling roles. Furthermore, numerous proteins have the ability to aggregate into supramolecular assemblies. For example, a ribosome is made up of 90 proteins and RNA. A number of the currently known moonlighting proteins are evolutionarily derived from highly conserved enzymes, also called ancient enzymes. These enzymes are frequently speculated to have evolved moonlighting functions. Since highly conserved proteins are present in many different organisms, this increases the chance that they would develop secondary moonlighting functions. A high fraction of enzymes involved in glycolysis, an ancient universal metabolic pathway, exhibit moonlighting behavior. Furthermore, it has been suggested that as many as 7 out of 10 proteins in glycolysis and 7 out of 8 enzymes of the tricarboxylic acid cycle exhibit moonlighting behavior. An example of a moonlighting enzyme is pyruvate carboxylase. This enzyme catalyzes the carboxylation of pyruvate into oxaloacetate, thereby replenishing the tricarboxylic acid cycle. Surprisingly, in yeast species such as H. polymorpha and P. pastoris, pyruvate carboylase is also essential for the proper targeting and assembly of the peroxisomal protein alcohol oxidase (AO). AO, the first enzyme of methanol metabolism, is a homo-octameric flavoenzyme. In wild type cells, this enzyme is present as enzymatically active AO octamers in the peroxisomal matrix. However, in cells lacking pyruvate carboxylase, AO monomers accumulate in the cytosol, indicating that pyruvate carboxylase has a second fully unrelated function in assembly and import. The function in AO import/assembly is fully independent of the enzyme activity of pyruvate carboxylase, because amino acid substitutions can be introduced that fully inactivate the enzyme activity of pyruvate carboxylase, without affecting its function in AO assembly and import. Conversely, mutations are known that block the function of this enzyme in the import and assembly of AO, but have no effect on the enzymatic activity of the protein. The E. coli anti-oxidant thioredoxin protein is another example of a moonlighting protein. Upon infection with the bacteriophage T7, E. 
coli thioredoxin forms a complex with T7 DNA polymerase, which results in enhanced T7 DNA replication, a crucial step for successful T7 infection. Thioredoxin binds to a loop in T7 DNA polymerase to bind more strongly to the DNA. The anti-oxidant function of thioredoxin is fully autonomous and fully independent of T7 DNA replication, in which the protein most likely fulfills the functional role. ADT2 and ADT5 are other examples of moonlighting proteins found in plants. Both of these proteins have roles in phenylalanine biosynthesis like all other ADTs. However ADT2, together with FtsZ is necessary in chloroplast division and ADT5 is transported by stromules into the nucleus. Examples Mechanisms In many cases, the functionality of a protein not only depends on its structure, but also its location. For example, a single protein may have one function when found in the cytoplasm of a cell, a different function when interacting with a membrane, and yet a third function if excreted from the cell. This property of moonlighting proteins is known as "differential localization". For example, in higher temperatures DegP (HtrA) will function as a protease by the directed degradation of proteins and in lower temperatures as a chaperone by assisting the non-covalent folding or unfolding and the assembly or disassembly of other macromolecular structures. Furthermore, moonlighting proteins may exhibit different behaviors not only as a result of its location within a cell, but also the type of cell that the protein is expressed in. Multifunctionality could also be as a consequence of differential post translational modifications (PTMs). In the case of the glycolytic enzyme glyceraldehyde-3-phosphate dehydrogenase (GAPDH) alterations in the PTMs have been shown to be associated with higher order multi functionality. Other methods through which proteins may moonlight are by changing their oligomeric state, altering concentrations of the protein's ligand or substrate, use of alternative binding sites, or finally through phosphorylation. An example of a protein that displays different function in different oligomeric states is pyruvate kinase which exhibits metabolic activity as a tetramer and thyroid hormone–binding activity as a monomer. Changes in the concentrations of ligands or substrates may cause a switch in a protein's function. For example, in the presence of high iron concentrations, aconitase functions as an enzyme while at low iron concentration, aconitase functions as an iron-responsive element-binding protein (IREBP) to increase iron uptake. Proteins may also perform separate functions through the use of alternative binding sites that perform different tasks. An example of this is ceruloplasmin, a protein that functions as an oxidase in copper metabolism and moonlights as a copper-independent glutathione peroxidase. Lastly, phosphorylation may sometimes cause a switch in the function of a moonlighting protein. For example, phosphorylation of phosphoglucose isomerase (PGI) at Ser-185 by protein kinase CK2 causes it to stop functioning as an enzyme, while retaining its function as an autocrine motility factor. Hence when a mutation takes place that inactivates a function of a moonlighting proteins, the other function(s) are not necessarily affected. The crystal structures of several moonlighting proteins, such as I-AniI homing endonuclease / maturase and the PutA proline dehydrogenase / transcription factor, have been determined. 
An analysis of these crystal structures has demonstrated that moonlighting proteins can either perform both functions at the same time, or through conformational changes, alternate between two states, each of which is able to perform a separate function. For example, the protein DegP plays a role in proteolysis with higher temperatures and is involved in refolding functions at lower temperatures. Lastly, these crystal structures have shown that the second function may negatively affect the first function in some moonlighting proteins. As seen in ƞ-crystallin, the second function of a protein can alter the structure, decreasing the flexibility, which in turn can impair enzymatic activity somewhat. Identification methods Moonlighting proteins have usually been identified by chance because there is no clear procedure to identify secondary moonlighting functions. Despite such difficulties, the number of moonlighting proteins that have been discovered is rapidly increasing. Furthermore, moonlighting proteins appear to be abundant in all kingdoms of life. Various methods have been employed to determine a protein's function including secondary moonlighting functions. For example, the tissue, cellular, or subcellular distribution of a protein may provide hints as to the function. Real-time PCR is used to quantify mRNA and hence infer the presence or absence of a particular protein which is encoded by the mRNA within different cell types. Alternatively immunohistochemistry or mass spectrometry can be used to directly detect the presence of proteins and determine in which subcellular locations, cell types, and tissues a particular protein is expressed. Mass spectrometry may be used to detect proteins based on their mass-to-charge ratio. Because of alternative splicing and posttranslational modification, identification of proteins based on the mass of the parent ion alone is very difficult. However tandem mass spectrometry in which each of the parent peaks is in turn fragmented can be used to unambiguously identify proteins. Hence tandem mass spectrometry is one of the tools used in proteomics to identify the presence of proteins in different cell types or subcellular locations. While the presence of a moonlighting protein in an unexpected location may complicate routine analyses, at the same time, the detection of a protein in unexpected multiprotein complexes or locations suggests that protein may have a moonlighting function. Furthermore, mass spectrometry may be used to determine if a protein has high expression levels that do not correlate to the enzyme's measured metabolic activity. These expression levels may signify that the protein is performing a different function than previously known. The structure of a protein can also help determine its functions. Protein structure in turn may be elucidated with various techniques including X-ray crystallography or NMR. Dual-polarization interferometry may be used to measure changes in protein structure which may also give hints to the protein's function. Finally, application of systems biology approaches such as interactomics give clues to a proteins function based on what it interacts with. Higher order multifunctionality In the case of the glycolytic enzyme glyceraldehyde-3-phosphate dehydrogenase (GAPDH), in addition to the large number of alternate functions it has also been observed that it can be involved in the same function by multiple means (multifunctionality within multifunctionality). 
For example, in its role in the maintenance of cellular iron homeostasis, GAPDH can function to import or extrude iron from cells. Moreover, in the case of its iron import activities it can traffic holo-transferrin, as well as the related molecule lactoferrin, into cells by multiple pathways. Crystallins In the case of crystallins, the genes must maintain sequences for catalytic function and transparency maintenance function. The abundant lens crystallins have been generally viewed as static proteins serving a strictly structural role in transparency and cataract. However, recent studies have shown that the lens crystallins are much more diverse than previously recognized and that many are related or identical to metabolic enzymes and stress proteins found in numerous tissues. Unlike other proteins performing highly specialized tasks, such as globin or rhodopsin, the crystallins are very diverse and show numerous species differences. Essentially all vertebrate lenses contain representatives of the α and β/γ crystallins, the "ubiquitous crystallins", which are themselves heterogeneous, and only a few species or selected taxonomic groups use entirely different proteins as lens crystallins. This paradox of crystallins being highly conserved in sequence while extremely diverse in number and distribution shows that many crystallins have vital functions outside the lens and cornea, and this multi-functionality of the crystallins is achieved by moonlighting. Gene regulation Crystallin recruitment may occur by changes in gene regulation that lead to high lens expression. One such example is glutathione S-transferase/S11-crystallin, which was specialized for lens expression by change in gene regulation and gene duplication. The fact that similar transcription factors, such as Pax-6 and retinoic acid receptors, regulate different crystallin genes suggests that lens-specific expression has played a crucial role in recruiting multifunctional proteins as crystallins. Crystallin recruitment has occurred both with and without gene duplication, and tandem gene duplication has taken place among some of the crystallins with one of the duplicates specializing for lens expression. Ubiquitous α-crystallins and bird δ-crystallins are two examples. Alpha crystallins The α-crystallins, which contributed to the discovery of crystallins as borrowed proteins, have continually supported the theory of gene sharing, and helped delineate the mechanisms used for gene sharing as well. There are two α-crystallin genes (αA and αB), which are about 55% identical in amino acid sequence. Expression studies in non-lens cells showed that the αB-crystallin, other than being a functional lens protein, is a functional small heat shock protein. αB-crystallin is induced by heat and other physiological stresses, and it can protect the cells from elevated temperatures and hypertonic stress. αB-crystallin is also overexpressed in many pathologies, including neurodegenerative diseases, fibroblasts of patients with Werner syndrome showing premature senescence, and growth abnormalities. In addition to being overexpressed under abnormal conditions, αB-crystallin is constitutively expressed in heart, skeletal muscle, kidney, lung and many other tissues. In contrast to αB-crystallin, except for low-level expression in the thymus, spleen and retina, αA-crystallin is highly specialized for expression in the lens and is not stress-inducible.
However, like αB-crystallin, it can also function as molecular chaperone and protect against thermal stress. Beta/gamma-crystallins β/γ-crystallins are different from α-crystallins in that they are a large multigene family. Other proteins like bacterial spore coat, a slime mold cyst protein, and epidermis differentiation-specific protein, contain the same Greek key motifs and are placed under β/γ crystallin superfamily. This relationship supports the idea that β/γ- crystallins have been recruited by a gene-sharing mechanism. However, except for few reports, non-refractive function of the β/γ-crystallin is yet to be found. Corneal crystallins Similar to lens, cornea is a transparent, avascular tissue derived from the ectoderm that is responsible for focusing light onto the retina. However, unlike lens, cornea depends on the air-cell interface and its curvature for refraction. Early immunology studies have shown that BCP 54 comprises 20–40% of the total soluble protein in bovine cornea. Subsequent studies have indicated that BCP 54 is ALDH3, a tumor and xenobiotic-inducible cytosolic enzyme, found in human, rat, and other mammals. Non refractive roles of crystallins in lens and cornea While it is evident that gene sharing resulted in many of lens crystallins being multifunctional proteins, it is still uncertain to what extent the crystallins use their non-refractive properties in the lens, or on what basis they were selected. The α-crystallins provide a convincing case for a lens crystallin using its non-refractive ability within the lens to prevent protein aggregation under a variety of environmental stresses and to protect against enzyme inactivation by post-translational modifications such as glycation. The α-crystallins may also play a functional role in the stability and remodeling of the cytoskeleton during fiber cell differentiation in the lens. In cornea, ALDH3 is also suggested to be responsible for absorbing UV-B light. Co-evolution of lens and cornea through gene sharing Based on the similarities between lens and cornea, such as abundant water-soluble enzymes, and being derived from ectoderm, the lens and cornea are thought to be co-evolved as a "refraction unit." Gene sharing would maximize light transmission and refraction to the retina by this refraction unit. Studies have shown that many water-soluble enzymes/proteins expressed by cornea are identical to taxon-specific lens crystallins, such as ALDH1A1/ η-crystallin, α-enolase/τ-crystallin, and lactic dehydrogenase/ -crystallin. Also, the anuran corneal epithelium, which can transdifferentiate to regenerate the lens, abundantly expresses ubiquitous lens crystallins, α, β and γ, in addition to the taxon-specific crystallin α-enolase/τ-crystallin. Overall, the similarity in expression of these proteins in the cornea and lens, both in abundance and taxon-specificity, supports the idea of co-evolution of lens and cornea through gene sharing. Relationship to similar concepts Gene sharing is related to, but distinct from, several concepts in genetics, evolution, and molecular biology. Gene sharing entails multiple effects from the same gene, but unlike pleiotropy, it necessarily involves separate functions at the molecular level. A gene could exhibit pleiotropy when single enzyme function affects multiple phenotypic traits; mutations of a shared gene could potentially affect only a single trait. 
Gene duplication followed by differential mutation is another phenomenon thought to be a key element in the evolution of protein function, but in gene sharing, there is no divergence of gene sequence when proteins take on new functions; the single polypeptide takes on new roles while retaining old ones. Alternative splicing can result in the production of multiple polypeptides (with multiple functions) from a single gene, but by definition, gene sharing involves multiple functions of a single polypeptide. Clinical significance The multiple roles of moonlighting proteins complicate the determination of phenotype from genotype, hampering the study of inherited metabolic disorders. The complex phenotypes of several disorders are suspected to be caused by the involvement of moonlighting proteins. The protein GAPDH has at least 11 documented functions, one of which includes apoptosis. Excessive apoptosis is involved in many neurodegenerative diseases, such as Huntington's, Alzheimer's, and Parkinson's, as well as in brain ischemia. In one case, GAPDH was found in the degenerated neurons of individuals who had Alzheimer's disease. Although there is insufficient evidence for definite conclusions, there are well documented examples of moonlighting proteins that play a role in disease. One such disease is tuberculosis. One moonlighting protein in M. tuberculosis has a function which counteracts the effects of antibiotics. Specifically, the bacterium gains antibiotic resistance against ciprofloxacin from overexpression of glutamate racemase in vivo. GAPDH localized to the surface of pathogenic mycobacteria has been shown to capture and traffic the mammalian iron carrier protein transferrin into cells, resulting in iron acquisition by the pathogen. See also Enzyme promiscuity Pseudoenzymes External links moonlightingproteins.org database References Evolutionary biology Molecular genetics Proteins
Protein moonlighting
[ "Chemistry", "Biology" ]
4,629
[ "Evolutionary biology", "Biomolecules by chemical classification", "Molecular genetics", "Molecular biology", "Proteins" ]
31,001,114
https://en.wikipedia.org/wiki/DNA%20nanoball%20sequencing
DNA nanoball sequencing is a high throughput sequencing technology that is used to determine the entire genomic sequence of an organism. The method uses rolling circle replication to amplify small fragments of genomic DNA into DNA nanoballs. Fluorescent nucleotides bind to complementary nucleotides and are then polymerized to anchor sequences bound to known sequences on the DNA template. The base order is determined via the fluorescence of the bound nucleotides. This DNA sequencing method allows large numbers of DNA nanoballs to be sequenced per run at lower reagent costs compared to other next generation sequencing platforms. However, a limitation of this method is that it generates only short sequences of DNA, which presents challenges to mapping its reads to a reference genome. After purchasing Complete Genomics, the Beijing Genomics Institute (BGI) refined DNA nanoball sequencing to sequence nucleotide samples on their own platform. Procedure DNA Nanoball Sequencing involves isolating DNA that is to be sequenced, shearing it into small 100 – 350 base pair (bp) fragments, ligating adapter sequences to the fragments, and circularizing the fragments. The circular fragments are copied by rolling circle replication resulting in many single-stranded copies of each fragment. The DNA copies concatenate head to tail in a long strand, and are compacted into a DNA nanoball. The nanoballs are then adsorbed onto a sequencing flow cell. The color of the fluorescence at each interrogated position is recorded through a high-resolution camera. Bioinformatic methods are used to analyze the fluorescence data and make a base call, and for mapping or quantifying the 50bp, 100bp, or 150bp single- or paired-end reads. DNA Isolation, fragmentation, and size capture Cells are lysed and DNA is extracted from the cell lysate. The high-molecular-weight DNA, often several megabase pairs long, is fragmented by physical or enzymatic methods to break the DNA double-strands at random intervals. Bioinformatic mapping of the sequencing reads is most efficient when the sample DNA contains a narrow length range. For small RNA sequencing, selection of the ideal fragment lengths for sequencing is performed by gel electrophoresis; for sequencing of larger fragments, DNA fragments are separated by bead-based size selection. Attaching adapter sequences Adapter DNA sequences must be attached to the unknown DNA fragment so that DNA segments with known sequences flank the unknown DNA. In the first round of adapter ligation, right (Ad153_right) and left (Ad153_left) adapters are attached to the right and left flanks of the fragmented DNA, and the DNA is amplified by PCR. A splint oligo then hybridizes to the ends of the fragments which are ligated to form a circle. An exonuclease is added to remove all remaining linear single-stranded and double-stranded DNA products. The result is a completed circular DNA template. Rolling circle replication Once a single-stranded circular DNA template containing sample DNA ligated to two unique adapter sequences has been generated, the full sequence is amplified into a long string of DNA. This is accomplished by rolling circle replication with the Phi 29 DNA polymerase which binds and replicates the DNA template. The newly synthesized strand is released from the circular template, resulting in a long single-stranded DNA comprising several head-to-tail copies of the circular template.
The resulting nanoparticle self-assembles into a tight ball of DNA approximately 300 nanometers (nm) across. Nanoballs remain separated from each other because they are negatively charged and naturally repel each other, reducing any tangling between different single stranded DNA lengths. DNA nanoball patterned array To obtain DNA sequence, the DNA nanoballs are attached to a patterned array flow cell. The flow cell is a silicon wafer coated with silicon dioxide, titanium, hexamethyldisilazane (HMDS), and a photoresist material. The DNA nanoballs are added to the flow cell and selectively bind to the positively-charged aminosilane in a highly ordered pattern, allowing a very high density of DNA nanoballs to be sequenced. Imaging After each DNA nucleotide incorporation step, the flow cell is imaged to determine which nucleotide base bound to the DNA nanoball. The fluorophore is excited with a laser emitting specific wavelengths of light. The emission of fluorescence from each DNA nanoball is captured on a high resolution CCD camera. The image is then processed to remove background noise and assess the intensity of each point. The color of each DNA nanoball corresponds to a base at the interrogated position and a computer records the base position information. Sequencing data format The data generated from the DNA nanoballs is formatted as standard FASTQ formatted files with contiguous bases (no gaps). These files can be used in any data analysis pipeline that is configured to read single-end or paired-end FASTQ files. For example, read 1 from a 100bp paired-end run: @CL100011513L1C001R013_126365/1 CTAGGCAACTATAGGTCTCAGTTAAGTCAAATAAAATTCACATCAAATTTTTACTCCCACCATCCCAACACTTTCCTGCCTGGCATATGCCGTGTCTGCC + FFFFFFFFFFFGFGFFFFFF;FFFFFFFGFGFGFFFFFF;FFFFGFGFGFFEFFFFFEDGFDFF@FCFGFGCFFFFFEFFEGDFDFFFFFGDAFFEFGFF Corresponding read 2: @CL100011513L1C001R013_126365/2 TGTCTACCATATTCTACATTCCACACTCGGTGAGGGAAGGTAGGCACATAAAGCAATGGCAGTACGGTGTAATACATGCTAATGTAGAGTAAGCACTCAG + 3E9E<ADEBB:D>E?FD<<@EFE>>ECEF5CE:B6E:CEE?6B>B+@??31/FD:0?@:E9<3FE2/A:/8>9CB&=E<7:-+>;29:7+/5D9)?5F/: Informatics Tips Reference Genome Alignment Default parameters for the popular aligners are sufficient. Read Names In the FASTQ file created by BGI/MGI sequencers using DNA nanoballs on a patterned array flowcell, the read names look like this: BGISEQ-500: CL100025298L1C002R050_244547 MGISEQ-2000: V100006430L1C001R018613883 Read names can be parsed to extract three variables describing the physical location of the read on the patterned array: (1) tile/region, (2) x coordinate, and (3) y coordinate. Note that, due to the order of these variables, these read names cannot be natively parsed by Picard MarkDuplicates in order to identify optical duplicates. However, as there are none on this platform, this poses no problem to Picard-based data analysis. Duplicates Because DNA nanoballs remain confined to their spots on the patterned array, there are no optical duplicates to contend with during bioinformatics analysis of sequencing reads. It is suggested to run Picard MarkDuplicates as follows: java -jar picard.jar MarkDuplicates I=input.bam O=marked_duplicates.bam M=marked_dup_metrics.txt READ_NAME_REGEX=null A test with Picard-friendly, reformatted read names demonstrates the absence of this class of duplicate read: the single read marked as an optical duplicate is most assuredly artefactual. In any case, the effect on the estimated library size is negligible.
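The read-name layout described above can also be split programmatically. The following minimal Python sketch is not part of the original article: it assumes a single-digit lane token after "L" and three-digit column and row tokens after "C" and "R", and it treats the trailing number as a within-tile index. These assumptions are based only on the two example names shown; which tokens correspond to the tile and to the x/y coordinates should be confirmed against the instrument documentation before relying on them.

import re

# Assumed layout: <flowcell>L<lane:1 digit>C<col:3 digits>R<row:3 digits>[_]<index>
READ_NAME_RE = re.compile(r"^(?P<flowcell>.+)L(?P<lane>\d)C(?P<col>\d{3})R(?P<row>\d{3})_?(?P<index>\d+)$")

def parse_read_name(name):
    """Split a BGISEQ/MGISEQ-style read name into its positional tokens."""
    match = READ_NAME_RE.match(name)
    if match is None:
        raise ValueError("unrecognised read name: " + name)
    fields = match.groupdict()
    # Keep the flowcell identifier as text and convert the positional tokens to integers.
    return {key: (value if key == "flowcell" else int(value)) for key, value in fields.items()}

print(parse_read_name("CL100025298L1C002R050_244547"))   # BGISEQ-500 example
print(parse_read_name("V100006430L1C001R018613883"))     # MGISEQ-2000 example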
Advantages DNA nanoball sequencing technology offers some advantages over other sequencing platforms. One advantage is the eradication of optical duplicates: DNA nanoballs remain in place on the patterned array and do not interfere with neighboring nanoballs. Other advantages of DNA nanoball sequencing include the use of the high-fidelity Phi 29 DNA polymerase to ensure accurate amplification of the circular template, several hundred copies of the circular template compacted into a small area resulting in an intense signal, and attachment of the fluorophore to the probe at a long distance from the ligation point, resulting in improved ligation. Disadvantages The main disadvantage of DNA nanoball sequencing is the short read length of the DNA sequences obtained with this method. Short reads, especially from regions rich in DNA repeats, may map to two or more regions of the reference genome. A second disadvantage of this method is that multiple rounds of PCR have to be used. This can introduce PCR bias and possibly amplify contaminants in the template construction phase. However, these disadvantages are common to all short-read sequencing platforms and are not specific to DNA nanoballs. Applications DNA nanoball sequencing has been used in recent studies. Lee et al. used this technology to find mutations that were present in a lung cancer and compared them to normal lung tissue. They were able to identify over 50,000 single nucleotide variants. Roach et al. used DNA nanoball sequencing to sequence the genomes of a family of four relatives and were able to identify SNPs that may be responsible for a Mendelian disorder, and were able to estimate the inter-generation mutation rate. The Institute for Systems Biology has used this technology to sequence 615 complete human genome samples as part of a survey studying neurodegenerative diseases, and the National Cancer Institute is using DNA nanoball sequencing to sequence 50 tumours and matched normal tissues from pediatric cancers. Significance Massively parallel next generation sequencing platforms like DNA nanoball sequencing may contribute to the diagnosis and treatment of many genetic diseases. The cost of sequencing an entire human genome has fallen from about one million dollars in 2008 to $4400 in 2010 with the DNA nanoball technology. By sequencing the entire genomes of patients with heritable diseases or cancer, mutations associated with these diseases have been identified, opening up strategies such as targeted therapeutics and genetic counseling for at-risk people. As the price of sequencing an entire human genome approaches the $1000 mark, genomic sequencing of every individual may become feasible as part of normal preventative medicine. References DNA sequencing Genomics techniques
DNA nanoball sequencing
[ "Chemistry", "Biology" ]
2,202
[ "Genetics techniques", "Genomics techniques", "Molecular biology techniques", "DNA sequencing" ]
31,001,609
https://en.wikipedia.org/wiki/Single%20molecule%20fluorescent%20sequencing
Single molecule fluorescent sequencing is one method of DNA sequencing. The core principle is the imaging of individual fluorophore molecules, each corresponding to one base. By working at the single-molecule level, amplification of DNA is not required, which avoids amplification bias. The method lends itself to parallelization by probing many sequences simultaneously, imaging all of them at the same time. The principle can be applied stepwise (e.g. the Helicos implementation), or in real time (as in the Pacific Biosciences implementation). References Molecular biology DNA Biotechnology DNA sequencing
Single molecule fluorescent sequencing
[ "Chemistry", "Biology" ]
120
[ "Biotechnology", "Molecular biology techniques", "DNA sequencing", "nan", "Molecular biology", "Biochemistry" ]
31,001,884
https://en.wikipedia.org/wiki/Transcription%20activator-like%20effector%20nuclease
Transcription activator-like effector nucleases (TALEN) are restriction enzymes that can be engineered to cut specific sequences of DNA. They are made by fusing a TAL effector DNA-binding domain to a DNA cleavage domain (a nuclease which cuts DNA strands). Transcription activator-like effectors (TALEs) can be engineered to bind to practically any desired DNA sequence, so when combined with a nuclease, DNA can be cut at specific locations. The restriction enzymes can be introduced into cells, for use in gene editing or for genome editing in situ, a technique known as genome editing with engineered nucleases. Alongside zinc finger nucleases and CRISPR/Cas9, TALEN is a prominent tool in the field of genome editing. TALE DNA-binding domain TAL effectors are proteins that are secreted by Xanthomonas bacteria via their type III secretion system when they infect plants. The DNA binding domain contains a repeated highly conserved 33–34 amino acid sequence with divergent 12th and 13th amino acids. These two positions, referred to as the Repeat Variable Diresidue (RVD), are highly variable and show a strong correlation with specific nucleotide recognition. This straightforward relationship between amino acid sequence and DNA recognition has allowed for the engineering of specific DNA-binding domains by selecting a combination of repeat segments containing the appropriate RVDs. Notably, slight changes in the RVD and the incorporation of "nonconventional" RVD sequences can improve targeting specificity. DNA cleavage domain The non-specific DNA cleavage domain from the end of the FokI endonuclease can be used to construct hybrid nucleases that are active in a yeast assay. These reagents are also active in plant cells and in animal cells. Initial TALEN studies used the wild-type FokI cleavage domain, but some subsequent TALEN studies also used FokI cleavage domain variants with mutations designed to improve cleavage specificity and cleavage activity. The FokI domain functions as a dimer, requiring two constructs with unique DNA binding domains for sites in the target genome with proper orientation and spacing. Both the number of amino acid residues between the TALE DNA binding domain and the FokI cleavage domain and the number of bases between the two individual TALEN binding sites appear to be important parameters for achieving high levels of activity. Engineering TALEN constructs The simple relationship between amino acid sequence and DNA recognition of the TALE binding domain allows for the efficient engineering of proteins. In this case, artificial gene synthesis is problematic because of improper annealing of the repetitive sequence found in the TALE binding domain. One solution to this is to use a publicly available software program (DNAWorks) to calculate oligonucleotides suitable for assembly in a two step PCR oligonucleotide assembly followed by whole gene amplification. A number of modular assembly schemes for generating engineered TALE constructs have also been reported. Both methods offer a systematic approach to engineering DNA binding domains that is conceptually similar to the modular assembly method for generating zinc finger DNA recognition domains. Transfection Once the TALEN constructs have been assembled, they are inserted into plasmids; the target cells are then transfected with the plasmids, and the gene products are expressed and enter the nucleus to access the genome. 
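As an illustration of the repeat-variable diresidue (RVD) recognition code described in the "TALE DNA-binding domain" section above, the short Python sketch below translates a target DNA sequence into a string of RVDs. It is not part of the original article: the mapping shown (NI for A, HD for C, NG for T, NN for G) is only the commonly cited canonical convention; real designs typically also require a T immediately 5' of the target site and may prefer alternative RVDs such as NH for G, so this is a schematic aid rather than a design tool.

# Canonical RVD-to-base convention (one common choice; alternatives exist).
RVD_FOR_BASE = {"A": "NI", "C": "HD", "G": "NN", "T": "NG"}

def rvd_array(target):
    """Return the list of RVDs needed to recognise a DNA target, read 5' to 3'."""
    return [RVD_FOR_BASE[base] for base in target.upper()]

# Hypothetical 15 bp target site (illustrative only).
print("-".join(rvd_array("TCCCTTTATCTCTGT")))   # NG-HD-HD-HD-NG-NG-NG-NI-NG-HD-NG-HD-NG-NN-NG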
Alternatively, TALEN constructs can be delivered to the cells as mRNAs, which removes the possibility of genomic integration of the TALEN-encoding construct. Using an mRNA vector can also dramatically increase the level of homology directed repair (HDR) and the success of introgression during gene editing. Genome editing Mechanisms TALEN can be used to edit genomes by inducing double-strand breaks (DSB), which cells respond to with repair mechanisms. Non-homologous end joining (NHEJ) directly ligates DNA from either side of a double-strand break where there is very little or no sequence overlap for annealing. This repair mechanism induces errors in the genome via indels (insertion or deletion), or chromosomal rearrangement; any such errors may render the gene products coded at that location non-functional. Because this activity can vary depending on the species, cell type, target gene, and nuclease used, it should be monitored when designing new systems. A simple heteroduplex cleavage assay can be run which detects any difference between two alleles amplified by PCR. Cleavage products can be visualized on simple agarose gels or slab gel systems. Alternatively, DNA can be introduced into a genome through NHEJ in the presence of exogenous double-stranded DNA fragments. Homology directed repair can also introduce foreign DNA at the DSB as the transfected double-stranded sequences are used as templates for the repair enzymes. Applications TALENs have been used to efficiently modify plant genomes, creating economically important food crops with favorable nutritional qualities. They have also been harnessed to develop tools for the production of biofuels. In addition, they have been used to engineer stably modified human embryonic stem cell and induced pluripotent stem cell (IPSCs) clones and human erythroid cell lines, to generate knockout C. elegans, knockout rats, knockout mice, and knockout zebrafish. Moreover, the method can be used to generate knockin organisms. Wu et al. obtained Sp110 knockin cattle using TALEN nickases to induce increased resistance to tuberculosis. This approach has also been used to generate knockin rats by TALEN mRNA microinjection in one-cell embryos. TALEN has also been utilized experimentally to correct the genetic errors that underlie disease. For example, it has been used in vitro to correct the genetic defects that cause disorders such as sickle cell disease, xeroderma pigmentosum, and epidermolysis bullosa. Recently, it was shown that TALENs can be used as tools to harness the immune system to fight cancers; TALEN-mediated targeting can generate T cells that are resistant to chemotherapeutic drugs and show anti-tumor activity. In theory, the genome-wide specificity of engineered TALEN fusions allows for correction of errors at individual genetic loci via homology-directed repair from a correct exogenous template. In reality, however, the in situ application of TALEN is currently limited by the lack of an efficient delivery mechanism, unknown immunogenic factors, and uncertainty in the specificity of TALEN binding. Another emerging application of TALEN is its ability to combine with other genome engineering tools, such as meganucleases. The DNA binding region of a TAL effector can be combined with the cleavage domain of a meganuclease to create a hybrid architecture combining the ease of engineering and highly specific DNA binding activity of a TAL effector with the low site frequency and specificity of a meganuclease.
In comparison to other genome editing techniques, TALEN falls in the middle in terms of difficulty and cost. Unlike ZFNs, TALEN recognizes single nucleotides. It is far more straightforward to engineer interactions between TALEN DNA binding domains and their target nucleotides than it is to create interactions between ZFNs and their target nucleotide triplets. On the other hand, CRISPR relies on ribonucleotide complex formation instead of protein/DNA recognition. gRNAs occasionally have limitations regarding feasibility due to a lack of PAM sites in the target sequence, and even though they can be cheaply produced, recent developments have led to a remarkable decrease in the cost of TALENs, so that they are in a similar price and time range to CRISPR-based genome editing. TAL effector nuclease precision The off-target activity of an active nuclease may lead to unwanted double-strand breaks and may consequently yield chromosomal rearrangements and/or cell death. Studies have been carried out to compare the relative nuclease-associated toxicity of available technologies. Based on these studies and the maximal theoretical distance between DNA binding and nuclease activity, TALEN constructs are believed to have the greatest precision of the currently available technologies. See also Genome editing with engineered nucleases Zinc finger nuclease Meganuclease CRISPR References External links E-TALEN.org A comprehensive tool for TALEN design PDB Molecule of the Month An entry in the Protein Database's monthly structural highlight Biological engineering DNA Genetic engineering Genome editing History of biotechnology Molecular biology Non-coding RNA Repetitive DNA sequences
Transcription activator-like effector nuclease
[ "Chemistry", "Engineering", "Biology" ]
1,751
[ "Genetics techniques", "Biological engineering", "History of biotechnology", "Genome editing", "Genetic engineering", "Molecular genetics", "Repetitive DNA sequences", "Molecular biology", "Biochemistry" ]
31,002,280
https://en.wikipedia.org/wiki/Electrochemical%20energy%20conversion
Electrochemical energy conversion is a field of energy technology concerned with electrochemical methods of energy conversion, including fuel cells and photoelectrochemical cells. This field of technology also includes electrical storage devices like batteries and supercapacitors. It is increasingly important in the context of automotive propulsion systems. More powerful, longer-running batteries have been created, allowing longer run times for electric vehicles. These systems also include the energy conversion fuel cells and photoelectrochemical cells mentioned above. See also Bioelectrochemical reactor Chemotronics Electrochemical cell Electrochemical engineering Electrochemical reduction of carbon dioxide Electrofuels Electrohydrogenesis Electromethanogenesis Enzymatic biofuel cell Photoelectrochemical cell Photoelectrochemical reduction of CO2 Notes External links International Journal of Energy Research MSAL NIST scientific journal article Georgia tech Electrochemistry Electrochemical engineering Energy engineering Energy conversion Biochemical engineering
Electrochemical energy conversion
[ "Chemistry", "Engineering", "Biology" ]
184
[ "Biological engineering", "Bioengineering stubs", "Chemical engineering", "Biotechnology stubs", "Electrochemical engineering", "Biochemical engineering", "Energy engineering", "Electrochemistry", "Electrochemistry stubs", "Biochemistry", "Electrical engineering", "Physical chemistry stubs" ]
31,002,344
https://en.wikipedia.org/wiki/Bioelectrochemical%20reactor
A Bioelectrochemical reactor is a type of bioreactor where bioelectrochemical processes are used to degrade/produce organic materials using microorganisms. This bioreactor has two compartments: the anode, where the oxidation reaction takes place, and the cathode, where the reduction occurs. At these sites, electrons are passed to and from microbes to power reduction of protons, breakdown of organic waste, or other desired processes. They are used in microbial electrosynthesis, environmental remediation, and electrochemical energy conversion. Examples of bioelectrochemical reactors include microbial electrolysis cells, microbial fuel cells, enzymatic biofuel cells, electrolysis cells, microbial electrosynthesis cells, and biobatteries. Principles Electron current is inherent to microbial metabolism. Microorganisms transfer electrons from an electron donor (lower potential species) to an electron acceptor (higher potential species). If the electron acceptor is an external ion or molecule, the process is called respiration. If the process is internal, electron transfer is called fermentation. Microorganisms attempt to maximize their energy gain by selecting the electron acceptor with the highest potential available. In nature, mainly minerals containing iron or manganese oxides are reduced. Often soluble electron acceptors are depleted in the microbial environment. They can also maximize their energy gain by selecting a good electron donor that can be easily metabolized. These processes are done by extracellular electron transfer (EET). The theoretical free energy change (ΔG) for microorganisms relates directly to the potential difference between the electron acceptor and the donor (in the ideal case ΔG = −nFΔE, where n is the number of electrons transferred, F is the Faraday constant, and ΔE is the potential difference). However, inefficiencies like internal resistance will decrease this free energy change. The advantage of these devices is their high selectivity in high speed processes limited by kinetic factors. The most commonly studied species are Shewanella oneidensis and Geobacter sulfurreducens. However, more species have been studied in recent years. On March 25, 2013, scientists at the University of East Anglia were able to transfer electrical charge by allowing bacteria to touch a metal or mineral surface. The research shows that it is possible to 'tether' bacteria directly to electrodes. History In 1911 M. Potter described how microbial conversions could create reducing power, and thus electric current. Twenty years later, Cohen (1931) investigated the capacity of bacteria to produce an electrical flow and he noted that the main limitation is the small capacity of current generation in microorganisms. The first microbial fuel cell (MFC) was not built until the 1960s, by Berk and Canfield (1964). Currently, the investigation of bioelectrochemical reactors is increasing. These devices have real applications in fields like water treatment, energy production and storage, resources production, recycling and recovery. Applications Water Treatment Bioelectrochemical reactors are finding an application in wastewater treatment settings. Current activated sludge processes are energy- and cost-inefficient due to sludge maintenance, aeration needs, and energy needs. By using a bioelectrochemical reactor that utilizes the concept of trickling filtering, these inefficiencies can be addressed.
While processing wastewater using this reactor, nitrification, denitrification, and organic matter removal all take place simultaneously in both aerobic and anaerobic conditions using multiple different microbes located on the anode of the system. Though the processing parameters of the reactor affect the overall composition of the microbial community, the genera Geobacter and Desulfuromonas are frequently found in these applications. In popular culture In Final Fantasy: The Spirits Within, soldiers use power backpacks based on bacteria. In Subnautica, the player can build a bioreactor that serves the same purpose as a bioelectrochemical reactor. See also Bioelectrochemistry Bioelectronics Electrochemical cell Electrochemical energy conversion Electrochemical engineering Electrochemical reduction of carbon dioxide Electrofuels Electrolytic cell Electromethanogenesis Galvanic cell References External links Bioelectrochemistry Electrochemical engineering Bioreactors
Bioelectrochemical reactor
[ "Chemistry", "Engineering", "Biology" ]
855
[ "Bioreactors", "Biological engineering", "Bioelectrochemistry", "Chemical reactors", "Chemical engineering", "Electrochemical engineering", "Biochemical engineering", "Microbiology equipment", "Electrochemistry", "Electrical engineering" ]
31,003,358
https://en.wikipedia.org/wiki/Traffic%20congestion%20reconstruction%20with%20Kerner%27s%20three-phase%20theory
Vehicular traffic can be either free or congested. Traffic occurs in time and space, i.e., it is a spatiotemporal process. However, usually traffic can be measured only at some road locations (for example, via road detectors, video cameras, probe vehicle data, or phone data). For efficient traffic control and other intelligent transportation systems, the reconstruction of traffic congestion is necessary at all other road locations at which traffic measurements are not available. Traffic congestion can be reconstructed in space and time (Fig. 1) based on Boris Kerner’s three-phase traffic theory with the use of the ASDA and FOTO models introduced by Kerner. Kerner's three-phase traffic theory and, respectively, the ASDA/FOTO models are based on some common spatiotemporal features of traffic congestion observed in measured traffic data. Common spatiotemporal empirical features of traffic congestion Definition Common spatiotemporal empirical features of traffic congestion are those spatiotemporal features of traffic congestion, which are qualitatively the same for different highways in different countries measured during years of traffic observations. In particular, common features of traffic congestion are independent of weather, road conditions and road infrastructure, vehicular technology, driver characteristics, time of day, etc. Kerner's definitions [S] and [J], respectively, for the synchronized flow and wide moving jam phases in congested traffic are examples of common spatiotemporal empirical features of traffic congestion. Propagation of wide moving jams through highway bottlenecks In empirical observations, traffic congestion occurs usually at a highway bottleneck as a result of traffic breakdown in an initially free flow at the bottleneck. A highway bottleneck can result from on- and off-ramps, road curves and gradients, road works, etc. In congested traffic (this is a synonym for traffic congestion), a phenomenon of the propagation of a moving traffic jam (moving jam for short) is often observed. A moving jam is a local region of low speed and great density that propagates upstream as a whole localized structure. The jam is limited spatially by two jam fronts. At the downstream jam front, vehicles accelerate to a higher speed downstream of the jam. At the upstream jam front, vehicles decelerate while approaching the jam. A wide moving jam is a moving jam that exhibits the characteristic jam feature [J], which is a common spatiotemporal empirical feature of traffic congestion. The jam feature [J] defines the wide moving jam traffic phase in congested traffic as follows. Definition [J] for wide moving jam A wide moving jam is a moving traffic jam, which exhibits the characteristic jam feature [J] to propagate through any bottlenecks while maintaining the mean velocity of the downstream jam front denoted by v_g. Kerner's jam feature [J] can be explained as follows. The motion of the downstream jam front results from acceleration of drivers from a standstill within the jam to traffic flow downstream of the jam. After a vehicle has begun to accelerate escaping from the jam, to maintain safe driving, the following vehicle begins to accelerate with a time delay. We denote the mean value of this time delay in vehicle acceleration at the downstream jam front by τ_del. Because the average distance between vehicles within the jam, including average vehicle length, equals 1/ρ_max (where ρ_max is the average vehicle density within the jam), the mean velocity of the downstream jam front is v_g = −1/(ρ_max τ_del) (1).
When traffic parameters (percentage of long vehicles, weather, driver characteristics, etc.) do not change over time, τ_del and ρ_max, and therefore v_g, are constant in time. This explains why the mean velocity of the downstream jam front v_g in (1) is a characteristic parameter that does not depend on the flow rates and densities upstream and downstream of the jam. Catch effect: pinning of downstream front of synchronized flow at bottleneck In contrast with the jam feature [J], the mean velocity of the downstream front of synchronized flow is not self-maintained during the front propagation. This is a common feature of synchronized flow, which is one of the two phases of traffic congestion. A particular case of this common feature of synchronized flow is that the downstream synchronized flow front is usually caught at a highway bottleneck. This pinning of the downstream front of synchronized flow at the bottleneck is called the catch effect. Note that at this downstream front of synchronized flow, vehicles accelerate from a lower speed within synchronized flow upstream of the front to a higher speed in free flow downstream of the front. Definition [S] for synchronized flow Synchronized flow is defined as congested traffic that does not exhibit the jam feature [J]; in particular, the downstream front of synchronized flow is often fixed at the bottleneck. Thus Kerner's definitions [J] and [S] for the wide moving jam and synchronized flow phases of his three-phase traffic theory are indeed associated with common empirical features of traffic congestion. Empirical example of wide moving jam and synchronized flow Vehicle speeds measured with road detectors (1 min averaged data) illustrate Kerner's definitions [J] and [S] (Fig. 2 (a, b)). There are two spatiotemporal patterns of congested traffic with low vehicle speeds in Fig. 2 (a). One pattern of congested traffic propagates upstream with almost constant mean velocity of the downstream pattern front through the freeway bottleneck. According to the definition [J] this pattern of congested traffic belongs to the "wide moving jam" traffic phase. In contrast, the downstream front of the other pattern of the congested traffic is fixed at the bottleneck. According to the definition [S] this pattern of congested traffic belongs to the "synchronized flow" traffic phase (Fig. 2 (a) and (b)). ASDA and FOTO models The FOTO (Forecasting of traffic objects) model reconstructs and tracks regions of synchronized flow in space and time. The ASDA (Automatische Staudynamikanalyse: Automatic Tracking of Moving Jams) model reconstructs and tracks wide moving jams. The ASDA/FOTO models are devoted to on-line applications without calibration of model parameters under different environment conditions, road infrastructure, percentage of long vehicles, etc. General features Firstly, the ASDA/FOTO models identify the synchronized flow and wide moving jam phases in measured data of congested traffic. One of the empirical features of the synchronized flow and wide moving jam phases used in the ASDA/FOTO models for traffic phase identification is as follows: Within a wide moving jam, both the speed and flow rate are very small (Fig. 2 (c-f)). In contrast, whereas the speed within the synchronized flow phase is considerably lower than in free flow (Fig. 2 (c, e)), the flow rate in synchronized flow can be as great as in free flow (Fig. 2 (d, f)).
Secondly, based on the abovementioned common features of wide moving jams and synchronized flow, the FOTO model tracks the downstream and upstream fronts of synchronized flow denoted by x_syn,down(t) and x_syn,up(t), where t is time (Fig. 3). The ASDA model tracks the downstream and upstream fronts of wide moving jams denoted by x_jam,down(t) and x_jam,up(t) (Fig. 3). This tracking is carried out between road locations at which the traffic phases have initially been identified in measured data, i.e., where synchronized flow and wide moving jams cannot be measured. In other words, the tracking of synchronized flow by the FOTO model and wide moving jams by the ASDA model is performed at road locations at which no traffic measurements are available, i.e., the ASDA/FOTO models forecast the front locations of the traffic phases in time. The ASDA/FOTO models enable us to predict the merging and/or the dissolution of one or more initially different synchronized flow regions and of one or more initially different wide moving jams that occur between measurement locations. ASDA/FOTO models for data measured by road detectors Cumulative flow approach for FOTO While the downstream front of synchronized flow at which vehicles accelerate to free flow is usually fixed at the bottleneck (see Fig. 2 (a, b)), the upstream front of synchronized flow at which vehicles moving initially in free flow must decelerate approaching synchronized flow can propagate upstream. In empirical (i.e., measured) traffic data, the velocity of the upstream front of synchronized flow usually depends considerably both on traffic variables within synchronized flow downstream of the front and within free flow just upstream of this front. A good correspondence with empirical data is achieved if the time-dependence of the location of the upstream synchronized flow front is calculated by the FOTO model with the use of a so-called cumulative flow approach: x_syn,up(t) = x_syn,up(t0) − (β/n) ∫_{t0}^{t} [q_up(t′) − q_down(t′)] dt′ (2), where q_up and q_down [vehicles/h] are respectively the flow rates upstream and downstream of the synchronized flow front, β is a model parameter [m/vehicles], and n is the number of road lanes. Two approaches for jam tracking with ASDA There are two main approaches for the tracking of wide moving jams with the ASDA model: The use of the Stokes-shock-wave formula. The use of a characteristic velocity of wide moving jams. The use of the Stokes-shock-wave formula in ASDA The current velocity v of a front of a wide moving jam is calculated through the use of the shock-wave formula derived by Stokes in 1848: v = (q_down − q_up)/(ρ_down − ρ_up) (3), where q_up and ρ_up are the flow rate and density upstream of the jam front whose velocity is to be found; q_down and ρ_down are the flow rate and density downstream of this jam front. In (3) no relationship, in particular no fundamental diagram, is assumed between the flow rates q_up, q_down and the vehicle densities ρ_up, ρ_down, which are found from measured data independently of each other. The use of a characteristic velocity of wide moving jams If measured data are not available for the tracking of the downstream jam front with the Stokes-shock-wave formula (3), the formula v = v_g (4) is used, in which v_g is the characteristic velocity of the downstream jam front associated with Kerner's jam feature [J] discussed above. This means that after the downstream front of a wide moving jam has been identified at a time instant t0, the location of the downstream front of the jam can be estimated with the formula x_jam,down(t) = x_jam,down(t0) + v_g (t − t0) (5). The characteristic jam velocity v_g is illustrated in Fig. 4. Two wide moving jams propagate upstream while maintaining the mean velocity of their downstream fronts. There are two jams following each other in this empirical example.
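The formulas above lend themselves to a very small numerical illustration. The following Python sketch is not part of the original text; the density, time delay, flow and location values are purely illustrative assumptions, not measured data.

# Illustrative sketch of formulas (1), (3) and (5); all numbers are assumptions.
RHO_MAX = 140.0        # assumed average density inside a wide moving jam [vehicles/km]
TAU_DEL = 1.8 / 3600.0 # assumed mean acceleration time delay at the downstream jam front [h]

# Formula (1): characteristic velocity of the downstream jam front [km/h].
v_g = -1.0 / (RHO_MAX * TAU_DEL)   # about -14 km/h for these assumed values

def stokes_front_velocity(q_up, rho_up, q_down, rho_down):
    """Formula (3): shock-wave velocity from flow rates [veh/h] and densities [veh/km]."""
    return (q_down - q_up) / (rho_down - rho_up)

def downstream_jam_front_location(x0, t0, t, v=v_g):
    """Formula (5): propagate the downstream jam front with the characteristic velocity [km]."""
    return x0 + v * (t - t0)

# Downstream jam front: jam interior upstream of the front (q = 0, rho = RHO_MAX),
# free flow downstream of it (q = 1800 veh/h, rho = 20 veh/km).
print(stokes_front_velocity(0.0, RHO_MAX, 1800.0, 20.0))        # -15.0 km/h
print(downstream_jam_front_location(x0=10.0, t0=0.0, t=0.5))    # front location after 30 min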
However, in contrast with the mean velocity of the downstream jam front, the mean velocity of the upstream jam front depends on the flow rate and density in traffic flow upstream of the jam. Therefore, in the general case the use of formula (5) can lead to a large error in the estimation of the mean velocity of the upstream jam front. In many data sets measured on German highways, approximately the same characteristic velocity $v_g$ of the downstream jam front has been found. However, although the mean velocity of the downstream jam front is independent of the flow rates and densities upstream and downstream of the jam, $v_g$ can depend considerably on traffic parameters like the percentage of long vehicles in traffic, weather, driver characteristics, etc. As a result, the mean velocity found in different data measured over years of observations varies within a limited range. On-line applications of ASDA/FOTO models in traffic control centres Reconstruction and tracking of spatiotemporal congested patterns with the ASDA/FOTO models is performed online permanently in the traffic control centre of the German federal state of Hessen for 1200 km of the freeway network. Since April 2004, measured data from nearly 2500 detectors have been automatically analyzed by ASDA/FOTO. The resulting spatiotemporal traffic patterns are illustrated in a space-time diagram showing congested pattern features, as in Fig. 5. An online system was also installed in 2007 for North Rhine-Westphalia freeways. The raw traffic data are transferred to WDR, the major public radio broadcasting station of North Rhine-Westphalia in Cologne, which offers traffic messages to the end customer (e.g., a radio listener or driver) via the broadcast channel RDS. The application covers a part of the whole freeway network with 1900 km of freeway and more than 1000 double-loop detectors. In addition, since 2009 the ASDA/FOTO models have been running online in the northern part of Bavaria. Average traffic flow characteristics and travel time In addition to the spatiotemporal reconstruction of traffic congestion (Figs. 1 and 5), the ASDA/FOTO models can provide average traffic flow characteristics within synchronized flow and wide moving jams. In turn, this permits the estimation of either the travel time on a road section or the travel time along any vehicle trajectory (see examples of trajectories 1–4 in Fig. 5). ASDA/FOTO models for data measured by probe vehicles Firstly, the ASDA and FOTO models identify transition points for phase transitions along the trajectory of a probe vehicle. Each of the transition points is associated with a front spatially separating two of the three different traffic phases (free flow (F), synchronized flow (S), wide moving jam (J)) from each other. After the transition points have been found, the ASDA/FOTO models reconstruct regions of synchronized flow and wide moving jams in space and time with the use of the empirical features of these traffic phases discussed above (see Figs. 2 and 4). See also Active Traffic Management Floating car data Fundamental diagram Intelligent transportation system Kerner's breakdown minimization principle Microscopic traffic flow model Three phase traffic theory Traffic bottleneck Traffic flow Traffic wave Traffic congestion Transportation forecasting Notes Bibliography Kerner B. S., Konhäuser P. (1994). Structure and parameters of clusters in traffic flow, Physical Review E, Vol. 50, 54 Kerner B. S., Rehborn H. (1996). Experimental features and characteristics of traffic jams. Physical Review E, Vol. 53, 1297 Kerner B. S., Rehborn H. (1996). Experimental properties of complexity in traffic flow.
Physical Review E, Vol. 53, R4257 Kerner B. S., Kirschfink H., Rehborn H. (1997) Automatische Stauverfolgung auf Autobahnen, Straßenverkehrstechnik, No. 9, pp 430–438 Kerner B. S., Rehborn H. (1998) Messungen des Verkehrsflusses: Charakteristische Eigenschaften von Staus auf Autobahnen, Internationales Verkehrswesen, 5/1998, pp 196–203 Kerner B. S., Rehborn H., Aleksić M., Haug A., Lange R. (2000) Verfolgung und Vorhersage von Verkehrsstörungen auf Autobahnen mit "ASDA" und "FOTO" im online-Betrieb in der Verkehrsrechnerzentrale Rüsselsheim, Straßenverkehrstechnik, No. 10, pp 521–527 Kerner B. S., Rehborn H., Aleksić M., Haug A. (2001) Methods for Tracing and Forecasting of Congested Traffic Patterns on Highways, Traffic Engineering and Control, 09/2001, pp 282–287 Kerner B. S., Rehborn H., Aleksić M., Haug A., Lange R. (2001) Online Automatic Tracing and Forecasting of Traffic Patterns with Models "ASDA" and "FOTO", Traffic Engineering and Control, 11/2001, pp 345–350 Kerner B. S., Rehborn H., Aleksić M., Haug A. (2004) Recognition and Tracing of Spatial-Temporal Congested Traffic Patterns on Freeways, Transportation Research C, 12, pp 369–400 Palmer J., Rehborn H. (2007) ASDA/FOTO based on Kerner's Three-Phase Traffic Theory in North Rhine-Westphalia (in German), Straßenverkehrstechnik, No. 8, pp 463–470 Palmer J., Rehborn H., Mbekeani L. (2008) Traffic Congestion Interpretation Based on Kerner's Three-Phase Traffic Theory in USA, In: Proceedings of the 15th World Congress on ITS, New York Palmer J., Rehborn H. (2009) Reconstruction of Congested Traffic Patterns Using Traffic State Detection in Autonomous Vehicles Based on Kerner's Three-Phase Traffic Theory, In: Proceedings of the 16th World Congress on ITS, Stockholm Rehborn H., Klenov S. L. (2009) Traffic Prediction of Congested Patterns, In: R. Meyers (Ed.): Encyclopedia of Complexity and Systems Science, Springer New York, 2009, pp 9500–9536 Kerner B. S., Rehborn H., Klenov S. L., Palmer J., Prinn M. (2009) Verfahren zur Verkehrszustandsbestimmung in einem Fahrzeug (Method for traffic state detection in a vehicle), German patent publication DE 10 2008 003 039 A1. Further reading B.S. Kerner, Introduction to Modern Traffic Flow Theory and Control: The Long Road to Three-Phase Traffic Theory, Springer, Berlin, New York 2009 B.S. Kerner, The Physics of Traffic, Springer, Berlin, New York 2004 Road transport Transportation engineering Mathematical physics Road traffic management
Traffic congestion reconstruction with Kerner's three-phase theory
[ "Physics", "Mathematics", "Engineering" ]
3,598
[ "Applied mathematics", "Theoretical physics", "Industrial engineering", "Transportation engineering", "Civil engineering", "Mathematical physics" ]
31,004,619
https://en.wikipedia.org/wiki/Reactor%20Experiment%20for%20Neutrino%20Oscillation
The Reactor Experiment for Neutrino Oscillation (RENO) is a short-baseline reactor neutrino oscillation experiment in South Korea. The experiment was designed to either measure or set a limit on the neutrino mixing matrix parameter θ13, a parameter responsible for oscillations of electron neutrinos into other neutrino flavours. RENO has two identical detectors, placed at distances of 294 m and 1383 m, that observe electron antineutrinos produced by six reactors at the Hanbit Nuclear Power Plant (formerly the Yeonggwang Nuclear Power Plant) in Korea. Each detector consists of 16 tons of gadolinium-doped liquid scintillator (LAB), surrounded by an additional 450 tons of buffer, veto, and shielding liquids. On 3 April 2012, with some corrections on 8 April, the RENO collaboration announced a 4.9σ observation of θ13 ≠ 0. This measurement confirmed a similar result announced by the Daya Bay Experiment three weeks before and is consistent with earlier, but less significant, results by T2K, MINOS and Double Chooz. RENO released updated results in December 2013, confirming θ13 ≠ 0 with a significance of 6.3σ. In 2014, RENO announced the observation of an unexpectedly large number of neutrinos with an energy of around 5 MeV. This has since been confirmed by the Daya Bay and Double Chooz experiments, and the cause remains an outstanding puzzle. Expansion plans, referred to as RENO-50, will add a third, medium-baseline detector at a distance of 47 km. This distance is better for observing neutrino oscillations, but requires a much larger detector due to the smaller neutrino flux. The location, near Dongshin University, has a 450 m high mountain (Mt. Guemseong), which will provide 900 m.w.e. shielding for the detector. If funded, this detector will contain about 18 kilotons of ultra-low-radioactivity liquid scintillator, surrounded by photomultiplier tubes. References Neutrino experiments Reactor neutrino experiments
Reactor Experiment for Neutrino Oscillation
[ "Physics" ]
422
[ "Particle physics stubs", "Particle physics" ]
33,567,447
https://en.wikipedia.org/wiki/Theoretical%20gravity
In geodesy and geophysics, theoretical gravity or normal gravity is an approximation of Earth's gravity, on or near its surface, by means of a mathematical model. The most common theoretical model is a rotating Earth ellipsoid of revolution (i.e., a spheroid). Other representations of gravity can be used in the study and analysis of other bodies, such as asteroids. Widely used representations of a gravity field in the context of geodesy include spherical harmonics, mascon models, and polyhedral gravity representations. Principles The type of gravity model used for the Earth depends upon the degree of fidelity required for a given problem. For many problems such as aircraft simulation, it may be sufficient to consider gravity to be a constant, defined as $g = 9.80665\ \mathrm{m/s^2}$, based upon data from the World Geodetic System 1984 (WGS-84), where $g$ is understood to be pointing 'down' in the local frame of reference. If it is desirable to model an object's weight on Earth as a function of latitude, one could use the following: $g(\phi) = g_{45} - \tfrac{1}{2}(g_{poles} - g_{equator})\cos\big(2\phi\cdot\tfrac{\pi}{180^\circ}\big)$, where $g_{45} = 9.806\ \mathrm{m/s^2}$ is the gravity at 45° latitude, $g_{poles} = 9.832\ \mathrm{m/s^2}$ is the gravity at the poles, $g_{equator} = 9.780\ \mathrm{m/s^2}$ is the gravity at the equator, and $\phi$ is the latitude, between −90° and +90°. Neither of these accounts for changes in gravity with changes in altitude, but the model with the cosine function does take into account the centrifugal relief that is produced by the rotation of the Earth; a short numerical check of this model is given below. On the rotating sphere, the sum of the force of the gravitational field and the centrifugal force yields an angular deviation of approximately $\tfrac{\omega^2 R}{2g}\sin(2\phi)$ (in radians) between the direction of the gravitational field and the direction measured by a plumb line; the plumb line appears to point southwards on the northern hemisphere and northwards on the southern hemisphere. Here $\omega \approx 7.29\times10^{-5}\ \mathrm{rad/s}$ is the diurnal angular speed of the Earth's axis, $R \approx 6370\ \mathrm{km}$ is the radius of the reference sphere, and $R\cos(\phi)$ is the distance of the point on the Earth's crust to the Earth's axis. For the mass attraction effect by itself, the gravitational acceleration at the equator is about 0.18% less than that at the poles due to being located farther from the mass center. When the rotational component is included (as above), the gravity at the equator is about 0.53% less than that at the poles, with gravity at the poles being unaffected by the rotation. So the rotational component of the change due to latitude (0.35%) is about twice as significant as the mass attraction change due to latitude (0.18%), but both reduce the strength of gravity at the equator as compared to gravity at the poles. Note that for satellites, orbits are decoupled from the rotation of the Earth, so the orbital period is not necessarily one day; note also that errors can accumulate over multiple orbits, so that accuracy is important. For such problems, the rotation of the Earth would be immaterial unless variations with longitude are modeled. Also, the variation in gravity with altitude becomes important, especially for highly elliptical orbits. The Earth Gravitational Model 1996 (EGM96) contains 130,676 coefficients that refine the model of the Earth's gravitational field. The most significant correction term is about two orders of magnitude more significant than the next largest term. That coefficient is referred to as the $J_2$ term, and accounts for the flattening of the poles, or the oblateness, of the Earth. (A shape elongated on its axis of symmetry, like an American football, would be called prolate.)
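Returning to the simple latitude model introduced above, it is straightforward to evaluate numerically. The following is a minimal sketch using the three reference values quoted in the text; the function name is our own.

```python
import math

G_45, G_POLES, G_EQUATOR = 9.806, 9.832, 9.780  # m/s^2, values from the text

def g_of_latitude(phi_deg: float) -> float:
    """Surface gravity vs. geographic latitude, cosine model (no altitude term)."""
    return G_45 - 0.5 * (G_POLES - G_EQUATOR) * math.cos(math.radians(2.0 * phi_deg))

for phi in (0, 45, 90):
    print(f"{phi}: {g_of_latitude(phi):.3f}")  # 9.780, 9.806, 9.832
```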
A gravitational potential function can be written for the change in potential energy for a unit mass that is brought from infinity into proximity to the Earth. Taking partial derivatives of that function with respect to a coordinate system will then resolve the directional components of the gravitational acceleration vector, as a function of location. The component due to the Earth's rotation can then be included, if appropriate, based on a sidereal day relative to the stars (≈366.24 days/year) rather than on a solar day (≈365.24 days/year). That component is perpendicular to the axis of rotation rather than to the surface of the Earth. A similar model adjusted for the geometry and gravitational field for Mars can be found in publication NASA SP-8010. The barycentric gravitational acceleration at a point in space is given by $\mathbf{g} = -\tfrac{GM}{r^2}\,\hat{\mathbf{r}}$, where: M is the mass of the attracting object, $\hat{\mathbf{r}}$ is the unit vector from the center-of-mass of the attracting object to the center-of-mass of the object being accelerated, r is the distance between the two objects, and G is the gravitational constant. When this calculation is done for objects on the surface of the Earth, or aircraft that rotate with the Earth, one has to account for the fact that the Earth is rotating and the centrifugal acceleration has to be subtracted from this. For example, the equation above gives the acceleration at 9.820 m/s², when $GM = 3.986\times10^{14}\ \mathrm{m^3/s^2}$ and $r = 6.371\times10^{6}\ \mathrm{m}$. The centripetal radius is $r\cos(\phi)$, and the centripetal time unit is approximately one sidereal day; subtracting the corresponding centrifugal acceleration $\omega^2 r\cos^2(\phi)$ reduces this, at a latitude of about 28.5°, to 9.79379 m/s², which is closer to the observed value. Basic formulas Various, successively more refined, formulas for computing the theoretical gravity are referred to as the International Gravity Formula, the first of which was proposed in 1930 by the International Association of Geodesy. The general shape of that formula is $g(\phi) = g_e\left(1 + \beta\sin^2\phi - \beta_1\sin^2 2\phi\right)$, in which $g(\phi)$ is the gravity as a function of the geographic latitude $\phi$ of the position whose gravity is to be determined, $g_e$ denotes the gravity at the equator (as determined by measurement), and the coefficients $\beta$ and $\beta_1$ are parameters that must be selected to produce a good global fit to true gravity. Using the values of the GRS80 reference system, a commonly used specific instantiation of the formula above is given by $g(\phi) = 9.780327\left(1 + 0.0053024\sin^2\phi - 0.0000058\sin^2 2\phi\right)\ \mathrm{m/s^2}$. Using the appropriate double-angle formula in combination with the Pythagorean identity, this can be rewritten in the equivalent form $g(\phi) = 9.780327\left(1.0026454 - 0.0026512\cos 2\phi + 0.0000058\cos^2 2\phi\right)\ \mathrm{m/s^2}$. Up to the 1960s, formulas based on the Hayford ellipsoid (1924) and of the famous German geodesist Helmert (1906) were often used. The difference between the semi-major axis (equatorial radius) of the Hayford ellipsoid and that of the modern WGS84 ellipsoid is 251 m; for Helmert's ellipsoid it is only 63 m. Somigliana equation A more recent theoretical formula for gravity as a function of latitude is the International Gravity Formula 1980 (IGF80), also based on the GRS80 ellipsoid but now using the Somigliana equation (after Carlo Somigliana (1860–1955)): $g(\phi) = g_e\,\frac{1 + k\sin^2\phi}{\sqrt{1 - e^2\sin^2\phi}}$, where $k = \frac{b\,g_p - a\,g_e}{a\,g_e}$ (formula constant); $g_e$, $g_p$ are the defined gravity at the equator and poles, respectively; $a$, $b$ are the equatorial and polar semi-axes, respectively; $e^2 = 1 - \frac{b^2}{a^2}$ is the spheroid's squared eccentricity; providing $g(\phi) = 9.7803267715\,\frac{1 + 0.0019318514\sin^2\phi}{\sqrt{1 - 0.00669438\sin^2\phi}}\ \mathrm{m/s^2}$. A later refinement, based on the WGS84 ellipsoid, is the WGS (World Geodetic System) 1984 Ellipsoidal Gravity Formula: $g(\phi) = 9.7803253359\,\frac{1 + 0.00193185265241\sin^2\phi}{\sqrt{1 - 0.00669437999014\sin^2\phi}}\ \mathrm{m/s^2}$ (where $g_p = 9.8321849378\ \mathrm{m/s^2}$). The difference with IGF80 is insignificant when used for geophysical purposes, but may be significant for other uses.
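The closed-form WGS84 expression above is easy to check numerically. A minimal sketch follows, using the constants quoted in the text (only the function name is our own):

```python
import math

GE = 9.7803253359         # m/s^2, normal gravity at the equator (WGS84)
K  = 0.00193185265241     # Somigliana constant k (WGS84)
E2 = 0.00669437999014     # squared first eccentricity e^2 (WGS84)

def wgs84_gravity(phi_deg: float) -> float:
    """WGS84 ellipsoidal gravity formula (Somigliana form)."""
    s2 = math.sin(math.radians(phi_deg)) ** 2
    return GE * (1.0 + K * s2) / math.sqrt(1.0 - E2 * s2)

print(wgs84_gravity(0.0))   # 9.7803253359 at the equator
print(wgs84_gravity(90.0))  # ~9.8321849379 at the poles
```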
Further details For the normal gravity of the sea level ellipsoid, i.e., elevation h = 0, this formula by Somigliana (1929) applies: $\gamma_0(\phi) = \frac{a\,\gamma_a\cos^2\phi + b\,\gamma_b\sin^2\phi}{\sqrt{a^2\cos^2\phi + b^2\sin^2\phi}}$, with $\gamma_a$ = normal gravity at the equator, $\gamma_b$ = normal gravity at the poles, a = semi-major axis (equator radius), b = semi-minor axis (pole radius), $\phi$ = latitude. Due to numerical issues, the formula is simplified to this: $\gamma_0(\phi) = \gamma_a\,\frac{1 + k\sin^2\phi}{\sqrt{1 - e^2\sin^2\phi}}$ with $k = \frac{b\,\gamma_b}{a\,\gamma_a} - 1$ (e is the eccentricity). For the Geodetic Reference System 1980 (GRS 80) the parameters are set to these values: a = 6 378 137 m, b = 6 356 752.314 m, $\gamma_a$ = 9.7803267715 m/s², $\gamma_b$ = 9.8321863685 m/s². Approximation formula from series expansions The Somigliana formula was approximated through different series expansions, following this scheme: $\gamma_0(\phi) = \gamma_a\left(1 + \beta\sin^2\phi - \beta_1\sin^2 2\phi\right)$. International gravity formula 1930 The normal gravity formula by Gino Cassinis was determined in 1930 by the International Union of Geodesy and Geophysics as the international gravity formula along with the Hayford ellipsoid. The parameters are: $\gamma_a$ = 9.78049 m/s², β = 0.0052884, β₁ = 0.0000059. In the course of time the values were improved again with newer knowledge and more exact measurement methods. Harold Jeffreys improved the values in 1948 to: $\gamma_a$ = 9.780373 m/s², β = 0.0052891, β₁ = 0.0000059. International gravity formula 1967 The normal gravity formula of the Geodetic Reference System 1967 is defined with the values: $\gamma_a$ = 9.780318 m/s², β = 0.0053024, β₁ = 0.0000059. International gravity formula 1980 From the parameters of GRS 80 comes the classic series expansion $\gamma_0(\phi) = 9.780327\left(1 + 0.0053024\sin^2\phi - 0.0000058\sin^2 2\phi\right)\ \mathrm{m/s^2}$. The accuracy is about ±10⁻⁶ m/s². With GRS 80 the following series expansion is also introduced: $\gamma_0(\phi) = \gamma_a\left(1 + c_1\sin^2\phi + c_2\sin^4\phi + c_3\sin^6\phi + c_4\sin^8\phi\right)$. As such the parameters are: c₁ = 5.279 0414·10⁻³, c₂ = 2.327 18·10⁻⁵, c₃ = 1.262·10⁻⁷, c₄ = 7·10⁻¹⁰. The accuracy is about ±10⁻⁹ m/s². When such exactness is not required, the terms farther back can be omitted, but it is recommended to use this finalized formula. Height dependence Cassinis determined the height dependence as $g(\phi,h) = g(\phi) - \left(3.08\times10^{-6}\ \tfrac{1}{\mathrm{s^2}} - 4.19\times10^{-10}\ \tfrac{\mathrm{m^3}}{\mathrm{kg\,s^2}}\,\rho\right)h$, where ρ is the average rock density and h is the height. Since GRS 1967 the average rock density ρ is no longer considered, and the dependence on the ellipsoidal elevation h is $g(\phi,h) = g(\phi) - \left(3.0877\times10^{-6} - 4.3\times10^{-9}\sin^2\phi\right)\tfrac{1}{\mathrm{s^2}}\,h + 7.2\times10^{-13}\ \tfrac{1}{\mathrm{m\,s^2}}\,h^2$. Another expression is $g(\phi,h) = g(\phi)\left(1 - (k_1 - k_2\sin^2\phi)\,h + k_3 h^2\right)$, with the parameters derived from GRS80: $k_1 = \tfrac{2(1+f+m)}{a} = 3.15704\times10^{-7}\ \mathrm{m^{-1}}$, $k_2 = \tfrac{4f}{a} = 2.10269\times10^{-9}\ \mathrm{m^{-1}}$, $k_3 = \tfrac{3}{a^2} = 7.37452\times10^{-14}\ \mathrm{m^{-2}}$, where f is the flattening and $m = \tfrac{\omega^2 a^2 b}{GM}$, with ω the angular velocity of the Earth's rotation and GM the geocentric gravitational constant. This adjustment is about right for common heights in aviation, but for heights up to outer space (over ca. 100 kilometers) it is out of range. WELMEC formula In all German standards offices the free-fall acceleration g is calculated in respect of the average latitude φ and the average height above sea level h with the WELMEC formula: $g = \left(1 + 0.0053024\sin^2\phi - 0.0000058\sin^2 2\phi\right)\cdot 9.780318\ \tfrac{\mathrm{m}}{\mathrm{s^2}} - 0.000003085\ \tfrac{1}{\mathrm{s^2}}\cdot h$. The formula is based on the International Gravity Formula from 1967. The scale of free-fall acceleration at a certain place must be determined with precision measurement of several mechanical magnitudes. Weighing scales, which determine the mass of a body by way of its weight, rely on the local free-fall acceleration; thus for use they must be prepared with different constants in different places of use. Through the concept of so-called gravity zones, which are divided with the use of normal gravity, a weighing scale can be calibrated by the manufacturer before use. Example Free-fall acceleration in Schweinfurt: Data: Latitude: 50° 3′ 24″ = 50.0567° Height above sea level: 229.7 m Density of the rock plates: ca. 2.6 g/cm³ Measured free-fall acceleration: g = 9.8100 ± 0.0001 m/s² Free-fall acceleration, calculated through normal gravity formulas: Cassinis: g = 9.81038 m/s² Jeffreys: g = 9.81027 m/s² WELMEC: g = 9.81004 m/s²
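Because the Schweinfurt example lists both the inputs and the expected result, the WELMEC formula above can be verified directly; a short check (function name ours):

```python
import math

def welmec_g(phi_deg: float, h_m: float) -> float:
    """WELMEC formula: latitude term (IGF 1967 based) minus a height term."""
    s2  = math.sin(math.radians(phi_deg)) ** 2
    s22 = math.sin(math.radians(2.0 * phi_deg)) ** 2
    return 9.780318 * (1.0 + 0.0053024 * s2 - 0.0000058 * s22) - 0.000003085 * h_m

print(round(welmec_g(50.0567, 229.7), 5))  # 9.81004 m/s^2, as in the example
```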
See also Gravity anomaly Reference ellipsoid EGM96 (Earth Gravitational Model 1996) Standard gravity: 9.80665 m/s² References Further reading Karl Ledersteger: Astronomische und physikalische Geodäsie. Handbuch der Vermessungskunde Band 5, 10. Auflage. Metzler, Stuttgart 1969 B. Hofmann-Wellenhof, Helmut Moritz: Physical Geodesy, Springer-Verlag, Wien 2006. Wolfgang Torge: Geodäsie. 2. Auflage. Walter de Gruyter, Berlin u.a. 2003. Wolfgang Torge: Geodäsie. Walter de Gruyter, Berlin u.a. 1975 External links Definition of the Geodetic Reference System 1980 (GRS80) (pdf, in English; 70 kB) Gravity Information System of the Physikalisch-Technische Bundesanstalt (in English) Online calculation of normal gravity with various normal gravity formulas (in German) Gravimetry Geodesy Geophysics
Theoretical gravity
[ "Physics", "Mathematics" ]
2,399
[ "Applied mathematics", "Applied and interdisciplinary physics", "Geodesy", "Geophysics" ]
31,986,082
https://en.wikipedia.org/wiki/Rigorous%20coupled-wave%20analysis
Rigorous coupled-wave analysis (RCWA), also known as the Fourier modal method (FMM), is a semi-analytical method in computational electromagnetics that is most typically applied to solve scattering from periodic dielectric structures. It is a Fourier-space method, so devices and fields are represented as a sum of spatial harmonics. Floquet's theorem The method is based on Floquet's theorem that the solutions of periodic differential equations can be expanded with Floquet functions (sometimes referred to as Bloch waves, especially in the solid-state physics community). A device is divided into layers that are each uniform in the z direction. A staircase approximation is needed for curved devices or for properties, such as the dielectric permittivity, graded along the z-direction. The electromagnetic modes in each layer are calculated and analytically propagated through the layers. The overall problem is solved by matching boundary conditions at each of the interfaces between the layers using a technique like scattering matrices. To solve for the electromagnetic modes in the periodic dielectric medium, which are determined by the wave vector of the incident plane wave, Maxwell's equations (in partial differential form) as well as the boundary conditions are expanded in the Floquet functions in Fourier space. This technique transforms the partial differential equations into a matrix-valued ordinary differential equation as a function of height through the periodic medium. The finite representation of these Floquet functions in Fourier space renders the matrices finite, thus allowing the method to be feasibly solved by computers. Fourier factorization Being a Fourier-space method, it suffers several drawbacks. The Gibbs phenomenon is particularly severe for devices with high dielectric contrast. Truncating the number of spatial harmonics can also slow convergence, and techniques like fast Fourier factorization (FFF) should be used. FFF is straightforward to implement for 1D gratings, but the community is still working on a straightforward approach for crossed-grating devices. The difficulty with FFF in crossed-grating devices is that the field must be decomposed into parallel and perpendicular components at all of the interfaces. This is not a straightforward calculation for arbitrarily shaped devices. Finally, because the RCWA method usually requires the convolution of discontinuous functions represented in Fourier space, very careful consideration must be given to how these discontinuities are treated in the formulation of Maxwell's equations in Fourier space. A key contribution was made by Lifeng Li in understanding the properties of convolving the Fourier transforms of two functions with coincident discontinuities that nevertheless form a continuous function. This led to a reformulation of RCWA with a significant improvement in convergence when using truncated Fourier series. Boundary conditions Boundary conditions must be enforced at the interfaces between all the layers. When many layers are used, this becomes too large to solve simultaneously. Instead, RCWA borrows from network theory and calculates scattering matrices. This allows the boundary conditions to be solved one layer at a time. Almost without exception, however, the scattering matrices implemented for RCWA are inefficient and do not follow long-standing conventions in terms of how S11, S12, S21, and S22 are defined. Other methods exist, like the enhanced transmittance matrices (ETM), R matrices, and H matrices.
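As an illustration of the Fourier-space machinery discussed above, the following minimal sketch (our own construction, not code from any particular RCWA package) builds the analytic Fourier coefficients and the resulting Toeplitz convolution matrix of the permittivity for a 1D binary grating; this matrix is a basic ingredient of the Fourier-space eigenproblem, and the truncation order N is the convergence knob tied to the Gibbs-phenomenon issues mentioned above.

```python
import numpy as np

def eps_fourier(m, eps_ridge, eps_groove, fill):
    """Analytic Fourier coefficients of a binary grating whose ridge
    (width = fill * period) is centered at x = 0."""
    m = np.asarray(m)
    safe_m = np.where(m == 0, 1, m)  # avoid division by zero; branch discarded
    return np.where(
        m == 0,
        fill * eps_ridge + (1.0 - fill) * eps_groove,  # mean permittivity
        (eps_ridge - eps_groove) * np.sin(np.pi * m * fill) / (np.pi * safe_m),
    )

def convolution_matrix(eps_ridge, eps_groove, fill, N):
    """Toeplitz matrix E[i, j] = eps_{i-j} over harmonic orders -N..N."""
    idx = np.arange(-N, N + 1)
    return eps_fourier(idx[:, None] - idx[None, :], eps_ridge, eps_groove, fill)

E = convolution_matrix(eps_ridge=4.0, eps_groove=1.0, fill=0.5, N=5)
print(E.shape)  # (11, 11); larger N improves, but slows, convergence
```

For high-contrast gratings, the fast-Fourier-factorization variant would instead use the convolution matrix of 1/ε for the appropriate field components, per Li's factorization rules discussed above.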
ETM, for example, is considerably faster but less memory-efficient. RCWA can be applied to aperiodic structures with appropriate use of perfectly matched layers. Applications RCWA analysis applied to a polarized broadband reflectometry measurement is used within the semiconductor power device industry as a measurement technique to obtain detailed profile information of periodic trench structures. This technique has been used to provide trench depth and critical dimension (CD) results comparable to cross-section SEM, while having the added benefit of being both high-throughput and non-destructive. In order to extract the critical dimensions of a trench structure (depth, CD, and sidewall angle), the measured polarized reflectance data must have a sufficiently large wavelength range and be analyzed with a physically valid model (for example: RCWA in combination with the Forouhi-Bloomer dispersion relations for n and k). Studies have shown that the limited wavelength range of a standard reflectometer (375–750 nm) does not provide the sensitivity to accurately measure trench structures with small CD values (less than 200 nm). However, by using a reflectometer with the wavelength range extended from 190 to 1000 nm, it is possible to accurately measure these smaller structures. RCWA is also used to improve diffractive structures for high-efficiency solar cells. For the simulation of the whole solar cell or solar module, RCWA can be efficiently combined with the OPTOS formalism. See also Finite-difference time-domain (FDTD) References See Chapter 6 in Design and Optimization of Nano-Optical Elements by Coupling Fabrication to Optical Behavior External links RODIS EMpy FMMAX MRCWA S4 Unigit (RCWA, Rayleigh–Fourier & C-method) RawDog Mathematical physics Computational electromagnetics Fourier analysis Holography
Rigorous coupled-wave analysis
[ "Physics", "Mathematics" ]
1,038
[ "Computational electromagnetics", "Applied mathematics", "Theoretical physics", "Computational physics", "Mathematical physics" ]
40,390,496
https://en.wikipedia.org/wiki/Herman%20E.%20Schroeder
Herman E. Schroeder (6 July 1915 – 28 November 2009) was a research director at DuPont, inventor of the first practical adhesive for bonding rubber to nylon for B-29 bomber tires, and a pioneer in the development of specialty elastomers. Early life and education Schroeder was born in Brooklyn, New York on July 6, 1915. He attended Harvard University, where he received his bachelor's and master's degrees in chemistry in 1936 and 1937 and his Ph.D. in organic chemistry in 1939. Awards In 1984, Schroeder received the Charles Goodyear Medal. DuPont awarded him the Lavoisier Medal for Inspirational Research Leadership in 1992. Publications Death He died in 2009 in Greenville, Delaware. References External links Finding aids for the Herman Schroeder papers and the Herman Schroeder photograph collection are available at Hagley Museum and Library. Both collections are available for research. Polymer scientists and engineers 1915 births 2009 deaths American chemical engineers DuPont people Scientists from Brooklyn Harvard University alumni Engineers from New York City 20th-century American engineers
Herman E. Schroeder
[ "Chemistry", "Materials_science" ]
220
[ "Polymer scientists and engineers", "Physical chemists", "Polymer chemistry" ]
40,394,203
https://en.wikipedia.org/wiki/Heat-transfer%20fluid
In fluid thermodynamics, a heat transfer fluid (HTF) is a gas or liquid that takes part in heat transfer by serving as an intermediary in cooling on one side of a process, transporting and storing thermal energy, and heating on another side of a process. Heat transfer fluids are used in countless applications and industrial processes requiring heating or cooling, typically in a closed circuit and in continuous cycles. Cooling water, for instance, cools an engine, while heating water in a hydronic heating system heats the radiator in a room. Water is the most common heat transfer fluid because of its economy, high heat capacity and favorable transport properties. However, the useful temperature range is restricted by freezing below 0 °C and boiling at elevated temperatures depending on the system pressure. Antifreeze additives can alleviate the freezing problem to some extent. However, many other heat transfer fluids have been developed and used in a huge variety of applications. For higher temperatures, oil or synthetic hydrocarbon- or silicone-based fluids offer lower vapor pressure. Molten salts and molten metals can be used for transferring and storing heat at temperatures above 300 to 400 °C, where organic fluids start to decompose. Gases such as water vapor, nitrogen, argon, helium and hydrogen have been used as heat transfer fluids where liquids are not suitable. For gases the pressure typically needs to be elevated to facilitate higher flow rates with low pumping power. To prevent overheating, the fluid flows through a system or device so as to carry the heat away from it. Heat transfer liquids generally need a high boiling point and a high heat capacity. A high boiling point prevents the liquid from vaporising at the operating temperature: if a liquid with too low a boiling point is used to exchange heat with hot substances, it vaporises inside the very machine in which it is used. A high heat capacity enables a small amount of fluid to transfer a large amount of heat very efficiently. The heat capacity denotes the amount of heat the fluid can absorb without changing its temperature; for a liquid, it also indicates how much heat the liquid can absorb before its temperature reaches the boiling point and it ultimately vaporises. If the fluid has a low heat capacity, a large amount of the fluid is required to exchange a relatively small amount of heat, which increases the cost of using heat transfer fluids and reduces the efficiency of the process. For liquid heat transfer fluids, using too small a quantity can also result in vaporisation, which can be dangerous for equipment designed for liquids: gases occupy a larger volume than liquids at the same pressure, so the vapors produced increase the pressure on the walls of the pipe or channel through which the fluid flows and may cause the flow channel to rupture.
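The role of heat capacity in sizing the circulating fluid can be made concrete with the steady-state energy balance $\dot{m} = Q/(c_p\,\Delta T)$. The sketch below compares the mass flow rate several fluids would need for the same duty; the property values are rounded textbook figures used only for illustration, not design data.

```python
# Required mass flow rate for a given heat duty: m_dot = Q / (cp * dT).
def mass_flow_kg_s(duty_kw: float, cp_kj_per_kg_k: float, dT_k: float) -> float:
    return duty_kw / (cp_kj_per_kg_k * dT_k)

fluids_cp = {"water": 4.18, "thermal oil": 2.0, "molten salt": 1.5}  # kJ/(kg*K)
for name, cp in fluids_cp.items():
    m = mass_flow_kg_s(duty_kw=500.0, cp_kj_per_kg_k=cp, dT_k=20.0)
    print(f"{name}: {m:.1f} kg/s")
# water: 6.0 kg/s; thermal oil: 12.5 kg/s; molten salt: 16.7 kg/s
```

The spread shows why a low heat capacity must be compensated by a larger, and costlier, circulation rate.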
Characteristics of heat transfer fluids Heat transfer fluids have distinct thermal and chemical properties which determine their suitability for various industrial applications. Key characteristics include: Thermal Stability: This refers to a fluid's resistance to irreversible changes in its physical properties at varying temperatures. Fluids with high thermal stability have fewer degradation pathways, leading to longer service lifetimes and less maintenance. The determination of a fluid's thermal stability is often based on tests such as ASTM D6743, which assess degradation products formed under thermal stress. Viscosity: The viscosity of a fluid affects its flow characteristics and pumping costs. Lower-viscosity fluids are easier to pump and circulate within a system. Heat Capacity: A fluid's heat capacity indicates how much thermal energy it can transport and store, impacting the efficiency of the heat transfer process. Thermal Conductivity and Thermal Diffusivity: These properties influence the rate at which heat is transferred through the fluid, affecting how quickly a system can respond to temperature changes. Corrosion Potential: The compatibility of a heat transfer fluid with system materials is crucial to minimize corrosion and extend the life of the equipment. Freezing and Boiling Points: Fluids should have high boiling and low freezing points to remain in the desired phase during the heat transfer process and to avoid phase-change-related issues within the operating temperature range. Industrial Applications Heat transfer fluids are integral to various industrial applications, enabling precise temperature control in manufacturing processes. In the food industry, they are vital for processing meats and snacks. Chemical processes often rely on them for batch reactors and continuous operations. The plastics, rubber, and composites sectors use heat transfer fluids in molding and extrusion processes. They are also critical in petrochemical synthesis and distillation, oil and gas refining, and for converting materials in presses and laminating operations. Heat Transfer Fluids in Solar Energy In solar power plants, heat transfer fluids are used in concentrators like linear Fresnel and parabolic trough systems for efficient energy generation and thermal storage. Molten salts and synthetic heat transfer fluids are utilized based on their ability to function over various temperature ranges, contributing to the generation of electricity and the manufacturing of polysilicon for photovoltaic cells. These fluids assist in the purification and cooling steps of polysilicon production, essential for creating high-purity silicon for solar and electronic applications. Techno-economic analyses are usually performed to select the appropriate heat transfer fluid. Regarding the selection of a low-cost or cost-effective thermal oil, it is important to consider not only the acquisition or purchase cost, but also the operating and replacement costs. An oil that is initially more expensive may prove to be more cost-effective in the long run if it offers higher thermal stability, thereby reducing the frequency of replacement.
Synthetic and aromatic heat transfer fluids: Employed in high-temperature applications, such as solar power generation and industrial heat processes. Molten salts: Utilized in solar energy systems for their capacity for thermal storage and ability to operate at very high temperatures. Vegetable oils: Utilized in solar energy systems because they are biodegradable and renewable. See also Coolant Heat-transfer oil (oil cooling, oil heater) References Further reading Heat transfer Fluid dynamics
Heat-transfer fluid
[ "Physics", "Chemistry", "Engineering" ]
1,397
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Chemical engineering", "Thermodynamics", "Piping", "Fluid dynamics" ]
37,626,088
https://en.wikipedia.org/wiki/DNA%20damage%20%28naturally%20occurring%29
Natural DNA damage is an alteration in the chemical structure of DNA, such as a break in a strand of DNA, a nucleobase missing from the backbone of DNA, or a chemically changed base such as 8-OHdG. DNA damage can occur naturally or via environmental factors, but is distinctly different from mutation, although both are types of error in DNA. DNA damage is an abnormal chemical structure in DNA, while a mutation is a change in the sequence of base pairs. DNA damages cause changes in the structure of the genetic material and prevent the replication mechanism from functioning properly. The DNA damage response (DDR) is a complex signal transduction pathway which recognizes when DNA is damaged and initiates the cellular response to the damage. DNA damage and mutation have different biological consequences. While most DNA damages can undergo DNA repair, such repair is not 100% efficient. Un-repaired DNA damages accumulate in non-replicating cells, such as cells in the brains or muscles of adult mammals, and can cause aging. (Also see DNA damage theory of aging.) In replicating cells, such as cells lining the colon, errors occur upon replication of past damages in the template strand of DNA or during repair of DNA damages. These errors can give rise to mutations or epigenetic alterations. Both of these types of alteration can be replicated and passed on to subsequent cell generations. These alterations can change gene function or regulation of gene expression and possibly contribute to progression to cancer. Throughout the cell cycle there are various checkpoints to ensure the cell is in good condition to progress to mitosis. The three main checkpoints are the G1/S and G2/M checkpoints and the spindle assembly checkpoint, which regulates progression through anaphase. The G1 and G2 checkpoints involve scanning for damaged DNA. During S phase the cell is more vulnerable to DNA damage than during any other part of the cell cycle. The G2 checkpoint checks for damaged DNA and DNA replication completeness. Types Damage to DNA that occurs naturally can result from metabolic or hydrolytic processes. Metabolism releases compounds that damage DNA including reactive oxygen species, reactive nitrogen species, reactive carbonyl species, lipid peroxidation products, and alkylating agents, among others, while hydrolysis cleaves chemical bonds in DNA. Naturally occurring oxidative DNA damages arise at least 10,000 times per cell per day in humans and as much as 100,000 per cell per day in rats, as documented below. Oxidative DNA damage can produce more than 20 types of altered bases as well as single-strand breaks. DNA can be damaged via environmental factors as well, such as UV light, ionizing radiation, and genotoxic chemicals. Replication forks can be stalled by damaged DNA, and double-strand breaks are themselves a form of DNA damage. Other types of endogenous DNA damages, given below with their frequencies of occurrence, include depurinations, depyrimidinations, double-strand breaks, O6-methylguanines, and cytosine deamination. Frequencies The list below shows some frequencies with which new naturally occurring DNA damages arise per day, due to endogenous cellular processes.
Oxidative damages: humans, per cell per day: 10,000; 11,500; 2,800 specific damages (8-oxoGua, 8-oxodG plus 5-HMUra). Rats, per cell per day: 74,000; 86,000; 100,000. Mice, per cell per day: 34,000 specific damages (8-oxoGua, 8-oxodG plus 5-HMUra); 47,000 specific damages (oxo8dG) in mouse liver; 28,000 specific damages (8-oxoGua, 8-oxodG, 5-HMUra). Depurinations: mammalian cells, per cell per day: 2,000 to 10,000; 9,000; 12,000; 13,920. Depyrimidinations: mammalian cells, per cell per day: 600; 696. Single-strand breaks: mammalian cells, per cell per day: 55,200. Double-strand breaks: human cells, per cell cycle: 10; 50. O6-methylguanines: mammalian cells, per cell per day: 3,120. Cytosine deamination: mammalian cells, per cell per day: 192. Another important endogenous DNA damage is M1dG, short for (3-(2'-deoxy-beta-D-erythro-pentofuranosyl)-pyrimido[1,2-a]-purin-10(3H)-one). The excretion in urine (likely reflecting rate of occurrence) of M1dG may be as much as 1,000-fold lower than that of 8-oxodG. However, a more important measure may be the steady-state level in DNA, reflecting both rate of occurrence and rate of DNA repair. The steady-state level of M1dG is higher than that of 8-oxodG. This points out that some DNA damages produced at a low rate may be difficult to repair and remain in DNA at a high steady-state level. Both M1dG and 8-oxodG are mutagenic. Steady-state levels Steady-state levels of DNA damages represent the balance between formation and repair. More than 100 types of oxidative DNA damage have been characterized, and 8-oxodG constitutes about 5% of the steady state oxidative damages in DNA. Helbock et al. estimated that there were 24,000 steady state oxidative DNA adducts per cell in young rats and 66,000 adducts per cell in old rats. This reflects the accumulation of DNA damage with age. DNA damage accumulation with age is further described in DNA damage theory of aging. Swenberg et al. measured average amounts of selected steady state endogenous DNA damages in mammalian cells. The seven most common damages they evaluated are shown in Table 1. Evaluating steady-state damages in specific tissues of the rat, Nakamura and Swenberg indicated that the number of abasic sites varied from about 50,000 per cell in liver, kidney and lung to about 200,000 per cell in the brain. Biomolecular pathways Proteins promoting endogenous DNA damage were identified in a 2019 paper as the DNA "damage-up" proteins (DDPs). The DDP mechanisms fall into 3 clusters: reactive oxygen increase by transmembrane transporters, chromosome loss by replisome binding, replication stalling by transcription factors. The DDP human homologs are over-represented in known cancer drivers, and their RNAs in tumors predict heavy mutagenesis and a poor prognosis. Repair of damaged DNA In the presence of DNA damage, the cell can either repair the damage or induce cell death if the damage is beyond repair. Types The seven main types of DNA repair and one pathway of damage tolerance, the lesions they address, and the accuracy of the repair (or tolerance) are shown in this table. For a brief description of the steps in repair see DNA repair mechanisms or see each individual pathway.
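The formation/repair balance that sets the steady-state levels discussed above can be illustrated with a toy first-order model. The formation rate echoes the ~10,000 oxidative lesions per cell per day cited earlier, but the repair half-life used here is an illustrative assumption, not a measured value.

```python
import math

# Toy first-order kinetics: dN/dt = k_f - k_r * N  =>  N_ss = k_f / k_r.
k_f = 10_000.0                    # new lesions per cell per day (from the text)
t_half_days = 1.0 / 24.0          # ASSUMED effective repair half-life of ~1 hour
k_r = math.log(2) / t_half_days   # first-order repair rate constant, 1/day

N_ss = k_f / k_r
print(round(N_ss))                # ~601 lesions per cell at steady state
```

Under this simple model, slower repair (a longer half-life) raises the steady-state level proportionally, consistent with the observation above that slowly repaired lesions such as M1dG can persist at high steady-state levels despite low formation rates.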
Aging and cancer The schematic diagram indicates the roles of insufficient DNA repair in aging and cancer, and the role of apoptosis in cancer prevention. An excess of naturally occurring DNA damage, due to inherited deficiencies in particular DNA repair enzymes, can cause premature aging or increased risk for cancer (see DNA repair-deficiency disorder). On the other hand, the ability to trigger apoptosis in the presence of excess un-repaired DNA damage is critical for prevention of cancer. Apoptosis and cancer prevention DNA repair proteins are often activated or induced when DNA has sustained damage. However, excessive DNA damage can initiate apoptosis (i.e., programmed cell death) if the level of DNA damage exceeds the repair capacity. Apoptosis can prevent cells with excess DNA damage from undergoing mutagenesis and progression to cancer. Inflammation is often caused by infection, such as with hepatitis B virus (HBV), hepatitis C virus (HCV) or Helicobacter pylori. Chronic inflammation is also a central characteristic of obesity. Such inflammation causes oxidative DNA damage. This is due to the induction of reactive oxygen species (ROS) by various intracellular inflammatory mediators. HBV and HCV infections, in particular, cause 10,000-fold and 100,000-fold increases in intracellular ROS production, respectively. Inflammation-induced ROS that cause DNA damage can trigger apoptosis, but may also cause cancer if repair and apoptotic processes are insufficiently protective. Bile acids, stored in the gall bladder, are released into the small intestine in response to fat in the diet. Higher levels of fat cause greater release. Bile acids cause DNA damage, including oxidative DNA damage, double-strand DNA breaks, aneuploidy and chromosome breakage. High-normal levels of the bile acid deoxycholic acid cause apoptosis in human colon cells, but may also lead to colon cancer if repair and apoptotic defenses are insufficient. Apoptosis serves as a safeguard mechanism against tumorigenesis. It prevents the increased mutagenesis that excess DNA damage could cause upon replication. At least 17 DNA repair proteins, distributed among five DNA repair pathways, have a "dual role" in response to DNA damage. With moderate levels of DNA damage, these proteins initiate or contribute to DNA repair. However, when excessive levels of DNA damage are present, they trigger apoptosis. DNA damage response The packaging of eukaryotic DNA into chromatin is a barrier to all DNA-based processes that require enzyme action. For most DNA repair processes, the chromatin must be remodeled. In eukaryotes, ATP-dependent chromatin remodeling complexes and histone-modifying enzymes are two factors that act to accomplish this remodeling process after DNA damage occurs. Further DNA repair steps, involving multiple enzymes, usually follow. Some of the first responses to DNA damage, with their timing, are described below. More complete descriptions of the DNA repair pathways are presented in articles describing each pathway. At least 169 enzymes are involved in DNA repair pathways. Base excision repair Oxidized bases in DNA are produced in cells treated with Hoechst dye followed by micro-irradiation with 405 nm light. Such oxidized bases can be repaired by base excision repair. When the 405 nm light is focused along a narrow line within the nucleus of a cell, about 2.5 seconds after irradiation, the chromatin remodeling enzyme Alc1 achieves half-maximum recruitment onto the irradiated micro-line. The line of chromatin that was irradiated then relaxes, expanding side-to-side over the next 60 seconds. Within 6 seconds of the irradiation with 405 nm light, there is half-maximum recruitment of OGG1 to the irradiated line. OGG1 is an enzyme that removes the oxidative DNA damage 8-oxo-dG from DNA. Removal of 8-oxo-dG, during base excision repair, occurs with a half-life of 11 minutes.
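The 11-minute removal half-life just quoted implies a simple exponential time course for the remaining lesions; a sketch of that kinetics (not a mechanistic model of base excision repair) follows.

```python
import math

T_HALF_MIN = 11.0  # 8-oxo-dG removal half-life from the text, in minutes

def fraction_remaining(t_min: float) -> float:
    """Fraction of 8-oxo-dG lesions remaining under first-order removal."""
    return math.exp(-math.log(2) * t_min / T_HALF_MIN)

for t in (0, 11, 22, 60):
    print(t, round(fraction_remaining(t), 3))  # 1.0, 0.5, 0.25, ~0.023
```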
Nucleotide excision repair Ultraviolet (UV) light induces the formation of DNA damages including pyrimidine dimers (such as thymine dimers) and 6-4 photoproducts. These types of "bulky" damages are repaired by nucleotide excision repair. After irradiation with UV light, DDB2, in a complex with DDB1, the ubiquitin ligase protein CUL4A and the RING finger protein ROC1, associates with sites of damage within chromatin. Half-maximum association occurs in 40 seconds. PARP1 also associates within this period. The PARP1 protein attaches to both DDB1 and DDB2 and then PARylates (creates a poly-ADP ribose chain) on DDB2, which attracts the DNA remodeling protein ALC1. ALC1 relaxes chromatin at sites of UV damage to DNA. In addition, the ubiquitin E3 ligase complex DDB1-CUL4A carries out ubiquitination of the core histones H2A, H3, and H4, as well as the repair protein XPC, which has been attracted to the site of the DNA damage. XPC, upon ubiquitination, is activated and initiates the nucleotide excision repair pathway. Somewhat later, at 30 minutes after UV damage, the INO80 chromatin remodeling complex is recruited to the site of the DNA damage, and this coincides with the binding of further nucleotide excision repair proteins, including ERCC1. Homologous recombinational repair Double-strand breaks (DSBs) at specific sites can be induced by transfecting cells with a plasmid encoding I-SceI endonuclease (a homing endonuclease). Multiple DSBs can be induced by irradiating sensitized cells (labeled with 5'-bromo-2'-deoxyuridine and with Hoechst dye) with 780 nm light. These DSBs can be repaired by the accurate homologous recombinational repair pathway or by the less accurate non-homologous end joining repair pathway. Here we describe the early steps in homologous recombinational repair (HRR). After treating cells to introduce DSBs, the stress-activated protein kinase, c-Jun N-terminal kinase (JNK), phosphorylates SIRT6 on serine 10. This post-translational modification facilitates the mobilization of SIRT6 to DNA damage sites with half-maximum recruitment in well under a second. SIRT6 at the site is required for efficient recruitment of poly (ADP-ribose) polymerase 1 (PARP1) to a DNA break site and for efficient repair of DSBs. PARP1 protein starts to appear at DSBs in less than a second, with half-maximum accumulation within 1.6 seconds after the damage occurs. This then allows half-maximum recruitment of the DNA repair enzymes MRE11 within 13 seconds and NBS1 within 28 seconds. MRE11 and NBS1 carry out early steps of the HRR pathway. γH2AX, the phosphorylated form of H2AX, is also involved in early steps of DSB repair. The histone variant H2AX constitutes about 10% of the H2A histones in human chromatin. γH2AX (H2AX phosphorylated on serine 139) can be detected as soon as 20 seconds after irradiation of cells (with DNA double-strand break formation), and half-maximum accumulation of γH2AX occurs in one minute. The extent of chromatin with phosphorylated γH2AX is about two million base pairs at the site of a DNA double-strand break. γH2AX does not, itself, cause chromatin decondensation, but within 30 seconds of irradiation, RNF8 protein can be detected in association with γH2AX. RNF8 mediates extensive chromatin decondensation, through its subsequent interaction with CHD4, a component of the nucleosome remodeling and deacetylase complex NuRD. Pause for DNA repair After rapid chromatin remodeling, cell cycle checkpoints may be activated to allow DNA repair to be completed before the cell cycle progresses.
First, two kinases, ATM and ATR, are activated within 5 or 6 minutes after DNA is damaged. This is followed by phosphorylation of the cell cycle checkpoint protein Chk1, initiating its function, about 10 minutes after DNA is damaged. Role of oxidative damage to guanine in gene regulation The DNA damage 8-oxo-dG does not occur randomly in the genome. In mouse embryonic fibroblasts, a 2- to 5-fold enrichment of 8-oxo-dG was found in genetic control regions, including promoters, 5'-untranslated regions and 3'-untranslated regions compared to 8-oxo-dG levels found in gene bodies and in intergenic regions. In rat pulmonary artery endothelial cells, when 22,414 protein-coding genes were examined for locations of 8-oxo-dG, the majority of 8-oxo-dGs (when present) were found in promoter regions rather than within gene bodies. Among hundreds of genes whose expression levels were affected by hypoxia, those with newly acquired promoter 8-oxo-dGs were upregulated, and those genes whose promoters lost 8-oxo-dGs were almost all downregulated. As reviewed by Wang et al., oxidized guanine appears to have multiple regulatory roles in gene expression. In particular, when oxidative stress produces 8-oxo-dG in the promoter of a gene, the oxidative stress may also inactivate OGG1, an enzyme that targets 8-oxo-dG and normally initiates repair of 8-oxo-dG damage. The inactive OGG1, which no longer excises 8-oxo-dG, nevertheless targets and complexes with 8-oxo-dG, and causes a sharp (~70°) bend in the DNA. This allows the assembly of a transcriptional initiation complex, up-regulating transcription of the associated gene. When 8-oxo-dG is formed in a guanine-rich, potential G-quadruplex-forming sequence (PQS) in the coding strand of a promoter, active OGG1 excises the 8-oxo-dG and generates an apurinic/apyrimidinic site (AP site). The AP site enables melting of the duplex to unmask the PQS, adopting a G-quadruplex fold (G4 structure/motif) that has a regulatory role in transcription activation. When 8-oxo-dG is complexed with active OGG1 it may then recruit chromatin remodelers to modulate gene expression. Chromodomain helicase DNA-binding protein 4 (CHD4), a component of the (NuRD) complex, is recruited by OGG1 to oxidative DNA damage sites. CHD4 then attracts DNA and histone methylating enzymes that repress transcription of associated genes. Role of DNA damage in memory formation Oxidation of guanine Oxidation of guanine, particularly within CpG sites, may be especially important in learning and memory. Methylation of cytosines occurs at 60–90% of CpG sites depending on the tissue type. In the mammalian brain, ~62% of CpGs are methylated. Methylation of CpG sites tends to stably silence genes. More than 500 of these CpG sites are de-methylated in neuron DNA during memory formation and memory consolidation in the hippocampus and cingulate cortex regions of the brain. As indicated below, the first step in de-methylation of methylated cytosine at a CpG site is oxidation of the guanine to form 8-oxo-dG. Role of oxidized guanine in DNA de-methylation The figure in this section shows a CpG site where the cytosine is methylated to form 5-methylcytosine (5mC) and the guanine is oxidized to form 8-oxo-2'-deoxyguanosine (in the figure this is shown in the tautomeric form 8-OHdG). When this structure is formed, the base excision repair enzyme OGG1 targets 8-OHdG and binds to the lesion without immediate excision. OGG1, present at a 5mCp-8-OHdG site, recruits TET1, and TET1 oxidizes the 5mC adjacent to the 8-OHdG.
This initiates de-methylation of 5mC. TET1 is a key enzyme involved in de-methylating 5mCpG. However, TET1 is only able to act on 5mCpG if the guanine was first oxidized to form 8-hydroxy-2'-deoxyguanosine (8-OHdG or its tautomer 8-oxo-dG), resulting in a 5mCp-8-OHdG dinucleotide (see figure in this section). This initiates the de-methylation pathway on the methylated cytosine, finally resulting in an unmethylated cytosine (see DNA oxidation for further steps in forming unmethylated cytosine). Altered protein expression in neurons, due to changes in methylation of DNA (likely controlled by 8-oxo-dG-dependent de-methylation of CpG sites in gene promoters within neuron DNA), has been established as central to memory formation. Role of double-strand breaks in memory formation Generation of neuronal activity-related DSBs Double-strand breaks (DSBs) in regions of DNA related to neuronal activity are produced by a variety of mechanisms within and around the genome. The enzyme topoisomerase II, or TOPIIβ, plays a key role in DSB formation by aiding in the demethylation or loosening of histones wrapped around the double helix to promote transcription. Once the chromatin structure is opened, DSBs are more likely to accumulate; however, these are normally repaired by TOPIIβ through its intrinsic religation ability that rejoins the cleaved DNA ends. Failure of TOPIIβ to religate can have drastic consequences on protein synthesis, where it is estimated that "blocking TOPIIβ activity alters the expression of nearly one-third of all developmentally regulated genes," such as neural immediate early genes (IEGs) involved in memory consolidation. Rapid expression of the egr-1, c-Fos, and Arc IEGs has been observed in response to increased neuronal activity in the hippocampus region of the brain, where memory processing takes place. As a preventative measure against TOPIIβ failure, DSB repair molecules are recruited via two different pathways: non-homologous end joining (NHEJ) pathway factors, which perform a similar religation function to that of TOPIIβ, and the homologous recombination (HR) pathway, which uses the non-broken sister strand as a template to repair the damaged strand of DNA. Stimulation of neuronal activity, as previously mentioned in IEG expression, is another mechanism through which DSBs are generated. Changes in level of activity have been used in studies as a biomarker to trace the overlap between DSBs and increased histone H3K4 methylation in promoter regions of IEGs. Other studies have indicated that transposable elements (TEs) can cause DSBs through endogenous activity that involves using endonuclease enzymes to insert and cleave target DNA at random sites. DSBs and memory reconsolidation While accumulation of DSBs generally inhibits long-term memory consolidation, the process of reconsolidation, in contrast, is DSB-dependent. Memory reconsolidation involves the modification of existing memories stored in long-term memory. Research involving NPAS4, a gene that regulates neuroplasticity in the hippocampus during contextual learning and memory formation, has revealed a link between deletions in the coding region and impairments in recall of fear memories in transgenic rats. Moreover, H3K4me3, a histone modification produced by trimethylation of histone H3 at lysine 4, was upregulated at the promoter region of the NPAS4 gene during the reconsolidation process, while knockdown (gene knockdown) of the enzyme regulating this mark impeded reconsolidation.
A similar effect was observed with TOPIIβ, where knockdown also impaired the fear memory response in rats, indicating that DSBs, along with the enzymes that regulate them, influence memory formation at multiple stages. DSBs and neurodegeneration Buildup of DSBs more broadly leads to the degeneration of neurons, hindering the function of memory and learning processes. Due to their lack of cell division and high metabolic activity, neurons are especially prone to DNA damage. Additionally, an imbalance of DSBs and DNA repair molecules for neuronal-activity genes has been linked to the development of various human neurodegenerative diseases including Alzheimer's disease (AD), Parkinson's disease (PD), and amyotrophic lateral sclerosis (ALS). In patients with Alzheimer's disease, DSBs accumulate in neurons at early stages and are the driving force behind memory loss, a key characteristic of the disease. Other external factors that result in increased levels of activity-dependent DSBs in people with AD include oxidative damage to neurons, which can result in more DSBs when multiple lesions occur close to one another. Environmental factors such as viruses and a high-fat diet have also been associated with disrupted function of DNA repair molecules. One line of research toward targeted therapy for AD has involved the BRCA1 gene: suppression of BRCA1, initially tested in transgenic mice, increased DSB levels and produced memory loss, suggesting that BRCA1 could "serve as a therapeutic target for AD and AD-related dementia." Similarly, the protein ATM, involved in DNA repair and epigenetic modifications to the genome, is positively correlated with neuronal loss in AD brains, indicating the protein is another key component in the intrinsically linked processes of neurodegeneration, DSB production, and memory formation.
Chk2, as well as ATR/ATM, can activate p53, which leads to permanent cell cycle arrest or apoptosis. p53 role in DNA damage repair system When there is too much damage, apoptosis is triggered in order to protect the organism from potentially harmful cells. p53, encoded by a tumor suppressor gene, is a major regulatory protein in the DNA damage response system which binds directly to the promoters of its target genes. p53 acts primarily at the G1 checkpoint (controlling the G1 to S transition), where it blocks cell cycle progression. Activation of p53 can trigger cell death or permanent cell cycle arrest. p53 can also activate certain repair pathways such as NER. Regulation of p53 In the absence of DNA damage, p53 is regulated by Mdm2 and constantly degraded. When there is DNA damage, Mdm2 is phosphorylated, most likely by ATM. The phosphorylation of Mdm2 leads to a reduction in the activity of Mdm2, thus preventing the degradation of p53. A normal, undamaged cell usually has low levels of p53, while cells under stress and with DNA damage have high levels of p53. p53 serves as transcription factor for bax and p21 p53 serves as a transcription factor for both bax, a proapoptotic protein, and p21, a CDK inhibitor. CDK inhibitors result in cell cycle arrest. Arresting the cell provides the cell time to repair the damage, and if the damage is irreparable, p53 recruits bax to trigger apoptosis. DDR and p53 role in cancer p53 is a key player in the growth of cancerous cells. Cells with damaged DNA and mutated p53 are at a higher risk of becoming cancerous. Common chemotherapy treatments are genotoxic. These treatments are ineffective in cancer tumors that have mutated p53, since such tumors do not have functioning p53 to either arrest or kill the damaged cell. A major problem for life One indication that DNA damages are a major problem for life is that DNA repair processes, to cope with DNA damages, have been found in all cellular organisms in which DNA repair has been investigated. For example, in bacteria, a regulatory network aimed at repairing DNA damages (called the SOS response in Escherichia coli) has been found in many bacterial species. E. coli RecA, a key enzyme in the SOS response pathway, is the defining member of a ubiquitous class of DNA strand-exchange proteins that are essential for homologous recombination, a pathway that maintains genomic integrity by repairing broken DNA. Genes homologous to RecA and to other central genes in the SOS response pathway are found in almost all the bacterial genomes sequenced to date, covering a large number of phyla, suggesting both an ancient origin and a widespread occurrence of recombinational repair of DNA damage. Recombinases that are homologues of RecA are also widespread in eukaryotic organisms. For example, in fission yeast and humans, RecA homologues promote duplex-duplex DNA-strand exchange needed for repair of many types of DNA lesions. Another indication that DNA damages are a major problem for life is that cells make large investments in DNA repair processes. As pointed out by Hoeijmakers, repairing just one double-strand break could require more than 10,000 ATP molecules, as used in signaling the presence of the damage, the generation of repair foci, and the formation (in humans) of the RAD51 nucleofilament (an intermediate in homologous recombinational repair). (RAD51 is a homologue of bacterial RecA.)
If the structural modification occurs during the G1 phase of the cell cycle, the G1-S checkpoint arrests or postpones progression of the cell cycle before the cell enters the S phase. Consequences Differentiated somatic cells of adult mammals generally replicate infrequently or not at all. Such cells, including, for example, brain neurons and muscle myocytes, have little or no cell turnover. Non-replicating cells do not generally generate mutations due to DNA damage-induced errors of replication. These non-replicating cells do not commonly give rise to cancer, but they do accumulate DNA damages with time that likely contribute to aging. In a non-replicating cell, a single-strand break or other type of damage in the transcribed strand of DNA can block RNA polymerase II-catalysed transcription. This would interfere with the synthesis of the protein coded for by the gene in which the blockage occurred. Brasnjevic et al. summarized the evidence showing that single-strand breaks accumulate with age in the brain (though accumulation differed in different regions of the brain) and that single-strand breaks are the most frequent steady-state DNA damages in the brain. As discussed above, these accumulated single-strand breaks would be expected to block transcription of genes. Consistent with this, as reviewed by Hetman et al., 182 genes were identified and shown to have reduced transcription in the brains of individuals older than 72 years, compared to transcription in the brains of those less than 43 years old. When 40 particular proteins were evaluated in a muscle of rats, the majority of the proteins showed significant decreases during aging from 18 months (mature rat) to 30 months (aged rat) of age. Another type of DNA damage, the double-strand break, was shown to cause cell death (loss of cells) through apoptosis. This type of DNA damage would not accumulate with age, since once a cell was lost through apoptosis, its double-strand damage would be lost with it. Thus, damaged DNA segments undermine the DNA replication machinery because these altered sequences of DNA cannot be utilized as true templates to produce copies of one's genetic material. RAD genes and the cell cycle response to DNA damage in Saccharomyces cerevisiae When DNA is damaged, the cell responds in various ways to fix the damage and minimize the effects on the cell. One such response, specifically in eukaryotic cells, is to delay cell division: the cell becomes arrested for some time in the G2 phase before progressing through the rest of the cell cycle. Various studies have been conducted to elucidate the purpose of this G2 arrest that is induced by DNA damage. Researchers have found that cells that are prematurely forced out of the delay have lower cell viability and higher rates of damaged chromosomes compared with cells that are able to undergo a full G2 arrest, suggesting that the purpose of the delay is to give the cell time to repair damaged chromosomes before continuing with the cell cycle. This ensures the proper functioning of mitosis. Various species of animals exhibit similar mechanisms of cellular delay in response to DNA damage, which can be caused by exposure to x-irradiation. The budding yeast Saccharomyces cerevisiae has been studied specifically because progression through the cell cycle can easily be followed via nuclear morphology.
By studying Saccharomyces cerevisiae, researchers have been able to learn more about radiation-sensitive (RAD) genes, and the effect that RAD mutations may have on the typical cellular DNA damage-induced delay response. Specifically, the RAD9 gene plays a crucial role in detecting DNA damage and arresting the cell in G2 until the damage is repaired. Through extensive experiments, researchers have been able to illuminate the role that the RAD genes play in delaying cell division in response to DNA damage. When wild-type, growing cells are exposed to various levels of x-irradiation over a given time frame, and then analyzed with a microcolony assay, differences in the cell cycle response can be observed based on which genes are mutated in the cells. For instance, while unirradiated cells will progress normally through the cell cycle, cells that are exposed to x-irradiation either permanently arrest (become inviable) or delay in the G2 phase before continuing to divide in mitosis, further corroborating the idea that the G2 delay is crucial for DNA repair. However, rad strains, which are deficient in DNA repair, exhibit a markedly different response. For instance, rad52 cells, which cannot repair double-strand DNA breaks, tend to permanently arrest in G2 when exposed to even very low levels of x-irradiation, and rarely end up progressing through the later stages of the cell cycle. This is because the cells cannot repair DNA damage and thus do not enter mitosis. Various other rad mutants exhibit similar responses when exposed to x-irradiation. However, the rad9 strain exhibits an entirely different effect. These cells fail to delay in the G2 phase when exposed to x-irradiation, and end up progressing through the cell cycle unperturbed, before dying. This suggests that the RAD9 gene, unlike the other RAD genes, plays a crucial role in initiating G2 arrest. To further investigate these findings, the cell cycles of double mutant strains have been analyzed. A mutant rad52 rad9 strain, which is defective in both DNA repair and G2 arrest, fails to undergo cell cycle arrest when exposed to x-irradiation. This suggests that even if DNA damage cannot be repaired, the cell cycle will not delay in the absence of RAD9. Thus, unrepaired DNA damage is the signal that tells RAD9 to halt division and arrest the cell cycle in G2. Furthermore, there is a dose-dependent response; as the levels of x-irradiation, and the subsequent DNA damage, increase, more cells, regardless of the mutations they have, become arrested in G2. Another, and perhaps more helpful, way to visualize this effect is to look at photomicroscopy slides. Initially, slides of RAD+ and rad9 haploid cells in the exponential phase of growth show simple, single cells that are indistinguishable from each other. However, the slides look much different after being exposed to x-irradiation for 10 hours. The RAD+ slides now show RAD+ cells existing primarily as two-budded microcolonies, suggesting that cell division has been arrested. In contrast, the rad9 slides show the rad9 cells existing primarily as 3- to 8-budded colonies, and they appear smaller than the RAD+ cells. This is further evidence that the rad9 mutant cells continued to divide and are deficient in G2 arrest. However, there is evidence that although the RAD9 gene is necessary to induce G2 arrest in response to DNA damage, giving the cell time to repair the damage, it does not actually play a direct role in repairing DNA.
When rad9 cells are artificially arrested in G2 with MBC, a microtubule poison that prevents cellular division, and then treated with x-irradiation, the cells are able to repair their DNA and eventually progress through the cell cycle, dividing into viable cells. Thus, the RAD9 gene plays no role in actually repairing damaged DNA; it simply senses damaged DNA and responds by delaying cell division. The delay, then, is mediated by a control mechanism, rather than by the physically damaged DNA itself. On the other hand, it is possible that there are backup mechanisms that fill the role of RAD9 when it is not present. In fact, some studies have found that RAD9 does indeed play a critical role in DNA repair. In one study, rad9 mutant and normal cells in the exponential phase of growth were exposed to UV-irradiation and synchronized in specific phases of the cell cycle. After being incubated to permit DNA repair, the extent of pyrimidine dimerization (which is indicative of DNA damage) was assessed using sensitive primer extension techniques. It was found that the removal of DNA photolesions was much less efficient in rad9 mutant cells than in normal cells, providing evidence that RAD9 is involved in DNA repair. Thus, the role of RAD9 in repairing DNA damage remains unclear. Regardless, it is clear that RAD9 is necessary to sense DNA damage and halt cell division. RAD9 has been suggested to possess 3' to 5' exonuclease activity, which is perhaps why it plays a role in detecting DNA damage. When DNA is damaged, it is hypothesized that RAD9 forms a complex with RAD1 and HUS1, and this complex is recruited to sites of DNA damage. It is in this way that RAD9 is able to exert its effects. Although the function of RAD9 has primarily been studied in the budding yeast Saccharomyces cerevisiae, many of the cell cycle control mechanisms are similar between species. Thus, we can conclude that RAD9 likely plays a critical role in the DNA damage response in humans as well. See also References Cellular processes DNA DNA repair Molecular genetics Mutation Senescence
DNA damage (naturally occurring)
[ "Chemistry", "Biology" ]
8,450
[ "DNA repair", "Senescence", "Molecular genetics", "Cellular processes", "Molecular biology", "Metabolism" ]
37,630,116
https://en.wikipedia.org/wiki/Information-centric%20networking
Information-centric networking (ICN) is an approach to evolve the Internet infrastructure away from a host-centric paradigm, based on perpetual connectivity and the end-to-end principle, to a network architecture in which the focal point is identified information (or content or data). Some of the application areas of ICN are web applications, multimedia streaming, the Internet of Things, wireless sensor networks and vehicular networks, as well as emerging applications such as social networks and industrial IoT. In this paradigm, connectivity may well be intermittent; end-host and in-network storage can be capitalized upon transparently, as bits in the network and on data storage devices have exactly the same value; mobility and multi-access are the norm; and anycast, multicast, and broadcast are natively supported. Data becomes independent from location, application, storage, and means of transportation, enabling in-network caching and replication. The expected benefits are improved efficiency, better scalability with respect to information/bandwidth demand and better robustness in challenging communication scenarios. In information-centric networking the cache is a network-level solution, and it has rapidly changing cache states, higher request arrival rates and smaller cache sizes. In particular, information-centric networking caching policies should be fast and lightweight. IRTF Working Group The Internet Research Task Force (IRTF) is sponsoring a research group on Information-Centric Networking Research, which serves as a forum for the exchange and analysis of ICN research ideas and proposals. Current and future work items and outputs are managed on the ICNRG wiki. References Computer networking
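As a rough illustration of what a "fast and lightweight" caching policy can look like in practice, the following Python sketch implements a minimal least-recently-used (LRU) content store keyed by content name. It is not taken from any ICN specification; the class, method, and parameter names are assumptions chosen for clarity, and real ICN forwarders use considerably more elaborate data structures.

from collections import OrderedDict

class ContentStore:
    # Minimal in-network cache: every cached data object is indexed by its content name,
    # and each lookup or insertion is an O(1) operation, which is the "lightweight" property
    # the text above calls for.
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.store = OrderedDict()  # content name -> data object

    def get(self, name):
        if name not in self.store:
            return None               # cache miss: the request would be forwarded upstream
        self.store.move_to_end(name)  # O(1) recency update
        return self.store[name]

    def put(self, name, data):
        if name in self.store:
            self.store.move_to_end(name)
        self.store[name] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict the least recently used entry

A router-like node would call put() on every data object it forwards and get() on every incoming request; because cache states change rapidly and request rates are high, anything much heavier than a constant-time policy like this quickly becomes a bottleneck.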
Information-centric networking
[ "Technology", "Engineering" ]
326
[ "Computer networking", "Computer science", "Computer engineering" ]
37,631,059
https://en.wikipedia.org/wiki/Ontario%20silver%20mine
The Ontario silver mine is a mine that was active starting in 1872, and is located near Park City, Utah, United States. History The lode was discovered by accident on 19 January 1872 by Herman Budden, Rector Steen (Pike), John Kain, and Gus McDowell. The mine was purchased by George Hearst through R. C. Chambers from the prospectors for $27,000 on 24 August 1872. Hearst and his business partners James Ben Ali Haggin and Lloyd Tevis owned this mine and constructed the necessary infrastructure to make it productive, including hoists and a stamp mill. The mine was not profitable for its first three years. According to legend, expenses of development substantially drained Hearst's financial resources. As a result of his straitened circumstances, Hearst sold his home and horses, and even dismissed his servants and enrolled his son William Randolph Hearst in public school. Chambers, who had been retained as manager, brought the bonanza ore body into production by the late 1870s. It eventually produced fifty million dollars' worth of silver and lead. By the time of Hearst's death in 1891, the Ontario mine had paid him more than $12 million in dividends. It was one of four big mines in the West in which he had bought shares, the others being the Ophir on the Comstock Lode, the Homestake Mine (South Dakota), and the Anaconda Copper Mine (Montana). The mine also made manager Chambers one of Utah's Bonanza Kings. The Ontario mine was credited as being more consistent in yielding annual dividends during the late nineteenth century than any other mine in Utah. The Ontario company's mill was also the birthplace of two significant hydrometallurgical processes, the Russell Process and the Cyanide Process. Edward H. Russell (Yale 1878) developed his process for working low-grade silver ores by leaching in 1883–1884, and young Louis Janin (UC Berkeley) experimented with cyanide on the ores, filing a caveat to patent a cyanide process in 1886. Between 1874 and 1964, the Ontario Mine produced 41,289 ounces of gold, 55,710,608 ounces of silver, 164,231,209 pounds of lead, 210,350,684 pounds of zinc, and 3,911,102 pounds of copper. Primary ores included argentiferous galena, sphalerite, and tetrahedrite-tennantite with pyrite and quartz gangue. The Ontario mine reopened as a tourist attraction in 1995, only to close again after a few years. See also Ontario Hot Springs References Further reading (1994) "Mining" article in the Utah History Encyclopedia. The article was written by Philip F. Notarianni and the Encyclopedia was published by the University of Utah Press. ISBN 9780874804256. Archived from the original on November 4, 2023, and retrieved on October 2, 2024. Buildings and structures in Summit County, Utah Mines in Utah Hearst family 1872 in Utah Territory Landmarks in Utah Silver mines in the United States Stamp mills
Ontario silver mine
[ "Chemistry", "Engineering" ]
635
[ "Stamp mills", "Metallurgical facilities", "Mining equipment" ]
37,633,242
https://en.wikipedia.org/wiki/Domain%20wall%20%28magnetism%29
A domain wall is a term used in physics which can have similar meanings in magnetism, optics, or string theory. These phenomena can all be generically described as topological solitons which occur whenever a discrete symmetry is spontaneously broken. Magnetism In magnetism, a domain wall is an interface separating magnetic domains. It is a transition between different magnetic moments and usually undergoes an angular displacement of 90° or 180°. A domain wall is a gradual reorientation of individual moments across a finite distance. The domain wall thickness depends on the anisotropy of the material, but on average spans across around 100–150 atoms. The energy of a domain wall is simply the difference between the magnetic moments before and after the domain wall was created. This value is usually expressed as energy per unit wall area. The width of the domain wall varies due to the two opposing energies that create it: the magnetocrystalline anisotropy energy and the exchange energy, both of which tend to be as low as possible so as to be in a more favorable energetic state. The anisotropy energy is lowest when the individual magnetic moments are aligned with the crystal lattice axes, thus reducing the width of the domain wall. Conversely, the exchange energy is reduced when the magnetic moments are aligned parallel to each other and thus makes the wall thicker, due to the repulsion between them (where anti-parallel alignment would bring them closer, working to reduce the wall thickness). In the end an equilibrium is reached between the two and the domain wall's width is set as such. An ideal domain wall would be fully independent of position, but the structures are not ideal and so get stuck on inclusion sites within the medium, also known as crystallographic defects. These include missing or different (foreign) atoms, oxides, insulators and even stresses within the crystal. This prevents the formation of domain walls and also inhibits their propagation through the medium. Thus a greater applied magnetic field is required to overcome these sites. Note that the magnetic domain walls are exact solutions to classical nonlinear equations of magnets (Landau–Lifshitz model, nonlinear Schrödinger equation and so on). Symmetry of multiferroic domain walls Since domain walls can be considered as thin layers, their symmetry is described by one of the 528 magnetic layer groups. To determine the layer's physical properties, a continuum approximation is used which leads to point-like layer groups. If the continuous translation operation is considered as identity, these groups transform to magnetic point groups. It was shown that there are 125 such groups. It was found that if a magnetic point group is pyroelectric and/or pyromagnetic, then the domain wall carries polarization and/or magnetization, respectively. These criteria were derived from the conditions of the appearance of the uniform polarization and/or magnetization. After their application to any inhomogeneous region, they predict the existence of even parts in functions of the distribution of order parameters. Identification of the remaining odd parts of these functions was formulated based on symmetry transformations that interrelate domains. The symmetry classification of magnetic domain walls contains 64 magnetic point groups. Symmetry-based predictions of the structure of the multiferroic domain walls have been proven using a phenomenological coupling via magnetization and/or polarization spatial derivatives (flexomagnetoelectric coupling).
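To make the balance between the anisotropy and exchange energies described above concrete, the short Python sketch below evaluates the standard textbook estimates for the width and areal energy of a 180° Bloch wall in a uniaxial material, delta ~ pi*sqrt(A/K) and sigma ~ 4*sqrt(A*K). These formulas and the iron-like parameter values are common rule-of-thumb estimates rather than numbers quoted in this article, so treat the results as orders of magnitude only.

import math

# A: exchange stiffness (J/m); K: uniaxial anisotropy constant (J/m^3).
# Both values below are illustrative, iron-like assumptions.
A = 1.0e-11
K = 5.0e4

delta = math.pi * math.sqrt(A / K)   # wall width in metres
sigma = 4.0 * math.sqrt(A * K)       # wall energy per unit area in J/m^2

print(f"wall width  ~ {delta * 1e9:.0f} nm")      # a few tens of nanometres
print(f"wall energy ~ {sigma * 1e3:.1f} mJ/m^2")

A width of a few tens of nanometres is broadly consistent with the statement above that a wall spans on the order of 100–150 atomic spacings; a larger anisotropy K narrows the wall, while a larger exchange stiffness A widens it, exactly the competition described in the text.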
Depinning of a domain wall Non-magnetic inclusions in the volume of a ferromagnetic material, or dislocations in crystallographic structure, can cause "pinning" of the domain walls (see animation). Such pinning sites cause the domain wall to sit in a local energy minimum and an external field is required to "unpin" the domain wall from its pinned position. The act of unpinning will cause sudden movement of the domain wall and sudden change of the volume of both neighbouring domains; this causes Barkhausen noise. Types of walls Bloch wall A Bloch wall is a narrow transition region at the boundary between magnetic domains, over which the magnetization changes from its value in one domain to that in the next, named after the physicist Felix Bloch. In a Bloch domain wall, the magnetization rotates about the normal of the domain wall. In other words, the magnetization always points along the domain wall plane in a 3D system, in contrast to Néel domain walls. Bloch domain walls appear in bulk materials, i.e. when the size of the magnetic material is considerably larger than the domain wall width (according to the width definition of Lilley). In this case the energy of the demagnetization field does not impact the micromagnetic structure of the wall. Mixed cases are possible as well, when the demagnetization field changes the magnetic domains (magnetization direction in domains) but not the domain walls. Néel wall A Néel wall is a narrow transition region between magnetic domains, named after the French physicist Louis Néel. In the Néel wall, the magnetization smoothly rotates from the direction of magnetization within the first domain to the direction of magnetization within the second. In contrast to Bloch walls, the magnetization rotates about a line that is orthogonal to the normal of the domain wall. In other words, it rotates such that it points out of the domain wall plane in a 3D system. It consists of a core with fast-varying rotation, where the magnetization points nearly orthogonally to the two domains, and two tails where the rotation logarithmically decays. Néel walls are the common magnetic domain wall type in very thin films, where the exchange length is very large compared to the thickness. Without magnetic anisotropy, Néel walls would spread across the whole volume. See also Ferromagnetism Flux pinning Ginzburg–Landau theory Magnetic domain Magnetic flux quantum Quantum vortex Topological defect References External links Illustration of a Bloch and Néel Wall Bloch wall transition animation 2-d stability of the Néel wall, Antonio DeSimone, Hans Knüpfer and Felix Otto in Calculus of Variations and Partial Differential Equations, 2006 Ferromagnetism
Domain wall (magnetism)
[ "Chemistry", "Materials_science" ]
1,252
[ "Magnetic ordering", "Ferromagnetism" ]
39,093,199
https://en.wikipedia.org/wiki/Mathematical%20Q%20models
Mathematical Q models provide a model of the earth's response to seismic waves. In reflection seismology, the anelastic attenuation factor, often expressed as seismic quality factor or Q, which is inversely proportional to the attenuation factor, quantifies the effects of anelastic attenuation on the seismic wavelet caused by fluid movement and grain boundary friction. When a plane wave propagates through a homogeneous viscoelastic medium, the effects of amplitude attenuation and velocity dispersion may be combined conveniently into the single dimensionless parameter, Q. As a seismic wave propagates through a medium, the elastic energy associated with the wave is gradually absorbed by the medium, eventually ending up as heat energy. This is known as absorption (or anelastic attenuation) and will eventually cause the total disappearance of the seismic wave. The frequency-dependent attenuation of seismic waves leads to decreased resolution of seismic images with depth. Transmission losses may also occur due to friction or fluid movement, and for a given physical mechanism, they can be conveniently described with an empirical formulation where elastic moduli and propagation velocity are complex functions of frequency. Bjørn Ursin and Tommy Toverud published an article in which they compared different Q models. Basics In order to compare the different models they considered plane-wave propagation in a homogeneous viscoelastic medium. They used the Kolsky–Futterman model as a reference and studied several other models. These other models were compared with the behavior of the Kolsky–Futterman model. The Kolsky–Futterman model was first described in the article 'Dispersive body waves' by Futterman (1962). 'Seismic inverse Q-filtering' by Yanghua Wang (2008) contains an outline discussing the theory of Futterman, beginning with the wave equation (1.1), in which U(r,w) is the plane wave of radial frequency w at travel distance r, k is the wavenumber and i is the imaginary unit. Reflection seismograms record the reflection wave along the propagation path r from the source to reflector and back to the surface. Equation (1.1) has an analytical solution in terms of the wave number k. When the wave propagates in inhomogeneous seismic media the propagation constant k must be a complex value that includes not only an imaginary part, the frequency-dependent attenuation coefficient, but also a real part, the dispersive wave number. We can call this K(w) a propagation constant in line with Futterman. k(w) can be linked to the phase velocity of the wave. Kolsky's attenuation-dispersion model To obtain a solution that can be applied to seismic data, k(w) must be connected to a function that represents the way in which U(r,w) propagates in the seismic media. This function can be regarded as a Q-model. In his outline Wang calls the Kolsky–Futterman model the Kolsky model. The model assumes the attenuation α(w), equation (1.5), to be strictly linear with frequency over the range of measurement, and defines the phase velocity, equation (1.6), in terms of cr and Qr, the phase velocity and the Q value at a reference frequency wr. For a large value of Qr >> 1 the solution (1.6) can be simplified by approximation. Kolsky's model was derived from experimental observations, with which it fits well. The theory for materials satisfying the linear attenuation assumption requires that the reference frequency wr is a finite (arbitrarily small but nonzero) cut-off on the absorption.
According to Kolsky, we are free to choose wr following the phenomenological criterion that it be small compared with the lowest measured frequency w in the frequency band. More information regarding this concept can be found in Futterman (1962). Computations For each of the Q models Ursin and Toverud presented in their article, they computed the attenuation (1.5) and phase velocity (1.6) in the frequency band 0–300 Hz. Fig. 1 presents the graph for the Kolsky model – attenuation (left) and phase velocity (right) with cr = 2000 m/s, Qr = 100 and wr = 2100 Hz. Q models Wang listed the different Q models that Ursin and Toverud applied in their study, classifying the models into two groups. The first group consists of models 1-5 below, the other group comprising models 6-8. The main difference between these two groups is the behaviour of the phase velocity when the frequency approaches zero. Whereas the first group has a zero-valued phase velocity, the second group has a finite, nonzero phase velocity. 1) the Kolsky model (linear attenuation) 2) the Strick–Azimi model (power-law attenuation) 3) the Kjartansson model (constant Q) 4) Azimi's second and third models (non-linear attenuation) 5) Müller's model (power-law Q) 6) the Zener model (the standard linear solid Q model for attenuation and dispersion) 7) the Cole–Cole model (a general linear solid) 8) a new general linear model Notes References External links Some aspects of seismic inverse Q-filtering theory by Knut Sørsdal Seismology measurement Geophysics
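The attenuation and phase-velocity expressions referred to above as equations (1.5) and (1.6) did not survive in this copy of the text, so the following Python sketch uses the commonly cited forms of the Kolsky model as a stand-in. Both the formulas and the assumed reference angular frequency are offered as assumptions for illustration; consult Wang (2008) or Ursin and Toverud's paper for the exact expressions and parameter values.

import numpy as np

# Commonly cited Kolsky-model forms (assumed here, not copied from the article):
#   attenuation:     alpha(w) = w / (2 * c_r * Q_r)
#   phase velocity:  1/c(w)  = (1/c_r) * (1 - ln(w / w_r) / (pi * Q_r))
c_r, Q_r = 2000.0, 100.0            # reference velocity (m/s) and quality factor, as in the text
w_r = 2.0 * np.pi * 100.0           # reference angular frequency (assumed value)
f = np.linspace(1.0, 300.0, 300)    # frequency band 0-300 Hz used in the comparison
w = 2.0 * np.pi * f

alpha = w / (2.0 * c_r * Q_r)                        # attenuation coefficient, linear in frequency
c = c_r / (1.0 - np.log(w / w_r) / (np.pi * Q_r))    # weakly dispersive phase velocity

# alpha grows linearly with frequency, while c changes by only about a per cent
# across the band for Q_r = 100, matching the qualitative behaviour described above.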
Mathematical Q models
[ "Physics" ]
1,147
[ "Applied and interdisciplinary physics", "Geophysics" ]
39,093,307
https://en.wikipedia.org/wiki/Chandy%E2%80%93Misra%E2%80%93Haas%20algorithm%20resource%20model
The Chandy–Misra–Haas algorithm resource model checks for deadlock in a distributed system. It was developed by K. Mani Chandy, Jayadev Misra and Laura M. Haas. Locally dependent Consider the n processes P1, P2, P3, P4, P5, ..., Pn which are performed in a single system (controller). P1 is locally dependent on Pn if P1 depends on P2, P2 on P3, and so on, with Pn−1 depending on Pn; that is, if the chain of dependencies leads from P1 to Pn within the same controller, then P1 is locally dependent on Pn. P1 is said to be locally dependent on itself if it is locally dependent on Pn and Pn in turn depends on P1, i.e. if the chain of dependencies returns to P1. Description The algorithm uses a message called probe(i,j,k), sent from the controller of process Pj to the controller of process Pk. It denotes a probe started by process Pi to find whether a deadlock has occurred or not. Every process Pj maintains a boolean array dependent which contains the information about the processes that depend on it. Initially the values of each array are all "false". Controller sending a probe Before sending, the controller checks whether Pj is locally dependent on itself. If so, a deadlock is declared. Otherwise it checks whether Pi is locally dependent on Pj, whether Pj is waiting for a resource held by Pk, and whether Pj and Pk are on different controllers. Once all the conditions are satisfied it sends the probe. Controller receiving a probe On the receiving side, the controller checks whether Pk is actively performing a task. If so, it ignores the probe. Otherwise, it checks whether Pk has yet to reply to all requests from Pj and whether dependentk(i) is false. Once this is verified, it assigns true to dependentk(i). Then it checks whether k is equal to i. If both are equal, a deadlock is declared; otherwise it sends the probe on to the next dependent process. Algorithm In pseudocode, the algorithm works as follows: Controller sending a probe if Pj is locally dependent on itself then declare deadlock else for all Pj, Pk such that (i) Pi is locally dependent on Pj, (ii) Pj is waiting for Pk and (iii) Pj, Pk are on different controllers, send probe(i, j, k) to home site of Pk Controller receiving a probe if (i) Pk is idle / blocked, (ii) dependentk(i) = false, and (iii) Pk has not replied to all requests of Pj then begin dependentk(i) = true; if k == i then declare that Pi is deadlocked else for all Pa, Pb such that (i) Pk is locally dependent on Pa, (ii) Pa is waiting for Pb and (iii) Pa, Pb are on different controllers, send probe(i, a, b) to home site of Pb end Example P1 initiates deadlock detection. C1 sends the probe saying P2 depends on P3. Once the message is received by C2, it checks whether P3 is idle. P3 is idle because it is locally dependent on P4, and C2 updates dependent3(2) to true. As above, C2 sends a probe to C3 and C3 sends a probe to C1. At C1, P1 is idle, so it updates dependent1(1) to true. Therefore, deadlock can be declared. Complexity Suppose there are controllers and processes; at most messages need to be exchanged to detect a deadlock, with a delay of messages. References Algorithms
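The pseudocode above can be condensed into a small executable simulation. The Python sketch below runs all controllers in a single process and simplifies the bookkeeping: it treats each wait-for edge directly, omits the reply-tracking condition, and skips the initiator's local self-dependency check, so it illustrates the probe idea rather than faithfully implementing the published algorithm. All identifiers (waits_for, controller_of, and so on) are invented for this example.

from collections import defaultdict

waits_for = defaultdict(set)      # waits_for[j] = processes that Pj is waiting on
controller_of = {}                # controller_of[j] = controller (site) hosting Pj
dependent = defaultdict(dict)     # dependent[k][i] = True once Pk learns it blocks Pi

def send_probe(i, j, k, deadlocked):
    # Deliver probe(i, j, k) to the controller of Pk.
    if dependent[k].get(i):
        return                                    # probe already seen, ignore it
    dependent[k][i] = True                        # record that Pk transitively blocks Pi
    if k == i:
        deadlocked.add(i)                         # the probe returned to its initiator
        return
    for b in waits_for[k]:
        if controller_of[k] != controller_of[b]:  # cross-controller edge: forward the probe
            send_probe(i, k, b, deadlocked)

def detect_deadlock(i):
    # Initiate deadlock detection on behalf of blocked process Pi.
    deadlocked = set()
    for j in waits_for[i]:
        if controller_of[i] != controller_of[j]:
            send_probe(i, i, j, deadlocked)
    return i in deadlocked

# A simplified three-process cycle across controllers C1, C2, C3, in the spirit of the example above.
controller_of.update({1: "C1", 2: "C2", 3: "C3"})
waits_for[1].add(2); waits_for[2].add(3); waits_for[3].add(1)
print(detect_deadlock(1))   # True: the probe travels C1 -> C2 -> C3 and returns to C1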
Chandy–Misra–Haas algorithm resource model
[ "Mathematics" ]
768
[ "Applied mathematics", "Algorithms", "Mathematical logic" ]
39,094,073
https://en.wikipedia.org/wiki/Social%20facilitation%20in%20animals
Social facilitation in animals is when the performance of a behaviour by an animal increases the probability of other animals also engaging in that behaviour or increasing the intensity of the behaviour. More technically, it is said to occur when the performance of an instinctive pattern of behaviour by an individual acts as a releaser for the same behaviour in others, and so initiates the same line of action in the whole group. It has been phrased as "The energizing of dominant behaviors by the presence of others." Social facilitation occurs in a wide variety of species under a range of circumstances. These include feeding, scavenging, teaching, sexual behaviour, coalition formation, group displays, flocking behaviour, and dustbathing. For example, in the paper wasp species Agelaia pallipes, social facilitation is used for recruitment to food resources. By using chemical communication, A. pallipes pool their independent search efforts to locate and defend food sources from other organisms. Social facilitation is sometimes used to develop successful social scavenging strategies. Griffon vultures are highly specialized scavengers that rely on finding carcasses. When foraging, griffon vultures soar at up to 800 m above the ground. Although some fresh carcasses are located directly by searching birds, the majority of individuals find food by following other vultures, i.e. social facilitation. A chain reaction of information transfer extends from the carcass as descending birds are followed by other birds, which themselves cannot directly see the carcass, ultimately drawing birds from an extensive area over a short period of time. Moller used a play-back technique to investigate the effects of singing by the black wheatear (Oenanthe leucura) on the behaviour of both conspecifics and heterospecifics. It was found that singing increased in both groups in response to the wheatear, and Moller suggested the conspicuous dawn (and dusk) chorus of bird song may be augmented by social facilitation due to the singing of conspecifics as well as heterospecifics. See also Behavioral contagion Social facilitation References Ethology Animal communication
Social facilitation in animals
[ "Biology" ]
464
[ "Behavioural sciences", "Ethology", "Behavior" ]
39,094,287
https://en.wikipedia.org/wiki/Stimulus%E2%80%93response%20compatibility
Stimulus–response (S–R) compatibility is the degree to which a person's perception of the world is compatible with the required action. S–R compatibility has been described as the "naturalness" of the association between a stimulus and its response, such as a left-oriented stimulus requiring a response from the left side of the body. A high level of S–R compatibility is typically associated with a shorter reaction time, whereas a low level of S–R compatibility tends to result in a longer reaction time, a phenomenon known as the Simon effect. The term "stimulus-response compatibility" was first coined by Arnold Small in a presentation in 1951. Determinants of reaction time Visual location S–R compatibility can be seen in the variation in the amount of time taken to respond to a visual stimulus, given the similarity of the event that prompts the action, and the action itself. For example, a visual stimulus in the left of a person's field of vision is more compatible with a response involving the left hand than with a response involving the right hand. Evidence In 1953, Paul Fitts and C. M. Seeger ran the first experiment conclusively demonstrating that certain responses are more compatible with certain stimuli, during which subjects were instructed to press buttons on their left and right in response to lights which could appear in either the left or right corner of their field of vision. The study found that subjects took longer when the stimulus and response were incompatible. This was not in and of itself evidence for a relationship between S–R compatibility and reaction time; an alternate hypothesis posited that the delay was simply the result of the sensory information taking longer to reach neural processing centers when hemispheres are crossed. This alternate hypothesis was disproven by a follow-up trial in which Fitts and Seeger had subjects cross their arms, so that the left hand would press the right button and vice versa; the difference between reaction times of subjects in the standard and crossed-arms trials was statistically insignificant, even though the neural signal traveled a greater distance. Refinements and improvements The reverse scenario was tested in a 1954 experiment by Richard L. Deininger and Paul Fitts, in which it was demonstrated that subjects responded more quickly when the stimulus and response were compatible. Solid evidence that S–R compatibility impacted the response planning phase was not found until 1995, when Bernhard Hommel demonstrated that modifying stimuli in ways unrelated to S–R compatibility, such as the size of the objects on the computer screen, did not increase reaction time. Auditory location This phenomenon also applies to auditory stimuli. For example, hearing a tone in one ear prepares that side of the body to respond, and the reaction time will be longer if one is required to perform an action with the side of the body opposite to the ear in which the tone was heard. Evidence In 2000, T. E. Roswarski and Robert Proctor conducted a variation of the original Fitts and Seeger experiment involving auditory tones in each ear instead of lights. The experiment showed that the reaction time for auditory signals is also influenced by S–R compatibility. Motion Another determinant of S–R compatibility is the destination of a moving stimulus.
For example, an object moving towards the right hand is more compatible with a right-hand response than an object moving towards the left hand, even if the object is closer to the left hand when the stimulus is perceived. Evidence An experiment by Claire Michaels in 1988 demonstrated the role of motion in determining S–R compatibility. In this experiment, subjects were presented with a computer display with their hands extended, and a square on the screen would appear at some random location and move towards either the right or left hand. Choice reaction time was faster when subjects responded with the same hand the square was moving towards. The experiment showed that reaction time was affected more by the destination of the square than by its current location relative to the hand: reaction time was even shorter when the square started in the middle of the screen than when it started close to the destination hand. Affordance Also important to S–R compatibility is the type of stimulus; familiar objects tend to invite specific responses. As one example, if an object is perceived as more easily (or more typically) manipulable with one hand than the other, any response requiring use of the other hand will tend to have a long reaction time. Evidence In 1998, Mike Tucker and Rob Ellis conducted an experiment at the University of Plymouth which expanded the concept of S–R compatibility to higher-order cognition. In their experiment, subjects were given two buttons, one on their left and one on their right, and shown a series of pictures of familiar objects like frying pans and teacups. For each image, they were asked to press the left button if the object in the image was upright and the right button if the object was inverted. However, the objects also varied in their rotation, such that the handles faced either left or right. The experiment revealed that seeing the handle pointing in one direction primed subjects to reach with the corresponding hand, which caused discrepancies in S–R compatibility that affected reaction time; for example, a subject seeing an inverted teapot with a handle pointing left took longer to press the button on the right than a subject who saw the same teapot with its handle pointing right. Expectations Prior knowledge and stereotyping play a role in S–R compatibility. If a required response is inconsistent with a person's stereotyped knowledge of a stimulus and its "typical" reactions, even if the person is aware of the necessary response in the new situation, compatibility will be low. For example, light switches in the United Kingdom are "on" when toggled down, but light switches in the United States are "on" when toggled up; a native of one country visiting the other will demonstrate low S–R compatibility when turning the lights on or off. As another example, red lights are universally associated with "stop" and green with "go", and a reversed configuration will result in a longer reaction time. Applications S–R compatibility is an important consideration in the field of human-computer interaction, and in software engineering. Programs are easier and more intuitive to use when the input of the user and the output of the program are S–R-compatible. This would also be an important consideration in the physical design of objects; for instance, an electrical appliance with an on/off switch will be most intuitive if it is designed to conform to cultural expectations.
Additionally, principles of S–R compatibility are important considerations for psychology researchers; experiments may need to be controlled for the phenomenon. For example, behavioral neuroscience researchers should make sure that a task does not inadvertently vary along dimensions of S–R compatibility. See also Stroop effect Mental chronometry Hick's law Simon effect Priming Further reading Bächtold, Daniel, Martin Baumüller, & Peter Brugger. "Stimulus-response compatibility in representational space". Neuropsychologia, Volume 36, Issue 8, 1 August 1998, Pages 731–735 References External links Usabilityfirst.com Psycnet.apa.org Books.google.com Books.google.com Experimental psychology Human–computer interaction Cognitive science Cognitive psychology 1950s neologisms
Stimulus–response compatibility
[ "Engineering", "Biology" ]
1,499
[ "Behavior", "Behavioural sciences", "Human–machine interaction", "Cognitive psychology", "Human–computer interaction" ]
39,096,026
https://en.wikipedia.org/wiki/Effect%20of%20radiation%20on%20perceived%20temperature
The "radiation effect" results from radiation heat exchange between human bodies and surrounding surfaces, such as walls and ceilings. It may lead to phenomena such as houses feeling cooler in the winter and warmer in the summer at the same temperature. For example, in a room in which air temperature is maintained at 22 °C at all times, but in which the inner surfaces of the house is estimated to be an average temperature of 10 °C in the winter or 25 °C in the summer, heat transfer from the surfaces to the individual will occur, resulting in a difference in the perceived temperature. We can observe and compare the rate of radiation heat transfer between a person and the surrounding surfaces if we first make a few simplifying assumptions: The heat exchange in the environment is in a "steady state", meaning that there is a constant flow of heat either into or out of the house. The person is completely surrounded by the interior surfaces of the room. Heat transfer by convection is not considered. The walls, ceiling, and floor are all at the same temperature. For an average person, the outer surface area is 1.4 m2, the surface temperature is 30 °C, and the emissivity (ε) is 0.95. Emissivity is the ability of a surface to emit radiative energy compared to that of a black body at the same temperature. We will be using the following equation to find out how much heat is lost by a person standing in the same room in summertime as compared to the winter, at exactly the same thermostat reading temperature: Where is the rate of heat loss (W), is the emissivity (or the ability of an objects surface to emit energy by radiation) of a person, is the Stefan–Boltzmann constant (), is the surface area of a person, is the surface temperature of a person (K), and is the surface temperature of the walls, ceiling, and floor (K). This equation is only valid for an object standing in a completely enclosed room, box, etc. In the winter, the amount of heat loss from a person is then 152 Watts if the inner surfaces of the room is, for example, 10 degrees Celsius. In the summer, the amount of heat loss from a person, when the inner surfaces of the room were 25 degrees Celsius, was found to be 40.9 Watts. Thermal radiation emitted by all bodies above absolute zero (-273.15 °C). It differs from other forms of electromagnetic radiation such as x-rays, gamma rays, microwaves that are not related to temperature. Therefore, people constantly radiate their body heat, but at different rates depending on body and surrounding temperatures. From these values, the rate of heat loss from a person is almost four times as large in the winter than in the summer, which explains the "chill" we feel in the winter even if the thermostat setting is kept the same. References Heat transfer
Effect of radiation on perceived temperature
[ "Physics", "Chemistry" ]
606
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Thermodynamics" ]
46,439,622
https://en.wikipedia.org/wiki/Adequate%20Public%20Facilities%20Ordinance
An Adequate Public Facilities Ordinance (APFO, also known as a Concurrency Regulation) is an American legislative method to tie public infrastructure to growth for a region. APFOs take into account the availability of infrastructure. They can manage growth, but are considered separate from growth controls such as building moratoria. History Ramapo, New York (see Golden v. Planning Board of Ramapo); Petaluma, California; and Boulder, Colorado were some of the early adopters of this tool in America. The state of Florida uses the term "concurrency" in its growth management act. Scope APFO regulations are typically applied to a jurisdiction which has legislative control of a given area. In America, this can be at a state, county, or city level. A conflict can occur when APFO regulations differ in scope between jurisdictions where there is shared funding and legislative authority (such as a city located inside a county that funds schools). While APFOs are intended to mitigate infrastructure shortcomings for a particular area, the mitigation may apply to areas offsite of the development project. APFO regulations usually apply to individual projects on a case-by-case basis. APFO regulations take into account some or all of a jurisdiction's infrastructure requirements, including: Transportation School facilities Water supply Water treatment Roads Other elements include: CIP – Capital Improvement Programs Service Level Standards Criticism Traditional opponents of APFO legislation include industries affected by moratoria or fees, including realtors, developers, and some Smart Growth advocates. Some locations that have enacted APFOs have experienced increases in housing prices, affecting affordable housing, in conjunction with the positive effects of relief from school capacity shortcomings. See also Ecistics Activity centre Context theory Exclusionary zoning Form-based codes Inclusionary zoning Mixed use development New urbanism Non-conforming use Planning permission Principles of Intelligent Urbanism Reverse sensitivity Spot zoning Statutory planning NIMBY References External links An example of a Florida APFO regulation Dolan v. City of Tigard – property rights Real estate in the United States Real property law Urban planning
Adequate Public Facilities Ordinance
[ "Engineering" ]
422
[ "Urban planning", "Architecture" ]
28,205,830
https://en.wikipedia.org/wiki/Asprox%20botnet
The Asprox botnet (discovered around 2008), also known by its aliases Badsrc and Aseljo, is a botnet mostly involved in phishing scams and performing SQL injections into websites in order to spread malware. It is highly infectious malware which spreads through email or through cloned websites. It can be used to capture personal or financial information and trace activities online. Operations Since its discovery in 2008 the Asprox botnet has been involved in multiple high-profile attacks on various websites in order to spread malware. The botnet itself consists of roughly 15,000 infected computers as of May 2008, although the size of the botnet is highly variable, as the controllers of the botnet have been known to deliberately shrink (and later regrow) their botnet to prevent more aggressive countermeasures from the IT community. The botnet propagates itself in a somewhat unusual way, as it actively searches for and infects vulnerable websites running Active Server Pages. Once it finds a potential target, the botnet performs a SQL injection on the website, inserting an IFrame which redirects the user visiting the site to a site hosting malware. The botnet usually attacks in waves – the goal of each wave is to infect as many websites as possible, thus achieving the highest possible spread rate. Once a wave is completed the botnet lies dormant for an extended amount of time, likely to prevent aggressive counterreactions from the security community. The initial wave took place in July 2008, which infected an estimated 1,000 – 2,000 pages. An additional wave took place in October 2009, infecting an unknown number of websites. Another wave took place in June 2010, increasing the estimated total number of infected domains from 2,000 to an estimated 10,000 – 13,000 within a day. Notable high-profile infections While the infection targets of the Asprox botnet are randomly determined through Google searches, some high-profile websites have been infected in the past. Some of these infections have received individual coverage. Sony PlayStation U.S. Adobe's Serious Magic website Several government, healthcare and business related websites See also Botnet Malware Email spam Cybercrime Internet security References Internet security Distributed computing projects Spamming Botnets
Asprox botnet
[ "Engineering" ]
471
[ "Distributed computing projects", "Information technology projects" ]