id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
35,489,100 | https://en.wikipedia.org/wiki/Gravitational%20soliton | A gravitational soliton is a soliton solution of the Einstein field equation. It can be separated into two kinds, a soliton of the vacuum Einstein field equation generated by the Belinski–Zakharov transform, and a soliton of the Einstein–Maxwell equations generated by the Belinski–Zakharov–Alekseev transform.
References
General relativity | Gravitational soliton | [
"Physics"
] | 78 | [
"General relativity",
"Relativity stubs",
"Theory of relativity"
] |
35,489,153 | https://en.wikipedia.org/wiki/Einstein%E2%80%93Rosen%20metric | In general relativity, the Einstein–Rosen metric is an exact solution to the Einstein field equations derived in 1937 by Albert Einstein and Nathan Rosen. It is the first exact solution to describe the propagation of a gravitational wave.
This metric can be written in a form such that the Belinski–Zakharov transform applies, and thus has the form of a gravitational soliton.
In 1972 and 1973, J. R. Rao, A. R. Roy, and R. N. Tiwari published a class of exact solutions involving the Einstein-Rosen metric.
In 2021 Robert F. Penna found an algebraic derivation of the Einstein-Rosen metric.
As a historical footnote to the Einstein–Rosen metric, Einstein for some time believed that he had found a proof that gravitational waves do not exist.
Notes
Albert Einstein
Equations of physics
General relativity | Einstein–Rosen metric | [
"Physics",
"Mathematics"
] | 180 | [
"Equations of physics",
"Mathematical objects",
"Equations",
"General relativity",
"Relativity stubs",
"Theory of relativity"
] |
45,326,973 | https://en.wikipedia.org/wiki/ISO%207027 | ISO 7027:1999 is an ISO standard for water quality that enables the determination of turbidity. The ISO 7027 technique is used to determine the concentration of suspended particles in a sample of water by measuring the light scattered at right angles to the incident beam. The scattered light is captured by a photodiode, which produces an electronic signal that is converted to a turbidity value.
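As a rough illustration of the measurement chain described above (scattered light to photodiode signal to turbidity value), the sketch below fits a linear calibration from hypothetical formazin standards and applies it to a sample reading; the voltages, FNU values and the linear model are illustrative assumptions, not part of ISO 7027 itself.

```python
# Minimal sketch: converting a 90-degree scattered-light photodiode signal
# into a turbidity reading via a simple linear calibration against formazin
# standards. All calibration points below are hypothetical.

def calibrate(signals, turbidities):
    """Least-squares fit of turbidity = a * signal + b from standard solutions."""
    n = len(signals)
    mean_s = sum(signals) / n
    mean_t = sum(turbidities) / n
    a = sum((s - mean_s) * (t - mean_t) for s, t in zip(signals, turbidities)) / \
        sum((s - mean_s) ** 2 for s in signals)
    b = mean_t - a * mean_s
    return a, b

# Hypothetical photodiode voltages measured for formazin standards of known FNU.
standard_signals = [0.02, 0.21, 0.42, 0.83]      # volts
standard_turbidities = [0.0, 10.0, 20.0, 40.0]   # FNU

a, b = calibrate(standard_signals, standard_turbidities)
sample_signal = 0.30                              # volts, unknown sample
print(f"Turbidity of sample: {a * sample_signal + b:.1f} FNU")
```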
References
Water quality indicators | ISO 7027 | [
"Chemistry",
"Environmental_science"
] | 89 | [
"Water quality indicators",
"Water pollution"
] |
45,327,132 | https://en.wikipedia.org/wiki/Circulating%20water%20plant | A circulating water plant or circulating water system is an arrangement for circulating water through fossil-fuel power stations, chemical plants and oil refineries. The system is required because various industrial process plants use heat exchangers, and also for active fire protection measures. In chemical plants, for example in caustic soda production, water is needed in bulk quantity for the preparation of brine. The circulating water system in any plant consists of a circulator pump, which develops an appropriate hydraulic head, and pipelines to circulate the water through the entire plant.
System description
Circulating water pumps
Circulating water systems are normally of the wet pit type, but for sea water circulation both the wet pit type and the concrete volute type are employed. In some industries, one or two stand-by pumps are also connected in parallel with the CW pumps. It is recommended that these pumps be driven by constant-speed squirrel-cage induction motors. CW pumps are designed as per IS:9137, the standards of the Hydraulic Institute (USA), or equivalent.
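Since pump selection above is tied to the hydraulic head the pump must develop, a back-of-the-envelope sizing calculation may help. The sketch below uses the standard hydraulic power relation P = ρgQH/η; the flow, head and efficiency figures are purely illustrative assumptions, not values taken from IS:9137 or the Hydraulic Institute standards.

```python
# Rough sizing of a circulating water pump motor from required flow and head.
# Flow, head and efficiencies below are hypothetical illustration values.

RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def pump_shaft_power_kw(flow_m3_per_s, head_m, pump_efficiency):
    """Shaft power P = rho * g * Q * H / eta, returned in kW."""
    return RHO * G * flow_m3_per_s * head_m / pump_efficiency / 1000.0

flow = 8.0        # m^3/s per pump (hypothetical)
head = 25.0       # m of developed head (hypothetical)
eta_pump = 0.85   # pump efficiency (hypothetical)
eta_motor = 0.95  # motor efficiency (hypothetical)

shaft_kw = pump_shaft_power_kw(flow, head, eta_pump)
motor_kw = shaft_kw / eta_motor
print(f"Pump shaft power ~ {shaft_kw:.0f} kW, motor input ~ {motor_kw:.0f} kW")
```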
Cooling tower
Mechanical induced-draft cooling towers are now generally employed for cooling the water. Performance testing of cooling towers (both IDCT and NDCT) is carried out as per ATC-105 at a time when the atmospheric conditions are within the permissible limits of deviation from the design conditions. As per the guidelines of the Central Electricity Authority, two mechanical-draft cooling towers or one natural-draft cooling tower must be provided for each 500 MW unit in power plants. The cooling towers are designed as per Cooling Tower Institute codes.
CW treatment system
Some coastal power stations and chemical plants take in sea water for condenser cooling. They use either closed-cycle cooling with cooling towers or once-through cooling. Selection of the type of system is based on the thermal-pollution effect on the sea water and on techno-economics, which depend on the distance of the power station from the coast and the cost of pumping sea water. Due to the high salt concentration, treatment of the circulating water make-up is necessary.
Mechanical description of CW plants
Five (5) circulating water pumps (4 working + 1 standby) of vertical wet-pit type and mixed-flow design, self-water-lubricated, complete with motors and associated accessories.
Electro-hydraulically operated butterfly valve (with actuators), isolating butterfly valve and rubber expansion joints at discharge of each pump. Electrically operated butterfly valves for interconnection of standby pumps to operate as common standby for both the units.
One number CW re-circulation line for each unit, suitable for handling a flow of 50% of one CW pump flow with electrically operated butterfly valve (with actuators).
Complete piping including discharge piping/header of CW pumps, CW duct from CW pump house to condensers and from condensers to the cooling towers, blow down piping (up to ash handling plant and central monitoring basin of ETP), fittings & valves and other accessories as required.
EOT crane for handling & maintenance of CW pumps and monorail and electrically operated pendant control hoist arrangement for maintenance of stoplog gates and trash racks.
One number trash rack for CW pump house bay and two numbers of stop logs for CW pump house.
Air release valves, with isolation valves, in CW piping as per the system requirement.
Hydraulic transient analysis of CW system.
CW pump model study and CW pump house/ sump model studies as required.
Codes and standards
References
External links
Standard Design Criteria/Guidelines for Balance of Plant for Thermal Power Project 2 X (500MW or above) Cea.nic.in
https://law.resource.org/pub/in/bis/S08/is.9137.1978.pdf
Cooling technology
Mechanical engineering
Oil refineries
Power station technology | Circulating water plant | [
"Physics",
"Chemistry",
"Engineering"
] | 766 | [
"Applied and interdisciplinary physics",
"Oil refineries",
"Petroleum",
"Oil refining",
"Mechanical engineering"
] |
45,328,806 | https://en.wikipedia.org/wiki/Galaxy%20X%20%28galaxy%29 | Galaxy X is a postulated dark satellite dwarf galaxy of the Milky Way Galaxy. If it exists, it would be composed mostly of dark matter and interstellar gas with few stars. Its proposed location is some from the Sun, behind the disk of the Milky Way, and some in extent. Galactic coordinates would be (l = -27.4°, b = -1.08°).
Discovery
Observational evidence for this galaxy was presented in 2015, based on the claimed discovery of four Cepheid variable stars by Sukanya Chakrabarti (RIT) and collaborators. The search for the stars was motivated by an earlier study that linked a warp in the HI (atomic hydrogen) disk of the Milky Way Galaxy to the tidal effects of a perturbing galaxy. The unseen perturber's mass was calculated to be about 1% of that of the Milky Way, which would make it the third heaviest satellite of the Milky Way, after the Magellanic Clouds (Large Magellanic Cloud and Small Magellanic Cloud, each some 10x larger than Galaxy X). In this hypothetical model, the putative satellite galaxy would have interacted with the Milky Way some 600 million years ago, coming as close as , and would now be moving away from the Milky Way.
Name
The name "Galaxy X" was coined in 2011 in analogy to Planet X.
Controversy
In November 2015, a group led by P. Pietrukowicz published a paper arguing against the existence of Galaxy X. These authors argued that the four stars were not actually Cepheid variable stars and that their distances might be very different from those claimed in the discovery paper of Chakrabarti et al. On this basis, the authors stated that "there is no evidence for a background dwarf galaxy". However, others still regard the galaxy as likely to exist and maintain that the stars are genuine Cepheids.
List of components
List of claimed components of Galaxy X
Footnotes
References
Further reading
Milky Way Subgroup
Dark galaxies
Hypothetical galaxies | Galaxy X (galaxy) | [
"Physics"
] | 412 | [
"Dark matter",
"Astronomical hypotheses",
"Unsolved problems in physics",
"Astronomical myths",
"Dark galaxies",
"Hypothetical astronomical objects",
"Astronomical objects"
] |
45,329,460 | https://en.wikipedia.org/wiki/Precipitationshed | In meteorology, a precipitationshed is the upwind ocean and land surface that contributes evaporation to a given, downwind location's precipitation. The concept has been described as an "atmospheric watershed". The concept itself rests on a broad foundation of scholarly work examining the evaporative sources of rainfall. Since its formal definition, the precipitationshed has become an element in water security studies and examinations of sustainability, and has been mentioned as a potentially useful tool for examining the vulnerability of rainfall-dependent ecosystems.
Concept
In an effort to conceptualize the recycling of evaporation from a specific location to the spatially explicit region that receives this moisture, the precipitationshed concept was expanded to the evaporationshed. This expanded concept has been highlighted as particularly useful for providing a spatially explicit region for examining the impacts of significant land-use change, such as deforestation, irrigation, or agricultural intensification.
See also
Water cycle
Moisture recycling
Line Gordon
References
External links
Stockholm Resilience Centre Whiteboard talk series
CABI: Precipitationsheds - a new concept for water science
Water Resilience for Human Prosperity
Hydrology
Water and the environment
Precipitation
Water streams | Precipitationshed | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 231 | [
"Hydrology",
"Drainage basins",
"Environmental engineering"
] |
45,330,103 | https://en.wikipedia.org/wiki/Captive%20bubble%20method | The captive bubble method is a method for measuring the contact angle between a liquid and a solid by drop shape analysis. In this method, a bubble of air is injected beneath a solid whose surface is immersed in the liquid, instead of placing a drop on the solid as in the sessile drop technique; the roles of the liquid and the gas phase are thus effectively exchanged.
The method is particularly suitable for solids with high surface energy, where liquids spread out. Hydrogels, such as those that comprise soft contact lenses, are inaccessible to the standard arrangement; so the captive bubble method is also used in such cases. A contact angle is formed on a smooth, periodically heterogeneous solid surface. Above the solid surface, a liquid drop is submerged in a fluid. The measurement of contact angles usually contributes to the measurement of the surface energy of solids in the industry. Different from other methods of measuring the contact angle, such as the sessile drop technique, the system utilized in the captive bubble method has the fluid bubble attached from below to the solid surface, such that both the liquid bubble and the solid interact with a fluid.
Application and significance
Surface energy of solids
When a system is formed from a solid surface and a drop of liquid, the free energy of the system produces energy minima and maxima. When the solid surface is rough or heterogeneous, the system (made up of a solid, a liquid, and a fluid) can have multiple minima of the free energy at different points. One of these minima is the global minimum. The global minimum has the lowest free energy within the system and is defined as the stable equilibrium state. The other minima represent the metastable equilibrium states of the system. In between these minima are energy barriers that hinder transitions between the various metastable states of the system. The transition between metastable states is also affected by the availability of external energy to the system, which is associated with the volume of the liquid drop on top of the solid surface. As such, the volume of the liquid may affect the locations of the minima, which in turn can influence the contact angles formed by the solid and the liquid. The contact angles are directly related to whether the solid surface is ideal, in other words whether it is a smooth, homogeneous surface.
Surface analysis of reverse osmosis membrane
The measurement of contact angles with the captive bubble method can also be useful in the surface analysis of reverse osmosis membranes in the study of membrane performance. Through the analysis of contact angles, properties of membranes, such as roughness, can be determined. The roughness of a membrane, which indicates its effective surface area, can further lead to the investigation of the hydrophilic and hydrophobic properties of the surface. Studies indicate that a higher contact angle corresponds to a more hydrophobic surface in membrane analysis. When the captive bubble method is applied to membrane analysis, several factors can influence the contact angle, including the bubble volume, the type of liquid, and the surface tensions involved.
Surface tensions of lung surface active material
In comparison to the use of the captive bubble method in the measurement of contact angles in other cases, the contact angle in the study of the lung surfactant monolayer is kept at a constant 180 degrees, due to the property of the hydrated agar gel on the ceiling of the bubble. The system applied in the study of lung surfactant is designed to be a leak-proof system, ensuring the independence of the surface film of bubbles from other materials and substances like plastic walls, barriers, and outlets. Instead of adding extra tubing or piercing the bubble air-water interface with needles, this closed system is created by adjusting the pressure within the closed sample chamber by adding or removing aqueous media to regulate the bubble size and surface tension of insoluble films at the bubble surface.
Since the bubble volumes are controlled by modifying the pressure in the sample chamber, the surface area and the surface tension of the surfactant film at the bubble surface are reduced as the volume of the bubble decreases.
The bubble shape, in this case, can vary from spherical to oval depending on the surface tension, which can be calculated through the measurement of the height and diameter of bubbles. In addition to measuring the surface tension, bubble formation can also be utilized in the measurement of the adsorption of lung surfactant, which defines how quickly substances build up on the air-liquid interface of pulmonary surfactants to form a film.
There are two methods to measure adsorption with captive bubbles:
One method of forming bubbles to measure adsorption is to begin with a small bubble 2–3 mm in diameter in a chamber with a diameter of 10 mm, then expand or compress it later. The bubble is first introduced into the chamber with a small plastic tube attached to a 50 µL microsyringe. It is then expanded through a sudden decrease in pressure inside the captive bubble or an increase in chamber volume achieved by moving the piston on the end of the glass cylinder. To calculate the exact adsorption rate, the initial amount of surfactant on the bubble surface before volume modification has to be taken into consideration.
Another method of measuring adsorption is to start a bubble with a fixed volume, instead of a given size or diameter, by utilizing a needle on the bottom inlet of the bubble chamber. The fixed starting volume is usually 200 µL, which corresponds to a bubble around 7 mm in diameter. Just as in the first method, the accumulation of material on the bubble surface during bubble formation has to be calculated in order to evaluate the exact rate of adsorption.
Comparisons between sessile drop technique and captive bubble method
The sessile drop method is another popular way to measure contact angles and is done by placing a two-dimensional drop on a solid surface and controlling the volume of liquid in the drop. The sessile drop method and the captive bubble method are usually interchangeable when performing experiments, as they are both based on the properties of symmetry. Specifically, the axis of symmetry of the drop or bubble makes the contact line of the drop or bubble with the solid surface circular. This creates an observable contact angle corresponding to the contact radius of the drop or bubble.
However, when contact angles are measured on a rough homogeneous surface, the drop and the bubble behave differently during the measurement, in ways that depend on the liquid volume and the contact angle.
On a rough homogeneous surface, the observed contact angle may not represent the actual (intrinsic) contact angle, because the local slope at the contact line is not observable on a rough surface. The observed contact angle on a rough surface is therefore called an apparent angle; it equals the sum of the intrinsic contact angle and the local surface slope at the contact line of the drop or bubble. With the sessile drop method, the observed contact angle usually underestimates the intrinsic contact angle, while the observed contact angle in the captive bubble method overestimates the intrinsic contact angle of the rough surface.
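A toy numerical illustration of the apparent-angle relation just described (apparent angle = intrinsic angle + local surface slope) is sketched below; the sinusoidal roughness profile, the intrinsic angle and all dimensions are hypothetical illustration values.

```python
# Toy illustration: apparent contact angle on a rough surface as the sum of
# the intrinsic (Young) contact angle and the local surface slope at the
# contact line. The roughness profile and all numbers are hypothetical.

import math

intrinsic_deg = 60.0                 # intrinsic contact angle, hypothetical
amplitude, wavelength = 0.5, 20.0    # roughness h(x) = A*sin(2*pi*x/L), microns

def local_slope_deg(x):
    """Slope angle of the roughness profile at horizontal position x (microns)."""
    dhdx = amplitude * (2 * math.pi / wavelength) * math.cos(2 * math.pi * x / wavelength)
    return math.degrees(math.atan(dhdx))

# Sample the apparent angle over one roughness period.
apparent = [intrinsic_deg + local_slope_deg(x) for x in range(0, 20)]
print(f"apparent angle range: {min(apparent):.1f} to {max(apparent):.1f} degrees")
# As the text notes, in practice the sessile drop tends to settle where the
# reading is below the intrinsic angle, the captive bubble where it is above.
```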
If a graph is plotted, respectively, for the measurements of contact angles using the sessile drop method and the captive bubble method concerning the volume of liquid within the drop or bubble and the measured contact angle, the geometrical relationships illustrate different characteristics for each method. In consideration of the relationship between contact angles and the position of the contact for a certain volume in the drop or bubble, the highest and lowest possible contact angles on volume are dependent on each other differently in the two methods.
For the amplitude of oscillations shown in the graph, both the drop and the captive bubble display a similar order of magnitude at a relatively low contact angle. On the other hand, on a rough surface with a relatively high contact angle, the amplitude shown for the drop is larger than that of a captive bubble. The amplitude of oscillation of the lowest and highest possible contact angle demonstrates the difference between the drop method and the captive bubble method, in which the amplitude of the graph of the captive bubble method is comparatively larger than that of the graph of the sessile drop method.
In terms of the wavelength of the graph, the wavelength for both methods spans over a large range of volumes of liquid on the solid surface. Differences in the behavior of the drop and the bubble vary from the lowest possible contact angles to the highest possible contact angles.
References
Surface science
Instrumental analysis
Bubbles (physics) | Captive bubble method | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,736 | [
"Instrumental analysis",
"Foams",
"Bubbles (physics)",
"Surface science",
"Condensed matter physics",
"Fluid dynamics"
] |
45,334,947 | https://en.wikipedia.org/wiki/Newton%E2%80%93Okounkov%20body | In algebraic geometry, a Newton–Okounkov body, also called an Okounkov body, is a convex body in Euclidean space associated to a divisor (or more generally a linear system) on a variety. The convex geometry of a Newton–Okounkov body encodes (asymptotic) information about the geometry of the variety and the divisor. It is a large generalization of the notion of the Newton polytope of a projective toric variety.
It was introduced (in passing) by Andrei Okounkov in his papers in the late 1990s and early 2000s. Okounkov's construction relies on an earlier result of Askold Khovanskii on semigroups of lattice points. Later, Okounkov's construction was generalized and systematically developed in the papers of Robert Lazarsfeld and Mircea Mustață as well as Kiumars Kaveh and Khovanskii.
Beside Newton polytopes of toric varieties, several polytopes appearing in representation theory (such as the Gelfand–Zetlin polytopes and the string polytopes of Peter Littelmann and Arkady Berenstein–Andrei Zelevinsky) can be realized as special cases of Newton–Okounkov bodies.
References
External links
Oberwolfach workshop "Okounkov bodies and applications"
BIRS workshop "Positivity of linear series and vector bundles"
BIRS workshop "Convex bodies and representation theory"
Oberwolfach workshop "New developments in Newton–Okounkov bodies"
Algebraic geometry
Multi-dimensional geometry | Newton–Okounkov body | [
"Mathematics"
] | 327 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
45,336,978 | https://en.wikipedia.org/wiki/Australian%20School%20of%20Petroleum%20and%20Energy%20Resources | The Australian School of Petroleum and Energy Resources (ASPER) is a centre for education, training and research in petroleum and energy resources engineering, geoscience and management at the University of Adelaide in South Australia. ASPER is housed in the purpose-built Santos Petroleum Engineering Building on the University of Adelaide's North Terrace campus.
History
The Australian School of Petroleum originated from the merger, in 2003, of the National Centre for Petroleum Geology and Geophysics (NCPGG) and the School of Petroleum Engineering and Management (SPEM). In 2020, the School was renamed from the Australian School of Petroleum to the Australian School of Petroleum and Energy Resources to reflect its teaching and research in areas such as the underground storage and use of carbon and hydrogen. The NCPGG was founded as a government and industry-funded Centre of Excellence in 1986. The SPEM was founded in 2000 under an AU $25 million Sponsorship Agreement between the University of Adelaide and Santos Limited. At the time it was believed to be 'the largest single industry sponsorship ever given to a public university in Australia.'
School Ranking and Reputation
In 2020 the QS World University Rankings included the discipline of Petroleum Engineering for the first time.
The ERA (Excellence in Research Australia) is the Australian Government's framework for assessing research quality and impact. Because the School is focused largely on a single industry sector, its research outputs do not align with a single ERA field of research; the majority of ASPER's outputs are allocated to the "Resource Engineering and Extractive Metallurgy" and "Geology" fields of research. In the most recent ERA (2018), the University of Adelaide received 5/5 in both these categories.
Engagement with Industry and Government
ASPER interacts with industry and government agencies and is sometimes sought out for advice on matters related to petroleum and energy management. ASPER’s industry Advisory Board has 13 members from 10 energy companies (Santos, Beach Energy, Chevron, BHP, Esso Australia, Woodside Energy, Vintage Energy, Strike Energy, Cooper Energy, and Schlumberger), CO2CRC and the South Australian government. The Board advises ASPER on the capabilities they seek in future employees, their training needs and the research needs of the industry. A large proportion of ASPER research is funded by the industry in the form of consortia, direct contract research or through collaborative Australian Research Council (ARC) Linkage grants. The South Australian State Government has provided support to ASPER and its predecessors, including funding for the South Australian State Chair of Petroleum Geoscience which has been held by Cedric Griffiths (1994-1999), Bruce Ainsworth (2010-2013) and Peter McCabe (2014-2020).
The number of research papers co-authored by ASPER staff and industry-based collaborators provides a possible measure for ASPER’s engagement with the industry. ASPER has performed an analysis that determined 29.8% of its publications during the last 3 years have been published with industry co-authors, higher than both the 3.6% of papers published by University of Adelaide academics and the Australia-wide average of 2.3%.
Teaching
ASPER offers a range of undergraduate and postgraduate coursework and research programs in engineering and geoscience, from which approximately 2100 students have graduated since 2002.
References
University of Adelaide
2003 establishments in Australia
Petroleum engineering schools | Australian School of Petroleum and Energy Resources | [
"Engineering"
] | 681 | [
"Petroleum engineering",
"Petroleum engineering schools",
"Engineering universities and colleges"
] |
28,393,571 | https://en.wikipedia.org/wiki/Comparison%20of%20dosimeters | The following table compares features of dosimeters.
References
Literature
Dosimeters
Ionising radiation detectors | Comparison of dosimeters | [
"Technology",
"Engineering"
] | 19 | [
"Radioactive contamination",
"Measuring instruments",
"Ionising radiation detectors",
"nan",
"Dosimeters"
] |
28,398,027 | https://en.wikipedia.org/wiki/W3af | w3af (Web Application Attack and Audit Framework) is an open-source web application security scanner. The project provides a vulnerability scanner and exploitation tool for Web applications. It provides information about security vulnerabilities for use in penetration testing engagements. The scanner offers a graphical user interface and a command-line interface.
Architecture
w3af is divided into two main parts, the core and the plug-ins. The core coordinates the process and provides features that are consumed by the plug-ins, which find the vulnerabilities and exploit them. The plug-ins are connected and share information with each other using a knowledge base.
Plug-ins can be categorized as Discovery, Audit, Grep, Attack, Output, Mangle, Evasion or Bruteforce.
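The core/plug-in/knowledge-base arrangement described above can be sketched in a few lines of Python; the class and method names below are invented for illustration and do not correspond to w3af's actual code or API.

```python
# Illustrative sketch of a core coordinating plug-ins that share findings
# through a knowledge base. Not w3af's real architecture or API.

class KnowledgeBase:
    """Shared store through which plug-ins exchange information."""
    def __init__(self):
        self._items = {}

    def add(self, category, item):
        self._items.setdefault(category, []).append(item)

    def get(self, category):
        return self._items.get(category, [])

class DiscoveryPlugin:
    def run(self, target, kb):
        # A discovery plug-in would crawl the target and record URLs it finds.
        kb.add("urls", f"{target}/login")

class AuditPlugin:
    def run(self, target, kb):
        # An audit plug-in consumes what discovery stored and records findings.
        for url in kb.get("urls"):
            kb.add("vulnerabilities", f"possible SQL injection at {url}")

class Core:
    """The core coordinates plug-ins and owns the knowledge base."""
    def __init__(self, plugins):
        self.plugins = plugins
        self.kb = KnowledgeBase()

    def scan(self, target):
        for plugin in self.plugins:
            plugin.run(target, self.kb)
        return self.kb.get("vulnerabilities")

print(Core([DiscoveryPlugin(), AuditPlugin()]).scan("http://example.test"))
```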
History
w3af was started by Andres Riancho in March 2007 and has since seen many years of development by the community. In July 2010, w3af announced its sponsorship and partnership with Rapid7. With Rapid7's sponsorship, the project was expected to increase its development speed and keep growing in terms of users and contributors.
See also
Metasploit Project
Low Orbit Ion Cannon (LOIC)
Web application security
OWASP (Open Web Application Security Project)
References
External links
w3af documentation
Note: As of April 11, 2024, https://www.w3af.org gives connection timed-out failures. However, documentation is still accessible at http://docs.w3af.org/en/latest/. Users are redirected to w4af (https://github.com/w4af), which is still in alpha development.
Cyberwarfare
Computer security software
Electronic warfare
Network analyzers
Free security software
Free network management software
Cross-platform free software | W3af | [
"Engineering"
] | 356 | [
"Cybersecurity engineering",
"Computer security software"
] |
28,401,151 | https://en.wikipedia.org/wiki/CIM%20Profile | In electric power transmission, a CIM Profile is a subset model of the CIM UML model. These profiles are designated as parts documents in the IEC 61970 standard by working group 14. Each profile is itself a self-contained model which can be used for generating specific artifacts, such as CIM RDF or XML Schema.
Profile Groups
A CIM Profile Group (e.g., 61970-456 Steady-state Solution Profile Group) is a logical grouping of CIM Profiles. In general, each Parts document encompasses an entire Profile group which has one or more profiles in it.
Standards
IEC 61970-452
Equipment Profile
IEC 61970-453
Schematics Layout Profile
IEC 61970-456
Analog Measurements Profile
Discrete Measurements Profile
State Variable Profile
Topology Profile
See also
IEC 61970
CIM
External links
CIM EA - A tool written for Enterprise Architect which can manipulate and create profiles.
CIM Tool - A tool written for eclipse that can manipulate and create profiles.
EA schema composer - A tool part of Enterprise Architect which can manipulate and create profiles.
IEC standards
Electric power
Smart grid | CIM Profile | [
"Physics",
"Technology",
"Engineering"
] | 233 | [
"Physical quantities",
"Computer standards",
"IEC standards",
"Power (physics)",
"Electric power",
"Electrical engineering"
] |
28,403,240 | https://en.wikipedia.org/wiki/Engineering%20Equation%20Solver | Engineering Equation Solver (EES) is a commercial software package used for the solution of systems of simultaneous non-linear equations. It provides many useful specialized functions and equations for the solution of thermodynamics and heat transfer problems, making it a useful and widely used program for mechanical engineers working in these fields. EES stores thermodynamic property data, so code can call property functions at specified thermodynamic states, which eliminates iterative problem solving by hand. EES performs the iterative solving, and its built-in property functions eliminate the tedious and time-consuming task of looking up thermodynamic properties.
EES also includes parametric tables that allow the user to compare a number of variables at a time. Parametric tables can also be used to generate plots. EES can also integrate, both as a command in code and in tables. EES also provides optimization tools that minimize or maximize a chosen variable by varying a number of other variables. Lookup tables can be created to store information that can be accessed by a call in the code. EES code allows the user to input equations in any order and obtain a solution, but also can contain if-then statements, which can also be nested within each other to create if-then-else statements. Users can write functions for use in their code, and also procedures, which are functions with multiple outputs.
Adjusting the preferences allows the user to choose a unit system, specify stop criteria (including the number of iterations), and enable or disable unit checking and unit recommendations, among other options. Users can also specify guess values and variable limits to aid the iterative solving process and help EES quickly and successfully find a solution.
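For readers unfamiliar with equation-oriented solvers, the following Python/SciPy fragment mimics, in spirit, what EES does with a small simultaneous non-linear system and user-supplied guess values; the two equations and all numbers are arbitrary illustrations, this is not EES syntax, and SciPy is assumed to be available.

```python
# Toy analogue of solving a small simultaneous non-linear system from guess
# values, as EES does. The equations below are made up for illustration.

from scipy.optimize import fsolve

def residuals(vars):
    T, p = vars                      # unknowns: temperature [K], pressure [kPa]
    eq1 = p - 0.287 * 1.16 * T       # e.g. p = rho*R*T for air, rho assumed 1.16 kg/m^3
    eq2 = T - (300 + 0.05 * p)       # a second, made-up coupling equation
    return [eq1, eq2]

guess = [300.0, 100.0]               # guess values, which EES also requires
T, p = fsolve(residuals, guess)
print(f"T = {T:.2f} K, p = {p:.2f} kPa")
```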
The program is developed by F-Chart Software, a commercial spin-off of Prof. Sanford A. Klein of the Department of Mechanical Engineering at the University of Wisconsin–Madison.
EES is included as attached software for a number of undergraduate thermodynamics, heat-transfer and fluid mechanics textbooks from McGraw-Hill.
It integrates closely with the dynamic system simulation package TRNSYS, by some of the same authors.
References
External links
Official site
Mechanical engineering | Engineering Equation Solver | [
"Physics",
"Engineering"
] | 443 | [
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
37,862,957 | https://en.wikipedia.org/wiki/Softplus | In mathematics and machine learning, the softplus function is

softplus(x) = ln(1 + e^x).

It is a smooth approximation (in fact, an analytic function) to the ramp function, which is known as the rectifier or ReLU (rectified linear unit) in machine learning. For large negative x it is approximately ln 1 = 0, so just above 0, while for large positive x it is approximately ln(e^x) = x, so just above x.
The names softplus and SmoothReLU are used in machine learning. The name "softplus" (2000), by analogy with the earlier softmax (1989), is presumably because it is a smooth (soft) approximation of the positive part of x, which is sometimes denoted with a superscript plus, x^+ = max(0, x).
Related functions
The derivative of softplus is the logistic function:

d/dx softplus(x) = e^x / (1 + e^x) = 1 / (1 + e^(-x)).
The logistic sigmoid function is a smooth approximation of the derivative of the rectifier, the Heaviside step function.
LogSumExp
The multivariable generalization of single-variable softplus is the LogSumExp with the first argument set to zero:

LSE(0, x_1, ..., x_n) = ln(1 + e^(x_1) + ... + e^(x_n)).

The LogSumExp function itself is

LSE(x_1, ..., x_n) = ln(e^(x_1) + ... + e^(x_n)),

and its gradient is the softmax; the softmax with the first argument set to zero is the multivariable generalization of the logistic function. Both LogSumExp and softmax are used in machine learning.
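The identities above are easy to check numerically. The sketch below gives a numerically stable softplus, the logistic function and LogSumExp in plain Python (standard library only), and verifies that the derivative of softplus is the logistic function and that softplus(x) = LSE(0, x).

```python
# Numerically stable softplus and LogSumExp, with checks of the relations
# stated above. Pure-Python sketch using only the standard library.

import math

def softplus(x):
    # ln(1 + e^x), written to avoid overflow for large |x|.
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def logsumexp(*xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

x = 3.7
h = 1e-6
numerical_derivative = (softplus(x + h) - softplus(x - h)) / (2 * h)
print(abs(numerical_derivative - logistic(x)) < 1e-6)   # derivative is logistic
print(abs(softplus(x) - logsumexp(0.0, x)) < 1e-12)     # softplus(x) = LSE(0, x)
```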
Convex conjugate
The convex conjugate (specifically, the Legendre transform) of the softplus function is the negative binary entropy (with base e). This is because (following the definition of the Legendre transform: the derivatives are inverse functions) the derivative of softplus is the logistic function, whose inverse function is the logit, which is the derivative of negative binary entropy.
Softplus can be interpreted as logistic loss (as a positive number), so by duality, minimizing logistic loss corresponds to maximizing entropy. This justifies the principle of maximum entropy as loss minimization.
Alternative forms
This function can be approximated as:
By making the change of variables , this is equivalent to
A sharpness parameter k may be included:

softplus_k(x) = ln(1 + e^(kx)) / k.
References
Computational neuroscience
Logistic regression
Artificial neural networks
Functions and mappings
Exponentials
Entropy and information
Loss functions | Softplus | [
"Physics",
"Mathematics"
] | 444 | [
"Functions and mappings",
"Mathematical analysis",
"Physical quantities",
"Mathematical objects",
"Entropy and information",
"E (mathematical constant)",
"Entropy",
"Mathematical relations",
"Exponentials",
"Dynamical systems"
] |
37,869,238 | https://en.wikipedia.org/wiki/C24H18O12 | The molecular formula C24H18O12 (molar mass: 498.39 g/mol, exact mass: 498.07982598 u) may refer to:
Tetrafucol A, a fucol-type phlorotannin
Tetraphlorethol C, a phlorethol-type phlorotannin
Molecular formulas | C24H18O12 | [
"Physics",
"Chemistry"
] | 95 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
37,869,513 | https://en.wikipedia.org/wiki/Transition%20metal%20benzyne%20complex | Transition metal benzyne complexes are organometallic complexes that contain benzyne ligands (C6H4). Unlike benzyne itself, these complexes are less reactive although they undergo a number of insertion reactions.
Examples
Studies of metal-benzyne complexes were initiated with the preparation of a zirconocene complex by the reaction of diphenylzirconocene with trimethylphosphine.
Cp2ZrPh2 + PMe3 → Cp2Zr(C6H4)(PMe3) + PhH
The preparation of Ta(η5-C5Me5)(C6H4)Me2 proceeds similarly, requiring the phenyl complex Ta(η5-C5Me5)(C6H5)Me3. This complex is prepared by treatment of Ta(η5-C5Me5)Me3Cl with phenyllithium. Upon heating, this complex eliminates methane, leaving the benzyne complex:
Ta(η5-C5Me5)(C6H5)Me3 → Ta(η5-C5Me5)(C6H4)Me2 + CH4
The second example of a benzyne complex is Ni(η2-C6H4)(dcpe) (dcpe = Cy2PCH2CH2PCy2). It is produced by dehalogenation of the bromophenyl complex NiCl(C6H4Br-2)(dcpe) with sodium amalgam. Its coordination geometry is close to trigonal planar.
Reactivity
Benzyne complexes react with a variety of electrophiles, resulting in insertion into one M-C bond. With trifluoroacetic acid, benzene is lost to give the trifluoroacetate Ni(O2CCF3)2(dcpe).
Structural trends
Several benzyne complexes have been examined by X-ray crystallography.
References
Organometallic chemistry
Coordination chemistry
Transition metals | Transition metal benzyne complex | [
"Chemistry"
] | 417 | [
"Organometallic chemistry",
"Coordination chemistry"
] |
26,587,782 | https://en.wikipedia.org/wiki/Scotophor | A scotophor is a material showing reversible darkening and bleaching when subjected to certain types of radiation. The name means dark bearer, in contrast to phosphor, which means light bearer. Scotophors show tenebrescence (reversible photochromism) and darken when subjected to intense radiation such as sunlight. Minerals showing such behavior include hackmanite (a variety of sodalite), spodumene and tugtupite. Some pure alkali halides also show such behavior.
Scotophors can be sensitive to light, particle radiation (e.g. electron beam – see cathodochromism), X-rays, or other stimuli. The induced absorption bands in the material, caused by F-centers created by electron bombardment, can be returned to their non-absorbing state, usually by light and/or heating.
Scotophors sensitive to electron beam radiation can be used instead of phosphors in cathode ray tubes, for creating a light absorbing instead of light emitting image. Such displays are viewable in bright light and the image is persistent, until erased.
The image would be retained until erased by flooding the scotophor with a high-intensity infrared light or by electro-thermal heating. Using conventional deflection and raster formation circuitry, a bi-level image could be created on the membrane and retained even when power was removed from the CRT.
In Germany, scotophor tubes were developed by Telefunken as the Blauschriftröhre (literally "blue-writing tube", a dark-trace tube). The heating mechanism was a layer of mica with a transparent thin film of tungsten. When the image was to be erased, current was applied to the tungsten layer; even very dark images could be erased in 5–10 seconds.
Scotophors typically require a higher-intensity electron beam to change color than phosphors need to emit light. Screens with layers of a scotophor and a phosphor are therefore possible, where the phosphor, flooded with a dedicated wide-beam low-intensity electron gun, produces backlight for the scotophor, and optionally highlights selected areas of the screen if bombarded with electrons with higher energy but still insufficient to penetrate the phosphor and change the scotophor state.
The main application of scotophors was in plan position indicators, specialized military radar displays. The achievable brightness allowed projecting the image to a larger surface. The ability to quickly record a persistent trace found its use in some oscilloscopes.
Materials
Potassium chloride is used as a scotophor with designation P10 in dark-trace CRTs (also called dark trace tubes, color center tubes, cathodochromic displays or scotophor tubes), e.g. in the Skiatron. This CRT replaced the conventional light-emitting phosphor layer on the face of the tube screen with a scotophor such as potassium chloride (KCl). Potassium chloride has the property that when a crystal is struck by an electron beam, that spot would change from translucent white to a dark magenta color. By backlighting such a CRT with a white or green circular fluorescent lamp, the resulting image would appear as black information against a green background or as magenta information against a white background. A benefit, aside from the semi-permanent storage of the displayed image, is that the brightness of the resultant display is only limited by the illumination source and optics. The F-centers, however, have tendency to aggregate, and the screen needs to be heated to fully erase the image.
The image on KCl can be formed by depositing a charge of over 0.3 microcoulomb per square centimeter, by an electron beam with energy typically at 8–10 keV. The erasure can be achieved in less than a second by heating the scotophor at 150 °C.
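The quoted writing charge of 0.3 microcoulomb per square centimetre translates directly into a beam dwell time once a beam current and written area are assumed; the short calculation below uses hypothetical values for both.

```python
# Quick arithmetic for the KCl darkening figure quoted above: how long an
# electron beam must dwell to deposit 0.3 microcoulomb per square centimetre.
# Beam current and written area are hypothetical illustration values.

charge_density = 0.3e-6        # C/cm^2, from the text
beam_current = 10e-6           # A (10 microamps), hypothetical
spot_area_cm2 = 0.01           # cm^2 of written area, hypothetical

required_charge = charge_density * spot_area_cm2   # coulombs
dwell_time_s = required_charge / beam_current
print(f"dwell time ~ {dwell_time_s * 1e6:.0f} microseconds")   # ~300 us here
```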
KCl was the most common scotophor used. Other halides show the same property; potassium bromide absorbs in bluish end of the spectrum, resulting in a brown trace, sodium chloride produces a trace that is colored more towards orange.
Another scotophor used in dark-trace CRTs is a modified sodalite, fired in reducing atmosphere or having some chlorides substituted with sulfate ions. Its advantage against KCl is its higher writing speed, less fatigue, and the F-centers do not aggregate, therefore it is possible to substantially erase the screen with light only, without heating.
See also
Solarization (disambiguation)
Photochromism
References
Display technology
Optical materials
Chromism | Scotophor | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 971 | [
"Spectrum (physical sciences)",
"Chromism",
"Materials science",
"Materials",
"Optical materials",
"Electronic engineering",
"Display technology",
"Smart materials",
"Spectroscopy",
"Matter"
] |
26,592,028 | https://en.wikipedia.org/wiki/PSF%20Lab | PSF Lab is a software program that allows the calculation of the illumination point spread function (PSF) of a confocal microscope under various imaging conditions. The calculation of the electric field vectors is based on a rigorous, vectorial model that takes polarization effects in the near-focus region and high numerical aperture microscope objectives into account.
The polarization of the input beam (assumed to be collimated and monochromatic) can be chosen freely (linear, circular, or elliptic). Furthermore, a constant or Gaussian shaped input beam intensity profile can be assumed. On its way from the objective to the focus, the illumination light passes through up to three stratified optical layers, which allows the simulation of an immersion oil/air (layer 1) objective that focusses light through a glass cover slip (layer 2) into the sample medium (layer 3). Each layer is characterized by its (constant) refractive index and thickness. PSF Lab can also simulate microscope objectives that are corrected for certain refractive indices and cover slip thicknesses (design parameters). Thus, any deviations from the ideal imaging conditions for which the objective was designed for are properly taken into account.
The following optical parameters can be selected:
Input beam
Wavelength
Gaussian profile filling parameter (0 = constant profile)
Polarization (linear, circular, elliptic)
Outputs
Individual field components
Squared field components
Intensity
Microscope objective
Numerical aperture
Optical media
Refractive index (design and actual)
Thickness (design and actual)
Depth (focus position within medium 3)
The program calculates only a 2D section of the PSF, but several calculations can be stacked (with a third-party program) to obtain the full 3D PSF. Calculations are organized in "sets", each with its own set of parameters. Loops can be set up such that PSF Lab calculates one or several sets, increasing the resolution of the calculated images in each new iteration. The resulting image is displayed in PSF Lab in a linear or logarithmic color scale with a user-selectable color map, and the intensity, individual field components, or squared field component distributions can be exported into various formats (data formats: .mat, .h5 (HDF5), .txt (ASCII); image formats: .fig, .ai, .bmp, .emf, .eps, .jpg, .pcx, .pdf, .png, .tif).
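The "stack several 2D sections into a 3D PSF with a third-party program" step mentioned above can be done, for ASCII exports, with a few lines of NumPy; the file-naming pattern used below is a hypothetical example, and only the export formats themselves come from PSF Lab.

```python
# Sketch: stacking PSF Lab ASCII exports into a 3D PSF with NumPy.
# Assumes files named psf_z000.txt, psf_z001.txt, ... (hypothetical pattern)
# exist in the working directory, each holding one 2D intensity section
# computed at a successive focal position.

import glob
import numpy as np

slice_files = sorted(glob.glob("psf_z*.txt"))
sections = [np.loadtxt(f) for f in slice_files]

psf_3d = np.stack(sections, axis=0)   # shape: (n_z, n_y, n_x)
print("3D PSF shape:", psf_3d.shape)

# Normalising so the stack sums to 1 is a common follow-up step.
psf_3d /= psf_3d.sum()
```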
See also
Point spread function
Optical microscope
Confocal microscopy
Confocal laser scanning microscopy
References
External links
Molecular Expressions, introduction to deconvolution using PSFs.
Optical software
Physics software | PSF Lab | [
"Physics"
] | 538 | [
"Physics software",
"Computational physics"
] |
26,592,482 | https://en.wikipedia.org/wiki/ViBe | ViBe is a background subtraction algorithm which has been presented at the IEEE ICASSP 2009 conference and was refined in later publications. More precisely, it is a software module for extracting background information from moving images. It has been developed by Oliver Barnich and Marc Van Droogenbroeck of the Montefiore Institute, University of Liège, Belgium.
ViBe is patented: the patent covers various aspects such as stochastic replacement, spatial diffusion, and non-chronological handling.
ViBe is written in the programming language C, and has been implemented on CPU, GPU and FPGA.
Technical description
Pixel model and classification process
Many advanced techniques provide an estimate of the temporal probability density function (pdf) of a pixel x. ViBe's approach is different: it restricts the influence of an observed value in the polychromatic (color) space to a local neighborhood of that value. In practice, ViBe does not estimate the pdf but uses a set of previously observed sample values as the pixel model. To classify a new value pt(x), it is compared to its closest values among the set of samples.
Model update: Sample values lifespan policy
ViBe ensures a smooth exponentially decaying lifespan for the sample values that constitute the pixel models. This makes ViBe able to successfully deal with concomitant events with a single model of a reasonable size for each pixel. This is achieved by choosing, randomly, which sample to replace when updating a pixel model. Once the sample to be discarded has been chosen, the new value replaces the discarded sample. The pixel model that would result from the update of a given pixel model with a given pixel sample cannot be predicted since the value to be discarded is chosen at random.
Model update: Spatial Consistency
To ensure the spatial consistency of the whole image model and to handle practical situations such as small camera movements or slowly evolving background objects, ViBe uses a technique similar to that used for the updating process: it chooses at random and updates a pixel model in the neighborhood of the current pixel. Denoting by NG(x) and p(x) respectively the spatial neighborhood of a pixel x and its value, and assuming that it was decided to update the set of samples of x by inserting p(x), ViBe also uses this value p(x) to update the set of samples of one of the pixels in the neighborhood NG(x), chosen at random. As a result, ViBe is able to produce spatially coherent results directly, without the use of any post-processing method.
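A much simplified, single-channel sketch of the three mechanisms just described (sample-based classification, stochastic sample replacement, and spatial diffusion into a neighbour's model) is given below; the parameter values are typical of the ViBe papers but should be read as illustrative, and this is not the authors' implementation.

```python
# Simplified per-pixel sketch of ViBe-style classification and update.
# Grayscale values only; parameters (20 samples, radius 20, 2 matches,
# subsampling factor 16) are illustrative.

import random

N, RADIUS, MIN_MATCHES, SUBSAMPLING = 20, 20.0, 2, 16

def classify(pixel_value, samples):
    """Background if at least MIN_MATCHES stored samples lie within RADIUS."""
    matches = sum(1 for s in samples if abs(pixel_value - s) < RADIUS)
    return matches >= MIN_MATCHES

def update(pixel_value, samples, neighbour_samples):
    """Stochastic in-place update of this pixel's model and one neighbour's."""
    if random.randrange(SUBSAMPLING) == 0:
        samples[random.randrange(N)] = pixel_value            # random replacement
    if random.randrange(SUBSAMPLING) == 0:
        neighbour_samples[random.randrange(N)] = pixel_value   # spatial diffusion

# Toy example: initialise the models from (noisy) first observations.
model = [120 + random.randint(-5, 5) for _ in range(N)]
neighbour_model = [118 + random.randint(-5, 5) for _ in range(N)]

for value in (121, 119, 200, 122):          # 200 simulates a foreground object
    label = "background" if classify(value, model) else "foreground"
    if label == "background":
        update(value, model, neighbour_model)   # only background values feed the model
    print(value, "->", label)
```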
Model initialization
Although the model could easily recover from any type of initialization, for example by choosing a set of random values, it is convenient to get an accurate background estimate as soon as possible. Ideally, a segmentation algorithm would be able to segment the video sequences starting from the second frame, the first frame being used to initialize the model. Since no temporal information is available prior to the second frame, ViBe populates the pixel models with values found in the spatial neighborhood of each pixel; more precisely, it initializes the background model with values taken randomly in each pixel neighborhood of the first frame. The background estimate is therefore valid starting from the second frame of a video sequence.
References
Computer vision | ViBe | [
"Engineering"
] | 657 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
26,592,825 | https://en.wikipedia.org/wiki/De%20Groot%20dual | In mathematics, in particular in topology, the de Groot dual (after Johannes de Groot) of a topology τ on a set X is the topology τ* whose closed sets are generated by compact saturated subsets of (X, τ).
References
R. Kopperman (1995), "Asymmetry and duality in topology", Topology and its Applications, 66(1), 1–39.
Topology | De Groot dual | [
"Physics",
"Mathematics"
] | 86 | [
"Topology stubs",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
26,594,247 | https://en.wikipedia.org/wiki/Internal%20pressure | Internal pressure is a measure of how the internal energy of a system changes when it expands or contracts at constant temperature. It has the same dimensions as pressure, the SI unit of which is the pascal.
Internal pressure is usually given the symbol π_T. It is defined as a partial derivative of internal energy with respect to volume at constant temperature:

π_T = (∂U/∂V)_T
Thermodynamic equation of state
Internal pressure can be expressed in terms of temperature, pressure and their mutual dependence:

π_T = T (∂p/∂T)_V - p
This equation is one of the simplest thermodynamic equations. More precisely, it is a thermodynamic property relation, since it holds true for any system and connects the equation of state to one or more thermodynamic energy properties. Here we refer to it as a "thermodynamic equation of state."
Derivation of the thermodynamic equation of state
The fundamental thermodynamic equation states, for the exact differential of the internal energy:

dU = T dS - p dV

Dividing this equation by dV at constant temperature gives:

(∂U/∂V)_T = T (∂S/∂V)_T - p

And using one of the Maxwell relations,

(∂S/∂V)_T = (∂p/∂T)_V,

this gives

π_T = T (∂p/∂T)_V - p
Perfect gas
In a perfect gas, there are no potential energy interactions between the particles, so any change in the internal energy of the gas is directly proportional to the change in the kinetic energy of its constituent species and therefore also to the change in temperature:

dU = C_V dT.

The internal pressure is evaluated at constant temperature, therefore

dT = 0, which implies dU = 0 and finally π_T = (∂U/∂V)_T = 0,

i.e. the internal energy of a perfect gas is independent of the volume it occupies. The above relation can be used as a definition of a perfect gas.
The relation can be proved without the need to invoke any molecular arguments. It follows directly from the thermodynamic equation of state if we use the ideal gas law pV = nRT. We have

π_T = T (∂p/∂T)_V - p = T (nR/V) - p = p - p = 0.
Real gases
Real gases have non-zero internal pressures because their internal energy changes as the gases expand isothermally: it can increase on expansion (π_T > 0, signifying the presence of dominant attractive forces between the particles of the gas) or decrease (π_T < 0, dominant repulsion).
In the limit of infinite volume these internal pressures reach the value of zero:

lim (V → ∞) π_T = 0,

corresponding to the fact that all real gases can be approximated as perfect in the limit of a suitably large volume. The above considerations are summarized on the graph on the right.
If a real gas can be described by the van der Waals equation of state,

p = nRT/(V - nb) - a n^2/V^2,

it follows from the thermodynamic equation of state that

π_T = a n^2/V^2.

Since the parameter a is always positive, so is the internal pressure: the internal energy of a van der Waals gas always increases when it expands isothermally.
The parameter a models the effect of attractive forces between molecules in the gas. However, real non-ideal gases may be expected to exhibit a sign change between positive and negative internal pressures under the right environmental conditions if repulsive interactions become important, depending on the system of interest. Loosely speaking, this would tend to happen under conditions of temperature and pressure such that the compression factor of the gas, Z = pV/(nRT), is greater than 1.
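The van der Waals result above can be checked numerically: differentiating p(T, V) with respect to T at constant V and forming T (∂p/∂T)_V - p should reproduce a n^2/V^2. The sketch below does this with approximate literature constants for CO2 and an arbitrary state point.

```python
# Numerical cross-check that, for a van der Waals gas, the internal pressure
# pi_T = T*(dp/dT)_V - p equals a*n^2/V^2. Constants a and b are approximate
# literature values for CO2; the state point is arbitrary.

R = 8.314           # J/(mol K)
a = 0.364           # Pa m^6 / mol^2, approx. for CO2
b = 4.27e-5         # m^3 / mol, approx. for CO2
n, V, T = 1.0, 1.0e-3, 300.0   # 1 mol in 1 litre at 300 K

def pressure(T, V, n):
    return n * R * T / (V - n * b) - a * n**2 / V**2

dT = 1e-3
dp_dT = (pressure(T + dT, V, n) - pressure(T - dT, V, n)) / (2 * dT)
pi_T = T * dp_dT - pressure(T, V, n)

print(f"pi_T (numerical)       = {pi_T:.1f} Pa")
print(f"a*n^2/V^2 (analytical) = {a * n**2 / V**2:.1f} Pa")
```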
In addition, through the use of the Euler chain relation it can be shown that

(∂U/∂V)_T = -(∂T/∂V)_U (∂U/∂T)_V.

Defining μ_J = (∂T/∂V)_U as the "Joule coefficient" and recognizing (∂U/∂T)_V as the heat capacity at constant volume C_V, we have

π_T = -C_V μ_J.

The coefficient μ_J can be obtained by measuring the temperature change in a constant-U experiment, i.e., an adiabatic free expansion (see below). This coefficient is often small, and usually negative at modest pressures (as predicted by the van der Waals equation).
Experiment
James Joule tried to measure the internal pressure of air in his expansion experiment by adiabatically pumping high pressure air from one metal vessel into another evacuated one. The water bath in which the system was immersed did not change its temperature, signifying that no change in the internal energy occurred. Thus, the internal pressure of the air was apparently equal to zero and the air acted as a perfect gas. The actual deviations from the perfect behaviour were not observed since they are very small and the specific heat capacity of water is relatively high.
Much later, in 1925, Frederick Keyes and Francis Sears published measurements of the Joule effect for carbon dioxide at T = 30 °C and p = 13.3–16.5 atm, using improved measurement techniques and better controls. Under these conditions the temperature dropped when the pressure was adiabatically lowered, which indicates that μ_J is negative. This is consistent with the van der Waals gas prediction that π_T is positive.
References
Bibliography
Peter Atkins and Julio de Paula, Physical Chemistry 8th edition, pp. 60–61 (2006).
Thermodynamic properties
Pressure | Internal pressure | [
"Physics",
"Chemistry",
"Mathematics"
] | 911 | [
"Scalar physical quantities",
"Thermodynamic properties",
"Mechanical quantities",
"Physical quantities",
"Quantity",
"Pressure",
"Thermodynamics",
"Wikipedia categories named after physical quantities"
] |
26,597,356 | https://en.wikipedia.org/wiki/Roof%20tiles | Roof tiles are overlapping tiles designed mainly to keep out precipitation such as rain or snow, and are traditionally made from locally available materials such as clay or slate. Later tiles have been made from materials such as concrete, glass, and plastic.
Roof tiles can be affixed by screws or nails, but in some cases historic designs utilize interlocking systems that are self-supporting. Tiles typically cover an underlayment system, which seals the roof against water intrusion.
Categories
There are numerous profiles, or patterns, of roof tile, which can be separated into categories based on their installation and design.
Shingle / flat tiles
One of the simplest designs of roof tile, these are simple overlapping slabs installed in the same manner as traditional shingles, usually held in place by nails or screws at their top. All forms of slate tile fall into this category. When installed, most of an individual shingle's surface area will be covered by the shingles overlapping it. As a result of this, flat tiles require more tiles to cover a certain area than other patterns of similar size.
These tiles commonly feature a squared base, as is the case with English clay tiles, but in some cases can have a pointed or rounded end, as seen with the beaver-tail tile common in Southern Germany.
Imbrex and tegula
The imbrex and tegula are overlapping tiles that were used by many ancient cultures, including the Greeks, Romans, and Chinese. The tegula is a flat tile laid against the surface of the roof, while the imbrex is a semi-cylindrical tile laid over the joints between tegulae.
In early designs tegula were perfectly flat, however over time they were designed to have ridges along their edges to channel water away from the gaps between tiles.
Mission / Monk and Nun tiles
Similar to the imbrex and tegula design of tile, mission tiles are a semi-cylindrical two-piece tile system, composed of a pan and cover. Unlike the imbrex and tegula both the pan and cover of Mission tile are arched.
Early examples of this profile were created by bending a piece of clay over a worker's thigh, which resulted in the semi-circular curve. This could add a taper to one end of the tile.
Pantiles / S tiles
Pantiles are similar to mission tiles except that they consolidate the pan and cover into a single piece. This allows for greater surface area coverage with fewer tiles, and fewer cracks that could lead to leakage.
These tiles are traditionally formed through an extruder. In addition to the S-shaped Spanish tiles, this category includes the Scandia tiles common to Scandinavia and Northern Europe.
Interlocking tiles
Dating to the 1840s, interlocking tiles are the newest category of roofing tile and one of the widest ranging in appearance. Their distinguishing feature is the presence of a ridge for interlocking with one another. This allows them to provide a high ratio of roof area to number of tiles used. Many distinct profiles fall into this category, such as the Marseilles, Ludowici, and Conosera patterns.
Unlike other types of tiles, which can in some cases be produced through a variety of methods, interlocking tiles can only be manufactured on a large scale with a tile press.
In many cases interlocking tile is designed to imitate other patterns of tile, such as flat shingles or pantiles, which can make it difficult to identify from the ground without inspecting an individual tile for a ridge.
History as a vernacular material
The origins of clay roofing tiles are obscure, but they are believed to have been developed independently during the late Neolithic period in both ancient Greece and China, before spreading in use across Europe and Asia.
Europe
Greece
Fired roof tiles have been found at the House of the Tiles in Lerna, Greece. Debris found at the site contained thousands of terracotta tiles which had fallen from the roof. In the Mycenaean period, roof tiles are documented at Gla and Midea.
The earliest roof tiles from the Archaic period in Greece are documented from a very restricted area around Corinth, where fired tiles began to replace thatched roofs at two temples of Apollo and Poseidon between 700 and 650 BC. Spreading rapidly, roof tiles were found within fifty years at many sites around the Eastern Mediterranean, including Mainland Greece, Western Asia Minor, and Southern and Central Italy. Early Greek roof-tiles were of the imbrex and tegula style. While more expensive and labour-intensive to produce than thatch, their introduction has been explained by their greatly enhanced fire-resistance which gave desired protection to the costly temples.
The spread of the roof-tile technique has to be viewed in connection with the simultaneous rise of monumental architecture in Ancient Greece. Only the newly appearing stone walls, which were replacing the earlier mudbrick and wood walls, were strong enough to support the weight of a tiled roof. As a side-effect, it has been assumed that the new stone and tile construction also ushered in the end of 'Chinese roof' (Knickdach) construction in Greek architecture, as they made the need for an extended roof as rain protection for the mudbrick walls obsolete.
A Greek roof tile was responsible for the death of Molossian Greek king Pyrrhus of Epirus in 272 BC after a woman threw one at the king's head as he was attacking her son.
Roman Empire
Roof tiles similar to Greek designs continued to be used throughout the Roman Empire. They were a common feature in Roman cities, despite the fact that a single tile would often cost the equivalent of 1.5 days' wages. Tiles were commonly used as improvised weapons during citizen uprisings, as they were one of the few such weapons available to city-dwellers of the time.
Roman imbrex and tegula roofs generally avoided the use of nails and were instead held in place by gravity; it is possible that this is one of the reasons this style of tile was typically found on low-pitched roofs.
The Romans spread the use and production of roofing tile across their colonies in Europe, with kilns and tile-works constructed as far west and north as Spain and Britain. Early records suggest that brick and tile-works were considered under the control of the Roman state for a period of time.
Northern Europe
It is believed that the Romans introduced the use of clay roof tile to Britain after their conquest in AD 43. The earliest known sites for the production of roof tile are near the Fishbourne Roman Palace. Early tiles produced in Britain followed the Roman imbrex and tegula style, but also included flat shingle tiles, which could be produced with less experience.
For a while after the dissolution of the Roman Empire, the manufacture of tile for roofs and decoration diminished in Northern Europe. In the twelfth century clay, slate, and stone roofing tile began to see more use, initially on abbeys and royal palaces. Their use was later encouraged within Medieval towns as a means of preventing the spread of fire. Simple flat shingle tiles became common during this period due to their ease of manufacture.
Scandinavian roof tiles have been seen on structures dating to the 1500s when city rulers in Holland required the use of fireproof materials. At the time, most houses were made of wood and had thatch roofing, which would often cause fires to spread quickly. To satisfy demand, many small roof-tile makers began to produce roof tiles by hand. The Scandinavian style of roof tile is a variation on the pantile which features a subdued "S" shape reminiscent of an ocean wave.
In Britain, tiles were also used to provide weather protection to the sides of timber frame buildings, a practice known as tile hanging. Another form of this is the so-called mathematical tile, which was hung on laths, nailed and then grouted. This form of tiling gives an imitation of brickwork and was developed to give the appearance of brick, but avoided the brick taxes of the 18th century.
Asia
China
Clay roof tiles are the main form of historic ceramic tilework in China, due largely to the emphasis that traditional Chinese architecture places on a roof as opposed to a wall.
Roof tile fragments have been found in the Loess Plateau dating to the Longshan period, showing some of the earliest pan and cover designs found in Asia. During the Song dynasty, the manufacture of glazed tiles was standardized in Li Jie's Yingzao Fashi. In the Ming dynasty and Qing dynasty, glazed tiles became ever more popular for top-tier buildings, including palace halls in the Forbidden City and ceremonial temples such as the Temple of Heaven.
Chinese architecture is notable for its advancement of colored gloss glazes for roof tiles. Marco Polo made note of these on his travels to China, writing:
The roof is all ablaze with scarlet and green and blue and yellow and all the colors that are, so brilliantly varnished that it glitters like crystal and the color of it can be seen from far away.
Japan
Japanese architecture includes Onigawara as roof ornamentation in conjunction with tiled roofs. They are generally roof tiles or statues depicting a Japanese ogre (oni) or a fearsome beast. Prior to the Heian period, similar ornaments with floral and plant designs "hanagawara" preceded the onigawara.
Onigawara are most often found on Buddhist temples. In some cases the ogre's face may be missing.
Korea
In Korea the use of tile, known as giwa, dates back to the Three Kingdoms period, but it was not until the Unified Silla period that tile roofing became widely used. Tiles were initially reserved for temples and royal buildings as a status symbol.
The designs used on giwa can have symbolic meanings, with different figures representing concepts such as spirituality, longevity, happiness, and enlightenment. The five elements of fire, water, wood, metal and earth were common decorations during the Three Kingdoms period, and during the Goryeo dynasty Celadon glaze was invented and used for the roof tiles of the upper class.
Many post-war Korean roofs feature giwa and a common ornamental symbol is the Mugunghwa, South Korea's national flower.
India
Neolithic sites such as Alamgirpur in Uttar Pradesh provide early evidence of roof tiles. They became more common during the Iron Age and the early historic period during the first millennium BCE. These early roof tiles were flat tiles and rounded or bent tiles, a form that was widespread across the Ganga Valley and the Indian Peninsula, suggesting that it was an essential architectural element of this period. This early form of roof tiles also influenced the roof tiles of neighboring Nepal and Sri Lanka.
Metal roof tiles made of gold, silver, bronze and copper are restricted to religious architecture in South Asia. A notable temple with golden roof tiles is the Nataraja temple of Chidambaram, where the roof of the main shrine in the inner courtyard has been laid with 21,600 golden tiles.
Southeast Asia
Tapered flat roof tiles have been used in Thailand, Laos and Cambodia since at least the 9th or 10th century CE, with widespread adoption after the 14th century, commonly to roof traditional Buddhist temple architecture. These shingle tiles have flat elongated bodies with a bent upper end for hooking at the roof and a pointed lower end.
In Indonesia, approximately 90% of houses on the island of Java use clay roof tile. Traditionally, Javanese architecture uses clay roof tiles; however, it was not until the late 19th century that the houses of commoners in Java and Bali started using them. The Dutch colonial administration encouraged the use of roof tiles to improve hygiene. Before roof tiles came into mass use in Java and Bali, commoners of both islands used thatched or nipa roofs, like the inhabitants of other Indonesian islands.
North America
Roof tiles were introduced to North America by colonizers from Europe, and typically were traditional designs native to their original country.
Pieces of clay roof tile have been found in archeological excavations of the English settlement at Roanoke Colony dating to 1585, and in later English settlements in Jamestown, Virginia and St. Mary's, Maryland. Spanish and French colonists brought their designs and styles of roofing tile to areas they settled along what are now the southern United States and Mexico, with Spanish-influenced tile fragments found in Saint Augustine, Florida, and both Spanish and French styles used in New Orleans, Louisiana.
Dutch settlers first imported tile to their settlements in what are now the Northeastern United States, and had established full-scale production of roofing tiles in the upper Hudson River Valley by 1650 to supply New Amsterdam.
Clay roof tiles were first produced on the West Coast at the Mission San Antonio de Padua in 1780. This Spanish-influenced style of tile remains in common use in California.
One notable site of roof tile production was Zoar, Ohio, where a religious sect of German Zoarites formed a commune in 1817 and produced their own roof tiles in a handmade German beaver-tail style for several decades.
From the 1700s through the early 1800s, clay roofing tile was a popular material in colonial American cities due to its fire-resistance, especially after the establishment of urban fire-codes.
In spite of improving manufacturing methods, clay tile fell out of favor within the United States around the 1820s, and cheaper alternatives such as wood shingle and slate tile became more common.
Post-vernacular history
Clay tiles
Beginning around the mid-1800s, expanding industrial production allowed for more efficient and large-scale production of clay roofing tile. At the same time, increasing city growth led to rising demand for fireproof materials to limit the danger of urban fires, such as the Great Chicago Fire of 1871.
These conditions combined to bring a significant expansion in the use of roof tile, with a shift from regional and hand-produced tile to patented and machine-made tile sold by large-scale companies.
Gilardoni tiles
The Gilardoni brothers of Altkirch, France were the first to develop a functional interlocking roof tile.
The Gilardonis' design marked a significant shift in the design of roofing tile. Prior to it, most roofing tile profiles could be made by hand without the need for large-scale machines, whereas the new interlocking tiles could only be produced with a tile press, yet were more cost-effective than comparable vernacular styles. Through the rest of the 19th century many companies began refining and developing other versions of interlocking tiles.
The Gilardoni brothers began making their design in 1835 and took out a patent on their first design of interlocking clay tile in 1841, with a new design patented ten years later. The Gilardonis shared their patent with six other French tile manufacturers between 1845 and 1860, contributing greatly to the spread of interlocking tile usage throughout France and Europe. Their company built additional factories and continued to operate until 1974.
Marseilles tiles
Another popular early interlocking tile pattern was the Marseilles design invented by the Martin Brothers in Marseilles, France as early as the 1860s. The Marseilles tile pattern is distinguished from other designs by its diagonal notches on its side rebate, as well as the teardrop-shaped end of its middle-rib.
While the Martin Brothers invented the design, its widespread use was more due to the pattern's adoption and international production after its original patent expired. The Marseilles tile was widely exported, especially in European colonies in South and Central America, Africa, and Australia.
French-manufactured Marseilles tiles were imported to Australia by 1886 and New Zealand by 1899. Many New Zealand railway stations were built with them, including Dunedin. Large scale production of Marseilles tiles by Wunderlich began in Australia during war-time import shortages in 1916. From 1920, factories at Pargny exported tiles to England. By 1929 Winstone were making them at Taumarunui, in a tile works established about 1910, which was replaced by Plimmerton in 1954.
Ludowici tiles
In 1881 Wilhelm Ludowici developed his own interlocking tile, an improvement upon the earlier designs which incorporated a double-rebate on the side, double head-fold at the top of the tile, and a strategically designed surface pattern for repelling water and melting snow from the top of the roof. Unlike other designs, Ludowici included his tile's central rib for functional reasons rather than aesthetic.
Ludowici's design was mass produced in Germany and later the United States by the Ludowici Roof Tile company, who advertised the pattern as French tile.
Many tiles found in the Mangalore region of India are derived from or made in this pattern. Clay roof tiles had been produced in the region since missionary Georg Plebst set up the first factory at Mangalore, Karnataka, India, in 1860 after finding large deposits of clay by the banks of the Gurupura and Nethravathi rivers. The initial tiles they produced were similar to the Gilardoni brothers' design, but later tiles adopted Ludowici's pattern. Over the years ten companies produced Mangalore tiles, which were exported around the Indian Ocean and subcontinent.
Conosera tiles
The Conosera tile was developed by George Herman Babcock in 1889, and was unique due to its diagonally interlocking structure and design allowing for more installation flexibility than other interlocking tile designs. Babcock designed the pattern with towers and spires in mind, since his design significantly reduced the number of graduated tile sizes needed to roof a cone.
Conosera was initially manufactured and sold by the Celadon Terra Cotta Company of Alfred, New York. After a merger formed the Ludowici-Celadon Company in 1906 the group continued to produce Conosera tile for special orders.
Concrete tiles
The earliest known concrete tiles were developed in the 1840s by Adolph Kroher. While visiting Grassau, Bavaria, Kroher learned about locals' use of regional minerals to create stucco and began to experiment with the material, developing a diamond-shaped interlocking pattern of concrete tile which became one of his company's primary profiles. He also manufactured a concrete pantile similar to the Scandinavian style of clay tile.
In order to reduce the high shipping cost for his tile, Kroher adopted a 'do-it-yourself' method of tile manufacture for some time, where he sold a supply of cement and the necessary tools for a home-builder to create their own tiles. This had the disadvantage that the cement was mixed by amateurs and was not always prepared consistently or correctly.
Concrete tiles became more widespread in Germany over the next few decades after manufacturers such as Jörgen Peter Jörgensen and Hartwig Hüser began producing interlocking and overlapping designs.
The concrete tile industry grew and spread internationally through the early 20th century, driven by its cheapness to produce at scale. Researchers considered concrete tile inferior to clay tile, largely due to its fundamental weaknesses of porosity and color impermanence.
Glass tiles
Glass tiles, also referred to as skylight tiles, are used as accessories alongside clay roof tiles. These were first developed in the 1890s and designed to allow light into spaces roofed with interlocking tiles, such as warehouses and factories.
It is uncommon for a roof to be completely covered in glass tiles; however, there are a few exceptions, such as the tower of Seattle's King Street Station.
Plastic tiles
Plastic tiles, marketed as composite or synthetic tiles, became available towards the end of the 20th century. Their exact invention date is unclear, but most became available around the year 2000.
Plastic tiles are generally designed to imitate slate or clay tiles, and achieve their color through synthetic dyes added to the plastic. They are produced through injection molding.
Solar tiles
Dow Chemical Company began producing solar roof tiles in 2005, and several other manufacturers followed suit. They are similar in design to conventional roof tiles but contain a photovoltaic cell in order to generate renewable electricity.
In 2016 a collaboration between the companies SolarCity and Tesla produced a hydrographically printed tile which appears to be a regular tile from street level but is transparent to sunlight when viewed straight on. Tesla later acquired SolarCity and the solar shingle product was described as "a flop" in 2019. The company later dropped their claim that their tiles were three times as strong as standard tiles, without specifying why they backed away from the claim.
Fittings and trim
Tile roofs require fittings and trim pieces to seal gaps along the ridge and edges of a roof.
Ridge pieces
Ridge pieces are laid upon the very top ridge of a roof, where the planes of a pitched roof meet. This section is usually parallel to the ground beneath.
The tiles which cover this section of the roof have to direct water away from the top of the ridge and onto either side of the pitched roof below.
Terminals
Terminals are ridge tile fittings that are used as an endcap on the gable end or apex of a roof. In some cases these can be highly decorative, taking the form of a sculpture or figurine, while in others they can be more practical and architectural in nature.
Graduated tiles
Graduated roof tiles are tiles designed to "graduate" in size from top to bottom, with smaller tiles at the top and larger ones at the bottom. They are necessary when installing a tile roof on a tower, cone, or dome and need to be specially designed for each roof they are used on for effective functionality.
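A rough back-of-the-envelope sketch can make this concrete (the dimensions below are assumed purely for illustration and are not taken from this article): because a cone's circumference shrinks toward the apex, the number of standard-width tiles that fit in a course drops with every course, which is why the tiles themselves must be made progressively smaller.

import math

# Illustrative sketch (assumed dimensions): tiles needed per course on a conical roof,
# showing why tile sizes must "graduate" from eave to apex.
base_radius_m = 3.0      # radius of the cone at the eave
slant_height_m = 4.0     # slant height from eave to apex
course_height_m = 0.3    # exposed height of one course of tiles
tile_width_m = 0.25      # width of a standard (non-graduated) tile

courses = int(slant_height_m / course_height_m)
for course in range(courses):
    # The radius of the cone shrinks roughly linearly from eave to apex.
    r = base_radius_m * (1 - course / courses)
    circumference = 2 * math.pi * r
    tiles_needed = circumference / tile_width_m
    print(f"course {course + 1:2d}: radius {r:.2f} m -> "
          f"{tiles_needed:.1f} standard-width tiles per course")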
Antefix
An antefix is a vertical block which terminates and conceals the base of a mission, imbrex and tegula, or pantile roof.
They are commonly a fixture of Greek and Roman tile roofs and can often be highly ornamental.
Under eave tiles
Under eave tiles are tiles, often ornamental, applied beneath the eave of a roof structure. They are found in the temple architecture of Sri Lanka, among other locations.
Characteristics
Durability
The durability of roofing tiles varies greatly based on material composition and manufacture. Durability is directly related to three factors: a resistance to chemical decomposition, a low porosity, and a high breaking strength.
Chemical decomposition
Clay and slate tiles are stable materials and naturally resistant to chemical decomposition; plastic composite tiles and concrete tiles, however, will inevitably decay over time. As a result of this, high-quality clay and slate tiles have a proven lifespan of over 100 years, whereas synthetic and concrete tiles usually have a practical lifespan of 30–50 years. In the case of synthetic plastic tiles, this is purely an estimate since the oldest products on the market date to around 2000. The main cause of plastic tile decay is exposure to ultraviolet radiation, which weakens the chemical bonds of the material and causes the tiles to become more brittle over time.
A common effect seen in cement roof tiles is efflorescence, which is caused by the presence of free lime within concrete. This lime reacts with water to form calcium hydroxide, which creates a chalky deposit on the outside of the tiles. While not detrimental to the strength or durability of the cement tiles, this effect is considered unappealing.
Porosity
Tiles with a porosity above 2% allow for intrusion and absorption of water, which can be detrimental in climates with freeze-thaw conditions or salt air intrusion. During a freeze-thaw cycle, water that infiltrates a tile will see volume expansions of 9% upon freezing, which exerts pressure within any pores it manages to enter and causes cracks to grow. When the ice melts, water spreads further into those cracks and will then apply more stress to them upon the next freeze. A similar effect can be seen in areas near the ocean that experience salt-air intrusion, which can lead to salt crystal permeation and expansion.
Clay tile porosity can range greatly depending on quality of production, but some manufacturers can achieve less than 2% moisture absorption. Concrete roof tiles tend to feature around 13% moisture absorption, which requires periodic resealing every 3–7 years to avoid critical failure. The inherent porosity of cement requires that cement tiles be made very heavy and thick; as a result, they have consistently been one of the heaviest roofing materials on the market.
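For a rough sense of what these absorption figures mean in practice, the following sketch (tile masses are assumed illustrative values, not figures from this article) estimates the water a clay tile at about 2% absorption and a concrete tile at about 13% absorption could take up, and the extra volume that water would occupy on freezing, using the roughly 9% expansion noted above.

# Back-of-the-envelope moisture-uptake comparison.
# Tile masses are assumed illustrative values, not figures from the article.
WATER_DENSITY = 1000.0      # kg/m^3
FREEZE_EXPANSION = 0.09     # ~9 % volume increase of water on freezing

def absorbed_water(tile_mass_kg, absorption_fraction):
    """Mass of water a tile can absorb, with absorption given as a fraction of tile mass."""
    return tile_mass_kg * absorption_fraction

for name, mass_kg, absorption in (("clay tile", 3.0, 0.02),
                                  ("concrete tile", 5.0, 0.13)):
    water_kg = absorbed_water(mass_kg, absorption)
    ice_extra_volume_ml = water_kg / WATER_DENSITY * FREEZE_EXPANSION * 1e6
    print(f"{name}: ~{water_kg * 1000:.0f} g of water absorbed, "
          f"~{ice_extra_volume_ml:.0f} mL of extra volume if it freezes")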
It is commonly believed that a porous clay tile can be waterproofed through the application of a glaze; however, studies have shown that this is not the case. If a clay body contains significant pores, water will permeate them over time regardless of exterior coating.
Breaking strength
The breaking strength of clay tiles can vary greatly by manufacturer, depending on a combination of factors such as their firing temperature, specific clay composition, and length of the firing cycle. Despite the common conception of clay tiles being fragile, higher-grade manufacturers produce tiles with breaking strengths ranging from 700 to 1500 pounds.
The breaking strength of plastic roof tiles varies greatly depending on temperature. Unlike ceramics or metals, plastics have glass transition temperatures that fall within the range of winter temperatures, often resulting in them becoming extremely brittle during colder periods.
Color
Clay roof tiles historically gained their color purely from the clay that they were composed of, resulting in largely red, orange, and tan colored roofs. Over time some cultures, notably in Asia, began to apply glazes to clay tiles, achieving a wide variety of colors and combinations.
Originally, most color variation on matte clay tiles was caused by variation in kiln firing temperature, kiln atmospheric conditions, and in some cases reductive firing. Many producers have shifted away from this process since low firing temperatures typically result in a higher porosity and lower breaking strength.
Engobes are now commonly used to replicate the appearance of historic firing variation, using a thin colored ceramic coating which chemically bonds to the tile to provide any range of matte colors to the fired tiles while allowing consistent firing conditions. Glazes are used when a shinier gloss appearance is desired. Like their clay base, both engobes and glazes are fully impervious to color fading regardless of UV exposure, which makes them unique among artificial colorants.
The color of slate tiles is a result of the amount and type of iron and organic material that are present, and most often ranges from light to dark gray. Some shades of slate used for roofing can be shades of green, red, black, purple, and brown.
Cement tiles typically are colored either through the use of a pigment added to the cement body, or through a concentrated slurry coat of cement-infused pigment on the outside of the tiles. Due to the simple production process and comparatively low firing temperature, cement tiles fade over time and often require painting to restore a "new" appearance.
Plastic tiles are colored through the incorporation of synthetic dyes added to them during molding. As a result of their reactive chemical composition they can suffer degradation from UV rays and fade after a few years of use.
Gallery
See also
Covering (construction)
References
External links
Technical note on peg tile restoration work
Tile
Building materials
Terracotta
Tiling | Roof tiles | [
"Physics",
"Technology",
"Engineering"
] | 5,457 | [
"Structural engineering",
"Building engineering",
"Architecture",
"Structural system",
"Construction",
"Materials",
"Roofs",
"Matter",
"Building materials"
] |
26,599,382 | https://en.wikipedia.org/wiki/Priestley%20space | In mathematics, a Priestley space is an ordered topological space with special properties. Priestley spaces are named after Hilary Priestley who introduced and investigated them. Priestley spaces play a fundamental role in the study of distributive lattices. In particular, there is a duality ("Priestley duality") between the category of Priestley spaces and the category of bounded distributive lattices.
Definition
A Priestley space is an ordered topological space (X, τ, ≤), i.e. a set X equipped with a partial order ≤ and a topology τ, satisfying
the following two conditions:
 (X, τ) is compact.
 If x ≰ y, then there exists a clopen up-set U of X such that x ∈ U and y ∉ U. (This condition is known as the Priestley separation axiom.)
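For illustration, here is a minimal worked example that is not part of the original article, using the notation reconstructed above: the two-element chain with the discrete topology.

% Illustrative example: the two-element chain as a Priestley space.
% X = {0, 1} with 0 <= 1 and the discrete topology P(X), which is compact since X is finite.
% The only order failure is 1 not<= 0, and the clopen up-set U = {1} separates the two points:
\[
  X = \{0, 1\}, \qquad \tau = \mathcal{P}(X), \qquad
  1 \nleq 0, \qquad U = \{1\}, \quad 1 \in U, \ 0 \notin U,
\]
% so the Priestley separation axiom holds and (X, τ, ≤) is a Priestley space.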
Properties of Priestley spaces
Each Priestley space is Hausdorff. Indeed, given two points x, y of a Priestley space (X, τ, ≤), if x ≠ y, then as ≤ is a partial order, either x ≰ y or y ≰ x. Assuming, without loss of generality, that x ≰ y, (ii) provides a clopen up-set U of X such that x ∈ U and y ∉ U. Therefore, U and X − U are disjoint open subsets of X separating x and y.
Each Priestley space is also zero-dimensional; that is, each open neighborhood U of a point x of a Priestley space contains a clopen neighborhood C of x. To see this, one proceeds as follows. For each y ∈ X − U, either x ≰ y or y ≰ x. By the Priestley separation axiom, there exists a clopen up-set or a clopen down-set containing x and missing y. The intersection of these clopen neighborhoods of x does not meet X − U. Therefore, as X is compact, there exists a finite intersection of these clopen neighborhoods of x missing X − U. This finite intersection is the desired clopen neighborhood C of x contained in U.
It follows that for each Priestley space (X, τ, ≤), the topological space (X, τ) is a Stone space; that is, it is a compact Hausdorff zero-dimensional space.
Some further useful properties of Priestley spaces are listed below.
Let (X, τ, ≤) be a Priestley space.
(a) For each closed subset F of X, both ↑F and ↓F are closed subsets of X.
(b) Each open up-set of X is a union of clopen up-sets of X and each open down-set of X is a union of clopen down-sets of X.
(c) Each closed up-set of X is an intersection of clopen up-sets of X and each closed down-set of X is an intersection of clopen down-sets of X.
(d) Clopen up-sets and clopen down-sets of X form a subbasis for (X, τ).
(e) For each pair of closed subsets F and G of X, if ↑F ∩ ↓G = ∅, then there exists a clopen up-set U such that F ⊆ U and U ∩ G = ∅.
A Priestley morphism from a Priestley space (X, τ, ≤) to another Priestley space (X′, τ′, ≤′) is a map f : X → X′ which is continuous and order-preserving.
Let Pries denote the category of Priestley spaces and Priestley morphisms.
Connection with spectral spaces
Priestley spaces are closely related to spectral spaces. For a Priestley space (X, τ, ≤), let τ_u denote the collection of all open up-sets of X. Similarly, let τ_d denote the collection of all open down-sets of X.
Theorem:
If (X, τ, ≤) is a Priestley space, then both (X, τ_u) and (X, τ_d) are spectral spaces.
Conversely, given a spectral space (X, τ), let τ* denote the patch topology on X; that is, the topology generated by the subbasis consisting of the compact open subsets of (X, τ) and their complements. Let also ≤ denote the specialization order of (X, τ).
Theorem:
If (X, τ) is a spectral space, then (X, τ*, ≤) is a Priestley space.
In fact, this correspondence between Priestley spaces and spectral spaces is functorial and yields an isomorphism between Pries and the category Spec of spectral spaces and spectral maps.
Connection with bitopological spaces
Priestley spaces are also closely related to bitopological spaces.
Theorem:
If (X, τ, ≤) is a Priestley space, then (X, τ_u, τ_d) is a pairwise Stone space. Conversely, if (X, τ_1, τ_2) is a pairwise Stone space, then (X, τ, ≤) is a Priestley space, where τ is the join of τ_1 and τ_2 and ≤ is the specialization order of (X, τ_1).
The correspondence between Priestley spaces and pairwise Stone spaces is functorial and yields an isomorphism between the category Pries of Priestley spaces and Priestley morphisms and the category PStone of pairwise Stone spaces and bi-continuous maps.
Thus, one has the following isomorphisms of categories: Pries ≅ Spec ≅ PStone.
One of the main consequences of the duality theory for distributive lattices is that each of these categories is dually equivalent to the category of bounded distributive lattices.
See also
Spectral space
Pairwise Stone space
Distributive lattice
Stone duality
Duality theory for distributive lattices
Notes
References
Topology
Topological spaces | Priestley space | [
"Physics",
"Mathematics"
] | 943 | [
"Mathematical structures",
"Space (mathematics)",
"Topological spaces",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
26,599,926 | https://en.wikipedia.org/wiki/Duality%20theory%20for%20distributive%20lattices | In mathematics, duality theory for distributive lattices provides three different (but closely related) representations of bounded distributive lattices via Priestley spaces, spectral spaces, and pairwise Stone spaces. This duality, which is originally also due to Marshall H. Stone, generalizes the well-known Stone duality between Stone spaces and Boolean algebras.
Let L be a bounded distributive lattice, and let X denote the set of prime filters of L. For each a ∈ L, let φ(a) = {x ∈ X : a ∈ x}. Then (X, τ) is a spectral space, where the topology τ on X is generated by {φ(a) : a ∈ L}. The spectral space (X, τ) is called the prime spectrum of L.
The map φ is a lattice isomorphism from L onto the lattice of all compact open subsets of (X, τ). In fact, each spectral space is homeomorphic to the prime spectrum of some bounded distributive lattice.
Similarly, if ψ(a) = X − φ(a) and τ_d denotes the topology generated by {ψ(a) : a ∈ L}, then (X, τ_d) is also a spectral space. Moreover, (X, τ, τ_d) is a pairwise Stone space. The pairwise Stone space (X, τ, τ_d) is called the bitopological dual of L. Each pairwise Stone space is bi-homeomorphic to the bitopological dual of some bounded distributive lattice.
Finally, let ⊆ be set-theoretic inclusion on the set of prime filters of L and let τ_p = τ ∨ τ_d. Then (X, τ_p, ⊆) is a Priestley space. Moreover, φ is a lattice isomorphism from L onto the lattice of all clopen up-sets of (X, τ_p, ⊆). The Priestley space (X, τ_p, ⊆) is called the Priestley dual of L. Each Priestley space is isomorphic to the Priestley dual of some bounded distributive lattice.
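A small worked example, added here for illustration and not part of the original article, may help: the Priestley dual of the three-element chain.

% Illustrative example: L = {0, a, 1} with 0 < a < 1.
% Its prime filters are x_1 = {1} and x_2 = {a, 1}, ordered by inclusion (x_1 ⊆ x_2);
% the patch topology on the two-point set X is discrete.
\[
  \varphi(0) = \emptyset, \qquad \varphi(a) = \{x_2\}, \qquad \varphi(1) = \{x_1, x_2\} = X .
\]
% The clopen up-sets of the dual are exactly ∅, {x_2} and X, so the lattice of clopen
% up-sets is isomorphic to L, as the duality predicts.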
Let Dist denote the category of bounded distributive lattices and bounded lattice homomorphisms. Then the above three representations of bounded distributive lattices can be extended to dual equivalences between Dist and the categories Spec, PStone, and Pries of spectral spaces with spectral maps, of pairwise Stone spaces with bi-continuous maps, and of Priestley spaces with Priestley morphisms, respectively: Dist is dually equivalent to each of Spec, PStone, and Pries.
Thus, there are three equivalent ways of representing bounded distributive lattices. Each one has its own motivation and advantages, but ultimately they all serve the same purpose of providing better understanding of bounded distributive lattices.
See also
Representation theorem
Birkhoff's representation theorem
Stone's representation theorem for Boolean algebras
Stone duality
Esakia duality
Notes
References
Priestley, H. A. (1970). Representation of distributive lattices by means of ordered Stone spaces. Bull. London Math. Soc., (2) 186–190.
Priestley, H. A. (1972). Ordered topological spaces and the representation of distributive lattices. Proc. London Math. Soc., 24(3) 507–530.
Stone, M. (1938). Topological representation of distributive lattices and Brouwerian logics. Casopis Pest. Mat. Fys., 67 1–25.
Cornish, W. H. (1975). On H. Priestley's dual of the category of bounded distributive lattices. Mat. Vesnik, 12(27) (4) 329–332.
M. Hochster (1969). Prime ideal structure in commutative rings. Trans. Amer. Math. Soc., 142 43–60
Johnstone, P. T. (1982). Stone spaces. Cambridge University Press, Cambridge.
Jung, A. and Moshier, M. A. (2006). On the bitopological nature of Stone duality. Technical Report CSR-06-13, School of Computer Science, University of Birmingham.
Bezhanishvili, G., Bezhanishvili, N., Gabelaia, D., Kurz, A. (2010). Bitopological duality for distributive lattices and Heyting algebras. Mathematical Structures in Computer Science, 20.
Topology
Category theory
Lattice theory
Duality theories | Duality theory for distributive lattices | [
"Physics",
"Mathematics"
] | 831 | [
"Functions and mappings",
"Mathematical structures",
"Lattice theory",
"Mathematical objects",
"Fields of abstract algebra",
"Topology",
"Space",
"Category theory",
"Mathematical relations",
"Geometry",
"Duality theories",
"Spacetime",
"Order theory"
] |
26,600,432 | https://en.wikipedia.org/wiki/Esakia%20duality | In mathematics, Esakia duality is the dual equivalence between the category of Heyting algebras and the category of Esakia spaces. Esakia duality provides an order-topological representation of Heyting algebras via Esakia spaces.
Let Esa denote the category of Esakia spaces and Esakia morphisms.
Let A be a Heyting algebra, X denote the set of prime filters of A, and ≤ denote set-theoretic inclusion on the prime filters of A. Also, for each a ∈ A, let φ(a) = {x ∈ X : a ∈ x}, and let τ denote the topology on X generated by {φ(a), X − φ(a) : a ∈ A}.
Theorem: (X, τ, ≤) is an Esakia space, called the Esakia dual of A. Moreover, φ is a Heyting algebra isomorphism from A onto the Heyting algebra of all clopen up-sets of (X, τ, ≤). Furthermore, each Esakia space is isomorphic in Esa to the Esakia dual of some Heyting algebra.
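As an added illustration (not in the original article), take the three-element chain A = {0 < a < 1} viewed as a Heyting algebra, with prime filters x_1 = {1} and x_2 = {a, 1} ordered by inclusion. On the dual side the Heyting implication of clopen up-sets is computed, as is standard in Esakia duality, by U → V = X − ↓(U − V); one can check, for instance, that this recovers a → 0 = 0:

% φ(0) = ∅, φ(a) = {x_2}, φ(1) = X = {x_1, x_2}, with x_1 ≤ x_2.
\[
  \varphi(a) \to \varphi(0)
    \;=\; X \setminus {\downarrow}\bigl(\varphi(a) \setminus \varphi(0)\bigr)
    \;=\; X \setminus {\downarrow}\{x_2\}
    \;=\; \emptyset
    \;=\; \varphi(a \to 0).
\]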
This representation of Heyting algebras by means of Esakia spaces is functorial and yields a dual equivalence between the categories
HA of Heyting algebras and Heyting algebra homomorphisms
and
Esa of Esakia spaces and Esakia morphisms.
Theorem: HA is dually equivalent to Esa.
The duality can also be expressed in terms of spectral spaces, where it says that the category of Heyting algebras is dually equivalent
to the category of Heyting spaces.
See also
Duality theory for distributive lattices
References
Topology
Lattice theory
Duality theories | Esakia duality | [
"Physics",
"Mathematics"
] | 290 | [
"Mathematical structures",
"Lattice theory",
"Fields of abstract algebra",
"Topology",
"Space",
"Duality theories",
"Geometry",
"Category theory",
"Spacetime",
"Order theory"
] |
39,308,546 | https://en.wikipedia.org/wiki/Lilotomab | Lilotomab (formerly tetulomab, HH1) is a murine monoclonal antibody against CD37, a glycoprotein which is expressed on the surface of mature human B cells. It was generated at the Norwegian Radium Hospital.
As of 2016 it was under development by the Norwegian company Nordic Nanovector ASA as a radioimmunotherapeutic in which lilotomab is conjugated to the beta radiation-emitting isotope lutetium-177 by means of a linker called satetraxetan, a derivative of DOTA. This compound is called 177Lu-HH1 or lutetium (177Lu) lilotomab satetraxetan (trade name Betalutin). As of 2016, a phase 1/2 clinical trial in people with non-Hodgkin lymphoma was underway.
References
Further reading
Experimental cancer drugs
Radiopharmaceuticals
Lutetium complexes | Lilotomab | [
"Chemistry"
] | 206 | [
"Chemicals in medicine",
"Radiopharmaceuticals",
"Medicinal radiochemistry"
] |
39,310,035 | https://en.wikipedia.org/wiki/Dependence%20receptor | In cellular biology, dependence receptors are proteins that mediate programmed cell death by monitoring the absence of certain trophic factors (or, equivalently, the presence of anti-trophic factors) that otherwise serve as ligands (interactors) for the dependence receptors.
A trophic ligand is a molecule whose protein binding stimulates cell growth, differentiation, and/or survival.
Cells depend for their survival on stimulation that is mediated by various receptors and sensors, and integrated via signaling within the cell and between cells.
The withdrawal of such trophic support leads to a form of cellular suicide.
Various dependence receptors are involved in a range of biological events: developmental cell death (naturally occurring cell death), trophic factor withdrawal-induced cell death, the spontaneous regression characteristic of type IV-S neuroblastoma, neurodegenerative cell death, inhibition of new tumor cells (tumorigenesis) and metastasis, and therapeutic antibody-mediated tumor cell death, as well as programmed cell death in other instances.
Since these receptors may support either cell death or cell survival, they represent a new type of tumor suppressor, a conditional tumor suppressor.
In addition, events such as cellular atrophy and process retraction may also be mediated by dependence receptors, although this has not been as well documented as the induction of programmed cell death.
Receptors
The following is the list of known dependence receptors:
Notch3
Kremen1
DCC (Deleted in Colorectal Carcinoma)
UNC5 receptors (UNC5A, UNC5B, UNC5C, UNC5D)
Neogenin
p75NTR
Ptch1
CDON
PLXND1
RET
TrkA
TrkC
EphA4
c-Met
Insulin receptor IR
Insulin-like growth factor 1 receptor
ALK (anaplastic lymphoma kinase)
Androgen receptor
Some integrins
NTRK3
Background
Cells depend for their survival on stimulation that is mediated by various receptors and sensors. For any required stimulus, its withdrawal leads to a form of cellular suicide; that is, the cell plays an active role in its own demise. The term programmed cell death was first suggested by Lockshin & Williams in 1964.
Apoptosis, a form of programmed cell death, was first described by Kerr et al. in 1972,
although the earliest references to the morphological appearance of such cells may date back to the late 19th century.
Cells require different stimuli for survival, depending on their type and state of differentiation.
For example, prostate epithelial cells require testosterone for survival, and the withdrawal of testosterone leads to apoptosis in these cells.
How do cells recognize a lack of stimulus? While positive survival signals are clearly important, a complementary form of signal transduction is pro-apoptotic, and is activated or propagated by stimulus withdrawal or by the addition of an “anti-trophin.”
The dependence receptor notion was based on the observation that the effects of a number of receptors that function in both nervous system development and the production of tumors (especially metastasis) cannot be explained simply by a positive effect of signal transduction induced by ligand binding, but rather must also include cell death signaling in response to trophic withdrawal.
Positive survival signals involve classical signal transduction, initiated by interactions between ligands and receptors. Negative survival signals involve an alternative form of signal transduction that is initiated by the withdrawal of ligands from dependence receptors. This process is seen in developmental cell death, carcinogenesis (especially metastasis), neurodegeneration, and possibly non-lethal (sub-apoptotic) events such as neurite retraction and somal atrophy. Mechanistic studies of dependence receptors suggest that these receptors form complexes that activate and amplify caspase activity. In at least some cases, the caspase activation is via a pathway that is dependent on caspase-9 but not on mitochondria.
Some of the downstream mediators have been identified, such as DAP kinase and the DRAL gene.
Dependence receptors display the common property that they mediate two different intracellular signals: in the presence of ligand, these receptors transduce a positive signal leading to survival, differentiation or migration; conversely, in the absence of ligand, the receptors initiate and/or amplify a signal for programmed cell death. Thus cells that express these proteins at sufficient concentrations manifest a state of dependence on their respective ligands. The signaling that mediates cell death induction upon ligand withdrawal is incompletely defined, but typically includes a required interaction with, and cleavage by, specific caspases.
Mutation of the caspase site(s) in the receptor, of which there is typically one or two, prevents the trophic ligand withdrawal-induced programmed cell death.
Complex formation appears to be a function of ligand-receptor interaction, and dependence receptors appear to exist in at least two conformational states.
Complex formation in the absence of ligand leads to caspase activation by a mechanism that is usually dependent on caspase cleavage of the receptor itself, releasing pro-apoptotic peptides.
Thus these receptors may serve in caspase amplification, and in so doing create cellular states of dependence on their respective ligands.
These states of dependence are not absolute, since they can be blocked downstream in some cases by the expression of anti-apoptotic genes such as Bcl-2 or P35.
However, they result in a shift toward an increased likelihood of a cell's undergoing apoptosis.
Research
Research has highlighted the role of the dependence receptor UNC5D in the phenomenon of spontaneous regression of type IV-S neuroblastoma.
TrkA and TrkC have been shown to function as dependence receptors,
with TrkC mediating both neural cell death and tumorigenesis.
In addition, although dependence receptors have been described as mediating programmed cell death in the absence of binding of trophic ligand, the possibility that a similar effect might be achieved by the binding of a physiological anti-trophin has been raised, and it has been suggested that the Alzheimer's disease-associated peptide, Aβ, may play such a role.
References
Apoptosis
Cell signaling
Molecular neuroscience
Programmed cell death
Receptors
Single-pass transmembrane proteins | Dependence receptor | [
"Chemistry",
"Biology"
] | 1,300 | [
"Signal transduction",
"Senescence",
"Receptors",
"Molecular neuroscience",
"Apoptosis",
"Molecular biology",
"Programmed cell death"
] |
39,313,159 | https://en.wikipedia.org/wiki/Nanofluids%20in%20solar%20collectors | Nanofluid-based direct solar collectors are solar thermal collectors where nanoparticles in a liquid medium can scatter and absorb solar radiation. They have recently received interest to efficiently distribute solar energy. Nanofluid-based solar collector have the potential to harness solar radiant energy more efficiently compared to conventional solar collectors.
Nanofluids have recently found relevance in applications requiring quick and effective heat transfer, such as industrial applications, cooling of microchips, and microscopic fluidic applications. Moreover, in contrast to conventional heat transfer fluids for solar thermal applications, such as water, ethylene glycol, and molten salts, nanofluids are not transparent to solar radiant energy; instead, they significantly absorb and scatter the solar irradiance passing through them.
Typical solar collectors use a black-surface absorber to collect the sun's heat energy, which is then transferred to a fluid running in tubes embedded within. Various limitations have been identified with this configuration, and alternative concepts have been explored. Among these, the use of nanoparticles suspended in a liquid is the subject of research. Nanoparticle materials including aluminium, copper, carbon nanotubes and carbon nanohorns have been added to different base fluids and characterized in terms of their performance for improving heat transfer efficiency.
Background
Dispersing trace amounts of nanoparticles into common base fluids has a significant impact on the optical as well as thermophysical properties of the base fluid, mainly increasing the thermal conductivity. This characteristic can be used to effectively capture and transport solar radiation. Enhancement of the solar irradiance absorption capacity leads to a higher heat transfer rate and thus to more efficient collection.
The efficiency of a solar thermal system relies on several energy conversion steps, which are in turn governed by the effectiveness of the heat transfer processes. While higher conversion efficiency of solar to thermal energy is possible, the key component that needs to be improved is the solar collector. An ideal solar collector absorbs the concentrated solar radiation, converts part of that incident radiation into heat, and transfers the heat to the heat transfer fluid. A higher heat transfer rate to the fluid leads to a higher outlet temperature, and higher temperatures lead to improved conversion efficiency in the power cycle.
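As a simple illustration of how the outlet temperature enters the efficiency, the instantaneous thermal efficiency of a collector can be written as the ratio of useful heat gain to incident solar power; this is a standard textbook energy balance, not a relation stated in this article, and the symbols below are the usual ones rather than quantities defined here.

% Instantaneous collector thermal efficiency (standard energy-balance definition):
%   \dot{m}  : mass flow rate of the working fluid (nanofluid or base fluid)
%   c_p      : specific heat of the fluid
%   T_out, T_in : outlet and inlet temperatures
%   A_c      : collector aperture area, G_T : incident solar irradiance
\[
  \eta \;=\; \frac{\dot{m}\, c_p \left(T_{\text{out}} - T_{\text{in}}\right)}{A_c\, G_T}
\]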
Nanoparticles have a heat transfer coefficient several orders of magnitude higher when transferring heat directly to the surrounding fluid. This is simply due to the small size of the nanoparticles.
Mechanism for enhanced thermal conductivity of nanofluids
Keblinski et al. named four main possible mechanisms for the anomalous increase in nanofluid heat transfer, which are:
Brownian motion of nanoparticles
Due to Brownian motion, particles move randomly through the liquid, which allows better transport of heat.
Although it was originally believed that the fluid motions resulting from Brownian motion of the nanoparticles could explain the enhancement in heat transfer properties, this hypothesis was later rejected.
Liquid layering at liquid/particle interface
Liquid molecules can form a layer around the solid particles and thereby enhance the local ordering of the atomic structure at the interface region. Hence, the atomic structure of such a liquid layer is more ordered than that of the bulk liquid.
Effect of nano-particles clustering
The effective volume of a cluster is considered much larger than the volume of the particles due to the lower packing fraction of the cluster. Since heat can be transferred rapidly within such clusters, the effective volume fraction of the highly conductive phase is larger than that of the solid alone, thus increasing the thermal conductivity.
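As a rough quantitative illustration of the conductivity enhancement discussed above, the classical Maxwell (Maxwell-Garnett) effective-medium model, which the nanofluid literature commonly uses as a baseline even though it underpredicts some measured enhancements, can be evaluated in a few lines of Python; the property values below are typical textbook figures, not data from this article.

def maxwell_effective_conductivity(k_fluid, k_particle, phi):
    """Maxwell (Maxwell-Garnett) model for the effective thermal conductivity
    of a dilute suspension of spherical particles at volume fraction phi."""
    num = k_particle + 2 * k_fluid + 2 * phi * (k_particle - k_fluid)
    den = k_particle + 2 * k_fluid - phi * (k_particle - k_fluid)
    return k_fluid * num / den

# Illustrative values (typical textbook figures, W/m-K): water base fluid, copper particles.
k_water, k_copper = 0.6, 400.0
for phi in (0.01, 0.03, 0.05):  # 1-5 % particle volume fraction
    k_eff = maxwell_effective_conductivity(k_water, k_copper, phi)
    print(f"phi = {phi:.0%}: k_eff = {k_eff:.3f} W/m-K "
          f"({(k_eff / k_water - 1) * 100:.1f}% enhancement)")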
Comparison
In the last ten years, many experimental, numerical, and analytical studies have been conducted to validate the importance of nanofluids.
From table 1 it is clear that nanofluid-based collectors have a higher efficiency than conventional collectors, so a conventional collector can be improved simply by adding trace amounts of nanoparticles.
It has also been observed through numerical simulation that the mean outlet temperature increases with increasing nanoparticle volume fraction and tube length, and decreases with decreasing velocity.
Benefits of use of nanofluids in solar collectors
Nanofluids offer the following advantages over conventional fluids, which make them suitable for use in solar collectors:
Absorption of solar energy can be maximized by tuning the size, shape, material and volume fraction of the nanoparticles.
The suspended nanoparticles increase the surface area but decrease the heat capacity of the fluid due to the very small particle size.
The suspended nanoparticles enhance the thermal conductivity, which results in improved efficiency of heat transfer systems.
The properties of the fluid can be changed by varying the concentration of nanoparticles.
Extremely small size of nanoparticles ideally allows them to pass through pumps.
Nanofluids can be optically selective (high absorption in the solar range and low emittance in the infrared).
The fundamental difference between the conventional and nanofluid-based collector lies in the mode of heating of the working fluid. In the former case the sunlight is absorbed by a surface, whereas in the latter case the sunlight is directly absorbed by the working fluid (through radiative transfer). On reaching the receiver, the solar radiation transfers energy to the nanofluid via scattering and absorption.
See also
Nanofluid
Absorption
Fluid
Radiation
Scattering
Solar collector
Solar energy
References
Further reading
Nanoelectronics
Nanoparticles
Fluid mechanics
Heat transfer
Solar energy | Nanofluids in solar collectors | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,091 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Civil engineering",
"Thermodynamics",
"Nanoelectronics",
"Nanotechnology",
"Fluid mechanics"
] |
39,314,537 | https://en.wikipedia.org/wiki/Biomimetic%20architecture | Biomimetic architecture is a branch of the new science of biomimicry defined and popularized by Janine Benyus in her 1997 book (Biomimicry: Innovation Inspired by Nature). Biomimicry (bios - life and mimesis - imitate) refers to innovations inspired by nature as one which studies nature and then imitates or takes inspiration from its designs and processes to solve human problems. The book suggests looking at nature as a Model, Measure, and Mentor", suggesting that the main aim of biomimicry is sustainability.
Living beings have adapted to a constantly changing environment during evolution through mutation, recombination, and selection. The core idea of the biomimetic philosophy is that nature's inhabitants including animals, plants, and microbes have the most experience in solving problems and have already found the most appropriate ways to last on planet Earth. Similarly, biomimetic architecture seeks solutions for building sustainability present in nature, not only by replicating their natural forms, but also by understanding the rules governing those forms.
The 21st century has seen a ubiquitous waste of energy due to inefficient building designs, in addition to the over-utilization of energy during the operational phase of a building's life cycle. In parallel, recent advancements in fabrication techniques, computational imaging, and simulation tools have opened up new possibilities to mimic nature across different architectural scales. As a result, there has been a rapid growth in devising innovative design approaches and solutions to counter energy problems. Biomimetic architecture is one of these multi-disciplinary approaches to sustainable design that follows a set of principles rather than stylistic codes, going beyond using nature as inspiration for the aesthetic components of built form, and instead seeking to use nature to solve problems of the building's functioning and to save energy.
History
Architecture has long drawn from nature as a source of inspiration. Biomorphism, or the incorporation of existing natural elements as inspiration in design, possibly originated with the beginning of man-made environments and remains present today. The ancient Greeks and Romans incorporated natural motifs into design, such as tree-inspired columns. Late Antique and Byzantine arabesque tendrils are stylized versions of the acanthus plant. Varro's Aviary at Casinum from 64 BC reconstructed a world in miniature. A pond surrounded a domed structure at one end that held a variety of birds. A stone colonnaded portico had intermediate columns of living trees.
The Sagrada Família church by Antoni Gaudi begun in 1882 is a well-known example of using nature's functional forms to answer a structural problem. He used columns that modeled the branching canopies of trees to solve statics problems in supporting the vault.
Organic architecture uses nature-inspired geometrical forms in design and seeks to reconnect the human with his or her surroundings. Kendrick Bangs Kellogg, a practicing organic architect, believes that "above all, organic architecture should constantly remind us not to take Mother Nature for granted – work with her and allow her to guide your life. Inhibit her, and humanity will be the loser." This falls in line with another guiding principle, which is that form should follow flow and not work against the dynamic forces of nature. Architect Daniel Liebermann's commentary on organic architecture as a movement highlights the role of nature in building: "...a truer understanding of how we see, with our mind and eye, is the foundation of everything organic. Man's eye and brain evolved over aeons of time, most of which were within the vast untrammeled and unpaved landscape of our Edenic biosphere! We must go to Nature for our models now, that is clear!" Organic architects use man-made solutions with nature-inspired aesthetics to bring about an awareness of the natural environment rather than relying on nature's solutions to answer man's problems.
Metabolist architecture, a movement present in Japan post-WWII, stressed the idea of endless change in the biological world. Metabolists promoted flexible architecture and dynamic cities that could meet the needs of a changing urban environment. The city is likened to a human body in that its individual components are created and become obsolete, but the entity as a whole continues to develop. Like the individual cells of a human body that grow and die although human body continues to live, the city, too, is in a continuous cycle of growth and change. The methodology of Metabolists views nature as a metaphor for the man-made. Kisho Kurokawa's Helix City is modeled after DNA, but uses it as a structural metaphor rather than for its underlying qualities of its purpose of genetic coding.
Other historic attempts have been made which are not directly related to the built environment. Some of the earliest successful attempts at mimicking nature include Alessandro Volta's electric battery, which mimicked the torpedo fish and dates back to around 1800, as well as the successful manned gliders built by Otto Lilienthal after 1889, which took birds as their biological role models.
Characteristics
The term Biomimetic architecture refers to the study and application of construction principles which are found in natural environments and species, and are translated into the design of sustainable solutions for architecture. Biomimetic architecture uses nature as a model, measure and mentor for providing architectural solutions across scales, which are inspired by natural organisms that have solved similar problems in nature. Using nature as a measure refers to using an ecological standard for measuring the sustainability and efficiency of man-made innovations, while the term mentor refers to learning from natural principles and using biology as an inspirational source.
Biomorphic architecture, also referred to as Bio-decoration, on the other hand, refers to the use of formal and geometric elements found in nature, as a source of inspiration for aesthetic properties in designed architecture, and may not necessarily have non-physical, or economic functions. A historic example of biomorphic architecture dates back to Egyptian, Greek and Roman cultures, using tree and plant forms in the ornamentation of structural columns.
Within Biomimetic architecture, two basic procedures can be identified, namely, the bottom-up approach (biology push) and top-down approach (technology pull). The boundary between these is blurry with the possibility of transitioning between the two approaches depending on individual cases. Biomimetic architecture is typically carried out in highly interdisciplinary teams in which biologists and other natural scientists work in collaboration with engineers, scientists, and designers. In the bottom-up approach, the starting point is a new result from basic biological research promising for biomimetic implementation. For example, developing a biomimetic material system after the quantitative analysis of the mechanical, physical, and chemical properties of a biological system. In the top-down approach, biomimetic innovations are sought for already existing developments that have been successfully established on the market. The cooperation focuses on the improvement or further development of an existing product.
Mimicking nature requires understanding the differences between biological and technical systems. Their evolution is dissimilar: biological systems have been evolving for millions of years, whereas the technical systems have been developing for only a few hundred years. Biological systems evolved based on their genetic codes governed by natural selection, while technical systems developed based on human design for performing functions. In general, functions in technical systems aim to develop a system as a result of design, while in biological systems, functions can occasionally be an unsystematic genetic evolutionary change that leads to a particular function that is not prearranged. Their differences are wide: technical systems function within extensive environments, while biological systems work within restricted living constraints.
Architectural innovations that are responsive to architecture do not have to resemble a plant or an animal. Where form is intrinsic to an organism's function, then a building modeled on a life form's processes may end up looking like the organism too. Architecture can emulate natural forms, functions and processes. Though a contemporary concept in a technological age, biomimicry does not entail the incorporation of complex technology in architecture. In response to prior architectural movements biomimetic architecture strives to move towards radical increases in resource efficiency, work in a closed loop model rather than linear (work in a closed cycle that does not need a constant intake of resources to function), and rely on solar energy instead of fossil fuels. The design approach can either work from design to nature or from nature to design. Design to nature means identifying a design problem and finding a parallel problem in nature for a solution. An example of this is the DaimlerChrysler bionic car that looked to the boxfish to build an aerodynamic body. The nature to design method is a solution-driven biologically inspired design. Designers start with a specific biological solution in mind and apply it to design. An example of this is Sto's Lotusan paint, which is self-cleaning, an idea presented by the lotus flower, which emerges clean from swampy waters.
Three Levels of Mimicry
Biomimicry can work on three levels: the organism, its behaviors, and the ecosystem. Buildings on the organism level mimic a specific organism. Working on this level alone without mimicking how the organism participates in a larger context may not be sufficient to produce a building that integrates well with its environment because an organism always functions and responds to a larger context. On a behavior level, buildings mimic how an organism behaves or relates to its larger context. On the level of the ecosystem, a building mimics the natural process and cycle of the greater environment. Ecosystem principles follow that ecosystems (1) are dependent on contemporary sunlight; (2) optimize the system rather than its components; (3) are attuned to and dependent on local conditions; (4) are diverse in components, relationships and information; (5) create conditions favorable to sustained life; and (6) adapt and evolve at different levels and at different rates. Essentially, this means that a number of components and processes make up an ecosystem and they must work with each other rather than against in order for the ecosystem to run smoothly. For architectural design to mimic nature on the ecosystem level it should follow these six principles.
Biomimicry Spiral
Carl Hastrich introduced the idea of the biomimicry design spiral, outlining six processes to create a bio-inspired structure. The Biomimicry Institute specifies each phase as follows. The first stage is to specify the function that is necessary for the intended design, such as defense, energy absorption, or protection. The second phase, "biologize," translates the function that the design must address (protection, in this example) into biological terms. The third stage, known as the discovery step, seeks out natural patterns, techniques, or models that perform the same or a similar purpose as those found in step 2. The subsequent stage, "abstract," is when critical characteristics and mechanisms are investigated and biological tactics are converted into design strategies. The design concept for the protective product is produced in the next (emulate) step using the most important natural history teachings. The design is then examined in light of practical design restrictions in order to determine whether it is a workable solution. This concept is now often combined with advanced technologies such as additive manufacturing.
Examples of biomimicry in architecture
Organism Level
On the organism level, the architecture looks to the organism itself, applying its form and/or functions to a building.
[Image: Venus Flower Basket sponge, labelled]
Norman Foster's Gherkin Tower (2003) has a hexagonal skin inspired by the Venus Flower Basket Sponge. This sponge sits in an underwater environment with strong water currents and its lattice-like exoskeleton and round shape help disperse those stresses on the organism.
The Eden Project (2001) in Cornwall, England is a series of artificial biomes with domes modeled after soap bubbles and pollen grains. Grimshaw Architects looked to nature to build an effective spherical shape. The resulting geodesic hexagonal bubbles inflated with air were constructed of Ethylene Tetrafluoroethylene (ETFE), a material that is both light and strong. The final superstructure weighs less than the air it contains.
Behavior Level
On the behavior level, the building mimics how the organism interacts with its environment to build a structure that can also fit in without resistance in its surrounding environment.
[Image: Eastgate Centre, Harare, Zimbabwe]
The Eastgate Centre, designed by architect Mick Pearce in conjunction with engineers at Arup Associates, is a large office and shopping complex in Harare, Zimbabwe. To minimize the cost of regulating the building's inner temperature, Pearce looked to the self-cooling mounds of African termites: the building has no conventional air-conditioning or heating but regulates its temperature with a passive cooling system inspired by those mounds. The structure, however, does not have to look like a termite mound to function like one, and instead aesthetically draws from indigenous Zimbabwean masonry.
The Qatar Cacti Building designed by Bangkok-based Aesthetics Architects for the Minister of Municipal Affairs and Agriculture is a projected building that uses the cactus's relationship to its environment as a model for building in the desert. The functional processes silently at work are inspired by the way cacti sustain themselves in a dry, scorching climate. Sun shades on the windows open and close in response to heat, just as the cactus undergoes transpiration at night rather than during the day to retain water. The project reaches out to the ecosystem level in its adjoining botanical dome whose wastewater management system follows processes that conserve water and has minimum waste outputs. Incorporating living organisms into the breakdown stage of the wastewater minimizes the amount of external energy resources needed to fulfill this task. The dome would create a climate and air controlled space that can be used for the cultivation of a food source for employees.
Ecosystem Level
Building on the ecosystem level involves mimicking how an environment's many components work together, and tends to occur at the urban scale or in larger projects with multiple elements rather than in a solitary structure.
The Cardboard to Caviar Project founded by Graham Wiles in Wakefield, UK is a cyclical closed-loop system using waste as a nutrient. The project pays restaurants for their cardboard, shreds it, and sells it to equestrian centers for horse bedding. Then the soiled bedding is bought and put into a composting system, which produces a lot of worms. The worms are fed to roe fish, which produce caviar, which is sold back to the restaurants. This idea of waste for one as a nutrient for another has the potential to be translated to whole cities.
The Sahara Forest Project designed by the firm Exploration Architecture is a greenhouse that aims to rely on solar energy alone to operate as a zero waste system. The project is on the ecosystem level because its many components work together in a cyclical system. After finding that the deserts used to be covered by forests, Exploration decided to intervene at the forest and desert boundaries to reverse desertification. The project mimics the Namibian desert beetle to combat climate change in an arid environment. It draws upon the beetle's ability to self-regulate its body temperature by accumulating heat by day and to collect water droplets that form on its wings. The greenhouse structure uses saltwater to provide evaporative cooling and humidification. The evaporated air condenses to fresh water allowing the greenhouse to remain heated at night. This system produces more water than the interior plants need so the excess is spewed out for the surrounding plants to grow. Solar power plants work off of the idea that symbiotic relationships are important in nature, collecting sun while providing shade for plants to grow. The project is currently in its pilot phase.
Lavasa, India is a proposed 8000-acre city by HOK (Hellmuth, Obata, and Kassabaum) planned for a region of India subject to monsoon flooding. The HOK team determined that the site's original ecosystem was a moist deciduous forest before it had become an arid landscape. In response to the seasonal flooding, they designed the building foundations to store water like the former trees did. City rooftops mimic the native banyan fig leaf, looking to its drip-tip system that allows water to run off while simultaneously cleaning its surface. The strategy to move excess water through channels is borrowed from local harvester ants, which use multi-path channels to divert water away from their nests.
Criticisms
Biomimicry has been criticized for distancing man from nature by defining the two terms as separate and distinct from one another. The need to categorize human as distinct from nature upholds the traditional definition of nature, which is that it is those things or systems that come into existence independently of human intention. Joe Kaplinsky further argues that in basing itself on nature's design, biomimicry risks presuming the superiority of nature-given solutions over the manmade. In idolizing nature's systems and devaluing human design, biomimetic structures cannot keep up with the man-made environment and its problems. He contends that evolution within humanity is culturally based in technological innovations rather than ecological evolution. However, architects and engineers do not base their designs strictly off of nature but only use parts of it as inspiration for architectural solutions. Since the final product is actually a merging of natural design with a human innovation, biomimicry can actually be read as bringing man and nature in harmony with one another.
See also
Biomorphism
Hellmuth, Obata, and Kassabaum
Metaphoric architecture
Organic architecture
Zoomorphic architecture
Zoomorphism
Further reading
Benyus, Janine. Biomimicry: Innovation Inspired by Nature. New York: Perennial, 2002.
"Biomimicry 3.8 Institute", Biomimicry 3.8 Institute, http://biomimicry.net/.
Pawlyn, Michael. Biomimicry in Architecture. London: RIBA Publishing, 2011.
Vincent, Julian. Biomimetic Patterns in Architectural Design. Architectural Design 79, no. 6 (2009): 74-81.
Al-Obaidi, Karam M., et al. Biomimetic building skins: An adaptive approach. Renewable and Sustainable Energy Reviews 79 (2017): 1472-1491. doi:10.1016/j.rser.2017.05.028
References
External links
Michael Pawlyn: Using nature's genius in architecture @TED.com
Architectural theory | Biomimetic architecture | [
"Engineering"
] | 3,772 | [
"Architectural theory",
"Architecture"
] |
39,316,365 | https://en.wikipedia.org/wiki/Ceiling%20effect%20%28pharmacology%29 | In pharmacology, the term ceiling effect refers to the property of increasing doses of a given medication to have progressively smaller incremental effect (an example of diminishing returns). Mixed agonist-antagonist opioids, such as nalbuphine, serve as a classic example of the ceiling effect; increasing the dose of a narcotic frequently leads to smaller and smaller gains in relief of pain. In many cases, the severity of side effects from a medication increases as the dose increases, long after its therapeutic ceiling has been reached.
The term is defined as "the phenomenon in which a drug reaches a maximum effect, so that increasing the drug dosage does not increase its effectiveness." Sometimes drugs cannot be compared across a wide range of treatment situations because one drug has a ceiling effect.
Sometimes the desired effect increases with dose while side effects worsen or become dangerous, so the risk-to-benefit ratio deteriorates as the dose rises. The ceiling itself arises because, beyond a certain dose, essentially all of the relevant receptors are already occupied, so additional drug cannot produce a larger response.
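The diminishing-returns behaviour described above is often summarized with the textbook Emax dose–response model. The sketch below is purely illustrative: the Emax equation is a standard pharmacodynamic model rather than anything specified in this article, and the parameter values are invented.

```python
def emax_effect(dose, e_max=100.0, ec50=10.0):
    """Textbook Emax model: effect approaches e_max (the 'ceiling') as dose grows."""
    return e_max * dose / (ec50 + dose)

# Doubling the dose repeatedly yields smaller and smaller incremental effect.
previous = 0.0
for dose in [5, 10, 20, 40, 80, 160]:
    effect = emax_effect(dose)
    print(f"dose={dose:>4}  effect={effect:5.1f}  gain over previous dose={effect - previous:4.1f}")
    previous = effect
```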
See also
Agonist–antagonist opioids
Buprenorphine
Codeine
Dose–response relationship
Pain ladder
Weber–Fechner law
References
External links
Is there a ceiling effect of transdermal buprenorphine? Preliminary data in cancer patients
Clinical evidence for an LH ‘ceiling’ effect induced by administration of recombinant human LH during the late follicular phase of stimulated cycles in World Health Organization type I and type II anovulation
Analgesic effect of i.v. paracetamol: possible ceiling effect of paracetamol in postoperative pain
Pharmacodynamics | Ceiling effect (pharmacology) | [
"Chemistry"
] | 329 | [
"Pharmacology",
"Pharmacodynamics"
] |
40,623,157 | https://en.wikipedia.org/wiki/Electrostatic%20discharge%20materials | Electrostatic discharge materials (ESD materials) are plastics that reduce static electricity to protect against damage to electrostatic-sensitive devices (ESD) or to prevent the accidental ignition of flammable liquids or gases.
Materials
ESD materials are generally subdivided into categories with related properties: Anti-Static, Conductive, and Dissipative.
Note that the sheet resistance used to classify these materials depends on the thickness of the layer of material, and the value is the resistance of a square of the material for a current flowing from one edge to the opposite edge; for a given layer it is the same for any size of square.
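As a rough numerical illustration of the "resistance of a square" idea, the sketch below computes a sheet resistance as bulk resistivity divided by layer thickness and then the edge-to-edge resistance of a rectangular strip. The resistivity and thickness are hypothetical placeholder values, not figures from this article.

```python
def sheet_resistance(resistivity_ohm_m, thickness_m):
    """Sheet resistance (ohms per square) of a uniform layer: R_s = rho / t."""
    return resistivity_ohm_m / thickness_m

def strip_resistance(r_sheet, length_m, width_m):
    """Edge-to-edge resistance of a rectangle = R_s * (squares along the current path)."""
    return r_sheet * (length_m / width_m)

r_s = sheet_resistance(resistivity_ohm_m=1.0, thickness_m=10e-6)  # hypothetical coating
print(f"Sheet resistance: {r_s:.2e} ohms/square")
# Any square (length == width) has the same resistance, regardless of its size:
print(strip_resistance(r_s, 0.01, 0.01), strip_resistance(r_s, 0.1, 0.1))
```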
Conductive
Conductive materials have a low electrical resistance, thus electrons flow easily across the surface or through these materials. Charges go to ground or to another conductive object that the material contacts.
Dissipative
Dissipative materials allow the charges to flow to ground more slowly in a more controlled manner than with conductive materials.
Anti-Static
Anti-static materials are generally referred to as any material which inhibits triboelectric charging. This kind of charging is the buildup of an electric charge by the rubbing or contact with another material.
Insulative
Insulative materials prevent or limit the flow of electrons across their surface or through their volume. Insulative materials have a high electrical resistance and are difficult to ground, thus are not ESD materials. Static charges remain in place on these materials for a very long time.
See also
Antistatic device
Electrostatic discharge
Electrical resistivity and conductivity
Velostat
References
Further reading
The Wiley Encyclopedia of Packaging Technology; 1st Edition; Kit L. Yam; John Wiley & Sons; 1353 pages; 2009.
Plastics Additives Handbook; 6th Edition; Zweifel, Maier, Schiller; Hanser Publications; 1222 pages; 2009.
Handbook of Conducting Polymers; 3rd Edition; Skotheim and Reynolds; CRC Press; 1680 pages; 2007.
Conductive Polymers and Plastics: In Industrial Applications; 1st Edition; Larry Rupprecht; Elsevier; 293 pages; 1999.
Plastics Additives and Modifiers Handbook; 1st Edition; Jesse Edenbaum; Springer; 1136 pages; 1992.
Metal-Filled Polymers: Properties and Applications; 1st Edition; S.K. Bhattacharya; CRC Press; 376 pages; 1986.
External links
http://www.esda.org - ESD Association
ESD packaging advice - Intel
Workmanship Manual for Electrostatic Discharge Control - NASA
Electrostatics
Digital electronics | Electrostatic discharge materials | [
"Engineering"
] | 532 | [
"Electronic engineering",
"Digital electronics"
] |
40,625,088 | https://en.wikipedia.org/wiki/Voltage%20sag | A voltage sag (U.S. English) or voltage dip (British English) is a short-duration reduction in the voltage of an electric power distribution system. It can be caused by high current demand such as inrush current (starting of electric motors, transformers, heaters, power supplies) or fault current (overload or short circuit) elsewhere on the system.
Voltage sags are defined by their magnitude or depth, and duration. A voltage sag happens when the RMS voltage decreases to between 10 and 90 percent of nominal voltage for one-half cycle to one minute. Some references define the duration of a sag as a period of 0.5 cycle to a few seconds, and a longer duration of low voltage would be called a sustained sag. The definition of voltage sag can be found in IEEE 1159, 3.1.73 as "A variation of the RMS value of the voltage from nominal voltage for a time greater than 0.5 cycles of the power frequency but less than or equal to 1 minute. Usually further described using a modifier indicating the magnitude of a voltage variation (e.g. sag, swell, or interruption) and possibly a modifier indicating the duration of the variation (e.g., instantaneous, momentary, or temporary)."
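A minimal sketch of how the magnitude and duration limits quoted above might be applied to a measured RMS voltage event is shown below. The 10–90 percent and 0.5-cycle-to-1-minute thresholds follow this section; the swell and interruption thresholds, the function name, and the example numbers are assumptions added for illustration.

```python
def classify_rms_event(v_rms_pu, duration_s, cycles_per_s=60.0):
    """Rough event classification using the magnitude/duration limits quoted above.
    v_rms_pu: event RMS voltage in per-unit of nominal (1.0 = nominal)."""
    half_cycle = 0.5 / cycles_per_s
    if duration_s < half_cycle:
        return "too short to classify (sub-half-cycle transient)"
    if 0.1 <= v_rms_pu <= 0.9:
        return "sag (dip)" if duration_s <= 60.0 else "sustained sag / undervoltage"
    if v_rms_pu > 1.1:        # swell threshold commonly taken as > 1.1 pu
        return "swell"
    if v_rms_pu < 0.1:
        return "interruption"
    return "normal"

print(classify_rms_event(0.75, 0.2))   # e.g. a motor-starting dip -> "sag (dip)"
print(classify_rms_event(0.75, 300))   # minutes-long low voltage -> sustained sag
```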
Voltage sag in large power system
The main goal of the power system is to provide reliable and high-quality electricity for its customers. One of the main measures of power quality is the voltage magnitude, so monitoring the power system to ensure its performance is one of the highest priorities. However, since power systems are usually grids including hundreds of buses, installing measuring instruments at every single busbar of the system is not cost-efficient. In this regard, various approaches have been suggested to estimate the voltage of different buses based merely on the voltage measured at a few buses.
Related concepts
The term sag should not be confused with a brownout, which is the reduction of voltage for minutes or hours.
The term transient, as used in power quality, is an umbrella term and can refer to sags, swells, dropouts, etc.
Swell
Voltage swell is the opposite of voltage sag. Voltage swell, which is a momentary increase in voltage, happens when a heavy load turns off in a power system.
Causes
Several factors can cause a voltage sag:
Some electric motors draw much more current when they are starting than when they are running at their rated speed.
A line-to-ground fault will cause a voltage sag until the protective switchgear (fuse or circuit breaker) operates.
Some accidents in power lines such as lightning or a falling object can cause a line-to-ground fault.
Sudden load changes or excessive loads
Depending on the transformer connections, transformers energizing
Voltage sags can arrive from the power utility, but most are caused by local in-building equipment. In residential homes, voltage sags are sometimes seen when refrigerators, air-conditioners, or furnace fans start up.
Factors that affect the magnitude of sag caused by faults:
The distance between the victim and the fault source
The fault impedance
Type of fault
The voltage before the sag occurs
System configuration, e.g. system impedance and transformer connections
See also
Low-voltage ride-through (LVRT)
References
Sag
Power engineering | Voltage sag | [
"Physics",
"Engineering"
] | 674 | [
"Physical quantities",
"Energy engineering",
"Electrical engineering",
"Power engineering",
"Voltage",
"Voltage stability"
] |
40,627,582 | https://en.wikipedia.org/wiki/Coplanar%20waveguide | Coplanar waveguide is a type of electrical planar transmission line which can be fabricated using printed circuit board technology, and is used to convey microwave-frequency signals. On a smaller scale, coplanar waveguide transmission lines are also built into monolithic microwave integrated circuits.
Conventional coplanar waveguide (CPW) consists of a single conducting track printed onto a dielectric substrate, together with a pair of return conductors, one to either side of the track. All three conductors are on the same side of the substrate, and hence are coplanar. The return conductors are separated from the central track by a small gap, which has an unvarying width along the length of the line. Away from the central conductor, the return conductors usually extend to an indefinite but large distance, so that each is notionally a semi-infinite plane.
Conductor-backed coplanar waveguide (CBCPW), also known as coplanar waveguide with ground (CPWG), is a common variant which has a ground plane covering the entire back-face of the substrate. The ground-plane serves as a third return conductor.
Coplanar waveguide was invented in 1969 by Cheng P. Wen, primarily as a means by which non-reciprocal components such as gyrators and isolators could be incorporated in planar transmission line circuits.
The electromagnetic wave carried by a coplanar waveguide exists partly in the dielectric substrate, and partly in the air above it. In general, the dielectric constant of the substrate will be different (and greater) than that of the air, so that the wave is travelling in an inhomogeneous medium. In consequence CPW will not support a true TEM wave; at non-zero frequencies, both the E and H fields will have longitudinal components (a hybrid mode). However, these longitudinal components are usually small and the mode is better described as quasi-TEM.
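Under the quasi-TEM picture, simple closed-form design formulas for conventional CPW follow from conformal mapping. The sketch below uses the widely quoted zero-conductor-thickness, infinitely-thick-substrate approximation, in which the effective permittivity is (εr + 1)/2 and the characteristic impedance is expressed through complete elliptic integrals. This formula is a standard textbook result, not something derived in this article, and the example dimensions are arbitrary.

```python
from math import pi, sqrt
from scipy.special import ellipk  # complete elliptic integral of the first kind, argument m = k**2

def cpw_impedance(center_width, gap, eps_r):
    """Quasi-static characteristic impedance of a conventional CPW (zero conductor
    thickness, substrate assumed much thicker than the slots)."""
    k = center_width / (center_width + 2.0 * gap)
    k_prime = sqrt(1.0 - k * k)
    eps_eff = (eps_r + 1.0) / 2.0          # field roughly half in air, half in the substrate
    z0 = (30.0 * pi / sqrt(eps_eff)) * ellipk(k_prime**2) / ellipk(k**2)
    return z0, eps_eff

z0, eps_eff = cpw_impedance(center_width=50e-6, gap=30e-6, eps_r=11.9)  # hypothetical CPW on silicon
print(f"eps_eff = {eps_eff:.2f},  Z0 = {z0:.1f} ohms")   # close to 50 ohms for these dimensions
```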
Application to nonreciprocal gyromagnetic devices
Nonreciprocal gyromagnetic devices such as resonant isolators and differential phase shifters depend on a microwave signal presenting a rotating (circularly polarized) magnetic field to a statically magnetized ferrite body. CPW can be designed to produce just such a rotating magnetic field in the two slots between the central and side conductors.
The dielectric substrate has no direct effect on the magnetic field of a microwave signal travelling along the CPW line. For the magnetic field, the CPW is then symmetrical in the plane of the metalization, between the substrate side and the air side. Consequently, currents flowing along parallel paths on opposite faces of each conductor (on the air-side and on the substrate-side) are subject to the same inductance, and the overall current tends to be divided equally between the two faces.
Conversely, the substrate does affect the electric field, so that the substrate side contributes a larger capacitance across the slots than does the air side. Electric charge can accumulate or be depleted more readily on the substrate face of the conductors than on the air face. As a result, at those points on the wave where the current reverses direction, charge will spill over the edges of the metalization between the air face and the substrate face. This secondary current over the edges gives rise to a longitudinal (parallel with the line), magnetic field in each of the slots, which is in quadrature with the vertical (normal to the substrate surface) magnetic field associated with the main current along the conductors.
If the dielectric constant of the substrate is much greater than unity, then the magnitude of the longitudinal magnetic field approaches that of the vertical field, so that the combined magnetic field in the slots approaches circular polarization.
Application in solid state physics
Coplanar waveguides play an important role in the field of solid state quantum computing, e.g. for the coupling of microwave photons to a superconducting qubit. In particular the research field of circuit quantum electrodynamics was initiated with coplanar waveguide resonators as crucial elements that allow for high field strength and thus strong coupling to a superconducting qubit by confining a microwave photon to a volume that is much smaller than the cube of the wavelength. To further enhance this coupling, superconducting coplanar waveguide resonators with extremely low losses were applied. (The quality factors of such superconducting coplanar resonators at low temperatures can exceed 106 even in the low-power limit.) Coplanar resonators can also be employed as quantum buses to couple multiple qubits to each other.
Another application of coplanar waveguides in solid state research is for studies involving magnetic resonance, e.g. for electron spin resonance spectroscopy or for magnonics.
Coplanar waveguide resonators have also been employed to characterize the material properties of (high-Tc) superconducting thin films.
See also
Waveguide (electromagnetism)
Microstrip
Stripline
Post-wall waveguide
Telegrapher's equations
Via fence
References
Planar transmission lines
Microwave technology
Distributed element circuits | Coplanar waveguide | [
"Engineering"
] | 1,063 | [
"Electronic engineering",
"Distributed element circuits"
] |
40,628,760 | https://en.wikipedia.org/wiki/Scissors%20Modes | Scissors Modes are collective excitations in which two particle systems move with respect to each other conserving their shape.
They were first predicted to occur in deformed atomic nuclei by N. LoIudice and F. Palumbo, who used a semiclassical Two Rotor Model, whose solution required a realization of the O(4) algebra that was not previously known in mathematics. In this model protons and neutrons were assumed to form two interacting rotors to be identified with the blades of scissors. Their relative motion generates a magnetic dipole moment whose coupling with the electromagnetic field provides the signature of the mode.
Such states were first observed experimentally by A. Richter and collaborators in a rare earth nucleus, 156Gd, and then systematically investigated experimentally and theoretically in all deformed atomic nuclei.
Inspired by this, D. Guéry-Odelin and S. Stringari predicted similar collective excitations in Bose-Einstein condensates in magnetic traps. In this case one of the blades of the scissors must be identified with the moving cloud of atoms and the other one with the trap. This excitation mode was also experimentally confirmed.
In close analogy, similar collective excitations have been predicted in a number of other systems, including metal clusters, quantum dots, Fermi condensates and crystals, but none of them has yet been experimentally investigated or found.
References
Quantum mechanics | Scissors Modes | [
"Physics"
] | 297 | [
"Theoretical physics",
"Quantum mechanics"
] |
48,326,079 | https://en.wikipedia.org/wiki/Water%20vapor%20windows | Water vapor windows are wavelengths of infrared light that have little absorption by water vapor in Earth's atmosphere. Because of this weak absorption, these wavelengths are allowed to reach the Earth's surface barring effects from other atmospheric components. This process is highly impacted by greenhouse gases because of the effective emission temperature. The water vapor continuum and greenhouse gases are significantly linked due to water vapor's benefits on climate change.
Definition
Water vapor is a gas that absorbs many wavelengths of infrared (IR) energy in the Earth's atmosphere; the wavelength ranges that it absorbs only weakly, and that can therefore partially reach the surface, are known as 'water vapor windows'. Because absorption within these windows is weak, electromagnetic energy at those wavelengths can flow relatively freely through the atmosphere. Astronomers can view the Universe with IR telescopes, a field called infrared astronomy, because of these windows.
The mid-infrared window, which has a range of 800–1250 cm^-1, is one of the more significant windows, for it has a massive influence on radiation fluxes in high humidity areas of the atmosphere. There has also been increased attention on the windows at 4700 cm^-1 and 6300 cm^-1 since their water vapor micro-windows confirm that uncertainties in water vapor window parameters only occur at the edges. Moreover, the net incoming solar shortwave radiation and the net outgoing terrestrial longwave radiation at the top of the atmosphere keep the Earth's energy balance in check.
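For readers more used to wavelengths than wavenumbers, the windows quoted above can be converted with λ (µm) = 10,000 / ν (cm^-1), so 800–1250 cm^-1 corresponds to roughly 8–12.5 µm. The short sketch below just performs that arithmetic.

```python
def wavenumber_to_wavelength_um(nu_per_cm):
    """Convert a wavenumber in cm^-1 to a wavelength in micrometres."""
    return 1e4 / nu_per_cm

for nu in (800, 1250, 4700, 6300):
    print(f"{nu:>5} cm^-1  ->  {wavenumber_to_wavelength_um(nu):5.2f} um")
```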
Greenhouse Effect's Impact
Water vapor windows are also impacted by greenhouse gases since the water cycle is greatly accelerated due to these gases. The globally averaged value of emitted longwave radiation is 238.5 Wm^-2. One may obtain the effective emission temperature of the globe by assuming that the Earth-atmosphere system radiates as a blackbody in accordance with the Stefan-Boltzmann equation of blackbody radiation. The resultant temperature is -18.7 °C, which is about 33 °C cooler than the average worldwide temperature of the Earth's surface, +14.5 °C. Thus, the Earth's surface is up to 33 °C warmer than it would be without the atmosphere. Moreover, the observation of longwave radiation demonstrates that the greenhouse effect exists in the Earth's atmosphere. These windows also allow orbiting satellites to measure the IR energy leaving the planet, sea surface temperatures (SSTs), and other important quantities. See Electromagnetic absorption by water: Atmospheric effects.
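The effective emission temperature quoted above follows directly from the Stefan–Boltzmann law, T = (F/σ)^(1/4). The sketch below checks that arithmetic with the 238.5 W m^-2 figure from this section; it gives about 254.7 K ≈ -18.5 °C, the small difference from the quoted -18.7 °C being due only to rounding of the constants.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def effective_temperature(outgoing_flux_w_m2):
    """Blackbody temperature that would emit the given flux (Stefan-Boltzmann law)."""
    return (outgoing_flux_w_m2 / SIGMA) ** 0.25

t_eff_k = effective_temperature(238.5)
t_eff_c = t_eff_k - 273.15
print(f"Effective emission temperature: {t_eff_k:.1f} K = {t_eff_c:.1f} C")
print(f"Difference from the +14.5 C surface temperature: {14.5 - t_eff_c:.1f} C")
```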
Water vapor absorbing these wavelengths of IR energy is mainly attributed to water being a polar molecule. Water's polarity allows it to absorb and release radiation at far, near and mid-infrared wavelengths. The polarity also largely impacts how water interacts with nature, for it allows complexes of water, such as the water dimer.
Water Vapor Continuum
Being one of the planet's most significant gases in the atmosphere, water vapor is important to study because of its influence on climate change. Water vapor absorption mostly occurs in what is called the water vapor continuum, which is a combination of bands and windows that heavily influence radiation in the atmosphere. This continuum has two parts, the self-continuum and the foreign continuum. The self-continuum has a negative dependence on temperature and is significantly stronger at the edges of the windows.
Background
These windows were originally discovered by John Tyndall. He discovered that most of the infrared coming from the Universe is being blocked and then absorbed by water vapor and other greenhouse gases in the Earth's atmosphere.
See also
Greenhouse gas
Effective temperature
Infrared astronomy
Electromagnetic spectrum
Electromagnetic absorption by water
References
External links
Atmospheric radiation
Satellite meteorology
Electromagnetic spectrum
Climatology
Climate change
Atmosphere | Water vapor windows | [
"Physics"
] | 735 | [
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
48,336,080 | https://en.wikipedia.org/wiki/Emily%20A.%20Carter | Emily A. Carter is the Gerhard R. Andlinger Professor in Energy and the Environment and a professor of Mechanical and Aerospace Engineering (MAE), the Andlinger Center for Energy and the Environment (ACEE), and Applied and Computational Mathematics at Princeton University. She is also a member of the executive management team at the Princeton Plasma Physics Laboratory (PPPL), serving as Senior Strategic Advisor and Associate Laboratory Director for Applied Materials and Sustainability Sciences.
The author of over 475 publications and patents, Carter has delivered over 600 invited and plenary lectures worldwide and has served on advisory boards spanning a wide range of disciplines. Among other honors, Carter is an elected foreign member of The Royal Society (2024), and fellow of the Royal Society of Chemistry (2022), the National Academy of Inventors (2014), the American Academy of Arts and Sciences (2008), the Institute of Physics (2004), American Association for the Advancement of Science (2000), the American Vacuum Society (1995), the American Physical Society (1994), and the American Chemical Society. She is also an elected member of the European Academy of Sciences (2020), the National Academy of Engineering (2016), International Academy of Quantum Molecular Science (2009), the National Academy of Sciences (2008).
Biography
Emily Carter received a Bachelor of Science in chemistry from the University of California, Berkeley, in 1982 (graduating Phi Beta Kappa). She earned her PhD in physical chemistry in 1987 from the California Institute of Technology, where she worked with William Andrew Goddard III, studying homogeneous and heterogeneous catalysis. During her postdoc at the University of Colorado, Boulder, she worked with James T. Hynes carrying out studies on the dynamics of (photo-induced) electron transfer in solution. She also worked with James Hynes, Giovanni Ciccotti, and Ray Kapral to develop the widely used Blue Moon ensemble, a rare-event sampling method for condensed matter simulations.
From 1988 to 2004, she held professorships in chemistry and materials science and engineering at the University of California, Los Angeles. During those years, she was the Dr. Lee's visiting research fellow in the Sciences at Christ Church, Oxford (1996), a visiting scholar in the department of physics at Harvard University (1999), and a visiting associate in aeronautics at the California Institute of Technology (2001). She moved to Princeton University in 2004 and was named Arthur W. Marks ’19 Professor in 2006. From 2009 to 2014, she was co-director of the Department of Energy Frontier Research Center on Combustion Science. She was the founding director of the Andlinger Center for Energy and the Environment from 2010 to 2016 and was named Gerhard R. Andlinger Professor in Energy and the Environment in 2011. After a national search, she served from 2016 to 2019 as dean of the Princeton University School of Engineering and Applied Science. At Princeton she was a professor in the department of mechanical and aerospace engineering and the Program in Applied and Computational Mathematics, and an associated faculty member in the Andlinger Center for Energy and the Environment, the department of chemistry, the department of chemical and biological engineering, the Princeton Institute for Computational Science and Engineering (PICSciE), the High Meadows Environmental Institute (HMEI), and the Princeton Institute for the Science and Technology of Materials (PRISM). She served as UCLA's Executive Vice Chancellor and Provost (EVCP) from 2019 to 2021, where she was also Distinguished Professor of Chemical and Biomolecular Engineering. She is currently a member of the executive management team at the Princeton Plasma Physics Laboratory (PPPL), serving as Senior Strategic Advisor and Associate Laboratory Director for Applied Materials and Sustainability Sciences.
Research
Carter has made significant contributions to theoretical and computational chemistry and physics, including the development of ab initio quantum chemistry methods, methods for accurate description of molecules at the quantum level, and an algorithm for identifying transitional states in chemical reactions. She pioneered the combination of ab initio quantum chemistry with kinetic Monte Carlo simulations (KMC), molecular dynamics (MD), and quasi-continuum solid mechanics simulations relevant to the study of surfaces and interfaces of materials. She has extensively investigated the chemical and mechanical causes and mechanisms of failure in materials such as silicon, germanium, iron and steel, and proposed methods for protecting materials from failure.
She has developed fast methods for orbital-free density functional theory (OF-DFT) that can be applied to large numbers of atoms as well as embedded correlated wavefunction theory for the study of local condensed matter electronic structure. This work has relevance to the understanding of photoelectrocatalysis. Her current research focuses on the understanding and design of materials for sustainable energy. Applications include conversion of sunlight to electricity, clean and efficient use of biofuels and solid oxide fuel cells, and development of materials for use in fuel-efficient vehicles and fusion reactors.
Carter's research is supported by multiple grants from the U.S. Department of Defense and the Department of Energy. She was elected as a member into the National Academy of Engineering (2016) for the development of quantum chemistry computational methods for the design of molecules and materials for sustainable energy.
Selected publications
E. A. Carter, S. Atsumi, M. Byron, J. Chen, S. Comello, M. Fan, B, Freeman, M. Fry, S. Jordaan, H. Mahgerefteh, A.-H. Park, J. Powell, A. R. Ramirez, V. Sick, S. Stewart, J. Trembly, J. Yang, J. Yuan, C. Wise, and E. Zeitler, “Carbon Utilization Infrastructure, Markets, and Research and Development: A Final Report,” National Academies of Sciences, Engineering, and Medicine (NASEM). Washington DC: The National Academies Press. ISBN: 978-0-309-71775-5 (2024).
E. A. Carter, “Our Role in Solving Global Challenges: An Opinion,” J. Am. Chem. Soc., 146, 21193-21195 (2024)
X. Wen, J.-N. Boyn, J. M. P. Martirez, Q. Zhao, and E. A. Carter, “Strategies to obtain reliable energy landscapes from embedded multireference correlated wavefunction methods for surface reactions,” Journal of Chemical Theory and Computation, 20, 6037-6048 (2024).
B. Bobell, J.-N. Boyn, J. M. P. Martirez, and E. A. Carter, "Modeling Bicarbonate Formation in an Alkaline Solution with Multi-Level Quantum Mechanics/Molecular Dynamics Simulations," Molecular Physics Special Issue in Honour of Giovanni Ciccotti, e2375370 (2024).
X. Wen, J. M. P. Martirez, and E. A. Carter, "Plasmon-driven ammonia decomposition on Pd(111): Hole transfer’s role in changing rate-limiting steps," ACS Catalysis, 14, 9539 (2024).
Z. Wei, J. M. P. Martirez, and E. A. Carter, “First-Principles Insights into the Thermodynamics of Variable-Temperature Ammonia Synthesis on Transition-Metal-Doped Cu (100) and (111),” ACS Energy Lett., 9, 3012 (2024).
A. G. Rajan, J. M. P. Martirez, and E. A. Carter, “Strongly Facet-Dependent Activity of Iron-Doped β-Nickel Oxyhydroxide for the Oxygen Evolution Reaction,” Phys. Chem. Chem. Phys. 25th Anniversary Special Issue, 26, 14721 (2024).
J.-N. Boyn and E. A. Carter, “Probing pH-Dependent Dehydration Dynamics of Mg and Ca Cations in Aqueous Solutions with Multi-Level Quantum Mechanics/Molecular Dynamics Simulations,” J. Am. Chem. Soc., 145, 20462 (2023).
R. B. Wexler, G. S. Gautam, R. Bell, S. Shulda, N. A. Strange, J. A. Trindell, J. D. Sugar, E. Nygren, S. Sainio, A. H. McDaniel, D. Ginley, E. A. Carter, and E. B. Stechel, “Multiple and nonlocal cation redox in Ca–Ce–Ti–Mn oxide perovskites for solar thermochemical applications,” Energy Environ. Sci., 16, 2550 (2023).
J. Cai, Q. Zhao, W.-Y. Hsu, C. Choi, J. M. P. Martirez, C. Chen, J. Huang, E. A. Carter, and Y. Huang, “Highly Selective Electrochemical Reduction of CO2 into Methane on Nanotwinned Cu,” J. Am. Chem. Soc., 145, 9136 (2023).
Y. Yuan, L. Zhou, J. L. Bao, J. Zhou, A. Bayles, L. Yuan, M. Lou, M. Lou, S. Khatiwada, H. Robatjazi, E. A. Carter, P. Nordlander, and N. J. Halas, “Earth-abundant photocatalyst for H2 generation from NH3 with light-emitting diode illumination,” Science, 378, 889 (2022).
J. M. P. Martirez and E. A. Carter, “First-Principles Insights into the Thermocatalytic Cracking of Ammonia-Hydrogen Blends on Fe(110). 1. Thermodynamics,” J. Phys. Chem. C, 126, 19733 (2022). (Virtual Special Issue: Honoring Michael R. Berman)
E. A. Carter, “Autobiography of Emily A. Carter,” J. Phys. Chem. A, 125, 1671 (2021); J. Phys. Chem. C, 125, 4333 (2021).
Recent awards and honors
2019 – John Scott Award, Board of City Trusts, Philadelphia, PA
2019 – Camille & Henry Dreyfus Lectureship, University of Basel, Switzerland
2019 – Inaugural WiSE Presidential Distinguished Lecturer, University of Southern California
2019 – 18th NCCR MARVEL Distinguished Lecturer, L’École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
2019 – Graduate Mentoring Award, McGraw Center for Teaching and Learning, Princeton University
2019 – Distinguished Alumni Award, California Institute of Technology
2019 – Eyring Lecturer in Molecular Sciences, Arizona State University
2019 – Mildred Dresselhaus Memorial Lecturer, Ras Al Khaimah Centre for Advanced Materials, United Arab Emirates
2019 – Dow Foundation Distinguished Lecturer, University of California, Santa Barbara
2020 – Brumley D. Pritchett Lecturer, Georgia Institute of Technology, School of Materials Science and Engineering
2020 – Member, European Academy of Sciences
2020 – UCLA Chemistry & Biochemistry Distinguished Lecturer, University of California, Los Angeles
2021 – Materials Theory Award, Materials Research Society
2022 – Fellow, Royal Society of Chemistry
2022 – Paint Branch Distinguished Lecturer in Applied Physics, University of Maryland, Institute for Research in Electronics and Applied Physics
2022 – Richard S. H. Mah Lecturer, Northwestern University, Department of Chemical and Biological Engineering
2022 – Harrison Shull Distinguished Lecturer, Indiana University Bloomington, Department of Chemistry
2023 – Gilbert Newton Lewis Memorial Lecturer, University of California, Berkeley
2023 – Robert S. Mulliken Award, University of Chicago
2023 – 27th John Stauffer Lecturer in Chemistry, Stanford University
2024 – William H. Nichols Medal, American Chemical Society (New York Section)
2024 – Foreign Member of the Royal Society
2024 – Marsha I. Lester Award for Exemplary Impact in Physical Chemistry, American Chemical Society, Physical Chemistry Division
2024 – Annabelle Lee Lecturer, VA Tech, Blacksburg
News stories related to Carter
"2024 Marsha I. Lester Award for Exemplary Impact in Physical Chemistry" — American Chemical Society, Physical Chemistry Section (August 2024)
"Emily Carter elected to the Royal Society" — The Royal Society (May 16, 2024)
"PPPL Envisions a Future of Fusion Energy Solutions and Plasma Science Progress" — US 1 News (May 15, 2024)
"Andlinger Center meeting spotlights next-decade technologies and design approaches for the clean energy transition" — Princeton University Engineering (Dec 06, 2023)
"Dr. Emily Carter: International Leader in Sustainability Science at Princeton University" — Girl Power Gurus Podcast (Nov 25, 2023)
"Ammonia fuel offers great benefits but demands careful action" — Princeton University Engineering (Nov 7, 2023); Andlinger Center for Energy and the Environment (Nov 7, 2023)
"Is Energy Efficiency our Panacea for Power?" — Forbes (Oct 29, 2023)
C-Change Conversation Interview with Kathleen Biggins — C-Change Conversation (May 5, 2023)
References
1960 births
Living people
21st-century American chemists
California Institute of Technology alumni
Computational chemists
Princeton University faculty
Theoretical chemists
UC Berkeley College of Chemistry alumni
American women chemists
American women academics
21st-century American women scientists
Fellows of the American Physical Society
Foreign members of the Royal Society | Emily A. Carter | [
"Chemistry"
] | 2,827 | [
"Theoretical chemists",
"American theoretical chemists"
] |
48,343,282 | https://en.wikipedia.org/wiki/Cas12a | Cas12a (CRISPR-associated protein 12a, previously known as Cpf1) is an RNA-guided endonuclease that forms an essential component of the CRISPR systems found in some bacteria and archaea. In its natural context, Cas12a targets and destroys the genetic material of viruses and other foreign mobile genetic elements, thereby protecting the host cell from infection. Like other Cas enzymes, Cas12a binds to a "guide" RNA (termed a crRNA, or CRISPR RNA) which targets it to a DNA sequence in a specific and programmable matter. In the host organism, the crRNA contains a constant region that is recognized by the Cas12a protein and a "spacer" region that is complementary to a piece of foreign nucleic acid (e.g. a portion of a phage genome) that previously infected the cell.
As with Cas9 and other Cas proteins, the programmable DNA-targeting activity of Cas12a makes it a useful tool for biotechnology and biological research applications. By modifying the spacer sequence in the crRNA, researchers can target Cas12a to specific DNA sequences, allowing for highly targeted modifications of DNA. Cas12a is distinguished from Cas9 by its single RuvC endonuclease active site, its 5' protospacer adjacent motif preference, and its formation of sticky rather than blunt ends at the cut site; these and other differences may make it more suitable for certain applications. Beyond its use in basic research, CRISPR-Cas12a could have applications in the treatment of genetic illnesses and in implementing gene drives.
Description
Discovery
CRISPR-Cas12a was found by searching a published database of bacterial genetic sequences for promising bits of DNA. Its identification through bioinformatics as a CRISPR system protein, its naming, and a hidden Markov model (HMM) for its detection were provided in 2012 in a release of the TIGRFAMs database of protein families. Cas12a appears in many bacterial species. The ultimate Cas12a endonuclease that was developed into a tool for genome editing was taken from one of the first 16 species known to harbor it. Two candidate enzymes from Acidaminococcus and Lachnospiraceae display efficient genome-editing activity in human cells.
Classification
CRISPR-Cas systems are separated into two classes: Class I, in which several Cas proteins associate with a crRNA to build a functional endonuclease, and Class II, in which a single Cas endonuclease associates with a crRNA; Class II is further divided into Type II, Type V, and Type VI systems. Cas12a is identified as a Class II, Type V CRISPR-Cas system.
Naming
The acronym CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) refers to the invariant DNA sequences found in bacteria and archaea which encode Cas proteins and their crRNAs. Cas12a was originally known as Cpf1, an abbreviation of CRISPR and two genera of bacteria where it appears, Prevotella and Francisella. It was renamed in 2015 after a broader rationalization of the names of Cas (CRISPR associated) proteins to correspond to their sequence homology.
Structure
The Cas12a protein contains a mixed alpha/beta domain, a RuvC-like endonuclease domain (broken into two non-contiguous segments, RuvC-I and RuvC-II) similar to the RuvC domain of Cas9, and a zinc finger-like domain. Unlike Cas9, Cas12a does not have an HNH endonuclease domain, and the N-terminal region of Cas12a does not have an alpha-helical recognition lobe as seen in Cas9.
The Cas12a loci encode Cas1, Cas2 and Cas4 proteins more similar to types I and III than from type II systems. Database searches suggest the abundance of Cas12a-family proteins in many bacterial species.
Also unlike Cas9, Cas12a does not require a tracrRNA (which in natural CRISPR systems must base-pair with a separate crRNA before binding to a Cas protein), instead binding a single crRNA. Both Cas12a and its guide RNA are smaller than the protein and RNA components of the Cas9 system; the crRNA of Cas12a is approximately half as long as sgRNAs used with Cas9. This reduced size renders Cas12a more suitable for applications such as in vivo delivery via adeno-associated virus (AAV), which have limited DNA packaging capacity due to their small capsids.
The Cas12a-crRNA complex cleaves target DNA or RNA by identification of a protospacer adjacent motif (PAM) 5'-YTN-3' (where "Y" is a pyrimidine and "N" is any nucleobase), in contrast to the G-rich PAM targeted by Cas9. After identification of the PAM, Cas12a introduces a sticky-end-like DNA double-stranded break with a 4 or 5 nucleotide overhang.
Mechanism
The CRISPR-Cas12a system consists of a Cas12a enzyme and a guide RNA that finds and positions the complex at the correct spot on the double helix to cleave target DNA. CRISPR-Cas12a system activity has three stages:
Adaptation: Cas1 and Cas2 proteins facilitate the adaptation of small fragments of DNA into the CRISPR array.
Formation of crRNAs: processing of pre-cr-RNAs producing of mature crRNAs to guide the Cas protein.
Interference: the Cas12a is bound to a crRNA to form a binary complex to identify and cleave a target DNA sequence. The crRNA-Cas12a complex searches dsDNA for a 3-6nt 5' protospacer adjacent motif (PAM). Once a PAM is found, the protein locally denatures the dsDNA and searches for complementarity between the crRNA spacer and the ssDNA protospacer. Sufficient complementarity will trigger RuvC activity and the RuvC active site will then cut the non-target strand and then the target strand, ultimately generating a staggered dsDNA break with 5' ssDNA overhangs (cis cleavage).
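As a toy illustration of the search logic in the interference step (scan for a 5'-YTN PAM, then test whether the adjacent protospacer matches the crRNA spacer), the sketch below performs a naive string scan on the PAM-containing strand, which by convention carries the protospacer sequence matching the spacer. It is purely illustrative: real target recognition involves R-loop formation, mismatch tolerance, and strand-specific cleavage that a string comparison does not capture, and the sequences are invented.

```python
PYRIMIDINES = {"C", "T"}  # "Y" in the 5'-YTN motif described above

def find_cas12a_sites(dna_strand, spacer_dna):
    """Naive scan: report positions where a YTN PAM is immediately followed on its
    3' side by a protospacer identical to the DNA version of the crRNA spacer.
    Exact matching only - no mismatch tolerance."""
    hits = []
    n = len(spacer_dna)
    for i in range(len(dna_strand) - 3 - n + 1):
        pam = dna_strand[i:i + 3]
        if pam[0] in PYRIMIDINES and pam[1] == "T":
            if dna_strand[i + 3:i + 3 + n] == spacer_dna:
                hits.append(i)
    return hits

seq = "GGCTTAACCGTAGGCTAGCTAGGACCTTACATGCATCGATCG"   # invented sequence
spacer = "AACCGTAGGCTAGCTAGGAC"                        # invented 20-nt spacer
print(find_cas12a_sites(seq, spacer))                   # -> [2], the PAM "CTT" at index 2
```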
Cas9 vs. Cas12a
Cas9 requires two RNA molecules to cut DNA while Cas12a needs one. The proteins also cut DNA at different places, offering researchers more options when selecting an editing site. Cas9 cuts both strands in a DNA molecule at the same position, leaving behind blunt ends. Cas12a leaves one strand longer than the other, creating sticky ends. The sticky ends have different properties than blunt ends during non-homologous end joining or homologous repair of DNA, which confers certain advantages to Cas12a when attempting gene insertions, compared to Cas9. Although the CRISPR-Cas9 system can efficiently disable genes, it is challenging to insert genes or generate a knock-in. Cas12a lacks tracrRNA, utilizes a T-rich PAM and cleaves DNA via a staggered DNA DSB.
In summary, important differences between Cas12a and Cas9 systems are that Cas12a:
Recognizes different PAMs, enabling new targeting possibilities.
Creates 4-5 nt long sticky ends, instead of blunt ends produced by Cas9, enhancing the efficiency of genetic insertions and specificity during NHEJ or HDR.
Cuts target DNA farther away from the PAM than Cas9 cuts from its PAM, enabling new possibilities for choosing cleavage sites.
Origin
Cas12 endonucleases most likely evolved from the TnpB endonuclease of IS200/IS605-family transposons. TnpB proteins, not yet "domesticated" into the CRISPR immune system, are themselves able to perform RNA-guided cleavage using an ωRNA (OmegaRNA) guide system.
Tools
Multiple aspects influence target efficiency and specificity when using CRISPR, including guide RNA design. Many design models and CRISPR-Cas software tools for optimal design of guide RNA have been developed. These include SgRNA designer, CRISPR MultiTargeter, SSFinder. In addition, commercial antibodies are available for use to detect Cas12a protein.
Intellectual property
CRISPR-Cas9 is subject to Intellectual property disputes while CRISPR-Cas12a does not have the same issues.
Notes
References
Genetic engineering
Enzymes
Genome editing | Cas12a | [
"Chemistry",
"Engineering",
"Biology"
] | 1,729 | [
"Genetics techniques",
"Biological engineering",
"Genome editing",
"Genetic engineering",
"Molecular biology"
] |
49,581,858 | https://en.wikipedia.org/wiki/Ralaniten%20acetate | Ralaniten acetate (developmental code name EPI-506) is a first-in-class antiandrogen that targets the N-terminal domain (NTD) of the androgen receptor (AR) developed by ESSA Pharmaceuticals and was under investigation for the treatment of prostate cancer. This mechanism of action is believed to allow the drug to block signaling from the AR and its splice variants. EPI-506 is a derivative of bisphenol A and a prodrug of ralaniten (EPI-002), one of the four stereoisomers of EPI-001, and was developed as a successor of EPI-001. The drug reached phase I/II prior to the discontinuation of its development. It showed signs of efficacy in the form of prostatic specific antigen (PSA) decreases (4–29%) predominantly at higher doses (≥1,280 mg) in some patients but also caused side effects and was discontinued by its developer in favor of next-generation AR NTD inhibitors with improved potency and tolerability.
See also
EPI-7386
N-Terminal domain antiandrogen
References
External links
Ralaniten acetate - AdisInsight
Abandoned drugs
Acetate esters
Alkylating agents
2,2-Bis(4-hydroxyphenyl)propanes
Halohydrins
Nonsteroidal antiandrogens
Organochlorides
Prodrugs | Ralaniten acetate | [
"Chemistry"
] | 299 | [
"Alkylating agents",
"Drug safety",
"Prodrugs",
"Reagents for organic chemistry",
"Chemicals in medicine",
"Abandoned drugs"
] |
49,584,144 | https://en.wikipedia.org/wiki/Amur%20and%20Timur | Amur and Timur () are respectively a tiger and a goat who established an unlikely interspecies friendship in a safari park in Primorye in the Far East of Russia. Timur was placed in Amur's enclosure as food but, by his confident behaviour, established a rapport with Amur, who did not eat him. The pair were separated after another fight in 2016 and Timur was moved to the Exhibition of Achievements of National Economy (VDNKh) in Moscow. Timur died on November 5, 2019, aged 5.
References
External links
Animal duos
Individual animals in Russia
Primorsky Krai | Amur and Timur | [
"Biology"
] | 133 | [
"Ethology stubs",
"Ethology",
"Behavior"
] |
25,171,429 | https://en.wikipedia.org/wiki/Inflow%20%28meteorology%29 | Inflow is the flow of a fluid into a large collection of that fluid. Within meteorology, inflow normally refers to the influx of warmth and moisture from air within the Earth's atmosphere into storm systems. Extratropical cyclones are fed by inflow focused along their cold front and warm fronts. Tropical cyclones require a large inflow of warmth and moisture from warm oceans in order to develop significantly, mainly within the lowest of the atmosphere. Once the flow of warm and moist air is cut off from thunderstorms and their associated tornadoes, normally by the thunderstorm's own rain-cooled outflow boundary, the storms begin to dissipate. Rear inflow jets behind squall lines act to erode the broad rain shield behind the squall line, and accelerate its forward motion.
Thunderstorms
The inflow into a thunderstorm, or complex of thunderstorms, is the circulation of warm and humid air ahead of a triggering convergence zone such as a cold front. This airmass is uplifted by the trigger and forms convective clouds. Later, cool air carried to the ground by the thunderstorm downdraft cuts off the inflow of the thunderstorm, destroying its updraft and causing its dissipation.
Tornadoes, which form within stronger thunderstorms, grow until they reach their mature stage. This is when the rear flank downdraft of the thunderstorm, fed by rain-cooled air, begins to wrap around the tornado, cutting off the inflow of warm air which previously fed the tornado.
Inflow can originate from mid-levels of the atmosphere too. When thunderstorms are able to organize into squall lines, a feature known as a rear inflow jet develops to the south of the mid-level circulation associated with its northern bookend vortex. This leads to an erosion of rain within the broad rain shield behind the squall line, and may lead to acceleration of the squall line itself.
Tropical cyclones
While an initial warm core system, such as an organized thunderstorm complex, is necessary for the formation of a tropical cyclone, a large flux of energy is needed to lower atmospheric pressure more than a few millibars (0.10 inch of mercury). Inflow of warmth and moisture from the underlying ocean surface is critical for tropical cyclone strengthening. A significant amount of the inflow in the cyclone is in the lowest of the atmosphere.
Extratropical cyclones
Polar front theory is attributed to Jacob Bjerknes, and was derived from a coastal network of observation sites in Norway during World War I. This theory proposed that the main inflow into a cyclone was concentrated along two lines of convergence, one ahead (or east) of the low and another trailing equatorward (south in the Northern Hemisphere and north in the Southern Hemisphere) and behind (or west) of the low. The convergence line ahead of the low became known as either the steering line or the warm front. The trailing convergence zone was referred to as the squall line or cold front. Areas of clouds and rainfall appeared to be focused along these convergence zones. A conveyor belt, also referred to as the warm conveyor belt, is a term describing the flow of a stream of warm moist air originating within the warm sector (or generally equatorward) of an extratropical cyclone in advance of the cold front which slopes up above and poleward (north in the Northern Hemisphere and south in the Southern Hemisphere) of the surface warm front. The concept of the conveyor belt originated in 1969.
The left edge of the conveyor belt is sharp due to higher density air moving in from the west forcing a sharp slope to the cold front. An area of stratiform precipitation develops poleward of the warm front along the conveyor belt. Active precipitation poleward of the warm front implies potential for greater development of the cyclone. A portion of this conveyor belt turns to the right (left in the Southern Hemisphere), aligning with the upper level westerly flow. However, the western portion of this belt wraps around the northwest (southwest in the Southern Hemisphere) side of the cyclone, which can contain moderate to heavy precipitation. If the air mass is cold enough, the precipitation falls in the form of heavy snow. Theory from the 1980s talked about the presence of a cold conveyor belt originating north of the warm front and flowing along a clockwise path (in the northern hemisphere) into the main belt of the westerlies aloft, but there has been conflicting evidence as to whether or not this phenomenon actually exists.
See also
Outflow (meteorology)
References
Meteorological phenomena
Severe weather and convection
Synoptic meteorology and weather | Inflow (meteorology) | [
"Physics"
] | 945 | [
"Meteorological phenomena",
"Physical phenomena",
"Earth phenomena"
] |
25,174,711 | https://en.wikipedia.org/wiki/Genetically%20modified%20insect | A genetically modified (GM) insect is an insect that has been genetically modified, either through mutagenesis, or more precise processes of transgenesis, or cisgenesis. Motivations for using GM insects include biological research purposes and genetic pest management. Genetic pest management capitalizes on recent advances in biotechnology and the growing repertoire of sequenced genomes in order to control pest populations, including insects. Insect genomes can be found in genetic databases such as NCBI, and databases more specific to insects such as FlyBase, VectorBase, and BeetleBase. There is an ongoing initiative started in 2011 to sequence the genomes of 5,000 insects and other arthropods called the i5k. Some Lepidoptera (e.g. monarch butterflies and silkworms) have been genetically modified in nature by the wasp bracovirus.
Types of genetic pest management
The sterile insect technique (SIT) was developed conceptually in the 1930s and 1940s and first used in the environment in the 1950s. SIT is a control strategy where male insects are sterilized, usually by irradiation, then released to mate with wild females. If enough males are released, the females will mate with mostly sterile males and lay non-viable eggs. This causes the population of insects to crash (the abundance of insects is extremely diminished), and in some cases can lead to local eradication. Irradiation is a form of mutagenesis which causes random mutations in DNA.
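The way sustained releases of sterile males drive a pest population down can be illustrated with a crude discrete-generation calculation in the spirit of Knipling's original bookkeeping: each generation, the fraction of fertile matings is roughly the ratio of wild males to all males. The growth rate, release number, and starting population in the sketch below are invented for illustration only.

```python
def simulate_sit(wild_pop, sterile_males_released, growth_rate, generations):
    """Crude Knipling-style model: offspring are produced only by females that
    happened to mate with a wild (fertile) male."""
    history = [wild_pop]
    for _ in range(generations):
        wild_males = wild_pop / 2.0                      # assume a 1:1 sex ratio
        fertile_fraction = wild_males / (wild_males + sterile_males_released)
        wild_pop = wild_pop * growth_rate * fertile_fraction
        history.append(wild_pop)
    return history

for n, pop in enumerate(simulate_sit(wild_pop=1_000_000, sterile_males_released=2_000_000,
                                     growth_rate=4.0, generations=6)):
    print(f"generation {n}: about {pop:,.0f} wild insects")
```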
Release of Insects carrying Dominant Lethals (RIDL)
Release of Insects carrying Dominant Lethals or RIDL is a control strategy using genetically engineered insects that have (carry) a lethal gene in their genome (an organism's DNA). Lethal genes cause death in an organism, and RIDL genes only kill young insects, usually larvae or pupae. Similar to how inheritance of brown eyes is dominant to blue eyes, this lethal gene is dominant so that all offspring of the RIDL insect will also inherit the lethal gene. This lethal gene has a molecular on and off switch, allowing these RIDL insects to be reared. The lethal gene is turned off when the RIDL insects are mass reared in an insectary, and turned on when they are released into the environment. RIDL males and females are released to mate with wild males and their offspring die when they reach the larval or pupal stage because of the lethal gene. This causes the population of insects to crash. This technique is being developed for some insects and for other insects has been tested in the field. It has been used in the Grand Cayman Islands, Panama, and Brazil to control the mosquito vector of dengue, Ae. aegypti. It is being developed for use in diamondback moth (Plutella xylostella), medfly (Ceratitis capitata) and olive fly (Bactrocera oleae).
Incompatible Insect Technique (IIT)
Wolbachia
Maternal Effect Dominant Embryonic Arrest (MEDEA)
X-Shredder
Concerns
There are concerns about using tetracycline on a routine basis for controlling the expression of lethal genes. There are plausible routes for resistance genes to develop in the bacteria within the guts of GM-insects fed on tetracycline and from there, to circulate widely in the environment. For example, antibiotic-resistant genes could be spread to E. coli bacteria and into fruit by GM-Mediterranean fruit flies (Ceratitis capitata).
Releases
Oxitec released its genetically modified mosquitoes in various countries, including Brazil, Grand Cayman, Malaysia, Panama, and the US.
Modified species
Biological research
Fruit flies (Drosophila melanogaster) are model organisms used in an array of biological disciplines (e.g. neurobiology, population genetics, ecology, animal behavior, systematics, genomics, and development). Many studies done with Drosophila species have been foundational in their respective fields, and they remain important models for other organisms, including humans. For example, they have contributed to understanding economically important insects and researching human disease and development. Fruit flies are often preferred over other animals due to their short life cycle, high reproduction rate, low maintenance requirements, and amenability to mutagenesis. They are also the model genetic organism for historical reasons, being among the first model organisms, and they have a high-quality complete genome.
Genetic pest management
Yellow fever mosquito (Aedes aegypti)
Malaria mosquito (Anopheles gambiae and Anopheles stephensi)
Pink bollworm (Pectinophora gossypiella)
Diamondback moth
The diamondback moth's caterpillars gorge on cruciferous vegetables such as cabbage, broccoli, cauliflower and kale, costing farmers an estimated $5 billion (£3.2 billion) a year worldwide. In 2015, Oxitec developed GM-diamondback moths which produce non-viable female larvae to control populations able to develop resistance to insecticides. The GM-insects were initially placed in cages for field trials. Earlier, the moth was the first crop pest to evolve resistance to DDT and eventually became resistant to 45 other insecticides. In Malaysia, the moth has become immune to all synthetic sprays. The gene is a combination of DNA from a virus and a bacterium. In an earlier study, captive males carrying the gene eradicated communities of non-GM moths. Brood sizes were similar, but female offspring died before reproducing. The gene itself disappears after a few generations, requiring ongoing introductions of GM cultivated males. Modified moths can be identified by their red glow under ultraviolet light, caused by a coral transgene.
Opponents claim that the protein made by the synthetic gene could harm non-target organisms that eat the moths. The creators claim to have tested the gene's protein on mosquitoes, fish, beetles, spiders and parasitoids without observing problems. Farmers near the test site claim that moths could endanger nearby farms' organic certification. Legal experts say that national organic standards penalize only deliberate GMO use. The creators claim that the moth does not migrate if sufficient food is available, nor can it survive winter weather.
Mediterranean fruit fly
The Mediterranean fruit fly is a global agricultural pest. They infest a wide range of crops (over 300) including wild fruit, vegetables and nuts, and in the process, cause substantial damage. The company Oxitec has developed GM-males which have a lethal gene that interrupts female development and kills them in a process called "pre-pupal female lethality". After several generations, the fly population diminishes as the males can no longer find mates. To breed the flies in the laboratory, the lethal gene can be "silenced" using the antibiotic tetracycline.
Opponents argue that the long-term effects of releasing millions of GM-flies are impossible to predict. Dead fly larvae could be left inside crops. Helen Wallace from Genewatch, an organisation that monitors the use of genetic technology, stated "Fruit grown using Oxitec's GM flies will be contaminated with GM maggots which are genetically programmed to die inside the fruit they are supposed to be protecting". She added that the mechanism of lethality was likely to fail in the longer term as the GM flies evolve resistance or breed in sites contaminated with tetracycline which is widely used in agriculture.
Legislation
In July 2015, the House of Lords (U.K.) Science and Technology Committee launched an inquiry into the possible uses of GM-insects and their associated technologies. The scope of the inquiry is to include questions such as "Would farmers benefit if insects were modified in order to reduce crop pests? What are the safety and ethical concerns over the release of genetically modified insects? How should this emerging technology be regulated?"
Notes and references
See also
Inherited sterility in insects
List of sterile insect technique trials
Insect ecology
Detection of genetically modified organisms
External links
Transgenic Fly Virtual Lab - Howard Hughes Medical Institute BioInteractive
Genetically modified organisms | Genetically modified insect | [
"Engineering",
"Biology"
] | 1,634 | [
"Genetic engineering",
"Genetically modified organisms"
] |
25,175,105 | https://en.wikipedia.org/wiki/Genetically%20modified%20bacteria | Genetically modified bacteria were the first organisms to be modified in the laboratory, due to their simple genetics. These organisms are now used for several purposes, and are particularly important in producing large amounts of pure human proteins for use in medicine.
History
The first example of this occurred in 1978 when Herbert Boyer, working at a University of California laboratory, took a version of the human insulin gene and inserted it into the bacterium Escherichia coli to produce synthetic "human" insulin. Four years later, it was approved by the U.S. Food and Drug Administration.
Research
Bacteria were the first organisms to be genetically modified in the laboratory, due to the relative ease of modifying their chromosomes. This ease made them important tools for the creation of other GMOs. Genes and other genetic information from a wide range of organisms can be added to a plasmid and inserted into bacteria for storage and modification. Bacteria are cheap, easy to grow, clonal, multiply quickly, are relatively easy to transform, and can be stored at -80 °C almost indefinitely. Once a gene is isolated it can be stored inside the bacteria, providing an unlimited supply for research. The large number of custom plasmids makes manipulating DNA excised from bacteria relatively easy.
Their ease of use has made them great tools for scientists looking to study gene function and evolution. Most DNA manipulation takes place within bacterial plasmids before being transferred to another host. Bacteria are the simplest model organism and most of our early understanding of molecular biology comes from studying Escherichia coli. Scientists can easily manipulate and combine genes within the bacteria to create novel or disrupted proteins and observe the effect this has on various molecular systems. Researchers have combined the genes from bacteria and archaea, leading to insights on how these two diverged in the past. In the field of synthetic biology, they have been used to test various synthetic approaches, from synthesizing genomes to creating novel nucleotides.
Food
Bacteria have been used in the production of food for a very long time, and specific strains have been developed and selected for that work on an industrial scale. They can be used to produce enzymes, amino acids, flavourings, and other compounds used in food production. With the advent of genetic engineering, new genetic changes can easily be introduced into these bacteria. Most food-producing bacteria are lactic acid bacteria, and this is where the majority of research into genetically engineering food-producing bacteria has gone. The bacteria can be modified to operate more efficiently, reduce toxic byproduct production, increase output, create improved compounds, and remove unnecessary pathways. Food products from genetically modified bacteria include alpha-amylase, which converts starch to simple sugars, chymosin, which clots milk protein for cheese making, and pectinesterase, which improves fruit juice clarity.
In cheese
Chymosin is an enzyme produced in the stomach of young ruminant mammals to digest milk. The digestion of milk proteins via enzymes is essential to cheesemaking. The species Escherichia coli and Bacillus subtilis can be genetically engineered to synthesise and excrete chymosin, providing a more efficient means of production. The use of bacteria to synthesise chymosin also provides a vegetarian method of cheesemaking, as previously, young ruminants (typically calves) had to be slaughtered to extract the enzyme from the stomach lining.
Industrial
Genetically modified bacteria are used to produce large amounts of proteins for industrial use. Generally the bacteria are grown to a large volume before the gene encoding the protein is activated. The bacteria are then harvested and the desired protein purified from them. The high cost of extraction and purification has meant that only high value products have been produced at an industrial scale.
Pharmaceutical production
The majority of the industrial products from bacteria are human proteins for use in medicine. Many of these proteins are impossible or difficult to obtain via natural methods and they are less likely to be contaminated with pathogens, making them safer. Prior to recombinant protein products, several treatments were derived from cadavers or other donated body fluids and could transmit diseases. Indeed, transfusion of blood products had previously led to unintentional infection of haemophiliacs with HIV or hepatitis C; similarly, treatment with human growth hormone derived from cadaver pituitary glands may have led to outbreaks of Creutzfeldt–Jakob disease.
The first medicinal use of GM bacteria was to produce the protein insulin to treat diabetes. Other medicines produced include clotting factors to treat haemophilia, human growth hormone to treat various forms of dwarfism, interferon to treat some cancers, erythropoietin for anemic patients, and tissue plasminogen activator which dissolves blood clots. Outside of medicine they have been used to produce biofuels. There is interest in developing an extracellular expression system within the bacteria to reduce costs and make the production of more products economical.
Health
With greater understanding of the role that the microbiome plays in human health, there is the potential to treat diseases by genetically altering the bacteria to, themselves, be therapeutic agents. Ideas include altering gut bacteria so they destroy harmful bacteria, or using bacteria to replace or increase deficient enzymes or proteins. One research focus is to modify Lactobacillus, bacteria that naturally provide some protection against HIV, with genes that will further enhance this protection. The bacteria which generally cause tooth decay have been engineered to no longer produce tooth-corroding lactic acid. These transgenic bacteria, if allowed to colonize a person's mouth, could perhaps reduce the formation of cavities. Transgenic microbes have also been used in recent research to kill or hinder tumors, and to fight Crohn's disease.
If the bacteria do not form colonies inside the patient, the person must repeatedly ingest the modified bacteria in order to get the required doses. Enabling the bacteria to form a colony could provide a more long-term solution, but could also raise safety concerns as interactions between bacteria and the human body are less well understood than with traditional drugs.
One example of such an intermediate, which only forms short-term colonies in the gastrointestinal tract, may be Lactobacillus acidophilus MPH734. It is used as a treatment for lactose intolerance. This genetically modified version of Lactobacillus acidophilus bacteria produces a missing enzyme called lactase which is used for the digestion of lactose found in dairy products or, more commonly, in food prepared with dairy products. The short-term colony is induced over a one-week, 21-pill treatment regimen, after which the temporary colony can produce lactase for three months or more before it is removed from the body by natural processes. The induction regimen can be repeated as often as necessary to maintain protection from the symptoms of lactose intolerance, or discontinued with no consequences, except the return of the original symptoms.
There are concerns that horizontal gene transfer to other bacteria could have unknown effects. As of 2018 there are clinical trials underway testing the efficacy and safety of these treatments.
Agriculture
For over a century bacteria have been used in agriculture. Crops have been inoculated with Rhizobia (and more recently Azospirillum) to increase their production or to allow them to be grown outside their original habitat. Application of Bacillus thuringiensis (Bt) and other bacteria can help protect crops from insect infestation and plant diseases. With advances in genetic engineering, these bacteria have been manipulated for increased efficiency and expanded host range. Markers have also been added to aid in tracing the spread of the bacteria. The bacteria that naturally colonise certain crops have also been modified, in some cases to express the Bt genes responsible for pest resistance. Pseudomonas strains of bacteria cause frost damage by nucleating water into ice crystals around themselves. This led to the development of ice-minus bacteria, that have the ice-forming genes removed. When applied to crops they can compete with the ice-plus bacteria and confer some frost resistance.
Other uses
Other uses for genetically modified bacteria include bioremediation, where the bacteria are used to convert pollutants into a less toxic form. Genetic engineering can increase the levels of the enzymes used to degrade a toxin or to make the bacteria more stable under environmental conditions. GM bacteria have also been developed to leach copper from ore, clean up mercury pollution and detect arsenic in drinking water. Bioart has also been created using genetically modified bacteria. In the 1980s artist Joe Davis and geneticist Dana Boyd converted the Germanic symbol for femininity (ᛉ) into binary code and then into a DNA sequence, which was then expressed in Escherichia coli. This was taken a step further in 2012, when a whole book was encoded onto DNA. Paintings have also been produced using bacteria transformed with fluorescent proteins.
Bacteria-synthesized transgenic products
Insulin
Hepatitis B vaccine
Tissue plasminogen activator
Human growth hormone
Ice-minus bacteria
Interferon
Bt corn
References
Further reading
Genetically modified organisms
Bacteria | Genetically modified bacteria | [
"Engineering",
"Biology"
] | 1,864 | [
"Genetically modified organisms",
"Prokaryotes",
"Genetic engineering",
"Bacteria",
"Microorganisms"
] |
25,175,288 | https://en.wikipedia.org/wiki/Genetically%20modified%20fish | Genetically modified fish (GM fish) are organisms from the taxonomic clade which includes the classes Agnatha (jawless fish), Chondrichthyes (cartilaginous fish) and Osteichthyes (bony fish) whose genetic material (DNA) has been altered using genetic engineering techniques. In most cases, the aim is to introduce a new trait to the fish which does not occur naturally in the species, i.e. transgenesis.
GM fish are used in scientific research and kept as pets. They are being developed as environmental pollutant sentinels and for use in aquaculture food production. In 2015, the AquAdvantage salmon was approved by the US Food and Drug Administration (FDA) for commercial production, sale and consumption, making it the first genetically modified animal to be approved for human consumption. Some GM fish that have been created have promoters driving an over-production of "all fish" growth hormone. This results in dramatic growth enhancement in several species, including salmonids, carps and tilapias.
Critics have objected to GM fish on several grounds, including ecological concerns, animal welfare concerns and with respect to whether using them as food is safe and whether GM fish are needed to help address the world's food needs.
History and process
The first transgenic fish were produced in China in 1985. As of 2013, approximately 50 species of fish have been subject to genetic modification. This has resulted in more than 400 fish/trait combinations. Most of the modifications have been conducted on food species, such as Atlantic salmon (Salmo salar), tilapia and common carp (Cyprinus carpio).
Generally, genetic modification entails manipulation of DNA. The process is known as cisgenesis when a gene is transferred between organisms that could be conventionally bred, or transgenesis when a gene from one species is added to a different species. Gene transfer into the genome of the desired organism, as for fish in this case, requires a vector like a lentivirus or mechanical/physical insertion of the altered genes into the nucleus of the host by means of a micro syringe or a gene gun.
Uses
Research
Transgenic fish are used in research covering five broad areas:
Enhancing the traits of commercially available fish
Their use as bioreactors for the development of bio-medically important proteins
Their use as indicators of aquatic pollutants
Developing new non-mammalian animal models
Functional genomics studies
Most GM fish are used in basic research in genetics and development. Two species of fish, zebrafish (Danio rerio) and medaka (Japanese rice fish, Oryzias latipes), are most commonly modified because they have optically clear chorions (shells), develop rapidly, the 1-cell embryo is easy to see and micro-inject with transgenic DNA, and zebrafish have the capability of regenerating their organ tissues. They are also used in drug discovery. GM zebrafish are being explored for benefits of unlocking human organ tissue diseases and failure mysteries. For instance, zebrafish are used to understand heart tissue repair and regeneration in efforts to study and discover cures for cardiovascular diseases.
Transgenic rainbow trout (Oncorhynchus mykiss) have been developed to study muscle development. The introduced transgene causes green fluorescence to appear in fast twitch muscle fibres early in development which persist throughout life. It has been suggested the fish might be used as indicators of aquatic pollutants or other factors which influence development.
In intensive fish farming, the fish are kept at high stocking densities. This means they suffer from frequent transmission of contagious diseases, a problem which is being addressed by GM research. Grass carp (Ctenopharyngodon idella) have been modified with a transgene coding for human lactoferrin, which doubles their survival rate relative to control fish after exposure to Aeromonas bacteria and Grass carp hemorrhage virus. Cecropin has been used in channel catfish to enhance their protection against several pathogenic bacteria by 2–4 times.
Recreation
Pets
GloFish is a patented technology which allows GM fish (tetra, barb, zebrafish) to express jellyfish and sea coral proteins giving the fish bright red, green or orange fluorescent colors when viewed in ultraviolet light. Although the fish were originally created and patented for scientific research at the National University of Singapore, a Texas company, Yorktown Technologies, obtained the rights to market the fish as pets. They became the first genetically modified animal to become publicly available as a pet when introduced for commercial sale in 2003. They were quickly banned for sale in California; however, they are now on shelves once again in this state. As of 2013, Glofish are only sold in the US.
Other transgenic lines of pet fish include Medaka which remain transparent throughout their lives and pink body color transgenic angelfish (Pterophyllum scalare) and lionhead fish expressing the Acropora coral (Acroporo millepora) red fluorescent protein.
The ocean pout type III antifreeze protein transgene has been successfully micro-injected and expressed in goldfish. The transgenic goldfish showed higher cold tolerance compared with controls.
Food
One area of intensive research with GM fish has aimed to increase food production by modifying the expression of growth hormone (GH). The relative increases in growth differ between species. They range from a doubling in weight, to some fish that are almost 100 times heavier than the wild-type at a comparable age. This research area has resulted in dramatic growth enhancement in several species, including salmon, trout and tilapia. Other sources indicate an 11-fold and 30-fold increase in growth of salmon and mud loach, respectively, compared to wild-type fish. Transgenic fish development has reached the stage where several species are ready to be marketed in different countries, for example, GM tilapia in Cuba, GM carp in the People's Republic of China, and GM salmon in the US and Canada. In 2014, it was reported that applications for the approval of transgenic fish as food had been made in Canada, China, Cuba and the United States.
Over-production of GH from the pituitary gland increases growth rate mainly by an increase in food consumption by the fish, but also by a 10 to 15% increase in feed conversion efficiency.
Another approach to increasing meat production in GM fish is "double muscling". This results in a phenotype similar to that of Belgian Blue cattle in rainbow trout. It is achieved by using transgenes expressing follistatin, which inhibits myostatin, and the development of two muscle layers.
AquAdvantage salmon
In November 2015, the FDA of the USA approved the AquAdvantage salmon created by AquaBounty for commercial production, sale and consumption. It is the first genetically modified animal to be approved for human consumption. The fish is essentially an Atlantic salmon with a single gene complex inserted: a growth hormone regulating gene from a Chinook salmon with a promoter sequence from an ocean pout. This permits the GM salmon to produce GH year round rather than pausing for part of the year as do wild-type Atlantic salmon. The wild-type salmon takes 24 to 30 months to reach market size (4–6 kg) whereas the GM salmon requires only 18 months to achieve this. AquaBounty argue that their GM salmon can be grown nearer to end-markets with greater efficiency (they require 25% less feed to achieve market weight) than the Atlantic salmon which are currently reared in remote coastal fish farms, thereby making it better for the environment, with recycled waste and lower transport costs.
To prevent the genetically modified fish inadvertently breeding with wild salmon, all the fish raised for food are females, triploid, and 99% are reproductively sterile. The fish are raised in a facility in Panama with physical barriers and geographical containment such as river and ocean temperatures too high to support salmon survival to prevent escape. The FDA has determined AquAdvantage would not have a significant effect on the environment in the United States. A fish farm is also being readied in Indiana where the FDA has approved importation of salmon eggs. As of August 2017, GMO salmon is being sold in Canada. Sales in the US began in May 2021.
Detecting aquatic pollution (potential)
Several research groups have been developing GM zebrafish to detect aquatic pollution. The laboratory that developed the GloFish originally intended them to change color in the presence of pollutants, as environmental sentinels. Teams at the University of Cincinnati and Tulane University have been developing GM fish for the same purpose.
Several transgenic methods have been used to introduce target DNA into zebrafish for environmental monitoring, including micro-injection, electroporation, particle gun bombardment, liposome-mediated gene transfer, and sperm-mediated gene transfer. Micro-injection is the most commonly used method to produce transgenic zebrafish as this produces the highest survival rate.
Regulation
The regulation of genetic engineering concerns the approaches taken by governments to assess and manage the risks associated with the development and release of genetically modified crops. There are differences in the regulation of GMOs between countries, with some of the most marked differences occurring between the US and Europe. Regulation varies in a given country depending on the intended use of the products of the genetic engineering. For example, a fish not intended for food use is generally not reviewed by authorities responsible for food safety.
The US FDA guidelines for evaluating transgenic animals define transgenic constructs as "drugs" regulated under the animal drug provisions of the Federal Food, Drug, and Cosmetic Act. This classification is important for several reasons, including that it places all GM food animal permits under the jurisdiction of the FDA's Center for Veterinary Medicine (CVM) and imposes limits on what information the FDA can release to the public, and furthermore, it avoids a more open food safety review process.
The US states of Washington and Maine have imposed permanent bans on the production of transgenic fish.
Controversy
Critics have objected to use of genetic engineering per se on several grounds, including ethical concerns, ecological concerns (especially about gene flow), and economic concerns raised by the fact GM techniques and GM organisms are subject to intellectual property law. GMOs also are involved in controversies over GM food with respect to whether using GM fish as food is safe, whether it would exacerbate or cause fish allergies, whether it should be labeled, and whether GM fish and crops are needed to address the world's food needs. These controversies have led to litigation, international trade disputes, and protests, and to restrictive regulation of commercial products in most countries.
There is much doubt among the public about genetically modified animals in general. It is believed that the acceptance of GM fish by the general public is the lowest of all GM animals used for food and pharmaceuticals.
Ethical concerns
In transgenic fast-growing fish genetically modified for growth hormone, the mosaic founder fish vary greatly in their growth rate, reflecting the highly variable proportion and distribution of transgenic cells in their bodies. Fish with these high growth rates (and their progeny) sometimes develop a morphological abnormality similar to acromegaly in humans, exhibiting an enlarged head relative to the body and a bulging operculum. This becomes progressively worse as the fish ages. It can interfere with feeding and may ultimately cause death. According to a study commissioned by Compassion in World Farming, the abnormalities are probably a direct consequence of growth hormone over-expression and have been reported in GM coho salmon, rainbow trout, common carp, channel catfish and loach, but to a lesser extent in Nile tilapia.
In GM coho salmon (Oncorhynchus kisutch) there are morphological changes and changed allometry that lead to reduced swimming abilities. They also exhibit abnormal behaviour such as increased levels of activity with respect to feed-intake and swimming. Several other transgenic fish show decreased swimming ability, likely due to body shape and muscle structure.
Genetically modified triploid fish are more susceptible to temperature stress, have a higher incidence of deformities (e.g. abnormalities in the eye and lower jaw), and are less aggressive than diploids. Other welfare concerns of GM fish include increased stress under oxygen-deprived conditions caused by increased need for oxygen. It has been shown that deaths due to low levels of oxygen (hypoxia) in coho salmon are most pronounced in transgenics. It has been suggested the increased sensitivity to hypoxia is caused by the insertion of the extra set of chromosomes requiring a larger nucleus which thereby causes a larger cell overall and a reduction in the surface area to volume ratio of the cell.
Ecological concerns
Transgenic fish are usually developed in strains of near-wild origin. These have an excellent capacity for interbreeding with themselves or wild relatives and therefore possess a significant possibility for establishing themselves in nature should they escape biotic or abiotic containment measures.
A wide range of concerns about the consequences of genetically modified fish escaping have been expressed. For polyploids, these include the degree of sterility, interference with spawning, and competing for resources without contributing to subsequent generations. For transgenics, the concerns include characteristics of the genotype, the function of the gene, the type of the gene, potential for causing pleiotropic effects, potential for interacting with the remainder of the genome, stability of the construct, and the ability of the DNA construct to transpose within or between genomes.
One study, using relevant life history data from the Japanese medaka (Oryzias latipes) predicts that a transgene introduced into a natural population by a small number of transgenic fish will spread as a result of enhanced mating advantage, but the reduced viability of offspring will cause eventual local extinction of both populations. GM coho salmon show greater risk-taking behaviour and better use of limited food than wild-type fish.
Transgenic coho salmon have enhanced feeding capacity and growth, which can result in a considerably larger body size (>7-fold) compared to non-transgenic salmon. When transgenic and non-transgenic salmon in the same enclosure compete for different levels of food, transgenic individuals consistently outgrow non-transgenic individuals. When food abundance is low, dominant individuals emerge, invariably transgenic, that show strong agonistic and cannibalistic behavior to cohorts and dominate the acquisition of limited food resources. When food availability is low, all groups containing transgenic salmon experience population crashes or complete extinctions, whereas groups containing only non-transgenic salmon have good (72%) survival rates. This has led to the suggestion that these GM fish will survive better than the wild-type when conditions are very poor.
Successful artificial transgenic hybridization between two species of loach (genus Misgurnus) has been reported, yet these species are not known to hybridize naturally.
GloFish were not considered as an environmental threat because they were less fit than normal zebrafish which are unable to establish themselves in the wild in the US.
AquAdvantage salmon
The FDA has said the AquAdvantage Salmon can be safely contained in land-based tanks with little risk of escape into the wild; however, Joe Perry, former chair of the GM panel of the European Food Safety Authority, has been quoted as saying "There remain legitimate ecological concerns over the possible consequences if these GM salmon escape to the wild and reproduce, despite FDA assurances over containment and sterility, neither of which can be guaranteed".
AquaBounty indicates their GM salmon cannot interbreed with wild fish because they are triploid, which makes them sterile. The possibility of fertile triploids is one of the major shortfalls of triploidy being used as a means of bio-containment for transgenic fish. However, it is estimated that 1.1% of eggs remain diploid, and therefore capable of breeding, despite the triploidy process. Others have claimed the sterility process has a failure rate of 5%. With around a million fish in each of the 3,000 Atlantic sites a single failure could result in the release of 1,100 to 5,000 genetically altered fish capable of reproducing. Large-scale trials using normal pressure, high pressure, or high pressure plus aged eggs for transgenic coho salmon, give triploidy frequencies of only 99.8%, 97.6%, and 97.0%, respectively. AquaBounty also emphasizes that their GM salmon would not survive wild conditions due to the geographical locations where their research is conducted, as well as the locations of their farms.
The GH transgene can be transmitted via hybridization of GM AquAdvantage Salmon and the closely related wild brown trout (Salmo trutta). Transgenic hybrids are viable and grow more rapidly than transgenic salmon and other wild-type crosses in conditions emulating a hatchery. In stream mesocosms designed to simulate natural conditions, transgenic hybrids express competitive dominance and suppress the growth of transgenic and non-transgenic salmon by 82% and 54%, respectively. Natural levels of hybridization between these two species can be as high as 41%. Researchers examining this possibility concluded "Ultimately, we suggest that hybridization of transgenic fishes with closely related species represents potential ecological risks for wild populations and a possible route for introgression of a transgene, however low the likelihood, into a new species in nature."
An article in Slate Magazine in December 2012 by Jon Entine, Director of the Genetic Literacy Project, criticized the Obama administration for preventing the publication of the environmental assessment (EA) of the AquAdvantage Salmon, which was completed in April 2012 and which concluded that "the salmon is safe to eat and poses no serious environmental hazards." The Slate article said that the publication of the report was stopped "after meetings with the White House, which was debating the political implications of approving the GM salmon, a move likely to infuriate a portion of its base". Within days of the article's publication and less than two months after the election, the FDA released the draft EA and opened the comment period.
References
Genetically modified organisms
1985 in biotechnology | Genetically modified fish | [
"Engineering",
"Biology"
] | 3,725 | [
"Genetic engineering",
"Genetically modified organisms"
] |
25,176,083 | https://en.wikipedia.org/wiki/Dicarbonate | A dicarbonate, also known as a pyrocarbonate, is a chemical containing the divalent functional group –O–C(=O)–O–C(=O)–O–, which consists of two carbonate groups sharing an oxygen atom. It is one of the polycarbonate functional groups. These compounds can be viewed as derivatives of the hypothetical compound dicarbonic acid, HO–C(=O)–O–C(=O)–OH (H2C2O5). Three important organic compounds containing this group are:
dimethyl dicarbonate
diethyl dicarbonate
di-tert-butyl dicarbonate , also known as Boc anhydride.
The dicarbonate anion is one of the oxocarbon anions, consisting solely of carbon and oxygen. It has the formula [C2O5]2−, i.e. [O(CO2)2]2−. Dicarbonate salts are apparently unstable at ambient conditions, but can be made under pressure and may have a fleeting existence in carbonate solutions.
The term dicarbonate is sometimes used erroneously to refer to bicarbonate, the common name of the hydrogencarbonate anion or esters of the hydrogencarbonate functional group . It is also sometimes used for chemicals that contain two carbonate units in their covalent structure or stoichiometric formula.
Inorganic salts
PbC2O5 (lead(II) dicarbonate) can be formed at 30 GPa and 2000 K from PbCO3 and CO2. It forms white monoclinic crystals, with space group P21/c and four formula units per unit cell. At 30 GPa the unit cell has a = 4.771 Å, b = 8.079 Å, c = 7.070 Å and β = 91.32°. The unit cell volume is 272.4 Å3 and the density is 7.59 g/cm3.
SrC2O5 (strontium dicarbonate) is very similar to the lead compound, and also has a monoclinic structure with space group P21/c and four formula units per unit cell. At 30 GPa the unit cell has a = 4.736 Å, b = 8.175 Å, c = 7.140 Å and β = 91.34°. The unit cell volume is 276.3 Å3 and the density is 4.61 g/cm3. The double C=O bonds have lengths of 1.22, 1.24, and 1.25 Å. The single C–O bonds have lengths of 1.36 and 1.41 Å. The angles subtended at the carbon atoms are slightly less than 120°, and the angle at the bridging C–O–C oxygen is larger.
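The quoted densities can be checked directly from the cell parameters: for a monoclinic cell the volume is V = a·b·c·sin β, and the density follows from the molar mass and the number of formula units Z per cell. A minimal sketch (Python; the atomic masses used and the g/cm3 unit are assumptions for illustration, not values taken from the source):

```python
import math

N_A = 6.02214e23  # Avogadro's number, mol^-1

def monoclinic_density(a, b, c, beta_deg, molar_mass, z):
    """Density in g/cm^3 from monoclinic cell parameters (angstroms, degrees)."""
    volume_A3 = a * b * c * math.sin(math.radians(beta_deg))  # cell volume in cubic angstroms
    volume_cm3 = volume_A3 * 1e-24                            # 1 angstrom^3 = 1e-24 cm^3
    return z * molar_mass / (N_A * volume_cm3)

# PbC2O5 at 30 GPa: M ~ 207.2 + 2*12.011 + 5*15.999 ~ 311.2 g/mol, Z = 4
print(monoclinic_density(4.771, 8.079, 7.070, 91.32, 311.2, 4))  # ~7.59
# SrC2O5 at 30 GPa: M ~ 87.62 + 2*12.011 + 5*15.999 ~ 191.6 g/mol, Z = 4
print(monoclinic_density(4.736, 8.175, 7.140, 91.34, 191.6, 4))  # ~4.61
```

Both computed values reproduce the reported 7.59 and 4.61 figures, which supports reading them as densities in g/cm3.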
See also
Tricarbonate
Peroxodicarbonate
Oxalate
Pyrosulfate
Peroxydisulfate
Dithionate
Trithionate
Tetrathionate
Pyrophosphate
Polyphosphate
References
Carbon oxyanions
Oxocarbon anions
Functional groups
Dicarbonates | Dicarbonate | [
"Chemistry"
] | 546 | [
"Functional groups"
] |
25,177,102 | https://en.wikipedia.org/wiki/C20H18O10 | The molecular formula C20H18O10 (molar mass: 418.35 g/mol, exact mass: 418.0899968) may refer to:
5,3'-Dihydroxy-3,8,4',5'-tetramethoxy-6,7-methylenedioxyflavone (CAS number: 82668-96-0)
6-C-beta-D-Xylopyranosylluteolin (CAS number 70059-13-1)
8-C-alpha-L-Arabinosylluteolin (CAS number 115636-75-4)
Isoscutellarein 7-xyloside (CAS number 126771-29-7)
Juglanin (Kaempferol 3-O-arabinoside, CAS number 5041-67-8)
Kaempferol 3-alpha-L-arabinopyranoside (CAS number 99882-10-7)
Kaempferol 3-xyloside (CAS number 60933-78-0)
Kaempferol 3-alpha-D-arabinopyranoside (CAS number 201533-09-7)
Kaempferol 7-alpha-L-arabinoside (CAS number 70427-13-3)
Kaempferol 7-xyloside
Luteolin 3'-xyloside (CAS number 93078-91-2)
Luteolin 6-C-alpha-L-arabinopyranoside (CAS number 321690-39-5)
Luteolin 7-xyloside (CAS number 98575-26-9)
Salvianolic acid D (CAS number 142998-47-8)
Scutellarein 6-xyloside (CAS number 65876-68-8)
Molecular formulas | C20H18O10 | [
"Physics",
"Chemistry"
] | 427 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
50,697,899 | https://en.wikipedia.org/wiki/Pentomone | Pentomone (development codes Lilly 113935 and LY-113935) is a nonsteroidal antiandrogen (NSAA) described as a "prostate growth inhibitor" which was never marketed. It was synthesized and assayed in 1978.
Synthesis
Condensation of two equivalents of o-vanillin with 4,4-dimethylcyclohexadienone (2) gives the five-ring ketone derivative (3). The reaction may be visualized as initial conjugate addition of phenoxide to the enone followed by interception of the resulting anion by the aldehyde carbonyl group. Catalytic hydrogenation then reduces both olefin pi-bonds as well as the ketone, to give (4). Re-oxidation of the alcohol thus formed with pyridinium chlorochromate affords pentomone.
References
Ethers
Heterocyclic compounds with 5 rings
Ketones
Nonsteroidal antiandrogens
Oxygen heterocycles
Methoxy compounds | Pentomone | [
"Chemistry"
] | 218 | [
"Ketones",
"Organic compounds",
"Functional groups",
"Ethers"
] |
50,700,194 | https://en.wikipedia.org/wiki/Linear%20recurrence%20with%20constant%20coefficients | In mathematics (including combinatorics, linear algebra, and dynamical systems), a linear recurrence with constant coefficients (also known as a linear recurrence relation or linear difference equation) sets equal to 0 a polynomial that is linear in the various iterates of a variable—that is, in the values of the elements of a sequence. The polynomial's linearity means that each of its terms has degree 0 or 1. A linear recurrence denotes the evolution of some variable over time, with the current time period or discrete moment in time denoted as $t$, one period earlier denoted as $t-1$, one period later as $t+1$, etc.
The solution of such an equation is a function of $t$, and not of any iterate values, giving the value of the iterate at any time. To find the solution it is necessary to know the specific values (known as initial conditions) of $n$ of the iterates, and normally these are the $n$ iterates that are oldest. The equation or its variable is said to be stable if from any set of initial conditions the variable's limit as time goes to infinity exists; this limit is called the steady state.
Difference equations are used in a variety of contexts, such as in economics to model the evolution through time of variables such as gross domestic product, the inflation rate, the exchange rate, etc. They are used in modeling such time series because values of these variables are only measured at discrete intervals. In econometric applications, linear difference equations are modeled with stochastic terms in the form of autoregressive (AR) models and in models such as vector autoregression (VAR) and autoregressive moving average (ARMA) models that combine AR with other features.
Definitions
A linear recurrence with constant coefficients is an equation of the following form, written in terms of parameters $a_1, \dots, a_n$ and $b$:
$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_n y_{t-n} + b,$$
or equivalently as
$$y_{t+n} = a_1 y_{t+n-1} + a_2 y_{t+n-2} + \cdots + a_n y_t + b.$$
The positive integer $n$ is called the order of the recurrence and denotes the longest time lag between iterates. The equation is called homogeneous if $b = 0$ and nonhomogeneous if $b \neq 0$.
If the equation is homogeneous, the coefficients $a_i$ determine the characteristic polynomial (also "auxiliary polynomial" or "companion polynomial")
$$p(\lambda) = \lambda^n - a_1 \lambda^{n-1} - a_2 \lambda^{n-2} - \cdots - a_n,$$
whose roots play a crucial role in finding and understanding the sequences satisfying the recurrence.
Conversion to homogeneous form
If $b \neq 0$, the equation
$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_n y_{t-n} + b$$
is said to be nonhomogeneous. To solve this equation it is convenient to convert it to homogeneous form, with no constant term. This is done by first finding the equation's steady state value—a value $y^*$ such that, if $n$ successive iterates all had this value, so would all future values. This value is found by setting all values of $y$ equal to $y^*$ in the difference equation, and solving, thus obtaining
$$y^* = \frac{b}{1 - a_1 - a_2 - \cdots - a_n},$$
assuming the denominator is not 0. If it is zero, the steady state does not exist.
Given the steady state, the difference equation can be rewritten in terms of deviations of the iterates from the steady state, as
$$\left(y_t - y^*\right) = a_1 \left(y_{t-1} - y^*\right) + a_2 \left(y_{t-2} - y^*\right) + \cdots + a_n \left(y_{t-n} - y^*\right),$$
which has no constant term, and which can be written more succinctly as
$$x_t = a_1 x_{t-1} + a_2 x_{t-2} + \cdots + a_n x_{t-n},$$
where $x_t$ equals $y_t - y^*$. This is the homogeneous form.
If there is no steady state, the difference equation
$$y_t = a_1 y_{t-1} + \cdots + a_n y_{t-n} + b$$
can be combined with its equivalent form
$$y_{t-1} = a_1 y_{t-2} + \cdots + a_n y_{t-n-1} + b$$
to obtain (by solving both for $b$)
$$y_t - a_1 y_{t-1} - \cdots - a_n y_{t-n} = y_{t-1} - a_1 y_{t-2} - \cdots - a_n y_{t-n-1},$$
in which like terms can be combined to give a homogeneous equation of one order higher than the original.
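As a concrete check of the steady-state conversion, the sketch below (Python; the second-order coefficients and initial values are illustrative, not taken from the text) computes $y^* = b/(1 - a_1 - \cdots - a_n)$ for a sample nonhomogeneous recurrence and verifies that the deviations $x_t = y_t - y^*$ satisfy the homogeneous form.

```python
def iterate(coeffs, b, init, steps):
    """Iterate y_t = a_1*y_{t-1} + ... + a_n*y_{t-n} + b from oldest-first initial values."""
    y = list(init)
    for _ in range(steps):
        y.append(sum(coef * y[-k] for k, coef in enumerate(coeffs, start=1)) + b)
    return y

a = [0.5, 0.3]                          # a_1, a_2 (illustrative second-order example)
b = 2.0
y_star = b / (1 - sum(a))               # steady state, assuming 1 - a_1 - a_2 != 0
y = iterate(a, b, [1.0, 4.0], 20)
x = [v - y_star for v in y]             # deviations from the steady state

# Each deviation should satisfy the homogeneous recurrence x_t = a_1*x_{t-1} + a_2*x_{t-2}.
for t in range(2, len(x)):
    assert abs(x[t] - (a[0] * x[t-1] + a[1] * x[t-2])) < 1e-9
print("steady state:", y_star)
```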
Solution example for small orders
The roots of the characteristic polynomial play a crucial role in finding and understanding the sequences satisfying the recurrence. If there are $n$ distinct roots $r_1, r_2, \dots, r_n$ then each solution to the recurrence takes the form
$$y_t = c_1 r_1^t + c_2 r_2^t + \cdots + c_n r_n^t,$$
where the coefficients $c_i$ are determined in order to fit the initial conditions of the recurrence. When the same roots occur multiple times, the terms in this formula corresponding to the second and later occurrences of the same root are multiplied by increasing powers of $t$. For instance, if the characteristic polynomial can be factored as $(x - r)^3 (x - q_1) \cdots (x - q_{n-3})$, with the same root $r$ occurring three times, then the solution would take the form
$$y_t = \left(c_1 + c_2 t + c_3 t^2\right) r^t + c_4 q_1^t + \cdots + c_n q_{n-3}^t.$$
Order 1
For order 1, the recurrence
$$y_t = a_1 y_{t-1}$$
has the solution $y_t = a_1^t$ with $y_0 = 1$, and the most general solution is $y_t = k a_1^t$ with $y_0 = k$. The characteristic polynomial equated to zero (the characteristic equation) is simply $\lambda - a_1 = 0$.
Order 2
Solutions to such recurrence relations of higher order are found by systematic means, often using the fact that $y_t = r^t$ is a solution for the recurrence exactly when $r$ is a root of the characteristic polynomial. This can be approached directly or using generating functions (formal power series) or matrices.
Consider, for example, a recurrence relation of the form
$$y_t = a_1 y_{t-1} + a_2 y_{t-2}.$$
When does it have a solution of the same general form as $y_t = r^t$? Substituting this guess (ansatz) in the recurrence relation, we find that
$$r^t = a_1 r^{t-1} + a_2 r^{t-2}$$
must be true for all $t$.
Dividing through by $r^{t-2}$, we get that all these equations reduce to the same thing:
$$r^2 = a_1 r + a_2, \qquad \text{i.e.} \qquad r^2 - a_1 r - a_2 = 0,$$
which is the characteristic equation of the recurrence relation. Solve for $r$ to obtain the two roots $\lambda_1$, $\lambda_2$: these roots are known as the characteristic roots or eigenvalues of the characteristic equation. Different solutions are obtained depending on the nature of the roots: If these roots are distinct, we have the general solution
$$y_t = c_1 \lambda_1^t + c_2 \lambda_2^t,$$
while if they are identical (when $a_1^2 + 4 a_2 = 0$), we have
$$y_t = \left(c_1 + c_2 t\right) \lambda^t.$$
This is the most general solution; the two constants $c_1$ and $c_2$ can be chosen based on two given initial conditions $y_0$ and $y_1$ to produce a specific solution.
In the case of complex eigenvalues (which also gives rise to complex values for the solution parameters $c_1$ and $c_2$), the use of complex numbers can be eliminated by rewriting the solution in trigonometric form. In this case we can write the eigenvalues as $\lambda_1, \lambda_2 = \alpha \pm \beta i$. Then it can be shown that
$$y_t = c_1 \lambda_1^t + c_2 \lambda_2^t$$
can be rewritten as
$$y_t = M^t \left[E \cos(\theta t) + F \sin(\theta t)\right],$$
where
$$M = \sqrt{\alpha^2 + \beta^2}, \qquad \cos\theta = \frac{\alpha}{M}, \qquad \sin\theta = \frac{\beta}{M}.$$
Here and (or equivalently, and ) are real constants which depend on the initial conditions. Using
one may simplify the solution given above as
where and are the initial conditions and
In this way there is no need to solve for and .
In all cases—real distinct eigenvalues, real duplicated eigenvalues, and complex conjugate eigenvalues—the equation is stable (that is, the variable converges to a fixed value [specifically, zero]) if and only if both eigenvalues are smaller than one in absolute value. In this second-order case, this condition on the eigenvalues can be shown to be equivalent to $|a_1| < 1 - a_2 < 2$, which is equivalent to $|a_2| < 1$ and $|a_1| < 1 - a_2$.
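The sketch below (Python; the coefficients and initial conditions are illustrative) pulls the second-order case together: it computes the characteristic roots, fits $c_1$ and $c_2$ to the initial conditions, compares the closed form with direct iteration, and reports whether both roots lie inside the unit circle, i.e. the stability condition just stated.

```python
import cmath

a1, a2 = 1.0, -0.5          # illustrative recurrence y_t = a1*y_{t-1} + a2*y_{t-2}
y0, y1 = 1.0, 2.0           # initial conditions

# Characteristic roots of r^2 - a1*r - a2 = 0 (complex arithmetic covers all cases).
disc = cmath.sqrt(a1**2 + 4 * a2)
l1, l2 = (a1 + disc) / 2, (a1 - disc) / 2

# Fit y_t = c1*l1^t + c2*l2^t to y_0 and y_1 (assumes distinct roots).
c2 = (y1 - l1 * y0) / (l2 - l1)
c1 = y0 - c2

def closed_form(t):
    return (c1 * l1**t + c2 * l2**t).real

# Compare with direct iteration.
ys = [y0, y1]
for t in range(2, 10):
    ys.append(a1 * ys[-1] + a2 * ys[-2])
for t, y in enumerate(ys):
    assert abs(closed_form(t) - y) < 1e-9

print("roots:", l1, l2, "stable:", abs(l1) < 1 and abs(l2) < 1)
```

With these example coefficients the roots are complex conjugates of modulus below 1, so the iterates oscillate while converging to zero, as the cyclicity and stability discussions below describe.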
General solution
Characteristic polynomial and roots
Solving the homogeneous equation
$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_n y_{t-n}$$
involves first solving its characteristic polynomial
$$\lambda^n - a_1 \lambda^{n-1} - a_2 \lambda^{n-2} - \cdots - a_n = 0$$
for its characteristic roots $\lambda_1, \dots, \lambda_n$. These roots can be solved for algebraically if $n \leq 4$, but not necessarily otherwise. If the solution is to be used numerically, all the roots of this characteristic equation can be found by numerical methods. However, for use in a theoretical context it may be that the only information required about the roots is whether any of them are greater than or equal to 1 in absolute value.
It may be that all the roots are real or instead there may be some that are complex numbers. In the latter case, all the complex roots come in complex conjugate pairs.
Solution with distinct characteristic roots
If all the characteristic roots are distinct, the solution of the homogeneous linear recurrence
$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_n y_{t-n}$$
can be written in terms of the characteristic roots as
$$y_t = c_1 \lambda_1^t + c_2 \lambda_2^t + \cdots + c_n \lambda_n^t,$$
where the coefficients $c_i$ can be found by invoking the initial conditions. Specifically, for each time period for which an iterate value is known, this value and its corresponding value of $t$ can be substituted into the solution equation to obtain a linear equation in the $n$ as-yet-unknown parameters; $n$ such equations, one for each initial condition, can be solved simultaneously for the $n$ parameter values. If all characteristic roots are real, then all the coefficient values will also be real; but with non-real complex roots, in general some of these coefficients will also be non-real.
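For a general order $n$ this fitting step is just a linear solve. A sketch (Python with NumPy; the third-order coefficients and initial values are illustrative) finds the characteristic roots with numpy.roots and solves the resulting system of $n$ linear equations for the coefficients, then checks the closed form against direct iteration.

```python
import numpy as np

a = np.array([0.6, 0.25, 0.1])      # a_1, a_2, a_3 (illustrative third-order recurrence)
y_init = np.array([1.0, 2.0, 1.5])  # y_0, y_1, y_2

# Roots of lambda^3 - a1*lambda^2 - a2*lambda - a3 = 0.
roots = np.roots(np.concatenate(([1.0], -a)))

# Solve sum_i c_i * roots_i**t = y_t for t = 0, 1, 2 (one equation per initial condition).
V = np.vander(roots, N=len(roots), increasing=True).T   # V[t, i] = roots_i**t
c = np.linalg.solve(V, y_init.astype(complex))

def closed_form(t):
    return (c * roots**t).sum().real

# Check against direct iteration.
y = list(y_init)
for t in range(3, 12):
    y.append(a @ np.array(y[-1:-4:-1]))
print([round(closed_form(t), 6) for t in range(12)])
print([round(float(v), 6) for v in y])
```

The coefficients come back complex whenever some roots are complex, but the reconstructed iterates are real, in line with the remark above about non-real coefficients.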
Converting complex solution to trigonometric form
If there are complex roots, they come in conjugate pairs and so do the complex terms in the solution equation. If two of these complex terms are $c_j \lambda_j^t$ and $c_{j+1} \lambda_{j+1}^t$, the roots can be written as
$$\lambda_j, \lambda_{j+1} = \alpha \pm \beta i,$$
where $i$ is the imaginary unit and $M$ is the modulus of the roots:
$$M = \sqrt{\alpha^2 + \beta^2}.$$
Then the two complex terms in the solution equation can be written as
$$c_j (\alpha + \beta i)^t + c_{j+1} (\alpha - \beta i)^t = c_j M^t \left(\cos\theta t + i \sin\theta t\right) + c_{j+1} M^t \left(\cos\theta t - i \sin\theta t\right),$$
where $\theta$ is the angle whose cosine is $\alpha / M$ and whose sine is $\beta / M$; the last equality here made use of de Moivre's formula.
Now the process of finding the coefficients $c_j$ and $c_{j+1}$ guarantees that they are also complex conjugates, which can be written as $\gamma \pm \delta i$. Using this in the last equation gives this expression for the two complex terms in the solution equation:
$$2 M^t \left(\gamma \cos\theta t - \delta \sin\theta t\right),$$
which can also be written as
$$2 \sqrt{\gamma^2 + \delta^2} \; M^t \cos\left(\theta t + \psi\right),$$
where $\psi$ is the angle whose cosine is $\gamma / \sqrt{\gamma^2 + \delta^2}$ and whose sine is $\delta / \sqrt{\gamma^2 + \delta^2}$.
Cyclicity
Depending on the initial conditions, even with all roots real the iterates can experience a transitory tendency to go above and below the steady state value. But true cyclicity involves a permanent tendency to fluctuate, and this occurs if there is at least one pair of complex conjugate characteristic roots. This can be seen in the trigonometric form of their contribution to the solution equation, involving $\cos\theta t$ and $\sin\theta t$.
Solution with duplicate characteristic roots
In the second-order case, if the two roots are identical ($\lambda_1 = \lambda_2$), they can both be denoted as $\lambda$ and a solution may be of the form
$$y_t = c_1 \lambda^t + c_2 t \lambda^t.$$
Solution by conversion to matrix form
An alternative solution method involves converting the $n$th order difference equation to a first-order matrix difference equation. This is accomplished by writing $w_{1,t} = y_t$, $w_{2,t} = y_{t-1}$, $w_{3,t} = y_{t-2}$, and so on. Then the original single $n$th-order equation
$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_n y_{t-n} + b$$
can be replaced by the following $n$ first-order equations:
$$w_{1,t} = a_1 w_{1,t-1} + a_2 w_{2,t-1} + \cdots + a_n w_{n,t-1} + b,$$
$$w_{2,t} = w_{1,t-1}, \quad \dots, \quad w_{n,t} = w_{n-1,t-1}.$$
Defining the vector $w_t$ as
$$w_t = \begin{pmatrix} w_{1,t} \\ w_{2,t} \\ \vdots \\ w_{n,t} \end{pmatrix},$$
this can be put in matrix form as
$$w_t = A w_{t-1} + \mathbf{b}.$$
Here $A$ is an $n \times n$ matrix in which the first row contains $a_1, \dots, a_n$ and all other rows have a single 1 with all other elements being 0, and $\mathbf{b}$ is a column vector with first element $b$ and with the rest of its elements being 0.
This matrix equation can be solved using the methods in the article Matrix difference equation.
In the homogeneous case $y_t$ is a para-permanent of a lower triangular matrix.
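A minimal sketch of this matrix formulation (Python with NumPy; homogeneous case with illustrative coefficients): the companion matrix $A$ is built from the first row $[a_1, \dots, a_n]$ and a shifted identity, repeated multiplication by $A$ reproduces the iterates, and its eigenvalues are the characteristic roots.

```python
import numpy as np

a = np.array([0.6, 0.25, 0.1])        # a_1, ..., a_n (illustrative, homogeneous case b = 0)
n = len(a)

# Companion matrix: first row holds the coefficients, the sub-diagonal shifts the state down.
A = np.zeros((n, n))
A[0, :] = a
A[1:, :-1] = np.eye(n - 1)

w = np.array([1.5, 2.0, 1.0])          # state vector (y_2, y_1, y_0), newest entry first

for _ in range(10):
    w = A @ w                           # one step of w_t = A w_{t-1}
    print(w[0])                         # w[0] is the newest iterate y_t

# The eigenvalues of A are exactly the characteristic roots of the recurrence.
print(np.linalg.eigvals(A))
```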
Solution using generating functions
The recurrence
$$y_t = a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_n y_{t-n}$$
can be solved using the theory of generating functions. First, we write $Y(x) = \sum_{t \geq 0} y_t x^t$. The recurrence is then equivalent to the following generating function equation:
$$Y(x) = a_1 x \, Y(x) + a_2 x^2 \, Y(x) + \cdots + a_n x^n \, Y(x) + p(x),$$
where $p(x)$ is a polynomial of degree at most $n - 1$ correcting the initial terms.
From this equation we can solve to get
$$Y(x) = \frac{p(x)}{1 - a_1 x - a_2 x^2 - \cdots - a_n x^n}.$$
In other words, not worrying about the exact coefficients, $Y(x)$ can be expressed as a rational function
$$Y(x) = \frac{p(x)}{q(x)}.$$
The closed form can then be derived via partial fraction decomposition. Specifically, if the generating function is written as
then the polynomial determines the initial set of corrections , the denominator determines the exponential term , and the degree together with the numerator determine the polynomial coefficient .
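As an illustration of the generating-function view, the sketch below (Python with SymPy; the Fibonacci recurrence $y_t = y_{t-1} + y_{t-2}$ with $y_0 = 0$, $y_1 = 1$ and its generating function $x/(1 - x - x^2)$ are my chosen example, not one from the text) expands the rational generating function and checks that its Taylor coefficients reproduce the iterates.

```python
import sympy as sp

x = sp.symbols('x')
G = x / (1 - x - x**2)            # generating function of the Fibonacci-type recurrence

# Taylor coefficients of G should be the iterates y_0, y_1, y_2, ...
expansion = sp.series(G, x, 0, 12).removeO()
coeffs = [expansion.coeff(x, t) for t in range(12)]

# Direct iteration of y_t = y_{t-1} + y_{t-2} with y_0 = 0, y_1 = 1.
y = [0, 1]
for _ in range(10):
    y.append(y[-1] + y[-2])

print(coeffs)   # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
print(y)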
Relation to solution to differential equations
The method for solving linear differential equations is similar to the method above—the "intelligent guess" (ansatz) for linear differential equations with constant coefficients is $y = e^{\lambda x}$ where $\lambda$ is a complex number that is determined by substituting the guess into the differential equation.
This is not a coincidence. Considering the Taylor series of the solution to a linear differential equation:
it can be seen that the coefficients of the series are given by the -th derivative of evaluated at the point . The differential equation provides a linear difference equation relating these coefficients.
This equivalence can be used to quickly solve for the recurrence relationship for the coefficients in the power series solution of a linear differential equation.
The rule of thumb (for equations in which the polynomial multiplying the first term is non-zero at zero) is that:
and more generally
Example: The recurrence relationship for the Taylor series coefficients of the equation:
is given by
or
This example shows how problems generally solved using the power series solution method taught in normal differential equation classes can be solved in a much easier way.
Example: The differential equation
has solution
The conversion of the differential equation to a difference equation of the Taylor coefficients is
It is easy to see that the -th derivative of evaluated at is .
Solving with z-transforms
Certain difference equations - in particular, linear constant coefficient difference equations - can be solved using z-transforms. The z-transforms are a class of integral transforms that lead to more convenient algebraic manipulations and more straightforward solutions. There are cases in which obtaining a direct solution would be all but impossible, yet solving the problem via a thoughtfully chosen integral transform is straightforward.
Stability
In the solution equation
$$y_t = c_1 \lambda_1^t + c_2 \lambda_2^t + \cdots + c_n \lambda_n^t,$$
a term with real characteristic roots converges to 0 as $t$ grows indefinitely large if the absolute value of the characteristic root is less than 1. If the absolute value equals 1, the term will stay constant as $t$ grows if the root is +1 but will fluctuate between two values if the root is −1. If the absolute value of the root is greater than 1 the term will become larger and larger over time. A pair of terms with complex conjugate characteristic roots will converge to 0 with dampening fluctuations if the absolute value of the modulus of the roots is less than 1; if the modulus equals 1 then constant amplitude fluctuations in the combined terms will persist; and if the modulus is greater than 1, the combined terms will show fluctuations of ever-increasing magnitude.
Thus the evolving variable will converge to 0 if all of the characteristic roots have magnitude less than 1.
If the largest root has absolute value 1, neither convergence to 0 nor divergence to infinity will occur. If all roots with magnitude 1 are real and positive, $y_t$ will converge to the sum of their constant terms $c_i$; unlike in the stable case, this converged value depends on the initial conditions; different starting points lead to different points in the long run. If any root is −1, its term will contribute permanent fluctuations between two values. If any of the unit-magnitude roots are complex then constant-amplitude fluctuations of $y_t$ will persist.
Finally, if any characteristic root has magnitude greater than 1, then $y_t$ will diverge to infinity as time goes to infinity, or will fluctuate between increasingly large positive and negative values.
A theorem of Issai Schur states that all roots have magnitude less than 1 (the stable case) if and only if a particular string of determinants are all positive.
If a non-homogeneous linear difference equation has been converted to homogeneous form which has been analyzed as above, then the stability and cyclicality properties of the original non-homogeneous equation will be the same as those of the derived homogeneous form, with convergence in the stable case being to the steady-state value instead of to 0.
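In practice the stability check amounts to computing the characteristic roots numerically and inspecting their magnitudes. A minimal sketch (Python with NumPy; the classification labels and example coefficient sets are illustrative):

```python
import numpy as np

def classify(coeffs):
    """Classify y_t = a_1*y_{t-1} + ... + a_n*y_{t-n} by its largest characteristic root."""
    roots = np.roots(np.concatenate(([1.0], -np.asarray(coeffs, dtype=float))))
    r = np.abs(roots).max()
    if np.isclose(r, 1.0):
        return "borderline (unit-magnitude root: constant terms or sustained fluctuations)"
    if r < 1:
        return "stable (converges, to 0 in the homogeneous case or to the steady state otherwise)"
    return "unstable (diverges or fluctuates with growing magnitude)"

print(classify([0.5, 0.3]))    # both roots inside the unit circle
print(classify([1.0, 0.0]))    # a root exactly at 1
print(classify([1.2, 0.1]))    # a root outside the unit circle
```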
See also
Recurrence relation
Linear differential equation
Skolem–Mahler–Lech theorem
Skolem problem
References
Combinatorics
Dynamical systems
Linear algebra
Recurrence relations | Linear recurrence with constant coefficients | [
"Physics",
"Mathematics"
] | 2,901 | [
"Discrete mathematics",
"Recurrence relations",
"Combinatorics",
"Mathematical relations",
"Mechanics",
"Linear algebra",
"Algebra",
"Dynamical systems"
] |
50,702,025 | https://en.wikipedia.org/wiki/Material%20passport | A material passport is a digital document listing all the materials that are included in a product or construction during its life cycle, in order to facilitate circularity decisions in supply chain management. Passports generally consist of a set of data describing defined characteristics of materials in products, which enables the identification of value for recovery, recycling and re-use. These passports have been adopted as a best practice for business process analysis and improvement in the widely applied supply chain operations reference (SCOR) model by the Association for Supply Chain Management.
The core idea behind the concept is that a material passport will contribute to a more circular economy, in which materials are recovered, recycled and/or re-used in an openly traded material market. The concept of the material passport is currently being developed by multiple parties, primarily in European countries. Such a passport could make possible second-hand material markets or material banks in the future.
Similar types of passports for the circular economy are being developed by several parties under a variety of terminology. Other names for the material passport are:
Circularity passport
Cradle-to-cradle passport
Product passport
Closely related concepts, which share some of the life cycle registrations that passports also support, are the bill of materials, product life cycle management, digital twin, and ecolabels. The key difference in these concepts is that a passport provides an identity of a single identifiable object and acts as a certified interface to all life-cycle registrations a product is concerned with.
Significance
"According to United Nations estimates, construction accounts for some 50 percent of raw material consumption in Europe and 60 percent of waste."
Assuming that the earth is a closed system, this situation is objectively untenable. There is an urgent need to deal with raw materials in a more sophisticated manner. A shift in the building sector would greatly benefit movement towards needing less material, and using material more effectively, e.g., by ensuring a much longer and more useful life cycle. Proponents of the material passport argue that it is a step in this direction.
The material passport gives material an identity. By acknowledging that the material exists in a given form in a specific building, it ensures that the material receives and keeps a value, e.g., through a possible re-use after the deconstruction of a building for example.
Like a personal passport, the material passport allows the material to 'travel', or identifies the most useful future destination after it has served in a building (or other project/product). This could be in another building or in another product altogether.
By recognizing the individual materials in buildings (or other products), new ownership structures could be facilitated that would enable more functions to be offered as a service. As lighting can be provided as a service, functions such as "shelter from elements" could be a service instead of owning a roof.
In general, material passports create incentives for suppliers to produce and developers / managers / renovators to choose healthy, sustainable and circular materials/building products. They fit into a broader and growing movement that aims at developing circular building business models.
Applicability
The material passport can be applied to every product or construction. There are different levels at which a product/construction can be decomposed:
Product level
Component level
Material level
For a building, a material passport could be a complete description of all products (staircase, window, furnace, ...), components (iron beam, glass panel, ...), and raw materials (wood, steel, ...), that are present in the building. Ideally, this database would be created during construction and continuously updated. In case an existing building does not yet have a material passport, it can be created through various methods (e.g., plan analysis, digital 3D scanning).
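As a minimal illustration of how such a three-level inventory might be represented digitally, the sketch below uses hypothetical class and field names (they are not part of any passport standard mentioned here) to model the product/component/material decomposition and a simple aggregation query:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Material:
    """Raw-material level, e.g. wood or steel."""
    name: str
    mass_kg: float
    recyclable: bool = True

@dataclass
class Component:
    """Component level, e.g. an iron beam or a glass panel."""
    name: str
    materials: List[Material] = field(default_factory=list)

@dataclass
class Product:
    """Product level, e.g. a staircase, window or furnace."""
    name: str
    components: List[Component] = field(default_factory=list)

@dataclass
class MaterialPassport:
    """Passport for one building: an inventory meant to be updated over its life."""
    building_id: str
    products: List[Product] = field(default_factory=list)

    def total_mass_by_material(self) -> Dict[str, float]:
        """Aggregate the mass of each raw material across the whole building."""
        totals: Dict[str, float] = {}
        for product in self.products:
            for component in product.components:
                for material in component.materials:
                    totals[material.name] = totals.get(material.name, 0.0) + material.mass_kg
        return totals

# Example: one window product containing a glass panel made of 12.5 kg of glass.
passport = MaterialPassport(
    building_id="example-building-001",
    products=[Product("window", [Component("glass panel", [Material("glass", 12.5)])])],
)
print(passport.total_mass_by_material())  # {'glass': 12.5}
```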
A material passport allows the owner of a product/construction to know exactly what it is made of. This is of importance at the end of its useful life, to enable the most effective re-use of the materials. It allows the owner to view a product/construct as a depot, inventory of valuable materials.
Furthermore, the process of creating a material passport also shapes the design of the building. The easier the materials can be extracted and re-used on deconstruction of the building, the better. This will lead to an increase of ‘recoverable’ or ‘reversible’ buildings, buildings that can be dis-assembled as easily as they were assembled.
Another possibility is that a material passport can enable the owner to get better insight into the value of the product/construction. Besides the value of the location and of the space, it could also improve the valuation of the materials used. A higher, or more accurate, valuation of product/construction could be made possible.
Advantages and disadvantages
Advantages
By having a material passport, one could better plan the deconstruction of a property and ensure the highest possible usefulness of the materials after vacating the premises. This is another way of being conscious about the environmental footprint and limiting negative impact on the environment.
A more granular understanding of the construction of a building might enable novel forms of financing that would support suppliers in providing a service rather than selling a product.
By reviewing how buildings are valued now, new financing products or financing policies (e.g., higher collateral value) could be developed that better reflect the (financial) value of buildings.
The recovery of collateral in case of default might improve through the sale of the parts instead of the building as a whole.
Disadvantages
A passport needs to be kept up to date and maintained throughout a building's life. It is not known yet how work intensive this could be, but for it to remain relevant, all changes to the building that happened after the passport was created need to be logged. Potentially the value of this work will only be apparent at the end of the useful life of a building, which might be several decades in the future.
The market for second-hand materials is still in its infancy, and currently not able to support the optimal re-use of the materials in a building. Also, much more standardization, at least at the level of components, will be needed to increase re-use of materials in a building.
There is no standardization for material passports yet. Passports might therefore prove to have limited usefulness when ultimately needed, due to evolving requirements, or require additional investments during the life of the building to keep them up to market standard.
Legislation needs to be put in place to support more sustainable building, enable the development of services instead of ownership, and support a broad deployment of material passports.
The infrastructure, mainly information technology, to support material passports still needs to be created.
The first scientific publication about a material passport (2012) was written by Maayke Damen and is called "A resources passport for a circular economy". It provides a comprehensive overview of the advantages and disadvantages of a material passport for every actor in the supply chain. It includes an outline for the content of a material passport.
Projects
ActNow: A business-driven non-profit partnership consisting of businesses, NGOs and municipalities. Act NOW's agenda focuses on enhancing the implementation of existing energy-efficient solutions and products NOW.
BAMB 2020: Buildings as Material Banks, an EU funded project bringing together 16 European parties (universities, building, it companies, consultants, policy makers). Partners: Brussels Environment, EPEA Nederland, Vrije UNiversiteit Brussel, BRE, ZUYD Hogeschool, IBM, SundaHus, Ronneby Kommun, Technical University of Munich, Universiteit Twente, Universidade do Minho, Sarajevo Green Design Foundation, Drees & Sommer, VITO, BAM Construct UK, Aurubis.
Battery Passport proof-of-concept pilots, launched at the World Economic Forum (2023), and by the Global Battery Alliance (GBA): material traceability via a "digital twin" to an Electric Vehicle's physical battery at point of sale, providing Environmental, Social and Governance data across the value chain. The GBA is legally incorporated in Belgium, its incorporation dating from 2022.
COFA Nederland: Initiative of FBBasic and A. van Liempd demolition contractors to develop a more circular way of demolition and re-use of materials.
Concept House Village (RDM)
DCMP: Digital Construction Material Passport, an open, XML based, data format for material passports for the construction industry. Stems from the Danish industry initiative Sustainable Build and also builds on learnings from BAMB 2020.
EPEA (also partner of BAMB): An internationally active scientific research and consultancy institute that works with actors and companies from economy, politics and science and support them for the introduction of circular processes, using the cradle-to-cradle design approach.
Maersk Line (sea-vessels): The world's largest container shipping company. They focus on vessel recycling and re-use.
Turntoo: consultancy firm established by Thomas Rau, that offers services and concepts optimizing the continuity of life on earth. The development of material passports as a product is one of them. Thomas Rau's system is called MADASTER.
See also
Digital Product Passport
External links
BAMB2020 Material passports
References
Construction
Material handling | Material passport | [
"Physics",
"Engineering"
] | 1,887 | [
"Construction",
"Materials",
"Material handling",
"Matter"
] |
42,046,875 | https://en.wikipedia.org/wiki/Nilpotent%20algebra | In mathematics, specifically in ring theory, a nilpotent algebra over a commutative ring is an algebra over a commutative ring in which, for some positive integer n, every product containing at least n elements of the algebra is zero. The concept of a nilpotent Lie algebra has a different definition, which depends upon the Lie bracket. (A Lie algebra is defined in terms of its Lie bracket, whereas there is no Lie bracket defined in the general case of an algebra over a commutative ring.) Another possible source of confusion in terminology is the quantum nilpotent algebra, a concept related to quantum groups and Hopf algebras.
Formal definition
An associative algebra A over a commutative ring R is defined to be a nilpotent algebra if and only if there exists some positive integer n such that y_1 y_2 ⋯ y_n = 0 for all y_1, y_2, …, y_n in the algebra A. The smallest such n is called the index of the algebra A. In the case of a non-associative algebra, the definition is that every different multiplicative association of the n elements is zero.
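A standard concrete example (not taken from the article itself) is the algebra of strictly upper-triangular n × n matrices, which is nilpotent of index n: every product of n such matrices vanishes, although shorter products need not. A minimal NumPy check for n = 3:

```python
import numpy as np

rng = np.random.default_rng(0)

def strictly_upper(n: int) -> np.ndarray:
    """Random strictly upper-triangular n x n matrix (zeros on and below the diagonal)."""
    return np.triu(rng.normal(size=(n, n)), k=1)

n = 3
a, b, c = (strictly_upper(n) for _ in range(3))

# A product of only two elements is generally non-zero ...
print(np.allclose(a @ b, 0))      # False (in general)
# ... but every product of n = 3 elements vanishes, so the index of the algebra is 3.
print(np.allclose(a @ b @ c, 0))  # True
```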
Nil algebra
A power associative algebra in which every element of the algebra is nilpotent is called a nil algebra.
Nilpotent algebras are trivially nil, whereas nil algebras may not be nilpotent, as each element being nilpotent does not force products of distinct elements to vanish.
See also
Algebraic structure (a much more general term)
nil-Coxeter algebra
Lie algebra
Example of a non-associative algebra
References
External links
Nilpotent algebra – Encyclopedia of Mathematics
Ring theory
Properties of binary operations | Nilpotent algebra | [
"Mathematics"
] | 349 | [
"Mathematical structures",
"Algebras",
"Ring theory",
"Fields of abstract algebra",
"Algebraic structures"
] |
52,153,973 | https://en.wikipedia.org/wiki/Peridinin-chlorophyll-protein%20complex | The peridinin-chlorophyll-protein complex (PCP or PerCP) is a soluble molecular complex consisting of the peridinin-chlorophyll a-protein bound to peridinin, chlorophyll, and lipids. The peridinin molecules absorb light in the blue-green wavelengths (470 to 550 nm) and transfer energy to the chlorophyll molecules with extremely high efficiency. PCP complexes are found in many photosynthetic dinoflagellates, in which they may be the primary light-harvesting complexes.
Structure
The PCP protein has been identified in dinoflagellate genomes in at least two forms, a homodimeric form composed of two 15-kD monomers, and a monomeric form of around 32kD believed to have evolved from the homodimeric form via gene duplication. The monomeric form consists of two pseudosymmetrical eight-helix domains in which the helices are packed in a complex topology resembling that of the beta sheets in a jelly roll fold. The three-dimensional arrangement of helices forms a boat-shaped molecule with a large central cavity in which the pigments and lipids are bound. Each eight-helix segment typically binds four peridinin molecules, one chlorophyll a molecule, and one lipid molecule such as digalactosyl diacyl glycerol; however, this stoichiometry varies among species and among PCP isoforms. The most common 4:1 peridinin:chlorophyll ratio was predicted by spectroscopy in the 1970s, but was unconfirmed until the crystal structure of the Amphidinium carterae PCP complex was solved in the 1990s. Whether formed from a protein monomer or dimer, the assembled protein-pigment complex is sometimes known as bPCP (for "building block") and is the minimal stable unit. In at least some PCP forms, including that from A. carterae, these building blocks assemble into a trimer thought to be the biologically functional state.
When the X-ray crystallography structure of PCP was solved in 1997, it represented a novel protein fold, and its topology remains unique among known proteins. The structure is referred to by the CATH database, which systematically classifies protein structures, as an "alpha solenoid" fold; however, elsewhere in the literature the term alpha solenoid is used for open and less compact helical protein structures.
Function
Photosynthetic dinoflagellates contain membrane-bound light-harvesting complexes similar to those found in green plants. They additionally contain water-soluble protein-pigment complexes that exploit carotenoids such as peridinin to extend their photosynthetic capacity. Peridinin absorbs light in the blue-green wavelengths (470 to 550 nm) which are inaccessible to chlorophyll by itself; instead the PCP complex uses the geometry of the relative pigment orientations to effect extremely high-efficiency energy transfer from the peridinin molecules to their neighboring chlorophyll molecule. PCP has served as a common model system for spectroscopy and for theoretical calculations relating to the protein's photophysics.
PCP complexes are thought to occupy the thylakoid lumen. After energy transfer from the peridinin to the chlorophyll pigment, PCP complexes are believed to then transfer energy from the excited chlorophyll to membrane-bound light harvesting complexes.
References
Photosynthesis | Peridinin-chlorophyll-protein complex | [
"Chemistry",
"Biology"
] | 726 | [
"Biochemistry",
"Photosynthesis"
] |
52,156,116 | https://en.wikipedia.org/wiki/H3K27me3 | H3K27me3 is an epigenetic modification to the DNA packaging protein histone H3. It is a mark that indicates the tri-methylation of lysine 27 on histone H3 protein.
This tri-methylation is associated with the downregulation of nearby genes via the formation of heterochromatic regions.
Nomenclature
H3K27me3 indicates trimethylation of lysine 27 on the histone H3 protein subunit: H3 denotes the histone H3 family, K27 the lysine (K) residue at position 27, and me3 the addition of three methyl groups to that residue.
Lysine methylation
(Diagram: progressive methylation of a lysine residue; the tri-methylated form at right is the modification present in H3K27me3.)
Understanding histone modifications
The genomic DNA of eukaryotic cells is wrapped around special protein molecules known as histones. The complexes formed by the looping of the DNA are known as chromatin. The basic structural unit of chromatin is the nucleosome: this consists of the core octamer of histones (H2A, H2B, H3 and H4) as well as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal end of these histones contribute to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of the post-translational modifications, such as the one seen in H3K27me3.
Mechanism and function of modification
The placement of a repressive mark on lysine 27 requires the recruitment of chromatin regulators by transcription factors. These modifiers are either histone modification complexes which covalently modify the histones to move around the nucleosomes and open the chromatin, or chromatin remodelling complexes which involve movement of the nucleosomes without directly modifying them. These histone marks can serve as docking sites of other co-activators as seen with H3K27me3.
This occurs through polycomb mediated gene silencing via histone methylation and chromodomain interactions. A polycomb repressive complex (PRC); PRC2, mediates the tri-methylation of histone 3 on lysine 27 through histone methyl transferase activity. This mark can recruit PRC1 which will bind and contribute to the compaction of the chromatin.
The inflammatory transcription factor NF-κB can cause demethylation of H3K27me3 via Jmjd3.
H3K27me3 is linked to the repair of DNA damages, particularly repair of double-strand breaks by homologous recombinational repair.
Relationship with other modifications
H3K27 can undergo a variety of other modifications. It can exist in mono- as well as di-methylated states. The roles of these respective modifications are not as well characterised as tri-methylation. PRC2 is however believed to be implicated in all the different methylations associated with H3K27me.
H3K27me1 is linked to promotion of transcription and is seen to accumulate in transcribed genes. Histone-histone interactions play a role in this process. Regulation occurs via Setd2-dependent H3K36me3 deposition.
H3K27me2 is broadly distributed within the core histone H3 and is believed to play a protective role by inhibiting non-cell-type specific enhancers. Ultimately, this leads to the inactivation of transcription.
Acetylation is usually linked to the upregulation of genes. This is the case in H3K27ac which is an active enhancer mark. It is found in distal and proximal regions of genes. It is enriched in transcriptional start sites (TSS). H3K27ac shares a location with H3K27me3 and they interact in an antagonistic manner.
H3K27me3 is often seen to interact with H3K4me3 in bivalent domains. These domains are usually found in embryonic stem cells and are pivotal for proper cell differentiation. H3K27me3 and H3K4me3 determine whether a cell will remain unspecified or will eventually differentiate. The Grb10 gene in mice makes use of these bivalent domains. Grb10 displays imprinted gene expression: genes are expressed from one parental allele while simultaneously being silenced in the other parental allele. Demethylation of H3K27me3 can lead to up-regulation of genes controlling the senescence-associated secretory phenotype (SASP).
Other well characterised modifications are H3K9me3 as well as H4K20me3 which—just like H3K27me3—are linked to transcriptional repression via formation of heterochromatic regions. Mono-methylations of H3K27, H3K9, and H4K20 are all associated with gene activation.
Epigenetic implications
The post-translational modification of histone tails by either histone modifying complexes or chromatin remodelling complexes are interpreted by the cell and lead to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large scale projects: ENCODE and the Epigenomic roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states which define genomic regions by grouping the interactions of different proteins and/or histone modifications together.
Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, an emphasis was placed on histone modification relevance. A look in to the data obtained led to the definition of chromatin states based on histone modifications. Certain modifications were mapped and enrichment was seen to localize in certain genomic regions. Five core histone modifications were found with each respective one being linked to various cell functions.
H3K4me3-promoters
H3K4me1- primed enhancers
H3K36me3-gene bodies
H3K27me3-polycomb repression
H3K9me3-heterochromatin
The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers. This additional level of annotation allows for a deeper understanding of cell specific gene regulation.
A cause-and-effect relationship between sperm-transmitted histone marks and gene expression and development has been reported in offspring and grand-offspring.
Clinical significance
H3K27me3 is believed to be implicated in some diseases due to its regulation as a repressive mark.
Cohen–Gibson syndrome
Cohen–Gibson syndrome is a disorder linked to overgrowth and is characterised by dysmorphic facial features and variable intellectual disability. In some cases, a de novo missense mutation in EED was associated with decreased levels of H3K27me3 in comparison to wild type. This decrease was linked to loss of PRC2 activity.
Diffuse midline Glioma
Diffuse midline glioma, H3K27me3-altered (DMG), also known as diffuse intrinsic pontine glioma (DIPG), is a type of highly aggressive brain tumor mostly found in children. All DMGs exhibit loss of H3K27me3, in about 80% of cases due to a genetic mutation replacing lysine with methionine (M), known as H3K27M. In rare forms, H3K27me3 loss is mediated by overexpression of an EZH-inhibiting protein, decreasing PRC2 activity.
Spectrum disorders
There is evidence that downregulation of H3K27me3, in conjunction with differential expression of H3K4me3 and DNA methylation, may play a role in fetal alcohol spectrum disorder (FASD) in C57BL/6J mice. This histone code is believed to affect the peroxisome-associated pathway and induce the loss of the peroxisomes that ameliorate oxidative stress.
Methods
The histone mark H3K27me3 can be detected in a variety of ways:
1. Chromatin Immunoprecipitation Sequencing (ChIP sequencing) measures the amount of DNA enrichment once bound to a targeted protein and immunoprecipitated. It results in good optimization and is used in vivo to reveal DNA-protein binding occurring in cells. ChIP-Seq can be used to identify and quantify various DNA fragments for different histone modifications along a genomic region.
2. Micrococcal Nuclease sequencing (MNase-seq) is used to investigate regions that are bound by well positioned nucleosomes. Use of the micrococcal nuclease enzyme is employed to identify nucleosome positioning. Well positioned nucleosomes are seen to have enrichment of sequences.
3. Assay for transposase accessible chromatin sequencing (ATAC-seq) is used to look in to regions that are nucleosome free (open chromatin). It uses hyperactive Tn5 transposon to highlight nucleosome localisation.
See also
Histone methylation
Histone methyltransferase
SET domain-containing lysine-specific
Methyllysine
JARID1B, an enzyme which can reverse the methylation
Bivalent chromatin, where this repressing modification is often used with activator H3K4me3
References
Epigenetics
Post-translational modification | H3K27me3 | [
"Chemistry"
] | 2,110 | [
"Post-translational modification",
"Gene expression",
"Biochemical reactions"
] |
52,160,778 | https://en.wikipedia.org/wiki/Normal%20contact%20stiffness | Normal contact stiffness is a physical quantity related to the generalized force displacement behavior of rough surfaces in contact with a rigid body or a second similar rough surface. Specifically it is the amount of force per unit displacement required to compress an elastic object in the contact region. Rough surfaces can be considered as consisting of large numbers of asperities. As two solid bodies of the same material approach one another, the asperities interact, and they transition from conditions of non-contact to homogeneous bulk behaviour, with changes in the contact area. The varying values of stiffness and true contact area at an interface during this transition are dependent on the conditions of applied pressure and are of importance for the study of systems involving the physical interactions of multiple bodies including granular matter, electrode contacts, and thermal contacts, where the interface-localized structures govern overall system performance by controlling the transmission of force, heat, charge carriers or matter through the interface.
References
Surfaces
Tribology
Mechanics
Friction
Classical mechanics
Force | Normal contact stiffness | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 198 | [
"Mechanical phenomena",
"Tribology",
"Physical phenomena",
"Force",
"Friction",
"Physical quantities",
"Quantity",
"Classical mechanics stubs",
"Mass",
"Classical mechanics",
"Surface science",
"Materials science",
"Mechanics",
"Mechanical engineering",
"Wikipedia categories named after ... |
52,163,569 | https://en.wikipedia.org/wiki/Molybdenum%20carbide | Molybdenum carbide is an extremely hard, refractory, ceramic material, commercially used in tool bits for cutting tools.
There are at least three reported phases of molybdenum carbide: γ-MoC, β-Mo2C, and γ'. The γ phase is structurally identical to tungsten carbide.
β-Mo2C has been suggested as a catalyst for carbon dioxide hydrogenation. The γ' phase forms by combining the elements at relatively low temperatures, and transforms to the γ phase at 800 °C.
References
Carbides
Molybdenum compounds
Superhard materials
Refractory materials | Molybdenum carbide | [
"Physics"
] | 134 | [
"Refractory materials",
"Materials",
"Superhard materials",
"Matter"
] |
52,164,477 | https://en.wikipedia.org/wiki/AlGa | AlGa (Aluminum-Gallium) is an alloy that results from liquid gallium infiltrating the crystal structure of aluminum metal. The resulting alloy is very weak and brittle, breaking under the most minute pressure. The alloy is also chemically weaker, as the gallium prevents the aluminum from forming a protective oxide layer. Gallium metal causes intergranular corrosion and breaking of aluminum.
Uses
The alloy can be reacted with water to form hydrogen gas (H2), aluminum hydroxide and gallium metal. Normally, aluminum does not react with water, since it quickly reacts in air to form a passivating layer of aluminum oxide. The AlGa alloy is able to create aluminum nanoparticles for the hydrogen-producing reaction. Since this reaction forms hydrogen gas, it can be used as a source of fuel or simply as a hydrogen gas generator. This reaction can also be used to produce aluminum oxide from aluminum. As previously mentioned, aluminum normally will not react with water or air due to the presence of a protective passivation layer, but the reaction of suspended aluminum with water can effectively oxidize aluminum to form aluminum hydroxide, which can then be heated to about 180 °C (356 °F), where it decomposes to produce aluminum oxide and water vapor.
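A sketch of the balanced reactions implied by the description above, assuming standard stoichiometry (the gallium only enables the reaction and is recovered unchanged):

```latex
2\,\mathrm{Al} + 6\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{Al(OH)_3} + 3\,\mathrm{H_2}\uparrow
\qquad\qquad
2\,\mathrm{Al(OH)_3} \xrightarrow{\;\approx 180\,^{\circ}\mathrm{C}\;} \mathrm{Al_2O_3} + 3\,\mathrm{H_2O}
```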
Safety concerns
Due to AlGa's extreme lack of structural integrity and inability to form a protective oxide layer, gallium metal is considered to be corrosive. If AlGa were to form on an aluminum structure, the aforementioned structure could weaken or collapse. Gallium is subject to strict packaging requirements for transportation by aircraft as it could compromise the integrity of the aluminum hull.
References
Aluminium alloys
Gallium alloys | AlGa | [
"Chemistry"
] | 347 | [
"Aluminium alloys",
"Alloys",
"Alloy stubs",
"Gallium alloys"
] |
52,166,375 | https://en.wikipedia.org/wiki/Plant-induced%20systemic%20resistance | Induced systemic resistance (ISR) is a resistance mechanism in plants that is activated by infection. Its mode of action does not depend on direct killing or inhibition of the invading pathogen, but rather on increasing the physical or chemical barriers of the host plant. As with systemic acquired resistance (SAR), a plant can develop defenses against an invader such as a pathogen or parasite once an infection takes place. In contrast to SAR, which is triggered by the accumulation of salicylic acid, ISR relies on signal transduction pathways activated by jasmonate and ethylene.
Discovery
The induction of plant resistance to pathogens was first identified in 1901 and was described as the "system of acquired resistance." Subsequently, several different terms have been used, namely "acquired physiological immunity", "resistance displacement", "plant immune function" and "induced system resistance." Many forms of stimulus have been found to induce plant resistance to viruses, bacteria, fungi and other pathogens, including mechanical factors (dry ice damage, electromagnetic and ultraviolet radiation, low- and high-temperature treatment, etc.), chemical factors (heavy metal salts, water, salicylic acid), and biological factors (fungi, bacteria, viruses, and their metabolites).
Mode of action
Induced resistance of plants has two major modes of action: the SAR pathway and the ISR pathway. SAR can elicit a rapid local reaction, or hypersensitive response, in which the pathogen is confined to a small area around the site of infection. As mentioned, salicylic acid is the mode of action for the SAR pathway, whereas ISR enhances the defense systems of the plant through jasmonic acid (JA). Both act through NPR1, but SAR additionally utilizes PR genes. It is important to note that the two mediated responses have regulatory effects on one another: as SA rises, it can inhibit the effect of JA. A balance must be maintained when activating both responses.
ISR responses can be mediated by rhizobacteria. This has shown to be effective against necrotrophic pathogens and insect herbivores that are sensitive to JA/ET defenses. The importance of rhizobacteria-mediated ISR has been widely reported.
The biological factors of plant-induced systemic resistance generally fall into two broad categories: classical inducers of plant disease resistance, and plant growth-promoting rhizobacteria (PGPR) or plant growth-promoting fungi (PGPF). The difference is mainly that the latter can effectively promote plant growth and increase crop yield while also causing (or increasing) plant resistance to diseases (and sometimes pests).
Effects on insects
Some studies have also reported negative effects of beneficial microbes on plant-insect interactions.
Applied research
To date, work on the induction of plant systemic resistance has shown that it has important implications for both basic and applied research.
Induced resistance applications in melons, tobacco, bean, potato, and rice have achieved significant success. Over the past decade, the study of induced system resistance has become a very active field of research.
Methods to artificially activate the ISR pathway are an active area of research. The research and application of induced systemic resistance have been encouraging but are not yet a major factor in controlling plant pathogens. Incorporation into integrated pest management programs has shown some promising results. There is also research on defense against leaf-chewing insect pests through the activation of jasmonic acid signalling triggered by root-associated microorganisms.
Some ongoing research questions in ISR include (1) how to systematically improve the selection of induction factors; (2) the injury caused by induction factors; (3) the phenomenon of multiple effects of induction factors; (4) the effects of chemical induction factors on environmental factors; and (5) the establishment of population stability of multiple biological induction factors. Research into ISR is driven largely by problems associated with pesticide use, including (1) increasing resistance of pathogens to pesticides, (2) the necessity to remove some of the more toxic pesticides from the market, (3) health and environmental problems caused by pesticide use, and (4) the inability of certain pesticides to control some pathogens.
See also
Plant disease resistance
Systemic acquired resistance
References
Phytopathology
Plant physiology
Immune system | Plant-induced systemic resistance | [
"Biology"
] | 905 | [
"Plant physiology",
"Organ systems",
"Immune system",
"Plants"
] |
52,167,339 | https://en.wikipedia.org/wiki/Universal%20chord%20theorem | In mathematical analysis, the universal chord theorem states that if a function f is continuous on [a, b] and satisfies f(a) = f(b), then for every natural number n there exists some x ∈ [a, b − (b − a)/n] such that f(x) = f(x + (b − a)/n).
History
The theorem was published by Paul Lévy in 1934 as a generalization of Rolle's Theorem.
Statement of the theorem
Let H(f) = {h ∈ [0, +∞) : f(x) = f(x + h) for some x} denote the chord set of the function f. If f is a continuous function and h ∈ H(f), then h/n ∈ H(f) for all natural numbers n.
Case of n = 2
The case when n = 2 can be considered an application of the Borsuk–Ulam theorem to the real line. It says that if f is continuous on some interval I = [a, b] with the condition that f(a) = f(b), then there exists some x ∈ [a, (a + b)/2] such that f(x) = f(x + (b − a)/2).
In less generality, if f : [0, 1] → ℝ is continuous and f(0) = f(1), then there exists x ∈ [0, 1/2] that satisfies f(x) = f(x + 1/2).
Proof of n = 2
Consider the function g : [0, 1/2] → ℝ defined by g(x) = f(x + 1/2) − f(x). Being the sum of two continuous functions, g is continuous, and g(0) + g(1/2) = f(1) − f(0) = 0. It follows that g(0) and g(1/2) have opposite signs (or at least one of them is zero), and by applying the intermediate value theorem, there exists x₀ ∈ [0, 1/2] such that g(x₀) = 0, so that f(x₀) = f(x₀ + 1/2). This concludes the proof of the theorem for n = 2.
Proof of general case
The proof of the theorem in the general case is very similar to the proof for n = 2.
Let n be a positive integer, and consider the function g : [0, 1 − 1/n] → ℝ defined by g(x) = f(x + 1/n) − f(x). Being the sum of two continuous functions, g is continuous. Furthermore, g(0) + g(1/n) + ⋯ + g((n − 1)/n) = f(1) − f(0) = 0. It follows that there exist integers i and j such that g(i/n) ≤ 0 ≤ g(j/n).
The intermediate value theorem then gives a point c between i/n and j/n such that g(c) = 0, and the theorem follows.
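As a numerical illustration of the argument (the function f below is an arbitrary choice satisfying f(0) = f(1); the bisection on g mirrors the intermediate value theorem step), a short Python sketch:

```python
import math

def find_chord(f, n, tol=1e-12):
    """Find x in [0, 1 - 1/n] with f(x) == f(x + 1/n), assuming f(0) == f(1).

    Mirrors the proof: the values g(k/n) of g(x) = f(x + 1/n) - f(x) sum to
    f(1) - f(0) = 0, so two grid points bracket a root, which bisection refines.
    """
    h = 1.0 / n
    g = lambda x: f(x + h) - f(x)
    pts = [k * h for k in range(n)]
    lo = next(x for x in pts if g(x) <= 0)   # g(lo) <= 0
    hi = next(x for x in pts if g(x) >= 0)   # g(hi) >= 0
    while abs(hi - lo) > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) <= 0 else (lo, mid)
    return 0.5 * (lo + hi)

f = lambda x: math.sin(2 * math.pi * x) + x * (1 - x)   # f(0) = f(1) = 0
for n in (2, 3, 5):
    x = find_chord(f, n)
    print(n, round(x, 6), abs(f(x) - f(x + 1.0 / n)))   # residual is ~0
```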
See also
Intermediate value theorem
Borsuk–Ulam theorem
Rolle's theorem
References
Mathematical theorems | Universal chord theorem | [
"Mathematics"
] | 302 | [
"Mathematical analysis",
"Theorems in mathematical analysis",
"Mathematical theorems",
"Mathematical problems"
] |
52,167,519 | https://en.wikipedia.org/wiki/Telecom%20Infra%20Project | The Telecom Infra Project (TIP) was formed in 2016 as an engineering-focused, collaborative methodology for building and deploying global telecom network infrastructure, with the goal of enabling global access for all.
TIP is jointly steered by its group of founding tech and telecom companies, which forms its board of directors, and is chaired by Vodafone's Head of Network Strategy and Architecture, Yago Tenorio. Member companies host technology incubator labs and accelerators, and TIP hosts an annual infrastructure conference, TIP Summit, which was renamed FYUZ and hosted in Madrid in October 2022.
The organization adopts transparency of process and collaboration in the development of new technologies, by its more than 500 participating member organizations, including operators, suppliers, developers, integrators, startups and other entities, that participate in various TIP project groups. Projects employ current case studies to evolve telecom equipment and software into more flexible, agile, and interoperable forms.
Projects and project groups
With telecom technology disaggregated into Access, Backhaul, and Core & Management, each project group focused on one of these three specific network areas. Past and present projects include, among others:
OpenRAN — enabling open ecosystem of GPP-based RAN solutions, chaired by Andrew Dunkin (Vodafone) and Adnan Boustany (Intel).
Millimeter Wave (mmWave) Networks — creating low-cost hardware and software tools, and best practices, to streamline municipal mmWave networks, chaired by Salil Sawhney (Facebook) and Andreas Gladisch (Deutsche Telekom).
Power and Connectivity — global connectivity through global electricity, chaired by Cesar Hernandex Perez and Jamie Yang.
System Integration and Site Optimization — system integration and cost-analysis, chaired by Dr. Sanket Nesargi and Emre Tepedelenlioglu.
Solutions Integration — development of an interoperable RAN architecture, chaired by Dr. G. Wan Choi.
Open Optical & Packet Transport — designing interoperable solutions for packet and optical networks, chaired by Hans-Juergen Schmidtke (Facebook) and Victor Lopez (Telefónica). The DWDM Voyager packet/optical transponder, developed and tested live by member companies Facebook and Vodafone, respectively, is the first white box transponder and routing device for open packet/optical networks. The first router design developed by the group is the Disaggregated Cell Site Gateway. It was designed by Vodafone, Telefonica and TIM Brasil. Telefonica will deploy the first units in 2020.
Open Converged Wireless project group, developing OpenWiFi, OpenOFDM.
Community Labs
Various TIP member companies provide dedicated space for its project groups as "TIP Community Labs," facilitating collaborative projects between member companies in the development of telecom infrastructure. As of 2020, TIP has 14 labs throughout 8 countries around the world including, Spain, Italy, USA, Indonesia, UK, Japan, Germany, and Brazil.
TIP Ecosystem Acceleration Centers
TIP Ecosystem Acceleration Centers (TEACs) are global technology innovation centers sponsored by one or more member organizations that connect startups to venture capitalists. TEACs are hosted in Seoul, Berlin, Paris and the UK.
See also
Open Compute Project (OCP)
Geostationary balloon satellite
High-altitude platform station
Internet.org
List of open-source hardware projects
Open-source computing hardware
Optical broadband networks
cellular network
References
Facebook
Open-source hardware
Optical communications
Mobile technology
Open hardware and software organizations and companies | Telecom Infra Project | [
"Technology",
"Engineering"
] | 725 | [
"Optical communications",
"Telecommunications engineering",
"nan"
] |
33,820,413 | https://en.wikipedia.org/wiki/List%20of%20plasma%20physics%20articles | This is a list of plasma physics topics.
A
Ablation
Abradable coating
Abraham–Lorentz force
Absorption band
Accretion disk
Active galactic nucleus
Adiabatic invariant
ADITYA (tokamak)
Aeronomy
Afterglow plasma
Airglow
Air plasma, Corona treatment, Atmospheric-pressure plasma treatment
Ayaks, Novel "Magneto-plasmo-chemical engine"
Alcator C-Mod
Alfvén wave
Ambipolar diffusion
Aneutronic fusion
Anisothermal plasma
Anisotropy
Antiproton Decelerator
Appleton-Hartree equation
Arcing horns
Arc lamp
Arc suppression
ASDEX Upgrade, Axially Symmetric Divertor EXperiment
Astron (fusion reactor)
Astronomy
Astrophysical plasma
Astrophysical X-ray source
Atmospheric dynamo
Atmospheric escape
Atmospheric pressure discharge
Atmospheric-pressure plasma
Atom
Atomic emission spectroscopy
Atomic physics
Atomic-terrace low-angle shadowing
Auger electron spectroscopy
Aurora (astronomy)
B
Babcock Model
Ball lightning
Ball-pen probe
Ballooning instability
Baryon acoustic oscillations
Beam-powered propulsion
Beta (plasma physics)
Birkeland current
Blacklight Power
Blazar
Bohm diffusion
Bohr–van Leeuwen theorem
Boltzmann relation
Bow shock
Bremsstrahlung
Bussard ramjet
C
Capacitively coupled plasma
Carbon nanotube metal matrix composites
Cassini–Huygens, Cassini Plasma Spectrometer
Cathode ray
Cathodic arc deposition
Ceramic discharge metal-halide lamp
Charge carrier
Charged-device model
Charged particle
Chemical plasma
Chemical vapor deposition
Chemical vapor deposition of diamond
Chirikov criterion
Chirped pulse amplification
Chromatography detector
Chromo–Weibel instability
Classical-map hypernetted-chain method
Cnoidal wave
Colored-particle-in-cell
Coilgun
Cold plasma, Ozone generator
Collisionality
Columbia Non-neutral Torus
Comet tail
Compact toroid
Compressibility
Compton–Getting effect
Contact lithography
Coupling (physics)
Convection cell
Cooling flow
Corona
Corona discharge
Corona ring
Coronal loop
Coronal radiative losses
Coronal seismology
Cosmic microwave background radiation
Cotton–Mouton effect
Coulomb collision
Coulomb explosion
Crackle tube
Critical ionization velocity
Crookes tube
Current sheet
Cutoff frequency
Cyclotron radiation
D
Debye length
Debye sheath
Deep reactive-ion etching
Degenerate matter
Degree of ionization
DEMO, DEMOnstration Power Plant
Dense plasma focus
Dielectric barrier discharge
Diffusion damping
DIII-D (tokamak)
Dimensional analysis
Diocotron instability
Direct-current discharge
Directed-energy weapon
Direct bonding
distribution function
Divertor
Doppler broadening
Doppler effect
Double layer (plasma)
Dual segmented Langmuir probe, Non-Maxwellian Features in Ionospheric Plasma
Duoplasmatron
Dusty plasma
Dynamo theory
E
Earth's magnetic field
Experimental Advanced Superconducting Tokamak (EAST)
Ectons
Eddington luminosity
Edge-localized mode
Ekman number
Elastic collision
Electrical breakdown
Electrical conductor
Electrical mobility
Electrical resistance and conductance
Electrical resistivity and conductivity
Electrical treeing
Electrically powered spacecraft propulsion
Electric-field screening
Electric arc
Electric arc furnace, Plasma arc furnace
Electric current
Electric discharge
Electric spark
Electric Tokamak
Electrothermal-chemical technology, uses plasma cartridge, Triple coaxial plasma igniter
Electrodeless plasma excitation
Electrodeless plasma thruster
Electrodynamic tether, Flowing Plasma Effect
Electrohydrodynamic thruster
Electrolaser, Laser-Induced Plasma Channel
Electromagnetic electron wave
Electromagnetic field
Electromagnetic pulse
Electromagnetic spectrum
Electron-cloud effect
Electron
Electron avalanche
Electron beam ion trap
Electron cyclotron resonance
Electron density
Electron energy loss spectroscopy
Electron gun
Electron microprobe
Electron spiral toroid
Electron temperature
Electronvolt
Electron wake
Electrostatic discharge
Electrostatic ion cyclotron wave
Electrostatic ion thruster
Electrosurgery
Electrothermal instability
Electroweak epoch
Elemental analysis
Elliptic flow
Emission spectrum
Energetic neutral atom
Energy density
Energy filtered transmission electron microscopy
Evanescent wave
Evershed effect
Excimer lamp
Excimer laser
Extraordinary optical transmission
Extreme ultraviolet
Extreme ultraviolet lithography
F
Failure analysis
FalconSAT
Faraday cup
Faraday effect, Faraday rotation in the ionosphere
Far-infrared laser
Farley-Buneman instability
Fast Auroral Snapshot Explorer
Ferritic nitrocarburizing, Plasma-assisted ferritic nitrocarburizing, plasma ion nitriding
Ferrofluid
Field line
Field-reversed configuration
Filament propagation
Finite-difference time-domain method
Fire
Fisher's equation
Fission fragment reactor
Fission-fragment rocket, Dusty Plasma Based Fission Fragment Nuclear Reactor
Flame plasma
Flare spray
Flashtube
Flatness problem
Flowing-afterglow mass spectrometry
Fluid dynamics
Fluorescent lamp
Forbidden mechanism
Force-free magnetic field
Free-electron laser
Free electron model
F region, Appleton layer
Frequency classification of plasmas
Fusion energy gain factor
Fusion power
fusion torch
fusor
G
Galactic corona
Galactic halo
Gas
Gas-filled tube
Gas core reactor rocket
Gas cracker, plasma cracking
Gas Electron Multiplier
Gaseous fission reactor
Gaseous ionisation detectors
Gas focusing
Gasification, Plasma gasifier
Geissler tube
General Fusion
Geomagnetic storm
Geothermal Anywhere
Glasser effect
Glass frit bonding
Glow discharge
Glow-discharge optical emission spectroscopy (GDOES)
Grad–Shafranov equation
Granule (solar physics)
Great Rift (astronomy)
GreenSun Energy
Guiding center
Gunn–Peterson trough
GYRO
Gyrokinetic ElectroMagnetic
Gyrokinetics
Gyroradius
Gyrotron
H
Hadronization
Hagedorn temperature, Transition to Quark-Gluon Plasma
Hall effect
Hall-effect thruster
Hasegawa–Mima equation
Heat shield
Heat torch
Helically Symmetric Experiment
Helicon double-layer thruster
Helicon (physics)
Heliosphere
Heliospheric current sheet
Helium
Helium line ratio
Helmet streamer
Hessdalen light
High beta fusion reactor
High-energy nuclear physics
High-frequency Active Auroral Research Program
High harmonic generation
High-intensity discharge lamp
High Power Impulse Magnetron Sputtering
High voltage
HiPER, High-Power laser Energy Research facility
Hiss (electromagnetic), Plasmaspheric hiss
Hollow cathode effect
Hollow-cathode lamp
Holtsmark distribution
Homopolar generator
Horizon problem
Hydrogen
Hydrogen sensor
Hypernova
Hypersonic speed
Hypersonic wind tunnel
Hypervelocity
Hypertherm
I
IEEE Nuclear and Plasma Sciences Society
IGNITOR
IMAGE, Imager for Magnetopause-to-Aurora Global Exploration, Radio Plasma Imager
Impalefection
Impulse generator
Incoherent scatter
Induction plasma technology
Inductively coupled plasma
Inductively coupled plasma atomic emission spectroscopy
Inductively coupled plasma mass spectrometry
Inelastic mean free path
inertial confinement fusion
Inertial electrostatic confinement
Inertial fusion power plant
Instability
Insulated-gate bipolar transistor
Insulator (electrical)
Interbol
Intergalactic medium
International Reference Ionosphere
Interplanetary magnetic field
Interplanetary medium
Interplanetary scintillation
Interstellar medium
Interstellar nebula
Interstellar travel
Intracluster medium
Io-Jupiter flux tube
Ion
Ionized-air glow
Ion acoustic wave
Ion beam
Ion-beam shepherd
Ion cyclotron resonance
Ion gun
Ion laser
Ion optics
Ion plating
Ion source
Ion wind
Ionosphere
Ionospheric heater
Ionospheric propagation
Isotope-ratio mass spectrometry, Multiple collector – inductively coupled plasma – mass spectrometry (MC-ICP-MS)
ITER, International Thermonuclear Experimental Reactor
J
Jellium, uniform electron gas, homogeneous electron gas
Jet (particle physics)
Jet quenching
Joint European Torus
K
Kennelly–Heaviside layer, E region
Kinetics (physics)
Kink instability
Kirchhoff's circuit laws
Kite applications, plasma kite
Kosterlitz–Thouless transition
KSTAR, Korea Superconducting Tokamak Advanced Research
Kværner-process, Plasma burner, Plasma variation
L
Lagrange point colonization
Landau damping
Langmuir probe
Large Hadron Collider
Large Helical Device
Large Plasma Device
Laser-hybrid welding
Laser-induced breakdown spectroscopy, Laser Induced Plasma Spectroscopy
Laser-induced fluorescence
Laser ablation
Laser ablation synthesis in solution
Laser plasma acceleration
Lawson criterion
Lerche–Newberger sum rule
Le Sage's theory of gravitation
Levitated dipole
LIDAR
Lightcraft
Lightning
LINUS (Fusion Experiment)
List of hydrodynamic instabilities
List of plasma physicists
LOFAR, Low Frequency Array
Longitudinal wave
Lorentz force
Low-energy electron diffraction
Lower hybrid oscillation
Low-pressure discharge
Luminescent solar concentrator
Lundquist number
Luttinger liquid
M
Madison Symmetric Torus
MagBeam, also called Magnetized beamed plasma propulsion, plasma wind
Magnetic bottle
Magnetic braking
Magnetic cloud
Magnetic confinement fusion
Magnetic diffusivity
Magnetic field
Magnetic field oscillating amplified thruster, Plasma Engine
Magnetic helicity
Magnetic mirror
Magnetic Prandtl number
Magnetic pressure
Magnetic proton recoil neutron spectrometer
Magnetic radiation reaction force
Magnetic reconnection
Magnetic Reynolds number
Magnetic sail, Mini-magnetospheric plasma propulsion
Magnetic tail
Magnetic tension force
Magnetic weapon
Magnetization reversal by circularly polarized light
Magnetized target fusion
Magnetogravity wave
Magnetohydrodynamic drive
MHD generator
Magnetohydrodynamics
Magnetohydrodynamic turbulence
Magneto-optical trap
Magnetopause
Magnetoplasmadynamic thruster
Magnetosheath
Magnetosonic wave, also magnetoacoustic wave
Magnetosphere
Magnetosphere chronology
Magnetosphere of Saturn, Sources and transport of plasma
Magnetosphere particle motion
Magnetospheric Multiscale Mission
Magnetotellurics
MAGPIE, stands for Mega Ampere Generator for Plasma Implosion Experiments, Marx generator
MARAUDER, acronym of Magnetically Accelerated Ring to Achieve Ultra-high Directed Energy and Radiation
Marchywka Effect
Marfa lights
Many-body problem
Mars Express
Mass driver, or electromagnetic catapult
Mass spectrometry
Material point method
Maxwell–Boltzmann distribution
Maxwell's equations
Mechanically Stimulated Gas Emission
Mega Ampere Spherical Tokamak
Metallic bond
Metallizing
Metamaterial antenna
Microplasma
Microstructured optical arrays
Microturbulence
Microwave digestion
Microwave discharge
Microwave plasma-assisted CVD
Microwave plasma
Migma
MIT Plasma Science and Fusion Center
Moreton wave
Multipactor effect
N
Nanoflares
Nanoparticle
Nanoscale plasmonic motor
Nanoshell
National Compact Stellarator Experiment
National Spherical Torus Experiment
Navier–Stokes equations
Negative index metamaterials
Negative resistance
Negative temperature
Neon lighting
Neon sign
Neutral beam injection
Neutron generator
Neutron source
Neutron star spin-up
New Horizons, Plasma and high energy particle spectrometer suite (PAM)
Nitrogen–phosphorus detector
Nonequilibrium Gas and Plasma Dynamics Laboratory
Non-line-of-sight propagation
Non-thermal microwave effect
Nonthermal plasma, Cold plasma
Nuclear fusion, Bremsstrahlung losses in quasineutral, isotropic plasmas, deuterium plasma
Nuclear pulse propulsion
Nuclear pumped laser
Numerical diffusion
Numerical resistivity
O
Ohmic contact
Onset of deconfinement
Optode
Optoelectric nuclear battery
Orbitrap
Outer space
P
Particle-in-cell
Particle accelerator
Paschen's law
Peek's law
Pegasus Toroidal Experiment
Penning mixture
Penrose criterion
Perhapsatron
Phased plasma gun
Photon
Photonic metamaterial
Photonics
Physical cosmology
Physical vapor deposition
Piezoelectric direct discharge plasma
Pinch (plasma physics)
Planetary nebula
Planetary nebula luminosity function
Plasma-desorption mass spectrometry
Plasma-enhanced chemical vapor deposition
Plasma-immersion ion implantation
Plasma-powered cannon
Plasma (physics)
Plasma acceleration
Plasma Acoustic Shield System
Plasma activated bonding
Plasma activation
Plasma actuator
Plasma antenna
Plasma arc waste disposal, Incineration
Plasma arc welding
Plasma channel
Plasma chemistry
Plasma cleaning
Plasma Contactor
Plasma containment
Plasma conversion
Plasma cosmology, ambiplasma
Plasma cutting, Plasma gouging
Plasma deep drilling technology
Plasma diagnostics, Self Excited Electron Plasma Resonance Spectroscopy (SEERS)
Plasma display
Plasma effect
Plasma electrolytic oxidation
Plasma etcher
Plasma etching
Plasma frequency
Plasma functionalization
Plasma gasification commercialization
Plasma globe
Plasma lamp
Plasma medicine
Plasma modeling
Plasma nitriding
Plasma oscillation
Plasma parameter
Plasma parameters
Plasma pencil
Plasma polymerization
Plasma processing
Plasma propulsion engine
Plasma Pyrolysis Waste Treatment and Disposal
Plasma receiver
Plasma scaling
Plasma shaping
Plasma sheet
Plasma shield, Plasma window
Plasma sound source
Plasma source
Plasma speaker
Plasma spray
Plasma spraying, Thermal spraying, Surface finishing
Plasma stability
Plasma stealth
Plasma torch
Plasma transferred wire arc thermal spraying
Plasma valve
Plasma weapon
Plasma weapon (fiction)
Plasma window, Force field
Plasmadynamics and Electric Propulsion Laboratory
Plasmaphone
Plasmapper
Plasmaron
Plasmasphere
Plasmoid
Plasmon
Plasmonic cover, Theories of cloaking
Plasmonic laser, Nanolaser
Plasmonic metamaterials
Plasmonic nanolithography
Plasmonic Nanoparticles
Plasmonic solar cell
Polarization density
Polarization ripples
Polar (satellite)
Polymeric surfaces
Polywell
Ponderomotive force
Princeton field-reversed configuration experiment
Propulsive Fluid Accumulator, nuclear-powered magnetohydrodynamic electromagnetic plasma thruster
Proton beam
Pseudospark switch
Pulsed Energy Projectile
Pulsed laser deposition, Dynamic of the plasma
Pulsed plasma thruster, also Plasma Jet Engines
Q
Q-machine
QCD matter
Quadrupole ion trap
Quantum cascade laser
Quark–gluon plasma
Quarkonium
Quasar
Quasiparticle
R
Radiation
Radiation damage
Radical polymerization
Radioactive waste
Radio atmospheric
Radio galaxy
Radio halo
Radio relics
Railgun
Radio Aurora Explorer (RAX)
Random phase approximation
Ray tracing (physics)
Reactive-ion etching
Reaction engine
Rectifier, Plasma type
Refractive index
Reionization
Relativistic beaming
Relativistic jet
Relativistic particle
Relativistic plasma
Relativistic similarity parameter
Remote plasma-enhanced CVD
Resistive ballooning mode
Resolved sideband cooling
Resonant magnetic perturbations
Resonator mode
Reversed field pinch
Richtmyer–Meshkov instability
Riggatron
Ring current
Rocket engine nozzle
Runaway breakdown
Rydberg atom
Rydberg matter
S
Safety factor (plasma physics)
Saha ionization equation
Sceptre (fusion reactor)
Scramjet
Screened Poisson equation
SEAgel, Safe Emulsion Agar gel
Selected-ion flow-tube mass spectrometry
Self-focusing
Sensitive high-resolution ion microprobe
Shielding gas
Shiva laser
Shiva Star
Shock diamond
Shocks and discontinuities (magnetohydrodynamics)
Shock wave, Oblique shock
Skin effect
Skip zone
Sky brightness
Skywave
Slapper detonator
Small Tight Aspect Ratio Tokamak
Solar cycle, Cosmic ray flux
Solar flare
Solar Orbiter, Radio and Plasma Wave analyser
Solar prominence
Solar transition region
Solar wind
Solar wind turbulence
Solenoid
Solution precursor plasma spray, Plasma plume
Sonoluminescence
South Atlantic Anomaly
Southern Hemisphere Auroral Radar Experiment
Space physics
Spacequake
Space Shuttle
Space Shuttle thermal protection system
Space tether missions
Spark-gap transmitter
Spark plasma sintering
Spaser
Spectral imaging
Spectral line
Spherical tokamak
Spheromak
Spinplasmonics
Spontaneous emission
Spreeta
Sprite (lightning)
Sputter cleaning
Sputter deposition
Sputtering
SSIES, Special Sensors-Ions, Electrons, and Scintillation thermal plasma analysis package
SST-1 (tokamak), Steady State Tokamak
Star
Star lifting
State of matter
Static forces and virtual-particle exchange
Stellarator
Stellar-wind bubble
St. Elmo's fire
Strahl (astronomy)
Strangeness production
Strontium vapor laser
Structure formation
Sudden ionospheric disturbance
Sun
SUNIST, Sino-UNIted Spherical Tokamak, Alfven wave current drive experiments in spherical tokamak plasmas
Superlens, Plasmon-assisted microscopy
Supernova
Supernova remnants
Sura Ionospheric Heating Facility
Surface-wave-sustained mode
Surface enhanced Raman spectroscopy
Surface plasmon
Surface plasmon polaritons
Surface plasmon resonance
Suspension plasma spray
Synchrotron light source
T
Taylor state
Teller–Ulam design, Foam plasma pressure
Tesla coil
Test particle, in plasma physics or electrodynamics
Thermal barrier coating
Thermalisation
Thermionic converter
Thermodynamic temperature
Thomson scattering
Thunder
Tokamak
Tokamak à configuration variable
Tokamak Fusion Test Reactor
Toroidal ring model
Townsend discharge
Townsend (unit)
Transformation optics
Transmission medium
Trisops, Force Free Plasma Vortices
Tunable diode laser absorption spectroscopy
Tweeter, Plasma or Ion tweeter
Two-dimensional guiding-center plasma
Two-dimensional point vortex gas
Two-stream instability
U
U-HID, Ultra High Intensity Discharge
UMIST linear system
Undulator
Upper hybrid oscillation
Upper-atmospheric lightning
V
Vacuum arc, thermionic vacuum arc generates a pure metal and ceramic vapour plasma
Van Allen radiation belt
Vapor–liquid–solid method
Variable Specific Impulse Magnetoplasma Rocket
Vector inversion generator
Versatile Toroidal Facility
Violet wand
Virial theorem
Vlasov equation
Volatilisation
VORPAL, Versatile Object-oriented Relativistic Plasma Analysis with Lasers
Voyager program, Plasma Wave Subsystem
W
Warm dense matter
Wave equation
Waves in plasmas
Wave turbulence
Weibel instability
Wendelstein 7-X
Wiggler (synchrotron)
WIND (spacecraft)
Wingless Electromagnetic Air Vehicle
Wireless energy transfer
Wouthuysen-Field coupling
X
XANES, X-ray Absorption Near Edge Structure
Xenon arc lamp
X-ray transient
X-ray astronomy
X-shaped radio galaxy
Y
Z
Zakharov system
Zero-point energy
ZETA (fusion reactor)
Zonal and poloidal
Zonal flow (plasma)
Z Pulsed Power Facility
References
Plasma
Indexes of science articles
Plasma physics | List of plasma physics articles | [
"Physics",
"Chemistry"
] | 3,595 | [
"Plasma physics",
"Plasma (physics)",
"Phases of matter",
"Astrophysics",
"Matter"
] |
33,820,444 | https://en.wikipedia.org/wiki/Monatomic%20gas | In physics and chemistry, "monatomic" is a combination of the words "mono" and "atomic", and means "single atom". It is usually applied to gases: a monatomic gas is a gas in which atoms are not bound to each other. Examples at standard conditions of temperature and pressure include all the noble gases (helium, neon, argon, krypton, xenon, and radon), though all chemical elements will be monatomic in the gas phase at sufficiently high temperature (or very low pressure). The thermodynamic behavior of a monatomic gas is much simpler when compared to polyatomic gases because it is free of any rotational or vibrational energy.
Noble gases
The only chemical elements that are stable single atoms (so they are not molecules) at standard temperature and pressure (STP) are the noble gases. These are helium, neon, argon, krypton, xenon, and radon. Noble gases have a full outer valence shell making them rather non-reactive species. While these elements have been described historically as completely inert, chemical compounds have been synthesized with all but neon and helium.
When grouped together with the homonuclear diatomic gases such as nitrogen (N2), the noble gases are called "elemental gases" to distinguish them from molecules that are also chemical compounds.
Thermodynamic properties
The only possible motion of an atom in a monatomic gas is translation (electronic excitation is not important at room temperature). Thus by the equipartition theorem, the kinetic energy of a single atom of a monatomic gas at thermodynamic temperature T is given by (3/2) kB T, where kB is the Boltzmann constant. One mole of atoms contains an Avogadro number (NA) of atoms, so that the energy of one mole of atoms of a monatomic gas is (3/2) NA kB T = (3/2) RT, where R is the gas constant.
In an adiabatic process, monatomic gases have an idealised γ-factor (Cp/Cv) of 5/3, as opposed to 7/5 for ideal diatomic gases, in which rotation (but not vibration at room temperature) also contributes. Also, for ideal monatomic gases, the molar heat capacity at constant volume is Cv = (3/2)R ≈ 12.5 J/(mol·K) and the molar heat capacity at constant pressure is Cp = (5/2)R ≈ 20.8 J/(mol·K).
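A short numerical sketch of these relations, using the exact SI values of the Boltzmann and Avogadro constants and an arbitrary example temperature:

```python
# Thermodynamics of an ideal monatomic gas: per-atom energy, molar energy,
# and the heat capacities behind the gamma = Cp/Cv = 5/3 ratio.
k_B = 1.380649e-23        # Boltzmann constant, J/K
N_A = 6.02214076e23       # Avogadro constant, 1/mol
R   = k_B * N_A           # gas constant, J/(mol K)

T = 300.0                 # example temperature, K

E_atom = 1.5 * k_B * T    # mean translational kinetic energy per atom, J
E_mole = 1.5 * R * T      # internal energy per mole, J/mol

C_v = 1.5 * R             # molar heat capacity at constant volume
C_p = 2.5 * R             # molar heat capacity at constant pressure (ideal gas: Cp = Cv + R)
gamma = C_p / C_v         # adiabatic index, 5/3 for a monatomic gas

print(f"E per atom at {T} K: {E_atom:.3e} J")
print(f"E per mole at {T} K: {E_mole:.1f} J/mol")
print(f"Cv = {C_v:.2f} J/(mol K), Cp = {C_p:.2f} J/(mol K), gamma = {gamma:.4f}")
```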
References
Gases | Monatomic gas | [
"Physics",
"Chemistry"
] | 467 | [
"Statistical mechanics",
"Gases",
"Phases of matter",
"Matter"
] |
33,826,377 | https://en.wikipedia.org/wiki/Murnaghan%20equation%20of%20state | The Murnaghan equation of state is a relationship between the volume of a body and the pressure to which it is subjected. It is one of many equations of state that have been used in earth sciences and shock physics to model the behavior of matter under conditions of high pressure. It owes its name to Francis D. Murnaghan, who proposed it in 1944 to model material behavior under as wide a pressure range as possible and to reflect an experimentally established fact: the more a solid is compressed, the more difficult it becomes to compress further.
The Murnaghan equation is derived, under certain assumptions, from the equations of continuum mechanics. It involves two adjustable parameters: the modulus of incompressibility K0 and its first derivative with respect to the pressure, K'0, both measured at ambient pressure. In general, these coefficients are determined by a regression on experimentally obtained values of volume V as a function of the pressure P. These experimental data can be obtained by X-ray diffraction or by shock tests. Regression can also be performed on the values of the energy as a function of the volume obtained from ab-initio and molecular dynamics calculations.
The Murnaghan equation of state is typically expressed as P(V) = (K0/K′0)[(V0/V)^K′0 − 1].
If the reduction in volume under compression is low, i.e., for V/V0 greater than about 90%, the Murnaghan equation can model experimental data with satisfactory accuracy. Moreover, unlike many proposed equations of state, it gives an explicit expression of the volume as a function of pressure, V(P). But its range of validity is limited and its physical interpretation is unsatisfactory. However, this equation of state continues to be widely used in models of solid explosives. Of the more elaborate equations of state, the most used in earth physics is the Birch–Murnaghan equation of state. In shock physics of metals and alloys, another widely used equation of state is the Mie–Gruneisen equation of state.
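As an illustration of the explicit P(V) and V(P) forms, the following Python sketch evaluates them for one set of assumed parameters (the values of V0, K0 and K′0 below are arbitrary examples, not data from the article):

```python
# Illustrative sketch of the two explicit Murnaghan forms; parameter values
# are examples only (e.g. volume in A^3 per atom, K0 in GPa).
def murnaghan_P(V, V0, K0, K0p):
    """Pressure as a function of volume, P(V) = (K0/K'0) * [(V0/V)**K'0 - 1]."""
    return (K0 / K0p) * ((V0 / V) ** K0p - 1.0)

def murnaghan_V(P, V0, K0, K0p):
    """Inverse relation, V(P) = V0 * (1 + K'0*P/K0)**(-1/K'0)."""
    return V0 * (1.0 + K0p * P / K0) ** (-1.0 / K0p)

V0, K0, K0p = 10.0, 160.0, 4.0        # assumed example values
P = murnaghan_P(9.0, V0, K0, K0p)     # pressure needed for a 10% compression
print(P)                              # ~21 GPa for these parameters
print(murnaghan_V(P, V0, K0, K0p))    # round trip recovers V = 9.0
```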
Background
The study of the internal structure of the earth through the knowledge of the mechanical properties of the constituents of the inner layers of the planet involves extreme conditions; the pressure can be counted in hundreds of gigapascals and temperatures in thousands of degrees. The study of the properties of matter under these conditions can be done experimentally through devices such as the diamond anvil cell for static pressures, or by subjecting the material to shock waves. It also gave rise to theoretical work to determine the equation of state, that is to say the relations among the different parameters that define in this case the state of matter: the volume (or density), temperature and pressure.
There are two approaches:
the state equations derived from interatomic potentials, or possibly ab initio calculations;
the state equations derived from the general relations of mechanics and thermodynamics. The Murnaghan equation belongs to this second category.
Dozens of equations have been proposed by various authors. These are empirical relationships whose quality and relevance depend on the use made of them, and which can be judged by different criteria: the number of independent parameters that are involved, the physical meaning that can be assigned to these parameters, the quality of the experimental data, and the consistency of the theoretical assumptions that underlie their ability to extrapolate the behavior of solids at high compression.
Expressions for the equation of state
Generally, at constant temperature, the bulk modulus is defined by K = −V(∂P/∂V)T.
The easiest way to get an equation of state linking P and V is to assume that K is constant, that is to say, independent of pressure and deformation of the solid, then we simply find Hooke's law. In this case, the volume decreases exponentially with pressure. This is not a satisfactory result because it is experimentally established that as a solid is compressed, it becomes more difficult to compress. To go further, we must take into account the variations of the elastic properties of the solid with compression.
Murnaghan's assumption is that the bulk modulus is a linear function of pressure: K(P) = K0 + K′0P.
The Murnaghan equation is the result of integrating the differential equation −V(dP/dV) = K0 + K′0P, which yields the P(V) expression given above.
We can also express the volume as a function of the pressure: V(P) = V0(1 + K′0P/K0)^(−1/K′0).
This simplified presentation is however criticized by Poirier as lacking rigor. The same relationship can be shown in a different way from the fact that the product of the incompressibility modulus and the thermal expansion coefficient does not depend on pressure for a given material. This equation of state is also a general case of the older Polytrope relation, which also has a constant power relation.
In some circumstances, particularly in connection with ab initio calculations, the expression of the energy as a function of the volume will be preferred, which can be obtained by integrating the above equation according to the relationship P = −dE/dV. For K′0 different from 1, it can be written as E(V) = E0 + K0V0[(V/V0)^(1−K′0)/(K′0(K′0 − 1)) + V/(K′0V0) − 1/(K′0 − 1)].
Derivation of the Murnaghan equation of state:
A solid has a certain equilibrium volume V0, and the energy increases quadratically as volume is increased or decreased a small amount from that value. The simplest plausible dependence of energy on volume would be a harmonic solid, with
The next simplest reasonable model would be with a constant bulk modulus
Integrating gives
A more sophisticated equation of state was derived by
Francis D. Murnaghan of Johns Hopkins University in 1944. To begin with, we consider the pressure
and the bulk modulus
Experimentally, the bulk modulus pressure derivative
is found to change little with pressure. If we take K′ to be a constant, then K = K0 + K′P,
where K0 is the value of K when P = 0.
We may equate this with (2) and rearrange as
Integrating this results in
or equivalently
Substituting (6) into
when then results in the equation of state for energy.
Many substances have a fairly constant of about 3.5.
Advantages and limitations
Despite its simplicity, the Murnaghan equation is able to reproduce the experimental data for a range of pressures that can be quite large, on the order of K0/2. It also remains satisfactory as the ratio V/V0 remains above about 90%. In this range, the Murnaghan equation has an advantage compared to other equations of state if one wants to express the volume as a function of pressure.
Nevertheless, other equations may provide better results and several theoretical and experimental studies show that the Murnaghan equation is unsatisfactory for many problems. Thus, to the extent that the ratio V/V0 becomes very low, the theory predicts that K' goes to 5/3, which is the Thomas–Fermi limit. However, in the Murnaghan equation, K' is constant and set to its initial value. In particular, the value K'0 = 5/3 becomes inconsistent with the theory under some situations. In fact, when extrapolated, the behavior predicted by the Murnaghan equation becomes quite quickly unlikely.
Regardless of this theoretical argument, experience clearly shows that K' decreases with pressure, or in other words that the second derivative of the incompressibility modulus K" is strictly negative. A second order theory based on the same principle (see next section) can account for this observation, but this approach is still unsatisfactory. Indeed, it leads to a negative bulk modulus in the limit where the pressure tends to infinity. In fact, this is an inevitable contradiction whatever polynomial expansion is chosen because there will always be a dominant term that diverges to infinity.
These important limitations have led to the abandonment of the Murnaghan equation, which W. Holzapfel calls "a useful mathematical form without any physical justification". In practice, the analysis of compression data is done by using more sophisticated equations of state. The most commonly used within the science community is the Birch–Murnaghan equation, second or third order in the quality of data collected.
Finally, a very general limitation of this type of equation of state is their inability to take into account the phase transitions induced by the pressure and temperature of melting, but also multiple solid-solid transitions that can cause abrupt changes in the density and bulk modulus based on the pressure.
Examples
In practice, the Murnaghan equation is used to perform a regression on a data set, from which one gets the values of the coefficients K0 and K'0. Once these coefficients are obtained, and knowing the value of the volume at ambient conditions, we are in principle able to calculate the volume, density and bulk modulus for any pressure.
The data set is mostly a series of volume measurements for different values of applied pressure, obtained mostly by X-ray diffraction. It is also possible to work on theoretical data, calculating the energy for different values of volume by ab initio methods, and then regressing these results. This gives a theoretical value of the modulus of elasticity which can be compared to experimental results.
The following table lists some of the results of different materials, with the sole purpose of illustrating some numerical analyses that have been made using the Murnaghan equation, without prejudice to the quality of the models obtained. Given the criticisms that have been made in the previous section on the physical meaning of the Murnaghan equation, these results should be considered with caution.
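The regression described above can be sketched in a few lines with scipy. In this hedged example the "experimental" data are synthetic, generated from the same Murnaghan form plus noise, so it only illustrates the fitting procedure, not real measurements:

```python
# Fit K0 and K'0 of the Murnaghan V(P) relation to synthetic (P, V) data.
import numpy as np
from scipy.optimize import curve_fit

def murnaghan_V(P, K0, K0p, V0=10.0):
    """V(P) = V0 * (1 + K'0*P/K0)**(-1/K'0); V0 is held fixed here."""
    return V0 * (1.0 + K0p * P / K0) ** (-1.0 / K0p)

P_data = np.linspace(0.0, 60.0, 15)                    # pressures, e.g. GPa
V_data = murnaghan_V(P_data, 160.0, 4.0)               # "true" model values
V_data = V_data + np.random.normal(0.0, 0.005, P_data.size)  # add noise

(K0_fit, K0p_fit), cov = curve_fit(murnaghan_V, P_data, V_data, p0=[100.0, 4.0])
print(K0_fit, K0p_fit)        # should recover roughly 160 and 4
print(np.sqrt(np.diag(cov)))  # standard errors of the fitted coefficients
```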
Extensions and generalizations
To improve the models or avoid criticism outlined above, several generalizations of the Murnaghan equation have been proposed. They usually consist in dropping a simplifying assumption and adding another adjustable parameter. This can improve the qualities of refinement, but also lead to complicated expressions. The question of the physical meaning of these additional parameters is also raised.
A possible strategy is to include an additional term P2 in the previous development, requiring that . Solving this differential equation gives the equation of the second-order Murnaghan:
where . Found naturally in the first order equation taking . Developments to an order greater than 2 are possible in principle, but at the cost of adding an adjustable parameter for each term.
Other generalizations can be cited:
Kumari and Dass have proposed a generalization abandoning the condition K″ = 0 but assuming the ratio K′/K″ to be independent of pressure;
Kumar proposed a generalization taking into account the dependence of the Anderson parameter as a function of volume. It was subsequently shown that this generalized equation was not new, but rather reducible to the Tait equation.
Notes and references
Bibliography
See also
Equation of state
Birch–Murnaghan equation of state
Rose–Vinet equation of state
Polytrope
External links
EosFit, a program for the refinement of experimental data and the calculation of P(V) relations for different equations of state, including the Murnaghan equation.
Continuum mechanics
Equations of state | Murnaghan equation of state | [
"Physics"
] | 2,180 | [
"Equations of physics",
"Continuum mechanics",
"Classical mechanics",
"Statistical mechanics",
"Equations of state"
] |
31,215,918 | https://en.wikipedia.org/wiki/Annular%20fin | In thermal engineering, an annular fin is a specific type of fin used in heat transfer that varies, radially, in cross-sectional area. Adding an annular fin to an object increases the amount of surface area in contact with the surrounding fluid, which increases the convective heat transfer between the object and surrounding fluid. Because surface area increases as length from the object increases, an annular fin transfers more heat than a similar pin fin at any given length. Annular fins are often used to increase the heat exchange in liquid–gas heat exchanger systems.
Governing Equation
To derive the governing equation of an annular fin, certain assumptions must be made. The fin must have constant thermal conductivity and other material properties, there must be no internal heat generation, there must be only one-dimensional conduction, and the fin must be at steady state.
Applying the energy conservation principle to a differential element between radii r and r + Δr yields
where the first two terms are heat transferred through conduction, while the third is heat lost due to convection with the surrounding fluid. T represents the temperature at r and Te represents the temperature of the surrounding fluid. Next, applying Fourier's law
and dividing by 4πΔr, letting Δr → 0, yields
Assigning new variables z
and θ, where Tb is the temperature at the base of the fin,
results in the governing equation for heat transfer of an annular fin:
Heat loss and efficiency
The maximum possible heat loss from an annular fin occurs when the fin is isothermal. This ensures that the temperature difference between the fin and the surrounding fluid is maximized at every point along the fin, increasing heat transfer by convection, and ultimately heat loss Q:
The efficiency ηf of an annular fin is the ratio of its heat loss to the heat loss of a similar isothermal fin:
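A minimal numerical sketch of the governing equation is given below. It assumes a fin of uniform thickness t with an adiabatic tip, discretizes d²θ/dr² + (1/r)dθ/dr − m²θ = 0 (with m² = 2h/(kt)) by finite differences, and then evaluates the convective heat loss and the efficiency; all parameter values are illustrative, not taken from the article:

```python
# Finite-difference sketch of an annular fin with an adiabatic tip.
import numpy as np

h, k, t = 50.0, 200.0, 2e-3     # convection coeff. W/m^2K, conductivity W/mK, thickness m
r1, r2 = 0.01, 0.03             # inner and outer radii, m
Tb, Te = 400.0, 300.0           # base and surrounding-fluid temperatures, K
m2 = 2.0 * h / (k * t)          # m^2 in the governing equation

N = 200
r = np.linspace(r1, r2, N)
dr = r[1] - r[0]

A = np.zeros((N, N))
b = np.zeros(N)
A[0, 0] = 1.0
b[0] = Tb - Te                                   # theta(r1) = Tb - Te
for i in range(1, N - 1):                        # interior nodes of the ODE
    A[i, i - 1] = 1.0 / dr**2 - 1.0 / (2.0 * dr * r[i])
    A[i, i] = -2.0 / dr**2 - m2
    A[i, i + 1] = 1.0 / dr**2 + 1.0 / (2.0 * dr * r[i])
A[-1, -1] = 1.0
A[-1, -2] = -1.0                                 # adiabatic tip: d(theta)/dr = 0

theta = np.linalg.solve(A, b)                    # excess temperature T(r) - Te

# Heat lost by convection from both faces, and efficiency vs. an isothermal fin
Q = np.trapz(2.0 * h * theta * 2.0 * np.pi * r, r)
Q_max = 2.0 * h * (Tb - Te) * np.pi * (r2**2 - r1**2)
print(Q, Q / Q_max)
```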
References
Heat transfer | Annular fin | [
"Physics",
"Chemistry"
] | 380 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamics"
] |
31,215,952 | https://en.wikipedia.org/wiki/Rotational%20partition%20function | In chemistry, the rotational partition function relates the rotational degrees of freedom to the rotational part of the energy.
Definition
The total canonical partition function Q of a system of N identical, indistinguishable, noninteracting atoms or molecules can be divided into the atomic or molecular partition functions q: Q = q^N/N!,
with q = Σj gj exp(−εj/(kBT)),
where gj is the degeneracy of the jth quantum level of an individual particle, kB is the Boltzmann constant, and T is the absolute temperature of the system.
For molecules, under the assumption that total energy levels can be partitioned into its contributions from different degrees of freedom (weakly coupled degrees of freedom)
and the number of degenerate states are given as products of the single contributions
where "trans", "ns", "rot", "vib" and "e" denotes translational, nuclear spin, rotational and vibrational contributions as well as electron excitation, the molecular partition functions
can be written as a product itself: q = qtrans · qns · qrot · qvib · qe.
Linear molecules
Rotational energies are quantized. For a diatomic molecule like CO or HCl, or a linear polyatomic molecule like OCS in its ground vibrational state, the allowed rotational energies in the rigid rotor approximation are EJ = B·J(J + 1).
J is the quantum number for total rotational angular momentum and takes all integer values starting at zero, i.e., J = 0, 1, 2, …; B = ħ²/(2I) is the rotational constant, and I is the moment of inertia. Here we are using B in energy units. If it is expressed in frequency units, replace B by hB in all the expressions that follow, where h is the Planck constant. If B is given in units of cm−1, then replace B by hcB where c is the speed of light in vacuum.
For each value of J, we have rotational degeneracy gJ = (2J + 1), so the rotational partition function is therefore qrot = Σ (2J + 1) exp(−B·J(J + 1)/(kBT)), where the sum runs over J = 0, 1, 2, …
For all but the lightest molecules or the very lowest temperatures we have kBT ≫ B. This suggests we can approximate the sum by replacing the sum over J by an integral over J treated as a continuous variable.
Evaluating the integral gives qrot ≈ kBT/B. This approximation is known as the high temperature limit. It is also called the classical approximation as this is the result for the canonical partition function for a classical rigid rod.
Using the Euler–Maclaurin formula an improved estimate can be found: qrot ≈ (kBT/B)[1 + (1/3)(B/kBT) + (1/15)(B/kBT)² + …].
For the CO molecule at room temperature, the (unitless) contribution to qrot turns out to be of the order of 10².
The mean thermal rotational energy per molecule can now be computed by taking the derivative of ln qrot with respect to temperature T (⟨εrot⟩ = kBT² ∂ln qrot/∂T). In the high temperature limit approximation, the mean thermal rotational energy of a linear rigid rotor is kBT.
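The sum over J and its high-temperature limit are easy to compare numerically. The sketch below uses an assumed CO-like rotational constant (B ≈ 1.93 cm−1) purely as an example:

```python
# Rotational partition function of a heteronuclear linear molecule:
# direct summation versus the high-temperature limit q_rot ~ kT/B.
import numpy as np

kB = 1.380649e-23        # Boltzmann constant, J/K
hc = 1.98645e-23         # h*c in J*cm, converts cm^-1 to J
B_cm = 1.93              # rotational constant in cm^-1 (assumed CO-like value)
B = B_cm * hc            # rotational constant in J
T = 300.0                # temperature, K

J = np.arange(0, 200)    # the sum converges well before J = 200 here
q_exact = np.sum((2 * J + 1) * np.exp(-B * J * (J + 1) / (kB * T)))
q_highT = kB * T / B

print(q_exact, q_highT)  # both of order 1e2 for these parameters
```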
Quantum symmetry effects
For a diatomic molecule with a center of symmetry, such as H2 or CO2 (i.e. point group D∞h), rotation of a molecule by π radians about an axis perpendicular to the molecule axis and going through the center of mass will interchange pairs of equivalent atoms. The spin–statistics theorem of quantum mechanics requires that the total molecular wavefunction be either symmetric or antisymmetric with respect to this rotation depending upon whether an even or odd number of pairs of fermion nuclei are exchanged. A given electronic and vibrational wavefunction will either be symmetric or antisymmetric with respect to this rotation. The rotational wavefunction with quantum number J will have a sign change of (−1)^J. The nuclear spin states can be separated into those that are symmetric or antisymmetric with respect to the nuclear permutations produced by the rotation. For the case of a symmetric diatomic with nuclear spin quantum number I for each nucleus, there are (I + 1)(2I + 1) symmetric spin functions and I(2I + 1) antisymmetric functions, for a total number of nuclear functions of (2I + 1)². Nuclei with an even nuclear mass number are bosons and have integer nuclear spin quantum number, I. Nuclei with odd mass number are fermions and have half-integer I. For the case of H2, rotation exchanges a single pair of fermions and so the overall wavefunction must be antisymmetric under the half rotation. The vibration-electronic function is symmetric and so the rotation-vibration-electronic function will be even or odd depending upon whether J is an even or odd integer. Since the total wavefunction must be odd, the even J levels can only use the antisymmetric functions (only one for I = 1/2) while the odd J levels can use the symmetric functions (three for I = 1/2). For D2, I = 1 and thus there are six symmetric functions, which go with the even J levels to produce an overall symmetric wavefunction, and three antisymmetric functions that must go with odd J rotational levels to produce an overall even function. The number of nuclear spin functions that are compatible with a given rotation-vibration-electronic state is called the nuclear spin statistical weight of the level, often represented as gns. Averaging over both even and odd J levels, the mean statistical weight is (2I + 1)²/2, which is one half the value of (2I + 1)² expected when the quantum statistical restrictions are ignored. In the high temperature limit, it is traditional to correct for the missing nuclear spin states by dividing the rotational partition function by a factor σ, with σ known as the rotational symmetry number, which is 2 for linear molecules with a center of symmetry and 1 for linear molecules without.
Nonlinear molecules
A rigid, nonlinear molecule has rotational energy levels determined by three rotational constants, conventionally written A, B and C, which can often be determined by rotational spectroscopy. In terms of these constants (expressed in energy units), the rotational partition function can be written in the high temperature limit as qrot = (√π/σ)·√((kBT)³/(ABC)),
with σ again known as the rotational symmetry number, which in general equals the number of ways a molecule can be rotated to overlap itself in an indistinguishable way, i.e. that at most interchanges identical atoms. Like in the case of the diatomic treated explicitly above, this factor corrects for the fact that only a fraction of the nuclear spin functions can be used for any given molecular level to construct wavefunctions that overall obey the required exchange symmetries. Another convenient expression for the rotational partition function for symmetric and asymmetric tops is provided by Gordy and Cook:
where the prefactor comes from
when A, B, and C are expressed in units of MHz.
The expression works for asymmetric, symmetric and spherical top rotors.
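A small sketch of the high-temperature expression for a nonlinear molecule follows; the rotational constants are approximate values for H2O and the symmetry number σ = 2 is that of its C2v point group, used here only as an example:

```python
# High-temperature rotational partition function of a nonlinear molecule:
# q_rot = (sqrt(pi)/sigma) * sqrt((kT)^3 / (A*B*C)), constants in energy units.
import math

kB_cm = 0.695035               # Boltzmann constant in cm^-1 per K
T = 300.0                      # temperature, K
A, B, C = 27.88, 14.51, 9.29   # rotational constants in cm^-1 (approx. H2O)
sigma = 2                      # symmetry number of H2O

q_rot = (math.sqrt(math.pi) / sigma) * math.sqrt((kB_cm * T) ** 3 / (A * B * C))
print(q_rot)                   # of order 40 for these values
```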
References
See also
Translational partition function
Vibrational partition function
Partition function (mathematics)
Equations of physics
Partition functions | Rotational partition function | [
"Physics",
"Mathematics"
] | 1,279 | [
"Equations of physics",
"Mathematical objects",
"Equations",
"Partition functions",
"Statistical mechanics"
] |
31,216,882 | https://en.wikipedia.org/wiki/Insulin%20signal%20transduction%20pathway | The insulin transduction pathway is a biochemical pathway by which insulin increases the uptake of glucose into fat and muscle cells and reduces the synthesis of glucose in the liver and hence is involved in maintaining glucose homeostasis. This pathway is also influenced by fed versus fasting states, stress levels, and a variety of other hormones.
When carbohydrates are consumed, digested, and absorbed the pancreas senses the subsequent rise in blood glucose concentration and releases insulin to promote uptake of glucose from the bloodstream. When insulin binds to the insulin receptor, it leads to a cascade of cellular processes that promote the usage or, in some cases, the storage of glucose in the cell. The effects of insulin vary depending on the tissue involved, e.g., insulin is most important in the uptake of glucose by muscle and adipose tissue.
This insulin signal transduction pathway is composed of trigger mechanisms (e.g., autophosphorylation mechanisms) that serve as signals throughout the cell. There is also a counter mechanism in the body to stop the secretion of insulin beyond a certain limit. Namely, those counter-regulatory mechanisms are glucagon and epinephrine. The process of the regulation of blood glucose (also known as glucose homeostasis) also exhibits oscillatory behavior.
On a pathological basis, this topic is crucial to understanding certain disorders in the body such as diabetes, hyperglycemia and hypoglycemia.
Transduction pathway
The functioning of a signal transduction pathway is based on extra-cellular signaling that in turn creates a response that causes other subsequent responses, hence creating a chain reaction, or cascade. During the course of signaling, the cell uses each response for accomplishing some kind of a purpose along the way. Insulin secretion mechanism is a common example of signal transduction pathway mechanism.
Insulin is produced by the pancreas in a region called islets of Langerhans. In the islets of Langerhans, there are beta-cells, which are responsible for production and storage of insulin. Insulin is secreted as a response mechanism for counteracting the increasing excess amounts of glucose in the blood.
Glucose in the body increases after food consumption. This is primarily due to carbohydrate intake, but to a much lesser degree protein intake. Depending on the tissue type, the glucose enters the cell through facilitated diffusion or active transport. In muscle and adipose tissue, glucose enters through GLUT 4 receptors via facilitated diffusion. In brain, retina, kidney, RBC, placenta and many other organs, glucose enters using GLUT 1 and GLUT 3. In the beta-cells of the pancreas and in liver cells, glucose enters through the GLUT 2 receptors
(process described below).
Insulin biosynthesis and transcription
Insulin biosynthesis is regulated by transcriptional and translational levels. The β-cells promote their protein transcription in response to nutrients. The exposure of rat Langerhans islets to glucose for 1 hour is able to remarkably induce the intracellular proinsulin levels. It was noted that the proinsulin mRNA remained stable. This suggests that the acute response to glucose of the insulin synthesis is independent of mRNA synthesis in the first 45 minutes because the blockage of the transcription decelerated the insulin accumulation during that time.
PTBPs, also called polypyrimidine tract binding proteins, are proteins that regulate the translation of mRNA. They increase the viability of mRNA and provoke the initiation of the translation. PTBP1 enable the insulin gene-specific activation and insulin granule protein mRNA by glucose.
Two aspects of the transduction pathway process are explained below: insulin secretion and insulin action on the cell.
Insulin secretion
The glucose that goes into the bloodstream after food consumption also enters the beta cells in the islets of Langerhans in the pancreas. The glucose diffuses in the beta-cell facilitated by a GLUT-2 vesicle. Inside the beta cell, the following process occurs:
Glucose gets converted to glucose-6-phosphate (G6P) through glucokinase, and G6P is subsequently oxidized to form ATP. This process inhibits the ATP-sensitive potassium ion channels of the cell causing the potassium ion channel to close and not function anymore. The closure of the ATP-sensitive potassium channels causes depolarization of the cell membrane causing the cell membrane to stretch which causes the voltage-gated calcium channel on the membrane to open causing an influx of Ca2+ ions.
This influx then stimulates fusion of the insulin vesicles to the cell membrane and secretion of insulin into the extracellular fluid outside the beta cell, thus making it enter the bloodstream.
There are 3 subfamilies of Ca2+ channels; L-type Ca2+ channels, non-L-type Ca2+ channels (including R-type) and the T-type Ca2+ channels. There are two phases of the insulin secretion, the first phase involves the L-type Ca2+ channels and the second phase involves the R-type Ca2+ channels. The Ca2+ influx generated by R-type Ca2+ channels is not enough to cause insulin exocytosis, however, it increases the mobilization of the vesicles towards the cell membrane.
Fatty acids and insulin secretion
Fatty acids also affect insulin secretion. In type 2 diabetes, fatty acids are able to potentiate insulin release to compensate the increment need of insulin. It was found that the β-cells express free fatty acid receptors at their surface, through which fatty acids can impact the function of β-cells. Long-chain acyl-CoA and DAG are the metabolites resulting from the intracellular metabolism of fatty acids. Long-chain acyl-CoA has the ability to acylate proteins that are essential in the insulin granule fusion. On the other hand, DAG activates PKC that is involved in the insulin secretion.
Hormonal regulation of insulin secretion
Several hormones can affect insulin secretion. Estrogen is correlated with an increase of insulin secretion by depolarizing the β-cells membrane and enhancing the entry of Ca2+. In contrast, growth hormone is known to lower the serum level of insulin by promoting the production of insulin-like growth factor-I (IGF-I). IGF-I, in turn, suppresses the insulin secretion.
Action on the cell
After insulin enters the bloodstream, it binds to a membrane-spanning receptor tyrosine kinase (RTK). This glycoprotein is embedded in the cellular membrane and has an extracellular receptor domain, made up of two α-subunits, and an intracellular catalytic domain made up of two β-subunits. The α-subunits act as insulin receptors and the insulin molecule acts as a ligand. Together, they form a receptor-ligand complex.
Binding of insulin to the α-subunit results in a conformational change of the protein, which activates tyrosine kinase domains on each β-subunit. The tyrosine kinase activity causes an autophosphorylation of several tyrosine residues in the β-subunit. The phosphorylation of 3 residues of tyrosine is necessary for the amplification of the kinase activity.
This autophosphorylation triggers the activation of the docking proteins, in this case IRS (1-4) on which phosphatidylinositol-3-Kinase (PI-3K) can be attached or GRB2 where the ras guanine nucleotide exchange factor (GEF) (also known as SOS) can be attached.
PI-3K causes the phosphorylation of PIP2 to PIP3. This protein acts as a docking site for PDPK1 and protein kinase B (also known as AKT), which is then phosphorylated by the latter and PK2 to be activated. This leads to crucial metabolic functions such as synthesis of lipids, proteins and glycogen. It also leads to cell survival and cell proliferation. Most importantly, the PI-3K pathway is responsible for the distribution of glucose for important cell functions. For example, the suppression of hepatic glucose synthesis and the activation of glycogen synthesis. Hence, AKT possesses a crucial role in the linkage of the glucose transporter (GLUT4) to the insulin signaling pathway. The activated GLUT4 will translocate to the cell membrane and promotes the transportation of glucose into the intracellular medium.
The Ras-GEF stimulates the exchange of GDP to GTP in the RAS protein, causing it to activate. Ras then activates the mitogen-activated protein kinase (MAP-Kinase) route, which ultimately results in changes in protein activity and gene expression.
Thus, insulin's role is more of a promoter for the usage of glucose in the cells rather than neutralizing or counteracting it.
Regulation of the insulin receptor signal
PI3K (phosphoinositide 3-kinase) is one of the important components in the regulation of the insulin signaling pathway. It maintains the insulin sensitivity in the liver. PI-3K is composed of a regulatory subunit (P85) and a catalytic subunit (P110). P85 regulates the activation of the PI3K enzyme. In the PI-3K heterodimer (P85-P110), P85 is responsible for the PI3K activity, by binding to the binding site on the insulin receptor substrates (IRS). It was noted that an increase of P85α (an isoform of P85) results in a competition between the latter and the P85-P110 complex for the IRS binding site, reducing the PI3K activity and leading to insulin resistance. Insulin resistance is also associated with type 2 diabetes.
It was also noted that increased serine phosphorylation of IRS is involved in the insulin resistance by reducing their ability to attract PI3K. The serine phosphorylation can also lead to degradation of IRS-1.
Feedback mechanisms
Signal transduction is a mechanism in which the cell responds to a signal from the environment by activating several proteins and enzymes that will give a response to the signal.
Feedback mechanism might involve negative and positive feedbacks. In the negative feedback, the pathway is inhibited and the result of the transduction pathway is reduced or limited. In positive feedback, the transduction pathway is promoted and stimulated to produce more products.
Positive
Insulin secretion results in positive feedback in different ways. Firstly, insulin increases the uptake of glucose from blood by the translocation and exocytosis of GLUT4 storage vesicles in the muscle and fat cells. Secondly, it promotes the conversion of glucose into triglyceride in the liver, fat, and muscle cells. Finally, the cell will increase the rate of glycolysis within itself to break glucose in the cell into other components for tissue growth purposes.
An example of positive feedback mechanism in the insulin transduction pathway is the activation of some enzymes that inhibit other enzymes from slowing or stopping the insulin transduction pathway which results in improved intake of the glucose.
One of these pathways, involves the PI3K enzyme. This pathway is responsible for activating glycogen, lipid-protein synthesis, and specific gene expression of some proteins which will help in the intake of glucose.
Different enzymes control this pathway. Some of these enzymes constrict the pathway causing a negative feedback like the GSK-3 pathway. Other enzymes will push the pathway forward causing a positive feedback like the AKT and P70 enzymes.
When insulin binds to its receptor, it activates the glycogen synthesis by inhibiting the enzymes that slow down the PI3K pathway such as PKA enzyme. At the same time, it will promote the function of the enzymes that provide a positive feedback for the pathway like the AKT and P70 enzymes. The inactivation of the enzymes that stop the reaction and activating of enzymes that provide a positive feedback will increase glycogen, lipid & protein syntheses and promote glucose intake.
Negative
When insulin binds to the cell's receptor, it results in negative feedback by limiting or stopping some other actions in the cell. It inhibits the release and production of glucose from the cells which is an important part in reducing the glucose blood level. Insulin will also inhibit the breakdown of glycogen into glucose by inhibiting the expression of the enzymes that catalyzes the degradation of glycogen.
An example of negative feedback is slowing or stopping the intake of glucose after the pathway has been activated. Negative feedback is shown in the insulin signal transduction pathway by constricting the phosphorylation of the insulin-stimulated tyrosine. The enzymes that deactivate, i.e. dephosphorylate, the insulin-stimulated tyrosine are called tyrosine phosphatases (PTPases). When activated, these enzymes provide a negative feedback by catalyzing the dephosphorylation of the insulin receptors. The dephosphorylation of the insulin receptor slows down glucose intake by inhibiting the activation (phosphorylation) of proteins responsible for further steps of the insulin transduction pathway.
Trigger mechanism
Insulin is synthesized and secreted in the beta cells of the islets of Langerhans. Once insulin is synthesized, the beta cells are ready to release it in two different phases. As for the first phase, insulin release is triggered rapidly when the blood glucose level is increased. The second phase is a slow release of newly formed vesicles that are triggered regardless of the blood sugar level.
Glucose enters the beta cells and goes through glycolysis to form ATP that eventually causes depolarization of the beta cell membrane (as explained in Insulin secretion section of this article). The depolarization process causes voltage-controlled calcium channels (Ca2+) opening, allowing the calcium to flow into the cells. An increased calcium level activates phospholipase C, which cleaves the membrane phospholipid phosphatidylinositol 4,5-bisphosphate into Inositol 1,4,5-trisphosphate (IP3) and diacylglycerol (DAG). IP3 binds to receptor proteins in the membrane of the endoplasmic reticulum (ER). This releases (Ca2+) from the ER via IP3 gated channels, and raises the cell concentration of calcium even more. The influx of Ca2+ ions causes the secretion of insulin stored in vesicles through the cell membrane. The process of insulin secretion is an example of a trigger mechanism in a signal transduction pathway because insulin is secreted after glucose enters the beta cell and that triggers several other processes in a chain reaction.
Counter mechanism
Function of glucagon
While insulin is secreted by the pancreas to lower blood glucose levels, glucagon is secreted to raise blood glucose levels. This is why glucagon has been known for decades as a counter-regulatory hormone. When blood glucose levels are low, the pancreas secretes glucagon, which in turn causes the liver to convert stored glycogen polymers into glucose monomers, which is then released into the blood. This process is called glycogenolysis. Liver cells, or hepatocytes, have glucagon receptors which allow for glucagon to attach to them and thus stimulate glycogenolysis. Contrary to insulin, which is produced by pancreatic β-cells, glucagon is produced by pancreatic α-cells. It is also known that an increase in insulin suppresses glucagon secretion, and a decrease in insulin, along with low glucose levels, stimulates the secretion of glucagon.
Oscillatory behavior
When blood glucose levels are too low, the pancreas is signaled to release glucagon, which has essentially the opposite effect of insulin and therefore opposes the reduction of glucose in the blood. Glucagon is delivered directly to the liver, where it connects to the glucagon receptors on the membranes of the liver cells, signals the conversion of the glycogen already stored in the liver cells into glucose. This process is called glycogenolysis.
Conversely, when the blood glucose levels are too high, the pancreas is signaled to release insulin. Insulin is delivered to the liver and other tissues throughout the body (e.g., muscle, adipose). When the insulin is introduced to the liver, it connects to the insulin receptors already present, that is tyrosine kinase receptor. These receptors have two alpha subunits (extracellular) and two beta subunits (intercellular) which are connected through the cell membrane via disulfide bonds. When the insulin binds to these alpha subunits, 'glucose transport 4' (GLUT4) is released and transferred to the cell membrane to regulate glucose transport in and out of the cell. With the release of GLUT4, the allowance of glucose into cells is increased, and therefore the concentration of blood glucose might decrease. This, in other words, increases the utilization of the glucose already present in the liver. This is shown in the adjacent image. As glucose increases, the production of insulin increases, which thereby increases the utilization of the glucose, which maintains the glucose levels in an efficient manner and creates an oscillatory behavior.
References
Molecular biology
Insulin | Insulin signal transduction pathway | [
"Chemistry",
"Biology"
] | 3,691 | [
"Biochemistry",
"Molecular biology"
] |
31,224,085 | https://en.wikipedia.org/wiki/Pentakis%20snub%20dodecahedron | The pentakis snub dodecahedron is a convex polyhedron with 140 triangular faces, 210 edges, and 72 vertices. It has chiral icosahedral symmetry.
Construction
It comes from a topological construction from the snub dodecahedron with the kis operator applied to the pentagonal faces. In this construction, all the faces are computed to be the same distance from the center. 80 of the triangles are equilateral, and 60 triangles from the pentagons are isosceles.
It is a (2,1) geodesic polyhedron, made of all triangles. The path between the valence-5 vertices is two edges in a row, and then a turn and one more edge.
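The counts quoted above can be checked from the snub dodecahedron (80 triangles, 12 pentagons, 150 edges, 60 vertices) and the effect of the kis operator on the pentagonal faces; the short sketch below (not from the article) verifies them together with Euler's formula:

```python
# Face/edge/vertex counts of the pentakis snub dodecahedron from the
# snub dodecahedron plus the kis operator on its 12 pentagonal faces.
V_snub, E_snub = 60, 150
tri_snub, pent_snub = 80, 12

# kis on each pentagon: +1 vertex, +5 edges, and the pentagon becomes 5 triangles
V = V_snub + pent_snub        # 72 vertices
E = E_snub + 5 * pent_snub    # 210 edges
F = tri_snub + 5 * pent_snub  # 140 triangular faces

assert V - E + F == 2         # Euler's formula for a convex polyhedron
print(V, E, F)                # 72 210 140
```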
See also
Tetrakis snub cube k4sC
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008,
Chapter 21: Naming the Archimedean and Catalan polyhedra and Tilings (p 284)
Dover 1999
External links
Pentakis snub dodecahedron
VTML polyhedral generator Try "k5sD" (Conway polyhedron notation)
Geodesic polyhedra
Snub tilings | Pentakis snub dodecahedron | [
"Physics"
] | 248 | [
"Tessellation",
"Snub tilings",
"Symmetry"
] |
36,420,451 | https://en.wikipedia.org/wiki/Reversible-deactivation%20radical%20polymerization | In polymer chemistry, reversible-deactivation radical polymerizations (RDRPs) are members of the class of reversible-deactivation polymerizations which exhibit much of the character of living polymerizations, but cannot be categorized as such as they are not without chain transfer or chain termination reactions.
Several different names have been used in literature, which are:
Living radical polymerization
Living free radical polymerization
Controlled/"living" radical polymerization
Controlled radical polymerization
Reversible deactivation radical polymerization
Though the term "living" radical polymerization was used in early days, it has been discouraged by IUPAC, because radical polymerization cannot be a truly living process due to unavoidable termination reactions between two radicals. The commonly-used term controlled radical polymerization is permitted, but reversible-deactivation radical polymerization or controlled reversible-deactivation radical polymerization (RDRP) is recommended.
History and character
RDRP – sometimes misleadingly called 'free' radical polymerization – is one of the most widely used polymerization processes since it can be applied
to a great variety of monomers
it can be carried out in the presence of certain functional groups
the technique is rather simple and easy to control
the reaction conditions can vary from bulk over solution, emulsion, miniemulsion to suspension
it is relatively inexpensive compared with competitive techniques
The steady-state concentration of the growing polymer chains is of the order of 10−7 M, and the average lifetime of an individual polymer radical before termination is about 5–10 s. A drawback of conventional radical polymerization is the limited control of chain architecture, molecular weight distribution, and composition. In the late 20th century it was observed that when certain components were added to systems polymerizing by a chain mechanism, they were able to react reversibly with the (radical) chain carriers, putting them temporarily into a 'dormant' state.
This had the effect of prolonging the lifetime of the growing polymer chains (see above) to values comparable with the duration of the experiment. At any instant most of the radicals are in the inactive (dormant) state, however, they are not irreversibly terminated (‘dead’). Only a small fraction of them are active (growing), yet with a fast rate of interconversion of active and dormant forms, faster than the growth rate, the same probability of growth is ensured for all chains, i.e., on average, all chains are growing at the same rate. Consequently, rather than a most probable distribution, the molecular masses (degrees of polymerization) assume a much narrower Poisson distribution, and a lower dispersity prevails.
IUPAC also recognizes the alternative name, ‘controlled reversible-deactivation radical polymerization’ as acceptable, "provided the controlled context is specified, which in this instance comprises molecular mass and molecular mass distribution." These types of radical polymerizations are not necessarily ‘living’ polymerizations, since chain termination reactions are not precluded".
The adjective ‘controlled’ indicates that a certain kinetic feature of a polymerization or structural aspect of the polymer molecules formed is controlled (or both). The expression ‘controlled polymerization’ is sometimes used to describe a radical or ionic polymerization in which reversible-deactivation of the chain carriers is an essential component of the mechanism and interrupts the propagation that secures control of one or more kinetic features of the polymerization or one or more structural aspects of the macromolecules formed, or both. The expression ‘controlled radical polymerization’ is sometimes used to describe a radical polymerization that is conducted in the presence of agents that lead to e.g. atom-transfer radical polymerization (ATRP), nitroxide-(aminoxyl) mediated polymerization (NMP), or reversible-addition-fragmentation chain transfer (RAFT) polymerization. All these and further controlled polymerizations are included in the class of reversible-deactivation radical polymerizations. Whenever the adjective ‘controlled’ is used in this context the particular kinetic or the structural features that are controlled have to be specified.
Reversible-deactivation polymerization
There is a mode of polymerization referred to as reversible-deactivation polymerization which is distinct from living polymerization, despite some common features. Living polymerization requires a complete absence of termination reactions, whereas reversible-deactivation polymerization may contain a similar fraction of termination as conventional polymerization with the same concentration of active species. Some important aspects of these are compared in the table:
Common features
As the name suggests, the prerequisite of a successful RDRP is fast and reversible activation/deactivation of propagating chains. There are three types of RDRP; namely deactivation by catalyzed reversible coupling, deactivation by spontaneous reversible coupling and deactivation by degenerative transfer (DT). A mixture of different mechanisms is possible; e.g. a transition metal mediated RDRP could switch among ATRP, OMRP and DT mechanisms depending on the reaction conditions and reagents used.
In any RDRP process, the radicals can propagate with the rate coefficient kp by addition of a few monomer units before the deactivation reaction occurs to regenerate the dormant species. Concurrently, two radicals may react with each other to form dead chains with the rate coefficient kt. The rates of propagation and termination between two radicals are not influenced by the mechanism of deactivation or the catalyst used in the system. Thus it is possible to estimate how fast an RDRP can be conducted while preserving chain end functionality.
In addition, other chain breaking reactions such as irreversible chain transfer/termination reactions of the propagating radicals with solvent, monomer, polymer, catalyst, additives, etc. would introduce additional loss of chain end functionality (CEF). The overall rate coefficient of chain breaking reactions besides the direct termination between two radicals is represented as ktx.
In all RDRP methods, the theoretical number average molecular weight of the obtained polymers, Mn, can be defined by the following equation: Mn = Mm([M]0 − [M]t)/[R-X]0,
where Mm is the molecular weight of monomer; [M]0 and [M]t are the monomer concentrations at time 0 and time t; [R-X]0 is the initial concentration of the initiator.
Besides the designed molecular weight, a well controlled RDRP should give polymers with narrow molecular distributions, which can be quantified by Mw/Mn values, and well preserved chain end functionalities.
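The Mn relation above is straightforward to evaluate. The following sketch uses arbitrary example values (a methyl methacrylate-like monomer and a 200:1 monomer-to-initiator ratio at 50% conversion), not data from the article:

```python
# Theoretical number-average molecular weight from monomer conversion
# and the monomer-to-initiator ratio; all numbers are example values.
def theoretical_Mn(M_monomer, conc_M0, conc_Mt, conc_RX0):
    """Mn = Mm * ([M]0 - [M]t) / [R-X]0."""
    return M_monomer * (conc_M0 - conc_Mt) / conc_RX0

# e.g. Mm ~ 100 g/mol, [M]0 = 2.0 M, 50% conversion, [R-X]0 = 0.01 M
Mn = theoretical_Mn(M_monomer=100.1, conc_M0=2.0, conc_Mt=1.0, conc_RX0=0.01)
print(Mn)   # ~10,000 g/mol for these example values
```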
A well controlled RDRP process requires: 1) the reversible deactivation process should be sufficiently fast; 2) the chain breaking reactions which cause the loss of chain end functionalities should be limited; 3) properly maintained radical concentration; 4) the initiator should have proper activity.
Examples
Atom transfer radical polymerization (ATRP)
The initiator of the polymerization is usually an organohalide, and the dormant state is achieved in a metal complex of a transition metal ('radical buffer'). This method is very versatile but requires unconventional initiator systems that are sometimes poorly compatible with the polymerization media.
Nitroxide-mediated polymerization (NMP)
Given certain conditions a homolytic splitting of the C-O bond in alkoxylamines can occur and a stable 2-centre 3 electron N-O radical can be formed that is able to initiate a polymerization reaction. The preconditions for an alkoxylamine suitable to initiate a polymerization are bulky, sterically obstructive substituents on the secondary amine, and the substituent on the oxygen should be able to form a stable radical, e.g. benzyl.
Reversible addition-fragmentation chain transfer (RAFT)
RAFT is one of the most versatile and convenient techniques in this context. The most common RAFT-processes are carried out in the presence of thiocarbonylthio compounds that act as radical buffers.
In ATRP and NMP, reversible deactivation of the propagating radicals takes place; the dormant structures are a halo-compound in ATRP and an alkoxyamine in NMP, both being a sink for radicals and a source at the same time, as described by the corresponding equilibria. RAFT, on the contrary, is controlled by chain-transfer reactions that are in a deactivation-activation equilibrium. Since no radicals are generated or destroyed, an external source of radicals is necessary for initiation and maintenance of the propagation reaction.
Initiation step of a RAFT polymerization
I → I• →(+M)→ … →(+M)→ Pn•
Reversible chain transfer
Reinitiation step
R• →(+M)→ RM• →(+M)→ … →(+M)→ Pm•
Chain equilibration step
Termination step
Pm• + Pn• → Pm−Pn
Catalytic chain transfer and cobalt mediated radical polymerization
Although not a strictly living form of polymerization catalytic chain transfer polymerization must be mentioned as it figures significantly in the development of later forms of living free radical polymerization.
Discovered in the late 1970s in the USSR it was found that cobalt porphyrins were able to reduce the molecular weight during polymerization of methacrylates.
Later investigations showed that the cobalt glyoxime complexes were as effective as the porphyrin catalysts and also less oxygen sensitive. Due to their lower oxygen sensitivity these catalysts have been investigated much more thoroughly than the porphyrin catalysts.
The major products of catalytic chain transfer polymerization are vinyl-terminated polymer chains. One of the major drawbacks of the process is that catalytic chain transfer polymerization does not produce macromonomers but instead produces addition fragmentation agents. When a growing polymer chain reacts with the addition fragmentation agent the radical end-group attacks the vinyl bond and forms a bond. However, the resulting product is so hindered that the species undergoes fragmentation, leading eventually to telechelic species.
These addition fragmentation chain transfer agents do form graft copolymers with styrenic and acrylate species however they do so by first forming block copolymers and then incorporating these block copolymers into the main polymer backbone.
While high yields of macromonomers are possible with methacrylate monomers, low yields are obtained when using catalytic chain transfer agents during the polymerization of acrylate and stryenic monomers. This has been seen to be due to the interaction of the radical centre with the catalyst during these polymerization reactions.
The reversible reaction of the cobalt macrocycle with the growing radical is known as cobalt carbon bonding and in some cases leads to living polymerization reactions.
Iniferter polymerization
An iniferter is a chemical compound that simultaneously acts as initiator, transfer agent, and terminator (hence the name ini-fer-ter) in controlled free radical iniferter polymerizations, the most common is the dithiocarbamate type.
Iodine-transfer polymerization (ITP)
Iodine-transfer polymerization (ITP, also called ITRP), developed by Tatemoto and coworkers in the 1970s gives relatively low polydispersities for fluoroolefin polymers. While it has received relatively little academic attention, this chemistry has served as the basis for several industrial patents and products and may be the most commercially successful form of living free radical polymerization. It has primarily been used to incorporate iodine cure sites into fluoroelastomers.
The mechanism of ITP involves thermal decomposition of the radical initiator (typically persulfate), generating the initiating radical In•. This radical adds to the monomer M to form the species P1•, which can propagate to Pm•. By exchange of iodine from the transfer agent R-I to the propagating radical Pm• a new radical R• is formed and Pm• becomes dormant. This species can propagate with monomer M to Pn•. During the polymerization exchange between the different polymer chains and the transfer agent occurs, which is typical for a degenerative transfer process.
Typically, iodine transfer polymerization uses a mono- or diiodo-perfluoroalkane as the initial chain transfer agent. This fluoroalkane may be partially substituted with hydrogen or chlorine. The energy of the iodine-perfluoroalkane bond is low and, in contrast to iodo-hydrocarbon bonds, its polarization small. Therefore, the iodine is easily abstracted in the presence of free radicals. Upon encountering an iodoperfluoroalkane, a growing poly(fluoroolefin) chain will abstract the iodine and terminate, leaving the now-created perfluoroalkyl radical to add further monomer. But the iodine-terminated poly(fluoroolefin) itself acts as a chain transfer agent. As in RAFT processes, as long as the rate of initiation is kept low, the net result is the formation of a monodisperse molecular weight distribution.
Use of conventional hydrocarbon monomers with iodoperfluoroalkane chain transfer agents has been described. The resulting molecular weight distributions have not been narrow since the energetics of an iodine-hydrocarbon bond are considerably different from that of an iodine-fluorocarbon bond and abstraction of the iodine from the terminated polymer difficult. The use of hydrocarbon iodides has also been described, but again the resulting molecular weight distributions were not narrow.
Preparation of block copolymers by iodine-transfer polymerization was also described by Tatemoto and coworkers in the 1970s.
Although use of living free radical processes in emulsion polymerization has been characterized as difficult, all examples of iodine-transfer polymerization have involved emulsion polymerization. Extremely high molecular weights have been claimed.
Listed below are some other less described but to some extent increasingly important living radical polymerization techniques.
Selenium-centered radical-mediated polymerization
Diphenyl diselenide and several benzylic selenides have been explored by Kwon et al. as photoiniferters in polymerization of styrene and methyl methacrylate. Their mechanism of control over polymerization is proposed to be similar to the dithiuram disulfide iniferters. However, their low transfer constants allow them to be used for block copolymer synthesis but give limited control over the molecular weight distribution.
Telluride-mediated polymerization (TERP)
Telluride-mediated polymerization or TERP first appeared to mainly operate under a reversible chain transfer mechanism by homolytic substitution under thermal initiation. However, in a kinetic study it was found that TERP predominantly proceeds by degenerative transfer rather than 'dissociation combination'.
Alkyl tellurides of the structure Z-X-R, where Z = methyl and R = a good free radical leaving group, give the better control for a wide range of monomers, with phenyl tellurides (Z = phenyl) giving poor control. Polymerization of methyl methacrylates is only controlled by ditellurides. The importance of X to chain transfer increases in the series O < S < Se < Te, which makes alkyl tellurides effective in mediating control under thermally initiated conditions and the alkyl selenides and sulfides effective only under photoinitiated polymerization.
Stibine-mediated polymerization
More recently Yamago et al. reported stibine-mediated polymerization, using an organostibine transfer agent with the general structure Z(Z')-Sb-R (where Z= activating group and R= free radical leaving group). A wide range of monomers (styrenics, (meth)acrylics and vinylics) can be controlled, giving narrow molecular weight distributions and predictable molecular weights under thermally initiated conditions. Yamago has also published a patent indicating that bismuth alkyls can also control radical polymerizations via a similar mechanism.
Copper mediated polymerization
More reversible-deactivation radical polymerizations are known to be catalysed by copper.
References
Polymer chemistry
Free radical reactions
Polymerization reactions | Reversible-deactivation radical polymerization | [
"Chemistry",
"Materials_science",
"Engineering"
] | 3,429 | [
"Free radical reactions",
"Materials science",
"Organic reactions",
"Polymer chemistry",
"Polymerization reactions"
] |
36,423,257 | https://en.wikipedia.org/wiki/Aerographite | Aerographite is a synthetic foam consisting of a porous interconnected network of tubular carbon. With a density of 180 g/m3 it is one of the lightest structural materials ever created. It was developed jointly by a team of researchers at the University of Kiel and the Technical University of Hamburg in Germany, and was first reported in a scientific journal in June 2012.
Structure and properties
Aerographite is a black freestanding material that can be produced in various shapes occupying a volume of up to several cubic centimeters. It consists of a seamless interconnected network of carbon tubes that have micron-scale diameters and a wall thickness of about 15 nm. Because of the relatively lower curvature and larger wall thickness, these walls differ from the graphene-like shells of carbon nanotubes and resemble vitreous carbon in their properties. These walls are often discontinuous and contain wrinkled areas that improve the elastic properties of aerographite. The carbon bonding in aerographite has an sp2 character, as confirmed by electron energy loss spectroscopy and electrical conductivity measurements. Upon external compression, the conductivity increases, along with material density, from ~0.2 S/m at 0.18 mg/cm3 to 0.8 S/m at 0.2 mg/cm3. The conductivity is higher for a denser material, 37 S/m at 50 mg/cm3.
Owing to its interconnected tubular network structure, aerographite resists tensile forces much better than other carbon foams as well as silica aerogels. It sustains extensive elastic deformations and has a very low Poisson's ratio. A complete shape recovery of a 3-mm-tall sample after it was compressed down to 0.1 mm is possible. Its ultimate tensile strength (UTS) depends on material density and is about 160 kPa at 8.5 mg/cm3 and 1 kPa at 0.18 mg/cm3; in comparison, the strongest silica aerogels have a UTS of 16 kPa at 100 mg/cm3. The Young's modulus is ca. 15 kPa at 0.2 mg/cm3 in tension, but is much lower in compression, increasing from 1 kPa at 0.2 mg/cm3 to 7 kPa at 15 mg/cm3. The density given by the authors is based on a mass measurement and the determination of the outer volume of the synthetic foams, as is usually done also for other structures.
Aerographite is superhydrophobic, thus its centimeter-sized samples repel water; they are also rather sensitive to electrostatic effects and spontaneously jump to charged objects.
Synthesis
Common aspects of synthesis:
In aerographite's chemical vapor deposition (CVD) process, metal oxides were shown in 2012 to be a suitable template for the deposition of graphitic structures. The templates can be removed in situ. The basic mechanism is the reduction of the metal oxide to a metallic constituent, the nucleation of carbon in and on top of the metal, and the simultaneous evaporation of the metal component.
Requirements for the metal oxides are: a low activation energy for chemical reduction, a metal phase, which can nucleate graphite, a low evaporation point of metal phase (ZnO, SnO).
From engineering perspective, the developed CVD process enables the use of ceramic powder processing (use of custom particles and sintering bridges) for creation of templates for 3D carbon via CVD. Key advantages compared to commonly used metal templates are: shape variety of particle shapes, the creation of sintering bridges and the removal without acids.
Originally demonstrated on just μm-sized meshed graphite networks, the CVD mechanism had been adopted after 2014 by other scientists to create nm-sized carbon structures.
Details specific to reference:
Aerographite is produced by chemical vapor deposition, using a ZnO template. The template consists of micron-thick rods, often in the shape of multipods, that can be synthesized by mixing comparable amounts of Zn and polyvinyl butyral powders and heating the mixture at 900 °C. The aerographite synthesis is carried out at ~760 °C, under an argon gas flow, to which toluene vapors are injected as a carbon source. A thin (~15 nm), discontinuous layer of carbon is deposited on ZnO which is then etched away by adding hydrogen gas to the reaction chamber. Thus the remaining carbon network closely follows the morphology of the original ZnO template. In particular, the nodes of the aerographite network originate from the joints of the ZnO multipods.
Potential applications
Aerographite electrodes have been tested in an electric double-layer capacitor (EDLC, also known as supercapacitor) and endured the mechanical shocks related to loading-unloading cycles and crystallization of the electrolyte (that occurs upon evaporation of the solvent). Their specific energy of 1.25 Wh/kg is comparable to that of carbon nanotube electrodes (~2.3 Wh/kg).
Space travel
Because aerographite is both black and light, it was proposed as a light-sail material. Simulations show that a 1 kg spacecraft with an aerographite solar sail could reach Mars in 26 days.
Separately, it was proposed to release 1 μm particles from the solar distance reached by the Parker Solar Probe. The solar wind would accelerate them to over 2% of lightspeed, or about 6,000 km/s. A steady stream of such pellets could be used by plasma magnet propulsion systems to accelerate payloads to 6% of lightspeed, or about 18,000 km/s.
See also
Aerogels
Graphene
Metallic microlattice
References
External links
Nanomaterials
Chemical vapor deposition
Aerogels | Aerographite | [
"Chemistry",
"Materials_science"
] | 1,189 | [
"Foams",
"Chemical vapor deposition",
"Aerogels",
"Nanomaterials",
"Nanotechnology"
] |
28,412,014 | https://en.wikipedia.org/wiki/Voronoi%20pole | In computational geometry, the positive and negative Voronoi poles of a cell in a Voronoi diagram are certain vertices of the diagram, chosen in pairs in each cell of the diagram to be far from the site generating that pair. They have applications in surface reconstruction.
Definition
Let Vor(S) be the Voronoi diagram for a set of sites S, and let Vp be the Voronoi cell of Vor(S) corresponding to a site p in S. If Vp is bounded, then its positive pole is the vertex of the boundary of Vp that has maximal distance to the point p. If the cell is unbounded, then a positive pole is not defined.
Furthermore, let u be the vector from p to the positive pole, or, if the cell is unbounded, let u be a vector in the average direction of all unbounded Voronoi edges of the cell. The negative pole is then the Voronoi vertex v in Vp with the largest distance to p such that the vector u and the vector from p to v make an angle larger than π/2.
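For a bounded cell, both poles can be read directly off a computed Voronoi diagram. The following sketch uses SciPy; the helper function and variable names are illustrative rather than taken from any reference implementation, and unbounded cells are simply skipped instead of being handled via the average edge direction described above.

```python
# Sketch: locating the positive and negative poles of a bounded Voronoi cell.
import numpy as np
from scipy.spatial import Voronoi

def poles_of_cell(vor, site_index):
    """Return (positive pole, negative pole) of one site's cell,
    assuming the cell is bounded (no vertex at infinity)."""
    region = vor.regions[vor.point_region[site_index]]
    if -1 in region or len(region) == 0:
        return None, None                     # unbounded cell: positive pole undefined
    p = vor.points[site_index]
    verts = vor.vertices[region]
    dists = np.linalg.norm(verts - p, axis=1)
    p_plus = verts[np.argmax(dists)]          # farthest vertex of the cell: positive pole
    u = p_plus - p                            # vector from the site to the positive pole
    # Negative pole: farthest vertex whose direction from p makes an angle
    # larger than 90 degrees with u (dot product < 0).
    mask = (verts - p) @ u < 0.0
    if not mask.any():
        return p_plus, None
    opposite = verts[mask]
    p_minus = opposite[np.argmax(np.linalg.norm(opposite - p, axis=1))]
    return p_plus, p_minus

rng = np.random.default_rng(0)
vor = Voronoi(rng.random((30, 2)))
print(poles_of_cell(vor, 10))                 # poles of the cell of site 10, if bounded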
History and application
The poles were introduced in 1998 in two papers by Nina Amenta, Marshall Bern, and Manolis Kellis, for the problem of surface reconstruction. As they showed, any smooth surface that is sampled with sampling density inversely proportional to its curvature can be accurately reconstructed, by constructing the Delaunay triangulation of the combined set of sample points and their poles, and then removing certain triangles that are nearly parallel to the line segments between pairs of nearby poles.
References
Computational geometry | Voronoi pole | [
"Mathematics"
] | 297 | [
"Computational geometry",
"Computational mathematics"
] |
28,412,183 | https://en.wikipedia.org/wiki/Projector | A projector or image projector is an optical device that projects an image (or moving images) onto a surface, commonly a projection screen. Most projectors create an image by shining a light through a small transparent lens, but some newer types of projectors can project the image directly, by using lasers. A virtual retinal display, or retinal projector, is a projector that projects an image directly on the retina instead of using an external projection screen.
The most common type of projector used today is called a video projector. Video projectors are digital replacements for earlier types of projectors such as slide projectors and overhead projectors. These earlier types of projectors were mostly replaced with digital video projectors throughout the 1990s and early 2000s, but old analog projectors are still used at some places. The newest types of projectors are handheld projectors that use lasers or LEDs to project images.
Movie theaters used a type of projector called a movie projector, nowadays mostly replaced with digital cinema video projectors.
Different projector types
Projectors can be roughly divided into three categories, based on the type of input. Some of the listed projectors were capable of projecting several types of input. For instance: video projectors were basically developed for the projection of prerecorded moving images, but are regularly used for still images in PowerPoint presentations and can easily be connected to a video camera for real-time input. The magic lantern is best known for the projection of still images, but was capable of projecting moving images from mechanical slides since its invention and was probably at its peak of popularity when used in phantasmagoria shows to project moving images of ghosts.
Real-time
Camera obscura
Concave mirror
Opaque projector
Overhead projector
Document camera
Shadow projector
Still images
Slide projector
Large-format slide projector
Magic lantern
Magic mirror
Steganographic mirror (see below for details)
Enlarger (not for direct viewing, but for the production of photographic prints)
Moving images
Movie projector
Mini portable home theatres projector
Video projector
Handheld projector
Virtual retinal display
Revolving lanterns (see below for details)
History
There probably existed quite a few other types of projectors than the examples described below, but evidence is scarce and reports are often unclear about their nature. Spectators did not always provide the details needed to differentiate between, for instance, a shadow play and a lantern projection. Many did not understand the nature of what they had seen and few had ever seen other comparable media. Projections were often presented or perceived as magic or even as religious experiences, with most projectionists unwilling to share their secrets. Joseph Needham sums up some possible projection examples from China in his 1962 book series Science and Civilization in China.
Prehistory to 1100
Shadow play
The earliest projection of images was most likely done in primitive shadowgraphy dating back to prehistory. Shadow play usually does not involve a projection device, but can be seen as a first step in the development of projectors. It evolved into more refined forms of shadow puppetry in Asia, where it has a long history in Indonesia (records relating to Wayang since 840 CE), Malaysia, Thailand, Cambodia, China (records since around 1000 CE), India and Nepal.
Camera obscura
Projectors share a common history with cameras in the camera obscura. Camera obscura (Latin for "dark room") is the natural optical phenomenon that occurs when an image of a scene at the other side of a screen (or for instance a wall) is projected through a small hole in that screen to form an inverted image (left to right and upside down) on a surface opposite to the opening. The oldest known record of this principle is a description by Han Chinese philosopher Mozi (ca. 470 to ca. 391 BC). Mozi correctly asserted that the camera obscura image is inverted because light travels in straight lines.
In the early 11th century, Arab physicist Ibn al-Haytham (Alhazen) described experiments with light through a small opening in a darkened room and realized that a smaller hole provided a sharper image.
Chinese magic mirrors
The oldest known objects that can project images are Chinese magic mirrors. The origins of these mirrors have been traced back to the Chinese Han dynasty (206 BC – 24 AD) and are also found in Japan. The mirrors were cast in bronze with a pattern embossed at the back and a mercury amalgam laid over the polished front. The pattern on the back of the mirror is seen in a projection when light is reflected from the polished front onto a wall or other surface. No trace of the pattern can be discerned on the reflecting surface with the naked eye, but minute undulations on the surface are introduced during the manufacturing process and cause the reflected rays of light to form the pattern. It is very likely that the practice of image projection via drawings or text on the surface of mirrors predates the very refined ancient art of the magic mirrors, but no evidence seems to be available.
Revolving lanterns
Revolving lanterns have been known in China as "trotting horse lamps" [走馬燈] since before 1000 CE. A trotting horse lamp is a hexagonal, cubical or round lantern which on the inside has cut-out silhouettes attached to a shaft with a paper vane impeller on top, rotated by heated air rising from a lamp. The silhouettes are projected on the thin paper sides of the lantern and appear to chase each other. Some versions showed some extra motion in the heads, feet and/or hands of figures by connecting them with a fine iron wire to an extra inner layer that would be triggered by a transversely connected iron wire. The lamp would typically show images of horses and horse-riders.
In France, similar lanterns were known as "lanterne vive" (bright or living lantern) in Medieval times. and as "lanterne tournante" since the 18th century. An early variation was described in 1584 by Jean Prevost in his small octavo book La Premiere partie des subtiles et plaisantes inventions. In his "lanterne", cut-out figures of a small army were placed on a wooden platform rotated by a cardboard propeller above a candle. The figures cast their shadows on translucent, oiled paper on the outside of the lantern. He suggested to take special care that the figures look lively: with horses raising their front legs as if they were jumping and soldiers with drawn swords, a dog chasing a hare, etcetera. According to Prevost barbers were skilled in this art and it was common to see these night lanterns in their shop windows.
A more common version had the figures, usually representing grotesque or devilish creatures, painted on a transparent strip. The strip was rotated inside a cylinder by a tin impeller above a candle. The cylinder could be made of paper or of sheet metal perforated with decorative patterns. Around 1608 Mathurin Régnier mentioned the device in his Satire XI as something used by a patissier to amuse children. Régnier compared the mind of an old nagger with the lantern's effect of birds, monkeys, elephants, dogs, cats, hares, foxes and many strange beasts chasing each other.
John Locke (1632–1704) referred to a similar device when wondering if ideas are formed in the human mind at regular intervals, "not much unlike the images in the inside of a lantern, turned round by the heat of a candle." Related constructions were commonly used as Christmas decorations in England and parts of Europe. A still relatively common type of rotating device that is closely related does not really involve light and shadows, but simply uses candles and an impeller to rotate a ring with tiny figurines standing on top.
Many modern electric versions of this type of lantern use all kinds of colorful transparent cellophane figures which are projected across the walls, especially popular for nurseries.
1100 to 1500
Concave mirrors
The inverted real image of an object reflected by a concave mirror can appear at the focal point in front of the mirror. In a construction with an object at the bottom of two opposing concave mirrors (parabolic reflectors) on top of each other, the top one with an opening in its center, the reflected image can appear at the opening as a very convincing 3D optical illusion.
The earliest description of projection with concave mirrors has been traced back to a text by French author Jean de Meun in his part of Roman de la Rose (circa 1275). A theory known as the Hockney-Falco thesis claims that artists used either concave mirrors or refractive lenses to project images onto their canvas/board as a drawing/painting aid as early as circa 1430.
It has also been thought that some encounters with spirits or gods since antiquity may have been conjured up with (concave) mirrors.
Fontana's lantern
Around 1420 the Venetian scholar and engineer Giovanni Fontana included a drawing of a person with a lantern projecting an image of a demon in his book about mechanical instruments "Bellicorum Instrumentorum Liber". The Latin text "Apparentia nocturna ad terrorem videntium" (Nocturnal appearance to frighten spectators)" clarifies its purpose, but the meaning of the undecipherable other lines is unclear. The lantern seems to simply have the light of an oil lamp or candle go through a transparent cylindrical case on which the figure is drawn to project the larger image, so it probably could not project an image as clearly defined as Fontana's drawing suggests.
Possible 15th century image projector
In 1437 Italian humanist author, artist, architect, poet, priest, linguist, philosopher and cryptographer Leon Battista Alberti is thought to have possibly projected painted pictures from a small closed box with a small hole, but it is unclear whether this actually was a projector or rather a type of show box with transparent pictures illuminated from behind and viewed through the hole.
1500 to 1700
16th to early 17th century
Leonardo da Vinci is thought to have had a projecting lantern - with a condensing lens, candle and chimney - based on a small sketch from around 1515.
In his Three Books of Occult Philosophy (1531–1533) Heinrich Cornelius Agrippa claimed that it was possible to project "images artificially painted, or written letters" onto the surface of the Moon with the means of moonbeams and their "resemblances being multiplied in the air". Pythagoras would have often performed this trick.
In 1589 Giambattista della Porta published about the ancient art of projecting mirror writing in his book Magia Naturalis.
Dutch inventor Cornelis Drebbel, who is a likely inventor of the microscope, is thought to have had some kind of projector that he used in magical performances. In a 1608 letter he described the many marvelous transformations he performed and the apparitions that he summoned by the means of his new invention based on optics. It included giants that rose from the earth and moved all their limbs very lifelike. The letter was found in the papers of his friend Constantijn Huygens, father of the likely inventor of the magic lantern Christiaan Huygens.
Helioscope
In 1612 Italian mathematician Benedetto Castelli wrote to his mentor, the Italian astronomer, physicist, engineer, philosopher and mathematician Galileo Galilei about projecting images of the sun through a telescope (invented in 1608) to study the recently discovered sunspots. Galilei wrote about Castelli's technique to the German Jesuit priest, physicist and astronomer Christoph Scheiner.
From 1612 to at least 1630 Christoph Scheiner would keep on studying sunspots and constructing new telescopic solar projection systems. He called these "Heliotropii Telioscopici", later contracted to helioscope.
Steganographic mirror
The 1645 first edition of German Jesuit scholar Athanasius Kircher's book Ars Magna Lucis et Umbrae included a description of his invention, the steganographic mirror: a primitive projection system with a focusing lens and text or pictures painted on a concave mirror reflecting sunlight, mostly intended for long distance communication. He saw limitations in the increase of size and diminished clarity over a long distance and expressed his hope that someone would find a method to improve on this. Kircher also suggested projecting live flies and shadow puppets from the surface of the mirror. The book was quite influential and inspired many scholars, probably including Christiaan Huygens who would invent the magic lantern. Kircher was often credited as the inventor of the magic lantern, although in his 1671 edition of Ars Magna Lucis et Umbrae Kircher himself credited Danish mathematician Thomas Rasmussen Walgensten for the magic lantern, which Kircher saw as a further development of his own projection system.
Although Athanasius Kircher claimed the Steganographic mirror as his own invention and wrote not to have read about anything like it, it has been suggested that Rembrandt's 1635 painting of "Belshazzar's Feast" depicts a steganographic mirror projection with God's hand writing Hebrew letters on a dusty mirror's surface.
In 1654 Belgian Jesuit mathematician André Tacquet used Kircher's technique to show the journey from China to Belgium of Italian Jesuit missionary Martino Martini. It is sometimes reported that Martini lectured throughout Europe with a magic lantern which he might have imported from China, but there's no evidence that anything other than Kircher's technique was used.
Magic lantern
By 1659 Dutch scientist Christiaan Huygens had developed the magic lantern, which used a concave mirror to reflect and direct as much of the light of a lamp as possible through a small sheet of glass on which was the image to be projected, and onward into a focusing lens at the front of the apparatus to project the image onto a wall or screen (Huygens' apparatus actually used two additional lenses). He neither published nor publicly demonstrated his invention, as he thought it was too frivolous.
The magic lantern became a very popular medium for entertainment and educational purposes in the 18th and 19th century. This popularity waned after the introduction of cinema in the 1890s. The magic lantern remained a common medium until slide projectors came into widespread use during the 1950s.
1700 to 1900
Solar microscope
A few years before his death in 1736 Polish-German-Dutch physicist Daniel Gabriel Fahrenheit reportedly constructed a solar microscope, which was a combination of the compound microscope with camera obscura projection. It needed bright sunlight as a light source to project a clear magnified image of transparent objects. Fahrenheit's instrument may have been seen by German physician Johann Nathanael Lieberkühn who introduced the instrument in England, where optician John Cuff improved it with a stationary optical tube and an adjustable mirror. In 1774 English instrument maker Benjamin Martin introduced his "Opake Solar Microscope" for the enlarged projection of opaque objects. He claimed:
The solar microscope was employed in experiments with photosensitive silver nitrate by Thomas Wedgwood in collaboration with Humphry Davy in making the first, but impermanent, photographic enlargements. Their discoveries, regarded as the earliest deliberate and successful form of photography, were published in June 1802 by Davy in his An Account of a Method of Copying Paintings upon Glass, and of Making Profiles, by the Agency of Light upon Nitrate of Silver. Invented by T. Wedgwood, Esq. With Observations by H. Davy in the first issue of the Journals of the Royal Institution of Great Britain.
Opaque projectors
Swiss mathematician, physicist, astronomer, logician and engineer Leonhard Euler demonstrated an opaque projector, now commonly known as an episcope, around 1756. It could project a clear image of opaque images and (small) objects.
French scientist Jacques Charles is thought to have invented the similar "megascope" in 1780. He used it for his lectures. Around 1872 Henry Morton used an opaque projector in demonstrations for huge audiences, for example in the Philadelphia Opera House which could seat 3500 people. His machine did not use a condenser or reflector, but used an oxyhydrogen lamp close to the object in order to project huge clear images.
Solar camera
See main article: Solar camera
Known equally, though later, as a solar enlarger, the solar camera is a photographic application of the solar microscope and an ancestor of the darkroom enlarger, and was used, mostly by portrait photographers and as an aid to portrait artists, in the mid-to-late 19th century to make photographic enlargements from negatives using the Sun as a light source powerful enough to expose the then available low-sensitivity photographic materials. It was superseded in the 1880s when other light sources, including the incandescent bulb, were developed for the darkroom enlarger and materials became ever more photo-sensitive.
20th century to present day
In the early and middle parts of the 20th century, low-cost opaque projectors were produced and marketed as a toy for children. The light source in early opaque projectors was often limelight, with incandescent light bulbs and halogen lamps taking over later. Episcopes are still marketed as artists' enlargement tools to allow images to be traced on surfaces such as prepared canvas.
In the late 1950s and early 1960s, overhead projectors began to be widely used in schools and businesses. The first overhead projector was used for police identification work. It used a celluloid roll over a 9-inch stage allowing facial characteristics to be rolled across the stage. The United States military in 1940 was the first to use it in quantity for training.
From the 1950s to the 1990s slide projectors for 35 mm photographic positive film slides were common for presentations and as a form of entertainment; family members and friends would occasionally gather to view slideshows, typically of vacation travels.
Complex Multi-image shows of the 1970s to 1990s, purposed usually for marketing, promotion or community service or artistic displays, used 35mm and 46mm transparency slides (diapositives) projected by single or multiple slide projectors onto one or more screens in synchronization with an audio voice-over and/or music track controlled by a pulsed-signal tape or cassette. Multi-image productions are also known as multi-image slide presentations, slide shows and diaporamas and are a specific form of multimedia or audio-visual production.
Digital cameras had become commercialised by 1990, and in 1997 Microsoft PowerPoint was updated to include image files, accelerating the transition from 35 mm slides to digital images, and thus digital projectors, in pedagogy and training. Production of all Kodak Carousel slide projectors ceased in 2004, and in 2009 manufacture and processing of Kodachrome film was discontinued.
In popular culture
In Mad Men's first series, the final episode presents the protagonist Don Draper's presentation (via slide projector) of a plan to market the Kodak slide carrier as a 'carousel'.
See also
Planetarium projector
Projector phone
Hockney-Falco thesis
Slide show
Multi-image
Enlarger
Audio-visual
Notes and references
Display technology
Optical devices | Projector | [
"Materials_science",
"Engineering"
] | 3,917 | [
"Glass engineering and science",
"Electronic engineering",
"Optical devices",
"Display technology"
] |
28,413,819 | https://en.wikipedia.org/wiki/Poly%28p-phenylene%20oxide%29 | Poly(p-phenylene oxide) (PPO), poly(p-phenylene ether) (PPE), poly(oxy-2,6-dimethyl-1,4-phenylene), often referred to simply as polyphenylene oxide, is a high-temperature thermoplastic with the general formula (C8H8O)n. It is rarely used in its pure form due to difficulties in processing. It is mainly used as a blend with polystyrene, high-impact styrene-butadiene copolymer or polyamide. PPO is a registered trademark of SABIC Innovative Plastics B.V. under which various polyphenylene ether resins are sold.
History
Polyphenylene ether was discovered in 1959 by Allan Hay, and was commercialized by General Electric in 1960.
While it was one of the cheapest high-temperature-resistant plastics, processing was difficult, and the impact and heat resistance gradually decreased with time. Mixing it with polystyrene in any ratio could compensate for these disadvantages. In the 1960s, modified PPE came onto the market under the trademark Noryl.
Properties
PPE is an amorphous high-performance plastic. The glass transition temperature is 215 °C, but it can be varied by mixing with polystyrene. Through modification and the incorporation of fillers such as glass fibers, the properties can be extensively modified.
Applications
PPE blends are used for structural parts, electronics, household and automotive items that depend on high heat resistance, dimensional stability and accuracy. They are also used in medicine for sterilizable instruments made of plastic. The PPE blends are characterized by hot water resistance with low water absorption, high impact strength, halogen-free fire protection and low density.
This plastic is processed by injection molding or extrusion; depending on the type, the processing temperature is 260–300 °C. The surface can be printed, hot-stamped, painted or metallized. Welds are possible by means of heating element, friction or ultrasonic welding. It can be glued with halogenated solvents or various adhesives.
This plastic is also used to produce air separation membranes for generating nitrogen. The PPO is spun into a hollow fiber membrane with a porous support layer and a very thin outer skin. The permeation of oxygen occurs from inside to out across the thin outer skin with an extremely high flux. Due to the manufacturing process, the fiber has excellent dimensional stability and strength. Unlike hollow fiber membranes made from polysulfone, the aging process of the fiber is relatively quick so that air separation performance remains stable throughout the life of the membrane. PPO makes the air separation performance suitable for low-temperature applications where polysulfone membranes require heated air to increase permeation.
Production from natural products
Natural phenols can be enzymatically polymerized. Laccase and peroxidase induce the polymerization of syringic acid to give a poly(1,4-phenylene oxide) bearing a carboxylic acid at one end and a phenolic hydroxyl group at the other.
References
Translated from the article Polyphenylenether on the German Wikipedia.
External links
Molecular electronics
Organic polymers
Polyethers
Organic semiconductors
Engineering plastic
Thermoplastics
Diphenyl ethers | Poly(p-phenylene oxide) | [
"Chemistry",
"Materials_science"
] | 702 | [
"Organic polymers",
"Molecular physics",
"Semiconductor materials",
"Molecular electronics",
"Organic compounds",
"Nanotechnology",
"Organic semiconductors"
] |
28,414,797 | https://en.wikipedia.org/wiki/Cell%20Transmission%20Model | Cell Transmission Model (CTM) is a popular numerical method proposed by Carlos Daganzo to solve the kinematic wave equation. Lebacque later showed that CTM is the first order discrete Godunov approximation.
Background
CTM predicts macroscopic traffic behavior on a given corridor by evaluating the flow and density at finite number of intermediate points at different time steps. This is done by dividing the corridor into homogeneous sections (hereafter called cells) and numbering them i=1, 2… n starting downstream. The length of the cell is chosen such that it is equal to the distance traveled by free-flow traffic in one evaluation time step. The traffic behavior is evaluated every time step starting at t=1,2…m. Initial and boundary conditions are required to iteratively evaluate each cell.
The flow across the cell boundaries is determined from μ(k) and λ(k), two monotonic functions that uniquely define the fundamental diagram as shown in Figure 1: the flow across a boundary is the minimum of the flow the upstream cell can send and the flow the downstream cell can receive, both read off the fundamental diagram. The density of the cells is updated based on the conservation of inflows and outflows. Thus, the flow and density are derived as:
k_i(t+1) = k_i(t) + (Δt/l)·[q_in,i(t) − q_out,i(t)]
where k_i(t) is the density of cell i at time step t, q_in,i(t) and q_out,i(t) are the flows entering and leaving cell i during that step, l is the cell length and Δt the time step. Similarly, k_j, q_m, w, and v_f represent the jam density, capacity, backward wave speed, and free-flow speed of the fundamental diagram, respectively; for a triangular diagram, the sending and receiving flows of a cell with density k are min{v_f·k, q_m} and min{q_m, w·(k_j − k)}.
CTM produces results consistent with the continuous kinematic wave equation when the density specified in the initial condition changes gradually. However, CTM replicates discontinuities and shocks that span a few cells of space but move at the correct speed predicted by the kinematic wave equation.
It was observed that as time passes, the CTM approximations result in spreading the shock to a growing number of cells. To eliminate spreading of certain shocks, Daganzo (1994) proposed a modification to the CTM that ensures shocks separating a lower upstream density and greater downstream density do not spread.
CTM is robust and the simulation results do not depend on the order in which the cells are evaluated because the flow entering a cell is dependent only on the current conditions within the cell and is unrelated to the flow exiting the cell. Thus, CTM can be applied for the analysis of complex networks and non-concave fundamental diagrams.
Implementation and Example
Consider a 2.5 kilometer homogeneous arterial segment that follows a triangular fundamental diagram as shown in figure 2.
Figure 2. Fundamental diagram for the example
This corridor is divided into 30 cells and is simulated for 480 seconds with a time step of 6 seconds. The initial and boundary conditions are specified as follows:
K(x,0) = 48 for all x
K(0,t) = 48 for all t
K(2.5,t) = 0 for all t
The corridor has two signals, located at mileposts 1 and 2 measured from the upstream end. The signals have a split of 30 seconds and a cycle length of 60 seconds. With this information, it is a simple matter of iterating equations (1) for all the cells and time steps. Figure 3 and Table 1 show the spatial and temporal distribution of density for the case of an offset of 0 seconds.
Table 1: Density values for the example with offset of 0 seconds
Currently, some software tools for evaluating traffic or optimizing traffic signal settings (for example, TRANSYT-14 and SIGMIX) apply CTM as their macroscopic traffic simulator. In TRANSYT-14 (not to be confused with the TRANSYT-7F releases), the user may choose among traffic models including CTM and platoon dispersion to model the traffic dynamics; in SIGMIX, CTM is used as the simulator by default.
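As a rough illustration of the update rule described above, the sketch below simulates a homogeneous corridor with a triangular fundamental diagram. The parameter values, the boundary handling and the cell numbering (here in the driving direction) are assumptions made for illustration, and the traffic signals of the worked example are omitted.

```python
# Minimal cell transmission model sketch for a homogeneous corridor with a
# triangular fundamental diagram. All numbers are illustrative assumptions.
import numpy as np

v_f   = 50 / 3.6      # free-flow speed [m/s]
w     = 20 / 3.6      # backward wave speed [m/s]
k_jam = 0.15          # jam density [veh/m]
q_max = v_f * w * k_jam / (v_f + w)   # capacity of the triangular diagram [veh/s]
dt    = 6.0           # time step [s]
dx    = v_f * dt      # cell length: free-flow traffic crosses one cell per step

def demand(k):        # flow a cell can send downstream
    return np.minimum(v_f * k, q_max)

def supply(k):        # flow a cell can receive from upstream
    return np.minimum(q_max, w * (k_jam - k))

k = np.full(30, 0.048)        # initial density [veh/m] (48 veh/km) in every cell
inflow_demand = q_max         # constant demand at the upstream boundary

for t in range(80):           # 80 steps of 6 s = 480 s
    q = np.empty(k.size + 1)  # flows across the cell boundaries
    q[0]    = min(inflow_demand, supply(k[0]))      # upstream boundary
    q[1:-1] = np.minimum(demand(k[:-1]), supply(k[1:]))
    q[-1]   = demand(k[-1])                         # free outflow downstream
    k += dt / dx * (q[:-1] - q[1:])                 # conservation update

print(k.round(4))
```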
Lagged Cell Transmission Model
Since the original cell transmission model is a first-order approximation, Daganzo proposed the Lagged Cell Transmission Model (LCTM), which is more accurate than the former. This enhanced model uses a lagged downstream density (taken p time steps earlier than the current time) in the receiving function. If a triangular fundamental diagram is used and the lag is chosen appropriately, this improved method is second-order accurate.
When the highway is discretized with variable cell lengths, one should also introduce a forward lag for the sending function to preserve the good properties of the LCTM. The choices of backward lag and forward lag are given by:
backward lag
forward lag
where d and ε are the spatial and temporal steps of the cell, v is the maximum free-flow speed, and w is the maximum backward-propagating wave speed.
Newell’s Exact Method
Newell proposed an exact method to solve the kinematic wave equation based on cumulative curves only at either end of the corridor, without evaluating any intermediate points.
Since the density is constant along the characteristics, if one knows the cumulative curve A(x0,t0) and the flow q(x0,t0) at a boundary, one can construct the three-dimensional surface (A,x,t). However, if characteristics intersect, the surface is a multi-valued function of x and t, depending on the initial and boundary conditions it is derived from. In such a case, a unique and continuous solution is obtained by taking the lower envelope of the multi-valued solutions derived from the different boundary and initial conditions.
However, the limitation of this method is that it cannot be used for non-concave fundamental diagrams.
Newell proposed the method, but it was Daganzo who, using variational theory, proved that the lower envelope is the unique solution.
References
Kinematics | Cell Transmission Model | [
"Physics",
"Technology"
] | 1,115 | [
"Machines",
"Kinematics",
"Physical phenomena",
"Classical mechanics",
"Physical systems",
"Motion (physics)",
"Mechanics"
] |
28,418,653 | https://en.wikipedia.org/wiki/Structural%20acoustics | Structural acoustics is the study of the mechanical waves in structures and how they interact with and radiate into adjacent media. The field of structural acoustics is often referred to as vibroacoustics in Europe and Asia. People that work in the field of structural acoustics are known as structural acousticians. The field of structural acoustics can be closely related to a number of other fields of acoustics including noise, transduction, underwater acoustics, and physical acoustics.
Vibrations in structures
Compressional and shear waves (isotropic, homogeneous material)
Compressional waves (often referred to as longitudinal waves) expand and contract in the same direction (or the opposite direction) as the wave motion. The one-dimensional wave equation
∂²u/∂x² = (1/c_L²) ∂²u/∂t²
dictates the motion of the wave in the x direction, where u is the displacement and c_L is the longitudinal wave speed. This has the same form as the acoustic wave equation in one dimension. c_L is determined by the bulk modulus B and the density ρ of the structure according to c_L = √(B/ρ).
When two dimensions of the structure are small with respect to the wavelength (such a structure is commonly called a beam), the wave speed is dictated by Young's modulus E instead of the bulk modulus, c_L′ = √(E/ρ), and such waves are consequently slower than in infinite media.
Shear waves occur due to the shear stiffness and follow a similar equation, but with the displacement occurring in the transverse direction, perpendicular to the wave motion. The shear wave speed, c_S = √(G/ρ), is governed by the shear modulus G, which is less than E and B, making shear waves slower than longitudinal waves.
Bending waves in beams and plates
Most sound radiation is caused by bending (or flexural) waves, that deform the structure transversely as they propagate. Bending waves are more complicated than compressional or shear waves and depend on material properties as well as geometric properties. They are also dispersive since different frequencies travel at different speeds.
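A rough numerical illustration of the wave types discussed above is sketched below for a 3 mm steel plate. The material values are nominal textbook numbers and the expressions are the standard thin-plate formulas, so the figures are indicative only.

```python
# Wave speeds in a 3 mm steel plate (nominal material values).
import math

E   = 210e9      # Young's modulus [Pa]
nu  = 0.3        # Poisson's ratio
rho = 7850.0     # density [kg/m^3]
h   = 0.003      # plate thickness [m]

G   = E / (2 * (1 + nu))                  # shear modulus
c_l = math.sqrt(E / (rho * (1 - nu**2)))  # quasi-longitudinal speed in a thin plate
c_s = math.sqrt(G / rho)                  # shear wave speed

# Bending waves are dispersive: their phase speed grows with frequency.
D = E * h**3 / (12 * (1 - nu**2))         # plate bending stiffness per unit width
for f in (100.0, 1000.0, 10000.0):
    omega = 2 * math.pi * f
    c_b = math.sqrt(omega) * (D / (rho * h)) ** 0.25
    print(f"{f:7.0f} Hz: bending wave speed {c_b:6.0f} m/s")

print(f"quasi-longitudinal: {c_l:.0f} m/s, shear: {c_s:.0f} m/s")
```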
Modeling vibrations
Finite element analysis can be used to predict the vibration of complex structures. A finite element computer program will assemble the mass, stiffness, and damping matrices based on the element geometries and material properties, and solve for the vibration response based on the loads applied.
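As a minimal illustration of the assemble-and-solve procedure just described, the sketch below computes the first natural frequencies of a clamped-free steel bar in axial vibration using two-node finite elements; the discretisation, material values and variable names are illustrative assumptions, and damping is omitted.

```python
# Assemble mass and stiffness matrices for a clamped-free bar and solve the
# undamped free-vibration eigenvalue problem K*phi = omega^2 * M*phi.
import numpy as np
from scipy.linalg import eigh

E, rho, A, L = 210e9, 7850.0, 1e-4, 1.0   # steel bar, 1 cm^2 cross-section, 1 m long
n_el = 50                                  # number of elements
le = L / n_el                              # element length

# Element stiffness and consistent mass matrices of a 2-node bar element
k_e = E * A / le * np.array([[1.0, -1.0], [-1.0, 1.0]])
m_e = rho * A * le / 6 * np.array([[2.0, 1.0], [1.0, 2.0]])

n_dof = n_el + 1
K = np.zeros((n_dof, n_dof))
M = np.zeros((n_dof, n_dof))
for e in range(n_el):                      # assemble global matrices
    dofs = np.array([e, e + 1])
    K[np.ix_(dofs, dofs)] += k_e
    M[np.ix_(dofs, dofs)] += m_e

K, M = K[1:, 1:], M[1:, 1:]                # clamp the first node (remove its DOF)
eigvals = eigh(K, M, eigvals_only=True)    # generalized eigenvalue problem
freqs = np.sqrt(eigvals[:3]) / (2 * np.pi)

# Analytical natural frequencies of a clamped-free bar: (2n-1)*c/(4L), c = sqrt(E/rho)
c = np.sqrt(E / rho)
print(freqs, [(2 * n - 1) * c / (4 * L) for n in (1, 2, 3)])
```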
Sound-structure interaction
Fluid-structure Interaction
When a vibrating structure is in contact with a fluid, the normal particle velocities at the interface must be continuous (i.e. the normal velocity of the structure's surface must equal the normal particle velocity of the fluid). This causes some of the energy from the structure to escape into the fluid: some of it radiates away as sound, and some stays near the structure and does not radiate away. For most engineering applications, the numerical simulation of the fluid-structure interactions involved in vibro-acoustics may be achieved by coupling the finite element method and the boundary element method.
See also
Acoustics
Acoustic wave equation
Lamb wave
Linear elasticity
Noise control
Sound
Surface acoustic wave
Wave
Wave equation
References
External links
asa.aip.org —Website of the Acoustical Society of America
Acoustics
Sound | Structural acoustics | [
"Physics"
] | 581 | [
"Classical mechanics",
"Acoustics"
] |
28,419,201 | https://en.wikipedia.org/wiki/Shaft%20%28civil%20engineering%29 | In civil engineering a shaft is an underground vertical or inclined passageway. Shafts are often entered through a manhole and closed by a manhole cover. They are constructed for a number of reasons including:
For the construction of a tunnel
For ventilation of a tunnel or underground structure, aka ventilation shaft
As a drop shaft for a sewerage or water tunnel
For access to a tunnel or underground structure, also as an escape route
Construction
There are a number of methods for the construction of shafts, the most significant being:
The use of sheet piles, diaphragm walls or bored piles to construct a square or rectangular braced shaft
The use of segmental lining installed by underpinning or caisson sunk to form a circular shaft
Incremental excavation with a shotcrete circular or elliptical lining
Incremental excavation supported by shotcrete, rock bolts, cable anchors and steel sets or ribs
Shafts can be sunk either dry or for methods such as the caisson method they can be sunk wet. Sinking a dry shaft means that any water that flows into the excavation is pumped out to leave no significant standing or flowing water in the base of the shaft. When wet sinking a shaft the shaft is allowed to flood and the muck is excavated out of the base of the shaft underwater using a grab on the end of a crane or similar excavation method. Because the shaft is flooded, the lining can not be constructed at the excavation level of the shaft so this method only suits methods where the lining is installed before shaft sinking (such as the use of sheet piles) or where the lining is sunk down with the shaft such as the caisson method.
civil engineering
tunnel construction | Shaft (civil engineering) | [
"Engineering"
] | 331 | [
"Construction",
"Civil engineering"
] |
43,512,319 | https://en.wikipedia.org/wiki/Protein%20M | Protein M (locus MG281) is an immunoglobulin-binding protein originally found on the cell surface of the human pathogenic bacterium Mycoplasma genitalium. It is presumably a universal antibody-binding protein, as it is known to be reactive against all antibody types tested so far. It is capable of preventing the antigen-antibody interaction due to its high binding affinity to any antibody. The Scripps Research Institute announced its discovery in 2014. It was detected from the bacterium while investigating its role in patients with a cancer, multiple myeloma.
Homologous proteins are found in other Mycoplasma bacteria. Mycoplasma pneumoniae, another human pathogen, has a homolog termed IbpM (locus MPN400).
Discovery
Mycoplasma genitalium was discovered in 1980 from two male patients with non-gonococcal urethritis at St Mary's Hospital, Paddington, London. After two years, in 1983, it was identified as a new species. After several years of intense research, it was found to be the cause of sexually transmitted diseases, such as urethritis (inflammation of the urinary tract) both in men and women, and also cervicitis (inflammation of cervix) and pelvic inflammation in women. However, the molecular nature of its pathogenicity remained unknown for three decades.
On 6 February 2014, The Scripps Research Institute announced the discovery of a novel protein, which they named Protein M, from the M. genitalium cell membrane. Scientists identified the protein during investigations into the origin of multiple myeloma, a type of B-cell cancer. To understand long-term M. genitalium infection, Rajesh Grover, a senior staff scientist in the Lerner laboratory, tested antibodies from the blood samples of patients with multiple myeloma against different Mycoplasma species. He found that M. genitalium was particularly reactive with all types of antibodies he tested from 20 patients. The antibody reactivity was found to be due to a previously unknown protein that reacts with all types of human and non-human antibodies available. When they isolated and analysed the protein, they discovered that it was unique both in structure and in biological function. Its structure has no resemblance to any known protein listed in the Protein Data Bank.
Structure and properties
Protein M is about 50 kDa in size and is composed of 556 amino acids. Contrary to the initial hypothesis that the antibody reactions could be an immune response to mass infection with the bacterium, the researchers found that Protein M evolved simply to bind to any antibody it encounters, with particularly high affinity. Through this property the bacterium can effectively evade the immune system of the host. This makes the protein an ideal target for developing new drugs. Rajesh Grover estimated that the protein can bind to an average of 100,000,000 different kinds of antibodies circulating in human blood.
Unlike functionally similar proteins such as Protein A, Protein G, and Protein L, which all contain small, multiple immunoglobulin domains, Protein M has a large domain of 360 amino acid residues that binds primarily to the variable light chain of the immunoglobulin, as well as a binding site called LRR-like motif. In addition, Protein M has a C-terminal domain with 115 amino acid residues that probably protrudes over the antibody binding site. It binds to an antibody at either κ or λ light chains using hydrogen bonds and salt bridges, from backbone atoms and conserved side chains, and some conserved van der Waals interactions with other nonconserved interactions.
References
Bacterial proteins
Sexually transmitted diseases and infections
Immune system | Protein M | [
"Biology"
] | 765 | [
"Immune system",
"Organ systems"
] |
43,514,200 | https://en.wikipedia.org/wiki/R%C3%B6ntgen%20Memorial%20Site | The Röntgen Memorial Site in Würzburg, Germany, is dedicated to the work of the German physicist Wilhelm Conrad Röntgen (1845–1923) and his discovery of X-rays, for which he was granted the first Nobel Prize in physics, in 1901. It contains an exhibition of historical instruments, machines and documents.
Location
The Röntgen Memorial Site is in the foyer, corridors and two laboratory rooms of the former Physics Institute of the University of Würzburg in Röntgenring 8, a building that is now used by the University of Applied Sciences Würzburg-Schweinfurt. The road, where the building lies, was renamed in 1909 from Pleicherring to Röntgenring.
History
On the late Friday evening of 8 November 1895, Röntgen discovered for the first time rays that penetrate solid materials and gave them the name X-rays. He presented the discovery in a lecture and publication, On a new type of rays (Über eine neue Art von Strahlen), on 23 January 1896 at the Physical Medical Society of Würzburg.
During the discussion of this lecture, the anatomist Albert von Kölliker proposed calling these rays Röntgen radiation after their discoverer, a term that is still used in Germany.
Exhibition
The Röntgen Memorial Site gives an insight into the particle physics of the late 19th century. It shows an experimental set-up for cathode rays beside the apparatus of the discovery. An experiment in which X-rays penetrate solid materials is shown in Röntgen's historic laboratory. A separate room shows various X-ray tubes, a medical X-ray machine by Siemens & Halske from 1912 and several original documents. In the foyer, a short German film explains the purpose of the Memorial Site and the life of Röntgen. In the corridor, some personal belongings of Röntgen are displayed to give background information on his personal and historical circumstances.
After remodeling in 2015 the tables and captures of the exhibition are now in English and German language.
Society
The site is managed by the non-profit organisation Kuratorium zur Förderung des Andenkens an Wilhelm Conrad Röntgen in Würzburg e.V.. It offers guided tours to Röntgen's lab.
References
External links
Website of the Röntgen Memorial Site in Würzburg (page written in English and German)
Wilhelm Röntgen
Experimental particle physics
Projectional radiography
University of Würzburg
X-rays | Röntgen Memorial Site | [
"Physics"
] | 504 | [
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Experimental physics",
"Particle physics",
"Experimental particle physics"
] |
29,925,696 | https://en.wikipedia.org/wiki/Kunitz%20domain | Kunitz domains are the active domains of proteins that inhibit the function of protein degrading enzymes or, more specifically, domains of Kunitz-type are protease inhibitors. They are relatively small with a length of about 50 to 60 amino acids and a molecular weight of 6 kDa. Examples of Kunitz-type protease inhibitors are aprotinin (bovine pancreatic trypsin inhibitor, BPTI), Alzheimer's amyloid precursor protein (APP), and tissue factor pathway inhibitor (TFPI). Kunitz STI protease inhibitor, the trypsin inhibitor initially studied by Moses Kunitz, was extracted from soybeans.
Standalone Kunitz domains are used as a framework for the development of new pharmaceutical drugs.
Structure
The structure is a disulfide-rich alpha+beta fold. Bovine pancreatic trypsin inhibitor is an extensively studied model structure. Certain family members are similar to the tick anticoagulant peptide (TAP). This is a highly selective inhibitor of factor Xa in the blood coagulation pathways. TAP molecules are highly dipolar, and are arranged to form a twisted two-stranded antiparallel beta sheet followed by an alpha helix.
The majority of the sequences having this domain belong to the MEROPS inhibitor family I2, clan IB (the Kunitz/bovine pancreatic trypsin inhibitor family); they inhibit proteases of the S1 family and are restricted to the metazoa with a single exception: Amsacta moorei entomopoxvirus, a species of poxvirus. They are short (about 50 to 60 amino acid residues) alpha/beta proteins with few secondary structures. The fold is constrained by three disulfide bonds. The type example for this family is BPTI (or basic protease inhibitor), but the family includes numerous other members, such as snake venom basic protease; mammalian inter-alpha-trypsin inhibitors; trypstatin, a rat mast cell inhibitor of trypsin; a domain found in an alternatively spliced form of Alzheimer's amyloid beta-protein; domains at the C-termini of the alpha-1 and alpha-3 chains of type VI and type VII collagens; tissue factor pathway inhibitor precursor; and Kunitz STI protease inhibitor contained in legume seeds.
Drug development
Kunitz domains are stable as standalone peptides, able to recognise specific protein structures, and also work as competitive protease inhibitors in their free form. These properties have led to attempts at developing biopharmaceutical drugs from Kunitz domains. Candidate domains are selected from molecular libraries containing over 10 million variants with the aid of display techniques like phage display, and can be produced in large scale by genetically engineered organisms.
The first of these drugs to be marketed was the kallikrein inhibitor ecallantide, used for the treatment of hereditary angioedema. It was approved in the United States in 2009. Another example is depelestat, an inhibitor of neutrophil elastase that has undergone Phase II clinical trials for the treatment of acute respiratory distress syndrome in 2006/2007 and has also been described as a potential inhalable cystic fibrosis treatment.
Examples
Human proteins containing this domain include:
AMBP, APLP2, APP
COL6A3, COL7A1, COL28A1
PAPLN,
SPINLW1, SPINT1, SPINT2, SPINT3, SPINT4
TFPI, TFPI2
WFDC6, WFDC8, WFIKKN1, WFIKKN2
Several plant protease inhibitors of the Kunitz family, the Kunitz-STI protein family, include a beta trefoil fold.
References
Protease inhibitors
Protein domains
Antibody mimetics | Kunitz domain | [
"Chemistry",
"Biology"
] | 796 | [
"Antibody mimetics",
"Protein domains",
"Protein classification",
"Molecular biology"
] |
29,930,122 | https://en.wikipedia.org/wiki/Pinnick%20oxidation | The Pinnick oxidation is an organic reaction by which aldehydes can be oxidized into their corresponding carboxylic acids using sodium chlorite (NaClO2) under mild acidic conditions. It was originally developed by Lindgren and Nilsson. The typical reaction conditions used today were developed by G. A. Kraus. H.W. Pinnick later demonstrated that these conditions could be applied to oxidize α,β-unsaturated aldehydes. There exist many different reactions to oxidize aldehydes, but only a few are amenable to a broad range of functional groups. The Pinnick oxidation has proven to be both tolerant of sensitive functionalities and capable of reacting with sterically hindered groups. This reaction is especially useful for oxidizing α,β-unsaturated aldehydes, and another one of its advantages is its relatively low cost.
Mechanism
The proposed reaction mechanism involves chlorous acid as the active oxidant, which is formed under acidic conditions from chlorite.
ClO2− + H2PO4− ⇌ HClO2 + HPO42−
First, the chlorous acid adds to the aldehyde. The resulting structure then undergoes a pericyclic fragmentation in which the aldehyde hydrogen is transferred to an oxygen on the chlorine, with the chlorine group released as hypochlorous acid (HOCl).
Side reactions and scavengers
The HOCl byproduct, itself a reactive oxidizing agent, can be a problem in several ways. It can destroy the NaClO2 reactant:
HOCl + 2ClO2− → 2ClO2 + Cl− + OH−
making it unavailable for the desired reaction. It can also cause other undesired side reactions with the organic materials. For example, HOCl can react with double bonds in the organic reactant or product via a halohydrin formation reaction.
To prevent interference from HOCl, a scavenger is usually added to the reaction to consume the HOCl as it is formed. For example, one can take advantage of the propensity of HOCl to undergo this addition reaction by adding a sacrificial alkene-containing chemical to the reaction mixture. This alternate substrate reacts with the HOCl, preventing the HOCl from undergoing reactions that interfere with the Pinnick reaction itself. 2-Methyl-2-butene is often used in this context:
Resorcinol and sulfamic acid are also common scavenger reagents.
Hydrogen peroxide (H2O2) can be used as an HOCl scavenger whose byproducts do not interfere with the Pinnick oxidation reaction:
HOCl + H2O2 → HCl + O2 + H2O
In a weakly acidic condition, fairly concentrated (35%) H2O2 solution undergoes a rapid oxidative reaction with no competitive reduction reaction of HClO2 to form HOCl.
HClO2 + H2O2 → HOCl + O2 + H2O
Chlorine dioxide reacts rapidly with H2O2 to form chlorous acid.
2ClO2 + H2O2 → 2HClO2 + O2
Also, the formation of oxygen gives a good indication of the progress of the reaction. However, problems sometimes arise due to the formation of singlet oxygen in this reaction, which may oxidize organic materials (e.g. via the Schenck ene reaction). DMSO has been used instead of H2O2 for substrates that do not give good yields with H2O2 alone; mostly electron-rich aldehydes fall under this category (see Scope and limitations below).
Also, solid-supported reagents, such as potassium permanganate on phosphate-buffered silica gel and polymer-supported chlorite, have been prepared and used to convert aldehydes to carboxylic acids without conventional work-up procedures. The reaction involves trapping the products on silica gel as their potassium salts. This procedure therefore facilitates easy removal of neutral impurities by washing with organic solvents.
Scope and limitations
The reaction is highly suited to substrates bearing many functional groups. β-Aryl-substituted α,β-unsaturated aldehydes work well under the reaction conditions. Triple bonds directly linked to aldehyde groups, or in conjugation with other double bonds, can also be subjected to the reaction. Hydroxyl groups, epoxides, benzyl ethers, halides (including iodides) and even stannanes are quite stable under the reaction conditions. The examples of the reactions shown below also show that the stereocenters at the α-carbons remain intact, while double bonds, especially trisubstituted double bonds, do not undergo E/Z-isomerization in the reaction.
Lower yields are obtained for reactions involving aliphatic α,β-unsaturated and more hydrophilic aldehydes. Double bonds and electron-rich aldehyde substrates can lead to chlorination as an alternate reaction. The use of DMSO in these cases gives better yield. Unprotected aromatic amines and pyrroles are not well suited for the reactions either. In particular, chiral α-aminoaldehydes do not react well due to epimerization and because amino groups can be easily transformed to their corresponding N-oxides. Standard protective group approaches, such as the use of t-BOC, are a viable solution to these problems.
Thioethers are also highly susceptible to oxidation. For example, Pinnick oxidation of thioanisaldehyde gives a high yield of carboxylic acid products, but with concomitant conversion of the thioether to the sulfoxide or sulfone.
See also
Lindgren oxidation
References
Name reactions
Organic oxidation reactions | Pinnick oxidation | [
"Chemistry"
] | 1,228 | [
"Name reactions",
"Organic oxidation reactions",
"Organic redox reactions",
"Organic reactions"
] |
29,946,093 | https://en.wikipedia.org/wiki/C17H16O9 | {{DISPLAYTITLE:C17H16O9}}
The molecular formula C17H16O9 (molar mass: 364.30 g/mol, exact mass: 364.0794 u) may refer to:
Quercitannic acid
Bergaptol-O-beta-D-glucopyranoside
Molecular formulas | C17H16O9 | [
"Physics",
"Chemistry"
] | 78 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
29,946,595 | https://en.wikipedia.org/wiki/Chlorophyll%20fluorescence | Chlorophyll fluorescence is light re-emitted by chlorophyll molecules during return from excited to non-excited states. It is used as an indicator of photosynthetic energy conversion in plants, algae and bacteria. Excited chlorophyll dissipates the absorbed light energy by driving photosynthesis (photochemical energy conversion), as heat in non-photochemical quenching or by emission as fluorescence radiation. As these processes are complementary processes, the analysis of chlorophyll fluorescence is an important tool in plant research with a wide spectrum of applications.
The Kautsky effect
Upon illumination of a dark-adapted leaf, there is a rapid rise in fluorescence from Photosystem II (PSII), followed by a slow decline. First observed by Kautsky et al., 1932, this is called the Kautsky Effect. This variable rise in chlorophyll fluorescence is due to photosystem II. Fluorescence from photosystem I is not variable, but constant.
The increase in fluorescence is due to PSII reaction centers being in a "closed" or chemically reduced state. Reaction centers are "closed" when unable to accept further electrons. This occurs when electron acceptors downstream of PSII have not yet passed their electrons to a subsequent electron carrier, so they are unable to accept another electron. Closed reaction centres reduce the overall photochemical efficiency, and so increase the level of fluorescence. Transferring a leaf from dark into light increases the proportion of closed PSII reaction centres, so fluorescence levels increase for 1–2 seconds. Subsequently, fluorescence decreases over a few minutes. This is due to: 1. more "photochemical quenching", in which electrons are transported away from PSII by enzymes involved in carbon fixation; and 2. more "non-photochemical quenching", in which more energy is converted to heat.
Measuring fluorescence
Usually the initial measurement is the minimal level of fluorescence, F0. This is the fluorescence in the absence of photosynthetic light.
To use measurements of chlorophyll fluorescence to analyse photosynthesis, researchers must distinguish between photochemical quenching and non-photochemical quenching (heat dissipation). This is achieved by stopping photochemistry, which allows researchers to measure fluorescence in the presence of non-photochemical quenching alone. To reduce photochemical quenching to negligible levels, a high-intensity, short flash of light is applied to the leaf. This transiently closes all PSII reaction centres, which prevents energy absorbed by PSII from being passed to downstream electron carriers. Non-photochemical quenching will not be affected if the flash is short. During the flash, the fluorescence reaches the level reached in the absence of any photochemical quenching, known as the maximum fluorescence, Fm.
The efficiency of photochemical quenching (which is a proxy for the efficiency of PSII) can be estimated by comparing Fm' to the steady-state yield of fluorescence in the light, Ft, and the yield of fluorescence in the absence of photosynthetic light, F0.
The efficiency of non-photochemical quenching is altered by various internal and external factors. Alterations in heat dissipation mean changes in Fm'. Heat dissipation cannot be totally stopped, so the yield of chlorophyll fluorescence in the absence of non-photochemical quenching cannot be measured. Therefore, researchers use a dark-adapted point (Fm) with which to compare estimations of non-photochemical quenching.
Common fluorescence parameters
F0: Minimal fluorescence (arbitrary units). Fluorescence level of a dark-adapted sample when all reaction centers of photosystem II are open.
Fm: Maximal fluorescence (arbitrary units). Fluorescence level of a dark-adapted sample when a high-intensity pulse has been applied. All reaction centers of photosystem II are closed.
F0′: Minimal fluorescence (arbitrary units). Fluorescence level of a light-adapted sample when all reaction centers of photosystem II are open; it is lowered with respect to F0 by non-photochemical quenching.
Fm′: Maximal fluorescence (arbitrary units). Fluorescence level of a light-adapted sample when a high-intensity pulse has been applied. All reaction centers of photosystem II are closed.
Ft: Steady-state terminal fluorescence (arbitrary units). A steady-state fluorescence level decreased (= quenched) by photochemical and non-photochemical processes.
t1/2: Half rise time from F0 to Fm.
Calculated parameters
Fv is the variable fluorescence. Calculated as Fv = Fm − F0.
Fv/Fm is the ratio of variable fluorescence to maximal fluorescence. Calculated as Fv/Fm = (Fm − F0)/Fm. This is a measure of the maximum efficiency of PSII (the efficiency if all PSII centres were open). Fv/Fm can be used to estimate the potential efficiency of PSII by taking dark-adapted measurements.
ΦPSII measures the efficiency of photosystem II. Calculated as ΦPSII = (Fm′ − Ft)/Fm′. This parameter measures the proportion of light absorbed by PSII that is used in photochemistry. As such, it can give a measure of the rate of linear electron transport and so indicates overall photosynthesis.
qP (photochemical quenching). Calculated as qP = (Fm′ − Ft)/(Fm′ − F0′). This parameter approximates the proportion of PSII reaction centres that are open.
Whilst ΦPSII gives an estimation of the operating efficiency, qP and Fv′/Fm′ tell us which processes have altered that efficiency. Closure of reaction centres as a result of high-intensity light will alter the value of qP. Changes in the efficiency of non-photochemical quenching will alter the ratio Fv′/Fm′.
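The following short Python sketch illustrates how these parameters are derived from raw fluorescence yields. It is a minimal illustration only: the numerical input values are hypothetical, and the Stern–Volmer expression used for NPQ, (Fm − Fm′)/Fm′, is an additional standard parameter not listed in the definitions above.

```python
# Minimal sketch: common chlorophyll fluorescence parameters from measured
# fluorescence yields (arbitrary units). Input values are hypothetical.

def fluorescence_parameters(f0, fm, f0p, fmp, ft):
    """f0/fm: dark-adapted minimal/maximal fluorescence;
    f0p/fmp: light-adapted minimal/maximal fluorescence;
    ft: steady-state fluorescence in the light."""
    fv = fm - f0                     # variable fluorescence, Fv = Fm - F0
    fv_fm = fv / fm                  # maximum quantum efficiency of PSII
    phi_psii = (fmp - ft) / fmp      # operating efficiency of PSII
    qp = (fmp - ft) / (fmp - f0p)    # photochemical quenching
    npq = (fm - fmp) / fmp           # Stern-Volmer non-photochemical quenching
    return {"Fv/Fm": fv_fm, "PhiPSII": phi_psii, "qP": qp, "NPQ": npq}

print(fluorescence_parameters(f0=400, fm=2000, f0p=380, fmp=1200, ft=700))
```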
Applications of the Theory
PSII yield as a measure of photosynthesis
Chlorophyll fluorescence appears to be a measure of photosynthesis, but this is an over-simplification. Fluorescence can measure the efficiency of PSII photochemistry, which can be used to estimate the rate of linear electron transport by multiplying by the light intensity. However, researchers generally mean carbon fixation when they refer to photosynthesis. Electron transport and CO2 fixation can correlate well, but may not correlate in the field due to processes such as photorespiration, nitrogen metabolism and the Mehler reaction.
Relating electron transport to carbon fixation
A powerful research technique is to simultaneously measure chlorophyll fluorescence and gas exchange to obtain a full picture of the response of plants to their environment. One technique is to simultaneously measure CO2 fixation and PSII photochemistry at different light intensities, in non-photorespiratory conditions. A plot of CO2 fixation and PSII photochemistry indicates the electron requirement per molecule CO2 fixed. From this estimation, the extent of photorespiration may be estimated. This has been used to explore the significance of photorespiration as a photoprotective mechanism during drought.
Fluorescence analysis can also be applied to understanding the effects of low and high temperatures.
Sobrado (2008) investigated the gas exchange and chlorophyll a fluorescence responses to high-intensity light of pioneer species and forest species. Midday leaf gas exchange was measured using a photosynthesis system, which measured net photosynthetic rate, stomatal conductance (gs), and intercellular CO2 concentration (Ci). In the same leaves used for gas exchange measurements, chlorophyll a fluorescence parameters (initial, F0; maximum, Fm; and variable, Fv) were measured using a fluorometer. The results showed that despite pioneer species and forest species occupying different habitats, both showed similar vulnerability to midday photoinhibition in sun-exposed leaves.
Measuring stress and stress tolerance
Chlorophyll fluorescence can measure most types of plant stress. Chlorophyll fluorescence can be used as a proxy of plant stress because environmental stresses, e.g. extremes of temperature, light and water availability, can reduce the ability of a plant to metabolise normally. This can mean an imbalance between the absorption of light energy by chlorophyll and the use of energy in photosynthesis.
Favaretto et al. (2010) investigated adaptation to a strong light environment in pioneer and late-successional species grown under 100% and 10% light. Numerous parameters, including chlorophyll a fluorescence, were measured. A greater decline in Fv/Fm under full sunlight was observed in the late-successional species than in the pioneer species. Overall, their results show that pioneer species perform better under high light than late-successional species, suggesting that pioneer plants have more potential tolerance to photo-oxidative damage.
Neocleous and Vasilakakis (2009) investigated the response of raspberry to boron and salt stress. A chlorophyll fluorometer was used to measure leaf fluorescence parameters. The leaf chlorophyll fluorescence was not significantly affected by NaCl concentration when B concentration was low. When B was increased, leaf chlorophyll fluorescence was reduced under saline conditions. It could be concluded that the combined effect of B and NaCl on raspberries induces a toxic effect on the photochemical parameters.
Lu and Zhang (1999) studied heat stress in wheat plants and found that temperature stability in the Photosystem II of water-stressed leaves correlates positively to the resistance in metabolism during photosynthesis.
Nitrogen Balance Index
Because of the link between chlorophyll content and nitrogen content in leaves, chlorophyll fluorometers can be used to detect nitrogen deficiency in plants, by several methods.
Based on several years of research and experimentation, polyphenols can serve as indicators of the nitrogen status of a plant. For instance, when a plant is under optimal conditions, it favours its primary metabolism and synthesises proteins (nitrogen-containing molecules) and chlorophyll, with few flavonols (carbon-based secondary compounds). In the case of nitrogen deficiency, by contrast, an increased production of flavonols is observed.
The NBI (Nitrogen Balance Index), developed by Force-A, allows the assessment of the nitrogen status of a crop by calculating the ratio between chlorophyll and flavonols (related to nitrogen/carbon allocation).
Measure Chlorophyll Content
Gitelson (1999) states, "The ratio between chlorophyll fluorescence at 735 nm and the wavelength range 700 nm to 710 nm, F735/F700, was found to be linearly proportional to the chlorophyll content (with determination coefficient, r2, more than 0.95) and thus this ratio can be used as a precise indicator of chlorophyll content in plant leaves."
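A linear calibration of this kind is simple to apply in practice. The sketch below is illustrative only: the slope and intercept are hypothetical placeholders that would, in a real application, be fitted to calibration measurements of the kind Gitelson describes.

```python
# Illustrative only: chlorophyll content estimated from the F735/F700
# fluorescence ratio via a linear calibration. Slope and intercept are
# hypothetical placeholders, not published calibration constants.

def chlorophyll_from_ratio(f735, f700, slope, intercept):
    ratio = f735 / f700
    return slope * ratio + intercept   # e.g. mg chlorophyll per m^2 of leaf

print(chlorophyll_from_ratio(f735=850.0, f700=420.0, slope=310.0, intercept=-25.0))
```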
Chlorophyll fluorometers
The development of fluorometers allowed chlorophyll fluorescence analysis to become a common method in plant research. Chlorophyll fluorescence analysis has been revolutionized by the invention of the Pulse-Amplitude-Modulation (PAM) technique and availability of the first commercial modulated chlorophyll fluorometer PAM-101 (Walz, Germany). By modulating the measuring light beam (microsecond-range pulses) and parallel detection of the excited fluorescence the relative fluorescence yield (Ft) can be determined in the presence of ambient light. Crucially, this means chlorophyll fluorescence can be measured in the field even in full sunlight.
Today, chlorophyll fluorometers are designed for measuring many different plant mechanisms. The measuring protocols: FV/FM and OJIP measure the efficiency of Photosystem II samples at a common and known dark adapted state. These protocols are useful in measuring many types of plant stress. Bernard Genty's light adapted measuring protocol ΔF/FM’, or Y(II), is an effective and sensitive way to measure plant samples under ambient or artificial lighting conditions. However, since Y(II) values also change with light intensity, one should compare samples at the same light intensity unless light stress is the focus of the measurement. Y(II) can be more sensitive to some types of plant stress than FV/FM, such as heat stress.
Other plant mechanism measuring protocols have also been developed. When a chloroplast absorbs light, some of the light energy goes to photochemistry, some goes to regulated heat dissipation, and some goes to unregulated heat dissipation. Various chlorophyll fluorescence measuring parameters exist to measure all of these events. In the lake model, qL measures photochemical quenching, Y(NPQ) measures regulated heat dissipation, and Y(NO) measures unregulated heat dissipation. An older quenching protocol, called the puddle model, uses qP for photochemical quenching, qN for non-photochemical quenching of both regulated and unregulated heat dissipation, and NPQ for an estimate of non-photochemical quenching. NPQ has also been adapted to the lake model mathematically.
In addition, the parameters qE and pNPQ have been developed to measure the photoprotective xanthophyll cycle. qT is a measure of state transitions. qM is a measure of chloroplast migration, and qI is a measure of plant photoinhibition.
At lower actinic light levels NPQ = qE+qT+qI
At high actinic light levels NPQ = qE+qM+qI
Some fluorometers are designed to be portable and operated in one hand.
Consistent further development of imaging fluorometers facilitates the visualization of spatial heterogeneities in the photosynthetic activity of samples. These heterogeneities naturally occur in plant leaves, for example during growth, under various environmental stresses, or during pathogen infection. Thus knowledge about sample heterogeneities is important for correct interpretation of the photosynthetic performance of the plant sample. High-performance imaging fluorometer systems provide options to analyze single cells or single chloroplasts as well as sample areas covering whole leaves or plants.
Alternative approaches
LIF sensors
Techniques based on the Kautsky effect do not exhaust the variety of detection and evaluation methods based on the chlorophyll fluorescence.
In particular, recent advances in the area of laser-induced fluorescence (LIF) also provide an opportunity to develop sufficiently compact and efficient sensors for photophysiological status and biomass assessments.
Instead of measuring the evolution of the total fluorescence flux, such sensors record the spectral density of this flux excited by strong monochromatic laser pulses of nanosecond duration. Requiring no 15–20 min dark adaptation period (as is the case for the Kautsky effect methods) and being capable of exciting the sample from a considerable distance, LIF sensors can provide fast and remote evaluation.
Application of the LIF technique to the assessment of drought stress in cork oak (Quercus suber) and maritime pine (Pinus pinaster) on the basis of the chlorophyll emission ratio I685/I740 has been described in the literature. More recently the LIF sensing technique was harnessed to address the role of the pPLAIIα protein in the protection of photosynthetic metabolism during drought stress, using genetically modified Arabidopsis plants.
In 2011, Vieira et al. applied a compact low-cost LIF sensor (built around a frequency-doubled solid-state Q-switched Nd:YAG laser and a specially modified commercial miniature fiber optic spectrometer Ocean Optics USB4000) to study intertidal microphytobenthos communities. Chlorophyll emission enabled the researchers to adequately assess the surface biomass and track migratory rhythms of epipelic benthic microalgae in muddy sediments.
See also
Integrated fluorometer for gas exchange and chlorophyll fluorescence of leaves
Non-photochemical quenching
Notes
External links
Solar-induced fluorescence, geog.ucl.ac.uk
Advanced Continuous Excitation Chlorophyll Fluorimeter, nutechintl.com
References
Light reactions
Photosynthesis | Chlorophyll fluorescence | [
"Chemistry",
"Biology"
] | 3,333 | [
"Biochemistry",
"Light reactions",
"Photosynthesis",
"Biochemical reactions"
] |
46,671,034 | https://en.wikipedia.org/wiki/Free-piston%20linear%20generator | The free-piston linear generator (FPLG) uses chemical energy from fuel to drive magnets through a stator and converts this linear motion into electric energy. Because of its versatility, low weight and high efficiency, it can be used in a wide range of applications, although it is of special interest to the mobility industry as range extenders for electric vehicles.
Description
The free-piston linear generator can be divided into three subsystems:
One (or more) reaction section with a single piston or two opposed pistons
One (or more) linear electric generator, which is composed of a static part (the stator) and a moving part (the magnets) connected to the connection rod.
One (or more) return unit to push the piston back due to the lack of a crankshaft (typically a gas spring or an opposed reaction section)
The FPLG has many potential advantages compared to a traditional electric generator powered by an internal combustion engine. One of the main advantages of the FPLG comes from the absence of a crankshaft. It leads to a smaller and lighter generator with fewer parts. This also allows variable compression and expansion ratios, which makes it possible to operate with different kinds of fuel.
The linear generator also allows control of the resisting force, and therefore better control of the piston's movement and of the reaction. The total efficiency (including mechanical and generator) of free-piston linear generators can be significantly higher than that of conventional internal combustion engines and comparable to fuel cells.
Development
The first patents on free-piston linear generators date from around 1940; however, in recent decades, especially after the development of rare-earth magnets and power electronics, many different research groups have been working in this field.
These include:
Libertine LPE, UK.
West Virginia University (WVU), USA.
Chalmers University of Technology, Sweden.
Electric Generator, Pontus Ostenberg, USA - 1943
Free Piston Engine, Van Blarigan, Sandia National Laboratory, USA - Since 1995
Aquarius Engines, Israel.
Free-Piston Engine Project, Newcastle University, UK - Since 1999
Shanghai Jiaotong University, China.
Free-Piston Linear Generator, German Aerospace Center (DLR), Germany - since 2002
Free Piston Power Pack (FP3), Pempek Systems, Australia - 2003
Free Piston Energy Converter, KTH Electrical Engineering, Sweden - 2006
Linear Combustion Engine, Czech technical university - 2004
Internal Combustion Linear Generator Integrated Power System, Xu Nanjing, China - 2010
micromer ag (Switzerland) - 2012
Free-piston engine linear generator, Toyota, Japan - 2014
Although there is a variety of names and abbreviations for the technology, the terms "Free-piston linear generator" and "FPLG" particularly refer to the project at German Aerospace Center.
Operation
The free-piston linear generator generally consists of three subsystems: combustion chamber, linear generator and return unit (normally a gas spring), which are coupled through a connecting rod.
In the combustion chamber, a mixture of fuel and air is ignited, increasing the pressure and forcing the moving parts (connection rod, linear generator and pistons) in the direction of the gas spring. The gas spring is compressed, and, while the piston is near the bottom dead center (BDC), fresh air and fuel are injected into the combustion chamber, expelling the exhaust gases.
The gas spring pushes the moving parts assembly back to the top dead center (TDC), compressing the mixture of air and fuel that was injected and the cycle repeats. This works in a similar manner to the two-stroke engine, however it is not the only possible configuration.
The linear generator can generate a force opposed to the motion, not only during expansion but also during compression. The magnitude and profile of this force affect the piston's movement, as well as the overall efficiency.
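The interplay between combustion pressure, gas-spring pressure and the generator's braking force can be illustrated with a toy one-dimensional model. The sketch below is not a model of any particular FPLG design: the pressure laws, masses and coefficients are hypothetical, and the combustion pressure is treated as a simple function of position rather than as a full thermodynamic cycle.

```python
# Toy 1-D free-piston dynamics sketch (hypothetical parameters, not a real design).
# The moving assembly is pushed by combustion-chamber pressure, opposed by a
# gas-spring pressure and by a generator force proportional to velocity.
import math

m = 3.0          # kg, moving mass (pistons + connection rod + magnets)
area = 0.004     # m^2, piston area
c_gen = 900.0    # N*s/m, generator load coefficient (force = -c_gen * v)
dt = 1e-5        # s, integration time step

def p_combustion(x):            # crude pressure model, decays as the chamber expands
    return 4.0e6 * math.exp(-x / 0.02)

def p_spring(x):                # gas spring stiffens as it is compressed
    return 2.0e5 * (0.05 / (0.05 - x)) ** 1.4

x, v, x_max, energy = 0.0, 0.0, 0.0, 0.0
for _ in range(8000):
    force = (p_combustion(x) - p_spring(x)) * area - c_gen * v
    v += force / m * dt
    x += v * dt
    x_max = max(x_max, x)
    energy += c_gen * v * v * dt    # electrical energy extracted by the generator
print(f"peak stroke = {x_max * 1000:.1f} mm, extracted energy = {energy:.1f} J")
```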
Variations
The FPLG has been conceived in many different configurations, but for most applications, particularly for the automotive industry, focus has been on two opposed pistons in the same cylinder with one combustion chamber and a gas spring at the end of each cylinder. This balances out the forces in order to reduce vibration and noise. In the simplest case, a second unit is just a mirror of the first, with no functional connection to the first. Alternatively, a single combustion chamber or gas spring can be used, allowing for a more compact design and easier synchronization between the pistons.
The gas spring and combustion chamber can be placed on the ends of the connection rods, or they can share the same piston, using opposite sides in order to reduce space.
The linear generator itself has also many different configurations and forms. It can be designed as round tube, a cylinder or even flat plate in order to reduce the center of gravity, and/or improve the heat dissipation.
The free-piston linear generator's great versatility comes from the absence of a crankshaft, which removes the associated mechanical losses and gives the engine a further degree of freedom. The combustion can follow a two-stroke or a four-stroke cycle. However, a four-stroke cycle requires a much larger intermediate storage of energy (provided in a conventional engine by the rotational inertia of the crankshaft) to propel the piston through the four strokes. With no crankshaft, a gas spring would need to power the piston through the intake, compression, and exhaust strokes. Hence most current research focuses on the two-stroke cycle.
Several variations are possible for combustion:
Spark ignition (Otto)
Compression ignition (Diesel)
Homogeneous charge compression ignition (HCCI)
The DLR research
The Institute of Vehicle Concepts of the German Aerospace Center has been developing an FPLG (Freikolbenlineargenerator, FKLG) since 2002, and has published several papers on the subject.
During the first few years of research, the theoretical background was established and the three subsystems were developed separately. In 2013, the first complete system was built and operated successfully.
The German center is currently working on the second version of the complete system, in which two opposed cylinders will be used in order to reduce vibration and noise, making it viable for the automotive industry.
See also
Free-piston engine
References
External links
FPLG project from the DLR
A history of free piston linear alternator developments
Free-piston engines
Internal combustion engine
Engines
Piston engines
Engine technology | Free-piston linear generator | [
"Physics",
"Technology",
"Engineering"
] | 1,287 | [
"Internal combustion engine",
"Machines",
"Engines",
"Piston engines",
"Combustion engineering",
"Physical systems",
"Engine technology"
] |
46,676,535 | https://en.wikipedia.org/wiki/Climate%20inertia | Climate inertia or climate change inertia is the phenomenon by which a planet's climate system shows a resistance or slowness to deviate away from a given dynamic state. It can accompany stability and other effects of feedback within complex systems, and includes the inertia exhibited by physical movements of matter and exchanges of energy. The term is a colloquialism used to encompass and loosely describe a set of interactions that extend the timescales around climate sensitivity. Inertia has been associated with the drivers of, and the responses to, climate change.
Increasing fossil-fuel carbon emissions are a primary inertial driver of change to Earth's climate during recent decades, and have risen along with the collective socioeconomic inertia of its 8 billion human inhabitants. Many system components have exhibited inertial responses to this driver, also known as a forcing. The rate of rise in global surface temperature (GST) has especially been resisted by 1) the thermal inertia of the planet's surface, primarily its ocean, and 2) inertial behavior within its carbon cycle feedback. Various other biogeochemical feedbacks have contributed further resiliency. Energy stored in the ocean following the inertial responses principally determines near-term irreversible change known as climate commitment.
Earth's inertial responses are important because they provide the planet's diversity of life and its human civilization further time to adapt to an acceptable degree of planetary change. However, unadaptable change like that accompanying some tipping points may only be avoidable with early understanding and mitigation of the risk of such dangerous outcomes. This is because inertia also delays much surface warming unless and until action is taken to rapidly reduce emissions. An aim of Integrated assessment modelling, summarized for example as Shared Socioeconomic Pathways (SSP), is to explore Earth system risks that accompany large inertia and uncertainty in the trajectory of human drivers of change.
Inertial timescales
The paleoclimate record shows that Earth's climate system has evolved along various pathways and with multiple timescales. Its relatively stable states which can persist for many millennia have been interrupted by short to long transitional periods of relative instability. Studies of climate sensitivity and inertia are concerned with quantifying the most basic manner in which a sustained forcing perturbation will cause the system to deviate within or initially away from its relatively stable state of the present Holocene epoch.
"Time constants" are useful metrics for summarizing the first-order (linear) impacts of the various inertial phenomena within both simple and complex systems. They quantify the time after which 63% of a full output response occurs following the step change of an input. They are observed from data or can be estimated from numerical simulation or a lumped system analysis. In climate science these methods can be applied to Earth's energy cycle, water cycle, carbon cycle and elsewhere. For example, heat transport and storage in the ocean, cryosphere, land and atmosphere are elements within a lumped thermal analysis. Response times to radiative forcing via the atmosphere typically increase with depth below the surface.
Inertial time constants indicate a base rate for forced changes, but lengthy values provide no guarantee of long-term system evolution along a smooth pathway. Numerous higher-order tipping elements having various trigger thresholds and transition timescales have been identified within Earth's present state. Such events might precipitate a nonlinear rearrangement of internal energy flows along with more rapid shifts in climate and/or other systems at regional to global scale.
Climate response time
The response of global surface temperature (GST) to a step-like doubling of the atmospheric CO2 concentration, and its resultant forcing, is defined as the Equilibrium Climate Sensitivity (ECS). The ECS response extends over short and long timescales; however, the main time constant associated with ECS has been identified by Jule Charney, James Hansen and others as a useful metric to help guide policymaking. RCPs, SSPs, and other similar scenarios have also been used by researchers to simulate the rate of forced climate changes. By definition, ECS presumes that ongoing emissions will offset the ocean and land carbon sinks following the step-wise perturbation in atmospheric CO2.
ECS response time is proportional to ECS and is principally regulated by the thermal inertia of the uppermost mixed layer and adjacent lower ocean layers. Main time constants fitted to the results from climate models have ranged from a few decades when ECS is low, to as long as a century when ECS is high. A portion of the variation between estimates arises from different treatments of heat transport into the deep ocean.
Components
Thermal inertia
Thermal inertia is a term which refers to the observed delays in a body's temperature response during heat transfers. A body with large thermal inertia can store a large amount of energy because of its heat capacity, and can effectively transmit energy according to its heat transfer coefficient. The consequences of thermal inertia are inherently expressed via many climate change feedbacks because of their temperature dependencies, including through the strong stabilizing feedback of the Planck response.
Ocean inertia
The global ocean is Earth's largest thermal reservoir that functions to regulate the planet's climate; acting as both a sink and a source of energy. The ocean's thermal inertia delays some global warming for decades or centuries. It is accounted for in global climate models, and has been confirmed via measurements of ocean heat content. The observed transient climate sensitivity is proportional to the thermal inertia time scale of the shallower ocean.
Ice sheet inertia
Even after emissions are lowered, the melting of ice sheets will persist and further increase sea-level rise for centuries. The slower transportation of heat into the extreme deep ocean, subsurface land sediments, and thick ice sheets will continue until the new Earth system equilibrium has been reached.
Permafrost also takes longer to respond to a warming planet because of thermal inertia, due to ice rich materials and permafrost thickness.
Inertia from carbon cycle feedbacks
Earth's carbon cycle feedback includes a destabilizing positive feedback (identified as the climate-carbon feedback) which prolongs warming for centuries, and a stabilizing negative feedback (identified as the concentration-carbon feedback) which limits the ultimate warming response to fossil carbon emissions. The near-term effect following emissions is asymmetric with latter mechanism being about four times larger, and results in a significant net slowing contribution to the inertia of the climate system during the first few decades following emissions.
Ecological inertia
Depending on the ecosystem, effects of climate change could show quickly, while others take more time to respond. For instance, coral bleaching can occur in a single warm season, while trees may be able to persist for decades under a changing climate, but be unable to regenerate. Changes in the frequency of extreme weather events could disrupt ecosystems as a consequence, depending on individual response times of species.
Policy implications of inertia
The IPCC concluded that the inertia and uncertainty of the climate system, ecosystems, and socioeconomic systems imply that margins for safety should be considered when setting strategies, targets, and timetables for avoiding dangerous interference through climate change. Further, the IPCC concluded in their 2001 report that the stabilization of atmospheric CO2 concentration, temperature, or sea level is affected by:
The inertia of the climate system, which will cause climate change to continue for a period after mitigation actions are implemented.
Uncertainty regarding the location of possible thresholds of irreversible change and the behavior of the system in their vicinity.
The time lags between adoption of mitigation goals and their achievement.
See also
Earth's energy budget
Planetary boundaries
Systems theory and Systems analysis
References
Climate change
Hydrology
Oceanography
Glaciology | Climate inertia | [
"Physics",
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,620 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics",
"Environmental engineering"
] |
35,504,712 | https://en.wikipedia.org/wiki/Geographical%20centre%20of%20Earth | The geographical centre of Earth is the geometric centre of all land surfaces on Earth. Geometrically defined it is the centroid of all land surfaces within the two dimensions of the Geoid surface which approximates the Earth's outer shape. The term centre of minimum distance specifies the concept more precisely as the domain is the sphere surface without boundary and not the three-dimensional body.
Explained in a different way, it is the location on the surface of Earth where the sum of distances to all locations on land is the smallest. Assuming an airplane with infinite energy and resources, if one were to fly from one start location to any location on land and back again, and repeat this from the same start location to all possible destinations, the starting location where the total travel distance is the smallest would be the geographical centre of Earth.
Its distance definition follows the shortest path on the surface of Earth along the great circle (orthodrome).
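The centre-of-minimum-distance idea can be sketched numerically: among candidate points on the sphere, choose the one minimising the mean great-circle distance to a set of land points. The sketch below uses a tiny, hypothetical sample of land coordinates and a coarse search grid; an actual calculation, such as the 2003 one, would use a global land mask like ETOPO2 at much finer resolution.

```python
# Sketch: centre of minimum (great-circle) distance to a small, hypothetical
# sample of land points. A real computation would use a global land dataset.
import math

R = 6371.0  # km, mean Earth radius

def great_circle(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    c = math.sin(p1) * math.sin(p2) + math.cos(p1) * math.cos(p2) * math.cos(dlon)
    return R * math.acos(min(1.0, max(-1.0, c)))

land = [(52, 13), (30, 31), (35, 139), (-26, 28), (40, -100), (-15, -60), (20, 78)]

best = None
for lat in range(-90, 91, 2):           # coarse grid of candidate centres
    for lon in range(-180, 180, 2):
        cost = sum(great_circle(lat, lon, la, lo) for la, lo in land) / len(land)
        if best is None or cost < best[0]:
            best = (cost, lat, lon)
print(f"approximate centre: lat {best[1]}, lon {best[2]}, mean distance {best[0]:.0f} km")
```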
History of the concept
Around the world throughout history many real and illusive places were identified as axis mundi or centers of the world.
In 1864, Charles Piazzi Smyth, Astronomer Royal for Scotland, gave in his book Our Inheritance in the Great Pyramid coordinates corresponding to the location of the Great Pyramid of Giza in Egypt. He stated that this had been calculated by "carefully summing up all the dry land habitable by man all the wide world over".
In October of that year, Smyth proposed to position the prime meridian at the longitude of the Great Pyramid because there it would "pass over more land than [at] any other [location]". He also argued the cultural significance of the location and its vicinity to Jerusalem. The expert committee deciding the issue, however, voted for Greenwich because "so many ships used the port of London".
In 1973, Andrew J. Woods, a physicist with Gulf Energy and Environmental Systems in San Diego, California, used a digital global map and calculated the coordinates on a mainframe system, finding a location in Turkey, near the district of Kırşehir, Seyfe Village, approximately 1,800 km north of Giza. In 2003, a new calculation based on a global digital elevation model obtained from satellite measurements, ETOPO2, whose data points are spaced 2′ (3.7 km at the equator), led to the result ♁ 41° N, 35° E, thus validating Woods's calculation.
Differentiation from other definitions and calculations
Various definitions of geographical centres exist. The definitions used by the references in this article refer to calculations within the two dimensions of a surface, mainly because the surface of Earth is the domain of human cultural existence. Other definitions refer to calculations based on three-dimensional objects, for example the Newtonian gravity centre of the whole Earth (the physical barycentre) or the Newtonian gravity centre of only the continents treated as uniform, thick three-dimensional objects. Those centres are found inside Earth, mostly near its core. A projection of such a centre towards the surface is then an alternative definition of the geographical centre; some of these calculations result in a projected surface location not far from the geographical centre.
See also
Centre of the universe (disambiguation)
Center of population
Center of the World
History of the centre of the Universe
Land and water hemispheres
Omphalos of Delphi (Ancient Greeks' navel of the Earth)
References
Earth | Geographical centre of Earth | [
"Physics",
"Mathematics"
] | 683 | [
"Point (geometry)",
"Geometric centers",
"Geographical centres",
"Symmetry"
] |
35,508,439 | https://en.wikipedia.org/wiki/Transmon | In quantum computing, and more specifically in superconducting quantum computing, a transmon is a type of superconducting charge qubit designed to have reduced sensitivity to charge noise. The transmon was developed by Robert J. Schoelkopf, Michel Devoret, Steven M. Girvin, and their colleagues at Yale University in 2007. Its name is an abbreviation of the term transmission line shunted plasma oscillation qubit; one which consists of a Cooper-pair box "where the two superconductors are also [capacitively] shunted in order to decrease the sensitivity to charge noise, while maintaining a sufficient anharmonicity for selective qubit control".
The transmon achieves its reduced sensitivity to charge noise by significantly increasing the ratio of the Josephson energy to the charging energy. This is accomplished through the use of a large shunting capacitor. The result is energy level spacings that are approximately independent of offset charge. Planar on-chip transmon qubits have T1 coherence times approximately 30 μs to 40 μs. Recent work has shown significantly improved T1 times as long as 95 μs by replacing the superconducting transmission line cavity with a three-dimensional superconducting cavity, and by replacing niobium with tantalum in the transmon device, T1 is further improved up to 0.3 ms. These results demonstrate that previous T1 times were not limited by Josephson junction losses. Understanding the fundamental limits on the coherence time in superconducting qubits such as the transmon is an active area of research.
Comparison to Cooper-pair box
The transmon design is similar to the first design of the charge qubit known as a "Cooper-pair box"; both are described by the same Hamiltonian, with the only difference being the ratio EJ/EC. Here EJ is the Josephson energy of the junction, and EC is the charging energy, inversely proportional to the total capacitance of the qubit circuit. Transmons typically have EJ/EC ≫ 1 (while EJ/EC ≲ 1 for typical Cooper-pair-box qubits), which is achieved by shunting the Josephson junction with an additional large capacitor.
The benefit of increasing the ratio EJ/EC is the insensitivity to charge noise: the energy levels become independent of the offset charge across the junction, and thus the dephasing time of the qubit is prolonged. The disadvantage is the reduced anharmonicity α = E12 − E01, where Eij is the energy difference between eigenstates i and j. Reduced anharmonicity complicates the device operation as a two-level system, e.g. exciting the device from the ground state to the first excited state by a resonant pulse also populates the higher excited state. This complication is overcome by complex microwave pulse design that takes into account the higher energy levels and prohibits their excitation by destructive interference. Also, while the variation of the energy levels with respect to offset charge tends to decrease exponentially with EJ/EC, the anharmonicity has only a weaker, algebraic dependence on EJ/EC, decreasing approximately as (EJ/EC)^(−1/2). The significant gain in the coherence time outweighs the decrease in the anharmonicity for controlling the states with high fidelity.
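The trade-off between charge-noise insensitivity and anharmonicity can be made concrete with the standard large-EJ/EC asymptotic expressions (as in Koch et al. 2007), in which the 0→1 transition frequency is approximately sqrt(8 EJ EC) − EC and the anharmonicity is approximately −EC. The sketch below uses hypothetical but typical parameter values.

```python
# Approximate transmon frequency and anharmonicity from the asymptotic
# large-EJ/EC expressions; EJ and EC below are hypothetical, typical values.
import math

EJ = 15.0   # GHz, Josephson energy (expressed as a frequency, h = 1)
EC = 0.3    # GHz, charging energy

f01 = math.sqrt(8 * EJ * EC) - EC   # 0->1 transition frequency, GHz
alpha = -EC                         # anharmonicity f12 - f01, GHz
print(f"EJ/EC = {EJ / EC:.0f}")
print(f"f01 approx {f01:.2f} GHz, anharmonicity approx {alpha * 1000:.0f} MHz")
```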
Measurement, control and coupling of transmons is performed by means of microwave resonators with techniques from circuit quantum electrodynamics also applicable to other superconducting qubits. Coupling to the resonators is done by placing a capacitor between the qubit and the resonator, at a point where the resonator electromagnetic field is greatest. For example, in IBM Quantum Experience devices, the resonators are implemented with "quarter wave" coplanar waveguides with maximal field at the signal-ground short at the waveguide end; thus every IBM transmon qubit has a long resonator "tail". The initial proposal included similar transmission line resonators coupled to every transmon, becoming a part of the name. However, charge qubits operated at a similar regime, coupled to different kinds of microwave cavities are referred to as transmons as well.
Transmons as qudits
Transmons have been explored for use as d-dimensional qudits via the additional energy levels that naturally occur above the qubit subspace (the lowest two states). For example, the lowest three levels can be used to make a transmon qutrit; in the early 2020s, researchers have reported realizations of single-qutrit quantum gates on transmons as well as two-qutrit entangling gates. Entangling gates on transmons have also been explored theoretically and in simulations for the general case of qudits of arbitrary d.
See also
Charge qubit
Anharmonicity
Circuit quantum electrodynamics (cQED)
Dilution refrigerator
List of quantum processors
Quantum harmonic oscillator
Superconducting quantum computing
References
Quantum information science
Quantum electronics
Superconductivity | Transmon | [
"Physics",
"Materials_science",
"Engineering"
] | 1,021 | [
"Physical quantities",
"Quantum electronics",
"Superconductivity",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Nanotechnology",
"Electrical resistance and conductance"
] |
45,342,125 | https://en.wikipedia.org/wiki/Optic%20equation | In number theory, the optic equation is an equation that requires the sum of the reciprocals of two positive integers and to equal the reciprocal of a third positive integer :
Multiplying both sides by abc shows that the optic equation is equivalent to a Diophantine equation (a polynomial equation in multiple integer variables).
Solution
All solutions in integers are given in terms of positive integer parameters m, n, t by
a = m(m + n)t, b = n(m + n)t, c = mnt, where m and n are coprime.
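A short script can generate solutions from this parameterisation and verify them exactly; the sketch below does so with small parameter bounds chosen arbitrarily.

```python
# Generate optic-equation solutions a = m(m+n)t, b = n(m+n)t, c = mnt with
# gcd(m, n) = 1 and verify 1/a + 1/b = 1/c using exact rational arithmetic.
from math import gcd
from fractions import Fraction

def solutions(limit):
    for m in range(1, limit):
        for n in range(1, limit):
            if gcd(m, n) != 1:
                continue
            for t in range(1, limit):
                a, b, c = m * (m + n) * t, n * (m + n) * t, m * n * t
                assert Fraction(1, a) + Fraction(1, b) == Fraction(1, c)
                yield a, b, c

for a, b, c in solutions(3):
    print(f"1/{a} + 1/{b} = 1/{c}")
```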
Appearances in geometry
The optic equation, permitting but not requiring integer solutions, appears in several contexts in geometry.
In a bicentric quadrilateral, the inradius r, the circumradius R, and the distance x between the incenter and the circumcenter are related by Fuss' theorem according to 1/(R − x)^2 + 1/(R + x)^2 = 1/r^2,
and the distances of the incenter I from the vertices A, B, C, D are related to the inradius according to 1/IA^2 + 1/IC^2 = 1/IB^2 + 1/ID^2 = 1/r^2.
In the crossed ladders problem, two ladders braced at the bottoms of vertical walls cross at the height h and lean against the opposite walls at heights A and B. We have 1/A + 1/B = 1/h. Moreover, the formula continues to hold if the walls are slanted and all three measurements are made parallel to the walls.
Let P be a point on the circumcircle of an equilateral triangle ABC, on the minor arc BC. Let p be the distance from P to B and q be the distance from P to C. On the line passing through P and the far vertex A, let t be the distance from P to the side BC. Then 1/p + 1/q = 1/t.
In a trapezoid, draw a segment parallel to the two parallel sides, passing through the intersection of the diagonals and having endpoints on the non-parallel sides. Then if we denote the lengths of the parallel sides as and and half the length of the segment through the diagonal intersection as , the sum of the reciprocals of and equals the reciprocal of .
The special case in which the integers whose reciprocals are taken must be square numbers appears in two ways in the context of right triangles. First, the sum of the reciprocals of the squares of the altitudes from the legs (equivalently, of the squares of the legs themselves) equals the reciprocal of the square of the altitude from the hypotenuse. This holds whether or not the numbers are integers; a known formula generates all integer cases. Second, also in a right triangle the sum of the squared reciprocal of the side of one of the two inscribed squares and the squared reciprocal of the hypotenuse equals the squared reciprocal of the side of the other inscribed square.
The sides of a heptagonal triangle, which shares its vertices with a regular heptagon, satisfy the optic equation.
Other appearances
Thin lens equation
For a lens of negligible thickness and focal length f, the distances from the lens to an object, S1, and from the lens to its image, S2, are related by the thin lens formula: 1/S1 + 1/S2 = 1/f.
Electrical engineering
Components of an electrical circuit or electronic circuit can be connected in what is called a series or parallel configuration. For example, the total resistance value R of two resistors with resistances R1 and R2 connected in parallel follows the optic equation: 1/R1 + 1/R2 = 1/R.
Similarly, the total inductance L of two inductors with inductances L1 and L2 connected in parallel is given by 1/L1 + 1/L2 = 1/L,
and the total capacitance C of two capacitors with capacitances C1 and C2 connected in series satisfies 1/C1 + 1/C2 = 1/C.
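All of these cases, as well as the thin-lens formula above, reduce to the same reciprocal-sum calculation. The helper below is a trivial sketch; the example values are arbitrary.

```python
# The reciprocal sum behind parallel resistors/inductors, series capacitors
# and the thin-lens formula. Example values are arbitrary.

def reciprocal_sum(x1, x2):
    """Return y such that 1/x1 + 1/x2 = 1/y, i.e. y = x1*x2 / (x1 + x2)."""
    return x1 * x2 / (x1 + x2)

print(reciprocal_sum(100.0, 220.0))    # ohms: two resistors in parallel
print(reciprocal_sum(4.7e-6, 10e-6))   # farads: two capacitors in series
```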
Paper folding
The optic equation of the crossed ladders problem can be applied to folding rectangular paper into three equal parts. One side (the left one illustrated here) is partially folded in half and pinched to leave a mark. The intersection of a line from this mark to an opposite corner, with a diagonal is exactly one third from the bottom edge. The top edge can then be folded down to meet the intersection.
Harmonic mean
The harmonic mean of a and b is 2ab/(a + b), or 2c. In other words, c is half the harmonic mean of a and b.
Relation to Fermat's Last Theorem
Fermat's Last Theorem states that the sum of two integers each raised to the same integer power n cannot equal another integer raised to the power n if n > 2. This implies that no solutions to the optic equation have all three integers equal to perfect powers with the same power n. For if 1/x^n + 1/y^n = 1/z^n, then multiplying through by (xyz)^n would give (yz)^n + (xz)^n = (xy)^n, which is impossible by Fermat's Last Theorem.
See also
Erdős–Straus conjecture, a different Diophantine equation involving sums of reciprocals of integers
Sums of reciprocals
Parallel
References
Diophantine equations | Optic equation | [
"Mathematics"
] | 888 | [
"Diophantine equations",
"Mathematical objects",
"Equations",
"Number theory"
] |
45,342,879 | https://en.wikipedia.org/wiki/Neupert%20effect | The Neupert effect refers to an empirical tendency for high-energy ('hard') X-ray emission to coincide temporally with the rate of rise of lower-energy ('soft') X-ray emission of a solar flare.
Here 'hard' and 'soft' mean above and below an energy of about 10 keV to solar physicists, though in non-solar X-ray astronomy one typically sets this boundary at a lower energy.
This effect gets its name from NASA solar physicist and spectroscopist Werner Neupert, who first documented a related correlation (the integral form) between microwave (gyrosynchrotron) and soft X-ray emissions in 1968. The standard interpretation is that the accumulated energy injected by accelerated non-thermal electrons (which produce the hard X-rays via non-thermal bremsstrahlung) is released in the lower solar atmosphere (the chromosphere); this energy then leads to thermal (soft X-ray) emission as the chromospheric plasma heats and expands into the corona.
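The integral form of the relation is easy to illustrate numerically: if the soft X-ray light curve is modelled as the running time integral of the hard X-ray flux, the hard X-ray peak coincides with the steepest rise of the soft X-rays. The burst shape and numbers below are arbitrary and purely illustrative.

```python
# Illustrative sketch of the (integral form of the) Neupert effect with an
# arbitrary Gaussian hard X-ray burst; soft X-rays = running integral of hard.
import math

dt = 1.0                                                        # s
times = [i * dt for i in range(600)]
hard = [math.exp(-((t - 120.0) / 30.0) ** 2) for t in times]    # impulsive burst

soft, running = [], 0.0
for h in hard:
    running += h * dt               # cumulative energy deposited
    soft.append(running)

peak_hard = times[hard.index(max(hard))]
rise = [soft[i + 1] - soft[i] for i in range(len(soft) - 1)]
peak_rise = times[rise.index(max(rise))]
print(f"hard X-ray peak at t = {peak_hard:.0f} s; fastest soft X-ray rise at t = {peak_rise:.0f} s")
```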
The effect is very common, but does not represent an exact relationship and is not observed in all solar flares.
See also
References
X-rays | Neupert effect | [
"Physics",
"Astronomy"
] | 254 | [
"Astronomy stubs",
"X-rays",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
45,347,901 | https://en.wikipedia.org/wiki/Method%20of%20continued%20fractions | The method of continued fractions is a method developed specifically for solution of integral equations of quantum scattering theory like Lippmann–Schwinger equation or Faddeev equations. It was invented by Horáček and Sasakawa in 1983. The goal of the method is to solve the integral equation
iteratively and to construct convergent continued fraction for the T-matrix
The method has two variants. In the first one (denoted as MCFV) we construct approximations of the potential energy operator in the form of separable function of rank 1, 2, 3 ... The second variant (MCFG method) constructs the finite rank approximations to Green's operator. The approximations are constructed within Krylov subspace constructed from vector with action of the operator . The method can thus be understood as resummation of (in general divergent) Born series by Padé approximants. It is also closely related to Schwinger variational principle.
In general the method requires a similar amount of numerical work as the calculation of terms of the Born series, but it provides much faster convergence of the results.
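The idea of resumming a divergent Born series can be illustrated on a scalar toy model; this is not the MCFV/MCFG operator algorithm itself, only a one-dimensional caricature of the Padé-type resummation it performs. All numbers are arbitrary.

```python
# Scalar toy model of T = V + V*G0*T: the plain Born series diverges for
# |G0*V| > 1, but a [1/1] Pade approximant built from its first two terms
# already recovers the exact answer T = V / (1 - G0*V).
V, G0 = 2.0, 0.8                      # arbitrary coupling with |G0*V| > 1

exact = V / (1.0 - G0 * V)

born, term = 0.0, V
for _ in range(10):                   # partial sums of the divergent Born series
    born += term
    term *= G0 * V

t0 = V                                # first Born term
t1 = V * G0 * V                       # second Born term
pade_11 = t0 * t0 / (t0 - t1)         # [1/1] Pade resummation of the series

print(f"exact T = {exact:.3f}, 10-term Born sum = {born:.1f}, Pade [1/1] = {pade_11:.3f}")
```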
Algorithm of MCFV
The derivation of the method proceeds as follows. First we introduce a rank-one (separable)
approximation to the potential, V1 = V|φ⟩⟨φ|V / ⟨φ|V|φ⟩, where |φ⟩ is the incoming wave.
The integral equation for the rank-one part of the potential is easily soluble. The full solution of the original problem can therefore be expressed
in terms of a new function, which is the solution of a modified Lippmann–Schwinger equation
with the remainder potential V − V1.
The remainder potential term is transparent to the incoming wave, (V − V1)|φ⟩ = 0,
i.e. it is a weaker operator than the original one.
The new problem thus obtained for the new function is of the same form as the original one, and we can repeat the procedure.
This leads to recurrence relations for the successive approximations.
It is possible to show that the T-matrix of the original problem can be expressed in the form of a chain (continued) fraction whose coefficients are matrix elements built from the successive potentials and wave functions.
In practical calculations the infinite chain fraction is replaced by a finite one, by assuming that the coefficients beyond some order vanish. This is equivalent to assuming that the remainder solution is negligible. This is a plausible assumption, since the remainder potential has all the previously generated vectors in its null space, and it can be shown that this potential converges to zero and that the chain fraction converges to the exact T-matrix.
Algorithm of MCFG
The second variant of the method constructs approximations to the Green's operator, now with a sequence of vectors generated in the same way.
The chain fraction for the T-matrix also holds in this case, with a slightly different definition of the coefficients.
Properties and relation to other methods
The expressions for the T-matrix resulting from both methods can be related to a certain class of variational principles. The first iteration of the MCFV method gives the same result as the Schwinger variational principle. The higher iterations with N terms in the continued fraction reproduce exactly 2N (or 2N + 1) terms of the Born series for the MCFV (or MCFG) method, respectively. The method was tested on calculations of electron collisions with hydrogen atoms in the static-exchange approximation. In this case the method reproduces the exact scattering cross-section to 6 significant digits in 4 iterations. It can also be shown that both methods reproduce exactly the solution of the Lippmann–Schwinger equation when the potential is given by a finite-rank operator; the number of iterations is then equal to the rank of the potential. The method has been successfully used for the solution of problems in both nuclear and molecular physics.
References
Quantum mechanics
Scattering | Method of continued fractions | [
"Physics",
"Chemistry",
"Materials_science"
] | 704 | [
"Theoretical physics",
"Quantum mechanics",
"Scattering",
"Particle physics",
"Condensed matter physics",
"Nuclear physics"
] |
45,348,419 | https://en.wikipedia.org/wiki/Helium%20dimer | The helium dimer is a van der Waals molecule with formula He2 consisting of two helium atoms. This chemical is the largest diatomic molecule—a molecule consisting of two atoms bonded together. The bond that holds this dimer together is so weak that it will break if the molecule rotates, or vibrates too much. It can only exist at very low cryogenic temperatures.
Two excited helium atoms can also bond to each other in a form called an excimer. This was discovered from a spectrum of helium that contained bands first seen in 1912. Written as He2* with the * meaning an excited state, it is the first known Rydberg molecule.
Several dihelium ions also exist, having net charges of negative one, positive one, and positive two. Two helium atoms can be confined together without bonding in the cage of a fullerene.
Molecule
Based on molecular orbital theory, He2 should not exist, and a chemical bond cannot form between the atoms. However, the van der Waals force exists between helium atoms as shown by the existence of liquid helium, and at a certain range of distances between atoms the attraction exceeds the repulsion. So a molecule composed of two helium atoms bound by the van der Waals force can exist. The existence of this molecule was proposed as early as 1937.
He2 is the largest known molecule of two atoms when in its ground state, due to its extremely long bond length. The He2 molecule has a large separation distance between the atoms of about 5,200 pm (52 Å). This is the largest for a diatomic molecule without rovibronic excitation. The binding energy is only about 1.3 mK, 10−7 eV or 1.1×10−5 kcal/mol.
Both helium atoms in the dimer can be ionized by a single photon with energy 63.86 eV. The proposed mechanism for this double ionization is that the photon ejects an electron from one atom, and then that electron hits the other helium atom and ionizes that as well. The dimer then explodes as two helium cations repel each other, moving with the same speed but in opposite directions.
A dihelium molecule bound by Van der Waals forces was first proposed by John Clarke Slater in 1928.
Formation
The helium dimer can be formed in small amounts when helium gas expands and cools as it passes through a nozzle in a gas beam. Only the isotope 4He can form molecules like this; 4He3He and 3He3He do not exist, as they do not have a stable bound state. The amount of the dimer formed in the gas beam is of the order of one percent.
Molecular ions
He2+ is a related ion bonded by a half covalent bond. It can be formed in a helium electrical discharge. It recombines with electrons to form an electronically excited He2(a3Σ+u) excimer molecule. Both of these molecules are much smaller with more normally sized interatomic distances. He2+ reacts with N2, Ar, Xe, O2, and CO2 to form cations and neutral helium atoms.
The helium dication dimer He22+ releases a large amount of energy when it dissociates, around 835 kJ/mol. However, an energy barrier of 138.91 kJ/mol prevents immediate decay. This ion was studied theoretically by Linus Pauling in 1933. This ion is isoelectronic with the hydrogen molecule. He22+ is the smallest possible molecule with a double positive charge. It is detectable using mass spectroscopy.
The negative helium dimer He2− is metastable and was discovered by Bae, Coggiola and Peterson in 1984 by passing He2+ through cesium vapor. Subsequently, H. H. Michels theoretically confirmed its existence and concluded that the 4Πg state of He2− is bound relative to the a2Σ+u state of He2. The calculated electron affinity is 0.233 eV compared to 0.077 eV for the He−[4P∘] ion. The He2− decays through the long-lived 5/2g component with τ~350 μsec and the much shorter-lived 3/2g, 1/2g components with τ~10 μsec. The 4Πg state has a 1σ2g1σu2σg2πu electronic configuration, its electron affinity E is 0.18±0.03 eV, and its lifetime is 135±15 μsec; only the v=0 vibrational state is responsible for this long-lived state.
The molecular helium anion is also found in liquid helium that has been excited by electrons with an energy level higher than 22 eV. This takes place firstly by penetration of liquid He, taking 1.2 eV, followed by excitation of a He atom electron to the 3P level, which takes 19.8 eV. The electron can then combine with another helium atom and the excited helium atom to form He2−. He2− repels helium atoms, and so has a void around it. It will tend to migrate to the surface of liquid helium.
Excimers
In a normal helium atom, two electrons are found in the 1s orbital. However, if sufficient energy is added, one electron can be elevated to a higher energy level. This high-energy electron can become a valence electron, and the electron that remains in the 1s orbital is a core electron. Two excited helium atoms can form a covalent bond, creating a molecule called dihelium that lasts for times from the order of a microsecond up to a second or so. (Excited helium atoms in the 23S state can last for up to an hour, and react like alkali metal atoms.)
The first clues that dihelium exists were noticed in 1900 when W. Heuse observed a band spectrum in a helium discharge. However, no information about the nature of the spectrum was published. Independently E. Goldstein from Germany and W. E. Curtis from London published details of the spectrum in 1913. Curtis was called away to military service in World War I, and the study of the spectrum was continued by Alfred Fowler. Fowler recognised that the double headed bands fell into two sequences analogous to principal and diffuse series in line spectra.
The emission band spectrum shows a number of bands that degrade towards the red, meaning that the lines thin out and the spectrum weakens towards the longer wavelengths. Only one band, with a green band head at 5732 Å, degrades towards the violet. Other strong band heads are at 6400 (red), 4649, 4626, 4546, 4157.8, 3777, 3677, 3665, 3356.5, and 3348.5 Å. There are also some headless bands and extra lines in the spectrum. Weak bands are found with heads at 5133 and 5108 Å.
If the valence electron is in a 2s, 3s, or 3d orbital, a 1Σu state results; if it is in a 2p, 3p, or 4p orbital, a 1Σg state results. The ground state is X1Σg+.
The three lowest triplet states of He2 have designations a3Σu, b3Πg and c3Σg. The a3Σu state with no vibration (v=0) has a long metastable lifetime of 18 s, much longer than the lifetime for other states or inert gas excimers. The explanation is that the a3Σu state has no electron orbital angular momentum, as all the electrons are in S orbitals for the helium state.
The lower lying singlet states of He2 are A1Σu, B1Πg and C1Σg. The excimer molecules are much smaller and more tightly bound than the van der Waals bonded helium dimer. For the A1Σu state the binding energy is around 2.5 eV, with a separation of the atoms of 103.9 pm. The C1Σg state has a binding energy 0.643 eV and the separation between atoms is 109.1 pm. These two states have a repulsive range of distances with a maximum around 300 pm, where if the excited atoms approach, they have to overcome an energy barrier. The singlet state A1Σ+u is very unstable with a lifetime only nanoseconds long.
The spectrum of the He2 excimer contains bands due to a great number of lines due to transitions between different rotation rates and vibrational states, combined with different electronic transitions. The lines can be grouped into P, Q and R branches. But the even numbered rotational levels do not have Q branch lines, due to both nuclei being spin 0. Numerous electronic states of the molecule have been studied, including Rydberg states with the number of the shell up to 25.
Helium discharge lamps produce vacuum ultraviolet radiation from helium molecules. When high-energy protons hit helium gas, UV emission at around 600 Å is also produced by the decay of excited, highly vibrating molecules of He2 in the A1Σu state to the ground state. The UV radiation from excited helium molecules is used in the pulsed discharge helium ionization detector (PDHID), which is capable of detecting the contents of mixed gases at levels below parts per billion.
The Hopfield continuum (named after J. J. Hopfield) is a band of ultraviolet light between 600 and 1000 Å in wavelength formed by photodissociation of helium molecules.
One mechanism for formation of the helium molecules is firstly a helium atom becomes excited with one electron in the 21S orbital. This excited atom meets two other non excited helium atoms in a three body association and reacts to form a A1Σu state molecule with maximum vibration and a helium atom.
Helium molecules in the quintet state 5Σ+g can be formed by the reaction of two spin polarised helium atoms in He(23S1) states. This molecule has a high energy level of 20 eV. The highest vibration level allowed is v=14.
In liquid helium the excimer forms a solvation bubble. In a 3d state a He molecule is surrounded by a bubble 12.7 Å in radius at atmospheric pressure. When pressure is increased to 24 atmospheres the bubble radius shrinks to 10.8 Å. This changing bubble size causes a shift in the fluorescence bands.
Magnetic condensation
In very strong magnetic fields, (around 750,000 Tesla) and low enough temperatures, helium atoms attract, and can even form linear chains. This may happen in white dwarfs and neutron stars. The bond length and dissociation energy both increase as the magnetic field increases.
Use
The dihelium excimer is an important component in the helium discharge lamp.
A second use of dihelium ion is in ambient ionization techniques using low temperature plasma. In this helium atoms are excited, and then combine to yield the dihelium ion. The He2+ goes on to react with N2 in the air to make N2+. These ions react with a sample surface to make positive ions that are used in mass spectroscopy. The plasma containing the helium dimer can be as low as 30 °C in temperature, and this reduces heat damage to samples.
Clusters
He2 has been shown to form van der Waals compounds with other atoms forming bigger clusters such as 24MgHe2 and 40CaHe2.
The helium-4 trimer (4He3), a cluster of three helium atoms, is predicted to have an excited state which is an Efimov state. This has been confirmed experimentally in 2015.
Cage
Two helium atoms can fit inside larger fullerenes, including C70 and C84. These can be detected by the nuclear magnetic resonance of 3He having a small shift, and by mass spectrometry. C84 with enclosed helium can contain 20% He2@C84, whereas C78 has 10% and C76 has 8%. The larger cavities are more likely to hold more atoms. Even when the two helium atoms are placed closely to each other in a small cage, there is no chemical bond between them. The presence of two He atoms in a C60 fullerene cage is only predicted to have a small effect on the reactivity of the fullerene. The effect is to have electrons withdrawn from the endohedral helium atoms, giving them a slight positive partial charge to produce He2δ+, which have a stronger bond than uncharged helium atoms. However, by the Löwdin definition there is a bond present.
The two helium atoms inside the C60 cage are separated by 1.979 Å and the distance from a helium atom to the carbon cage is 2.507 Å. The charge transfer gives 0.011 electron charge units to each helium atom. There should be at least 10 vibrational levels for the He-He pair.
References
External links
spectrum of He2
Van der Waals molecules
Homonuclear diatomic molecules
Helium compounds
Dimers (chemistry)
Allotropes | Helium dimer | [
"Physics",
"Chemistry",
"Materials_science"
] | 2,667 | [
"Periodic table",
"Properties of chemical elements",
"Van der Waals molecules",
"Allotropes",
"Molecules",
"Dimers (chemistry)",
"Materials",
"Polymer chemistry",
"Matter"
] |
26,602,636 | https://en.wikipedia.org/wiki/Olaratumab | Olaratumab, sold under the brand name Lartruvo, is a monoclonal antibody medication developed by Eli Lilly and Company for the treatment of solid tumors. It is directed against the platelet-derived growth factor receptor alpha.
It was removed from the United States and European Union markets in 2019, due to insufficient proof of its medical advantage (see below "Medical uses").
Medical uses
Olaratumab is used in combination with doxorubicin for the treatment of adults with advanced soft-tissue sarcoma (STS) who cannot be cured by cancer surgery or radiation therapy, and who have not been previously treated with doxorubicin.
In a randomised controlled trial with 133 STS patients, olaratumab plus doxorubicin improved median progression-free survival from 4.1 to 6.6 months compared with doxorubicin alone (p = 0.0615, narrowly missing statistical significance), and overall survival from 14.7 to 26.5 months (p = 0.0003, highly significant).
However, the ANNOUNCE phase 3 trial did not find any advantage in adding olaratumab to doxorubicin. Therefore, in January 2019, the FDA and EMA recommended against starting olaratumab for soft tissue sarcoma. In April 2019 the European Medicines Agency explicitly requested that the marketing authorisation of the medicine be revoked. Shortly afterwards the German Physician's Medicines Commission reported that olaratumab would be removed from the German market "in a few weeks" and asked doctors not to treat new patients with this drug outside of clinical trials. Lilly subsequently withdrew its approval in the United States voluntarily.
Contraindications
The drug has no contraindications apart from hypersensitivity reactions.
Side effects
In studies, the most serious side effects of the combination olaratumab/doxorubicin were neutropenia (low count of neutrophil white blood cells) with a severity of grade 3 or 4 in 55% of patients, and musculoskeletal pain grade 3 or 4 in 8% of patients. Common milder side effects were lymphopenia, headache, diarrhoea, nausea and vomiting, mucositis, and reactions at the infusion site; all typical effects of cancer therapies.
Interactions
No pharmacokinetic interactions with doxorubicin were observed in studies. Being a monoclonal antibody, olaratumab is neither metabolised by cytochrome P450 liver enzymes nor transported by transmembrane pumps, and is thus not expected to interact relevantly with other drugs.
Pharmacology
Mechanism of action
Olaratumab inhibits growth of tumour cells by blocking subunit alpha of the platelet-derived growth factor receptor, a type of tyrosine kinase.
Pharmacokinetics
After intravenous infusion, olaratumab has a volume of distribution of 7.7 litres at steady state and a biological half-life of 11 days.
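Assuming simple first-order elimination, which the quoted half-life implies, the fraction of drug remaining a time \(t\) after the end of an infusion is \(C(t)/C_0 = (1/2)^{t/t_{1/2}}\); with \(t_{1/2} = 11\) days, roughly 25% of the peak concentration would remain after 22 days. This is only an illustrative calculation, not dosing guidance.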
History
Olaratumab was originally developed by ImClone Systems, which was acquired by Eli Lilly in 2008. A Phase I clinical trial was conducted in Japanese patients in September 2010, followed by a Phase II trial in 133 patients, starting in October 2010.
In February 2015, the European Medicines Agency assigned olaratumab orphan drug status for the treatment of soft-tissue sarcoma. The European Commission granted a conditional marketing authorisation, based on the mentioned Phase II study, valid throughout the European Union on 9 November 2016.
Previously considered a promising drug, the FDA granted olaratumab fast track designation, breakthrough therapy designation and priority review status.
In October 2016, the US FDA issued an accelerated approval notice for use of olaratumab with doxorubicin to treat adults with certain types of soft-tissue sarcoma, based on the same study.
A phase III trial, completed in 2019, showed no benefit from the addition of olaratumab to doxorubicin. As noted above, these results led to the withdrawal of approval in the United States and Europe.
References
Monoclonal antibodies for tumors
Orphan drugs
Drugs developed by Eli Lilly and Company
Withdrawn drugs | Olaratumab | [
"Chemistry"
] | 882 | [
"Drug safety",
"Withdrawn drugs"
] |
26,602,975 | https://en.wikipedia.org/wiki/Package%20testing | Package testing or packaging testing involves the measurement of a characteristic or property involved with packaging. This includes packaging materials, packaging components, primary packages, shipping containers, and unit loads, as well as the associated processes.
Testing measures the effects and interactions of the levels of packaging, the package contents, external forces, and end-use.
It can involve controlled laboratory experiments, subjective evaluations by people, or field testing. Documentation is important: formal test method, test report, photographs, video, etc.
Testing can be a qualitative or quantitative procedure. Package testing is often a physical test. With some types of packaging such as food and pharmaceuticals, chemical tests are conducted to determine suitability of food contact materials. Testing programs range from simple tests with little replication to more thorough experimental designs.
Package testing can extend for the full life cycle. Packages can be tested for their ability to be recycled and their ability to degrade as surface litter, in a sealed landfill or under composting conditions.
Purposes
Packaging testing might have a variety of purposes, such as:
Determine if, or verify that, the requirements of a specification, regulation, or contract are met
Decide if a new product development program is on track: Demonstrate proof of concept
Provide standard data for other scientific, engineering, and quality assurance functions
Validate suitability for end-use
Provide a basis for technical communication
Provide a technical means of comparison of several options
Provide evidence in legal proceedings: product liability, patents, product claims, etc.
Help solve problems with current packaging
Help identify potential cost savings in packaging
Packaging tests can be used for:
Subjecting packages (and contents) to stresses and dynamics found in the field
Reproducing the types of damage to packages and contents found in actual shipments
Controlling the uniformity of production of packages or components
Importance
For some types of products, package testing is mandated by regulations: food, pharmaceuticals, medical devices, dangerous goods, etc. This may cover design qualification, periodic retesting, and control of the packaging processes. Processes may be controlled by a variety of quality management systems such as HACCP, statistical process control, validation protocols, ISO 9000, etc.
For unregulated products, testing can be required by a contract or governing specification. The degree of package testing can often be a business decision. Risk management may involve factors such as
costs of packaging
costs of package testing
value of contents being shipped
value of customer's good will
product liability exposure
other potential costs of inadequate packaging
etc.
With distribution packaging, one vital packaging development consideration is to determine whether a packaged product is likely to be damaged on its way to the final customer. A primary purpose of a package is to ensure the safety of a product during transportation and storage. If a product is damaged during this process, then the package has failed to accomplish a primary objective, and the customer may either return the product or stop purchasing it altogether.
Package testing is often a formal part of Project management programs. Packages are usually tested when there is a new packaging design, a revision to a current design, a change in packaging material, and various other reasons. Testing a new packaging design before full scale manufacturing can save time and money.
Laboratory affiliation
Many suppliers or vendors offer limited material and package testing as a free service to customers. It is common for packagers to partner with reputable suppliers: Many suppliers have certified quality management systems such as ISO 9000 or allow customers to conduct technical and quality audits. Data from testing is commonly shared. There is sometimes a risk that supplier testing may tend to be self-serving and not completely impartial.
Large companies often have their own packaging staff and a package testing and development laboratory. Corporate engineers know their products, manufacturing capabilities, logistics system, and their customers best. Cost reduction of existing products and cost avoidance for new products have been documented.
Another option is to use a paid consultant, independent contractor, or third-party independent testing laboratory. These are commonly chosen for specialized expertise, for access to certain test equipment, for surge projects, or where independent testing is otherwise required. Many have certifications and accreditations, such as ISO 9000 and ISO/IEC 17025, and recognition from various governing agencies.
Procedures
Several standards organizations publish test methods for package testing. Included are:
International Organization for Standardization, ISO
ASTM International
European Committee for Standardization, CEN
TAPPI
Military Standards
ISTA (International Safe Transit Association)
Governments and regulators publish some packaging test methods. There are also many corporate test standards in use. A review of technical literature and patents provides good options to consider for test procedures.
Researchers are not restricted to the use of published standards but can modify existing test methods or develop procedures specific to their particular needs. If a test is conducted with a deviation from a published test method or if a new method is employed, the test report must fully disclose the procedure.
Materials testing
The basis of packaging design and performance is the component materials. The physical properties, and sometimes chemical properties, of the materials need to be communicated to packaging engineers to aid in the design process. Suppliers publish data sheets and other technical communications that include the typical or average relevant physical properties and the test method these are based upon. Sometimes these are adequate. Other times, additional material and component testing is required by the packager or supplier to better define certain characteristics.
When a final package design is complete, the specifications for the component materials needs to be communicated to suppliers. Packaging materials testing is often needed to identify the critical material characteristics and engineering tolerances. These are used to prepare and enforce specifications.
For example, shrink film data might include: tensile strength (MD and CD), elongation, Elastic modulus, surface energy, thickness, Moisture vapor transmission rate, Oxygen transmission rate, heat seal strength, heat sealing conditions, heat shrinking conditions, etc. Average and process capability are often provided. The chemical properties related for use as Food contact materials may be necessary.
Testing with people
Some types of package testing do not use scientific instruments but use people for the evaluation.
The regulations for child-resistant packaging require a test protocol that involves children. Samples of the test packages are given to a prescribed population of children. With specified 50-child panels, a high percentage must be unable to open a test package within 5 minutes.
Adults are also tested for their ability to open a child-resistant package.
Consumer packages are often evaluated by focus groups. People evaluate the package features in a room monitored by video cameras. The consumer responses are treated qualitatively for feedback into the new packaging process.
Some food packagers use organoleptic evaluations. People use their senses (taste, smell, etc.) to determine if a package component has tainted the food in the package.
A new package may be evaluated in a test market that uses people to try the packages at home. Consumers have the opportunity to buy a product, perhaps with a coupon or discount. Return postcards or Internet sites provide feedback to package developers. Perhaps the most critical feedback is repeated sales items in the new package. Packaging evaluations are an important part of marketing research.
Legibility of text on packaging and labels is always somewhat subjective because of the inherent variation among people. Efforts have been made to quantify legibility more consistently in the laboratory: people still perform the evaluation, but a test apparatus is employed to help reduce variability.
Some laboratory tests are conducted but still result in an observation by people. Some test procedures call for a judgment by test engineers whether or not pre-established acceptance criteria have been met.
Relevant standards
ASTM D7298 Test Method for Measurement of Comparative Legibility by Means of Polarizing Filter Instrumentation.
ASTM E460 Practice for Determining Effect of Packaging on Food and Beverage Products During Storage
ASTM E619 Practice for Evaluating Foreign Odors in Paper Packaging
ASTM E1870 Test Method for Odor and Taste Transfer from Polymeric Packaging Film
ASTM 2609 Test Method for Odor and Flavor Transfer from Rigid Polymeric Packaging
ISO 16820 Sensory Analysis – Methodology – Sequential Analysis
ISO 5495 Sensory Analysis – Methodology – Paired Comparisons
ISO 13302 Sensory Analysis – Methods for assessing modifications to the flavour of foodstuffs due to packaging
Conditioning, testing atmosphere
The environmental conditions of testing are critical. The measured performance of many packages is affected by the conditioning and testing atmospheres. For example, paper based products are strongly affected by their moisture content: Relative humidity needs to be controlled. Plastic products are often strongly affected by temperature.
Conditions of 23 °C (73.4 °F) and 50% relative humidity are common but other standard testing conditions are also published in material and package test standards. Engineering tolerances for the conditions are also specified. Often the package is conditioned to the specified environment and tested under those conditions. This can be in a conditioned room or in a chamber enclosing the test. With some testing, the package is conditioned to a specified environment, then is removed to ambient conditions and quickly tested. The test report needs to state the actual conditions used.
Engineers have found it important to know the effects of the full range of expected conditions on package performance. This can be through investigating published technical literature, obtaining supplier documentation, or by conducting controlled tests at diverse conditions.
Relevant standards
ASTM D4332 - Standard Practice for Conditioning Containers, Packages, or Packaging Components for Testing
ASTM E171 - Standard Specification for Standard Atmospheres for Conditioning and Testing Flexible Barrier Materials
ASTM F2825 - Standard Practice for Climate Stressing of Packaging Systems for Single Parcel Delivery
Degradation of product
Laboratory tests can help determine the shelf life of a package and its contents under a variety of conditions. This is particularly important for foods, pharmaceuticals, some chemicals, and a variety of products. The testing is usually product specific: the mechanisms of degradation are often different. Exposures to expected and elevated temperatures and humidities are commonly used for shelf life testing. The ability of packaging to control product degradation is frequently a subject of laboratory and field evaluations.
Relevant tests
ASTM E2454 Standard Guide for Sensory Evaluation Methods to Determine the Sensory Shelf -life of Consumer Products
DoD 4140.27M Shelf Life Management Manual, 2000
ISO 11987 Ophthalmic Optics, Contact Lenses, Determination of Shelf Life
Barrier properties
Many products degrade with exposure to the atmosphere: foods, pharmaceuticals, chemicals, etc. The ability of a package to control the permeation and penetration of gasses is vital for many types of products. Tests are often conducted on the packaging materials but also on the completed packages, sometimes after being subjected to flexing, handling, vibration, or temperature.
Degradation of packages
Packages can degrade with exposure to temperature, humidity, time, sterilization (steam, radiation, gas, etc.), sunlight, and other environmental factors. For some types of packaging, it is common to test for possible corrosion of metals, polymer degradation, and weather testing of polymers. Several types of accelerated aging of packaging and materials can be accomplished in a laboratory.
Exposure to elevated temperatures accelerates some degradation mechanisms. An Arrhenius equation is often used to correlate certain chemical reactions at different temperatures, based on the proper choice of Q10 coefficients.
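A commonly used relation in accelerated-aging work (the basis of guides such as ASTM F1980) expresses the accelerated aging factor (AAF) in terms of the chosen Q10 coefficient; the numbers below are purely illustrative:

\( \mathrm{AAF} = Q_{10}^{\,(T_{AA} - T_{RT})/10} \)

For example, with \( Q_{10} = 2 \), an accelerated-aging temperature \( T_{AA} = 55\,^{\circ}\mathrm{C} \) and a reference (room) temperature \( T_{RT} = 25\,^{\circ}\mathrm{C} \), the factor is \( 2^{(55-25)/10} = 2^{3} = 8 \), so about 8 weeks of accelerated aging would simulate roughly 64 weeks of real-time storage under these assumptions.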
As with any laboratory testing, validation through field trials is important.
Relevant tests
ASTM D3045 - Standard Practice for Heat Aging of Plastics without Load
ASTM F1640 - Standard Guide for Packaging Materials for Foods to be Irradiated
ASTM F1980 – Standard Guide for Accelerated Aging of Sterile Barrier Systems for Medical Devices
ASTM G151 - Standard Practice for Exposing Non-metallic Materials in Accelerated Test Devices that are Laboratory Light Sources
Vacuum testing
Vacuum chambers are used to test the ability of a package to withstand low pressures. This can be to:
Determine the ability of packages to withstand low pressures that might be encountered; this could be in an air shipment or a high-altitude truck shipment.
A laboratory vacuum places controlled stress on a sealed package to test the strength of seals, the tendency for leakage, and the ability to retain sterility.
Relevant tests
ASTM D3078- Standard Test Method for Determination of Leaks in Flexible Packaging by Bubble Emission
ASTM D4991- Standard Test Method for Leakage Testing of Empty Rigid Containers by Vacuum Method
ASTM D6653- Standard Test Methods for Determining the Effects of High Altitude on Packaging Systems by Vacuum Method
ASTM D6834- Standard Test Method for Determining Product Leakage from a Package with a Mechanical Pump Dispenser
ASTM E493- Standard Test Methods for Leaks Using the Mass Spectrometer Leak Detector in the Inside-Out Testing Mode
ASTM F2338- Standard Test Method for Nondestructive Detection of Leaks in Packages by Vacuum Decay Method
ASTM F2391- Standard Test Method for Measuring Package and Seal Integrity Using Helium as the Tracer Gas
Shock and impact
Both primary (consumer) packages and shipping containers have a risk of being dropped or being impacted by other items. Package integrity and product protection are important packaging functions. Tests are conducted to measure the resistance of packages and products to controlled laboratory shock and impact.
Testing also determines the effectiveness of package cushioning to isolate fragile products from shock. Instrumentation is used to measure the shock transmitted to a cushioned product. Simple drop tests can be used for evaluations.
Relevant tests
ASTM D880- Standard Test Method for Impact Testing for Shipping Containers and Systems
ASTM D1596- Standard Test Method for Dynamic Shock Cushioning Characteristics of Packaging Materials
ASTM D3332- Standard Test Methods for Mechanical-Shock Fragility of Products, Using Shock Machines
ASTM D4003- Standard Test Methods for Programmable Horizontal Impact Test for Shipping Containers and Systems
ASTM D5265- Standard Test Method for Bridge Impact Testing
ASTM D5276- Standard Test Method for Drop Test of Loaded Containers by Free Fall
ASTM D5277- Standard Test Method for Performing Programmed Horizontal Impacts Using an Inclined Impact Tester
ASTM D5487- Standard Test Method for Simulated Drop of Loaded Containers by Shock Machines
ASTM D6344- Standard Test Method for Concentrated Impacts to Transport Packages
ASTM D6537- Standard Practice for Instrumented Package Shock Testing For Determination of Package Performance
MIL-STD-810G
Package insulation
Many packages are used for products that are sensitive to temperature. The ability of insulated shipping containers to protect their contents from exposure to temperature fluctuations can be measured in a laboratory. The testing can be of empty containers or of full containers with appropriate jell or ice packs, contents, etc. Ovens, freezers, and environmental chambers are commonly used for this and other types of packaging. Effects of shock and vibration on thermal performance may also be studied.
Digital temperature data loggers are used to measure temperatures experienced in different distribution systems. This data is sometimes used to develop unique laboratory test methods for that distribution system.
Relevant tests
ASTM D3103-Standard Test Method for Thermal Insulation Performance of Distribution Packages
ISTA 7E – Testing Standard for Thermal Transport Packaging Used in Parcel Delivery System Shipment
Thermal shock
Some packages, particularly glass, can be sensitive to sudden changes in temperature: Thermal shock. One method of testing involves rapid movement from cold to hot water baths, and back.
Relevant tests
ASTM C149 -Standard Test Method for Thermal Shock Resistance of Glass Containers
MIL-STD-810G METHOD 503.5
Handles
Package handles (and hand holes in packages) assist carrying and handling packages. Objective laboratory procedures are frequently used to help determine performance. Fixtured "hands" of various designs are used to hold a handle (sometimes two handles for a box). Most common are "jerk testing" by modified drop test procedures or use of the constant pull rates of a universal testing machine. Other procedures use a static force by hanging a heavily loaded package for an extended time or even using a centrifuge.
Relevant tests
ASTM D6804, Standard Guide for Hand Hole Design in Corrugated Boxes, Appendix
ASTM F852 Specification for Portable Gasoline, Kerosene, and Diesel Containers for Consumer Use, section 7.2
Centrifugal test of beverage carrier handle
Vibration
Vibration is encountered during shipping (vehicle vibration, rough roads, etc.) and movement on conveyors. Potential vibration damage may include:
fractures and fatigue damage
loose wires, screw caps, etc.
bruises on soft products (fruit, etc.)
surface abrasion
etc.
The ability of a package to withstand these vibrations and to protect the contents can be measured by several laboratory test procedures. Some allow searching for the particular frequencies of vibration that have potential for damage. Modal testing methodologies are sometimes employed. Others use specified bands of random vibration to better represent complex vibrations measured in field studies of distribution environments.
Relevant tests
ASTM D999- Standard Test Methods for Vibration Testing of Shipping Containers
ASTM D3580-Standard Test Methods for Vibration (Vertical Linear Motion) Test of Products
ASTM D4728- Standard Test Method for Random Vibration Testing of Shipping Containers
ASTM D5112- Standard Test Method for Vibration (Horizontal Linear Sinusoidal Motion) Test of Products
ASTM D7387- Standard Test Method for Vibration Testing of Intermediate Bulk Containers (IBCs) Used for Shipping Liquid Hazardous Materials (Dangerous Goods)
Compression
Compression testing relates to the stacking or crushing of packages, particularly shipping containers. It usually measures the force required to crush a package, a stack of packages, or a unit load. Packages can be empty or filled as for shipment. A force-deflection curve is used to obtain the peak load or other desired points. Other tests use a constant load and measure the time to failure or to a critical deflection.
Dynamic compression is sometimes tested by shock or impact testing with an additional load to crush the test package. Dynamic compression also takes place in stacked vibration testing.
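As a small illustration of reading the peak load off a force-deflection curve, the sketch below (in Python, with made-up example values) simply takes the maximum recorded force and notes the deflection at which it occurred:

# Illustrative force-deflection data for a compression test: (deflection in mm, force in N)
data = [(0.0, 0), (1.0, 850), (2.0, 1900), (3.0, 2600), (4.0, 2450), (5.0, 1700)]

# Peak load is the maximum force; the corresponding deflection is often reported with it
peak_deflection, peak_force = max(data, key=lambda point: point[1])
print(f"Peak load {peak_force} N at {peak_deflection} mm deflection")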
Relevant Tests
ASTM Standard D642 Test Method for Determining Compressive Resistance of Shipping Containers, Components, and Unit Loads.
ASTM Standard D4577 Test Method for Compression Resistance of a Container Under Constant Load
ASTM Standard D7030 Test Method for Short Term Creep Performance of Corrugated Fiberboard Containers Under Constant Load Using a Compression Test Machine
German Standard DIN 55440-1 Packaging Test; compression test; test with a constant conveyance-speed
ISO 12048 Packaging—Complete, filled transport packages—Compression and stacking tests using a compression tester
Large loads
Large pallet loads, bulk boxes, wooden boxes, and crates can be evaluated by many of the other test procedures previously listed. In addition, some special test methods are available for these larger loads.
Relevant tests
ASTM D5331- Standard Test Method for Evaluation of Mechanical Handling of Unitized Loads Secured with Stretch Wrap Films
ASTM D5414- Standard Test Method for Evaluation of Horizontal Impact Performance of Load Unitizing Stretch Wrap Films
ASTM D5415- Standard Test Method for Evaluating Load Containment Performance of Stretch Wrap Films by Vibration Testing
ASTM D5416- Standard Test Method for Evaluating Abrasion Resistance of Stretch Wrap Films by Vibration Testing
ASTM D6055- Standard Test Methods for Mechanical Handling of Unitized Loads and Large Shipping Cases and Crates
ASTM D6179- Standard Test Methods for Rough Handling of Unitized Loads and Large Shipping Cases and Crates
ISO 10531- Stability testing of unit loads
Bar codes
Package bar codes are evaluated for several aspects of legibility by bar code verifiers as part of a continuing quality program. More thorough validation may include evaluations after use (and abuse) testing such as sunlight, abrasion, impact, moisture, etc.
Relevant tests
ISO/IEC 15426 Information technology - Automatic identification and data capture techniques - Bar code verifier conformance specification - Part 1: Linear symbols, Part 2: Two-dimensional symbols
Test protocols for shipping containers
Shipping containers are often subjected to sequential tests involving a combination of individual test methods. A variety of standard test schedules or protocols are available for evaluating transport packaging. They are used to help determine the ability of complete and filled shipping containers to various types of logistics systems. Some test the general ruggedness of the shipping container while others have been shown to reproduce the types of damage encountered in distribution. Some base the type and severity of testing on formal studies of the distribution environment: instrumentation, data loggers, and observation. Test cycles with these documented elements better simulate parts of certain logistics shipping environments.
ASTM International
ASTM D4169- Standard Practice for Performance Testing of Shipping Containers and Systems
ASTM D7386- Standard Practice for Performance Testing of Packages for Single Parcel Delivery Systems.
ISO
ISO 4180:2009 Packaging – Complete filled transport packages – General rules for the compilation of performance test schedules
International Safe Transit Association
Procedure 1A: Packaged-Products weighing 150 lb (68 kg) or Less
Procedure 1B: Packaged-Products weighing Over 150 lb (68 kg)
Procedure 1C: Extended Testing for Individual Packaged-Products weighing 150 lb (68 kg) or Less
Procedure 1D: Extended Testing for Individual Packaged-Products weighing Over 150 lb (68 kg)
Procedure 1E: Unitized Loads
Procedure 1G: Packaged-Products weighing 150 lb (68 kg) or Less (Random Vibration)
Procedure 1H: Packaged-Products weighing Over 150 lb (68 kg) (Random Vibration)
Procedure 2A: Packaged-Products weighing 150 lb (68 kg) or Less
Procedure 2B: Packaged-Products weighing over 150 lb (68 kg)
Procedure 2C: Furniture Packages
Procedure 3A: Packaged-Products for Parcel Delivery System Shipments 70 kg (150 lb) or Less (standard, small, flat or elongated)
Procedure 3B: Packaged-Products for Less-Than-Truckload (LTL) Shipment
Procedure 3E: Unitized Loads of Same Product
Procedure 3F: Packaged Products for Distribution Center to Retail Outlet Shipment 100 lb (45 kg)
Procedure 3H: Performance Test for Products or Packaged-Products in Mechanically Handled Bulk Transport Containers
Project 3K: Fast Moving Consumer Goods for the European Retail Supply Chain
Project 4AB: Enhanced Simulation Performance Tests (online test planner)
6-FEDEX-A: FedEx Procedures for Testing Packaged Products Weighing Up to 150 lbs
6-FEDEX-B: FedEx Procedures for Testing Packaged Products Weighing Over 150 lbs
6-SAMSCLUB, Packaged-Products for Sam's Club Distribution System Shipment
Procedure 7D: Thermal Controlled Transport Packaging for Parcel Delivery System Shipment
ISTA 7E: Testing Standard for Thermal Transport Packaging Used in Parcel Delivery System
Field trials
Laboratory testing can often help identify shipping container constructions that, in general, should perform well in the field. Of course, laboratory tests cannot fully reproduce the full range of field hazards, their magnitudes, nor their frequency. Field experiments are often conducted to help validate the laboratory testing.
The advantage of laboratory testing is that it subjects replicate packages to identical sets of test sequences: a relatively small number of samples often can suffice. Field hazards, by their nature, are highly variable: thus repeated shipments do not receive the same types or magnitudes of drops, vibrations, kicks, impacts, abrasion, etc. Because of this uncontrolled variability, more replicate sample shipments are often necessary.
Larger scale test markets are used to give additional assurance of performance and acceptability for a new or revised packaged-product. Feedback is carefully obtained and evaluated. Feedback on package performance continues when full production and distribution have been achieved.
Product requirements
In addition, package testing often relates to the specific product inside the package. Some broad categories of products and special package testing considerations follow:
Food packaging
Food categories such as fresh produce, frozen foods, irradiated foods, fresh fish, canned foods, etc., have regulatory requirements and special packaging needs. Package testing often relates to:
Food safety
Compatibility of the package with the food
Migration of material from the packaging to the food
Shelf life
Barrier properties, porosity, package atmosphere, etc
Special quality assurance needs, good manufacturing practices, HACCP, validation protocols, etc
Pharmaceutical packaging
Packaging for drugs and pharmaceuticals is highly regulated. Special testing needs include:
Safety of drugs and pharmaceuticals
Barrier properties
Shelf life
Compatibility of package with the drugs
Sterility
Tamper resistance, child resistance, etc
Special quality assurance needs, good manufacturing practices, validation protocols, etc
Medical packaging
Packaging for medical materials, medical devices, health care supplies, etc., has special user requirements and is highly regulated. Barrier properties, durability, visibility, sterility, and strength need to be controlled, usually with documented test results for initial designs and for production.
Assurance of sterility and suitability for use are critical. For example, medical devices and products are often sterilized in the package. The sterility must be maintained throughout distribution to allow immediate use by physicians. A series of special packaging tests is used to measure the ability of the package to maintain sterility. Verification and validation protocols are rigidly maintained.
Relevant standards
ASTM F88/F88M - Standard Test Method for Seal Strength of Flexible Barrier Materials
ASTM F1585 – Guide for Integrity Testing of Porous Medical Packages
ASTM D3078 – Standard Test Method for Detection of Leaks in Flexible Packaging (Bubble)
ASTM F1140 – Standard Test Methods for Internal Pressurization Failure Resistance of Unrestrained Packages
ASTM F1608 – Standard Test Method for Microbial Ranking of Packaging Materials
ASTM F1929 – Standard Test Method for Detecting Seal Strength in Porous Medical Packaging by Dye Penetration
ASTM F1980 – Standard Guide for Accelerated Aging of Sterile Barrier Systems for Medical Devices
ASTM F2054 – Standard Test Method for Burst Testing of Flexible Package Seals Using Internal Air Pressurization Within Restraining Plates
ASTM F2095 – Standard Test Methods for Pressure Decay Leak Test for Flexible Packages With and Without Restraining Plates
ASTM F2096 – Standard Test Method for Detecting Gross Leaks in Medical Packaging by Internal Pressurization
ASTM F2097 – Standard Guide for Design and Evaluation of Primary Flexible Packaging for Medical Products
ASTM F2228 – Standard Test Method for Non-Destructive Detection of Leaks in Medical Packaging Which Incorporates Porous Barrier Material by CO2 Tracer Gas
ASTM F2391 – Standard Test Method for Measuring Package and Seal Integrity using Helium as the Tracer Gas
ASTM F3039 - Standard Test Method for Detecting Leaks in Nonporous Packaging or Flexible Barrier Materials by Dye Penetration
EN 868-1 – Packaging materials and systems for medical devices which are to be sterilized. General requirements and test methods (superseded by ISO 11607-1)
EN 868-5 – Packaging for terminally sterilized medical devices. Part 5: Sealable pouches and reels of porous materials and plastic film construction - Requirements and test methods. (Per ISO 11607-1, Annex B, Table B.1, this method may be used to demonstrate conformity with provisions of ISO 11607-1)
ISO 11607-1 – Packaging for terminally sterilized medical devices -- Part 1: Requirements for materials, sterile barrier systems and packaging systems
ISO 11607-2 – Packaging for terminally sterilized medical devices -- Part 2: Validation for Forming, Sealing, and Assembly Processes
Dangerous Goods
Packaging of hazardous materials, or dangerous goods, is highly regulated. There are some material and construction requirements, but performance testing is also required. The testing is based on the packing group (hazard level) of the contents, the quantity of material, and the type of container.
Research into improvements is continuing.
Relevant standards
ASTM D4919- Standard Specification for Testing of Hazardous Materials Packaging
ASTM D7387- Standard Test Method for Vibration Testing of Intermediate Bulk Containers (IBCs) Used for Shipping Liquid Hazardous Materials (Dangerous Goods)
ASTM D7760 Standard Guide for Conducting Internal Pressure Tests on United Nations (UN) Packagings
ASTM D7887 Standard Guide for Selection of Substitute, Non-hazardous, Liquid Filling Substances for Packagings Subjected to the United Nations Performance Tests
ASTM D7790: Standard Guide for Preparation of Plastic Packagings Containing Liquids for United Nations (UN) Drop Testing
UN Recommendations on the Transport of Dangerous Goods
ISO 16104 – 2003 Packaging – Transport packaging for dangerous goods – Test methods
See also
Data analysis
Nondestructive testing
Verification and validation
References
Books, General References
ASTM STP 1294 Durability Testing of Nonmetallic Materials, 1996
Lockhart, H., and Paine, F.A., "Packaging of Pharmaceuticals and Healthcare Products", 2006, Blackie,
Meisner, "Transport Packaging", Third Edition, IoPP, 2016
Pilchik, R., "Validating Medical Packaging" 2002,
Robertson, G. L., "Food Packaging", 2005,
Russel, P G, and Daum, M P, "Product Protection Test Book", IoPP
Soroka, W, "Fundamentals of Packaging Technology", IoPP, 2002,
Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009,
Guidelines for Selecting and Using ISTA Procedures and Projects, ISTA, 2013
External links
Institute of Packaging Professionals
International Safe Transit Association (ISTA)
American Society for Testing and Materials
Safe Load Testing Technologies, 'All you need to know about Amazon ISTA Test', 2018
Test Method Validation, Westpak, 2021
Testing
Industrial engineering
Environmental testing
Product testing | Package testing | [
"Engineering"
] | 5,956 | [
"Environmental testing",
"Reliability engineering",
"Industrial engineering"
] |
26,602,636 | https://en.wikipedia.org/wiki/Beethoven%20Burst%20%28GRB%20991216%29 | GRB 991216, nicknamed the Beethoven Burst by Dr. Brad Schaefer of Yale University, was a gamma-ray burst observed on December 16, 1999, coinciding with the 229th anniversary of Ludwig van Beethoven's birth. A gamma-ray burst is a highly luminous flash associated with an explosion in a distant galaxy that produces gamma rays, the most energetic form of electromagnetic radiation, and is often followed by a longer-lived "afterglow" emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, and radio).
Overview
The optical afterglow of the burst reached an apparent magnitude of 18.7, making the Beethoven Burst one of the brightest bursts ever detected, even though it occurred about 10 billion light-years from Earth. Frank Marshall, a NASA astrophysicist at the Goddard Space Flight Center, commented that "this was by far the brightest burst we have detected in a long time." The burst's peak flux ranked it as the second most powerful burst that the Burst and Transient Source Experiment (BATSE) had ever detected. The analysis of the observations strengthened the theory that gamma-ray bursts result from hypernovae, though other possible progenitors exist, such as the merger of two black holes.
Within four hours of the burst's detection, observations made by BATSE and the Rossi X-ray Timing Explorer were able to determine the burst's position as α = 77.38° ± 0.04°, δ = 11.30° ± 0.05°. This rapid determination allowed astronomers to conduct follow-up studies using optical and X-ray telescopes. Other instruments which detected GRB 991216 included the Chandra X-ray Observatory, the MDM Observatory, and the Space Telescope Imaging Spectrograph on board the Hubble Space Telescope. This was the first use of the Chandra X-ray Observatory for the purpose of gamma-ray burst detection.
References
19991216
Astronomical objects discovered in 1999
December 1999
Orion (constellation) | Beethoven Burst (GRB 991216) | [
"Physics",
"Astronomy"
] | 406 | [
"Physical phenomena",
"Orion (constellation)",
"Astronomical events",
"Constellations",
"Gamma-ray bursts",
"Stellar phenomena"
] |
26,605,226 | https://en.wikipedia.org/wiki/Post-quantum%20cryptography | Post-quantum cryptography (PQC), sometimes referred to as quantum-proof, quantum-safe, or quantum-resistant, is the development of cryptographic algorithms (usually public-key algorithms) that are currently thought to be secure against a cryptanalytic attack by a quantum computer. Most widely-used public-key algorithms rely on the difficulty of one of three mathematical problems: the integer factorization problem, the discrete logarithm problem or the elliptic-curve discrete logarithm problem. All of these problems could be easily solved on a sufficiently powerful quantum computer running Shor's algorithm or even faster and less demanding (in terms of the number of qubits required) alternatives.
While, as of 2024, quantum computers lack the processing power to break widely used cryptographic algorithms, cryptographers are designing new algorithms to prepare for Y2Q or Q-Day, the day when current algorithms will be vulnerable to quantum computing attacks. Their work has gained attention from academics and industry through the PQCrypto conference series hosted since 2006, several workshops on Quantum Safe Cryptography hosted by the European Telecommunications Standards Institute (ETSI), and the Institute for Quantum Computing. The rumoured existence of widespread harvest now, decrypt later programs has also been seen as a motivation for the early introduction of post-quantum algorithms, as data recorded now may still remain sensitive many years into the future.
In contrast to the threat quantum computing poses to current public-key algorithms, most current symmetric cryptographic algorithms and hash functions are considered to be relatively secure against attacks by quantum computers. While the quantum Grover's algorithm does speed up attacks against symmetric ciphers, doubling the key size can effectively block these attacks. Thus post-quantum symmetric cryptography does not need to differ significantly from current symmetric cryptography.
On August 13, 2024, the U.S. National Institute of Standards and Technology (NIST) released final versions of its first three Post Quantum Crypto Standards.
Algorithms
Post-quantum cryptography research is mostly focused on six different approaches:
Lattice-based cryptography
This approach includes cryptographic systems such as learning with errors, ring learning with errors (ring-LWE), the ring learning with errors key exchange and the ring learning with errors signature, the older NTRU or GGH encryption schemes, and the newer NTRU signature and BLISS signatures. Some of these schemes like NTRU encryption have been studied for many years without anyone finding a feasible attack. Others like the ring-LWE algorithms have proofs that their security reduces to a worst-case problem. The Post Quantum Cryptography Study Group sponsored by the European Commission suggested that the Stehle–Steinfeld variant of NTRU be studied for standardization rather than the NTRU algorithm. At that time, NTRU was still patented. Studies have indicated that NTRU may have more secure properties than other lattice based algorithms.
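As a rough illustration of the learning-with-errors idea underlying many of these schemes, the following Python sketch implements Regev-style encryption of a single bit. The dimension, modulus, error distribution and subset-sum encryption used here are deliberately tiny and insecure; they are illustrative assumptions, not the parameters of any standardized scheme.

import random

n, q, m = 8, 257, 20  # toy dimension, modulus and number of samples (insecure)

def keygen():
    s = [random.randrange(q) for _ in range(n)]                      # secret vector
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]  # public matrix
    b = [(sum(aij * sj for aij, sj in zip(row, s)) + random.choice([-1, 0, 1])) % q
         for row in A]                                               # b = A*s + small error
    return s, (A, b)

def encrypt(public_key, bit):
    A, b = public_key
    subset = [i for i in range(m) if random.random() < 0.5]          # random subset of samples
    c1 = [sum(A[i][j] for i in subset) % q for j in range(n)]
    c2 = (sum(b[i] for i in subset) + bit * (q // 2)) % q
    return c1, c2

def decrypt(s, ciphertext):
    c1, c2 = ciphertext
    v = (c2 - sum(c * sj for c, sj in zip(c1, s))) % q               # = accumulated error + bit*q/2
    return 0 if min(v, q - v) < q // 4 else 1

s, public_key = keygen()
for bit in (0, 1):
    assert decrypt(s, encrypt(public_key, bit)) == bit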
Multivariate cryptography
This includes cryptographic systems such as the Rainbow (Unbalanced Oil and Vinegar) scheme which is based on the difficulty of solving systems of multivariate equations. Various attempts to build secure multivariate equation encryption schemes have failed. However, multivariate signature schemes like Rainbow could provide the basis for a quantum secure digital signature. The Rainbow Signature Scheme is patented.
Hash-based cryptography
This includes cryptographic systems such as Lamport signatures, the Merkle signature scheme, the XMSS, the SPHINCS, and the WOTS schemes. Hash based digital signatures were invented in the late 1970s by Ralph Merkle and have been studied ever since as an interesting alternative to number-theoretic digital signatures like RSA and DSA. Their primary drawback is that for any hash-based public key, there is a limit on the number of signatures that can be signed using the corresponding set of private keys. This fact reduced interest in these signatures until interest was revived due to the desire for cryptography that was resistant to attack by quantum computers. There appear to be no patents on the Merkle signature scheme and there exist many non-patented hash functions that could be used with these schemes. The stateful hash-based signature scheme XMSS developed by a team of researchers under the direction of Johannes Buchmann is described in RFC 8391.
Note that all the above schemes are one-time or bounded-time signatures. Moni Naor and Moti Yung invented UOWHF hashing in 1989 and designed a hash-based signature (the Naor–Yung scheme) which can be used an unlimited number of times (the first such signature that does not require trapdoor properties).
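The one-time character of these schemes is easy to see in a minimal Lamport signature, sketched below in Python with SHA-256 as an illustrative hash: signing reveals half of the secret values, so each key pair can safely sign only one message.

import hashlib, secrets

def H(data):
    return hashlib.sha256(data).digest()

def keygen():
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]        # the public key is the hashes of the secrets
    return sk, pk

def message_bits(message):
    digest = H(message)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message):
    return [sk[i][bit] for i, bit in enumerate(message_bits(message))]   # reveal one secret per bit

def verify(pk, message, signature):
    return all(H(s) == pk[i][bit]
               for i, (s, bit) in enumerate(zip(signature, message_bits(message))))

sk, pk = keygen()
assert verify(pk, b"post-quantum", sign(sk, b"post-quantum"))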
Code-based cryptography
This includes cryptographic systems which rely on error-correcting codes, such as the McEliece and Niederreiter encryption algorithms and the related Courtois, Finiasz and Sendrier signature scheme. The original McEliece cryptosystem using random Goppa codes has withstood scrutiny for over 40 years. However, many variants of the McEliece scheme, which seek to introduce more structure into the code used in order to reduce the size of the keys, have been shown to be insecure. The Post Quantum Cryptography Study Group sponsored by the European Commission has recommended the McEliece public key encryption system as a candidate for long term protection against attacks by quantum computers.
Isogeny-based cryptography
These cryptographic systems rely on the properties of isogeny graphs of elliptic curves (and higher-dimensional abelian varieties) over finite fields, in particular supersingular isogeny graphs, to create cryptographic systems. Among the more well-known representatives of this field are the Diffie–Hellman-like key exchange CSIDH, which can serve as a straightforward quantum-resistant replacement for the Diffie-Hellman and elliptic curve Diffie–Hellman key-exchange methods that are in widespread use today, and the signature scheme SQIsign which is based on the categorical equivalence between supersingular elliptic curves and maximal orders in particular types of quaternion algebras. Another widely noticed construction, SIDH/SIKE, was spectacularly broken in 2022. The attack is however specific to the SIDH/SIKE family of schemes and does not generalize to other isogeny-based constructions.
Symmetric key quantum resistance
Provided one uses sufficiently large key sizes, the symmetric key cryptographic systems like AES and SNOW 3G are already resistant to attack by a quantum computer. Further, key management systems and protocols that use symmetric key cryptography instead of public key cryptography like Kerberos and the 3GPP Mobile Network Authentication Structure are also inherently secure against attack by a quantum computer. Given its widespread deployment in the world already, some researchers recommend expanded use of Kerberos-like symmetric key management as an efficient way to get post quantum cryptography today.
Security reductions
In cryptography research, it is desirable to prove the equivalence of a cryptographic algorithm and a known hard mathematical problem. These proofs are often called "security reductions", and are used to demonstrate the difficulty of cracking the encryption algorithm. In other words, the security of a given cryptographic algorithm is reduced to the security of a known hard problem. Researchers are actively looking for security reductions in the prospects for post quantum cryptography. Current results are given here:
Lattice-based cryptography – Ring-LWE Signature
In some versions of Ring-LWE there is a security reduction to the shortest-vector problem (SVP) in a lattice as a lower bound on the security. The SVP is known to be NP-hard. Specific ring-LWE systems that have provable security reductions include a variant of Lyubashevsky's ring-LWE signatures defined in a paper by Güneysu, Lyubashevsky, and Pöppelmann. The GLYPH signature scheme is a variant of the Güneysu, Lyubashevsky, and Pöppelmann (GLP) signature which takes into account research results that have come after the publication of the GLP signature in 2012. Another Ring-LWE signature is Ring-TESLA. There also exists a "derandomized variant" of LWE, called Learning with Rounding (LWR), which yields "improved speedup (by eliminating sampling small errors from a Gaussian-like distribution with deterministic errors) and bandwidth". While LWE utilizes the addition of a small error to conceal the lower bits, LWR utilizes rounding for the same purpose.
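In rough terms, an LWE sample hides the secret behind a small random additive error, while an LWR sample replaces that error by deterministic rounding to a smaller modulus; the formulas below are a simplified summary, not the exact parameterisation of the cited constructions:

LWE: \( b = \langle a, s \rangle + e \pmod{q} \), with \( e \) small

LWR: \( b = \lfloor (p/q)\,\langle a, s \rangle \rceil \pmod{p} \), with \( p < q \)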
Lattice-based cryptography – NTRU, BLISS
The security of the NTRU encryption scheme and the BLISS signature is believed to be related to, but not provably reducible to, the closest vector problem (CVP) in a lattice. The CVP is known to be NP-hard. The Post Quantum Cryptography Study Group sponsored by the European Commission suggested that the Stehle–Steinfeld variant of NTRU, which does have a security reduction, be studied for long term use instead of the original NTRU algorithm.
Multivariate cryptography – Unbalanced oil and vinegar
Unbalanced Oil and Vinegar signature schemes are asymmetric cryptographic primitives based on multivariate polynomials over a finite field. Bulygin, Petzoldt and Buchmann have shown a reduction of generic multivariate quadratic UOV systems to the NP-hard multivariate quadratic equation solving problem.
Hash-based cryptography – Merkle signature scheme
In 2005, Luis Garcia proved that there was a security reduction of Merkle Hash Tree signatures to the security of the underlying hash function. Garcia showed in his paper that if computationally one-way hash functions exist then the Merkle Hash Tree signature is provably secure.
Therefore, if one used a hash function with a provable reduction of security to a known hard problem one would have a provable security reduction of the Merkle tree signature to that known hard problem.
The Post Quantum Cryptography Study Group sponsored by the European Commission has recommended use of Merkle signature scheme for long term security protection against quantum computers.
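The construction underlying this reduction uses nothing beyond repeated hashing: the long-term public key is the root of a hash tree built over many one-time public keys. The Python sketch below, using SHA-256 as an illustrative hash, shows the root computation:

import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [H(leaf) for leaf in leaves]      # hash each one-time public key
    while len(level) > 1:
        if len(level) % 2:                    # duplicate the last node if the level is odd
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

one_time_keys = [("otp-key-%d" % i).encode() for i in range(8)]
print(merkle_root(one_time_keys).hex())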
Code-based cryptography – McEliece
The McEliece Encryption System has a security reduction to the syndrome decoding problem (SDP). The SDP is known to be NP-hard. The Post Quantum Cryptography Study Group sponsored by the European Commission has recommended the use of this cryptography for long term protection against attack by a quantum computer.
Code-based cryptography – RLCE
In 2016, Wang proposed a random linear code encryption scheme, RLCE, which is based on McEliece schemes. An RLCE scheme can be constructed using any linear code, such as a Reed–Solomon code, by inserting random columns into the generator matrix of the underlying linear code.
Supersingular elliptic curve isogeny cryptography
Security is related to the problem of constructing an isogeny between two supersingular curves with the same number of points. The most recent investigation of the difficulty of this problem, by Delfs and Galbraith, indicates that it is as hard as the inventors of the key exchange suggest. There is no security reduction to a known NP-hard problem.
Comparison
One common characteristic of many post-quantum cryptography algorithms is that they require larger key sizes than commonly used "pre-quantum" public key algorithms. There are often tradeoffs to be made in key size, computational efficiency and ciphertext or signature size. The subsections below list representative values for different schemes at a 128-bit post-quantum security level.
A practical consideration on a choice among post-quantum cryptographic algorithms is the effort required to send public keys over the internet. From this point of view, the Ring-LWE, NTRU, and SIDH algorithms provide key sizes conveniently under 1 kB, hash-signature public keys come in under 5 kB, and MDPC-based McEliece takes about 1 kB. On the other hand, Rainbow schemes require about 125 kB and Goppa-based McEliece requires a nearly 1 MB key.
Lattice-based cryptography – LWE key exchange and Ring-LWE key exchange
The fundamental idea of using LWE and Ring LWE for key exchange was proposed and filed at the University of Cincinnati in 2011 by Jintai Ding. The basic idea comes from the associativity of matrix multiplications, and the errors are used to provide the security. The paper appeared in 2012 after a provisional patent application was filed in 2012.
In 2014, Peikert presented a key-transport scheme following the same basic idea as Ding's, in which Ding's idea of sending an additional one-bit signal for rounding is also utilized. For somewhat greater than 128 bits of security, Singh presents a set of parameters with 6956-bit public keys for Peikert's scheme. The corresponding private key would be roughly 14,000 bits.
In 2015, an authenticated key exchange with provable forward security following the same basic idea as Ding's was presented at Eurocrypt 2015; it is an extension of the HMQV construction from Crypto 2005. The parameters for different security levels from 80 bits to 350 bits, along with the corresponding key sizes, are provided in the paper.
Lattice-based cryptography – NTRU encryption
For 128 bits of security in NTRU, Hirschhorn, Hoffstein, Howgrave-Graham and Whyte recommend using a public key represented as a degree-613 polynomial whose coefficients each occupy 10 bits. This results in a public key of 6130 bits. The corresponding private key would be 6743 bits.
Multivariate cryptography – Rainbow signature
For 128 bits of security and the smallest signature size in a Rainbow multivariate quadratic equation signature scheme, Petzoldt, Bulygin and Buchmann recommend parameters giving a public key size of just over 991,000 bits, a private key of just over 740,000 bits and digital signatures which are 424 bits in length.
Hash-based cryptography – Merkle signature scheme
In order to get 128 bits of security for hash-based signatures to sign 1 million messages using the fractal Merkle tree method of Naor, Shenhav and Wool, the public and private key sizes are roughly 36,000 bits in length.
Code-based cryptography – McEliece
For 128 bits of security in a McEliece scheme, the European Commission's Post Quantum Cryptography Study Group recommends using a binary Goppa code of sufficient length and dimension, capable of correcting a prescribed number of errors. With these parameters the public key for the McEliece system is the non-identity part of a systematic generator matrix. The corresponding private key, which consists of the code support together with a generator polynomial, will be 92,027 bits in length.
The group is also investigating the use of quasi-cyclic MDPC codes of comparable length, dimension and error-correcting capability. With these parameters the public key for the McEliece system is the first row of a systematic generator matrix. The private key, a quasi-cyclic parity-check matrix with a fixed number of nonzero entries per column (or twice as many per row), can be represented compactly by the coordinates of the nonzero entries on its first row.
Barreto et al. likewise recommend using a binary Goppa code of specified length and dimension, capable of correcting a prescribed number of errors. With these parameters the public key is again the non-identity part of a systematic generator matrix, and the corresponding private key, consisting of the code support and a generator polynomial, will be 40,476 bits in length.
Supersingular elliptic curve isogeny cryptography
For 128 bits of security in the supersingular isogeny Diffie-Hellman (SIDH) method, De Feo, Jao and Plut recommend using a supersingular curve modulo a 768-bit prime. If one uses elliptic curve point compression the public key will need to be no more than 8x768 or 6144 bits in length. A March 2016 paper by authors Azarderakhsh, Jao, Kalach, Koziel, and Leonardi showed how to cut the number of bits transmitted in half, which was further improved by authors Costello, Jao, Longa, Naehrig, Renes and Urbanik resulting in a compressed-key version of the SIDH protocol with public keys only 2640 bits in size. This makes the number of bits transmitted roughly equivalent to the non-quantum secure RSA and Diffie-Hellman at the same classical security level.
Symmetric–key-based cryptography
As a general rule, for 128 bits of security in a symmetric-key-based system, one can safely use key sizes of 256 bits. The best quantum attack against arbitrary symmetric-key systems is an application of Grover's algorithm, which requires work proportional to the square root of the size of the key space. To transmit an encrypted key to a device that possesses the symmetric key necessary to decrypt that key requires roughly 256 bits as well. It is clear that symmetric-key systems offer the smallest key sizes for post-quantum cryptography.
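As an illustrative calculation, Grover's algorithm needs on the order of \( \sqrt{N} = 2^{k/2} \) evaluations of the cipher to search a key space of size \( N = 2^{k} \); a 128-bit key would therefore offer only about \( 2^{64} \) operations of quantum security, while a 256-bit key still leaves about \( 2^{128} \), which is why doubling the symmetric key length is generally regarded as sufficient.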
Forward secrecy
A public-key system demonstrates a property referred to as perfect forward secrecy when it generates random public keys per session for the purposes of key agreement. This means that the compromise of one message cannot lead to the compromise of others, and also that there is not a single secret value which can lead to the compromise of multiple messages. Security experts recommend using cryptographic algorithms that support forward secrecy over those that do not. The reason for this is that forward secrecy can protect against the compromise of long term private keys associated with public/private key pairs. This is viewed as a means of preventing mass surveillance by intelligence agencies.
Both the Ring-LWE key exchange and the supersingular isogeny Diffie-Hellman (SIDH) key exchange can support forward secrecy in one exchange with the other party. Both Ring-LWE and SIDH can also be used without forward secrecy, by creating a variant of classic ElGamal encryption (the encryption analogue of Diffie-Hellman).
The other algorithms in this article, such as NTRU, do not support forward secrecy as is.
Any authenticated public key encryption system can be used to build a key exchange with forward secrecy.
Open Quantum Safe project
The Open Quantum Safe (OQS) project was started in late 2016 and has the goal of developing and prototyping quantum-resistant cryptography. It aims to integrate current post-quantum schemes in one library: liboqs. liboqs is an open source C library for quantum-resistant cryptographic algorithms. It initially focuses on key exchange algorithms but by now includes several signature schemes. It provides a common API suitable for post-quantum key exchange algorithms, and will collect together various implementations. liboqs will also include a test harness and benchmarking routines to compare performance of post-quantum implementations. Furthermore, OQS also provides integration of liboqs into OpenSSL.
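A minimal key-encapsulation sketch using the liboqs Python bindings might look as follows; the module, class, and algorithm names ("oqs", "KeyEncapsulation", "ML-KEM-768") follow the project's published examples but can differ between liboqs releases, so treat them as assumptions rather than a fixed API.

    import oqs  # liboqs-python bindings, installed separately from liboqs itself

    kem_name = "ML-KEM-768"   # older releases expose the same scheme as "Kyber768"
    with oqs.KeyEncapsulation(kem_name) as client, oqs.KeyEncapsulation(kem_name) as server:
        public_key = client.generate_keypair()        # client keeps its secret key internally
        ciphertext, shared_server = server.encap_secret(public_key)
        shared_client = client.decap_secret(ciphertext)
        assert shared_client == shared_server         # both sides now hold the same shared secret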
As of March 2023, the following key exchange algorithms are supported:
As of August 2024, NIST has published the 3 algorithms below as FIPS standards, and the 4th is expected near the end of the year:
Older supported versions that have been removed because of the progression of the NIST Post-Quantum Cryptography Standardization Project are:
Implementation
One of the main challenges in post-quantum cryptography is considered to be the implementation of potentially quantum-safe algorithms into existing systems. Tests have been done, for example by Microsoft Research, implementing PICNIC in a PKI using hardware security modules. Test implementations of Google's NewHope algorithm have also been done by HSM vendors. In August 2023, Google released a FIDO2 security key implementation of an ECC/Dilithium hybrid signature scheme, which was done in partnership with ETH Zürich.
The Signal Protocol uses Post-Quantum Extended Diffie–Hellman (PQXDH).
On February 21, 2024, Apple announced that they were going to upgrade their iMessage protocol with a new PQC protocol called "PQ3", which will utilize ongoing keying.
Apple stated that, although quantum computers don't exist yet, they wanted to mitigate risks from future quantum computers as well as so-called "Harvest now, decrypt later" attack scenarios. Apple stated that they believe their PQ3 implementation provides protections that "surpass those in all other widely deployed messaging apps", because it utilizes ongoing keying.
Apple intends to fully replace the existing iMessage protocol within all supported conversations with PQ3 by the end of 2024. Apple also defined a scale to make it easier to compare the security properties of messaging apps, with levels ranging from 0 to 3: 0 for no end-to-end encryption by default, 1 for pre-quantum end-to-end encryption by default, 2 for PQC key establishment only (e.g. PQXDH), and 3 for PQC key establishment and ongoing rekeying (PQ3).
Other notable implementations include:
bouncycastle
liboqs
Hybridity
Google has maintained the use of "hybrid encryption" in its use of post-quantum cryptography: whenever a relatively new post-quantum scheme is used, it is combined with a more proven, non-PQ scheme. This is to ensure that the data are not compromised even if the relatively new PQ algorithm turns out to be vulnerable to non-quantum attacks before Y2Q. This type of scheme is used in its 2016 and 2019 tests for post-quantum TLS, and in its 2023 FIDO2 key. Indeed, one of the algorithms used in the 2019 test, SIKE, was broken in 2022, but the non-PQ X25519 layer (already used widely in TLS) still protected the data. Apple's PQ3 and Signal's PQXDH are also hybrid.
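A sketch of the hybrid idea, combining an ordinary X25519 exchange with a post-quantum KEM secret through a key-derivation function, is shown below using the Python "cryptography" package. The post-quantum secret is a placeholder, and concatenating the two secrets into one KDF call is only the simplest possible combiner, not the exact construction used by TLS, PQ3, or PQXDH.

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    # Classical component: ephemeral X25519 Diffie-Hellman.
    client_priv = X25519PrivateKey.generate()
    server_priv = X25519PrivateKey.generate()
    classical_secret = client_priv.exchange(server_priv.public_key())

    # Post-quantum component: shared secret from any KEM (placeholder bytes here).
    pq_secret = b"\x00" * 32   # in practice: the output of e.g. an ML-KEM encapsulation

    # Hybrid: both secrets feed one KDF, so an attacker must break both to learn the key.
    session_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"hybrid key",
    ).derive(classical_secret + pq_secret)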
The NSA and GCHQ argue against hybrid encryption, claiming that it adds complexity to implementation and transition. Daniel J. Bernstein, who backs hybrid encryption, argues that the claims are bogus.
See also
NIST Post-Quantum Cryptography Standardization
Quantum cryptography – cryptography based on quantum mechanics
Crypto-shredding – Deleting encryption keys
Harvest now, decrypt later
References
Further reading
The PQXDH Key Agreement Protocol Specification
Isogenies in a Quantum World
On Ideal Lattices and Learning With Errors Over Rings
Kerberos Revisited: Quantum-Safe Authentication
The picnic signature scheme
External links
PQCrypto, the post-quantum cryptography conference
ETSI Quantum Secure Standards Effort
NIST's Post-Quantum crypto Project
PQCrypto Usage & Deployment
ISO 27001 Certification Cost
ISO 22301:2019 – Security and Resilience in the United States
Cryptography | Post-quantum cryptography | [
"Mathematics",
"Engineering"
] | 4,693 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
26,605,827 | https://en.wikipedia.org/wiki/C17H21NO5 | {{DISPLAYTITLE:C17H21NO5}}
The molecular formula C17H21NO5 (molar mass: 319.35 g/mol, exact mass: 319.1420 u) may refer to:
Anisodine, also known as daturamine and α-hydroxyscopolamine
Salicylmethylecgonine (2′-Hydroxycocaine)
Molecular formulas | C17H21NO5 | [
"Physics",
"Chemistry"
] | 90 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
26,606,740 | https://en.wikipedia.org/wiki/Hybrid%20input-output%20algorithm | The hybrid input-output (HIO) algorithm for phase retrieval is a modification of the error reduction algorithm for retrieving the phases in coherent diffraction imaging. Determining the phases of a diffraction pattern is crucial, since the diffraction pattern of an object is its Fourier transform, and the phases must be known in order to invert that transform properly. Only the amplitude, however, can be measured from the intensity of the diffraction pattern and can thus be known experimentally. This fact, together with some kind of support constraint, can be used to iteratively calculate the phases. The HIO algorithm uses negative feedback in Fourier space in order to progressively force the solution to conform to the Fourier domain constraints (support). Unlike the error reduction algorithm, which alternately applies Fourier and object constraints, the HIO "skips" the object domain step and replaces it with negative feedback acting upon the previous solution.
Although it has been shown that the method of error reduction converges to a limit (but usually not to the correct or optimal solution)
there is no limit to how long this process can take. Moreover, the error reduction algorithm will almost certainly find a local minimum instead of the global solution. The HIO differs from error reduction in only one step, but this is enough to reduce this problem significantly. Whereas the error reduction approach iteratively improves solutions over time, the HIO remodels the previous solution in Fourier space, applying negative feedback. By minimizing the mean square error in Fourier space from the previous solution, the HIO provides a better candidate solution for inverse transforming. Although it is both faster and more powerful than error reduction, the HIO algorithm does have a uniqueness problem.
Depending on how strong the negative feedback is, there can often be more than one solution for any set of diffraction data. Although a problem, it has been shown that many of these possible solutions stem from the fact that HIO allows mirror images taken in any plane to arise as solutions. In crystallography, the scientist is seldom interested in the atomic coordinates relative to any reference other than the molecule itself and is therefore more than happy with a solution that is upside-down or flipped relative to the actual image. A downside is that HIO has a tendency to escape both global and local maxima. This problem also depends on the strength of the feedback parameter, and a good solution is to switch algorithms when the error reaches its minimum. Other methods of phasing a coherent diffraction pattern include the difference map algorithm and "relaxed averaged alternating reflections" (RAAR).
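A minimal NumPy sketch of one common form of the iteration is given below, assuming a real, non-negative object and a known support mask; the feedback parameter beta and all variable names are illustrative rather than taken from a specific reference implementation.

    import numpy as np

    def hio(measured_magnitude, support, beta=0.9, n_iter=500):
        """Hybrid input-output phase retrieval (simplified: real, non-negative object)."""
        g = np.random.rand(*measured_magnitude.shape) * support   # initial guess
        for _ in range(n_iter):
            G = np.fft.fft2(g)
            # Fourier-domain constraint: keep the current phases, impose the measured magnitudes.
            G = measured_magnitude * np.exp(1j * np.angle(G))
            g_prime = np.real(np.fft.ifft2(G))
            # Object-domain update: accept g' inside the support; elsewhere apply
            # negative feedback (g - beta * g') instead of simply zeroing the pixel.
            violated = (~support.astype(bool)) | (g_prime < 0)
            g = np.where(violated, g - beta * g_prime, g_prime)
        return g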
See also
Phase retrieval
Gerchberg-Saxton algorithm
References
Diffraction | Hybrid input-output algorithm | [
"Physics",
"Chemistry",
"Materials_science"
] | 542 | [
"Crystallography",
"Diffraction",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
26,611,926 | https://en.wikipedia.org/wiki/Dirichlet%20kernel | In mathematical analysis, the Dirichlet kernel, named after the German mathematician Peter Gustav Lejeune Dirichlet, is the collection of periodic functions defined as
D_n(x) = \sum_{k=-n}^{n} e^{ikx} = 1 + 2\sum_{k=1}^{n} \cos(kx) = \frac{\sin\left(\left(n + \tfrac{1}{2}\right)x\right)}{\sin(x/2)},
where n is any positive integer. The kernel functions are periodic with period 2\pi.
The importance of the Dirichlet kernel comes from its relation to Fourier series. The convolution of D_n(x) with any function f of period 2\pi is the nth-degree Fourier series approximation to f, i.e., we have
(D_n * f)(x) = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(y)\, D_n(x - y)\, dy = \sum_{k=-n}^{n} \hat{f}(k)\, e^{ikx},
where
\hat{f}(k) = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x)\, e^{-ikx}\, dx
is the kth Fourier coefficient of f. This implies that in order to study convergence of Fourier series it is enough to study properties of the Dirichlet kernel.
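The identity between the exponential sum and the closed form above can be checked numerically; the following Python/NumPy sketch (variable names are illustrative) compares the two expressions on a grid that avoids the removable singularity at x = 0.

    import numpy as np

    n = 7
    x = np.linspace(-np.pi, np.pi, 1000)          # even point count, so x = 0 is skipped
    as_sum = sum(np.exp(1j * k * x) for k in range(-n, n + 1)).real
    closed_form = np.sin((n + 0.5) * x) / np.sin(x / 2)
    assert np.allclose(as_sum, closed_form)       # both expressions give the same D_n(x)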
Applications
In signal processing, the Dirichlet kernel is often called the periodic sinc function:
where is an odd integer. In this form, is the angular frequency, and is half of the periodicity in frequency. In this case, the periodic sinc function in the frequency domain can be thought of as the Fourier transform of a time bounded impulse train in the time domain:
where is the time increment between each impulse and represents the number of impulses in the impulse train.
In optics, the Dirichlet kernel is part of the mathematical description of the diffraction pattern formed when monochromatic light passes through an aperture with multiple narrow slits of equal width and equally spaced along an axis perpendicular to the optical axis. In this case, is the number of slits.
L1 norm of the kernel function
Of particular importance is the fact that the L1 norm of Dn on diverges to infinity as . One can estimate that
By using a Riemann-sum argument to estimate the contribution in the largest neighbourhood of zero in which is positive, and Jensen's inequality for the remaining part, it is also possible to show that:
where is the sine integral
This lack of uniform integrability is behind many divergence phenomena for the Fourier series. For example, together with the uniform boundedness principle, it can be used to show that the Fourier series of a continuous function may fail to converge pointwise, in rather dramatic fashion. See convergence of Fourier series for further details.
A precise proof of the first result that is given by
where we have used the Taylor series identity that and where are the first-order harmonic numbers.
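As a numerical illustration of the divergence, the sketch below estimates the Lebesgue constant (1/2π)∫|D_n| dx on a fine grid; the ratio to log n settles near 4/π² ≈ 0.405, the standard asymptotic constant for these norms.

    import numpy as np

    for n in (10, 100, 1000, 10000):
        x = np.linspace(-np.pi, np.pi, 400000)    # even point count avoids x = 0
        Dn = np.sin((n + 0.5) * x) / np.sin(x / 2)
        lebesgue = np.mean(np.abs(Dn))            # approximates (1/(2*pi)) * integral of |D_n|
        print(n, lebesgue, lebesgue / np.log(n))  # ratio approaches 4/pi**2 ~ 0.405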
Relation to the periodic delta function
The Dirichlet kernel is a periodic function which becomes the Dirac comb, i.e. the periodic delta function, in the limit
with the angular frequency .
This can be inferred from the autoconjugation property of the Dirichlet kernel under forward and inverse Fourier transform:
and goes to the Dirac comb of period as , which remains invariant under Fourier transform: . Thus must also have converged to as .
In a different vein, consider ∆(x) as the identity element for convolution on functions of period 2\pi. In other words, we have
for every function f of period 2\pi. The Fourier series representation of this "function" is
(This Fourier series converges to the function almost nowhere.) Therefore, the Dirichlet kernel, which is just the sequence of partial sums of this series, can be thought of as an approximate identity. Abstractly speaking it is not however an approximate identity of positive elements (hence the failures in pointwise convergence mentioned above).
Proof of the trigonometric identity
The trigonometric identity
displayed at the top of this article may be established as follows. First recall that the sum of a finite geometric series is
In particular, we have
Multiply both the numerator and the denominator by , getting
In the case we have
as required.
Alternative proof of the trigonometric identity
Start with the series
Multiply both sides by and use the trigonometric identity
to reduce the terms in the sum.
which telescopes down to the result.
Variant of identity
If the sum is only over non-negative integers (which may arise when computing a discrete Fourier transform that is not centered), then using similar techniques we can show the following identity:
Another variant is and this can be easily proved by using an identity .
See also
Fejér kernel
References
Sources
Dirichlet kernel at PlanetMath
Mathematical analysis
Fourier series
Approximation theory
Articles containing proofs | Dirichlet kernel | [
"Mathematics"
] | 847 | [
"Mathematical analysis",
"Approximation theory",
"Mathematical relations",
"Articles containing proofs",
"Approximations"
] |
26,611,936 | https://en.wikipedia.org/wiki/Penrose%20tiling | A Penrose tiling is an example of an aperiodic tiling. Here, a tiling is a covering of the plane by non-overlapping polygons or other shapes, and a tiling is aperiodic if it does not contain arbitrarily large periodic regions or patches. However, despite their lack of translational symmetry, Penrose tilings may have both reflection symmetry and fivefold rotational symmetry. Penrose tilings are named after mathematician and physicist Roger Penrose, who investigated them in the 1970s.
There are several variants of Penrose tilings with different tile shapes. The original form of Penrose tiling used tiles of four different shapes, but this was later reduced to only two shapes: either two different rhombi, or two different quadrilaterals called kites and darts. The Penrose tilings are obtained by constraining the ways in which these shapes are allowed to fit together in a way that avoids periodic tiling. This may be done in several different ways, including matching rules, substitution tiling or finite subdivision rules, cut and project schemes, and coverings. Even constrained in this manner, each variation yields infinitely many different Penrose tilings.
Penrose tilings are self-similar: they may be converted to equivalent Penrose tilings with different sizes of tiles, using processes called inflation and deflation. The pattern represented by every finite patch of tiles in a Penrose tiling occurs infinitely many times throughout the tiling. They are quasicrystals: implemented as a physical structure a Penrose tiling will produce diffraction patterns with Bragg peaks and five-fold symmetry, revealing the repeated patterns and fixed orientations of its tiles. The study of these tilings has been important in the understanding of physical materials that also form quasicrystals. Penrose tilings have also been applied in architecture and decoration, as in the floor tiling shown.
Background and history
Periodic and aperiodic tilings
Covering a flat surface ("the plane") with some pattern of geometric shapes ("tiles"), with no overlaps or gaps, is called a tiling. The most familiar tilings, such as covering a floor with squares meeting edge-to-edge, are examples of periodic tilings. If a square tiling is shifted by the width of a tile, parallel to the sides of the tile, the result is the same pattern of tiles as before the shift. A shift (formally, a translation) that preserves the tiling in this way is called a period of the tiling. A tiling is called periodic when it has periods that shift the tiling in two different directions.
The tiles in the square tiling have only one shape, and it is common for other tilings to have only a finite number of shapes. These shapes are called prototiles, and a set of prototiles is said to admit a tiling or tile the plane if there is a tiling of the plane using only these shapes. That is, each tile in the tiling must be congruent to one of these prototiles.
A tiling that has no periods is non-periodic. A set of prototiles is said to be aperiodic if all of its tilings are non-periodic, and in this case its tilings are also called aperiodic tilings. Penrose tilings are among the simplest known examples of aperiodic tilings of the plane by finite sets of prototiles.
Earliest aperiodic tilings
The subject of aperiodic tilings received new interest in the 1960s when logician Hao Wang noted connections between decision problems and tilings. In particular, he introduced tilings by square plates with colored edges, now known as Wang dominoes or tiles, and posed the "Domino Problem": to determine whether a given set of Wang dominoes could tile the plane with matching colors on adjacent domino edges. He observed that if this problem were undecidable, then there would have to exist an aperiodic set of Wang dominoes. At the time, this seemed implausible, so Wang conjectured no such set could exist.
Wang's student Robert Berger proved that the Domino Problem was undecidable (so Wang's conjecture was incorrect) in his 1964 thesis, and obtained an aperiodic set of 20,426 Wang dominoes. He also described a reduction to 104 such prototiles; the latter did not appear in his published monograph, but in 1968, Donald Knuth detailed a modification of Berger's set requiring only 92 dominoes.
The color matching required in a tiling by Wang dominoes can easily be achieved by modifying the edges of the tiles like jigsaw puzzle pieces so that they can fit together only as prescribed by the edge colorings. Raphael Robinson, in a 1971 paper which simplified Berger's techniques and undecidability proof, used this technique to obtain an aperiodic set of just six prototiles.
Development of the Penrose tilings
The first Penrose tiling (tiling P1 below) is an aperiodic set of six prototiles, introduced by Roger Penrose in a 1974 paper, based on pentagons rather than squares. Any attempt to tile the plane with regular pentagons necessarily leaves gaps, but Johannes Kepler showed, in his 1619 work Harmonices Mundi, that these gaps can be filled using pentagrams (star polygons), decagons and related shapes. Kepler extended this tiling by five polygons and found no periodic patterns, and already conjectured that every extension would introduce a new feature hence creating an aperiodic tiling. Traces of these ideas can also be found in the work of Albrecht Dürer. Acknowledging inspiration from Kepler, Penrose found matching rules for these shapes, obtaining an aperiodic set. These matching rules can be imposed by decorations of the edges, as with the Wang tiles. Penrose's tiling can be viewed as a completion of Kepler's finite Aa pattern.
Penrose subsequently reduced the number of prototiles to two, discovering the kite and dart tiling (tiling P2 below) and the rhombus tiling (tiling P3 below). The rhombus tiling was independently discovered by Robert Ammann in 1976. Penrose and John H. Conway investigated the properties of Penrose tilings, and discovered that a substitution property explained their hierarchical nature; their findings were publicized by Martin Gardner in his January 1977 "Mathematical Games" column in Scientific American.
In 1981, N. G. de Bruijn provided two different methods to construct Penrose tilings. De Bruijn's "multigrid method" obtains the Penrose tilings as the dual graphs of arrangements of five families of parallel lines. In his "cut and project method", Penrose tilings are obtained as two-dimensional projections from a five-dimensional cubic structure. In these approaches, the Penrose tiling is viewed as a set of points, its vertices, while the tiles are geometrical shapes obtained by connecting vertices with edges. A 1990 construction by Baake, Kramer, Schlottmann, and Zeidler derived the Penrose tiling and the related Tübingen triangle tiling in a similar manner from the four-dimensional 5-cell honeycomb.
Penrose tilings
The three types of Penrose tiling, P1–P3, are described individually below. They have many common features: in each case, the tiles are constructed from shapes related to the pentagon (and hence to the golden ratio), but the basic tile shapes need to be supplemented by matching rules in order to tile aperiodically. These rules may be described using labeled vertices or edges, or patterns on the tile faces; alternatively, the edge profile can be modified (e.g. by indentations and protrusions) to obtain an aperiodic set of prototiles.
Original pentagonal Penrose tiling (P1)
Penrose's first tiling uses pentagons and three other shapes: a five-pointed "star" (a pentagram), a "boat" (roughly 3/5 of a star) and a "diamond" (a thin rhombus). To ensure that all tilings are non-periodic, there are matching rules that specify how tiles may meet each other, and there are three different types of matching rule for the pentagonal tiles. Treating these three types as different prototiles gives a set of six prototiles overall. It is common to indicate the three different types of pentagonal tiles using three different colors, as in the figure above right.
Kite and dart tiling (P2)
Penrose's second tiling uses quadrilaterals called the "kite" and "dart", which may be combined to make a rhombus. However, the matching rules prohibit such a combination. Both the kite and dart are composed of two triangles, called Robinson triangles, after 1975 notes by Robinson.
The kite is a quadrilateral whose four interior angles are 72, 72, 72, and 144 degrees. The kite may be bisected along its axis of symmetry to form a pair of acute Robinson triangles (with angles of 36, 72 and 72 degrees).
The dart is a non-convex quadrilateral whose four interior angles are 36, 72, 36, and 216 degrees. The dart may be bisected along its axis of symmetry to form a pair of obtuse Robinson triangles (with angles of 36, 36 and 108 degrees), which are smaller than the acute triangles.
The matching rules can be described in several ways. One approach is to color the vertices (with two colors, e.g., black and white) and require that adjacent tiles have matching vertices. Another is to use a pattern of circular arcs (as shown above left in green and red) to constrain the placement of tiles: when two tiles share an edge in a tiling, the patterns must match at these edges.
These rules often force the placement of certain tiles: for example, the concave vertex of any dart is necessarily filled by two kites. The corresponding figure (center of the top row in the lower image on the left) is called an "ace" by Conway; although it looks like an enlarged kite, it does not tile in the same way. Similarly the concave vertex formed when two kites meet along a short edge is necessarily filled by two darts (bottom right). In fact, there are only seven possible ways for the tiles to meet at a vertex; two of these figures – namely, the "star" (top left) and the "sun" (top right) – have 5-fold dihedral symmetry (by rotations and reflections), while the remainder have a single axis of reflection (vertical in the image). Apart from the ace (top middle) and the sun, all of these vertex figures force the placement of additional tiles.
Rhombus tiling (P3)
The third tiling uses a pair of rhombuses (often referred to as "rhombs" in this context) with equal sides but different angles. Ordinary rhombus-shaped tiles can be used to tile the plane periodically, so restrictions must be made on how tiles can be assembled: no two tiles may form a parallelogram, as this would allow a periodic tiling, but this constraint is not sufficient to force aperiodicity, as figure 1 above shows.
There are two kinds of tile, both of which can be decomposed into Robinson triangles.
The thin rhomb t has four corners with angles of 36, 144, 36, and 144 degrees. The t rhomb may be bisected along its short diagonal to form a pair of acute Robinson triangles.
The thick rhomb T has angles of 72, 108, 72, and 108 degrees. The T rhomb may be bisected along its long diagonal to form a pair of obtuse Robinson triangles; in contrast to the P2 tiling, these are larger than the acute triangles.
The matching rules distinguish sides of the tiles, and entail that tiles may be juxtaposed in certain particular ways but not in others. Two ways to describe these matching rules are shown in the image on the right. In one form, tiles must be assembled such that the curves on the faces match in color and position across an edge. In the other, tiles must be assembled such that the bumps on their edges fit together.
There are 54 cyclically ordered combinations of such angles that add up to 360 degrees at a vertex, but the rules of the tiling allow only seven of these combinations to appear (although one of these arises in two ways).
The various combinations of angles and facial curvature allow construction of arbitrarily complex tiles, such as the Penrose chickens.
Features and constructions
Golden ratio and local pentagonal symmetry
Several properties and common features of the Penrose tilings involve the golden ratio φ (approximately 1.618). This is the ratio of chord lengths to side lengths in a regular pentagon, and satisfies φ = 1 + 1/φ.
Consequently, the ratio of the lengths of long sides to short sides in the (isosceles) Robinson triangles is φ:1. It follows that the ratio of long side lengths to short in both kite and dart tiles is also φ:1, as are the length ratios of sides to the short diagonal in the thin rhomb t, and of long diagonal to sides in the thick rhomb T. In both the P2 and P3 tilings, the ratio of the area of the larger Robinson triangle to the smaller one is φ:1, hence so are the ratios of the areas of the kite to the dart, and of the thick rhomb to the thin rhomb. (Both larger and smaller obtuse Robinson triangles can be found in the pentagon on the left: the larger triangles at the top – the halves of the thick rhomb – have linear dimensions scaled up by φ compared to the small shaded triangle at the base, and so the ratio of areas is φ²:1.)
Any Penrose tiling has local pentagonal symmetry, in the sense that there are points in the tiling surrounded by a symmetric configuration of tiles: such configurations have fivefold rotational symmetry about the center point, as well as five mirror lines of reflection symmetry passing through the point, a dihedral symmetry group. This symmetry will generally preserve only a patch of tiles around the center point, but the patch can be very large: Conway and Penrose proved that whenever the colored curves on the P2 or P3 tilings close in a loop, the region within the loop has pentagonal symmetry, and furthermore, in any tiling, there are at most two such curves of each color that do not close up.
There can be at most one center point of global fivefold symmetry: if there were more than one, then rotating each about the other would yield two closer centers of fivefold symmetry, which leads to a mathematical contradiction. There are only two Penrose tilings (of each type) with global pentagonal symmetry: for the P2 tiling by kites and darts, the center point is either a "sun" or "star" vertex.
Inflation and deflation
Many of the common features of Penrose tilings follow from a hierarchical pentagonal structure given by substitution rules: this is often referred to as inflation and deflation, or composition and decomposition, of tilings or (collections of) tiles. The substitution rules decompose each tile into smaller tiles of the same shape as those used in the tiling (and thus allow larger tiles to be "composed" from smaller ones). This shows that the Penrose tiling has a scaling self-similarity, and so can be thought of as a fractal, using the same process as the pentaflake.
Penrose originally discovered the P1 tiling in this way, by decomposing a pentagon into six smaller pentagons (one half of a net of a dodecahedron) and five half-diamonds; he then observed that when he repeated this process the gaps between pentagons could all be filled by stars, diamonds, boats and other pentagons. By iterating this process indefinitely he obtained one of the two P1 tilings with pentagonal symmetry.
Robinson triangle decompositions
The substitution method for both P2 and P3 tilings can be described using Robinson triangles of different sizes. The Robinson triangles arising in P2 tilings (by bisecting kites and darts) are called A-tiles, while those arising in the P3 tilings (by bisecting rhombs) are called B-tiles. The smaller A-tile, denoted AS, is an obtuse Robinson triangle, while the larger A-tile, AL, is acute; in contrast, a smaller B-tile, denoted BS, is an acute Robinson triangle, while the larger B-tile, BL, is obtuse.
Concretely, if AS has side lengths (1, 1, φ), then AL has side lengths (φ, φ, 1). B-tiles can be related to such A-tiles in two ways:
If BS has the same size as AL, then BL is an enlarged version φAS of AS, with side lengths (φ, φ, φ² = 1 + φ) – this decomposes into an AL tile and an AS tile joined along a common side of length 1.
If instead BL is identified with AS, then BS is a reduced version (1/φ)AL of AL with side lengths (1, 1, 1/φ) – joining a BS tile and a BL tile along a common side of length 1 then yields (a decomposition of) an AL tile.
In these decompositions, there appears to be an ambiguity: Robinson triangles may be decomposed in two ways, which are mirror images of each other in the (isosceles) axis of symmetry of the triangle. In a Penrose tiling, this choice is fixed by the matching rules. Furthermore, the matching rules also determine how the smaller triangles in the tiling compose to give larger ones.
It follows that the P2 and P3 tilings are mutually locally derivable: a tiling by one set of tiles can be used to generate a tiling by another. For example, a tiling by kites and darts may be subdivided into A-tiles, and these can be composed in a canonical way to form B-tiles and hence rhombs. The P2 and P3 tilings are also both mutually locally derivable with the P1 tiling (see figure 2 above).
The decomposition of B-tiles into A-tiles may be written
BS = AL, BL = AL + AS
(assuming the larger size convention for the B-tiles), which can be summarized in a substitution matrix equation:
Combining this with the decomposition of enlarged A-tiles into B-tiles yields the substitution
so that the enlarged tile φAL decomposes into two AL tiles and one AS tile. The matching rules force a particular substitution: the two AL tiles in a φAL tile must form a kite, and thus a kite decomposes into two kites and two half-darts, and a dart decomposes into a kite and two half-darts. Enlarged B-tiles decompose into B-tiles in a similar way (via A-tiles).
Composition and decomposition can be iterated, so that, for example
The number of kites and darts in the nth iteration of the construction is determined by the nth power of the substitution matrix:
where Fn is the nth Fibonacci number. The ratio of numbers of kites to darts in any sufficiently large P2 Penrose tiling pattern therefore approximates to the golden ratio . A similar result holds for the ratio of the number of thick rhombs to thin rhombs in the P3 Penrose tiling.
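This counting argument can be reproduced in a few lines. The sketch below encodes the decomposition described above (a kite into two kites and two half-darts, a dart into a kite and two half-darts, with half-darts counted in pairs as whole darts) as a 2 × 2 substitution matrix and iterates it, so that the kite-to-dart ratio is seen to approach the golden ratio.

    import numpy as np

    M = np.array([[2, 1],     # kites produced per kite, per dart
                  [1, 1]])    # darts produced per kite, per dart (two half-darts = one dart)
    counts = np.array([1, 0])                 # start from a single kite
    for _ in range(20):
        counts = M @ counts
    kites, darts = counts
    print(kites / darts)                      # -> 1.6180339887..., the golden ratio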
Deflation for P2 and P3 tilings
Starting with a collection of tiles from a given tiling (which might be a single tile, a tiling of the plane, or any other collection), deflation proceeds with a sequence of steps called generations. In one generation of deflation, each tile is replaced with two or more new tiles that are scaled-down versions of tiles used in the original tiling. The substitution rules guarantee that the new tiles will be arranged in accordance with the matching rules. Repeated generations of deflation produce a tiling of the original axiom shape with smaller and smaller tiles.
This rule for dividing the tiles is a subdivision rule.
The above table should be used with caution. The half kite and half dart deflation are useful only in the context of deflating a larger pattern as shown in the sun and star deflations. They give incorrect results if applied to single kites and darts.
In addition, the simple subdivision rule generates holes near the edges of the tiling which are just visible in the top and bottom illustrations on the right. Additional forcing rules are useful.
Consequences and applications
Inflation and deflation yield a method for constructing kite and dart (P2) tilings, or rhombus (P3) tilings, known as up-down generation.
The Penrose tilings, being non-periodic, have no translational symmetry – the pattern cannot be shifted to match itself over the entire plane. However, any bounded region, no matter how large, will be repeated an infinite number of times within the tiling. Therefore, no finite patch can uniquely determine a full Penrose tiling, nor even determine which position within the tiling is being shown.
This shows in particular that the number of distinct Penrose tilings (of any type) is uncountably infinite. Up-down generation yields one method to parameterize the tilings, but other methods use Ammann bars, pentagrids, or cut and project schemes.
Related tilings and topics
Decagonal coverings and quasicrystals
In 1996, German mathematician Petra Gummelt demonstrated that a covering (so called to distinguish it from a non-overlapping tiling) equivalent to the Penrose tiling can be constructed using a single decagonal tile if two kinds of overlapping regions are allowed. The decagonal tile is decorated with colored patches, and the covering rule allows only those overlaps compatible with the coloring. A suitable decomposition of the decagonal tile into kites and darts transforms such a covering into a Penrose (P2) tiling. Similarly, a P3 tiling can be obtained by inscribing a thick rhomb into each decagon; the remaining space is filled by thin rhombs.
These coverings have been considered as a realistic model for the growth of quasicrystals: the overlapping decagons are 'quasi-unit cells' analogous to the unit cells from which crystals are constructed, and the matching rules maximize the density of certain atomic clusters. The aperiodic nature of the coverings can make theoretical studies of physical properties, such as electronic structure, difficult due to the absence of Bloch's theorem. However, spectra of quasicrystals can still be computed with error control.
Related tilings
The three variants of the Penrose tiling are mutually locally derivable. Selecting some subsets from the vertices of a P1 tiling makes it possible to produce other non-periodic tilings. If the corners of one pentagon in P1 are labeled in succession by 1, 3, 5, 2, 4, an unambiguous tagging in all the pentagons is established, the order being either clockwise or counterclockwise.
Points with the same label define a tiling by Robinson triangles while points with the numbers 3 and 4 on them define the vertices of a Tie-and-Navette tiling.
There are also other related unequivalent tilings, such as the hexagon-boat-star and Mikulla–Roth tilings. For instance, if the matching rules for the rhombus tiling are reduced to a specific restriction on the angles permitted at each vertex, a binary tiling is obtained. Its underlying symmetry is also fivefold but it is not a quasicrystal. It can be obtained either by decorating the rhombs of the original tiling with smaller ones, or by applying substitution rules, but not by de Bruijn's cut-and-project method.
Art and architecture
The aesthetic value of tilings has long been appreciated, and remains a source of interest in them; hence the visual appearance (rather than the formal defining properties) of Penrose tilings has attracted attention. The similarity with certain decorative patterns used in North Africa and the Middle East has been noted; the physicists Peter J. Lu and Paul Steinhardt have presented evidence that a Penrose tiling underlies examples of medieval Islamic geometric patterns, such as the girih (strapwork) tilings at the Darb-e Imam shrine in Isfahan.
Drop City artist Clark Richert used Penrose rhombs in artwork in 1970, derived by projecting the rhombic triacontahedron shadow onto a plane observing the embedded "fat" rhombi and "skinny" rhombi which tile together to produce the non-periodic tessellation. Art historian Martin Kemp has observed that Albrecht Dürer sketched similar motifs of a rhombus tiling.
In 1979, Miami University used a Penrose tiling executed in terrazzo to decorate the Bachelor Hall courtyard in their Department of Mathematics and Statistics.
In Indian Institute of Information Technology, Allahabad, since the first phase of construction in 2001, academic buildings were designed on the basis of "Penrose Geometry", styled on tessellations developed by Roger Penrose. In many places in those buildings, the floor has geometric patterns composed of Penrose tiling.
The floor of the atrium of the Bayliss Building at The University of Western Australia is tiled with Penrose tiles.
The Andrew Wiles Building, the location of the Mathematics Department at the University of Oxford as of October 2013, includes a section of Penrose tiling as the paving of its entrance.
The pedestrian part of the street Keskuskatu in central Helsinki is paved using a form of Penrose tiling. The work was finished in 2014.
San Francisco's 2018 Salesforce Transit Center features perforations in its exterior's undulating white metal skin in the Penrose pattern.
See also
Girih tiles
List of aperiodic sets of tiles
Pinwheel tiling
Pentagonal tiling
Quaquaversal tiling
Tübingen triangle
Notes
References
External links
Discrete geometry
Aperiodic tilings
Mathematics and art
Golden ratio | Penrose tiling | [
"Physics",
"Mathematics"
] | 5,557 | [
"Discrete mathematics",
"Tessellation",
"Discrete geometry",
"Golden ratio",
"Aperiodic tilings",
"Symmetry"
] |
40,639,179 | https://en.wikipedia.org/wiki/Deligne%E2%80%93Mumford%20stack | In algebraic geometry, a Deligne–Mumford stack is a stack F such that the diagonal morphism F → F × F is representable, quasi-compact and separated, and there is a scheme U together with an étale, surjective morphism U → F (an atlas).
Pierre Deligne and David Mumford introduced this notion in 1969 when they proved that moduli spaces of stable curves of fixed arithmetic genus are proper smooth Deligne–Mumford stacks.
If the "étale" is weakened to "smooth", then such a stack is called an algebraic stack (also called an Artin stack, after Michael Artin). An algebraic space is Deligne–Mumford.
A key fact about a Deligne–Mumford stack F is that any X in F(B), where B is a quasi-compact scheme, has only finitely many automorphisms.
A Deligne–Mumford stack admits a presentation by a groupoid; see groupoid scheme.
Examples
Affine Stacks
Deligne–Mumford stacks are typically constructed by taking the stack quotient of some variety where the stabilizers are finite groups. For example, consider the action of the cyclic group on given by
Then the stack quotient is an affine smooth Deligne–Mumford stack with a non-trivial stabilizer at the origin. If we wish to think about this as a category fibered in groupoids over then given a scheme the over category is given by
Note that we could be slightly more general if we consider the group action on .
Weighted Projective Line
Non-affine examples come up when taking the stack quotient for weighted projective space/varieties. For example, the space is constructed by the stack quotient where the -action is given by
Notice that since this quotient is not from a finite group we have to look for points with stabilizers and their respective stabilizer groups. Then if and only if or and or , respectively, showing that the only stabilizers are finite, hence the stack is Deligne–Mumford.
Stacky curve
Non-Example
One simple non-example of a Deligne–Mumford stack is since this has an infinite stabilizer. Stacks of this form are examples of Artin stacks.
References
Algebraic geometry | Deligne–Mumford stack | [
"Mathematics"
] | 424 | [
"Fields of abstract algebra",
"Algebraic geometry"
] |
40,639,467 | https://en.wikipedia.org/wiki/Light%20bullet | Light bullets are localized pulses of electromagnetic energy that can travel through a medium and retain their spatiotemporal shape in spite of diffraction and dispersion which tend to spread the pulse. This is made possible by a balance between the non-linear self-focusing and spreading effects brought about by the medium in which the pulse beam propagates.
Prediction and Discovery
Light bullets were predicted and so termed by Yaron Silberberg in 1990, and demonstrated the following decade.
Comparison with solitons
Spatial and temporal stability, which are the characteristics of a soliton, have been achieved in light bullets using alternative refractive index models. An experiment which exploited the discrete spreading and self-focusing effects on 170-femtosecond pulses at 1550-nanometre wavelengths in a two-dimensional hexagonal array of silica waveguides reported a spatial profile that remained stationary for about twice the propagation distance of the linear case, and a temporal profile about nine times as stationary as that of the corresponding linear propagation.
Light bullets lose energy in the process of a collision. This behavior is different from that of solitons, which survive collisions without losing energy.
Possible applications
Artificially-induced lightning
Monitoring air pollution
See also
Laser-induced breakdown spectroscopy (LIBS)
References
Electromagnetic radiation
Light
1992 in science | Light bullet | [
"Physics"
] | 261 | [
"Physical phenomena",
"Spectrum (physical sciences)",
"Electromagnetic radiation",
"Electromagnetic spectrum",
"Waves",
"Radiation",
"Light"
] |
48,344,629 | https://en.wikipedia.org/wiki/Quantum%20spin%20tunneling | Quantum spin tunneling, or quantum tunneling of magnetization, is a physical phenomenon by which the quantum mechanical state that describes the collective magnetization of a nanomagnet is a linear superposition of two states with well defined and opposite magnetization. Classically, the magnetic anisotropy favors neither of the two states with opposite magnetization, so that the system has two equivalent ground states.
Because of quantum spin tunneling, an energy splitting arises between the bonding and anti-bonding linear combinations of the classical ground states with opposite magnetization, giving rise to a unique ground state separated from the first excited state by an energy difference known as the quantum spin tunneling splitting. The quantum spin tunneling splitting also occurs for pairs of excited states with opposite magnetization.
As a consequence of quantum spin tunneling, the magnetization of a system can switch between states with opposite magnetization that are separated by an energy barrier much larger than thermal energy. Thus, quantum spin tunneling provides a pathway to magnetization switching forbidden in classical physics.
Whereas quantum spin tunneling shares some properties with quantum tunneling in other two level systems such as a single electron in a double quantum well or in a diatomic molecule, it is a multi-electron phenomenon, since more than one electron is required to have magnetic anisotropy. The multi-electron character is also revealed by an important feature, absent in single-electron tunneling: zero field quantum spin tunneling splitting is only possible for integer spins, and is certainly absent for half-integer spins, as ensured by Kramers degeneracy theorem. In real systems containing Kramers ions, like crystalline samples of single ion magnets, the degeneracy of the ground states is frequently lifted through dipolar interactions with neighboring spins, and as such quantum spin tunneling is frequently observed even in the absence of an applied external field for these systems.
Initially discussed in the context of magnetization dynamics of magnetic nanoparticles, the concept was known as macroscopic quantum tunneling, a term that highlights both the difference with single electron tunneling and connects this phenomenon with other macroscopic quantum phenomena. In this sense, the problem of quantum spin tunneling lies in the boundary between the quantum and classical descriptions of reality.
Single spin Hamiltonian
A simple single spin Hamiltonian that describes quantum spin tunneling for a spin S is given by:
H = D S_z^2 + E (S_x^2 - S_y^2)   [1]
where D and E are parameters that determine the magnetic anisotropy, and S_x, S_y and S_z are spin matrices of dimension 2S+1. It is customary to take z as the easy axis so that D<0 and |D|>> E. For E=0, this Hamiltonian commutes with S_z, so that we can write the eigenvalues as ε_m = D m², where m takes the 2S+1 values in the list (S, S-1, ...., -S).
For E=0 the spectrum thus describes a set of doublets, with ε_m = ε_{-m} for each pair of states m and -m.
From this result, it is apparent that, given that E/D is much smaller than 1 by construction, the quantum spin tunneling splitting becomes suppressed in the limit of large spin S, i.e., as we move from the atomic scale towards the macroscopic world. The magnitude of the quantum spin tunneling splitting can be modulated by application of a magnetic field along the transverse hard axis direction (in the case of Hamiltonian [1], with D<0 and E>0, the x axis). The modulation of the quantum spin tunneling splitting results in oscillations of its magnitude, including specific values of the transverse field at which the splitting vanishes. These accidental degeneracies are known as diabolic points.
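Hamiltonian [1] is small enough to diagonalize directly. The sketch below builds the spin matrices for a given S, forms H = D Sz² + E(Sx² − Sy²), and reads off the splitting of the lowest doublet; the parameter values are purely illustrative.

    import numpy as np

    def spin_matrices(S):
        """Return Sx, Sy, Sz for spin quantum number S (matrices of dimension 2S+1)."""
        m = np.arange(S, -S - 1, -1)                  # basis ordered |S>, |S-1>, ..., |-S>
        Sz = np.diag(m)
        # <m+1|S+|m> = sqrt(S(S+1) - m(m+1)) on the superdiagonal
        Splus = np.diag(np.sqrt(S * (S + 1) - m[1:] * (m[1:] + 1)), k=1)
        Sx = (Splus + Splus.T) / 2
        Sy = (Splus - Splus.T) / 2j
        return Sx, Sy, Sz

    S, D, E = 2, -1.0, 0.05                           # illustrative anisotropy parameters
    Sx, Sy, Sz = spin_matrices(S)
    H = D * (Sz @ Sz) + E * (Sx @ Sx - Sy @ Sy)
    levels = np.sort(np.linalg.eigvalsh(H))
    print("ground-doublet tunneling splitting:", levels[1] - levels[0])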
Observation
Quantum tunneling of the magnetization was reported in 1996 for a crystal of Mn12ac molecules with S=10. Quoting Thomas and coworkers, "in an applied magnetic field, the magnetization shows hysteresis loops with a distinct 'staircase' structure: the steps occur at values of the applied field where the energies of different collective spin states of the manganese clusters coincide. At these special values of the field, relaxation from one spin state to another is enhanced above the thermally activated rate by the action of resonant quantum-mechanical tunneling". Quantum tunneling of the magnetization was also reported in ferritin present in horse spleen proteins.
A direct measurement of the quantum spin tunneling splitting energy can be achieved using single spin scanning tunneling inelastic spectroscopy, which permits measurement of the spin excitations of individual atoms on surfaces. Using this technique, the spin excitation spectrum of an individual integer spin was obtained by Hirjibehedin et al. for a S=2 single Fe atom on a surface of Cu2N/Cu(100), which made it possible to measure a quantum spin tunneling splitting of 0.2 meV. Using the same technique, other groups measured the spin excitations of an S=1 Fe phthalocyanine molecule on a copper surface and an S=1 Fe atom on InSb, both of which had a quantum spin tunneling splitting of the doublet larger than 1 meV.
In the case of molecular magnets with large S and small E/D ratio, indirect measurement techniques are required to infer the value of the quantum spin tunneling splitting. For instance, modeling time dependent magnetization measurements of a crystal of Fe8 molecular magnets with the Landau-Zener formula, Wernsdorfer and Sessoli inferred tunneling splittings in the range of 10−7 Kelvin.
References
Condensed matter physics
Magnetism
Quantum mechanics | Quantum spin tunneling | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,182 | [
"Theoretical physics",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Condensed matter physics",
"Matter"
] |
48,345,932 | https://en.wikipedia.org/wiki/Concrete%20Series | The Concrete Series was a series of books about the use of concrete in construction that was published by Concrete Publications Limited of Dartmouth Street, London, from the 1930s to the 1960s.
History
The Concrete Series was a book series about the use of concrete in construction that was published by Concrete Publications Limited of Dartmouth Street, London, from the 1930s to the 1960s.
The series was published at a time when concrete was increasingly being used in building design and for public works such as road building. The series ran to in excess of 35 titles.
Later, the series was continued by the Cement and Concrete Association and Spon Press, part of Taylor & Francis group.
Titles
This is an incomplete list of titles in the series:
Arch design simplified. W.A. Fairhurst, 1946.
Concrete farm silos, granaries and tanks. A.M. Pennington, 1942.
Concrete surface finishes, renderings and terrazzo. W.S. Gray & H.L. Childe, 1935. (2nd revised edition 1943)
Design of arch roofs. J.S. Terrington, 1937.
Design of domes. J.S. Terrington, 1937. (Reprinted from Concrete and Constructional Engineering)
Design of pyramid roofs. J.S. Terrington, 1939.
Prestressed concrete designer's handbook, P.W. Abeles and F.H. Turner, 1962.
Reinforced concrete chimneys. C. Percy Taylor & Leslie Turner, 1940.
See also
Concrete Quarterly
Modernist architecture
References
Construction in the United Kingdom
Series of non-fiction books
Concrete | Concrete Series | [
"Engineering"
] | 316 | [
"Structural engineering",
"Concrete"
] |
48,349,765 | https://en.wikipedia.org/wiki/Bomb%20pulse | The bomb pulse is the sudden increase of carbon-14 (C) in Earth's atmosphere due to the hundreds of above-ground nuclear tests that started in 1945 and intensified after 1950 until 1963, when the Limited Test Ban Treaty was signed by the United States, the Soviet Union and the United Kingdom. These blasts were followed by a doubling of the relative concentration of C in the atmosphere.
The reason for the term “relative concentration” is that measurements of C levels by mass spectrometers are most accurately made by comparison to another carbon isotope, often the common isotope C. Isotope abundance ratios are not only more easily measured, they are what C carbon daters want, since it is the fraction of carbon in a sample that is C, not the absolute concentration, that is of interest in dating measurements. The figure shows how the fraction of carbon in the atmosphere that is C, of order only about 1 part per 10^12, has changed over the past several decades following the bomb tests. Because CO2 concentration has increased by about 30% over the past fifty years, the fact that “pMC”, measuring the isotope ratio, has returned (almost) to its 1955 value, means that C concentration in the atmosphere remains ~30% higher than it once was. Carbon-14, the radioisotope of carbon, naturally develops in trace amounts in the atmosphere and it can be detected in all living things. Carbon of all types is continually used to form the molecules of the cells of organisms. Doubling of the concentration of C in the atmosphere is reflected in the tissues and cells of all organisms that lived around the period of nuclear testing. This property has many applications in biology and forensics.
Background
C is constantly formed from nitrogen-14 (N) in the upper atmosphere by cosmic rays which generate neutrons. These neutrons hit N to produce C, which then combines with oxygen to form CO2. This radioactive CO2 spreads through the lower atmosphere and the oceans, where it is absorbed by plants, and by animals that eat the plants. C thus becomes part of the biosphere, so all living things contain some C. Nuclear tests caused a rapid increase in atmospheric C (see figure), since a nuclear explosion also creates neutrons which collide with N and produce C. Since the nuclear test ban in 1963, atmospheric C relative concentration has been decreasing at about 4% per year. This continuous decrease permits scientists to determine, among other things, the age of deceased people and allows them to study cell activity in tissues. By measuring the amount of C in a population of cells and comparing that to the amount of C in the atmosphere during or after the bomb pulse, scientists can estimate when the cells were created and how often they've turned over since then.
Difference with classical radiocarbon dating
Carbon dating has been used since 1946 to determine the age of organic material as old as 50,000 years. When an organism dies, the exchange of C with the environment ends and the incorporated C decays. Given radioactive decay (C's half-life is about 5,730 years), the relative amount of C left in the dead organism can be used to calculate how long ago it died. Bomb pulse dating should be considered a special form of carbon dating. As discussed above and in the Radiolab episode, Elements (section 'Carbon'), in bomb pulse dating the slow absorption of atmospheric C by the biosphere, can be considered a chronometer. Starting from the pulse around the year 1963 (see figure), atmospheric radiocarbon relative abundance decreased by about 4% a year. So in bomb pulse dating it is the relative amount of C in the atmosphere that is decreasing and not the amount of C in dead organisms, as is the case in classical carbon dating. This decrease in atmospheric C can be measured in cells and tissues and has permitted scientists to determine the age of individual cells and of deceased people. These applications are very similar to the experiments conducted with pulse-chase analysis in which cellular processes are examined over time by exposing the cells to a labeled compound (pulse) and then to the same compound in an unlabeled form (chase). Radioactivity is a commonly used label in these experiments. An important difference between pulse-chase analysis and bomb-pulse dating is the absence of the chase in the latter.
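As a toy illustration of the principle (not of real practice, which uses the measured atmospheric calibration curve rather than a clean exponential), one can invert the quoted ~4%-per-year decline to estimate when a sample's carbon was fixed; all numbers in the sketch below are assumptions made for illustration.

    import math

    PEAK_YEAR, PEAK_EXCESS = 1963, 1.0        # excess 14C fraction above the pre-bomb level at the peak
    DECLINE_PER_YEAR = 0.04                   # the ~4% per year decrease quoted above

    def formation_year(measured_excess):
        """Estimate the year the carbon was fixed, on the simplified exponential model."""
        years_after_peak = math.log(PEAK_EXCESS / measured_excess) / DECLINE_PER_YEAR
        return PEAK_YEAR + years_after_peak

    print(round(formation_year(0.30)))        # a 30% excess maps to roughly 1993 on this toy model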
Around the year 2030 the bomb pulse will die out. Organisms born after this will not bear detectable bomb pulse traces, and their cells cannot be dated in this way. Radioactive pulses cannot be ethically administered to people just to study the turnover of their cells, so the bomb pulse results are a useful side effect of nuclear testing.
Applications
The fact that cells and tissues reflect the doubling of C in the atmosphere during and after nuclear testing, has been of great use for several biological studies, for forensics and even for the determination of the year in which certain wine was produced.
Biology
Biological studies carried out by Kirsty Spalding demonstrated that neuronal cells are essentially static and do not regenerate during life. She also showed that the number of fat cells is set during childhood and adolescence. Considering the amount of C present in DNA she could establish that 10% of fat cells are renewed annually. The radiocarbon bomb pulse has been used to validate otolith annuli (ages scored from otolith sections) across several fish species including the freshwater drum, lake sturgeon, pallid sturgeon, bigmouth buffalo, arctic salmonids, Pristipomoides filamentosus, several reef fishes, among numerous other validated freshwater and marine species. The precision for bomb radiocarbon age validation is typically within ±2 years because the rise period (1956-1960) is so steep. The bomb pulse has also been used to estimate (not validate) the age of Greenland sharks by measuring the incorporation of C in the eye lens during development. After having determined the age and measured the length of sharks born around the bomb pulse, it was possible to create a mathematical model in which length and age of the sharks were correlated in order to deduce the age of the larger sharks. The study showed that the Greenland shark, with an age of 392 ± 120 years, is the oldest known vertebrate.
Forensics
At the moment of death, carbon uptake ends. Considering that tissue that contained the bomb pulse C was rapidly diminishing with a rate of 4% per year, it has been possible to establish the time of death of two women in a court case by examining tissues with a rapid turnover. Another important application has been the identification of victims of the Southeast Asian tsunami 2004 by examining their teeth.
Carbon Transport Modeling
The perturbation in atmospheric C from the bomb testing was an opportunity to validate atmospheric transport models, and to study the movement of carbon between the atmosphere and oceanic or terrestrial sinks.
Other
Atmospheric bomb C has been used to validate tree ring ages and to date recent trees that have no annual growth rings. It can also be used to obtain the growth rate of tropical trees and palms that have no visible annual rings.
See also
Effects of nuclear explosions
Pulse-chase analysis
Suess effect
Miyake event
References
Nuclear weapons
Nuclear weapons testing
Radioactivity
Carbon-14
Molecular biology | Bomb pulse | [
"Physics",
"Chemistry",
"Technology",
"Biology"
] | 1,455 | [
"Nuclear weapons testing",
"Nuclear physics",
"Environmental impact of nuclear power",
"Molecular biology",
"Biochemistry",
"Radioactivity"
] |
39,319,146 | https://en.wikipedia.org/wiki/Lentiviral%20vector%20in%20gene%20therapy | Lentiviral vectors in gene therapy is a method by which genes can be inserted, modified, or deleted in organisms using lentiviruses.
Lentiviruses are a family of viruses that are responsible for diseases like AIDS, which infect by inserting DNA into their host cells' genome. Many such viruses have been the basis of research using viruses in gene therapy, but the lentivirus is unique in its ability to infect non-dividing cells, and therefore has a wider range of potential applications. Lentiviruses can become endogenous (ERV), integrating their genome into the host germline genome, so that the virus is henceforth inherited by the host's descendants. Scientists use the lentivirus' mechanisms of infection to achieve a desired outcome to gene therapy. Lentiviral vectors in gene therapy have been pioneered by Luigi Naldini.
The lentivirus is a retrovirus, meaning it has a single-stranded RNA genome and a reverse transcriptase enzyme. Lentiviruses also have a viral envelope with protruding glycoproteins that aid in attachment to the host cell's outer membrane. The virion carries reverse transcriptase molecules that reverse-transcribe the viral genetic material upon entering the cell. Within the viral genome are RNA sequences that code for specific proteins facilitating the incorporation of the viral sequences into the host cell genome. The "gag" gene codes for the structural proteins of the virion: the matrix (MA/p17), the capsid (CA/p24) and the nucleocapsid (NC/p7) proteins. The "pol" domain codes for the reverse transcriptase and integrase enzymes. Lastly, the "env" domain of the viral genome encodes the glycoproteins and envelope on the surface of the virus.[1]
There are multiple steps involved in the infection and replication of a lentivirus in a host cell. First, the virus uses its surface glycoproteins to attach to the outer surface of a cell; more specifically, lentiviruses attach to the CD4 glycoproteins on the surface of a host's target cell. The viral material is then injected into the host cell's cytoplasm, where the viral reverse transcriptase enzyme reverse-transcribes the viral RNA genome into a viral DNA genome. The viral DNA is then imported into the nucleus of the host cell, where it is incorporated into the host cell's genome with the help of the viral enzyme integrase. From this point on, the host cell transcribes the viral RNA and expresses the structural viral proteins, in particular those that form the viral capsid and envelope. The lentiviral RNA and the viral proteins then assemble, and the newly formed virions leave the host cell once enough have been made.
Two methods of gene therapy using lentiviruses have been proposed. In the ex vivo approach, cells are extracted from a patient and cultured; a lentiviral vector carrying therapeutic transgenes is then introduced to the culture to infect them, and the modified cells are cultured further until they can be infused back into the patient. In vivo gene therapy is the direct injection of viral vectors containing transgenes into the patient.
Designing a lentivirus vector
Lentiviruses are modified to act as a vector to insert beneficial genes into cells. Unlike other retroviruses, which cannot penetrate the nuclear envelope and can therefore only act on cells while they are undergoing mitosis, lentiviruses can infect cells whether or not they are dividing (shown to be largely due to the capsid protein). Many cell types, like neurons, do not divide in adult organisms, so lentiviral gene therapy is a good candidate for treating conditions that affect those cell types.
Lentiviral vectors have been used experimentally in gene therapy aimed at diseases such as diabetes mellitus, murine haemophilia A, prostate cancer, chronic granulomatous disease, and vascular diseases.
HIV-derived lentiviral vectors have been widely developed for their ability to target specific genes through the coactivator PSIP1. This target specificity allows for the development of lentiviral gene vectors that do not carry the risk of randomly inserting themselves into normally functioning genes. As HIV is pathogenic, it must be genetically modified to remove its disease-causing properties and its ability to replicate itself. This can be achieved by deleting viral genes that are unnecessary for transduction of therapeutic transgenes. It has been proposed that by targeting the "gag" and "env" domains, enough of the HIV-1 genome can be deleted without losing its effectiveness in gene therapy while minimizing viral genes integrated into the patient. Genes may also be replaced rather than disrupted as another method to reduce the risks associated with the use of HIV-1.
Other lentiviruses such as Feline immunodeficiency virus and Equine infectious anemia virus have been developed for use in gene therapy and are of interest because of their inability to cause serious disease in human hosts. Equine infectious anemia virus in particular has been shown to perform somewhat better than HIV-1 in hematopoietic stem cells.
Insertional mutagenesis
Historically, lentiviral vectors included strong viral promoters, which had a side effect of insertional mutagenesis: mutations in nuclear DNA that affect the function of a gene. These strong viral promoters were shown to be the main cause of cancer formation. As a result, viral promoters have been replaced by cellular promoters and regulatory sequences.
Contrast with other viral vectors
As mentioned, lentiviruses have the unique ability to infect non-dividing cells. Beyond that, there are several other properties that distinguish lentiviral vectors from other viral vectors. Such properties are important to consider when determining whether lentiviruses are appropriate for a given treatment.
Gammaretroviruses
Gammaretroviruses are retroviruses like lentiviruses. Murine leukemia viruses (MLVs) were among the first to be investigated for use in gene therapy. However, recent research has favored lentiviruses for their ability to integrate into non-dividing cells. More practically, gammaretroviruses have an affinity for integrating near oncogene promoters, which carries a risk of tumors. MLVs may be replication-competent, meaning they can replicate in the host cell; such replication-competent viruses offer stable gene transfer and tumor- and tissue-specific targeting.
Adenoviruses
In gene therapy, adenoviruses differ from lentiviruses in many ways, some of which provide advantages over lentiviruses. Transduction efficiency is higher in adenoviruses than in lentiviruses. In addition, most human cells have receptors for adenoviruses, likely as a result of the wide variety of adenovirus diseases in humans. Because adenoviruses frequently infect humans, they can provoke an immune response in the body; such a response can reduce the efficiency of adenoviral vector therapies and can result in adverse reactions such as inflammation of tissues. Research has been conducted to exploit this immune response to target cancerous cells and to develop vaccines. Hybrid adenovirus-retroviruses (specifically with MLVs) have also been developed to combine the benefits of MLVs and adenoviruses.
Applications
Severe combined immunodeficiency disease
The ADA deficient variant of severe combined immunodeficiency (SCID) was treated highly successfully in a multi-year study reported in 2021. Over 95% of treated patients continued to be event free after 36 months, and 100% of patients survived this normally lethal disease. A self-inactivating lentiviral vector, EFS-ADA LV, was used to insert a functional ADA gene in autologous CD34+ hematopoietic stem and progenitor cells (HSPCs).
Vascular transplants
In a study designed to enhance the outcomes of vascular transplants through gene therapy of vascular endothelial cells, a third-generation lentivirus was shown to be effective at delivering genes to venous grafts and transplants used in procedures such as coronary artery bypass. Because the vector has been stripped of most of the viral genome, it is safer and more effective at transferring the required genes into the host cell. A drawback noted in the study is that long-term gene expression may require the use of promoters that sustain greater transgene expression. The researchers addressed this by adding self-inactivating plasmids and by creating a broader tropism through pseudotyping with a vesicular stomatitis virus glycoprotein.
Chronic granulomatous disease
In chronic granulomatous disease (CGD), immune function is deficient as a result of mutations in components of the nicotinamide adenine dinucleotide phosphate oxidase (NADPH oxidase) enzyme of phagocytes, which catalyzes the production of superoxide free radicals. If this enzyme is deficient, the phagocytes cannot effectively kill the bacteria they engulf, and granulomas can form. A study performed in mice emphasized the use of lineage-specific lentiviral vectors to express a normal version of one of the mutant CGD proteins, allowing white blood cells to make a functional version of NADPH oxidase. The scientists produced this lentivirus strain by transfecting 293T cells and pseudotyping the virus with the vesicular stomatitis virus G protein. The vector's role was to increase the production of a functional NADPH oxidase gene in these phagocytic cells, and it was designed with an affinity for myeloid cells.
Prostate cancer
For prostate cancer, the lentivirus is modified by being bound to trastuzumab so that it attaches to androgen-sensitive LNCaP and castration-resistant C4-2 human prostate cancer cell lines. These two cell lines express excess human epidermal growth factor receptor 2 (HER-2), a receptor associated with prostate cancer. By attaching to these cells and changing their genomes, the lentivirus can slow down, and even kill, the cancer cells. The researchers achieved the vector's specificity by manipulating the Fab region and pseudotyping the vector with the Sindbis virus.
Haemophilia A
Haemophilia A has also been studied as a target for lentiviral gene therapy in mice. The vector targets haematopoietic cells in order to increase the amount of factor VIII, the clotting factor that is deficient in haemophilia A. This remains a subject of study, as the lentiviral vector was not completely successful in achieving this goal. The vector was produced by transfecting 293T cells, creating a generation of viral vectors expressing 2bF8.
Rheumatoid arthritis
Studies have also found that in utero injection of a lentiviral vector carrying IL-10-expressing genes in mice can suppress, and even prevent, rheumatoid arthritis, creating new cells with constant gene expression. This contributes to the data on stem cells and on in utero inoculation of viral vectors for gene therapy. The targets of the viral vector in this study were the synovial cells, which produce TNFα and IL-1.
Diabetes mellitus
As with many of the in utero studies, lentiviral vector gene therapy for diabetes mellitus is more effective in utero, because the stem cells affected by the therapy give rise to new cells carrying the gene introduced by the viral intervention. The vector targets cells within the pancreas to add insulin-secreting genes that help control diabetes mellitus. Vectors were cloned using a cytomegalovirus promoter and then co-transfected into 293T cells.
Neurological disease
As mature neurons do not divide, lentiviruses are ideal for division independent gene therapy. Studies of lentiviral gene therapy have been conducted on patients with advanced Parkinson's disease and aging-related atrophy of neurons in primates.
See also
Retinal gene therapy using lentiviral vectors
References
Further reading
External links
The Place of Retroviruses in Biology
Synthesis of Gag and Gag-Pro-Pol Proteins in Retroviruses
About: Retroviruses Resource Overview
Applied genetics
Gene delivery
Lentiviruses | Lentiviral vector in gene therapy | [
"Chemistry",
"Biology"
] | 2,628 | [
"Genetics techniques",
"Molecular biology techniques",
"Gene delivery"
] |